Software Hardware Co-Design

Software/hardware co-design can be defined as the simultaneous design of both hardware and software to implement a desired function. Successful co-design goes hand in hand with co-verification: the simultaneous verification of both software and hardware, and of the extent to which they fulfill the desired function. Today it is necessary to incorporate co-design in the early system design phase and move software-hardware integration downstream, because traditional methodologies are no longer effective. The goal is to put the foundation in place for incorporating co-design and co-verification into normal product development, especially when products incorporating embedded systems are involved.

There are many traditional barriers to effective co-design and co-verification, such as organizational structures and outdated paradigms: approaches of other companies in the same market, or concepts that were developed in the past and worked well back then. Suppliers often lack an integrated view of the design process, too. What we need are tools that estimate the constraints across the hardware-software boundary earlier, before iterating through a difficult design flow.

By using simulation models we can find conflicts between top-down constraints, which come from design requirements, and bottom-up constraints, which come from physical data. Bottom-up constraints for software can only be evaluated in a hardware context, because software sits at a higher abstraction level than the hardware on which it is executed.

It is often the case that the hardware is already available (it is 'physical data'), so it can't be changed by software/hardware co-design. Only the software can be changed, and it should be fitted to this physical data. A suitable modeling strategy is therefore necessary to cover the existing hardware. Such modeling isn't easy, and it will never be perfect, because reality is too complex to capture in a perfect model. In that respect it seems easier to design both hardware and software together: it is often easier to design two things that have to work together than to design one thing and fit it around another. But if both hardware and software have to be designed, powerful verification is essential, because you are designing two different 'products' that interact with each other while neither of them is yet 'physical'. Different techniques have of course been developed to verify combined hardware-software systems, but each of them has its own limitations. For example, it is possible to run code on hardware models emulated by dedicated programmable hardware, which offers near real-time speed for code execution. Unfortunately, real-time interaction with other hardware and external environments is sometimes required, and in that case full-speed code execution isn't supported.

Hardware-software co-design has existed for several decades. To ensure system capability, designers had to face the realities of combining digital computing with software algorithms. To verify the interaction between the two, hardware prototypes had to be built. Since the '90s this no longer suffices, because co-design has turned from a good idea into an economic necessity. Predictions for the future point to greater embedded software content in hardware systems than ever before, so something has to be done to speed up and improve traditional software-hardware co-design. Developments in this area point to:

  1. Top-down system-level co-design and co-synthesis work at universities
  2. Major advances made by EDA (Electronic Design Automation) companies in high-speed emulation systems.

Co-design focuses on the areas of system specification, architectural design, hardware-software partitioning, and iteration between hardware and software as the design progresses. Finally, co-design is complemented by hardware-software integration and testing. Design re-use is being applied more often, too: previous- and current-generation ICs are finding their way into new designs as embedded cores in a mix-and-match fashion. This requires greater convergence of methodologies for co-design and co-verification, and places high demands on system-on-a-chip density. That is why this concept remained elusive for many years, until recently. In the future, the need for tools that estimate the impact of design changes earlier in the design process will only increase.

To get hold of elusive design errors quickly, applying the right modeling strategy at the right time is essential. It is often necessary to consider multiple models, but how can multiple approaches fit into a very tight design process? That depends on the goals and constraints of the design project, as well as the computational environment and the end use. To find the right approach, iteration is the only way out.

Because there is no widely accepted methodology or tool available to help designers create a functional specification, mostly ad hoc approaches are used, relying heavily on informal and manual techniques and exploring only a few possibilities. A hierarchical modeling methodology should be developed to improve this situation. The main concerns in such a methodology are precisely specifying the system's functionality and exploring system-level implementations.

To create a system-level design, the following steps should be taken:

  1. Specification capture: Decomposing functionality into pieces by creating a conceptual model of the system. The result is a functional specification, which lacks any implementation detail.
  2. Exploration: Exploration of design alternatives and estimating their quality to find the best suitable one.
  3. Specification: The functional specification from step 1 is refined into a new description reflecting the decisions made during the exploration of step 2.
  4. Software and hardware: For each of the components an implementation is created, using software and hardware design techniques.
  5. Physical design: Manufacturing data is generated for each component.
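The five steps above can be sketched as a toy pipeline. Everything here is a hypothetical placeholder (the function names, the "filter"/"control" components, the quality scores, and the hw/sw mapping are illustrative assumptions, not part of any real co-design tool):

```python
def specification_capture(concept):
    # Step 1: decompose functionality into pieces; no implementation detail yet.
    return {"functions": concept.split("+")}

def explore(spec, alternatives):
    # Step 2: estimate the quality of each design alternative, keep the best one.
    return max(alternatives, key=lambda alt: alt["quality"])

def refine(spec, decision):
    # Step 3: fold the exploration decisions back into the specification.
    return {**spec, "mapping": decision["mapping"]}

def implement(refined):
    # Step 4: assign a software or hardware implementation to each component.
    return {f: refined["mapping"][f] for f in refined["functions"]}

def physical_design(components):
    # Step 5: emit (mock) manufacturing data per component.
    return [f"{name}:{tech}" for name, tech in sorted(components.items())]

spec = specification_capture("filter+control")
best = explore(spec, [
    {"mapping": {"filter": "hw", "control": "sw"}, "quality": 8},
    {"mapping": {"filter": "sw", "control": "sw"}, "quality": 5},
])
print(physical_design(implement(refine(spec, best))))  # ['control:sw', 'filter:hw']
```

The point of the sketch is only the shape of the flow: each step consumes the previous step's output, so decisions made during exploration propagate consistently downstream.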

When the steps above are run through successfully, an embedded-system design methodology from product conceptualization to manufacturing is roughly defined. This hierarchical modeling methodology enables high productivity, preserving consistency through all levels and thus avoiding unnecessary iteration, which makes the process more efficient and faster.

Now let's take a closer look at some of the processes run through in the steps above. To describe a system's functionality, the functionality should first be decomposed and the relationships between the pieces described. There are many models for describing a system's functionality; let's name four important ones:

  1. Dataflow graph. A dataflow graph decomposes functionality into data-transforming activities and the dataflow between these activities.
  2. Finite-State Machine (FSM). In this model the system is represented as a set of states and a set of arcs that indicate the transition of the system from one state to another as a result of certain events.
  3. Communicating Sequential Processes (CSP). This model decomposes the system into a set of concurrently executing processes, each of which executes program instructions sequentially.
  4. Program-State Machine (PSM). This model combines FSM and CSP by permitting each state of a concurrent FSM to contain actions, described by program instructions.

Each model has its own advantages and disadvantages. No model is perfect for all classes of systems, so the model whose characteristics match those of the system as closely as possible should be chosen.

This should be done very accurately because the choice of a model is the most important influence on the ability to understand and define system functionality during system specification.

To specify functionality, several languages are commonly used by designers. VHDL and Verilog are very popular standards because their process and sequential-statement constructs make it easy to describe a CSP model. Other languages are used as well, but none of them directly supports state transitions. Just as some models are better suited to specific systems, some languages are better suited to specific models than others.
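The CSP style that VHDL and Verilog processes express, concurrently running processes that communicate over channels, can be sketched in plain Python with threads and a queue. The producer/consumer roles and the values sent are illustrative assumptions, not anything from a real HDL:

```python
import queue
import threading

channel = queue.Queue()  # the communication channel between the two processes
results = []

def producer():
    # One sequential process: emit values into the channel, then a sentinel.
    for value in range(3):
        channel.put(value * 2)
    channel.put(None)

def consumer():
    # The other sequential process: read from the channel until the sentinel.
    while True:
        value = channel.get()
        if value is None:
            break
        results.append(value)

# Both processes run concurrently; only the channel connects them.
threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 2, 4]
```

Each process body is purely sequential; all concurrency is confined to the channel, which is what makes the CSP decomposition natural to map onto HDL process constructs.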

Finally, it should be noted that co-design is still a very young field, and researchers in this area have rapidly evolving interests. Work is in progress aimed at introducing more sophisticated algorithms and features on top of a basic framework such as the one discussed above. Most of the implementation effort is devoted to transformation algorithms and to cost/performance evaluation. Higher levels of automation in optimization, direct user selection, analysis of dataflow connectivity, and resource analysis are currently being researched.