This article is the third in the SoC drawer series. The series aims to provide the system architect with a starting point and some tips to make system-on-a-chip (SoC) design easier. SoCs are either configurable or reconfigurable. Configurable SoCs are constructed from core logic that can be reused and modified for an application-specific integrated circuit (ASIC) implementation; reconfigurable SoCs use soft cores and field-programmable gate array (FPGA) fabrics that can be reprogrammed in-circuit simply by downloading new FPGA programming. The mutability of SoCs in terms of firmware and FPGA fabric is attractive because it allows for modification and addition of features. Likewise, for configurable SoCs, mutability during early co-design and co-simulation allows for quick design changes in both hardware and firmware, making for a very agile project.
The downside to this increased flexibility and mutability of hardware is that too much change without a process can result in project churn. Churn comes about when the addition of one new feature breaks one or more existing features, or degrades stability or performance. Ideally, with a disciplined process, new firmware or hardware features can be added to an SoC line of products with little or no churn. The keys are the early identification and use of tools that support a disciplined process for development, and the establishment of policies for their use that help the project evolve.
An overview of concurrent hardware/firmware engineering for SoCs
The emergence of advanced electronic design automation (EDA) tools, hardware design languages (HDLs), test benches, and co-simulators (allowing firmware to run on simulated hardware designs) has helped speed SoC time-to-market. Partly as a result, it's now expected that, once SoC silicon has arrived, firmware should be up and running and ready to ship not long after. This can be done by testing firmware early, during hardware design, and making careful trade-offs between implementation of functions and features in either hardware or firmware. However, starting the process of hardware and firmware integration and test early also helps shorten time-to-market immensely, especially if disciplined processes are defined early so that final integration and test has little to no churn. (See the information on EDA tools in the Resources section below for more on this.)
Figure 1 shows major milestones in an SoC project from the perspective of hardware development, test, and firmware development. There are four main phases where hardware/firmware testing employs different tools and methods to move the overall system design forward. In the first phase, transaction-level models (TLM), built with C or SystemC, can be used to test the hardware/firmware interface and analyze trade-offs between feature implementation in hardware alone, firmware alone, or some combination. Use of an instruction set simulator (ISS) for the processor cores used can be helpful. With an ISS, firmware is tested at the instruction level; although the tests are not cycle-accurate, they allow for early deployment of the firmware code development tool chain.
The second phase begins once enough of the hardware design is complete that cycle-accurate register transfer level (RTL) simulations can be run with basic firmware boot code. An RTL simulation can ensure that firmware will come up on real hardware with little delay once it has been fabricated. At this point, the hardware team typically has significant verification work to do, and the firmware team has significant feature development and testing to complete. So, in phase three, the TLM model can be further developed and used (mostly by the firmware team) to continue code development and testing pre-silicon. The cycle-based TLM simulation also allows for regression testing of firmware so that stability and feature interactions can be tested as the firmware base matures. This is likely to be the longest phase of the overall co-design life cycle.
In the fourth and final phase, post-silicon, hardware/firmware integration requires quick firmware "bring up" as designed, along with diagnostic tests to assist with post-silicon hardware verification.
Figure 1. Hardware/firmware testing and integration phases
Overall, the co-design and co-simulation tools provide a method under which a system can be developed with frequent feedback from integrated testing at progressively refined levels: from transaction to cycle-accurate simulation, and then to actual integrated testing. Early and continuous re-testing (regression) is important because it helps to identify and fix problems early. However, the rapid pace of feature addition often causes backslides, where the stability and performance of the system suffer as features are added. Ideally, feature count, stability, and performance would monotonically increase throughout the phases of the development life cycle. Most often, however, performance and stability decline as features are added and only begin to improve again as the feature set becomes more constant.
In each phase, with a disciplined development approach, you can keep stability, performance, and the feature set all improving. It's likely that there will be apparent declines as you move from phase to phase, as increased fidelity of testing will lead to the discovery of more esoteric and detailed issues. However, constant progress can be maintained, at least within a phase. One of the simplest methods is disciplined regression testing combined with change management tools.
How to keep iterative co-implementation on track during development phases
Some tips on how to keep features and modules on track:
- Identify hardware and firmware module owners to take responsibility through entire life cycle.
- Require tests to be developed in parallel with module development.
- Require early adoption of nightly testing using TLM simulation and/or RTL simulation.
- Adopt configuration management version control (CMVC) tools that allow for feature addition branches and version tagging.
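The nightly-testing tip above can be sketched as a small driver script. This is a minimal sketch, not a definitive implementation: the CVS module name "system", the make targets, and the log location are all assumptions, and the checkout/build/test steps are shown commented out and replaced with a stub result so the sketch stands alone.

```shell
#!/bin/sh
# Hypothetical nightly regression driver (sketch). Module name "system",
# make targets, and paths are assumptions, not from the article.
WORKDIR="${TMPDIR:-/tmp}/nightly_regression"
LOG="$WORKDIR/regression.log"
mkdir -p "$WORKDIR"
cd "$WORKDIR" || exit 1

# cvs checkout system        # fetch the stable main line
# make system                # build the RTL simulation and firmware image
# make test > test.out       # run the regression suite
echo "all tests passed" > test.out   # stand-in result so the sketch runs alone

# Record pass/fail with a timestamp so trends are visible over time.
if grep -q "passed" test.out; then
  echo "PASS $(date)" >> "$LOG"
else
  echo "FAIL $(date)" >> "$LOG"
fi
tail -1 "$LOG"
```

Run from cron each night, a script like this gives both teams a daily answer to the question "did yesterday's changes break anything?" before the changes have time to pile up.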
While the recommendations above are followed in most projects, they often aren't implemented until the end of the process shown in Figure 1. Starting early and automating tests for nightly regression is now possible with EDA and co-simulation tools available for SoCs. In the days before early verification tools were available, hardware and firmware development proceeded much more independently than they can now. A typical process included independent development of firmware on an emulator while hardware was designed and developed, with most of the co-testing done during the final post-silicon verification. Despite advances in verification tools, many SoC developers still work along lines established in those days, and thus don't adopt testing and regression processes, or configuration and version control, to the extent that they should.
Since EDA and HDLs for SoC design make the hardware development process similar in nature to firmware development, both hardware and firmware can and should use configuration management tools -- the same ones, if at all possible! This almost seems blasphemous to organizations that have grown accustomed to a silo model for hardware and software development, where a quick hand-off is made post-silicon, and interaction is otherwise minimal.
One difficulty when testing changing firmware on changing hardware is that stability often suffers: this can greatly impede the progress of both hardware and firmware development teams. This problem can be solved by having the hardware team make releases of simulators to the firmware team. Likewise, the firmware team should make releases of boot code and diagnostic code to the hardware team. Both teams need well-disciplined processes for maintaining versions and releases. One way to go about this is to maintain a main line of C code or HDL that is guaranteed to be stable. As hardware or firmware developers add code, they do so on branches from the stable main line, and merge new features and bug fixes made on code branches back to the main line. Figure 2 depicts this basic disciplined practice.
Figure 2. CMVC tool branch and merge
Figure 2 shows how a developer can take a stable baseline of C code or HDL, branch it for modification, add new features and test them on the branch, and then merge them with other potential changes on the main line. Once the merge is completed, the new result must once again be tested, then it can be put back on the main line, advancing the overall system and maintaining stability. The only place unstable code should be found with this process is on a branch. Once a CVS repository has been set up and code checked into it, developers can create a working sandbox for their code; the sandbox is their own private copy from the repository. For simple modifications to the code base, a file can be modified and, after testing, put back into the base with a simple command:
cvs commit filename
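Put together, the simple sandbox workflow looks like the following transcript. This is a sketch against a hypothetical repository: the module name "system", the file name, and the build/test commands are assumptions.

```shell
cvs checkout system        # create a private sandbox copy of the module
cd system
vi driver.c                # modify a single file (hypothetical file name)
make system; test system   # build and regression-test locally first
cvs commit driver.c        # put the tested change back on the main line
```

The key discipline is the middle step: nothing goes back to the main line until it has been built and tested in the sandbox.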
Branching is a more advanced method. It is useful when sets of files need to be modified, tested, and shared with other developers. A CVS branch has a tag, and therefore other developers can check out copies of a branch to their own branch sandbox, as Figure 2 shows. In Listing 1, the first set of commands does a checkout of the current code and then tags this revision of the code with a branch tag. Tags are simply a set of revisions of all files from the repository. The branch tag is a special tag: it not only defines a set of file revisions, but also allows for modification to those files with a new revision that remains separate from the main repository revisions. This is shown in Figure 2 as the branch line that includes a main line revision and branch revision. The developer or developers working on the branch can share updates and test the branch revisions without affecting the main line of code.
The middle set of commands in Listing 1 provides an example of updates to the branch revision. Once the developers are happy with the branch, the branched code set can be merged back to the main line of code with the final set of commands in Listing 1.
Listing 1. CVS commands for branching
cd stable_directory
cvs checkout system
make system; test system
cvs tag -b -c branch_feature_date

cd branch_directory
cvs checkout -r branch_feature_date
modify files
make system; test system
cvs commit
modify files
make system; test system
cvs commit

cd merge_directory
cvs checkout system
cvs update -j branch_feature_date -kk system
make system; test system
cvs commit
When to apply branches
Branches can be useful for almost any modification to a design maintained as a file set, but most often they are used for:
- Complicated multifile bug fixes
- Addition of new features
- Performance optimization and tuning
- Special releases to external customers or internal users
Optimization is a great example of an area where branches combined with regression testing can allow for significant and aggressive performance improvements while minimizing risk to system stability. You may very well have optimized a system to improve performance, only to find after subsequent development of more sophisticated regression tests that the optimization has destabilized the system. Or, in some cases, it may take some soak time before the destabilization is noticed. For example, if the optimization introduces a new race condition, then that condition might not be hit for many days or weeks, long after the optimization has been initially tested and integrated back into the system. At this point, the optimization might be harder to back out.
Optimizations performed on branches can be tested and maintained on the branch and merged with a very clear change set. You can more readily back out of merging a destabilizing optimization back into the main line if you use tags on the branch.
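Backing out a merged-but-destabilizing optimization amounts to a reverse merge between two tags. The transcript below is a sketch: the tag names and module name are hypothetical, and it assumes the pre-merge main line was tagged before the optimization branch was merged in.

```shell
cd merge_directory
cvs checkout system
# cvs update -j REV1 -j REV2 applies the diff from REV1 to REV2; listing the
# post-merge tag first and the pre-merge tag second removes the change set.
cvs update -j opt_merged_tag -j pre_opt_tag system
make system; test system   # confirm the regression disappears
cvs commit                 # main line restored to its pre-optimization state
```

Because the optimization still lives on its branch, it can be repaired there and merged again later without losing any of the work.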
Disciplined processes, such as the use of branching to make code or HDL changes, often seem burdensome. However, the costs of not using such processes include loss of stability and, often, project delays. The best argument for disciplined processes such as CMVC and branching can be summed up as, "Pay me ten cents now, or pay me a dollar later." The cost of the overhead of a process, once learned, is low for incremental usage, but the cost of stability loss and project delays simply grows higher and higher as the project life cycle progresses.
Compromises are possible. For example, it would be possible to use CMVC without branches. At least the main line could then be tagged and modifications backed out while work proceeds from a last-known good tag. However, with a little experience, branches have great pay-offs. For example, with a branch, a sub-team can work on and share a feature addition with CMVC features available to that sub-team and the branch. This can progress with no impact to the main line and other sub-teams. The main pitfall of branches is staying on them so long that merges become difficult. In general, CMVC also works best when care is taken to divide work and functionality with clear owners so that the frequency of modifications to common files and sub-systems will be very low. With a little added discipline, the full value of EDA and co-simulation can lead to hardware and firmware development projects for SoCs with minimal time-to-market and minimal hassle.
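The no-branch compromise described above can be implemented with a moving "last known good" tag on the main line. This is a sketch against a hypothetical repository; the tag name last_known_good is an assumption.

```shell
cvs tag -F last_known_good      # -F moves the tag to the currently tested revisions
cvs update -r last_known_good   # any developer can sync a sandbox to the stable point
cvs update -A                   # later, clear the sticky tag to return to the head
```

Each time the nightly regression passes, the tag is advanced; when the head breaks, developers keep working from the tag while the breakage is fixed.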
- Numerous open source CMVC tools are available, as described in David Wheeler's overview. Find more information on the CVS system described and used for the branching example in this article on the Concurrent Versions System home page. Newer systems, such as Subversion, have been made available and claim to offer improvements over CVS.
- Open source hardware/software co-design tools such as UC Berkeley's POLIS can be used during phase two of the co-development life cycle shown in Figure 1.
Get products and technologies
- Many EDA tool vendors provide support for hardware/software co-design and co-verification, including Mentor Graphics' Seamless tool, Synopsys, Tensilica, Xilinx, and Debussy.
- Tools such as the IBM ChipBench can be used during phase one of the co-development life cycle shown in Figure 1.
- Find out more about the IBM Rational ClearCase CMVC development tools.
- IBM has made SystemC PowerPC core models available for download and use in system-level design -- the first phase shown in Figure 1.