“Life is like an analogy” - Aaron Allston

  

Working for the past 6 years to build, grow, and sell SpectaReg, a tool for on-chip register management, I’ve discussed registers with a wide range of people. In explaining the underlying problem of registers and memory address mapping, whether the audience be technically lay or literate, I enjoy a good concrete analogy that people can relate to.

Here are some analogies that can be made for register management, each a potential topic for exploration in a future blog posting.

Register management has similarities to…

  • order management in a restaurant — synchronizing order info between many different workers, each using common info in different ways
  • building a house — different workers maintaining and working from a common blueprint, which may need to change or be revised over time
  • the human nervous system — millions of finely identifiable/controllable sensory inputs and motor outputs wired to the brain

Undoubtedly, there are many other analogies that could be made.  I hope readers can leave some comments with other ideas.

For the second year now I'm virtually attending DAC, the EDA industry's biggest conference, whose 47th annual edition is being held this week in Anaheim. As a web-oriented company we've yet to exhibit there. Not being there physically, I enjoy all the info I can obtain remotely using Twitter, blogs, and other online media. This year there seems to be more chatter about cloud computing and EDA - a topic that's of particular interest to a web-oriented EDA company like PDTi.

Firstly, I saw a Twitter post from James Colgan, the CEO of EDA community provider Xuropa, indicating that Kevin Bushby claimed that the cloud is the only way EDA can grow. I'm assuming this is the Kevin Bushby who is COO of FastScale Technology (acquired by EMC) and who formerly worked at Cadence. While I agree that the cloud can help EDA grow, I'm curious to understand how Kevin and others see it growing.

Here are some ways I can see EDA growing using the cloud:

  1. Lower costs for compute resources could lead to larger EDA budgets.
  2. The cost and overhead of supporting customer on-site installations and evaluations could be reduced due to the more controlled deployment environment of the cloud.
  3. The ability to use and get billed for tools at finer granularities could provide access to higher-end tools for companies that can’t afford the traditional EDA license models.
  4. The availability of limitless computing resources in the cloud could result in EDA users paying a premium to get that synthesis job or verification regression suite done more quickly.
  5. More visibility into how customers are using tools can provide opportunities to better serve the customer, adding more value, resulting in greater revenues and profits.
  6. Hosting EDA tools in the cloud could eliminate piracy and add to revenues.

Some of these are things that we have already realized with our SpectaReg web application for register management and automation, which is offered onsite, hosted by the customer, or online, hosted by us.  Whether hosted by the customer or us, the application is essentially the same, except the online user has the opportunity for some additional customizations. Interestingly, some of our customers are using virtualization technologies to create their own private cloud where they deploy SpectaReg onsite.

The great thing about the cloud is the ability to scale compute resources, like RAM and CPUs, on demand, and to have failover/redundancy available should some piece of hardware fail.  If one has a fairly static requirement for these then cloud computing might not make sense.  For example, a while back I ran the numbers on what the equivalent of a dedicated machine would cost on Amazon's Elastic Compute Cloud (EC2).  To have the equivalent compute resources available 24 x 7 x 365 via EC2 would cost more; however, a lot of machines are not used full-time and the compute requirements are bursty. This burstiness of compute requirements is where cloud computing really adds value.
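To put rough numbers on that trade-off, here is a back-of-the-envelope sketch in Python; the hourly rate and the 15% duty cycle are illustrative placeholders I'm assuming for the example, not actual EC2 pricing:

```python
# Illustrative only: the hourly rate and duty cycle below are assumed
# placeholders, not real EC2 or workstation pricing.
HOURS_PER_YEAR = 24 * 365                 # 8760 hours

hourly_rate = 0.40                        # assumed on-demand $/hour for a large instance
always_on_cost = hourly_rate * HOURS_PER_YEAR
print(f"Always-on cloud instance: ${always_on_cost:,.0f}/year")    # ~$3,504

# A bursty workload: say the machine is only busy 15% of the time.
bursty_hours = 0.15 * HOURS_PER_YEAR
bursty_cost = hourly_rate * bursty_hours
print(f"Bursty usage (15% duty cycle): ${bursty_cost:,.0f}/year")  # ~$526
```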

To really take advantage of cloud computing, the application must be able to monitor/predict its load and scale things up or down dynamically as needed.  EDA applications or their wrapper scripts would need to get smarter to do this.
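As a sketch of what a "smarter" wrapper might look like, the snippet below scales EC2 worker instances up or down to track the depth of a job queue. It assumes the boto3 AWS SDK for Python, and the AMI ID, instance type, and jobs-per-worker ratio are all hypothetical; a real EDA flow would also have to handle job scheduling, licensing, and data staging.

```python
import boto3  # AWS SDK for Python (assumed available in the environment)

ec2 = boto3.client("ec2")

# Hypothetical values: a real flow would use its own AMI, instance type, and queue probe.
WORKER_AMI = "ami-12345678"
WORKER_TYPE = "c5.4xlarge"
JOBS_PER_WORKER = 4  # assumed number of queued jobs one worker can absorb

def scale_workers(queue_depth, current_worker_ids):
    """Launch or terminate worker instances so capacity roughly tracks the job queue."""
    wanted = -(-queue_depth // JOBS_PER_WORKER)   # ceiling division
    delta = wanted - len(current_worker_ids)
    if delta > 0:
        ec2.run_instances(ImageId=WORKER_AMI, InstanceType=WORKER_TYPE,
                          MinCount=delta, MaxCount=delta)
    elif delta < 0:
        idle = current_worker_ids[:-delta]        # assume these workers are idle
        ec2.terminate_instances(InstanceIds=idle)
```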

Another obstacle that critics of EDA cloud computing often point out is the need to move files between tools and to script flows.  Technically, I don't think this is an issue, aside from perhaps the need for EDA users to increase the bandwidth of their network pipes.  Web service APIs would allow people to script all sorts of operations in their flows and move info between different EDA tools in the cloud, perhaps hosted by different cloud providers.
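As a sketch of the kind of scripting I have in mind, the snippet below pulls a netlist out of one hosted tool and pushes it into another over HTTP using the requests library; the endpoints, token, and file names are entirely hypothetical, not any vendor's actual API.

```python
import requests  # third-party HTTP library (assumed installed)

# Hypothetical endpoints and credentials for two cloud-hosted EDA tools.
SYNTH_URL = "https://synth.example-eda-cloud.com/api/v1"
SIM_URL = "https://sim.example-eda-cloud.com/api/v1"
AUTH = {"Authorization": "Bearer <api-token>"}

# Fetch the synthesized netlist from tool A...
netlist = requests.get(f"{SYNTH_URL}/jobs/42/netlist.v", headers=AUTH, timeout=60)
netlist.raise_for_status()

# ...and hand it straight to tool B, possibly hosted by a different provider.
upload = requests.post(f"{SIM_URL}/designs/widget/files",
                       files={"file": ("netlist.v", netlist.content)},
                       headers=AUTH, timeout=60)
upload.raise_for_status()
```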

There are many other angles to cloud computing and EDA, and I could likely write 10 more blog postings on the topic.  In terms of an end market, the cloud is itself an electronic system, and there are opportunities for EDA to serve this growing market. Lori Kate Smith of ARM wrote up the 47DAC reception, mentioning how Mary Olsson of Gary Smith EDA cites cloud computing as an application driver for EDA.

Another opportunity for EDA and FPGA vendors would be to have a cloud of FPGAs that could be re-configured.  This re-configurable cloud computing would be pretty cool.  Perhaps we’d need FPGA virtualization first, if it doesn’t already exist. Wonder if the folks at Google are looking into stuff like that…

Of course there is also the issue of security when it comes to cloud computing.  I see Harry the ASIC Guy was interviewing two cloud security experts at DAC and I've yet to check that out.  Knowing Harry, it will be worthwhile.  One concern is that if the cloud infrastructure becomes compromised then everything running on it can potentially become vulnerable.  This is a bit different from a data center with isolated and distinct dedicated machines, where each machine would need to be compromised individually.

Clearly there are a lot of opportunities and challenges for EDA with respect to cloud computing.  It will be exciting to see how the future unfolds.  Stay tuned.

Everyone in chip design uses a browser - there's little doubt about that. I'd wager that most chip designers spend more time in a browser than in any other tool, including the command line, the emacs or vi text editor, the Eclipse IDE, and the logic simulator.

Today, chip designers are likely to use a browser for:

  • Looking at various indexes of technical documentation in HTML and PDF, including IP and register map specifications and document control
  • Viewing development reports, test coverage data and analysis
  • Researching suppliers, IP, algorithms, technical standards, how to articles, …
  • Managing bugs
  • Collaborating through a Wiki
  • Accessing various other information on the corporate Intranet
  • Reading EE Times and Slashdot while waiting for that long test-case, simulation or synthesis job to complete

Though not a chip designer anymore, I've been spending more time working in a browser, especially now that I've warmed up to Google Apps. And I'm not alone; I've even heard of some chip design companies using it too. Now that the word is out that Google is building chips, there is a good chance that they'd use Google Apps. Let's face it, there is a lot of spreadsheet work in chip design and Google Spreadsheets is quite powerful, especially in a collaborative context. There is a lot of block diagram work too, and Google's new Drawings tool offers hope there. Whether you like web apps or not, I think most chip designers would agree that the browser is increasingly used for legitimate work.

There are many areas in chip dev where the browser can play a bigger role, especially in a collaborative context. One recent example is Synopsys' Lynx Design System, which appears to have a browser GUI for its Management Cockpit. At PDTi we've been pushing the limits of using the browser for all things register management: capturing and modelling the executable specification and generating dependent code and documentation.

Google has been pushing the limits of what is possible in the browser. The impressive video showing Quake II running in a browser is mind-blowing and highlights the possibilities of HTML5 and the next-gen browser. This supports the argument that graphical EDA tools, such as simulation waveform debuggers and graphical layout tools, could also be delivered as web applications; perhaps even under the SaaS model.

Are the naysayers missing something here - could the browser be the ubiquitous platform for everything, even EDA tools and Chip Design?

For the system designer, the platform is a complex supply chain of internally/externally developed hardware/software intellectual property (IP). There is a complex set of risks and trade-offs that must be analyzed in the decision-making process. The winning platform provider will go above and beyond to provide a whole solution, including a programmers' guide to the register map, driver code, and maybe even an executable specification model.

There are many different requirements that the System Developer may have for the IP deliverables, including:

  • a programmers’ guide on interfaces, interrupts, registers, and so on
  • example/reference firmware for driving and testing the IP in a standard configuration
  • inter-operable models of the IP in various different languages, at different levels of abstraction (firmware, ESL, HVL, RTL, XML, etc.)
  • ability to integrate and even re-code firmware in a system-specific way across all IP in the system
  • ability to easily integrate and brand IP documentation across the entire system in a consistent, professional way

A good approach to the modelling and abstraction of the hardware/software interface can help to achieve these requirements for both the IP producer and the system integrator. The SpectaReg register management tool’s approach is to generate the different deliverables from a common, single-source specification model. This provides opportunities for “bottom up” and “top down” approaches to firmware abstraction and system documentation preparation.

Abstraction by the IP Provider - Bottom Up:
The IP provider should provide a memory map document of the different memory-mapped elements and registers. Better yet, they may additionally provide device driver code that abstracts the registers to provide a higher-level view of the IP. Since the abstraction is created by the engineers specializing in the IP rather than engineers specializing in the overall system, we call this "bottom up" abstraction.

Abstraction by the System Integrator - Top Down:
The IP consumer may have their own special way of doing device drivers, system memory testing and diagnostics. They might be targeting a specialized processor or interconnect architecture, language or operating system. They might be optimizing for throughput, power, or memory. They might have custom monitoring, programming and debugging systems. For these reasons the IP consumer might choose to create their own firmware. This “top down” approach to hardware abstraction requires that both the IP provider and consumer have excellent and inter-operable register management workflows.

The Register Supply Chain:
Something we are seeing in our extensive work with registers is that there is a supply chain of register specifications from the different IP providers. For example, the provider of a SPI core may have several registers that they code in Verilog, VHDL, and C/C++ and publish in HTML or PDF. Then, the consumer of the SPI core may want to integrate those registers into their overall register map and C/C++ driver code in a way that is consistent across the entire system. Within this process there are various teams and perspectives that need to consume the specification and produce work based upon it. This is the register supply chain.

A Semantic Register Specification and the Supply chain:
Ideally the IP developer captures the registers into a semantic specification model that can be used as a single source of the register interface specifications. With SpectaReg, this is done through a browser-based GUI and imported/exported using IP-XACT (and many other formats too). The underlying model specifies all relevant information relating to the registers, including typing information describing how each register's bit-fields are implemented and how they function. The model also includes inter-relationships between register fields. For example, a certain bit may be defined as an interrupt and may be associated with trigger and mask bits. The firmware programmer knows how the interrupt will operate, and the RTL developer knows how it is auto-implemented in the Verilog and/or VHDL, based on the typing of the bit. From the semantic model, the related RTL and firmware code can be auto-generated to target different bus interface protocols and different coding and presentation styles that suit the needs of the system integrator.
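To make the single-source idea concrete, here is a deliberately simplified Python sketch (not SpectaReg's actual data model, nor IP-XACT) of a register description driving generation of one deliverable, a C-style firmware header; the same structure could equally feed RTL or documentation templates:

```python
# A toy single-source register model (not SpectaReg's real format or IP-XACT).
SPI_BLOCK = {
    "name": "SPI",
    "base": 0x4000_0000,          # assumed base address for illustration
    "registers": [
        {"name": "CTRL",   "offset": 0x00,
         "fields": [{"name": "ENABLE", "bit": 0, "access": "rw"}]},
        {"name": "STATUS", "offset": 0x04,
         "fields": [{"name": "TX_DONE", "bit": 0, "access": "ro",
                     "type": "interrupt", "mask_field": "TX_DONE_MASK"}]},
    ],
}

def generate_c_header(block):
    """Emit one firmware view (C #defines) from the shared specification."""
    lines = [f"/* Auto-generated from the {block['name']} register model */"]
    for reg in block["registers"]:
        addr = block["base"] + reg["offset"]
        lines.append(f"#define {block['name']}_{reg['name']}_ADDR 0x{addr:08X}")
        for field in reg["fields"]:
            lines.append(
                f"#define {block['name']}_{reg['name']}_{field['name']}_BIT {field['bit']}")
    return "\n".join(lines)

print(generate_c_header(SPI_BLOCK))
```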

Simply throwing the semantic register specification over the wall to the IP consumer (say in an IP-XACT XML file) does not solve everything. There are opportunities for the supply chain to get out of synchronization and for non-formalized communication to get lost. The flows for making changes and feeding back information from the different parts of the supply chain are not well defined. Using a dynamic web application to manage the specification model and the dynamic, collaborative work-flows of the register supply chain is part of our vision. We see this as the best way to simplify the overall process and address the needs of all stakeholders. What do you think?

In English Bay off West Point Grey in Vancouver lies an ideal spot for catching Dungeness crabs - a wonderful delight to eat.  My first experience crabbing was dropping a trap off a friend's boat en route to Chinook salmon fishing in the Georgia Strait.  It's not unusual to catch your limit on Spanish Banks if you know what you're doing, and buying the equivalent at the local fish market will cost you almost $100.


Dungeness Crabs

This summer some friends and I went out in kayaks and were quite successful at catching our limits.  Google Maps on the iPhone helped us locate the trap after a paddle around the bay.  Being on the water with a 3G-connected smartphone is pretty handy.

On another occasion, while trolling on a salmon fishing mission after having dropped the trap, we pondered how many crabs had entered it, with internet radio streaming courtesy of the iPhone's 3G connection.  I got to thinking how cool it would be if we had a 3G link to a web camera down below to show us what was inside.  Next thing you know I was drawing up plans for an autonomous crabbing vessel.



Meeting and talking with customers about their register needs and requirements is one of my favorite things to do.  Not too long ago I met with an embedded firmware guru over coffee and we discussed various topics around registers and register management tooling.  We discussed registers in the context of multi-processor systems-on-chip (MPSoCs) with many different interconnect channels or bridges and buses. In such a system, where concurrent software may access the same registers, we discussed the pains of read-modify-write and how to reduce the need for such operations.  Some of our conclusions are discussed in this posting, which was spurred by one of Gary Stringham's recent Embedded Bridge issues.  Gary has hinted that his upcoming issue will discuss "an atomic register design that provides robust support for concurrent driver access to common registers" and I'm curious to see how his techniques compare to the ones discussed herein.

What is a register read-modify-write operation?

Firmware often needs to modify only a single bit or bit-field of an addressable register without modifying the other bits in the register.  This requires the firmware to read the register, change the desired bit(s), then write the register back.
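In code, the pattern looks something like the following sketch, where read_reg and write_reg stand in for whatever memory-mapped access mechanism the firmware actually uses; the address and bit position are made up for illustration.

```python
# Hypothetical register address and bit position, for illustration only.
REG_ADDR   = 0x1C
TARGET_BIT = 2

def set_bit(read_reg, write_reg):
    """Classic read-modify-write: three separate operations on one register."""
    value = read_reg(REG_ADDR)      # 1. read the whole register
    value |= (1 << TARGET_BIT)      # 2. modify only the bit of interest
    write_reg(REG_ADDR, value)      # 3. write the whole register back
```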

Problem 1: Atomicity for register read-modify-write operation

In a system that has concurrent software accessing the registers, read-modify-writes are a real concern.  Without proper inter-process synchronization (using mutexes or some other form of guaranteed atomicity) there is a danger of race conditions.  These dangers are well described in Issue #37 of Gary Stringham’s Embedded Bridge.

Problem 2: Latency of read-modify-write operations

In a complex MPSoC, with a complex interconnect fabric, register read operations can be painfully slow when compared to pure register write operations.  System performance can be greatly improved by reducing the number of read operations required.

How to trade register read-modify-write transactions for a single register write

The trick to replacing a register read-modify-write with a single register write operation is to create additional CLEAR and SET write-only registers.

Consider the following FLAGS register as an example, which uses 8-bit registers for simplicity's sake.

reg name | bit 7  | bit 6  | bit 5  | bit 4  | bit 3  | bit 2  | bit 1  | bit 0
FLAGS    | flag_a | flag_b | flag_c | flag_d | flag_e | flag_f | flag_g | flag_h

Without any supporting CLEAR/SET registers, modifying flag_f would require a read-modify-write operation. However, when we add the supporting CLEAR and SET write-only registers and the related RTL logic, each bit can be set or cleared independently.  The flag_f bit can be set by writing 0x04 to FLAGS_SET and cleared by writing 0x04 to the FLAGS_CLEAR register.  The following table shows how the three related registers would look.

reg name    | bit 7   | bit 6   | bit 5   | bit 4   | bit 3   | bit 2   | bit 1   | bit 0
FLAGS       | flag_a  | flag_b  | flag_c  | flag_d  | flag_e  | flag_f  | flag_g  | flag_h
FLAGS_CLEAR | clear_a | clear_b | clear_c | clear_d | clear_e | clear_f | clear_g | clear_h
FLAGS_SET   | set_a   | set_b   | set_c   | set_d   | set_e   | set_f   | set_g   | set_h
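With that structure in place, firmware can touch flag_f with a single write and no read, as in the following sketch (again, the addresses and the write_reg helper are hypothetical):

```python
# Hypothetical addresses for the three related registers.
FLAGS_ADDR       = 0x10  # readable register holding the current flag values
FLAGS_CLEAR_ADDR = 0x14  # write-only: writing 1 to a bit position clears that flag
FLAGS_SET_ADDR   = 0x18  # write-only: writing 1 to a bit position sets that flag

FLAG_F = 1 << 2          # bit 2, i.e. 0x04

def set_flag_f(write_reg):
    # One write, no read: other flags stay untouched because their bits are 0.
    write_reg(FLAGS_SET_ADDR, FLAG_F)

def clear_flag_f(write_reg):
    write_reg(FLAGS_CLEAR_ADDR, FLAG_F)
```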

Here the complexities of read-modify-write in a concurrent software environment are traded for additional complexity within the Verilog or VHDL RTL code and the related verification code. A register management tool like SpectaReg can be set up to allow easy specification of such register patterns in a graphical environment and auto-generation of the related RTL, verification code, and firmware abstraction of the bits. With such a register management tool, the related work is greatly simplified when compared to the tedious and error-prone process of doing it manually.  An additional advantage relates to architectural exploration: with an automated path between specification and generation it's much easier to switch between the two techniques: i) a single read/write register requiring read-modify-write, and ii) a read-only register with supporting SET and CLEAR registers.

In addition to the register read-modify-writes discussed at the coffee talk with the firmware guru, we also chatted about how specialized hardware registers can support diagnostics, performance profiling, optimization, and debugging of MPSoC systems - perhaps a good topic for a future posting, so stay tuned.

Lastly, if you have any thoughts to contribute, be sure to leave a comment.

Good news for the FPGA masses who want access to the ARM ecosystem of operating systems, tools, IP, and applications — last week Xilinx and ARM announced their collaboration to enable ARM processors and interconnect on Xilinx FPGAs.  This new dimension of the Xilinx Targeted Design Platform is a dramatic shift by Xilinx, away from their traditional IBM PowerPC architecture.

Meanwhile, over on Innovation Drive, Altera is licensing the MIPS architecture, and the market awaits more information.

Having an on-FPGA ARM is not a new idea. Early this decade Altera introduced their ARM-based hard core then changed strategy toward their NIOS II soft processor. And of course Actel, Altera and Xilinx have been supporting ARM-based soft cores for some time.

The announcement reveals that Xilinx is adopting “performance-optimized ARM cell libraries and embedded memories,”  conjuring images of ARM-based hard cores. They mention that the roadmap is toward “joint definition of the next-generation ARM® AMBA® interconnect technology… optimized for FPGA architectures.” This hints that the interconnect will be at least partially in the fabric as one would expect in an FPGA embedded system.   How the FPGA architect extends the base system and configures and stitches the fabric remains to be seen. With only vague bits of information released there are many unanswered questions:

  1. What does this mean for Xilinx's customers using the IBM PowerPC and MicroBlaze processors with IBM CoreConnect (PLB & OPB)?
  2. What is the tool chain?  Will ARM/AMBA be supported within Xilinx tools (like XPS & EDK) or is the community supported by a third party tool-chain?
  3. Which of the AMBA protocols will be supported by Xilinx — AXI, AHB, and/or APB? AXI is the only one explicitly mentioned in the Xilinx Targeted Design Platforms boilerplate.
  4. Will the ARM RISCs be available as hard and/or soft cores within Xilinx FPGAs?  As stated earlier, my guess is that it’s a hard core.

If you have any hard answers or guesses about what's going on here, please leave a comment.

Personally, I'm excited to get PDTi engineering hands on an ARM-based Xilinx dev kit so we can help our customers continue to simplify their hardware/software register interface management, should they choose ARM-based Xilinx embedded systems.

[UPDATE 2009-11-05]

From the comments there are some other great questions:

  • How will Xilinx’s strategy with ARM differ from that of Altera and what did Altera miss (if anything) in getting customers onto their ARM-based FPGA platform? [Gary Dare]
  • Why did Altera veer towards their own NIOS after going through all that trouble to get ARM-based products? [Gary Dare]
  • With MIPS as their alternative architecture, is Altera looking to horn in on QuickLogic’s market? [Gary Dare]
  • In what market will ARM FPGA platform offerings be the most successful? What market/application is Xilinx going to focus in on first?

S&P 500 chart, August 2009

Wow, what a spectacular run the equity markets have had since the low in March of 2009.  Meanwhile, the jury is out on whether this is a sustainable recovery backed by fundamentals and precedent. There are people calling for hyperinflation and others for deflation ahead.  With such uncertainty, opinions vary widely on which way things will go, as illustrated by the following articles which I found interesting (followed by my point-form summaries):

The Greenback Effect
NY Times opinion article by Warren Buffett

  • “gusher of federal money” avoided meltdown & was needed to combat depression
  • “economy appears to be on a slow path to recovery”
  • US deficit/GDP will rise to 13% this year into uncharted territory - more than 2x the wartime record

Twitpic from S_Tomasello "How's the attendance at #46DAC today? Umm..."

Last week at the Moscone Center in San Francisco, the 46th annual Design Automation Conference (DAC) took place.  I've attended this conference for the past 4 years but decided not to attend this year; instead, I attended virtually using the web.

In the EDA media and for EDA trade shows, as Bob Dylan sang, the times they are a-changin’.  It’s no secret that the incumbent media is struggling to find a business model that works in the uncharted waters of the future.  As history repeats itself, the “hidden hand of supply and demand” will no doubt fix some shortfall with the traditional model — a shortfall that may not be fully understood until it is solved.

With the electronic media shedding their top writers, coverage of DAC by trade publications is diminishing.  At the same time, new media such as blogs, Twitter, and LinkedIn are picking up some of the slack.  For example, Richard Goering and Michael Santarini, who historically covered DAC for EE Times and EDN, now write for Cadence and Xilinx respectively.  Some of the best DAC summaries that I read were blogged by:

Additionally, on Twitter, the #46DAC tag provided useful information about what was going on at the tradeshow.  For me, some tweeps who provided informative DAC coverage via Twitter included:

  • Dark_Faust — editor in chief of Chip Design & Embedded Intel magazines & editorial director of Extension Media
  • harrytheASICguy — ASIC consultant & blogger, did Synopsys XYZ Conversation Central sessions at DAC
  • jlgray — Consultant with Verilab, photographer, coolverification.com blogger, conference presenter
  • karenbartleson — Sr. Director of Community Marketing at Synopsys and blogger on “The Standards Game.”  Karen won “EDA’s Next Top Blogger” at DAC.  Karen did a lot of tweeting to inform people about the #46DAC and Synopsys Conversation Central had a “Twitter Tower” that displayed the #46DAC stream.
  • MikeDemler — Tech industry analyst, former marketing insider (from Synopsys), blogs at “The world is Analog”
  • paycinera — EDA Confidential editor Peggy Aycinena broke her cryptic series of gobbledygook biography tweets, the EDA Town & Gown Twitter Project, to provide some of the best Twitter coverage from DAC
  • S_Tomasello — Marketing at Sonics, the providers of “On-chip communications networks for advanced SoCs”

Based on the various reports and summaries from DAC, there is an apparent need for collaboration (as mentioned by keynote speaker Fu-Chieh Hsu of TSMC) and productivity (as mentioned by the CEO panel). The same forces that are changing EDA trade media and conferences — the power of the Internet, coupled with economic forces — may enable the solution to better collaboration and productivity. Cloud computing business models like Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) are starting to prove themselves in other industries and will continue to find their way into common use. Exactly what the “hidden hand of supply and demand” has in store for EDA and cloud computing has yet to be revealed; we are just in the early stages now.

From various blogs and Twitter, without having attended DAC, I understand that:

  • there continues to be a need for better collaboration, productivity, and higher levels of abstraction
  • the current economic situation, spurred by the US credit meltdown, has affected EDA
    • the traditional trade media is struggling
    • new chip design starts are down
    • Magma Design Automation disclosed, just as the conference was kicking off, that it is re-negotiating debt as a result of an audit report regarding its solvency
    • traffic on the trade floor was questionable: some said it was above expectations while others said it was below
    • new VC investment in EDA start-ups is pretty much non-existent
    • TSMC is becoming more and more of an ecosystem heavyweight
    • there is optimism about the future and the recovery of EDA — with change and crisis, there comes opportunity for those who see it
  • the media landscape is changing
    • there is a struggle between the blogosphere and the traditional press to cover EDA
    • blogs are gaining acceptance and playing more and more of a role
    • filtering through and connecting disparate info is a problem
    • John Cooley dismisses the utility of blogging, LinkedIn and Twitter and critics say Cooley just doesn’t get it (or virtual platforms and virtual prototypes for that matter)
  • there are big opportunities for Software design, and EDA can play there
    • Embedded software has the potential to double EDA, says Gary Smith, who has pointed to software as the problem for several years now
    • embedded software seats are growing, but the market is fragmented
    • software IP is of growing importance in the differentiation of SoC platforms
    • the programming models need to change for multi-core
    • multi-core and parallel computing programming models are still pretty low level, like assembly and micro-code
    • Mentor Graphics announced their acquisition of Embedded Alley Solutions, a leader in Android and Linux development systems, unveiling their new Android & Linux strategy
  • System Level is big, particularly for SoC virtual platforms, architectural optimization and IP
    • The SPIRIT Consortium and its IP-XACT standard have merged into Accellera, and there continues to be a need for better standards
    • IP still has a lot of potential and the business model is becoming clearer
    • Despite the importance of ESL, much work is still done at lower levels of abstraction
    • ARC International, the IP and configurable processor provider, is rumored to be under acquisition
  • FPGA
    • Companies are moving to FPGAs and away from ASIC
    • ESL is big for FPGAs
    • Not nearly as much FPGA discussion at DAC as there should be
  • Cloud computing opportunities are being overlooked by EDA (let's start with the on-site private cloud, then look at multi-tenant ecosystem clouds)

In conclusion, I was able to absorb a lot of details about DAC without attending thanks to all the bloggers, Tweeters and trade media.  EDA is changing in some exciting ways that scream opportunity for some and failure for others, and that’s what makes the future so exciting.

While on an ocean side walk, I daydreamed of being struck by a great IP/subsystem idea with potential for royalty licensing.  I imagined organizing a team and jumping into action, developing the RTL logic and processor integration.  Should I choose virtual and/or FPGA prototyping?

English Bay, oceanside walk

Virtual Platform

It's all about getting the software working as soon as possible.  The Google Android Emulator is an excellent example of how Google was able to get the software working without requiring developers to possess the device hardware.  The Android Emulator is described as a “mobile device emulator — a virtual mobile device that runs on your computer.”  Android abstracts the hardware, ARM processor, and Linux kernel with an Eclipse-based Java framework, targeting Android's Dalvik Virtual Machine, a register-based architecture that's more memory-efficient than the Java VM.

Virtual Prototyping

Clearly the virtual Android emulator/platform makes software development easier. Similarly, a virtual prototype makes abstract product validation easier. With a virtual prototype, developers can explore different algorithms and architectures across the hardware and software abstractions.  Things like the instruction set, on-chip interconnect, acceleration, memory, interrupt, and caching architectures can be explored by digital designers and firmware developers ahead of the VHDL and Verilog solidification.  Still, despite any effort spent on virtual prototyping, physical validation is essential before offering an IP for license.  FPGA prototyping is a good choice for all but the most complex and highest-performance IP.

FPGA Prototyping

Those who know firmware and RTL coding should have no problem getting basic examples running on an FPGA dev kit in the first day or two.  Those with experience validating a chip in the lab understand that so much really comes down to using embedded software to drive the tests.  Today's embedded-system FPGA kits provide on-chip processors and the whole development environment.  Some available embedded FPGA processor options are listed in the following table:

Vendor/Processor | On-chip interfaces | Tools
Altera NIOS II (soft core) | Avalon | SOPC Builder, Quartus II, NIOS II IDE with Eclipse CDT & GNU compiler/debugger
Xilinx PowerPC (hard core) | CoreConnect PLB & OPB | Embedded Development Kit (EDK) with Eclipse CDT & GNU compiler/debugger, Xilinx Platform Studio (XPS), ISE
Xilinx MicroBlaze (soft core) | CoreConnect PLB & OPB | Embedded Development Kit (EDK) with Eclipse CDT & GNU compiler/debugger, Xilinx Platform Studio (XPS), ISE
Actel ARM (soft core) | AMBA AHB & APB | Libero IDE, CoreConsole, SoftConsole (Eclipse, GNU compiler/debugger)

Short of having deep pockets for an ASIC flow, or a platform provider lined up to license the IP and spin prototype silicon, FPGA prototyping makes good sense.  Virtual platforms make sense for reaching software developers, so they can interface with the IP as soon as possible.  One problem with providing models for developing software before the hardware is complete is that it’s difficult to keep the different perspectives aligned as the design evolves.  Tools like SpectaReg that model hardware/software interfacing and auto-generate dependent code keep the different stakeholders aligned, resulting in quicker time to revenue for the IP.

So back to my oceanside daydream: how would I get the IP to market and start making money?  It depends on the target market.  If the end user targets embedded-system FPGAs then it's a no-brainer - go straight to FPGA prototyping and don't worry about a virtual platform.  If the target market is mobile silicon platforms, then virtual prototyping/platforming makes sense.  Having validated silicon in hand is ideal but impractical in many circumstances. FPGA prototyping is pretty darn compelling when you consider the speedy turnaround times and the low startup costs.