Archive for 'Business'

For the second year now I’m virtually attending DAC, the 47th annual Design Automation Conference, held this week in Anaheim. As a web-oriented company we’ve yet to exhibit at the EDA industry’s biggest conference. Not being there physically, I enjoy all the information I can obtain remotely using Twitter, blogs, and other online media. This year there seems to be more chatter about cloud computing and EDA - a topic of particular interest to a web-oriented EDA company like PDTi.

Firstly, I saw a Twitter post from James Colgan, the CEO of EDA community provider Xuropa, indicating that Kevin Bushby claimed the cloud is the only way EDA can grow. I’m assuming this is the Kevin Bushby who was COO of FastScale Technology (acquired by EMC) and who formerly worked at Cadence. While I agree that the cloud can help EDA grow, I’m curious to understand how Kevin and others see it growing.

Here are some ways I can see EDA growing using the cloud:

  1. Lower costs for compute resources could lead to larger EDA budgets.
  2. The cost and overhead of supporting customer on-site installations and evaluations could be reduced due to the more controlled deployment environment of the cloud.
  3. The ability to use and get billed for tools at finer granularities could provide access to higher-end tools for companies that can’t afford the traditional EDA license models.
  4. The availability of limitless computing resources in the cloud could result in EDA users paying a premium to get that synthesis job or verification regression suite done more quickly.
  5. More visibility into how customers are using tools can provide opportunities to better serve the customer, adding more value and resulting in greater revenues and profits.
  6. Hosting EDA tools in the cloud could eliminate piracy and add to revenues.

Some of these are things that we have already realized with our SpectaReg web application for register management and automation, which is offered onsite, hosted by the customer, or online, hosted by us.  Whether hosted by the customer or us, the application is essentially the same, except the online user has the opportunity for some additional customizations. Interestingly, some of our customers are using virtualization technologies to create their own private cloud where they deploy SpectaReg onsite.

The great thing about the cloud is the ability to scale compute resources, like RAM and CPUs, on demand, and to have failover/redundancy available should some piece of hardware fail. If one has a fairly static requirement for these, then cloud computing might not make sense. For example, a while back I ran the numbers on what the equivalent of a dedicated machine would cost on Amazon’s Elastic Compute Cloud (EC2). To have the equivalent compute resources available 24 x 7 x 365 via EC2 would cost more; however, a lot of machines are not used full-time and the compute requirements are bursty. This burstiness of compute requirements is where cloud computing really adds value.
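
To make the burstiness argument concrete, here is a back-of-the-envelope sketch in Python. Every number in it (the dedicated-machine cost, the hourly instance price, the 15% utilization) is purely illustrative and not actual EC2 or hardware pricing.

```python
# Rough comparison of owning a dedicated machine vs. renting equivalent
# on-demand cloud capacity. All numbers are hypothetical placeholders.

HOURS_PER_YEAR = 24 * 365

dedicated_cost_per_year = 6000.0   # hypothetical: amortized purchase + power + admin
cloud_price_per_hour = 0.80        # hypothetical on-demand instance price

def cloud_cost(busy_hours_per_year):
    """Cost of renting an equivalent instance only while jobs are running."""
    return busy_hours_per_year * cloud_price_per_hour

print(f"Cloud, always on:      ${cloud_cost(HOURS_PER_YEAR):,.0f}")
print(f"Cloud, 15% duty cycle: ${cloud_cost(0.15 * HOURS_PER_YEAR):,.0f}")
print(f"Dedicated machine:     ${dedicated_cost_per_year:,.0f}")
```

With numbers like these, an always-on cloud instance costs more than the dedicated box, but a machine that is busy only a fraction of the time is far cheaper to rent - which is the burstiness point.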

To really take advantage of cloud computing, the application must be able to monitor or predict its load and scale things up or down dynamically as needed. EDA applications or their wrapper scripts would need to get smarter to do this.
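
As a rough illustration of what a “smarter” wrapper might look like, here is a minimal Python sketch that sizes a compute pool from the depth of a job queue. The CloudPool class and its set_size method are hypothetical stand-ins for whatever provisioning API a real flow would call, and the scaling policy is deliberately naive.

```python
import math

class CloudPool:
    """Hypothetical stand-in for a cloud provisioning API."""
    def __init__(self):
        self.size = 1

    def set_size(self, n):
        # A real implementation would start/stop instances via the provider's API.
        print(f"scaling pool to {n} machine(s)")
        self.size = n

JOBS_PER_MACHINE = 4   # assumed capacity of one instance
MAX_MACHINES = 32      # cost ceiling

def autoscale(pool, pending_jobs):
    """Naive policy: one machine per JOBS_PER_MACHINE queued jobs, within limits."""
    wanted = max(1, min(MAX_MACHINES, math.ceil(pending_jobs / JOBS_PER_MACHINE)))
    if wanted != pool.size:
        pool.set_size(wanted)

if __name__ == "__main__":
    pool = CloudPool()
    # Simulated queue depths over time: a regression burst, then things go quiet.
    for depth in [2, 10, 40, 120, 60, 8, 0]:
        autoscale(pool, depth)
```

A real wrapper would poll the job scheduler instead of iterating over a hard-coded list, but the idea is the same: the flow itself decides when to rent more machines and when to let them go.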

Another obstacle that critics of EDA cloud computing often point out is the need to move files between tools and to script flows. Technically, I don’t think this is an issue, aside from perhaps the need for EDA users to increase the bandwidth of their network pipes. Web service APIs would allow people to script all sorts of operations in their flows and move information between different EDA tools in the cloud, perhaps hosted by different cloud providers.
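
To illustrate the kind of scripting I have in mind, here is a small sketch that pulls a netlist out of one hosted tool and pushes it into another over plain HTTP using Python’s requests library. The endpoints, authentication scheme, and JSON fields are all invented for the example; the point is only that ordinary web APIs make cross-vendor, cross-cloud flows scriptable.

```python
import requests

# Hypothetical endpoints for two cloud-hosted tools, possibly at different providers.
SYNTH_URL = "https://synth.vendor-a.example.com/api/v1"
VERIF_URL = "https://verif.vendor-b.example.com/api/v1"

def run_cross_cloud_flow(design_id, key_a, key_b):
    # Fetch the synthesized netlist from tool A.
    resp = requests.get(
        f"{SYNTH_URL}/designs/{design_id}/netlist",
        headers={"Authorization": f"Bearer {key_a}"},
        timeout=60,
    )
    resp.raise_for_status()
    netlist = resp.content

    # Hand the netlist to tool B and kick off a verification run.
    resp = requests.post(
        f"{VERIF_URL}/runs",
        headers={"Authorization": f"Bearer {key_b}"},
        files={"netlist": ("top.v", netlist)},
        data={"testsuite": "smoke"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]
```

File sizes at this level are small enough that the transfer itself is a non-issue; the large back-end artifacts are a different story.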

There are many other angles to cloud computing and EDA, and I could likely write 10 more blog postings on the topic. In terms of an end market, the cloud is an electronic system, and there are opportunities for EDA to serve this growing market. Lori Kate Smith of ARM wrote up the 47DAC reception, mentioning how Mary Olsson of Gary Smith EDA cites cloud computing as an application driver for EDA.

Another opportunity for EDA and FPGA vendors would be to have a cloud of FPGAs that could be re-configured.  This re-configurable cloud computing would be pretty cool.  Perhaps we’d need FPGA virtualization first, if it doesn’t already exist. Wonder if the folks at Google are looking into stuff like that…

Of course there is also the issue of security when it comes to cloud computing. I see Harry the ASIC Guy was interviewing two cloud security experts at DAC, and I’ve yet to check that out. Knowing Harry, it will be worthwhile. One concern is that if the cloud infrastructure becomes compromised, then everything running on it can potentially become vulnerable. This is a bit different from a data center with isolated, distinct, dedicated machines, where each machine would need to be compromised individually.

Clearly there are a lot of opportunities and challenges for EDA with respect to cloud computing. It will be exciting to see how the future unfolds. Stay tuned.

Everyone in chip design uses a browser - there’s little doubt about that. I’d wager that most chip designers spend more time in a browser than in any other tool, including the command line, the emacs or vi text editor, the Eclipse IDE, and the logic simulator.

Today, chip designers are likely to use a browser for:

  • Looking at various indexes of technical documentation in HTML and PDF, including IP and register map specifications and document control
  • Viewing development reports, test coverage data and analysis
  • Researching suppliers, IP, algorithms, technical standards, how to articles, …
  • Managing bugs
  • Collaborating through a Wiki
  • Accessing various other information on the corporate Intranet
  • Reading EE Times and Slashdot while waiting for that long test-case, simulation or synthesis job to complete

Though not a chip designer anymore, I’ve been spending more time working in a browser, especially now that I’ve warmed up to Google Apps. And I’m not alone - I’ve even heard of some chip design companies using it too. Now that the word is out that Google is building chips, there is a good chance they’d use Google Apps. Let’s face it, there is a lot of spreadsheet work in chip design, and Google Spreadsheets is quite powerful, especially in a collaborative context. There is a lot of block diagram work too, and Google’s new Drawings tool offers hope there. Whether you like web apps or not, I think most chip designers would agree that the browser is increasingly used for legitimate work.

There are many areas in chip development where the browser can play a bigger role, especially in a collaborative context. One recent example is Synopsys’ Lynx Design System, which appears to have a browser GUI for its Management Cockpit. At PDTi we’ve been pushing the limits of using the browser for all things register management: capturing and modelling the executable specification and generating dependent code and documentation.

Google has been pushing the limits of what is possible in the browser. The impressive video showing Quake II running in a browser is mind-blowing and highlights the possibilities of HTML5 and the next-gen browser. This supports the argument that graphical EDA tools, such as simulation waveform debuggers and graphical layout tools, could be supplied as web applications, perhaps even under the SaaS model.

Are the naysayers missing something here - could the browser be the ubiquitous platform for everything, even EDA tools and Chip Design?

For the system designer, the platform is a complex supply chain of internally and externally developed hardware/software intellectual property (IP). There is a complex set of risks and trade-offs that must be analyzed in the decision-making process. The winning platform provider will go above and beyond to provide a whole solution, including a programmer’s guide to the register map, driver code, and maybe even an executable specification model.

There are many different requirements that the System Developer may have for the IP deliverables, including:

  • a programmers’ guide on interfaces, interrupts, registers, and so on
  • example/reference firmware for driving and testing the IP in a standard configuration
  • inter-operable models of the IP in various different languages, at different levels of abstraction (firmware, ESL, HVL, RTL, XML, etc.)
  • ability to integrate and even re-code firmware in a system-specific way across all IP in the system
  • ability to easily integrate and brand IP documentation across the entire system in a consistent, professional way

A good approach to the modelling and abstraction of the hardware/software interface can help to achieve these requirements for both the IP producer and the system integrator. The SpectaReg register management tool’s approach is to generate the different deliverables from a common, single-source specification model. This provides opportunities for “bottom up” and “top down” approaches to firmware abstraction and system documentation preparation.
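
As a toy illustration of the single-source idea (this is not SpectaReg’s actual input format or output, just a sketch), one small register description can drive both a C header for the firmware team and a documentation table for the programmer’s guide:

```python
# Toy register description; real tools use much richer models (e.g., IP-XACT).
SPEC = {
    "block": "uart",
    "base_addr": 0x4000,
    "registers": [
        {"name": "CTRL",   "offset": 0x0, "desc": "Control register"},
        {"name": "STATUS", "offset": 0x4, "desc": "Status register"},
    ],
}

def gen_c_header(spec):
    """Emit firmware-facing address #defines from the common model."""
    lines = [f"/* Auto-generated register offsets for {spec['block']} */"]
    for reg in spec["registers"]:
        addr = spec["base_addr"] + reg["offset"]
        lines.append(f"#define {spec['block'].upper()}_{reg['name']}_ADDR 0x{addr:04X}")
    return "\n".join(lines)

def gen_doc_table(spec):
    """Emit a documentation table from the same model."""
    rows = ["| Register | Address | Description |", "| --- | --- | --- |"]
    for reg in spec["registers"]:
        addr = spec["base_addr"] + reg["offset"]
        rows.append(f"| {reg['name']} | 0x{addr:04X} | {reg['desc']} |")
    return "\n".join(rows)

print(gen_c_header(SPEC))
print(gen_doc_table(SPEC))
```

Because every view is derived from the one model, the header, the documentation, and any other deliverable cannot silently drift apart.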

Abstraction by the IP Provider - Bottom Up:
The IP provider should provide a memory map document of the different memory-mapped elements and registers. Better yet, they may additionally provide device driver code that abstracts the registers to provide a higher-level view of the IP. Since the abstraction is created by engineers specializing in the IP rather than engineers specializing in the overall system, we call this “bottom up” abstraction.

Abstraction by the System Integrator - Top Down:
The IP consumer may have their own special way of doing device drivers, system memory testing and diagnostics. They might be targeting a specialized processor or interconnect architecture, language or operating system. They might be optimizing for throughput, power, or memory. They might have custom monitoring, programming and debugging systems. For these reasons the IP consumer might choose to create their own firmware. This “top down” approach to hardware abstraction requires that both the IP provider and consumer have excellent and inter-operable register management workflows.

The Register Supply Chain:
Something we are seeing in our extensive work with registers is that there is a supply chain of register specifications from the different IP providers. For example, the provider of a SPI core may have several registers that they code in Verilog, VHDL, C/C++ and publish in HTML or PDF. Then, the consumer of the SPI core may want to integrate the registers of the SPI core into their overall register map and C/C++ driver code in a way that is consistent across the entire system. Within this process there are various different teams and perspectives that need to consume the specification and produce work based upon that. This is the Register supply chain.

A Semantic Register Specification and the Supply Chain:
Ideally the IP developer captures the registers into a semantic specification model that can be used as the single source of the register interface specifications. With SpectaReg, this is done through a browser-based GUI and imported/exported using IP-XACT (and many other formats too). The underlying model specifies all relevant information about the registers, including typing information that describes how each register’s bit-fields are implemented and how they function. The model also includes inter-relationships between register fields. For example, a certain bit may be defined as an interrupt and associated with trigger and mask bits. The firmware programmer knows how the interrupt will operate, and the RTL developer knows how it is auto-implemented in the Verilog and/or VHDL, based on the typing of the bit. From the semantic model, the related RTL and firmware code can be auto-generated to target different bus interface protocols and different coding and presentation styles that suit the needs of the system integrator.
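
Here is a rough sketch of what that typing buys you, using an invented in-memory model rather than SpectaReg’s real one (read_reg below is a placeholder for whatever register-access helper the firmware uses). Because the field is declared as an interrupt with associated mask and trigger information, both a firmware-side check and an RTL-side assignment can be derived mechanically from the same description:

```python
# Invented field model: a status bit typed as an interrupt, with its related
# mask bit and hardware trigger captured as part of the specification.
FIELD = {
    "reg": "IRQ_STATUS",
    "name": "RX_DONE",
    "bit": 3,
    "type": "interrupt",
    "mask_field": ("IRQ_MASK", 3),
    "trigger": "rx_done_pulse",   # hardware event that sets the bit
}

def gen_firmware_check(f):
    """Firmware view: is this interrupt both asserted and unmasked?"""
    mask_reg, mask_bit = f["mask_field"]
    return (
        f"#define {f['name']}_PENDING() "
        f"(((read_reg({f['reg']}) >> {f['bit']}) & "
        f"(read_reg({mask_reg}) >> {mask_bit})) & 0x1)"
    )

def gen_rtl_set_logic(f):
    """RTL view: the event that sets the sticky interrupt bit."""
    return (
        f"// {f['reg']}[{f['bit']}] is set by {f['trigger']}, cleared by software\n"
        f"if ({f['trigger']}) {f['reg'].lower()}[{f['bit']}] <= 1'b1;"
    )

print(gen_firmware_check(FIELD))
print(gen_rtl_set_logic(FIELD))
```

The specifics are invented, but the pattern is the real point: the relationship between the status bit, its mask, and its trigger lives in the model once, and every generated view honours it.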

Simply throwing the semantic register specification over the wall to the IP consumer (say, in an IP-XACT XML file) does not solve everything. There are opportunities for the supply chain to get out of synchronization and for non-formalized communication to get lost. The flows for making changes and feeding information back from the different parts of the supply chain are not well defined. Using a dynamic web application to manage the specification model and the dynamic, collaborative work-flows of the register supply chain is part of our vision. We see this as the best way to simplify the overall process and address the needs of all stakeholders. What do you think?

Good news for the FPGA masses who want access to the ARM ecosystem of operating systems, tools, IP, and applications — last week Xilinx and ARM announced their collaboration to enable ARM processors and interconnect on Xilinx FPGAs. This new dimension of the Xilinx Targeted Design Platform is a dramatic shift by Xilinx away from their traditional IBM PowerPC architecture.

Meanwhile, over on Innovation Drive, Altera is licensing the MIPS architecture, and the market awaits more information.

Having an on-FPGA ARM is not a new idea. Early this decade Altera introduced their ARM-based hard core then changed strategy toward their NIOS II soft processor. And of course Actel, Altera and Xilinx have been supporting ARM-based soft cores for some time.

The announcement reveals that Xilinx is adopting “performance-optimized ARM cell libraries and embedded memories,”  conjuring images of ARM-based hard cores. They mention that the roadmap is toward “joint definition of the next-generation ARM® AMBA® interconnect technology… optimized for FPGA architectures.” This hints that the interconnect will be at least partially in the fabric as one would expect in an FPGA embedded system.   How the FPGA architect extends the base system and configures and stitches the fabric remains to be seen. With only vague bits of information released there are many unanswered questions:

  1. What does this mean for Xilinx’s customers using the IBM PowerPC processor, or the MicroBlaze processor with IBM CoreConnect (PLB & OPB)?
  2. What is the tool chain?  Will ARM/AMBA be supported within Xilinx tools (like XPS & EDK) or is the community supported by a third party tool-chain?
  3. Which of the AMBA protocols will be supported by Xilinx — AXI, AHB and/or APB? AXI is the only one explicitly mentioned in the Xilinx Targeted Design Platforms boilerplate.
  4. Will the ARM RISCs be available as hard and/or soft cores within Xilinx FPGAs?  As stated earlier, my guess is that it’s a hard core.

If you have any hard answers or guesses about what’s going on here, please leave a comment.

Personally, I’m excited to get PDTi engineering hands on an ARM-based Xilinx dev kit so we can help our customers continue to simplify their hardware/software register interface management, should they choose ARM-based Xilinx embedded systems.

[UPDATE 2009-11-05]

From the comments there are some other great questions:

  • How will Xilinx’s strategy with ARM differ from that of Altera and what did Altera miss (if anything) in getting customers onto their ARM-based FPGA platform? [Gary Dare]
  • Why did Altera veer towards their own NIOS after going through all that trouble to get ARM-based products? [Gary Dare]
  • With MIPS as their alternative architecture, is Altera looking to horn in on QuickLogic’s market? [Gary Dare]
  • In what market will ARM FPGA platform offerings be the most successful? What market/application is Xilinx going to focus in on first?

[Chart: S&P 500, August 2009]

Wow, what a spectacular run the equity markets have had since the low in March of 2009. Meanwhile, the jury is out on whether this is a sustainable recovery backed by fundamentals and precedent. There are people calling for hyperinflation ahead and others for deflation. With such uncertainty, opinions vary widely regarding which way things will go, as illustrated by the following articles, which I found interesting (followed by my point-form summaries):

The Greenback Effect
NY Times opinion article by Warren Buffett

  • “gusher of federal money” avoided meltdown & was needed to combat depression
  • “economy appears to be on a slow path to recovery”
  • US deficit/GDP will rise to 13% this year into uncharted territory - more than 2x the wartime record

Twitpic from S_Tomasello "How's the attendance at #46DAC today? Umm..."

Last week at the Moscone Center in San Francisco, the 46th annual Design Automation Conference (DAC) took place. I’ve attended this conference for the past four years but decided not to attend this year; instead, I attended virtually using the web.

In the EDA media and for EDA trade shows, as Bob Dylan sang, the times they are a-changin’.  It’s no secret that the incumbent media is struggling to find a business model that works in the uncharted waters of the future.  As history repeats itself, the “hidden hand of supply and demand” will no doubt fix some shortfall with the traditional model — a shortfall that may not be fully understood until it is solved.

With the electronic media shedding their top writers, the coverage of DAC by trade publications is diminishing.  At the same time, new media, such as blogs, Twitter, and LinkedIn are picking up some slack.  For example, Richard Goering and Michael Santarini who historically covered DAC for EETimes and EDN now write for Cadence and Xilinx respectively.  Some of the best DAC summaries that I read were blogged by:

Additionally, on Twitter, the #46DAC tag provided useful information about what was going on at the tradeshow.  For me, some tweeps who provided informative DAC coverage via Twitter included:

  • Dark_Faust — editor in chief of Chip Design & Embedded Intel magazines & editorial director of Extension Media
  • harrytheASICguy — ASIC consultant & blogger, did Synopsys Conversation Central sessions at DAC
  • jlgray — Consultant with Verilab, photographer, coolverification.com blogger, conference presenter
  • karenbartleson — Sr. Director of Community Marketing at Synopsys and blogger on “The Standards Game.”  Karen won “EDA’s Next Top Blogger” at DAC.  Karen did a lot of tweeting to inform people about the #46DAC and Synopsys Conversation Central had a “Twitter Tower” that displayed the #46DAC stream.
  • MikeDemler — Tech industry analyst, former marketing insider (from Synopsys), blogs at “The world is Analog”
  • paycinera — EDA Confidential editor Peggy Aycinena broke her cryptic series of gobbledygook biography tweets, the EDA Town & Gown Twitter Project, to provide some of the best Twitter coverage from DAC
  • S_Tomasello — Marketing at Sonics, the providers of “On-chip communications networks for advanced SoCs”

Based on the various reports and summaries from DAC, there is an apparent need for collaboration (as mentioned by keynote speaker Fu-Chieh Hsu of TSMC) and productivity (as mentioned by the CEO panel). The same forces that are changing EDA trade media and conferences — the power of the Internet, coupled with economic forces — may enable better collaboration and productivity. Cloud computing business models like Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) are starting to prove themselves in other industries and will continue to find their way into common use. Exactly what the “hidden hand of supply and demand” has in store for EDA and cloud computing has yet to be revealed; we are just in the early stages now.

From various blogs and Twitter, without having attended DAC, I understand that:

  • there continues to be a need for better collaboration, productivity, and higher levels of abstraction
  • today’s current economic situation, spurred by the US credit melt-down, has affected EDA
    • the traditional trade media is struggling
    • new chip design starts are down
    • Magma Design Automation disclosed, just as the conference was kicking off, that they are re-negotiating debt as a result of an audit report regarding their solvency
    • traffic on the trade floor was questionable: some said it was above expectations while others said it was below
    • new VC investment in EDA start-ups is pretty much non-existent
    • TSMC is becoming more and more of an ecosystem heavyweight
    • there is optimism about the future and the recovery of EDA — with change and crisis, there comes opportunity for those who see it
  • the media landscape is changing
    • there is a struggle between the blogsphere and traditional press to cover EDA
    • blogs are gaining acceptance and playing more and more of a role
    • filtering through and connecting disparate info is a problem
    • John Cooley dismisses the utility of blogging, LinkedIn and Twitter and critics say Cooley just doesn’t get it (or virtual platforms and virtual prototypes for that matter)
  • there are big opportunities for Software design, and EDA can play there
    • Embedded software has the potential to double the size of EDA, says Gary Smith, who has pointed to software as the problem for several years now
    • embedded software seats are growing but market is fragmented
    • software IP is of growing importance in the differentiation of SoC platforms
    • the programming models need to change for multi-core
    • multi-core and parallel computing programming models are still pretty low level, like assembly and micro-code
    • Mentor Graphics announced their acquisition of Embedded Alley Solutions, a leader in Android and Linux development systems, unveiling their new Android & Linux strategy
  • System Level is big, particularly for SoC virtual platforms, architectural optimization and IP
    • the SPIRIT Consortium and IP-XACT have merged into Accellera, and there continues to be a need for better standards
    • IP still has a lot of potential and the business model is becoming clearer
    • Despite the importance of ESL, much work is still done at lower levels of abstraction
    • ARC International, the IP and configurable processor provider, is rumored to be under acquisition
  • FPGA
    • Companies are moving to FPGAs and away from ASIC
    • ESL is big for FPGAs
    • Not nearly as much FPGA discussion at DAC as there should be
  • Cloud computing opportunities are being overlooked by EDA (let’s start with the on-site private cloud, then look at multi-tenant ecosystem clouds)

In conclusion, I was able to absorb a lot of details about DAC without attending thanks to all the bloggers, Tweeters and trade media.  EDA is changing in some exciting ways that scream opportunity for some and failure for others, and that’s what makes the future so exciting.

EDA veteran Paul McLellan recently wrote an entry in his EDA Graffiti blog entitled SaaS for EDA. Being a vendor providing the only real SaaS tool for register automation, I was naturally excited. I was a bit less enthused when McLellan indicated on Twitter (he tweeted) that he didn’t think SaaS for EDA would fly. Upon reading his blog posting, it seems his position is more that SaaS will not fly for the traditional, bread-and-butter EDA applications. He did indicate that SaaS for EDA does have a chance for the front end of chip design and for FPGA flows. This is good to hear because it aligns with our focus at PDTi, where we concentrate on the firmware/software register interface.

I started to write a comment for McLellan’s posting but soon realized that I had a lot to say - more than should really be put in a comment.  Hence I decided to write this RegisterBits blog entry.

In general, I know about the front end of EDA tool flows and not much about the back end. That being said, having worked with complex IC development flows at Intel and at a smaller startup, I understand that there is a common back-end flow and that the files are large. At the RTL level and above it’s not so clear-cut: there are often many different point tools, the files are small, and each project can use a different set of tools and scripts. Whether a point tool runs on a local or remote machine really makes no difference in this case, with negligible file sizes and local scripting environments that can access remote point tools using web APIs.

McLellan assumes a tool/hour pricing model and claims that metered usage pricing would not cause prices to go down. With SaaS, however, there could be many different pricing models that better align the value between the user and vendor. Pricing could be based on a project, per user, or per unit of complexity (register, gate, area, …). If the fabs were to take hold of the SaaS model and offer EDA tools, they could even use a royalty-based pricing model, since they know the production output. If you look at the current EDA floating-seat model, the longer jobs run, the more seats are sold, and the more money for the EDA companies. Could it be that EDA vendors using a per-seat model don’t have the incentive to improve runtimes as much as they could?

Another objection is that SaaS doesn’t work well with highly interactive software. This has been true in the past, but it is becoming less true all the time with AJAX rich-client applications like Google Maps. If Google Maps can do the magic that it does, then I would argue that place-and-route and graphical waveform viewers are reasonably possible today. For the most part, though, there is very little graphical interactivity in EDA.

Perhaps the greatest SaaS-for-EDA benefit is the opportunity to reduce the maintenance and infrastructure costs that are so high for traditional on-premise tools. Installing, patching, and upgrading EDA tools is a huge cost that must be factored into the total cost of ownership. SaaS ends the customer’s maintenance of the application and lets the customer focus on value-added, differentiating work while the vendor manages the maintenance. Salesforce.com CEO Marc Benioff circulated an internal note related to the “End of Maintenance,” which is an excellent read on this very topic. Then there is always the argument that chip designers want to control which version of the tool they are using so everything is repeatable. There are ways this can be built into a SaaS offering, and at PDTi with SpectaReg, we’ve got a way to do this.
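
As a sketch of the general idea (not a description of how SpectaReg actually implements it), repeatability can be handled by letting each project pin the hosted generator version it signed off against, so a later re-run reproduces the same output:

```python
# Hypothetical per-project configuration pinning a hosted generator version.
PROJECT_CONFIG = {
    "project": "soc_alpha",
    "generator_version": "2.4.1",   # frozen at design sign-off
    "outputs": ["verilog", "c_header", "html_doc"],
}

def select_generator(hosted_versions, config):
    """Refuse to run against anything other than the pinned version."""
    pinned = config["generator_version"]
    if pinned not in hosted_versions:
        raise RuntimeError(f"pinned generator version {pinned} is no longer hosted")
    return pinned

# The vendor keeps older versions available so pinned projects stay reproducible.
print(select_generator(["2.4.1", "2.5.0", "3.0.0"], PROJECT_CONFIG))
```

The vendor still handles all the installation and patching; the customer just keeps control over which version their flow runs against.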

Referring to the Innovator’s Dilemma and the way Salesforce.com disrupted the CRM market, McLellan states that EDA is not like CRM since there is no “non-consumption” - under-serviced users who don’t currently have access to a tool due to its cost. “EDA is not like that,” he said. I, however, think there are certainly aspects of EDA that are like that, especially with new and innovative tools. Register automation, where we at PDTi are focused, is one such area. We often encounter customers who don’t have any solution (in-house or commercial), who could certainly use one. There are a number of users in developing economies starting to work with EDA tools for ASIC and FPGA who are waiting for better “legal” access to tools. There is the embedded software industry, which is starting to realize that it can use FPGA embedded processor platforms and build custom (multi)processor chips for specialized applications. This is a new under-serviced market for EDA. It amazes me how hyped up the EDAC is about trying to stop piracy of EDA tools, which seems futile since the hackers will almost always be able to crack or cheat the next attempt to stop them. SaaS, however, does provide a solution to the piracy issue. To expand into these highly price-sensitive markets, EDA will need to compete on price, like it or not.

While competing on price is not ideal, SaaS has the potential to afford a better cost structure for both the producer and the consumer. More than the tool price needs to be looked at to get a true picture of the total cost of ownership. The traditional EDA sales channel is super expensive and unnecessary in today’s Internet age. The evaluation processes are too lengthy and difficult, the tool version release milestones are too far apart, onsite tool support is expensive, and the customer’s maintenance and infrastructure are very costly. The cost of having CAD engineers worrying about tool setup and maintenance is more than just the associated labour cost - there is an opportunity cost of not focusing as intensely as possible on core competencies. A SaaS tool can also evolve better, based on the vendor’s aggregate observations of usage patterns, and it’s easier for the customer and vendor to work together to identify, reproduce and fix bugs.

There is much knowledge in EDA that is not packaged in an automated, collaborative way. Lincoln Murphy of 16 Ventures and Ken Boasso of Keychain Logic predict that 80% of future SaaS success stories will come not from the CRM or ERP leaders of today’s SaaS, but rather from the productization of knowledge. There is a lot of knowledge in EDA and chip design that has yet to be productized — the kind of stuff you find in the scripts, processes, procedures, emails, spreadsheets, and documents at a typical chip design company. This could potentially be formalized into SaaS tools with a user community and network effect.

Utilizing the power of native web applications, SaaS can bring about a new class of tools for EDA that use the Internet to help managers manage and engineers collaborate across teams and locations. Think of something like a social networking application, except that instead of being social, it is engineering workflow networking around the project work, tools and flows. This is something that we’ve had good success with for our SpectaReg.com tool. Memory-mapped registers are collaborative, and we tie all the stakeholders together in a formalized way. This isn’t traditional EDA - it is work-flow automation for electronics design, perhaps a new class of EDA that productizes what has traditionally been un-productized knowledge.

If you’re interested in SaaS for EDA, be sure to join the EDA SaaS Enthusiasts LinkedIn Group. If you are doing firmware/hardware addressable register maps, try a free demo/evaluation of SpectaReg.com.

A quick snapshot of the poll described in my previous posting: “For System-on-Chip developers, where is the greatest hardware/software register interface pain?”

Of the 71 votes thus far, 59% indicated that synchronizing the firmware, RTL, hardware verification, and documentation was the greatest pain.  Then 16% voted for register documentation and 16% for hardware verification (like SystemVerilog, Vera, e, SystemC).  The RTL (like Verilog & VHDL) for coding the register-map hardware logic was a pain for 4% of voters and firmware was a pain for 2%.  The live version of the poll is available here, and below is a screenshot of the graph.

[Image: snapshot of the LinkedIn poll graph - SoC hardware/firmware register interface pain]

That only 2% voted for firmware is surprising to me, since when asked where the greatest value was received, a SpectaReg.com register-map automation customer indicated that 50% of the value was provided to the firmware developer. As a result, I would expect anyone without a register code generation solution to experience a lot of firmware pain. Perhaps that’s really a synchronization pain, and the proper solution provides more value for firmware than for any other aspect as a result of cross-team synchronization.

For the 41% that did not vote for synchronization, perhaps they have no register map code generation solution, whether commercial or homemade. Or perhaps their solution does not provide enough value for the option that they voted for.

For the 59% that voted for synchronization, I wonder:

  • How many have a common machine-readable data format for specifying registers that can be used to generate code?
  • How many auto-generate all deliverables from a common source, at the same time, using the same code generation (metaprogramming) engine?
  • How many have an in-house vs. commercial solution (like our SpectaReg.com/SpectaReg Onsite, Duolog’s BitWise, Denali’s Blueprint, or Semifore’s CSRCompiler)?

Many more polls could be created on these topics.

People are very passionate and opinionated when it comes to in-house tools, open source, register specification file formats, and register-map methodologies in general. There were heated discussions on LinkedIn about the following topics, which make for good future blog posts:

  • Open source tooling for register address maps, could it work?
  • Is SystemRDL really needed in addition to IP-XACT?
  • Which comes first, the code or the spec?
  • With an in-house register solution, where’s the pain? (relates to opportunity cost in build vs. buy decisions)

If you have thoughts to share, be sure to leave a comment for discussion.  There is some good meat here for future RegisterBits.com postings so stay tuned.

Did you know that LinkedIn allows users to create polls?  I discovered this today and created my first poll to ask my network of SoC developers where their greatest hardware/software register interface pain lies.  Is it the firmware, RTL, hardware verification, documentation, or is it synchronizing all the different perspectives around the common specification?

Click on the below image or link to have your say!  Tell your colleagues too, and of course if you have a comment for discussion post it here.

[Image: SoC register interface pain poll]

LinkedIn SoC Register Interface pain poll.

http://polls.linkedin.com/p/31486/ovmia

Update: Someone mentioned that they don’t understand the poll because they have an in-house solution that solves these pains. For those with an in-house solution, read the poll as “where is the greatest value from your in-house solution?” I suppose a sixth option, “my in-house solution is the biggest pain,” should have been included too, especially now that there are commercial solutions available.

In engaging with companies about register-map automation, it amazes me how many engineers think that because they have an in-house register solution, or because they could build one, this route makes the most sense. Although there are some people who get it, quite often engineers fail to take opportunity cost into consideration (sunk costs too, but I’ll cover that in another posting). I’ll explain via an example…

Picture this: an engineering team needs a register automation tool for developing a new ASIC that is the highest-performance XYZ. Their options are:

  1. License one of the commercially available register tools
  2. Build a register generation tool from scratch, or
  3. Do the registers manually

The engineering manager estimates the costs of the different options using an hourly rate per engineer vs. the licensing costs for the commercial tool. Because the commercial tool is shared among several companies who subsidize its ongoing development, it’s priced lower than the cost of creating an in-house solution. For argument’s sake, though, let’s assume that somehow the manager estimates that option 2 is cheaper by some accounting magic. Forget about option 3; the ASIC has way too many registers to do manually, so that’s out of the question.

The engineering manager chooses option 2 because, in this strange, imaginary, upside-down world of magical accounting, the cost of building an in-house tool is less than the cost of buying a commercial one. Reasonable, right?

Wait a minute, maybe not…

The company’s core competency is building high-performance XYZ products, not building, testing and supporting tools. That tool building dilutes focus. What is the cost of forgoing the opportunity to focus as intensely as possible on building the best and highest-performance XYZ possible? That’s a bit harder to quantify, but it is a cost. That’s the opportunity cost!

What happens if the competition chooses option 1, and smokes the company in terms of performance? How much does that cost in the long run?

If the manager had decided to go with option 1 — licensing a commercial register tool — that would enable the team to focus more effort on building a better, higher performance XYZ product. That extra focus would enable the company to be more differentiated from their competition, ensuring their dominance and their ability to demand higher profit margins. It’s this differentiation that provides competitive advantage in the marketplace.

One other way to think about it is that differentiating value-added efforts have a greater return on investment (ROI) than non-differentiating efforts.
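
To put rough arithmetic behind that, here is an illustrative-only calculation; every figure below is invented for the sake of the example, not real pricing.

```python
# Invented numbers to illustrate opportunity cost in a build-vs-buy decision.
engineer_year = 150_000   # hypothetical loaded cost of one engineer-year
license_cost  = 50_000    # hypothetical commercial tool cost
diff_return   = 3.0       # hypothetical return multiple on differentiating work

# Naive comparison looks only at direct spend.
print(f"Build, direct cost only:      ${engineer_year:,}")
print(f"Buy, direct cost only:        ${license_cost:,}")

# Opportunity cost: the margin forgone by not spending that engineer-year
# on differentiating the product instead of on an internal tool.
forgone_margin = engineer_year * diff_return
print(f"Build, including opportunity: ${engineer_year + forgone_margin:,.0f}")
```

Even if “accounting magic” made the direct costs look comparable, the forgone differentiation return dominates the comparison.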

Something to think about, next time you have an option that would enable more intense focus on differentiating core competencies.

Stay tuned for more postings on the economics of build vs. buy engineering decisions.