Criticalblue’s CEO David Stewart reveals how software-driven hardware performance optimization moved the company beyond the chip-design world.
By John Blyler, Editorial Director
During a recent trip to Scotland, I met with EDA veteran David Stewart, CEO of Criticalblue. Over a dinner of haddock soup and Glengoyne, we talked about the company’s challenges in focusing on the embedded space; the “free money” of software performance analysis and optimization; how “good enough” delayed the multicore software coding revolution; and much more. What follows is a paraphrased version of that conversation. — JB
Blyler: What is Criticalblue doing these days?
Stewart: We still optimize software right down at the hardware metal. Now, instead of adding new hardware, we optimize the software to run better on the existing hardware. That might mean algorithm changes, compiler backend changes, or data structure changes so that the product uses the cache systems much more efficiently. Among other things, our tools analyze architectures to optimize performance.
Blyler: Why is this optimization so important?
Stewart: Because most people writing software are not thinking about the underlying hardware architecture. They can easily make mistakes. For example, a software developer might create a data structure with many fields in it. One of those fields might be accessed frequently but be a relatively small part of a bigger data structure. Each time that field is fetched, the cache pulls in the surrounding parts of the larger structure as well, instead of just the small piece that is needed. A typical software developer might not think about the problems that will cause.
Blyler: Many of the new Internet-of-Things (IoT) platforms are aimed at non-technical developers. Are you seeing an increase in developers who don’t really understand the underlying hardware?
Stewart: That does seem to be happening but I think that the IoT will probably come a bit later for us. Over the last few years, we’ve done a lot of work in the area of mobile and Android optimizations, that is, trying to optimize Android to run better on certain hardware platforms in the mobile space, e.g., cell phones, tablets, that kind of thing.
Telecom is another big area for us. Working in this market has reminded us that there are many different types of software engineers. Some of them don’t really understand hardware architectures like processor caches, instruction sets, and pipelines. For example, one needs to appreciate the workings of pipeline prediction systems when writing software for performance critical applications.
Blyler: It seems that you’ve shifted away from being a purely EDA company. Did that require a big change?
Stewart: It was more of a mentality shift for us because previously we would analyze the software and then synthesize the best hardware architecture. It took us a while to realize that we were working with fixed platforms where you cannot improve the hardware to improve the overall system, that is, the number of cores and caches are fixed. In that case, I have to make the best use of the hardware by optimizing the software.
Blyler: Also, chip and board hardware have become commodities. As such, there are fewer but more standardized platforms where software becomes the differentiator.
Stewart: We’ve seen that change in the Software (formerly Silicon) Glen region of Scotland. There aren’t as many hardware companies now. The ones that are here are doing less silicon design and using standard silicon products. That is a change for us.
Blyler: How has Criticalblue changed to deal with the changing market?
Stewart: We’ve been re-aligning ourselves for the last two years, growing our existing business with several new products that we haven’t talked about yet. And we’ve been hiring quite a lot. We’ve grown to the point that we are now self-financing, using the money that the business provides to invest in our own business as opposed to having to go to venture capitalists (VCs) for funds. That is nice. Naturally, we still have a few VCs as shareholders.
Blyler: Do you plan to become a public company?
Stewart: No, but I am doing a lot more market validation. I’ve learned painfully that it’s very dangerous to believe you know what the market needs and develop it without doing some basic market validation. You need to be sure that the market really is the way you think it is. It is very easy to imagine that you have a brilliant idea and forget about the market, thinking that customers will come to you.
Blyler: That sounds like the mindset of a technically competent engineer who, after enough experience, realizes that a great idea is not enough.
Stewart: Yes. And that is probably one of the most common mistakes to make, though it’s hard to be completely objective. It wasn’t that the products we had in the past were not good or useful. Rather, the market seemed to shift and our product features became “nice to haves” instead of “must haves.” That difference is the key. You have to come into the market with something that is a “must have.” It’s not a really cool technical solution looking for a problem to solve; rather, it’s addressing some real pain point of the customer. It’s dealing with the thing that keeps the designer up at night. If you can do that, then you’ll be profitable. That is the best approach, rather than trying to sell them on how your cool solution gives them value.
Blyler: Are you positioning yourself more in the embedded space?
Stewart: Yes. Our market now consists of performance-centric or performance-sensitive applications in the telecom, automotive and mobile spaces. We are doing lots of work with video recognition in automotive driver-assistance technology. Also, we have expanded our Linux-based distributed computing work.
Blyler: Is performance more critical than power in these markets?
Stewart: That’s true in the mobile space. In the telecom market, performance is needed to meet reaction time or quality of service (QoS) requirements.
Blyler: Your company seems to have successfully evolved from the EDA space into the larger embedded systems market. That’s a bit of a change.
Stewart: It’s been an interesting journey. There were key moments when we made quite dramatic changes to our outlook. If we hadn’t acted on those changes then we wouldn’t be here now. We’ve had a few near-death experiences over the years, but we are in a much healthier place.
The change for us was to partially move away from hardware. The other big change was the realization that there was a service element to our work. In 2009, we first launched our analysis tool (Prism) for multicore partitioning. At that same time, multicore development was going mainstream and most people thought that a lot of software partitioning would be needed to make use of all the available cores. But in fact, that wasn’t the case.
Blyler: I remember your involvement with the Multicore Association.
Stewart: I shared the Multicore Programming Practices (MPP) Group chair position with Max Domeika, who at that time was with Intel’s Software and Services Group. We published the best practices guide for multicore software development techniques using existing languages like C/C++.
Blyler: Let’s return to your observations that everyone thought multicore development would require a lot of software partitioning. But it didn’t happen. Why?
Stewart: It didn’t happen because when people put their existing software on next-generation silicon with a multicore-aware operating system, the performance turned out to be “good enough.” If they had partitioned the software it would have run faster but, in most applications, it was good enough. And that is what engineers do. They find the most efficient and balanced design.
In working with customers, we found that the performance bottlenecks weren’t to do with partitioning but with matching the software to the hardware platform. These are the same issues that I mentioned earlier, that is, doing analysis to improve the backend compilers, the algorithm implementations, or the use of data and instruction caches in the system.
Blyler: This is the area where software really meets hardware.
Stewart: We extended our analysis tool to show designers what is going on at a low level when performance-critical software runs on the hardware. But then we discovered that most software developers don’t understand how to deal with the analysis. That’s when our service grew to help the customers get their job done. The service activity quickly evolved into full subcontracting work doing extensive software optimization. For example, a customer needs software to run 20% faster on an existing hardware platform. We analyze their code, optimize it and give them the updated code. It’s turned out that this specific type of performance optimization is a profitable business.
Many customers are so busy adding new functionality to their software suites that they have little time for performance analysis or optimization. Typically, we’ll create a performance analysis suite for our customers so that, as they add new features per release, this test suite will indicate how much the new features have slowed down the overall performance. This decrease in speed prompts an investigation into the cause of the slowdown and a solution to improve the performance. This activity applies to a growing number of industries that didn’t necessarily have to worry about performance in the past.
If you can take an existing hardware platform, optimize the software and make it run 20% faster, then potentially you can charge a premium for that existing product. This can result in additional or “free” money. So there are good economics in achieving performance optimization.
Blyler: Thank you.
Originally posted at Chipestimate.com “IP Insider.”