Software running close to the silicon still presents many design challenges, from memory-footprint optimization to security vulnerabilities.
By John Blyler, Editorial Director
Many semiconductor chip tool vendors no longer frequent board-level embedded software shows. These companies – often in the simulation space – have found they have more in common with the chip industry than with the embedded-systems market, and are returning to conferences where the software lies closer to the silicon hardware, such as ARM Techcon, Renesas Devcon and the like.
It’s no secret that the EDA tool vendors and semiconductor chip companies can no longer ship just the bare-metal platform. They must include software such as drivers, an RTOS and more.
“Initially we worked with the people buying the chips, but now we find ourselves working with the people developing the chips,” explains Simon Davidmann, Founder and CEO of Imperas. “Today, the semiconductor vendors have to provide more complete simulation platforms.”
This shift to software-based virtual platforms has been gaining momentum. Over the years, the EDA industry has had to expand its coverage into the larger system market. For example, the semiconductor supply chain used to be very fragmented: chip and IP vendors worked with product manufacturers, such as mobile-phone makers. These end-product OEMs then partnered with operating-system suppliers and finally network operators. Over the last decade or so, that supply chain has evolved toward subsystem integration, with EDA tool and IP companies now providing software (drivers and firmware) to their customers.
Beyond the software side of processor-based chips, simulation companies like Imperas have seen an increase in performance-optimization issues. In addition to traditional processor-speed challenges, today’s designers must ensure the software uses memory efficiently to achieve system-wide optimization at the lowest power levels.
“Our customers are concerned about their memory footprint, not only the performance of the processor,” notes Davidmann. “It’s important to understand the way a software stack uses the underlying hardware. How efficient is that operation? Certainly, our customers want to find software bugs, but equally important to them is where the software is spending the most time – that is, the memory footprint.”
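To make the idea concrete, the sketch below shows one way a per-function memory-footprint report might be derived from an instruction-accurate simulator’s trace. This is purely illustrative – the trace format, function names and byte counts are invented and do not reflect the Imperas tools or any real profiler output.

```python
from collections import defaultdict

# Hypothetical simulator trace records: (function, bytes_read, bytes_written).
# A real instruction-accurate simulator would emit far richer records.
TRACE = [
    ("memcpy", 64, 64),
    ("memcpy", 64, 64),
    ("parse_header", 16, 4),
    ("checksum", 256, 0),
    ("checksum", 256, 0),
]

def memory_footprint_by_function(trace):
    """Aggregate bytes touched per function to surface memory hot spots."""
    totals = defaultdict(int)
    for func, rd, wr in trace:
        totals[func] += rd + wr
    # Sort hottest-first, mirroring a typical profiler report.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for func, nbytes in memory_footprint_by_function(TRACE):
        print(f"{func:14s} {nbytes:6d} bytes")
```

A report like this answers exactly the question raised above: not just where cycles go, but which code is responsible for the most memory traffic.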
Perhaps that is why more customers – especially at universities – annotate their code while running software simulations. For Imperas and others in this space, more universities are using simulation tools to understand the way software runs on a given hardware platform. This understanding is also key when profiling system-wide power consumption: how the software exercises the hardware often determines the system’s dominant power draws. Davidmann explains that Imperas freely licenses much of its software to universities for this purpose.
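One simple form such annotation can take is attaching per-instruction-class energy costs to a simulated run and summing them. The sketch below is a toy model under assumed numbers – real energy figures would come from silicon characterization, and real tools use far more detailed models.

```python
# Hypothetical per-instruction-class energy costs in nanojoules.
# These numbers are invented for illustration only.
ENERGY_NJ = {"alu": 0.5, "load": 1.2, "store": 1.4, "branch": 0.7}

def estimate_energy(instr_counts):
    """Annotate a simulated run with a rough energy estimate (nJ)."""
    return sum(ENERGY_NJ[cls] * n for cls, n in instr_counts.items())

# Instruction counts that a simulator might report for one run.
run = {"alu": 1000, "load": 400, "store": 150, "branch": 200}
print(f"estimated energy: {estimate_energy(run):.0f} nJ")
```

Even a crude model like this shows why the software matters for power: a memory-heavy run (many loads and stores) costs markedly more energy than an ALU-heavy one with the same instruction count.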
Another growing trend for simulation tools is security analysis, i.e., making systems less vulnerable to attack. Hypervisors play a big role in this task. In practice, hypervisors run on top of the simulated hardware; a hypervisor is a software layer that enables multiple operating systems to run simultaneously on a single hardware platform.
Hypervisor technology is nothing new. Hypervisors were originally used on mainframes, more recently on desktop computers, and now appear on embedded systems. A hypervisor is added to an embedded platform to separate different types of software. “This allows the development of software on top of software,” notes Davidmann.
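The separation idea can be illustrated with a toy model (this is not a real hypervisor, and the guest names are invented): a thin layer time-slices several isolated guests on a single core, with each guest seeing only its own state.

```python
class Guest:
    """An isolated software stack; each guest holds only its own state."""
    def __init__(self, name):
        self.name = name
        self.ticks = 0

    def run_slice(self):
        self.ticks += 1  # stand-in for executing guest code for one slice

def hypervisor(guests, slices):
    """Round-robin scheduler: multiplex guests on a single 'CPU'."""
    schedule = []
    for i in range(slices):
        guest = guests[i % len(guests)]
        guest.run_slice()
        schedule.append(guest.name)
    return schedule

guests = [Guest("rtos"), Guest("linux")]
order = hypervisor(guests, 4)  # alternates: rtos, linux, rtos, linux
```

The security appeal is the isolation: an exploit in one guest cannot reach into another guest’s state, because all sharing is mediated by the thin layer in between – and a simulator lets designers probe that boundary before silicon exists.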
The interaction of software and hardware continues to be complex, which is why tools that play near the interface of both are so important.