That FD-SOI can be a key to achieving near-threshold voltage design was an important point made during a 55th DAC expert panel. Entitled How Close to Threshold-Voltage Design Can We Go Without Getting our Fingers Burnt?, the session was organized by Jan Willis of Calibre Consulting. Turnout was excellent. By the way, Jan (herself an EDA expert) was one of the original advisors in the formation of the SOI Consortium, and while this DAC panel was not meant to be about FD-SOI, it turned out to be a focal point.
Near-threshold voltage design* is an especially hot topic for IoT and edge-computing designers, for whom balancing performance, reliability and extremely low power is generally challenge #1. For them, the ability to get chips working at very low voltages translates into battery life savings.
The original goal of the panel was “…to explore how far below nominal voltage we can design, in what applications it makes sense and in what ways it will cost us.” The description in the 55th DAC program noted that “Energy consumption is the driving design parameter for many systems that must meet ‘always-on’ market requirements and in IoT in general. For decades, the semiconductor industry has attempted to leverage the essential principle that lowering voltage is the quickest, biggest way to reduce energy for a SoC. Some today contend sub-threshold voltage design is viable while others argue for near-threshold voltage design as the minimum.”
(Update 2 August 2018: a complete video of this panel is now available on YouTube — click here to view it.)
The panelists included Scott Hanson (CTO, Ambiq Micro), Lauri Koskinen (CTO, Minima Processor), Mahbub Rashed (GlobalFoundries) and Paul Wells (CEO, sureCore). Brian Fuller of Arm served as moderator.
Following the panel, Jan published an excellent recap on LinkedIn. She graciously agreed to let it be reprinted here in ASN, for which we thank her. So without further ado, read on!
First published on LinkedIn, June 27, 2018 by Jan Willis, Strategic Partnerships & Marketing Executive
In a panel session at the 55th DAC in San Francisco on Monday, June 25, Brian Fuller of Arm skillfully guided a group of experts through the challenges of near-threshold design, concluding that adoption is going to start gathering pace.
Scott Hanson, CTO of Ambiq Micro, led off by saying the list of what’s not challenging is a much shorter list, but that by taking an adaptive approach, they have been successful. It has required innovating throughout the design process, including test, where Scott said they had to create their own “secret sauce” to make it work. Later in the panel, Scott described near-threshold designers as “picojoule fanatics” who must overcome the limitations of design tools geared towards achieving performance goals.
Lauri Koskinen, CTO of Minima Processor, agreed that adaptivity is key. Minima holds that it has to be done in situ in the design to make it robust for manufacturing while remaining useful across more than one design. Later in the panel, Lauri indicated that, in the Minima approach to near-threshold design, FD-SOI is like having another knob available for optimizing energy.
Mahbub Rashed, head of Design and Technology Co-Optimization at GlobalFoundries, highlighted the need for more collaboration between EDA, IP, and foundries to support near-threshold design, but noted a lot of progress has been made on FD-SOI processes. Mahbub cited that models down to 0.4V for FD-SOI processes are available now, and that GlobalFoundries is able to guarantee yield.
Paul Wells, CEO of sureCore, confirmed that sureCore has successfully benchmarked its memories on GlobalFoundries FD-SOI. He reflected that FD-SOI has rapidly established itself as cost effective for a number of emerging markets. The panelists all agreed that achieving quality in memory at near-threshold voltage is much tougher than for digital IP. [Editor’s note: sureCore‘s CTO wrote an excellent summary of their SRAM IP for FD-SOI in ASN back in 2016 – you can still read it here.]
At the end of the panel, Paul summarized that near-threshold voltage design is the way of the future and that it’s gathering pace. Mahbub called upon the EDA community to step up and improve the tools for low-energy design. Lauri and Scott both noted emerging drivers that will grow the addressable market for near-threshold voltage design. Lauri pointed to growth coming from applications that require edge computing, which he thinks will require near-threshold voltage design. Scott concluded the panel by pointing out that there has been a tremendous increase in the performance of near-threshold voltage designs, which will expand the addressable market in the future.
~ ~ ~
This piece was first published by Jan Willis on LinkedIn, June 27, 2018. Here is the original.
* As explained by Rich Collins of Synopsys in the TechDesign Forum: “Operating at near-threshold or sub-threshold voltages reduces static and dynamic power consumption, at the cost of design complexity. […] A transistor’s threshold voltage (Vth) is the voltage at which the transistor turns on. Most transistor circuits use a supply voltage substantially greater than the threshold voltage, so that the point at which the transistors turn on is not affected by supply variations or noise. […] In sub-threshold operation, the supply voltage is well below the Vth of the transistors. In this region, the transistors are partially on, but are never fully turned on. Near-threshold operation happens between the sub-threshold region and the transistor threshold voltage Vth, or around 400 – 700mV for today’s processes.”
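To make the footnote’s point concrete, here is a minimal sketch of why supply voltage is such a powerful energy knob. The numbers (capacitance, frequency, activity factor, and the 0.9V nominal vs. 0.5V near-threshold supplies) are illustrative assumptions, not figures from the article; the takeaway is simply that dynamic switching power scales with the square of the supply voltage.

```python
# Dynamic switching power: P = alpha * C * V^2 * f
# All parameter values below are illustrative assumptions for this sketch.
def dynamic_power(c_farads, v_volts, f_hz, activity=0.1):
    """Approximate dynamic (switching) power of a CMOS block."""
    return activity * c_farads * v_volts**2 * f_hz

nominal = dynamic_power(1e-9, 0.9, 100e6)   # ~0.9V nominal supply
near_vt = dynamic_power(1e-9, 0.5, 100e6)   # ~0.5V near-threshold supply

print(f"Power reduction: {1 - near_vt / nominal:.0%}")  # → Power reduction: 69%
```

Because only the voltage changed between the two calls, the ratio is just (0.5/0.9)², roughly a two-thirds cut in switching power, which is why the panel calls lowering voltage “the quickest, biggest way” to reduce SoC energy.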
By Duncan Bremner, CTO, sureCore Limited
Editor’s note: sureCore just announced availability of its 28nm FD-SOI memory compiler (press release here), which supports the company’s low-power, Single and Dual Port SRAM IP. Here, the company’s CTO explains why this IP is getting such impressive results.
~ ~ ~
Recently, sureCore announced results from a 28nm FD-SOI test chip that showed dynamic power savings exceeding 75% and static power cuts up to 35% (when compared against a number of current commercial offerings), while only incurring a 5-10% area penalty for its ultra-low power SRAM IP.
And while this data is easily substantiated, as shown in Figure 1, sceptical industry pundits have raised questions that fall into two camps: (a) that can’t be done; or (b) how did they manage that? In answer to both, here’s a quick look at the history and the engineering strategy we adopted to deliver these results.
Looking back to the early days of sureCore, SRAM fascinated us because, despite many process iterations, the SRAM in use today bears a striking resemblance to the SRAM architectures of the ’70s and ’80s. We concluded that no one had really taken a “blank-sheet-of-paper” look at the architecture in over 40 years. Recognising the growing importance of power efficiency for SoCs targeting forward-looking applications such as wearables, IoT and other mobile devices, we examined power consumption in detail, and began by investigating how we could reduce SRAM power to a level attractive to the next generation of power-critical SoC designers.
Our starting point differed significantly from the traditional approach to SRAM R&D, which typically starts at the bit cell. We recognised that the basic bit cell is fixed by the foundry; it’s a piece of electronics that is carefully optimised for fabrication. Modern bit cells are designed by the foundries, which tend to emphasize the broadest possible manufacturability drivers: yield and faster time-to-volume, as opposed to more performance-centric metrics. Their focus is on front-end process optimisation, area and yield.
The basic rule of fabless-foundry engagement has been: “use the storage array – you won’t get a better packing density.” Consequently, the application use model became separated from the technology; ‘faster or cheaper’ became the industry’s mantra instead of ‘faster and better’. This resulted in SRAM design teams focusing on how to build more sensitive read amplifiers to detect the signals, and better write amplifiers to drive the signal onto the bit cell. Not much time was spent looking at the fundamental architecture and asking: “Is this the best way?”
sureCore decided to take a more holistic view and stood back from the whole problem. We started with a clean sheet of paper and asked, “Where does the power go when you start storing data on SRAM?”
We discovered that a lot of the power is consumed hauling parasitic capacitance around. Our design strategy was therefore very simple; we developed a system architecture to optimize power while still retaining the area advantages of the standard foundry bit cell.
Simply stated, we restructured the internal block architecture of the SRAM by splitting the read amplifier function into local and global read amplifiers, thus dividing the capacitive load from the word-line and driving only the areas being addressed, not the whole array. This resulted in significant dynamic power savings during the read cycle; in a similar fashion, we reduced write-cycle power by a similar amount. Whilst hierarchical solutions are not new, the sureCore “secret sauce” lies at the circuit level, developed by our engineering teams, delivering not only significant power savings but also comparable performance levels.
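As a rough illustration of the hierarchical idea described above: the energy spent per access is roughly proportional to the capacitance actually switched, so driving a short local group plus a small global read line switches far less capacitance than driving one full flat bit-line. All capacitance figures below are hypothetical, chosen only to show the arithmetic, and do not represent sureCore’s actual design parameters or their reported savings.

```python
# Energy per access: E = C * V^2 (charging the switched capacitance)
# All capacitance and voltage values are hypothetical, for illustration only.
def read_energy(cap_switched_f, v=0.9):
    """Energy to charge the switched capacitance during one read access."""
    return cap_switched_f * v**2

flat_bitline_cap = 256 * 0.2e-15   # flat array: all 256 rows load the bit-line (~0.2 fF/cell, assumed)
local_group_cap  = 32 * 0.2e-15    # hierarchical: only a 32-row local group is driven
global_wire_cap  = 10e-15          # plus a short global read line (assumed)

e_flat = read_energy(flat_bitline_cap)
e_hier = read_energy(local_group_cap + global_wire_cap)

print(f"Read-energy saving: {1 - e_hier / e_flat:.0%}")  # → Read-energy saving: 68%
```

The supply voltage cancels out of the ratio; with these assumed numbers the saving comes purely from switching roughly a third of the capacitance per access, which is the mechanism behind the dynamic-power reductions the article describes.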
Our “blank sheet” approach delved deep, right down to the fundamental device physics. Our strategic partner, Gold Standard Simulations, a recognised world leader in modelling devices at the atomic level and an expert in nano-scale process nodes, helped us to understand the behaviour and limitations of processes below 28nm at the device and bit-cell level. Combining this fundamental device understanding with excellent circuit design and system analysis skills, we identified where existing SRAM solutions waste power, and architected our solution to avoid it; we deliver power savings without the added complexity of write- and read-assist.
At the outset, we determined it was important that our IP be process-independent. sureCore IP is based on architecture and circuit techniques rather than a reliance on process features. The result is technology that can reduce power in standard bulk CMOS, but is equally applicable to newer FinFET or FD-SOI processes and across all geometries, even down to 16nm and below. We believe our approach is paying off and, because we insisted on retaining the foundry-optimised bit cell, sureCore’s technology can be retrofitted into existing designs, enabling extended product life cycles.
This is our basic technology story… a start-up deciding to take a fresh look at an old technology and dramatically improving power efficiency, by over 75%, compared with existing solutions. This is a new approach to SRAM power consumption for power-sensitive applications, and it delivers tangible battery life benefits to both the end user and FD-SOI designers. Today’s FD-SOI technology is optimised for low-power applications, bringing extended battery life to the nascent markets of wearables and IoT.
SureCore’s ultra-low power SRAM technology on 28nm FD-SOI saves 70% in read/write power and reduces leakage by 30% compared to 40nm bulk implementations, writes SemiconductorEngineering Editor-In-Chief Ed Sperling (read the article here). Hitting the sweet spot for mobile, IoT and wearables, SureCore recently raised $1.6 million in funding.
Targeting low-power SRAM for FD-SOI and FinFETs, UK physical IP start-up sureCore has received a £250K grant (about 292K Euros or $380.5K) from the Technology Strategy Board SMART. Working with the major foundries developing FD-SOI and FinFET technologies, sureCore will use the grant to develop a demonstrator chip showcasing its patented array control and sensing scheme, which significantly lowers active power consumption. Through a combination of detailed analysis and advanced statistical models, sureCore has designed an SRAM memory consuming less than half the power of existing solutions. sureCore is working closely with Gold Standard Simulations (GSS) Ltd. (GSS Founder/CEO Asen Asenov is a sureCore director).