For many years, the vision of autonomous vehicles sat firmly at the Peak of Inflated Expectations on the Gartner Hype Cycle, a chart that plots industry trends against their projected impact. The Hype Cycle recognizes that many technology trends go through phases in which they are overhyped; as practicality sets in, they slide from the peak into the Trough of Disillusionment, and if the trend survives, it eventually reaches deployment, though typically not at the level expected when the hype was at its height.
For some time now, the auto industry has been going through a major transformation, not only by going electric and connected, but also by promising to deliver self-driving technologies. Advanced Driver Assistance Systems (ADAS) have made great strides in helping to reduce accidents; capabilities such as blind spot detection, automatic emergency braking, and smart headlights have had a profound impact on driver and passenger safety. These technologies, as the name suggests, assist the driver but still rely on the driver to retain primary control of the vehicle. Level 3 systems offer conditional autonomy, where the driver is still expected to take over control when the vehicle requests it; Level 4 vehicles can operate without driver intervention, but only within a restricted operational domain such as a geo-fenced area. Level 5 autonomous vehicles require no driver intervention at all, and this vision is considered the ultimate destination for future vehicles.
While legislative issues will have their own effect on delaying the introduction of Level 5 vehicles, perhaps equally challenging is the sheer level of compute performance required to achieve these higher levels of autonomy, and the limited power budgets within which it must be delivered. In 2018, SoCs targeting the auto industry were introduced claiming to achieve L5 performance; these devices offered "only" 300 TOPS while consuming 400 Watts of power. More recently, products introduced to address L5 have claimed 2000 TOPS, but still with very high power budgets. In the early days of scoping self-driving, the industry specification called for a single PCB, housed in the cabin, that would consume only 75 Watts of power.
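To put those figures in perspective, a rough back-of-the-envelope calculation, using only the numbers quoted above and assuming they are directly comparable, shows how far compute efficiency would need to improve to fit L5-class performance into the original 75-Watt cabin budget:

```python
# Back-of-the-envelope efficiency comparison using the figures quoted above.
# Illustrative only; real systems vary widely by workload.

soc_2018_tops, soc_2018_watts = 300, 400   # early SoC claiming L5 performance
target_tops = 2000                         # more recent L5 performance claims
cabin_budget_watts = 75                    # original single-PCB cabin target

efficiency_2018 = soc_2018_tops / soc_2018_watts        # ~0.75 TOPS/W
required_efficiency = target_tops / cabin_budget_watts  # ~26.7 TOPS/W

print(f"2018-era efficiency:   {efficiency_2018:.2f} TOPS/W")
print(f"Needed for 75 W cabin: {required_efficiency:.1f} TOPS/W "
      f"({required_efficiency / efficiency_2018:.0f}x improvement)")
```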
So where the automotive industry has historically relied on mature products built on mature process technology nodes, today it is driving adoption of the latest semiconductor nodes and the latest packaging and assembly technologies. The often overused phrase "data center on wheels" is truly an understatement when one considers the level of compute performance now being brought into the vehicle, along with fail-over redundancy architectures that exactly mirror those found in the data center.
Chiplets are now becoming the lingua franca of the automotive industry, as building the ideal compute platform to achieve even L3+ capabilities can tax the most powerful monolithic SoCs. The AI processing performance required for perception continues to grow as new corner cases are identified and as new neural networks with higher performance, lower latency, and higher accuracy continue to evolve and be deployed. Furthermore, higher-resolution vision sensors with larger image sizes are being deployed, which further drives up the required compute performance of the AI engines. Manufacturing a monolithic device with sufficient compute performance while consuming only a modest amount of power is becoming increasingly difficult, if not impossible.
Low-power AI engines increasingly rely on an at-memory compute architecture to achieve both high TOPS and low power consumption. This typically results in a relatively large die for the AI engine, which, along with adjacent tasks such as sensor fusion, is better served by integrating disparate technology types via chiplets. Untether AI employs an at-memory compute architecture and achieves industry-leading energy-efficient performance with its edge AI inference engines.
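As a simplified illustration of why keeping compute next to memory matters, the sketch below compares the energy of fetching operands from off-chip memory with the energy of the arithmetic itself. The per-event energies are placeholder, order-of-magnitude assumptions for illustration only, not Untether AI product figures:

```python
# Illustrative-only energy model: data movement vs. arithmetic for one MAC.
# The per-event energies below are placeholder order-of-magnitude assumptions,
# not measurements of any specific device.

MAC_ENERGY_PJ = 1.0          # one multiply-accumulate operation (assumed)
OFFCHIP_FETCH_PJ = 100.0     # fetching an operand from off-chip memory (assumed)
LOCAL_SRAM_FETCH_PJ = 1.0    # fetching an operand from adjacent SRAM (assumed)

def energy_per_mac(fetch_pj: float, operands: int = 2) -> float:
    """Total energy (pJ) for one MAC, including its operand fetches."""
    return MAC_ENERGY_PJ + operands * fetch_pj

far_memory = energy_per_mac(OFFCHIP_FETCH_PJ)    # compute far from memory
at_memory = energy_per_mac(LOCAL_SRAM_FETCH_PJ)  # compute next to memory

print(f"Far-from-memory: {far_memory:.0f} pJ/MAC")
print(f"At-memory:       {at_memory:.0f} pJ/MAC  "
      f"(~{far_memory / at_memory:.0f}x less energy per operation)")
```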
With the introduction of the Universal Chiplet Interconnect Express (UCIe) standard, the groundwork has been laid for an industry ecosystem in which best-in-class technologies can be seamlessly integrated, much as Lego bricks can be interconnected because of their common form factor. While advanced packaging technologies and supporting manufacturing and test infrastructure are still required, the UCIe standard is an important step toward building the momentum needed for this ecosystem, in which the sale of discrete chiplets will soon become a reality, addressing the need to integrate best-in-class technologies.
The UCIe 1.0 specification was released in 2022. It defines a physical layer, a protocol stack, and a software model. The physical layer supports data rates of up to 32 GT/s per lane, with module widths of 16 lanes (standard package) to 64 lanes (advanced package), and transfers data in flow control units (FLITs), similar to PCIe 6.0; the protocol layer is based on PCIe and Compute Express Link (CXL). The short die-to-die signal paths deliver high I/O performance at low power, on the order of 0.5 pJ per bit.
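Taking the headline figures above at face value (32 GT/s per lane, a 64-lane advanced-package module, and 0.5 pJ per bit), a quick calculation suggests what a single UCIe module could deliver in theory; these are peak numbers derived from the quoted specification, not measured results:

```python
# Theoretical peak bandwidth and link power for one UCIe module,
# using only the headline figures quoted above (not measured results).

lane_rate_gbps = 32      # up to 32 GT/s per lane
lanes = 64               # advanced-package module width
energy_pj_per_bit = 0.5  # quoted link energy

raw_bandwidth_gbps = lane_rate_gbps * lanes   # 2048 Gb/s
raw_bandwidth_GB_s = raw_bandwidth_gbps / 8   # 256 GB/s
link_power_w = raw_bandwidth_gbps * 1e9 * energy_pj_per_bit * 1e-12

print(f"Peak raw bandwidth: {raw_bandwidth_gbps} Gb/s (~{raw_bandwidth_GB_s:.0f} GB/s)")
print(f"Link power at full rate: ~{link_power_w:.1f} W")
```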
Untether AI is pleased to be an active member of the UCIe consortium, helping to usher in the era of self-driving vehicles for the masses.