Why Truly Smart Future Tech Needs Radically New AI Chips


Much distance remains between today’s AI hardware and the technologies needed to enable smart cities and autonomous vehicles (AVs) to function smoothly.

To bring these innovations to reality, AI chipmakers must concentrate on honing the building-block technologies needed to enable the truly smart AV and IoT systems of the future. That means building processors efficient and compact enough to compute and interpret reams of data in real time. To achieve maximum benefit, these processors must be embedded within the edge devices themselves.

Improving performance and speed while keeping compute cost and power demands low is the central challenge. Domain-specific, embedded chips designed specifically for deep learning applications can help meet that challenge.

Embedded hardware can already perform tasks such as semantic segmentation and complex object detection, albeit with limited performance. Improving on these capabilities will ultimately pave the way for full autonomy across the board.
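
To make this kind of workload concrete, here is a minimal sketch of on-device inference with a quantized object-detection model using the TensorFlow Lite interpreter. The model file and input handling are placeholders, and a dedicated AI accelerator would typically sit behind a similar runtime through a vendor-supplied delegate.

```python
# Minimal on-device inference sketch. "detector_int8.tflite" and the dummy
# input frame are placeholders; a real pipeline would feed resized camera frames.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detector_int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One camera frame, already resized to the model's expected input shape.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

# Typical post-processed detector outputs include boxes, classes and scores.
boxes = interpreter.get_tensor(output_details[0]["index"])
print("output tensor shape:", boxes.shape)
```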

Current Processors Face Structural Limitations

Even today’s highest-end, customized GPUs are not as optimized for AI and machine learning (ML) applications at the edge as they could be, and these structural limitations are holding future smart technologies back.

For example, current AV systems may have sophisticated algorithms and ultra-sharp sensors, but the way they handle and process the data they collect is often not up to the task. Since AVs can’t afford to send data to and from the cloud (which causes potentially dangerous lags in reaction time – think about the split-second reaction time needed to avoid a child darting into the street, for example), they’re forced to haul around clunky, power-hungry “supercomputers” in their trunks.
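
To put the latency concern in rough numbers, the sketch below compares how far a vehicle travels while waiting on a hypothetical 200 ms cloud round trip versus a 20 ms on-device inference pass. Both latency figures are illustrative assumptions, not measurements.

```python
# Rough, illustrative arithmetic: distance traveled while waiting on a result.
# The latency values are assumptions chosen for comparison, not benchmarks.
SPEED_KMH = 50.0                     # assumed urban driving speed
speed_ms = SPEED_KMH * 1000 / 3600   # ~13.9 m/s

cloud_round_trip_s = 0.200    # assumed network + remote compute latency
on_device_latency_s = 0.020   # assumed embedded-inference latency

for label, latency in [("cloud", cloud_round_trip_s),
                       ("on-device", on_device_latency_s)]:
    print(f"{label:>10}: {speed_ms * latency:.2f} m traveled before a decision")
```

At 50 km/h the assumed cloud round trip costs roughly 2.8 meters of travel before the vehicle can react, versus well under half a meter for on-device processing.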

This is part of the reason why vehicles today still rely on advanced driver-assistance systems (ADAS) that assist the driver but cannot take over for the driver.

However, some of the hardware already on ADAS-equipped cars is capable of helping support full autonomy. In fact, many modern cars underutilize the resolution of their onboard cameras and other sensors because their data systems either cannot process the full-resolution feed at all or cannot do so fast enough.
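
As a rough illustration of why full-resolution feeds get throttled, the sketch below estimates the raw pixel rate of a single 4K camera against a downscaled 720p stream. The resolutions, frame rate, and bytes-per-pixel figure are assumptions chosen for the example.

```python
# Back-of-the-envelope pixel throughput for one camera (all figures assumed).
def raw_rate_mb_per_s(width, height, fps, bytes_per_pixel=3):
    """Uncompressed data rate in megabytes per second."""
    return width * height * bytes_per_pixel * fps / 1e6

full_res = raw_rate_mb_per_s(3840, 2160, 30)    # 4K @ 30 fps
downscaled = raw_rate_mb_per_s(1280, 720, 30)   # 720p @ 30 fps

print(f"4K feed:   ~{full_res:.0f} MB/s per camera")
print(f"720p feed: ~{downscaled:.0f} MB/s per camera")
# A multi-camera AV multiplies these figures several times over.
```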

In many cases, much of the sensor technology is already there. But until powerful, lightweight, energy-efficient processors can be built directly into vehicles, the balance between low latency and high fidelity that true autonomy demands will remain out of reach.

Reimagined Architecture with AI in Mind

There is a path forward for true on-device edge computing in the form of unique AI processors with improved architectures.

By discarding traditional computing architecture and rebuilding specialized AI chips from the ground up, future processors can break through previous limitations. New domain-specific AI chips promise substantial efficiency gains over traditional architectures.

Unlike standard CPUs and GPUs, such chips dispense with external memory, instead keeping core compute and memory resources directly on the chip. This proximity cuts down on the energy and time needed to move data on and off the chip.
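
The energy argument can be made concrete with order-of-magnitude per-access estimates commonly cited in the computer-architecture literature, where an off-chip DRAM read costs roughly a hundred times more energy than an on-chip SRAM read. The exact values and the access count below are illustrative assumptions.

```python
# Illustrative per-access energy estimates (order-of-magnitude figures; treat
# the exact values as assumptions for this comparison).
ENERGY_PJ = {
    "on-chip SRAM read (32-bit)": 5.0,
    "off-chip DRAM read (32-bit)": 640.0,
}

accesses_per_inference = 50e6  # assumed number of weight/activation reads

for kind, pj in ENERGY_PJ.items():
    millijoules = pj * 1e-12 * accesses_per_inference * 1e3
    print(f"{kind}: ~{millijoules:.2f} mJ per inference on memory traffic alone")
```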

Why are dedicated AI processors so efficient? Since neural networks consist of many layers and nodes, the ability to shuttle data and compute resources quickly and efficiently between those layers translates into major performance improvements. And this tight integration of hardware allows software to dynamically re-allocate resources to best fit the task at hand.
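
One way to picture that re-allocation is to estimate each layer's compute and memory footprint and budget on-chip resources in proportion. The toy network and the proportional-allocation rule below are hypothetical, purely to illustrate the idea.

```python
# Toy illustration: size on-chip resources per layer from its compute needs.
# The layer shapes and the proportional-allocation rule are hypothetical.
layers = [
    # (name, MACs in millions, weights in thousands)
    ("conv1", 110, 2),
    ("conv2", 420, 37),
    ("conv3", 360, 150),
    ("fc",    4,   400),
]

total_macs = sum(macs for _, macs, _ in layers)
compute_units = 256  # assumed pool of on-chip compute elements

for name, macs, kweights in layers:
    share = round(compute_units * macs / total_macs)
    print(f"{name:>5}: ~{share:3d} compute units, {kweights} K weights on-chip")
```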

In practical terms, this enables dedicated chips to deliver strong performance with minimal power usage, all while carrying out vital AI computations consistently and in real time.

Building Towards a Truly Smart Future

Especially for today’s ADAS and nascent AV systems, on-device AI chips could greatly enhance functionality and reliability, ultimately helping to pave the road to full autonomy. Similar advantages would also apply to smart city/home and other IoT devices.

Whether it’s traffic monitoring for smart cities or collision avoidance for AVs, for tech to be truly smart, the real-time, high-fidelity processing of sensor inputs across countless devices must be swift, seamless, accurate, energy-efficient and reliable.

While there is still a long way to go, the breakthroughs being made in hyper-efficient embedded AI processors hold substantial promise. With domain-specific processors built atop deep learning architecture, the power and costs needed to analyze floods of data will be reduced – helping transform future technologies like true autonomy from dream to concrete reality, or reality on concrete.

About the Author

Orr Danon is CEO and Co-Founder of Hailo, bringing with him a decade of software and engineering experience from the Israel Defense Forces' elite intelligence unit. In his role, he coordinated many of the unit's largest and most complex interdisciplinary projects, ultimately earning the Israel Defense Award granted by Israel's president, and the Creative Thinking Award, bestowed by the head of Israel's military intelligence. Orr holds an M.Sc. in Electrical and Electronics Engineering from Tel Aviv University and a B.Sc. in Physics from the Hebrew University of Jerusalem.

