Tech Insight Archives | Weebit

ReRAM-Powered Edge AI: A Game-Changer for Energy Efficiency, Cost, and Security
https://www.weebit-nano.com/reram-powered-edge-aia-game-changer-for-energy-efficiency-cost-and-security/ | 27 March 2025


 

In AI inference, trained models apply their knowledge to make predictions and decisions. To achieve lower latency and better security, the world is transitioning steadily towards performing AI inference at the edge – without sending data back and forth to the cloud – for a wide range of applications.

Because edge devices are often small, battery-powered, and resource-constrained, it’s important that the computing resources enabling this process and the associated memories are ultra-low-power and low-cost. This is a challenge for AI workloads, which are known to be power-hungry.

The industry has been making progress towards lower power computation largely by moving to more advanced process nodes. This enables more performance with greater energy efficiency in smaller silicon area. However, non-volatile memories (NVMs) haven’t been able to scale to advanced nodes along with logic. Today we see advanced chips in process nodes of 3nm. At the same time, embedded flash memory is unable to scale below 28nm. This means that NVM and AI engines are often manufactured at very different process nodes and can’t be integrated on the same silicon die.

This is one of many reasons why the industry is exploring new memory technologies like Weebit ReRAM (RRAM).

 

The need for a single-die solution

Neural Network coefficients (often referred to as NN weights), which are used for computations by the inference engine, need to be stored in an NVM so that when the system is powered on these coefficients are available for compute workloads. Because it’s not possible to integrate flash and an AI engine on one die below 28nm, it is standard practice to implement a two-die solution, with one die at a small process node used for computing and the other die at a larger process node used for storing the coefficients. These two dies are then integrated either in a single package or in two separate packages. Either way, such a two-die solution is more expensive and has a bigger footprint. Also, copying the coefficients from the external flash to on-chip SRAM in the AI chip is power-hungry and adds latency. In addition, moving the coefficients from one chip to the other creates a security risk, as it is easy to eavesdrop on this communication.
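To make the cost of the two-die approach concrete, here is a minimal C sketch of the boot path it implies: the coefficients are copied from an external SPI flash into on-chip SRAM before inference can start. The function names, addresses and buffer size are illustrative assumptions, not a real vendor API.

```c
/* Illustrative two-die boot path: NN weights are copied from an external
 * SPI flash die into on-chip SRAM before inference can start.
 * spi_flash_read() is a stand-in for a board-support driver, not a real API. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NN_WEIGHT_BYTES (256u * 1024u)            /* example model size */

static uint8_t nn_weights_sram[NN_WEIGHT_BYTES];  /* large SRAM buffer kept powered */

static void spi_flash_read(uint32_t addr, uint8_t *dst, size_t len)
{
    (void)addr;
    memset(dst, 0, len);  /* a real driver would clock the data in over SPI */
}

void boot_load_weights(void)
{
    /* Every cold boot pays this cost: the SPI transfer burns energy and adds
     * latency, and the weights cross a board-level bus where they can be
     * observed, which is the security exposure described above. */
    spi_flash_read(0x000000u, nn_weights_sram, sizeof nn_weights_sram);
}
```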

The ideal solution for edge AI computing from power, latency, cost and security perspectives is a single die that hosts both memory and compute.

 

A scalable, single-chip solution with ReRAM

Embedded ReRAM is the logical alternative to flash for edge AI. ReRAM is significantly more energy efficient than flash, and it provides better endurance and faster program time. Since it is scalable to advanced processes, ReRAM enables a true one-chip solution, with NVM and computing integrated on the same die.

ReRAM-enabled SoCs are less expensive to manufacture because they only require two additional masks in the manufacturing flow, while flash requires 10 or even more such masks. Embedding ReRAM into an AI SoC would eliminate the need for off-chip flash devices and replace most of the large on-chip SRAM used to temporarily store the NN weights. Since the technology is non-volatile, the system can boot much faster as there is no need to wait for loading the AI model and firmware from external NVM, and the security risk is removed. ReRAM is also much denser than SRAM, so more memory can be integrated on-chip to support larger neural networks for the same die size and cost, while enabling more advanced AI algorithms.
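For contrast, below is a minimal sketch of the single-die case, where the inference engine simply reads the coefficients in place through the memory map. The base address and the accumulate loop are assumptions used for illustration only.

```c
/* Illustrative single-die access path: with ReRAM on the same die, weights
 * are read in place, so there is no boot-time copy, no off-chip bus to
 * eavesdrop, and no large SRAM buffer to keep powered.
 * RERAM_WEIGHTS_BASE is a hypothetical memory-mapped region. */
#include <stddef.h>
#include <stdint.h>

#define RERAM_WEIGHTS_BASE 0x20000000u

static inline const int8_t *nn_weights(void)
{
    return (const int8_t *)(uintptr_t)RERAM_WEIGHTS_BASE;
}

int32_t dot_product(const int8_t *activations, size_t n)
{
    const int8_t *w = nn_weights();
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += (int32_t)w[i] * (int32_t)activations[i];  /* weights fetched directly from NVM */
    return acc;
}
```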

New Demo: ReRAM for ultra-low-power edge AI

A new demonstration showcases the advantages of Weebit ReRAM-powered edge AI computing. Developed through a collaboration between Weebit and Embedded AI Systems Pte. Ltd. (EMASS), a subsidiary of Nanoveu, the gesture recognition demo shows Weebit ReRAM working with EMASS’s energy-efficient AI SoC, the EMASS ECS-DOT. The demo emphasizes ReRAM’s ultra-low power consumption and its ability to enable instant wake-up AI operations. In the real world, such a system could be used to detect driver activity for advanced driver safety systems, or in safety/surveillance, robotics, and many other applications.

ECS-DOT is an edge AI chip manufactured in a 22nm process that delivers significant energy efficiency and cost advantages, with best-in-class AI capacity. In the demo, ECS-DOT loads the neural network weights from the Weebit ReRAM where they are stored. As noted earlier, this is a powerful feature of ReRAM – it can replace the large on-chip SRAM used to store the NN weights, as well as the CPU firmware.

Weebit ReRAM isn’t yet integrated into the ECS-DOT SoC, so the proof-of-concept demo shows a two-chip solution with the 22nm Weebit demo chip communicating with the EMASS chip over an SPI bus. In an end solution, the ReRAM would be integrated on-chip, eliminating latency, cost and security risks, and demonstrating even lower power consumption. Such integration can enhance system performance and also ensure scalability and sustainability, paving the way for smarter, more autonomous edge devices.

Above: Ultra-low-power ReRAM-based gesture recognition system with Weebit ReRAM and EMASS AI SoC

 

EMASS recently made a strategic pivot away from MRAM technology and is embracing ReRAM. The company says that ReRAM is better able to support next-generation systems in IoT, automotive, and consumer electronics.

 

Looking Ahead

Research is now underway to bring memory and compute resources even closer together through analog in-memory compute. In this paradigm, compute resources and memory reside in the same location, so there is no need to ever move the coefficients. Such a solution using ReRAM will be orders of magnitude more power-efficient than today’s neural network simulations on traditional processors.

You can see our new demo video here:

 

Enhancing IoT System Performance with Smart Memory Partitioning
https://www.weebit-nano.com/enhancing-iot-system-performance-with-smart-memory-partitioning/ | 12 December 2024


Low-power design is critical, especially for chips inside battery-operated IoT devices that must support applications for several years on one battery in a very small area. Embedded non-volatile memory (NVM) for these devices must offer ultra-low power consumption, high endurance and high reliability to support continuous monitoring, logging and communication of small amounts of data over the product’s lifetime.

Ultra-low-power embedded NVM like Weebit ReRAM (RRAM) can enable longer use times between recharges or battery replacements and help improve system energy efficiency. The low voltage levels used for memory transactions, coupled with ReRAM’s fast memory access time, greatly reduce power consumption. And with ReRAM’s programming, standby, sleep, and very deep power-down modes, as well as rapid wake-up from deep power-down, designers can bring the leakage power of internal and external NVM close to zero. You can read more about this in my previous article, ‘How Low Can You Go? An Inside Look at Weebit ReRAM Power Consumption’.

By reducing the memory subsystem’s power consumption, designers can also allocate more power to other critical components to enhance overall system performance. They can take this advantage even further by implementing smart, power-aware system memory partitioning strategies, dividing data intelligently across volatile and non-volatile memory resources to reduce the size of system SRAM.
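As a simple illustration of such partitioning, the sketch below uses GCC section attributes to place long-lived data and XiP code in on-chip ReRAM regions while keeping frequently rewritten scratch data in SRAM. The section names are assumptions that would need matching definitions in the project’s linker script.

```c
/* Illustrative partitioning across SRAM and on-chip ReRAM using GCC
 * section attributes. ".reram_log" and ".reram_text" are hypothetical
 * sections that the linker script would map onto the ReRAM region. */
#include <stdint.h>

/* Long-lived log records: non-volatile, near-zero retention power. */
__attribute__((section(".reram_log")))
static uint16_t sample_log[4096];

/* Hot scratch buffer, rewritten on every wake-up: stays in SRAM. */
static uint16_t work_buf[64];

/* Code executed in place (XiP) from ReRAM instead of code SRAM. */
__attribute__((section(".reram_text")))
void process_samples(void)
{
    for (int i = 0; i < 64; i++)
        work_buf[i] = (uint16_t)(work_buf[i] >> 1);  /* placeholder processing */
    sample_log[0] = work_buf[0];                     /* persist one result */
}
```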

 

Smart memory partitioning in practice

In a wearable sensor designed to monitor a specific health parameter, it is common to store code in external flash and load it into local code SRAM, from which the MCU fetches it. Each time the system wakes up to log and process data, the MCU consumes power executing the Write cycles, and additional time and energy are spent loading the code from the external flash into the code SRAM and fetching it from there. Power is also required to retain the code SRAM contents or to keep the always-on logic powered for these operations.

 

Above: Typical MCU architecture with external flash

 

An alternative is an eXecute-in-Place (XiP) architecture, in which on-chip ReRAM stores the code instead of the code SRAM and the MCU fetches instructions directly from the ReRAM. This reduces system wake-up time and power, since there is no need to access the external flash. It is also possible to turn off the code ReRAM to further reduce power. Our calculations show that this can result in 30% power savings over the traditional architecture described above.

In addition, instead of storing log data to external flash, we can store it into on-chip ReRAM, eliminating the external flash altogether. If we replace the on-chip code SRAM as well as part of the data SRAM with ReRAM, we can achieve a total of 60% power reduction, and a device that can last up to four years!

Finally, instead of logging the data into SRAM and storing processed data onto ReRAM, we can log processed data directly into ReRAM, thereby eliminating most of the on-chip SRAM. In this way we can reach a total of more than five years of lifetime for this application. With NVM like ReRAM, there is close to zero power consumption needed to retain the data during inactive states.

 

Above: Example medical device logging system with on-chip ReRAM for code and data
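The lifetime figures above can be sanity-checked with a back-of-the-envelope energy budget. The sketch below shows the shape of such a calculation; every constant in it is a placeholder chosen for illustration, not a measured Weebit or flash figure.

```c
/* Rough energy-budget model for a duty-cycled logger.  All constants are
 * illustrative placeholders, not measured values. */
#include <stdio.h>

int main(void)
{
    const double battery_j        = 3.0 * 0.220 * 3600.0; /* ~3 V, 220 mAh coin cell */
    const double wakeups_per_day  = 8640.0;                /* one wake-up every 10 s  */
    const double sleep_floor_jday = 0.52;                  /* leakage + always-on logic */

    /* Energy per wake-up (joules) for two hypothetical architectures. */
    const double ext_flash_wakeup = 150e-6;  /* load code from external flash, then log */
    const double xip_reram_wakeup =  60e-6;  /* XiP from on-chip ReRAM, log straight to NVM */

    double daily_a = sleep_floor_jday + ext_flash_wakeup * wakeups_per_day;
    double daily_b = sleep_floor_jday + xip_reram_wakeup * wakeups_per_day;

    printf("external flash + SRAM: %.1f years\n", battery_j / daily_a / 365.0);
    printf("on-chip ReRAM (XiP):   %.1f years\n", battery_j / daily_b / 365.0);
    return 0;
}
```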

 

Enabling new use cases

Reconsidering the memory technology and architecture in a typical IoT sensor/medical logging device can enable advantages in terms of power consumption and device lifetime. This becomes even more important with a device that doesn’t have a battery. In a device using energy harvesting, a traditional architecture can be prohibitive. Using a combination of logging data in SRAM and uploading it to flash can actually consume more power than what’s available!

Advanced hearing aids, wireless earphones, pacemakers and other medical and wearable appliances that need over-the-air (OTA) firmware updates can also benefit. Our calculations show that with a standard off-the-shelf ultra-low-power flash device, performing the chip erase and then programming the new code for an OTA update requires more time and energy than the system has available.

Embedded NVM like ReRAM, coupled with smart memory partitioning, can improve the energy efficiency of battery-operated and energy-harvesting ICs. In a new article in Electronics Weekly, I go into greater detail on the different architectures and power savings that can be achieved.

Read the full article here.

Weebit ReRAM Foundry Developments Presented at FMS
https://www.weebit-nano.com/weebit-reram-foundry-developments-presented-at-fms/ | 12 August 2024

At the recent FMS: the Future of Memory and Storage event in Santa Clara, I took part in the session on Emerging Memories, during which I shared some of the recent progress Weebit has made with various foundries towards making our technology available via their IP portfolios.

 

Above: The author (second from left) in the Emerging Memories session at FMS 2024

 

FMS is the most important non-volatile memory (NVM) conference, usually attracting ~6,000 participants from across the memory industry. During the show, major memory companies release their newest products and technological achievements. This year, FMS changed its branding from the ‘Flash Memory Summit’ to ’the Future of Memory and Storage,’ recognizing the fact that the NVM market is no longer flash-only. It covers emerging technologies such as ReRAM (RRAM) and MRAM, which are now taking more significant market share, especially as embedded Flash hits a scaling wall.

One application area where ReRAM is gaining traction is automotive, where it brings high-temperature reliability, immunity to electromagnetic interference, high endurance, fast switching speed, longevity and security. Many automotive applications like autonomous vehicles and Advanced Driver Assistance Systems (ADAS) are also part of the AI revolution which is rapidly changing the world. The focus from an NVM perspective is on increased capacity, energy efficiency, and performance. But that must be married with cost efficiency – an NVM for AI, supporting large densities, must be cheap enough to manufacture and mass produce. ReRAM is an ideal fit.

 

Weebit ReRAM: The Latest Foundry Developments

One of Weebit’s important recent developments is the tape-out of a demonstration chip integrating Weebit’s embedded ReRAM module in DB HiTek’s 130nm BCD process. We announced this milestone recently with DB HiTek – a tier-one foundry in South Korea. DB HiTek’s 130nm BCD process is ideal for analog, mixed-signal and high-voltage designs. We’re seeing interest from their customers in a variety of applications including smart power management integrated circuits (PMICs), where integrating the PMIC with a controller on one die can lead to significant advantages in terms of performance, security, power and cost.

For the first time, we’re publicly showing performance data for our ReRAM technology implemented on GlobalFoundries 22FDX® wafers, including endurance and retention data – the first such ReRAM results.

Pre-qualification results show that Weebit’s ReRAM stack is stable at 105 degrees Celsius up to 10K endurance cycles. We’re also demonstrating very good data retention, both pre- and post-cycling, over long durations at high temperature (150°C). These are impressive results for our first set of wafers, and we are continuing to collect data as our characterization and qualification activities continue.

 

Above: cycling and data retention results of Weebit ReRAM on GlobalFoundries 22FDX® wafers

 

Another recent development is our partnership with Efabless Corporation, focused on giving Efabless chipIgnite customers access to Weebit ReRAM IP for designs manufactured using SkyWater’s 130nm CMOS (S130) process. chipIgnite lets customers, including academics, researchers, startups and groups within large OEMs, quickly and cost-effectively develop and test new designs. If they like the design, they can license our product for commercial production with SkyWater.

 

The Weebit ReRAM module has already been fully qualified in SkyWater Technology’s 130nm CMOS (S130) process at temperatures of up to 125 degrees Celsius – the temperature specified for Grade-1 automotive applications. We have also shown ReRAM results on SkyWater S130 under extended automotive conditions, with extremely high endurance of 100K cycles at a very high temperature of 150 degrees Celsius.

 

You can see the slides from my recent FMS presentation here.

 

 

Innovative Memory Architectures for AI
https://www.weebit-nano.com/innovative-memory-architectures-for-ai/ | 9 July 2024


 

One of the biggest trends in the industry today is the shift towards AI computing at the edge. For many years, the expectation was that huge cloud datacenters would perform all the AI tasks, while edge devices would only collect raw data, send it to the cloud, and potentially receive the resulting directives once the analysis was done.

 

More recently, however, it has become increasingly evident that this model can’t work. While training is a strong fit for the cloud, performing inference in the cloud is less optimal.

With the promise of lower latency, lower power and better security, we are seeing AI inference in a growing number of edge applications, from IoT and smart home devices all the way up to critical applications like automotive, medical, and aerospace and defense.

 

Since edge devices are often small, battery-powered, and resource-constrained, edge AI computing resources must be low-power, high-performance, and low-cost. This is a challenge given power-hungry AI workloads, which rely on storing large amounts of data in memory and accessing it quickly. Some models have millions of parameters (e.g., weights and biases), which must be continually read from memory for processing. This creates a fundamental challenge in terms of power consumption and latency in computing hardware.

 

Data movement is a key contributor to power consumption. Within chips, significant power is consumed accessing the memory arrays in which the data is stored and transferring the data over the on-chip interconnect. Memory access time and interconnect speed also contribute to latency, which limits the speed of the AI computation. Speed and power both get significantly worse when data needs to be moved between two separate chips.

To keep edge computing resources low-power and low-latency, hardware must be designed so that memory is as close as possible to the computing resources.

 

The continuous move to smaller process geometries has helped to keep power consumption to a minimum and has also reduced latency for AI tasks. But while computing resources continually scale to more advanced nodes, Flash memory hasn’t been able to keep pace. Because of this, it isn’t possible to integrate Flash and an AI inference engine in a single SoC at 28nm and below for edge AI.

 

Today it’s standard practice to implement a two-die solution, where one die at an advanced process node is used for computing, and another die at a more mature process node is used for memory. The two dies are then integrated in a single package or in two separate packages. A two-die solution is detrimental to AI performance because memory resides far away from compute, creating high levels of power consumption, latency and total system costs.

 

The ideal solution is a single die that hosts memory and compute, and embedded ReRAM (or RRAM) is the logical NVM to use. Embedding ReRAM into an AI SoC would replace off-chip flash devices, and it can also be used to replace the large on-chip SRAM to store the AI weights and CPU firmware. Because ReRAM is non-volatile, there is no need to wait at boot time to load the AI model from external NVM.

 

Such ReRAM-based chips use less power and have lower latency than two-chip solutions. And because an on-chip memory interface can be made much wider, memory bandwidth is no longer limited by the number of pins on an external memory device. The result is faster access time, faster inference, and the potential for true real-time AI computing at the edge.

 

ReRAM is also much denser than SRAM, which makes it less expensive per bit, so more memory can be integrated on-chip to support larger neural networks for the same die size and cost. While on-chip SRAM will still be used for data storage, the array will be smaller and the total solution more cost-effective.

 

Finally, ReRAM-enabled chipsets are cheaper to manufacture, since they only require the fabrication of one die. This makes edge computing more affordable and consequently more accessible for a large array of applications.

Above: how the various memory technologies compare

 

You can see the slides I presented on this topic at the Design Automation Conference (DAC) 2024 here.

Functional, Fast, and Ultra-Low Power: A Live Look at Weebit’s Second IP Module
https://www.weebit-nano.com/functional-fast-and-ultra-low-powera-live-look-at-weebits-second-ip-module/ | 5 September 2023


You might have seen the news that Weebit’s second ReRAM IP module is now available for the S130 CMOS process from SkyWater Technology. The IP is fully qualified per JEDEC standards and is ready for mass production. We recently filmed a quick demonstration of the IP module to show that it’s functional, fast, and much lower power than flash.

You can watch the video by clicking below.

In the first part of the demo, we took a photo, added a Weebit logo, and saved it to the module – storing the image as ones and zeroes in the memory. As you watch the demo, take note of the energy consumed to complete the Write operation, along with the comparison to typical NOR flash memory. Weebit ReRAM uses 0.23 millijoules to complete the Write operation, while typical NOR flash takes 33.0 millijoules for the same operation – a difference of more than 100X!

There is a similar difference on Read energy, with Weebit ReRAM consuming 0.72 microjoules and flash consuming 28 microjoules – a ~40X difference!

After the photo has been stored once, we save it again. This time, the only thing that has changed in the image is the clock in the upper right corner. Because Weebit ReRAM has direct program capability and byte addressability, it only needs to actively store the new bits of the image – in this case the numbers on the clock. Parts of the image where data did not change require very little time and energy to access, and parts where new data was written require a bit more. You can see that it took 55.1 milliseconds to store the original image, and when stored with the small change, it only took 3.7 milliseconds.

Direct program capability and byte addressability are key differentiators compared to flash, which must access entire sectors of data every time it erases or writes. Looking at Write energy consumption, you can again see the advantage compared to flash: Weebit only consumed 0.02 millijoules for the operation, while the NOR flash would consume 2.2 millijoules for the same operation – a difference of 100X!
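A minimal sketch of the kind of differential update this byte addressability enables is shown below: the new image is compared with the stored copy and only the bytes that changed (here, the clock digits) are programmed. The write hook is a simulated stand-in for illustration, not Weebit’s API.

```c
/* Differential update exploiting byte addressability: only bytes that
 * actually differ are reprogrammed, so write time and energy scale with
 * the amount of changed data.  The "ReRAM" is simulated with a RAM array
 * so the sketch is self-contained. */
#include <stddef.h>
#include <stdint.h>

static uint8_t reram_sim[64 * 1024];   /* stands in for the memory window */

static void reram_write_byte(uint32_t addr, uint8_t value)
{
    reram_sim[addr] = value;           /* real hardware would program one cell group */
}

size_t update_image(uint32_t base, const uint8_t *fresh, size_t len)
{
    size_t written = 0;
    for (size_t i = 0; i < len; i++) {
        if (reram_sim[base + i] != fresh[i]) {   /* e.g. only the clock digits changed */
            reram_write_byte(base + (uint32_t)i, fresh[i]);
            written++;
        }
    }
    return written;   /* bytes actually programmed */
}
```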

If you’d like to learn more about the low power consumption of Weebit ReRAM (or RRAM), check out our new article, How Low Can You Go? An Inside Look at Weebit ReRAM Power Consumption. And stay tuned for future videos and articles where we discuss this important topic.

How Low Can You Go? An Inside Look at Weebit ReRAM Power Consumption
https://www.weebit-nano.com/how-low-can-you-goan-inside-look-at-weebit-reram-power-consumption/ | 23 August 2023


One of the key advantages of Weebit ReRAM (RRAM) is the technology’s ultra-low power consumption. Some of this advantage is due to the inherent features of the technology, and some of it is due to smart design. In this article we’ll explain why customers need a low power non-volatile memory (NVM) and what makes Weebit ReRAM lower power than other types of NVM. We’ll also explain a bit about some of the design techniques and levers that customers can use to adjust the power.

 

Why is low power consumption important?
In our rapidly warming climate, it has become critical to minimize carbon emissions, and this includes reducing the power consumption of everything we touch – from our homes to our cars to our personal electronic devices and beyond. This is now a key consideration at the government level in many countries, as well as for institutional investors.

At the practical level, for companies developing electronic products, low power consumption is often a key design consideration, especially for battery-operated IoT devices with Bluetooth® Low Energy or energy harvesting technology, and for medical devices such as wearables and implantables.

Such devices must ensure that data gathered by tiny sensors is regularly and reliably delivered, often from remote or inaccessible locations. For many of these applications, whether in medical, transportation, agriculture, or other fields, reliability can have life-or-death consequences. Long battery life – supporting applications that last up to 10-15 years on one battery – is critical.

Above: Various ultra-low power attributes of Weebit ReRAM lead to longer device battery life.

Even for products that are plugged into power, designing for low power can be an important consideration. Developers want to avoid costly fans and heat sinks, reduce overall electricity costs, and meet consumer product energy efficiency standards including certifications like Energy Star and LEED for products and buildings, as well as EU energy efficiency labeling. Such guidelines consider not only active power consumption, but also ‘leakage’ power consumed when a product is not in use.

 

The Role of NVM in Reducing Power Consumption
While NVM may not contribute as much to system power consumption as other components such as the CPU, connectivity modules or display, reducing its impact is a key goal for an overall power management strategy.

As part of a system, choosing ultra-low-power NVM helps to enable longer battery life, leading to improved energy efficiency and longer use times between recharges or battery replacements. It can also lead to better thermal management and overall greener technology. Importantly, by reducing power consumption, the memory subsystem can allocate more power to other critical components, such as the processor or display, improving overall system performance.

When it comes to NVM, there are various factors that contribute to power consumption, such as the power consumed by Read and Write operations, standby power, access frequency and overall system design. Let’s look at some of these in a bit more depth.

 

Read Power Consumption
In an NVM ‘Read’ operation, data is retrieved from a specific memory location. This includes decoding the address to identify the specific memory location to be accessed, retrieving the data, and outputting the data for processing elsewhere in the system. The ‘Read’ operation is the most common NVM operation, happening many more times than programming the cells, and thus consuming more power.
The power consumed during a Read operation depends on several key factors. One of these is the power supply used. Flash and some other types of memory require a special high-voltage supply. Weebit ReRAM is able to read from a low-voltage power supply – the same one any system needs for basic calculations. With Weebit ReRAM, there is also no need for an always-on charge pump – something that is required with flash memory.

Another contributing factor is the cell reading voltage and current. The cell reading voltage refers to the voltage level applied to a memory cell during the Read operation. Different memory technologies have specific voltage requirements for reading data from their cells, which can vary based on the specific technology, fabrication process, and design considerations. With Weebit ReRAM, a Read operation is performed using only the digital core voltage (VDD), and the Read cell voltage is typically a few hundred millivolts (mV) or lower.
The Read cell voltage requirements for other NVMs are higher: typically in the range of 1 to 3V for flash, and several hundred mV to a few volts for MRAM. Weebit ReRAM also has a dedicated “read-only mode” during which the program voltage can be completely shut off.

These are just a few considerations in terms of Read power consumption. Other important things that impact Read power include:

  • The number of data bits read in parallel, including error correction code (ECC) bits: Weebit’s ReRAM architecture is flexible to support different word widths based on the system architect’s preference.
  • Memory array capacitance: In Weebit ReRAM, the bitline capacitance is reduced due to array segmentation.
  • Sense-amplifier efficiency: Weebit’s engineers have innovated and optimized the sensing circuitry to consume extremely low power per bit.
  • Control logic and self-timing circuitry: Weebit ReRAM has a single-ended Read operation with self-timing to enable the operation to terminate as soon as it is complete.

Fast Read times in Weebit ReRAM also allow Execute-In-Place (XiP) to further save system power. We will cover this in a future article.

 

Write (Program) Power Consumption
In an NVM Write operation, data or instructions are stored or updated to a specific location. This is a complex operation encompassing many events which, of course, consume power. Power consumed during programming is mainly dependent on:

  • The number of data bits to be programmed
  • The power supply used during SET and RESET operations
  • The current through the cell during the SET/RESET operation
  • The efficiency of the write circuitry (LDO, current limitation, termination)
  • The ability to shut off the power as soon as the operation has completed

In terms of the number of data bits to be programmed, one of the key advantages that ReRAM and other emerging NVMs have over flash memory is that they do not require a sector erase. With flash, even if you just want to change a single bit, you must erase the entire sector or segment and then reprogram all of its bits, including those that didn’t need to change. This is obviously a lot of extra work and power. Designers working with flash have found ways to work around the challenge and mitigate the penalties associated with the extra programming and erasing, but even with these workarounds, emerging memories like ReRAM are significantly more power efficient.

With ReRAM, using its direct program/erase capability and byte addressability, the programming is bit-wise: each bit can be independently and selectively SET or RESET. Importantly, with Weebit ReRAM, a programming algorithm does a comparison to existing data to avoid unnecessary writes and then masks out the bits that do not need to be reset.

Above: A programming algorithm compares new data to existing data and only resets the new bits.

The programming algorithm also splits Words into Sub-Words to control peak power consumption, mitigating issues such as IR drop or power supply failure. Weebit ReRAM implements smart programming algorithms that control voltage, current and pulse duration during the Write operation, enabling efficient use of resources.
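The sketch below illustrates the general idea of bit masking combined with sub-word staggering: only bits that differ from the stored data are programmed, and the word is written in smaller bursts so fewer cells switch at once. It is a conceptual illustration built on an assumed helper, not Weebit’s actual algorithm.

```c
/* Conceptual write flow: build a mask of the bits that actually need to
 * flip, then program the word in 8-bit sub-words so peak current stays
 * bounded.  program_masked_bits() is a placeholder for the low-level op. */
#include <stdint.h>

static void program_masked_bits(uint32_t addr, uint8_t data, uint8_t mask)
{
    /* Placeholder: real hardware would drive SET/RESET pulses only on the
     * masked bits and terminate each pulse as soon as the cell flips. */
    (void)addr; (void)data; (void)mask;
}

void write_word(uint32_t addr, uint32_t old_data, uint32_t new_data)
{
    uint32_t diff = old_data ^ new_data;            /* bits that must change */

    for (int sub = 0; sub < 4; sub++) {
        uint8_t mask = (uint8_t)(diff >> (8 * sub));
        if (mask == 0)
            continue;                               /* nothing to program here */
        uint8_t data = (uint8_t)(new_data >> (8 * sub));
        program_masked_bits(addr + (uint32_t)sub, data, mask);  /* one bounded burst */
    }
}
```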

Flash memory often requires high programming voltages, sometimes generated by a charge pump or DC-DC converter. These converters add area to the chip, add cost to the system and waste power. With Weebit ReRAM, programming is ultra-low-power – it can be done from a lithium cell battery – and requires only low voltages (a ~3V supply), with no charge pump needed when a ~3V IO voltage is available.

As with a Weebit ReRAM Read operation, smart algorithms enable the shortest possible Write time, and a termination mechanism shuts off the programming pulse as soon as the cell is flipped.

 

Standby/Sleep/Power-Down Modes
Depending on their specific application and operation, a key design consideration for customers is how often the memory can be in programming mode, standby mode, sleep mode, or very deep power-down mode.
During the inactive states of the system, there is significant cell leakage when using a volatile memory such as SRAM. Similarly, with DRAM there are “hidden” refresh cycles that consume power during these states. With any non-volatile memory, close to zero power is consumed to retain the data during inactive states. Like other NVMs, Weebit ReRAM can be completely powered down to zero leakage while maintaining stored data. The fact that ReRAM does not require an always-on charge pump makes this advantage even more evident.

The wake-up time of a memory from deep power-down mode to active is also a key factor. A memory that can wake up rapidly from power-down mode to read (or programming) mode allows the system architect to put the memory to sleep even during shorter activity breaks. Said another way, waking up quickly means the system can also go to sleep more often. Again, not having a charge-pump makes this advantage even more meaningful as charge-pumps are known for their slow and power-hungry wake-up times.
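One way to picture this trade-off is a simple break-even check: deep power-down is only worthwhile when the idle window is long enough for the leakage saved to exceed the cost of waking back up. The constants below are placeholders rather than Weebit specifications; the point is that a faster, cheaper wake-up shrinks the break-even window.

```c
/* Break-even check for entering deep power-down during an idle window.
 * All constants are illustrative placeholders only. */
#include <stdbool.h>
#include <stdint.h>

#define STANDBY_DRAW_UW   5.0   /* assumed standby power           */
#define WAKEUP_COST_UJ    2.0   /* assumed energy to wake back up  */
#define WAKEUP_LATENCY_MS 1u    /* assumed wake-up time            */

bool should_power_down(uint32_t idle_ms)
{
    if (idle_ms <= WAKEUP_LATENCY_MS)
        return false;                       /* would miss the next activity */

    /* Energy saved by powering down instead of idling in standby. */
    double saved_uj = STANDBY_DRAW_UW * ((double)idle_ms / 1000.0);
    return saved_uj > WAKEUP_COST_UJ;       /* cheaper wake-up => sleep more often */
}
```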

Weebit engineers are focused on continuing to reduce the time needed for our ReRAM to wake up from very deep power-down mode. The time is already very fast to switch from power-down to standby mode, and we are in the process of further reducing this by orders of magnitude.

We are focused on providing customers with flexibility when it comes to their choices. One of the benefits of working with Weebit is that our designers are experts at optimizing these parameters and we are willing and able to work with customers to help them optimize their designs to balance performance and power consumption as well as other parameters. If you’d like to learn more about Weebit’s design expertise, read this recent blog on our smart algorithms.

 

Ultra-low-power NVM
While the factors impacting NVM power consumption can vary widely based on application, Weebit ReRAM is shown to consume significantly less Read, Write and Standby power than embedded flash and other NVMs, contributing to longer battery life for many devices. The low voltage levels used for memory transactions, coupled with its fast memory access time, greatly reduce the overall power consumed by Weebit ReRAM.

Weebit ReRAM is also shown to be a ‘greener’ type of NVM compared to other technologies. An environmental initiative we completed with our partner CEA-Leti earlier this year examines the environmental impact of Weebit ReRAM compared to MRAM. You can read about that study here.

The Power of ReRAM for PMICs
https://www.weebit-nano.com/the-power-of-reram-for-pmic-rram-memory-embedded-nvm/ | 6 October 2022

As Weebit ReRAM continues towards production, we’ve decided now would be a good time to dig into some of the applications where we think our technology will first have an impact. One of these is power management integrated circuits (PMICs).

What is a PMIC?

One of the first things to know about a PMIC is how to say it. You can say each letter separately, or you can call it a “P-mick”. At only two syllables, it’s easier to say, and if you’re reading this article aloud to your colleagues, they’ll be very impressed by your knowledge.

PMICs are integrated circuits (ICs) that regulate and control the power in an electronic system, often incorporating multiple power management functions in one chip. For mobile and other battery-operated devices like wearables, hearables, sensors, and IoT devices, PMICs can help extend battery life and allow smaller batteries, enabling the smallest possible form factor. In high-performance, computationally intensive platforms, PMICs are used to maximize performance per watt while increasing system efficiency.

A PMIC is responsible for controlling the flow and direction of electrical power within a device

PMICs are ubiquitous – they are used in just about every electronic system. At the most basic level, the PMIC controls the flow and direction of electrical power within a system. It sets the voltage levels (e.g., 3.3V, 5V, etc.) for each of the chips in a system, including CPUs, digital-to-analog converters (DACs), analog-to-digital converters (ADCs), and input/output (I/O) devices. Because voltage requirements often vary between these components, many products use multiple voltages internally, and the PMIC makes sure the correct voltage is supplied to each one. The PMIC also acts as a conduit from the external power source – such as a battery or wall outlet – to the various components.

Because of the combination of functionality required in a PMIC, a BCD (Bipolar-CMOS-DMOS) technology is often used. This single process makes it possible to integrate analog components (bipolar), digital components (CMOS) and high-voltage transistors (DMOS) on the same die. It is a complex technology that provides advantages for PMICs and gives the designer a large set of analog and power components.

All this means that a PMIC’s job is more challenging in more complex products. Today, PMICs for complex systems have pre-programmed and adjustable functions to address the many disparate requirements they may face, and they are often field upgradeable as well.

A Growing Market

According to a 2021 report from Yole, the PMIC market will grow to more than US$25.6 billion by 2026, with mobile/consumer representing the largest segment. Multi-channel PMICs – those that supply various voltages to power various loads – are dominant in these markets.

The same report predicts automotive and industrial applications will grow most quickly during that time. In automotive, this is driven by adoption of multi-channel PMICs widely used in advanced driver assistance systems (ADAS), as well as electric vehicles where PMICs manage the power flow through the EV battery.

There are also opportunities for highly integrated PMICs in other segments such as industrial, telecom, and medical applications.

NVM in PMICs

Power management as a function has been used for decades. Up until the mid-1990s, the primary goal was to trim voltages to fit product requirements, and this was handled as a simple analog function. In the mid-1990s, as electronic complexity increased, PMICs began to manage this function using very simple EEPROM – a basic type of non-volatile memory (NVM) – to store analog calibration and trimming data. One-Time-Programmable (OTP) NVM was also used for this function, but since trimming often requires iterative voltage adjustments, multiple banks of NVM were needed when using OTP.

Starting in the early 2000s, as companies started integrating more and more digital functionality into one chip (a System-on-a-Chip or SoC) to meet performance, cost and power consumption goals, the function of the NVM inside of PMICs began to evolve and it continues to do so today. While still used for trimming voltages, in today’s highly integrated SoCs, embedded NVM is a critical block within the PMIC, used to store controller code and configuration data, as well as unique IDs.

Some PMICs today also require integration of a microcontroller (MCU) for added intelligence, including smart sensing and measurement in ultra-low power IoT products. These highly integrated systems not only need to store data and boot code, but must also run firmware updates, requiring high-performance, low-power NVM.

As NVM plays an increasingly critical role in these chips, it has moved from storing hundreds of bits to thousands of bits to tens of thousands of bits – potentially even a million bits.

Of course, not every PMIC needs this level of complexity. Some systems need only very simple voltage regulation, and simple PMICs with small NVM can do this. This is why NVM for PMICs can range from very simple EEPROM or OTP NVM for small, simple requirements; to Multiple-Time Programmable (MTP) for a small programmable/reconfigurable NVM; to embedded flash for more robust needs. There are power, area, performance and cost tradeoffs which designers must consider alongside power management requirements.
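To make this concrete, here is a hypothetical layout of the kind of record a PMIC might keep in its embedded NVM: per-rail trim and target voltages, sequencing information, a unique ID and a firmware revision. The field names and sizes are assumptions for illustration, not a real device map.

```c
/* Hypothetical PMIC NVM record holding the trim and configuration data
 * described above.  Field names and sizes are illustrative only. */
#include <stdint.h>

#define PMIC_NUM_RAILS 4

struct pmic_nvm_record {
    uint32_t magic;                          /* marks a valid, programmed record */
    int8_t   trim_steps[PMIC_NUM_RAILS];     /* calibration offsets per rail     */
    uint16_t target_mv[PMIC_NUM_RAILS];      /* e.g. 3300, 1800, 1200, 5000      */
    uint8_t  power_up_order[PMIC_NUM_RAILS]; /* rail sequencing                  */
    uint8_t  unique_id[16];                  /* per-device identifier            */
    uint16_t fw_version;                     /* controller firmware revision     */
    uint16_t crc16;                          /* integrity check over the record  */
};

/* With a byte-addressable NVM, an iterative trim adjustment rewrites only
 * the affected field; no sector erase and no spare OTP banks are needed. */
```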

Embedded Flash in PMICs

When it became apparent in the early 2000s that a more robust NVM solution was needed for a growing number of PMICs, embedded flash was the only available option, so that’s the way the industry moved. Unfortunately, flash introduced a great deal of complexity and expense, starting with between seven and 11 extra masks added to an already complex manufacturing process.

Another complication is that embedded flash must be integrated in the Front End of Line (FEOL) of the manufacturing process where other analog and power components are also integrated. Because the BCD process integrates a variety of different components on a single die, it requires careful balancing of the different design needs of each device type.

At the same time, Flash requires careful integration to make sure it works reliably within a design – often based on past experience and best practices. To accommodate it, companies must make technology design trade-offs that sometimes compromise the other analog components in the FEOL. In this way, flash plays an outsized role in driving the integration strategy of the whole chip. This is obviously not ideal since the compromises designers must make to accommodate flash can lead to overall degraded performance, larger size, and higher cost.

Flash has other limitations including a lack of robustness in harsh environments, which often requires building costly redundancy into the design. Importantly, the integration cost of embedded flash increases with each process shrink, an obvious challenge as companies move to more advanced process geometries. All this means that the industry is looking for new NVM solutions.

Emerging Memories for PMICs

When evaluating alternatives, the first consideration is whether an NVM is integrated in the back-end-of-line (BEOL), where no compromises are needed with other analog components. Using a BEOL NVM allows full optimization of analog components, and it simplifies adoption into new fabs (it can be adopted once for a geometry and it will work with all the different variants, unlike flash, which must be adapted to each variant). A BEOL memory is the best alternative to embedded flash for PMICs, but it must be small and cost-effective.

Weebit ReRAM is a Back-End-of-Line (BEOL) technology, enabling full optimization of analog components, and simplifying adoption into new fabs

As a BEOL NVM technology, Ferroelectric RAM (FRAM) is one option companies can consider. However, FRAM is unable to handle the high temperatures needed in PMICs (up to 150 degrees Celsius). It also requires exotic materials and new fab equipment – neither of which makes sense for an analog chip like a PMIC. Another option is Magnetoresistive RAM (MRAM). MRAM is also a BEOL technology and, in most cases, it can handle high temperatures. However, MRAM is complex and expensive from both a capital expenditure and a manufacturing standpoint, requiring costly additional tools and masks – not a good fit for the analog market.

The answer? Weebit ReRAM. As a small and cost-effective BEOL technology, it checks all the boxes for NVM in PMICs.

Weebit ReRAM checks all the boxes for NVM in PMICs


Toward Net Zero

The low power consumption of Weebit ReRAM is an important advantage for PMICs, especially as societies across the world look for ways to decrease their carbon footprints. PMICs can keep power consumption and dissipation in electronic systems as low as possible – both active power (when the system is on) and leakage (when it’s off). With PMICs designed from top to bottom for high efficiency, including ultra-low-power NVM, it’s possible to have a real impact on this critical issue.

Weebit and our R&D partner CEA-Leti are currently conducting an environmental initiative that will analyze the environmental impact of Weebit’s ReRAM compared to other NVM technologies.

With its unique combination of advantages, Weebit ReRAM is the logical alternative to embedded flash for the next generation of PMICs.

 

See the Weebit ReRAM IP Module in Action!
https://www.weebit-nano.com/see-the-weebit-reram-ip-module-in-action-embedded-rram/ | 26 June 2022

At this week’s Leti Innovation Days event, Weebit is showing the first public demonstration of our ReRAM IP module. Just in case you aren’t attending the event, we’re sharing a video of the demo so you can see Weebit ReRAM in action.

The demo shows the real-world capability of Weebit ReRAM as a non-volatile memory (NVM) integrated into an actual subsystem. Specifically, you’ll see live images being fed into the ReRAM IP module on a test board, which is then unplugged and powered on again. The images are retained during power-off, demonstrating the non-volatile nature of the memory.

The demo also shows the speed at which the ReRAM module is read and written, demonstrating significantly faster write speed compared to flash memory. This speed owes much to the fact that Weebit ReRAM has direct write capability and is bit-accessible, unlike flash, which must erase entire data sectors every time it writes, leading to slower write throughput.
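For comparison, the sketch below shows what updating just a few bytes typically costs on sector-based flash: the whole sector is read out, erased, and reprogrammed even though only a few bytes changed. The 4 KB sector size and the driver hooks are illustrative assumptions.

```c
/* Illustrative read-modify-erase-reprogram cycle on sector-based flash.
 * The driver functions are empty placeholders so the sketch compiles. */
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 4096u

static uint8_t sector_buf[SECTOR_SIZE];

static void flash_read_sector(uint32_t s, uint8_t *dst)        { (void)s; (void)dst; }
static void flash_erase_sector(uint32_t s)                     { (void)s; }
static void flash_program_sector(uint32_t s, const uint8_t *p) { (void)s; (void)p; }

void flash_update_bytes(uint32_t sector, uint32_t offset,
                        const uint8_t *data, uint32_t len)
{
    flash_read_sector(sector, sector_buf);      /* read back the whole 4 KB    */
    memcpy(sector_buf + offset, data, len);     /* patch the few changed bytes */
    flash_erase_sector(sector);                 /* erase the whole 4 KB        */
    flash_program_sector(sector, sector_buf);   /* reprogram the whole 4 KB    */
}
/* A bit-accessible ReRAM would instead program only the changed bytes directly. */
```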

The demo is based on Weebit’s embedded ReRAM module that includes:

  • A 128Kb ReRAM array
  • All analog circuitry, including optimized voltage and current references
  • Smart algorithms
  • Control logic and data manipulation
  • IOs (Input/Output communication elements)
  • Redundancy and Error Correcting Code (ECC)

Watch the video!
