
Research & Development Highlights

Our outstanding technologies will create the future. We are developing advanced memory technologies three to ten years ahead of their time, and we are also applying them to promote R&D of application and solution technologies for memory systems and storage systems.

Research & Development Field

Device technology

Process technology

System technology

Production management technology

Device technology

New Memory Development

We are developing new memory technologies in order to widen our product portfolio and expand our business. We propose new memory cell technologies to realize even higher-bit-density file memories, as well as various high-speed nonvolatile memories. For example, we have demonstrated STT-MRAM technology(*1) and ReRAM technology(*2) with the highest density as of the time of publication(*3). Advanced device, process, and circuit technologies are needed to achieve memories with new structures and new materials, and we challenge ourselves with new tasks on a daily basis.

*1 Spin Transfer Torque Random Access Memory
(We presented 4Gbit STT-MRAM technology at IEDM with SK-hynix in 2016.)

*2  Resistive Random Access Memory
(We presented 32Gbit ReRAM technology at ISSCC with SanDisk in 2013.)

*3 Figures according to our research.

Memory cell structures presented at the conference (Left: STT-MRAM; right: ReRAM)

TCAD (Technology CAD) Development

TCAD (technology CAD) is one of the key technologies for prospectively and effectively developing the leading-edge memory devices that require new materials and complex 3D structures.

To start, we establish fundamental models of process phenomena and device operations. We apply computational science such as first-principle calculation for a thorough understanding of electron-level or atomic-level microscopic phenomena.

Then, we promptly build the process and device models into our in-house TCAD system, which realizes robust simulation.

TCAD contributes greatly to efficient advanced memory development, not only by finding solutions to technical issues in the memories currently under development, but also by predicting the performance and potential issues of future-generation memories before fabrication begins.

Development flow with TCAD

Development of New Evaluation Method for Nanomaterials

In order to realize new memory devices, development of nanomaterials (molecules or particles whose size is less than 10nm) is crucially important, but it is extremely difficult to evaluate their electrical properties.

For example, when the top electrode material is deposited on a nanomaterial sitting on the bottom electrode, the nanomaterial may degrade if its heat resistance is low, or a short circuit between the top and bottom electrodes may occur if the top electrode material penetrates the nanomaterial. Probing with an STM (Scanning Tunneling Microscope) is another evaluation method, but it is very difficult to achieve good reproducibility.

We have established a brand-new evaluation method for nanomaterials by applying a state-of-the-art semiconductor fabrication process. First, a large number of nanogaps like the one in Fig. 1, whose gap is almost the same size as the nanomaterial, are formed at once with good controllability, and a nanomaterial is then inserted into each nanogap. Figure 2 shows examples of nanomaterials, namely a gold nanoparticle, fullerene C60, and an oligo-phenylene-ethylene derivative. Figure 3 shows the I-V characteristics of nanomaterials in a 5 nm or 2 nm gap. Very small currents, lower than 1 pA (1 pA = 10⁻¹² A), can successfully be measured. Figure 4 shows the histogram of the threshold voltage at which a 0.1 pA current flows; distributions can be obtained by multi-point measurement.
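
The threshold-voltage extraction described above can be sketched numerically. The following is an illustrative example on synthetic data, not the actual measurement code; the turn-on model and function name are invented for illustration:

```python
import numpy as np

def threshold_voltage(v, i, i_th=0.1e-12):
    """Return the lowest voltage at which |current| reaches i_th (default 0.1 pA)."""
    idx = np.argmax(np.abs(i) >= i_th)  # first index where the condition holds
    if np.abs(i[idx]) < i_th:
        return None  # current never reached the threshold in this sweep
    return v[idx]

# Synthetic I-V sweep with an exponential turn-on (illustrative only)
v = np.linspace(0.0, 2.0, 201)          # volts
i = 1e-15 * (np.exp(v / 0.15) - 1.0)    # amperes
vth = threshold_voltage(v, i)
```

Repeating this extraction over many nanogap devices yields the threshold-voltage histogram of Fig. 4.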

We will continue to develop new evaluation methods and apply them in the development of new nanomaterials, and promote the development of new functional devices.

Development of New Evaluation Method for Nanomaterials

Process technology

Next-Generation Lithography Process: Nanoimprint

In the optical lithography process, shorter wavelengths and higher NAs, which increase the lens diameter, have been introduced to meet the demand for device miniaturization. As wavelength reduction and NA increase approach their physical limits, new techniques are emerging, such as multiple patterning, which repeats optical lithography several times, and EUVL (Extreme Ultra-Violet Lithography). However, because these techniques add process steps and require additional process tools, increases in process cost are inevitable.

In order to overcome the lithography process cost increase, we are developing nanoimprint lithography that can miniaturize devices at lower cost. The nanoimprint technique uses imprinting to transfer nanoscale patterns on a template to a Si wafer, and unlike conventional lithography tools, it does not require a lens optical system for reduction projection.

The nanoimprint is a highly anticipated next-generation lithography method to realize advanced memory devices with reduced cost.

Nanoimprint lithography

Analytical Technologies for Next-Generation Devices

In order to achieve high-performance, highly functional next-generation memory devices, it is essential to have (1) device design and process technology for 3D nanostructures, (2) material technologies that can introduce various functional thin films, and (3) analysis technology that can reveal device nanostructures and material composition.

As many 3D memory nanostructures consist of intricately stacked thin films, it is very important to accurately understand the nanostructure of each individual film, the interfaces between them, and the distribution of elemental composition in order to realize high-performance, high-reliability devices. New analytical techniques are needed to analyze nanometer-level 3D structures, and we are driving various advanced analysis methods to meet this challenge.

Specifically, Atom Probe Tomography (APT) can reveal 3D elemental distribution by counting the atoms one by one, as shown in the left figure. The right figure is an example of transistor (MOSFET) elemental analysis that can successfully visualize the 3D profile of elements on the nanometer level.

The principle of the APT (left); an atom map of a transistor (right)

Development of image processing technology utilizing machine learning

State-of-the-art semiconductor manufacturing requires highly accurate defect inspection even if the defects are very small. We are developing a new inspection technique utilizing not only conventional image processing but also machine learning.

The left-hand figure below shows an example of conventional defect inspection in the semiconductor manufacturing process using an SEM (Scanning Electron Microscope). Defects such as open or short failures of metal wires on a semiconductor wafer are detected by comparison with the CAD layout(*1) of the circuit. However, as the pattern transferred onto a wafer is not identical to the CAD layout, false detection of non-defects may occur.

We have developed the novel inspection technique shown in the right-hand figure below: we apply machine learning to generate a virtual SEM image from the CAD layout and compare it with the actual SEM image to obtain more accurate results(*2). We will continue to introduce advanced machine learning, which progresses day by day, and develop technologies that contribute to higher yields and higher quality in our products.
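
Whichever reference is used — a rendered CAD layout (conventional flow) or a model-generated virtual SEM image (ML flow) — the comparison step itself amounts to a thresholded difference map. A minimal sketch with toy images; the function name and threshold are invented for illustration:

```python
import numpy as np

def detect_defects(sem_image, reference_image, diff_threshold=0.3):
    """Flag pixels where the SEM image deviates strongly from the reference.

    A large deviation suggests a defect (e.g. an open or bridging wire);
    a closer reference image (virtual SEM) yields fewer false detections.
    """
    diff = np.abs(sem_image.astype(float) - reference_image.astype(float))
    return diff > diff_threshold  # boolean defect mask

# Toy 8x8 "images": identical except one unexpectedly bright pixel
ref = np.zeros((8, 8))
sem = ref.copy()
sem[3, 4] = 1.0  # e.g. a bridging defect between wires
mask = detect_defects(sem, ref)
```

Because a real wafer pattern never matches the CAD layout exactly, using the raw layout as `reference_image` inflates `diff` everywhere; generating the reference with a learned model shrinks those systematic differences, which is the point of the ML-based flow.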

*1: CAD (Computer Aided Design) drawing for semiconductor IC manufacturing (e.g., wiring)

*2: Joint development with Toshiba Corp.

The result of the defect inspection with the CAD layout (left), and with machine learning (right).

System technology

HMB (Host Memory Buffer) technology for DRAM-less SSDs

Laptop computers are becoming thinner and thinner, and their built-in SSDs are required to be smaller and lower in cost. However, if the DRAM on an SSD is eliminated to reduce the part count, the SSD's data read/write performance generally degrades.

We have successfully developed HMB (Host Memory Buffer) technology to realize a DRAM-less, high-performance, one-package SSD. HMB technology utilizes part of the host memory (DRAM) as if it were the SSD's own, achieving performance equivalent to an SSD with DRAM.
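
As a conceptual illustration only (not the actual NVMe HMB mechanism, which is negotiated between host driver and controller), the idea of borrowing host DRAM can be pictured as the SSD keeping a cache of its logical-to-physical mapping table in host memory instead of on-device DRAM. The class name, capacity, and placeholder NAND lookup below are all invented:

```python
from collections import OrderedDict

class HostMemoryBuffer:
    """Toy model: an LRU cache of mapping-table entries held in host DRAM."""

    def __init__(self, capacity_entries):
        self.capacity = capacity_entries
        self.cache = OrderedDict()  # stands in for the borrowed host DRAM

    def lookup(self, lba):
        """Translate a logical block address; returns (physical addr, cache hit?)."""
        if lba in self.cache:
            self.cache.move_to_end(lba)          # refresh LRU position
            return self.cache[lba], True          # fast path: host DRAM
        ppa = self._read_mapping_from_nand(lba)   # slow path: NAND access
        self.cache[lba] = ppa
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict least recently used
        return ppa, False

    def _read_mapping_from_nand(self, lba):
        return lba * 4  # placeholder translation, illustrative only

hmb = HostMemoryBuffer(capacity_entries=2)
_, hit1 = hmb.lookup(10)   # first access misses: mapping read from NAND
_, hit2 = hmb.lookup(10)   # repeat access hits the host-memory cache
```

The performance benefit comes from the fast path: repeated translations are served from (borrowed) DRAM rather than NAND, which is why an HMB-enabled DRAM-less SSD can approach the performance of one with on-board DRAM.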

As cooperation between the host driver and the SSD is necessary, we developed HMB protocols for booting and connection and worked with major CPU/OS vendors to have them incorporated into NVMe 1.2*, the PCIe® SSD interface standard.

A DRAM-less, high-performance, one-package SSD with HMB technology is now offered by our SSD division as the BG series, one of our main consumer SSD products. We will continue to develop advanced technologies for high-performance, small, and low-cost SSDs.

* An interface specification developed for SSDs
  NVMe is a trademark of NVM Express, Inc. PCIe is a registered trademark of PCI-SIG.

Conventional SSD (left) and HMB-SSD (right): HMB-SSD utilizes a part of the host DRAM instead of DRAM on the SSD.

A Daisy-Chained Bridge Interface Technology for High-Bandwidth and Large-Capacity SSDs

As technologies such as AI rapidly evolve, SSDs increasingly require larger storage capacities and higher speeds. In the near future, some applications will require SSDs with a capacity larger than 1 PB (10¹⁵ bytes) and a data bandwidth higher than 100 GB/s.

Power consumption at data centers is also increasing steadily, and if no countermeasures are taken, social issues might arise. Therefore, power reduction of SSDs used in data centers is also a very important task to accomplish.

We have managed to realize high circuit-board density and high bandwidth simultaneously. To realize high density, we introduced a newly designed bridge chip, which connects the SSD controller to a number of NAND flash memories via only a pair of data wires in a daisy-chain* topology, one for downlink and the other for uplink. In addition, we have introduced a novel signaling technology that tapers the bandwidth at each stage of the daisy-chained bridge chips for power reduction.
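
The tapering idea follows from the topology: the link nearest the controller must carry traffic for every downstream bridge, while the last link serves only its own NAND group, so the required bandwidth shrinks stage by stage. A minimal sketch of that aggregate-traffic arithmetic; the function name and figures are invented, and the real signaling scheme is more involved:

```python
def link_bandwidths(per_device_bw, n_stages):
    """Bandwidth each daisy-chain link must carry, controller side first.

    Link k forwards traffic for itself plus all (n_stages - k - 1)
    downstream stages, so requirements taper toward the chain's end.
    """
    return [per_device_bw * (n_stages - k) for k in range(n_stages)]

# Example: 4 bridge stages, each NAND group moving 2.0 GB/s
bws = link_bandwidths(per_device_bw=2.0, n_stages=4)
```

Running later links at the lower speed they actually need, rather than at the first link's full rate, is what saves power in the tapered scheme.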

* A wiring method to connect multiple devices in series or in a ring

Our proposed daisy-chain based SSD interface (Data Downlink)

Development of high-speed and high-energy-efficiency algorithm and hardware architecture for deep learning accelerator

We have developed an AI accelerator for deep learning and presented it at A-SSCC 2018, an international conference on semiconductor circuits.

Deep learning requires a huge number of multiply-accumulate (MAC) computations, which lead to long computation times and large power consumption. To address these issues, we introduced two new techniques: "filter-wise optimized quantization with variable precision" (Fig. 1) and "bit-parallel MAC hardware architecture" (Fig. 2).

The filter-wise technique optimizes the number of weight bits for each of the tens to thousands of filters in every layer. At an average bit precision of 3.6 bits, the recognition accuracy of layer-wise optimized quantization (Fig. 1, middle) falls below 50%, but the proposed filter-wise quantization maintains almost the same accuracy as before quantization while reducing computation time.
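
To make the filter-wise idea concrete, here is a minimal sketch of per-filter quantization with variable precision. This is not the accelerator's actual quantizer (which this article does not specify); the symmetric uniform scheme, function name, and bit assignments are illustrative assumptions:

```python
import numpy as np

def quantize_filterwise(weights, bits_per_filter):
    """Quantize each filter's weights with its own bit precision.

    weights: array of shape (n_filters, n_weights), assumed nonzero per filter.
    bits_per_filter: one bit width per filter, chosen to preserve accuracy.
    Uses symmetric uniform quantization per filter.
    """
    out = np.empty_like(weights, dtype=float)
    for f, bits in enumerate(bits_per_filter):
        levels = 2 ** (bits - 1) - 1                  # positive levels available
        scale = np.max(np.abs(weights[f])) / levels   # per-filter step size
        out[f] = np.round(weights[f] / scale) * scale
    return out

# Two filters: the first gets 4 bits, the second only 2
w = np.array([[0.9, -0.4, 0.1],
              [0.5,  0.5, -0.5]])
wq = quantize_filterwise(w, bits_per_filter=[4, 2])
err = np.abs(w - wq).mean()
```

Because each filter gets only as many bits as it needs, the average precision (here 3 bits) stays low while the quantization error remains small — the trade-off that layer-wise quantization, forced to share one precision across a whole layer, cannot make.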

The bit-serial technique (Fig. 2, left) is often used for MAC architectures, but if it is applied to filter-wise quantization (Fig. 2, middle), the execution time varies with the bit precision of each filter: the PE (Processing Element) assigned to a filter with a large computation load may become a bottleneck. The bit-parallel technique (Fig. 2, right), on the other hand, splits each weight, whatever its bit precision, into individual bits and assigns them across several PEs operating in parallel. The utilization efficiency of the PEs is improved to almost 100%, and throughput also becomes higher.

We implemented our algorithm and hardware architecture with ResNet-50(*1) on an FPGA(*2) and demonstrated ImageNet(*3) image recognition with 5.3 times the computation throughput, with computation time and energy consumption reduced to 18.7%.

*1…ResNet-50: A deep neural network generally used to benchmark deep learning for image recognition

*2…FPGA: Field Programmable Gate Array

*3…ImageNet: A large image database, generally used to benchmark image recognition, containing over 14,000,000 images.

Fig. 1: Conventional quantization with fixed 16bit (upper), layer-wise quantization (middle), proposed filter-wise quantization (bottom).

Fig. 2: Layer-wise quantization and bit serial architecture (left), filter-wise quantization and bit serial architecture (middle), and filter-wise quantization and bit parallel architecture (right).

Production management technology

Factory Innovation

As the capacity of memory products increases, the amount of data handled at a factory is growing considerably. Unlike automobiles, flash memories are manufactured using a complicated network of more than 5,000 pieces of manufacturing and inspection equipment. To maintain high quality, more than two billion data items are collected in real time every day from manufacturing equipment and transport systems, and complex factory analyses are performed on this enormous amount of data. For example, deep learning technologies help to greatly reduce the percentage of devices rejected by defect tests, and AI technologies help reduce the time required to infer the cause of defects. In addition to Yokkaichi Operations, Toshiba Memory Corporation is currently constructing a memory fab in the city of Kitakami, Iwate Prefecture. We are introducing state-of-the-art tools and promoting open innovation both within and outside the company with the aim of achieving efficient production at both sites.

Figure 1. Example of big-data utilization at Yokkaichi Operations
