Mobile/Client/AI Computing Forum Korea

Tuesday, May 16 • Seoul

Program Moderator: Youngbin Lee, Samsung

JEDEC Welcome
Mian Quddus, JEDEC Board of Directors

Morning Session


How ChromeOS is Making Smarter Memory Decisions

Keynote Presenter: Brian Geffon, Google

Over the past several years there has been a shift in focus toward security and isolation, and with it have come increasing levels of over-provisioned devices. This talk will go into how ChromeOS is taking advantage of, or adding features to, the Linux kernel to better handle resource constraints from user space. We will discuss why user space can do a better job managing memory in many situations. Finally, we will touch on how hardware can make this job easier going forward.


The Origins of CAMM (Compression Attached Memory Module)

Keynote Presenter: Dr. Tom Schnell, Dell

CAMM is a Dell-patented memory module design, brought to JEDEC for industry standardization and adoption. CAMM solves great customer problems of performance limits, connector reliability, and serviceability. CAMM also solves great OEM problems of thermals, memory bus routing in motherboards, system form factor constraints, EMC noise, and memory servicing. These problems have existed for years and have now been addressed. This presentation covers how to discover great problems, how to uncover creative solutions, the innovator's dilemma in putting early ideas into product, the JEDEC journey, and the future direction of CAMM.



Supporting Energy Efficient Execution for AI (at the edge)

Keynote Presenter: Amedeo Zuccaro, ST

This presentation provides an overview of the key aspects of supporting energy-efficient execution of AI inference in embedded devices, as required for connected intelligent nodes.



Enlarged Coverage of Low Power Memories (LPDDR) with Better Performance / Low Power

Keynote Presenter: Jeff Choi, SK hynix

Since LPDDR was introduced to the industry to reduce power consumption, interest has grown not only in its power efficiency but in its performance as well. LP memories have rapidly enhanced their speed and performance, now offering better performance than DDR memories. As a result, new industries and applications prefer LPDDR, enlarging the area that LP memories cover: LPDDRx is being adopted not only in the mobile/client space but also in AI, graphics, server, and other areas. This talk will touch on how LP memories are preparing for new requirements from the industry.



Memory, Test and Measurement and the Impacts of Changes in the Data Center

Presenter: Brig Asay, Keysight

Perhaps no other technology in the data center will change more than memory over the next few years. As servers move toward further disaggregation, memory must be faster with lower latency. Faster memory means even bigger test and measurement challenges. Previously difficult tasks, such as probing and decoding, only get harder for everyone over the next several years. This discussion will focus on those challenges and some of the best ways to overcome them.


LPDDR5: Everything Everywhere all at Once

Presenter: Brett Murdock, Synopsys

LPDDR5 has for some time been the jack of all trades among memories while actually being the master of some. This presentation will discuss various applications for LPDDR5 and why LPDDR5 is the memory of choice. The presentation will also discuss the view of the memory from inside the SoC.

12:00-1:00 PM Lunch Break

Afternoon Session



LPDDR Memory Subsystem Evolution for Various Applications

Presenter: Eric Oh, Samsung

Memory-centric architecture is key to system enhancement, and the evolution of the LPDDR memory subsystem currently delivers value across various applications. LPDDR memory has evolved to attain continuous improvements in high speed and power efficiency, and data rate increases within a limited power budget provide huge benefits for many applications. In this presentation, we will summarize LPDDR memory solution evolution trends for various markets and address next-generation LPDDR memory subsystem considerations for leaping forward into the future.


In-Memory Computing for Neural Networks Using Multi-Level SONOS

Presenter: Sergey Ostrikov, Infineon

In-memory computing (IMC) is a technology that aims to keep data and computation as close as possible to each other. One way to implement IMC is by using non-volatile memory (NVM), such as flash, with the goal of reducing data movement and the power consumption associated with it. AI applications rely on large numbers of kernel weights needed for computation. NVM-based IMC can perform computation in the place of storage and thus eliminate the need to fetch the weights into a compute engine. This presentation explores the challenges associated with this approach, such as efficient propagation of intermediate computation results through a static memory array, and proposes a functional solution using TVM as an ML compiler.


Divergence of Memory Technology Needs for Client/Mobile and Cloud Server SOCs

Presenter: Nagi Aboulenein, Ampere

We will discuss areas of divergence (and synergy) of client/mobile and server memory technology needs for future SOCs and platforms.




Memory History and Beyond

Keynote Presenter: Osamu Nagashima, Micron

This keynote covers DRAM technology trends and industry focus features, along with today's DRAM application requirements and technologies.


Adaptable and Programmable System Architecture and Applications Driving DDR5 to Meet the Demands of the Next 5 Years

Presenter: Thomas To, AMD

The explosion of data traffic is driving exponential growth in data center and cloud computing workload demands. Data center processors are seeing a mixture of file sizes, diversified data types, and new algorithms with varying processing requirements. Adding to the challenge is workload evolution, with cloud-based ML/AI (machine learning and artificial intelligence) being first and foremost. Processing speed and bandwidth demands increase the data center burden. Example workloads targeted for acceleration are data analytics, networking applications, and cybersecurity. Adaptable system accelerators, such as those implemented with FPGAs, have bridged the computational gap by providing heterogeneous acceleration to offload the burden. However, new data paths, such as those in ML, are fundamentally different from the traditional CPU data path flow. This presentation will highlight the diverse applications of programmable systems and contrast their system memory (e.g., DDR5) requirements with traditional CPU system requirements. The discussion will stress the balance among system cost, bandwidth, and memory density requirements going forward.



Utilize New Memory Features to Enhance Intel Platform User Experience

Presenter: Sanghyun Yoon, Intel

In this session, you will learn how the system applies new DDR5/LPDDR5X features in the memory initialization sequence to achieve higher bandwidth. Also, see some of the innovative approaches to improving client memory qualification and user experience.



Next-Generation Memory Access for Edge AI Computing: 8.533Gbps, 16Gbps and Beyond

Presenter: Marc Greenberg, Cadence

Edge AI applications require very high memory bandwidth to perform AI functions. What's the right memory for them? In this presentation we will discuss what's necessary to implement a memory interface at the highest speeds available under the JEDEC standards – LPDDR5X-8533 and GDDR6-16G – and look at the future coming with even higher speed grades for these memory types.


LPDDR5 In System Validation

Presenter: Barbara Aichinger, FuturePlus

LPDDR5 is a strong contender for use in the embedded market and is even finding its way onto specialized memory modules. It has proliferated across several different package types and has very high-speed data capture requirements and low power features. In this presentation we will take a look at how engineers are validating LPDDR5 designs.


LPDDR5 Interface Test and Validation Methodology

Presenter: Randy White, Keysight

Over time, as LPDDR speeds have increased, the fundamental approach used to move data has had to change. Traditional high-speed digital timing and noise with min/typ/max specifications have given way in LPDDR5 to high-speed serial approaches based on eye masks with jitter specifications. LPDDR5 must go a step further to deal with distorted eyes using tunable equalization. At each point, the need to characterize and measure what's defined in the spec has made measurement science and DFT increasingly important in defining the LPDDR spec. This session will focus on the measurement science behind the LPDDR5 specification.


Closing Remarks
Mian Quddus, JEDEC Board of Directors

Program, topics and speakers subject to change without notice.