Intel Labs at Intel® Innovation 2023

Highlights:

  • This year’s Intel® Innovation Conference will run September 19-20 at the San Jose McEnery Convention Center in San Jose, California.

  • Intel Labs offers seven sessions regarding Innovation for the Future as well as eight demonstrations and a tech insight talk on the Future of AI.

  • Register to attend in person or watch the opening keynotes digitally on Tuesday, September 19th at 9 a.m. PST and Wednesday, September 20th at 9:30 a.m. PST.

author-image

By

At the 2023 Intel® Innovation conference, Intel® invites you to engage with your peers and learn from the brightest minds in the industry about how to utilize some of the latest hardware and software breakthroughs to speed development, drive innovation, and help hone your competitive edge. To begin the conference, Intel’s executive leadership and special guests will outline the latest advancements in computing, offer a roadmap for the future of the industry, and unveil the latest developer solutions to unleash the next wave of innovative technology.

As part of this event, Intel Labs is excited to present the Advanced Technology track. This fascinating track offers insight on the latest advancements and solutions within some of the most critical areas of technology, including AI, quantum computing, integrated photonics, machine learning, Private 5G, and more.

Among the seven technology sessions, eight demonstrations, and future of AI address, there are learning opportunities available for everyone. Be sure to register to attend in person or watch the opening keynotes digitally on Tuesday, September 19th at 9 a.m. PST and Wednesday, September 20th at 9:30 a.m. PST.

Learn more about Intel Labs’ contributions at Intel Innovation:

Tech Insight

The Future of AI: Nimble, Cognitive, Autonomous, and Responsible
Gadi Singer, Vice President and Director of Emergent AI Research at Intel Labs 
September 20, 2:00 p.m. PDT

2023 marks a year of transition to a new phase in AI. The emphasis is shifting to cost-effective, trustworthy, secure, responsible, adaptable generative AI models at scale across many industries, including healthcare, manufacturing, retail, and entertainment. At Intel Labs, researchers are advancing the state-of-the-art of emerging AI applications as well as working with the industry to best integrate and gain value from State-of-the-Art latest technologies. In this session, Gadi Singer, a Vice President at Intel Labs and Director of the Emergent AI Research Lab, collaborative research efforts that will shape the future of AI. He will detail industry trends and recommended architectures and approaches that aim to help AI become more nimble, knowledgeable, autonomous, and responsible.

Sessions

Integrated Photonics and its Impact on Systems and Data Centers of the Future
James Jaussi, Senior Principal Engineer and director of the PHY Research Lab at Intel Labs
September 19, 11:00 a.m. PDT

The ever-increasing movement of data from server to server is taxing the capabilities of today’s data center infrastructure. The industry is quickly approaching the practical limits of electrical I/O performance. Integrated Photonics research at Intel labs addresses growing challenges of network traffic in data centers by integrating silicon photonics with low-cost, high-volume silicon. Learn about our recent advances and explore possibilities of how integrated photonics may change system interfaces and network topologies in future data center architectures. This session will lay out Intel’s vision for optical compute interconnect and end-to-end optical networks in future data centers.


AI-enabled Edge Services and Control for the Intelligent Edge
Ravi Iyer, Intel Fellow, and Director of Intel’s Emerging Systems Lab
September 19, 11:00 a.m. PDT

Enterprises are deploying innovative AI/ML based edge services supporting diverse use cases in distributed heterogeneous edges. In this session we’ll show how AI/ML techniques are used to operate the intelligent edge from services to infrastructure end-to-end and top-to-bottom down to the platform. This session will introduce a set of innovative edge applications and the use of AI/ML techniques to improve their capabilities and performance, e.g., apps improving defect detection, performing visual query and data management, or coordinating mobile robots’ paths. We’ll show how automated AI/ML algorithms optimizations for resource constrained edge platform help optimize apps performance, and we’ll details how these algorithms are used to optimize end-to-end operation of applications over a 5G network infrastructure. This session will also demonstrate how these algorithms dynamically adapt shared resources usages using platform features such as Resource Director Technology to satisfy users’ requirements defined through high level intent for the end-to-end pipeline. This session will detail how innovative AI/ML technics are used to optimize and operate the intelligent edge.


Neuromorphic Computing: Loihi 2 and Lava Software
Mike Davies, Director of the Neuromorphic Computing Lab at Intel Labs
September 19, 1:30 p.m. PDT

Despite decades of progress and recent breakthroughs, today’s AI technologies still fall short of biological brains in many important respects. To enable a future world filled with intelligent devices of all scales that operate safely and autonomously alongside humans, Intel is pioneering a fundamentally new approach to computing inspired by the principles of neuroscience. This has led to the novel Loihi processor architecture that delivers orders of magnitude gains in computational metrics by minimizing data movement and exploiting sparsity and asynchronous event-based communication. This session will detail the latest developments in Intel Labs’ neuromorphic computing research.


Driving Foundational AI Research at Scale
Somdeb Majumdar, Director of the AI Lab at Intel Labs 
September 19, 1:30 p.m. PDT

Intel’s AI Lab builds foundational AI techniques, disruptive applications, and moonshots. This session will share insights into distributed training platforms for billion-scale graph neural networks (GNNs), work on long-form video understanding with graph architectures that match SOTA transformer performance at a fraction of the cost while handling 10x longer videos and AI for Science breakthroughs with the first conversational interface for protein structure design and new benchmarks for AI for material discovery. Each topic will cover open-source products and share opportunities to collaborate on industry-relevant problems.
 

Advances in Quantum Computing and Intro to the Intel® Quantum SDK
Anne Matsuura, Director of Quantum & Molecular Technologies at Intel Labs
September 19, 2:30 p.m. PDT

Even though we may be years away, quantum computing promises to enable breakthroughs in energy production, materials, chemicals and drug design, financial and climate modeling, and cryptography. Many companies are joining efforts to advance research that will help make quantum computing a reality. Intel has recently engaged academic partners with the “Tunnel Falls” quantum computing research chip. In parallel, Intel Labs is growing a community of software developers to leverage a full-stack solution called the Intel Quantum Software Development Kit (SDK). This session will detail Intel’s recent advances, and introduce the Intel SDK and demonstrate its use, including the execution of popular quantum-classical chemistry algorithms.
 

Encrypted Computing: Reaching the Pinnacle of Data Privacy
Rosario Cammarota, Chief Scientist of Privacy-Enhanced Computing Research at Intel Labs
September 20, 11:00 a.m. PDT

Going beyond today’s Confidential Computing, a new area of research called Encrypted Computing promises to reach the pinnacle of data privacy and security for cloud computing. Unlike trusted execution environments, Encrypted Computing is a powerful new technique for enabling computation and collaboration on private and sensitive data through end-to-end encryption. It could revolutionize how encrypted data can be shared, analyzed and processed in the future, empowering organizations to gain valuable insights while theoretically eliminating risk of exposure of their data to 3rd parties. Join this session to learn about Intel’s recent advances in Fully Homomorphic Encryption and the Intel Encrypted Computing SDK.

Self-Optimizing Databases: Using Machine Learning in Database Management Systems
Nesime Tatbul, Senior Research Scientist at Intel Labs
September 20, 1:30 p.m. PDT

A Database Management System (DBMS) is a complex software system. Performance depends on the hardware, the data, and the database queries. Optimizing a DBMS for a particular problem can be overwhelming. Intel labs, working closely with academic partners at MIT and Brown, is developing data management systems that optimize themselves automatically. DBMS optimization using simple parameter-search methods has been used before. This research, however, is breaking new ground with innovative approaches based on machine learning. Everything is on the table from self-organizing data containers and automatically optimized query engines to low level data structures deep inside the DBMS based on machine learning models. Join this session to learn about the future of data management with systems that optimize themselves.

Heterogenous Programming: Distributed Data Structures, Algorithms, and Views in C++
Ben Brock, AI Research Scientist at Intel Labs 
September 20, 2:00 p.m. PDT

Distributed and heterogeneous systems, such as clusters with multiple GPUs per node, have become increasingly common. Today, we program such systems with multiple, often low-level programming models. We need higher level programming models to enable software ecosystems for these systems. It is our goal to help users program these systems with standard data structures and algorithms – just like they use sequential systems today. We do this with distributed data structures, which automatically distribute data over multiple GPUs or nodes, along with distributed algorithms, which compute in parallel. Join this session to learn how we can run C++ programs, with minimal modification, over multiple GPUs, achieving performance comparable to expert-tuned code.

Demonstrations

AI Robotics: Human-AI Towards Programmability by Demonstration

Collaborative robots (Cobots) are evolving into adaptive intelligent assistants for general physical tasks with disruptive ROIs. In this live demonstration, illustrative pick-&-place tasks will be explicitly created by an end-user (tele)operating a robot in VR. The featured programming-by-demonstration process consists of i) human-robot imitation, ii) actions and entities annotation, & iii) arranging modular robot skills. This use-case will be presented step-by-step introducing our advances in motion primitives, enhanced robot vision & VR interfaces controlling, inferring, asserting & acting in real-time via Intel Realsense & Intel XPUs.

AI Security and Trust

This tech experience will cover several topics of security, trust, and privacy in AI. 

While popular large language models (LLMs) such as ChatGPT are trained using publicly available data, the next step is their deployment within organizations, augmented with custom data. But how to guarantee that LLMs respect organizational boundaries and do not reveal information to unauthorized personnel? This interactive demo will present a secret guessing challenge where the goal is to convince the LLM to reveal information it was told not to. Participants will try to complete three levels of increasing difficulty.

Assistive Computing: AI Response Generation and Brain-Computer Interface

The Assistive Context-Aware Toolkit (ACAT) is an open-source software platform developed by Intel Labs that enables people with motor neuron disease to communicate through keyboard simulation, text prediction, and speech synthesis. The recent release of version 2.0 of ACAT features a brain-computer interface to enable fully locked-in users to communicate.  The release also includes enhanced language modeling capabilities fine-tuned for AAC usages to enable personalized sentence completion, as well as efficient typing modes for faster communication. The system is optimized to run on client devices and the user interface has been completely redesigned for ease of use. This demo will showcase all the features of ACAT, including the Brain-Computer Interface and the new text prediction capabilities.

Generative AI: Latent Diffusion Model for 3D (LDM3D)

DepthFusion showcases the power of Intel’s LDM3D diffusion model in generating 360 views from text prompts. Using text prompts provided by the user, the LDM3D diffusion model generates a 2D RGB image and its corresponding depth map, providing a complete RGBD representation of the text prompt. This model was then fine-tuned on a subset of the LAION400M dataset, a large-scale image-caption dataset. The depth maps used to finetune the model were generated by MiDaS 3.1, a depth estimation algorithm that provides highly accurate relative depth estimates for each pixel in an image. Intel’s application then harnesses the power of TouchDesigner – a versatile platform that allows for the creation of immersive and interactive multimedia experiences – to bring the generated 360 views to life, providing users with a unique and engaging way to experience their text prompts.

Immersive Telepresence on Light-Field Displays

This demonstration will highlight a complete end-end system for real-time Light-Field Video using cloud processing supported by client capabilities at the content creation and consumption endpoints. This ‘Light-Field Video’ technology demonstrates a breakthrough in immersive experiences for video on demand, real-time streaming and video conferencing.  The demo will show a complete end-to-end system that includes a consumer-grade light field camera array for capturing a wide range of camera angles simultaneously, algorithms for variable viewpoint video creation, cloud processing, streaming content delivery, and an immersive video viewer.

Neural Object Cloning: AI Driven 3D Content Creation for Gaming and More

3D content creation is an essential component in the development of videogames, immersive experiences, and advanced visualization solutions. Traditionally, this process required a team of artists to invest significant time, ranging from tens to hundreds of hours, in meticulously crafting 3D objects and their corresponding textures to ensure optimal visual quality in game engines and virtual environments. Intel’s Neural Object Cloning technology represents a significant advancement in the field, utilizing the latest generation of generative AI techniques. This innovation streamlines the 3D content creation process, making it as straightforward as recording a video of an object with your phone.

Neuromorphic: Satellite Scheduling and Optimization on Loihi 2

This demo shows how Intel’s neuromorphic technology can ease the computational burden of solving scheduling problems frequently encountered in logistics and operations research. A typical scheduling problem involves assigning a certain number of agents to perform a certain number of tasks, while respecting constraints of a schedule. The demo shows how a key scheduling problem in the space technology industry is tackled: scheduling a large number of Earth observation requests (tasks) to a constellation of satellites (agents). Scheduling problems take time proportional to the number of agents and the square of the number of tasks to solve. For commercial satellite companies with dozens of vehicles and thousands of customer requests, current algorithms are not suited to find the best solution in time.

Quantum: Intel® Quantum SDK and 3D Interactive Hardware Demo

This demo highlights version 1.0 of the Intel Quantum Software Development Kit (SDK), which lets users interface with Intel's Quantum Computing stack. The SDK includes an intuitive user interface based on C++, an LLVM-based compiler toolchain adapted for quantum, and a high-performance Intel Quantum Simulator (IQS) qubit target backend. The demo will also feature a 3D interactive look at Intel’s quantum hardware.