Vanakkam all
NTT (Nippon Telegraph and Telephone Corporation) is a Japanese multinational information technology (IT) and communications company headquartered in Tokyo, Japan. AI is all about analyzing data and producing outputs, but how quickly can that analysis happen? Let's take a look at how Red Hat is helping NTT perform large-scale AI data analysis in real time.
MWC Barcelona, February 26, 2024 – As part of the Innovative Optical and Wireless Network (IOWN) initiative, NTT Corporation (NTT) and Red Hat, Inc., in collaboration with NVIDIA and Fujitsu, have jointly developed a solution to enhance and extend the potential for real-time artificial intelligence (AI) data analysis at the edge.
As the volume of data generated by sensors and devices grows, processing it efficiently becomes crucial. Performing AI analysis at the network's edge, where the data is generated, allows input to be assessed in real time. Analyzing large amounts of data with AI can be slow because of its computational demands, and updating AI workloads often means integrating additional hardware at additional cost. With edge computing capabilities emerging in more remote locations, AI analysis can be placed closer to the sensors, reducing latency and conserving network bandwidth.
Hardware Accelerators
Unlike general-purpose CPUs, hardware accelerators are specialized hardware components designed to perform specific, compute-intensive tasks at high speed, for example AI, machine learning (ML), deep learning, and data analytics workloads.
Graphics Processing Units (GPUs) are highly efficient at parallel processing, making them well suited for AI/ML training and data analysis. Data Processing Units (DPUs) offload and accelerate networking, storage, and security tasks.
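On a Kubernetes cluster, these accelerators become visible to the scheduler once a vendor device plugin advertises them as extended resources on the node. The trimmed node object below is only a sketch with illustrative values; the nvidia.com/gpu resource name is what the NVIDIA device plugin exposes, and the label shown is one commonly applied by GPU Feature Discovery.

```yaml
# Sketch of a GPU node as the cluster sees it (illustrative values only).
# The device plugin advertises GPUs as the extended resource "nvidia.com/gpu";
# other accelerator vendors expose their own resource names.
apiVersion: v1
kind: Node
metadata:
  name: edge-worker-1                # hypothetical node name
  labels:
    nvidia.com/gpu.present: "true"   # label typically set by GPU Feature Discovery
status:
  capacity:
    cpu: "64"
    memory: 256Gi
    nvidia.com/gpu: "4"              # four GPUs advertised as schedulable resources
  allocatable:
    cpu: "63"
    memory: 250Gi
    nvidia.com/gpu: "4"
```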
Red Hat OpenShift – Hardware Accelerators:
Red Hat OpenShift is an enterprise Kubernetes platform for deploying, running, and managing containers across different environments.
- OpenShift facilitates the integration of hardware accelerators into your Kubernetes clusters.
- OpenShift provides mechanisms to schedule workloads on nodes equipped with these accelerators (GPUs, DPUs, etc.), ensuring that your AI/ML applications can access the specialized computing resources they need, for example via a nodeSelector and resource requests (see the pod sketch after this list)
- OpenShift tracks accelerators as schedulable resources so they can be allocated efficiently across workloads rather than sitting idle
- OpenShift simplifies the deployment and lifecycle management of accelerator drivers and the applications that need them through Operators (see the Operator subscription sketch after this list)
- OpenShift abstracts the underlying infrastructure details, allowing developers to focus on building and scaling their applications without worrying about the specifics of the hardware
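To illustrate the scheduling point above, here is a minimal pod sketch that requests one GPU and uses a nodeSelector to land on GPU-labelled nodes. The pod name, namespace, image, and label are hypothetical; the nvidia.com/gpu resource name is what the NVIDIA device plugin exposes, and other accelerators advertise their own resource names.

```yaml
# Hypothetical AI inference pod: the nodeSelector steers it to a GPU node,
# and the resource limit makes the scheduler reserve one GPU for it.
apiVersion: v1
kind: Pod
metadata:
  name: edge-inference               # hypothetical name
  namespace: ai-workloads            # hypothetical namespace
spec:
  nodeSelector:
    nvidia.com/gpu.present: "true"   # match the label shown on the node sketch earlier
  containers:
  - name: inference
    image: quay.io/example/edge-inference:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1            # request exactly one GPU from the device plugin
```

Applying a manifest like this (for example with oc apply -f) lets the scheduler place the workload only on nodes that actually expose the requested accelerator, which is how OpenShift keeps AI/ML pods off nodes without the right hardware.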
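For the Operator point, accelerator drivers and device plugins on OpenShift are typically installed through Operator Lifecycle Manager (OLM). The Subscription below sketches how the certified NVIDIA GPU Operator could be subscribed to from the cluster catalog; the channel, source, and namespace values vary by OpenShift release, so treat them as assumptions rather than exact values.

```yaml
# Sketch of an OLM Subscription for the NVIDIA GPU Operator.
# Channel, source, and namespace depend on the cluster and release.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: gpu-operator-certified
  namespace: nvidia-gpu-operator      # assumed target namespace
spec:
  channel: stable                     # assumed channel name
  name: gpu-operator-certified
  source: certified-operators
  sourceNamespace: openshift-marketplace
```

Once installed, an Operator like this takes care of deploying the driver, device plugin, and monitoring components on the GPU nodes, which is what makes the extended resource shown earlier appear on them.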
Summary:
Real-time, large-scale AI data analysis powered by Red Hat OpenShift uses Kubernetes Operators to minimize the complexity of integrating hardware accelerators (GPUs, DPUs, etc.), enabling greater flexibility and easier deployment across disaggregated sites, including remote data centers.
As Chris Wright, chief technology officer at Red Hat, put it: “With Red Hat OpenShift, we can help NTT provide large-scale AI data analysis in real time and without limitations.”