<p><span style="font-size:1.4em;">Generative AI and LLMs</span></p>
Developments over the last two years, especially the rapid advancements in the field of Artificial Intelligence (AI), hold significant promise for a dynamic technology ecosystem. With the advent of Generative AI (Gen AI), Foundation Models, and Large Language Models (LLMs), we are witnessing the emergence of new avenues where voluminous data streams, such as those available from Software Defined Vehicles (SDVs), can be leveraged for maximum impact in the shortest possible time.
At L&T Technology Services (LTTS), initial studies and pilots have led us to believe that there are at least five key areas where the application of these new techniques can result in transformative outcomes within a short span of time.
We feel that currently there is very little repeatability and reuse of code within teams and departments, even when the functions these teams write are the same. Code duplication is not the only wasted effort; the associated time invested in quality, testing, and security also squanders valuable resources.
There is huge potential in this area for leveraging Gen AI and LLMs.
How can one reduce effort by reusing proven, tested, and approved code? With companion tools like Copilot from Microsoft, Duet AI from Google, and CodeWhisperer from AWS, it is now possible to do so. These tools assist developers in software development and enable code reuse.
In an initial experiment with a customer, we observed that structured and monitored code reuse resulted in an approximately 35-40 percent reduction in effort and time. This translates into considerable cost savings, streamlined operations, and a faster go-to-market.
The need here is to ensure that guidelines and processes are in place for developers to leverage these tools, including monitored access to code repositories, policies on the use of open-source artefacts, and real-time interventions.
The software development life cycle (SDLC) is a promising area where Gen AI and companion tools can be leveraged, given their ease of use and the value unlocked across effort, quality, and time.
Test Automation has been a priority for everyone with a keen interest in reducing costs, minimizing effort, and shortening cycle times. The emergence of LLMs like LRM and others is opening up possibilities for transforming the entire spectrum of automation.
LLMs can be leveraged to create test cases directly from requirement documents in any language via a simple two-step process. First, the requirement document is interpreted directly to build a list of test scenarios. Then, each scenario is expanded into test steps detailed in specific formats. This can even be extended to automated test execution on the target device.
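The two-step process can be sketched as a small pipeline. The `complete` function below is a stand-in for any real LLM completion call (OpenAI, a local Llama deployment, etc.) and returns canned JSON so the flow is runnable end to end; the scenario and step contents are purely illustrative.

```python
# Sketch of the two-step pipeline: requirement -> test scenarios -> test steps.
# `complete` stands in for a real LLM client; it returns canned JSON here.

import json

def complete(prompt: str) -> str:
    # Stand-in for an actual LLM call; canned responses for illustration only.
    if "List the test scenarios" in prompt:
        return json.dumps(["Wipers activate on rain detection",
                           "Wipers stop when rain clears"])
    return json.dumps([
        {"step": 1, "action": "Simulate rain sensor signal", "expected": "Wipers on"},
        {"step": 2, "action": "Clear rain sensor signal", "expected": "Wipers off"},
    ])

def scenarios_from_requirement(requirement: str) -> list[str]:
    # Step 1: interpret the requirement document into a list of scenarios.
    prompt = f"List the test scenarios covering this requirement as JSON:\n{requirement}"
    return json.loads(complete(prompt))

def steps_for_scenario(scenario: str) -> list[dict]:
    # Step 2: expand each scenario into test steps in a specific format.
    prompt = f"Write numbered test steps as JSON for the scenario:\n{scenario}"
    return json.loads(complete(prompt))

requirement = "The vehicle shall activate wipers automatically when rain is detected."
test_cases = {s: steps_for_scenario(s) for s in scenarios_from_requirement(requirement)}
print(json.dumps(test_cases, indent=2))
```

In a real deployment the same structure holds; only `complete` changes, and the prompts would carry the OEM's own formatting guidelines.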
At LTTS, we have undertaken three successful pilots in this area with very promising results. Initial indications point to a marked reduction in effort across the whole process, in the range of 30-45%. This also opens up new possibilities: over time, LLMs trained on OEM-specific data could drive higher accuracy, deliver consistent quality, and, more importantly, increase test coverage and drive greater automation from the initial phase.
With the emergence of foundation models like Florence for object detection in AD/ADAS use cases, we can start looking at how to improve the quality of images captured during poor lighting conditions. While damaged sensors, poor image quality, bad weather, poor lighting conditions, and corner cases continue to be a challenge, vision foundation models like Florence, with expertise in object detection, can be fine-tuned to drive significantly improved outcomes. This scenario is especially relevant for the growing fleet of SDVs worldwide.
For better results, the model will have to be developed, trained, and fine-tuned on voluminous OEM data streams. Despite the need for training, we feel that it is a promising approach to address existing challenges and open future avenues for improvement. We are exploring the modalities of a pilot using Florence for object detection.
As we move toward inferencing and processing at the edge (involving a controller or a camera for SDVs), it is important to look at potential avenues for model optimization. This includes targeting the low compute and memory footprints that edge devices demand. New techniques in quantization and pruning can prove to be a major differentiator in this regard.
Leading global SoC vendors are now delivering more advanced toolkits that enable deeper model optimization pathways. At LTTS, we are investing in an embedded AI framework that enables a two-stage model optimization technique. The first stage involves running the model through our framework to obtain a platform-agnostic model. In stage two, we can undertake further optimization with the tools provided by the target device vendor. We have already recorded a 70% reduction in compute requirements for the optimized model.
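To make the quantization idea concrete, here is a minimal, dependency-free sketch of symmetric post-training int8 quantization of a weight vector. Production toolchains (framework quantizers and vendor SDKs) automate this per layer with calibration data; the core arithmetic is the same, and the example weights are invented for illustration.

```python
# Minimal sketch of symmetric post-training int8 quantization.
# Stored as int8, the weights take 4x less memory than float32, and
# integer arithmetic reduces compute cost on edge targets.

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    # Map float weights into [-127, 127] using a single symmetric scale.
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    # Recover approximate float weights; error is bounded by scale / 2.
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.004, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

The trade-off discussed below is visible even here: the smallest weight (0.004) collapses to zero, which is exactly the kind of accuracy loss that must be weighed against the footprint gains.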
As OEMs continue to build models for enabling different functions and features in the SDV ecosystem, one can easily envisage a large catalogue of use cases to be leveraged across various scenarios. This underscores the need to focus on model optimizations as an inherent requirement for effective model development.
A recurring challenge is finding the acceptable balance between optimization and accuracy. A highly optimized model sometimes shows a small drop in accuracy; we feel this trade-off will always have to be decided based on the criticality of the feature in enabling the overall capabilities of the SDV offering.
LLMs like Llama, GPT, T5, etc., are NLP (Natural Language Processing) based models and can address different text-based requirements. These LLMs, when trained adequately, can be leveraged across various use cases, such as enabling interactive question-and-answer sessions between the user and the user manual on any topic covered in the manual.
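One common way to build such manual Q&A is retrieval plus prompting: find the relevant manual section, then ask the LLM to answer from it. The sketch below uses naive keyword overlap for retrieval (a production system would use embeddings) and stops at prompt construction; the manual excerpts are invented examples.

```python
# Sketch of user-manual Q&A: retrieve the relevant section, then build the
# prompt an LLM (Llama, GPT, T5, etc.) would answer from. Retrieval here is
# naive keyword overlap; the manual content is illustrative only.

manual = {
    "Tire pressure": "Recommended cold tire pressure is 33 psi front and rear.",
    "Wiper blades": "Replace wiper blades every 12 months or when streaking appears.",
}

def retrieve(question: str) -> str:
    # Pick the manual section sharing the most words with the question.
    q_words = set(question.lower().split())
    def score(text: str) -> int:
        return len(q_words & set(text.lower().split()))
    return max(manual.values(), key=score)

def build_prompt(question: str) -> str:
    # Constrain the model to the retrieved excerpt to limit hallucination.
    context = retrieve(question)
    return f"Answer using only this manual excerpt:\n{context}\nQuestion: {question}"

prompt = build_prompt("What tire pressure should I use?")
print(prompt)
```

Grounding the answer in a retrieved excerpt, rather than the model's general knowledge, is what keeps responses consistent with the OEM's actual documentation.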
The LLMs can also be trained to address complex use cases like AUTOSAR configuration, where they can extract unstructured values from specification documents, combine them with tabular data, and generate the AUTOSAR configuration in JSON format.
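The extract-combine-generate flow can be sketched end to end. A regex stands in for the LLM extraction step so the example runs deterministically, and the parameter names (`BaudRateKbps`, the signal table) are hypothetical illustrations, not real AUTOSAR identifiers.

```python
# Sketch of the AUTOSAR-configuration flow: extract values from unstructured
# spec text, merge with tabular data, emit JSON. The regex stands in for the
# LLM extraction step; names and values are illustrative, not real AUTOSAR.

import json
import re

spec_text = ("The CAN controller shall operate at a baud rate of "
             "500 kbps with 8 data bytes per frame.")

# Step 1: extract an unstructured value (an LLM prompt in the real pipeline).
baud = int(re.search(r"(\d+)\s*kbps", spec_text).group(1))

# Step 2: combine with tabular data, e.g. rows from a signal parameter sheet.
table_rows = [{"signal": "EngineSpeed", "length_bits": 16},
              {"signal": "VehicleSpeed", "length_bits": 16}]

# Step 3: generate the configuration in JSON format.
config = {
    "CanController": {"BaudRateKbps": baud},
    "Signals": table_rows,
}
print(json.dumps(config, indent=2))
```

The value of the LLM in this flow is the extraction step: unlike the regex, it can pull parameters out of free-form specification prose without hand-written patterns per parameter.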
Our experience further indicates that the true impact of AI is evident when it is allowed to cut through the layers of computing. LTTS feels that for effective leverage, we need to operate across all layers of computing, as illustrated in Figure 1: