Artificial intelligence (AI) is the capability of machines to learn and emulate human intelligence in order to carry out complex tasks such as logical reasoning, automating repetitive work, and making decisions.
AI has evolved significantly over the years, building from the bottom up through various subfields, including machine learning (ML), deep learning, computer vision, and, more recently, foundation models that combine these areas. Understanding this progression is crucial to appreciating how AI has enabled businesses to blend human work with computer-driven processes.
An Overview of AI Evolution
In the 1980s, AI development was characterized by rules, expert systems, and deterministic programming. These approaches involved feeding knowledge directly into the system, enabling businesses to carry out tasks such as matching invoices, bookkeeping, and generating sales or purchase orders. This era marked the dominance of computer programs and deterministic systems in business. This type of AI, known as symbolic AI, significantly boosted productivity and profitability for business owners. Symbolic AI persists in various forms, including the semantic web, ontologies, and automated planning and scheduling systems.
Fast forward to 2024, and business owners can now conduct complex analytics instantaneously through simple question-and-answer interactions, significantly reducing their reliance on developers or experts. The rapid progress in AI has enhanced the existing symbolic AI ecosystem with statistical machine learning (ML) algorithms and foundation models that leverage natural language processing (NLP). This form of AI, known as generative AI, offers business owners real-time analytics and forecasts, automated business workflows, and a growing range of capabilities. SAP has been at the forefront of this innovation, integrating generative AI into its SAP S/4HANA platform to provide businesses with advanced analytics and automation capabilities.
In the context of SAP, integrating generative AI with the existing functionalities of SAP S/4HANA introduces a new enterprise dimension to AI. This integration allows data to flow seamlessly across multiple business processes within an enterprise.
Now, let’s delve deeper into generative AI…
Understanding Generative AI
As previously mentioned, generative AI leverages foundation models, including large language models (LLMs). LLMs are a type of foundation model designed specifically to work with and generate text, while foundation models more broadly can operate across multiple modalities, producing outputs such as text, images, audio, and video.
Key Functions of Generative AI
Foundation models are machine learning models that derive labels from the data itself: the labelling that provides context during training is naturally embedded in the structure of the data. This approach differs from the traditional learning methods in machine learning (supervised, unsupervised, and reinforcement learning), where any labelling is typically performed manually.
These neural networks are trained on extensive datasets using a self-supervised learning algorithm to perform a variety of tasks. Most of these tasks centre on natural language processing (NLP), although there are exceptions:
- Data classification and keyword extraction
- Text summarization
- Engaging in conversation and responding in a question-and-answer format, akin to human interaction, by searching through vast volumes of data
- Generating content, such as images, video, music, etc.
- Code generation
Foundation models operate on vast amounts of data, represented as tokens, to build a model. The scale of a foundation model is typically gauged by the number of parameters it contains, often reaching into the billions. Leading models go even further; OpenAI’s GPT-4, for instance, is widely reported (though not officially confirmed) to have around 1.76 trillion parameters.
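As a concrete illustration of what “tokens” means in practice, the snippet below is a minimal sketch using the open-source tiktoken library (an implementation detail of OpenAI models, not an SAP component); it shows how a short sentence is split into the integer tokens a model actually consumes:

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by several recent OpenAI models.
encoding = tiktoken.get_encoding("cl100k_base")

text = "Match open invoices against purchase orders."
tokens = encoding.encode(text)

print(tokens)                                   # the integer token IDs
print(len(tokens), "tokens")                    # the unit in which context windows and costs are measured
print([encoding.decode([t]) for t in tokens])   # the text fragment behind each ID
```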
A Paradigm Shift in the AI Landscape
While generative AI offers a wealth of opportunities for business growth, its probabilistic nature can create uncertainty for decision-makers. However, the breakthrough 2017 paper "Attention Is All You Need" introduced the attention-based transformer architecture, which has made the outputs of these probabilistic models considerably more dependable.
At the core of this architecture is the attention mechanism, which assigns weights to different parts of the input data (such as business data) to generate the desired output. The resulting architecture, exemplified by OpenAI’s first Generative Pre-trained Transformer (GPT) with roughly 117 million parameters, demonstrated the ability to perform complex tasks like sentiment classification with minimal guidance. As successors such as GPT-4 scale to trillions of parameters, the generated outputs have become increasingly reliable and accurate.
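To make the idea of weighting concrete, here is a minimal, illustrative sketch of scaled dot-product attention, the core operation introduced in "Attention Is All You Need". It uses NumPy and random toy vectors in place of real, projected token embeddings:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute attention weights over the keys and return the weighted output."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V, weights                                # weighted sum of the values

# Toy example: 3 tokens with embedding dimension 4 (random vectors stand in for real embeddings).
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))

output, weights = scaled_dot_product_attention(Q, K, V)
print(weights)  # each row sums to 1: how strongly each token "attends" to the others
```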
This shift marks several key transitions:
- From Supervised to Self-Supervised Learning: Foundation models leverage self-supervised learning, deriving labels directly from the data they are fed and scaling continuously as that data grows.
- Emerging Capabilities Take Center Stage: Developers are no longer limited by pre-defined capabilities. Generative AI provides a platform to build future-proof applications by seamlessly embedding emerging technologies into existing ones, enabling crucial scalability.
- Multi-Purpose AI Models on the Rise: While traditional machine learning facilitated automation through single-purpose models, the expanding nature of foundation models allows for the development of multi-purpose AI models that can tackle multiple tasks simultaneously.
- Beyond Classification: The Generative Leap: AI’s transition from deterministic to probabilistic culminated in models evolving from data classification to generating multi-modal content. In essence, this represents a significant payoff for the decades of work invested by machine learning engineers.
Generative AI, a Yet-to-Be-Proven Technology
At a broader level, generative AI appears to hold promise in tackling a wide array of complexities, particularly in addressing broad general knowledge issues. However, its credibility is still under scrutiny due to several factors, keeping it in a nascent stage of development.
One such factor is “hallucination,” where generative AI may produce answers that seem plausible and coherent but are ultimately incorrect. Additionally, integrating generative AI into business contexts requires ongoing refinement of the use case models. Given the rapid evolution of businesses, adapting generative AI models to these changes demands careful consideration.
To enhance the functionality of generative AI, it is essential to assess and incorporate additional problem-solving techniques, such as function calling and refined chain-of-thought prompting, to improve the model’s performance. Many of these challenges stem from the stateless nature of large language models (LLMs), which retain no memory between requests.
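As a rough illustration of the function-calling pattern (deliberately generic and not tied to any particular vendor API), the application exposes a catalogue of functions, the model replies with a structured request to call one, and the application executes it and feeds the result back. The function name and data below are hypothetical:

```python
import json

# Hypothetical catalogue of functions the application exposes to the model.
def get_open_invoices(customer_id: str) -> list[dict]:
    # In a real system this would query the ERP; here it returns canned data.
    return [{"invoice": "INV-1001", "amount": 250.0, "currency": "EUR"}]

TOOLS = {"get_open_invoices": get_open_invoices}

# Pretend the LLM answered with a structured call instead of free text.
model_reply = '{"tool": "get_open_invoices", "arguments": {"customer_id": "C-42"}}'

call = json.loads(model_reply)
result = TOOLS[call["tool"]](**call["arguments"])

# The result would then be passed back to the model so it can phrase the final answer.
print(result)
```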
Strategies to Enhance the Reliability of Generative AI
Generative AI is a powerful tool that many companies use to solve complex business problems. As it becomes more popular, companies are investing to ensure these AI models stay up-to-date and work well for their specific needs.
One way to improve generative AI is a process called grounding, which injects additional, domain-specific information and context into the model’s requests, much like giving it extra background knowledge. This helps the model avoid fabricating information and makes its answers more accurate.
Companies can also use different approaches to control how the generative AI works. Here are a few:
- Prompt Engineering: This involves giving the AI clearer instructions about what you want it to do. There are two ways to do this:
- Zero-shot Learning: This lets the AI use reasoning skills to complete a task without needing examples.
- Few-shot Learning: This gives the AI a few worked examples to help it understand the task better. Both methods supply the AI with more information to work with, but zero-shot learning relies purely on the model’s reasoning, while few-shot learning uses concrete examples. Note that supplying more information can increase cost and response time. (A short prompt sketch illustrating both styles follows this list.)
- Retrieval-Augmented Generation (RAG): This lets the AI pull in external information sources, such as the Internet or specialised databases, at answer time. It allows the AI to check its facts and generate more credible, reliable responses. (A minimal retrieval sketch also follows this list.)
- Orchestration Tools: These tools give the AI access to other tools and programs. This allows the AI to perform calculations, interact with applications, and use other functionalities to complete tasks more effectively.
- Fine-tuning: This involves taking an already trained AI model and giving it additional training on a specific task. This helps the model become an expert in that particular area.
- Reinforcement Learning from Human Feedback (RLHF): This method uses human ratings of the AI’s responses to steer the model toward answers people actually prefer. The feedback helps the AI learn over time and better understand what users want.
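As referenced above, here is a minimal sketch of the difference between a zero-shot and a few-shot prompt; the email-classification task, labels, and wording are hypothetical examples rather than SAP functionality:

```python
# Zero-shot: the instruction alone, relying on the model's general reasoning.
zero_shot_prompt = (
    "Classify the following support email as 'billing', 'technical', or 'other'.\n"
    "Email: My invoice for March shows the wrong VAT rate."
)

# Few-shot: the same instruction preceded by a handful of worked examples.
few_shot_prompt = (
    "Classify each support email as 'billing', 'technical', or 'other'.\n"
    "Email: The login page times out every morning. -> technical\n"
    "Email: Please resend last month's receipt. -> billing\n"
    "Email: My invoice for March shows the wrong VAT rate. ->"
)

# Either string would be sent to an LLM endpoint of your choice; the few-shot
# version usually yields more consistent labels at the cost of a longer prompt.
print(zero_shot_prompt)
print(few_shot_prompt)
```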
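And here is a deliberately simplified sketch of the retrieval-augmented generation idea: relevant documents are looked up first and prepended to the prompt so the model answers from them rather than from memory alone. The keyword-overlap "retriever" and the policy snippets are toy stand-ins for a real vector search and document store:

```python
# A toy knowledge base standing in for an enterprise document store.
DOCUMENTS = [
    "Purchase orders above 10,000 EUR require two approvals.",
    "Travel expenses must be submitted within 30 days of the trip.",
    "Invoices are matched automatically against open purchase orders.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: -len(terms & set(d.lower().split())))
    return ranked[:k]

question = "How many approvals does a large purchase order need?"
context = "\n".join(retrieve(question))

# The retrieved context is injected into the prompt before it reaches the LLM.
augmented_prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(augmented_prompt)
```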
How Does SAP Ensure Generative AI Is Enterprise-Ready?
The approaches outlined above make generative AI accessible, but not invariably reliable. For generative AI applications to attain enterprise readiness, checkpoints must be established around the framework. SAP has adopted an organic, long-term strategy for crafting responsible AI solutions, implementing specific processes and governance structures to enhance reliability.
With AI ethics as a cornerstone, SAP’s AI tools incorporate checkpoints at the foundational level:
- Human involvement during the design phase ensures collaboration between humans and AI, guaranteeing that deployed models align with business objectives and are production-worthy.
- Developers gain the ability to validate output and perform cross-checks to verify accuracy.
- A dedicated team of testers conducts rigorous testing to identify deviations from the intended purpose before the model is deployed. This process, known as red teaming, allows for necessary adjustments and fine-tuning.
- Given the dynamic nature of businesses and customer behaviours, continuous feedback mechanisms and monitoring enable deployed models to adapt to evolving patterns. To ensure optimal performance and outputs, model modifications occur periodically, typically every two, three, or six months.
SAP Business Technology Platform
The SAP Business Technology Platform (SAP BTP) provides a comprehensive, multitenant cloud offering for SAP customers. SAP BTP supports numerous applications across a range of use cases; a prime example is instant email insight generation, a scenario that aims to elevate customer support services through automation and advanced email insights.
Thanks to SAP BTP’s capabilities in sentiment analysis and agent assessment, incoming emails are seamlessly categorized into different segments within the system. These tasks are accomplished using large language models (LLMs) and LangChain, a framework tailored for developing applications powered by LLMs.
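As a rough sketch of how such a classification step might be wired up with LangChain and an OpenAI-hosted LLM (an illustrative assumption, not SAP’s actual implementation; the model name and prompt are placeholders):

```python
# pip install langchain-openai langchain-core
# Requires an OPENAI_API_KEY environment variable.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Placeholder model; any LLM supported by LangChain could be swapped in.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_template(
    "Classify the sentiment of this customer email as positive, neutral, or negative, "
    "and name the most likely support category.\n\nEmail:\n{email}"
)

# Compose prompt -> model -> plain-text output into a single chain.
chain = prompt | llm | StrOutputParser()

result = chain.invoke({"email": "The delivery was late again and nobody answered my calls."})
print(result)  # e.g. "negative / delivery issue" (exact wording depends on the model)
```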
SAP Build Apps
Introduced in 2022, SAP Build Apps serves as the foundational layer atop SAP BTP, facilitating the integration of AI capabilities into SAP’s products. Targeted towards AI model developers, this application prioritizes security and governance. Engineered with enterprise-grade standards from the ground up, SAP Build Apps provides developers and users with the expected array of readily available checkpoints characteristic of SAP offerings.
SAP Build Apps enables developers to access all foundation models, whether hosted locally or remotely and whether off-the-shelf or fine-tuned, streamlining the development process and enhancing flexibility.
SAP HANA Cloud
To optimize model performance, effective grounding is crucial. SAP HANA Cloud uses business data and context, via its vector engine and data management capabilities, to let developers ground models in enterprise data. This ensures that models are aligned with the appropriate business context, enhancing their functionality.
Additionally, developers now have access to SAP HANA vector store capabilities.
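To illustrate the principle behind a vector engine without relying on SAP-specific APIs, the sketch below embeds documents and a query as vectors and matches them by cosine similarity; the bag-of-words "embedding" and the sample policies are toy stand-ins for a real embedding model and the SAP HANA Cloud vector store:

```python
import math

def embed(text: str, vocab: list[str]) -> list[float]:
    """Stand-in for a real embedding model: a normalized bag-of-words vector."""
    words = text.lower().split()
    vec = [float(words.count(term)) for term in vocab]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

documents = {
    "payment terms": "Standard payment terms for suppliers are 30 days net",
    "shipping policy": "Orders above 500 EUR ship free of charge",
}

# Build a shared vocabulary so every vector has the same dimensions.
vocab = sorted({w for text in documents.values() for w in text.lower().split()})

index = {name: embed(text, vocab) for name, text in documents.items()}
query_vec = embed("When are suppliers paid under the standard terms", vocab)

best = max(index, key=lambda name: cosine(query_vec, index[name]))
print(best, "->", documents[best])  # the nearest document by cosine similarity
```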
Generative AI Hub
This tool offers immediate access to partner-built foundation models, including models available through Microsoft Azure OpenAI Service and open models such as Falcon 40B. SAP intends to incorporate further foundation models, such as those from Aleph Alpha and Meta’s Llama 2. Developers can experiment with multiple large language models (LLMs) to identify the ones best suited to their mission-critical processes, with complete control and transparency.
The three pillars of the Generative AI Hub underscore its pivotal role in integrating Generative AI into business operations:
- A purpose-built repository of readily available tools for developing models of any scale.
- Expedited outcomes by granting access to top-rated foundation models from various providers.
- Ensuring complete trust and control while enhancing mission-critical business processes.
Joule
Joule serves as SAP’s copilot, designed to comprehend and interpret enterprise business data for end users. By integrating Joule across SAP’s suite of solutions, including cloud ERP, human capital management, and spend management, additional AI capabilities are incorporated into the solutions. These capabilities encompass automation, natural user experience, and the provision of additional insights, optimizations, and predictions.
Maximizing Business Value with AI in SAP
As a reputable and dependable enterprise application provider, SAP integrates AI into its business solutions and processes, yielding several outcomes. By employing AI and machine learning (ML) algorithms, enterprises can redirect their attention towards business innovation, with repetitive tasks increasingly automated. The resulting natural user experience blurs the boundaries between human and machine interaction. Furthermore, combining human cognition with AI capabilities accelerates insight generation, optimizes systems, and delivers highly accurate predictions. This synergy proves especially advantageous when such models address supply chain challenges.
Conclusion
SAP’s integration of generative AI represents a significant leap forward in enterprise applications. By leveraging AI and machine learning, SAP empowers businesses to innovate, automate, and optimize decision-making processes. This evolution underscores SAP’s commitment to staying ahead of technological advancements and equipping firms for the future.
As businesses navigate tomorrow’s complexities, embracing the transformative potential of generative AI is crucial. With SAP’s forward-thinking approach and integration of AI tools, organizations can thrive in an AI-driven world. Partnering with VE3, businesses can access specialized expertise to maximize the benefits of generative AI in SAP. Let VE3 be your trusted ally in unlocking the full potential of AI-driven solutions. Contact us for more!