The use of AI (Artificial Intelligence) and ML (Machine Learning) in enterprises continues to expand. However, it comes with many challenges – from developing and deploying models to managing them in production. Any organisation must view an AI initiative as a cross-departmental team effort.
Andreas Bergqvist, AI specialist at Red Hat, discusses how an open hybrid cloud platform can be the foundation for building and operating an AI environment and integrating all stakeholders in the process.
As generative AI continues to evolve, more companies are focusing on where the technology can be made useful. After all, AI and ML technologies promise benefits such as faster processes, higher-quality products and services and reduced employee workload. However, the successful implementation of an AI strategy requires a deliberate, well-structured process.
From developing a viable AI strategy to monitoring and managing working models, measuring performance and responding to potential data deviations in production – these varied tasks often call for cooperation from different departments and stakeholders within an organisation.
In a typical AI project, the line of business sets the goals; data engineers and data scientists find and prepare the data; ML engineers develop the models; and those models serve the applications that developers build – all in an environment run by IT operations. The question, then, is what the ideal technological foundation for these heterogeneous tasks and challenges looks like – a common foundation for all parties involved in the process.
This is where open Kubernetes-based hybrid cloud platforms are increasingly coming into focus for companies, as they offer a consistent infrastructure for AI model development, AI model training, and AI model embedding in applications.
To reliably organise the path from experiment to productive operation for all parties involved in the process – and to enable them to work together consistently – the platform should include several key features:
● Model development with an interactive, collaborative user interface for data science and model training, optimisation, and deployment
● Model serving with built-in routing for deploying models to production environments
● Model monitoring with centralised monitoring to verify model performance and accuracy
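The three stages above – development, serving, monitoring – can be sketched in miniature. This is a stdlib-only illustration of the workflow, not any specific platform's API; the `serve` and `monitor` names are hypothetical, and a real platform would replace them with a model server endpoint and a centralised monitoring service.

```python
import random
import statistics

random.seed(0)

# 1. Model development: "train" a simple threshold classifier
#    on labelled data (two well-separated synthetic classes).
train = [(random.gauss(0, 1), 0) for _ in range(100)] + \
        [(random.gauss(3, 1), 1) for _ in range(100)]
threshold = statistics.mean(x for x, _ in train)  # midpoint between classes

# 2. Model serving: expose a prediction entry point.
def serve(x):
    return 1 if x > threshold else 0

# 3. Model monitoring: verify live accuracy against a minimum bar.
def monitor(samples, minimum=0.8):
    acc = sum(serve(x) == y for x, y in samples) / len(samples)
    return acc, acc >= minimum

test = [(random.gauss(0, 1), 0) for _ in range(50)] + \
       [(random.gauss(3, 1), 1) for _ in range(50)]
accuracy, healthy = monitor(test)
print(f"accuracy={accuracy:.2f} healthy={healthy}")
```

The point of the sketch is the separation of concerns: training produces an artefact (here, `threshold`), serving wraps it behind a stable interface, and monitoring checks that interface against fresh labelled data so degradation is caught before it affects applications.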
This platform approach offers many benefits, including:
● Flexibility: the hybrid cloud model enables a high degree of flexibility to deploy containerised models and intelligent applications on-premises, in the cloud or at the edge.
● Easy management and configuration with high scalability: IT operations can provide a central infrastructure for data engineers and data scientists, relieving them of the burden of maintaining and managing the environment.
● Collaboration: a common platform brings data, IT and development teams together. It also eliminates process disruptions between developers, data engineers, data scientists and DevOps teams, and provides built-in handover support between ML teams and app developers.
● Open source innovation: Organisations access upstream innovation through open source-based AI/ML tools.
An open hybrid cloud platform provides a cross-functional team foundation for AI initiatives. This infrastructure supports the development, training, deployment, monitoring and lifecycle management of AI/ML models and applications – from experimentation and proof-of-concept to production. It adds AI to your organisation’s existing DevOps structure in a complementary, integrated way, rather than as a disparate solution you must integrate yourself.