Eric Boyd, Corporate Vice President, AI Platform at Microsoft, wrote on his blog that since ChatGPT was introduced late last year, we’ve seen a variety of scenarios it can be used for, such as summarizing content, generating suggested email copy, and even helping with software programming questions. Now, with ChatGPT in preview in Azure OpenAI Service, developers can integrate custom AI-powered experiences directly into their own applications: enhancing existing bots to handle unexpected questions, recapping call center conversations to enable faster customer support resolutions, creating new ad copy with personalized offers, automating claims processing, and more. Azure Cognitive Services can be combined with Azure OpenAI to create compelling use cases for enterprises. For example, Azure OpenAI and Azure Cognitive Search can be combined to use conversational language for knowledge base retrieval on enterprise data.
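As a minimal sketch of what such an integration looks like, the snippet below builds the URL and JSON body for a call to the Azure OpenAI chat completions REST endpoint. The resource name, deployment name, and API version shown are placeholders, not real values; a production application would send this request with its own credentials.

```python
def build_chat_request(resource: str, deployment: str, api_version: str,
                       messages: list) -> tuple:
    """Build the URL and JSON body for an Azure OpenAI chat completions call.

    `resource`, `deployment`, and `api_version` are placeholders for your
    own Azure OpenAI resource, model deployment, and API version.
    """
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    body = {"messages": messages, "max_tokens": 256, "temperature": 0.7}
    return url, body

# Hypothetical usage: a bot that answers questions over enterprise data.
url, body = build_chat_request(
    "my-resource", "my-gpt-deployment", "2023-03-15-preview",
    [{"role": "system", "content": "You answer questions about our knowledge base."},
     {"role": "user", "content": "Summarize yesterday's support calls."}])
```

The request body carries the conversation as a list of role-tagged messages, which is what lets the same endpoint serve chat bots, summarization, and drafting scenarios alike.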
In addition to all the ways organizations—large and small—are using Azure OpenAI Service to achieve business value, we’ve also been working internally at Microsoft to blend the power of large language models from OpenAI and the AI-optimized infrastructure of Azure to introduce new experiences across our consumer and enterprise products.
A responsible approach to AI
We’re already seeing the impact AI can have on people and companies, helping improve productivity, amplify creativity, and augment everyday tasks. We’re committed to making sure AI systems are developed responsibly, that they work as intended, and that they are used in ways people can trust. Generative models, such as ChatGPT or the DALL·E image-generation model, create new artifacts, and with them new challenges: they could be used, for instance, to produce convincing but incorrect text or realistic images of scenes that never happened.
Microsoft employs a layered set of mitigations at four levels, aligned with Microsoft's Responsible AI Standard, to address these challenges. First, application-level protections that put the customer in charge, for instance explaining that text output was generated by AI and requiring the user to approve it. Second, technical protections such as input and output content filtering. Third, process and policy protections, ranging from systems for reporting abuse to service-level agreements. And fourth, documentation, such as design guidelines and transparency notes, that explains the benefits of a model and what has been tested.
We believe AI will profoundly change how we work and how organizations operate in the coming months. To meet this moment, we will continue to take a principled approach, ensuring our AI systems are used responsibly while listening, learning, and improving to help guide AI in a way that ultimately benefits humanity.