
Microsoft introduces new safety tools for generative AI

The cloud giant introduced tools that cite sources in Copilot, curb hallucination and protect privacy. They also show how the vendor has learned from previous challenges.

Microsoft is out with new safety, privacy and security capabilities to help enterprises better use generative AI applications.

The tech giant on Sept. 24 introduced new capabilities such as Evaluations in Azure AI Studio, Correction, Embedded Content Safety and confidential inferencing.

Evaluations in Azure AI Studio helps support security risk assessments for generative AI applications, the cloud vendor said.
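As an illustration, here is a minimal sketch of what an automated evaluation run might look like with Microsoft's azure-ai-evaluation Python package. The evaluator name and signature shown here are assumptions drawn from the package's preview documentation, not details confirmed in Microsoft's announcement, and the endpoint, key and deployment values are placeholders.

from azure.ai.evaluation import GroundednessEvaluator  # assumption: pip install azure-ai-evaluation

# Placeholder configuration; the evaluator uses a judge model to score outputs.
model_config = {
    "azure_endpoint": "https://YOUR-RESOURCE.openai.azure.com",
    "api_key": "YOUR-KEY",
    "azure_deployment": "gpt-4o",
}

# Score how well a generated response is grounded in the supplied context.
groundedness = GroundednessEvaluator(model_config)
result = groundedness(
    query="What is the return window?",
    context="Items may be returned within 30 days of purchase.",
    response="You can return items within 30 days.",
)
print(result)  # expected: a dict containing a groundedness score

In practice, a risk assessment would run evaluators like this over a test dataset of prompts rather than a single example.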

The Correction capability in Microsoft Azure AI Content Safety's Groundedness detection feature helps fix hallucinations before users can see them.
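Groundedness detection is exposed through the Azure AI Content Safety REST API, so a correction-enabled call might look roughly like the following sketch in Python. The API version, path and correction flag here are assumptions based on the feature's preview shape and may differ from the shipped API.

import requests

endpoint = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder resource
url = f"{endpoint}/contentsafety/text:detectGroundedness"
headers = {
    "Ocp-Apim-Subscription-Key": "YOUR-KEY",
    "Content-Type": "application/json",
}
body = {
    "domain": "Generic",
    "task": "Summarization",
    "text": "The report says revenue grew 40% last year.",  # model output to check
    "groundingSources": ["Revenue grew 12% last year."],    # trusted source material
    "correction": True,  # assumption: asks the service to rewrite ungrounded claims
}

resp = requests.post(url, params={"api-version": "2024-09-15-preview"}, headers=headers, json=body)
print(resp.json())  # expected to flag the ungrounded figure and return corrected text

The idea is that the application shows the corrected text to the user instead of the original hallucinated claim.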

Embedded Content Safety lets customers run Azure AI Content Safety directly on devices.

Confidential inferencing in the Azure OpenAI Service Whisper speech-to-text model enables customers to develop generative AI applications that support end-to-end privacy.
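Because the confidentiality guarantee is enforced on the service side -- Microsoft has described confidential inferencing as running inside trusted execution environments -- the client code for calling Whisper is an ordinary Azure OpenAI request. Here is a minimal sketch with the openai Python SDK, where the endpoint, key, API version and deployment name are placeholders:

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-KEY",
    api_version="2024-06-01",
)

# Transcribe an audio file with a Whisper deployment.
with open("meeting.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper",  # the name of your Whisper deployment
        file=audio_file,
    )

print(transcript.text)

The privacy properties come from where and how that request is processed, not from anything the application developer changes.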

GenAI challenges

Microsoft's new AI safety features are the latest in a wave of attempts by tech vendors to overcome some of the hurdles enterprises face when using generative AI tools and applications.

While the market is moving from idea conception to implementation, hallucinations -- in which AI models output falsehoods -- still afflict many generative AI systems.

"There are a lot of these challenges that organizations need to overcome to take generative AI to that next level in their implementations," said Gartner analyst Jason Wong. "It's not going away anytime soon. You can only try to mitigate it using various techniques."

Without addressing some of these hurdles, generative AI may not reach the heights it's expected to, RPA2AI analyst Kashyap Kompella said.

"As customers gain a better understanding of AI technologies, it becomes clear to them that they need to get a better handle on these challenges," he said. "If not, generative AI may wither as experimental projects instead of gaining traction and achieving scale in the enterprise."

Microsoft is not the only vendor addressing these generative AI privacy and security problems.

For example, over the summer, AWS introduced Guardrails for Amazon Bedrock, which the company said can filter out more than half of hallucinated responses and block more than 85 percent of harmful content.
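As a point of comparison, attaching a guardrail to a Bedrock model call is a small change to the request. A minimal sketch with boto3 follows, where the guardrail ID, version and model ID are placeholders for resources an administrator would create beforehand.

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model choice
    messages=[{"role": "user", "content": [{"text": "Summarize our refund policy."}]}],
    guardrailConfig={
        "guardrailIdentifier": "YOUR-GUARDRAIL-ID",  # placeholder for a preconfigured guardrail
        "guardrailVersion": "1",
    },
)

print(response["output"]["message"]["content"][0]["text"])

Responses that trip the guardrail's content or grounding filters are blocked or masked before they reach the caller.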

Also, in May, Google introduced new safeguards such as AI-assisted Red Teaming and expert feedback.

Despite vendors seeking to address difficulties with AI safety, privacy and security, enterprises still need help implementing these tools, Wong said.

"There's a lot that varies from vendor to vendor, and also the skill sets needed in organizations to understand how to implement and check the performance of what they built against different models," he said.

Learning from the past

For Microsoft, some of the new capabilities address past challenges. For example, new capabilities in Microsoft 365 Copilot -- expected to be released next month -- will include transparency into web queries, letting users see how Copilot draws on web content in its responses. The feature will show the exact web search queries derived from a user's prompt in the linked citation.

A capability like this shows how the cloud provider has learned from experiences such as the lawsuit brought by The New York Times, said Ricardo Baeza-Yates, director of research at the Institute for Experiential AI at Northeastern University.

Last year, the Times sued Microsoft and its partner OpenAI for copyright infringement.

"It's a good first step," Baeza-Yates said. "At least they're giving the right attribution to the content that they are outputting, which I think is the first step to solve the copyright valuation problem when they [allegedly] scraped content that was copyrighted."

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.

 
