Microsoft’s Post


22,882,474 followers

When an AI model "hallucinates," it deviates from the data it was given by altering that data or adding information not contained in it. While there are times this can be beneficial, Microsoft researchers have been putting large language models to the test to reduce instances where information is fabricated. Find out how we're creating solutions to measure, detect, and mitigate the phenomenon as part of our efforts to develop AI in a safe, trustworthy, and ethical way. https://msft.it/6049Y0m41

Saman Akbarian

BI Developer/Data Engineer at Sogeti

2w

It's a feature, not a bug. Let it hallucinate; we all do it from time to time.

An excellent way to see the evolution happen very fast.

Thomas Mercier

Business Intelligence | Data | Finance | Power BI | Azure Databricks | Final-year thesis: Cloud & Digital Sovereignty

2w

Any preference: Florence or OpenCV?

Very interesting approach to ensuring that answers are based solely on the data. Like humans, the model "hallucinates" in situations where data is less available. The new mitigation feature will automatically rewrite answers based on the data that IS available. This will undoubtedly build more trust and confidence. Well done, friends!
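A minimal sketch of what such a groundedness check could look like in principle. This is only an illustrative Python toy under my own assumptions (simple word-overlap scoring and a made-up 0.6 threshold), not Microsoft's actual mitigation feature:

```python
import re

# Illustrative toy only: flag answer sentences whose words mostly do not
# appear in the source text. The function names, the overlap heuristic, and
# the 0.6 threshold are assumptions for this sketch, not Microsoft's method.

def token_set(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def is_grounded(sentence: str, source: str, threshold: float = 0.6) -> bool:
    """Treat a sentence as grounded if most of its tokens appear in the source."""
    sent_tokens = token_set(sentence)
    if not sent_tokens:
        return True
    overlap = len(sent_tokens & token_set(source)) / len(sent_tokens)
    return overlap >= threshold

def filter_ungrounded(answer: str, source: str) -> str:
    """Keep only the answer sentences that pass the groundedness check."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    kept = [s for s in sentences if is_grounded(s, source)]
    return ". ".join(kept) + ("." if kept else "")

source = "The report covers fiscal year 2023 revenue of 10 million dollars."
answer = ("Revenue in fiscal year 2023 was 10 million dollars. "
          "Profit doubled year over year.")
print(filter_ungrounded(answer, source))  # only the first, supported sentence survives
```

In practice, a real system would score each claim with a trained model rather than word overlap, but the flow is similar: check every statement against the available data, then keep or rewrite only what is supported.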

Fantastic initiative! As members of the Microsoft Founders Hub, we at MemoryCareAI deeply appreciate the importance of accuracy and reliability in AI technologies, especially when they are used to enhance care for individuals with Alzheimer's and related dementias. We applaud Microsoft's efforts to rigorously address and mitigate the 'hallucination' phenomenon in AI models, ensuring these tools are both trustworthy and effective. Your commitment to developing AI responsibly and ethically aligns with our values as we continue to innovate in the healthcare space. #AIethics #ResponsibleAI #HealthcareInnovation

That’s fantastic to hear, Microsoft! Reducing instances of AI hallucination is crucial for maintaining the integrity and reliability of AI systems. By creating solutions to measure, detect, and mitigate fabricated information, you are setting a high standard for AI development. This approach not only enhances the accuracy and trustworthiness of large language models but also ensures their safe and ethical use in various applications. Kudos to your researchers for their dedication to advancing AI technology in a responsible manner. Looking forward to seeing the positive impact of these innovations!

Charonne Mose

Principal at C + AI at Microsoft

2w

Amazing and so proactive!

Love this proactive approach to ensuring not only ethical AI, but responsible and beneficial practices as well! 👏

Ramesh Jadhav

Manager, Group Human Resources. SAP HCM | SuccessFactors | HRBP | HR Strategy and Policies | Compensation and Benefits | PMGM | EC | CLMS | HR Digitization | Innovation and Transformation in HR | Compliance Management | Legal

2w

Very informative

Fascinating research! At CodePinnacle, we understand the importance of accuracy and reliability in AI outputs. Large language models' tendency to 'hallucinate' can have significant consequences, especially in critical applications like healthcare or finance. We applaud Microsoft's efforts to mitigate this issue and develop more trustworthy AI systems. Our team is committed to advancing AI research and development, ensuring responsible innovation that benefits society as a whole.
