Scientists find way to improve AI's common sense and math performance
New York: In groundbreaking research, scientists have developed a new way of training large language models (LLMs) to improve their common sense, reasoning and math performance. The method is termed 'Quiet-STaR'.
According to the Live Science website, the method allows AI models to think before giving responses. The scientists give the LLM an inner monologue: before answering, the model generates and weighs multiple candidate rationales for a query, which gives it a richer sense of the context before committing to an answer. Because the AI reasons internally before responding, the researchers found its answers improved markedly.
The scientists applied Quiet-STaR to Mistral 7B and found that the model could reason across multiple parallel rationales and solve math problems. The AI thinks before it responds rather than simply responding.
This approach is starkly different from the one used in mainstream GenAI tools like ChatGPT. Those models work by predicting the next step of the conversation and writing out their responses directly, whereas with Quiet-STaR the model thinks before it answers; a simplified sketch of the idea follows.
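To make the contrast concrete, here is a minimal Python sketch of the think-before-answering loop, assuming hypothetical `generate` and `answer_confidence` helpers in place of a real language-model API. It illustrates only the inference-time intuition of sampling hidden rationales and keeping the best-scoring answer, not the token-level training procedure used in Quiet-STaR.

```python
# Minimal sketch of the "think before answering" idea behind Quiet-STaR.
# Illustration only: the actual method trains the model to produce rationales
# between tokens; here we simply sample a few candidate "thoughts" at
# inference time and keep the answer the model scores highest.
# `generate` and `answer_confidence` are hypothetical helpers standing in for
# any causal-LM API; they are not part of the published Quiet-STaR code.

from typing import Callable, List, Tuple


def think_then_answer(
    question: str,
    generate: Callable[[str], str],                   # hypothetical: prompt -> completion
    answer_confidence: Callable[[str, str], float],   # hypothetical: (question, answer) -> score
    num_thoughts: int = 4,
) -> str:
    """Sample several inner-monologue rationales, answer after each one,
    and return the answer the model is most confident in."""
    candidates: List[Tuple[float, str]] = []
    for _ in range(num_thoughts):
        # 1. Let the model "think out loud" about the question.
        thought = generate(f"Question: {question}\nThink step by step:")
        # 2. Answer conditioned on that hidden rationale.
        answer = generate(f"Question: {question}\nReasoning: {thought}\nAnswer:")
        # 3. Score how confident the model is in this answer.
        candidates.append((answer_confidence(question, answer), answer))
    # Return the highest-scoring answer; the rationales themselves stay hidden.
    return max(candidates)[1]
```

The design point is that the rationales are internal working material: only the final answer is surfaced, which is the sense in which the model "thinks quietly" before it speaks.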