India has created the world’s first foundational speech-to-speech AI model, built by a former IIT-BHU student. The model can even understand and respond to human emotions.
AI technology is advancing rapidly, and new models are being released all the time. Now a former IIT-BHU student has created an AI that can sing, whisper, and even understand human emotions. It’s called Luna AI.
The AI was developed by 25-year-old Sparsh Agrawal under his startup Pixa AI. Luna AI is the world’s first foundational speech-to-speech model: it can whisper, change its tone, and sing like a human. Sparsh says that Luna doesn’t just reply; it actually “feels.”
Most voice models are made for customer support, but Luna has been designed with entertainment and mental health in mind.
IT Minister praised it
According to a report by Live Mint, Sparsh wrote on X, “Everyone keeps asking where India’s AI is — in every WhatsApp group and conference. Today, we have the answer. Meet Luna.” Luna brings audio, music, and speech together. Sparsh also met India’s IT Minister Ashwini Vaishnaw, who praised the technology behind Luna AI.
Luna works 50% faster
In initial tests, Luna outperformed OpenAI’s GPT-4 TTS and ElevenLabs, running 50% faster. Sparsh said, “I didn’t have a big research lab or $100 million. I borrowed GPUs, used cloud credits, and even took credit card debt to make it happen.”
Who is Sparsh Agrawal?
Sparsh studied at IIT-BHU, and his team includes Nitish Kartik, Apoorv Singh, and Pratyush Kumar. He is the only solo founder selected by the WT Fund, chosen from more than 15,000 applicants. His goal is to make India the center of emotional AI.
The first indigenous AI model will arrive in 2026
The government has also announced that India’s first homegrown AI model will be launched in 2026. It may arrive as early as February next year, ahead of the India AI Impact Summit. The main goal of the model is to keep the country’s data safe and secure.
