On March 2, Fudan University’s Natural Language Processing Laboratory (FudanNLP) launched Hearing the World, an AI-based app designed as an intelligent life assistant for visually impaired users. Built on the multi-modal model Fudan MouSi, the app pairs a single camera with headphones, converting captured images into audio descriptions and alerts across a range of scenarios. Hearing the World offers three modes: Street Navigation, Free Inquiry (for exploring spaces such as museums, art galleries, and parks), and Object Finder. According to the team, the app will complete its first round of testing in March and launch pilot programs in major cities across China. [Fudan University, in Chinese]