
Three new Soul App features unveiled at WAIC, embodying the product philosophy of “synergistic model-application interaction”

SHANGHAI, July 9, 2024 /PRNewswire/ — On July 4, the World Artificial Intelligence Conference 2024 (WAIC 2024) officially opened. This year's theme is "Governing AI for the Good and for All", and the conference focused on three main sections: Core Technology, Intelligent Terminals, and Application Strengthening.

As a leading AI social platform, Soul App was invited to participate in the exhibition and showcased several of its new AI features such as “Digital Twin”, “Werewolf Awakening” and “EchoVerse”.

Three new features were presented, showing the possibilities of multimodal interaction

At WAIC 2024, Soul presented its booth under the theme "Human Touch, Digital Heart" and showcased three new features: "Digital Twin", "Werewolf Awakening" and "EchoVerse" — Soul's latest efforts in exploring the deep integration of "AIGC + Social".

Among them, "Digital Twin" is dedicated to helping users create virtual digital personas for more effective socialization. Users can authorize the platform to shape the twin's image and characteristics based on their chat records and the content they post in Soul's public scenes. Across rich dimensions such as persona, appearance and voice, the digital twin can replicate the real person as closely as possible.

Adhering to interest-based socialization, the Soul platform does not support the use of real profile photos; users instead create virtual images through the avatar system to interact in the digital space. The launch of "Digital Twin", with its private-chat assistance capabilities, not only helps users create a more ideal "other self", but also enables more personalized and diversified intelligent response recommendations. This makes it easier for users to break the ice, improves the efficiency of social communication, and helps them build a persona and make cognitive decisions.

In addition, the introduction of AI units into the interaction scenario of the "Werewolf Awakening" game clearly demonstrates the multimodal interaction capabilities of the large model. In this scenario, users can choose any combination of AI and real players to initiate interaction, playing alongside AI units with autonomous reasoning, speech and "masking" abilities. The "AI game companion" can also help players quickly adapt to the relatively high threshold and complex gameplay of the werewolf game, easily initiate communication, and enjoy a more immersive, real-time interaction experience.

Further integrating AI capabilities into social scenarios, Soul has launched an independent new feature, "EchoVerse." Positioned as an AI social platform, it lets users engage in real-time immersive communication with virtual characters and customize character personas according to their preferences to achieve different conversational styles. A character's image can be generated from a text description or an uploaded photo. The platform offers multiple basic voice timbres, and users can independently create and combine exclusive character voices to realize multimodal interaction.

Practicing "synergistic model-application interaction": from better socialization to a new human-computer interaction experience

In fact, Soul began exploring AI applications in the social sphere relatively early. Since its launch in 2016, the company has been researching the underlying technologies and implementing AI-related applications.

For example, the platform's intelligent recommendation engine "Ling Xi", built on full-scene user profiling, helps users find people with the same interests through a decentralized distribution mechanism, enabling the establishment of multiple relationships and instant emotional feedback and effectively improving the user experience. In addition, the platform's NAWA engine allows users to create personalized avatars and scenes for immersive interaction.

In 2020, Soul began systematic R&D of AIGC, accumulating cutting-edge capabilities in intelligent dialogue, image generation and voice technologies such as voice generation, music generation and voice animation. In 2023, Soul launched its self-developed large language model SoulX, which supports prompt-guided generation, conditionally controlled generation, context understanding and multimodal understanding, and can achieve emotional, friendly interaction.

In 2024, Soul launched its voice generation model and officially updated its self-developed voice models. At this stage, the Soul voice models span voice generation, voice recognition, voice dialogue and music generation, supporting functions such as realistic tone generation, self-voice creation, multi-language switching, and real-time, multi-emotional dialogue resembling that of real people.

Currently, Soul's multimodal interaction capabilities have been integrated into specific application scenarios including Soul's "AI Goudan", "Werewolf Awakening", "Digital Twin" and "EchoVerse", further improving and expanding interaction efficiency, interaction quality, the interaction experience, and the range of interaction objects.

"Users' willingness to constantly engage in conversations and interactions with AI has already proven that they value the experience the platform provides. It also demonstrates the durability of Soul's emphasis on 'synergistic model-application interaction,'" said Tao Ming, CTO of Soul App.

Applying AI is like finding the right hammer for a nail, Tao Ming said. "Soul is one of the most popular Internet platforms among young Chinese people, making it a natural application scenario and entry point for AI. Our advantage is that we can identify users' real needs within these scenarios, and large-scale application also lets us get immediate feedback from users."

SOURCE Soul App