
Chapter 6 Leading Global Voice Technology


    The first thing Chen Yao did was install the programming software.

    This programming software, called "Universe," had come from the black-technology USB flash drive. With it, he could program far faster.

    When writing code, it offered smart association, smart completion, and smart repair.

    With it, 996 overtime crunch didn't exist, and neither did bugs.

    As for dubbing software, you could find plenty of it online. Professional film companies had their own professional dubbing tools too, and the results were quite good.

    So what was so special about the Dark Horse dubbing software Chen Yao was developing?

    Summary in one sentence - AI intelligent dubbing!

    To put it bluntly, it used artificial-intelligence voices, instead of voice actors, to dub movies and animations.

    No company in the world had achieved such intelligent dubbing technology, let alone any in China.

    Because it involves very complex "natural speech recognition" technology!

    In China, Baidu and iFlytek are undoubtedly the leaders in natural-speech technology. Many people use their voice input methods, and the technology behind them comes from their powerful AI speech-recognition engines.

    Abroad, the strongest voice technologies belong to Amazon, Google, and Microsoft. Many people have chatted with Microsoft's XiaoIce.

    These are already the most cutting-edge technology companies in the world today, yet even they cannot make truly intelligent dubbing software.

    For an AI voice to dub movies and animations like a real person, two major problems must be solved.

    First, ultra-high intelligence.

    Today's so-called artificial intelligence, to put it bluntly, is honestly a bit dumb.

    Take smart speakers: when you ask them questions, the answers often sound ridiculous.

    For example, ask a smart speaker: "There's a 5-yuan bill and a 100-yuan bill at your feet. Which one will you pick up?"

    The voice assistant's answer is either "I don't know" or "Pick up the 100 yuan."

    The correct answer, of course: pick up both!

    Ask the smart speaker again: "You're driving down the road when a person suddenly rushes out from the left and a dog rushes out from the right. Should the car turn left or right?"

    The smart speaker will answer "I don't know," or "turn right."

    The real answer should be: brake!

    Today's so-called artificial intelligence feels more like a simpleton with no brain at all.

    All its questions and answers are scripted by programmers behind the scenes; it is not true neural-network intelligence.

    So-called deep learning cannot adapt flexibly.

    For example, ask it a brain teaser.

    Xiao Ming's father has four sons. The eldest is called Da Ming, the second Er Ming, and the third San Ming. What is the fourth son's name?

    Following its logical algorithm, the voice assistant will answer: "Si Ming!"

    Then there's big data. Programmers can collect every brain teaser in existence; wouldn't the smart speaker then be able to answer the question above correctly?

    But change the way of asking.

    There is a man named Shabi. Shabi's father has three sons. The eldest is called Dabi, and the second is called Erbi. What is the third son's name?

    Then the smart speaker fails again: either it doesn't know, or it talks nonsense.

    Although dubbing doesn't require a high IQ or the ability to answer questions, it does at least require image-recognition capability.

    When dubbing movies and animations, the dubbing artist needs to adjust the tone and intonation of the words according to the scenes, the characters' expressions, etc.

    Today's artificial intelligence is very strong at text recognition, approaching 100% accuracy. Reading movie subtitles, a robot can dub along too.

    But the problem is...

    If it can't recognize the scenes and expressions in movies and animations, the result will be very poor.

    In terms of dynamic image recognition, no company in the world is really doing a good job.

    The second issue is the emotion of artificial voice.

    A real person's voice has cadence: joy, anger, sorrow, breaths and pauses, a rhythm that quickens and slows. Today's synthetic voices cannot achieve that kind of dubbing.

    Current synthetic voices are electronic and metallic. Some companies' voices are very realistic, yet you can still clearly hear the impersonal "robot" in them.

    The same sentence, in different film and animation scenes, calls for different expressive effects.

    Joy!

    "If I don't get your love in this life, I'll see you again in the next life." The heroine found the hero alive at the disaster scene. Her tone was joyful.

    Anger!

    "If I don't get your love in this life, I'll see you again in the next life." The villain was run through the chest by the heroine's sword. His tone was full of anger and unwillingness.

    Sorrow!

    "If I don't get your love in this life, I'll see you again in the next life." The male protagonist confessed to the female protagonist and was rejected. His tone was low.

    Delight!

    "If I don't get your love in this life, I'll see you again in the next life." The male protagonist pulled off an April Fool's prank on the female protagonist and laughed proudly.

    Real-person dubbing can show different dubbing effects according to different scenes.

    AI voices, however, can only dub from the text. Every time a line is spoken, the tone and intonation come out exactly the same.

    Such a flat voice could never be used to dub film and animation.

    Therefore, if artificial intelligence is to be applied in the field of dubbing, it must make a revolutionary leap in terms of intelligence and true emotions.

    No company in the world today can do it well. This is the opportunity left to start-ups.

    Chen Yao's fingers tapped rapidly on the keyboard, and lines of code appeared in the editor.

    His brain and hand speed have been enhanced by black technology, and his coding speed is amazing.

    In the blink of an eye, 20 lines of code. Another blink, 50 lines.

    An ordinary person, never mind matching his hand speed, couldn't even follow with their eyes. Before you could read what he had written, the screen had already scrolled and refreshed.

    Chen Yao is completely immersed in the realm of programming, experiencing the lightning-like refreshing pleasure.

    About three hours later

    "Pa!" Chen Yao hit the enter key heavily: "OK, done!"

    The Dark Horse dubbing software was complete!

    What made it powerful was the built-in intelligent voice engine. The dubbing software itself wasn't much work; most of the time had gone into the voice engine.

    The bottom layer of the speech engine is the world's first truly intelligent neural network framework. The complexity of the algorithm is comparable to the human brain.

    Even Google or Microsoft would need at least 20 years to build it.

    Chen Yao rubbed his fingers: "Three hours of work. I'm exhausted."

    It went so fast not only because of his speed; much of the data came from the Universe USB drive and could be imported directly, which saved a great deal of effort.

    Originally, Chen Yao had wanted to find finished dubbing software on the USB drive so he wouldn't have to write the code himself.

    However, the drive had so far unlocked only its first partition, Aries, and the finished software wasn't in it. More points were needed to unlock the other partitions.

    On second thought, though, it was just as well to have a hand in writing the Dark Horse dubbing software himself. It gave him a greater sense of accomplishment, and it hadn't taken much time anyway.

    Now that the dubbing software and the voice engine were ready, the next step was to produce the voice personas.

    ……

    PS: I wonder if you've ever used the voice-reading feature of Qidian Reading? You might as well try it and hear how it sounds.

    What Chen Yao is building now is far more powerful than today's voice technology.
