- Mr. Director, what's the problem?
— Yesterday, some criminal delivered a video to the N channel TV studio in which our chancellor allegedly gives a speech about the events in the Republic of K. If it is broadcast, it could seriously complicate our relations. The most interesting thing is that everything looks authentic, yet the chancellor did not give this speech and could not have given it. We are interested in peace with the Republic of K.
- Not only that. Your task is to understand how this was done, how it became possible at all.
- I see. So, colleagues, what are your thoughts?
- Today, it is not a problem to replace a voice. Moreover, you can imitate the intonation and fake the tone. But here we have a video... Rita, what will your smart guys say?
— We need to analyze the video.
- Of course, but I’m interested in the theoretical possibility now.
— Theoretically, it is quite possible. Today a video clip is no longer proof that a particular person said particular words: a new neural network can put any words into a character’s mouth at the will of the clip’s creator. For now, admittedly, it works only with the picture, not the sound. It was developed for feature films. The technology grew out of the methods used to “glue” the faces of famous people onto the bodies of actors. The new system goes further: it leaves the character his own face, but gives him the articulation needed to pronounce someone else’s words.
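The dialogue gives no implementation details, but the idea it describes can be sketched in code. Below is a toy Python/PyTorch illustration, not any real system: a hypothetical `ToyReenactor` combines an identity code taken from a frame of the target person with an articulation code taken from another speaker’s mouth landmarks, then decodes them into a new frame of the target “pronouncing” the driving speech. All module sizes, tensor shapes, and names are invented for illustration.

```python
# Toy sketch only: target keeps their own face (identity), a driving signal
# supplies the mouth articulation. Sizes and shapes are arbitrary.
import torch
import torch.nn as nn

class ToyReenactor(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # Encodes a frame of the target person: preserves identity/appearance.
        self.identity_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, latent_dim),
        )
        # Encodes the driving articulation (e.g. mouth landmarks of another speaker).
        self.articulation_encoder = nn.Sequential(
            nn.Linear(20 * 2, 64), nn.ReLU(), nn.Linear(64, latent_dim),
        )
        # Decodes the combined code into an image of the target with the new articulation.
        self.decoder = nn.Sequential(
            nn.Linear(2 * latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, target_frame, driving_landmarks):
        identity = self.identity_encoder(target_frame)
        articulation = self.articulation_encoder(driving_landmarks.flatten(1))
        return self.decoder(torch.cat([identity, articulation], dim=1))

model = ToyReenactor()
frame = torch.rand(1, 3, 64, 64)      # one frame of the target person
landmarks = torch.rand(1, 20, 2)      # mouth landmarks from another speaker
fake_frame = model(frame, landmarks)  # 1 x 3 x 64 x 64 synthesized frame
print(fake_frame.shape)
```

A real system would be trained on video of the target and would add temporal consistency and audio alignment; the sketch only shows the identity-plus-articulation split the characters are talking about.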
— Wait, but then video will cease to be compromising material against a politician or evidence in court, because it will be easy to fabricate a clip of anyone giving any speech!
— Unless you use special equipment that will sign the video with an electronic signature. It will cost a pretty penny, but there is no other way.
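What such signing could look like in software is easy to sketch; the dialogue itself names no tools. Below is a minimal Python example, assuming the third-party `cryptography` package; the file name `speech.mp4` and the key handling are hypothetical. The recording device would hold the private key, hash each video file, and sign the digest; anyone with the public key can later check that the file has not been altered.

```python
# Minimal sketch: signing a video file so later tampering can be detected.
# Assumes the `cryptography` package; "speech.mp4" is a hypothetical file.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def file_digest(path: str) -> bytes:
    """Hash the file in chunks so large videos need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# The recording device would hold the private key; verifiers get the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

signature = private_key.sign(file_digest("speech.mp4"))

# Verification: any change to the file changes the digest and the check fails.
try:
    public_key.verify(signature, file_digest("speech.mp4"))
    print("Signature valid: the file matches what was signed at recording time.")
except InvalidSignature:
    print("Signature invalid: the file was altered after signing.")
```

The design choice here is to sign a digest rather than the raw video, which keeps the signature small and the check fast regardless of file size.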
— So our task is to prove that this is a fake?