Second is reducing political bias. You may have seen criticism that our models are too liberal; that was not intentional. We've worked very hard to reduce political bias in the behavior of our models and will continue to do so. Third, we want to point voters to the right information when they're looking for voting information. Those are the three things we're focused on as we go into the election. Deepfakes are unacceptable. We need very reliable ways for people to understand when they're looking at a deepfake, and we've done some of that.
We implemented it for images; the metadata acts like a passport for a piece of content as it spreads across different platforms. We also open-sourced a classifier that can detect whether an image was generated by AI. So metadata and classifiers are two technical ways to deal with this; together they provide proof of provenance, specifically for images. We're also working on how to implement watermarking technology in text. But the point is that people should know when they're dealing with a deepfake, and we want people to trust the information they see.

Moderator: The point of these forgeries is to deceive, right? The Federal Communications Commission just fined a company $30 million for creating deepfake audio that sounded like a recording of Biden during the New Hampshire primary.
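To make the provenance idea concrete, here is a minimal sketch of metadata-based provenance: a signed record travels with the image, and anyone holding the key can check that the image still matches its record. This is a toy illustration of the general approach, not the actual standard or any platform's implementation; the key, function names, and generator label are all hypothetical.

```python
# Toy provenance sketch: bind an image to a signed metadata record so that
# tampering can be detected later. Illustrative only, not a real standard.
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # hypothetical; in practice a platform's secret key

def attach_provenance(image_bytes: bytes, generator: str) -> dict:
    """Build a provenance record binding the image to its stated origin."""
    digest = hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()
    return {"generator": generator, "sha256_hmac": digest}

def verify_provenance(image_bytes: bytes, record: dict) -> bool:
    """Check that the image still matches its provenance record."""
    expected = hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sha256_hmac"])

image = b"\x89PNG...fake image bytes"
record = attach_provenance(image, generator="example-image-model")
print(verify_provenance(image, record))         # True: image untouched
print(verify_provenance(image + b"x", record))  # False: image was altered
```

A classifier is the complementary approach: it needs no cooperation from the image's creator, but it gives a probabilistic judgment rather than the hard guarantee a valid signature provides.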
There may be more sophisticated versions. A tool is being developed that can recreate someone's voice from a recording only seconds long, and it will even be able to produce a recording of the person speaking in another language. The product manager told the New York Times that this is a sensitive issue that needs to be done right. So why are you building this? I often say to tech people that if you're building something that looks like a Black Mirror episode, you probably shouldn't be building it.

: I think that's a hopeless attitude.