Deep fakes have so far not learned to simulate heart beats in images, and so they can be detected as fraudulent. But given time they will learn this as well; it’s an arms race.
In other news, heart beats are clearly visible in processed images!
In particular, video of a person’s face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.
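The core idea is simple enough to sketch. A minimal remote-PPG toy, assuming you have already extracted the mean green-channel intensity of the face region for each video frame (the green channel is where blood-volume changes show up most strongly): detrend the per-frame signal and find the dominant frequency in a plausible heart-rate band. This is an illustrative sketch, not any particular published pipeline; `estimate_heart_rate` and its parameters are hypothetical names.

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate pulse (BPM) from per-frame mean green-channel intensity.

    Minimal sketch: remove the DC component, then pick the strongest
    frequency within a plausible pulse band (0.7-4.0 Hz, i.e. 42-240 BPM).
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()              # detrend (remove DC offset)
    spectrum = np.abs(np.fft.rfft(signal))       # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)       # restrict to plausible pulse rates
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                      # Hz -> beats per minute

# Synthetic check: a simulated 72 BPM pulse sampled at 30 fps for 10 seconds
fps, bpm = 30, 72
t = np.arange(0, 10, 1.0 / fps)
fake_signal = 100 + 0.5 * np.sin(2 * np.pi * (bpm / 60.0) * t)
print(round(estimate_heart_rate(fake_signal, fps)))  # -> 72
```

Real systems add face tracking, band-pass filtering, and motion compensation on top of this, but the detectable periodicity is the whole trick, and it is exactly what deep fakes do not yet reproduce.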
What will happen when we can no longer distinguish human tweets from AI tweets? Does it matter? Should we care? Will there be a verified human status?
Renée DiResta, writing for The Atlantic:
Amid the arms race surrounding AI-generated content, users and internet companies will give up on trying to judge authenticity tweet by tweet and article by article. Instead, the identity of the account attached to the comment, or person attached to the byline, will become a critical signal toward gauging legitimacy. Many users will want to know that what they’re reading or seeing is tied to a real person—not an AI-generated persona. . . .
. . . . .
The idea that a verified identity should be a precondition for contributing to public discourse is dystopian in its own way. Since the dawn of the nation, Americans have valued anonymous and pseudonymous speech: Alexander Hamilton, James Madison, and John Jay used the pen name Publius when they wrote the Federalist Papers, which laid out founding principles of American government. Whistleblowers and other insiders have published anonymous statements in the interest of informing the public. Figures as varied as the statistics guru Nate Silver (“Poblano”) and Senator Mitt Romney (“Pierre Delecto”) have used pseudonyms while discussing political matters on the internet. The goal shouldn’t be to end anonymity online, but merely to reserve the public square for people who exist—not for artificially intelligent propaganda generators.
German authorities said Thursday that what appears to have been a misdirected hacker attack caused the failure of IT systems at a major hospital in Duesseldorf, and a woman who needed urgent admission died after she had to be taken to another city for treatment.
Microsoft Corp. has released a new version of its open-source DeepSpeed tool that it says will enable the creation of deep learning models with a trillion parameters, more than five times as many as in the world’s current largest model.
Microsoft AI tool enables ‘extremely large’ models with a trillion parameters
That’s a lot of transformations. If there’s a pattern, a trillion parameters should be able to find and store it.
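For a sense of scale, a rough back-of-envelope calculation (assuming 16-bit weights, which is my assumption, not a figure from the article):

```python
# Memory just to *store* the weights of a trillion-parameter model.
params = 1_000_000_000_000            # 1e12 parameters
bytes_per_param = 2                   # assuming fp16 (2 bytes per weight)
terabytes = params * bytes_per_param / 1024**4
print(f"{terabytes:.1f} TB")          # -> 1.8 TB
```

Nearly two terabytes of weights alone, before optimizer state or activations — hence DeepSpeed’s focus on partitioning models across many GPUs.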
Stating this differently: there should be an explanation, it should be understandable and accurate, and the system should stop when it’s generating nonsense.
These are very reasonable principles, but likely tough to deliver with current technology.
Indeed, the paper notes that humans are often unable to explain why they have taken a certain action:
People fabricate reasons for their decisions, even those thought to be immutable, such as personally held opinions [24, 34, 99]. In fact, people’s conscious reasoning that is able to be verbalized does not seem to always occur before the expressed decision. Instead, evidence suggests that people make their decision and then apply reasons for those decisions after the fact. From a neuroscience perspective, neural markers of a decision can occur up to 10 seconds before a person’s conscious awareness. This finding suggests that decision making processes begin long before our conscious awareness.
Initially, Google will offer others advice on tasks such as spotting racial bias in computer vision systems, or developing ethical guidelines that govern AI projects. Longer term, the company may offer to audit customers’ AI systems for ethical integrity, and charge for ethics advice.
Last week, a technique popularized by DeepMind was adapted to control an autonomous F-16 fighter plane in a Pentagon-funded contest to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot using a VR headset and simulator controls. The AI pilot won, 5-0.
This is an under-discussed issue, but is inevitable. DeepMind is convinced that its AlphaZero DNN can master any two-player, turn-based game with perfect information. And its AlphaStar DNN shows what it can do in real-time games as well. Extending these systems to war capabilities is a natural, and inevitable, step.
Is this ok? Does that question even matter? How long before human-in-the-loop is the unacceptable bottleneck?
Floyd Abrams, one of the most prominent First Amendment lawyers in the country, has a new client: the facial recognition company Clearview AI.
Litigation against the start-up “has the potential of leading to a major decision about the interrelationship between privacy claims and First Amendment defenses in the 21st century,” Mr. Abrams said in a phone interview. He said the underlying legal questions could one day reach the Supreme Court.