A new dataset called ObjectNet comprises images that machines have trouble classifying. Some of these are even hard for people!
Now developer Nick Walton has created an AI version of this type of game. He’s calling it AI Dungeon 2, and the dialogue is generated by the AI on the fly. It’s not fully coherent, but it’s still amazing!
My favorite exchange from The Verge article:
You can play AI Dungeon 2 yourself here.
Cameras in New South Wales, Australia will detect when drivers are using mobile phones. Importantly, the system has a human in the loop who verifies the accuracy of each detection.
This kind of automatic policing raises concerns among many ethicists. (What if the system is bad at detecting certain races or genders and skews the enforcement?) But overall it is hard to find fault with this kind of efficient safety innovation. Innocent people are killed every day by distracted drivers.
One of the many fascinating questions about AI is whether AI creations can be copyrighted and, if so, by whom. Under traditional copyright analysis, the human(s) who made some contribution to the creative work own the copyright by default. If there is no human contribution, there is no copyright. See, for example, the so-called “monkey selfie” case, in which a monkey took a selfie and the photographer who owned the camera got no copyright in the photo.
But when an AI creates a work of art, is there human involvement? A human created the AI, and might have fiddled with its knobs so to speak. Is that sufficient? The U.S. Copyright Office is concerned about this. One question they are asking is this:
2. Assuming involvement by a natural person is or should be required, what kind of involvement would or should be sufficient so that the work qualifies for copyright protection? For example, should it be sufficient if a person
(i) designed the AI algorithm or process that created the work;
(ii) contributed to the design of the algorithm or process;
(iii) chose data used by the algorithm for training or otherwise;
(iv) caused the AI algorithm or process to be used to yield the work;
or (v) engaged in some specific combination of the foregoing activities? Are there other contributions a person could make in a potentially copyrightable AI-generated work in order to be considered an “author”?

Source: Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation
No one really knows the answer to this because (1) it is going to be very fact-intensive (there are lots of different ways for humans to be involved or not involved); and (2) it feels weird to do a lot of work or spend a lot of money to build an AI and not be entitled to a copyright over its creations.
In any case, these issues are going to be litigated soon. A Reddit user recently used a widely available AI program called StyleGAN to create a music visualization. And although the underlying AI was not authored by the Reddit poster, the output was allegedly created by “transfer learning with a custom dataset of images curated by the artist.”

Does the Reddit poster (aka self-proclaimed “artist”) own a copyright in the output? Good question.
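For readers unfamiliar with the phrase, “transfer learning with a custom dataset” generally means freezing a pretrained model and training only a small new piece on one’s own data. Here is a minimal, purely illustrative sketch of that idea; the “feature extractor” is a random stand-in (not StyleGAN), and the data, dimensions, and learning rate are all invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained feature extractor: a fixed random
# projection from raw pixels to features. In real transfer learning this
# would be a network trained on someone else's large dataset.
W_frozen = rng.normal(size=(64, 16))

def extract_features(images):
    return np.tanh(images @ W_frozen)  # frozen: never updated

# Tiny synthetic "custom dataset": 32 flattened 8x8 images, binary labels.
X = rng.normal(size=(32, 64))
y = rng.integers(0, 2, size=32).astype(float)

# Trainable head: logistic regression on the frozen features. Only these
# parameters change during training -- that is the "transfer" part.
w = np.zeros(16)
b = 0.0
feats = extract_features(X)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    grad = p - y
    w -= 0.1 * feats.T @ grad / len(y)          # update head weights only
    b -= 0.1 * grad.mean()

acc = ((1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5) == y).mean()
print(f"training accuracy of the new head: {acc:.2f}")
```

The legal question in the Reddit case maps onto the split visible here: the frozen weights come from someone else’s work, while the curated dataset and the trained head are the poster’s contribution.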
Don’t like fake news? Pass a law! But of course fake news is in the eye of the beholder:
Singapore just showed the world how it plans to use a controversial new law to tackle what it deems fake news — and critics say it’s just what they expected would happen.
The government took action twice this week on two Facebook posts it claimed contained “false statements of fact,” the first uses of the law since it took effect last month.
One offending item was a Facebook post by an opposition politician that questioned the governance of the city-state’s sovereign wealth funds and some of their investment decisions. The other post [now blocked] was published by an Australia-based blog that claimed police had arrested a “whistleblower” who “exposed” a political candidate’s religious affiliations.
In both cases, Singapore officials ordered the accused to include the government’s rebuttal at the top of their posts. The government announcements were accompanied by screenshots of the original posts with the word “FALSE” stamped in giant letters across them.

Source: Singapore just used its fake news law. Critics say it’s just what they feared
Not a great start for Singaporean efforts to police false news.
Professor Arvind Narayanan of Princeton published a brief deck on AI snake oil. Helpfully, he divides applications into three (non-exclusive) domains:
- Perception (e.g., face recognition, speech to text): “genuine, rapid progress”
- Automating judgment (e.g., spam detection, essay grading): “imperfect but improving”; and
- Predicting social outcomes (e.g., recidivism, job success): “fundamentally dubious”
His point is that humans are terrible at predicting social outcomes, and AIs are no better. In fact, manually sorting the data using just a few features is often the best we know how to do.

So AIs predicting job success = snake oil.
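Narayanan’s claim that a hand-built rule over a few features is a strong baseline can be made concrete with a toy sketch. The rule, thresholds, and cases below are invented for illustration only, not drawn from any real study or dataset.

```python
# A "manual" two-feature risk rule of the kind Narayanan describes:
# no training, no model, just a threshold a human could write down.
def simple_rule(age, prior_offenses):
    """Predict 1 (high risk) for young people with several priors."""
    return 1 if (age < 25 and prior_offenses >= 3) else 0

# Tiny made-up evaluation set: (age, prior offenses, actual outcome).
cases = [
    (19, 4, 1),
    (45, 0, 0),
    (22, 5, 1),
    (30, 1, 0),
    (24, 3, 0),
    (50, 2, 0),
]
correct = sum(simple_rule(a, p) == y for a, p, y in cases)
print(f"simple two-feature rule: {correct}/{len(cases)} correct")
```

The point of the exercise: if an elaborate “AI” system cannot reliably beat something this crude on social-outcome prediction, the elaborate system is being oversold.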
New York City convened a task force in 2017 to “develop recommendations that will provide a framework for the use of and policy around ADS [automated decision systems].” The report is now out, and has been immediately criticized:
“It’s a waste, really,” says Meredith Whittaker, co-founder of the AI Now Institute and a member of the task force. “This is a sad precedent.” . . .
Ultimately, she says, the report, penned by city officials, “reflects the city’s view and disappointingly fails to include a lot of the dissenting views of task force members.” Members of the task force were given presentations on automated systems that Whittaker says “felt more like pitches or endorsements.” Efforts to make specific policy changes, like developing informational cards on algorithms, were scrapped, she says.

Source: NYC’s algorithm task force was ‘a waste,’ member says
The report itself makes three fairly pointless recommendations: (1) build capacity for an equitable, effective, and responsible approach to the City’s ADS; (2) broaden public discussion on ADS; and (3) formalize ADS management functions.
Someone should really start thinking about this!
The report’s summary contains an acknowledgement that, “we did not reach consensus on every potentially relevant issue . . . .”
Google tried to anonymize health care data and failed:
On July 19, NIH contacted Google to alert the company that its researchers had found dozens of images still included personally identifying information, including the dates the X-rays were taken and distinctive jewelry that patients were wearing when the X-rays were taken, the emails show.

Source: Google almost made 100,000 chest X-rays public — until it realized personal data could be exposed
This article comes across as a warning, but it’s a success story. Smart people thought they could anonymize data, someone noticed they couldn’t, the lawyers got involved, and the project was called off.
That’s how the system is supposed to work.
There is a lot of concern about facial recognition technology, but of course there are also indisputable benefits:
The child labor activist, who works for Indian NGO Bachpan Bachao Andolan, had launched a pilot program 15 months prior to match a police database containing photos of all of India’s missing children with another one comprising shots of all the minors living in the country’s child care institutions.
He had just found out the results. “We were able to match 10,561 missing children with those living in institutions,” he told CNN. “They are currently in the process of being reunited with their families.” Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu.
This momentous undertaking was made possible by facial recognition technology provided by New Delhi’s police. “There are over 300,000 missing children in India and over 100,000 living in institutions,” he explained. “We couldn’t possibly have matched them all manually.”

Source: India is trying to build the world’s biggest facial recognition system (via Marginal Revolution)
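Matching at this scale is typically done by reducing each face photo to an embedding vector and pairing a missing-child photo with the institutional photo whose embedding is most similar. A minimal sketch of that idea, using random vectors as stand-ins for real face embeddings (the sizes, noise level, and matches here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

def normalize(v):
    """Scale each row to unit length so dot products are cosine similarities."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Pretend embeddings: 5 "institution" faces; the 3 "missing child" photos
# are noisy copies of institution faces 4, 0, and 2 (different lighting,
# age, pose, etc. modeled crudely as additive noise).
institutions = normalize(rng.normal(size=(5, 128)))
true_ids = [4, 0, 2]
missing = normalize(institutions[true_ids] + 0.05 * rng.normal(size=(3, 128)))

# Cosine similarity = dot product of unit vectors; for each missing-child
# photo, argmax picks the best-matching institutional photo.
sims = missing @ institutions.T
matches = sims.argmax(axis=1)
print("recovered matches:", matches.tolist())  # expected: [4, 0, 2]
```

With embeddings precomputed, each comparison is a dot product, which is why 300,000-by-100,000 matching is feasible by machine but hopeless by hand.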
Maurice Isserman, quoting from a soldier’s letter in an essay for the NYT:
At 1st you wonder if you’ll be shot & you’re scared of not your own skin, but of the people that will get hurt if you are hit. All I could think about was keeping you & the folks from being affected by some 88 shell. I don’t seem to worry about myself because I knew if I did get it, I’d never know it. After a while I didn’t wonder if I get hit — I’d wonder when. Every time a shell came I’d ask myself “Is this the one?” In the 3rd phase I was sure I’d get it & began to ½ hope that the next one would do it & end the goddam suspense.

Source: What It’s Really Like to Fight a War