Don’t like fake news? Pass a law! But of course fake news is in the eye of the beholder:
Singapore just showed the world how it plans to use a controversial new law to tackle what it deems fake news — and critics say it’s just what they expected would happen.
The government took action this week against two Facebook posts it claimed contained “false statements of fact,” the first uses of the law since it took effect last month.
One offending item was a Facebook post by an opposition politician that questioned the governance of the city-state’s sovereign wealth funds and some of their investment decisions. The other post was published by an Australia-based blog that claimed police had arrested a “whistleblower” who “exposed” a political candidate’s religious affiliations.
In both cases, Singapore officials ordered the accused to include the government’s rebuttal at the top of their posts. The government announcements were accompanied by screenshots of the original posts with the word “FALSE” stamped in giant letters across them.
New York City convened a task force in 2017 to “develop recommendations that will provide a framework for the use of and policy around ADS [automated decision systems].” The report is now out, and has been immediately criticized:
“It’s a waste, really,” says Meredith Whittaker, co-founder of the AI Now Institute and a member of the task force. “This is a sad precedent.” . . .
Ultimately, she says, the report, penned by city officials, “reflects the city’s view and disappointingly fails to leave out a lot of the dissenting views of task force members.” Members of the task force were given presentations on automated systems that Whittaker says “felt more like pitches or endorsements.” Efforts to make specific policy changes, like developing informational cards on algorithms, were scrapped, she says.
The report itself makes three fairly pointless recommendations: (1) build capacity for an equitable, effective, and responsible approach to the City’s ADS; (2) broaden public discussion on ADS; and (3) formalize ADS management functions.
Someone should really start thinking about this!
The report’s summary contains an acknowledgement that, “we did not reach consensus on every potentially relevant issue . . . .”
Google tried to anonymize health care data and failed:
On July 19, NIH contacted Google to alert the company that its researchers had found dozens of images still included personally identifying information, including the dates the X-rays were taken and distinctive jewelry that patients were wearing when the X-rays were taken, the emails show.
This article comes across as a warning, but it’s a success story. Smart people thought they could anonymize data, someone noticed they couldn’t, the lawyers got involved, and the project was called off.
There is a lot of concern about facial recognition technology, but of course there are also indisputable benefits:
The child labor activist, who works for Indian NGO Bachpan Bachao Andolan, had launched a pilot program 15 months prior to match a police database containing photos of all of India’s missing children with another one comprising shots of all the minors living in the country’s child care institutions.
He had just found out the results. “We were able to match 10,561 missing children with those living in institutions,” he told CNN. “They are currently in the process of being reunited with their families.” Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu.
This momentous undertaking was made possible by facial recognition technology provided by New Delhi’s police. “There are over 300,000 missing children in India and over 100,000 living in institutions,” he explained. “We couldn’t possibly have matched them all manually.”
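The article doesn’t describe the system’s internals, but the core task — cross-referencing two photo databases — is typically done by converting each face photo to an embedding vector and pairing records whose vectors are most similar. A minimal sketch of that matching step, assuming precomputed embeddings and a hypothetical similarity threshold (the names, vectors, and threshold here are illustrative, not from the actual system):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_databases(missing, institutional, threshold=0.9):
    """Pair each record in `missing` with its most similar record in
    `institutional`, keeping only pairs above the similarity threshold.
    Both arguments map record IDs to face-embedding vectors."""
    matches = []
    for m_id, m_vec in missing.items():
        best_id, best_sim = None, threshold
        for i_id, i_vec in institutional.items():
            sim = cosine_similarity(m_vec, i_vec)
            if sim > best_sim:
                best_id, best_sim = i_id, sim
        if best_id is not None:
            matches.append((m_id, best_id, best_sim))
    return matches

# Toy example: "m1" should pair with the nearly identical "c1".
missing = {"m1": [1.0, 0.0]}
institutional = {"c1": [0.99, 0.05], "c2": [0.0, 1.0]}
print(match_databases(missing, institutional))
```

At the scale quoted (300,000 missing children against 100,000 institutional records), a brute-force pairwise loop like this would be replaced by an approximate nearest-neighbor index, but the matching principle is the same.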
Maurice Isserman, quoting from a soldier’s letter in an essay for the NYT:
At 1st you wonder if you’ll be shot & you’re scared of not your own skin, but of the people that will get hurt if you are hit. All I could think about was keeping you & the folks from being affected by some 88 shell. I don’t seem to worry about myself because I knew if I did get it, I’d never know it. After a while I didn’t wonder if I get hit — I’d wonder when. Every time a shell came I’d ask myself “Is this the one?” In the 3rd phase I was sure I’d get it & began to ½ hope that the next one would do it & end the goddam suspense.
On the 30th anniversary of German reunification, it’s still hard to be German if you aren’t native West German:
For a long time, that discrimination was not merely subconscious, but structural.
Even as Germany became a major immigration country, no real path to citizenship was extended even to the children of immigrants born in the country.
After the fall of communism, the intrinsic racism of German citizenship law became impossible to ignore. Russian citizens with German ancestry who spoke no German were suddenly allowed passports, while second-generation Turks born and raised in Germany were not.
The change to the immigration law in 2000 opened parallel tracks to citizenship for those who were born in Germany or who had lived in the country for at least eight years.
As a child, Idil Baydar says she felt German. But that has changed. The 44-year-old daughter of a Turkish guest worker who arrived in the 1970s now describes herself as a “passport German foreigner.”
“The Germans have turned me into a migrant,” said Ms. Baydar, a comedian who has grown popular on YouTube by mocking Germany’s uneasy relationship with its largest immigrant group.
Information security is a public health concern too.
Researchers at Vanderbilt University’s Owen Graduate School of Management took the Department of Health and Human Services (HHS) list of healthcare data breaches and used it to drill down on data about patient mortality rates at more than 3,000 Medicare-certified hospitals, about 10 percent of which had experienced a data breach.
As PBS noted in its coverage of the Vanderbilt study, after data breaches as many as 36 additional deaths per 10,000 heart attacks occurred annually at the hundreds of hospitals examined. The researchers found that for care centers that experienced a breach, it took an additional 2.7 minutes for suspected heart attack patients to receive an electrocardiogram.
“Breach remediation efforts were associated with deterioration in timeliness of care and patient outcomes,” the authors found. “Remediation activity may introduce changes that delay, complicate or disrupt health IT and patient care processes.”
Law professors Barry Friedman and Andrew Guthrie Ferguson propose a compromise in facial recognition technology:
We should ban “face surveillance,” the use of facial recognition (in real time or from stored footage) to track people as they pass by public or private surveillance cameras, allowing their whereabouts to be traced.
On the other hand, we should allow “face identification” — again, with strict rules — so the police can use facial recognition technology to identify a criminal suspect caught on camera.
DeepMind today announced a new milestone for its artificial intelligence agents trained to play the Blizzard Entertainment game StarCraft II. The Google-owned AI lab’s more sophisticated software, still called AlphaStar, is now grandmaster level in the real-time strategy game, capable of besting 99.8 percent of all human players in competition. The findings are to be published in a research paper in the scientific journal Nature.
Not only that, but DeepMind says it also leveled the playing field when testing the new and improved AlphaStar against human opponents who opted into online competitions this past summer. For one, it trained AlphaStar to use all three of the game’s playable races, adding to the complexity of the game at the upper echelons of pro play. It also limited AlphaStar to only viewing the portion of the map a human would see and restricted the number of mouse clicks it could register to 22 non-duplicated actions every five seconds of play, to align it with standard human movement.
It is remarkable that the computer was able to achieve this performance while being restricted to a human-level rate of play (22 non-duplicated actions every five seconds, roughly 264 actions per minute). In the real world we won’t see this kind of restriction.
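A cap like “22 actions every five seconds” amounts to a sliding-window rate limit. As a minimal sketch (the class and its parameters are my own illustration, not DeepMind’s implementation, and it ignores the “non-duplicated” deduplication detail):

```python
from collections import deque

class ActionRateLimiter:
    """Sliding-window cap: at most `max_actions` actions within any
    `window_seconds` interval (e.g. 22 per 5 s, as in the match rules)."""

    def __init__(self, max_actions=22, window_seconds=5.0):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = deque()  # times of actions still in the window

    def try_act(self, now):
        """Return True and record the action if it is allowed at time `now`."""
        # Evict actions that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False

limiter = ActionRateLimiter()
burst = [limiter.try_act(0.0) for _ in range(23)]  # 23 attempts at t=0
print(sum(burst))           # only 22 succeed
print(limiter.try_act(5.0)) # allowed again once the window has passed
```

The point of the sliding window, as opposed to a fixed per-second cap, is that it permits short bursts (as human play does) while bounding the sustained rate.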
And, as in chess, we don’t really know what insight the AI is gaining into the game.