OpenAI, on the heels of switching from a non-profit to a “capped profit,” announces a major investment:
Microsoft is investing $1 billion in OpenAI to support us building artificial general intelligence (AGI) with widely distributed economic benefits. We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI. We’ll jointly develop new Azure AI supercomputing technologies, and Microsoft will become our exclusive cloud provider . . . .

Microsoft invests in and partners with OpenAI to support us building beneficial AGI
The deal involves Microsoft becoming the exclusive cloud computing partner for OpenAI, so this is more like a partnership than a straight investment.
Khari Johnson for VentureBeat:
One of the essential phrases necessary to understand AI in 2019 has to be “ethics washing.” Put simply, ethics washing — also called “ethics theater” — is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. A textbook example for tech giants is when a company promotes “AI for good” initiatives with one hand while selling surveillance capitalism tech to governments and corporate customers with the other.

How AI companies can avoid ethics washing
I don’t think it’s fair to criticize companies for false effort in AI ethics quite yet. There are no generally accepted standards.
Following on the heels of San Francisco and Somerville, Massachusetts:
The Oakland city council voted last night to pass an ordinance banning city agencies from using facial recognition technology. The move sets up Oakland to become the third city in the United States to pass similar legislation.

Oakland city council votes to ban government use of facial recognition
Are we entering an AI cool-down, in which the hard tech gets acknowledged as hard and the effective tech gets banned? It makes a certain amount of sense, of course: effective tech is dangerous tech. We need good processes.
A new paper by Mark Lemley and Bryan Casey discusses the difficulties of formulating legal definitions of robots:
California enacted a statute making it illegal for an online “bot” to interact with consumers without first disclosing its non-human status. The law’s definition of “bot,” however, leaves much to be desired. Among other ambiguities, it bases its definition on the extent to which “the actions or posts of [an automated] account are not the result of a person,” with “person” defined to include corporations as well as “natural” people. Truthfully, it’s hard to imagine any online activity—no matter how automated—that is “not the result of a (real or corporate) person” at the end of the day.

You Might Be a Robot at 3.
As with obscenity, there appears to be no good definition of “robot.” The paper instead suggests that regulators focus on behavior, not definitions:
A good example of this approach is the Better Online Ticket Sales Act of 2016 (aka “BOTS Act”). The Act makes no attempt to define bot. Instead, it simply prohibits efforts to get around security protocols like CAPTCHA. We don’t actually need to decide whether you are a bot. As the BOTS Act demonstrates, we can achieve our goals by deciding whether someone (or something) is circumventing the protocol. Id. at 40.
One of the major problems is that so much unethical behavior is a combination of both human and automated activity. Meanwhile, human-in-the-loop processes are viewed as a solution to ethical AI problems. The idea that bots are ever truly autonomous is specious. We are the bots, and the bots are us.
This is a fantastic piece of work (and paper title!) about the benefits of human-in-the-loop AI processes.
Based on identified user needs, we designed and implemented SMILY (Figure 2), a deep-learning based CBIR [content-based image retrieval] system that includes a set of refinement mechanisms to guide the search process. Similar to existing medical CBIR systems, SMILY enables pathologists to query the system with an image, and then view the most similar images from past cases along with their prior diagnoses. The pathologist can then compare and contrast those images to the query image, before making a decision.

Human-centered tool for coping with Imperfect Algorithms During Medical Decision-Making (via The Gradient)
The system used three primary refinement tools: (1) refine by region; (2) refine by example; and (3) refine by concept. The authors reported that users found the software to offer greater mental support, and that users were naturally focused on explaining surprising results: “They make me wonder, ‘Oh am I making an error?'” Critically, this allowed users some insight into how the algorithm worked without an explicit explanation.
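The retrieve-then-refine pattern can be sketched in a few lines. This is a minimal illustration in the spirit of embedding-based CBIR, not SMILY’s actual implementation; the function names and the blending weight are my assumptions.

```python
# Sketch of embedding-based image retrieval with a "refine by example"
# step. Hypothetical names; SMILY's real architecture is more involved.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, gallery, k=5):
    """Indices of the k gallery embeddings most similar to the query."""
    scores = [cosine_similarity(query_vec, g) for g in gallery]
    return sorted(range(len(gallery)), key=lambda i: -scores[i])[:k]

def refine_by_example(query_vec, example_vec, weight=0.5):
    """Nudge the query embedding toward an image the user marked as a good match."""
    return (1 - weight) * query_vec + weight * example_vec
```

“Refine by region” and “refine by concept” would act analogously, re-embedding a cropped region or re-weighting concept-aligned dimensions before re-running the same nearest-neighbor search.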
I suspect human-in-the-loop AI processes are our best version of the future. They have also been proposed to resolve ethical concerns.
An interesting essay in The Atlantic by Arthur C. Brooks:
I suspect that my own terror of professional decline is rooted in a fear of death—a fear that, even if it is not conscious, motivates me to act as if death will never come by denying any degradation in my résumé virtues. This denial is destructive, because it leads me to ignore the eulogy virtues that bring me the greatest joy.
How can I overcome this tendency? The Buddha recommends, of all things, corpse meditation: Many Theravada Buddhist monasteries in Thailand and Sri Lanka display photos of corpses in various states of decomposition for the monks to contemplate. “This body, too,” students are taught to say about their own body, “such is its nature, such is its future, such is its unavoidable fate.” At first this seems morbid. But its logic is grounded in psychological principles—and it’s not an exclusively Eastern idea. “To begin depriving death of its greatest advantage over us,” Michel de Montaigne wrote in the 16th century, “let us deprive death of its strangeness, let us frequent it, let us get used to it; let us have nothing more often in mind than death.”

Your Professional Decline Is Coming (Much) Sooner Than You Think
I’m a big fan of WeCroak, an iOS app (very much in this tradition) that reminds you five times a day that you will die. The essay has tips for managing your inevitable professional decline, but mostly this is about acceptance. There is nothing more focusing than the prospect of death. And focus is key.
Joshua Sokol for Quanta Magazine:
And then there are animals that appear to offload part of their mental apparatus to structures outside of the neural system entirely. Female crickets, for example, orient themselves toward the calls of the loudest males. They pick up the sound using ears on each of the knees of their two front legs. These ears are connected to one another through a tracheal tube. Sound waves come in to both ears and then pass through the tube before interfering with one another in each ear. The system is set up so that the ear closest to the source of the sound will vibrate most strongly.
In crickets, the information processing — the job of finding and identifying the direction that the loudest sound is coming from — appears to take place in the physical structures of the ears and tracheal tube, not inside the brain. Once these structures have finished processing the information, it gets passed to the neural system, which tells the legs to turn the cricket in the right direction.

The Thoughts of a Spiderweb
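A toy numerical sketch makes the interference mechanism concrete. This is my own simplification, not a model from the article: treat each eardrum as a pressure-difference receiver, driven by the direct wave on its outside and the tube-routed wave from the other ear on its inside. All constants are illustrative assumptions.

```python
# Toy interference model of cricket directional hearing.
# All numbers are illustrative assumptions, not measurements.
import numpy as np

F = 4800.0             # assumed calling-song frequency, Hz
C = 343.0              # speed of sound in air, m/s
INTERAURAL = 1e-3 / C  # assumed ~1 mm spacing between the ears, as a delay
TUBE = 1.0 / (4 * F)   # assumed tracheal-tube delay of a quarter period

def ear_vibration(external_delay, internal_delay):
    """Peak response of an eardrum driven by the pressure difference
    between the direct (external) wave and the tube-routed (internal) wave."""
    t = np.linspace(0.0, 1.0 / F, 2000)
    external = np.sin(2 * np.pi * F * (t - external_delay))
    internal = np.sin(2 * np.pi * F * (t - internal_delay))
    return float(np.max(np.abs(external - internal)))

# Sound arriving from the cricket's left:
left_ear = ear_vibration(0.0, INTERAURAL + TUBE)   # near ear
right_ear = ear_vibration(INTERAURAL, TUBE)        # far ear
```

Running this, `left_ear` exceeds `right_ear`: the asymmetry in path lengths does the direction-finding before any neuron fires, which is the point of the article.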
The broader concept is known as “extended cognition,” and in my view it may just be semantics. Many natural and artificial features of our environments, from ear shape to computers, amplify and filter information in ways that reduce cognitive load. I’d hesitate to describe these as “cognition.” But intelligence as a concept is certainly broader than brains.
OpenAI has released a blog post and paper addressing the problem of collective action in AI ethics:
If companies respond to competitive pressures by rushing a technology to market before it has been deemed safe, they will find themselves in a collective action problem. Even if each company would prefer to compete to develop and release systems that are safe, many believe they can’t afford to do so because they might be beaten to market by other companies.

Why Responsible AI Development Needs Cooperation on Safety
And they identify four strategies to address this issue:
- Promote accurate beliefs about the opportunities for cooperation;
- Collaborate on shared research and engineering challenges;
- Open up more aspects of AI development to appropriate oversight and feedback; and
- Incentivize adherence to high standards of safety.
The bottom line is that the normal factors encouraging the development of safe products (the market, liability law, regulation, etc.) may not be present or sufficient in the race to develop AI products. Self-regulation will be important if companies want to maintain that government regulation is unnecessary.
Bryan Caplan:

Angry lamentation about the effects of new tech on privacy has flabbergasted me the most. For practical purposes, we have more privacy than ever before in human history. You can now buy embarrassing products in secret. You can read or view virtually anything you like in secret. You can interact with over a billion people in secret.
Then what privacy have we lost? The privacy to not be part of a Big Data Set. The privacy to not have firms try to sell us stuff based on our previous purchases. In short, we have lost the kinds of privacy that no prudent person loses sleep over.
The prudent will however be annoyed that – thanks to populist pressure – we now have to click “I agree” fifty times a day to access our favorite websites. Implicit consent was working admirably, but now we all have to suffer to please people who are impossible to please. . . .

Historically Hollow: The Cries of Populism
Meanwhile, of course, the government takes state driver’s license biometrics from (mostly innocent!) people without notice or consent.