“The internet is less free, more fragmented, and less secure”

The Council on Foreign Relations, described by Wikipedia as an “American think tank specializing in U.S. foreign policy and international relations,” has issued a report titled Confronting Reality in Cyberspace:

The major findings of the Task Force are as follows:

The era of the global internet is over.

U.S. policies promoting an open, global internet have failed, and Washington will be unable to stop or reverse the trend toward fragmentation.

Data is a source of geopolitical power and competition and is seen as central to economic and national security.

The report is a warning that the U.S. needs to get serious about a fragmenting internet or risk losing digital leadership entirely.

YouTube is ground zero for the attention economy

The attention economy helps explain much of the news, politics, and media we see these days. The way people receive information has changed more in the last five years than perhaps in the whole of prior human history, and certainly more than at any time since the invention of the printing press.

And YouTube, it seems, is ground zero for the hyper-refinement of data-driven, attention-seeking algorithms:

In some ways, YouTube’s algorithm is an immensely complicated beast: it serves up billions of recommendations a day. But its goals, at least originally, were fairly simple: maximize the likelihood that the user will click on a video, and the length of time they spend on YouTube. It has been stunningly successful: 70 percent of time spent on YouTube is watching recommended videos, amounting to 700 million hours a day. Every day, humanity as a collective spends a thousand lifetimes watching YouTube’s recommended videos.

The design of this algorithm, of course, is driven by YouTube’s parent company, Alphabet, maximizing its own goal: advertising revenue, and hence the profitability of the company. Practically everything else that happens is a side effect. The neural nets of YouTube’s algorithm form connections—statistical weightings that favor some pathways over others—based on the colossal amount of data that we all generate by using the site. It may seem an innocuous or even sensible way to determine what people want to see; but without oversight, the unintended consequences can be nasty.

Guillaume Chaslot, a former engineer at YouTube, has helped to expose some of these. Speaking to TheNextWeb, he pointed out, “The problem is that the AI isn’t built to help you get what you want—it’s built to get you addicted to YouTube. Recommendations were designed to waste your time.”

More than this: they can waste your time in harmful ways. Inflammatory, conspiratorial content generates clicks and engagement. If a small subset of users watches hours upon hours of political or conspiracy-theory content, the pathways in the neural net that recommend this content are reinforced.

The result is that users can begin with innocuous searches for relatively mild content, and find themselves quickly dragged towards extremist or conspiratorial material. A survey of 30 attendees at a Flat Earth conference showed that all but one originally came upon the Flat Earth conspiracy via YouTube, with the lone dissenter exposed to the ideas from family members who were in turn converted by YouTube.

Algorithms Are Designed to Addict Us, and the Consequences Go Beyond Wasted Time

Conspiracy theories are YouTube theories. Maybe that should be their new name.
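As a sanity check on the quoted figure: 700 million hours a day, divided by the roughly 700,000 hours in an 80-year life, is indeed about a thousand lifetimes. The feedback loop the article describes (a model rewarded for clicks and watch time, with engagement reinforcing whatever pathway produced it) can be caricatured in a few lines. The sketch below is a toy illustration of that dynamic with invented categories and numbers; it is not YouTube’s actual system.

```python
import random

# Toy sketch of an engagement-maximizing recommender. Categories, weights,
# and watch-time numbers are invented for illustration.
weights = {"cooking": 1.0, "politics": 1.0, "conspiracy": 1.0}

# Assumed average minutes a user keeps watching once a category is served;
# in this toy world, inflammatory content holds attention longest.
avg_watch_minutes = {"cooking": 4, "politics": 9, "conspiracy": 15}

def recommend() -> str:
    """Sample a category in proportion to its current pathway weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for category, w in weights.items():
        r -= w
        if r <= 0:
            break
    return category

def simulate(rounds: int = 10_000) -> None:
    for _ in range(rounds):
        category = recommend()
        watched = random.expovariate(1 / avg_watch_minutes[category])
        # The only training signal is watch time: whatever keeps people
        # watching gets its pathway reinforced.
        weights[category] += 0.001 * watched

simulate()
print({k: round(v, 1) for k, v in weights.items()})
```

After enough rounds the weights drift toward the stickiest content, even though no user ever asked for more conspiracy videos; nothing in the loop knows or cares what the videos are about.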

Segmented social media, meet segmented augmented reality

Users on social media are often in their own universes. Liberals often don’t even see the content that conservatives see, and vice versa.

Imagine if that kind of segmentation extended to augmented reality as well:

Imagine a world that’s filled with invisible graffiti. Open an app, point your phone at a wall, and blank brick or cement becomes a canvas. Create art with digital spraypaint and stencils, and an augmented reality system will permanently store its location and placement, creating the illusion of real street art. If friends or social media followers have the app, they can find your painting on a map and come see it. You might scrawl an in-joke across the door of a friend’s apartment, or paint a gorgeous mural on the side of a local store.

Now imagine a darker world. Members of hate groups gleefully swap pictures of racist tags on civil rights monuments. Students bully each other by spreading vicious rumors on the walls of a target’s house. Small businesses get mobbed beyond capacity when a big influencer posts a sticker on their window. The developers of Mark AR, an app that’s described as “the world’s first augmented reality social platform,” are trying to create the good version of this system. They’re still figuring out how to avoid the bad one.

Is the world ready for virtual graffiti?

I first read China Miéville’s The City & the City many years ago, and I keep thinking about how strange it was then, and how much the ideas have resonated since.
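The quoted description of persistent, follower-scoped “marks” also maps onto a fairly simple data model: a piece of content anchored to a location, plus a visibility rule deciding who gets to see it. The names and fields below are assumptions for illustration, not Mark AR’s actual design; the point is how easily the same wall can show different things to different people.

```python
from dataclasses import dataclass, field

@dataclass
class Mark:
    """A piece of AR content pinned to a real-world location."""
    author: str
    lat: float
    lon: float
    content_url: str                              # the stored artwork asset
    visible_to: set = field(default_factory=set)  # empty set means public

    def can_see(self, viewer: str, follows: dict) -> bool:
        """Visible if public, explicitly shared, or the viewer follows the author."""
        if not self.visible_to:
            return True
        return viewer in self.visible_to or self.author in follows.get(viewer, set())

# Two people pointing their phones at the same wall see different layers.
follows = {"bob": {"alice"}}                      # bob follows alice
mural = Mark("alice", 40.7128, -74.0060, "https://example.com/mural.png",
             visible_to={"bob"})
print(mural.can_see("bob", follows))    # True
print(mural.can_see("carol", follows))  # False
```

The moderation problem the article worries about lives almost entirely in what goes into content_url and who ends up in visible_to.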

There is no Western AI plan

Tim Wu, writing in the New York Times:

But if there is even a slim chance that the race to build stronger A.I. will determine the future of the world — and that does appear to be at least a possibility — the United States and the rest of the West are taking a surprisingly lackadaisical and alarmingly risky approach to the technology.

The plan seems to be for the American tech industry, which makes most of its money in advertising and selling personal gadgets, to serve as champions of the West. . . .

To exaggerate slightly: If this were 1957, we might as well be hoping that the commercial airlines would take us to the moon.

America’s Risky Approach to Artificial Intelligence

Planning requires paying attention. We’re a little distracted in the West these days. And Russia and China love that.

Free speech under assault from both the left and right

The Economist pens an essay on freedom of expression that is worth reading in full:

Who is the greater threat to free speech: President Donald Trump or campus radicals? Left and right disagree furiously about this. But it is the wrong question, akin to asking which of the two muggers currently assaulting you is leaving more bruises. What matters is that big chunks of both left and right are assaulting the most fundamental of liberties—the ability to say what you think. . . .

. . . Human beings are not free unless they can express themselves. Minds remain narrow unless exposed to different viewpoints. Ideas are more likely to be refined and improved if vigorously questioned and tested. Protecting students from unwelcome ideas is like refusing to vaccinate them against measles. When they go out into the world, they will be unprepared for its glorious but sometimes challenging diversity.

As societies polarise, free speech is under threat. It needs defenders

A More Nuanced Encryption Policy Debate

Bruce Schneier on a speech by Attorney General Barr on encryption policy:

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how: an approach we have derisively named “nerd harder.”

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having – not the fake one about whether or not we can have both security and surveillance.

Attorney General William Barr on Encryption Policy

Schneier still believes that keeping everyone secure is more important than providing backdoors to law enforcement, but at least everyone is starting to acknowledge the reality that such backdoors weaken security.
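The tradeoff Schneier describes can be made concrete with a toy key-escrow sketch: every message is encrypted once for the recipient and once for a single exceptional-access key, so lawful access and mass compromise become the same capability. This is a deliberately simplified illustration using the third-party cryptography package’s symmetric Fernet primitive; it is not any actual proposed backdoor design.

```python
from cryptography.fernet import Fernet   # pip install cryptography

escrow_key = Fernet.generate_key()        # the single exceptional-access key
escrow = Fernet(escrow_key)               # held by "law enforcement"

def send(message: bytes, recipient: Fernet) -> tuple:
    """Encrypt once for the recipient and once for the escrow holder."""
    return recipient.encrypt(message), escrow.encrypt(message)

alice = Fernet(Fernet.generate_key())
for_alice, for_escrow = send(b"meet at noon", alice)

# The upside in Barr's framing: the escrow holder can read the suspects' traffic.
print(escrow.decrypt(for_escrow))

# The downside in Schneier's framing: anyone who steals that one escrow key
# can read everyone's traffic; the backdoor is a single point of failure.
stolen_copy = Fernet(escrow_key)
print(stolen_copy.decrypt(for_escrow))
```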

Defining a “bot” is hard

A new paper by Mark Lemley and Bryan Casey discusses the difficulties of formulating legal definitions of robots:

California enacted a statute making it illegal for an online “bot” to interact with consumers without first disclosing its non-human status. The law’s definition of “bot,” however, leaves much to be desired. Among other ambiguities, it bases its definition on the extent to which “the actions or posts of [an automated] account are not the result of a person,” with “person” defined to include corporations as well as “natural” people. Truthfully, it’s hard to imagine any online activity—no matter how automated—that is “not the result of a (real or corporate) person” at the end of the day.

You Might Be a Robot, at 3.

As with obscenity, there do not appear to be any good definitions of “robot.” The paper instead suggests that regulators focus on behavior, not definitions:

A good example of this approach is the Better Online Ticket Sales Act of 2016 (aka “BOTS Act”). The Act makes no attempt to define bot. Instead, it simply prohibits efforts to get around security protocols like CAPTCHA. We don’t actually need to decide whether you are a bot. As the BOTS Act demonstrates, we can achieve our goals by deciding whether someone (or something) is circumventing the protocol.

Id. at 40.

One of the major problems is that so much unethical behavior is a combination of human and automated activity. Meanwhile, human-in-the-loop processes are viewed as a solution to ethical AI problems. The idea that bots are ever truly autonomous is specious. We are the bots, and the bots are us.
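The BOTS Act approach (regulate circumvention of the protocol rather than litigate what counts as a bot) has a direct analogue in how a ticketing service might actually enforce it. A minimal sketch, with invented field names and thresholds:

```python
from dataclasses import dataclass

@dataclass
class PurchaseAttempt:
    session_id: str
    passed_captcha: bool
    requests_last_minute: int

def violates_protocol(attempt: PurchaseAttempt, rate_limit: int = 20) -> bool:
    """Flag circumvention of the purchase protocol, never the caller's identity.

    Nothing here decides whether the session is a bot or a human; it only
    asks whether the CAPTCHA-and-rate-limit protocol was circumvented.
    """
    return (not attempt.passed_captcha) or attempt.requests_last_minute > rate_limit

print(violates_protocol(PurchaseAttempt("s1", passed_captcha=True, requests_last_minute=3)))    # False
print(violates_protocol(PurchaseAttempt("s2", passed_captcha=False, requests_last_minute=500)))  # True
```

A human paying a click farm to hammer the endpoint trips the same rule as a script, which is the point: the mixed human-and-automated cases that make the definitional question hard simply never come up.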

OpenAI identifies AI Ethics as a collective action problem

OpenAI has released a blog post and paper addressing the problem of collective action in AI ethics:

If companies respond to competitive pressures by rushing a technology to market before it has been deemed safe, they will find themselves in a collective action problem. Even if each company would prefer to compete to develop and release systems that are safe, many believe they can’t afford to do so because they might be beaten to market by other companies.

Why Responsible AI Development Needs Cooperation on Safety

And they identify four strategies to address this issue:

  1. Promote accurate beliefs about the opportunities for cooperation
  2. Collaborate on shared research and engineering challenges
  3. Open up more aspects of AI development to appropriate oversight and feedback
  4. Incentivize adherence to high standards of safety

The bottom line is that the normal factors that encourage the development of safe products (the market, liability laws, regulation, etc.) may not be present or sufficient in the race to develop AI products. Self-regulation will be important if companies want to maintain that government regulation is not necessary.
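The quoted passage has the structure of a classic collective action problem, and a toy payoff matrix makes the dynamic concrete. The numbers below are illustrative assumptions, not from the OpenAI paper: each lab would prefer the world where both invest in safety, but rushing is the individually dominant move.

```python
# (my_choice, rival_choice) -> my payoff; higher is better. Invented numbers.
payoff = {
    ("safe", "safe"): 3,   # both take time on safety: best shared outcome
    ("safe", "rush"): 0,   # I am careful, the rival ships first and wins the market
    ("rush", "safe"): 4,   # I ship first: short-term win, safety corners cut
    ("rush", "rush"): 1,   # race to the bottom
}

def best_response(rival_choice: str) -> str:
    return max(("safe", "rush"), key=lambda mine: payoff[(mine, rival_choice)])

# Whatever the rival does, rushing pays more for me...
print(best_response("safe"), best_response("rush"))   # rush rush
# ...so both labs rush and each gets 1, even though (safe, safe) pays 3 apiece.
```

OpenAI’s four strategies are, in effect, ways of changing those payoffs or making the (safe, safe) outcome enforceable.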

Major supplier of police body cameras concludes facial recognition is not reliable enough to sell ethically

Chaim Gartenberg, writing for The Verge:

Axon (formerly known as Taser) has been shifting its business toward body cameras for police officers for the past few years, but today, the company is making a big change. At the recommendation of its AI ethics board, “Axon will not be commercializing face matching products on our body camera,” the company announced in a blog post today.

[. . . . .]

According to the board’s report, “Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras.” It cites that, at the very least, more accurate technology that “performs equally well across races, ethnicities, genders, and other identity groups” would be required, assuming facial recognition technology for police body cameras can ever be considered ethical at all, a conversation that the board has begun to examine.

Axon (formerly Taser) says facial recognition on police body cams is unethical

One issue we keep sidestepping is that facial recognition technology is never going to be either perfectly accurate or perfectly equal across all classes of people. In other words, no matter how accurate the technology becomes, there will always be some small difference in performance between, for example, recognizing light-skinned and dark-skinned people. So the question becomes: is any difference in accuracy tolerable, and if so, how much?
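Asking how much difference is tolerable presupposes that the disparity is measured and compared in the first place. A minimal sketch of the kind of per-group comparison an ethics board or regulator would need, with made-up numbers:

```python
# Invented false-match rates: the fraction of non-matching faces that the
# system wrongly "recognizes," broken out by demographic group.
false_match_rate = {
    "group_a": 0.0010,
    "group_b": 0.0034,
}

worst = max(false_match_rate, key=false_match_rate.get)
best = min(false_match_rate, key=false_match_rate.get)
ratio = false_match_rate[worst] / false_match_rate[best]

print(f"{worst} is falsely matched {ratio:.1f}x as often as {best}")
```

Both rates can keep shrinking as the models improve while the ratio between them persists, which is exactly the question left open above: how large a gap, in absolute or relative terms, is acceptable for a camera worn by police?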

A proposal to tax targeted digital ads

Paul Romer proposes using tax policy, rather than antitrust, to nudge privacy in the right direction:

Of course, companies are incredibly clever about avoiding taxes. But in this case, that’s a good thing for all of us. This tax would spur their creativity. Ad-driven platform companies could avoid the tax entirely by switching to the business model that many digital companies already offer: an ad-free subscription. Under this model, consumers know what they give up, and the success of the business would not hinge on tracking customers with ever more sophisticated surveillance techniques. A company could succeed the old-fashioned way: by delivering a service that is worth more than it costs.

A Tax That Could Fix Big Tech

Not a bad idea.