Food Hacking

I don’t know if it even qualifies as a “hack,” but this automation by Chris Buetti is fantastic:

I created an Instagram page that showcased pictures of New York City’s skylines, iconic spots, elegant skyscrapers — you name it. The page has amassed a following of over 25,000 users in the NYC area and it’s still rapidly growing.

I reach out to restaurants in the area either via Instagram’s direct messaging or email and offer to post a positive review in return for a free entree or at least a discount. Almost every restaurant I’ve messaged came back at me with a compensated meal or a gift card. Most places have an allocated marketing budget for these types of things so they were happy to offer me a free dining experience in exchange for a promotion. I’ve ended up giving some of these meals away to my friends and family because at times I had too many queued up to use myself.

The beauty of this all is that I automated the whole thing. And I mean 100% of it. I wrote code that finds these pictures or videos, makes a caption, adds hashtags, credits where the picture or video comes from, weeds out bad or spammy posts, posts them, follows and unfollows users, likes pictures, monitors my inbox, and most importantly — both direct messages and emails restaurants about a potential promotion. Since its inception, I haven’t even really logged into the account.

How I Eat For Free in NYC Using Python, Automation, Artificial Intelligence, and Instagram

This is one of the best casual uses of Python I’ve ever seen. It is rare to find a process with such tangible benefits that can be 100% automated, but he found one and built the automation. Kudos.
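Buetti walks through his own implementation in the post; the sketch below is just a minimal, hypothetical outline of what such a pipeline looks like in Python. Every helper (find_candidate_posts, is_spammy, build_caption, publish, message_restaurants) is a stand-in for the kind of component he describes, not his code.

    # Hypothetical sketch of an automated repost-and-outreach pipeline.
    # Every helper below is a placeholder, not Buetti's actual implementation.
    import random
    import time

    def find_candidate_posts():
        """Pull candidate NYC photos/videos, each tagged with its source account."""
        return []  # placeholder: dicts like {"media": ..., "source": "account", "description": "..."}

    def is_spammy(post):
        """Weed out low-quality or spammy content before reposting."""
        return False  # placeholder heuristic or classifier

    def build_caption(post):
        """Compose a caption, add hashtags, and credit the original account."""
        hashtags = "#nyc #newyork #skyline"
        return f"{post.get('description', 'New York City')} (credit: @{post['source']}) {hashtags}"

    def publish(post, caption):
        """Post the media via whatever Instagram client or API you choose."""
        pass  # placeholder

    def message_restaurants(targets):
        """DM or email restaurants a templated pitch: a promotional post in exchange for a meal."""
        for restaurant in targets:
            pass  # placeholder: send the pitch, log the outreach

    def run_once():
        candidates = [p for p in find_candidate_posts() if not is_spammy(p)]
        if candidates:
            post = random.choice(candidates)
            publish(post, build_caption(post))
        message_restaurants(targets=[])  # placeholder list of restaurants to pitch

    if __name__ == "__main__":
        while True:                 # run on a schedule, e.g. a few times a day
            run_once()
            time.sleep(6 * 60 * 60)

The loop is the boring part; the value is in the individual pieces: the content filtering, the captioning, the follow/unfollow timing, and the outreach templates.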

Facebook and Housing Discrimination

The Department of Housing and Urban Development sued Facebook for housing discrimination. The allegations are fascinating and, although we mostly knew all of this before (based on reporting by ProPublica), I think most people do not realize how precisely advertisements can be targeted on Facebook. For example:

Respondent [Facebook] has provided a toggle button that enables advertisers to exclude men or women from seeing an ad, a search-box to exclude people who do not speak a specific language from seeing an ad, and a map tool to exclude people who live in a specified area from seeing an ad by drawing a red line around that area. Respondent also provides drop-down menus and search boxes to exclude or include (i.e., limit the audience of an ad exclusively to) people who share specified attributes. Respondent has offered advertisers hundreds of thousands of attributes from which to choose, for example to exclude “women in the workforce,” “moms of grade school kids,” “foreigners,” “Puerto Rico Islanders,” or people interested in “parenting,” “accessibility,” “service animal,” “Hijab Fashion,” or “Hispanic Culture.” Respondent also has offered advertisers the ability to limit the audience of an ad by selecting to include only those classified as, for example, “Christian” or “Childfree.”

Complaint at paragraph 14.

But Facebook’s system doesn’t just enable this kind of micro-targeting. It also refuses to show ads to users its system judges unlikely to interact with them, even if the advertiser wants to target those users:

Even if an advertiser tries to target an audience that broadly spans protected class groups, Respondent’s ad delivery system will not show the ad to a diverse audience if the system considers users with particular characteristics most likely to engage with the ad. If the advertiser tries to avoid this problem by specifically targeting an unrepresented group, the ad delivery system will still not deliver the ad to those users, and it may not deliver the ad at all.

Complaint at paragraph 19.

Thus, the allegation is that the system functions “just like an advertiser who intentionally targets or excludes users based on their protected class.”
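To see how that can happen without anyone asking for it, here is a toy simulation with made-up numbers (my own illustration, not Facebook’s system): the advertiser targets two groups equally, but the delivery system shows the ad only to the users its engagement model scores highest, and those scores happen to correlate with group membership.

    # Toy illustration: an engagement-optimizing delivery system skews an ad's
    # audience even when the advertiser targets both groups equally.
    # All numbers are invented for illustration.
    import random

    random.seed(0)

    # The advertiser targets 10,000 users, split evenly between groups A and B.
    users = ["A"] * 5000 + ["B"] * 5000

    def predicted_engagement(group):
        # Hypothetical model whose training data scores group A higher on average.
        base = 0.6 if group == "A" else 0.4
        return base + random.gauss(0, 0.1)

    # The delivery system shows the ad only to the 2,000 highest-scoring users.
    delivered = sorted(users, key=predicted_engagement, reverse=True)[:2000]

    share_a = delivered.count("A") / len(delivered)
    print(f"Targeted: 50% group A.  Delivered: {share_a:.0%} group A.")

Nothing in the targeting was discriminatory; the skew comes entirely from optimizing delivery on predicted engagement, which is the mechanism the complaint describes.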

There is an AI angle to this as well. The complaint specifically references Facebook’s “machine learning and other prediction techniques” as enabling this kind of targeting. And while folks may disagree on whether this is “AI” or just sophisticated statistical analysis, it is a concrete allegation of real-world harm caused by big data and computation. And I think it is an interesting case study in whether we need extra laws to prevent AI harm.

Here is a hypothesis: our existing laws prohibiting various types of harm will work just fine or better in the AI context. Housing discrimination is already illegal, whether you do it subjectively and intentionally or objectively by sophisticated computation. And in fact, it’s easier to prove the latter. The AI takes input and outputs a result. That result is objective and (with the help of legal process) transparent. The AI doesn’t rationalize its decisions or try to explain away its hidden bias because it fears social judgment. If it operates in a biased manner, we will see it and we can fix it.

There is a lot of anxiety around whether our laws are sufficient for the AI future we envision. Will product liability laws be sufficient to determine who is at fault when a self-driving vehicle crashes? Will anti-discrimination laws be sufficient to disincentivize AI-facilitated bias? Yes, yes I think they will. Perhaps the law is more robust than we fear.

US Government Tries to Address AI

Recently there’s been a push by the U.S. government to figure out this AI thing. After all, China has a big long-term plan. We should have one too, right?

So President Trump issued an executive order in February, and the White House put together this glossy website to talk about AI initiatives.

It’s all just noise. Here’s what the executive order says:

  • We should continue to lead in AI by (a) leading; (b) developing standards; (c) training; (d) fostering public trust; and (e) promoting international cooperation.
  • All departments should pursue these objectives: (a) invest in AI; (b) invest in data; (c) reduce barriers to using AI (but not so much that it impacts safety etc.); (d) develop secure standards; (e) train people; and (f) develop an action plan!
  • The National Science and Technology Council Select Committee on Artificial Intelligence should coordinate all this.
  • AI R&D is a funding priority, depending on your mission of course.
  • Publish a bunch of stuff in the Federal Register asking for public comments, and consider these goals within 120-180 days of the order.

Bottom line: someone should really start thinking about this stuff and maybe we should spend some money on it? There is zero vision in any of this.

Thinking About Thinking

Rich Sutton, a prominent Canadian AI researcher, on a lesson learned from a career in AI:

We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.

The Bitter Lesson

There is a fair amount of controversy over this point, but it contains a core of truth. We think we know how we think, and we think we can code it. But it is precisely the activities that we humans find so easy that have been difficult to replicate on computers.

The average human finds it trivial to parse objects from visual inputs, to recognize them easily when they are rotated, and to reach out and grasp them gently. And this lack of effort causes us to believe that it is easy. But these abilities are simply presented to us by our brains, punished and formed by millions of years of failure at these tasks. These tasks are not easy, but the difficulty has been hidden.

The bottom line is we don’t really understand how we think. This is counter-intuitive; after all, we are in our own heads. But in some ways understanding the brain is like the eye that tries to look at itself. If the brain were simpler, it would be easier to understand. But then we’d be simpler as well.

It is somewhat irrational to believe that computers will find a truly different way to think than humans do, at least if we expect them to complete tasks in our world. Evolution is a brutally efficient designer. But coding AIs to think like we think often fails, because we don’t know how we think. And coding AIs to contain the knowledge we have often fails, because we always know more than we can code. The only approach that scales is letting the machine learn for itself.

OpenAI resorts to capitalism

OpenAI has a mission to ensure that artificial intelligence “benefits all humanity.” They were founded in 2015, are based in San Francisco, and employ around 100 people doing AI research on safety, policy, and general capabilities. They’ve had a number of high-profile splashes in the capability space. They were also a non-profit until Monday, operating on an initial $1B pledged by Elon Musk, Sam Altman, and others.

Turns out AI research requires a lot of computers, computers cost money, and the charity of wealthy and interested tech founders is not unlimited. So for-profit it goes. But “capped” for-profit.

In a blog post, OpenAI announced the transition from a non-profit to a “capped” for-profit in which investors in the initial round will receive no more than 100x returns. That’s a pretty good return, so here’s hoping they raise a lot of money. It’s an important mission, but apparently hard to pursue on a non-profit basis.

User Interfaces, Boeing, Airbus, and the 737 Max 8

The crash of Ethiopian Airlines Flight 302 is a heartbreaking tragedy, and especially outrageous if it turns out that the pilots fought their own computer for control of the airplane. And of course the crash has prompted another round of hand-wringing over whether planes are just too complicated to fly.

There is a very long history of concern over the complexity of flying machines. In fact, it’s why the venerated checklist exists, as described fantastically in Atul Gawande’s The Checklist Manifesto.

But planes and other devices have gotten ever more complex to fly. And at the same time, we have grown less tolerant of human mistakes, which are still the cause of most crashes.

The major airframe manufacturers, Boeing and Airbus, have developed different approaches to the problem of airplane safety. I won’t go into the details here, but they basically come down to whether you trust the pilot or the automation more. You can find plenty of examples of problems with both approaches.

But in a number of the most recent incidents, pilots have had difficulty switching control from the automation. As pilot Mac McClellan writes, pilots have always been required to identify a flight automation failure and then disable it:

What’s critical to the current, mostly uninformed discussion is that the 737 MAX system is not triply redundant. In other words, it can be expected to fail more frequently than one in a billion flights, which is the certification standard for flight critical systems and structures.

. . . . .

Though the pitch system in the MAX is somewhat new, the pilot actions after a failure are exactly the same as would be for a runaway trim in any 737 built since the 1960s. As pilots we really don’t need to know why the trim is running away, but we must know, and practice, how to disable it.

. . . . .

But airline accidents have become so rare I’m not sure what is still acceptable to the flying public. When Boeing says truthfully and accurately that pilots need only do what they have been trained to do for decades when a system fails, is that enough to satisfy the flying public and the media frenzy?

I’m not sure. But I am sure the future belongs to FBW [fly-by-wire] and that saying pilots need more training and better skills is no longer enough. The flying public wants to get home safely no matter who is allowed to be at the controls.

Can Boeing Trust Pilots?
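For a rough sense of where that one-in-a-billion figure comes from (the per-channel rate below is my own illustrative assumption, not a number from the article): if each of three independent channels fails on the order of once per thousand flights, all three failing on the same flight happens at roughly (1/1,000)^3, or one in a billion. A system that depends on a single sensor fails as often as that one sensor does.

    # Illustrative arithmetic only; the per-channel failure rate is an assumed round number.
    per_channel = 1e-3                    # assumed: one channel fails ~once per 1,000 flights
    triple_redundant = per_channel ** 3   # all three independent channels fail on the same flight
    single_channel = per_channel          # a single-channel system fails with its one sensor
    print(f"triple-redundant: {triple_redundant:.0e}/flight, single-channel: {single_channel:.0e}/flight")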

For a long time, Boeing has argued that pilots need ultimate control of the aircraft. And they have relied on pilots to intervene when flight automation is not triply redundant. Airbus, on the other hand, has argued that pilots make too many mistakes and that computers should prevent pilots from making unsafe maneuvers.

The lesson of this incident may ultimately be that we cannot allow computers to make mistakes because we cannot rely on pilots to fix them. And if we succeed in not allowing computers to make mistakes, do we need pilots?

AI Money and Claims

One report says: lots of money is being thrown at AI!

AI-related companies raised $9.3B in 2018, a 72% increase compared to 2017.

PwC MoneyTree Report, Q4 2018

Another report says lots of European AI startups might not actually be focused on AI?

According to the survey from London venture capital firm MMC, 40 percent of European startups that are classified as AI companies don’t actually use artificial intelligence in a way that is “material” to their businesses. MMC studied some 2,830 AI startups in 13 EU countries to come to its conclusion, reviewing the “activities, focus, and funding” of each firm.

Forty percent of ‘AI startups’ in Europe don’t actually use AI, claims report

When a beverage maker’s stock soars after adding “blockchain” to its name on the NASDAQ, it’s not hard to see why companies are perfectly fine being thought of as AI startups despite evidence to the contrary. Caveat emptor.

Daniel Dennett on the desirability of general AI

He sees only risks and no rewards from generalized (i.e. conscious) artificial intelligence:

WE DON’T NEED artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights and should not have feelings that could be hurt or be able to respond with resentment to “abuses” rained on them by inept users.

. . . . .

So what we are creating are not—should not be—conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates, no personality (but all sorts of foibles and quirks that would no doubt be identified as the “personality” of the system): boxes of truths (if we’re lucky) almost certainly contaminated with a scattering of falsehoods.

It will be hard enough learning to live with them without distracting ourselves with fantasies about the Singularity in which these AIs will enslave us, literally. The human use of human beings will soon be changed—once again—forever, but we can take the tiller and steer between some of the hazards if we take responsibility for our trajectory.

Will AI Achieve Consciousness? Wrong Question

Even if generalized AI is not the explicit goal, it may be the natural consequence of building devices that can fend for themselves without human intervention (in, for example, interstellar space). After all, it seems likely that human generalized intelligence evolved only as a necessary by-product of human survival needs, not as a specific goal.

Avoiding the creation of generalized AI (even if we wanted to) may be more difficult than simply deciding against it. And that’s the concern.

State Regulation of AI Technology

With the federal government seemingly unwilling or unable to regulate cybersecurity, data privacy, and artificial intelligence, the states are increasingly active, particularly on facial recognition technology.

A lot of this activity is around forming task forces, but a fair amount also addresses algorithmic impact:

Legislation referring specifically to “artificial intelligence” is currently pending in at least 13 states, according to LexisNexis State Net’s legislative tracking system. Several of the bills provide for the creation of AI study commissions or task forces, while a few deal with education or education funding.

Only four states are considering bills addressing facial recognition camera technology, including Washington, which is considering measures (HB 1654 and SB 5528) concerning the use of such technology by government entities. But at least 27 states are considering bills dealing with the subject of data collection or “data privacy” specifically.

And although there isn’t any pending legislation referencing an “algorithmic impact assessment,” there are bills in 17 states that mention “algorithm.” They include measures dealing with the use of algorithms to censor offensive, political or religious speech on social media (Arkansas HB 1028, Iowa HB 317, Kansas H 2322, and Oklahoma SB 533); calculate insurance scores (Michigan SB 88, Missouri HB 647, Oregon HB 2703 and Virginia HB 2230); and gauge the risk of coronary heart disease (South Carolina HB 3598 and SB 368).

States May Take The Lead On Regulating AI (paywall)