Here’s an application that could use some transparency:
When Conor Sprouls, a customer service representative in the call center of the insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, A.I. tells him how he’s doing.
Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.
Sound sleepy? The software displays an “energy cue,” with a picture of a coffee cup.
Not empathetic enough? A heart icon pops up.
A Machine May Not Take Your Job, but One Could Become Your Boss
I have no idea how this AI might have been trained, and the article sheds no light.
This is fascinating in light of China’s use of AI for automated racism against the minority Muslim population in Xinjiang:
A group of leading institutes and companies have published a set of ethical standards for AI research and called for cross-border cooperation amid vigorous development of the industry.
The Beijing AI Principles was jointly unveiled Saturday by the Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University, Institute of Automation and Institute of Computing Technology in Chinese Academy of Sciences, and an AI industrial league involving firms like Baidu, Alibaba and Tencent.
“The development of AI is a common challenge for all humanity. Only through coordination on a global scale can we build AI that is beneficial to both humanity and nature,” said BAAI director Zeng Yi.
Beijing publishes AI ethical standards, calls for int’l cooperation
The principles themselves are as laudable and vague as most other frameworks: “Do Good,” “Be Ethical.” They explicitly call out the human rights of privacy, dignity, freedom, and autonomy. It’s difficult to say whether this is a sign of internal dissent or strategic positioning, given the primarily academic and commercial origins of the framework.
Jon Porter, writing for The Verge:
The Department of Homeland Security says it expects to use facial recognition technology on 97 percent of departing passengers within the next four years. The system, which involves photographing passengers before they board their flight, first started rolling out in 2017, and was operational in 15 US airports as of the end of 2018.
The facial recognition system works by photographing passengers at their departure gate. It then cross-references this photograph against a library populated with images from visa and passport applications, as well as those taken by border agents when foreigners enter the country.
US facial recognition will cover 97 percent of departing airline passengers within four years
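That cross-referencing step is, at its core, a nearest-neighbor search over face embeddings. DHS hasn’t published its matching algorithm, so the following is only a toy sketch of how such a lookup typically works: it assumes faces have already been reduced to numeric embedding vectors, and the function name, gallery structure, and similarity threshold are all hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_face(probe, gallery, threshold=0.6):
    """Return the identity whose stored embedding is most similar to the
    probe photo's embedding, or None if nothing clears the threshold."""
    best_id, best_sim = None, threshold
    for identity, embedding in gallery.items():
        sim = cosine(probe, embedding)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id
```

Real systems add a lot on top of this (quality checks, indexing structures for millions of records, calibrated thresholds), but the basic shape — embed, compare, threshold — is the same, and the threshold choice is exactly where false matches against innocent travelers get traded off against missed matches.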
It’s not automated racism, but it’s similar in scope to China’s rollout. Routine facial recognition for tracking is here, like it or not.
Paul Mozur, writing for the New York Times:
Now, documents and interviews show that the authorities are also using a vast, secret system of advanced facial recognition technology to track and control the Uighurs, a largely Muslim minority. It is the first known example of a government intentionally using artificial intelligence for racial profiling, experts said.
The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review. The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.
One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority
Bill Gates recently said that AI is the new nuclear technology: both promising and dangerous.
Our long-term survival probably requires being good at managing the dangers of increasingly powerful technologies. Not a great start.
The NYPD is using AI chatbots to surface and warn individuals looking to buy sex:
A man texts an online ad for sex.
He gets a text back: “Hi Papi. Would u like to go on a date?” There’s a conversation: what he’d like the woman to do, when and where to meet, how much he will pay.
After a few minutes, the texts stop. It’s not unexpected — women offering commercial sex usually text with several potential buyers at once. So the man, too, usually texts several women at once.
What he doesn’t expect is this: He is texting not with a woman but with a computer program, a piece of artificial intelligence that has taught itself to talk like a sex worker.
A.I. Joins the Campaign Against Sex Trafficking
The article includes an example of an actual chat conversation, and it is worth reading to get a sense of the A.I.’s capabilities.
Ethics tension. It’s worth noting that many AI ethics frameworks emphasize the importance of informing humans when they are interacting with bots (see also the Google Duplex controversy). This, by contrast, is deception by design. How does it fit within an ethical framework? Are we already making trade-offs between effectiveness and transparency?
Surprise! They don’t got’em! Well, sometimes they do. But I feel like the following story happens repeatedly (though maybe not as egregiously!).
From Mr. Levine at Bloomberg. The basic story: a defendant interviews potential experts and hires one, leaving the others behind. One of the left-behind experts then goes and pitches himself to the plaintiff in the case … after leaving the defendant with a written record of how bad the plaintiff’s case is. You. Should. Not. Do. This.
Also, if you do, it’s a common courtesy to tell the plaintiff about it before they hire you. Pro Tip to all my experts out there.
Online companies that are labeled as disrupters may not give you the best deal and – get this – may use your personal data to extract additional value from you! Shocking. Thanks, NYTimes! I guess it was a slow news day.
Yes, I know PG&E has been in the news … a lot. For not-great things. But this feels like a boiling-the-frog issue. It’s all very bad – it’d be great to see an overall data visualization of it – because I actually think PG&E’s behavior is even worse than we realize. I don’t have the time (or ability!) to do this, but it’d be super interesting and is a story that needs to be told.
In 2018, California updated its Rules of Professional Conduct for CA-licensed lawyers. Some highlights:
- Reorganized to follow the ABA Model Rules of Professional Conduct, finally joining the other 49 states that already do so;
- Requires all law firm lawyers to “advocate corrective action to address known harassing or discriminatory conduct by the firm or any of its attorneys or non-attorney personnel” (see Rule 8.4, which defines “knowingly permit”);
- Bans sex with clients, even if consensual, unless the sexual relationship predated the attorney-client relationship (see Rule 1.8.10).
This update is a complete reorganization of the rules, and there is a chart that maps the old rules to the new rules.
Actually New Rules. The following is a (linked!) list of all the actually new rules that do not have an old counterpart.
Rule 1.2: Scope of Representation and Allocation of Authority
Rule 1.8.2: Use of Current Client’s Information
Rule 1.8.11: Imputation of Prohibitions Under Rules 1.8.1 to 1.8.9
Rule 1.10: Imputation of Conflicts of Interest: General Rule
Rule 1.11: Special Conflicts of Interest for Former and Current Government Officials and Employees
Rule 1.12: Former Judge, Arbitrator, Mediator or Other Third-Party Neutral
Rule 2.1: Advisor
Rule 2.4: Lawyer as Third-Party Neutral
Rule 3.2: Delay of Litigation
Rule 3.9: Advocate in Non-adjudicative Proceedings
Rule 4.1: Truthfulness in Statements to Others
Rule 4.3: Communicating with an Unrepresented Person
Rule 4.4: Duties Concerning Inadvertently Transmitted Writings
Rule 6.3: Membership in Legal Services Organizations
These are still mostly common sense, but there are now more explicit rules governing a lot of behavior we already consider to be best practice. See especially Rules 4.1, 4.3, and 4.4.
And you’re welcome. If life were fair, this would have earned me an hour of legal ethics credit.