The NYPD is using AI chatbots to surface and warn individuals looking to buy sex:
A man texts an online ad for sex.
He gets a text back: “Hi Papi. Would u like to go on a date?” There’s a conversation: what he’d like the woman to do, when and where to meet, how much he will pay.
After a few minutes, the texts stop. It’s not unexpected — women offering commercial sex usually text with several potential buyers at once. So the man, too, usually texts several women at once.
What he doesn’t expect is this: He is texting not with a woman but with a computer program, a piece of artificial intelligence that has taught itself to talk like a sex worker.

— A.I. Joins the Campaign Against Sex Trafficking
The article includes an example of an actual chat conversation, and it is worth reading to get a sense of the AI’s capabilities.
Ethics tension. It’s worth noting that many AI ethics frameworks emphasize informing humans when they are interacting with bots (see also the Google Duplex controversy). This system, by contrast, is deception by design. How does that fit within an ethical framework? Are we already trading transparency for effectiveness?