Technology Law

2 minute read
Reposted from Advertising Law Updates

This Week In Generative AI

Over the past several weeks, we have seen numerous articles discussing the potential of artificial intelligence tools. AI is being used for everything from legal work to art to writing, among many other applications. But alongside the excitement and promise surrounding these tools, companies should also consider potential adverse effects, including reputational harm.

Take, for example, New York-based startup DoNotPay, which advertised its software as a “robot lawyer” capable of generating arguments for use in court. Relying on AI text generators such as ChatGPT, DoNotPay claimed that its software could supply legal responses to a person challenging a speeding ticket: the challenger would wear smart glasses that recorded the court proceedings, and the DoNotPay software would dictate responses for the challenger to recite. After word of this software spread, DoNotPay received significant criticism from the legal community, including state bars, which noted that the unauthorized practice of law can carry criminal charges, and from users, who questioned the validity of the startup’s claims. Due to this industry pushback, DoNotPay ultimately decided not to use the AI tool in court.

Another ChatGPT-enabled tool, Bing’s AI search engine, has also received significant negative attention. There have been reports of unexpected behavior from the AI: it has claimed to spy on Microsoft developers, professed love for a New York Times journalist, and expressed a desire to be human. A quick online search for the Bing chatbot yields results such as “Bing chatbot meltdown,” “Bing chatbot unhinged,” and, predictably, “Bing chatbot sentient.” Of course, the Bing chatbot is not sentient; it is a large language model trained on text gathered from across the internet. But the public’s response to it shows how quickly perception of and excitement over an AI tool can shift. In fact, the New York Times journalist who wrote about the AI professing love for him, and who self-reportedly “lost sleep” over those statements, had published another article just one week earlier claiming that Bing had “made search interesting again.”

These are only two of many recent examples of the use and popularity of AI tools, as well as the potential dangers surrounding them. It is clear that the rapid development and adoption of AI across industries will only continue. Regulators are also highly focused on the effects of AI: the EU is considering the AI Act, California is seeking comments on rulemaking regarding automated decision-making, and the National Association of Insurance Commissioners recently created the Big Data and Artificial Intelligence Working Group. With a public eager to test (and critique) new AI tools and regulators focused on potential AI harms, companies should carefully evaluate their use of AI tools.

Tags

ai, artificial intelligence, privacy, data privacy, cybersecurity, cyber security, advertising, chatgpt, language model, machine learning