Artificial intelligence tools are rapidly evolving from passive, user-prompted systems into autonomous technologies capable of planning, deciding, and acting with limited human oversight—what is known as agentic AI. As agentic AI proliferates, so will the legal questions around authority and liability—and the litigation risks—arising from its use, especially where agentic AI mediates interactions between consumers and digital technology companies.
This post is the first in a series focused on privacy, advertising, and consumer protection, covering what you need to know about agentic AI and how to identify and mitigate the legal risks it creates. In this post, we start with the technical and legal scaffolding: (1) what agentic AI is and the various ways it is used; (2) the key legal issues these systems present; and (3) an overview of the theories of liability plaintiffs are beginning to invoke in agentic AI-related litigation.
I. The Technology
Agentic AI is a slippery, often hyped term. In general, an AI agent is a system that: (i) interprets open-ended instructions that define a goal, (ii) plans how to achieve that goal by breaking it into steps, and (iii) calls other tools or systems to carry out those steps.
It is not a single technology. A browser extension that monitors prices is fundamentally different from an enterprise personalization engine that dynamically rewrites product descriptions. Some may carry out hundreds of steps, others only one.
Under the hood, most agents consist of a core model—the “brain”—such as an LLM; a tooling layer—the “arms”—that defines what tools, like APIs, the agent can use; and a “memory” of past interactions, user profiles, and environment state.
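To make that anatomy concrete, here is a minimal, illustrative sketch of an agent loop in Python. The call_llm function and the two tools are hypothetical stand-ins, not any particular vendor's API; real agent frameworks are considerably more elaborate.

```python
# A minimal, illustrative agent loop: a core model ("brain"), a tool
# registry ("arms"), and a memory of past interactions. The call_llm
# function is a hypothetical stand-in for whatever model the agent uses.
from typing import Callable

def call_llm(prompt: str) -> str:
    # Hypothetical: a real model would return either a tool call such as
    # "search: cheap flights to Austin" or a final answer such as
    # "done: booked flight ABC123".
    return "done: no action taken"

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"results for {query}",
    "purchase": lambda item: f"purchased {item}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory: list[str] = []  # past interactions and environment state
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nHistory: {memory}\nNext step?"
        decision = call_llm(prompt)           # (i) interpret and (ii) plan
        action, _, argument = decision.partition(": ")
        if action == "done":
            return argument
        result = TOOLS[action](argument)      # (iii) call a tool to act
        memory.append(f"{action}({argument}) -> {result}")
    return "stopped: step limit reached"
```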
Agents for Consumers
AI agents can act on behalf of individuals across websites, apps, and services to perform everyday tasks: book travel, compare prices, manage subscriptions, schedule appointments, and complete purchases. They rely heavily on personal data, user preferences, browsing history, and contextual signals—the more data, the more autonomously they can act.
Browser-based agents, like shopping features or coupon extensions, read the webpage, inject code, and programmatically interact with UI elements—automating what a human would do manually, at machine speed.
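As a rough illustration, a browser-based shopping agent built on a browser-automation library such as Playwright might look something like the sketch below. The URL, the CSS selectors, and the price rule are hypothetical placeholders, not a real site or product.

```python
# Illustrative sketch of a browser-based shopping agent using Playwright.
# It reads the page, fills in a coupon, and clicks through checkout just as
# a human would, only programmatically and at machine speed. The URL and
# all selectors are hypothetical placeholders.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/product/123")          # hypothetical page
    price = float(page.inner_text(".price").strip("$"))   # read the webpage
    if price < 50.00:                                      # a simple rule
        page.fill("#coupon-code", "SAVE10")                # inject input
        page.click("#apply-coupon")                        # interact with UI
        page.click("#checkout")                            # complete purchase
    browser.close()
```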
API-first apps can also function as agents. They are usually event-driven, with discretely defined functions. Examples include finance tools that cancel unused subscriptions and travel assistants that rebook cheaper flights; these connect via APIs to the consumer’s accounts and services.
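A hedged sketch of that event-driven pattern might look like the following; the data structures and the staleness rule are invented for illustration, and a real tool would connect to the consumer’s accounts through authenticated APIs.

```python
# Illustrative sketch of an event-driven, API-first consumer agent that
# proposes cancelling unused subscriptions when a new statement arrives.
# The Subscription type and the 90-day rule are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Subscription:
    merchant: str
    last_used: date
    monthly_cost: float

def handle_statement_event(subscriptions: list[Subscription]) -> list[str]:
    """Triggered when a new statement event arrives; returns proposed actions."""
    stale_cutoff = date.today() - timedelta(days=90)   # made-up staleness rule
    actions = []
    for sub in subscriptions:
        if sub.last_used < stale_cutoff:
            # A real agent would call the merchant's cancellation API here.
            actions.append(f"cancel {sub.merchant} (${sub.monthly_cost}/mo, unused)")
    return actions

# Example: the agent reacts to a new statement and proposes one cancellation.
print(handle_statement_event([
    Subscription("StreamCo", date(2024, 1, 5), 12.99),
    Subscription("NewsSite", date.today(), 8.00),
]))
```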
A key distinction is how much autonomy the user delegates to them: some agents require explicit confirmation before acting; others act autonomously once configured. That distinction matters enormously for questions of authority and liability.
Agents for Businesses
Agentic AI is also playing a growing role in how brands and websites interact with consumers. It functions as a decision layer that dynamically tailors what users see, how they’re messaged, and what actions are suggested—in real time, based on behavioral signals and optimization goals.
Personalization engines ingest real-time events (pages viewed, cart contents) and historical profiles (purchase history, segments), then score users for intent or churn risk and dynamically choose content, offers, or UI treatments. For example: what articles to highlight, what tone to use in descriptions, and whether to show or hide discount codes.
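A simplified sketch of that decision step might look like the following; the scoring weights, thresholds, and treatment names are invented for illustration and do not reflect any particular vendor’s engine.

```python
# Illustrative personalization decision step: combine real-time events with a
# historical profile, score the user, and pick content, offers, or UI
# treatments. All weights and thresholds are hypothetical.
def score_intent(events: dict, profile: dict) -> float:
    score = 0.0
    score += 0.4 * min(events.get("pages_viewed", 0), 10) / 10
    score += 0.4 * (1.0 if events.get("cart_items", 0) > 0 else 0.0)
    score += 0.2 * min(profile.get("past_purchases", 0), 5) / 5
    return score

def choose_treatment(events: dict, profile: dict) -> dict:
    # Score the user, then dynamically choose what to show or hide.
    score = score_intent(events, profile)
    return {
        "show_discount_code": score < 0.5,   # nudge users scored as low intent
        "highlight_articles": profile.get("segment") == "researcher",
        "description_tone": "urgent" if score > 0.8 else "neutral",
    }

# Example: a mid-intent shopper with one item in the cart.
print(choose_treatment({"pages_viewed": 6, "cart_items": 1}, {"past_purchases": 2}))
```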
Agent-based CRM and marketing integrations continuously pull data from platforms like Salesforce or Google Ads and decide when and how to trigger emails, ads, or on-site prompts. They may use LLMs to generate personalized message content and control what overlays appear (e.g., chat prompts for users predicted to need help, or offers for price-sensitive users).
Embedded chatbots and copilots capture free-text inputs, clicks, and other signals to decide between conversational flows, route interactions, or flag potential fraud.
Autonomous optimization agents make structural UX decisions with limited or high-level human oversight—for example, simplifying checkout for impatient users, adding friction for risky ones, or reordering navigation to emphasize upsells.
These systems vary in critical ways: (i) what data they collect, (ii) how they profile users, (iii) how transparent they are, and (iv) what data they share.
II. Who is Responsible for the Actions of an AI Agent?
One of the main benefits of agentic AI is its ability to act autonomously—the lack of human involvement can lead to increased productivity and efficiency. But this autonomy also complicates important questions of legal responsibility, especially when an AI agent causes harm or makes a mistake. Responsibility may depend on the relationships among the parties involved, the nature of the harm, and which existing legal framework (agency law, product liability, contract law, or statutory allocation) a court or regulator applies.
Agency law traditionally governs when one party (the agent) acts on behalf of another (the principal) with express, implied, or apparent authority. In the context of agentic AI, if a consumer agent purchases a product or a business agent sends an offer, a court might ask whether the human “behind” the AI system expressly authorized that action, whether a third party reasonably believed the action was authorized, or whether the principal ratified it after the fact.
Product liability frameworks may apply when the AI system itself is characterized as a defective product—in particular in situations where the AI agent makes an erroneous decision, fails to “perform” as promised, or causes harm. Under this framework, liability may be attributed to the original developer of the AI system, the actual deployer of the AI agent, or another intermediary, depending on whether the harm stemmed from a defect in the AI agent itself (such as a training data issue), or a failure to warn of potential risks from using the agent.
Contractual allocation of risk is another possible framework. Terms of service, vendor contracts, or API agreements may disclaim or allocate liability, require indemnification, or prohibit certain uses of the AI agent. But contracts have important limits: they cannot always override statutory obligations or eliminate the basic tort duties owed to third parties.
Statutory allocations of responsibility appear in some sector-specific laws, such as privacy statutes like the CCPA. As more states pass AI-specific legislation, this area will become increasingly relevant when considering who is responsible for the actions of an AI agent.
Because the actions of an autonomous software agent do not fit neatly into any single legal framework, there is a great deal of uncertainty over how courts and regulators will analyze liability. We will explore these issues further in this series and as the law develops.
III. How are Plaintiffs Currently Challenging Agentic AI?
Although agentic AI remains a rapidly evolving technology, plaintiffs are already bringing lawsuits targeting its deployment. The technology itself is novel, but the legal claims are grounded in well-established frameworks—privacy statutes, consumer protection laws, contract claims, and tort principles. The challenge for plaintiffs is applying these established frameworks to systems that act autonomously and often in ways that are not fully understood.
Privacy Statutes (CIPA, ECPA, VPPA)
One of the primary avenues plaintiffs are exploring to challenge autonomous AI systems is the use of wiretapping and privacy-protection statutes originally enacted long before today’s AI technologies. In particular, the California Invasion of Privacy Act, or “CIPA”—which prohibits the interception or recording of private communications without the consent of all parties—has become a central tool in privacy-related lawsuits involving agentic AI. We have blogged extensively about CIPA and other wiretapping statutes—including recently about how the use of AI recording/notetaking tools has triggered a wave of lawsuits under wiretapping statutes.
Plaintiffs have filed numerous lawsuits alleging that agentic AI systems intercept web traffic or inject themselves into communications between website users and websites (or on phone calls) without proper consent, in violation of CIPA and/or a similar federal law, the Electronic Communications Privacy Act (“ECPA”).
The federal Video Privacy Protection Act (“VPPA”) (which we also have blogged about extensively) is another area where we are likely to see future developments. Business agents that track a person’s online video viewing behavior to personalize content may face claims under the VPPA if the agent discloses that viewing data to third parties.
Computer Fraud Statutes
Claims under state and federal computer fraud and abuse statutes—such as the federal Computer Fraud and Abuse Act (“CFAA”)—arise when agentic AI systems access computer systems in ways that allegedly exceed authorization. Consumer AI agents that scrape websites, auto-fill forms, or bypass paywalls have already been challenged by platforms alleging unauthorized access.
Additional Claims
Other current or potential litigation hooks include:
Unfair and deceptive acts and practices (UDAP) and false advertising claims—AI agents may generate misleading content, deploy dark patterns, or make unsubstantiated claims;
Tort claims such as negligence, fraud, and misrepresentation—AI agents may cause harm through design defects, misuse, or affirmative misstatements;
Civil rights claims—AI agents may take biased and/or discriminatory actions; and
Contract claims—AI agents may act beyond delegated authority or fail to perform as promised.
We will explore these litigation hooks in more depth throughout this series.
