Not Even Bots Are Safe From California Lawmakers
In October 2018, California passed the Bolstering Online Transparency Act (BOT Act), which prohibits online bots from concealing their artificial identity in order to appear as human users and deceive California residents in matters involving sales or political elections. The law took effect on July 1, 2019. Specifically, it states:
It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot.
—Bolstering Online Transparency Act
The law exempts service providers of online platforms. An individual or organization using a bot will not be liable for misleading another user if it clearly and conspicuously discloses the bot’s identity. The notice must also be “reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.” A simple disclaimer at the bottom of a post stating that a bot generated the content should suffice. While the law does not provide a private right of action, violators can face fines of up to $2,500 per violation under California’s Unfair Competition Law.
Why Did California Pass The Online Bot Law?
Social media companies, many of which are headquartered in California, commonly use bots to generate and distribute content. The geographic proximity of these companies brought the issue of deceptive bot behavior to the forefront of the legislature’s attention. The bill’s author, Senator Robert Hertzberg (D-Los Angeles), stressed the importance of identity disclosure, explaining:
On the Internet, where the appearance of a mass audience can be monetized, it is critical to protect users by providing the tools to understand if information is coming from a human or a bot account disguised as [a human]. As long as bots are properly identified to let users know that they are a computer generated or automated account, users can at least be aware of [what] they are interacting with and judge the content accordingly.
—Senator Robert Hertzberg
Protecting children, who use social media sites often and are more vulnerable to being duped by these bots, was also a key consideration in passing this law.
However, keep in mind that this law does not apply solely to social media sites. Any organization deploying bots that could reach California residents must comply, and should ensure that any third-party contractors or vendors are aware of the law and comply as well. Since online content generally reaches users in more than one state, organizations should stay informed about whether other states pass laws on this issue. That said, unlike other California data-focused laws (such as the CCPA and the IoT statute), this law is less likely to inspire copycats, since the issue is not as nationally or globally pressing as broader data privacy concerns.
The Limitations of Online Bot Deception
California’s BOT Act has two narrow purposes: 1) prohibiting bots from engaging in the deceptive conduct described in the statute, and 2) removing liability when a bot’s artificial identity is disclosed. Keep in mind, however, that this does not preclude liability under other laws if a bot distributes fraudulent information that influences sales or sways election votes. The BOT Act only removes liability for misleading users about the bot’s identity.
Components Too Difficult to Enforce
Several components were also dropped from the final version of this law. For example, the legislature removed provisions that would have required organizations to take certain independent actions, including: 1) enabling users to report violating bots, 2) investigating and determining within 72 hours whether to act upon user reports, and 3) providing the California Attorney General with details of user reports and internal investigations. These requirements would have placed a heavy burden on platforms and would have been difficult to regulate and monitor.
The Gray Area of Bot- vs. Human-Generated Content
Lastly, First Amendment concerns influenced the narrow language of this law. Since a user behind the bot initially creates the content, the content arguably has some free speech protections. Regulating content to prevent deception is new territory because it can be hard to determine what is considered an opinion versus what is an outright fraudulent statement. As such, the legislature steered clear of potentially implicating First Amendment issues under the BOT Act by requiring only identity disclosure instead of intensive content moderation and monitoring.

If you found this blog informative, you may enjoy reading California’s Proposed Ethics Rules Are Setting The Stage For Legal Technology or The Epiq Angle Blog.