Society needs the Artificial Intelligence Data Protection Act now

On December 31, 2015, I published my original call to arms for society's rational regulation of artificial intelligence before it is too late. I explained why someone who is generally against solving problems through regulation would nonetheless propose precisely that mechanism to hedge the threats created by AI, and announced my proposed legislation: The Artificial Intelligence Data Protection Act (AIDPA).

Since 2015, we have witnessed AI's rapid national and international growth and adoption, a trajectory that will soon impact every phase of mankind's life, from birth to death, sex to religion, politics to war, education to emotion, jobs to unemployment.

Three of many recent developments confirm why now is the time for the AIDPA: (1) a McKinsey study from late 2017 determined that up to 800 million workers worldwide may lose their jobs to AI by 2030 and that half of today's work functions could be automated by 2055, while other recent studies suggest as many as 47 percent of U.S. jobs could be threatened by automation or AI over the next few decades; (2) AI has now created IP with little or no human involvement and continues to be programmed, tested and used to do so (see my Twitter for a library of media reports on AI-created IP); and (3) tech giants and regulators are starting to acknowledge that the industries that create and use AI should be at least partially responsible for minimizing the impact on AI-displaced workers.

Now, not later, society must address AI's legal, economic and social implications with regard to IP and employment. Current legislation does not adequately account for the new challenges, threats and needs presented by AI's impact. The question is not "if" but "when" society will regulate AI. Rather than leave the job solely to politicians, industry should lead the way through the AIDPA. The urgency to finalize and enact the AIDPA cannot be overstated.

This article examines the AIDPA's twin focuses (AI's threats to intellectual property rights and to the labor force) and presents a proposed framework for addressing them. The AIDPA is intended to give industry a voice in regulating AI while promoting its safe, secure and ethical use. The United States must lead the way in regulating AI, and leaders in industry, technology and ethics should join together to finalize and enact the AIDPA, the first and most important legislation of its kind.

Intellectual property considerations

The AIDPA focuses on ownership of IP and on the security risks resulting from machine learning that exceeds its initial programming and/or that, by virtue of its programming, becomes capable of autonomous, human-like reasoning. For a host of legal and technical reasons, current IP laws cannot adequately account for IP created by AI working independently of human involvement or oversight (music, art, medical techniques, processes to communicate, processes to kill, etc.) or exceeding its initial programming. AI also will acquire vast amounts of confidential information through its ability to collect, process, analyze and utilize mass amounts of data.

Chief AI officer

The AIDPA will require covered entities (see below) to employ a “chief AI officer,” who, among other things, is responsible for monitoring AI within the workplace, creating company-wide plans for AI-impacted employment, implementing the AIDPA regulations, enacting company-wide safeguards that monitor for and respond to malicious AI activity and accounting for AI-created IP.

Governing body

The AIDPA will also establish a governing body (the "AI Board" or "AIB"), staffed with industry, technical, ethical and legal experts, designed to bring specialized expertise and consistency to regulating AI in industry, encourage industry participation, promulgate safety and ethical regulations and adjudicate AI-related IP disputes. The AIB will also ensure that covered entities, through their chief AI officers, determine if and when certain AI should be outlawed, constrained in specific ways and/or "terminated" and, where necessary, will enforce the AIDPA's mandates by making these ultimate determinations.

Industry also will have annual AI-related worker displacement reporting requirements, and the AIB will be responsible for analyzing and reporting on AI's displacement impact on the labor market. Finally, the AIB will administer and adjudicate disputes related to the Worker Realignment Program, which will be funded under the AIDPA.

Ownership, infringement and misappropriation

AI-created IP raises many questions of ownership and of liability for infringement and misappropriation. Under current IP laws, ownership (and standing to sue) is generally restricted to humans. In addressing and defining IP rights for non-human-created works, the AIDPA will allow, under certain circumstances, for IP to be owned by the AI that created it (and, in certain circumstances, by the entity or individual who "owns" the AI machine); it will also set the parameters for human ownership of AI-created IP and, as noted above, determine what AI is off-limits and when AI ownership, or even the AI itself, must be restrained or terminated.

With regard to infringement and misappropriation, existing law provides that a person or entity is generally liable for infringement regardless of their knowledge of the infringement. The AIDPA will limit the liability of corporations and humans for infringement to cases where there is knowledge of and/or active participation in the infringement.

Employment considerations

The AIDPA currently defines covered entities as government contractors and organizations with 300 or more employees or annual revenue in excess of $30 million that utilize AI or develop or deploy AI-created IP in a manner that results in: (i) layoffs of at least 75 workers during a 30-day period on account of the implementation and/or use of AI; (ii) an AI facility opening, defined as a covered employer establishing a new brick-and-mortar facility, operation (e.g., a new logistics hub with autonomous trucks and no human drivers) and/or line of business (e.g., a call center staffed solely with AI machines) that uses AI machines to perform job functions historically performed by 40 or more humans; or (iii) an AI Readjustment, defined as 30 or more workers experiencing a reduction of 50 percent or more in their working hours or the loss of more than 75 percent of their job functions, either of which negatively alters the amount of their compensable time.
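To make these draft thresholds concrete, the sketch below shows how a covered entity might test whether a triggering event has occurred. It is a minimal illustration only: the function names and data structures are hypothetical, and the numbers simply restate the draft figures above, which remain subject to change.

```python
# Illustrative only: encodes the draft AIDPA coverage and triggering-event
# thresholds described above. Names and structures are hypothetical, not statutory.
from dataclasses import dataclass


@dataclass
class Employer:
    employee_count: int
    annual_revenue_usd: float
    is_government_contractor: bool


@dataclass
class AIImpact:
    ai_layoffs_past_30_days: int      # layoffs attributable to AI implementation/use
    ai_facility_displaced_roles: int  # job functions a new AI facility/operation replaces
    ai_readjusted_workers: int        # workers losing 50%+ of hours or 75%+ of job functions


def is_covered_entity(e: Employer) -> bool:
    # Draft definition: government contractors, or organizations with 300+ employees
    # or more than $30M in annual revenue, that use AI or AI-created IP.
    return (e.is_government_contractor
            or e.employee_count >= 300
            or e.annual_revenue_usd > 30_000_000)


def triggering_events(impact: AIImpact) -> list[str]:
    events = []
    if impact.ai_layoffs_past_30_days >= 75:
        events.append("AI-related layoffs (75+ workers within 30 days)")
    if impact.ai_facility_displaced_roles >= 40:
        events.append("AI facility opening (40+ displaced job functions)")
    if impact.ai_readjusted_workers >= 30:
        events.append("AI Readjustment (30+ workers with reduced hours or functions)")
    return events
```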

When a triggering event occurs, the AIDPA imposes certain notice requirements. For example, in the case of layoffs, the AIDPA requires covered entities to provide at least 60 days' notice to the impacted workers, a period that is extended to 180 days for employees who enter and continue approved educational and/or employment retraining through the AIDPA's Worker Realignment Program. Impacted workers also will be eligible for certain supplemental payments funded through the AIDPA for specified periods. The AIDPA also requires covered entities to submit annual reports on their use of AI and its statistical impact on the labor market.
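The notice rule itself reduces to a simple conditional, restated in code below. The 60- and 180-day periods come from the proposal; the function is purely illustrative.

```python
# Illustrative restatement of the draft AIDPA notice rule above.
def required_notice_days(enrolled_in_worker_realignment: bool) -> int:
    """Days of advance notice a covered entity owes an AI-displaced worker."""
    return 180 if enrolled_in_worker_realignment else 60
```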

The dirty “T” word

Like it or not, the undeniable scope and societal impact of AI-caused worker displacement, coupled with the massive reduction in payroll expense for covered entities and the resulting loss in government revenue, mandate that covered entities play a substantial role in funding society's efforts to respond to and retrain displaced workers.

If one assumes that mass worker displacement, left unchecked, has the potential to cause serious societal disruption and that AI taxation by politicians is inevitable, then this is not a provocative proposition. It is simply society being intellectually honest with itself. In 2017, Bill Gates proposed a tax on companies using AI, which could be used to finance programs for the elderly and others with unmet needs. That same year, San Francisco Supervisor Jane Kim created a task force to explore an AI tax to fund education. And in Europe, Mady Delvaux, a member of the European Parliament, proposed a similar framework as part of an unsuccessful effort to enact AI legislation.

The question for industry is simple: Should the AI taxation framework be left solely to politicians, or should the industry that will create and deploy AI play an important role in its formulation? The AIDPA answers that question by including a taxation component designed to secure the funds society will need to adjust to AI's impact.

While the details are still being studied and finalized, the AIDPA favors a tripartite approach for covered entities that is calculated based on (i) a minimum AI "flat" tax; plus percentages of (ii) human labor cost savings; and (iii) profits generated by AI. The AIDPA provides that the revenue generated from the AI tax shall be used solely for two purposes: (i) retraining workers displaced by AI through the Worker Realignment Program and (ii) basic supplemental income payments for AI-displaced workers for a set period.
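As a rough illustration of how the tripartite calculation might work, here is a minimal sketch. The flat amount and percentage rates below are placeholders, since the actual rates are still being studied; only the three-part structure comes from the proposal.

```python
# Illustrative sketch of the tripartite AI tax described above.
# The flat amount and rates are placeholders, not proposed figures.
def aidpa_ai_tax(labor_cost_savings: float,
                 ai_generated_profit: float,
                 flat_tax: float = 50_000.0,    # hypothetical minimum AI "flat" tax
                 savings_rate: float = 0.05,    # hypothetical share of human labor cost savings
                 profit_rate: float = 0.02) -> float:  # hypothetical share of AI-generated profits
    """Return a covered entity's annual AI tax under the draft tripartite formula."""
    return flat_tax + savings_rate * labor_cost_savings + profit_rate * ai_generated_profit


# Example: an employer saving $10M in payroll and earning $20M in AI-driven profit
# would owe $50,000 + $500,000 + $400,000 = $950,000 under these placeholder rates.
print(aidpa_ai_tax(10_000_000, 20_000_000))  # 950000.0
```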

Questions remain regarding how AI in the workplace should be regulated, but now is the time for lawyers, industry, academia, regulators and politicians to come together to finalize and enact the AIDPA.