One of the roles of a project manager is to extract and analyze large amounts of information and data, a task that at times can feel like trying to drink from a fire hose. AI is useful for synthesizing that data in a hurry. It can spot patterns in even apparently random information in seconds and suggest decisions based on what it extracts. It can highlight data, identify clashes across multiple drawings, and aggregate information from multiple document versions, flagging the need for an RFI or a change order request, including by cross-checking against historical data from other projects. It can read through daily logs and subcontractor reports to track progress and pull out the common threads on a given topic of concern, such as scheduling problems, cost projections, and even safety hazards, early enough to avert a crisis.
But distilling information into useful data subsets ultimately requires a human decision on what is and is not useful. As with any technology, AI requires a trained hand at the switch. The more sophisticated its algorithms and platforms become, the greater the temptation to rely on AI for what I will call “ultimate” decisions. The mindset that the machine prioritizes better than the human is, in effect, a relinquishment of control to the machine. And when something goes wrong, from a legal perspective the buck needs to stop at the human’s feet, not on his laptop. If a construction manager gets sued over an AI-generated decision gone wrong, he is not likely to avert liability by blaming the AI, even if the failure was in the algorithm rather than in his programming. Reliance on that AI, even if facially reasonable, won’t necessarily defeat a claim of negligence.
Utah has just become the first state to enact legislation eliminating the defense of reliance on AI in consumer protection act claims, providing “It is not a defense to the violation of any statute administered and enforced by the division under Section 13-2-1 that generative artificial intelligence: (1) made the violative statement; (2) undertook the violative act; or (3) was used in furtherance of the violation.” A bill currently in the Connecticut Legislature would provide that it is not a defense to any civil or administrative claim or action, whether in tort, contract or under the consumer protection act, that an AI system committed or was used in furthering the act or omission that the claim or action is based on. This is undoubtedly the wave of the future.
Even without legislation, courts will eventually be called upon to elucidate the general standard of care for relying on AI. That doesn’t mean contracting parties can’t agree to their own standards of care governing AI-related liability as between themselves right now. After all, AI platform sellers all have disclaimers in their fine print to protect themselves from potential indemnity claims by contractors who are sued for AI-generated snafus. Why shouldn’t contractors try to do the same? Particularly where AI usage is encouraged or even required by owners, this is a fruitful area to explore.
From a contractor’s perspective, the more important question is whether insurance will be available to cover AI-related liabilities. A typical commercial general liability (CGL) policy may cover liability for personal injury or property damage occasioned by negligent use of AI, but it is unlikely to cover economic losses arising from the same negligence. A typical errors and omissions (E&O, or professional liability) policy has a better chance of covering such economic losses. Even so, traditional insurance policies may not fully address the specific risks associated with AI, leaving coverage gaps.
In the coming years I expect the insurance industry to tailor policies and riders to offer, or exclude, coverage for various AI-related liabilities. But don’t wait to call your insurance agent.