
The construction industry has begun to embrace Artificial Intelligence (AI) tools, and the effects are beginning to materialize. In a September 2025 analysis of survey data from over 2,200 construction professionals globally, the Royal Institution of Chartered Surveyors (RICS) reported on the construction industry’s adoption of, and risk tolerance for, AI and recommended actions to accelerate progress and ensure responsible integration of these technologies. Based on the data, 56% of respondents planned to increase AI spending in the next year. Many felt AI adoption would significantly improve scheduling, resource allocation, contract review, and cost and risk management. Surprisingly, 74% of respondents described themselves as either “not prepared” or “minimally prepared” to implement AI solutions in their workflows. Still, with the explosion of AI tools and the seemingly positive results flowing from them, the question is no longer whether construction firms will embrace AI, but how quickly the 45% of firms with “zero implementation” will be forced to catch up or lose competitive ground.

AI agents are already reviewing and editing contracts, processing RFIs, optimizing equipment fleets, and analyzing change orders with measurable efficiency gains. Industry projections suggest that within five years, AI adoption will move from today’s 12% regular-use rate to majority implementation across medium and large construction firms, transforming everything from safety compliance to schedule optimization. With the adoption of these systems, of course, come challenges. There are plenty of headlines about the impact of AI, but far less attention is paid to the quiet mistakes showing up in its output, mistakes that lead to RFIs, change orders, and scope disputes. Almost no one is asking the hard question: who’s legally responsible when “smart” tools make dumb, or legally actionable, mistakes?

Real World Risk from AI “Design” Tools

Problems caused by AI tools are not theoretical. As in other professional fields (accounting, law), AI “mistakes” are happening now in construction, and they fall into distinct categories: “unbuildable imagery” that looks stunning but ignores physics, “hallucinated specifications” that cite standards and tests that do not exist, and “copyright infringement” that can expose firms to serious legal liability. Each represents a different flavor of the same problem: AI is extraordinarily good at producing “plausible” information, not necessarily “accurate” or “legally sound” information.

Generative image tools like Midjourney, DALL-E, Nano, and Stable Diffusion have become standard equipment in many design studios. These tools are phenomenal for fast ideation, mood boards, and early visualization. The problem is simple: they optimize for visual impact, not constructability. Consider the cautionary tale of an AI-designed copper sink. Architect Magazine documented a collaboration between designer Leslie Carothers, Midjourney, and heritage metalworkers at Thompson Traders, one of the first attempts to bring a generative-AI-designed product to market. The designer spent hours refining prompts to achieve the perfect aesthetic. When the craftsmen tried to fabricate it, they immediately encountered problems the AI never considered: the design included no drain, the bowl was too shallow to function, and producing a workable prototype took considerable time. The artisans had to translate the “fantasy sink” into a working product, essentially redesigning it from scratch while trying to preserve the AI’s aesthetic intent.

The point is that when an engineer and contractor finally examine the details of AI designs, reality intrudes. Either the design gets substantially modified, disappointing an owner who was sold on the original image, or the team attempts heroic (and expensive) engineering to “make it work.” Both scenarios are the foundation for disputes. The first creates expectation gaps and scope arguments. The second creates defect claims and cost overruns.

The Phantom Code Problem: When AI Invents Standards

Long, boilerplate-heavy project specifications are natural targets for AI assistance. Design firms are increasingly using large language models (LLMs), like ChatGPT, Claude, and others, to update specification sections, harmonize requirements across projects, or “refresh to current ASTM standards.” Used carefully, this can save time. Used carelessly, it creates “phantom codes,” plausible-sounding standard designations that do not actually exist.

Engineering librarians at major universities are already warning their students about this exact problem. Washington University in St. Louis explicitly cautions in its mechanical engineering research guides that tools like ChatGPT “might hallucinate the perfect standard,” inventing a convincing designation that cannot be found in any standards catalog. Medical and safety researchers studying LLMs report similar findings: these models are useful for brainstorming but unreliable for precise, reference-level information like standards and regulations. They will confidently generate fabricated citations rather than admit uncertainty. In fact, ASTM International is so concerned about this possibility that it has issued an AI policy prohibiting users from entering its standards into an AI tool, with the offending engineer risking loss of their license to access ASTM’s libraries.

Suppose a spec writer asks an LLM to update a concrete section to current ASTM and ACI requirements, for example to include freeze-thaw durability. The model could produce clean, professional text that includes a real ASTM test method, a blended requirement mixing U.S. and European standards, and a completely invented, non-existent test method. The phantom code sounds plausible, follows ASTM’s numbering convention, and no one catches it during QA review. After the bid documents are issued, the testing lab cannot locate the method. An RFI reveals the problem. Now the project team is facing a contract clarification or change order, schedule delays while the parties negotiate equivalent testing, potential claims for wasted cost and time, and a legitimate professional liability question: who owned the duty to verify cited standards?
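One lightweight QA guardrail against phantom codes is a simple script that extracts every standard-like designation from a specification and flags anything a human reviewer has not yet confirmed against the official catalog. The sketch below is illustrative only: the function name, the example text, and the simplified ASTM-style numbering pattern are assumptions, and the script flags citations for human review rather than validating them.

```python
import re

# Rough pattern for ASTM-style designations, e.g. "ASTM C666" or "ASTM C1202-19".
# This is a simplified approximation of ASTM numbering, not an official grammar.
ASTM_PATTERN = re.compile(r"ASTM\s+[A-G]\s?\d{1,4}(?:/[A-G]\s?\d{1,4}M?)?(?:-\d{2}[a-z]?)?")

def flag_citations(spec_text, verified_designations):
    """Return designations found in spec_text that are NOT in the verified set.

    verified_designations is a set the reviewer builds from official catalog
    lookups; this script only flags candidates, it never validates them.
    """
    found = {m.group(0).strip() for m in ASTM_PATTERN.finditer(spec_text)}
    return sorted(found - verified_designations)

spec = ("Concrete shall meet ASTM C666 freeze-thaw durability and "
        "ASTM C9999 rapid aging requirements.")  # hypothetical spec language
# The reviewer has confirmed only C666 exists in the official catalog
unverified = flag_citations(spec, {"ASTM C666"})
print(unverified)  # ["ASTM C9999"] -> must be checked by a human
```

A script like this cannot tell a real standard from an invented one; it only guarantees that every citation passes in front of a person who checks it against the publisher’s catalog, which is the QA step that matters.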

The Copyright Minefield: Training Data You Cannot Control

AI models are trained on vast datasets scraped from the internet, including copyrighted images, text, and technical documents. When these tools generate output, they may reproduce substantial portions of copyrighted work, often without any way for the user to detect it. Several major copyright lawsuits against AI companies are currently in litigation, with plaintiffs arguing that training AI on copyrighted works without permission constitutes infringement, and that the AI’s output can constitute derivative works or direct copying.

For construction professionals, this creates exposure in several ways. When using AI to generate concept renderings, the designer has no reliable way to know whether the output incorporates copyrighted architectural photographs, building designs, or artistic works. If marketing materials or presentations include AI-generated images that are substantially similar to copyrighted works, your firm could face infringement claims. If specs contain text that is substantially similar to copyrighted master specifications or proprietary technical guides, there may be liability even if not knowingly copied. Finally, AI-generated project reports, safety analyses, or technical documentation may incorporate copyrighted text from industry publications, training materials, or other protected sources.

The Copyright Act provides for statutory damages of up to $150,000 per work willfully infringed. For a specification document or marketing package that incorporates multiple protected works, that exposure multiplies quickly.

Using Smart Tools Smartly 

The Associated General Contractors (AGC) has noted that contractors using AI tools should establish clear policies around verification and review of AI-generated content. While the AGC hasn’t issued comprehensive AI guidance yet, its standard position on professional responsibility applies: contractors are responsible for the accuracy and legality of documents they submit, regardless of how those documents were created.

The solution is not to reject these tools; it is to use them appropriately. Several guardrails are important.

Label AI outputs clearly. Concept images should be marked “conceptual only, subject to engineering and code compliance.” Do not let AI renderings migrate into contract documents without full professional review.

Verify every citation. If an AI tool includes a standard number, test method, code reference, or technical requirement, verify it against official sources. Make this a mandatory QA step, like checking calculations.

Implement copyright review. Before using AI-generated images or text in external materials, have someone review them for potential similarity to known protected works. Consider using reverse image search for AI-generated graphics, and document your review process.

Harden your spec process. Assign a human specification authority, in-house or retained, who validates every AI-assisted edit against current standards. Treat AI output like a first draft from an intern, not finished work product.

Finally, update your policies and contracts. Establish internal protocols for AI use. Consider addressing AI tools in professional services agreements, clarifying that all AI output is subject to professional review and approval. On the contracting side, discuss whether AI-generated schedules are informational or binding.

The Bottom Line: Who’s in Responsible Charge?

The construction industry has always adopted new technologies, such as CAD, BIM, drones, and laser scanning. Each brought risks that professionals learned to manage through training, protocols, and professional judgment. AI is no different in principle, but it’s different in kind. Previous technologies augmented human capability; they made us faster, more precise, more coordinated. AI can substitute human judgment if we let it, generating plausible content that bypasses our critical thinking.

The quiet failures all share a common cause: treating AI output as finished work product rather than raw material requiring professional judgment. Used thoughtfully, AI can help construction professionals work faster, explore more options, and catch errors earlier. But the moment we stop asking “Is this right?” and start assuming the AI knows better than we do, we have created a liability that no algorithm can optimize away. The smart firms are asking how to use AI while maintaining professional standards, legal compliance, and jobsite reality. That is the conversation worth having.

Biography:

William Thomas is a principal at Gausnell, O’Keefe & Thomas, LLC in St. Louis, where he focuses his practice on construction claims and loss prevention. He is a member of the International Association of Defense Counsel (IADC), currently serving as chairperson of the IADC’s Construction Law Committee; an AAA Panel Arbitrator; a Fellow with the Construction Lawyers Society of America; and a member of the ABA Forum on Construction, AIA, and ASCE. He can be reached at wthomas@gotlawstl.com.
