The creative act has historically required a human mind to originate expressive choices. Even where creators embraced chance or mechanical repetition, the work was still traced back to the person who chose the tools, framed the problem, and accepted or rejected the result. That person, in legal terms, was the “author.”

Generative AI unsettles that foundation. As AI-generated hands become less wonky and the prospect of AI replacing the human hand in parts of the creative process becomes more real, a more basic question grows complicated: who bears responsibility for design outcomes shaped wholly or partially by AI? In the world of architecture, engineering, and construction (AEC), that question is more than philosophical; it is legal, ethical, and increasingly urgent.

AUTHORSHIP

U.S. copyright law protects “original works of authorship” fixed in a tangible medium. Architectural works (constructed buildings as well as the plans, models, and drawings that embody them) are expressly protected under the Copyright Act. That protection extends to the overall form and to the arrangement and composition of spaces and elements, but not to standard or purely functional features.

Copyright authorship is often lamented as an incoherent doctrinal morass, but one principle remains clear—human authorship is a bedrock requirement. In 2023, the U.S. Copyright Office made that requirement explicit: when an AI system determines the expressive elements of a work, that portion of the output is not eligible for copyright protection, as copyright is “the fruit of intellectual labor… founded in the creative powers of the [human] mind.”

The question then becomes: how much intellectual labor or human touch will suffice? Until courts or Congress clarify the threshold, the prudent approach is to treat AI as a tool for the mundane and easily verifiable, not the magnificent. This “tool” framing seems counterintuitive because AI systems create the illusion of autonomous invention. Yet considering the machine in isolation from its extended sociotechnical network invites romanticization of the machine, much as isolating the human creator from his assemblage of influences once invited romanticization of the human author.

OWNERSHIP

Output produced by generative AI may incorporate or resemble elements of pre-existing works. Because end users lack access to the underlying training data, they cannot realistically determine whether an AI-generated façade, massing, or detail echoes a protected design.

The risk thus compounds: a firm may not be able to claim exclusive rights in a concept shaped by AI, yet could still face infringement allegations if that concept tracks someone else’s protected work too closely.

Meanwhile, the terms of service on many platforms require users to grant broad licenses in uploaded material, enabling the provider to reuse designs to train future models. That contractual overreach, layered on top of legal ambiguity, threatens to dilute intellectual property and may allow design errors to propagate (garbage in, garbage out), exposing both the original and AI-assisted designers to liability when those errors reappear in downstream outputs.

This produces an emerging paradox: AI disrupts authorship without displacing accountability.

ACCOUNTABILITY

AI may obscure expressive origins, but not professional responsibility. Licensing statutes and professional codes of conduct for architects and engineers still require independent judgment, technical competence, and “responsible control” over any work that is signed and sealed. Recent policy statements by the AIA and ASCE converge on the same point: AI may enhance efficiency and innovation, but it cannot be held accountable and cannot replace the training, experience, and judgment of a licensed professional. Complicating matters further, confidentiality and attribution obligations do not dissolve simply because the recipient or collaborator is (currently) non-sentient.

The applicable standard of care still asks whether a reasonably prudent professional, in the same discipline and circumstances, would have accepted the AI output without further verification. AI’s rapid evolution may blur the definition of the prevailing standards of use at the time of the alleged breach, and eventually raise the inverse question of whether it is unreasonable not to use AI. But we are not there yet. For now, the immediate risk is overreliance on systems that can hallucinate, confabulate, or subtly propagate errors at scale.

CONCLUSION

AI-influenced design work occupies an uneasy middle ground: unlikely to be protected, possibly infringing, and still professionally and ethically accountable. AI should be relegated to the role the law implicitly assumes—a tool that enhances, but does not substitute for, human creativity and professional judgment. In practice, that means limiting AI to tasks within the firm’s competencies and subjecting its output to meaningful review; maintaining robust QA/QC procedures; safeguarding confidential and proprietary information in light of platform terms; and preserving records of human decisions and modifications so that final instruments of service reflect deliberate professional authorship and control rather than the illusion of machine autonomy.

Look out for our next installment, where we will shift from legal foundations to practical safeguards. 
