It’s Time to Treat New York’s AI Building Systems the Same as Fire Systems
Right now, there’s little ongoing oversight of artificial intelligence platforms that run core functions
By Mary Michaels August 26, 2025 11:15 am
The newest office towers are normalizing identity-centric, AI app-mediated operations. Employees can use palm scans as biometric authentication to enter through turnstiles, while a workplace app handles elevator access, navigation, food orders from the second-floor restaurant and drink requests from the coffee bars. It’s efficient, but it makes these towers highly dependent on AI software.
Are we ready for AI-controlled buildings?
When critical building functions run on AI, they stop being optional amenities and become core infrastructure. A biometric access control failure can lock hundreds of people out of their homes or offices. In a crisis, AI-driven control systems decide when smoke fans ramp up, which doors are released and how backup power flows.

New York City doesn’t have a joint authority across the Department of Buildings (DOB), the New York Fire Department (FDNY) and the city’s Cyber Command to review cyber-physical AI risks in private buildings. This is an overlooked gap in New York’s smart building boom.
New York City also has no requirement on the books to inspect or reinspect AI control software in private buildings. New York City’s Office of Technology and Innovation and the Cyber Command provide AI and cyber expertise for city systems. But the DOB and FDNY remain the authorities on life safety inspections, and neither agency currently conducts post-occupancy audits of the logic behind AI decisions.
In September 2023, Johnson Controls International, a global building automation and security controls firm, suffered a cyberattack that forced it to shut down large portions of its IT infrastructure, disrupting operations and customer-facing systems. The company later confirmed in a U.S. Securities and Exchange Commission filing that it was a ransomware incident. Reporting at the time also noted potential exposure of sensitive building security information, a reminder that a vendor incident can become a building incident overnight.
All this is to say that the building inspection system needs an update. We should start by treating AI systems as regulated building equipment. Any AI-driven system controlling access, safety or core building functions should require DOB plan review and permitting — just like any mechanical or life safety system.
It would also be valuable to build AI oversight capacity within existing agencies, such as the DOB, Housing Preservation and Development, and the FDNY, where trained inspectors would evaluate AI systems for bias, security vulnerabilities and operational reliability.
Alternatively, high-risk forms of AI, such as biometric access controls and tenant-scoring algorithms, should undergo third-party audits before occupancy is permitted and after major updates — similar to reinspection cycles for fire safety systems.
It’s tempting to treat building AI as a convenience: faster elevators, personalized climate, lower electrical bills. But when the same software controls smoke ventilation, door access and backup power, it becomes critical infrastructure. In other sectors, comparable operational technology (OT) is governed by the ISA/IEC 62443 family of standards, which takes a life cycle approach precisely because these systems evolve after day one and failures can cascade. We should apply the same standard to New York’s smart buildings.
Our inspection model was built for fixed steel, pipe and wiring: validate it once and trust it for years. Software breaks that logic. Months after move-in, an AI controller can change behavior without moving a single bolt: a cloud update that tweaks alarm thresholds, a vendor “optimization” that trims fan speeds, or a new access rule pushed at 2 a.m. None of those updates is currently visible to city inspectors.
We don’t need to invent a new doctrine. We need to update our safety playbook to fit software that evolves after people move in. Here are some suggestions:
- Require guardrails such as human approval for higher-risk actions, uncertainty and confidence checks, and “safeguard agents” that monitor and block harmful behavior (see the first sketch after this list).
- Certify the software before occupancy through an independent OT and AI safety review of the systems that control elevators, HVAC, smoke control, access and on-site energy. Document the intended safety logic, test fail-safe modes (a cloud outage, a lying sensor, a misclassification) and verify network segmentation between life safety OT, building IT and tenant networks. Use ISA/IEC 62443 as the spine.
- Retest annually, or at least biennially, to confirm that the live system still aligns with its documented intent. Review version histories, privileged access, vendor connections and change-control evidence. Sample decision logs for drift.
- To improve continuous monitoring, put building OT on the cybersecurity team’s radar through asset inventories, tamper-evident logs, anomaly detection and alerts for control systems (the second sketch after this list shows how simple a tamper-evident log can be).
- Restrict third-party access: require multi-factor authentication for remote sessions, scope vendors to specific roles, record sessions and mandate prompt incident reporting. Use phishing-resistant methods with segmentation and logging, and make vendor access temporary and traceable (the third sketch after this list shows a time-boxed grant).
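To make the “safeguard agent” idea concrete, here is a minimal sketch in Python. The action names, confidence threshold and approval field are illustrative assumptions, not any vendor’s actual API; the point is that the gate, not the AI controller, gets the final word on high-risk actions.

```python
# A minimal sketch of a "safeguard agent" gate, with hypothetical names:
# the AI controller proposes an action, and the gate blocks or escalates it
# before anything reaches the physical building systems.
from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"release_doors", "cut_backup_power", "disable_smoke_fans"}
MIN_CONFIDENCE = 0.90  # assumed threshold; a real policy would be site-specific

@dataclass
class ProposedAction:
    name: str                        # e.g. "release_doors"
    confidence: float                # the controller's confidence in its decision
    approved_by: str | None = None   # set once a human operator signs off

def safeguard_gate(action: ProposedAction) -> bool:
    """Return True only if the action may be executed."""
    if action.confidence < MIN_CONFIDENCE:
        return False  # uncertainty check: low-confidence decisions are refused
    if action.name in HIGH_RISK_ACTIONS and action.approved_by is None:
        return False  # human-approval check: high-risk actions need a sign-off
    return True

# The controller wants to release doors with 97 percent confidence, but no
# operator has approved it yet, so the gate blocks the action.
assert safeguard_gate(ProposedAction("release_doors", 0.97)) is False
assert safeguard_gate(ProposedAction("release_doors", 0.97, approved_by="ops-desk")) is True
```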
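The tamper-evident logs above can be as simple as a hash chain, where each entry’s hash covers the entry before it, so silently editing history breaks the chain. A minimal sketch using only Python’s standard library; the event fields are hypothetical:

```python
# Each appended entry hashes its event together with the previous entry's
# hash. Altering any past entry invalidates every hash after it, which an
# auditor can detect by re-walking the chain.
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "genesis"
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(log: list[dict]) -> bool:
    prev_hash = "genesis"
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"action": "ramp_smoke_fans", "source": "ai_controller"})
append_entry(log, {"action": "release_doors", "source": "ops-desk"})
assert verify_chain(log)
log[0]["event"]["action"] = "noop"  # quietly rewriting history...
assert not verify_chain(log)        # ...is caught on verification
```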
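And “temporary and traceable” vendor access can be enforced with time-boxed grants that expire by default. Another minimal sketch under the same caveats; the grant store, role names and identifiers are stand-ins for a real access management system:

```python
# Every remote session is granted for a fixed window, tied to a named
# technician and a specific role, and must clear MFA before use, so access
# expires on its own instead of lingering.
from datetime import datetime, timedelta, timezone

GRANTS: dict[str, dict] = {}  # grant_id -> grant record (stand-in for a real store)

def grant_vendor_access(grant_id: str, technician: str, role: str,
                        hours: int = 4) -> None:
    GRANTS[grant_id] = {
        "technician": technician,  # traceable: a named person, not a shared account
        "role": role,              # specific role, e.g. "hvac_tuning"
        "expires": datetime.now(timezone.utc) + timedelta(hours=hours),
        "mfa_verified": False,     # must be set by the MFA step before any session
    }

def vendor_may_connect(grant_id: str) -> bool:
    grant = GRANTS.get(grant_id)
    if grant is None:
        return False
    return grant["mfa_verified"] and datetime.now(timezone.utc) < grant["expires"]

grant_vendor_access("vendor-2025-001", technician="j.doe", role="hvac_tuning")
assert not vendor_may_connect("vendor-2025-001")  # blocked until MFA completes
GRANTS["vendor-2025-001"]["mfa_verified"] = True
assert vendor_may_connect("vendor-2025-001")      # allowed within the time window
```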
These guardrails aren’t about blocking progress. They are about bringing industrial-grade safety and cybersecurity practices to the AI control software that now runs our buildings. The standards and expertise exist. All that’s needed now is the will to move from an approve-and-forget model to a monitor, audit and adapt approach before an avoidable failure writes a headline for us.
Mary Michaels is completing a master’s degree in global security, conflict and cyber crime, specializing in emerging technology, at New York University’s School of Professional Studies. She previously worked at a New York City-based general construction and construction management firm.