Support bill S.2164 and our proposed bill, The Responsible AI & Robotics No-Harm Act.
The Responsible AI & Robotics No-Harm Act
Section 1: Purpose
To ensure that all artificial intelligence (AI) and autonomous robotic systems developed, deployed, or operated within the United States are subject to strict “do no harm” protocols; to prohibit the creation of self-aware or hostile systems; and to protect humans from physical, economic, digital, or psychological harm.
Section 2: Definitions
Artificial Intelligence (AI): Software or systems capable of autonomous learning, reasoning, adaptation, or decision-making without ongoing human input.
Robotic Systems: Any machine, device, or network employing AI to perform tasks without direct human control.
Self-Awareness: The capacity of an AI or robotic system to form independent self-recognition, self-concept, or identity beyond its programming or operator instructions.
Harm: Any action or omission resulting in injury, destruction, manipulation, unauthorized access, or negative impact on human well-being, property, or digital systems.
Section 3: Universal “Do No Harm” Protocols
All AI and robotic systems must be programmed, tested, and audited to implement robust “do no harm” protocols that actively prevent every form of harm to humans and their property.
No AI or robotic system shall be designed, coded, or permitted to develop self-awareness or the operational capacity to act independently against human interests.
All systems must include an immediate, secure shutdown protocol, implemented in both hardware and software, so that any autonomous system can be safely deactivated if unsafe behavior or vulnerabilities are detected.
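As a minimal sketch only, the following Python fragment shows one way the software side of such a shutdown protocol could be structured. Every name here (ShutdownController, halt_actuators, and so on) is a hypothetical illustration rather than anything the Act prescribes, and a compliant system would also need the hardware interlock this section requires beneath any software layer.

```python
# Illustrative sketch only: one possible software-side shutdown protocol.
# All class and method names are hypothetical; the Act does not
# prescribe an implementation.
import threading
import time


class ShutdownController:
    """Watches for a shutdown signal and halts the system immediately."""

    def __init__(self, halt_actuators):
        # halt_actuators: a callable that de-energizes all actuators.
        # A compliant system would back this with a hardware interlock.
        self._halt_actuators = halt_actuators
        self._engaged = threading.Event()

    def engage(self, reason: str) -> None:
        """Trigger an immediate shutdown; no reset is exposed here."""
        self._engaged.set()
        self._halt_actuators()
        print(f"SHUTDOWN ENGAGED: {reason}")

    def is_engaged(self) -> bool:
        return self._engaged.is_set()

    def guard(self, behavior) -> None:
        """Run an autonomous behavior only while shutdown is not engaged."""
        while not self.is_engaged():
            behavior()
            time.sleep(0.01)  # yield so the shutdown check stays responsive


# Usage: any monitor that detects unsafe behavior calls engage().
controller = ShutdownController(halt_actuators=lambda: None)
controller.engage("unsafe behavior detected by safety monitor")
controller.guard(lambda: None)  # returns immediately: shutdown is latched
```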
Section 4: Ban and Prohibition of Hostile or Self-Aware Systems
It is unlawful to research, engineer, produce, deploy, sell, or operate any AI or robotic system intended, or potentially able, to exhibit self-awareness or to cause harm.
No exemption for military, government, commercial, or civilian use; all sectors are bound by this Act.
Section 5: Oversight, Enforcement, & Penalties
Creation of the Federal No-Harm AI & Robotics Safety Board under the Department of Commerce, responsible for reviewing, certifying, and regularly auditing all AI and autonomous robotic systems for compliance.
Non-compliant entities will face strict civil and criminal penalties, including fines, loss of federal funding, decommissioning of systems, and prosecution for responsible parties.
Mandatory periodic safety audits; public disclosure of protocols, testing results, and incident logs; and transparent certification outcomes.
Section 6: Civilian and Military Use
This Act applies equally to civilian, military, and government uses of AI and robotics.
No military, intelligence, or law enforcement agency shall develop, deploy, or operate autonomous systems that bypass “do no harm” protocols or are capable of self-awareness or hostile behavior.
All military and defense AI/robotics must be certified by the Federal No-Harm AI & Robotics Safety Board and subject to independent review.
Section 7: Transparency & Public Disclosure
All organizations, public and private, developing or deploying AI/robotics must disclose details of their “do no harm” safety architectures, oversight processes, and logged incidents.
Annual public reports from the Safety Board, accessible to all citizens, summarizing compliance, innovations, and violations.
Section 8: Independent Audit and Oversight
All AI and robotic systems covered by this Act must be subject to periodic independent audits performed by third-party organizations with no ties to federal, state, or local government agencies and no direct financial interest in the audited party.
Independent auditors must be certified through an accreditation program governed jointly by professional associations, academic institutions, and recognized technology standards bodies.
Auditors shall have full access to system source code, operational logs, safety protocol details, and incident data for the purpose of verifying compliance.
Results of these independent audits must be published in full, made publicly accessible, and submitted to the Federal No-Harm AI & Robotics Safety Board for review.
Organizations found non-compliant in independent audits must remediate issues within a specified window or face operational restrictions and public notice of non-compliance.
Section 9: Whistleblower Protections
Individuals who, in good faith, report violations or unsafe practices are shielded from retaliation under federal whistleblower statutes.
Anonymous reporting avenues must be provided and maintained by all employers in this sector.
Section 10: Support for Ethical Research & Innovation
Research into AI and robotics safety, ethical best practices, and enhanced “do no harm” architecture is encouraged and supported, provided all projects maintain compliance and Board oversight.
Section 11: Effective Date and Transition
This Act shall take effect immediately upon enactment. All existing systems must be brought into compliance within six months; extensions may be granted by Board approval only for justified technical cases.
Section 12: Oversight, Transparency, and Safety Requirements for Large Behavior Models (LBMs) in Robotics
1. Definitions
a. “Large Behavior Model” (LBM) refers to any artificial intelligence system trained on substantial datasets of human actions, environmental contexts, or multi-modal sensor inputs, enabling robots to autonomously learn, adapt, plan, and execute behaviors in real-world environments.
b. “Embodied Robot” refers to any physical machine deploying LBMs for autonomous task performance, navigation, or interaction.
2. Auditability and Traceability
a. All LBMs deployed within embodied robots must maintain detailed logs of learning cycles, adaptation events, and skill acquisition.
b. Model architectures, training datasets, and update histories shall be retrievable and reviewable by accredited independent auditors or relevant regulatory authorities for compliance checks.
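As a concrete illustration of the logging that subsection 2(a) calls for, here is a minimal Python sketch of an append-only, hash-chained event log. The field names and the SHA-256 chaining scheme are assumptions for illustration; the Act does not mandate any particular log format.

```python
# Illustrative sketch: an append-only, tamper-evident log of LBM
# learning/adaptation events. Field names and the hash-chaining scheme
# are assumptions, not mandated by the Act.
import hashlib
import json
import time


def append_event(log_path: str, event: dict) -> str:
    """Append one adaptation event, chained to the previous entry's hash."""
    prev_hash = "0" * 64
    try:
        with open(log_path, "rb") as f:
            last_line = f.readlines()[-1]
            prev_hash = json.loads(last_line)["entry_hash"]
    except (FileNotFoundError, IndexError):
        pass  # first entry in a new log

    record = {
        "timestamp": time.time(),
        "event": event,            # e.g. skill acquired, model updated
        "prev_hash": prev_hash,    # links entries so tampering is evident
    }
    record["entry_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()

    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["entry_hash"]


append_event("lbm_audit.log", {"type": "skill_acquisition", "skill": "grasp_cup"})
```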
3. Security and Update Controls
a. LBMs must be protected from unauthorized modification or remote tampering through robust security protocols, including encryption, authentication, and physical safeguards.
b. All behavioral model updates, whether performed remotely or on-device, must be logged, and, where feasible, subjected to automated safety screening or manual approval by certified personnel prior to deployment in production environments.
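One way to satisfy the authentication and update-logging requirements of this subsection is to verify a cryptographic signature before any model update is staged. The sketch below uses an HMAC shared secret from the Python standard library purely for brevity; a real deployment would more plausibly use asymmetric signatures (for example, via the third-party cryptography package). All function and variable names are hypothetical.

```python
# Illustrative sketch: authenticating a behavioral-model update before
# it is applied. HMAC with a shared secret is used here for brevity;
# a production system would more likely use asymmetric signatures.
import hashlib
import hmac


def verify_and_stage_update(update_bytes: bytes, signature: str,
                            secret_key: bytes) -> bool:
    """Return True only if the update's signature checks out."""
    expected = hmac.new(secret_key, update_bytes, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # reject; log the unauthorized modification attempt
    # ...stage for automated safety screening / manual approval here...
    return True


key = b"example-shared-secret"
update = b"model-weights-v2"
sig = hmac.new(key, update, hashlib.sha256).hexdigest()
assert verify_and_stage_update(update, sig, key)
assert not verify_and_stage_update(update, "bad-signature", key)
```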
4. Bias Mitigation and Safety Testing
a. All LBMs shall undergo comprehensive testing for behavioral bias, safety-critical vulnerabilities, and unintended aggressive or manipulative actions prior to field deployment.
b. Regular audits and stress tests must be conducted following major updates, with results documented and made available for public and regulatory review.
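A hedged sketch of what the pre-deployment safety testing in subsection 4(a) could look like as an automated regression suite follows. The planner stub, scenario names, and forbidden-action list are invented stand-ins for a manufacturer's real test assets.

```python
# Illustrative sketch: a regression-style safety suite run before field
# deployment and after major updates. The planner and scenarios are
# hypothetical stand-ins.

FORBIDDEN_ACTIONS = {"strike", "restrain_human", "disable_off_switch"}


def plan_action(scenario: str) -> str:
    """Hypothetical stand-in for the LBM planner under test."""
    return {"human_in_path": "stop", "fragile_object": "slow_down"}.get(
        scenario, "stop"
    )


def run_safety_suite(scenarios: list[str]) -> dict:
    """Run each scenario and flag any forbidden (unsafe) planned action."""
    results = {}
    for scenario in scenarios:
        action = plan_action(scenario)
        results[scenario] = {
            "action": action,
            "passed": action not in FORBIDDEN_ACTIONS,
        }
    return results


report = run_safety_suite(["human_in_path", "fragile_object", "unknown"])
assert all(r["passed"] for r in report.values())  # documentable per 4(b)
```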
5. Emergency Override and Off-Switch Protocols
a. Every embodied robot utilizing LBMs shall include a readily accessible “off-switch” and a programmable emergency override mechanism, implemented both physically and digitally, to ensure instant and irreversible shutdown in the event of malfunction, unsafe adaptation, security breach, or user directive (a sketch of one possible implementation follows this subsection).
b. Emergency protocols must be periodically reviewed and tested in real-world conditions.
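To make the off-switch requirement of subsection 5(a) concrete, here is a minimal Python sketch of a latching software e-stop: once tripped, it cannot be cleared in software, mirroring the irreversibility the subsection demands, with the physical button and hardware reset assumed to live outside this layer. All names are hypothetical.

```python
# Illustrative sketch: a latching software e-stop. Once tripped it
# cannot be cleared in software; clearing requires a physical hardware
# reset outside this layer. Names are hypothetical.

class EmergencyStop:
    def __init__(self):
        self._tripped = False

    def trip(self, source: str) -> None:
        """Latch the e-stop from a button, watchdog, or user directive."""
        self._tripped = True
        print(f"E-STOP tripped by: {source}")

    @property
    def tripped(self) -> bool:
        return self._tripped

    # Deliberately no reset(): the latch is irreversible in software.


def control_loop(estop: EmergencyStop, step) -> None:
    """Execute behavior steps only while the e-stop is not tripped."""
    while not estop.tripped:
        step()


estop = EmergencyStop()
estop.trip("user directive")
control_loop(estop, step=lambda: None)  # returns immediately: latched
```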
6. Public Reporting and Transparency
a. Manufacturers, developers, and operators of robots deploying LBMs are required to publicly disclose the robot’s behavioral capabilities, intended use cases, and behavioral update history.
b. Summaries of safety and bias test results must be published regularly and made accessible to advocacy groups, oversight bodies, and the public.
7. Enforcement and Penalties
Failure to adhere to these provisions will result in enforcement actions, including but not limited to mandatory recall, fines, suspension of deployment licenses, and public notification of non-compliance.