A whistleblower wrongful-termination suit against Figure Robotics has ignited a debate that matters to every stakeholder in Canadian tech. The case centers on a veteran robot safety engineer who says he was fired after repeatedly raising concerns that Figure was rushing powerful humanoid robots to market without robust safety systems. The allegations — from removed emergency-stop functionality to impact forces capable of causing skull fractures — present a cautionary tale for the Canadian tech ecosystem: innovation without rigorous safety governance can create legal exposure, reputational damage, and real physical harm.
Table of Contents
- Why Canadian tech leaders should care
- What happened at Figure Robotics: the core allegations
- Timeline highlights and organizational breakdown
- Technical context: what the safety claims really mean
- Culture, governance, and investor dynamics
- Legal implications and whistleblower protection: what companies need to know
- Lessons for Canadian tech companies and investors
- Balancing capability and safety: a pragmatic view
- What the Figure case means for home robots and public perception
- Operational implications for GTA manufacturers and service providers
- How the Canadian tech ecosystem can move forward
- Conclusion
- Call to action
- Frequently asked questions
Why Canadian tech leaders should care
Humanoid robots are no longer a distant lab curiosity. They are becoming a market reality for factories, logistics centers, and eventually homes. Canadian tech firms, investors, and policy makers must treat the Figure incident as a case study in the operational, ethical, and regulatory challenges that accompany embodied AI.
Many Canadian tech companies are embedded in global supply chains, collaborate with U.S. and international partners, or plan to deploy robotic systems at scale across the Greater Toronto Area and beyond. They cannot afford to view safety as an afterthought. This lawsuit illuminates how governance gaps, product culture, and misaligned incentives can produce systemic risk.
What happened at Figure Robotics: the core allegations
The suit was filed by a senior safety engineer with more than two decades of experience in robotics and human-robot interaction. Hired to lead product safety, the engineer reported directly to the CEO and quickly found that the company lacked formal safety procedures, incident reporting, and even a designated employee health and safety function.
Key allegations include the following:
- Absence of formal safety systems. No incident reporting or risk-assessment processes were in place while robots were being tested on-site.
- Dismissed written requirements. Senior leadership reportedly expressed a dislike of written product safety requirements, undermining documentation and traceability.
- Removal of safety features for aesthetic reasons. The emergency-stop (e-stop) program, a foundational safety control, was allegedly canceled because an engineer did not like how it looked.
- Impact testing that produced dangerous force levels. An impact test measured forces 20 times higher than the pain thresholds defined in ISO standards; the head of safety estimated the robot generated more than twice the force needed to fracture an adult human skull.
- Near-miss incidents were untracked. A robot reportedly malfunctioned and punched a refrigerator, leaving a quarter-inch gash and narrowly missing an employee.
- Whistleblower retaliation. The engineer claims that, after escalating concerns to senior leadership including the CEO, he was summarily terminated for raising safety issues.
“The robot was capable of inflicting severe permanent injury on humans.”
Those words, written by the safety engineer and included in the complaint, crystallize the central tension. The same power and dexterity that make humanoid robots useful also create the potential for serious harm if control systems, sensing, governance, and operational protocols are insufficient.
Timeline highlights and organizational breakdown
Understanding the sequence of events explains how cultural and organizational choices amplified technical risks.
- October 2024: The safety engineer starts and is tasked with building a global product safety program.
- Early employment: The engineer finds no formal safety procedures, no incident reporting, and no employee health and safety team.
- January 2025: The CEO asks what it would take to put robots in homes; a safety roadmap for home use is prepared but the CEO does not attend the briefing.
- May–June 2025: A white paper and safety strategy are presented to investors and publicly praised by leadership. Internal downgrading of the plan follows.
- July 28, 2025: Impact testing of the Figure 02 robot produces force measurements well beyond pain thresholds.
- Late July–August 2025: The e-stop certification program is halted; communicating safety concerns in writing draws alarm from colleagues tied to sales.
- September 2, 2025: The safety lead is terminated; the stated reason is reportedly a vague change in business direction.
Throughout, leadership metrics emphasized speed to market and commercial viability. The company reportedly embraced an internal motto that prioritized bold, rapid development.
“Move fast and be technically fearless.”
That motto captures an engineering ethos that propelled many breakthroughs in software but does not translate cleanly to systems that interact physically with humans.
Technical context: what the safety claims really mean
The engineering allegations are technical but clear. Several concepts matter to anyone evaluating embodied AI safety.
Impact force, ISO thresholds, and human tolerance
ISO technical specifications for collaborative robot safety define thresholds for pain and acceptable force during human-robot interaction. The complaint states that an impact test recorded forces 20 times higher than ISO/TS 15066 pain thresholds and that the forces were sufficient to fracture a human skull.
Whether those specific force calculations are accurate will be decided in expert testimony. What is indisputable is that a high-mass, high-speed actuator in a bipedal frame can generate dangerous energy. Minimizing risk requires engineering controls: limited operational envelopes, speed and force constraints, and verified, safety-certified emergency-stop systems.
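To make the threshold logic concrete, here is a minimal sketch of checking a measured contact force against a per-body-region limit table, in the spirit of ISO/TS 15066. The region names and limit values below are placeholders for illustration, not the normative figures from the standard.

```python
# Illustrative sketch: compare a measured quasi-static contact force against
# per-body-region limits, in the spirit of ISO/TS 15066. The values below are
# hypothetical placeholders, not the standard's normative numbers.

QUASI_STATIC_FORCE_LIMITS_N = {  # assumed limits, in newtons
    "hand": 140.0,
    "chest": 140.0,
    "skull_forehead": 130.0,
}

def contact_is_within_limit(region: str, measured_force_n: float) -> bool:
    """Return True if the measured force is at or below the region's limit.
    Unknown regions fail safe: with no limit on record, the check fails."""
    limit = QUASI_STATIC_FORCE_LIMITS_N.get(region)
    if limit is None:
        return False  # fail safe rather than permit untracked contact
    return measured_force_n <= limit
```

A reading 20 times over a limit, as alleged in the complaint, fails this check decisively; the point of the table is that every pass or fail is explicit and auditable.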
Emergency-stop certification and safety observers
An e-stop is a last-resort control designed to bring a machine to a safe state immediately. The certification process evaluates whether the e-stop can be trusted to prevent injury under the expected operating conditions. In some collaborative contexts, human safety observers are part of the procedure to ensure safe operation during early testing.
Halting e-stop certification or relying exclusively on informal safety observers increases the likelihood that a single malfunction or oversight could escalate into an incident.
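One common software layer behind a certified hardware e-stop is a heartbeat watchdog that latches into a stop state if the control loop goes silent. The sketch below is an assumption-laden illustration (class name and timeout are invented), and a software check never substitutes for a certified hardware stop.

```python
# Illustrative sketch of a software heartbeat watchdog, one redundant layer
# behind a certified hardware e-stop. Names and timing values are assumptions.
import time

class HeartbeatWatchdog:
    def __init__(self, timeout_s: float = 0.1):
        self.timeout_s = timeout_s
        self._last_beat = time.monotonic()
        self.stopped = False

    def beat(self) -> None:
        """Called by the control loop every cycle while the system is healthy."""
        self._last_beat = time.monotonic()

    def check(self) -> bool:
        """Trip into a latched stop if the control loop has gone silent.
        Once tripped, the stop persists until an explicit human reset."""
        if time.monotonic() - self._last_beat > self.timeout_s:
            self.stopped = True
        return self.stopped
```

The latch is the important design choice: a fault should not clear itself just because the heartbeat resumes, which forces a human to investigate before motion restarts.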
AI non-determinism, perception failures, and unexpected behavior
Figure’s robots reportedly run proprietary AI systems. Advanced perception and decision-making systems are inherently probabilistic. They can misclassify objects, fail to detect a nearby person, or take inappropriate actions when confronted with ambiguous sensor input.
Non-determinism is not a theoretical quirk; it is the central safety challenge of embodied AI. Systems must be engineered with layered mitigations: reliable sensing, conservative motion planning, formal safety envelopes, redundant stop mechanisms, and rigorous validation under diverse, adversarial conditions.
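One of those mitigation layers, speed-and-separation monitoring, can be sketched in a few lines: the commanded speed shrinks as a detected person gets closer, and a perception failure degrades toward stillness rather than motion. The distance bands and speed caps here are hypothetical values, not numbers from any standard.

```python
# Illustrative sketch of speed-and-separation monitoring: commanded speed is
# capped by distance to the nearest detected person. All numeric bands are
# hypothetical values for illustration.
from typing import Optional

def max_allowed_speed_mps(nearest_person_m: Optional[float]) -> float:
    """Conservative speed cap given distance to the nearest detected person.
    A failed detection (None) is treated as a person at zero distance, so a
    perception fault fails toward stopping, never toward full speed."""
    if nearest_person_m is None:
        return 0.0    # fail safe on lost perception
    if nearest_person_m < 0.5:
        return 0.0    # stop inside the protective separation distance
    if nearest_person_m < 2.0:
        return 0.25   # creep speed in the shared zone
    return 1.0        # nominal speed in the clear zone
```

The asymmetry is deliberate: a probabilistic perception stack will sometimes miss a person, so the safe default when the input is missing must be zero motion.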
Culture, governance, and investor dynamics
The lawsuit ties safety failings to leadership priorities. Two patterns stand out:
- Documentation aversion. The chief engineer and CEO reportedly disliked written requirements. Documentation is not bureaucratic red tape; it is accountability and traceability. Without it, safety work becomes informal and fragile.
- Overpromising to investors. Investors were briefed on a safety plan that was later partially retracted. Selling a vision of safety and failing to deliver can create legal exposure and reputational loss for investors if incidents occur.
Leadership needs to reconcile the incentives of rapid product timelines with the time-intensive validations that safe physical systems require. Governance mechanisms — safety committees, independent verification, and documented risk acceptance — provide a bridge between speed and prudence.
Legal implications and whistleblower protection: what companies need to know
Whistleblower retaliation lawsuits are consequential beyond the headline. They test whether an organization protects employees who report risks and whether it has adequate processes for handling safety concerns.
For Canadian tech firms and branches of multinational companies operating in Canada, several legal principles apply even though this particular suit was filed in the United States. Provincial occupational health and safety statutes mandate safe workplaces and employee protections. Federal and provincial whistleblower frameworks protect employees from being terminated for raising legitimate safety concerns.
Companies that dismiss safety escalations without careful documentation or bypass formal internal processes risk regulatory scrutiny, civil liability, and acute damage to employer brand — particularly within the tight-knit Canadian tech community.
Lessons for Canadian tech companies and investors
The Figure case offers a blueprint for prevention. Canadian tech leaders should treat it as an urgent call to action across product, legal, and investor relations functions.
Governance and process
- Appoint a senior safety officer with independence. The person responsible for product safety should have a direct reporting line that enables escalation to the board or audit committee.
- Document safety requirements and verify compliance. Written requirements are not optional. They create auditable evidence that risk controls were considered and implemented.
- Institutionalize incident reporting and near-miss tracking. Near misses are high-value signals. Capture them, analyze root causes, and feed findings back into product design.
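A near-miss report does not need heavy tooling to be useful; what matters is that every event leaves an auditable record with a root cause and corrective actions. The sketch below shows a minimal record structure, with all field names invented for illustration.

```python
# Illustrative sketch of a minimal near-miss record, so that events like the
# refrigerator strike described in the complaint leave an auditable trail.
# All field names are assumptions for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class NearMissReport:
    description: str                 # what happened, in plain language
    robot_id: str
    severity: str                    # e.g. "near_miss", "property_damage"
    reported_by: str
    root_cause: str = "under investigation"
    corrective_actions: List[str] = field(default_factory=list)
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

report = NearMissReport(
    description="Arm malfunction struck refrigerator door; employee nearby",
    robot_id="unit-042",
    severity="property_damage",
    reported_by="safety-lead",
)
report.corrective_actions.append("Review motion-planner fault logs")
```

Even this skeleton enforces the discipline the complaint says was missing: nothing defaults to "resolved", and corrective actions accumulate as evidence of follow-through.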
Engineering practices
- Design conservative motion envelopes. Limiting speed and force in uncertain environments dramatically reduces injury risk.
- Layer safety mechanisms. Combine passive physical limits, robust perception, certified e-stops, and software interlocks to create redundancies.
- Invest in realistic testing. Test in environments that match operational conditions and include adversarial cases that stress perception and control stacks.
Investor and board responsibilities
- Demand safety milestones. Investors funding embodied AI must require measurable safety milestones and independent verification before releasing funds tied to commercialization.
- Protect reputational capital. A single high-profile incident can sour customer confidence across an entire sector, including within the Canadian tech market.
Balancing capability and safety: a pragmatic view
Robots deliver enormous potential value — to manufacturing productivity, logistics optimization, eldercare, and household chores. Comparing humanoid robots to inherently dangerous consumer items such as cars or power tools is instructive: all of these systems present quantifiable risks alongside significant upside.
The central question for Canadian tech is not whether to pursue advanced robotics, but how to integrate engineered safety and documented risk acceptance into product roadmaps without neutering capability. Overly constraining a platform can render it commercially useless. Too little control creates unacceptable harm. The right path is structured, documented risk management that sets clear operational domains and a plan for incremental relaxation as evidence accumulates.
What the Figure case means for home robots and public perception
One of the most provocative claims in the complaint is that Figure leadership wanted humanoids in homes. That raises unique safety requirements: homes are unstructured, crowded, and full of humans, pets, and fragile objects. Traditional industrial mitigations like fenced-off cells do not apply.
To put robots into homes safely will require:
- Reliable person detection and intent recognition. Robots must robustly detect living beings and predict motion intent to avoid collisions.
- Context-aware motion planning. Motion profiles tuned to human spaces and ergonomics reduce kinetic risk.
- Fail-safe interaction policies. Conservative defaults for grip strength, speed, and proximity when humans are nearby.
- Regulatory and standards alignment. Home deployments will attract consumer safety regulators and product liability scrutiny.
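The fail-safe interaction policies listed above can be expressed as an explicit, immutable configuration, so that any more permissive mode requires a deliberate, reviewed change rather than a silent override. Every field name and value below is a hypothetical placeholder.

```python
# Illustrative sketch: conservative interaction defaults for a home deployment.
# Every field name and numeric value is a hypothetical placeholder.
from dataclasses import dataclass

@dataclass(frozen=True)
class HomeInteractionDefaults:
    max_grip_force_n: float = 20.0            # gentle grasp unless raised
    max_speed_near_human_mps: float = 0.25    # creep speed around people
    min_person_standoff_m: float = 0.5        # keep-out distance
    require_person_clear_to_resume: bool = True  # no auto-restart after a stop

DEFAULTS = HomeInteractionDefaults()  # frozen: changes require a new, reviewed config
```

Freezing the dataclass is the governance point: relaxing a limit becomes a visible code change that can be documented and risk-assessed, not a runtime tweak.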
For Canadian tech companies targeting consumer spaces, building trust requires transparent validation studies, clear labeling of limitations, and post-market surveillance to detect emergent hazards.
Operational implications for GTA manufacturers and service providers
The Greater Toronto Area hosts a concentrated cluster of robotics R&D, manufacturing, and applied AI firms. The Figure case matters directly to these organizations.
- Procurement criteria must include safety verification. When buying robotic platforms, insist on third-party certification, reproducible test data, and documented safety cases.
- Insurance and liability planning. Insurers will price risk based on demonstrable engineering controls and governance. Companies should be proactive in documenting mitigations.
- Talent and training. Operational staff need safety training and clear rules for human oversight during integration and testing.
How the Canadian tech ecosystem can move forward
Canada can be a leader in safe, humane integration of embodied AI. Academic institutions, research labs, and industry consortia can coordinate to produce tooling, standards, and testing infrastructure that enable scaled deployments while preserving safety.
Policy makers should consider harmonizing provincial occupational health protections with emerging product-safety expectations for robotics. Collaborative efforts between regulators, industry, and academia can accelerate robust certification frameworks tailored to European, American, and Canadian regulatory landscapes.
Conclusion
The Figure lawsuit is a stark reminder: physical AI systems require more than engineering bravado and investor excitement. They need governance, documentation, and a culture that elevates safety to a strategic priority. For Canadian tech companies and investors, the takeaways are immediate and actionable. Protective processes that may feel slow at first are essential foundations for long-term scale and public trust.
Call to action
Canadian tech leaders must weave safety into product roadmaps, demand independent verification, and protect employees who raise legitimate concerns. Doing so will not only reduce risk but will also unlock growth by building consumer and enterprise confidence in robotic systems.
Frequently asked questions
What are the central allegations in the Figure Robotics lawsuit?
The complaint alleges an absence of formal safety systems and incident reporting, leadership dismissal of written safety requirements, cancellation of the e-stop certification program, impact tests producing dangerous force levels, untracked near misses, and retaliatory termination of the safety lead.
Who was the engineer who filed the suit and what was his role?
A senior safety engineer with more than two decades of experience in robotics and human-robot interaction. He was hired to lead product safety and reported directly to the CEO.
What does the lawsuit say about the robots’ physical risk?
An impact test allegedly recorded forces 20 times higher than ISO pain thresholds, and the head of safety estimated the robot could generate more than twice the force needed to fracture an adult human skull. One robot reportedly malfunctioned and punched a refrigerator, narrowly missing an employee.
What is an e-stop and why is it important?
An emergency stop is a last-resort control designed to bring a machine to a safe state immediately. Certification verifies that it can be trusted to prevent injury under expected operating conditions; the complaint alleges Figure halted that certification program.
How does this case apply to Canadian tech companies?
Although the suit was filed in the United States, provincial occupational health and safety statutes and federal and provincial whistleblower frameworks impose similar obligations on Canadian firms, and many Canadian companies share supply chains and partners with U.S. robotics firms.
Are humanoid robots inherently unsafe for home use?
Not inherently, but homes are unstructured environments where industrial mitigations like fenced-off cells do not apply. Safe home deployment requires reliable person detection, context-aware motion planning, fail-safe interaction defaults, and regulatory alignment.
What should investors in Canadian tech demand from robotics companies?
Measurable safety milestones, independent verification of safety claims, and documented risk acceptance before releasing funds tied to commercialization.
What immediate steps can Canadian tech teams take to reduce risk?
Appoint an independent senior safety officer, document safety requirements, institutionalize incident and near-miss tracking, design conservative motion envelopes, layer redundant safety mechanisms, and test under realistic and adversarial conditions.
How will this lawsuit affect public perception of robots?
A single high-profile incident can sour customer confidence across an entire sector. Transparent validation studies, clear labeling of limitations, and post-market surveillance help build and sustain trust.
Where can Canadian tech professionals learn more about safety standards?
Start with ISO technical specifications for collaborative robot safety such as ISO/TS 15066, provincial occupational health and safety guidance, and the industry consortia and academic programs working on robotics certification frameworks.