AI startup Mercor is facing a lawsuit over a data breach


Mercor, a startup that provides human feedback for AI training, is reportedly facing legal action after a data breach.

The $10 billion company, which has worked with clients including Meta, is facing at least seven class-action lawsuits filed in the wake of the breach, The Wall Street Journal (WSJ) reported Thursday (April 23).

The lawsuits allege that the hack exposed Mercor contractor information, including recordings of job interviews, facial biometric data and screenshots of workers’ computers. One suit alleges that Mercor collected applicant screening data, such as background checks, and shared it with partners in violation of federal regulations, the report added.

According to plaintiffs, the company’s practices include monitoring its contractors’ computers and sharing that data with clients, using recorded candidate interviews to train artificial intelligence models, and training client models on materials potentially owned by other companies.

“We strongly reject the speculative allegations in these lawsuits and look forward to presenting the facts at the appropriate time and place,” Mercor said in a statement to the Wall Street Journal.

“We take the privacy of our customers, contractors, employees, and those we interview seriously, and comply with all relevant laws and regulations,” the statement added, noting that the startup acted quickly to address the breach, which also affected several other companies.


“We are conducting a comprehensive investigation with leading forensic experts and reaching out directly to affected stakeholder groups as we reach findings,” the statement continued.

The WSJ report included a comment from a Meta spokesperson saying the company has temporarily suspended its work with Mercor and is investigating the breach.

PYMNTS wrote earlier this week about the “new consensus” forming around the “data problem” amid the race to deploy agentic AI.

“More autonomous AI systems will increase risks related to how data is created, managed, accessed and protected,” that report said. “Synthetic data needs clearer standards. Real-world data needs to be parsed more stringently. And the systems that connect it together need a stronger foundation of trust, security and control.”

Also this week, PYMNTS examined the changing cybersecurity landscape, arguing that although few high-profile incidents this year could be called “AI attacks,” the corresponding rise in AI-powered offensive capability is difficult to ignore.

For example, a preview of Anthropic’s Claude demonstrated the ability to detect and exploit weaknesses across major operating systems and web browsers, including legacy bugs in widely trusted systems.



