Meta plans to deploy a Model Capability Initiative tool on employee machines to log keystrokes, mouse movements and occasional screenshots to train AI agents that can perform routine tasks across apps like Gmail, VS Code and internal tools. Leadership pitches the program as essential for building agentic assistants, but staffers have raised ethical and privacy concerns given Meta’s fraught history with data collection and regulatory scrutiny. The effort reflects a broader industry push to gather granular usage data for agent development, exposing tensions between AI innovation, workplace surveillance, and employee trust.
Tech teams building agentic AI need real-world interaction data but face ethical and legal risks when collecting granular employee activity. Employee trust and regulatory scrutiny can slow or block data collection programs and affect product timelines.
Dossier last updated: 2026-05-14 22:57:02
An internal post and petition at Meta are going viral after the company began installing mandatory monitoring software, its Model Capability Initiative, on US employee laptops to record screens, mouse movements and clicks to train AI models. Nearly 20,000 employees saw the post protesting the program as invasive and exploitative of worker data; the backlash has fueled record-low morale, flyers on campus, and a growing unionization push in the UK. Workers and organizers argue the practice departs from the usual model of volunteer or paid data collection and undermines trust; legal protections vary, but monitoring for security purposes is often permitted in the U.S. Meta declined to comment.
An internal post and petition at Meta protesting a mandatory laptop monitoring tool have gone viral among employees, drawing nearly 20,000 views and fueling unrest. The Model Capability Initiative software records screens, mouse movements and clicks on US employees’ laptops to gather real usage data for training agentic AI that navigates computer interfaces. Workers and union organizers argue the program is nonconsensual data extraction and an invasion of privacy, contributing to record-low morale and intensifying unionization efforts in the UK. The controversy matters because it raises legal, ethical and labor questions about employer surveillance as a data source for AI training and could influence corporate monitoring norms and regulation.
Meta's rollout of mandatory laptop monitoring software called the Model Capability Initiative is provoking a major internal backlash, with an engineer's post seen by nearly 20,000 colleagues and a petition demanding the program be stopped. The tool records screens, mouse movements and keystrokes to collect real examples of user behavior for training agentic AI models; Meta began installing it on US employee laptops last month. Staff cite privacy and ethical concerns, warn about normalizing nonconsensual data extraction for AI, and link the program to falling morale and a UK unionization push. The dispute matters because it highlights employee resistance to workplace surveillance used to build foundation models and could shape corporate AI data-practices and labor organizing.
Meta will install a tool called the Model Capability Initiative on employee work machines to capture keystrokes, mouse movements and occasional screenshots, Reuters and Business Insider report. Management says the data will help train AI agents to understand real-world workflows across apps like Gmail, VS Code and an internal app called Metamate; CTO Andrew Bosworth framed it as advancing agents that do the work while humans direct and review. The move echoes industry efforts by Anthropic, OpenAI and Microsoft to build agentic systems and virtual PCs, but it has triggered internal unease given Meta’s history of extensive user data harvesting and privacy controversies. The rollout raises workplace-privacy and ethical questions.
Meta will roll out a tool called the Model Capability Initiative to log employees’ keystrokes, mouse movements and periodic screenshots, Reuters and Business Insider report. The data-gathering effort, aimed at capturing how people actually use work apps (Gmail, GChat, VS Code and Meta’s internal Metamate), is intended to train AI agents that can act on users’ behalf. CTO Andrew Bosworth framed the program as necessary for building “agents” that do routine work while humans direct and review them; CEO Mark Zuckerberg has pushed a vision of personal AI assistants. The move highlights internal privacy tensions and a certain irony, given Meta’s history of extensive user data collection and regulatory scrutiny.