On April 21, 2026, Reuters reported that Meta had begun installing new software on the computers of its U.S.-based employees. The system, called the Model Capability Initiative, records mouse movements, clicks, keystrokes, and occasional screen snapshots from work-related apps and websites. According to internal memos seen by Reuters, the goal is to train AI agents that can carry out office tasks more like humans do, including small but important actions such as choosing from dropdown menus or using keyboard shortcuts. Meta said the data would be used for model training, not for performance reviews. (m.investing.com)
This matters because modern AI needs more than words; it also needs examples of real behavior. Meta’s own April 8 announcement of Muse Spark showed how hard the company is pushing toward “agentic” AI: systems that can reason, coordinate multiple subagents, and act on the real world in practical ways. When a company wants AI to operate software on its own, ordinary desk work suddenly becomes valuable training data. In other words, an employee is no longer only doing a job; that employee may also be teaching the machine that could later share, reshape, or reduce that job. (about.fb.com)
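To make concrete what “behavioral training data” might look like, here is a minimal, entirely hypothetical sketch in Python of the kind of interaction-event record an agent-training pipeline could consume. The field names, structure, and example values are illustrative assumptions only, not a description of Meta’s actual system.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class InteractionEvent:
    """One hypothetical UI interaction, framed as a training example.

    All fields are illustrative assumptions, not Meta's real schema.
    """
    timestamp: float  # when the action occurred (Unix seconds)
    app: str          # application in focus, e.g. a spreadsheet
    action: str       # "click", "keypress", "select_option", ...
    target: str       # UI element acted on, e.g. a dropdown's label
    value: str = ""   # payload, e.g. the option chosen or keys pressed

# A tiny trace of the "small but important" actions the article
# mentions: picking from a dropdown, then using a keyboard shortcut.
trace = [
    InteractionEvent(time.time(), "spreadsheet", "select_option",
                     target="status_dropdown", value="Approved"),
    InteractionEvent(time.time(), "spreadsheet", "keypress",
                     target="sheet", value="Ctrl+S"),
]

# Serialized, each event becomes one line of behavioral training data.
for event in trace:
    print(json.dumps(asdict(event)))
```

Even this toy schema shows why such data is sensitive: each record ties a timestamp, an application, and a concrete action to an individual worker’s routine.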
For human resources teams, the biggest issue is trust. In New York, employers who electronically monitor workers’ phone, email, or internet use must give prior written notice and post that notice where employees can see it. In Europe, the legal climate is often stricter. Eurofound notes that systematic, detailed tracking of workers raises serious privacy and data-protection concerns. Italy’s labor ministry likewise holds that when work tools such as computers are fitted with monitoring software, additional legal conditions apply and workers must be clearly informed; data collected without proper notice cannot be used. (nysenate.gov)
That is why Meta’s move feels like a new HR challenge, not just a new tech story. The uncomfortable question is simple: if every click helps build a smarter AI, who really owns that knowledge—the worker who produced it, or the company that captured it? As AI becomes a colleague, manager, and possible replacement all at once, clear rules about consent, transparency, and data limits may become just as important as the technology itself. (m.investing.com)