Mozilla has raised concerns over Microsoft’s decision to deploy its AI assistant, Copilot, on Windows systems without seeking user approval. The organization claims such actions prioritize financial gains over the rights of users.
Mozilla’s Criticism of Automatic Installations
In a detailed blog post, Mozilla criticized Microsoft’s use of automatic installations, default hardware settings, and potentially misleading user interface designs to promote Copilot across Windows systems. The primary concern is that the M365 Copilot app installs automatically, without user consent, on any Windows device running Microsoft 365 desktop applications.
Furthermore, Microsoft added a dedicated Copilot key to Copilot+ PC keyboards while making it difficult for users to remap the key to other functions. Copilot was also pinned to the Windows 11 taskbar by default, with plans to integrate it into other core elements of the operating system.
Backlash and Microsoft’s Response
These strategies led to significant user dissatisfaction, prompting Microsoft to announce in March 2026 that it would reduce Copilot’s presence in applications like Photos, Notepad, and Widgets. Mozilla interprets this move as an acknowledgment of Microsoft’s prioritization of business interests over user preferences.
Mozilla’s critique extends beyond Copilot, highlighting a recurring pattern of Microsoft using deceptive design practices to influence user decisions, such as complicating the process of changing default browsers and redirecting users to Microsoft Edge despite their preferences.
Regulatory and User-Centric Alternatives
While Microsoft excluded the European Economic Area from automatic Copilot installations, likely in response to regulatory pressure, Mozilla has taken a user-centered approach with its own AI features. The Firefox browser includes an AI Controls panel that lets users disable AI features collectively or individually, with settings that persist across updates.
Mozilla’s approach contrasts sharply with Microsoft’s, offering users control over AI functionalities. This stance highlights the broader debate over user consent and privacy in AI deployments.
Microsoft’s partial rollback of Copilot underscores growing concern in privacy and cybersecurity circles about platform vendors bypassing user consent. As AI tools increasingly interact with sensitive data, unchecked default deployments pose significant security risks for enterprises.
Mozilla’s public challenge suggests that the debate over user consent in AI technologies is ongoing, with pressures from users and competing platforms playing a vital role in guiding the ethical use of AI by major tech companies.
