Online Safety or Surveillance? The Hidden Battle


Washington’s latest “kids online safety” push could quietly turn everyday internet access into an age-check system that forces Americans to prove who they are before they can speak, read, or download.

Quick Take

  • A House Energy and Commerce subcommittee advanced a package of roughly 18 bills targeting child online safety, including a revised Kids Online Safety Act (KOSA).
  • Several proposals center on age verification and app store responsibilities, raising privacy and free-speech concerns even among supporters of child protection.
  • Lawmakers removed KOSA’s “duty of care” language to address Republican concerns about broad liability and censorship-by-proxy.
  • Witnesses and analysts warn that poorly designed verification can expand data collection and create breach risks, even when the goal is limiting harm to kids.

What the House advanced—and why it matters

A House Energy and Commerce subcommittee advanced a broad slate of child online safety bills, with further committee action scheduled after recess. The package includes a revised version of KOSA, along with proposals touching app store age verification, limits on certain platform features, restrictions on data collection from minors, and, in some approaches, an outright ban on social media access for users under 16. Supporters argue tech companies have failed to self-police harmful content and addictive design.

Republicans leading the effort have framed the package as a long-overdue response to real harms: sexual exploitation, predatory contact, and manipulative engagement features aimed at kids. That argument resonates with parents who have watched Silicon Valley profit while families shoulder the damage. The catch is that "protect kids" legislation often rises or falls on implementation details: which entities must verify age, what data gets collected, and whether legal pressure effectively pushes platforms to restrict lawful speech for everyone.

KOSA revisions: removing “duty of care” doesn’t end the debate

The most politically significant shift in the updated KOSA is the removal of the earlier "duty of care" concept, which drew heavy criticism for creating a sweeping liability threat. That language mattered because broad, undefined "harm" standards can incentivize platforms to over-filter content to avoid lawsuits, sometimes sweeping up constitutionally protected speech. The revised approach points toward "reasonable policies" aimed at specific categories of harm, a narrower framing designed to reduce censorship concerns while still pressuring platforms to act.

Even with that change, the core tension remains: Congress wants more guardrails for children online, but the practical mechanisms can collide with First Amendment protections and parental authority. Courts have already scrutinized state experiments in age checks and access restrictions, and legal challenges are likely if federal rules are written too broadly. Participants in the debate point to recent legal context: the Supreme Court has accepted age verification in the narrow context of sexually explicit material, while other courts have warned against expansive harm assessments that chill speech.

Age verification: privacy risk, breach risk, and “verified knowledge” pressures

Age verification is the policy fulcrum because it determines whether child safety rules become an ID-check culture online. Tech witnesses in congressional discussions have urged approaches like age estimation rather than forcing people to upload sensitive identification, warning that centralized collections of IDs create irresistible targets for hackers. That concern is not theoretical; any system that stores, transmits, or repeatedly checks identity signals can create new vulnerabilities—especially if multiple apps and services rely on the same verification pipelines.

Policy tracking also highlights a deeper shift: lawmakers and regulators are increasingly interested in “verified knowledge” standards rather than the older model where companies avoid responsibility unless they have “actual knowledge” a user is a child. Proponents argue verified systems stop minors from simply clicking “I’m 18.” Critics respond that verification mandates can normalize surveillance-like infrastructure and expand data collection far beyond what many families want. The legislative package reflects this unresolved conflict between enforceability and privacy.

Who verifies—app stores or platforms—and what happens next

Another fault line is responsibility. Some proposals place verification obligations at the app store level, while platforms and developers argue over feasibility, costs, and liability. That divide matters because an app-store-centered model could effectively gate access to large portions of the internet through a small number of corporate chokepoints, while a platform-by-platform model could produce inconsistent standards and repeated checks. Either way, the compliance burden will reshape how apps are built, how accounts are created, and what data is requested upfront.

For conservatives, the standard should be clear: protect children without building an always-on identity dragnet that treats ordinary Americans like suspects. The record so far shows bipartisan momentum to “do something,” but also ongoing disagreement about what works, what parents actually prefer, and what constitutional lines cannot be crossed. The next committee steps will determine whether Congress narrows these bills into targeted, enforceable protections—or drifts toward sweeping rules that invite court defeats and expand government pressure over online speech.

Sources:

House to take up package of bills to protect kids online

Federal Law Developments

Congress’ Child Safety Bills Sound Good. Families Suggest They Won’t Work

House Subcommittee Advances 18 Child Online Safety Bills