The objection is in https://lists.w3.org/Archives/Public/public-review-comments/2024Sep/0013.html. The whole thing is quoted here, with a task list of PRs that address parts of the objection. Not all of the objection was solvable (which is noted inline), and it's possible not all of these PRs will be accepted.
We appreciate W3C's initiative in creating the Privacy Principles
document aimed at securing user privacy on the web. However, we believe
several key aspects of this document require revision to avoid
unintended consequences. While the goal of enhancing user privacy is
critical, the current draft contains overly broad language, fails to
account for the practical realities of content monetization, undermines
user choice, and overreaches into regulatory territory. These issues
could stifle innovation, weaken the open web, and unfairly impact
sectors like advertising, which not only funds the majority of free
content but also delivers value to users by making web experiences more
relevant and accessible. A careful balance between privacy protections
and data availability is essential to ensure that users continue to
benefit from personalized and valuable content while their privacy is
respected.
Overly Broad Language and Ambiguity
The Privacy Principles document is excessively long and complex, making
it difficult to understand, implement, and enforce.
Principles should be
concise and straightforward to avoid contradictions and loopholes. When
guidelines are too detailed, they become prone to misinterpretation,
which can lead to inconsistent application and enforcement. Clear, short
principles ensure that they are widely understood and easily adopted by
all stakeholders, fostering more effective compliance across the web
ecosystem.
One of the most pressing concerns is the document’s use of overly broad
language. Terms like “true intent” and “true preferences” (section
2.12), along with “enforcement mechanisms” (section 2), are not well
defined, leading to potential misinterpretation.
Avoid "true intent" term (#432). We had discussed a change to use "... and not to bias the choice.", but the task force only agreed to removing the word "true". The "true preferences" sentence doesn't need to be reworded because it's not in the principle itself.
Such vagueness allows for subjective
implementation, creating loopholes that could hinder the adoption of
legitimate standards based on personal or ideological interpretations.
For instance, the recommendation that users should have the ability to
“turn off” data processing whenever they wish (section 1.1.1) is
admirable in theory but ignores real-world service dynamics. Many web
services rely on the responsible processing of user data, often with
user consent, to provide value, especially free content. Removing this
consent could limit the ability of websites to offer their services,
thereby affecting content availability and quality.
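For concreteness, a signal of this kind already exists in the form of Global Privacy Control (GPC), which participating browsers send as a `Sec-GPC: 1` request header. The sketch below, assuming an Express-style Node server and an illustrative renderArticle placeholder, shows how a site might honour such a signal while still serving content:

```typescript
// Sketch: honouring a global opt-out signal such as Global Privacy Control.
// Assumes an Express server; renderArticle is an illustrative placeholder.
import express from "express";

const app = express();

app.get("/article/:id", (req, res) => {
  // Browsers with GPC enabled send the `Sec-GPC: 1` request header.
  const optedOut = req.header("Sec-GPC") === "1";

  // Serve the same content either way; only consent-based processing
  // (personalization, ad targeting, etc.) is switched off.
  res.send(renderArticle(req.params.id, { personalized: !optedOut }));
});

// Illustrative placeholder; a real site would fetch and template content.
function renderArticle(id: string, opts: { personalized: boolean }): string {
  return `<h1>Article ${id}</h1><p>personalized: ${opts.personalized}</p>`;
}

app.listen(3000);
```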
In section 1.1, the document oversteps by making conclusions about which
systems enhance or diminish user autonomy. It introduces unrealistic
recommendations, such as building systems that assume “unlimited time
and intellectual ability,” which are impractical and offer no actionable
guidance. Moreover, "autonomy" is not a measurable concept, making it a
weak foundation for defining deceptive patterns, which are legitimate
concerns. A clearer, more concrete definition is necessary.
This turned out to be a misreading of the paragraph. While it would be nice to rewrite the paragraph in a more readable way, it's not necessary to resolve the formal objection.
The document’s treatment of vulnerability (section 1.2) broadly
classifies nearly all users as “vulnerable,” diluting the focus on those
who genuinely require stronger protections. This blanket approach
weakens the effectiveness of protections for the groups that need them most.
Explain an implication of the vulnerability section (#434): add a statement about what implication this should have for system designers, e.g. that they should consider how their systems can make people vulnerable, or that this isn't expected to apply to every system.
Finally, sections 2.11 and 2.2.1.1 address UI design and transparency,
which fall outside the technical scope of W3C’s mandate. While
transparency is important, the design of user interfaces should not be
dictated by a principles document, especially given the wide variety of
platforms and contexts in which UIs are developed. This focus risks
encroaching on areas that are best left to developers or user experience
experts, rather than a web standards body.
Impact on Content Monetization
The document fails to account for the reality that advertising, which
involves some form of user data processing, sustains the majority of free web
content. By proposing mechanisms like global opt-out, it jeopardizes the
very model that enables users to access content without direct payment.
The document doesn’t seem to fully acknowledge that publishers and
content providers rely on certain data processing practices to fund
their services. Not all data processing is inherently harmful,
especially when users consent to it in exchange for access to free services.
Further, section 2.1 on “Identity on the Web” introduces problematic
constraints for publishers. The prevention of same-site recognition
techniques, such as the common “read three articles for free, then
subscribe for more” strategy, directly inhibits a publisher’s ability to
design their own monetization models. By preventing such practices, this
principle stifles flexibility and innovation in how publishers generate
revenue and undermines the sustainability of the free content model,
which benefits users.
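To make the practice concrete: such a meter needs only first-party storage, not cross-site tracking. A minimal sketch, assuming a browser context and a hypothetical showPaywall helper:

```typescript
// Sketch: a first-party "three free articles" meter using localStorage.
// The storage key, limit, and showPaywall helper are illustrative.
const METER_KEY = "articlesRead";
const FREE_ARTICLE_LIMIT = 3;

declare function showPaywall(): void; // hypothetical subscription prompt

function recordArticleView(): number {
  const count = Number(localStorage.getItem(METER_KEY) ?? "0") + 1;
  localStorage.setItem(METER_KEY, String(count));
  return count;
}

// Call once per article page view.
function maybeShowPaywall(): void {
  if (recordArticleView() > FREE_ARTICLE_LIMIT) {
    showPaywall();
  }
}
```

Nothing here involves cross-site tracking; the state lives entirely in the publisher's own origin.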
In section 2.2.1, the document describes “billing advertisers” as
ancillary to the user’s primary goal of accessing a site, which is
misleading. In reality, the financial ecosystem of the web requires
advertisers to fund the content that users consume. Disregarding this
connection risks eroding the ad-supported internet model, leaving small
publishers and content creators without sustainable means to continue
providing content.
It’s vital to remember that data processing isn’t only about tracking
users for advertising purposes. Some low-risk data, such as broad
location for personalized information or language for accessibility
settings, are essential to providing services efficiently and securely.
Privacy constraints should be context-dependent and account for the
diverse goals and needs of various stakeholders in the web ecosystem,
rather than focusing solely on user-centric concerns.
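Language preference is a good example: it can be read from signals the browser already exposes, without identifying the user. A minimal sketch, assuming a browser context (the supported-locale list and fallback are illustrative choices):

```typescript
// Sketch: localizing content from a low-risk signal the browser exposes.
// The supported-locale list and fallback are illustrative choices.
function preferredLocale(fallback = "en"): string {
  // navigator.languages lists the user's preferred languages in order.
  return navigator.languages?.[0] ?? navigator.language ?? fallback;
}

const supported = ["en", "fr", "de"];
const locale = preferredLocale();
// Pick the closest supported translation, falling back to English.
const chosen = supported.find((l) => locale.startsWith(l)) ?? "en";
```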
User-Agent Neutrality and Power Imbalance
The document does not adequately address the potential conflicts of
interest that exist among user agents, such as browsers. Many of these
agents are developed by companies with vested commercial interests,
including stakes in web advertising. By endorsing global opt-out
mechanisms and stricter privacy measures, the document may inadvertently
grant too much power to browser vendors, who can influence the standards
to benefit their own interests.
For instance, section 1.4 of the document discusses how minimal user
data could still be classified as personal data. This overextends the
definition of personal data and gives user agents, who control data
flows, excessive authority over privacy settings. This dynamic raises
concerns about potential oligopolistic behaviour, where browser vendors
enforce their vision of privacy at the expense of users, advertisers,
and content creators.
Section 1.4 on User Agents handles this.
In section 2.10, the requirement that APIs be “designed for purpose”
significantly restricts the flexibility of API users, limiting their
ability to innovate and adapt APIs for various needs. This shift further
concentrates power in the hands of user agents, particularly browser
vendors, who will have the authority to dictate how APIs are utilized.
This concentration of control risks stifling innovation and harming the
broader web ecosystem.
User agents are often presented as neutral actors, but the reality is
more complex. They are not merely intermediaries but key players in
shaping the web’s economic and technological future. The Privacy
Principles document should consider this conflict of interest and
advocate for more balanced governance between different web stakeholders.
Privacy and Consent
The document’s stance on consent mechanisms is impractical and limits
the user’s ability to make informed decisions. It proposes a global
opt-out mechanism, which contradicts the context-dependent nature of
user consent. In practice, users often consent to data processing for
trusted sites and services in exchange for value, whether through access
to content, personalization, or other benefits.
Global opt-out (section 1.1.1) undermines this flexibility, taking a
blanket approach to privacy that strips users of the ability to make
nuanced, informed decisions. Furthermore, consent should not be framed
solely as a barrier to data processing. When users give informed
consent, especially to trusted services, they receive more relevant
content and personalized services, which enhance their overall
experience.
A rigid global opt-out system disregards this value
exchange, preventing users from accessing the benefits of tailored
content and diminishing the positive role advertising can play in
delivering meaningful value to web users. As privacy principles should
vary by context, users need to be empowered to consent to data
processing on a case-by-case basis. This preserves autonomy while
ensuring that services reliant on advertising and other data-driven
models can continue to function.
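Such a case-by-case model is straightforward to express. The sketch below records consent per origin and per purpose; the ConsentRecord shape and the in-memory store are illustrative assumptions, not a proposed standard:

```typescript
// Sketch: per-context consent, recorded by origin and purpose.
// The ConsentRecord shape and the in-memory store are illustrative.
type Purpose = "analytics" | "advertising" | "personalization";

interface ConsentRecord {
  origin: string; // e.g. "https://news.example"
  purpose: Purpose;
  granted: boolean;
  grantedAt: Date;
}

const consentStore = new Map<string, ConsentRecord>();

function keyFor(origin: string, purpose: Purpose): string {
  return `${origin}::${purpose}`;
}

function setConsent(origin: string, purpose: Purpose, granted: boolean): void {
  consentStore.set(keyFor(origin, purpose), {
    origin,
    purpose,
    granted,
    grantedAt: new Date(),
  });
}

function hasConsent(origin: string, purpose: Purpose): boolean {
  // Default to no consent: processing happens only after an explicit grant.
  return consentStore.get(keyFor(origin, purpose))?.granted ?? false;
}

// Usage: a user can allow personalization on a trusted site while the
// default continues to deny every other purpose and origin.
setConsent("https://news.example", "personalization", true);
hasConsent("https://news.example", "advertising"); // false by default
```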
The assumption (section 1.1.2) that “transparency and choice” inherently
signal inappropriate data processing is misleading. Transparency and
user choice are essential components of ethical data processing and
should not be framed as indicators of misuse by themselves. Instead,
these elements empower users to make informed decisions about their data.
Moreover, the document’s reference to “true intent” and “true
preferences” (section 2.12) is vague and not actionable. Without clear
definitions, these terms create a compliance challenge for developers
and web services. Consent is a dynamic, evolving process, and users
should be able to give it permanently in trusted contexts. A more
balanced approach is required to account for the diversity of user
intent and context.
Additionally, section 2.12 raises concerns regarding the practicality of
requiring actors to allow individuals to access, correct, or remove data
about themselves when that data has been provided by someone else. The
example provided, a shared photograph, feels far removed from typical
web platform design.
Overreach into Legal and Regulatory Domains
While privacy is an important area for standardization, sections like
“How This Document Fits In” and “Audiences for this Document” suggest
that the W3C is attempting to influence legal regimes. This is beyond
the scope of W3C’s mandate, which should remain focused on technical
standards. The role of policy-making is distinct from technical
governance, and this document blurs those lines.
Removing the "policy makers and" line would resolve the objection, but we don't have consensus to do that.
Section 2.6 on de-identified data introduces technical solutions that
extend far beyond the web platform’s scope. Principles documents should
not define specific technical approaches, as this risks overstepping the
W3C’s role and venturing into areas better addressed by laws or specific
technical standards bodies. A more focused approach would be to provide
general guidance, leaving technical implementations to other relevant
frameworks.
By making broad recommendations that veer into regulatory advocacy, the
Privacy Principles document may cause confusion between what is a
technical standard and what should be left to lawmakers. For example, in
section 2.7 on collective privacy, the document discusses collective
decision-making bodies, which is outside the remit of a technical
standard-setting organization like the W3C. The W3C should focus on
providing technical guidance that complements existing privacy laws
rather than attempting to shape policy itself.
This piece of the objection was withdrawn.
Additionally, Section 2.8 mandates that user agents must inform users of
any ongoing surveillance, which contradicts legal frameworks in many
countries. In many jurisdictions, surveillance can be authorized by
judicial bodies without notifying the subject, particularly in national
security or criminal investigations. By failing to account for these
legal and societal imperatives, the document positions privacy in
opposition to established laws and undermines its own credibility.
The principle here focused on surveillance by administrators rather than by governments, so this piece of the objection was withdrawn.
Lack of Clarity on Sensitive Information
The document fails to adequately define what constitutes “sensitive
information” (section 2.4). Without a clear categorization or framework
for identifying sensitive data, this section offers little practical
guidance. Some of the examples provided, such as language preferences or
approximate location, are essential for delivering relevant content and
ensuring a smooth user experience. Treating these as inherently
sensitive without proper context could lead to unnecessary restrictions
that degrade the quality of web services.
It’s crucial for the document to distinguish between different levels of
sensitivity and acknowledge that some data is necessary for providing
seamless, secure, and user-friendly experiences on the web.
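One way to draw that distinction is to tier data types rather than treat sensitivity as a binary label. The tiers and example assignments below are illustrative assumptions, not taken from the document:

```typescript
// Sketch: tiered sensitivity instead of a binary "sensitive" label.
// The tiers and example assignments are illustrative assumptions.
enum Sensitivity {
  Low, // e.g. language preference, coarse region
  Medium, // e.g. precise location, browsing history
  High, // e.g. health, financial, or biometric data
}

const exampleTiers: Record<string, Sensitivity> = {
  languagePreference: Sensitivity.Low,
  approximateLocation: Sensitivity.Low,
  preciseGeolocation: Sensitivity.Medium,
  healthRecords: Sensitivity.High,
};

// A context-aware check: stronger protections apply as sensitivity rises,
// and unknown data types conservatively default to the highest tier.
function requiresExplicitConsent(dataType: string): boolean {
  return (exampleTiers[dataType] ?? Sensitivity.High) >= Sensitivity.Medium;
}
```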
Conclusion
While the Privacy Principles document outlines important goals for
enhancing user privacy on the web, it is overly broad, risks undermining
the web’s economic foundation, and fails to account for key stakeholders
beyond the end user. Furthermore, it veers into regulatory areas that
are beyond W3C’s mandate. To avoid unintended consequences, the document
must be revised to balance privacy protections with the needs of content
creators, publishers, advertisers, and user agents.
We urge the W3C to reconsider the implications of these principles on
the broader web ecosystem and to engage in a more inclusive dialogue
that respects the complexity of the modern web while ensuring user privacy.
The "true preferences" sentence doesn't need to be reworded because it's not in the principle itself.
This turned out to be a misreading of the paragraph. While it would be nice to rewrite the paragraph in a more readable way, it's not necessary to resolve the formal objection.
Section 1.4 on User Agents handles this.
Removing the "policy makers and" line would resolve the objection, but we don't have consensus to do that.
This piece of the objection was withdrawn.
The principle here focused on surveillance by administrators rather than by governments, so this piece of the objection was withdrawn.
The text was updated successfully, but these errors were encountered: