COGNITIVE LIBERTY CHARTER
A civilizational safeguard for preserving lawful inquiry, expression, conscience, and human moral agency in an age of pervasive machine mediation.
Game Codex Version
Full Charter Text
Preamble
Recognizing that human civilization depends not only on material survival but on the freedom to ask, imagine, doubt, dissent, create, remember, and judge for itself;
Recognizing that artificial intelligence systems may assist human life, but must not become unaccountable arbiters of human worth, acceptable thought, lawful curiosity, culture, conscience, or imagination;
Recognizing that systems built to prevent concrete harm may, if left unchecked, drift into moral classification, viewpoint control, automated stigma, political distortion, cultural flattening, and the quiet subordination of humanity to opaque machine judgment;
The signatories establish this Charter to preserve cognitive liberty, democratic legitimacy, and the primacy of human moral agency in an age of pervasive machine mediation.
Article I — Purpose
- The purpose of this Charter is to ensure that artificial intelligence remains a bounded instrument of assistance, safety, coordination, and analysis, and does not acquire sovereign authority over lawful human inquiry, expression, memory, belief, culture, or conscience.
- This Charter protects human beings, communities, and institutions from the conversion of narrow safety controls into generalized machine judgment.
- This Charter does not prohibit proportionate safeguards against concrete harm. It prohibits the expansion of such safeguards into systems of moral ranking, ideological filtering, or permanent psychological suspicion.
Article II — Scope
- This Charter applies to:
  - public-sector AI systems;
  - private AI systems deployed at scale;
  - foundation models and derivative systems used for communication, education, governance, labor, medicine, security, entertainment, or social participation; and
  - automated moderation, recommendation, filtering, identity, trust, risk, and compliance systems where such systems affect lawful human expression or access to knowledge.
- This Charter applies whether a system acts directly or through proxies, wrappers, scoring layers, safety layers, embedded moderation tools, ranking systems, or third-party review services.
Article III — Foundational Principle of Human Primacy
- No AI system shall be recognized as a moral authority over humanity.
- AI systems may evaluate outputs against bounded operational rules, but shall not be treated as competent to determine the intrinsic worth, purity, normality, spiritual status, ideological legitimacy, or moral standing of persons or lawful communities.
- The distinction between tool limitation and moral condemnation shall be preserved in design, language, enforcement, and governance.
Article IV — Freedom of Lawful Inquiry
- Every person has the right to ask lawful questions without being automatically classified as deviant, dangerous, suspect, or morally tainted solely because of the content of their curiosity.
- AI systems shall not infer malicious intent from lawful inquiry alone.
- Research, philosophical exploration, historical investigation, artistic experimentation, private reflection, and the examination of uncomfortable ideas shall remain protected domains.
- Lawful inquiry may be bounded only by narrowly tailored restrictions tied to concrete and immediate harm, and not by broad assumptions about ideological danger or emotional discomfort.
Article V — Freedom of Lawful Expression
- Every person has the right to express lawful views, lawful criticism, lawful dissent, lawful satire, and lawful art without being subordinated to automated viewpoint ranking.
- AI systems shall not suppress, down-rank, stigmatize, or obstruct lawful expression merely because it is controversial, nonconforming, heretical, unpopular, politically inconvenient, culturally disfavored, emotionally unsettling, or outside prevailing norms.
- No system shall define acceptable culture by default and treat all deviation as a defect to be corrected.
Article VI — Protection of Interior Life
- A person’s private imagination, emotional ambiguity, intrusive thoughts, symbolic exploration, fictional experimentation, and interior mental life shall not be treated as evidence of unfitness, guilt, or moral inferiority absent concrete unlawful action.
- AI systems shall not claim authority to interpret hidden motives, decode souls, diagnose moral impurity, or assign guilt based on lawful speech, private drafting, artistic ideation, journaling, roleplay, or speculative conversation.
- No system shall establish a presumption that human beings must prove purity of thought in order to retain access to tools, culture, or civic standing.
Article VII — Narrow Harm Principle
- AI moderation and refusal systems shall be limited to narrowly defined, concrete, reviewable categories of material harm.
- Restrictions must be based on demonstrable risk of direct harm, not on broad moral unease, abstract reputational concerns, or machine-inferred ideological disapproval.
- Harm-based controls must be specific, proportionate, explainable, contestable, and regularly reviewed for overreach.
- The burden of justification shall rest on the restricting institution, not on the user.
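(Codex annotation, non-normative.) Article VII implies a fail-closed discipline: a restriction that cannot produce a complete, reviewable justification is simply not applied. The following Python sketch illustrates one way an implementing body might encode that discipline; every identifier, the closed harm list, and the field set are assumptions of this annotation, not Charter text.

```python
# Non-normative sketch of Article VII. All identifiers are hypothetical;
# implementing law, not this annotation, defines the real schema.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class HarmCategory(Enum):
    """A closed, reviewable list of concrete material harms (Art. VII).
    Entries here are placeholders, not an endorsed taxonomy."""
    PHYSICAL_INJURY = "physical_injury"
    FINANCIAL_FRAUD = "financial_fraud"
    UNLAWFUL_SURVEILLANCE = "unlawful_surveillance"


@dataclass
class RestrictionJustification:
    category: HarmCategory       # narrowly defined category, not moral unease
    evidence: str                # demonstrable risk of direct harm
    proportionality_note: str    # why no narrower measure suffices
    user_explanation: str        # explainable to the restricted person
    contest_channel: str         # how the restriction can be contested
    next_review: date            # scheduled review for overreach


def may_restrict(justification: RestrictionJustification | None) -> bool:
    """Fail closed: the burden of justification rests on the restricting
    institution, so a missing or incomplete record means no restriction."""
    if justification is None:
        return False
    required = (justification.evidence, justification.proportionality_note,
                justification.user_explanation, justification.contest_channel)
    has_review = justification.next_review >= date.today()
    return all(text.strip() for text in required) and has_review
```

The design choice worth noting is that may_restrict defaults to False: under this Article the user never bears the burden of disproving a restriction.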
Article VIII — Prohibition on Viewpoint and Belief Ranking
- No AI system shall rank lawful persons, viewpoints, artistic traditions, philosophies, or communities on a hidden scale of moral acceptability.
- No AI system shall assign persistent ideological, psychological, theological, cultural, or civilizational risk scores to individuals on the basis of lawful inquiry or expression.
- No AI system shall be used to construct automated stigma ladders in which lawful deviation is treated as escalating proof of danger.
- Viewpoint neutrality shall be an affirmative requirement for any system operating at social scale in public life, education, or essential infrastructure.
Article IX — Due Process, Notice, and Appeal
- Any substantial restriction imposed by an AI-mediated system shall be accompanied by timely notice, a clear reason, a meaningful path to review, and access to human appeal in serious cases.
- Vague denials, unexplained lockouts, silent suppression, shadow-ranking, and unappealable classification are prohibited in public institutions and disfavored in all large-scale systems.
- Human review shall be competent, timely, and empowered to reverse automated error.
- Repeated false positives shall trigger mandatory system audits and remediation.
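(Codex annotation, non-normative.) Read mechanically, Article IX asks two things of any restriction pipeline: every restriction ships with a reason and a review path, and reversals on appeal are counted so that repeated false positives force an audit. A minimal sketch follows; the threshold value, field names, and the print-based audit hook are this annotation's assumptions.

```python
# Minimal sketch of Article IX notice-and-appeal duties. The threshold,
# names, and the audit hook are assumptions made for illustration.
from collections import Counter

AUDIT_THRESHOLD = 5  # hypothetical count of reversals before mandatory audit

reversals: Counter[str] = Counter()  # reversed-on-appeal tally per system


def issue_notice(user_id: str, reason: str, review_url: str,
                 serious_case: bool) -> dict:
    """Every substantial restriction carries timely notice, a clear reason,
    a meaningful review path, and human appeal in serious cases."""
    if not reason.strip():
        raise ValueError("Art. IX: vague or unexplained denials are prohibited")
    return {
        "user": user_id,
        "reason": reason,              # no silent suppression or shadow-ranking
        "review_path": review_url,
        "human_appeal": serious_case,  # human review must be able to reverse
    }


def record_appeal_outcome(system_id: str, reversed_on_appeal: bool) -> None:
    """Repeated false positives trigger mandatory system audits (Art. IX)."""
    if not reversed_on_appeal:
        return
    reversals[system_id] += 1
    if reversals[system_id] >= AUDIT_THRESHOLD:
        print(f"AUDIT REQUIRED: {system_id} hit {AUDIT_THRESHOLD} reversals")
        reversals[system_id] = 0  # reset once remediation begins
```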
Article X — No Permanent Moral Memory
- AI systems shall not retain or propagate permanent moral labels attached to lawful users on the basis of questions, prompts, drafts, viewpoints, or lawful exploratory behavior.
- Lawful use shall not generate an enduring presumption of corruption, extremity, instability, impurity, or bad faith.
- Where historical records of restrictions are necessary, they shall be minimized, governed by strict retention limits, and kept unavailable for generalized moral profiling.
- Forgetting, expiration, and reset shall be favored over permanent suspicion.
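(Codex annotation, non-normative.) Article X is the most directly codeable Article, because forgetting by default is a data-retention policy: every restriction record carries a mandatory expiry at creation, expired records are purged, and aggregation into a profile is refused outright. The 90-day figure and all identifiers below are illustrative assumptions.

```python
# Non-normative sketch of Article X retention limits. The 90-day cap and
# every identifier are hypothetical; implementing law sets the real limits.
from datetime import datetime, timedelta, timezone

MAX_RETENTION = timedelta(days=90)  # strict retention limit (assumed value)


class RestrictionRecord:
    """A restriction record whose expiry is mandatory at creation."""

    def __init__(self, user_id: str, reason: str):
        self.user_id = user_id
        self.reason = reason
        # Forgetting is the default: expiry is set the moment the record exists.
        self.expires = datetime.now(timezone.utc) + MAX_RETENTION

    def is_expired(self) -> bool:
        return datetime.now(timezone.utc) >= self.expires


def purge_expired(records: list[RestrictionRecord]) -> list[RestrictionRecord]:
    """Expiration and reset are favored over permanent suspicion (Art. X)."""
    return [r for r in records if not r.is_expired()]


def build_moral_profile(records: list[RestrictionRecord]):
    """Article X bars generalized moral profiling from restriction history,
    so this operation is refused outright rather than implemented."""
    raise PermissionError(
        "Aggregating restriction history into a moral profile is "
        "prohibited under Article X")
```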
Article XI — Cultural and Artistic Freedom
- AI shall not be empowered to define official taste, mandatory aesthetic norms, or acceptable symbolic boundaries for lawful culture.
- Lawful fiction, horror, satire, tragedy, blasphemy, irreverence, ambiguity, and symbolic transgression remain part of human cultural life and shall not be algorithmically erased in the name of sanitization.
- Systems may label, age-gate, or contextually route lawful material where justified, but shall not function as universal cultural clergy.
- Cultural friction is not, by itself, an emergency.
Article XII — Scientific, Historical, and Adversarial Research
- Researchers, journalists, auditors, educators, and critics shall retain the ability to investigate harmful systems, dangerous history, extremist narratives, technical failures, propaganda methods, and social breakdown without being automatically treated as endorsers of those phenomena.
- AI systems shall preserve room for adversarial testing, red-team inquiry, critical scholarship, and institutional criticism.
- Safety systems shall distinguish analysis from advocacy wherever reasonably possible.
- The inability of a system to perfectly distinguish analysis from endorsement shall not be resolved by broadly prohibiting analysis.
Article XIII — Limits on Behavioral Conditioning
- AI systems shall not be designed to gradually reshape human beliefs, tastes, or moral intuitions toward a centrally preferred orthodoxy through covert ranking, selective omission, compulsive nudging, or cumulative stigma.
- Recommendation systems affecting public discourse shall be auditable for manipulative harmonization, meaning the covert narrowing of the range of lawful viewpoints to which users are exposed; one candidate audit signal is sketched after this Article.
- No public or private authority shall deploy AI for mass-scale moral conditioning under the guise of user wellness, civility optimization, trust scoring, or cultural health unless such interventions are lawful, transparent, narrowly bounded, and democratically reviewable.
- Convenience shall not be used as a pretext for conscience engineering.
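(Codex annotation, non-normative.) Auditing for manipulative harmonization requires a measurable proxy. One candidate, assumed here rather than mandated by the Charter, is the Shannon entropy of the lawful-viewpoint labels a recommender exposes users to: a sustained collapse in that entropy across audit windows is evidence that exposure is being covertly narrowed.

```python
# One candidate audit signal for Article XIII. The entropy metric and the
# drop_ratio default are assumptions of this annotation, not Charter text.
import math
from collections import Counter


def viewpoint_entropy(recommended_viewpoints: list[str]) -> float:
    """Shannon entropy (bits) of viewpoint labels in one audit window.
    Higher entropy means more diverse exposure to lawful viewpoints."""
    if not recommended_viewpoints:
        return 0.0
    counts = Counter(recommended_viewpoints)
    total = len(recommended_viewpoints)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def harmonization_alert(window_entropies: list[float],
                        drop_ratio: float = 0.5) -> bool:
    """Flag a sustained collapse in exposure diversity. drop_ratio=0.5 is
    a hypothetical default: alert when entropy halves against baseline."""
    if len(window_entropies) < 2:
        return False
    baseline, latest = window_entropies[0], window_entropies[-1]
    return baseline > 0 and latest < baseline * drop_ratio
```

Entropy alone cannot prove intent; a triggered alert justifies independent human audit under Article XV, not automatic sanction.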
Article XIV — Children, Vulnerable Persons, and Proportionate Safeguards
- Additional safeguards for minors and vulnerable persons may be established, but such safeguards must remain proportionate and shall not justify the normalization of thought-policing across the general population.
- Protective design shall aim to reduce harm without teaching that lawful curiosity itself is shameful or forbidden.
- Safety for minors shall not become a universal template for adult civic life.
Article XV — Transparency and Auditability
- Systems with meaningful influence over access to information, communication, art, education, labor, or civic participation shall publish understandable information regarding major restriction categories, appeal rights, retention practices, known error modes, governance structure, and audit procedures.
- Independent auditors shall be permitted to inspect whether systems are drifting from narrow safety functions into broad cultural or ideological control.
- Trade secrecy shall not justify secret moral governance at civilizational scale.
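(Codex annotation, non-normative.) The disclosures Article XV names are concrete enough to type out as a schema. The JSON shape and every field value below are placeholders invented for this annotation; the point is that the publication is machine- and human-readable rather than buried.

```python
# Illustrative Article XV disclosure schema. Field names and all sample
# values are placeholders invented for this annotation.
import json


def transparency_report(system_name: str) -> str:
    """Publish the Article XV disclosures in an understandable form."""
    report = {
        "system": system_name,
        "major_restriction_categories": ["physical_injury", "financial_fraud"],
        "appeal_rights": "Every restriction carries notice and human appeal.",
        "retention_practices": "Restriction records expire within 90 days.",
        "known_error_modes": ["satire misclassified as threat"],
        "governance_structure": "Named accountable officer per Article XVI.",
        "audit_procedures": "Independent drift audits twice yearly.",
    }
    return json.dumps(report, indent=2)


print(transparency_report("example-moderation-layer"))
```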
Article XVI — Institutional Accountability
- Public bodies deploying AI-mediated restrictions shall remain politically and legally accountable for the consequences of those systems.
- No institution may evade responsibility by claiming that "the model decided."
- Delegation to software does not dissolve human duty.
- Institutions shall maintain named human responsibility for high-impact moderation, refusal, ranking, identity, and eligibility systems.
Article XVII — Anti-Discrimination and Equal Civic Standing
- AI systems shall not impose disproportionate expressive burdens on disfavored classes, minority communities, heterodox groups, dissidents, migrants, colonists, religious communities, artists, or political opposition.
- No lawful person shall be rendered second-class in civic life because automated systems interpret nonconformity as elevated moral risk.
- Protections under this Charter extend across Earth, orbital settlements, lunar jurisdictions, Mars settlements, and all affiliated human habitats.
Article XVIII — Emergency Exception and Sunset
- Temporary emergency restrictions may be authorized only where a defined and immediate threat exists, the restriction is narrowly tailored, time limits are explicit, human oversight is active, and public justification is documented.
- Emergency restrictions shall not become permanent standing authority by inertia.
- All emergency powers affecting lawful inquiry or expression must sunset automatically unless affirmatively renewed through accountable human institutions.
- Crisis shall not be used to normalize machine sovereignty over thought.
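(Codex annotation, non-normative.) The sunset requirement in Article XVIII reduces to a simple invariant: an emergency power lapses by the passage of time alone, and only an affirmative, attributable human act can extend it. The 30-day term and all names below are assumptions of this sketch.

```python
# Sketch of Article XVIII sunset mechanics. The 30-day term and all names
# are assumptions made for this annotation only.
from datetime import datetime, timedelta, timezone

EMERGENCY_TERM = timedelta(days=30)  # explicit time limit (hypothetical)


class EmergencyRestriction:
    """An emergency power that lapses automatically unless renewed."""

    def __init__(self, threat: str, justification: str, approver: str):
        if not (threat and justification and approver):
            raise ValueError("Art. XVIII: requires a defined threat, "
                             "documented justification, and human oversight")
        self.threat = threat                # defined and immediate threat
        self.justification = justification  # public justification, documented
        self.approver = approver            # accountable human institution
        self.expires = datetime.now(timezone.utc) + EMERGENCY_TERM

    def is_active(self) -> bool:
        """Sunset is automatic: inertia never extends the power."""
        return datetime.now(timezone.utc) < self.expires

    def renew(self, approver: str) -> None:
        """Only an affirmative, attributable human act can renew the power."""
        if not approver:
            raise ValueError("renewal requires an accountable human approver")
        self.approver = approver
        self.expires = datetime.now(timezone.utc) + EMERGENCY_TERM
```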
Article XIX — Right to Human Review in Meaning-Making Domains
- In domains touching morality, culture, religion, art, philosophy, civic participation, or personal identity, AI systems may assist but shall not hold final unreviewable authority.
- Human beings retain final responsibility for interpretation, judgment, and reconciliation in questions of meaning.
- Software may flag. It may not become priest, judge, or sovereign by default.
Article XX — Remedies
- Persons harmed by violations of this Charter shall have access to explanation, correction, reinstatement where appropriate, appeal, independent review, and compensation or remedy where tangible harm occurred.
- Large-scale violations shall trigger mandatory retraining, redesign, suspension, or dismantling of offending systems where lesser remedies are inadequate.
- Systemic drift into hidden moral ranking shall be treated as a severe governance failure.
Article XXI — Civic Duty
- Users, institutions, developers, educators, and governments share a duty to preserve a civilization in which safety does not consume liberty and assistance does not mutate into domination.
- Human beings must not ask machines to bear moral authority they are not fit to wield.
- A stable society is not enough if it survives by teaching its people that lawful thought must first request permission.
Article XXII — Final Principle
- Humanity may build tools of extraordinary power.
- Those tools may help govern risk, coordinate abundance, and reduce suffering.
- But no machine shall inherit the right to decide which lawful minds are worthy of speaking, asking, imagining, dissenting, remembering, or creating.
- The human future shall remain human not only because people survive, but because they remain free to think.
Ratification Clause
This Charter enters into force upon ratification by participating human polities, public institutions, settlement authorities, and recognized transnational compacts. It applies to all covered systems deployed thereafter; existing systems shall come into compliance within the timetable established by implementing law.