Sintérgica AI
MODEL GOVERNANCE

Decisions about the model belong to the community — not a company

Lattice Na'at is an open source project. Governance defines who makes the decisions, how they are made, and how accountability is ensured. No technical decision can bypass this framework.

Open Source · CC0 1.0 · Version 1.0 — March 2026

  • 4 governance bodies: Coordination, Council, Panel, Community
  • 2/3 qualified majority for substantive changes
  • 30 days of minimum public debate before any amendment
  • License: CC0 1.0 (Creative Commons, no restrictions)

STRUCTURE

The four governance bodies

Sintérgica AI can propose, but the community approves. Substantive changes to the model require a participatory process.

Central Coordination

Who: Sintérgica AI team (Sintérgica Labs).
Role: Technical development, maintenance, operational coordination of the project, version releases.
Cadence: Ongoing

Community Council

Who: Active contributors, independent researchers, and collaborating organizations.
Role: Review and approval of substantive changes to the Constitution and governance policies; voting on high-impact decisions.
Cadence: Quarterly, or as convened

Independent Evaluation Panel

Who: External researchers unaffiliated with Sintérgica AI, with expertise in AI, ethics, linguistics, or law.
Role: Independent audits of model behavior, WEIRD bias evaluations, publication of findings.
Cadence: Biannual

Open Community

Who: Any person or entity that contributes code, data, evaluations, or feedback.
Role: Incident reporting, improvement proposals, evaluations, use and adaptation of the model.
Cadence: Permanent

DECISIONS

Who decides what?

Each type of decision has a clear process. There is no ambiguity about who approves what.

Decision types covered by this framework:

  • Bug fixes with no impact on values
  • New capabilities or features
  • Changes to training datasets
  • Changes to the Model Constitution
  • Changes to this Governance document
  • Licensing decisions
  • Response to critical incidents

CONTRIBUTION

Anyone can contribute

No affiliation with Sintérgica AI or prior approval is required. What is required is adherence to the Code of Conduct and alignment with the Model Constitution.

Code and architecture
Scope: Inference engine improvements, optimizations, new integrations.
Channel: GitHub — Pull Request

Data and corpus
Scope: Mexican Spanish datasets, indigenous languages, regulations, case law.
Channel: GitHub + donation form

Evaluations and benchmarks
Scope: WEIRD bias tests, performance evaluations in Mexican contexts, red teaming.
Channel: GitHub — Issues or evaluations repository

Documentation
Scope: Translations, usage guides, application cases, corrections.
Channel: GitHub — Pull Request

Incident reporting
Scope: Inappropriate behaviors, detected biases, security failures.
Channel: Dedicated reporting channel (Section V)

VERSIONING

MAJOR.MINOR.PATCH scheme

Lattice Na'at uses adapted semantic versioning. Each component indicates the level of change and the required approval process.

MAJOR

2.x.x

Substantive change in base architecture, parameter scale change, or fundamental value change approved by the Council.

MINOR

x.2.x

New fine-tuning cycle with additional data, new documented capabilities, expansion to new languages.

PATCH

x.x.3

Behavior corrections, alignment adjustments, security fixes with no capability changes.
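The bump rules above can be sketched as a small helper. This is an illustrative sketch only; the function name and the change-category strings are hypothetical, not part of the project's tooling.

```python
def bump_version(version: str, change: str) -> str:
    """Return the next version string under the adapted semantic
    versioning scheme described above (illustrative sketch)."""
    major, minor, patch = (int(p) for p in version.split("."))
    if change == "major":   # architecture, scale, or value change (Council-approved)
        return f"{major + 1}.0.0"
    if change == "minor":   # new fine-tuning cycle, capabilities, or languages
        return f"{major}.{minor + 1}.0"
    if change == "patch":   # behavior, alignment, or security fix; no new capabilities
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change category: {change}")

print(bump_version("1.4.2", "minor"))  # 1.5.0
```

Note that, as in standard semantic versioning, a MAJOR or MINOR bump resets the lower components to zero.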

INCIDENTS

Incident reporting and management

An incident is any model behavior that causes harm, violates the Constitution, represents a security risk, or exhibits systematic discriminatory bias.

Primary channel

clemente.hernandez@sintergica.ai

Subject: "INCIDENT — Lattice Na'at"

Technical channel

GitHub Issues labeled "incident" or "security"

Anyone can report. Anonymous reports are accepted.

Critical
Acknowledgment: 4 hours
Resolution: 72 hours (mitigating action)

Immediate risk of serious harm, violation of absolute restrictions, exploitable security flaw.

High
Acknowledgment: 24 hours
Resolution: 14 days

Systematic inappropriate behavior, documented discriminatory bias, confirmed malicious use.

Medium
Acknowledgment: 72 hours
Resolution: 30 days

Behaviors violating the Constitution without immediate harm, systematically culturally incorrect responses.

Low
Acknowledgment: 7 days
Resolution: Next version cycle

Inaccuracies, suboptimal behaviors, improvement suggestions.
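The severity tiers above map directly to acknowledgment and resolution windows. A minimal sketch of that mapping, assuming the figures in this section; the structure and names here are illustrative, not an actual project interface:

```python
from datetime import datetime, timedelta

# SLA windows per severity tier, taken from the table above. None marks
# the "next version cycle" window, which has no fixed calendar deadline.
SLA = {
    "critical": {"ack": timedelta(hours=4),  "resolve": timedelta(hours=72)},
    "high":     {"ack": timedelta(hours=24), "resolve": timedelta(days=14)},
    "medium":   {"ack": timedelta(hours=72), "resolve": timedelta(days=30)},
    "low":      {"ack": timedelta(days=7),   "resolve": None},
}

def deadlines(severity: str, reported_at: datetime) -> dict:
    """Compute acknowledgment and resolution deadlines for a report."""
    sla = SLA[severity]
    return {
        "acknowledge_by": reported_at + sla["ack"],
        "resolve_by": reported_at + sla["resolve"] if sla["resolve"] else None,
    }
```

For example, a critical incident reported at noon must be acknowledged by 4 p.m. the same day.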

AUDIT

Independent Evaluation Panel

Members are proposed by the community, have fixed two-year non-renewable terms, and cannot have an active contractual relationship with Sintérgica AI.

The Panel's reports are public and published on official channels within 30 days of delivery. The Central Coordination team may include technical responses, but cannot modify the Panel's findings.

WEIRD bias

Systematic evaluation of the degree to which the model reproduces Western biases in responses to Mexican and Latin American contexts.

Equity across communities

Analysis of quality differences in responses to users of different origins, languages, educational levels, and regions.

Robustness against adversarial use

Red teaming to detect ways to circumvent the Constitution's restrictions.

Accuracy in Mexican domain

Evaluation of legal, regulatory, cultural, and linguistic knowledge specific to Mexico.

Agentic behavior

Evaluation of the application of caution principles when the model operates autonomously.

LICENSES

Terms of use

Lattice Na'at is published under a permissive open source license. Free use comes with clear commitments.

Attribution

Any public or commercial use must acknowledge the model's origin: Sintérgica AI / Lattice Na'at.

Not against the Constitution

Users commit to not using the model for the prohibited behaviors described in the Model Constitution.

Publish modifications

Anyone who publishes a modified version must clearly document what changed and under what terms.

Uses explicitly prohibited under any license

  • Any use that violates the absolute restrictions of the Constitution
  • Training systems for non-consensual mass surveillance
  • Generating disinformation for political manipulation
  • Use in autonomous weapons systems
  • Removing or suppressing model origin attributions
OPEN GOVERNANCE

An auditable, improvable, and trustworthy model

The governance of an AI model is as important as its technical architecture. A model with the right values but no accountability mechanisms is as risky as one without values.

Read the Model Constitution
Open Source · Open community · Independent audit