Safe Use of AI

Objective

This page helps Cafh use AI with discernment. AI can support translation, summaries, drafts, and routine work, but the human process must remain at the center.

Alignment with Cafh's mission

Cafh states that its mission is to promote an evolutionary movement toward egoence through the integral development of people and groups. That mission includes self-knowledge, expansion of consciousness, the search for the transcendent, and conscious living in line with the Law of Renunciation and the Mysticism of the Heart. AI must support that work. AI must not displace it. These points describe the spirit of acceptable use, not only a technical control. They help Cafh judge whether the tool serves the mission or pulls attention away from it.

A mission-aligned use of AI:

  • Supports administrative work without taking the place of inner work
  • Leaves room for study, silence, meditation, dialogue, and discernment
  • Respects truth, care, and responsibility in what Cafh publishes
  • Protects people, member data, and the good name of Cafh
  • Helps groups work better without creating dependence on automatic answers

Why this policy matters

Cafh values inner development, self-knowledge, and the expansion of consciousness. For that reason, AI cannot be treated as a neutral shortcut in every case. It may help with support work, but it cannot replace study, lived experience, discernment, or the slow work through which meaning matures.

AI can produce fluent text very quickly. That speed is useful. It can still create harm when language sounds complete but lacks depth, context, or truth. In Cafh, that risk deserves special attention, especially around teachings and interpretation.

Core principle

AI may assist human work. AI must not replace human study, human judgment, or human experience in Cafh.

Risks this policy helps reduce

  • Biased text that narrows meaning
  • Summaries that take the place of reading and reflection
  • Unreviewed drafts published as if they were final
  • Exposure of member data or restricted texts
  • Passive dependence on fast answers
  • False authority in the interpretation of teachings
  • Mechanical answers that displace meditation, discernment, and self-knowledge
  • More dependence on speed and convenience than on formation and responsibility

What AI does well

AI is useful for:

  • First draft translation
  • First draft summary
  • Grammar cleanup
  • Note cleanup after meetings
  • Draft structure for administrative text
  • Plain language rewrite of public content
  • Search help for public information

These uses save time. They do not decide meaning.

What makes AI risky

AI has real limits that matter in Cafh work.

Bias

AI can repeat bias from its training data. AI can favor one tone, one culture, one style of reasoning, or one moral frame. That bias can appear even in text that sounds neutral.

Compression

AI often shortens complex ideas. It can remove tension, silence, nuance, paradox, and depth. That pattern is dangerous in books and teachings.

False confidence

AI can sound certain even when it invents details. Readers may trust the tone and miss the error.

Loss of source meaning

AI can shift the force of a sentence. It can soften a warning. It can harden a suggestion. It can flatten spiritual language into generic advice.

Passive dependence

AI can train people to ask for answers too early. That habit can weaken reading, study, reflection, and inner effort.

Distance from mission

AI often rewards speed, volume, imitation, and surface clarity. Cafh asks for depth, conscience, renunciation, and inner transformation. If AI is used without care, the tool can pull work away from the mission. It can fill silence too quickly. It can replace patient reading with instant answers. It can place convenience above formation.

Why teachings need special care

Teachings are not only raw information. They carry language, method, context, and living experience. A member may read one paragraph many times and receive new meaning over time. AI does not live that process.

AI can summarize words. AI cannot stand in for the act of learning. AI cannot replace the inner movement that Cafh seeks to cultivate.

For that reason, AI must never become the main interpreter of books, teachings, or study material. Members must return to the original text, the original setting, and human guidance inside Cafh life.

Allowed uses

Allowed does not mean automatic. Each use still needs source care, human judgment, and review that fits the sensitivity of the task.

Allowed uses include:

  • Draft translation for later human review
  • Draft summary for administrative use
  • Formatting or grammar cleanup
  • Drafting neutral process notes
  • Drafting simple public announcements
  • Building a first list of questions for human review

High-risk uses

High-risk uses need special care and committee review. Risk rises as content moves closer to teaching, interpretation, or official voice.

High-risk cases include:

  • Public text about teachings
  • Internal study guides
  • Text that explains spiritual meaning
  • Text that may shape member interpretation
  • Text that speaks in the name of Cafh
  • Text that includes restricted internal content

Restricted uses

These uses cross the line of acceptable support. They hand human formation, interpretation, or trust to a tool that cannot live the Cafh process.

These uses are not acceptable:

  • Replacing study, reflection, or inner work
  • Presenting AI output as final teaching without human review
  • Using AI to define an official Cafh interpretation
  • Placing member data in open AI tools
  • Placing restricted internal texts in open AI tools
  • Publishing AI text as if it came from direct Cafh experience
  • Asking AI to act as spiritual authority

Human review rules

Every AI draft needs human review. That review must be active and careful.

The reviewer must:

  • Read the full source text
  • Compare the AI output with the source
  • Remove invented claims
  • Remove weak simplifications
  • Check tone, context, and accuracy
  • Decide whether the text should be used at all

Public content needs a named human approver. Sensitive content about teachings needs review by people with context and trust.

Questions to ask before using AI

Ask these questions first:

  • Is this task administrative or interpretive?
  • Is the source text public or restricted?
  • Does the text contain member data?
  • Could this output shape how a person understands a teaching?
  • Will a human reviewer compare it with the original?
  • Does this use support the mission or only save time?
  • Does it leave room for study, meditation, dialogue, and discernment?
  • Could it weaken the human process that Cafh is trying to cultivate?

If the task is interpretive, sensitive, or restricted, stop and raise it to the committee or its delegate.

Safe workflow

  1. Define the task in simple terms.
  2. Remove private data and restricted data.
  3. Use AI for a draft only.
  4. Compare the draft with the source text.
  5. Revise the text with human judgment.
  6. Send public or sensitive text for approval.
  7. Keep the final human version as the official record.

Translation rules

Translation is useful and risky at the same time.

Use AI translation only as a first pass. Then:

  • Compare it with the original
  • Check key terms
  • Check tone and force
  • Check names, dates, and quotes
  • Ask a human reader of that language to review it

This matters even more for teachings. A small shift in a key term can change the meaning of a whole page.

Summary rules

Summaries must not become replacements for the source. If a summary is used, it must:

  • Point readers back to the full text
  • State that it is a summary
  • Avoid claims that exceed the source
  • Avoid moral or spiritual conclusions not present in the source

Data protection rules

Do not place these items into open AI tools:

  • Member records
  • Internal committee notes
  • Restricted study material
  • Contract details not meant for public view
  • Passwords, tokens, or recovery codes

If Cafh ever uses a paid or private AI service for internal work, the committee must review the tool first.

Signs that an AI draft is not safe

Watch for these warning signs:

  • It sounds sure, but the source is more careful
  • It adds claims not found in the source
  • It turns a deep text into generic advice
  • It removes tension or ambiguity that matters
  • It changes the spirit of the original
  • It hides uncertainty
  • It feels smooth but not true

Committee role

The committee gives Cafh a place to review patterns, not only isolated cases. That shared review helps the organization correct tools, training, and approval paths over time.

The committee must:

  • Review high-risk uses of AI
  • Set approval rules for public content
  • Review tools before wider internal use
  • Review incident reports tied to AI misuse
  • Keep this policy under review as use changes

Awareness course module

The member awareness course should include one AI module tailored to Cafh. The course should not only list rules. It should help members understand the reason behind the rules and practice careful review.

That module should cover:

  • The support role of AI
  • The line between administrative help and interpretation
  • Public text and restricted text
  • Member data and open AI tools
  • Human review before publication
  • Bias, compression, and false confidence
  • Reasons teachings need special care
  • How to report AI misuse or risky output

Final rule

AI may speed and multiply work. AI must not replace the human experience in Cafh. AI must not take the place of reading, reflection, dialogue, and inner transformation.