Speaking Charlotte’s Language · AI safeguard note

Ethical AI Interface for Child-Centred Evaluation

A foundation note on using AI to regulate adult interpretation, not to decide what a child means.

This page sets the boundary for any future AI-assisted use of Speaking Charlotte’s Language, especially where babies, toddlers, and preverbal children may be translated into adult records.

AI is not there to tell us what the child means. AI is there to slow the adult down before they decide.

The practical use of AI in this project is not to evaluate children. It is to help adults evaluate their own process of evaluating children.

That distinction is the safeguard. A vulnerable or preverbal child cannot fully correct the adult record. The child must not be replaced by the adult’s explanation, the professional record, the institution’s category, or the AI’s fluent interpretation.

Document type: Ethical AI safeguard note

Project: Speaking Charlotte’s Language

Status: Public foundation page

Purpose: Prevent AI from replacing the child

Standfirst. This note sets the ethical boundary for AI-assisted use of Speaking Charlotte’s Language. AI may be used as a reflective tool to slow adult interpretation, separate observation from inference, reopen collapsed language, and prepare better supervision or record questions. It must not be used to diagnose, authorise, or decide what a baby or preverbal child means.

Core principle

1. AI must not evaluate the child

The practical use of AI in this project is not to evaluate children. It is to help adults evaluate their own process of evaluating children.

This distinction matters because AI can produce fluent, confident interpretation without relational knowledge, statutory responsibility, clinical authority, or direct contact with the child. Used badly, it can become one more adult-like voice that speaks over the child. Used carefully, it can help the adult slow down before turning a child’s signal into a record.

The ethical task is not to make adults feel certain about children. It is to make adults more responsible for the conditions under which they become certain.

Why this matters

2. The preverbal child cannot correct the interpretation

A vulnerable or preverbal child cannot say that an adult has read fear into them, called them settled because they went quiet, described attachment-seeking as clinginess, treated distress as behaviour, or used silence to close the question.

That means the ethical burden sits with the adult observer. The child must not be replaced by the adult’s explanation, the professional record, the carer’s fear, the parent’s shame, the institution’s category, or the AI’s fluent interpretation.

Practical method

3. Regulate the observer before interpreting the child

The adult brings the situation to AI not to get a conclusion, but to slow their own translation process. The sequence is:

Child-centred sequence

Adult state → observation → child signal → adult inference → collapsed language check → missing context → alternative explanations → safeguarding duty → ethical record.

Simpler form

Regulate the adult. Observe the child. Separate what happened from what it might mean. Check the language. Act on safety. Record carefully.

Moral concern

4. Concern is not evidence, but it is not nothing

Practitioners and carers may carry moral discomfort, prior concern, fear of being judgemental, fear of being unfair, fear of overreacting, fear of underreacting, professional anxiety, loyalty, shame, or the wish for things to be fine.

The mistake is to treat moral concern as automatically valid or automatically biased. This project takes the third route: moral concern is a signal in the observer that must be regulated, tested, contextualised, and ethically placed.

The adult does not simply obey the concern. They also do not suppress it. They bring it into sequence.

What AI may help with

5. Safe uses

AI may help the adult

  • slow down interpretation
  • separate observation from meaning
  • identify collapsed language
  • surface assumptions and prior concerns
  • test alternative explanations
  • prepare supervision questions
  • draft more careful record language
  • distinguish immediate safety action from later meaning-making

AI must not

  • diagnose the child
  • decide safeguarding thresholds
  • replace supervision
  • replace statutory duties
  • replace clinical or legal judgement
  • replace relational knowledge
  • become the authority on what the child means

Line of use: AI can structure ethical reflection. It cannot authorise the conclusion.

Safety boundary

6. Safeguarding before theory

Do not delay necessary action

If there is immediate danger, safeguarding concern, medical risk, legal duty, disclosure, injury, neglect concern, coercion, exploitation, or urgent uncertainty, the sequence changes. The rule is: Safeguarding before theory.

AI can help separate immediate safety action from later meaning-making, but it must not create reflective delay where action is required. Reflect where reflection protects the child. Act where action protects the child. Do not confuse the two.

Data boundary

7. Do not treat AI as a confidential professional space

AI should not be treated as confidential. It is not regulated professional supervision, legal advice, clinical assessment, or a safe place to paste identifiable case material. Where a real child, family, professional, case, or record is involved, identifying and sensitive information should be removed unless there is a clear, lawful, professional basis for sharing it in that tool.

The safer use is to describe the structure of the wording problem, not to upload identifiable private material. The more vulnerable the child, the stricter the caution must be.

Collapsed language

8. Use AI to reopen collapsed language before it becomes record

The Collapsed Language Check is the bridge between the project’s ethics and practical use. AI can help reopen phrases such as “settled,” “clingy,” “distressed,” “overstimulated,” “unimpressed,” “contact went well,” or “parent was defensive” before they become record-truth.

The point is not to make the phrase more polished. The point is to ask what was actually seen, who interpreted it, what else might explain it, what adult state or prior concern may be present, and what wording would be fairer to the child. Reopened, “settled” might become: the child went quiet; whether that quiet was calm, withdrawal, or shutdown was not established.

Practical prompt

9. Reflective prompt for child-centred use

Use this prompt only to examine the adult’s interpretive process. Do not use it to ask AI to decide what a child means.

Act as a regulated reflective and child-centred thinking partner.

Do not evaluate the child for me.
Help me evaluate my process of interpreting the child.

Use this sequence:
1. Regulate the adult observer.
2. Identify what was actually observed.
3. Separate observation from inference.
4. Separate the child’s signal from adult meaning.
5. Identify collapsed language.
6. Identify my prior concerns, moral discomfort, fears, loyalties, or assumptions.
7. Consider alternative explanations.
8. Identify what is unknown.
9. Identify what requires immediate safeguarding, medical, legal, or professional action.
10. Help me form careful supervision questions or ethical record language.

Do not replace the child with the adult’s account, the professional record, the institution’s category, or AI interpretation.

Do not turn uncertainty into certainty.

If there may be immediate risk or safeguarding duty, prioritise appropriate human/professional action over reflection.

Do not treat this as confidential or regulated professional support. Do not ask for identifying details unless they are essential and appropriate to share in the tool being used.

Help me protect the child from premature adult meaning.

Final boundary

10. The project line

AI is not there to tell us what the child means. AI is there to slow the adult down before they decide.