AI Literacy Statement

Document version: 2.0

Date: 2026-01-23

Issued by: Hoodin

Contact: https://www.resources.hoodin.com/contact

1. Purpose of this statement

This AI Literacy Statement explains how artificial intelligence (AI) is used within Hoodin Compliance Studio and what level of understanding, oversight, and responsibility is expected from users and organisations.


This document supports transparency, informed use, and responsible governance of AI-supported functionality. It is intended for regulatory, quality, procurement, legal, and management audiences.


This statement does not describe technical model architectures, does not provide regulatory advice, and does not replace customer-specific policies or assessments.


2. Role of AI in Hoodin Compliance Studio

AI within Hoodin Compliance Studio is used exclusively as a decision-support mechanism.


AI-generated outputs are limited to suggestions, structuring, prioritisation, and analytical assistance. The system does not make regulatory decisions, classifications, approvals, or determinations on behalf of users.


All regulatory conclusions and compliance decisions remain the responsibility of the user organisation.


3. Human oversight and user control

Human oversight is a mandatory and integral part of all AI-supported functionality in Hoodin Compliance Studio.


Users retain full control over whether and how AI-generated outputs are used. AI suggestions can be accepted, modified, or rejected by the user. No AI-generated output is applied automatically as a binding action.


AI-supported functionality does not bypass established regulatory or quality review processes.


4. Transparency of AI-supported outputs

AI-generated suggestions and analyses are clearly identifiable within the system.


Where applicable, AI-supported outputs are presented together with references to underlying sources or contextual information used in the analysis. This enables users to review, validate, and challenge AI-generated content using primary sources.


The system is designed to ensure that users are aware when AI support is involved in generating or structuring information.


5. Contextual use of customer-provided information

To provide relevant and accurate support, AI functionality operates within a defined user and product context.


This context may include information that users have entered or configured within the system, such as product attributes, intended use descriptions, organisational scope, and selected markets.


The use of contextual information is a deliberate design choice to ensure relevance and consistency. Users are not required to manually re-enter this information for each AI-supported interaction.


All contextual information used for AI-supported analysis originates from user-provided data and system configurations that can be reviewed and adjusted by the user.


AI-supported processing is not self-initiated and does not occur outside defined system functions or user-initiated activities.


6. Explainability and transparency approach

The AI models used within Hoodin Compliance Studio are not designed to be fully interpretable at the level of internal model parameters.


Instead, explainability is achieved through functional transparency, including:

  • clear identification of AI-supported outputs,

  • visibility into the sources and context used,

  • defined limitations on AI use,

  • and mandatory human review of AI-generated suggestions.


This approach supports practical explainability and accountability without requiring access to internal model logic.


7. Use of third-party AI components

AI-supported functionality within Hoodin Compliance Studio may involve processing by AI components operated by third-party providers.


Such processing is conducted in accordance with Hoodin’s established security, data protection, and governance practices.


Information regarding data location, international data transfers, and applicable safeguards is addressed in Hoodin’s data protection and privacy documentation.


8. Continuous improvement and controlled updates

AI-supported functionality is subject to controlled development and change management.


Improvements to AI components are implemented through planned updates and releases. AI behaviour is not modified through autonomous learning based on customer data.


Changes that may affect how users rely on AI-supported functionality are communicated through release notes and supporting documentation.


9. Responsibilities and appropriate use

Hoodin is responsible for the design, implementation, and governance of AI-supported functionality within the system.


Customers are responsible for:

  • understanding the role and limitations of AI-supported outputs,

  • validating AI-generated information against primary sources,

  • and ensuring that AI support is used in accordance with internal policies and regulatory obligations.


AI-supported outputs do not constitute regulatory advice or compliance determinations.


10. Relationship to other governance documents

This AI Literacy Statement should be read in conjunction with:

Together, these documents define how AI-supported functionality is governed, assessed, and used responsibly within Hoodin Compliance Studio.
