AI Governance Standard (Group)
AI usage rules for ASIACOM Group, managed centrally by the Japan entity (Tokyo HQ).
This page summarises how ASIACOM uses AI tools and the Ive assistant safely and in compliance with applicable laws, focusing on:
- Bank and regulator expectations
- Separation of group-level policies and local-law supplements
- Clear boundaries between trade-related compliance and AI-related compliance
Languages: EN / zh-Hant. A Japanese explanation is available on the Japan (.co.jp) site.
1. Purpose and Scope
This AI Governance Standard applies to all AI use within the ASIACOM Group, including content generation, translation, document checking and internal support tools. Tokyo HQ (Japan Desk) approves AI providers and defines the rules described here.
2. Core Principles
ASIACOM follows these principles for AI use:
- Lawful and compliant – AI use must comply with anti-money-laundering (AML), sanctions, export control and data protection laws.
- Human-centric – AI supports human decisions and does not replace human responsibility.
- Security and privacy by design – Only necessary data is used, and personal data is handled carefully.
- Transparency and accountability – Important AI-assisted decisions must be explainable and traceable.
- Bank- and regulator-friendly – AI usage must be explainable to financial institutions and authorities.
3. Approved AI Providers and Systems
Only AI services approved by Tokyo HQ may be used for business purposes. ASIACOM uses trusted non-Chinese AI providers to avoid geopolitical and data-security risks. Shadow IT – the use of unapproved AI tools with company data – is prohibited.
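The approval rule above could be enforced in internal tooling with a simple allowlist check. A minimal Python sketch, assuming a hypothetical allowlist (the provider names below are placeholders, not ASIACOM's actual approved list):

```python
# Hypothetical allowlist of HQ-approved AI providers (placeholder names).
APPROVED_AI_PROVIDERS = {"provider-a", "provider-b"}

def is_approved_provider(provider: str) -> bool:
    """Return True only if the provider is on the HQ-approved allowlist."""
    return provider.strip().lower() in APPROVED_AI_PROVIDERS
```

A client wrapper could run this check before any company data leaves the network, turning the shadow-IT prohibition into a technical control as well as a procedural one.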
4. Data Handling Rules for AI
When using AI systems, users must:
- Input only data that is necessary for the task
- Avoid highly sensitive personal data unless strictly required and permitted
- Minimise personal identifiers in trade and compliance documents where feasible
- Mask or restrict bank-sensitive and regulatory-sensitive information according to internal rules
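The minimisation and masking rules above can be partially automated before text ever reaches an AI service. A minimal sketch, assuming illustrative regex patterns and placeholder tokens (these are not the internal masking standard):

```python
import re

# Illustrative patterns only; real masking rules would follow the
# internal standard referenced above.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def mask_identifiers(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

mask_identifiers("Contact tanaka@example.co.jp or +81 3 1234 5678.")
# → "Contact [EMAIL] or [PHONE]."
```

Such a filter catches only obvious identifiers; it supplements, rather than replaces, the user's duty to check what they paste into a prompt.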
5. Human Oversight and Decision-Making
AI may draft documents, support translations, check the consistency of trade documents and summarise information. However, final decisions in high-risk areas such as AML, sanctions, contract approval or communication with regulators and banks must always be made by human staff.
Staff must critically review AI outputs; AI-generated content must never be approved or sent out unreviewed.
6. Allowed Use Cases
Typical allowed AI use cases include:
- Drafting and editing emails, minutes and internal reports
- Translating between English, Japanese and Traditional Chinese
- Checking consistency of trade documents and detecting potential errors
- Preparing internal training materials and FAQs
- Supporting internal search and knowledge retrieval where implemented
7. Prohibited Use Cases
AI may not be used for:
- Creating falsified or misleading documents, invoices or contracts
- Designing schemes that could be seen as AML evasion or sanctions circumvention
- Generating discriminatory, harassing or clearly inappropriate content
- Sending messages directly to counterparties or authorities without human review
8. Monitoring and Logs
Tokyo HQ may log AI usage (prompts and outputs) within reasonable limits to detect misuse and improve quality. Logs are handled as confidential internal data.
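Such a log could be kept as append-only structured records. A sketch assuming a JSON-lines format with illustrative field names (no ASIACOM log schema is implied):

```python
import json
from datetime import datetime, timezone

def make_log_record(user: str, provider: str, prompt: str, output: str) -> str:
    """Build one AI-usage log line, tagged confidential so downstream
    handling follows the internal-data rules described above."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "provider": provider,
        "prompt": prompt,
        "output": output,
        "classification": "confidential-internal",
    }
    return json.dumps(record, ensure_ascii=False)
```

One line per interaction keeps the log easy to filter for misuse detection while the classification field signals how each record must be handled.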
9. Training and Awareness
Staff receive training on how to use AI tools safely, including examples of acceptable and unacceptable use. The training covers both trade-related compliance and AI-specific risks.
10. Review of This Standard
This AI Governance Standard is reviewed at least annually and updated in line with regulatory and technological developments, as well as feedback from banks, regulators and internal users.