Safe for Use AI Policy
Approved AI Tools
- Zoom AI
- Microsoft Copilot
- SAP SuccessFactors
Objectives
This policy serves as a guide to directors, officers, employees, consultants and advisers of Ayala Corporation on the safe use of Artificial Intelligence (AI) tools and AI-enabled applications. The policy aims to balance the benefits of using AI with potential risks to confidentiality, regulatory compliance, intellectual property, and other relevant concerns.
Scope
These guidelines cover the internal use of AI for training and business purposes. The safety and security of the company must be the top priority in any use of AI.
Policy Statements
All AC employees, directors, officers, consultants and advisers are allowed to use AI tools and AI-enabled applications subject to the following conditions:
- Completion of mandatory AI online training courses
- Acceptance of the AI accountability statement
APPROVED AI TOOLS AND APPLICATIONS
The list of approved AI tools and AI-enabled applications can be viewed in the AC Portal (under Resource Center > Policies > Safe for Use AI Policy).
Use of AI tools and AI-enabled applications not in the approved list is subject to the following restrictions due to potential risks:
- AI users are not allowed to install such tools or applications within the Ayala network and environment.
- AI users may not input any sensitive, confidential, proprietary, or material non-public information, or any personal information of employees, clients, or affiliates, into such applications.
- AI users should ensure that company business plans, marketing strategies, program code, and other sensitive company information are not entered into non-approved AI tools and AI-enabled applications.
RESPONSIBLE USE OF AI
- AI users should exercise caution to ensure that their use of AI is safe and aligns with the organization’s values and legal obligations. AI users will be held accountable for outcomes arising from their use of AI.
- AI users should be transparent when AI is involved in decision-making. AI-generated content, when used for decision-making, should be clearly identified and attributed as such. This ensures transparency for employees, customers, and relevant third parties.
- AI users should ensure compliance with the AC Information Security Policy. At a minimum, multi-factor authentication (MFA), email security, and endpoint security should be in place in relation to their use of AI.
- AI users should evaluate the accuracy, validity, and appropriateness of AI output for their specific use cases. They must ensure that the outputs align with the intended purpose and meet required standards before use or dissemination.
- AI users are responsible for verifying the accuracy, truthfulness, and appropriateness of AI-generated content before using it in any final or published work product.
TRAINING AND EDUCATION
AI users should be trained to use AI effectively and ethically. This includes understanding the technology's limitations, its potential impact on their work, and, to the extent possible, how AI decisions are made, including the reasons behind specific outcomes.
COMPLIANCE WITH LAWS AND REGULATIONS
- Users must comply with applicable laws and regulations when using AI, including but not limited to intellectual property laws, data privacy laws, and industry-specific regulations. Users should consult with appropriate legal counsel or management if they have questions about regulatory requirements.
- Users should be aware that AI-generated content may not be eligible for copyright protection, and they should exercise caution when claiming or assigning intellectual property rights for such content. When engaging with external partners or vendors, users should address AI usage in development agreements to ensure clarity on intellectual property rights and responsibilities.
- The use of approved AI tools must comply with the company’s Code of Conduct, internal policies and applicable laws. The use of personal devices or personal AI accounts to bypass company safeguards or violate any established policies is strictly prohibited.
MONITORING AND INCIDENT REPORTING
- AI users should proactively monitor AI systems and make necessary adjustments to ensure that these systems function as intended and do not produce unintended consequences.
- Employees shall report any observed non-compliance with this policy to ICT and HR for proper handling.
Exception Criteria
AI development tools, platforms, and systems used for designing and building AI models and solutions are currently exempt from this AI use policy. Their use will be addressed in a future update, aligned with the forthcoming Ayala Groupwide AI Policy.