50.15 Artificial Intelligence

Issued: July 2, 2025

1. Governing Regulations

This procedure is governed by Texas A&M University System Regulation 29.01.05 Artificial Intelligence, Texas Government Code 2054.601 Information Resources, and Texas Administrative Code (TAC) Title 1, Part 10, Chapter 202, Information Security Standards.

2. Definitions

2.1 Artificial Intelligence (AI) – an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.

2.2 AI literacy – the knowledge and skills necessary to critically understand, evaluate, and use AI systems and tools to safely and ethically participate in an increasingly digital world.

2.3 AI trust, risk and security management (AI TRiSM) – a set of solutions to ensure AI model governance, trustworthiness, fairness, reliability, robustness, efficacy and data protection. This includes solutions and techniques for model interpretability and explainability, AI data protection, model operations and adversarial attack resistance.

2.4 Automated decision system – has the definition assigned by Tex. Gov’t Code § 2054.621.

2.5 Generative artificial intelligence (GenAI) – a system of algorithms or computer processes that can create novel output in text, images or other media based on user prompts. These systems are created by programmers who train them on large sets of data. The AI learns by finding patterns in the data and can then provide novel outputs to users’ queries based on its findings.

2.6 Machine learning (ML) – a branch of AI that focuses on the development of systems capable of learning from data to perform a task without being explicitly programmed to perform that task. Learning refers to the process of optimizing model parameters through computational techniques such that the model’s behavior improves on the training task.

2.7 Managed generative AI – an AI technology and associated database that exists within a system-managed environment, allowing for more AI TRiSM oversight. These include tools and solutions embedded within desktop applications utilized by system members (e.g., AI-enabled applications installed on member devices) and are governed by system security policies and standards, thereby controlling their impact on the system. Data shared with these tools or solutions is not subject to additional restrictions beyond those already governing the use of member data.

2.8 Publicly-available generative AI – an AI technology and associated database that exists outside the control of the system, where the approach to AI TRiSM is unverifiable and often unknown. These solutions expose the system and its employees to a variety of significant risks, including potentially providing employees with false information, granting inappropriate access to proprietary information, and/or exposing sensitive and secure information to the public.

2.9 1 Tex. Admin. Code Ch. 202 (TAC 202) – defines applicable terms used in this procedure. See 1 Tex. Admin. Code § 202.1.

3. Purpose

This procedure addresses the agency’s implementation of Tex. Gov’t Code § 2054.601 with respect to artificial intelligence (AI), System Regulation 29.01.05, and the ethical, responsible, and secure use of AI and machine learning (ML) within the agency. This procedure applies to all AI-related computing and administrative functions.

4. Responsibilities

4.1 Information Resources Manager

4.1.1 Participate in agency councils, teams, or committees regarding agency-wide AI initiatives and advise on AI-related concerns and issues.

4.1.2 Approve the procurement of AI solutions.

4.1.3 Approve implementation of internally developed AI solutions.

4.1.4 Approve the appropriate data storage locations used by AI based on data classification.

4.1.5 Oversee AI solution compliance with all applicable data privacy and security laws, system policies, regulations and standards, and security controls.

4.1.6 Oversee the risk management of AI solutions per the NIST AI Risk Management Framework.

4.1.7 Maintain a documented list of AI solution owners and the designated primary custodian of each system utilizing AI.

4.1.8 Report to the System Chief Information Officer (SCIO) annually on the automated decision systems inventory submitted to Texas DIR under Tex. Gov’t Code § 2054.623.

4.1.9 Oversee and encourage the integration of AI literacy into staff professional development.

4.2 Owners of AI Systems

4.2.1 Communicate to the agency Information Resources Department Head the name and official agency title of the designated custodian of the system that utilizes AI. The custodian is responsible for ensuring that the AI is robust, safe, and capable of being terminated if human control of the system is no longer possible.

4.2.2 Ensure procurement of AI follows all applicable federal and state laws, rules, and regulations, as well as System and agency rules and policies.

4.2.3 Identify the AI capabilities utilized and document, in plain language, clear descriptions of the overall system functions and the role of AI.

4.2.4 Create and maintain assessments of AI activities within AI systems in accordance with section 9 below.

4.2.5 Establish and maintain documentation of accountability and oversight of the AI system in accordance with section 10 below.

5. Procurement of AI Systems

5.1 Procurement of AI systems must follow all applicable federal and state laws, rules, and regulations as well as System and agency rules, policies, and procedures.

5.2 Procurement of AI systems must follow the NIST AI Risk Management Framework (NIST AI 100-1) to assess risk to the agency from the AI system.

6. Development of AI Systems

6.1 Development and use of AI systems must align with the United Nations Educational, Scientific and Cultural Organization (UNESCO)’s Recommendation on the Ethics of Artificial Intelligence, ensuring transparency, fairness, accountability, safety and respect for individual rights of privacy and dignity.

6.2 Development of AI systems must follow the NIST AI Risk Management Framework (NIST AI 100-1) during the development lifecycle to ensure risks to the agency from the AI system are minimized.

7. Professional Integrity

7.1 AI systems must support and enhance professional integrity.

7.2 Use of AI systems must recognize the importance of human oversight and accountability in AI decision-making processes and ensure that AI augments human capabilities and enhances public service while preserving human judgment and autonomy.

8. Data Privacy and Security

8.1 AI systems must comply with all applicable data privacy and security laws, system policies, regulations and standards, and agency rules and security controls.

8.2 Owners and custodians of information systems that include AI must ensure the AI does not pose unreasonable safety risks by adopting safety measures that are proportionate to the magnitude of potential risks. Owners must clearly and appropriately identify the custodian with responsibility to ensure that an AI is robust, safe, and capable of being terminated if human control of the system is no longer possible.

9. Implementation and Impact Assessment

9.1 Owners of information systems that include AI must include in the system development life cycle assessments of AI activities which:

9.1.1 consider the social, ethical, and environmental impacts of the AI activities;

9.1.2 include transparency and explainability of AI activities;

9.1.3 provide clear documentation, explanations, and recourse mechanisms to users and stakeholders affected by AI-driven decisions and outcomes; and

9.1.4 are evaluated to determine if the activity falls within the scope of human subjects research.

10. Accountability and Oversight

10.1 Owners of information systems that include AI must establish clear lines of accountability and oversight for AI initiatives, with designated roles and responsibilities for AI governance, risk management and compliance across all levels of the organization.

11. Prohibited Use

11.1 AI may not be used in any manner that violates the law, system policy or regulation, or member rules and procedures.

11.2 Information input into or shared with publicly available Generative AI tools (e.g., ChatGPT) is not private and could expose proprietary, regulated, or confidential information to unauthorized parties. Agency data classified as confidential or sensitive must not be input into or shared with publicly available Generative AI tools.

11.3 Third-party AI services that participate in online meetings to deliver transcription or summarization may not be used for any meetings where non-public information, including information subject to disclosure under the Texas Public Information Act (Tex. Gov’t Code Ch. 552), is discussed.

Only first-party AI provided by an online meeting service with which the system has a contractual agreement that includes data privacy and security conditions (such as Cisco Webex AI, Google Gemini, Microsoft Copilot, and Zoom AI Companion) may be used in such cases, with the understanding and consent of all participants that any transcribed or summarized conversation is subject to disclosure in accordance with the Texas Public Information Act.

Questions regarding this procedure should be directed to the Information Resources Department Head.