On September 14, 2023, the Institute for Global Governance Research (GGR) hosted a joint SMU/GGR talk session, “AI Governance and Regulation: Global and Asian Perspectives.” We invited Professor Nydia Remolina Leon (Head of Industry Relations, Centre for AI and Data Governance (CAIDG), Singapore Management University).
The session began with an overview of the global challenges of AI regulation before turning specifically to Asia. Professor Remolina Leon stressed the importance of AI governance, meaning the policies, regulations, and frameworks that guide the development and ethical use of AI technologies. The challenges discussed included the differing conceptions and values that surround the deployment of technology, such as privacy, jurisdictional limits, ethical values, and the very definition of governance itself. Although algorithmic trading in the financial sector dates back to the 1970s, the ethical use of artificial intelligence has only recently become a subject of public debate. A key point of discussion was the problem of inaccurate outcomes for gendered voices in natural language processing, which raises concerns about discrimination and bias in AI development. Because of data limitations, such algorithms tend to be gender-biased, and many people therefore still prefer human judgement over algorithmic results. A multidisciplinary approach is thus important to ensure that AI is developed responsibly, ethically, and accountably, and that its outcomes are not discriminatory — which is precisely the aim of governance.
Professor Remolina Leon also highlighted the challenge of balancing innovation with regulation, as well as managing risks such as privacy, cybersecurity, and adaptation to new architectures. Human beings should always remain involved in the deployment of artificial intelligence, and sector-by-sector adjustment also needs to be considered, since deployments differ in purpose, business size, location, market, and other respects.
In the Q&A session, questions were raised about the discriminatory effects of AI models caused by database limitations, especially in financial systems that will implement AI. In response, Professor Remolina Leon emphasized the need for supportive re-weighting, whereby an algorithm can discriminate positively to promote inclusion regardless of gender, race, or other characteristics that have often been grounds for exclusion. This is necessary because discriminatory outcomes can affect many areas, depending on the jurisdiction involved.
【Event report prepared by】
Sulastri (Master’s student, School of International and Public Policy)