Keynote Speakers

Xiaolan FU

University of Oxford

Iris Eisenberger

University of Vienna

Weidong JI

Shanghai Jiao Tong University

Amid today’s surge of technology and innovation, methods for evaluating intangible assets such as technology remain a persistent “bottleneck” constraining innovation and technological progress. Traditional valuation approaches, on the one hand, fail to fully reflect the value of technology-driven startups; on the other hand, they cannot objectively and accurately judge the value that innovative technologies bring to industrial development.

This study proposes the “Technological Value–Utility Theory” and, using machine learning, deep learning, and econometric techniques, trains models on sector-specific vertical databases across industries to develop an AI-driven method for technology valuation. Compared with traditional valuation methods, this approach is more objective, accurate, efficient, and accessible, while offering greater adaptability and interpretability.
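The abstract does not disclose the underlying model or data. As a purely illustrative sketch of the general idea (fitting a supervised model on a sector-specific “vertical” database of technology and firm features to estimate a valuation benchmark), one might proceed roughly as follows; the file name, feature names, target variable, and model choice are all hypothetical.

```python
# Purely illustrative: the talk does not disclose its model or data.
# Hypothetical example of fitting a valuation model on a sector-specific
# ("vertical") database of technology/firm features.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

# Hypothetical vertical database; file and column names are placeholders.
df = pd.read_csv("biotech_technologies.csv")
features = ["patent_count", "citation_count", "r_and_d_spend",
            "team_size", "addressable_market"]
X, y = df[features], df["realised_deal_value"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

# Out-of-sample accuracy as a rough check of the valuation estimates.
print("MAPE:", mean_absolute_percentage_error(y_test, model.predict(X_test)))
```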

The method provides entrepreneurs with a level playing field and momentum for sustainable development, and equips investors with objective, scientific capabilities for technology assessment—helping to build an inclusive, efficient, and virtuous innovation ecosystem.

Large language models such as DeepSeek, ChatGPT and Gemini are set to transform the role of language in society. They are changing the way texts are produced and used, as well as our understanding of linguistic ability. They are also altering how language fosters inclusion and participation in social and political activities. AI-based language production is already transforming academia, education, and the job market — a trend that is set to continue. AI affects almost all areas of life, and large language models impact all text-based academic disciplines and professions. One of the fields at the centre of this potential transformation is law. LLMs will challenge the role of lawyers and the place of law in society. Ultimately, LLMs will also force law schools to reconsider their curricula and the legal education and skills they provide. LLMs that can produce text, analyse documents and judgements, summarise laws and conduct research merit our attention. The skillset required of future lawyers will likely differ greatly from that required today. Therefore, we must evaluate the skills our students will require. Theory, system understanding, argumentative skills and a general education may once again become paramount. This talk will investigate how legal education should adapt in light of current techno-societal developments. It will demonstrate how the nature of our legal systems, practices and theory is changing. What will it mean to be a lawyer in this new environment? What kind of legal education will be required?

Against the backdrop of various countries’ turn toward technological accelerationism, this talk examines the complex dilemmas facing ethics-based AI governance. The speaker argues that the technical approach of achieving AI safety through safe AI in fact offers a way out of these dilemmas. If such technical solutions are further combined with the principle of due process, they will generate substantive value as well as social, ethical, and legal effects, and transform the adversarial game between different policy positions into a cooperative one. In this sense, this technology- and procedure-based approach to AI governance may be regarded as a new paradigm and a cornerstone of the global governance regime.

Round Table Discussion I

Xingzhong YU (Moderator)

University of Macau

Hualing FU

University of Hong Kong

Weidong JI

Shanghai Jiao Tong University

Yinliang LIU

Peking University

Personality rights, as the most important personal rights of natural persons, aim to protect their personal attributes and personal interests. Among them, social personality rights safeguard the subjects’ social interactions and social relationships. In the age of digital technology and artificial intelligence, as the digital existence of natural persons becomes a social reality, their personality rights should be extended into digital space, and in particular into digital social personality rights. The rational construction and implementation of digital social personality rights can be expected to safeguard the equitable existence of natural persons in digital society, to protect their personal dignity, security, and freedom, and ultimately to maintain public order in the digital space.

Zhaoji QIU

Northwest University of Political Science and Law

Lianghuo FAN

University of Macau

Weixing Hu

University of Macau

Jun Li

University of Macau

Bing SHUI

University of Macau

Jiang SUN

University of Macau

Io Cheng TONG

University of Macau

Jie XU

University of Macau

Round Table Discussion II

Xiaoqing FU (Moderator)

University of Macau

Le CHENG

Zhejiang University

Wei DING

Harbin Engineering University

Sirui HAN

Hong Kong University of Science and Technology

We investigate how FinTech adoption influences green lending in commercial banks. Using panel data from listed Chinese banks (2010–2024), we find that FinTech adoption alone does not consistently increase green lending. In banks with widely dispersed branch networks, FinTech can actually reduce green lending due to increased agency frictions and coordination challenges. However, strong internal governance (characterised by effective monitoring, standardised procedures, and strategic alignment) can mitigate these frictions and enable banks to realise the benefits of FinTech. Our findings suggest that the success of FinTech depends upon banks’ ability to integrate technology across their networks. Geographic dispersion raises implementation costs, while robust governance enhances the capacity to deploy FinTech effectively. This suggests that practices and policies designed to improve corporate governance can augment the impacts of FinTech on green lending.
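The study’s exact econometric specification is not reproduced here. As a minimal illustrative sketch of the kind of two-way fixed-effects panel regression with interaction (moderation) terms that the abstract describes, one might estimate something like the following, assuming a hypothetical bank-year dataset with placeholder column names for FinTech adoption, branch dispersion, and governance quality.

```python
# Purely illustrative: the study's exact specification is not given here.
# Hypothetical two-way fixed-effects regression of green lending on FinTech
# adoption, moderated by branch dispersion and governance quality.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical bank-year panel (2010-2024); column names are placeholders.
df = pd.read_csv("bank_panel_2010_2024.csv")

model = smf.ols(
    "green_lending ~ fintech * dispersion + fintech * governance"
    " + C(bank_id) + C(year)",  # bank and year fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["bank_id"]})

print(model.summary())
```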

To promote the realization of the value of data as a factor of production, the “Twenty Measures on Data” devotes a dedicated chapter to the basic principles for constructing a data property rights regime. However, China’s law offers an insufficient supply of norms on the legal nature of data property rights, which seriously hampers the construction and refinement of that regime. The task of legal scholarship is to study how data property rights can be given legal form, and to propose models for confirming such rights together with institutional arrangements for data licensing, transfer, and trading built upon them. Starting from the legal nature of data property rights, this paper analyzes their basic elements, including subjects, objects, and the content of the rights, and then explores pathways for confirming them. Taking this as its logical starting point, and deepening the analysis along the dimension of the authorized operation of public data, it examines practical questions such as data property rights registration and market trading mechanisms, so as to construct a structurally innovative data property rights regime.

Kung-Chung LIU

Renmin University of China,
Singapore Management University

Pablo Julián MEIX CERECEDA

University of Castilla-La Mancha

Jiyu ZHANG

Renmin University of China

Yue ZHU

Tongji University

Data and algorithms are the lifeblood of the data-driven economy, and the wave of big data and algorithms is raising new legal and policy questions; this paper addresses only the challenges they pose for intellectual property and competition law. It first analyzes the types, protection, acquisition, and use of data, with a view to promoting data production and flows. It then discusses algorithms and six principles for algorithmic auditing: selecting suitable audit targets, asymmetric auditing, proportionality, independence and professionalism, disclosure and explanation duties of those who design and operate algorithms, and a duty of reasonable care on their part. Finally, it proposes establishing a new international governance framework for algorithms and their auditing through new treaties and new international mechanisms.

Many spheres of our culture have become eminently digital. In recent years, in particular since the release of ChatGPT in November 2022, AI technologies have been rapidly changing the way we access information and build knowledge[1]. It can even be argued that the status of citizenship is increasingly mediated by digital technologies.

The possibilities for rapidly advancing individual knowledge and awareness of new realities are greater than ever: Large Language Models easily enable interdisciplinary studies for the curious student, who can discover new ways of using AI to improve consciousness and understand the surrounding world[2]. But their user-friendly design and cost-free appearance can hide certain risks[3].

Firstly, intellectual maturation of the child has been linked to the development of language, which, in turn, reflects the human need for interaction with other human beings.

Secondly, considering the broad political dimension of education, Generative AI could be taking up the traditional space of other information channels. The production of knowledge could be evolving from an open-exploration and free-debate approach to algorithm-driven adaptation of information to the user’s preferences.

Thirdly, efficiency and environmental cost should be a matter of concern. The fact that many AI tools are free of charge for the user conceals the enormous energy and water resources needed to train the algorithms and to provide the computing power that meets user demand.

Fourthly, and perhaps less neglected in legal proposals and governance strategies, there is the concern for safety and security: the safety of vulnerable individuals (in particular children, the elderly, and people with certain intellectual disabilities), but also the security of strategic activities (power generation, hospitals, airports, transportation grids) that are vulnerable to cyber-attacks, even if this latter dimension is less central to the perspective of this presentation, which concerns digital literacy and educational policy.

The spread of generative AI is not a simple iteration of tools but a restructuring of social relations. It requires our digital literacy not only to continue developing along the “user” mindset of how to use tools efficiently, but also to add a “governor” mindset of how to responsibly understand and control risks. We will explore the need to raise civic literacy in the governance of generative AI, so that, while harnessing AI to empower development, we can better ensure that its development and application remain safe and oriented toward the good.

Contemporary AI ethics research often lacks a historical perspective, and studies of AI history frequently overlook ethical dimensions. A chronological review of five categories of AI ethics research and advocacy from the 1960s to the 1980s reveals significant similarities and continuities with current discourse on the topic. Pioneering insights from interdisciplinary studies remain relevant, and reflections from individuals who have experienced a full technological cycle help dispel the hype. The AI ethical principles proposed by the pioneers — human-centeredness, transparency, fairness, privacy, and accountability — remain relevant half a century later. Reflections on why AI ethics struggle to gain full traction within organizations offer valuable lessons today. Thus, AI ethics research gains historical perspective, and AI historical research complements the ethical perspective.

Shaoyang LIN

University of Macau

Rostam J. NEUWIRTH

University of Macau

Di WANG

University of Macau

Qingjie WANG

University of Macau

The past few years have seen a growing global interest in a set of innovative and potentially disruptive technologies commonly referred to as ‘artificial intelligence’ or ‘AI’. This interest has manifested itself in a global AI governance debate, which aims to address both the potential benefits and the serious risks related to the rapid evolution and growing impact of AI on all aspects of life. The way these debates have been framed and organized suffers from a series of shortcomings, notably related to, but not limited to, the uncritical use of the concept of ‘AI’, the dominant narrative of a ‘global race to regulate AI’, and, most of all, a lack of consideration of the role of human intelligence in the use and regulation of AI. The present paper will first address each of these three shortcomings and subsequently attempt to outline different ways in which human intelligence and cognition could be enhanced in the future based on ‘four-dimensional thinking’, a term inspired by Albert Einstein’s scientific description of reality as a ‘four-dimensional space-time continuum’.

I will explore the burgeoning application of Artificial Intelligence in the historical discipline, probing its potential to fundamentally reshape both pedagogy and scholarly inquiry. I will investigate how AI tools can create dynamic, personalized learning experiences, such as interactive historical simulations and tailored primary source analysis. For research, the conversation will focus on AI’s capacity to analyze vast archives to uncover hidden patterns and narratives. Critically, I will dedicate significant attention to the inherent risks of this integration and will examine the profound danger of AI perpetuating and amplifying historical biases present in its training data, leading to distorted interpretations. Further risks include the uncritical acceptance of AI-generated content, the potential for deepfakes to erode trust in primary sources, and data privacy concerns regarding student interactions.

Katrine K. WONG

University of Macau

Panel Session I: Fundamental Concepts and Theoretical Frameworks

Qingjie WANG (Moderator)

University of Macau

Tao ZHANG

Heilongjiang University 

Jingwei XIE

Guangzhou University

Tingting SONG

Shanghai Normal University

Tianyu HAN

University of Macau

Panel Session II: Digital Literacy and Education

Mingming ZHOU (Moderator)

University of Macau

Xing YI

Universiti Malaya

Wei WANG

University of Macau

Meifang ZHANG

East China University of Political Science and Law

Jian ZHANG

Anhui University

Panel Session III: Digital Literacy for Vulnerable Groups

Yuanyuan LIAO (Moderator)

University of Macau

Huhebi GUO

Keio University

Qingyun GAO

University of Chinese Academy of Social Sciences

Supitrada Heranakaraoran

Panel Session IV: Evaluation of Frameworks and Standards

Feng WAN (Moderator)

University of Macau

Su LIN

Fuzhou University of International Studies and Trade,
Macau Polytechnic University

Xinyuan ZHANG

University of Durham

Chang-Mao LIAO

Xi’an Jiaotong University

Xuekun ZHU

Nankai University

Panel Session V: Gender, Family and Digital Technology

Qiqi HUANG (Moderator)

University of Macau

Sheng LIU

Guangdong University of Foreign Studies

Yilin BAI

China University of Political Science and Law

Xiaoying WANG

Communication University of China

Nurhalimah Siregar

Universitas Islam Internasional Indonesia (UIII)

Panel Session VI: Data Registration and Application

Sut Hong WONG, Amy (Moderator)

University of Macau

Abdulquadir Abiola APAOKAGI

Kwara State University

Baohua ZHOU

Fudan University

Linqi LEI

Xingyu LIN

University of Macau

Zhuoxin LIN

University of Macau

Faculty of Law