Keynote Speakers
FU Xiaolan (傅曉嵐)
University of Oxford
Professor of Technology and International Development
Fellow of the Academy of Social Sciences (UK)
Iris Eisenberger
University of Vienna
Professor of Innovation and Public Law
JI Weidong (季衛東)
Shanghai Jiao Tong University
Senior Professor of Humanities and Social Sciences; Dean of the China Institute for Socio-Legal Studies
Amid today’s surge of technology and innovation, methods for evaluating intangible assets such as technology remain a persistent “bottleneck” constraining innovation and technological progress. Traditional valuation approaches, on the one hand, fail to fully reflect the value of technology-driven startups; on the other hand, they cannot objectively and accurately judge the value that innovative technologies bring to industrial development.
This study proposes the “Technological Value–Utility Theory” and, using machine learning, deep learning, and econometric techniques, analyzes and trains on sector-specific vertical databases across industries to develop an AI-driven method for technology valuation. Compared with traditional valuation methods, this approach is more objective, accurate, efficient, and accessible, while offering greater adaptability and interpretability.
The method provides entrepreneurs with a level playing field and momentum for sustainable development, and equips investors with objective, scientific capabilities for technology assessment—helping to build an inclusive, efficient, and virtuous innovation ecosystem.
Large language models such as DeepSeek, ChatGPT and Gemini are set to transform the role of language in society. They are changing the way texts are produced and used, as well as our understanding of linguistic ability. They are also altering how language fosters inclusion and participation in social and political activities. AI-based language production is already transforming academia, education, and the job market — a trend that is set to continue. AI affects almost all areas of life, and large language models impact all text-based academic disciplines and professions. One of the fields at the centre of this potential transformation is law. LLMs will challenge the role of lawyers and the place of law in society. Ultimately, LLMs will also force law schools to reconsider their curricula and the legal education and skills they provide. LLMs that can produce text, analyse documents and judgements, summarise laws and conduct research merit our attention. The skillset required of future lawyers will likely differ greatly from that required today. Therefore, we must evaluate the skills our students will require. Theory, system understanding, argumentative skills and a general education may once again become paramount. This talk will investigate how legal education should adapt in light of current techno-societal developments. It will demonstrate how the nature of our legal systems, practices and theory is changing. What will it mean to be a lawyer in this new environment? What kind of legal education will be required?
Against the background of countries' turn toward techno-accelerationism, this talk examines the complex dilemmas facing ethics-based AI governance. The speaker argues that the technical approach of achieving AI safety through safe AI in fact offers a way out of these dilemmas. If such technical solutions are further combined with the principle of due process, they will generate substantive value as well as social, ethical, and legal effects, and transform the adversarial game among different policy positions into a cooperative one. In this sense, this technology- and procedure-based approach to AI governance may be regarded as a new paradigm, and as a cornerstone of a global governance regime.
Panel Speakers
Xingzhong YU (於興中)
University of Macau
Chair Professor, Department of Global Legal Studies; Director of the Institute for Advanced Studies
Rostam J. NEUWIRTH
University of Macau
Distinguished Professor, Department of Global Legal Studies
Kung-Chung LIU (劉孔中)
Renmin University of China; Singapore Management University
Professor of Law
Data and algorithms are the lifeblood of the data-driven economy, and the wave of big data and algorithms is raising new legal and policy questions; this paper addresses only the challenges they pose to intellectual property and competition law. It first analyzes the types, protection, access, and use of data, with a view to promoting data production and flow. It then discusses algorithms and six principles of algorithm auditing: selecting suitable audit targets; asymmetric auditing; adherence to proportionality; independence and professionalism; disclosure and explanation duties for algorithm designers and operators; and a duty of reasonable care for algorithm designers and operators. Finally, it proposes establishing a new international governance framework for algorithms and their auditing through a new treaty and new international mechanisms.
LIU Yinliang (劉銀良)
Peking University
Professor of Law; Director of the Center for Science, Technology and Law
FU Xiaoqing (傅曉青)
University of Macau
Professor of Finance and Business Economics
CHENG Le (程樂)
Zhejiang University
Professor, Guanghua Law School; Executive Vice Dean of the Academy of International Strategy and Law
A natural person's personality rights protect the elements and interests of his or her personality and rank among the most important individual rights. They include social personality rights, which safeguard people's social interactions and relationships. In the era of digital and AI technologies, the digital existence of natural persons has become a social reality, and personality rights must therefore extend into the digital sphere, above all as digital social personality rights. Properly constructing and implementing digital social personality rights can safeguard natural persons' legitimate existence in digital society, protect their dignity, security, and freedom, and thereby maintain public order in digital society.
DING Wei (丁瑋)
Harbin Engineering University
Professor of Law
ZHANG Jiyu (張吉豫)
Renmin University of China
Associate Professor of Law; Executive Dean of the Institute for Future Rule of Law
Pablo Julián Meix Cereceda
University of Castilla-La Mancha
Professor of Administrative Law
To unlock the value of data as a factor of production, the "Data Twenty Measures" devotes a chapter to the basic principles for building a data property rights system. However, the insufficient supply of legal norms in Chinese law on the legal nature of data property rights has seriously hindered the construction and refinement of that system. The task of legal scholarship is to study how data property rights can be given legal form, to propose models for confirming such rights, and, on that basis, to design institutional arrangements for data licensing, transfer, and trading. Starting from the legal nature of data property rights, this article analyzes their basic elements (subject, object, and content of the rights) and then explores paths for confirming them. Taking this as its logical starting point, and deepening the analysis through the lens of the authorized operation of public data, it examines practical questions such as data property registration systems and market trading mechanisms, so as to construct a structurally innovative data property rights regime.
The spread of generative AI is not a simple iteration of tools but a restructuring of social relations. It requires our digital literacy not only to keep developing the "user" mindset of how to use tools efficiently, but also to add a "governor" mindset of how to understand and control risks responsibly. This talk explores the need to raise civic literacy in the governance of generative AI, so that while we harness AI to empower development, we can better ensure that its development and application remain safe and oriented toward good.
HAN Sirui (韓斯睿)
The Hong Kong University of Science and Technology
Assistant Professor, Department of Economics
ZHU Yue (朱悅)
Tongji University
Assistant Professor, School of Law
Contemporary AI ethics research often lacks a historical perspective, and studies of AI history frequently overlook ethical dimensions. A chronological review of five categories of AI ethics research and advocacy from the 1960s to the 1980s reveals significant similarities and continuities with current discourse on the topic. Pioneering insights from interdisciplinary studies remain relevant, and reflections from individuals who have experienced a full technological cycle help dispel the hype. The AI ethical principles proposed by the pioneers — human-centeredness, transparency, fairness, privacy, and accountability — remain relevant half a century later. Reflections on why AI ethics has struggled to gain full traction within organizations offer valuable lessons today. In this way, AI ethics research gains a historical perspective, and AI history research gains an ethical one.