Berichte / Rapports

The 2025 CIPCO/IPI Workshop was dedicated to the challenges posed by artificial intelligence and data flows in the context of copyright law. Discussions covered technical foundations such as training, transparency, and the difficulties involved in the further development of language models. Legal contributions shed light on the political debate in Switzerland and the question of new rules in copyright law. Particular emphasis was placed on the tension between promoting innovation and protecting creators. Industry representatives presented their business models and their expectations regarding the legal framework. Economic analyses questioned the traditional rationale for copyright and discussed new incentive systems for data production. The role of international organizations such as WIPO, as well as the implications of the EU AI Act for Switzerland, were also addressed. Overall, the workshop demonstrated the need for clear, future-proof rules that reconcile innovation and legal certainty.


Peter Picht, Prof. Dr., Professor of Information and Communication Law, University of Zurich.

Florent Thouvenin,

Prof. Dr. iur., attorney-at-law, Full Professor of Information and Communication Law and Chair of the Steering Committee of the Center for Information Technology, Society, and Law (ITSL) at the University of Zurich.

I. Context: the CIPCO IPI AI/IP Project

The Zurich University Center for Intellectual Property and Competition Law (CIPCO) maintains a standing cooperation project with the Swiss Federal Institute of Intellectual Property (IPI) on the interplay between artificial intelligence (AI) and intellectual property (IP) law.

One milestone of this cooperation was a 2022 workshop on key issues, and the necessary research and policy agenda regarding this interplay, reflected inter alia in Thouvenin and Picht, AI & IP: Empfehlungen für Rechtsetzung, Rechtsanwendung und Forschung zu den Herausforderungen an den Schnittstellen von Artificial Intelligence (AI) und Intellectual Property (IP), sic! 2023, 507–524. Furthermore, in 2024 the CIPCO members in charge of the cooperation, Peter Georg Picht and Florent Thouvenin, authored an IPI-commissioned study on software protection in the AI age, «New Software Protection Approaches in a World (Co)shaped by AI».1

The 2025 CIPCO IPI Workshop summarized in this report continued the above cooperation with a focus on AI and dataflows. Leading experts from law, technology, and economics, and from authorities, industry, and academia, examined how AI, along its value chain, interacts with copyright law and data governance frameworks in Switzerland. By involving technical experts and industry representatives, the workshop aimed to explore how current AI models actually work – especially regarding their operations relating to copyright – and what the business models are for commercializing AI along the value chain.

As AI continues to expand in scope and application, new and complex questions are emerging not only in copyright law but also in adjacent areas, such as data (protection) and competition law. Achieving a balance between the interests of different stakeholders, particularly between AI-driven industries and creators in the copyright field, is central to identifying appropriate and sustainable answers. This includes the acknowledgement that Switzerland is a center of technological innovation and aims to maintain this position. Therefore, potential legislative measures should be designed to promote the innovativeness and competitiveness of the Swiss economy.

II. Swiss Political Landscape

Sabrina Konrad2 sketched the current political discussions regarding AI and copyright in Switzerland. AI is a focus topic of the Federal Council’s Digital Switzerland Strategy. The government aims to promote a digital transformation framework that is responsible, sustainable (ecologically, economically, and socially), and beneficial to society as a whole.

Konrad emphasized that any future Swiss framework must maintain Switzerland’s competitiveness in frontier technology R&D while ensuring adequate protection for creators and affected individuals. She explained that a motion is currently before the parliament, calling for dedicated AI rules in the Swiss Copyright Act. The response to this motion will, in line with general Swiss practice, involve the various stakeholders in the legislative process as much as possible and take into account international developments, particularly within the European Union.

Konrad also highlighted that no international standard or treaty currently addresses the collection, processing, or removal of AI training data from training sets. Model development is underway across the globe, driven by diverse actors (e.g., research institutions, private companies, and open-source communities) and within countries with diverse frameworks and interests, making harmonization difficult.

III. The Technical Foundations: Data Use and Model Architecture

Lena JĂ€ger3 provided an overview of the training of large language models (LLMs) and the persistent opacity of their internal workings. She distinguished three phases – pretraining, fine-tuning, and post-training alignment – each of which shapes model behavior in distinct ways.

Pretraining relies on vast datasets to establish statistical relationships between tokens and to enable predictive language processing. This phase is highly resource-intensive, requiring thousands of GPUs and large-scale infrastructure, and is therefore dominated by large players. Most datasets are scraped from the web (websites, social media, encyclopedias, and video transcripts); however, their precise composition is rarely disclosed. JĂ€ger identified this lack of transparency as a concern regarding privacy, copyright, and bias mitigation.
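
The predictive objective described above can be stated compactly. As a simplified sketch (notation is mine, not the speaker's), pretraining minimizes the negative log-likelihood of each token given the tokens that precede it:

```latex
% Next-token prediction objective over a training corpus x_1, ..., x_T
\mathcal{L}(\theta) = - \sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid x_1, \ldots, x_{t-1}\right)
```

Minimizing this loss over web-scale corpora is precisely what makes the composition of the training data so consequential: whatever statistical regularities the corpus contains, the parameters θ absorb.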

Fine-tuning employs small curated datasets to adapt models to specialized tasks. Although not strictly necessary for functionality, it is indispensable for domain-specific performance. Post-training alignment, often conducted after deployment, refines models through human feedback (e.g., Reinforcement Learning from Human Feedback, or RLHF) to ensure that outputs conform to human values and preferences.

JÀger stressed that most LLMs remain «black boxes»: weights, source code, and especially training data are inaccessible. Even models that release weights (e.g., Mistral, LLaMA, DeepSeek) withhold datasets and procedures, while nominally open models such as BERTa, Falcon, and OLMo provide insufficient transparency for reproducibility. Access to weights alone does not permit the reconstruction of the learning trajectory without the original data and training sequence.

This opacity is particularly problematic for memorization. Although LLMs generate outputs probabilistically rather than storing text verbatim, memorization occurs more frequently than is desirable. Distinctive, infrequent, or stylistically unique items produce concentrated parameter updates, making them more easily reproducible under specific prompts. Iconic materials, such as cartoon characters, carry a heightened risk of reproduction. In contrast, generic or common sentences diffuse across parameters and are difficult to reconstruct.

JĂ€ger outlined possible safeguards against data reconstruction and their legal implications. Retraining from scratch after deleting data is technically sound but economically and ecologically problematic. Machine unlearning remains experimental, costly, and unreliable. Output suppression, whereby systems refuse to display copyrighted or sensitive content, mitigates exposure at the interface but leaves the internal representations intact.

Participants underscored in this context that models cannot be considered «copies» of their training data: inputs are tokenized, vectorized, and distributed across billions of parameters. Retrieval of specific text requires manipulation of probabilistic structures through engineered prompts rather than the extraction of stored content.

This workshop session also examined recursive training on synthetic data. Repeated fine-tuning on model-generated outputs risks producing «model collapse,» whereby outputs converge toward high-probability but low-diversity sequences, ultimately degrading into repetitive or degenerate tokens.
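
The collapse dynamic can be illustrated with a toy simulation (purely illustrative; real models are vastly more complex). A categorical «model» is repeatedly refit to finite samples drawn from its predecessor, and the distribution's entropy – a proxy for output diversity – declines, because tokens whose probability drops to zero can never reappear:

```python
import math
import random
from collections import Counter

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

random.seed(0)
VOCAB = list(range(10))
probs = [1 / len(VOCAB)] * len(VOCAB)  # generation 0: uniform over tokens

entropies = []
for generation in range(20):
    entropies.append(entropy(probs))
    # "Train" the next generation on a finite sample of its predecessor's
    # outputs: a maximum-likelihood refit of the categorical distribution.
    sample = random.choices(VOCAB, weights=probs, k=50)
    counts = Counter(sample)
    probs = [counts[t] / len(sample) for t in VOCAB]

# Diversity shrinks across generations: once a token's probability hits
# zero it is never sampled again, so the support can only contract.
```

The same absorption effect, scaled up, is what drives outputs toward high-probability but low-diversity sequences when models are recursively trained on their own synthetic data.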

Finally, the participants reflected on why language models outperform other AI modalities. Natural language tokenization is comparatively straightforward, and human languages exhibit rich internal structures and predictability, making them highly amenable to machine learning. Comparable patterns exist in chemistry, with its discrete symbolic rules. In contrast, sound, images, and videos introduce added complexities, such as waveforms, spatial correlations, and temporal dependencies, even though all content is ultimately reduced to numerical tensors, enabling unified training methodologies across all formats and modalities.

IV. Generative AI and Data Usage

Abraham Bernstein’s presentation complemented the technical discussion by deepening the conceptual, philosophical, and legal analysis of generative AI. He set out to correct a widespread misconception: despite public discourse often associating «AI» with generative systems, most AI technologies deployed today – particularly in industry – are still discriminative models. These systems do not generate new content; instead, they classify, label, and evaluate inputs. Bernstein illustrated this by noting that even complex modalities, such as sound, can be represented purely as numerical vectors describing waveforms, enabling discriminative models to detect patterns without ever producing new audio. Against this backdrop, generative models constitute a qualitatively different paradigm. From a probabilistic perspective, whereas discriminative models learn the conditional probability of a label given data, generative models learn the full joint probability distribution of the data itself, allowing them to synthesize entirely new images, sounds, or text based on statistical regularities rather than direct copies of training instances.
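
The probabilistic distinction Bernstein drew can be written out in standard notation (a sketch, for data $x$ and label $y$):

```latex
% Discriminative model: conditional probability of a label given the input
p_\theta(y \mid x)

% Generative model: the joint distribution of the data itself, from which
% entirely new samples x' can be drawn
p_\theta(x, y), \qquad x' \sim p_\theta(x) = \sum_{y} p_\theta(x, y)
```

A discriminative model can only score or classify what it is shown; a model of the joint distribution can be sampled, which is what enables synthesis of new images, sounds, or text.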

Bernstein then turned to diffusion models, which constitute the dominant method for contemporary image generation. He provided a conceptual explanation of how diffusion systems create new synthetic data. The fundamental idea is a two-part process consisting of a forward diffusion stage and a reverse-diffusion stage. During the forward process, noise is gradually added to the clean training examples in a sequence of steps until the original data become indistinguishable from pure noise. The reverse process trains a neural network to invert this transformation step by step by predicting the original «clean data» (or denoising the image) in each stage. At sampling time, generation proceeds in the opposite direction: the model begins with randomly sampled noise and iteratively applies the learned denoising steps. After the full sequence of reverse transitions, the result is a newly synthesized image that did not previously exist but reflects the statistical patterns of the training dataset.
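
The two-stage process can be sketched in the usual denoising-diffusion notation (a simplified rendering; details vary across models):

```latex
% Forward process: Gaussian noise is added over steps t = 1, ..., T
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\right)

% Reverse process: a learned network inverts each noising step
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right)

% Sampling: start from pure noise x_T \sim \mathcal{N}(0, \mathbf{I}) and
% apply the learned reverse transitions down to a new sample x_0
```

The generated $x_0$ is new in the sense that it was synthesized from random noise, yet it reflects the statistical patterns of the training data encoded in $\theta$.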

Building on this technical foundation, Bernstein linked diffusion models to broader questions regarding generative versus discriminative systems. He emphasized that, in practice, these two paradigms can be intertwined, as in so-called generative adversarial networks (GANs). In GANs, generative models produce candidate outputs, such as images, whereas discriminative models are used to assess their quality or realism. This combination of generation and evaluation allows generative systems to produce increasingly convincing synthetic data and contributes to the perception – sometimes mistaken – that the models contain copies of specific works. Bernstein argued that the mere fact that a picture is «in the model» in the sense of influencing its weight distributions does not imply that the picture is stored or retrievable. This distinction is essential for evaluating copyright infringement claims.

To further illustrate the problem of similarity and perception, Bernstein invoked the example of MP3 compression. Compressed audio files are not identical to the original sound waves; they are heavily transformed representations in which the human auditory system fills in the missing information. Yet, listeners perceive them as being the same. The analogy raises the question of how similarity should be defined in copyright law when human perception, technical transformations, and statistical learning processes all influence whether two works appear «the same.» Generative models complicate these assessments further by producing outputs that may be perceptually similar to existing works, or mimic stylistic features while remaining technically novel.

He then turned to retrieval-augmented generation (RAG), highlighting it as a hybrid technique that combines traditional information retrieval with generative summarization. In a RAG system, relevant documents are first retrieved using classical information retrieval or search methods. These documents are then passed to an LLM, which synthesizes them into a coherent and often highly compressed response. Bernstein explained that RAG exemplifies the convergence of the «old world» of symbolic search and indexing with the «new world» of neural text generation. This also sheds light on why some publishers are concerned: users increasingly consume AI-generated summaries instead of accessing the original content.
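
The retrieve-then-generate pattern behind RAG can be sketched minimally as follows. All names and the toy corpus are illustrative assumptions; a real system would use a vector index and an LLM in place of these stand-in functions:

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# (1) retrieve relevant documents, (2) have a generator answer from them.

DOCUMENTS = [
    "Swiss copyright law protects literary and artistic works.",
    "Diffusion models generate images by iterative denoising.",
    "The EU AI Act regulates market access for AI systems.",
]

def retrieve(query, docs, k=1):
    """Rank documents by naive word overlap with the query.
    A production system would use embeddings and a vector index instead."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def generate(query, context):
    """Stand-in for the LLM call that synthesizes retrieved documents
    into a compressed answer."""
    return f"Based on: {context[0]} (answering: {query})"

query = "What does the EU AI Act regulate?"
answer = generate(query, retrieve(query, DOCUMENTS))
```

The compression step in `generate` is exactly what worries publishers: the user receives a condensed answer grounded in the retrieved source without ever visiting it.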

From an economic and societal perspective, Bernstein emphasized that generative AI can reduce development costs and expand creative possibilities; however, overly restrictive opt-out mechanisms for training data could diminish model quality and reduce the overall societal value of AI systems. Bernstein concluded by posing a fundamental question: how should the goals of copyright law – promoting creativity, protecting authors, and enabling societal progress – be interpreted in an era where generative models transform the nature of creation and reproduction?

V. Industry Perspectives

The industry session revealed shared goals but also notable divergences in business model strategies.

IBM, represented by Markus Danhel,4 positions itself as an enterprise-oriented provider of AI systems, infrastructure, and data products rather than consumer applications. Its business model emphasizes licensing and integration of technologies that extract, structure, and govern complex data at scale. AI has become foundational across IBM’s domains, with long-term resilience supported by research in quantum-safe cryptography. For the company, AI’s value depends on costly and organizationally demanding data curation, including cleaning, annotation, validation, and governance. Strategically, IBM views Switzerland as well placed to play a leading role in AI and advocates a legal framework that safeguards this advantage while addressing ethical and societal concerns.

Google, represented by Anton Aschwanden,5 situates Switzerland within a hybrid ecosystem of global firms and agile startups, including creative industries, where generative and assistive tools extend rather than replace human work. Observed shifts in user behavior suggest, in Google’s view, increasingly complex, multi-layered search queries that sustain click-through rather than displacing external content. Commercially, Google’s core revenue remains advertising, with AI applied to optimize ad systems and operational decisions (e.g., routing, logistics, and resource management). Regarding lawmaking, Google supports meaningful rules but warns that excessive compliance burdens could drive the relocation of computing-intensive activities. Foundation model training is not currently conducted in Switzerland, but a balanced legal framework could help change this.

Syntheticus, represented by Aldo Lamberti,6 develops software for generating synthetic datasets that replicate the statistical properties of real data without accessing protected records. Its approach enables privacy-compliant AI development and the commercialization of synthetic datasets within legal constraints. The firm identifies the absence of a widely accepted technical standard for privacy-preserving synthetic data as a critical gap, stressing that anonymity must be verified case by case with respect to risks such as singling out, linkage, and inference. To improve this situation, Syntheticus collaborates with the IEEE Standards Association to convene experts, publish best-practice guidance, and develop a global standard enabling AI training exclusively on compliant synthetic data.

Microsoft, represented by Sonia Cooper,7 pursues a multi-pillar strategy committed to responsible and ethical AI, encompassing the embedding of AI across products and the delivery of services via Azure (including prebuilt models, APIs, and advanced systems). The company integrates Copilot-style assistants to enhance productivity and offers sector-specific solutions in automotive, healthcare, and finance, alongside developer tooling. A central focus is rights-holder control over training data. Microsoft supports scalable, flexible mechanisms – with details under discussion in standardization forums – that allow content owners to express preferences regarding inclusion in training and outputs. Nonetheless, unresolved questions remain regarding economic impacts and dataset update cycles.

VI. AI within the Broader Economy

Hansueli Stamm8 explained that copyright law traditionally incentivizes creativity by ensuring scarcity and exclusivity, thereby creating economic rewards for authors and creators. AI challenges this paradigm by autonomously producing creative outputs, potentially reducing the need for traditional incentives.

He emphasized that while the creative sector holds symbolic and cultural importance, it contributes less than 2% to Switzerland’s gross domestic product (GDP) and is not a central driver of the national economy. Against this background, the traditional rationale for copyright is increasingly questioned: since AI can now generate works independently, debate arises over whether copyright protection remains necessary to incentivize creativity. From an economic perspective, the scarcity-based justification for copyright appears to be eroding. At the same time, AI development entails significant costs, potentially introducing new considerations regarding the need to compensate and incentivize pertinent investments.

Looking ahead, it remains unclear how incentives for creativity and innovation will evolve. Stamm identified two key questions: (i) should traditional protection periods for creative works be shortened; and (ii) how should copyright or related rights apply to AI-generated outputs, including the degree of human input involved? Current empirical data on AI use and its impact on the Swiss economy are extremely limited, making evidence-based policymaking difficult. Despite this lack of data, legislative initiatives are pressing forward, often relying on theoretical assumptions rather than concrete economic analysis.

In sum, the rapid growth of AI in creative processes requires a reconsideration of copyright’s foundational assumptions, particularly regarding scarcity, incentives, and the evolving balance between human and machine-generated creativity.

VII. Data Exhaustion, Market Incentives, and the Future of Training Data

Christian Peukert9 examined the economics of AI and the role of copyright within it. He began by citing research estimating that end users derive about $97bn in benefits from AI, compared to about $7bn in industry revenues. He then argued that this value is at stake because of the emerging phenomenon of «data exhaustion,» which describes a situation in which the internet no longer supplies sufficient volumes of high-quality, up-to-date data for training future AI models. Digital information becomes stale over time; after a certain period, data lose their relevance, accuracy, and contextual validity. This deterioration reduces the commercial value that users can derive today from models trained on yesterday’s data.

According to Peukert, this creates structural economic challenges. If no valuable or fresh data are left, and if no one is incentivized to produce new high-quality data, model performance will stagnate and societal welfare will decline. Solving this problem requires renewed incentives for data production. In other words, if society wants to avoid data exhaustion, someone must pay for the creation of new data, and that «someone» is the user, because AI companies will pass on higher costs in the form of higher prices.

Peukert illustrated this point with the example of Unsplash, a photo-sharing platform. After Unsplash made data from a random set of photographers available for AI training10 without compensation, these photographers uploaded fewer high-quality images. When Unsplash later introduced a payment program, the same photographers were three times more likely to participate. For Peukert, this case demonstrates that legislators should encourage payment mechanisms because, without financial incentives, creators will not continue to generate high-quality works that can serve as training data.

He argued that current copyright laws are not the appropriate instrument to address this problem. Drawing on a formal economic model,11 he analyzed options for copyright reform: leaving the problem to a licensing market («Opt-in») creates, in his view, a variety of issues. First, given the broad scope of copyright, it is practically impossible to locate all potential rightsholders in large datasets and negotiate license terms on an individual basis. Furthermore, asymmetric bargaining power would not lead to socially optimal outcomes.

Peukert argued that opt-out mechanisms, which allow rights holders to exclude their content from training datasets, are also not a viable policy option. Opting out is impractical because it is difficult to monitor, enforce, and scale. More importantly, if widely adopted, opting out would exacerbate data scarcity and lead to biased datasets.

A statutory licensing system – the solution Peukert views most favorably – could create a predictable, enforceable framework. Under such a system, AI developers would be legally permitted to use protected content for training, but would be required to pay rights holders, ensuring continuous data supply. This approach maintains incentives for creators, enhances legal certainty, and is easier to administer at scale than opt-in or opt-out systems.

According to Peukert, a statutory license best aligns economic incentives with societal welfare; it compensates creators, preserves a steady supply of valuable data, and avoids the inefficiencies of individualized control mechanisms. He emphasized that restoring and maintaining the value of AI requires broad participation in the creation of data, meaning that policy should create incentives for people to continue to provide a steady flow of data, rather than providing pathways for rightsholders to withhold their data.

VIII. Publicity Rights and Personality

Alexander Cuntz12 provided an overview of publicity rights, which in many jurisdictions protect commercially valuable aspects of a person’s identity, such as image, voice, and name. These attributes can generate significant economic value as documented in recent WIPO research13 on publicity rights granted across US states. From an economic perspective, it matters little whether a celebrity earns income through music, merchandising, or licensing agreements because the underlying source of value is the persona itself.

In Switzerland, however, there is no distinct «right of publicity» comparable to that in US law. Instead, elements of identity are protected under general personality rights. In certain cases, unfair competition law also provides protection. While these instruments offer safeguards, they are not structured as explicit economic rights over the commercial use of one’s image or identity.

With respect to AI training practices, Cuntz raised the question of whether using a person’s name, image, or voice to train AI models would be permissible under Swiss law. Under general personality rights, purely technical training processes that do not reproduce an identifiable likeness may be permissible, whereas the creation or commercial use of synthetic content that clearly imitates a specific individual would likely violate personality rights. Therefore, legal assessment remains context dependent and unsettled.

IX. Global Governance and the Role of WIPO

Ulrike Till14 noted that WIPO’s engagement with AI long predates the current wave of generative AI systems. It first addressed AI-related issues in 1991 and, since 2019, has convened the WIPO Conversation on IP and AI. Despite rapid technological progress, fundamental questions – such as the role of human contribution, the scope of protection, and the legality of using protected works for training – remain unresolved.

Till offered a global perspective on AI and IP. No international treaty currently governs AI training and IP, nor is one expected in the near term. She argued that this absence is not necessarily detrimental; given the complexity of the issues and the pace of innovation, additional time may allow for more coherent and future-proof regulation. Policymakers face the challenge of deciding whether genuinely new legal instruments are required or whether existing frameworks can be adapted. The answer depends heavily on empirical evidence, which is still lacking. Member states perceive both risks and opportunities, yet many governments fear competitive disadvantages if they act too slowly or too restrictively in a domain increasingly viewed as strategically vital.

Till emphasized that the debate should not be confined to copyright. Rather, it must encompass the broader innovation wave unleashed by AI, which is reshaping not only copyright-relevant sectors but also the creative industries as a whole.

X. EU AI Act and Swiss Implications

Alexander Peukert15 explained the structure and global implications of the EU AI Act, emphasizing that it functions primarily as a market-access regulation.

The Act applies to any AI system and general-purpose AI model used in the EU, irrespective of where it was trained, meaning that companies outside the EU, including those in Switzerland, must ensure compliance if they wish to serve the EU market.

While the AI Act operates alongside copyright law, it is not copyright legislation; it primarily regulates aspects such as transparency, risk management, and data governance for AI systems rather than the legal status of copyrighted works. The Act explicitly covers general-purpose AI models but does not currently extend to RAG models unless they fall into a high-risk category. Therefore, RAG models remain subject only to existing EU copyright regulations.

A key element in practice is the text and data mining (TDM) exception provided by EU copyright law, which allows AI companies to analyze protected content for research purposes under certain conditions. Industry standards, such as the AI Code of Practice, further clarify responsibilities: providers of general-purpose AI models must not circumvent technical protection measures or facilitate piracy, but should respect opt-out mechanisms such as robots.txt. A future IETF update to the robots.txt protocol might provide content owners with more choices regarding the use of their data for AI training.
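
By way of illustration, a robots.txt opt-out might look as follows. The crawler tokens shown (e.g., «GPTBot») are published by individual providers and are used here as illustrative assumptions; they are not part of any binding standard:

```
# Disallow a specific AI training crawler from the whole site
User-agent: GPTBot
Disallow: /

# Allow ordinary crawling to continue
User-agent: *
Allow: /
```

Because such tokens are honored voluntarily and vary by provider, the convention currently offers rights holders signaling rather than enforcement – one reason standardization efforts attract attention.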

XI. Concluding Discussion

The concluding discussion, featuring contributions from the aforementioned speakers, as well as from Josef Drexl,16 Marc Hottinger,17 Florent Thouvenin, and Peter Georg Picht, addressed whether Switzerland’s current legal and institutional context sets effective rules for the copyright dimensions of AI and whether the empirical basis is sufficient to support further rulemaking.

Participants recognized the rapid pace of technological change, which complicates legislative design, but they also underscored the risks of inaction, including future legal uncertainty and economic distortions. The discussion acknowledged the growing capacity of AI models to tailor outputs to the respective user, including by learning iteratively from user reactions to previous outputs. These and other capabilities raise new questions about the locus of value creation, accountability, and the distribution of benefits across stakeholders. While several contributors questioned whether knowledge is adequate to craft robust, future-proof provisions at this stage, there was broad acknowledgment that some legislative intervention will be necessary to avoid legal uncertainty and strongly differing economic outcomes depending on party leverage.

Legal certainty looms large as a determinant of Switzerland’s attractiveness for AI development and deployment. Industry representatives indicated that predictable rules would contrast favorably with the limited legal certainty in the US, where numerous copyright-related AI cases are pending and may yield divergent outcomes. Even imperfect but coherent rules could improve Switzerland’s competitive position by providing a stable framework for both rights holders and AI developers.

Participants emphasized that the challenge lies in calibrating instruments that preserve openness to research and innovation while maintaining credible protection for copyright interests. This is all the more difficult as AI’s impact reaches across nearly all copyright-based business models.

The interaction with emerging EU legal frameworks prompted further concerns. Legal scholars have expressed reservations about the AI Act, including the tensions it creates with EU copyright rules and EU Member State copyright law. These reservations are heightened by the AI Act’s only partial treatment of the AI/copyright interplay: general-purpose and foundation models are addressed, for example, whereas RAG is not. Swiss rulemaking should therefore rest on a more holistic consideration of the breadth of AI practices, including model training, downstream use, and data governance.

Discussants identified judicial developments as pivotal. In the near future, court decisions, especially in the US and EU, are expected to shape the contours of lawful training practices, transformative use, and derivative outputs. In parallel, standardization in AI technology – whether induced by the AI Act or other bodies such as standard-setting organizations – will materially affect technological trajectories and business models. Rulemaking should therefore respond to case law, guide standard-setting, and focus on guardrails flexible enough not to foreclose beneficial technical development.

International alignment across a wide range of jurisdictions was deemed desirable but unlikely in the near term due to widening policy and technology gaps among countries and differing legal traditions. Between the fair use and fair dealing approaches prevalent in Anglo-American jurisdictions and the more statutory (including regulatory) approaches in parts of Europe, the discussion posited guided self-regulation as a potential middle ground. Participants cautioned against the uncritical adoption of foreign models (e.g., opt-out regimes with uncertain ramifications) while urging Switzerland to avoid unnecessarily harmful divergence from major regimes, particularly the EU.

Participants agreed that future legislation should pursue the following main objectives: balance access and incentives; provide clear rules for training AI models and using AI systems; foster research and innovation, including the development of new AI-based products and services; ensure proportionate protection and compensation for rights holders; and maintain Switzerland’s competitiveness through technology-friendly, market-aware, and adaptable instruments.

Regarding rulemaking models, statutory licensing has emerged as a promising approach for reconciling access needs with compensation. Overly restrictive or premature solutions, including rigid opt-out regimes, were seen as potentially reducing the societal value of AI, whereas licensing – with careful attention to transaction costs and collective benefits – may offer a balanced path.

XII. Key Take-aways

The principal findings shared by most, though not all, participants are:

  • (1)

    The training of AI models raises a range of complex and novel technical, economic, and legal questions. The legal issues cannot be convincingly resolved across the board within the framework of current copyright law.

  • (2)

    Legal certainty should be established in the interest of all stakeholders. This requires a revision of the Swiss Copyright Act (URG). The statutory framework should strike a balance between the interests of rights holders and those of companies that develop and use AI models. It should promote innovation and creation, as well as access to and exploitation of both in markets, and maximize the benefits of AI for society as a whole.

  • (3)

    Although the solution that would yield the greatest societal benefit can be analyzed in theory, it is currently not possible to ground the choice of a specific solution in robust empirical evidence. This counsels against hasty «quick-fix» decisions.

  • (4)

    Switzerland should avoid harmful divergences from solutions adopted by other countries or regions (particularly the EU), while not necessarily and uncritically importing those solutions. A critical assessment of the opt-out mechanism established by the EU AI Act and the Digital Single Market Directive is particularly relevant.

  • (5)

    With regard to AI training, participants discussed, alongside contractual mechanisms, the following options in particular: (i) the full exemption of the use of works by means of a cost-free limitation (full limitation), and (ii) the exemption of use combined with an obligation to remunerate rights holders via a collective management organization (statutory license).

  • (6)

    Although the introduction of an opt-out from the exemption is conceivable, its practical implementation seems difficult. Technical standards for such an opt-out are currently being developed, but it remains unclear whether they will be effective and widely adopted in practice.

  • (7)

    The legal solutions to be developed should not only cover the presently known uses of works for AI training but also be formulated broadly enough to remain resilient to future technological developments.
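To make the opt-out mechanism discussed in point (6) concrete, the following is a minimal, purely illustrative sketch of how an AI crawler might check a machine-readable training reservation before ingesting a web resource. The header names follow the draft W3C TDM Reservation Protocol («TDM-Reservation», «TDM-Policy»); everything else, including the example policy URL, is a hypothetical placeholder. Whether such signals prove effective and widely adopted in practice is precisely the open question noted above.

```python
# Illustrative sketch, not a normative implementation: checking a
# machine-readable text-and-data-mining (TDM) opt-out signal as proposed
# by the draft W3C TDM Reservation Protocol.

def tdm_opt_out(headers):
    """Return (reserved, policy_url) for a dict of HTTP response headers."""
    # HTTP header names are case-insensitive, so normalize them first.
    h = {k.lower(): v for k, v in headers.items()}
    # "TDM-Reservation: 1" signals that mining/training rights are reserved.
    reserved = h.get("tdm-reservation", "0").strip() == "1"
    # An optional "TDM-Policy" header may point to licensing conditions.
    policy = h.get("tdm-policy") if reserved else None
    return reserved, policy

# A publisher reserving TDM rights and pointing to a (hypothetical)
# machine-readable licensing policy:
reserved, policy = tdm_opt_out({
    "Content-Type": "text/html",
    "TDM-Reservation": "1",
    "TDM-Policy": "https://example.org/tdm-policy.json",
})
```

The sketch also illustrates the practical difficulty raised in the discussion: the signal only works if publishers emit it consistently and crawlers voluntarily query and honor it.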

Fussnoten

  1. â€čwww.ige.ch/fileadmin/user_upload/New_Approaches_to_Software_Protection_in_a_World__co__shaped_by_AI.pdfâ€ș, 1 February 2026. ↩
  2. Deputy Head of the Copyright Section at the IPI. ↩
  3. Associate Professor Digital Linguistics at the University of Zurich. ↩
  4. Head AI Software – Austria & Switzerland at IBM. ↩
  5. Head of Government Affairs & Public Policy for Switzerland, Austria, and International Organizations in Europe at Google. ↩
  6. Founder and CEO of Syntheticus. ↩
  7. Assistant General Counsel, Open Innovation Team at Microsoft. ↩
  8. Head of Economics Division at the IPI. ↩
  9. Professor of Digitization, Innovation, and Intellectual Property at the Faculty of Business and Economics, University of Lausanne. ↩
  10. â€čpapers.ssrn.com/sol3/papers.cfm?abstract_id=4807979â€ș, 1 February 2026. ↩
  11. â€čwww.europarl.europa.eu/RegData/etudes/STUD/2025/778859/IUST_STU(2025)778859_EN.pdfâ€ș, 1 February 2026. ↩
  12. Head of Creative Economy Section at the World Intellectual Property Organization. ↩
  13. â€čtind.wipo.int/record/58921â€ș, 1 February 2026. ↩
  14. Director IP and Frontier Technologies Division. ↩
  15. Professor of civil law and commercial law with a specific focus on international intellectual property law at Goethe University, Frankfurt; Co-Chair of a Working Group of the General-Purpose AI Code of Practice, responsible for the drafting of the copyright-related rules, appointed by the European Commission. ↩
  16. Director of the Max Planck Institute for Innovation and Competition. ↩
  17. Legal Counsel at the Federal Institute of Intellectual Property. ↩