The Researcher and the Machine: Rethinking Innovation in the Age of AI Tools

WRITTEN BY: MARYAMA ELMI, JD

In this final article of the Owning Innovation series, we return to where innovation begins: the creation of ideas. This piece explores the governance blind spots emerging in the era of AI-assisted innovation, a term describing the increasingly common practice of using generative AI tools to support knowledge creation, research, and problem-solving.

Introduction

So far in the Owning Innovation series, we’ve asked who owns innovation, how it is protected, and who it should serve.

This final piece brings the series full circle by turning our focus to the very beginning of the innovation pipeline: knowledge creation.

Traditionally, knowledge creation has been something only humans could do through research, experimentation, and collaboration (Nestian, Tita, & Guta, 2020). It meant building new ideas, developing novel methods, and expanding what we understand to be true (Garfield, 2018).

But in the age of AI, we’re witnessing a shift. AI-enabled knowledge creation happens differently. It involves humans using machines that can analyze large datasets, detect patterns, and generate outputs that look and feel like original contributions (Jarrahi et al., 2022). It is a faster process that can be an amazing companion in knowledge creation, but it also raises new questions about authorship, ownership, and responsibility.

This piece will explore how AI-assisted innovation has changed how we understand knowledge creation, and why that shift matters. Because AI-assisted innovation happens quietly and often, it has largely flown under the radar in governance debates.

But before we continue, let’s define what AI innovation is. AI innovation includes both:

  • AI-Assisted Innovation: where AI is used as a tool to accelerate or enhance the innovation process, such as using machine learning to generate product ideas, analyze research data, or optimize design workflows (Orchidea, 2023).

  • Innovation in AI itself: where the development of new AI models, algorithms, or applications is itself the innovation, from large language models to novel architectures and generative systems.

This final article closes the Owning Innovation series by shining a light on this under-examined area and pointing toward what comes next: the Responsible Innovation Toolkit.

The Quiet Reshaping of Innovation As We Know It

AI-assisted innovation is reinventing how knowledge is produced across industries. From academia, to journalism, to the arts, generative AI has become a co-author and collaborator to many. Yet, most existing frameworks still treat innovation as a purely human process, rooted in traditional notions of authorship, despite AI already reshaping what innovation even looks like.

Let’s consider a familiar scenario:

A researcher wants to clean up the language in their academic abstract, so they paste it into an AI chatbot. They also need help visualizing a dataset, so they use a generative tool to create a graph. They even summarize a confidential research proposal using an AI assistant.

These may feel like harmless, even efficient, uses of AI. But AI-assisted outputs often sit in a legal and ethical grey area, where no clear rules apply. This creates a growing set of risks, including:

  • Intellectual Property Risks: Who owns an output shaped by both human and machine?

  • Commercialization Risks: Can AI-assisted work be patented, licensed, or monetized under current rules? What are the gaps?

  • Privacy & Data Handling Risks: What happens when sensitive or confidential data is fed into third-party AI tools with unclear storage or use policies?

AI-assisted innovation is not a niche use case. It has become the norm across sectors. And without updated governance, the risks will only multiply.

Intellectual Property: Blurred Lines of Ownership

Generative AI has been widely celebrated for its ability to support innovation, “to promote divergent thinking, challenge expertise bias, assist in idea evaluation, support idea refinement, and facilitate collaboration among users” (Eapen et al., 2023). But many critics argue that this same technology challenges existing intellectual property frameworks, raising concerns about ownership, attribution, and the legitimacy of AI-assisted creation.

Here are three areas where AI-assisted innovation is challenging traditional IP concepts:

  • Inventorship & Authorship: One of the core principles of patent law is that only natural persons can be named as inventors. But what happens when an AI system meaningfully contributes to the creation of a new product, idea, or solution? In other words, if an AI tool helps shape the core concept of an invention, does the human user still qualify as the sole inventor? Legal systems around the world are grappling with whether to recognize AI's role in the inventive process, especially as tools become more influential in fields like drug discovery, engineering, and design (see Sequiera & Tsang, 2024 as an example).

  • Patents & Public Disclosure: A patent is a legal right that gives someone ownership of an invention, meaning no one else can make, use, or sell that invention without permission for a certain number of years (WIPO). However, in many countries, once you publicly share your idea (e.g. on a website, in a conference, or through an AI tool), you lose the right to patent it unless you file quickly. Therefore, if you put an early idea into an AI tool (especially without clear privacy terms), does that count as sharing it with the public? This matters because it could affect whether the idea can still be patented.

  • Attribution & Ownership: If an AI co-generates text or images, can the human user claim full ownership? What happens when AI is trained on other people’s copyrighted work? These questions get even more complicated when we consider how AI systems are trained. Many generative models learn from massive datasets that include copyrighted material, from books, to music, to artwork. If an AI draws from these materials to generate something new, does the output infringe on the rights of the original creators? Or is it considered something entirely new?

Where Policy Stands Now

A recent task force led by the Center for Strategic and International Studies (CSIS) and the Special Competitive Studies Project (SCSP) in the United States explored these tensions. It concluded that, for now, AI should be considered a sophisticated tool that assists human inventors, not an inventor itself, and that patent rights should therefore remain centered on human contributions. But as AI capabilities advance, the task force argues, the patent system must be ready to adapt (Iancu & Elluru, 2024).

Meanwhile, the World Intellectual Property Organization (WIPO) reports that there is still no global consensus on whether outputs from AI systems trained on copyrighted data constitute infringement. This uncertainty leaves creators, companies, and regulators without a clear playbook. WIPO also emphasizes that the legal risks for generative AI users are widespread: in many jurisdictions, copyright infringement can occur regardless of intent or awareness, which means that simply generating content with AI could raise legal concerns, even if the user had no idea it was based on protected material (WIPO, 2023).

In short, current policy approaches AI-assisted innovation cautiously while acknowledging the gaps. As AI becomes a staple of innovation, legal and policy frameworks need to become proactive rather than reactive.

Commercialization: Innovation Without Protection?

The above IP concerns are not just legal dilemmas, they’re commercialization dilemmas as well. When ownership is unclear, so is the right to profit.

We’re already seeing this tension play out in other sectors. During the 2023 Writers Guild of America strike, one of the central demands was around the use of AI in screenwriting, and whether studios should be able to use generative tools to replicate a writer’s voice without consent or compensation (Kinder, 2024). The strike made clear that AI-assisted innovation is not just a productivity boost, it can also be a commercial battleground, because it raises urgent questions about fair compensation.

Returning to the academia example, researchers are beginning to use generative tools to co-create everything from visualizations to grant proposals to entire drafts. As Cornell’s task force on Generative AI in Research points out, this raises unresolved questions around attribution, inventorship, and the legitimacy of research outputs, which are especially relevant when those outputs are headed toward commercialization (Cornell, 2024).

In academia, commercialization often takes the form of the following:

  • Patents: Legal protection that gives someone the exclusive right to make, use, or sell an invention for a set number of years. In research, this often applies to new technologies or methods.

  • Publications: Academic papers or studies that are published in journals or conferences. They help share new knowledge with the world and can build a researcher’s credibility or lead to funding opportunities.

  • Technology Transfer: The process of turning research discoveries into real-world products or services. This often involves licensing the invention to a company or starting a spin-off company to bring it to market.

But these existing systems rely on clear definitions of who created what, and when. If a generative AI tool played a significant role in shaping a research idea or drafting a key output, that involvement could affect whether and how the output can be patented, published, or licensed, and these grey areas create real risk.

This risk is not just for individual researchers, but also for universities, funders, and industry partners looking to invest in or license research. Without clearer guidance on how AI involvement affects IP rights and commercialization pathways, institutions and researchers are left exposed to both legal and ethical risks, at the very moment when AI is becoming a standard part of the research workflow.

If we don’t resolve these grey areas, we risk building tomorrow’s breakthroughs on shaky legal ground.

Privacy & Data Handling: When a Prompt Becomes a Leak

In academic research, confidentiality is a baseline assumption, especially for early-stage ideas, confidential datasets, or ethically sensitive work. But when researchers input that material into third-party AI tools, the line between private and public becomes unclear. A blog post by Harmonic illustrates this tension well, challenging the assumption that AI tools function like simple calculators. As Marriott (2025) explains:

“A common myth about generative AI tools is that they act like calculators. You input a prompt, get a response, and the data disappears. Sometimes it does, but not all the time. In reality, many popular GenAI applications store user prompts, save conversation histories, and may use that data to train future models.”

WIPO’s guidance on generative AI use makes the same point, and the convergence is telling. Researchers may be exposing confidential or unpublished material, even unintentionally, to AI systems they cannot fully control. This remains an underemphasized risk in the world of AI-assisted innovation. Without proper safeguards, even casual use of generative tools could expose sensitive research or violate ethics protocols, not out of malice, but out of misunderstanding.

Corporate legal teams are already confronting this issue by asking the right questions. As Debevoise notes in their blog, when confidential material is uploaded into a consumer AI tool, companies should consider whether the data can be deleted, whether the upload constitutes a breach, and whether the data has become accessible to the provider or others (Gesser, 2025).

The academic sector must take this just as seriously. At minimum, institutions and labs should ensure:

  • AI tool settings are configured to prevent data storage or model training;

  • Internal policies clearly outline what types of information must never be entered (a simple pre-submission check along these lines is sketched after this list); and

  • Researchers receive role-specific training, especially around unpublished data and confidential sources.
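
To make the policy point concrete, here is a minimal sketch in Python of what a “never enter this” rule could look like in practice: a small pre-submission check that scans a draft prompt for confidentiality markers before a researcher pastes it into an external AI tool. The deny-list patterns and the check_prompt helper are hypothetical illustrations of the idea, not part of any AI provider’s API or any particular institution’s policy.

```python
# Hypothetical pre-submission check: a sketch of how a lab's "never enter this"
# policy could be encoded. The markers below are illustrative, not a real policy.
import re

# Examples of markers a lab might treat as "never send to external tools":
# document classifications, ethics protocol numbers, participant identifiers.
DENY_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    re.compile(r"\bDRAFT[- ]NOT FOR DISTRIBUTION\b", re.IGNORECASE),
    re.compile(r"\bREB[- ]?\d{4,}\b"),                       # e.g. ethics protocol numbers
    re.compile(r"\bPARTICIPANT[- ]?ID[:# ]?\w+", re.IGNORECASE),
]

def check_prompt(prompt: str) -> list[str]:
    """Return human-readable warnings for any deny-list marker found in the prompt."""
    warnings = []
    for pattern in DENY_PATTERNS:
        match = pattern.search(prompt)
        if match:
            warnings.append(f"Blocked marker found: '{match.group(0)}'")
    return warnings

if __name__ == "__main__":
    draft = "Please summarize this CONFIDENTIAL proposal for REB-20417."
    problems = check_prompt(draft)
    if problems:
        print("Do not paste this text into an external AI tool:")
        for warning in problems:
            print(" -", warning)
    else:
        print("No deny-list markers found; follow your lab's policy before sending.")
```

A real deployment would pair a check like this with the configuration and training safeguards listed above; the point is simply that “what must never be entered” can be encoded and enforced, not just documented.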

The bottom line is that privacy can’t be assumed, it must be actively protected.

Owning Innovation Means Governing It — From the Ground Up

All of the above issues, from blurry IP boundaries to unclear commercialization pathways to fragile data privacy practices, point to the same message.

We don’t just need better rules. We need better foundations.

Throughout this series, we’ve seen that our current frameworks weren’t built for the speed, complexity, and blurred lines of AI innovation. And we’ve seen how that failure is already impacting people across sectors, from researchers to artists to institutions, in very real ways.

That’s where the Pillars of AI Innovation, introduced in the previous piece, come in. This simple, actionable framework helps us rethink the infrastructure of innovation governance in the AI era. It’s not just about managing risk, it’s about designing systems that reflect how innovation actually happens, and who it should serve.

A strong place to begin applying this framework is within high-trust domains like academia, where generative AI is already shaping research, policy, and public understanding.

To protect researchers and the innovation they drive, we need systems grounded in:

  • Recognition: Clear policies that define authorship, attribution, and appropriate AI use, so researchers are credited fairly, and the role of AI is transparent.

  • Pathways: Legal and institutional guidance on how AI-assisted outputs intersect with intellectual property, disclosure, and commercialization, so valuable work isn’t blocked or exploited by outdated systems.

  • Protection: Technical and procedural safeguards to prevent sensitive data from being exposed, along with training that makes clear what should never be entered into public AI tools.

Above all, we need to prioritize a culture of proactive governance that treats ethical foresight not as an afterthought, but as an everyday practice embedded in how we create, collaborate, and share knowledge.

That’s why the next chapter of Owning Innovation is the Responsible Innovation Toolkit, a modular resource for researchers, universities, developers, and policymakers alike.

References

  1. Brookings Institution. (2023, October 10). Hollywood writers went on strike to protect their livelihoods from generative AI. Their remarkable victory matters for all workers. https://www.brookings.edu/articles/hollywood-writers-went-on-strike-to-protect-their-livelihoods-from-generative-ai-their-remarkable-victory-matters-for-all-workers/

  2. Center for Strategic & International Studies. (2023). When AI helps generate inventions, who is the inventor? https://www.csis.org/analysis/when-ai-helps-generate-inventions-who-inventor

  3. Debevoise Data Blog. (2025, April 16). An employee just uploaded sensitive data to a consumer AI tool — now what? https://www.debevoisedatablog.com/2025/04/16/an-employee-just-uploaded-sensitive-data-to-a-consumer-ai-tool-now-what/

  4. Eapen, T. T., Finkenstadt, D. J., Folk, J., & Venkataswamy, L. (2023, July). How generative AI can augment human creativity. Harvard Business Review. https://hbr.org/2023/07/how-generative-ai-can-augment-human-creativity

  5. Fenwick & West LLP. (2024). Emerging legal terrain: IP risks from AI’s role in drug discovery. https://www.fenwick.com/insights/publications/emerging-legal-terrain-ip-risks-from-ais-role-in-drug-discovery

  6. Garfield, S. (2018, November 13). Creation process: Knowledge creation, invention and innovation. Medium. https://stangarfield.medium.com/creation-process-knowledge-creation-invention-and-innovation-e3d6741d391a

  7. Harmonic Security. (2024). Is your data safe in GenAI apps? How AI tools can expose sensitive company data. https://www.harmonic.security/blog-posts/is-your-data-safe-in-genai-apps-how-ai-tools-can-expose-sensitive-company-data

  8. Jarrahi, M. H., Askay, D., Eshraghi, A., & Smith, P. (2022). Artificial intelligence and the creation–organization dilemma. Business Horizons, 65(4), 503–515. https://doi.org/10.1016/j.bushor.2022.01.003

  9. Nestian, A., Tita, S., & Guta, L. (2020). Incorporating artificial intelligence in knowledge creation processes in organizations. Proceedings of the International Conference on Business Excellence, 14. https://www.researchgate.net/publication/343291151_Incorporating_artificial_intelligence_in_knowledge_creation_processes_in_organizations

  10. Orchidea. (2023). What is AI-driven innovation? The role of AI in innovation. https://info.orchidea.dev/innovation-blog/what-is-ai-driven-innovation-role-of-ai-in-innovation

  11. Research & Innovation at Cornell. (2024). Generative AI in academic research: Perspectives and cultural norms. https://research-and-innovation.cornell.edu/generative-ai-in-academic-research/

  12. World Intellectual Property Organization. (2023). Generative AI: What you need to know. https://www.wipo.int/export/sites/www/about-ip/en/frontier_technologies/pdf/generative-ai-factsheet.pdf

  13. World Intellectual Property Organization. (n.d.). Patents. https://www.wipo.int/en/web/patents

What If AI

Maryama Elmi is the founder of What If AI, a platform exploring the future of AI governance through global perspectives, cultural analysis, and public-interest storytelling. A lawyer and policy strategist, she writes about the systems, gaps, and ideas shaping how AI shows up in the world.
