
Conversation with Dr. Marc-Oliver Gewaltig on Practical Applications of AI, Instead

2025-02-15

Scott Douglas Jacobsen
In-Sight Publishing, Fort Langley, British Columbia, Canada

Correspondence: Scott Douglas Jacobsen (Email: scott.jacobsen2025@gmail.com)

Received: January 21, 2025
Accepted: N/A
Published: February 15, 2025

Abstract

Dr. Marc-Oliver Gewaltig, a distinguished researcher in artificial intelligence, computational neuroscience, and robotics, discusses his journey from pioneering neural simulation tools to co-founding Thesify.ai—an ethical academic writing support platform. He outlines his work on spiking neural networks, contributions to large-scale projects like the Blue Brain Project and the Human Brain Project, and his commitment to responsible AI usage. Emphasizing transparency and the importance of human creativity in an era increasingly influenced by AI, Dr. Gewaltig advocates for clear distinctions between human-generated and AI-generated content in both public and academic contexts.

Keywords: Academic Innovation, AI Ethics, Artificial Intelligence, Computational Neuroscience, Ethical Academic Writing, Human Brain Project, Neural Simulation, Neurorobotics, Robotics, Thesify.ai

Introduction

In this interview, Dr. Marc-Oliver Gewaltig—a leading researcher in artificial intelligence, computational neuroscience, and robotics—shares insights from his extensive career in both academic and applied research. A co-developer of the Neural Simulation Tool (NEST) and co-founder of Thesify.ai, Dr. Gewaltig has significantly contributed to projects like the Blue Brain Project and the Human Brain Project. His work bridges cutting-edge technology and ethical considerations, aiming to enhance human creativity rather than replace it. Throughout the discussion, he emphasizes the need for transparency in AI usage, particularly in distinguishing between human and machine-generated content, and reflects on the evolving role of AI in academic innovation.

Main Text (Interview)

Interviewer: Scott Douglas Jacobsen

Interviewee: Dr. Marc-Oliver Gewaltig

Section 1: Introducing the Interview

Scott Douglas Jacobsen: Today, we are here with Dr. Marc-Oliver Gewaltig. He is a distinguished researcher in artificial intelligence, robotics, and computational neuroscience. That’s an impressive array of expertise. Dr. Gewaltig co-developed the Neural Simulation Tool (NEST), which is widely used for large-scale simulations of spiking neural networks. So, my first question is: Why did you initially focus on artificial neural networks, artificial intelligence, and related fields? These complex subjects have become central to mainstream conversations and culture.

Section 2: Early AI Research and Neural Networks

Dr. Marc-Oliver Gewaltig: An AI wave was already happening when I began studying. It was just after the so-called AI winter. In my hometown, an institute called the Institute for Neuroinformatics was founded. At that time, the term “AI” was essentially taboo. You couldn’t refer to anything as AI; it was viewed with skepticism, almost like esotericism. Instead, the field was called “neuroinformatics” or “intelligent systems.” Despite this, it sounded incredibly exciting.

A friend and I decided to understand how intelligence works. That curiosity led us to study spiking neural networks. Traditional neural networks, as they were understood at the time—and still are, to some extent—stemmed from abstractions of the brain developed in the 1940s. However, we were far more interested in understanding how the brain works.

Real neurons don’t operate based on a simple function that translates input values into output values. Instead, they behave more like muscle cells with electrical activity. Each neuron receives numerous signals from other neurons but only responds if its input is sufficiently strong and synchronous in time. If many neighbouring neurons fire simultaneously, the neuron will also fire.

That’s how the brain functions. Then, the synapses—the connections between nerve cells—are modified whenever two neighbouring cells fire together. This process, known as plasticity, is what biologists call learning. That’s how I became involved in this field.
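To make this mechanism concrete, here is a minimal sketch of a leaky integrate-and-fire neuron with a simple Hebbian-style weight update in Python. The parameter values and the plasticity rule are illustrative simplifications chosen for this example; this is not code from NEST.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron with a Hebbian-style weight
# update, illustrating the mechanism described above. All parameter
# values are arbitrary illustrative choices.

dt = 0.1            # time step (ms)
tau_m = 10.0        # membrane time constant (ms)
v_rest, v_thresh = 0.0, 1.0
n_inputs, n_steps = 50, 1000

rng = np.random.default_rng(seed=1)
weights = np.full(n_inputs, 0.05)   # synaptic weights
v = v_rest                          # membrane potential
spike_count = 0

for _ in range(n_steps):
    # Each presynaptic neuron fires with 2% probability per step.
    pre = rng.random(n_inputs) < 0.02
    # Leak toward rest, plus summed input from active synapses: only
    # sufficiently strong, synchronous input drives v to threshold.
    v += (v_rest - v) * dt / tau_m + weights @ pre
    if v >= v_thresh:
        spike_count += 1
        v = v_rest  # fire and reset
        # Plasticity: strengthen the synapses that were active when
        # the neuron fired ("cells that fire together wire together").
        weights[pre] += 0.001

print(f"Output spikes: {spike_count}, mean weight: {weights.mean():.4f}")
```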

I also learned a great deal about how the visual system processes information. For instance, how do photons hitting the retina lead to the perception of an image, such as the computer screen in front of you? If you see something you like, how does that visual input translate into the urge to stretch out your arm, press a key on the keyboard, and press exactly the right key?

So that is what I have been interested in most of my life. Of course, as a scientist, you have to publish what you’re writing and researching. When ChatGPT and similar tools were released, we quickly asked ourselves: “Okay, that’s a cool technology. You can use it in unintelligent ways, but you can also use it in smart ways. What happens if everyone has AI? Do we still write? And if so, how?”

That’s why we founded Thesify.ai. The answer to the first question is: Yes, we will still write. AI can generate text based on the model it has been trained on but cannot capture what is in my mind. If you are a researcher, an author, or a scientist, you have ideas you want to express and communicate.

Somehow, you have to guide the AI to write what you want rather than letting the AI write what it decides. Anyone who has used ChatGPT will notice it can be challenging to get it to produce what you want. It’s like working with a ghostwriter who has a very strong opinion; convincing him to follow your desired style or direction is difficult.

I often say having AI write for you is like having AI listen to music for you.

Section 3: Generative AI as Synthesizer

Jacobsen: Oh, that’s good. I like that. That’s a very clear image.

Gewaltig: Yes, because with AI, you neither experience the process nor benefit directly. Essentially, you’re using a proxy and placing full trust in it. That can work if you have nothing substantial to say.

However, if you have something meaningful to communicate, your voice needs to come through, whether on the screen or paper. So, you still have to write yourself. The real question is: If you have tools like ChatGPT, Gemini, or similar AI systems, how do you ensure they help you convey your message? How do you write what you truly want to communicate?

First, you still need to understand what makes a good piece of writing. Another metaphor I like to use is that generative AI is like a synthesizer or sampler in music. It can generate synthetic text just as a synthesizer can produce synthetic music.

However, a musician must know what constitutes a good composition. What’s the structure of a document? For example, a news article has a very different structure from a bedtime story—at least, it should. You still have to learn these foundational elements.

Once you understand the craft, you can use tools, like instruments, to assist with your writing. ChatGPT is an advanced musical instrument for text. That’s how I see it—you must learn how to play it.

This is where education will need to evolve. You can produce music much faster with a synthesizer, just as you can generate text faster with generative AI.

The virtuosity required to create has changed. For example, 200 years ago, being a good musician required mastering the piano, violin, or another instrument. There was a specific craft and mastery involved. With electronic music, much of that has changed.

Today, if you understand music theory, you can easily create electronic music from a technical standpoint. However, it is still difficult because it requires creativity. Many people don’t realize we must find ways to maintain creativity and bring it to life using modern tools.

Section 4: Thesify

Jacobsen: What is the origin of the name Thesify? How do you “bring it to life” ethically, to use your musical example, in the context of an advanced textual synthesizer? I mean both the principles of ethics or morality behind the concept and their application within what is typically termed narrow artificial intelligence, specifically using large language models.

Gewaltig: The name Thesify comes from the word “thesis.” As a scientist, you must write a bachelor’s thesis, a master’s thesis, and a PhD thesis—so three theses. We use AI to help you complete your thesis, but not in a way that undermines your writing ability. Instead, we aim to train and strengthen your writing skills. A metaphor was used—I forget the author, but it was in The New Yorker. It described using generative AI in education as bringing a forklift to the weight room.

This is where education might need to shift or reconsider its approach. At Thesify, we assist students in writing their theses by addressing a specific challenge every student encounters. Imagine you have a thesis draft and take it to your supervisor or professor for feedback.

You ask, “Professor, here’s my draft. Can you review it and let me know if it’s good and what I should change?” Then the professor says “yes” but promptly disappears for six months, attending conferences. As a student, you find this delay very frustrating. This is where Thesify steps in. We provide the feedback that your professor should give you. You upload your draft, and we analyze it as if through the eyes of a reviewer.

We assess whether your thesis has a clear statement, whether there’s a targeted message behind it, whether the argumentation is straightforward, and whether there are any gaps in your reasoning. We also consider whether you’ve left out counterarguments that you should address.

We provide a detailed list of actionable points to improve your writing for the next revision. For example, we highlight where your argument lacks evidence or where you need to make your reasoning more conclusive. If you’ve made a factual statement that isn’t common knowledge, we suggest you provide supporting evidence.

Additionally, we can point users to relevant literature that addresses particular statements or gaps in their argumentation. Essentially, we engage the user in a feedback loop. You upload your manuscript, receive a to-do list for improvement, and return with the revised version once you’ve addressed the criticism.
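As a purely hypothetical illustration of this feedback loop, the sketch below models structured, actionable feedback items in Python. The type names and categories are invented for this article and do not reflect Thesify’s actual data model or API.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of structured review feedback of the kind
# described above. Names and categories are invented for illustration;
# they are not Thesify's actual data model.

class IssueKind(Enum):
    UNCLEAR_THESIS = "unclear thesis statement"
    MISSING_EVIDENCE = "factual claim without supporting evidence"
    REASONING_GAP = "gap in the argumentation"
    MISSING_COUNTERARGUMENT = "relevant counterargument not addressed"

@dataclass
class FeedbackItem:
    kind: IssueKind
    location: str    # e.g., a section or paragraph reference
    suggestion: str  # the actionable point for the next revision

def to_do_list(items: list[FeedbackItem]) -> str:
    """Render one round of the feedback loop as a revision to-do list."""
    return "\n".join(
        f"- [{item.kind.value}] {item.location}: {item.suggestion}"
        for item in items
    )

print(to_do_list([
    FeedbackItem(IssueKind.MISSING_EVIDENCE, "Chapter 2, paragraph 3",
                 "Cite a source for this claim; it is not common knowledge."),
    FeedbackItem(IssueKind.MISSING_COUNTERARGUMENT, "Chapter 4",
                 "Address the main objection to your interpretation."),
]))
```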

This system isn’t a chatbot—you can’t ask it to generate text for you or answer general questions. That functionality is already available in tools like Google Docs, so there’s no need to reinvent the wheel. Thesify offers something different: highly structured, consistent feedback, which is difficult to find elsewhere.

This level of detailed, constructive critique is what makes Thesify stand out.

Section 5: History of Tools

Jacobsen: In North America, much of the commentary in public media tends to focus on prominent figures—Ray Kurzweil, Eric Schmidt, Sam Altman, Elon Musk, and so on. However, your approach seems unique in its specificity, focusing on a particular aspect of an AI system, such as analyzing a thesis, with clear boundaries.

It’s a very interesting way to address these issues. Discussions about AI ethics often revolve around broad fears, like avoiding the so-called non-zero chances of a Terminator future. These fears are easily amplified, particularly in North America.

Your description clearly demonstrates how ethics can be embedded in a product with a specific purpose, making it actionable. Do you see this approach expanding into other academic areas beyond thesis feedback?

Gewaltig: Yes. If we look at the history of tools, they’ve always transitioned from being in the limelight to becoming almost invisible, operating seamlessly in the background.

Take cameras, for example. Twenty years ago, manufacturers advertised features like autofocus, red-eye reduction, and other technical details. Today, you simply have a camera. That’s it. You don’t worry about those features; they’re built in and work without much attention.

Interestingly, many of these features rely on AI technologies, but they are no longer labelled as AI. Most AI uses will eventually become invisible to users. They already are. For example, any video conferencing system now incorporates AI in the background—to adjust volume, cancel out noise, and perform other functions.

But you don’t need general AI for this. What you need are very specific tools with precise solutions for particular problems. Everything else is often just a marketing ploy.

Section 6: Overselling AI

Jacobsen: Do you think there is a cultural tendency, particularly in the West and among AI communities, to overstate and oversell what is referred to as AI? Or do you find their assessments accurate?

Gewaltig: There is a vast amount of overselling. Big tech companies have a huge incentive to do this because it fuels their stock valuations. The promise of future advancements drives their sky-high stock prices—being the winner in the so-called AI race, and so on.

However, when you look at concrete use cases, making AI work accurately and reliably for a specific purpose is often much harder than people realize.

Take your phone’s photo app, for example. It classifies images into categories like flowers, buildings, or people. There are no high stakes involved. If an image ends up in the wrong folder, it doesn’t matter much.

However, the stakes are extremely high for applications like autonomous driving. If pedestrian recognition fails—a false negative where the system doesn’t recognize a pedestrian—it could have fatal consequences. On the other hand, a false positive, where the car stops unnecessarily, is far less serious.

The issue of false positives and false negatives is critical, yet the two are often conflated into a single measure of quality, which is a mistake. The consequences of these errors are context-dependent and can vary widely.
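To see why collapsing both error types into one accuracy number can mislead, consider this sketch with purely illustrative numbers: two hypothetical pedestrian detectors with identical raw accuracy but very different consequences once each error type is weighted by its cost.

```python
# Illustrative numbers only: two pedestrian detectors make the same
# total number of errors over 1,000 encounters, so their raw accuracy
# is identical, but the mix of error types differs.
detectors = {
    "A": {"fn": 15, "fp": 5},   # mostly misses pedestrians
    "B": {"fn": 1, "fp": 19},   # mostly stops unnecessarily
}

# A missed pedestrian (false negative) is weighted far more heavily
# than an unnecessary stop (false positive). The weights are arbitrary
# stand-ins for real, context-dependent costs.
FN_COST, FP_COST = 1000.0, 1.0

for name, d in detectors.items():
    accuracy = 1 - (d["fn"] + d["fp"]) / 1000
    cost = d["fn"] * FN_COST + d["fp"] * FP_COST
    print(f"Detector {name}: accuracy {accuracy:.1%}, weighted cost {cost:,.0f}")
```

Both detectors score 98% accuracy, yet detector A’s weighted cost is roughly fifteen times higher; a single quality figure hides exactly the distinction that matters.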

In academia, tools like Turnitin, which some institutions use to detect AI-generated text, illustrate this issue. These tools have a false positive rate of around four percent. That might not sound like much, but it means that out of 100 students, four could be wrongly accused of cheating.

In some cases, students have been expelled from their schools or universities. That’s a significant consequence of an error rate that, at first glance, seems minor.
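The arithmetic behind this concern is easy to check. Assuming, as a simplification, that the reported four-percent false positive rate applies independently to each honest submission:

```python
# Expected number of honest students wrongly flagged, assuming a 4%
# false positive rate applied independently per submission.
false_positive_rate = 0.04

for cohort in (100, 40_000):
    expected = cohort * false_positive_rate
    print(f"{cohort:>6,} students -> ~{expected:,.0f} wrongly flagged")
```

For a cohort of 100, that is about 4 students; for 40,000, about 1,600.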

Section 7: Ethical Considerations in AI Usage

Jacobsen: So, if you have, for instance, 40,000 students, four percent is quite a significant number over time.

Gewaltig: It is quite significant after a while, yes. And this highlights an important point. When considering use cases for AI, you always have to ask: Where can it go wrong, and where can it go right?

Many AI technologies work to some degree, but they don’t function with the level of accuracy we typically expect from computers. Of course, humans make errors, too, but only humans can be held accountable for their mistakes. Machines cannot bear responsibility. This ties into liability issues and how we approach litigation and related concerns.

Specific use cases are where real value is generated. This is true now, not just in the future. Everything else—grandiose claims, flashy showcases—is just demonstration. Nothing more.

Section 8: Concluding Insights and Reflections

Jacobsen: What areas should the public focus on critically when it comes to the use of AI and claims about AI? This includes how AI is defined and discussed in public discourse.

Gewaltig: One of the most important aspects is transparency about where AI is used. Consider social media platforms like Facebook or Instagram. Recently, there’s been news that Meta plans to deploy numerous AI-generated profiles.

It should always be clear whether a profile belongs to a human or was generated by AI. Similarly, when you receive an email, a message, or a phone call, it must be evident whether it came from a human or an AI agent. Transparency is crucial. Society needs clarity on whether people engage authentically or are “posing” with AI-generated content.

Using AI to write unthinkingly is posing—it’s not a genuine creation. That’s why I believe relying heavily on AI for customer relations, for example, is a short-sighted strategy. Humans are surprisingly adept at recognizing the tone and style of machine-generated text. If you’re aware of AI, you can usually tell whether a human or a machine wrote a message.

Jacobsen: That’s a whole conversation—whether we can develop, for lack of a better term, a “universal CAPTCHA” for distinguishing human identity from artificial identity. But, yes. Dr. Gewaltig, thank you very much for your time and this opportunity. It was a pleasure meeting you. Thank you again for your time, and I hope you have a wonderful rest of your day.

Gewaltig: Thank you as well. It was very interesting, and I wish you all the best for Canada.

Jacobsen: Thank you. It’s morning here, so the day is just getting started.

Gewaltig: All the best with your packed schedule of interviews today. 

Discussion

This interview with Dr. Marc-Oliver Gewaltig offers a compelling exploration into the evolving landscape of artificial intelligence and its ethical applications in academic and creative contexts. Dr. Gewaltig’s journey—from his early work on spiking neural networks and neural simulation tools to co-founding Thesify.ai—highlights the dynamic interplay between technological innovation and ethical responsibility. His reflections on the transformation of AI research since the so-called AI winter underscore both the promise and the pitfalls inherent in rapidly advancing technology.

Dr. Gewaltig emphasizes that while AI can significantly accelerate tasks such as academic writing and data analysis, it cannot replicate the nuanced insights and creativity that stem from genuine human thought. He draws compelling parallels between AI’s role in text generation and the use of musical synthesizers, illustrating that effective use of AI requires not only technical proficiency but also a deep understanding of the underlying craft. This perspective is critical in an era where the authenticity of content is increasingly scrutinized, and where the boundary between human and machine output must be clearly defined to maintain trust in academic and public discourse.

The discussion also raises important considerations about transparency and accountability in AI deployment. Dr. Gewaltig advocates for clear distinctions between human-generated and AI-generated content, a stance that is particularly relevant given the proliferation of AI tools in various fields. His insights call for a balanced approach—leveraging the benefits of AI to enhance productivity and creativity, while rigorously upholding ethical standards to ensure that human voice and intellectual contribution remain at the forefront. This balanced perspective is essential for guiding future policy and educational strategies as AI continues to integrate into our daily lives.

Methods

The interview was scheduled and recorded—with explicit consent—for transcription, review, and curation. This process complied with applicable data protection laws, including the California Consumer Privacy Act (CCPA), Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), and Europe’s General Data Protection Regulation (GDPR): recordings were stored securely, retained only as long as needed, and deleted upon request. The process also followed Federal Trade Commission (FTC) and Advertising Standards Canada guidelines.

Data Availability

No datasets were generated or analyzed for this article. All interview content remains the intellectual property of the interviewer and interviewee.

References

(No external academic sources were cited for this interview.)

Journal & Article Details

  • Publisher: In-Sight Publishing
  • Publisher Founding: March 1, 2014
  • Web Domain: http://www.in-sightpublishing.com
  • Location: Fort Langley, Township of Langley, British Columbia, Canada
  • Journal: In-Sight: Interviews
  • Journal Founding: August 2, 2012
  • Frequency: Four Times Per Year
  • Review Status: Non-Peer-Reviewed
  • Access: Electronic/Digital & Open Access
  • Fees: None (Free)
  • Volume Numbering: 13
  • Issue Numbering: 2
  • Section: A
  • Theme Type: Idea
  • Theme Premise: “Outliers and Outsiders”
  • Theme Part: 33
  • Formal Sub-Theme: None
  • Individual Publication Date: February 15, 2025
  • Issue Publication Date: April 1, 2025
  • Author(s): Scott Douglas Jacobsen
  • Word Count: 2,428
  • Image Credits: Photo by Alessio Ferretti on Unsplash
  • ISSN (International Standard Serial Number): 2369-6885

Acknowledgements

The author acknowledges Dr. Marc-Oliver Gewaltig for his time, expertise, and valuable contributions. His thoughtful insights and detailed explanations have greatly enhanced the quality and depth of this work, providing a solid foundation for the discussion presented herein.

Author Contributions

S.D.J. conceived the subject matter, conducted the interview, transcribed and edited the conversation, and prepared the manuscript.

Competing Interests

The author declares no competing interests.

License & Copyright

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Scott Douglas Jacobsen and In-Sight Publishing 2012–Present.

Unauthorized use or duplication of material without express permission from Scott Douglas Jacobsen is strictly prohibited. Excerpts and links must use full credit to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.

Supplementary Information

Below are various citation formats for Conversation with Dr. Marc-Oliver Gewaltig on Practical Applications of AI, Instead.

  1. American Medical Association (AMA 11th Edition)
    Jacobsen S. Conversation with Dr. Marc-Oliver Gewaltig on Practical Applications of AI, Instead. February 2025;13(2). http://www.in-sightpublishing.com/gewaltig-thesify
  2. American Psychological Association (APA 7th Edition)
    Jacobsen, S. (2025, February 15). Conversation with Dr. Marc-Oliver Gewaltig on Practical Applications of AI, Instead. In-Sight Publishing. 13(2).
  3. Brazilian National Standards (ABNT)
    JACOBSEN, S. Conversation with Dr. Marc-Oliver Gewaltig on Practical Applications of AI, Instead. In-Sight: Interviews, Fort Langley, v. 13, n. 2, 2025.
  4. Chicago/Turabian, Author-Date (17th Edition)
    Jacobsen, Scott. 2025. “Conversation with Dr. Marc-Oliver Gewaltig on Practical Applications of AI, Instead.” In-Sight: Interviews 13 (2). http://www.in-sightpublishing.com/gewaltig-thesify.
  5. Chicago/Turabian, Notes & Bibliography (17th Edition)
    Jacobsen, S. “Conversation with Dr. Marc-Oliver Gewaltig on Practical Applications of AI, Instead.” In-Sight: Interviews 13, no. 2 (February 2025). http://www.in-sightpublishing.com/gewaltig-thesify.
  6. Harvard
    Jacobsen, S. (2025) ‘Conversation with Dr. Marc-Oliver Gewaltig on Practical Applications of AI, Instead’, In-Sight: Interviews, 13(2). http://www.in-sightpublishing.com/gewaltig-thesify.
  7. Harvard (Australian)
    Jacobsen, S 2025, ‘Conversation with Dr. Marc-Oliver Gewaltig on Practical Applications of AI, Instead’, In-Sight: Interviews, vol. 13, no. 2, http://www.in-sightpublishing.com/gewaltig-thesify.
  8. Modern Language Association (MLA, 9th Edition)
    Jacobsen, Scott. “Conversation with Dr. Marc-Oliver Gewaltig on Practical Applications of AI, Instead.” In-Sight: Interviews, vol. 13, no. 2, 2025, http://www.in-sightpublishing.com/gewaltig-thesify.
  9. Vancouver/ICMJE
    Jacobsen S. Conversation with Dr. Marc-Oliver Gewaltig on Practical Applications of AI, Instead [Internet]. 2025 Feb;13(2). Available from: http://www.in-sightpublishing.com/gewaltig-thesify

Note on Formatting

This document follows an adapted Nature research-article format tailored for an interview. Traditional sections such as Methods, Results, and Discussion are replaced with clearly defined parts: Abstract, Keywords, Introduction, Main Text (Interview), and a concluding Discussion, along with supplementary sections detailing Data Availability, References, and Author Contributions. This structure maintains scholarly rigor while effectively accommodating narrative content.

 
