
Iliya Valchanov on Team-GPT and Use Cases for AI

2025-06-11

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): The Good Men Project

Publication Date (yyyy/mm/dd): 2025/01/21

Iliya Valchanov is the CEO of Team-GPT, an AI-powered platform serving 50,000 users globally. With over 1.4 million students on Udemy, he is a renowned educator specializing in AI and online learning. A serial entrepreneur, Valchanov has co-founded multiple initiatives and startups, reaching millions of users. Holding a degree from Università Commerciale Luigi Bocconi, he has published works featured in Forbes and Inc.com. Valchanov actively champions data security and compliance in enterprise AI solutions. A leader in the AI space, he focuses on scalable, ethical innovation and creating tools that integrate seamlessly into workflows, transforming how teams operate.

In this interview, Valchanov talks about integrating AI into enterprise workflows. Team-GPT serves 50,000 users across industries, offering tools such as Pages, a long-form text editor for structured content creation, a prompt library, and native integrations with platforms like Microsoft and Salesforce. Valchanov emphasizes the importance of realistic AI promises, data security, and compliance, particularly for enterprises handling sensitive data. Founded in 2023, Team-GPT leverages transparency, SEO, and free educational resources to grow its user base. Looking ahead, Valchanov envisions AI’s exponential integration into workflows, targeting more than 100,000 users by the end of 2025 while emphasizing ethical, practical, and scalable AI applications.

Scott Douglas Jacobsen: How does Team-GPT facilitate the discovery and deployment of AI use cases for enterprises in a data-secure manner? Let’s address this.

Iliya Valchanov: Team-GPT currently serves approximately 50,000 users across various enterprises. Through our experience, we’ve discovered that GPT and large language models are incredibly versatile, providing value across multiple departments and use cases. Initially, we asked ourselves: what are the real use cases? How many exist? Are they confined to specific fields, like marketing, or do they span all business areas?

We’ve identified over 1,000 distinct use cases on our platform. These cover marketing, sales, IT development, data science, HR, compliance, and more. Typically, these use cases revolve around chat interfaces, specific prompts, and contextual customization.

However, we learned that chat functionality alone isn’t always sufficient. To address this, we developed additional user interfaces. For example, one of our most innovative tools is called Pages, a text editor designed for long-form content creation. This tool allows users to move beyond chat and into a structured writing environment that aligns with their goals.

Another key benefit is our prompt library, which is designed for diverse use cases. Imagine someone creating a social media post or survey. They can select their desired use case, and our system guides them through a structured process. For example, to create a LinkedIn post, the user inputs a topic, and the system generates a detailed, customized prompt—longer and more specific than typical chat input.

Additionally, we provide prebuilt prompts tailored to specific professions. For example, educators can click a button to access tools designed for their unique needs. These enhancements significantly enrich the chat experience.
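To make the prompt-library idea concrete, here is a minimal, hypothetical sketch of how a selected use case can expand a short user input into a longer, more specific prompt. The use-case names and templates below are illustrative assumptions, not Team-GPT’s actual implementation.

```python
# Hypothetical prompt-library sketch (illustrative only, not Team-GPT's code):
# each use case maps a short user input, such as a topic, onto a longer,
# more specific prompt than a typical chat message.

PROMPT_LIBRARY = {
    "linkedin_post": (
        "Write a LinkedIn post about {topic}. Use a confident, professional tone, "
        "open with a hook, keep it under 200 words, and end with a question that "
        "invites discussion."
    ),
    "survey": (
        "Draft a five-question customer survey about {topic}. Mix multiple-choice "
        "and open-ended questions, and keep each question under 20 words."
    ),
}


def build_prompt(use_case: str, topic: str) -> str:
    """Expand a short topic into a detailed, use-case-specific prompt."""
    return PROMPT_LIBRARY[use_case].format(topic=topic)


if __name__ == "__main__":
    # The generated prompt is then sent to the chat model on the user's behalf.
    print(build_prompt("linkedin_post", "Team-GPT Raises $4,500,000"))
```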

Our text editor becomes invaluable for writing-focused tasks, such as drafting blog posts. Users can input topics, such as “Team-GPT Raises $4,500,000,” or paste content directly from their resources. The platform then helps them refine and structure the content into professional-quality output.

Jacobsen: So I can choose the length, and it generates content based on the context I just provided. I can then edit the text in the editor: delete, add, or even use AI to paraphrase, shorten, expand, or translate.

Valchanov: Exactly. All of this is AI-first. You can also achieve this with a prompt. Our vision is that as AI integrates into workflows, it will fundamentally change how we write, communicate, and execute tasks. For example, the recording of this interview might later be synthesized using AI.

For us, that would mean a native integration with whatever AI-driven note-taking tool you’re using, so it can create a page with the given context. We’re building all kinds of tools to support this. Let me show you an example: if I have a screenshot, I can drag and drop it into our platform. From there, I can extract the text from the image, start a chat, and perform various other actions.
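As a rough illustration of the screenshot flow just described, the sketch below uses an off-the-shelf OCR library (pytesseract, an assumed tool) to turn a dropped image into chat-ready text. It is a generic example, not Team-GPT’s code.

```python
# Generic illustration of a screenshot-to-chat flow (not Team-GPT's code):
# extract text from an image with OCR, then wrap it in a prompt for a chat model.
from PIL import Image  # pip install pillow
import pytesseract     # pip install pytesseract (requires the Tesseract binary)


def screenshot_to_prompt(path: str) -> str:
    """Pull the text out of a screenshot and turn it into a chat-ready prompt."""
    extracted = pytesseract.image_to_string(Image.open(path))
    return (
        "Here is text extracted from a screenshot:\n\n"
        f"{extracted}\n\n"
        "Summarize the key points and suggest next actions."
    )


# prompt = screenshot_to_prompt("screenshot.png")  # then send `prompt` to the chat model
```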

At Team-GPT, we’ve identified thousands of use cases from our users. For instance, if someone uses chat to create an article, we suggest they try our page creator, which is a more efficient tool. Similarly, for those automating customer support, we’re integrating with their customer support chatbots. This integration provides a novel interface where inquiries are received, processed, and resolved seamlessly.

Chats are a universal, user-friendly interface, but we’re creating many other tools that better address specific use cases.

Jacobsen: Do you find different use cases or challenges when partnering with organizations like Johns Hopkins University or Salesforce?

Valchanov: Yes, the main difference lies in the data sources. The examples I showed you are generic—they rely on foundation models and user-provided context. Larger organizations, however, typically have their own systems of record. These could include Microsoft OneDrive, Google Drive, SharePoint, Notion, or Salesforce.

These organizations often require native integrations that pull context from their internal systems. Enterprises, for example, often use the Microsoft ecosystem—SharePoint, Teams, and so on. In contrast, startups like us lean towards platforms like Slack and Notion. Universities, on the other hand, are almost always tied to Microsoft systems. This distinction is one of the primary differences between large organizations and smaller ones.

Jacobsen: How does integrating AI into workflows increase productivity?

Valchanov: There are several ways. First, people spend significant time searching for the right information. Even when the information exists within the organization, users often don’t know where to find it—or if it exists at all. AI can address this challenge by surfacing relevant information efficiently.

Second, AI increases productivity by streamlining repetitive tasks, enabling faster decision-making, and enhancing team collaboration.

And there’s no question about it. But, typically, the issue is reusable context: how do you use the same context repeatedly for different purposes? This context could include information about your organization, processes, branding, or marketing identity. Marketing teams, for example, need a marketing identity. Sales teams need the sales playbook. Finance teams require the latest spreadsheets and data. Productivity increases when you have the right context and use AI to act on it.

Jacobsen: When was the company founded, and how did you achieve the 50,000-user base or attract such talent?

Valchanov: We founded the company in April 2023. During the first couple of months, we focused on growth hacking. We actively posted on LinkedIn and ran a “building in public” campaign, which we still maintain. All our financials, marketing campaigns, and activities are documented and shared publicly on LinkedIn. This transparency helped build trust with our users.

We were also very early adopters of this approach. Later, we started creating a lot of content. My background is in online education, and when we started, I already had over 1 million paying students. We created an AI course on how to use ChatGPT effectively, which attracted some of our existing students. The course was hosted exclusively on our platform as a lead-generation campaign. Users came for the course—offered for free—and stayed for the Team-GPT platform.

From there, we created how-to guides, blog articles, and similar resources. Most of our traffic comes from SEO, with users finding us through Google.

Jacobsen: Do you find your Google presence stronger now than in your first year? Or has growth stagnated or risen steadily over time?

Valchanov: Like most SEO efforts, it has accumulated over time. The more you do, the more traffic you generate. If you stop working on SEO, you’ll feel the impact about six months later. For instance, we paused SEO for a year and a half; since picking it back up, our traffic has been growing again. We receive thousands of organic clicks daily, all highly relevant to our offerings.

Since the industry is still emerging, we often get spikes in traffic when major players like OpenAI, Anthropic, or Microsoft announce something new. These announcements generate more interest in the field. Less than 1% of the world’s population uses AI tools, so there’s significant growth potential ahead.

Jacobsen: What is the role of data security and compliance in your adoption strategy for enterprises?

Valchanov: Many enterprises face challenges with data security and compliance. Their IT and compliance departments often distrust OpenAI and are reluctant to share data due to fears that OpenAI might train its models on it—a valid concern. For example, all OpenAI servers are located in the U.S., which doesn’t comply with regulations for many companies outside the U.S.

European enterprises, for instance, often cannot use ChatGPT because it doesn’t meet their compliance requirements. Compliance considerations include data residency—ensuring data is stored within the EU or North America without transiting between continents. For financial institutions, it’s often unacceptable for data to interact with OpenAI systems.

We emphasize to our clients that their employees are likely already using ChatGPT, so they need to implement secure and compliant solutions. Our product allows companies to deploy our system on their own servers, whether on-premises or in a private cloud, and we enable them to interact securely with any model.

Jacobsen: So if a client doesn’t want to interact with the OpenAI model, they can use custom models within your system, and all of this remains contained within their network?

Valchanov: Exactly. When the data never leaves the client’s network, they can be confident it’s private and secure.
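As a rough sketch of that deployment pattern, the snippet below points an OpenAI-compatible client at a model endpoint hosted inside the company’s own network, so prompts never leave the private environment. The endpoint URL, API key, and model name are placeholders, and this is a generic illustration rather than Team-GPT’s actual code.

```python
# Sketch of the self-hosted pattern described above (placeholders, not Team-GPT code):
# many private model servers expose an OpenAI-compatible API, so the same client
# can talk to an in-network endpoint instead of a public cloud service.
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # assumed in-network endpoint
    api_key="managed-by-the-enterprise",             # credential stays inside the company
)

response = client.chat.completions.create(
    model="private-model",  # placeholder name for a privately hosted model
    messages=[{"role": "user", "content": "Summarize this internal policy document."}],
)
print(response.choices[0].message.content)
```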

Jacobsen: Do you think data privacy and process will increasingly influence the adoption of systems like Team-GPT or AI models in general?

Valchanov: Absolutely. One of the biggest issues right now is: where does the data go? Does OpenAI have access to my data? This concern is at the core of our value proposition.

We emphasize to clients that they own their data—it’s entirely theirs. Our product is designed around this principle, particularly for enterprises that handle intellectual property, sensitive information, personally identifiable information, healthcare data, or financial records. These organizations will likely continue prioritizing private and secure environments for their data.

In contrast, smaller organizations may default to accepting that their data is stored with larger companies, much like what we see with Google. Google has extensive knowledge of users’ activities—every search, every interaction—and most people have come to accept this. However, there are alternatives, like DuckDuckGo and other privacy-focused search engines and browsers, for those who value data security.

Jacobsen: What do you see as a crucial point people need to understand when you’re educating them about these products, processes, and general data security?

Valchanov: A significant challenge is the lack of clarity around how AI works and its boundaries. We encounter a wide spectrum of beliefs: some people think AI can’t do anything useful for them, which is incorrect, while others believe AI can perform near-magical feats, which is equally unrealistic.

When I start a conversation with someone, I often don’t know where they fall on this spectrum. They might be skeptics or overenthusiastic believers. One of the key tasks is managing their expectations—explaining what is currently possible, what isn’t, and what they can realistically expect from AI models.

For example, tools like ChatGPT aren’t strong at performing complex mathematical operations. People need to understand these limitations. However, this won’t remain the case indefinitely. Models like o1 and o3 (and beyond) are already showing improvements in mathematics, but we’re not fully there yet.

The industry is evolving so rapidly that capabilities are constantly shifting. This pace of change creates confusion for decision-makers trying to stay informed about what’s feasible today versus what might be achievable soon.

Jacobsen: Not only did they not know the capabilities to begin with, but these capabilities are also changing every day. What was true yesterday may no longer be true today.

For example, when you have a two-year sales cycle, which is typical for financial institutions or government deals, the landscape can shift dramatically. At the start of negotiations, certain functionalities might be impossible, but by the end, many of those same functionalities may be feasible. It’s a very confusing field in that regard. I’ve been listening to Eric Schmidt, the former CEO of Google, and Jensen Huang, the CEO of NVIDIA.

Both present a reasoned perspective on these systems’ vast capabilities and potential while discussing risks and rewards and how to maximize the latter while minimizing the former. This is a highly relevant conversation. What will the capabilities of Team-GPT be as it builds on models like o1 and potentially o3? I’ve seen some graphs where o3 significantly outperforms earlier iterations like o1, even in low-compute modes. What are your thoughts on this?

Valchanov: I think Silicon Valley attracts some of the brightest minds in mathematics and computer science. If there are problems they excel at solving, they’re often mathematical or technical—such as coding, data science, or algorithm development.

The fact that the earliest AI systems focused on creative use cases, like content creation or article writing, was more of a coincidence and an outlier. I expect Bay Area experts to solve highly technical use cases—like those for coders, data scientists, or mathematicians—exceptionally well because they deeply understand those domains. On the other hand, I believe they’re less likely to excel in use cases like generating text and images or creating engaging stories.

In this regard, I foresee mathematical and reasoning capabilities improving at a super-fast pace while advancements in creativity may take significantly longer. AI may not achieve a high level of creativity for quite some time.

Jacobsen: I’ve observed a distinction in how AI is often discussed, particularly in English. These systems process massive datasets through deep learning and neural networks, enabling them to generate new information based on prior models or datasets. However, this isn’t comparable to what we call organic creativity. It’s more akin to generativity, as described by thinkers like Chomsky.

When people discuss creativity in AI, they often reference this notion of generativity. If we consider AI systems creative in any sense, it’s a form of lower-level creativity. That said, I envision a future where AI achieves higher forms of creativity once its operations and algorithms are further refined. What do you think?

Valchanov: That’s an excellent observation. These AI systems are indeed generative but in a way that’s constrained by their datasets and models. They don’t have the organic creativity of humans, which stems from complex, unstructured cognitive processes.

As you said, current AI creativity is a lower-level form. While it may someday evolve into something more profound, that will require breakthroughs in algorithm design and operational frameworks. Until then, AI’s creative abilities will likely remain limited, but its utility in technical and reasoning tasks will continue to grow exponentially.

Jacobsen: What would you project your user base to be by the end of 2025? For instance, in terms of total users and the partnerships you expect to have? Currently, you’re working with Salesforce and Johns Hopkins University. What other areas do you see as having potential for your prompt-based system?

Valchanov: For us, the focus is on mid-market and enterprise clients—companies with more than 200 to 300 employees. The productivity improvements and cost savings for these organizations are enormous. Smaller organizations benefit from the boost as well, but the impact is smaller than it is for a company with 1,000 employees, for instance.

I haven’t set KPIs for 2025 yet, so I can’t give you an exact number in terms of total users. However, having 50 to 100 large companies—each with more than 300 employees—would be a significant milestone for us. It’s not just about acquiring users but activating them.

For example, we currently have organizations with thousands of employees, but only 50 people are actively using the system. In one case, we have an organization with 4,000 employees, but only three use the platform. They’ve been paying for three months, and now the client asks, “How do we roll this out meaningfully to all 4,000 employees?” It’s a complex challenge because you’re disrupting workflows to introduce new technology.

Another company started with 15 users. A year later, they had 60 active users. They aim to onboard 300 people by the end of Q1 and 1,000 by the end of 2025. That organization has 5,000 employees, so onboarding 1,000 people in a year would be a significant achievement.

Long story short, I hope we reach more than 100,000 potential users by the end of 2025. It would be an incredible achievement if we activated 50,000 to 100,000 of them. However, activation takes time and effort.

Jacobsen: Thank you for sharing your insights. I find this area particularly interesting, and startups should invest more time and money in it. Everyone seems to want to invest in AI, primarily out of fear—ironically, fear of AI’s potential power. In my opinion, this field will remain a powerful area for growth well into the 2030s and beyond.

Valchanov: I agree 100%. It will likely take 15 years or more to integrate AI into workflows fully. I’m very excited about the possibilities, even though we don’t know what to expect. I’ve heard projections that humanoid robots could outnumber living humans by 2040 or 2050. With advancements in large language models, interactions are becoming much more dynamic and meaningful than ever before.

Jacobsen: Excellent. Thank you so much for your time. I’ll start editing this today, and I should have something ready for you soon.

Valchanov: Thank you! Have a great day.

Jacobsen: You too. Bye.

