Limited Canva Creator Data Exposed Via AI Chatbot Database

A Chroma database operated by Russian AI chatbot startup My Jedai was found exposed online, leaking survey responses from over 500 Canva Creators. The exposed data included email addresses, feedback on Canva’s Creator Program, and personal insights into the experiences of designers across more than a dozen countries.

The data exposure was discovered by cybersecurity firm UpGuard, which confirmed the database was publicly accessible and lacked authentication. While much of the database stored generic or public data, one particular collection stood out: it contained responses to a detailed survey issued to Canva Creators, a global group of content contributors to the design platform.

The survey data included 571 unique email addresses and detailed responses to 51 questions, covering topics such as royalties, user experience, and AI adoption. Some email addresses appeared multiple times, indicating that users had completed the survey more than once.

According to UpGuard’s report, shared with Hackread.com ahead of its publication on Monday, this incident is the first known leak involving a Chroma database. Chroma is an open-source vector database used to help chatbots reference specific documents when responding to queries.
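In practice, Chroma underpins retrieval-augmented generation: documents are embedded into vectors and stored in collections, and the closest matches are retrieved to ground a chatbot’s answer. The sketch below shows that basic pattern; the collection name and documents are hypothetical, not drawn from the exposed database.

```python
# Minimal sketch of the retrieval pattern Chroma supports.
# The collection name and documents here are hypothetical.
import chromadb

client = chromadb.Client()  # in-memory instance, for illustration only

# Chroma embeds the documents so they can be searched by meaning.
collection = client.create_collection("creator_survey_demo")
collection.add(
    ids=["r1", "r2"],
    documents=[
        "Royalty payouts arrive on time, but reporting could be clearer.",
        "I use AI tools daily to draft template descriptions.",
    ],
)

# At answer time, a chatbot retrieves the most relevant documents
# and folds them into the prompt it sends to the language model.
results = collection.query(
    query_texts=["How do creators feel about royalties?"],
    n_results=1,
)
print(results["documents"][0][0])  # best-matching stored document
```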

The database, hosted on an IP address in Estonia, appeared to be controlled by My Jedai, a small Russian company that provides AI chatbot services. Users of the platform can upload documents of any type to power their chatbots, often without much technical oversight.

The presence of Canva data in this context raised questions about how sensitive information ends up in AI training systems or chatbot backends. Although Chroma is not inherently insecure, it requires proper configuration to prevent public exposure. In this case, the database was left wide open to the internet.
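To illustrate how low the bar is: an unauthenticated Chroma server reachable from the internet can be enumerated with the stock client. The host below is a placeholder, and the hardening notes in the comments are an assumption to verify against current Chroma documentation, since its authentication configuration has changed across versions.

```python
# Sketch: enumerating an unauthenticated Chroma server with the stock
# client. Host and port are placeholders, not the server UpGuard found.
import chromadb

client = chromadb.HttpClient(host="198.51.100.10", port=8000)

# With no authentication configured, anyone who can reach the port
# can list the collections and read the stored documents.
for col in client.list_collections():
    # Depending on the chromadb version, list_collections() yields
    # Collection objects or plain names; normalize to a Collection.
    if isinstance(col, str):
        col = client.get_collection(col)
    print(col.name, col.count())
    print(col.peek(limit=3))  # sample of stored records

# Hardening (assumption -- check current docs): enable server-side token
# auth, e.g. via CHROMA_SERVER_AUTHN_PROVIDER and
# CHROMA_SERVER_AUTHN_CREDENTIALS, and keep the port off the public
# internet behind a firewall or VPN.
```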

Canva responded to the findings with a statement to Hackread:

“We recently became aware that a file containing email addresses and survey responses from a small group of Canva Creators was uploaded to a third-party website. The information was not connected to Canva accounts or platform data in any way. The database owned by the third-party site was not adequately secured, which led to the information being accessible.”

“The issue was reported to us by a security researcher, who discovered the exposed information using specialist tools, but [the data] is not broadly accessible to regular internet users, nor was it indexed by popular search engines. We’ve confirmed the file contents have been removed, and site logs show it was not accessed by others.”

“We’ve already contacted the affected Creators and are complying with all our legal obligations, including notifying regulators where required. We’re deeply invested in keeping our community’s data safe and secure, and we’re reviewing our processes to help prevent this from happening again.”

– Canva spokesperson

While there’s no indication that the data has been misused, experts point out that even limited personal information, combined with survey content, can be useful for targeted phishing. Respondents shared details about their professional roles, creative habits, and satisfaction with the Canva platform, all information that could be exploited if it fell into the wrong hands.

My Jedai, the company whose database was exposed, is a Russian microenterprise that lets users build chatbots powered by their own documents. It acted quickly once notified, securing the exposed database within a day of UpGuard’s outreach.

The leak shows how AI technologies are creating new, unpredictable channels for data exposure. As more companies adopt tools like Chroma to power customer-facing bots or internal assistants, the pressure to push data into these systems can lead to shortcuts and mistakes.

This case also highlights how widely AI tools are being used around the world, often in unexpected ways. Data collected in surveys by an Australian tech giant ended up in an unsecured database managed by a small Russian firm, hosted on servers in Estonia. With the increasing use of LLMs and third-party chatbot tools, traditional boundaries for data custody are becoming harder to track.

UpGuard noted that many of the documents in the database were harmless or even nonsensical, including “mystical doctrines” and romantic advice scraped from public websites like Marie Claire and WikiHow.

However, the presence of real-world corporate data, including internal chat transcripts and links to restricted file-sharing platforms, shows how easy it is for more sensitive content to slip into AI systems without proper protection.
