Heimdal


Hugging Face, a well-known AI company, reports that malicious actors have gained access to its members’ authentication secrets through a compromise on its Spaces platform.

“Hugging Face Spaces” is a collection of AI apps made and submitted by community members, available for other members to test.

Hugging Face warned in a blog post:

Earlier this week our team detected unauthorized access to our Spaces platform, specifically related to Spaces secrets. As a consequence, we have suspicions that a subset of Spaces’ secrets could have been accessed without authorization.

As a first step of remediation, we have revoked a number of HF tokens present in those secrets. Users whose tokens have been revoked already received an email notice. We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens which are the new default.

Hugging Face Blog Post (Source)

How is Hugging Face mitigating the incident?

In response to the incident, the AI platform says it has tightened security measures in recent days.

Over the past few days, we have made other significant improvements to the security of the Spaces infrastructure, including completely removing org tokens (resulting in increased traceability and audit capabilities), implementing key management service (KMS) for Spaces secrets, robustifying and expanding our system’s ability to identify leaked tokens and proactively invalidate them, and more generally improving our security across the board.

We also plan on completely deprecating “classic” read and write tokens in the near future, as soon as fine-grained access tokens reach feature parity. We will continue to investigate any possible related incident.

Finally, we have also reported this incident to law enforcement agencies and Data protection authorities.

Hugging Face Blog Post (Source)

The company has already emailed affected users and revoked the authentication tokens found in the exposed secrets.

The company advises all Hugging Face Spaces users to refresh their tokens and switch to fine-grained access tokens, which give organizations tighter control over who may access their AI models.
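In practice, a rotated token should be supplied through the environment rather than hardcoded in app code or notebooks. The sketch below is illustrative: the `HF_TOKEN` variable name follows the convention used by the `huggingface_hub` library, and the `auth_header` helper is our own example, not part of Hugging Face's API.

```python
import os

def auth_header(env_var="HF_TOKEN"):
    # Read the (rotated) Hugging Face token from the environment so it
    # never lands in source control or serialized Space secrets.
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"{env_var} is not set; generate a fine-grained token first")
    # Hugging Face Hub API calls authenticate with a standard Bearer header.
    return {"Authorization": f"Bearer {token}"}

# Placeholder value for demonstration only; set the real token outside the code.
os.environ["HF_TOKEN"] = "hf_example_fine_grained_token"
print(auth_header())
```

Keeping the token out of the codebase means that rotating it after an incident like this one requires only updating the environment, not redeploying the application.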

The company is collaborating with external cybersecurity specialists to investigate the compromise, and has reported it to law enforcement and data protection authorities.

Malicious actors are always on the hunt for new tactics

As Hugging Face has grown in popularity, it has also become a target for threat actors looking to exploit it for malicious purposes, explains Security Affairs.

In February, cybersecurity firm JFrog discovered roughly 100 malicious AI/ML models capable of executing code on a victim’s PC. One of the models launched a reverse shell, allowing a remote threat actor to access any device running the code.
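Findings like JFrog’s typically hinge on Python’s pickle format, which is widely used to serialize ML model weights and can invoke arbitrary callables during deserialization. The benign sketch below illustrates the mechanism (the `Payload` class and `record_call` function are our illustration, not code from the report; a real attacker would substitute something like a reverse shell for the harmless callback):

```python
import pickle

calls = []

def record_call(msg):
    # Stands in for arbitrary attacker code (e.g., spawning a reverse shell).
    calls.append(msg)
    return msg

class Payload:
    # __reduce__ tells pickle which callable to invoke when unpickling,
    # so merely loading the bytes runs code chosen by whoever made them.
    def __reduce__(self):
        return (record_call, ("executed during unpickling",))

data = pickle.dumps(Payload())
pickle.loads(data)  # loading alone triggers record_call
print(calls)        # → ['executed during unpickling']
```

This is why loading an untrusted pickled model is equivalent to running untrusted code, and why safer serialization formats for model weights have gained traction.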

More recently, Wiz security researchers uncovered a vulnerability that allowed them to upload custom models and use container escapes to acquire cross-tenant access to other customers’ models.


Author Profile

Madalina Popovici

Digital PR Specialist


Madalina, a seasoned digital content creator at Heimdal®, blends her passion for cybersecurity with an 8-year background in PR & CSR consultancy. Skilled in making complex cyber topics accessible, she bridges the gap between cyber experts and the wider audience with finesse.
