API Key rejected?

Created on 30 April 2024, about 2 months ago
Updated 12 June 2024, 16 days ago

Problem/Motivation

When I try to use the Hugging Face text-to-image model, I get an error suggesting the API key was rejected. But when I go to the suggested page (https://huggingface.co/settings/tokens), I see only tokens for reading and writing. These seem to be mainly for accessing the repositories, not for testing the models. Should I be looking elsewhere for a key?

Thanks very much for everything!

A general error happened while trying to interpolate, message: Client error: `POST https://api-inference.huggingface.co/models/rupeshs/LCM-runwayml-stable-diffusion-v1-5` resulted in a `400 Bad Request` response: {"error":"Authorization header is correct, but the token seems invalid"}
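
For reference, the failing call can be reproduced outside the module with a short Python script. This is a minimal sketch, assuming the token is exported in an environment variable named HF_TOKEN (the variable name is illustrative; the endpoint and model path are taken from the error above):

    import os
    import requests

    # Endpoint and model path copied from the error message above
    API_URL = "https://api-inference.huggingface.co/models/rupeshs/LCM-runwayml-stable-diffusion-v1-5"

    # Assumes the token is exported as HF_TOKEN
    token = os.environ["HF_TOKEN"]

    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={"inputs": "a photo of an astronaut riding a horse"},
    )

    if response.ok:
        # Text-to-image models return raw image bytes on success
        with open("out.png", "wb") as f:
            f.write(response.content)
    else:
        # A 400 with "token seems invalid" points at the token itself
        print(response.status_code, response.text)
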
πŸ› Bug report
Status

Postponed

Version

1.0

Component

Code

Created by

🇺🇸 United States bogdog400


Comments & Activities

  • Issue created by @bogdog400
  • 🇩🇪 Germany Marcus_Johansson

    @bogdog400 It should be that token. A valid token starts with "hf_".

    I can't really replicate this. In the worst case, could you mail me an example token that I could try to replicate it with? You can mail it to huggingface@marcusmailbox.com, and as soon as I have tested it you can revoke the token.
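
    One quick way to sanity-check a token independently of any model is Hugging Face's whoami endpoint. A minimal sketch, assuming the token is in an HF_TOKEN environment variable:

        import os
        import requests

        # If the token is valid, this returns the account it belongs to
        response = requests.get(
            "https://huggingface.co/api/whoami-v2",
            headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
        )

        if response.ok:
            print("Token is valid for user:", response.json().get("name"))
        else:
            print("Token rejected:", response.status_code, response.text)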

  • Status changed to Postponed 16 days ago
  • 🇩🇰 Denmark ressa Copenhagen

    I was trying to use some of the meta-llama/Meta-Llama models (and a few others) and got the same error, along with other warnings. In the end I got it working with these steps:

    1. Create a token
    2. Update permissions under "Access Tokens > YOUR_TOKEN > Manage > Edit Permissions" and set these values:
      Inference
      [x] Make calls to the serverless Inference API
      [x] Make calls to Inference Endpoints
      [ ] Manage Inference Endpoints
      Repos
      [x] Read access to contents of all public gated repos you can access
      [...]
      
    3. I then got the error "The model meta-llama/Meta-Llama-3-8B is too large to be loaded automatically"; after changing to meta-llama/Meta-Llama-3-8B-Instruct it worked (see the sketch below)

    See The Serverless Inference API documentation: "The model meta-llama/Meta-Llama-3-8B is too large to be loaded automatically (16GB > 10GB)".
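
    With the permissions above set, the serverless call should go through. A sketch, again assuming the token is exported as HF_TOKEN (note that meta-llama models are gated, so the account must also have been granted access to the model):

        import os
        import requests

        API_URL = "https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-8B-Instruct"

        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
            json={"inputs": "Write one sentence about Copenhagen."},
        )

        # Text-generation models return JSON rather than raw image bytes
        print(response.status_code, response.json())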
