Generate requests are sometimes cut short #63

@mattjhawken

Description

Two issues have arisen when sending generate requests via the API. First, the model output is sometimes cut short; this may just be an artifact of the LLMs being used, but it may not. Second, the request sometimes gets stuck on the worker (i.e. the validator receives the request and sends `.generate` to the worker, but the worker never returns anything and we get a 500 Internal Server Error response).
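
Not sure yet whether the fix belongs on the validator or the worker, but below is a minimal sketch of the kind of mitigation I have in mind on the validator side. The names (`worker.generate`, `max_new_tokens`, `finish_reason`, the response shape) are assumptions and may not match our actual API: wrap the worker call in a timeout so a stuck worker surfaces as an explicit timeout instead of a generic 500, and flag outputs that stopped on the token budget rather than EOS so cut-short responses are at least visible.

```python
# Hypothetical sketch -- `worker.generate`, `finish_reason`, and the response
# shape are assumptions, not the project's actual API.
import asyncio

MAX_NEW_TOKENS = 512
WORKER_TIMEOUT_S = 60

async def generate_with_timeout(worker, prompt: str) -> dict:
    """Call the worker's generate and fail fast instead of hanging into a 500."""
    try:
        output = await asyncio.wait_for(
            worker.generate(prompt, max_new_tokens=MAX_NEW_TOKENS),
            timeout=WORKER_TIMEOUT_S,
        )
    except asyncio.TimeoutError:
        # Surface a clear error to the caller rather than a generic 500.
        return {"error": "worker timed out", "status": 504}

    # Heuristic truncation check: if generation stopped because the token
    # budget ran out (rather than hitting EOS), flag the output as cut short.
    truncated = output.get("finish_reason") == "length"
    return {"text": output.get("text", ""), "truncated": truncated, "status": 200}
```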

    Labels

    bug: Something isn't working
    node: Peer-to-peer node features and optimizations.
    torch: PyTorch neural network workflow and optimizations.
