bug: definition of json schema in prompt #200

Open
kdziedzic68 opened this issue Nov 20, 2024 · 0 comments

Labels: bug (Something isn't working)

@kdziedzic68 (Collaborator):

What happened?

When we pass a pydantic.BaseModel subclass as the output type of a prompt without also including the schema itself in the system prompt, we get parsing errors: the desired structure is not preserved in the raw LLM output, even when using models that support structured outputs, as described here: https://platform.openai.com/docs/guides/structured-outputs .
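
For reference, this is roughly the behaviour we would expect under the hood: with models that support structured outputs, the provider can enforce the schema itself, so nothing needs to be restated in the prompt. A minimal sketch using litellm directly (assuming a recent litellm version that accepts a Pydantic class as response_format; the prompt text and model choice are illustrative):

import litellm
from pydantic import BaseModel


class OutputSchema(BaseModel):
    last: str
    previous: str


# Ask the provider to enforce the schema instead of describing it in the prompt.
response = litellm.completion(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Name the last two world cup winners."}],
    response_format=OutputSchema,  # requires structured-outputs support in the model
)
print(response.choices[0].message.content)  # should be JSON conforming to OutputSchema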

How can we reproduce it?

import asyncio
from pydantic import BaseModel

from ragbits.core.prompt import Prompt
from ragbits.core.llms.litellm import LiteLLM


class QueryWithContext(BaseModel):
    """
    Input format for the RAG prompt: a query plus its retrieved context.
    """

    query: str
    context: list[str]


class OutputSchema(BaseModel):
    last: str
    previous: str


class RAGPrompt(Prompt[QueryWithContext, OutputSchema]):
    """
    A simple prompt for a RAG system.
    """

    system_prompt = """
    You are a helpful assistant. Answer the QUESTION that will be provided using CONTEXT.
    If the given CONTEXT does not contain enough information, refuse to answer.
    """

    user_prompt = """
    QUESTION:
    {{ query }}

    CONTEXT:
    {% for item in context %}
        {{ item }}
    {% endfor %}
    """


async def main():
    llm = LiteLLM(model_name="gpt-4o-2024-08-06")
    query = "Write down names of last two world cup winners"
    context = ["Today is November 2017", "Germany won 2014 world cup", "Spain won 2010 world cup"]
    prompt = RAGPrompt(QueryWithContext(query=query, context=context))
    response = await llm.generate(prompt)
    print(response)


asyncio.run(main())

Relevant log output

No response
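
A possible workaround until this is fixed: restate the schema in the system prompt by hand. A minimal sketch, assuming the imports and classes from the snippet above and that this is purely a prompt-content issue (RAGPromptWithSchema and the instruction wording are illustrative):

import json

# Serialize the expected output schema so it can be quoted in the prompt.
SCHEMA_JSON = json.dumps(OutputSchema.model_json_schema(), indent=2)


class RAGPromptWithSchema(Prompt[QueryWithContext, OutputSchema]):
    """
    Same prompt as above, but with the output schema spelled out explicitly.
    """

    system_prompt = """
    You are a helpful assistant. Answer the QUESTION that will be provided using CONTEXT.
    If the given CONTEXT does not contain enough information, refuse to answer.

    Respond only with a JSON object that conforms to this schema:
    """ + SCHEMA_JSON

    user_prompt = """
    QUESTION:
    {{ query }}

    CONTEXT:
    {% for item in context %}
        {{ item }}
    {% endfor %}
    """

This only papers over the issue, of course; ideally the schema (or the provider-side response_format) would be injected automatically whenever an output type is given.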

@kdziedzic68 kdziedzic68 added the bug Something isn't working label Nov 20, 2024
@mhordynski mhordynski self-assigned this Dec 17, 2024