fix: use all prompts in a batch to generate data
The original code in openai_completion() only used the first prompt of each batch and skipped the remaining 4 of the 5 prompts.
The fix adds a loop so that every prompt in the batch is sent to the OpenAI chat completion API to generate data.

Signed-off-by: degaochu <[email protected]>
chudegao committed Jun 21, 2024
1 parent bd941f0 commit d703fb5
Showing 1 changed file with 16 additions and 16 deletions: src/instructlab/sdg/utils.py
```diff
@@ -147,23 +147,23 @@ def openai_completion(
             f"Model {model_name} is not served by the server. These are the served models {model_ids}"
         )

-    messages = [
-        {"role": "system", "content": get_sysprompt()},
-        {"role": "user", "content": prompt_batch[batch_id]},
-    ]
-
-    # Inference the model
-    try:
-        response = client.chat.completions.create(
-            messages=messages,
-            **shared_kwargs,
-        )
-    except OpenAIError as exc:
-        raise GenerateException(
-            f"There was a problem connecting to the server {exc}"
-        ) from exc
+    for prompt in prompt_batch:
+        messages = [
+            {"role": "system", "content": get_sysprompt()},
+            {"role": "user", "content": prompt},
+        ]
+        # Inference the model
+        try:
+            response = client.chat.completions.create(
+                messages=messages,
+                **shared_kwargs,
+            )
+        except OpenAIError as exc:
+            raise GenerateException(
+                f"There was a problem connecting to the server {exc}"
+            ) from exc

-    completions.extend(response.choices)
+        completions.extend(response.choices)

     if return_text:
         completions = [completion.text for completion in completions]
```
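To illustrate the behavioral change, here is a minimal, runnable sketch of the fixed loop. `StubClient` and `generate_batch` are hypothetical stand-ins for testing the control flow, not the real OpenAI SDK or the repository's code; before this fix, the equivalent code produced one completion per batch, afterwards it produces one per prompt.

```python
def get_sysprompt():
    # Stand-in for the repo's system prompt helper.
    return "You are a helpful assistant."


class StubClient:
    """Hypothetical stub: returns one fake choice per call, echoing the user prompt."""

    def create(self, messages, **kwargs):
        return [f"completion for: {messages[-1]['content']}"]


def generate_batch(client, prompt_batch):
    completions = []
    # The fix: iterate over ALL prompts in the batch, not just prompt_batch[0].
    for prompt in prompt_batch:
        messages = [
            {"role": "system", "content": get_sysprompt()},
            {"role": "user", "content": prompt},
        ]
        completions.extend(client.create(messages=messages))
    return completions


if __name__ == "__main__":
    batch = [f"prompt {i}" for i in range(5)]
    results = generate_batch(StubClient(), batch)
    print(len(results))  # 5 completions, one per prompt, instead of 1
```

With the pre-fix behavior (a single call using `prompt_batch[0]`), `results` would contain only one entry for a batch of five.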
