
Update ollama.py with optional raw setting. (#21486)
Ollama now supports a `raw` option, so this PR exposes it as an optional field on the community `Ollama` LLM.

See the Ollama API docs: https://github.com/ollama/ollama/blob/main/docs/api.md
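As a quick illustration of what the new field enables (a minimal sketch, assuming a local Ollama server; `invoke` comes from the base LLM interface):

```python
from langchain_community.llms import Ollama

# raw=True asks Ollama to skip its prompt templating, so the string below is
# sent to the model verbatim (useful when you do the templating yourself).
llm = Ollama(model="llama2", raw=True)
print(llm.invoke("[INST] Tell me a joke. [/INST]"))
```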

Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
  - Where "package" is whichever of langchain, community, core, experimental, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
  - Example: "community: add foobar LLM"

- [ ] **PR message**: ***Delete this entire checklist*** and replace with
  - **Description:** a description of the change
  - **Issue:** the issue # it fixes, if applicable
  - **Dependencies:** any dependencies required for this change
  - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!

- [ ] **Add tests and docs**: If you're adding a new integration, please include
  1. a test for the integration, preferably unit tests that do not rely on network access,
  2. an example notebook showing its use, which lives in the `docs/docs/integrations` directory.

- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.

If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Isaac Francisco <[email protected]>
Co-authored-by: isaac hershenson <[email protected]>
3 people authored Jun 15, 2024
1 parent 9944ad7 commit 570d45b
Showing 2 changed files with 7 additions and 14 deletions.
18 changes: 4 additions & 14 deletions libs/community/langchain_community/llms/ollama.py
@@ -112,15 +112,16 @@ class _OllamaCommon(BaseLanguageModel):
     """Timeout for the request stream"""
 
     keep_alive: Optional[Union[int, str]] = None
     """How long the model will stay loaded into memory.
 
     The parameter (Default: 5 minutes) can be set to:
     1. a duration string in Golang (such as "10m" or "24h");
     2. a number in seconds (such as 3600);
     3. any negative number which will keep the model loaded \
     in memory (e.g. -1 or "-1m");
     4. 0 which will unload the model immediately after generating a response;
     See the [Ollama documents](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-keep-a-model-loaded-in-memory-or-make-it-unload-immediately)"""
 
+    raw: Optional[bool] = None
+    """If True, no formatting is applied to the prompt before it is sent to the model."""
+
     headers: Optional[dict] = None
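For context, the flag maps straight onto the underlying REST endpoint; a hedged sketch of the equivalent direct call (assuming Ollama's default localhost port):

```python
import requests

# "raw": True disables Ollama's prompt templating for this request;
# "stream": False makes the endpoint return a single JSON object.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "[INST] Hi [/INST]", "raw": True, "stream": False},
    timeout=60,
)
print(resp.json()["response"])
```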
@@ -154,6 +155,7 @@ def _default_params(self) -> Dict[str, Any]:
             "system": self.system,
             "template": self.template,
             "keep_alive": self.keep_alive,
+            "raw": self.raw,
         }
 
     @property
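Because `_default_params` is merged into every request payload, setting the field once on the instance is enough; a small illustrative check (relying on a private attribute, so treat it as an assumption rather than supported API):

```python
from langchain_community.llms import Ollama

llm = Ollama(model="llama2", raw=True)
# The hunk above shows `_default_params` now carries `raw` alongside
# `keep_alive`, `system`, `template`, etc.
assert llm._default_params["raw"] is True
```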
@@ -227,7 +229,6 @@ def _create_stream(
             "images": payload.get("images", []),
             **params,
         }
-
         response = requests.post(
             url=api_url,
             headers={
@@ -369,12 +370,9 @@ async def _astream_with_aggregation(
 
 class Ollama(BaseLLM, _OllamaCommon):
     """Ollama locally runs large language models.
-
     To use, follow the instructions at https://ollama.ai/.
-
     Example:
         .. code-block:: python
-
             from langchain_community.llms import Ollama
             ollama = Ollama(model="llama2")
     """
@@ -398,17 +396,13 @@ def _generate(  # type: ignore[override]
         **kwargs: Any,
     ) -> LLMResult:
         """Call out to Ollama's generate endpoint.
-
         Args:
             prompt: The prompt to pass into the model.
             stop: Optional list of stop words to use when generating.
-
         Returns:
             The string generated by the model.
-
         Example:
             .. code-block:: python
-
                 response = ollama("Tell me a joke.")
         """
         # TODO: add caching here.
@@ -434,17 +428,13 @@ async def _agenerate(  # type: ignore[override]
         **kwargs: Any,
     ) -> LLMResult:
         """Call out to Ollama's generate endpoint.
-
         Args:
             prompt: The prompt to pass into the model.
             stop: Optional list of stop words to use when generating.
-
         Returns:
             The string generated by the model.
-
         Example:
             .. code-block:: python
-
                 response = ollama("Tell me a joke.")
         """
         # TODO: add caching here.
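Since `_agenerate` mirrors `_generate`, the same option flows through the async path; a minimal sketch (assuming a local Ollama server):

```python
import asyncio

from langchain_community.llms import Ollama


async def main() -> None:
    llm = Ollama(model="llama2", raw=True)
    # ainvoke drives the async generate path shown in the hunk above.
    print(await llm.ainvoke("Tell me a joke."))


asyncio.run(main())
```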
3 changes: 3 additions & 0 deletions libs/community/tests/unit_tests/llms/test_ollama.py
@@ -101,6 +101,7 @@ def mock_post(url, headers, json, stream, timeout):  # type: ignore[no-untyped-def]
             "system": "Test system prompt",
             "template": None,
             "keep_alive": None,
+            "raw": None,
         }
         assert stream is True
         assert timeout == 300
@@ -149,6 +150,7 @@ def mock_post(url, headers, json, stream, timeout):  # type: ignore[no-untyped-def]
             "system": None,
             "template": None,
             "keep_alive": None,
+            "raw": None,
         }
         assert stream is True
         assert timeout == 300
@@ -181,6 +183,7 @@ def mock_post(url, headers, json, stream, timeout):  # type: ignore[no-untyped-def]
             "system": None,
             "template": None,
             "keep_alive": None,
+            "raw": None,
         }
         assert stream is True
         assert timeout == 300
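A condensed sketch of the mocking pattern these tests follow (the fixture names and response shape here are assumptions based on the visible asserts, not the exact test code):

```python
import requests

from langchain_community.llms import Ollama


def test_raw_is_forwarded(monkeypatch) -> None:  # hypothetical test, for illustration
    captured = {}

    class FakeResponse:
        status_code = 200

        def iter_lines(self, decode_unicode=False):
            # One terminal chunk in Ollama's JSON-lines streaming format.
            return iter(['{"response": "ok", "done": true}'])

    def mock_post(url, headers, json, stream, timeout):
        captured.update(json)
        return FakeResponse()

    monkeypatch.setattr(requests, "post", mock_post)

    llm = Ollama(model="llama2", raw=True)
    llm.invoke("hello")
    assert captured["raw"] is True
```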
