
[BUG] TypeError: llama_index.core.ingestion.pipeline.run_transformations() got multiple values for keyword argument 'show_progress' #2133

Open

yaziciali opened this issue Nov 29, 2024 · 2 comments

Labels: bug (Something isn't working)
Pre-check

  • I have searched the existing issues and none cover this bug.

Description

Hi,
I'm getting the error below and haven't been able to resolve it. Can you help?

```
file_name: image__vector_store.json, extension: .json, reader_cls: <class 'llama_index.core.readers.json.JSONReader'>
Traceback (most recent call last):
  File "/Users/user/AI/private-gpt/scripts/ingest_folder.py", line 122, in <module>
    worker.ingest_folder(root_path, args.ignored)
  File "/Users/user/AI/private-gpt/scripts/ingest_folder.py", line 58, in ingest_folder
    self._ingest_all(self._files_under_root_folder)
  File "/Users/user/AI/private-gpt/scripts/ingest_folder.py", line 62, in _ingest_all
    self.ingest_service.bulk_ingest([(str(p.name), p) for p in files_to_ingest])
  File "/Users/user/AI/private-gpt/private_gpt/server/ingest/ingest_service.py", line 87, in bulk_ingest
    documents = self.ingest_component.bulk_ingest(files)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/user/AI/private-gpt/private_gpt/components/ingest/ingest_component.py", line 135, in bulk_ingest
    saved_documents.extend(self._save_docs(documents))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/user/AI/private-gpt/private_gpt/components/ingest/ingest_component.py", line 143, in _save_docs
    self._index.insert(document, show_progress=True)
  File "/Users/user/AI/private-gpt/.venv/lib/python3.11/site-packages/llama_index/core/indices/base.py", line 209, in insert
    nodes = run_transformations(
            ^^^^^^^^^^^^^^^^^^^^
TypeError: llama_index.core.ingestion.pipeline.run_transformations() got multiple values for keyword argument 'show_progress'
make: *** [ingest] Error 1
```
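A "got multiple values for keyword argument" TypeError typically means the caller passes `show_progress` explicitly while the same key also arrives through a forwarded `**kwargs` dict. A minimal, self-contained sketch of that collision — the functions below are stand-ins, not the actual llama_index code:

```python
def run_transformations(nodes, transformations, show_progress=False, **kwargs):
    """Stand-in for llama_index.core.ingestion.pipeline.run_transformations."""
    return nodes


def insert(document, **insert_kwargs):
    # The callee sets show_progress itself AND forwards insert_kwargs.
    # If the caller also passed show_progress, Python sees the argument
    # twice and raises TypeError at call time.
    return run_transformations([document], [], show_progress=True, **insert_kwargs)


try:
    insert("doc", show_progress=True)
except TypeError as exc:
    # e.g. "... got multiple values for keyword argument 'show_progress'"
    print(exc)
```

This matches the traceback: `_save_docs` calls `self._index.insert(document, show_progress=True)`, and the kwarg collides inside `insert` when it reaches `run_transformations`.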

Steps to Reproduce

1. Run `PGPT_PROFILES=ollama make ingest ../ING -- --watch --log-file ../LOGS/privategpt_ingest.log`
2. Press Ctrl+C after a while.
3. Run the same command again.

Expected Behavior

Documents are ingested successfully.

Actual Behavior

Ingestion fails with the TypeError above.

Environment

Mac (ARM CPU and GPU) with Python 3.11

Additional Information

No response

Version

No response

Setup Checklist

  • Confirm that you have followed the installation instructions in the project’s documentation.
  • Check that you are using the latest version of the project.
  • Verify disk space availability for model storage and data processing.
  • Ensure that you have the necessary permissions to run the project.

NVIDIA GPU Setup Checklist

  • Check that all CUDA dependencies are installed and compatible with your GPU (refer to the CUDA documentation).
  • Ensure an NVIDIA GPU is installed and recognized by the system (run nvidia-smi to verify).
  • Ensure proper permissions are set for accessing GPU resources.
  • Docker users - Verify that the NVIDIA Container Toolkit is configured correctly (e.g. run sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi)
yaziciali added the bug (Something isn't working) label Nov 29, 2024
jaluma (Collaborator) commented Dec 2, 2024

LlamaIndex has been bumped... I will update LlamaIndex to fix all dependency issues.
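Until the dependency bump lands, one possible interim workaround (an assumption based on the traceback, not a confirmed fix) is to stop forwarding the duplicate kwarg from the caller's side. The helper names below (`call_without_duplicate`, `index_insert`) are hypothetical stand-ins for illustration:

```python
def call_without_duplicate(fn, *args, **kwargs):
    # Generic guard: drop kwargs the callee is known to set internally,
    # avoiding "got multiple values for keyword argument" collisions.
    for key in ("show_progress",):
        kwargs.pop(key, None)
    return fn(*args, **kwargs)


def index_insert(document, **kwargs):
    # Stand-in for an insert method that supplies show_progress itself.
    assert "show_progress" not in kwargs
    return f"inserted {document}"


result = call_without_duplicate(index_insert, "doc", show_progress=True)
```

In private-gpt itself the equivalent one-line change would be removing `show_progress=True` from the `self._index.insert(...)` call in `_save_docs`, at the cost of losing the progress bar.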

@Abdur-Rahman29

Same error here. Please fix it.

3 participants