
Cannot read property 'createModel' of null #7

Open
kumard3 opened this issue Jun 25, 2024 · 9 comments

Comments

@kumard3

kumard3 commented Jun 25, 2024

Hi, I'm getting this error while using the package. I might be using it wrong; can you help me with this setup?

    const llmInference = useLlmInference({
      storageType: 'file',
      modelPath: './gemma-2b-it-gpu-int4.bin',
    });
@cdiddy77
Owner

That filepath needs to point to a location on your device.

For Android, see here.

For iOS, you need to find a way to push the file to local application storage, or perhaps to an iCloud location if you are familiar with how that works, or potentially to use the iOS platform APIs for accessing files.

The easiest way, on both platforms, is simply to bundle the model as an asset.
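The asset route can be sketched as below. This is a hedged example, not the library's documented setup: the import path and exact option names are assumptions based on usage shown later in this thread, and it presumes the model file has been placed in android/app/src/main/assets/ on Android (or added to the Xcode app target on iOS).

```typescript
// Sketch only: import path assumed; options mirror usage elsewhere in this thread.
import {useLlmInference} from 'react-native-llm-mediapipe';

// Android: put the file at android/app/src/main/assets/gemma-2b-it-cpu-int4.bin
// iOS: add the same file to the app target so it is copied into the app bundle.
export function useBundledLlm() {
  // 'asset' tells the native side to load from the platform asset/bundle
  // location rather than an absolute file path.
  return useLlmInference({
    storageType: 'asset',
    modelName: 'gemma-2b-it-cpu-int4.bin',
  });
}
```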

@kumard3
Author

kumard3 commented Jun 26, 2024

> That filepath needs to point to a location on your device. […]

Thank you, I will try that. I have two more questions:

  1. What is the 'asset' storage type?
  2. The .bin model is the right one to use, right?

@cdiddy77
Owner

cdiddy77 commented Jun 27, 2024 via email

@kumard3
Author

kumard3 commented Jun 30, 2024

I'm getting this error:

    createModel [Error: internal: Failed to initialize session: %sCan not open OpenCL library on this device - undefined symbol: clSetPerfHintQCOM]

I have tried it on a physical Pixel 7 and on a Pixel 8 Pro emulator.

@ALuhning

Did you ever figure this out? I'm also having a difficult time figuring out how to bundle the model as an asset, or how to access it via a file path on the device.

To bundle, I downloaded a model and included it in an assets folder: I tried putting it in android/app/src/main/assets, in a models/converted folder, and in the same folder as the file calling the function.

With a storageType of 'file', I put the model on my device and tried accessing it with /data/local/tmp/llm/gemma-2b-it-cpu-int4.bin, as well as moving it to other locations and trying different variations of the file path.

I'm most interested in getting it working as a bundled asset, though. Any help/pointers appreciated. Thanks.
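For the file-path route above, one workflow commonly used during development is to push the model with adb into the app's own internal files directory, since an app sandbox generally cannot read /data/local/tmp directly. This is a sketch under assumptions, not this library's documented procedure: the package name com.yourapp is hypothetical, the import path is assumed, and the `run-as` copy step only works on debuggable builds.

```typescript
// Development-only sketch (assumes a debuggable build so `run-as` works):
//
//   adb push gemma-2b-it-cpu-int4.bin /data/local/tmp/
//   adb shell run-as com.yourapp mkdir -p files
//   adb shell run-as com.yourapp cp /data/local/tmp/gemma-2b-it-cpu-int4.bin files/
//
// The app can then read the model from its own internal files directory:
import {useLlmInference} from 'react-native-llm-mediapipe'; // import path assumed

export function useDevModel() {
  return useLlmInference({
    storageType: 'file',
    // /data/user/0/<package>/files is the app-internal files directory on Android.
    modelPath: '/data/user/0/com.yourapp/files/gemma-2b-it-cpu-int4.bin',
  });
}
```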

@jrobles98
Contributor

Hey guys, I just pushed a new PR attempting to fix the OpenCL library problem; it also documents where to put the model files to make this work. I hope it fixes your problems and clears up your doubts 😄

@alam65

alam65 commented Oct 1, 2024

> That filepath needs to point to a location on your device. […]

@cdiddy77 I'm setting up my llmInference like this:

const llmInference = useLlmInference({
  storageType: 'file',
  modelPath: '/data/user/0/com.offlinellmpoc/files/gemma-2b-it-cpu-int4.bin',
});

But my app is crashing, and I can't figure out what the issue is.
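The path in the snippet above does follow Android's app-internal layout (/data/user/&lt;userId&gt;/&lt;package&gt;/files). As a small illustration, a hypothetical helper (not part of the library) that builds paths of that shape:

```typescript
// Hypothetical helper: builds the app-internal file path that Android exposes
// to an app for its own files directory. Pure string logic, for illustration.
export function androidInternalModelPath(
  packageName: string,
  fileName: string,
  userId: number = 0,
): string {
  return `/data/user/${userId}/${packageName}/files/${fileName}`;
}

// Example (matches the path used above):
// androidInternalModelPath('com.offlinellmpoc', 'gemma-2b-it-cpu-int4.bin')
//   → '/data/user/0/com.offlinellmpoc/files/gemma-2b-it-cpu-int4.bin'
```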

@luey-punch

luey-punch commented Nov 6, 2024

> That filepath needs to point to a location on your device. […]

I am relatively new to React Native and mobile app development. How does one bundle the model as an asset?
According to the Google docs:

> Note: During development, you can use adb to push the model to your test device for a simpler workflow. For deployment, host the model on a server and download it at runtime. The model is too large to be bundled in an APK.

I agree that bundling it as an asset is best, but I do not know how to do it. Can you show me how?

@shreykul

shreykul commented Nov 7, 2024

@luey-punch I was able to do that by creating a folder named 'assets' inside android/app/src/main and moving the model into it. Although it worked, my Android device was lagging a lot.

    const {generateResponse} = useLlmInference({
      storageType: 'asset',
      modelName: 'gemma.bin',
    });
