Face Expression #223
What would be the approach to implementing such a model in this project: one that captures my movements and demeanor, imitates my speaking style and actions, and can then chat with me intelligently?
Hi @xuullin , you have to take snapshots of the face expressions before using the VRC FaceExpression Proxy at runtime.

```csharp
var faces = new List<FaceExpression>() { new FaceExpression("Angry", 3.0f) };
modelController.SetFace(faces);
```

Or, set the face expression on the response from your skill:

```csharp
response.AddFace("Angry", 3.0f);
```

See also the example for ChatGPT.
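For a quick test outside of a skill, something like the following MonoBehaviour can trigger an expression directly. This is a minimal sketch, not code from the samples: the `ChatdollKit.Model` namespace, the `FaceExpressionTester` class name, and the key binding are assumptions for illustration; only `FaceExpression` and `SetFace` are from the answer above.

```csharp
using System.Collections.Generic;
using UnityEngine;
using ChatdollKit.Model;  // namespace assumed; adjust to wherever ModelController/FaceExpression live in your version

public class FaceExpressionTester : MonoBehaviour
{
    // Drag the character's ModelController here in the inspector
    [SerializeField] private ModelController modelController;

    private void Update()
    {
        // Hypothetical trigger: press F to play the "Angry" snapshot for 3 seconds
        if (Input.GetKeyDown(KeyCode.F))
        {
            var faces = new List<FaceExpression>() { new FaceExpression("Angry", 3.0f) };
            modelController.SetFace(faces);
        }
    }
}
```

Remember that the expression name ("Angry" here) must match a snapshot you have already taken with the VRC FaceExpression Proxy.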
Set up uLipSync or OVRLipSync correctly. You don't need to modify the code.
Thank you.
Thank you very much for your reply. Could you talk about the connections and differences between this project and intelligent digital human generation technology? What changes would need to be made to this project to implement intelligent digital human generation?
Hi, I have selected Setup VRC FaceExpression Proxy, but after running Unity the character model does not make any facial expressions. What could be the cause? I'm using phane from Booth for the character model. If I want the model's mouth shape to match different audio, how should I modify the code? What are the ideas to achieve this?