Spine Pro Vtuber Working Prototype

Another issue could be that some cameras mirror the image, so left and right are flipped. Are you familiar with Open Broadcaster Software (OBS)? It has functionality to create a virtual camera that you can flip horizontally. If you have another web camera that doesn't mirror, you can test it out. I noticed this issue as well when I switched over to my laptop camera, which mirrors the image. Currently the prototype only caters to non-mirrored images, but I could add an option for that in the future.
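For reference, if I ever add a mirror option, one browser-side way to un-mirror a feed is to draw the video element onto a horizontally flipped canvas each frame. This is only a sketch of the general technique (the function name is hypothetical), not how the prototype currently handles the camera:

    // Hypothetical sketch: un-mirror a camera frame by drawing the <video>
    // element onto a horizontally flipped 2D canvas.
    function drawUnmirrored(video, canvas) {
      const ctx = canvas.getContext('2d');
      ctx.save();
      ctx.translate(canvas.width, 0);
      ctx.scale(-1, 1); // flip left/right
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      ctx.restore();
    }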

Your feedback has been helpful!


I have OBS installed, but even when I started the virtual camera, I could not use it because the option does not appear in Google Chrome's camera settings. To be precise, in Chrome's general settings, I can set it like the following:

However, on the actual web page, it was automatically changed to FaceTime HD Camera and could not be changed there.

Incidentally, Google Meet allowed me to choose the virtual camera.
I have looked for other ways to flip the camera, but unfortunately have not found any.
I have attached the project file above so you can check whether the problem occurs in your environment.

I think my eye pupil jitter filter is too strong for your model, Misaki. I will lower the strength and release the change as soon as I get the chance. On my test model, I thought the filter was the right strength. It's great to see another model and how rigs are affected differently. :think:


I released a small patch, version 1.0.2.

  • Changed the canvas resize functionality.
  • Decreased the strength of the jitter filter for the eye pupils.

Hello Misaki. I hope this weaker filter works better for your model.

Thank you for your quick fix; it looks much better! 😃 The movement has become smooth, and weird movements, such as only one eye moving, happen less than before.

In the video above, I tried to roll my eyes 360 degrees, but unfortunately the up and down movement did not seem to respond very well. The vertical range is shorter than the horizontal range, so it seems to be hard to catch the movement. I'm not sure it would be optimal to have the filter tuned to my model, so I think it would be nice if the settings could be adjusted by the user.

By the way, I will modify my model later as the rigging of the eyes and mouth is still not very good. I'll share it again when the improvements are done.

So the jitter filter is to stop the model from twitching so much. Much like the .gif below, where your eyes are not moving but the model is twitching excessively. :bigeye:

I had the filter strong enough to cause the eye pupils to jump. I thought jumpy eye pupils were a favorable effect. :upsidedown:
I lowered it to the same level as the other filters, such as the face and mouth, so there should not be too much bias in the change.
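For those curious, a deadzone plus smoothing is one common way to build such a filter. This is a minimal sketch with hypothetical names and thresholds, not necessarily the exact filter in the prototype:

    // Hypothetical jitter filter sketch: ignore movements smaller than a
    // deadzone, then ease the remaining movement with exponential smoothing.
    function makeJitterFilter(deadzone = 0.02, smoothing = 0.5) {
      let last = 0;
      return function filter(value) {
        if (Math.abs(value - last) < deadzone) return last; // suppress twitching
        last += (value - last) * smoothing;                 // ease toward input
        return last;
      };
    }

A larger deadzone holds the pupils still longer and then releases them all at once, which produces the jumpy effect described above.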

Letting the user adjust the face tracking settings is planned for future updates. When you allow those adjustments, users would want a way to save the settings so they do not have to adjust every setting each time.

So the jitter filter is to stop the model from twitching so much. Much like the .gif below, where your eyes are not moving but the model is twitching excessively.

Ah, I see.

When you allow those adjustments, users would want a way to save the settings so they do not have to adjust every setting each time.

Exactly. I think it would be great if you could download the settings as a file, such as JSON, and load it with the skeleton data when you want to apply the settings again.
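For example, a browser-side export could be as simple as the following sketch (the function names and settings shape are hypothetical):

    // Hypothetical sketch: serialize the settings object and offer it as a
    // JSON download, then parse it back when the user drops the file again.
    function downloadSettings(settings, filename = 'vtuber-settings.json') {
      const blob = new Blob([JSON.stringify(settings, null, 2)], { type: 'application/json' });
      const link = document.createElement('a');
      link.href = URL.createObjectURL(blob);
      link.download = filename;
      link.click();
      URL.revokeObjectURL(link.href);
    }

    async function loadSettings(file) {
      return JSON.parse(await file.text());
    }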

There are so many features to add, so I am going to implement them one at a time.

Misaki, have you played with the drop-down box below the Reset Render Camera button in the Model Settings? I might not have made it clear that the drop-down is where you change certain properties of the model(s).

There are so many features to add, so I am going to implement them one at a time.

Sure, take your time 🙂

Misaki, have you played with the drop-down box below the Reset Render Camera button in the Model Settings? I might not have made it clear that the drop-down is where you change certain properties of the model(s).

I have tried some of them! The position and scale of the model can be changed on the canvas with the mouse (this is really comfortable!), so I have not had to change them using the settings box.

The scale and position really come into play when you add more than one Spine model. In my recent unreleased testing, I was able to have three faces tracked on the same camera. There is a way to assign each tracked face to a Spine model. All the models start at position (0, 0), and you would not want overlapping models when you are using multi-face tracking. The scaling function helps when the models are different sizes. While over the canvas, your mouse only changes the viewport; the model's scale is still 1 for both x and y.
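To illustrate the difference, the per-model properties could lay out skeletons something like this sketch; it assumes the spine-ts skeleton fields x, y, scaleX, and scaleY, and the helper name is hypothetical:

    // Hypothetical sketch: spread loaded skeletons horizontally so they do
    // not overlap at (0, 0) during multi-face tracking. Panning or zooming
    // the viewport, by contrast, leaves these values untouched.
    function layoutModels(skeletons, spacing = 400) {
      skeletons.forEach((skeleton, i) => {
        skeleton.x = i * spacing;
        skeleton.y = 0;
        skeleton.scaleX = 1;
        skeleton.scaleY = 1;
      });
    }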

I wonder what you think about the skin and animation properties, if you have tried them.

While over the canvas, your mouse only changes the viewport; the model's scale is still 1 for both x and y.

Ah, I see. I'm looking forward to seeing how fun it looks when multiple models are placed.

Regarding the animation property, what is the middle box for? I can see that changing the animation in the pull-down changes the animation, but is it played on track 0?

Regarding the skin property, I can confirm that it works correctly, but is it currently only possible to apply one skin?

The middle box is for inputting values and is not used by the animation property; that box is for the other properties. Certain properties use either the value input or the drop-down list. Since properties like skin and animation have a finite number of choices, a drop-down list is more appropriate. If the value input box is too confusing, I could hide it for properties that do not need it.

I forgot to update the track layer for the animation property, so it is not working on the intended track. It is in the middle of the track stack, so half the tracks override it. 😃

So far you can only apply a single skin. I only worked out populating the drop-down list with all the skins. I never intended multiple skins in early development, but it seems like something I could expand upon.

Thank you for elaborating on those! 🙂

If the value input box is too confusing, I could hide it for properties that do not need it.

Yes, I think the value input box should be hidden as it would mislead people into thinking it is something that is meant to set up the animation.

I forgot to update the track layer for the animation property, so it is not working on the intended track. It is in the middle of the track stack, so half the tracks override it. 😃

Ah, that makes sense. I had a feeling that something was wrong. It would be useful if it could be used to switch from the default breathing animation to another, or to switch facial expressions.

So far you can only apply a single skin. I only worked out populating the drop-down list with all the skins. I never intended multiple skins in early development, but it seems like something I could expand upon.

Yes, it would be useful to be able to use more than one skin, such as one for costumes and one for expressions or postures, since skins can be used to change facial expressions and postures using constraints.

Expanding to more animation tracks and skins adds another dimension of complexity. I will need to figure out how to expand those features.

It would be useful if it could be used to switch from the default breathing animation to another, or to switch facial expressions.

Yes, it would be useful to be able to use more than one skin, such as one for costumes and one for expressions or postures, since skins can be used to change facial expressions and postures using constraints.

It will probably be a good idea to give the animation and skin properties their own sections in the future. They are getting too complicated to be used as a single input setting. If this keeps up, I will end up with a Spine visual programming web application. :scared:

I was hoping to soft-cap the number of animation tracks and skin layers. If I follow your suggestion, everything would need to be uncapped (or at least until your web browser crashes :p). So far I am using twenty-something animation tracks just for face tracking.

On a side note, this topic got so many views within a day. I checked my itch.io stats, and there haven't been that many downloads of the test model. Either there are many lurkers among us, or it is just you, Misaki. :o

I give you my suggestions on many things, but of course this is your tool, so make it what you want it to be 🙂

I was hoping to soft-cap the number of animation tracks and skin layers. If I follow your suggestion, everything would need to be uncapped (or at least until your web browser crashes :p). So far I am using twenty-something animation tracks just for face tracking.

I'm not familiar with the cost of having a lot of tracks, so it would be better to have Nate or Mario comment on this.

On a side note, this topic got so many views within a day. I checked my itch.io stats, and there haven't been that many downloads of the test model. Either there are many lurkers among us, or it is just you, Misaki. :o

I'm sure I'm not the only one checking this topic! A topic moves up the list when there are responses in this forum, and a lot of people see the posts at the top. That's why there are a lot of views.

I give you my suggestions on many things, but of course this is your tool, so make it what you want it to be 🙂

Your suggestions are really important to me because they are the feedback I have gotten. I am going to implement them slowly because I have no idea what I am doing :lol:. It is uncharted territory.

I'm not familiar with the cost of having a lot of tracks, so it would be better to have Nate or Mario comment on this.

Soft-capping is really just less work for me :grinteeth:. For example, with only one animation or skin, I do not have to implement a more complex system. I will get to it eventually.

I'm sure I'm not the only one checking this topic! A topic moves up the list when there are responses in this forum, and a lot of people see the posts at the top. That's why there are a lot of views.

I wish the feedback was that popular.

For all those lurking who have Discord, I am Aestos on the unofficial Spine Discord server. :think:

6 days later

It has been a while, but I have made various minor modifications to my skeleton. The latest version of the recording is here:

As for the old videos, I had set them to limited (unlisted) access, but I have made the latest video public. For people who are new to this tool, I have recorded the process starting from uploading the skeleton data, so it can serve as a simplified tutorial.

My skeleton still has some issues as the clipping attachments for the eyes sometimes go wrong, but I have attached the latest file here for your reference:
face-for-Spine-Vtuber-Prototype.zip
(Also, I deleted old files in this thread.)

I will come back to this thread when I have time. Cheers! :beer:

https://silverstraw.itch.io/spine-vtuber-prototype
https://silverstraw.itch.io/spine-vtube-test-model

1.0.3

  • Rearranged the model property drop-down list.
  • Added a "maximum number faces" option to the model property drop-down list. This property allows more than one face to be tracked using a single web camera.
  • Added a "Single Value Properties" label to the left of the model property drop-down list. It should make the purpose of the drop-down list clearer if it was not clear before.
  • Updated the track layer for the model property "animation" option.
  • Hid the value input box for the animation and skin properties.
  • Added settings for "face pitch strength", "face yaw strength", "face roll strength", "mouth height strength", "mouth width strength", "left brow strength", "right brow strength", "left eye strength", "right eye strength", "left pupil pitch strength", "left pupil yaw strength", "right pupil pitch strength", and "right pupil yaw strength" next to "Single Value Properties".

Misaki wrote

My skeleton still has some issues as the clipping attachments for the eyes sometimes go wrong, but I have attached the latest file here for your reference:

Hello. What is wrong with the clipping attachments for the eyes?

That sounds like a great update! However, I could not find where the strength settings were. I don't see the drop-down menu in your video; is it visible on your end?

Hello. What is wrong with the clipping attachments for the eyes?

When the eyes were closed, the clipping masks sometimes crossed over, and this caused the eyes that should have been hidden to be visible. I fixed the problem today and replaced the attached file in my previous reply.

Misaki wrote

That sounds like a great update! However, I could not find where the strength settings were. I don't see the drop-down menu in your video; is it visible on your end?

I meant that I added settings for the user to change "face pitch", "face yaw", "face roll", "mouth height", "mouth width", "left brow", "right brow", "left eye", "right eye", "left pupil pitch", "left pupil yaw", "right pupil pitch", and "right pupil yaw" next to the property label. I apologize that Open Broadcaster Software did not capture any pop-up menus when I recorded the video. The drop-down does appear on my end, but I did not capture the full screen in the recording.

Hmm, somehow I can't find that setting on my end....

Also, I can't find "maximum number faces", so the update seems not to be reflected properly. Is there anything I need to do to use the updated version?

Also, I can't find "maximum number faces", so the update seems not to be reflected properly. Is there anything I need to do to use the updated version?

I apologize; the mistake was made on my end :tear:. I forgot to rename the new web file to "index". In effect, itch.io read the old web file and ignored the new one. I have corrected the name, and it should now reflect the updated interface. Good thing you mentioned it :handshake:.

Sorry for the late response, and thank you for getting the updated interface live! I tried the multiple face tracking, and the following video is the result (sorry the video looks a bit confusing because the positions of the models and the faces being tracked are reversed):

It worked at first, but it stopped after about 20 seconds :think:
The tracking itself seemed accurate while it was working, so I was very impressed when I saw it for the first time! I don't know why it stops, but it would be a very interesting feature if it could be made to work consistently.

I haven't tried the other settings yet, so I'll try them later!

I have not managed to break the multiple face tracking and replicate the bug in your video. This is something that requires further investigation. I would like to know if you can replicate this bug repeatedly. Either hardware or software could be causing it. For one, from what I have tried on Windows, the camera only allows one application to read the web camera stream at a time. Since the web application does not write any log files onto your computer or communicate with a server, it is harder to pinpoint the cause of the bug.

I would like to know if you can replicate this bug repeatedly.

Yes, I was able to replicate this bug again. I think it's possible that I have the wrong settings, so I recorded the process from when I accessed the tool:

One thing I noticed is that the model that was initially tracking my movement follows the second face's movement once the second face's tracking starts. Is something wrong with my settings?

Can you open up the web inspector (should be the F12 shortcut)? See if there are any errors in the console section.

Misaki wrote

One thing I noticed is that the model that was initially tracking my movement follows the second face's movement once the second face's tracking starts. Is something wrong with my settings?

It is not your settings. The face tracking system probably scans the camera image from top to bottom and left to right. The second face that appeared is identified as face 1 because it is above and to the left of the first face. I am going to add a numeral indicator near the face meshes later to help identify the face index.
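In other words, the index assignment likely behaves something like this sketch; the function and bounding-box fields are hypothetical names, not the prototype's actual code:

    // Hypothetical sketch: sort detected face boxes top-to-bottom, then
    // left-to-right, so the face nearest the top-left corner becomes face 1
    // regardless of which face appeared first.
    function assignFaceIndices(faces) {
      return faces
        .slice()
        .sort((a, b) => (a.top - b.top) || (a.left - b.left))
        .map((face, i) => ({ ...face, index: i + 1 }));
    }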

The face tracking system probably scans the camera image from top to bottom and left to right. The second face that appeared is identified as face 1 because it is above and to the left of the first face. I am going to add a numeral indicator near the face meshes later to help identify the face index.

Ah, I see. I am relieved to hear that my settings were not wrong.

I tried again and found an error in the console tab.

spine_vtuber.js:1 Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'x')
    at V (spine_vtuber.js:1:13566)
    at l (spine_vtuber.js:1:13819)
    at face_mesh.js:73:321

Here is a video recorded with the console tab open.

I think I have located the problem from the error message and fixed it. Try again and see if you still get the error. For some reason your web browser was generating incomplete data. I made the program check whether the data is missing beforehand so it does not stop working :wounded:.
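The guard amounts to something along these lines; a minimal sketch assuming a MediaPipe-style landmark array, with hypothetical names:

    // Hypothetical sketch: skip a frame whose landmark data is incomplete
    // instead of letting an undefined read throw and halt face tracking.
    function applyLandmarks(landmarks, skeleton) {
      if (!landmarks || landmarks.length === 0) return;            // no face this frame
      if (landmarks.some((p) => p == null || p.x == null)) return; // partial data
      // ...safe to map landmark coordinates onto the Spine skeleton here...
    }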

Thank you for your quick response! Unfortunately, I still got the error again. I didn't open the error log details last time, so I showed them at the end of this video:

Let me know if I need to try anything else!

The good news is that the previous fix worked, but the same problem occurred in another section. I applied the same check to that section as well. I hope I squashed all those bugs :wounded:.

I am surprised that those errors have not appeared on my computer with multiple face tracking :think:.

Now it's stable and working! Awesome 😃

However, as you can see in the video, the model placed on the right side sometimes momentarily takes on a strange look where each facial part seems to pop outward.

I checked the recorded video frame by frame, and it seems that the face sometimes drops out of tracking; when it comes back, the face parts appear pulled as shown above.
I don't know why the tracking drops, especially since the face is not moving significantly, but the face closer to the camera appears stable, so tracking may be less stable for faces farther from the camera.

Now it's stable and working! Awesome 😃

Great to hear.

However, as you can see in the video, the model placed on the right side sometimes momentarily takes on a strange look where each facial part seems to pop outward.

I had the animation track scaling factor set too high for when the AI stops recognizing a face. The initial value of the vector has been adjusted by a millionfold. Hopefully that will not produce an image as jarring as in your video. I also added another safeguard that stops moving the Spine model in that situation.

I don't know why the tracking drops, especially since the face is not moving significantly, but the face closer to the camera appears stable, so tracking may be less stable for faces farther from the camera.

I lowered the face recognition threshold from 80% to 50%. When the AI is at least 50% confident, it will start producing facial coordinates for the Spine model. Hopefully that will help with finding faces farther from the camera. I am not sure whether I want to expose the threshold as a user setting in the future.
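For context, the tracker appears to be MediaPipe's Face Mesh (face_mesh.js shows up in the earlier stack trace), where the face count and confidence threshold map onto options like these; how spine_vtuber.js actually wires this up is an assumption:

    // Hypothetical sketch of the tracker configuration with the MediaPipe
    // Face Mesh JavaScript API.
    const faceMesh = new FaceMesh({
      locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`,
    });
    faceMesh.setOptions({
      maxNumFaces: 2,              // the "maximum number faces" model property
      minDetectionConfidence: 0.5, // lowered from 0.8 to catch distant faces
      minTrackingConfidence: 0.5,
    });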

The Spine Vtuber Prototype 1.0.4 works fine! This time, I did not see any of the errors I saw yesterday. I also feel that faces are detected well at a distance.

I also tried changing the strength settings. Some of them could be changed to get better effects (e.g., "face pitch" and "face yaw"), but it was difficult to get good results for the eyes because they sometimes did not work well depending on the angle of the face. For example, turning the face down can cause the eyes to open wide.
As for the mouth, my skeleton setup is not good (it opens upward, which is not appropriate), and I would like to fix it when I have time.

Anyway, it is wonderful that this tool has improved so much in such a short period of time :yes:

I think the eye issue could be resolved by editing the animation in Spine.

Anyway, it is wonderful that this tool has improved so much in such a short period of time :yes:

Good thing they were simple fixes :wounded:.

This is really cool! I haven't tried anything VTuber related until now so this was a fun experience for me. :grinteeth:

Dev stuff isn't my area of expertise so I'm not exactly sure what you can do, but here are all the thoughts I had while I was using it:

  • It would be cool to see which files have been added to the drag-and-drop zone when they're dropped in, so you know they've been uploaded successfully without pressing the button below to check.
  • I'm not sure what happened, but when I imported my latest rig, the import wouldn't load all of my assets. It worked when I used an older file, though, so it may be an issue with my file and not the prototype (the same thing happened recently when I used that file with Rhubarb). But I thought I'd at least mention it (see the bad import jpg).
  • Being able to adjust the strength of individual parameters was really helpful here. Before I recorded, my eyelids wouldn't open to their full open position so I adjusted that a bit.
  • I made a rough test of some eye blinks, which I show in the video. I experimented with turning alpha to 0 on the lower lids so they would disappear when the eyes opened. It would be cool if there was a way to make the lids disappear just as they open all the way.
  • It would be even cooler if there was a way to use an entire animation as the motion rather than a single key (unless I set mine up incorrectly). For example, having one animation control the entire left eye blink. The first frame would be eye completely closed, and the last frame would be the eye completely open. Then the frames in between could be refined to allow custom deformation rather than a linear straight shot from one position to the next. This could also help in selecting the best time to adjust draw order too. I'm thinking of the Moho rigging process as inspiration here.

Overall really cool to use and I'm excited to see how you take this further!

- It would be cool to see which files have been added to the drag-and-drop zone when they're dropped in, so you know they've been uploaded successfully without pressing the button below to check.

Yes, I can do something about that.

- I'm not sure what happened, but when I imported my latest rig, the import wouldn't load all of my assets. It worked when I used an older file, though, so it may be an issue with my file and not the prototype (the same thing happened recently when I used that file with Rhubarb). But I thought I'd at least mention it (see the bad import jpg).

I do not know either about the bad import. It would be hard to figure out without the files or an error log.

- Being able to adjust the strength of individual parameters was really helpful here. Before I recorded, my eyelids wouldn't open to their full open position so I adjusted that a bit

Yeah, not everyone is comfortable or able to stretch their facial features to the extreme. Even if they could, their face would fatigue eventually.

- I made a rough test of some eye blinks, which I show in the video. I experimented with turning alpha to 0 on the lower lids so they would disappear when the eyes opened. It would be cool if there was a way to make the lids disappear just as they open all the way.

  • It would be even cooler if there was a way to use an entire animation as the motion rather than a single key (unless I set mine up incorrectly). For example, having one animation control the entire left eye blink. The first frame would be eye completely closed, and the last frame would be the eye completely open. Then the frames in between could be refined to allow custom deformation rather than a linear straight shot from one position to the next. This could also help in selecting the best time to adjust draw order too. I'm thinking of the Moho rigging process as inspiration here.

The problem is that the live timeline (from face tracking) would conflict with the animation timeline using the animation mix alpha setup I have now. I think it is still possible, but it would require the calculations to be applied to the animation track time itself while keeping the animation mix alpha constant at one. This requires me to create a separate copy of the application to experiment with. Wait for the next update. :nerd:
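Roughly, the change would amount to something like this; a sketch using the spine-ts TrackEntry fields, with the face-tracking wiring assumed:

    // Hypothetical sketch contrasting the two approaches on a spine-ts TrackEntry.

    // Now: the tracked value (0..1) drives the track's mix alpha, so only the
    // single keyframed end pose can be blended in, and only linearly.
    function applyViaAlpha(entry, trackedValue) {
      entry.alpha = trackedValue;
    }

    // Planned: alpha stays constant at 1 and the tracked value scrubs the
    // track time, so every keyframe within the first second gets sampled.
    function applyViaTrackTime(entry, trackedValue) {
      entry.alpha = 1;
      entry.timeScale = 0;            // assumed: pause automatic playback
      entry.trackTime = trackedValue; // seconds along the one-second timeline
    }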


I tested with only one eye, but your request to use more than one keyframe in Spine Vtuber Prototype is possible! I keyframed the halfway mark on the timeline with a green eye and the end with a red eye. I set the FPS to 100 so that each frame is 1% of the movement range. You can set the FPS to any value, but you have to keyframe within one second. I still need to make the changes for the rest of the animation tracks and update it on itch.io :cooldoge:.


Spine Vtuber Prototype 1.0.5 update

  • Moved the names of uploaded files from the bottom of the page into the Drag and Drop Zone area. This allows easier viewing of the loaded files.
  • Changed how calculations are applied. This allows animators to use multiple keyframes on the timeline for each animation track. Animating within one second is recommended. Previously, each track only allowed one keyframe on the timeline.

https://silverstraw.itch.io/spine-vtuber-prototype

I have also updated the Spine Vtube Test Model to reflect the change in 1.0.5.

https://silverstraw.itch.io/spine-vtube-test-model

The new animation system looks really interesting! However, I tried v1.0.5 with my skeleton, which has not been changed since the previous test, and it did not work. It does not work when I turn the camera on, and it has been in a weird state from the start. Here is the screenshot:

spine_vtuber.js:1 Uncaught TypeError: Cannot read properties of undefined (reading 'length')
    at l (spine_vtuber.js:1:12224)
    at t.ondrop (spine_vtuber.js:1:12560)

Does the error log above seem to indicate the cause?

I was able to recreate your error and patched it. I forgot that drag and drop uses a slightly different code path than clicking to upload files.

The weird state of your model is normal and is separate from the error. Before, you were only allowed to keyframe the end state of the animation movement. Now that you can key more than one frame, you need to move the previous keyframe to where 1 second is. For example, if your Spine file is set to 60 FPS, 1 second is the 60th frame; if it is set to 30 FPS, then the 30th frame is the 1-second mark. Then you need to add a keyframe at frame 0. It will take some time to make the changes to the Spine file, but you are free to keyframe whatever you want within that 1-second timeline. Note: I set my FPS to 100 so that each frame is 1% of the movement range.

Thank you for your quick response! I understand the new specification of the animations. I fixed my skeleton and now it is working very well! 😃

It seems to be smoother than before; is this also thanks to the update? Also, when testing multiple face tracking before, there was a problem where opening the mouth as wide as possible made the mouth open wider than specified in the "mouth height" animation, but this has been fixed and the model no longer moves beyond the created animation. This is really awesome.

Excellent. You got your model working now.

I also noticed from the video that your model's animation got smoother. I think the update definitely played a role. As to why, I cannot give a concrete answer. I speculate it could be better in-betweening when there is more than one keyframe. It could also be that changing the track time performs better than changing the animation track alpha. Nate would probably know more about it.

The " the mouth would open wider than specified in the 'mouth height' animation" is a number capping issue. Out of the box, Spine runtime caps off the track time with track ending time while the animation track alpha is uncapped.

Thank you for explaining! It seems like a lot of things work better by using the track time instead of the track alpha to change animations, right? That's an interesting finding.

By the way, this change has also made creating the model much easier. It used to be difficult to make sure the eye's mesh stayed clean in the middle of the eye-open animation, but now it is easy to check and modify.
I think the animation of turning the face sideways could be improved, and I would like to modify it eventually.