SVG Integration Outline
Hi,
I think it is time to talk about the SVG Integration Outline for Unity and Spine.
In case you missed the SVG announcement: Unity at GDC - Unity 2018 Roadmap - YouTube
Information on how to import the experimental package into Unity can be found here: https://github.com/Unity-Technologies/vector-graphics-samples
Is the Spine 3.6 runtime able to load skeletons without the backing atlas?
To lay out the plan as I see it: I don't expect the Spine Editor will work with SVG files any time soon, so I guess we will always export projects from the Spine Editor to Unity with PNG atlases.
Any idea/vision for how we can integrate Unity's current SVG solution into the Spine->Unity->App workflow?
To my limited understanding, we need to somehow swap the Spine Editor generated atlas for SVG-generated textures at runtime, and also prevent the Spine Editor textures from being included in the build, including just the SVG files instead. It is not completely clear in my mind yet, but let's discuss it to narrow it down to a workable workflow.
Thanks, Marek.
Spine exports skeleton data separately from the atlas. The Create atlas checkbox on the data export dialog is solely for convenience. While you need PNGs for the Spine editor, at runtime you can use something else for rendering (or even render nothing, eg if you just want to use the animated skeleton data).
The workflow would be:
1) Rasterize your SVGs to PNGs.
2) Animate in the Spine editor using PNGs.
3) Export JSON or binary skeleton data.
The next steps are all at runtime:
4) Rasterize your SVGs to an atlas texture and create a matching Atlas.
5) Load the skeleton data and use the atlas for rendering.
spine-unity expects a RegionAttachment or MeshAttachment rendererObject to be an AtlasRegion. It then expects the AtlasRegion's page to be an AtlasPage whose rendererObject is a Unity Material.
So, for step 4 ideally it would work like this:
4a) Determine the bounds for each SVG and pack the rectangles.
4b) Create a texture that fits all those rectangles and render each SVG in the correct place on the texture.
4c) Create an Atlas which consists of a number of AtlasPage and AtlasRegion objects. The page is configured to match the texture (Unity material). The regions are configured to match the packed rectangles.
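Step 4a could be sketched with a naive single-page "shelf" packer like the one below. This is a minimal illustration, not a recommended packer; the `PackedRect` type and the tuple-based input are made up for this sketch.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal "shelf" rectangle packer for step 4a (illustrative only).
// Each input item is the pixel-space bounds of one rasterized SVG.
public struct PackedRect { public string name; public int x, y, w, h; }

public static class SvgPackingSketch {
    public static List<PackedRect> ShelfPack (List<(string name, int w, int h)> items, int atlasWidth) {
        var result = new List<PackedRect>();
        int x = 0, y = 0, shelfHeight = 0;
        foreach (var (name, w, h) in items) {
            if (x + w > atlasWidth) { // current shelf is full, start a new one below
                x = 0;
                y += shelfHeight;
                shelfHeight = 0;
            }
            result.Add(new PackedRect { name = name, x = x, y = y, w = w, h = h });
            x += w;
            shelfHeight = Mathf.Max(shelfHeight, h);
        }
        return result;
    }
}
```

A real packer (like the ones discussed later in this thread) sorts by size and wastes far less space, but the output is the same kind of data: one rectangle per SVG, which steps 4b and 4c consume.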
Packing could generate multiple atlas page textures, but it's easier for now to keep it simple and assume everything fits on one page. When it does become time to support multiple atlas pages, it's important to somehow group the regions on each page to minimize texture changes. The worst case we want to avoid is if each region that is drawn comes from a different page.
For step 4b, it appears the Unity API to use is VectorUtils.RenderSpriteToTexture2D. Ideally the SVG can be rendered directly to the atlas page texture in its final resting place. If the Unity API doesn't allow this, we may have to render it elsewhere, then copy it into the atlas page.
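Under that assumption, step 4b might look roughly like this. The `RenderSpriteToTexture2D` call is from the experimental com.unity.vectorgraphics package and its signature may change; `packedRects`, `svgSprites`, `rasterMaterial`, `atlasWidth` and `atlasHeight` are placeholders for your own data.

```csharp
using Unity.VectorGraphics;
using UnityEngine;

// Sketch: rasterize each SVG sprite at its packed size, then copy the
// pixels into the shared atlas page texture. If the API ever allows
// rendering directly into a sub-rect of the atlas, the copy goes away.
Texture2D atlasTexture = new Texture2D(atlasWidth, atlasHeight, TextureFormat.RGBA32, false);
foreach (var rect in packedRects) {                        // rectangles from step 4a
    Sprite svgSprite = svgSprites[rect.name];              // tessellated from the SVG scene
    Texture2D rendered = VectorUtils.RenderSpriteToTexture2D(
        svgSprite, rect.w, rect.h, rasterMaterial, 4);     // 4 = antialiasing samples
    atlasTexture.SetPixels(rect.x, rect.y, rect.w, rect.h, rendered.GetPixels());
}
atlasTexture.Apply(); // upload the assembled atlas to the GPU
```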
First I think we need to make clear the distinction between the application runtime and the Spine runtime here. To prevent misconceptions, let's always say "app runtime" or "Spine runtime" respectively for a given context.
I have to say that we are facing two texture generation stages here: the Unity Editor (Edit Mode) and Unity runtime (Play Mode, or the app on a particular device).
In the Unity Editor, when the developer is in Edit Mode, he needs to at least see the basic skeleton setup. But generally we don't want the textures used in Edit Mode to be included in the build, since the textures will be generated and packed at app runtime. In other words, we need just a temporary texture atlas that is used while the developer is in Edit Mode; when he hits the Play button, the runtime atlases generated from SVG files take its place. So I need your help here: how can we have a temporary texture atlas, generated from SVG files, assigned to the SkeletonData only in Edit Mode? Alternatively, we could generate an atlas from the Spine Editor and use it in Unity's Edit Mode, as long as we make sure not to include that atlas in the build. That feels easier than packing temporary atlases from SVG files just for Unity's Edit Mode.
At Unity runtime (in the app itself), we are already able to build atlases at app runtime (on the user's device) from given SVG files. Multipage atlases are also solved: we found this packer https://github.com/DaVikingCode/UnityRuntimeSpriteSheetsGenerator and wrote our own solution on top of it. So we have all the information about where each SVG "sprite" lands on the final atlas, which we save to the persistent data path for caching purposes. The last missing part is to somehow connect our runtime-generated JSON atlas description file with what Spine is using. That shouldn't be hard, and with your help we can have it ready in one or two days.
Does it sound reasonable?
foriero wrote: First I think we need to make clear the distinction between application runtime and spine runtime here.
I don't think we need that distinction. Runtime means when the app is running. The opposite is "compile time", which is anything that happens before the app is running. Some of the things that happen in Unity are compile time, eg generating a SkeletonDataAsset. Most or all things that happen in Unity's edit mode are runtime, eg loading atlas PNGs.
The juggling that happens for Unity's edit mode is usually the same as at runtime (play mode). For example, when using a normal PNG texture atlas, Unity's edit mode needs to load the images into textures so it can render the skeleton in edit mode. Doing that happens at runtime and uses the same code as play mode. When you start Unity's play mode, it doesn't use what it did for edit mode, it loads everything again for the real app run.
My steps 4 and 5 are runtime steps. They happen for edit mode to be able to show something. They happen again for play mode when the application is actually run. Likely it's the same code in both cases and nothing special needs to be done. In both cases the texture atlas is generated in-memory, it is not written to disk, and so it is not possible to somehow include the generated atlas as a file in the application.
Also multipage atlases were solved. We found this packer https://github.com/DaVikingCode/UnityRuntimeSpriteSheetsGenerator and on top of it wrote our own solution.
That packer code may be fine, I didn't look at it closely. The license is friendly. Still, I would probably be more inclined to use libgdx code (modified and ported to C#), eg PixmapPacker.
So we have all the information about where each svg "sprite" is generated on that final atlas that we save into "persistent path" for cache purposes.
The generated atlas texture could be stored to disk for caching, but I would not worry about it at this stage. Also, caching should be optional. One of the use cases for using SVGs is to have thousands of attachments. In this case you would generate an atlas only for the attachments necessary. Eg, you might consider the attachments only for characters in a room, the gear they have equipped, the animations they can perform, any animations needed for the room, etc. Then you would generate an atlas with just those attachments. In this case you would not want caching, because it is almost certain that different characters will be in the room next time, be equipped differently, etc.
the last missing part is actually to somehow connect our current at runtime generated json atlas description file with what spine is using.
You do not need an atlas file. In step 4 you generate one or more textures which contain the regions for your SVGs. You also create an Atlas instance and populate it with AtlasRegion and AtlasPage objects. All of this is in-memory. In step 5 you give the atlas to the Spine runtime, just as if it were an atlas you exported from Spine and loaded from disk. Everything after that is the same Spine runtime usage as when using an atlas from PNGs.
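Putting steps 4c and 5 together, the in-memory atlas might be assembled like this. This is a sketch: the field names follow spine-csharp's Atlas classes, the list-based Atlas constructor may not exist in every runtime version, and `atlasTexture`, `atlasMaterial` and `packedRects` come from the earlier steps.

```csharp
using System.Collections.Generic;
using Spine;
using UnityEngine;

// Sketch: build a Spine Atlas entirely in memory, no .atlas file on disk.
// atlasTexture/atlasMaterial come from step 4b; packedRects from step 4a.
var page = new AtlasPage {
    name = "svg-page",
    width = atlasTexture.width,
    height = atlasTexture.height,
    rendererObject = atlasMaterial // spine-unity expects a Unity Material here
};

var regions = new List<AtlasRegion>();
foreach (var r in packedRects) {
    regions.Add(new AtlasRegion {
        name = r.name, page = page,
        x = r.x, y = r.y, width = r.w, height = r.h,
        u = r.x / (float)page.width,            v = r.y / (float)page.height,
        u2 = (r.x + r.w) / (float)page.width,   v2 = (r.y + r.h) / (float)page.height,
        originalWidth = r.w, originalHeight = r.h
    });
}

var atlas = new Atlas(new List<AtlasPage> { page }, regions);
// Step 5: hand this atlas to an AtlasAttachmentLoader when loading skeleton data,
// exactly as if it had been loaded from an exported .atlas file.
```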
1. Unity's SVG implementation is mesh-based (i.e., they dynamically flatten SVG paths into meshes). Incorporating that into a Spine skeleton involves a bit of ambiguity. Plain region attachments could just be transformed into MeshAttachments, but Unity might be storing extra data, like per-vertex colors, that a Spine Attachment doesn't. And since the "flattening" exists purely on the Unity side, MeshAttachments would lose information necessary for, for example, deformation animations and weights, since there is no guarantee that their vertices would match up. This is also a lot of speculation, as Unity's experimental package doesn't expose much to us at the moment, which is probably fine since it's not ideal for their experimental package's API to solidify before it's ready.
Just based on that, it seems like making Unity's SVG support (the mesh kind) work with Spine-Unity is questionable and still a bit farther away.
2.
But what you specifically have achieved so far, being able to render SVGs onto textures, is a bit more straightforward.
For rendering, every renderable Spine Attachment stores information about what Material and UV coordinates it uses. Since you're already able to render your SVGs into textures, we only need to package up that information into something the SkeletonRenderer system can use.
Add this using statement at the top of your script:
using Spine.Unity.Modules.AttachmentTools;
Then for each renderable attachment, you load in the necessary rendering information.
The simplified version of it is:
var r = yourTexture.ToAtlasRegion(templateMaterialSoItKnowsWhatShaderAndPropertyValuesToUse); // 1. Pack info for an unatlased texture into an object.
attachment.SetRegion(r); // 2. Store that info to the attachment.
These are the two basic steps, and you'd do this for all your renderable attachments.
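For a single slot, those two steps might be wired up like this. This is a sketch: "gun" is a placeholder slot name, `svgTexture` and `templateMaterial` are assumed to come from your SVG-rendering code, and `ToAtlasRegion`/`SetRegion` are the AttachmentTools extensions mentioned above.

```csharp
using Spine;
using Spine.Unity;
using Spine.Unity.Modules.AttachmentTools;
using UnityEngine;

// Sketch: replace one attachment's rendering source with an SVG-rendered texture.
Slot slot = skeletonAnimation.Skeleton.FindSlot("gun"); // placeholder slot name
var regionAttachment = slot.Attachment as RegionAttachment;
if (regionAttachment != null) {
    AtlasRegion region = svgTexture.ToAtlasRegion(templateMaterial); // 1. pack the texture info
    regionAttachment.SetRegion(region);                              // 2. point the attachment at it
}
```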
The above is also the underlying idea behind Mix and Match. https://github.com/pharan/spine-unity-docs/blob/master/Mix-and-Match.md
The current API is conducive to Mix and Match and its related workflow.
But full texture replacement, especially from a pre-atlased source, may be a bit different, and some utility methods may be missing. They probably won't be too hard to add, though, and would likely make sensible additions to AttachmentTools.
But depending on your data source, it may be better to go a different route.
AtlasAsset is actually an extendable class. We're coming out soon with a Spine-Unity SpriteAtlasAsset type which can allow you to source regions from Unity SpriteAtlas instead of Spine atlases. If a similar asset type could be created with your SVG-to-texture code in it, it would probably work fine, even in edit mode.
Ok, understood. From where we are right now, using the Spine-Unity SpriteAtlasAsset seems to be the right path; we would need to write a Spine-Unity SvgAtlasAsset though. Our IntegratorTool2D can generate both PNGs and SVGs in the same directory structure inside the Unity project, so writing a few lines of code to create those assets is pretty easy and straightforward once they exist on your side. Then the rest is also pretty straightforward: the Spine-Unity SvgAtlasAsset needs to take care of in-memory svg-to-png-to-atlas generation for both the editor and runtime.
The goal here is to be completely independent of the user's device resolution. That means the SvgAtlasAsset can scale the PNGs generated from the SVGs, and the final atlas, up or down according to the particular device. The SvgAtlasAsset therefore needs to know what initial resolution the character was designed for; once we have that, it is easy to adjust pixelsToUnit accordingly to get bigger or smaller generated PNGs and a bigger or smaller final atlas. That way, all Spine animations will still look great even on future devices with more pixels. I will make a small video showing how we currently generate PNGs and SVGs and how we create Spine projects from that export, so that it is clear where we can create that SvgAtlasAsset or even SpriteAtlasAsset. The only manual intervention would be to drag those scriptable objects onto the final Spine skeleton data, I guess, which we can also automate.
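A hypothetical skeleton for such an asset type might look like the following. Everything here is an assumption: AtlasAsset's overridable surface differs between spine-unity versions, and `svgAssets`, `designResolution` and `BuildAtlasFromSvgs` are made-up placeholders, not existing APIs.

```csharp
using Spine;
using Spine.Unity;
using UnityEngine;

// Hypothetical sketch of the SvgAtlasAsset discussed above.
// Assumes AtlasAsset can be extended to materialize its Atlas lazily;
// all members below are illustrative placeholders.
public class SvgAtlasAsset : AtlasAsset {
    public TextAsset[] svgAssets;     // source SVG files imported as text
    public Vector2 designResolution;  // resolution the character art was designed for

    protected Atlas BuildAtlasFromSvgs () {
        // 1) Compute a scale factor from the current device resolution
        //    relative to designResolution (adjusting pixelsToUnit to match).
        // 2) Rasterize each SVG at the scaled size and pack the rectangles
        //    into one or more page textures (steps 4a/4b above).
        // 3) Create AtlasPage/AtlasRegion objects for the packed layout and
        //    return the in-memory Atlas (step 4c), for edit mode and play mode alike.
        throw new System.NotImplementedException();
    }
}
```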