Making super cool assets for 3D games that don’t crash mobile devices requires artful tweaking. Whether you’re making a WebAR-enabled pamphlet or the next Fortnite, asset optimization is extremely important. From rigs to textures to polygon counts, there are a handful of factors that should be carefully and creatively considered before you dive into design:
I: Textures & Shaders
For the best performance, textures should be in .png or .jpg format and, in most cases, no larger than 4096x4096 (4K). Making the image dimensions powers of two (512, 1024, 2048 and so on) also helps with performance, since GPUs compress and mipmap those sizes most efficiently.
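The power-of-two rule is easy to automate in an asset pipeline. Here’s a minimal Python sketch (the function names and the 4096 cap are our own, not part of any Unity API) that flags off-size textures and suggests the nearest power-of-two dimension:

```python
def is_power_of_two(n: int) -> bool:
    """True when n is a positive power of two (512, 1024, 2048, ...)."""
    return n > 0 and (n & (n - 1)) == 0

def nearest_power_of_two(n: int, cap: int = 4096) -> int:
    """Round a texture dimension to the closest power of two, capped at 4K."""
    lower = 1 << (n.bit_length() - 1)   # largest power of two <= n
    upper = lower << 1                   # smallest power of two > n
    best = lower if n - lower <= upper - n else upper
    return min(best, cap)
```

Run a check like this in a pre-build step so a stray 1000x900 texture gets caught (and re-authored at 1024x1024) before it ships.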
Here’s a fun fact: you have to make (or buy) your own shaders. Any special shaders you’re using in your offline rendering pipeline will need to be rebuilt from scratch in Unity. We recently made a hologram-style shader for an AR game that has classic blue wireframe-like holograms, similar to Tron or the Star Wars Death Star plans.
You can also achieve a toon-shaded look with a custom shader (there are a few decent ones on the Unity Asset Store to get you started). Just keep in mind that it’s challenging to get precise control of outlines. If you’re used to animation for television, prepare yourself to get ragey: it should be easier than it is, and it always looks slightly different than you expect. What you have to remember, though, is that your shader and all of the application code need to run on smartphones and render in real time. That’s the magic of it, after all!
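The core of a toon look is just quantizing the lighting term into a few flat bands. This little Python function is illustrative only — in Unity the real thing lives in shader code — but it shows the idea:

```python
import math

def toon_shade(n_dot_l: float, bands: int = 3) -> float:
    """Quantize a Lambert lighting term (dot of surface normal and light
    direction) into discrete bands, producing the flat 'stepped' cartoon look."""
    clamped = max(0.0, min(1.0, n_dot_l))
    return math.floor(clamped * bands) / bands
```

Smooth shading runs this term continuously from 0 to 1; snapping it to three or four bands is what gives cel animation its hard shadow edges.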
II: Polygon Count
Polygon count is a major consideration for performance and load speed, and the right number depends heavily on how many items are in the scene, which devices you’re supporting, animations, etc. According to Unity, somewhere between 300 and 1,500 polygons per mesh will give you good results on mobile. Otherwise, the non-sciencey answer is: keep polygon count as low as possible when and where you can, so there is more wiggle room for the number of objects on screen, shaders, effects and other running code.
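Those budget numbers are easy to enforce with a tiny pipeline check. In this sketch, the 1,500 per-mesh target comes from Unity’s guidance above, while the 100,000 scene total is a made-up example you’d tune per device tier:

```python
PER_MESH_TARGET = 1_500    # Unity's suggested upper bound per mesh
SCENE_BUDGET = 100_000     # hypothetical total for a mid-range phone

def check_poly_budget(mesh_polys: dict) -> list:
    """Return human-readable warnings for meshes or scenes over budget."""
    warnings = [
        f"{name}: {count} polys exceeds the {PER_MESH_TARGET} per-mesh target"
        for name, count in mesh_polys.items()
        if count > PER_MESH_TARGET
    ]
    if sum(mesh_polys.values()) > SCENE_BUDGET:
        warnings.append(f"scene total exceeds the {SCENE_BUDGET} budget")
    return warnings
```

Wire a check like this into your export step and over-budget assets get flagged before anyone wastes a day debugging frame drops.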
Keep in mind that you could technically have two or more 1,500 poly characters running around the same scene, but still be as conservative as possible — especially if you’re making a game for kids, as kids usually have hand-me-down and aging devices.
If you aren’t hip to normal maps, you should try to use them. You can have a quite simple model with limited geometry and use a normal map image as a texture that simulates depth, surface detail and geometry. With clever use of normal maps, you can replicate the look of a high-poly model on low-poly geometry.
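Normal maps are usually baked from a high-poly sculpt in your DCC tool, but the underlying math is simple: treat a height map as a surface and take its gradient. A pure-Python sketch (grids as nested lists — real pipelines bake this in Substance, Blender and friends):

```python
import math

def height_to_normal(height, strength=1.0):
    """Convert a 2D height grid into per-pixel surface normals using
    central differences -- the essence of baking a normal map."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Slope in x and y, clamping lookups at the edges of the grid.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            # The normal tilts against the slope; normalize (-dx, -dy, 1).
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            row.append((-dx / length, -dy / length, 1.0 / length))
        normals.append(row)
    return normals
```

A flat height map yields straight-up normals of (0, 0, 1); any bump tilts them, and the lighting pass reads that tilt as real geometry.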
Polygons aren’t the only factor. Also consider draw calls: how many objects are being drawn to the screen at once. You could have four simple cubes on screen, but if their shaders are doing some crazy multi-pass stuff that multiplies the draw calls, the frame rate is going to grind to a halt.
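A rough way to reason about draw calls is to count unique mesh/material combinations, since engines can often batch identical ones. This is a deliberate simplification — Unity’s actual batching (static, dynamic, SRP Batcher, GPU instancing) has many more rules — but it’s a useful mental model:

```python
from collections import defaultdict

def estimate_draw_calls(objects):
    """Naive draw-call estimate: one call per unique (mesh, material) pair,
    assuming identical pairs can be batched together."""
    groups = defaultdict(int)
    for obj in objects:
        groups[(obj["mesh"], obj["material"])] += 1
    return len(groups)
```

Four cubes sharing one material batch to a single call; give each cube its own material and you’re back to four.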
III: Rigs & Animation
Unfortunately, when it comes to rigs, there are many variables, and certain techniques are not always supported. It’s not an exact science but, in a nutshell: you can’t use deformers, so you have to stick to bones and blendshapes. Also, if you later need to poly-reduce or otherwise optimize the models, the rigs (and texture maps) will break.
To get your rigs and animation out of your animation software (like Maya or 3ds Max), you will need to first export the skinned mesh on its own, then export only the bones and animation data for the rest of the animations. Then, hook them up inside Unity. It’s not the most fun workflow, but it works. If you try to import it all at once, you’ll end up with duplicated assets and an inflated file size. Depending on what you’re working on, it might be better to just animate right inside Unity.
Your final asset design budget is going to be based on a number of things: shaders, physics, bump maps, normal maps, particles, post-processing effects, etc. If you’re used to a pipeline for television or film, prepare to be frustrated! It’s not an exact science, but we’ve learned a thing or two over the years and we’d love to help you out on your next project.
Get in touch if you could use some help on your next AR tool, VR experience or Unity game.