Two points on bit-depth within AE:
1. 16-bit can offer a noticeable quality improvement over the standard 8-bit, but mostly on things like smooth gradients, fine edge detail, etc.
2. 32-bit is ONLY useful if you are working with HDR plates or want certain effects (mostly blurs and glows) to take on a more photographic look.
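If you want to see why the gradient point matters, here's a back-of-the-envelope Python sketch (not AE's actual math, just counting quantization levels): a smooth 0-to-1 ramp spread across a 2000-pixel frame can only land on 256 distinct values at 8-bit, so the steps clump into visible bands, while 16-bit has more than enough codes for one level per pixel.

```python
# Rough illustration of gradient banding vs. bit depth.
WIDTH = 2000  # hypothetical width in pixels of a smooth 0->1 ramp

def ramp_levels(bit_depth):
    """Count the distinct code values the ramp actually uses at this depth."""
    max_code = 2 ** bit_depth - 1
    return len({round(x / (WIDTH - 1) * max_code) for x in range(WIDTH)})

print(ramp_levels(8))   # 256 levels -> bands several pixels wide, visible
print(ramp_levels(16))  # 2000 levels -> one per pixel, no visible stepping
```

The same clumping is why banding shows up first in skies, vignettes, and soft shadows rather than in detailed texture.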
Either way, these choices really only affect what is happening WITHIN After Effects as you work. 99% of the time you will be outputting to an 8-bit or 10-bit file format anyway, such as the Animation codec (8-bit) or ProRes 422 (10-bit). Outputting at higher than 10-bit depth requires frame sequences or specialized (read: expensive and tricky) codecs, and most of your apps will not accept them anyway (including FCP).

In the end, that's not even the point of higher bit depths in AE. The working bit depth simply defines the precision at which effects are calculated, so that your work looks cleaner when output to 8 or 10-bit. Working at these depths is just a way of preserving information and detail while you work inside AE. When you output the file to 8-bit you really aren't going to lose much (if any) of that added detail, but if you're really concerned, feel free to render to a 10-bit codec.
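That "precision of intermediate calculations" point can be sketched in a few lines of Python (hypothetical values, not AE's internals): darken an image to 10% and then brighten it back 10x. If every step is rounded back to 8-bit, the dark pass crushes the image into a handful of codes that the brighten pass can never recover; if the chain runs at higher precision and only the final output is quantized to 8-bit, nothing is lost.

```python
# Same two-effect chain (darken 10x, then brighten 10x), two working depths.
def to8(v):
    """Quantize a 0..1 value to the nearest 8-bit code (clamped)."""
    return min(255, round(v * 255)) / 255

src = [to8(i / 255) for i in range(256)]  # all 256 possible grey levels

low_depth  = [to8(to8(to8(v * 0.1)) * 10) for v in src]  # 8-bit between effects
high_depth = [to8(v * 0.1 * 10) for v in src]            # full precision until output

print(len(set(low_depth)))   # only a few dozen levels survive -> posterization
print(len(set(high_depth)))  # all 256 levels preserved
```

This is the whole argument for a 16-bit (or 32-bit float) working space: it protects the in-between results of your effect chain, even though the file you finally render is still 8 or 10-bit.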
Bottom line, it couldn’t hurt for you to work in 16-bit given your source materials, but 32-bit would be total overkill with painfully long render times and minimal, if any, improvements over 16-bit.
Brendan Coots
Splitvision Digital
http://www.splitvisiondigital.com