I appreciate workflow discussions, and it is true that syncing sound is traditionally an assistant editor's job. However, technology changes, best practices change, and workflows change. In a contemporary workflow we try to maintain connectivity to as much of the metadata as possible throughout the pipeline. By embedding sound metadata (and other metadata) at the point of transcode, which happens on set, we maintain metadata integrity. To me this is more important than the question of who syncs the sound.

The syncing itself (sync is generally on, as you say, with a good crew) is usually easy. My question is about maintaining the integrity of the audio timecode metadata, which currently seems to get lost at sync in Resolve, for no good reason. Even though it generally ought to match the camera timecode, it would be useful to have it carried as separate metadata for those times when TC sync cannot be maintained.
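For what it's worth, the audio timecode in question isn't exotic: in a Broadcast Wave file it lives in the `bext` chunk as a 64-bit `TimeReference` (samples since midnight, per EBU Tech 3285), so any tool in the pipeline could carry it forward. Here is a minimal sketch of recovering it directly from the file; the function and variable names are my own, and it assumes a well-formed RIFF/WAVE with a standard `bext` chunk.

```python
import struct

def read_bwf_timecode(data: bytes):
    """Parse a Broadcast Wave file and return (sample_rate, time_reference).

    TimeReference is the bext chunk's 64-bit count of samples since
    midnight (EBU Tech 3285); dividing by the sample rate gives the
    original recorder timecode as seconds-since-midnight.
    """
    assert data[0:4] == b"RIFF" and data[8:12] == b"WAVE", "not a RIFF/WAVE file"
    pos = 12
    sample_rate = None
    time_reference = None
    while pos + 8 <= len(data):
        chunk_id = data[pos:pos + 4]
        size = struct.unpack("<I", data[pos + 4:pos + 8])[0]
        body = data[pos + 8:pos + 8 + size]
        if chunk_id == b"fmt ":
            sample_rate = struct.unpack("<I", body[4:8])[0]
        elif chunk_id == b"bext":
            # TimeReference sits after Description(256) + Originator(32) +
            # OriginatorReference(32) + OriginationDate(10) + OriginationTime(8)
            time_reference = struct.unpack("<Q", body[338:346])[0]
        pos += 8 + size + (size & 1)  # RIFF chunks are word-aligned
    return sample_rate, time_reference

def tc_string(samples: int, rate: int, fps: int = 24) -> str:
    """Convert a samples-since-midnight count to an HH:MM:SS:FF string."""
    total_seconds, rem = divmod(samples, rate)
    frames = rem * fps // rate
    h, rem = divmod(total_seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{frames:02d}"
```

Since the value is sitting right there in the WAV, there's no technical reason it couldn't be preserved as a separate metadata field on the synced clip.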