- May 14, 2014 at 6:22 pm
Here’s the setup:
CS6 Master Suite
96 GB RAM
Dual Xeon (E5-2667), 6 cores each, 2.9 GHz
Graphics card modded onto the Mercury CUDA supported list in Premiere (doesn’t help AME)
Rendering a 4K x 4K image sequence exported from Adobe Premiere to AME.
This was a proxy build from 1K x 1K 30 fps; clips were offlined and relinked to the 4K master plates.
Nearly ZERO effects in Premiere – mostly Cross Dissolves/basic editing etc.
All files are locally referenced (internal harddrives)
2-3% CPU usage; it barely registers across the 12 available cores.
90% RAM usage; RAM hit its maximum (10 GB reserved for the system).
40 hours of rendering…
265 GB of data so far…
THERE HAS TO BE A FASTER WAY.
After this render is done, I’ve thought about exporting the 4K master timeline to After Effects and rendering with multiprocessing support.
But does anyone have any other ideas?
AME CS6 is a DOG.
To note: this is a serious issue we will have in the future; 4K x 4K is only half-res for final. Finals will eventually reach 8K x 8K.
- May 15, 2014 at 2:50 pm
In AME, what’s your renderer? The drop-down menu should say Mercury Playback Engine GPU Acceleration (CUDA), since you’re running an Nvidia GPU.
Other things that contribute to render times:
(1) Format/codec of source media
(2) Format/codec of output
(3) Disk speed and connection type of where your source media is located
(4) Disk speed and connection type of where your output media is going
AME CC moved further toward utilizing more cores and more of the CUDA technology. CS6 made good strides over CS5.5, but CC turned up the juice a bit. Something else to consider…
- May 15, 2014 at 10:53 pm
We are running CS6 AME; there is no selection for GPU rendering, no drop-down menu to pick a different render engine.
1. Source footage is local PNG image sequences
2. Output is local PNG image sequences
3. SATA 3 hard drives, 7200 RPM, 2 TB
4. SATA 3 hard drives, 7200 RPM, 2 TB
Looked at resources throughout the process: CPU usage was nearly ZERO the entire time, peaking at 3%. RAM at 95%. Disk activity was sub-10 MB/s and network was ~100 KB/s.
39,412 frames total.
57 Hours of Rendering
- May 15, 2014 at 11:19 pm
PNG image sequences aren’t a multi-threaded format, so the encode will only use one core of one processor. When you jump to other formats like ProRes, DNxHD, or H.264, you’ll see multiple cores being utilized.
So you’re also running the same hard drive for both source media and final output? Not usually recommended, but since you’re doing an image sequence, your slow point will be the individual processing of each frame.
Keep in mind you’re also doing a 4K frame, and your image sequence is spitting out frame-by-frame results (30 stills per second of footage). That’s a lot of processing for a single-core process.
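Since each frame in a sequence is independent, the single-core PNG encode can be worked around outside AME by fanning frames out across all cores. A minimal sketch of the idea, assuming a standalone Python pipeline (the `encode_frame` body uses zlib deflate, the same compressor PNG spends its CPU on, as a stand-in; a real pipeline would call Pillow, ImageMagick, or ffmpeg per frame instead):

```python
# Sketch: per-frame encoding is embarrassingly parallel, so spread it
# across worker processes even though each encoder is single-threaded.
import zlib
from multiprocessing import Pool

def encode_frame(frame_bytes: bytes) -> bytes:
    # Stand-in for a per-frame PNG write: deflate at max compression,
    # which is where PNG spends most of its CPU time.
    return zlib.compress(frame_bytes, level=9)

def encode_sequence(frames, workers=12):
    # One worker per core; each frame encodes independently.
    with Pool(processes=workers) as pool:
        return pool.map(encode_frame, frames)

if __name__ == "__main__":
    # 8 dummy "frames" of 1 MB each
    frames = [bytes(1024 * 1024) for _ in range(8)]
    out = encode_sequence(frames, workers=4)
    print(len(out), "frames encoded")
```

With 12 real cores, this kind of fan-out is roughly what the multi-threaded codecs (ProRes, DNxHD, H.264) are doing internally.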
- May 27, 2014 at 9:49 pm
Just to put a conclusion on this:
The PNG 4K x 4K plates finally completed: 57 hours of rendering through AME, 198 GB.
I ran another render just to see what would happen, this time using a JPEG sequence @ 100% quality, 4K x 4K, 34,932 frames. It rendered in 9 hr 32 min; 168 GB.
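The per-frame gap between the two renders is stark. A quick back-of-the-envelope check, using only the figures reported in this thread:

```python
# Per-frame throughput of the two renders reported above.
png_seconds = 57 * 3600            # 57 h for 39,412 PNG frames
jpeg_seconds = 9 * 3600 + 32 * 60  # 9 h 32 min for 34,932 JPEG frames

png_per_frame = png_seconds / 39412
jpeg_per_frame = jpeg_seconds / 34932

print(f"PNG:  {png_per_frame:.1f} s/frame")   # ~5.2 s/frame
print(f"JPEG: {jpeg_per_frame:.1f} s/frame")  # ~1.0 s/frame
print(f"Speedup: {png_per_frame / jpeg_per_frame:.1f}x")
```

Roughly a 5x difference per frame, consistent with a single-core encoder versus one that keeps several cores busy.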
PNG was the issue; the format killed the processing. The JPEG render was using about 30-45% CPU and around 87 GB of RAM. Night-and-day difference.
We are limited on the output file format: our slicing server for the dome will not read ProRes, etc.
I was quite surprised that the PNG took so long, since this was a pass-through render, frame-sequence source to frame-sequence output; the PNG codec was the enemy here. I never realized PNG was such a compression-heavy file write. But now I know where the demons sleep!