Keith Koby
Forum Replies Created
-
There are better tools available than FCP 7 for capturing and organizing materials from tape, including free ones that come with your capture devices.
If you haven’t noticed the switch to digital delivery since the tsunami that hit Japan two years ago, then you must be working mostly in archival materials. We have been receiving HD feature-length films as ProRes via Aspera, quicker than FedEx, for nearly two years now, and lower-bit-rate mezzanines for five years.
Last year we stopped delivering promos on tape. We are done buying videotape in bulk; LTO tape, on the other hand, we buy a lot of. But even that model is changing. All of this material is delivered over the internet, and soon even big facilities with lots of video assets will be backing up offsite over the internet. It happens a lot already.
Yeah, we still receive some materials on HDCAM and Digibeta. FCP 4 through 7, and maybe earlier, were great at letting you organize material in a Capture Scratch folder after capture and then edit. It was a great app for the last, what, ten years? What is exciting about FCP X is that it is a tool with useful features and tons of potential for the next ten years or so.
I can understand your hesitation to part with old familiar workflows, but you need to look at the reality of the situation.
Keith Koby
Sr. Director Post-Production Engineering
iNDEMAND
Howard TV!/Movies On Demand/iNDEMAND Pay-Per-View/iNDEMAND 3D
-
Ha! Agreed! Great to hear from you Gary!
-
[Jeremy Garchow] “Meaning, you are dedicating one particular volume for testing or only one volume in particular does not show the latency for whatever reason?”
We have one volume, actually in production and doing lots of ingest, that uses Mac minis with SANLinks and Gigabit Ethernet Thunderbolt adapters as the metadata controllers. They use the built-in Gigabit NIC for the private metadata network, and the Thunderbolt Gigabit adapter, daisy-chained off the second Thunderbolt port on the SANLink, as the “public” LAN connection. No complaints yet. It’s snappy.
[Jeremy Garchow] “Have you tried an ATTO box?”
No, but it would in effect be the same situation as the Promise SANLink, in that it is a container for a PCIe Fibre Channel card with a Thunderbolt adapter on it. I would assume similar latency.
-
What is the footage coming from AE?
-
From the minority camp of those wanting “a” PCIe slot: latency is supposedly introduced over the Thunderbolt-to-Fibre Channel SANLink adapter, which makes Thunderbolt suspect as a means of connecting to Fibre Channel while running a metadata controller on a SAN.
Having said that, we are doing it with Mac minis on one volume in particular, and it seems to be working just fine.
-
But think about it: it’s just a spec for the new UHDTV stuff going forward… The only place I think it really causes problems is live TV broadcast, where you would see 4K UHDTV acquisition and broadcast at integer (1.00) frame rates alongside live standards-converted down-conversions to “pedestrian” HD at 1.001 frame rates. That’s what concerns the big broadcasters… The chance of artifacts introduced by a live standards conversion is too great, and the balance of people watching 4K versus HD will still be lopsided toward HD in 2020.
For the camcorder class that only shoots at 1.001 rates, it doesn’t really matter, because the footage would need to be conformed and upconverted to mix with UHDTV content anyway. Kind of the same thing applies to the current crop of DVDs, Blu-rays and TVs: if you acquire and master to a 4K integer-frame-rate spec and want to deliver to today’s TVs and peripherals, you have to conform.
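To put a number on that conform problem, here is a rough Python sketch (just illustrative arithmetic, not anything from a real pipeline) of how far an integer-rate master and a 1.001-rate chain drift apart:

```python
# "59.94" is really 60000/1001, so a 60.00 fps master and a 59.94 fps
# broadcast chain drift apart instead of lining up frame for frame.
from fractions import Fraction

exact_60 = Fraction(60)            # integer ("1.00") UHDTV rate
ntsc_60 = Fraction(60000, 1001)    # "1.001" rate, a.k.a. 59.94

print(float(ntsc_60))              # 59.94005994...

# Frames produced in one hour at each rate:
frames_exact = exact_60 * 3600     # 216000
frames_ntsc = ntsc_60 * 3600       # ~215784.2

# The mismatch is why 1.001 material has to be conformed (re-timed or
# resampled) before it can be mixed with integer-rate UHDTV content.
print(f"difference per hour: {float(frames_exact - frames_ntsc):.1f} frames")
```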
-
Gradient banding is caused by a loss of bit depth during processing. We experienced an issue once with a particular plugin that was dropping the footage to 8-bit for rendering, which introduced banding. If you are using built-in effects, then it probably isn’t this.
Go to your Project Library, select your project, open the Inspector and get its settings from the wrench icon. Verify that your timeline isn’t set to ProRes LT or Proxy, or whichever one they brought into X.
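For what it’s worth, here is a quick NumPy sketch (purely illustrative, nothing FCP-specific) of why a render pass that drops to 8-bit bands where a 10-bit pass doesn’t:

```python
import numpy as np

width = 1920
gradient = np.linspace(0.0, 1.0, width)   # an ideal smooth ramp, 0..1

levels_10bit = np.round(gradient * 1023)  # quantize to 10-bit code values
levels_8bit = np.round(gradient * 255)    # quantize to 8-bit code values

print(len(np.unique(levels_10bit)))       # 1024 distinct steps
print(len(np.unique(levels_8bit)))        # only 256 -> visible bands

# Each 8-bit step spans about 7.5 pixels of this ramp, which the eye
# reads as a band; a plugin that drops to 8-bit mid-pipeline bakes it in.
```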
-
It is interesting that you pose this question. I understand that it is in regard to your particular footage. Coincidentally, there is quite the debate happening *again* in the broadcast-engineering community about UHDTV standards and whether 1.001 frame rates should be vanquished.
It would be nice to have a straight 60 frames and no drop-frame timecode to worry about in the future.
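For anyone who hasn’t had to implement it, here is a rough Python sketch of the standard 29.97 drop-frame bookkeeping that a straight 60 would let us retire (the function name is mine, not from any particular API):

```python
# At 29.97 (30000/1001) fps, frame *labels* ;00 and ;01 are skipped at
# the start of every minute except each tenth minute, so the displayed
# timecode stays in step with wall-clock time.
def drop_frame_timecode(frame):
    """Frame count -> 29.97 drop-frame timecode string (HH:MM:SS;FF)."""
    frames_per_10min = 17982          # real frames in 10 minutes
    frames_per_min = 1798             # real frames in 1 minute (1800 - 2)
    d, m = divmod(frame, frames_per_10min)
    if m >= 2:
        frame += 18 * d + 2 * ((m - 2) // frames_per_min)
    else:
        frame += 18 * d               # start of a tenth minute: no drop
    ff = frame % 30
    ss = (frame // 30) % 60
    mm = (frame // 1800) % 60
    hh = (frame // 108000) % 24
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

print(drop_frame_timecode(1799))   # 00:00:59;29
print(drop_frame_timecode(1800))   # 00:01:00;02  <- ;00 and ;01 dropped
print(drop_frame_timecode(17982))  # 00:10:00;00  <- tenth minute keeps them
```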
-
[Walter Soyka] “Keith, I’d be curious to hear your thoughts on Adobe Anywhere.”
I really don’t know much about it. I’ve only seen the mutton-chop video on their website. They don’t explain how it really works in that video, but it looks like you need an array of servers to do the crunching to whatever proprietary format they use for streaming, and then all the rendering happens back in the data center.
I’m curious how much it costs, and whether you can use their streamed video to feed an external monitor at the remote site. Also, what happens when the remote editor goes to grab his/her favorite plugin but it isn’t installed on the server back in the data center? Or, for that matter, is their favorite font available?
I know of several editors who would be really into working from home.
-
[Walter Soyka] “I am happy to be corrected here, but wasn’t FCSvr Postgres, while FCPX is CoreData/SQLite?”
You are correct, and that is one big barrier to making FCP X Events/Projects shareable by reading the same Event out of a SAN location on shared storage. SQLite is essentially one user at a time, and its file locking can’t be trusted across network-mounted storage anyway. To get multiple people just reading (never mind reading and writing) the same exact Event at the same time on shared storage, you’ll need a new database type under the hood, or an external server with a check-in/check-out scenario or permission controls, or a little of both. SQLite is the only Cocoa-compliant DB I’m aware of, so you are talking not only about a big overhaul but also about going off the reservation if you want to do it “in app”. I doubt that is how it would happen.
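To illustrate the single-writer point, here is a quick Python sketch using the stock sqlite3 module (the event.db file is just a stand-in, not FCP X’s actual store):

```python
import sqlite3

# isolation_level=None = manual transactions; timeout=0 = fail fast
conn_a = sqlite3.connect("event.db", timeout=0, isolation_level=None)
conn_a.execute("CREATE TABLE IF NOT EXISTS clips (name TEXT)")
conn_a.execute("BEGIN IMMEDIATE")              # user A takes the write lock
conn_a.execute("INSERT INTO clips VALUES ('interview_01')")

conn_b = sqlite3.connect("event.db", timeout=0, isolation_level=None)
try:
    conn_b.execute("BEGIN IMMEDIATE")          # user B asks for the same lock
except sqlite3.OperationalError as err:
    print(err)                                 # "database is locked"

conn_a.execute("COMMIT")
```

Readers can coexist with each other on a local disk, but those file locks are exactly the mechanism that breaks down over network-mounted storage, which is why shared-SAN access to a live Event is off the table either way.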
So if it happens, will it be through an external control server, or through a peer-to-peer check-out/check-in share process tracked in the app? And how does it work from a user standpoint?
I could imagine an app-to-app method through the Share menu. You send someone an invite to share a section of your project (a compound clip) or an Event, for example. When both inviter and invitee are online, the exchange of database data could happen as a background event. When the invitee returns the share, you’d also need processes for accepting, rejecting or merging the returns. I could see Auditions being extremely powerful in this scenario. It gets complicated if it is a remote share process where proxy media is involved, and then offline/online processes are needed as well.
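Purely as a thought experiment, here is a Python sketch of what such a share record and its round trip might look like; every name in it is hypothetical:

```python
# Hypothetical data model for the invite/return flow imagined above: a
# share is a record that travels with a copy of the compound clip's
# event data, and the inviter merges (or rejects) what comes back.
from dataclasses import dataclass, field

@dataclass
class Share:
    clip_id: str                     # compound clip or Event being shared
    invitee: str
    status: str = "invited"          # invited -> accepted -> returned
    changes: list = field(default_factory=list)

def send_invite(clip_id, invitee):
    return Share(clip_id, invitee)   # db payload syncs in the background

def return_share(share, edits):
    share.changes, share.status = edits, "returned"

def resolve(share, accept):
    # inviter merges or rejects; Auditions could hold both versions
    return share.changes if accept else []

s = send_invite("act2_montage", "remote_editor")
return_share(s, ["retimed shot 14", "swapped music cue"])
print(resolve(s, accept=True))
```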
No matter the method, here are some of my wish list items for collaboration/sharing:
1. Live-updated master/slave Event sharing, and a project/compound-clip check-out/check-in process.
2. Sharing of timelines and Events not only to peers, but also to render machines for background rendering, exporting and analysis. If you are going to share the stuff, why not share it to machines available for processing?
3. Live updates to shared Events so that new data can be read in real time (for growing-file applications).
4. APIs for pushing shared Event data back out to a MAM.
5. Background “Share” transfers and updates of databases to the peer or control server, with unobtrusive notifications in the app.
It seems like they have a good base to create a great collaborative tool. From our standpoint, we’d love to see it happen.