Forum Replies Created

Page 2 of 9
  • Helge Tjelta

    September 25, 2014 at 12:20 pm in reply to: Custom resolution & output?

    Hi, just create a setting in Compressor, then in FCPX make a new output, choose Compressor, and select your PhotoJPEG setting.

    I did this with a DVX system for the D3 playback for a 5760×960 output. Worked like a charm.

    Helge

  • Helge Tjelta

    September 6, 2014 at 11:23 pm in reply to: adobe begins construction on the edit media deathstar

    As explained in the whitepaper:

    How it works

    Source media files used by Adobe Anywhere are kept on a storage server connected to the cluster via a high-bandwidth OS-level file system mount. Local and remote users can upload source files to the storage server and can even work on the files while upload is in progress. Once the files finish uploading, they can be immediately used by other editors.

    The core component in the Adobe Anywhere cluster is the Collaboration Hub. It contains the database of project information and metadata, manages user access, coordinates the other nodes in the cluster, and provides an API for integration into other Adobe Anywhere clusters.
    The Adobe Mercury Streaming Engine nodes provide real-time, dynamic viewing streams of Adobe Premiere Pro and Prelude sequences with GPU-accelerated effects to your team members on their individual computers. The media is streamed from its native file formats on the storage server out to the users.
    And when it’s time to export from Adobe Anywhere, the Mercury Streaming Engine can generate final files from sequences and other media.

    A minimum of three Mercury Streaming Engines is required for a small workgroup. But the system can easily be scaled to support larger groups or more complex use cases, simply by adding more Mercury Streaming Engines. The exact number required depends on use-case details such as the number of simultaneous users and media formats.

    Adobe Anywhere does not require dedicated network cabling between individual editing workstations and the Anywhere cluster. This can greatly lower the costs associated with your network infrastructure and make it possible to create flexible workspaces for your creative team.
    The viewing streams Adobe Anywhere delivers to individual team members are small and light enough to fit on shared, standard LAN/WAN networks. Adobe Anywhere does not edit, move, or delete the original media on the storage server. These functions are reserved for your media asset management system. And there are no proxy files in the Adobe Anywhere system. The server accesses the full resolution files and uses the power of the Mercury Streaming Engine to deliver exactly what is needed, given the bandwidth available. As a result, you get less complexity, lower workflow costs, and speedier production.

    Helge

  • Helge Tjelta

    September 6, 2014 at 11:12 pm in reply to: adobe begins construction on the edit media deathstar

    I’m not quite so sure about that…

    From my understanding, there is nothing local.

    E.g. if you open a timeline prepared for you by someone else, sitting at home with your laptop on Wi-Fi, you will, right from the start, be able to play a 5K RED Epic raw file.

    That is because you are only seeing a video stream in your canvas window. So you never have a file at your place, unless you tell the system to offload a bunch of proxies.

    Otherwise you only get a stream of a “viewport” showing the master edit being done.

    Helge

  • Helge Tjelta

    September 6, 2014 at 9:36 pm in reply to: adobe begins construction on the edit media deathstar

    Yes, I know it is a BIG over-simplification. But that was the whole point.

    I get the collaboration and the other stuff.

    But the main principle is: you only see streams of video coming from the mainframe, and only send commands from your local machine to the mainframe.

    That’s why you don’t need the best lines (connection) in the world.

    And that’s the whole point. You can run it on much more hardware, and use what you’ve got, as long as it can decode the incoming video fast enough.
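    The commands-up, streams-down model can be sketched in a few lines. All names here are hypothetical, this is not Adobe’s actual API, just an illustration of why the link stays thin:

```python
# Toy sketch of the remote-editing model: the client ships only a tiny
# edit command; the server keeps the full-resolution media and returns a
# stream whose quality adapts to the available bandwidth.
# All names are hypothetical; this is not the Adobe Anywhere API.

def server_render(timeline, command, bandwidth_kbps):
    """Mainframe side: apply the edit, render a frame, size the stream."""
    timeline.append(command)                  # full-res media never leaves the server
    frame_id = len(timeline)                  # stand-in for a rendered frame
    quality = min(100, bandwidth_kbps // 50)  # stream quality scales with the link
    return frame_id, quality

timeline = []
frame_id, quality = server_render(timeline, "cut at 00:01:12", bandwidth_kbps=4000)
```

    The local machine only ever handles the small command string and the incoming stream, which is why modest hardware is enough.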

    Helge

  • Helge Tjelta

    September 6, 2014 at 9:30 pm in reply to: iDraw and Motion

    Hi, you can do it as a PSD, but then you lose the vector part.

    Coming from iDraw, the whole point is to get it in as vectors and not bitmaps.

    Going the PSD route, you get only bitmaps.

    Helge

  • Helge Tjelta

    September 6, 2014 at 4:43 pm in reply to: iDraw and Motion

    I have no Adobe programs, only iDraw and Pixelmator. Doing fine with FCPX and motion.

    All vector material I get from outside goes into iDraw for cleanup, then is output as PDF. Add these to Motion and turn off the fixed resolution in the media window. Now rescale to whatever size you like and never lose quality.

    Have a nice ride! 🙂

    Helge

  • Helge Tjelta

    September 6, 2014 at 4:39 pm in reply to: adobe begins construction on the edit media deathstar

    OK guys, before the praise begins, just read Oliver’s article fully.

    Adobe Anywhere seems to me just like a bunch of internally built “remote desktops”. So it is no wonder this works on a laptop. Nor am I impressed with 5K playback, because there is never any 5K playback going on locally at all. At best we have a transfer of 1/4 HD at compressed quality over the VPN. (1/4 because it looks like a Mac with 1920×1200 of screen estate, and the recorder window is 1/4 of the screen.)

    Basically, if you take any NLE and run it in a VPN session you’ll get the same. What is cool is that they have built in this “remote desktop” and hidden it, so it “looks” like you are on the mainframe locally.

    This is just like going back to a Citrix way of working at the office. The clever part is the local ingest transferring to the mainframe in the background.

    Think of Adobe Anywhere as OnLive or Steam, just with upload in addition.

    But again, read Oliver’s article; he explains the differences between the Avid and Adobe solutions.


    Helge

  • Helge Tjelta

    May 21, 2014 at 7:34 pm in reply to: Convert DNxHD in MXF wrapper to ProRes

    I would just download Blackmagic DaVinci Resolve, import all the MXF files, put them on a timeline, and render them all out again as ProRes.

    It’s free and works as a great conversion hub, and later as your coloring stage as well.
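    If you would rather script the same conversion, ffmpeg (my suggestion, not mentioned above) can also transcode DNxHD-in-MXF to ProRes. A minimal sketch that just assembles the command line:

```python
def prores_convert_cmd(src, dst):
    # Assemble an ffmpeg command that transcodes DNxHD-in-MXF to ProRes.
    # prores_ks is ffmpeg's ProRes encoder; -profile:v 3 selects 422 HQ.
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks", "-profile:v", "3",
        "-c:a", "pcm_s16le",  # ProRes QuickTimes normally carry PCM audio
        dst,
    ]

cmd = prores_convert_cmd("clip.mxf", "clip.mov")
```

    With ffmpeg installed, run the list through subprocess.run(cmd) and loop over your MXF files to batch a whole folder.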

    🙂

    Helge

  • Helge Tjelta

    February 28, 2014 at 5:26 am in reply to: Protocol for Backing up work in FCPX 10.1.1

    Hi Jack, also remember to use my app called X-wiper to clean up your library before archiving, or just as a regular cleanup tool.

    It will delete render files, proxies, transcoded media, and shared files. You choose what to delete, and with one button they are all gone, freeing up a lot of GBs in your libraries.

    Check it out at:

    https://tiny.cc/1wr47w

    Helge

  • Helge Tjelta

    February 27, 2014 at 7:45 pm in reply to: Loudness and broadcast

    Hi Pam, it all depends on the spec for the channel. In Europe, we now have a standard called R-128. When broadcasters follow this spec, the integrated loudness should stay at -23 LUFS, that is, the perceived loudness should sit around -23, which gives you roughly 22 dB of dynamic headroom for true peaks (R-128 caps true peak at -1 dBTP). Here the loudness war is over…

    But many broadcasters have not yet adjusted to R-128, so if they say -9 dBFS peak level, that means you must compress your stuff a lot to stay at the same perceived level as everyone else… Just don’t peak over. There the loudness war is still ongoing.

    So check to see whether the broadcaster specifies loudness metering or peak metering in their specs.
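    A quick back-of-the-envelope comparison of the two regimes (the -12 “pushed” average below is my illustrative number, not from any spec):

```python
def headroom_db(average_level_db, peak_ceiling_db):
    # Dynamic headroom: the room between the perceived average level and
    # the allowed peak. More headroom means less need to compress.
    return peak_ceiling_db - average_level_db

# EBU R 128: integrated loudness -23 LUFS, true peak capped at -1 dBTP.
r128_headroom = headroom_db(-23.0, -1.0)    # 22 dB: dynamics can breathe

# Peak-only spec: -9 dBFS ceiling, with a mix pushed to around -12 to
# compete (the -12 is an illustrative assumption, not part of any spec).
pushed_headroom = headroom_db(-12.0, -9.0)  # 3 dB: heavy compression territory
```

    That 22 dB versus 3 dB gap is the whole difference between the two metering regimes.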

    Helge

