Speaker 1 00:00:08 This is The Workflow Show, a podcast covering stories about media production technology, from planning to deployment to support and maintenance of secure media solutions. We cut through the hype in the media industry and bring a human touch to the discussion, which we call workflow therapy. I'm Jason Whetstone, senior workflow engineer and developer for CHESA, and I'm Ben Kilburg, senior solutions architect at Chesapeake Systems. Ben and I had an opportunity to nerd out with some folks at Amazon Web Services, or AWS. Jack Windsor, global lead of content production, and Matt Herson, principal content production specialist, joined us on the podcast to tell a story of media content production in 2020 and beyond. While many of us knew long ago that we were all eventually going to be using cloud technologies to get our work done, the COVID-19 pandemic and the challenges associated with it have accelerated that journey.
Speaker 1 00:01:01 Jack and Matt talk about a recent partnership with the Hollywood Professional Association, or HPA, which was dubbed Project Lederhosen. This was an effort to bring together teams of creatives and technologists alike to solve the challenges of rapid media production in the age of remote work, with many of these team members in different regions of the globe working with the same content. But first, a reminder to subscribe to The Workflow Show, which will help you know when we drop new episodes, and it also helps us know how many folks are listening. If you have suggestions for guests or episode topics, tweet at The Workflow Show or reach us on LinkedIn; you can also email workflow [email protected]
All right, let's get to our discussion with Jack Windsor and Matt Herson. With us today we have Jack, global lead of content production, and Matt Herson, principal content production specialist for AWS. Thanks for joining us today, guys. We're going to be talking about Project Lederhosen. Jack, why don't you get into what that entailed, what it was all about, and how it came about?
Speaker 2 00:02:07 Sure. So you have to go back. I've been in the industry for over 20 years, which is where I met JZ. He goes by JZ; his name is… He's been a content production specialist in the industry for a long time. His IMDb resume takes a couple of days to read, and he's very German. He came up to me about, gosh, it was 2019, I think, for the first project we did, and proposed that we do a making-a-movie-in-a-day type project at the Hollywood Professional Association, or HPA, Tech Retreat. For those that don't know, the HPA Tech Retreat is in Palm Springs, normally around February every year, where the strongest technical executives from Hollywood and New York, covering broadcast and color and content production, all come together to talk serious technology and the changes that are happening in our industry.
Speaker 2 00:02:57 It's a great, great retreat. So anyway, JZ was offering, hey, let's make a movie in a day. We started that project in 2019, and it was quite successful. So when the pandemic hit, he decided to raise the bar. HPA was in March this last year, but this year he decided to have multiple creative projects come together for the retreat, which involved cinematic shorts being created out of London, Dubai, Australia, and Mexico City. We supported a virtual production project out of Los Angeles. We also had an animated short happen out of Mongolia, and my son even got involved and created a thematic video game that supported the entire project. What was great about this is we had some really great cinematographer talent associated with these projects. We had Sondra de Silva out of Mexico City, and Ruby Bell, who is known for her work on Hidden Figures.
Speaker 2 00:03:55 She supported the project out of Australia, and we had Abeer Abdulla out of Dubai, who has also done some great shorts. Then we had Bella, who is a famous actress (she was also Miss Mongolia), and she was the one that coordinated the London and Mongolia activities. Barbara supported the virtual production project out of Los Angeles, and Getty Windsor is the one that worked on the Epic Games project as part of all of this. It was great because we got to see all these technologies come together. And from an AWS cloud perspective, it gives our engineers and solutions architects an opportunity to work with creatives and get out of their wheelhouse. On the technology side, we have a language in which we talk about technologies, and it's usually servers, storage, bandwidth, database-type language. But when you're working with creatives, they tend to talk in original camera negative, in dailies and editorial and conform.
Speaker 2 00:04:54 And so bringing those languages together to work in the cloud and collaborate on a global scale made this an amazing project. Essentially, we had multiple solutions architects supporting this; Matt Herson was one of the leaders in bringing it all together. The neat thing for AWS is being able to learn the language of the creatives, but it also gives us proof points for things like the content lake, where we provide a repository in which all of these third-party providers can work together in the cloud while creatives collaborate on a global scale. And it creates new problems for the industry, such as: what time zone are we going to be working in? Because we have people that are remote from on-set production.
Speaker 2 00:05:41 So we had people in Germany that were watching the on-set activities happening in Mexico City, which was also doing editorial dailies down in Venezuela, and they were doing their visual effects pipeline out of Montana. So which time zone are you going to coordinate on for these projects? It made it really kind of exciting. Essentially, we broke the project down into its components. We had multiple swim lanes for capture to cloud. We had the content lake component, which was architected by Matt Herson and Zach Wallner, controlling the third-party activities on all of these projects. Then we had the production services and the visual effects pipelines, of which we had four discrete visual effects pipelines supporting the project. And of course we had the packaging, globalization, and delivery components. So it really was a script-to-screen activity, performed in just a few months. Wow.
Speaker 1 00:06:34 That's quite an accomplishment, I would say. Well, Matt, let's talk about some of the tech a little bit. This sounds like a huge challenge, right?
Speaker 3 00:06:43 Yeah, especially when you start working with people around the world, right? It's not just one area. We're not all focusing on Los Angeles or New York City or Austin, where you have tons of bandwidth, tons of technologists in the area, as well as really common connection paths. So when we started looking at this, what I'd like to do is take a little bit of an abstracted view of it all, starting with the content lake itself. Jack described it as the place where we put all of our assets, and all the applications come to that one source to gain access to them. To do that, we have to do user-level control, right? It can't be open to the world, and we have to set some pretty strict policies on that S3 bucket, as well as making sure we get into technologies like versioning and multi-factor delete. We want to make sure we're logging all of these interactions.
Speaker 3 00:07:32 We want to make sure we're capturing who's connecting and what their IAM credentials and roles are, as well as being able to do reporting, building QuickSight dashboards, to make sure everything is logical and there's nothing that really stands out going, oh hey, someone just pulled down four terabytes of data, and they're someone that should only be placing data. So when we started going through this, we started building up production meetings and we asked: who's on the project? What are their roles? How are we going to approach this? And we started getting a really interesting balance of, here's our core team for the project across all the different areas and all the different locations. And we said, okay, great, what access do they need? So let's bring it down to the least-privilege model.
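The audit check Matt describes, spotting someone pulling down four terabytes of data when they should only be placing it, can be sketched in a few lines of Python. The record fields and the threshold here are illustrative assumptions, not the real S3 access-log or CloudTrail schema:

```python
# Sketch: flag unusually large downloads from S3 access-log-style records.
# Field names ("principal", "operation", "bytes_sent") and the 4 TB
# threshold are illustrative assumptions for this example.

TB = 1024 ** 4

def flag_large_downloads(records, threshold_bytes=4 * TB):
    """Return principals whose total GET volume exceeds the threshold."""
    totals = {}
    for rec in records:
        if rec["operation"] == "REST.GET.OBJECT":
            totals[rec["principal"]] = totals.get(rec["principal"], 0) + rec["bytes_sent"]
    return [who for who, total in totals.items() if total > threshold_bytes]
```

In practice the same idea would sit behind a QuickSight dashboard or an alert rather than a batch script, but the grouping-and-threshold logic is the core of it.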
Speaker 3 00:08:19 So, you know, going zero trust: I don't trust anybody until you tell me their explicit role. From on-set, we started with, okay, there's going to be this person on this set and that person on that set, and their credentials need to let them place and confirm assets into this data lake. Great. So we built an IAM role to be able to do puts and lists, to make sure they can confirm their assets once they place them. And then as we grew, we started giving people more access as they needed it, gets, puts, and lists, to make sure we're accessing it the right way. But where it got really exciting is when we started getting direct service integration. Colorfront was directly pulling from our bucket to do dailies encodes, as well as some finishing work, making sure that the color and audio pipelines matched at the end of the cycle. Jack touched on editorial; we had to pull for editorial, too. Content can't just magically sit there. So there's a lot of different areas you can really dive into, but hopefully that frames up where we started, right?
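The least-privilege on-set role Matt mentions, "puts and lists" scoped to what that crew member actually touches, might look something like this as a generated IAM policy document. The bucket and prefix names are hypothetical:

```python
# Sketch of a least-privilege "place and confirm" on-set role: the user can
# upload objects under one prefix and list that prefix to confirm arrival,
# but cannot read or delete anything. Bucket/prefix names are placeholders.

def onset_upload_policy(bucket, prefix):
    """Build an IAM policy dict allowing only PutObject and scoped ListBucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}/*"]}},
            },
        ],
    }
```

Adding gets later, as Matt describes, would just mean appending an `s3:GetObject` statement scoped the same way.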
Speaker 1 00:09:21 Yeah, it kind of does. So where my brain always goes with this kind of stuff is: what is the glue that holds all this together? Obviously it's all backended by services in AWS, but when we look at it as an integrator putting something together like this, it's always: what is the backend? What's the workflow engine? What's the media asset manager? That's where my brain is going. How did this all fit together? Or was it mostly bespoke? Did you guys build it as you needed to?
Speaker 3 00:09:51 I'm going to pass this to Jack. It's really exciting, because each one of these projects was completely separate in pipeline, activity, and design. So Jack, I'll leave it to you to talk about each one.
Speaker 2 00:10:02 Yeah. Well, this is what I love about this project, because it gives us an opportunity to show off the community of technology providers and partners that work on AWS as part of these projects. Even on capture to cloud, we had four very different capture-to-cloud scenarios for getting content into AWS. In Australia we were working with… We also worked with Intercore, the video village application, which integrates with Moxion, and they also integrate with Arch Platform Technologies. So you start to see what I love about these projects: as we bring on partners and technology providers and talk to them about how these projects are going to come together, they start talking among themselves. And because they're on a common foundation of AWS, they find it's pretty quick to figure out what that interface is going to look like.
Speaker 2 00:10:53 Then, with the guidance of Matt Herson and Zach Wallner, we were able to enforce best practices for how they would interact. We were also able to help glue them together using solutions like Media Exchange on AWS, which is specifically designed for business-to-business content movement in a secure way, while removing the underlying costs associated with it. It's really fun, because you're starting with different languages, learning how the creatives talk and how the technology people talk, and after you sit there for about 30 minutes, people start seeing the gaps: oh, I understand why people aren't using this now, because they didn't understand what I meant. As they start having the conversations, everyone starts getting it.
Speaker 2 00:11:37 And it's amazing how fast those glue components come together. In the end, Moxion and Arch and Intercore were working flawlessly together within the project. We saw the exact same thing happen with 5th Kind working on other parts of the project, where they were also working with Colorfront, and with Blackmagic, and, gosh, Matt, I'm actually drawing a blank on some of these names at the moment. I don't want to forget any of them, because we had quite a few people participating who did a great job. But it's fun to see it all come together, and that's the capture-to-cloud part. On VFX, we had four: we had BeBop Technologies, we had Untold Studios working with Hotspring, which is another company out of India, and we had Foundry's Nuke running on all four of those VFX pipelines as the underlying technology orchestrating it all. Eclipse Tech came to the table and was doing VFX, too. So we had VFX being driven out of Wisconsin, out of Montana, out of London, out of India, all supporting these global projects. And it was kind of seamless to us; it was as if the keyboard and console were right in front of us.
Speaker 4 00:12:50 So we started talking a little bit about the content lake there. The backing store for that, I assume, would be S3, and everybody was just pushing into that and then distributing from there to the various other storage repositories. Is that right, guys?
Speaker 3 00:13:04 Yeah. So the content lake is primarily based on Amazon's S3 offering, Simple Storage Service, right, our object offering. When we start moving into the different parts of the pipeline, there are a couple of options. Most of the partners we used in this project have direct S3 integration, so they can actually live-read, cache, render, and output back to that content lake, so there's not a lot of movement required. But obviously there are some things where you need POSIX-level permissions and standardized protocols, SMB, NFS, and that's where we would shift it from the primary S3 repository to a localized file service, FSx for Windows File Server or one of our other partner storage offerings, really designed to make sure we're hitting the requirements. Each one of these pipelines had different requirements.
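The storage-selection logic in this passage, direct S3 when the partner integrates natively and a file service when POSIX, SMB, or NFS semantics are required, can be sketched as a simple dispatch. The need labels are our own shorthand, not AWS terminology:

```python
# Sketch of the "which storage layer?" decision described in the episode.
# The need labels ("s3-native", "smb", "nfs") are invented shorthand;
# the service names follow what the speakers mention.

def storage_for(app_needs):
    """Map an application's protocol needs to a storage layer."""
    if "s3-native" in app_needs:
        return "content lake (Amazon S3)"          # direct integration, no copy
    if "smb" in app_needs:
        return "Amazon FSx for Windows File Server"  # Windows editorial
    if "nfs" in app_needs:
        return "Lustre-based file system backed by S3"  # Linux pipelines
    return "content lake (Amazon S3)"
```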
Speaker 3 00:13:54 We had editorial, which was primarily based on Windows workstations, so that was heavy SMB. We had VFX pipelines that were mixed, both Windows and Linux. And we had some opportunities to actually go full Linux, with a pipeline that was all NFS-based connectivity. So we had a wide variety. Then we went into our finishing pipeline, and this is really where we had some fun experimenting. We actually went to a Lustre-based file system backed by S3, where it automatically grabbed the files from S3, hydrated them into a caching Lustre-based file system, and was able to push over 12 gigabytes per second on a single host to do some color output on the finishing workflow. That's crazy speed for a single workstation, right? That's really hard to get consistently, and it performed really well, which is really exciting. It's the first time we've been able to get to that level of user interactivity, application efficiency, and output efficiency in a seamless manner without having to do a ton of what I like to call nerd-knob twirling, where you have to constantly adjust every little setting.
Speaker 4 00:15:03 Yeah, let's dig into that just a hair more, because you said nerd knobs and storage speed things, and that's inherently interesting to me. So Lustre is a Linux clustered file system, and that was laying over top of what, multiple S3 backing stores, so that you could write to all of them simultaneously?
Speaker 3 00:15:21 So Amazon actually has a managed FSx for Lustre file system, so you don't need to deploy the EC2 yourself and manage your MDTs and do all your provisioning of your cache tier versus your storage layer. It's already deployed as a service. You can create what's called a scratch volume, and the scratch volume can then be interlinked with your S3 bucket; it just needs permissions to view the objects and access them. And then you can create automated write-back strategies as well. You can do time-based, you can do CLI-based, and even Lambda functions can be triggered to write back to the primary bucket as a write-through or write-around. So when we start looking at that, you're going to create your Lustre file system, right? That's nothing new in the world.
Speaker 3 00:16:06 You create file systems, and based on your size, you're going to get a bigger cache pool attached to it. Then your end client will need a Lustre-based driver. So great: we have a file system, we have a driver, we have an interconnect. That way we're able to access it, and we're doing it all within the same local Availability Zone. We're using security groups to control access to the Lustre file system, as well as the desktop client that you're connecting to, and we're making sure that for anything inbound from outside, we know how they connect and what protocols, source ports, and IPs they're using. So we went with a lockdown procedure in that fashion as well. In this case we used Teradici, so PCoIP Ultra, on a Linux desktop. Correct me if I'm wrong, I think it was a CentOS desktop; I'm almost positive it was CentOS 7. And then we would just use the NVIDIA driver, so it was a G4 workstation, I believe an 8xlarge, so it had a lot of power and a lot of network connectivity, and we were able to do some pretty unique things. Cool.
Speaker 4 00:17:10 So the Lustre file system, the storage it was using, was that EBS, or was that just the direct disk underneath the virtual machines, the EC2 instances that were part of the Lustre file system there?
Speaker 3 00:17:25 So, you know, every instance on AWS has some form of disk underneath it, right? The workstation itself was running EBS; it was gp2, and that way we built out a huge volume for them to work with, and we maximized EBS throughput on that instance. But on the Lustre side, because it's a managed service, you don't have to worry about what type of volume, how many instances to deploy, or what caching level to use. All you do is pick the size you want and the provisioned throughput, and go, and it will automatically spin that up for you and make it available as a mount. So it's really nice that we don't have to worry about managing the file system; you can just worry about the Lustre-based activities, which is creating MDTs. So you're going to say, I want to performance-optimize my Lustre file system, and that's creating those stripe groups that say, okay, great, anything from one meg to ten megs, that's one; one gig to a hundred gigs, that's another, because I know I have very disparate file types and I want to optimize for that usage.
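Matt's stripe-group example, one group tuned for roughly 1 to 10 MB files and another for 1 to 100 GB files, amounts to routing files by size. A toy version, with the boundaries taken from his description:

```python
# Sketch: route a file to a Lustre stripe group by size, following the
# episode's example. The boundaries are illustrative, not FSx defaults.

MB, GB = 1024 ** 2, 1024 ** 3

STRIPE_GROUPS = [
    ("small", 1 * MB, 10 * MB),    # e.g. sidecar metadata, stills
    ("large", 1 * GB, 100 * GB),   # e.g. OCN clips, conform media
]

def stripe_group_for(size_bytes):
    """Pick the stripe group whose size band contains this file."""
    for name, lo, hi in STRIPE_GROUPS:
        if lo <= size_bytes <= hi:
            return name
    return "default"
```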
Speaker 4 00:18:26 Got it,
Speaker 2 00:18:27 Matt, which scenarios was that used on? Was that for the Autodesk conform?
Speaker 3 00:18:33 It was. So it was the conform, and some of the color pipeline as well.
Speaker 2 00:18:39 The Autodesk Flame implementation was pretty interesting. They had some issues with the Dubai setup; they were planning on doing some things on-prem and ran into some concerns. Barry Gaucher, one of the Hollywood editors participating in the project, got together with Autodesk and asked if we could move the project over to our Los Angeles Local Zone. Local Zones are a version of our Regions with less resiliency: a Region has multiple data centers, while a Local Zone is essentially full data center infrastructure designed for extremely low-latency workloads, so it's perfect for Hollywood-type production workflows. That's the main reason Los Angeles was likely the first place to receive a Local Zone. But yeah, we turned that up. I think we were asked on a Friday, and we had everything up and running by Sunday, and we were able to keep that project moving through their crisis deadline.
Speaker 3 00:19:38 And what's great is, since we're using data lakes, it's easy to get these assets; it's not a heavy lift, it's not a heavy move. And as for spinning up Local Zones, they already exist today, right? We have Los Angeles, we have Atlanta, and there are a ton coming, all around the United States and North America. The idea is getting as close to the user as possible to reduce any form of latency, so their interactivity is seamless. Because we all know, if we take a step back: latency is the killer of all fun. That's kind of the joke, right? When you're at hundreds or thousands of milliseconds, your user experience is going to be pretty poor, as opposed to eight, ten, fifteen, twenty, which is pretty usable. It's almost as if you're directly attached to that compute instance or input device, where you really don't have to worry about any kind of crazy jitter and weird input.
Speaker 4 00:20:36 That's a great point. So in terms of VDI, virtual desktop interfaces, people are editing in the cloud, they're doing all sorts of cool things, and coast-to-coast latency can be widely variable. What I've generally thought is that underneath a hundred milliseconds you'll kind of get away with it, but above that it starts to get really frustrating. What are you guys seeing generally? What are creatives at the top of their game willing to put up with?
Speaker 2 00:21:07 So there are two points here. First, there's overcoming the fear of change. All creatives get FUD, fear, uncertainty, and doubt, at the first suggestion that there's not going to be a cheese grater hugging their leg, or warming their leg, as they're doing their work. Matt Herson is going to speak to the performance expectations, because he's done a lot of research in this space. But once we get them over that, and we make sure we've characterized the network and performance capabilities, honestly, when they come into this project, they become our best evangelists. Matt, do you want to go into the technical components with us?
Speaker 3 00:21:45 Yeah, I do. One thing that I really liked that you hit on, Jack, was fear of change. Everyone has an editorial workstation today that they touch, feel, and interact with, and it works. To say that I'm going to do something new when they have a project deadline, that's a very scary thing, right? You're messing with people's livelihoods, reputations, money. These are majorly impactful pieces. But understanding that there's maybe a different, maybe better, maybe more unique way of going about it really changed the conversation. Plus, and this will air at some point, I'm sure, local shortages happen. We went through a huge shortage in semiconductor manufacturing; I mean, the whole planet's manufacturing had a huge problem, where GPUs were impacted, CPUs, cars, really anything that takes microchips was impacted.
Speaker 3 00:22:37 So then you go, okay, great, I have a new show, I have aged-out hardware, and I need 20 workstations. Good luck; I can't get them. I'm still trying to get some video cards from a year ago that are under $2,000, if they ever ship. So, all right, that's a long-winded way of diving into this. There are a lot of streaming technologies out there, and in this project we used a couple of different ones. We used Teradici PCoIP Ultra, which is really a huge streaming protocol provider that's been around for quite some time. Amazon also has a service you can use called NICE DCV, which is another streaming protocol. So you start seeing some confusion, I would say, in the marketplace, because there's also Parsec that came on the scene, there's VNC from back in the day, there's Remote Desktop, there's RGS.
Speaker 3 00:23:28 I could name a hundred, but it wouldn't add clarity to this conversation. What I would say is, when we start talking about interactivity, one thing we hit on was latency, and you're talking about coast-to-coast latency. We want the workstation to be as close to the end users as physically possible. And this is really the paradigm: just because it was shot in the Mojave Desert doesn't mean you need to do your production there. You can have talent in other locations: in Montreal, in Los Angeles, anywhere. And you want to bring the workstation as close to those users as possible, because when you have Wacom tablets or input devices or scrub pads or anything, those devices need to be connected, and when you start having input, you need to remove input latency.
Speaker 3 00:24:15 Otherwise your editor, the award-winning editor, is going to go, this is untenable, and I'm not doing it, right? So, latency markers: what's good enough? From our experience on both this Lederhosen project and previous HPA projects, and working with customers across our industry, we found that under about 27 milliseconds is pretty seamless to 99% of users; they feel as though they're in front of that device. When you get into the 30-plus milliseconds, you start noticing input latency, and that's really sensitive for artists and editors, where they need to constantly re-scrub timelines, double-click on elements, and go, hey, I'm clicking and it's not playing, we have a problem. Or, I'm running an audio board and I'm changing the level and the level input range is wrong, so now I need to remaster that. That's a huge stopping factor, right?
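The latency bands quoted in this stretch of the conversation, under about 27 ms feeling seamless, 30-plus becoming noticeable, and roughly 100 ms as the ceiling the host mentions earlier, can be collapsed into a small classifier. The band labels are our own:

```python
# Rough classifier for the input-latency bands cited in the episode.
# Thresholds come from the conversation; the labels are invented.

def input_latency_feel(ms):
    """Categorize round-trip input latency in milliseconds."""
    if ms <= 27:
        return "seamless"     # feels like a local workstation
    if ms <= 100:
        return "noticeable"   # scrubbing and input lag creep in
    return "untenable"        # editors will reject the setup
```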
Speaker 4 00:25:13 Sure, got it. So the secret is: as close as you can be, be that close.
Speaker 3 00:25:18 Bring the instances, bring that virtual workstation, as close to the end user as possible. The other magic sauce I would throw on top is: build your AMI, your Amazon Machine Image, that connection source, to be specific to that type of user. Don't just make a generic one that launches and then go, okay, great, go install your software, have a great day. No: pre-install, have the licenses ready, make sure they're able to get in and start working. Have Group Policy Objects, map those drives, have scripts on boot to get to where you need to be, bring profile settings across. Because guess what? The last thing I want to do, and I'm editing a previous recording right now, is go, oh, where's my plugin directory? Where are my fonts? Where are all my interlinks? Where's my effects catalog? I don't have time to look for all this. I need it now; I don't want to spend half an hour setting it up.
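Matt's pre-baked AMI advice, pre-installing software, licenses, drive mappings, and profile settings so nobody hunts for plugin directories on day one, boils down to scripting first boot. A sketch that renders such a bootstrap script; the role name, paths, and license server are placeholders, not a real pipeline:

```python
# Sketch: render a first-boot (user-data style) script for a role-specific
# workstation image. Every path and hostname here is a placeholder.

def bootstrap_script(role, plugin_dirs, license_server):
    """Build a shell script that prepares a role-specific workstation."""
    lines = ["#!/bin/bash", "set -euo pipefail"]
    lines.append(f"# provision a '{role}' workstation")
    lines.append(f"export LICENSE_SERVER={license_server}")
    for d in plugin_dirs:
        lines.append(f"mkdir -p {d}")  # plugin, font, effects-catalog dirs
    return "\n".join(lines)
```

In practice you would bake as much as possible into the AMI itself and leave only per-user or per-show settings to a script like this.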
Speaker 4 00:26:10 Yeah, it's like hot rods or sports cars; they're finely tuned for the job, right? And if you're going to say, I'm going to take away your cheese grater, a.k.a. your fancy silver hot rod that's sitting underneath your desk, that you've spent hours and hours customizing to your unique specifications, and then you say, here's a magic one from the sky, people are going to go, um, FUD, FUD you, man.
Speaker 1 00:26:33 I guess it boils down to thinking of these workstations as a tool rather than as an extension of your personality. I kind of look at automobiles that way, frankly: the automobile is a tool to get me from here to there, not an extension of my personality. That's just me; there are a lot of people out there that are really into automobiles. But that seems like it's along the lines of this discussion. I remember, when I first started learning about cloud technology, a solutions architect saying: don't treat your instances like they're pets. You just spin them up and throw them away when they're done.
Speaker 2 00:27:10 If your servers have Star Trek names, there's a problem.
Speaker 3 00:27:15 I mean, that brings up a really profound point, especially as we move towards the future, right? I'll be candid: I have two kids, and I care about where they're going to be when they grow up and how the world's going to exist. I know Jack, well, everyone, has a family. So moving into sustainability, that's a huge impact: making sure we reduce the amount of waste that comes into play. Amazon's taken a strong stance on green energy, making sure we're using power that's self-generated, and that makes an impact. Because guess what? At home, you probably don't want your power bill to be in the hundreds or thousands of dollars. All these machines eat up a lot of power; there are a lot of electronics, and there's a lot of waste that goes in. So what happens if I just say, guess what?
Speaker 3 00:27:58 I have this little box you can run on your desktop. It's called a thin client. It's been around since, I want to say, the sixties; I'm thinking of my old IBM AS/400 days. And you don't need anything really local, because you're doing it all remotely. At the end of the day, you have a by-product of security put into place, too: it's kind of hard to walk away with a hard drive full of stuff if you don't have one, right? That makes a huge difference, especially in the content security space and that zero-trust model, for sure.
Speaker 2 00:28:29 Sure. And there are certainly a lot of technology partners and providers out there that are trying to take advantage of the space. When we did the first Lederhosen project, it was BeBop Technology that came forward, and they were the only ones supporting the project. This time it was BeBop, it was Arch Platform, Eclipse Tech, and Untold Studios, and they were bringing new ideas to the table. For instance, Untold Studios has typically been more of a service provider; this time they had partnered with Hotspring to provide commodity-based VFX resources to support the project. So they were providing the underlying infrastructure and actually partnered with somebody else to provide the human resources working on the VFX project. So it's interesting how the business is changing, and it's becoming easier to use. I was actually talking to the CTO of Eclipse Tech just yesterday, Jeff Home, and he was talking about the way he now receives visual effects projects: he gets a call, he looks at the project, and they provide him with a URL to connect into the platform to do the work. So we know that studios and productions are adopting these centralized cloud content architectures where people can go to the content.
Speaker 3 00:29:41 there, with the content hydrated, and they can start to work.
Speaker 1 00:29:45 It sounds fantastic. I mean, I know just working in the capacity that I do, sometimes doing the work is the easiest part. It's the getting into the environment, getting everything set up to where you need it, making sure you have the right VPN credentials, for example. All that kind of stuff can be a real pain, and it can be a blocker to actually getting the work done, especially if you run into any problems along that chain. That could be enough to just halt your progress. Yeah, I'm definitely on board with this style of working.
Speaker 3 00:30:15 Yeah. I mean, one thing that we haven't really talked about, and I think it's kind of an interesting transition, is, you know, what tools can you use to move your data around? You get hands-on to launch the workstations, get S3 running, right? Those are potentially easy things. But then you start going, okay, great, we shot in Dubai, we're ready to edit, I don't know the CLI, what do I do? That was kind of a fun conversation. I'm going, okay, so we have a couple of options. One, do we have an integrator or a partner walk them through how to do it and do that manually for them? Or do you script automation? And this is where one of our partners that came to the table, IMT's SoDA, actually turned out to be a pretty successful use case that was kind of new to our space.
Speaker 3 00:31:03 At least was data management at a very simplistic GUI layer. So for instance, they went into IMT soda platform, which was locally running that account and they went, okay, great. I'm ready to start editing. I need to do my Dubai show at 100 of 100 to 1 0 5, 1 to five eight. So they can actually go in gooey, click on what they needed and say, here's the transfer. Here's where I need to go. Here's my, uh, FSX for windows file server. That's what I'm gonna edit on. I have a couple editors waiting, let's go, and you can click the job. But what I really liked about it was you can click estimate. It will actually estimate the cost of the data movement, the cost projection of the storage, as well as the life cycle management of that data. So it actually was a pretty easy tool for, you know, an AC or someone that's like an assistant editor to be able to do their orchestration for edit bays and click go. So it was a really nice feature to do that. And it's, bi-directional right. You can load out, you can load back in, you can get your data shifting, which is pretty helpful because you never want a one-way street because then you always say, Hey, I need to go back home. How do I get home?
Speaker 1 00:32:10 Yeah. And I believe there's even an estimation of the time it's going to take to complete the transfer.
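[Editor's note for listeners: here's a minimal back-of-the-envelope sketch in Python of what a transfer-time and cost estimate boils down to. The formula and pricing figure are illustrative assumptions, not how SoDA or AWS actually computes projections.]

```python
# Rough transfer-time and cost estimate for moving media into the cloud.
# The throughput math is standard; the cost-per-GB figure is a placeholder,
# not an actual AWS or SoDA rate.

def estimate_transfer(size_gb: float, link_mbps: float,
                      cost_per_gb: float = 0.0) -> tuple[float, float]:
    """Return (hours, dollars) for moving size_gb over a link_mbps connection."""
    seconds = (size_gb * 8_000) / link_mbps   # 1 GB ~= 8,000 megabits
    hours = seconds / 3_600
    dollars = size_gb * cost_per_gb
    return hours, dollars

# Example: 2 TB of camera originals over a 1 Gbps link.
hours, dollars = estimate_transfer(2_000, 1_000)
print(f"~{hours:.1f} hours, ${dollars:.2f}")   # ~4.4 hours, $0.00
```

In practice a tool like SoDA also factors in storage-class pricing and lifecycle projections, but the time side is fundamentally this arithmetic.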
Speaker 3 00:32:16 So there's that estimation of cost and time. I really liked the archival side of it as well, where I could do projections of how much your storage costs you if you moved into archive, and what the projected cost savings timeline is from that perspective. And, you know, when you start going into archiving, you have options like Amazon S3 Glacier and Amazon S3 Glacier Deep Archive, which are very affordable tiers to store data for very long periods of time. But being candid, right, the point of this show is to get a little deeper: the recall methodology is not a simple GET. You can't just do a GET on that storage class. You actually need to make a call to recover it, restore that asset, and set parameters for that restore, whether you're doing expedited retrieval, standard recovery, or bulk recovery, and there are timelines associated with each. So you want a system to translate that, to say, okay, I don't know what any of that means.
Speaker 3 00:33:11 I just need to edit the project. I need all the assets and one hour, and I need it live in an hour. Well, that translates to, I need the assets recovered in 30 minutes. I need it loaded to that system. And I need, my head are in place in 45 minutes to start reviewing to be that hour. So that's when you can make a decision tree, right? So to you go through provision recovery unit, 15 minutes, here's your assets go. And then the next job, move it to that file system go. And then you got your editor connecting it via their streaming protocol choice and they go, awesome. I'm connected. I can do my work. Now. It reduces a lot of that friction that you'd normally have where you need it, resources to go. Okay. CLI copy. So you like man, see, all right, job check, job status check.
Speaker 1 00:33:56 Yeah. And for our listeners, what Matt mentioned there is something I think a lot of folks are considering now. They want to get off of LTO, maybe an on-prem LTO library, and move to a cloud archive. It isn't always as simple as shoving everything into S3 and getting it back whenever you want. When we go to these archive tiers in the cloud platforms, like Matt said, there's an idea, just like with LTO, that you've got to wait a little while to get that content back, and I can't necessarily tell you exactly how long it's going to be. So when we look at automating those workflows, there's a little bit of complexity, mainly in the restore. The archive side is easy; it's the restore that everybody cares about. How do I get the stuff back? I think archive is always going to be one of those things that we know we need to do, and we're always a little uncertain about, and we try to dispel that uncertainty as much as we can.
Speaker 2 00:34:51 Yeah. As Matt alluded to, archive is a trigger word for me, after spending decades focused in that particular space. It's one of the reasons I worked with some folks here in AWS to create Media2Cloud, which is a quick-start solution that aggregates content and allows you to test machine learning services to augment the metadata on those assets as you put them under management in the cloud, migrating from an existing archive. But, you know, as you guys know, with those on-prem archives of LTO, there's a huge chunk of it where someone will tell you they have a 50-petabyte archive, but in reality there are probably 20 petabytes they would throw away if they could, and they can't delete. And they probably don't have visibility into another few petabytes because they just have poor metadata. The metadata has atrophied over time as they've just moved it from one format to another. So you really have an opportunity, when you move your content into AWS, to test and augment the metadata and address that metadata atrophy, so you can find and search and really have control of your content. It's something that should be considered as you move in.
Speaker 1 00:35:58 Oh, that's a really good point, Jack. It's something that, you're right, not too many people think about, because archive is, as I'm sure you know, out of sight, out of mind. We do it because everybody says we have to, and we might need it someday. And when we do, it's always the mad rush and the scramble, you know?
Speaker 3 00:36:21 Yeah. So I like what Jack's touching on right there. We have Media2Cloud and Media Insights Engine; these are frameworks, QuickStarts, that exist today that you can launch and start getting richer metadata from the assets you already have or have already made. You can automatically bring them in via the QuickStart, and it actually does the archival step for you. What I really like about this is that it gives you a chance to view your metadata, catalog it, and database it in a quickly referenceable way that you can pull into the MAMs, PAMs, and DAMs where you need richer insights. A lot of previously created metadata didn't have things like celebrity detection, keywords being said, or even recognition of objects. Sometimes that's really important. You start going through and you're like, oh, I really wish I had something, and I'm totally throwing this out arbitrarily, you know, Flintstones-ish rock walls. And you can just start searching those elements and go, oh hey, look at my catalog, I have a million assets, and I can actually curate internally and bring things to the table without having to go reshoot, or build, or buy.
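[Editor's note: a sketch of what that kind of enrichment looks like in practice. Amazon Rekognition's `detect_labels` returns labels with confidence scores, and you would typically keep only the high-confidence ones before writing them into a MAM. The threshold and the sample response below are illustrative; only the commented-out API call reflects the real service, and the bucket and object names in it are hypothetical.]

```python
def keep_confident_labels(labels: list[dict], min_confidence: float = 90.0) -> list[str]:
    """Keep label names at or above the confidence threshold, most confident first."""
    confident = [l for l in labels if l["Confidence"] >= min_confidence]
    confident.sort(key=lambda l: l["Confidence"], reverse=True)
    return [l["Name"] for l in confident]

# Illustrative data shaped like Rekognition's detect_labels response.
sample = [
    {"Name": "Rock Wall", "Confidence": 97.1},
    {"Name": "Person", "Confidence": 99.4},
    {"Name": "Dinosaur", "Confidence": 41.2},   # too uncertain to index
]
print(keep_confident_labels(sample))  # ['Person', 'Rock Wall']

# The real call, with credentials configured:
# import boto3
# rek = boto3.client("rekognition")
# resp = rek.detect_labels(
#     Image={"S3Object": {"Bucket": "my-proxies",        # hypothetical
#                         "Name": "frame_0042.jpg"}},
#     MinConfidence=90)
```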
Speaker 2 00:37:30 But the most important thing is: test. Don't just turn on machine learning services and bring in the archive. You want to test, because you don't want to bring in an entire archive of B-roll footage and run speech-to-text when there's no speech.
Speaker 3 00:37:46 Yeah. So quantitatively track your data, and make sure you get what you need as opposed to what you want, because there are costs associated with all of this, right? Each of these machine learning services and APIs carries a cost. You want to be efficient with it, especially if it's B-roll that you most likely won't ever touch again. Maybe just a quick celebrity detection is enough, maybe some object detection, or just standard cataloging and naming confirmation; things like that are really important. But in the archive space, what I find really interesting, and I've played in there for quite some time, is moving away from proprietary tape formats, LTFS, proprietary cartridges. You have to worry about things like how long is it good for, how many reads, how many hours of operation, storage capacity, as well as storage temperature. Turns out, if it gets really hot, tapes are not a big fan of that; who knew there were old tapes that would actually catch fire? But moving them into an object space, where the media is directly referenceable without having to read back an entire tape, really changes the access pattern as well.
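[Editor's note: the tiering Matt describes is usually wired up with an S3 lifecycle configuration. `GLACIER` and `DEEP_ARCHIVE` are real S3 storage classes and `put_bucket_lifecycle_configuration` is the real boto3 call, but the day counts, prefix, and bucket name here are illustrative assumptions.]

```python
def archive_lifecycle(prefix: str, to_glacier_days: int = 90,
                      to_deep_archive_days: int = 365) -> dict:
    """Build an S3 lifecycle rule that tiers a prefix down over time."""
    return {
        "Rules": [{
            "ID": f"tier-down-{prefix.strip('/')}",
            "Status": "Enabled",
            "Filter": {"Prefix": prefix},
            "Transitions": [
                {"Days": to_glacier_days, "StorageClass": "GLACIER"},
                {"Days": to_deep_archive_days, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    }

config = archive_lifecycle("masters/")
print(config["Rules"][0]["Transitions"])

# Applied with credentials configured:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-archive-bucket",          # hypothetical bucket
#     LifecycleConfiguration=config)
```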
Speaker 3 00:38:51 It can be a lot more advantageous opposed to going, oh, I need to pull from this tape for this asset, that tape for that asset. Oh my robots full. All my bays are full. You can actually move away and just say, here are the assets I need. Let me just pull them. And I don't have to worry about any of the logistics. And
Speaker 4 00:39:07 That's interesting. That's a really great point, Matt. Where are you guys seeing people tiering, primarily, and what are the data access patterns? Are people using S3 Infrequent Access more than they're using Glacier, or are they using both, with Glacier as kind of the "oh crap, if I really need to grab it, I will" tier? I'm just curious about the access patterns of the typical M&E cloud adopter.
Speaker 2 00:39:37 The customer use case in M&E news is obviously going to be different than a production studio. News clearly needs to have certain assets at the ready in case something happens. When Michael Jackson passed away, obviously everyone was in a rush to get to their assets; if they had been in Glacier Deep Archive, it would have been difficult. So this is where Infrequent Access becomes ideal for news agencies: they can keep assets at the ready in a lower-cost tier where they can afford to lose those assets. There's an element of risk there, but even so, they can restore from Glacier and rebuild those assets at any time. So it's not a replacement for an archive; it's simply a redundancy they keep in place to run their business. But you really have to think about what the use case is and what the archive is. There is no one archive. If you're talking to a studio, there is the master archive, the colorized master archive, the servicing master archive, the marketing and promo archive, and Matt will probably tell you that I'm missing three or four others. But essentially that's how we go in. It's kind of a consultative conversation with each of the M&E customers: what is your business use case? And then we help tailor it to their usage.
Speaker 4 00:40:54 It's exactly like what
Speaker 1 00:40:55 Chesa does for our clients. You know, "what's the best MAM out there?" Well, I mean, what kind of work are you doing today?
Speaker 2 00:41:04 You guys are not going to go out of business, because that consultative experience is absolutely critical for many customers to rely upon. You're the Sherpa that helps people climb the hill; you've done it multiple times. And that's really key to the success of the customer.
Speaker 4 00:41:19 Yeah. I was thinking about something that just came up yesterday for me, where a client was looking to render some things a little bit faster with Media Encoder in their day-to-day work, and being able to say, oh, our friends at Helmut might be able to help with that. Right? Just knowing that this cool German company exists, that other people might not know about. Like, oh, we've got to build this thing from scratch? Well, no, somebody did that, and they're in Germany, and we can put you in touch with them because they're awesome.
Speaker 3 00:41:48 Yeah, having, having that breadth of experience and, you know, seeing the use cases, being able to do postmortems internally on them, as well as seeing where they were successful, where, you know, things have changed as well as influencing companies roadmaps. Right. You know, if you're seeing consistent ass, you're going to bring that to your co your partners and your customers saying, Hey, here's where we're missing the mark. How do we fix that? And obviously you were very well-respected in the industry to say, okay, these are not just random opinions. They're based on, you know, data, it's data driven decisions. Here's what we're bringing to the table. And, and it's really great to see that level of outcome.
Speaker 2 00:42:23 Yeah. If I can bring us around full circle, this is exactly why I love working on these Lederhosen projects with the HPA, because it gives us an opportunity to work on all the technologies from script all the way to presentation in a CMS. It gives customers an opportunity to see the art of the possible in AWS, and we can dive into any of the specific technologies. Notice how we dove right into archive and policy questions and then went right over into VFX rendering as just another thing. But we could have talked about editorial, conform, globalization. We had 5th Kind interfaced with it, and it was actually STI Media at the time, they were still doing the acquisition, but they were also interfaced with OWNZONES, which I think is changing its name to Ateliere, for IMF packaging. All of those components were part of this community of partners and technology providers that was put together to support this thought leadership project for the HPA. And it gives people a chance to go: I have a hundred things involved in the workload to run my studio, but I have ten key problems. It gives them an opportunity to dive into those specific areas and helps open the conversation up to solve the problems.
Speaker 3 00:43:44 Right. Another really fun thing along this path is touching new technology, brand new out of the box. It could go really great, or it could explode in your hands, right? It's being able to mess with these new technologies in an area where risk is acceptable, because this is an internal program, and it brings lessons learned out to the marketplace: here's what was successful, here's what wasn't. And you see that in the virtual production space. It's booming everywhere. Everyone's talking about The Mandalorian and how they did it; how do we do that? There are other shoots and movies going on every day. How do we take care of previz? How do we take care of our versioning? What do we do about our metadata? And everyone's on metaverse and meta-everything thinking; it's like the hottest term on the planet. If I had a
Speaker 2 00:44:31 Ferrari, I'd probably just name it the Medicar
Speaker 4 00:44:35 Or go hide in the woods for a few months. Yeah. That's been fascinating that how quickly our science fiction upbringing has become our day to day present. I mean, I don't think we're a hundred percent there yet, but, you know, Jack, you talk about the art of the possible and you know, the fact that we can spin up all of these services on demand and immediately access them so long as we have the pipes out of wherever we might be that are fast enough to provide decent access, to get what we might have shot up into the cloud. And that it's really a playground after that, which is amazing.
Speaker 2 00:45:14 And you see, you know, I mentioned at the beginning that you have these creatives and architects working together, and I'll give you a few examples. We would talk to JZ about, yes, we're going to work with these partners and bring this stuff into a content lake, and JZ would look at me: is that a cloud? Is it a lake? And so we'd get into conversations about terminology. We'd have people saying OCN and OCF, you also had camera raw, and that was another conversation. Then other people were saying, well, I'm capturing in BRAW, you know, 6K or 8K, or even 12K was part of the sequences. Then we had other people saying, well, I need to get DPX sequences because we're going to be working on VFX over here. Bringing all of those languages together, where the IT guy goes, got it, that means this much storage to me and this much bandwidth, and where the creative knows, oh, I'm going to get what I expect visually, and with the right color depth, when I'm doing my review process. So everybody is going through that Rosetta stone and understanding each other's languages. That's absolutely critical in these projects.
Speaker 4 00:46:21 Yeah. That still takes human know-how and ingenuity, for sure. There's definitely a human touch to that.
Speaker 2 00:46:28 Yeah. The one piece that kind of came to light was the distribution side, which is at the very end of this project. Raul Vitone had worked with Dolby, Bitmovin, and a CDN to build out that end packaging and CMS platform for the project. He was also working with Ateliere, and he was going to the creatives asking: so, this is the HDR version I need, here is the encoding spec. And the creatives would just stare at him, because they were at different parts of the ecosystem, parts of the workload that had never really talked together before. But it raises good questions: why don't we have an easy button for publishing to these endpoints toward the beginning, coming out of editorial and mastering, as part of the project? Maybe that's an area we could develop, to make it simpler for creatives to distribute content. So it raises those kinds of debates and questions. I'm not saying we have the answers yet, but it gets us to where we can start to innovate in this phase. It kind of reminds me of
Speaker 3 00:47:29 Localization, especially since we're working in multiple countries and a whole slew of languages, there was a really big push for localization of at least translation of content much earlier in the production style. So it really changed the idea of, okay, great, we're shooting in, uh, it was, I want to say five or six different countries. We have at least five different languages and we needed a commonality and commonality for most was English. So we actually had to bring in machine learning services early into the daily state to be able to do that translation reflection of script translation. Are we going the right path? Did we deviate? Did we get the shots we need? And all of this was really coordinated via both media insights engine, which is a quick start that we had. Also, our partners were really involved in building solutions on the space.
Speaker 3 00:48:21 Gray Metta was a huge one that kind of came in to play with machine learning services and be able to do that translation layer, but it all needs to be coordinated. And this is where really the information came together was in fifth kind and Moxie in frame IO, because not only did you have all these assets, you had to manage you, workflow methodologies you had to do. And then finally you had to have to review and approve in notation systems along the way to make sure you're getting the cuts you need. Otherwise you go, oh, Hey, by the way, you know, that gear that we rented or borrowed, or we're given, you know, 12 K cameras, it turns out they're very expensive to get back. So we need that. We have a limited shoot time and we need to get everything we need to get.
Speaker 1 00:49:02 Yeah. Let's talk about cost a little bit mainly just to say, I feel like costing out an AWS solution could be a full-time job and it often is for some people, and that's not a dig or anything. That's just, that's more just like, this is the way it kind of is, you know, because, uh, you know, there's such a breadth of services there and it's the kind of thing where you could potentially be paying more for particular event, but then less over the course of most of the time that you're using these services was really pretty, pretty astonishingly cool, take a sports team or a sports organization, for example, that might be recording a game or something like that. They might be doing really heavy, heavy production during a weekend, just during specific times of the year and paying for bandwidth and storage and all kinds of stuff that they're just going to be using during that production space time. And then it, it, you know, could be shuffled in archive, pulled down local, you know, there's, there's many different ways that it could go, but your usage is kind of flexing up and down based on what you're doing. So, you know, that's great. That's really cool in the case of these projects, you know, they're only costing when they're in production, right. When they're being used. So I think that's a, you know, a great thing to just, just to, to realize, yes,
Speaker 3 00:50:21 I'm a little bit, I mean, it's, you know, it's one of the benefits of the cloud, right, is the paper use model, but it also offers
Speaker 2 00:50:27 Complexity when you, when you're managing over 200 AWS services, of course not every customer is using all of them on these projects. We typically invest in the industry by running them in our essay accounts, which gives us visibility to the costs that are associated to these projects and helps us characterize that for customers and advise as, as it grows. But we're also, it also gives us an opportunity to promote certain services that reduce costs for customers. I had mentioned earlier media exchange, uh, on AWS, that one is specifically designed to remove the egress fees. And since it uses our storage back plane, it never leaves the storage technology component. So, you know, as you know, when B2B transfers happen, whoever is receiving the content will, will need to rehydrate the content. They'll do a QC, they do all the spot checks they have to do. They'll do the fixity checking and they want to make sure that everything is exactly, you know, as transferred, but within the media exchange on AWS, that does not need to happen because it's not leaving the same storage medium that it's always been in. And it's not changing to a different accelerated transfer technology it's using our backbone. So customers are confident in the fact that that content that moved from one location to another is exactly the same content that was sent from the source. Matt, you want to add any color to that?
Speaker 3 00:51:45 I look at costs in a really unusual way for most, I guess. I really like taking a step back and saying, okay, what are you trying to accomplish? What's your goal? Are you trying to run editorial workstations? What workstations do you have today? I like looking at it from that perspective and then working backwards to the AWS cost brackets. Let's be honest, there are 250-plus services; there's so much to look into. Do they all apply? Probably not, for most of these use cases. So partners like Chesa really bring value, being able to say, hey, we've done this a whole bunch of times, we have profiles created for costs, we can give you estimations, and there are obviously ways of adjusting that based on what the individual needs.
Speaker 3 00:52:29 But when we start thinking about it, it really comes down to a couple of core. Cause we're thinking about in these use cases where it's going to be compute, storage and transfer, like those are going to be the big pieces that you're going to be looking at. As you go through this transformation of the editorial and color pipelines, it makes a lot of sense. So there's even optimizations, but then those pipelines to help you reduce your costs, we have something called spot instances, which are on demand, bidded for lower price. If you have rendering, there's also reserved instances where you're committing to a longer term for that at a lower cost. You know, there's optimizations just right there. There are savings plans. There are so many things, but what really comes to play as the biggest advantage is make sure you're optimizing your workflow. And when I say that, I'm saying, you know, if you're not, if you don't have people working for eight hours a day, turn off those instances, they don't need to be running 24 7, be efficient. You know, if you're storing data stored efficiently, doesn't need to be an object. Doesn't need to be in file. Make sure you're aligning this correctly and get rid of duplicates and triplicates of data for no reason other than just, Hey, we've done that in the past and it makes us feel good. So to say so,
Speaker 1 00:53:45 Yeah, I was going to say, and so a couple of things I'm picking up here, number one, Jack, you mentioned that you'll, you guys will typically sort of POC some of these types of efforts on your own team. So I hear that that's a great way to sort of baseline, okay. Here's, here's an idea of what the cost is going to be based on our POC. Now your costs may go up or down based on, you know, some efficiencies we might find or some, you know, unexpected, whatever, but that's a great way to baseline. The other thing is just Matt. What I hear you saying is shifting the thinking a little bit about when we have an on-prem server on prem storage pool, we put our files there and we forget about them and maybe changing our thinking about that a little bit. Where is it most efficient to keep this data?
Speaker 1 00:54:27 Is it more efficient to keep the server up and running all the time or do we, do we shut it down when we're done with it? It strikes me as the, you know, the analogy that I would use as driving an electric car. I drive an electric car. I've been driving one for about three and a half years. I had to completely change my thinking about how I got from point a to point B. I, I mentioned, you know, I think of my car as a tool. I really do, but you know, a tool could be a screwdriver or a hammer, you know, you're thinking about then, which route am I going to take? Am I going to get on the highway? Or am I going to take the back roads? And you know, that, that all figures in, is it going to be a three-hour trip or a five-hour trip based on where I have to charge on the way, you know, that, that kind of stuff.
Speaker 1 00:55:04 So it's really a matter of changing your thinking. You know, one of the benefits to driving an electric car is that it's really cheap to drive it. It really is. I mean, I was going to Baltimore every day from Stateline, Pennsylvania, Maryland. And I think my electric bill went up about $30 a month. I mean, I used to have to spend 50 or so dollars a week in fuel when I was driving a diesel, very efficient, you know, diesel and I was spending 50 bucks a week to go back and forth to the office. My electric bill went up $30 driving just, just the same amount and a month. So there's a benefit, another benefit, you know, zero emissions from the tailpipe because there is no tailpipe, another benefit, no maintenance, like barely any maintenance costs. So, you know, there, there are benefits here, but it does require a little bit of changing of thinking about the way you're going about doing your work and you know, where your data is moving. Let's just say,
Speaker 3 00:55:56 Yeah. So I really liked that analogy in one extra piece. So that would be, you don't have to change it all today. You know, if you need to move fast, you can move fast. You can always iterate as you move to. You're never locked in, you know, like a traditional Lino NAS you spend, you know, let's say I'm buying some crazy deep petabyte storage. I'm going to spend a million dollars. And I need to capitalize on that. I have to keep it for X amount of years because I have, you know, uh, double declining balance and right ops and taxes and all this other crazy stuff. So you have all those componentry where you're, you're really big into something with moving to AWS and really a lot of cloud vendors in this case, you've been candid. You know, you can dive into technology, you can grow as you need it.
Speaker 3 00:56:42 And if you hit a certain point where you go, you know what, there's a better way than change. The whole idea is there to be able to be innovative, to move quickly. You know, we have a, we have a saying internally that it's best to fail fast, right? Go through experiment, try it. If it doesn't work, change you really, the only one that's holding you back is yourself. You really have the opportunity to change. However, whenever you want. And then as we launch new services, maybe there's a more advertising service that fits that exact need, or maybe there's price productions. Like we've done. We've had over 74 price reductions on most services. It gets more efficient over time as well. Yeah.
Speaker 2 00:57:22 Yeah. And this is, this is exactly where in Chesapeake comes into play because you have, you know, things like the cloud adoption framework, which are designed to make sure that customers are not just thinking, okay. So if I move my storage from on-prem to cloud, then let's just do that. And, and you'd tell the it guy to go ahead and make that change. We have frameworks and there's a SIS that can come in and help customers think about, okay, so what is the operational impact? What's the security impact? What's the HR impact? How does finance change? You know, as we think about moving to the cloud and to make sure that all, you know, all of these different organizations are involved in the decision process and they realize that it's not going to be the same, that there are changes. And the, you know, the changes are for the better, you know, you're not going to have a big Exodus of resources because you're moving to the cloud. And the guy says I only wanted to manage on-prem Mavs. He now sees a huge opportunity because now there's training in cloud services and he can learn new skills. There's a lot of opportunities here, but you have to make sure that you're having the conversation with each of the different groups and you guys know this better than anyone else. I'm sure you do workshops like this for your customers all the time.
Speaker 1 00:58:30 Yeah, we do. Well, I think that's a great place to wrap today, guys. Thank you, Jack. When singer from AWS and also from AWS, Matt Herson. Thank you
Speaker 3 00:58:39 For joining us today. Thanks for having us. Thanks guys.
Speaker 1 00:58:42 Thanks for listening. The Workflow Show is a production of Chesa and More Banana Productions. Original music is created and produced by Ben Kilburg. Please subscribe to The Workflow Show, and shout out to [email protected]
or at The Workflow Show on Twitter and LinkedIn. Thanks for listening. I'm Jason Whetstone.