#7 "Storage SANity"

September 28, 2012 01:12:32
#7 "Storage SANity"
The Workflow Show
#7 "Storage SANity"

Sep 28 2012 | 01:12:32

/

Show Notes

Storage area networks (SANs) are essential to making multi-workstation video post-production facilities run smoothly. In this episode, Nick Gold and Merrel Davis explain in detail all the elements that go into creating a well-functioning SAN environment. After listening, you too will be well-versed in this critical realm of digital workflows. The length of this episode is 1:12:32. We find that many folks like listening to our podcasts while commuting.

Your comments are welcome below. You can also email us directly. The Workflow Show podcast series is also available through iTunes. We invite you to subscribe to the series, and we urge you to rate or review the series in iTunes as well.

Topics covered in this episode:

SAN (storage area network)
DAS (direct-attached storage)
NAS (network-attached storage)
SAS (serial attached SCSI)
gigabit Ethernet
10-gigabit Ethernet
file server
RAID redundancy
file-sharing protocols
jumbo frames
SCSI (Small Computer System Interface)
iSCSI
Fibre Channel
host bus adapter (HBA)
file-locking and volume-locking SANs
metadata controller
Xsan and StorNext
SANmp
EVO
NAS vs. SAN comparison

Oh, and to set the record straight regarding Merrel and Nick's discussion of the former hosts of PBS's The NewsHour: yes, Jim Lehrer has retired from the show, and, contrary to what was implied, Robert MacNeil is alive and well and working on a variety of projects.

Did you catch The Workflow Show episode about "Media Asset Management"? Follow ChesaPro on Twitter.

Episode Transcript

Speaker 0 00:00 This is The Workflow Show. I'm Nick Gold, accompanied by my cohort, Merrel Davis. We're of Chesapeake Systems. Episode one-oh-seven. Howdy. Yeah. Hey guys. Hey, usually I say howdy. Oh, well, you know. My God, we really are at seven whole episodes, man. Seven. Lucky seven. So what are we talking about in today's episode? And it's funny, 'cause Merrel and I were like, "What should we do next week?" And we actually said it in that voice; that's our thinking voice. And we were going through all these things that we've talked about, asset management and the workstation and editing platforms and archive and backup. And it's funny, 'cause you know, I've been at Chesapeake for eight and a half years now. Can you believe that? That's almost a decade. And the vast majority of what we've done over those years, I mean a huge percentage of it, involves workgroup storage systems for media workgroups. That could be video editors predominantly, and it's storage area networks, also some file servers, but it's this workgroup storage: storage that multiple people can be using at the same time. And we haven't even done an episode about storage yet. Speaker 1 01:24 I think a lot of it was, you know, intellectually, this was something that we sort of assumed was an accepted part of the environment. Speaker 0 01:33 Yeah. And then we get our typical eight phone calls this last week where people are like, "I am sick of stacks of FireWire drives." And it's like, oh gosh, we can't ignore this, because it's still an issue. It really is fundamental to everything that we've talked about up to this point. Storage. It's not sexy; it's a freaking hard drive icon on your desktop. Speaker 1 01:55 It's true. And, you know, I think the thing that's probably the biggest hurdle with regard to this is that this is the part that requires you to interact with your IT department, and your IT department... Speaker 0 02:08 If you have an IT department. Speaker 1 02:09 Right, or, you know, your neighbor's kid. But the IT department is typically centered around network infrastructure as it relates to the overall organization. Speaker 0 02:23 Yeah, like sending email and getting on the web and pushing software updates and that sort of activity. So they're not thinking about, "I need to pull four streams of video simultaneously that are each several hundred megabits of bandwidth." That's just way too much for them. Or... Speaker 1 02:41 You know, in many cases, I know we've come up against, certainly with a certain set of clients, the idea that a network-addressable piece of storage that has Mac integration Speaker 0 03:00 at some level is totally, you know, out of the realm of their comfort. Their comfortability. "Comfortability" is not a word. It's totally a word. It's not even close to a word. The "-ability" is completely superfluous; the word is "comfort." Comfortability. Anyway. So you may be working in a video environment, but you may not necessarily have somebody who has their video visors on when it comes to addressing how to get storage on a network. And, I mean, we'll get into this in more depth in a few minutes, but you almost never want your video storage network to be the same network as your general network.
It's going to get ugly real fast. So let's break down what we want to talk about today. Speaker 0 04:02 Let's start with the challenge, although we will be getting more into this next week with episode one-oh-eight, "Growing Pains," as we're going to describe it. Because next week we're going to take a lot of the things we've been talking about over the first couple of months here and relate them to a hypothetical environment where growth, of your organization and your data storage needs and the number of editors and the complexity of projects, is what you're facing, and we're going to walk through how to phase the expansion of your facility. But right now we are going to talk about some of the immediate challenges that might lead you to look at putting in a consolidated, networked workgroup storage system, otherwise known as a SAN, possibly, or it may be something else. It could be a NAS. Speaker 0 04:55 They're all the same three letters in a different order. They all sound like rappers. Yo... actually, I believe there is a rapper named Nas. Oh, well, excuse me. I think it's spelled N-A-S. Yes. We're going to talk about why one technology might be more appropriate, the different cost points, the performance that you may need for your editing tasks, and which of the technologies might be compatible with that. So we're going to guide you all through this wonderful, topsy-turvy world of video workgroup networked storage systems, today on The Workflow Show. So let's start somewhere fairly simple. All right: I worked in the field for ten years, I spun off a company, and now, finally, I've got my own edit suite, I've got my own slew of clients. Speaker 0 05:48 And typically, up to this point, it's been me, the editor-slash-videographer-slash-you-know-whatever. Yeah. Or, the term that I actually dislike intensely these days: "preditor." Yeah, producer-editor. It's a prevailing term, used a lot. You know, they're looking for preditors, go on mandy.com, the New York City listings. I'm picturing the aliens from the Schwarzenegger movie with the weird faces, you know, the Predator pulling people's skulls off and then using them as trophies. Which is cool, but I don't really want to be hanging out with them editing. Well, you know, I guess the point that I'm trying to make is that in that environment you have a self-sufficient individual who is dealing with a significant workload.
And these days everything is tapeless, and so you're not really pumping back out to videocassettes. Speaker 0 07:39 You just keep everything on hard drives now, USB or FireWire, which is typically what we run into in the editing world. Those drives, they're cheap, they're tempting. They're designed to be cheap, and they're designed for short-term use. We've run into situations as a reseller where it's cheaper for us to buy, like, a two-terabyte FireWire drive online than it is to buy a raw, just blank, SATA hard drive, for some reason. Sometimes, you know, those manufacturers, let's say G-Technology, those guys, they sell so many of these things, the volume-purchasing weight that they carry is so great that they can sell them for barely more than the cost of the actual hard drive mechanisms inside of those things. But, you know, that's not a good way to go long-term. I mean, here's two big problems, right? Your data starts to get split over bunches of these, and organization becomes a challenge. That's number one. Number two: they fail. They do. And they fail. I mean, let's just ask the question: have you ever had a hard drive fail? Speaker 1 08:55 If any of you told me no, I'd say you were lying, 'cause everyone has had a hard drive fail. I've had them fail. And they fail in any number of ways, right? Speaker 0 09:04 You know, in computers, two things fail, I think, just in my experience over the many, many years, I've been using computers since being a wee little lad: two things fail more than anything else. Number one, hard drives; number two, power supplies. It just seems like nine out of ten actual physical hardware problems a person has with a computer is their hard drive dying or their power supply pooping out. So, you know, some users who get into this pattern of developing stacks of FireWire drives, they may manually make copies between them, they may use ones that have hardware RAID mirroring capabilities. Speaker 1 09:44 Still, it becomes so unwieldy. And unwieldy especially 'cause you think you can manage it. And I've seen guys do this. They think, "I can do this. I have a good sense of how to organize my files. I've always kept these things in order. I did it in the past, and there's no reason why I can't do it now." And then... Speaker 0 10:03 The mid-season finale of Breaking Bad that I was watching last night makes this really clear: momentum. Just the having done something in a certain way for an amount of time. I think they use the word "inertia" in the episode. Speaker 1 10:21 Don't spoil it for, like, the whole world. By the way, I'm watching Breaking Bad. Speaker 0 10:26 It's so good. I'm just overwhelmed with how good a television program it is. But they talk about inertia, right? And the inertia of using these little FireWire drives, which we've been using for like fifteen years now to edit video off of, and it just kind of works. And maybe you have had a failure or two, but they're cheap, and you can kind of get away with it. And as you take on a new client, it's "buy another drive." The inertia of using that system just keeps you going down that road until it becomes unbearable. And here are the things that I've run into that typically drive someone into starting to think about a SAN.
And again, we'll get into this a little more in depth next week, but it's typically: drives are dying, and they're like, "Oh my God, we're using these little flimsy FireWire or USB drives," or whatever. Speaker 1 11:12 I mean, do you have any drives from projects that you did ten years ago? People do. Lots of them. Yeah. They don't fire up. Speaker 0 11:21 Yeah, you're just sitting on it. And we talked about this in the archive and backup episode: hard drives sitting on a shelf, not getting used? Oh, a catastrophe waiting to happen. Because when you go to plug it back in and fire it up, everything that used to be a lubricant making that thing work has kind of congealed over the years and frozen the drive. Speaker 1 11:40 Speaking of congealed over the years, I was watching the NewsHour with Jim Lehrer the other day. And, you know, it's so funny, 'cause I was just thinking about Jim Lehrer randomly, like, last night. I just figured I wanted to share my habits as well. Speaker 0 11:54 I mean, if we're going to do this... what, are you supposed to be all current and up to date? 'Cause you watch... wait, Jim Lehrer retired. No, he didn't. It was MacNeil. I thought MacNeil died, didn't he? No, he left last year. I'm going to Google this. Okay, listen, I watch Breaking Bad. I'm not going to feel bad about it. I don't feel bad about it either. So, anyways. Right, the inertia breaks down when you start to experience drive losses; you plug these things in and... Or, another common thing that gets people starting to think about workgroup storage versus these stacks of FireWire drives is when you do have multiple users, multiple workstations, and you need to have multiple people with access to the same group of footage at the same time. Because now you have to buy an extra drive and make copies of the data, which takes a lot of time, Speaker 0 12:49 'cause this video stuff is big. You're waiting to copy stuff between the drives, people are working off of different versions of the files, you're not using your storage space efficiently 'cause you are duplicating it amongst drives. One person's working off their island of storage, as I like to call it, and the other person is working off of their island of storage. Pretty soon you've got an archipelago of storage. "Islands of Storage"... isn't that a Kenny Rogers and Dolly Parton song? I don't think so. Oh wait, you know what? That's "Islands in the Stream." We could do a play on that, though: "Islands in the Storage." Anyway. So everyone working off of their own island is going to create additional bottlenecks and, you know, management issues. And so people are like, wouldn't it be great if we could just all be editing off of the same big drive in the sky? And you know what, Speaker 0 13:46 just something to add to this. I've encountered, as an editor myself... you know, I used to have this mindset. When an editor has spent most of his time in a closed environment, the idea of not having immediate local access to the files that you're working on is not necessarily the most palatable thing to an editor that has spent his entire time working locally off of a hard drive. I mean, it does require... You're saying the idea that this storage device that's centralized, sitting in a rack somewhere, and can't just get thrown into your backpack is a little wheeze-inducing.
I think so, especially if there's a sense that "I always will have access to it, and I can do with it what I need to do, when I need to do it," and now it's going to be... it sounds awfully enterprise-oriented. Speaker 0 14:42 What am I supposed to do? I've got to move things from location to location. I was at a client's the other day... they are not insubstantial in their size, and they do very large business in the commercial world and television. And there are stacks of FireWire drives throughout eight suites. And it's amazing to me. Yeah. They frankly probably outgrew that methodology six, seven years ago. However, it happens, because of inertia. And so there are things to weigh; there are certainly costs. But what does moving to a workgroup storage system net you immediately? First of all, the vast majority of them are based on RAID storage. So it's redundant sets of hard drives, all added together into a massive volume, and because the data is redundantly getting splayed, as I like to say, across a bank of drives, an individual hard drive failing does not immediately jeopardize your actual data integrity. Speaker 0 15:49 That's something that just the RAID redundancy of your storage, which a workgroup storage system essentially offers inherently, 'cause basically they're all made up of RAIDs, you get immediately, and it's an immediate net benefit. You know, data loss is less likely. As we like to say, a RAID is not a backup, but having a RAID as the basis of your storage makes it a more robust storage system than what many people have in these desktop drives, which is usually either an individual hard drive, and if it goes belly-up, it goes away and you've lost your data, or often two hard drives, like a G-RAID, right? Everyone's run into G-RAIDs, and the reason they're a little bit larger is because they have two hard drive mechanisms inside, and they get striped together, so the size and a little bit of the performance get added together. Speaker 0 16:43 The problem with the two-drive RAID 0, a striped G-RAID or something like it... I know this one. What is it? If one hard drive fails, you're, pardon my language, screwed. So let's get into statistics mode. Statistically, if you've got a two-drive RAID 0 desktop FireWire drive, and either one of the two constituent hard drives dying means you've lost all of the data, you are how much more likely to experience data loss? About as likely as you are to experience a bunch of rage in the middle of the street. The correct answer is: a doubling. It is statistically twice as likely for that volume to die, because either of the two drives that make up that volume can kill it. My answer is the more accurate one, though yours is a little more poetic, I would say. So RAID is something that a workgroup storage system is probably just going to net you right away. It's more robust storage than your average desktop drive. It's also probably a lot bigger than your average little two-drive or one-drive desktop unit, so it's just a more unified container for your data, right? It's going to be six or eight or 16 or 32 or several dozen hard drives, all added together into this volume that people are connecting to. So it leads to the stuff just not being scattered around amongst multiple drives. That's true.
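To put rough numbers on that "doubling," here's a back-of-the-envelope sketch; the per-drive annual failure rate is an illustrative assumption, not a measured spec:

```python
# Rough odds of losing a volume within a year, single drive vs. two-drive arrays.
# The 5% per-drive annual failure rate is an assumption for illustration only.
p_drive = 0.05

# RAID 0 (striped, like a two-drive G-RAID): EITHER drive dying kills the volume.
p_raid0 = 1 - (1 - p_drive) ** 2   # ~0.0975, roughly double the single-drive risk

# RAID 1 (mirrored): BOTH drives must die before you can replace one.
p_raid1 = p_drive ** 2             # 0.0025, dramatically safer

print(f"single drive:      {p_drive:.4f}")
print(f"two-drive RAID 0:  {p_raid0:.4f}")
print(f"two-drive RAID 1:  {p_raid1:.4f}")
```

For small failure rates, 1 - (1 - p)^2 is approximately 2p, which is where the "twice as likely" rule of thumb comes from.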
Speaker 1 18:12 I mean, there are half-steps in this sort of journey to centralized workgroup storage. And that is what a lot of times we see happen: "My individual FireWire drive isn't cutting it, so let me get, like, a CalDigit four-bay RAID." Which is certainly better than a single hard drive, but it's not necessarily solving this fundamental problem of multiple people accessing the same volume, right? You're just going to end up with more of those things over time. Sure. "But wait, I've got a nice new iMac, and I love it." I love it when you go into, like, third-party mode, like the imaginary person. Yeah, well, I've got a lot of imaginary people in my head. This is what's crazy to me. And this one just happens to be a clown. Okay. "But wait, I'm a clown. Speaker 1 19:06 I'm a part-time clown, and I make clown videos for weddings, bar mitzvahs, and my YouTube channel. And I have an iMac, and I've got some Thunderbolt, and I've got a Pegasus Thunderbolt RAID. And you know what? I'm smart enough to know, because I have an IT clown uncle..." He also went to clown college with me. Yes. "So my IT clown uncle, he says, well, you know, you can just share all those hard drives over your network, since they're connected to your local machine. And then I, as the lowly bar mitzvah clown..." Let's say it's Stan, the lowly bar mitzvah clown. Yes. "So I, Stan, the lowly bar mitzvah clown, I think that's a viable solution. Why can't you just re-share the FireWire drives over Ethernet? People share drives all the time. I mean, I've got movies on my hard drive, and I watch them. Why can't I just do that? Speaker 0 20:10 I go into file sharing and just share it out, and, you know, maybe make it so someone needs to enter a name and password, and they mount it on their desktop." Here's why, Merrel... no, no, this is not for Merrel, this is for Stan. Stan, listen, Stan, lowly bar mitzvah clown, here's why you don't want to do that. Number one, you're not getting around the issue of having to push and pull data between drives, duplicating it and copying it. And if you are under the impression that you may be able to work directly off of someone else's shared drive across your Ethernet network, you can actually tax the CPU of the system hosting that shared drive, because it's now acting as a file server in addition to being a machine that someone's probably trying to edit on. And why would you want to tax them and burden them in such a way, Stan, lowly bar mitzvah clown? Speaker 1 21:00 Well, because I double-booked bar mitzvahs this Saturday. So you had to hire, kind of, a nice clown as the assistant editor? Yeah, I had to hire an even more lowly clown than myself, which was just a guy who put a little... you know, it's like the sad clown. Speaker 0 21:18 Yes. So there are performance ramifications to just hosting out your FireWire drives over Ethernet and having people work off of them, or maybe they're pushing and pulling data, but then there's waiting. "So I'll stop using Thunderbolt?" Thunderbolt isn't really the bottleneck there. The bottleneck is your Ethernet network, or your wireless network, if you're silly enough to try that, Stan. Speaker 1 21:38 Well, the bottleneck in my life is the bottom of the bottle, 'cause I'm Stan, the lowly bar mitzvah clown. For any of our listeners whose side gig is that... you know, for all you alcoholic bar mitzvah clowns, we're very sorry.
So there's issues Speaker 0 21:58 with that. And again, individual hard drives dying is still going to result in data loss. So you start to say: what if we put together a dedicated, centralized workgroup storage system? What is this going to look like? What are my options? What is the next phase beyond the stacks of FireWire drives Speaker 1 22:17 and desktop RAIDs? Okay, so I think this would be a good time, basically, in case anyone has any illusions otherwise: what we're talking about here is still working off of hard drives. It's just how they're housed, and where they're placed, and how you interact with them. We're still dealing with spinning disk here. I think that's an important thing to note. Speaker 0 22:39 Yeah, it's not a different fundamental technology. It's the way of using that core storage technology and implementing it, the particular vendors who tend to deal in this space, the fact that, again, it's a networked device and not something that we call DAS, direct-attached storage. So a FireWire drive or a little desktop RAID array is DAS, direct-attached storage, 'cause it's directly attached to your workstation. Speaker 1 23:08 That's actually my stage name, too: Daz the Clown. I mean, do make the check out to Stan, but Daz is my stage name. So now you're moving Speaker 0 23:20 in the direction of a SAN, or storage area network, or a different type of network storage technology called a NAS, or network-attached storage. A NAS is just a fancy way of saying a file server, and we all probably have a concept of what a file server is. It's a drive you connect to as a client across a network. You mount it on your desktop. Multiple people can be hitting it at the same time and potentially working directly off of it; some people are pushing data to it, pulling data from it. But it's just basically a file server. So the first thing to evaluate in a workgroup storage system is: what route are we going to take? What even is the difference between a SAN and a NAS? What are the performance differences? What are the differences in cost? And what does one get me that the other doesn't? That's question numero uno. Speaker 1 24:16 So to address that question, first I think we need to figure out how we want to handle our data. Specifically... yes, with a sweet caress. But how we access the data is the fundamental difference between a NAS and a SAN. Let's start at the basics, right? Okay. When you connect Speaker 0 24:43 to a file server... if you've ever plugged your computer into a network and connected to a file server before, maybe it's in your office and it's where you have your Word documents or whatever, right? A file server, first of all, is typically using an Ethernet network. That means you're using Ethernet cabling with Ethernet switches. And the server, which is hard drives being hosted by a dedicated computer that shares that storage out, might turn them into a RAID array, and that RAID gets turned into a volume or drive, or multiple drives, that you connect to and mount. It has a server built in. It's running some kind of an embedded or fully dedicated operating system. It has its file services turned on. You've created user accounts with names and logins and passwords and set permission systems on it. You plug it into your Ethernet switch, and people, your client systems, your workstations, your laptops or desktops, are on that Ethernet switch as well.
And then you have to connect to the file server using a file-sharing protocol. Speaker 1 25:54 Yeah, you know, like when somebody says, well, you know, just backslash-backslash the name of the server, slash folder. Speaker 0 26:01 Yeah. So there's, like, SMB or AFP, or... FTP could also maybe kind of be seen as an example of this, in a sense. But what we're really talking about is certain applications, often built into your operating system, that speak to a file server. And what they're really doing is taking the drive that's built into the file server and creating this intermediary layer that multiple people, using their client systems, are communicating through, in order for multiple people to be writing out to that same central volume, or reading data off of that central volume, at the same time, without actually writing over each other's data. So, like... Speaker 1 26:48 Like, wait, doesn't that create overhead? It's really hard to respond to that even slightly seriously... but the overhead, okay, the overhead, right? Well, there is overhead. So that's the thing about file-sharing protocols, Stan. I'm just going to call you Stan. "My life is miserable." Can you make a honking sound, clown? "My horn is broken. It's been broken since my wife left me." Can you guys tell that we're recording this on a Friday late afternoon, by the way? I'm just throwing that out there. So there is overhead to these file-sharing protocols. Speaker 0 27:32 The common one for the Mac operating system is AFP, or the Apple Filing Protocol. In the world of Windows it's SMB, or Samba as it's sometimes referred to; that's a very standard one. You have NFS, and also the Windows-oriented CIFS, while NFS tends to be kind of Unixy, Linuxy, as a file-sharing protocol. Because, you know, again, when a system is writing to a hard drive, there's a little database that lives on that volume, that drive that you connect to, that rules where the data is actually getting stored on that physical device: the file system. Again, it's a little database that lives on the volume that determines where the files live, what their permissions are, lots of little things about the volume. When multiple people write to a volume at the same time, unless there is this intelligence behind it, unless there's that kind of mediating file-sharing protocol and set of services involved, people would just stomp all over each other's data. Speaker 1 28:35 "And the wife... my wife left me!" Okay, we're stopping this damn thing. Come back to Merrel. "And she took the little kids. Little kids! My children. She took all seven of them. She put them in her car. We drive a Miata. They're small enough to fit, seven children." Oh, I get it. See, that was the joke there, 'cause it's a clown car. Sorry for all the Miata drivers out there, along with the bar mitzvah clowns. But so, that's these file-sharing protocols. Speaker 0 29:16 These protocols are establishing guidelines and rules in real time that are determining who's writing to which section of this centralized file server's blocks of storage, so they're not corrupting each other's data. And yet there's an overhead involved in allowing that communication, over an Ethernet network, between you on your edit workstation and that central file server, to happen. And this is why file servers
weren't ever really originally designed for real-time video editing tasks with high-bit-rate video. Well, you're never going to get the kind of performance that you want off of it. Well, you might, you know; I guess it depends on file size, and number of layers, and a whole lot of other factors. But here's the thing, right? So think about the internet and TCP/IP, TCP/IP being kind of the core protocols of internetworking devices and how they communicate. And, you know, TCP/IP, as kind of the underlying architecture of networks, was developed for the purpose of being able to dynamically route packetized internet traffic around network outages. Speaker 0 30:30 And, you know, there's debate as to whether this was really the core reason it was developed this way, but it was certainly an interesting application of it as far as the military was concerned, who were obviously large contributors to the early formulation of the internet, although not the sole contributors. They were like, ooh, that would be great: if one of our cities got nuked, and we had networks between our cities or our military facilities, this networking traffic is going to know, basically, how to route around a major outage and can still get to the recipient. And so if you kind of relate that to an Ethernet network: in an Ethernet world, if you are sending traffic to a file server, a.k.a. copying a file to it, or capturing a stream of video off of a video feed... and, you know, even a file server that has been tweaked to be good for real-time video playback and capture purposes... Speaker 0 31:27 there's a lot of subtlety to the way these things get built that makes one file server a good video file server and another a not-so-good real-time video server. Can I just buy something out of the box? Not so much. Not so much, 'cause there's a lot of configuration issues. You are typically talking about file servers that have more of a price premium associated with them than some off-the-shelf, little, you know, "network drive" kind of thing. Those are cheap, but that's not what we're talking about here. So here's the other thing: a gigabit Ethernet connection. When you use one of these file-sharing systems as your real-time workgroup video storage system, and you're connecting to it via a gigabit switch, and every client system has gigabit on it... well, in real-world terms, especially when you account for the overhead of the file-sharing protocols you use to connect to these file servers, even on a closed gigabit network, on a per-user basis, that gigabit Ethernet connection is netting you a connection speed Speaker 0 32:31 that's about equal to a typical FireWire 800 connection. So if you have been in an environment where you're using, like, G-RAID drives, or these little two-drive units that are in one box and have a FireWire 800 connector, moving to a gigabit Ethernet-based storage system, if the back end of the system is designed properly (I'll get more into that in a second), but if the back end is designed right, your individual editors will be able to pull about as much stuff in real time as they would off of a FireWire 800 drive directly connected to their laptops. So if that's how you've been working, you can probably get away with one of these gigabit Ethernet file server systems. Perhaps. It's certainly not going to net you any more performance than you're used to.
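To see why gigabit Ethernet lands in FireWire 800 territory, here's a rough sketch; the protocol-overhead figure is an assumption for illustration, since real overhead varies with protocol and tuning:

```python
# Why a gigabit link "feels like" FireWire 800 once file-sharing overhead bites.
# The 25% overhead figure is an illustrative assumption, not a benchmark.
GIGE_MBPS = 1000      # nominal gigabit Ethernet
FW800_MBPS = 800      # nominal FireWire 800
overhead = 0.25       # assumed cost of AFP/SMB plus TCP/IP framing and chatter

effective = GIGE_MBPS * (1 - overhead)
print(f"effective gigabit throughput: ~{effective:.0f} Mbps "
      f"(nominal FireWire 800: {FW800_MBPS} Mbps)")
```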
If you've been using direct-attached eSATA drives, which often perform a little better than that, or these little desktop RAIDs, you may actually be moving backwards by moving to a gigabit Ethernet file server NAS as your workgroup storage system. Speaker 0 33:33 Because again, you're going to be limited to about FireWire 800 speeds on this type of thing. Now, the thing about this: so we have a file server with its own drives, and those drives may have very high speeds associated with them, but that thing needs to connect to the Ethernet switch too. If it connects to the Ethernet switch, assume again a dedicated gigabit Ethernet switch, with a gigabit Ethernet connection of its own, and you have five editors on that switch, each with their own gigabit Ethernet connection, the singular gigabit Ethernet connection that the file server itself has, connecting it to the switch, now becomes the bottleneck. Because despite the fact that individual editors may each have a gigabit connection to the switch, they're all feeding into the single gigabit pipe, if you will, or tube (because, as we know, networks actually really are made up of tubes) that the file server is behind. Speaker 0 34:30 That's now taking the gigabit Ethernet connection and splitting it, say, five ways. And so now your users, if they're working at the same time, are getting way lower performance than a FireWire 800 connection. The way we solve this is, if you want to have a file server be the central server in a real-time video editing environment, with multiple users working at the same time, and you need them each to be able to have roughly FireWire 800 speeds using that gigabit connection, then what do we do? 10 GigE. 10 GigE. All right, so gigabit Ethernet is a thousand megabits, and 10-gigabit Ethernet is ten times that. You don't need special connector types, you don't necessarily need optical transceivers (which we'll get to in a moment when we talk about Fibre Channel), and a lot of people are looking at Ethernet now as a very attractive physical layer for these video storage networks. So, you know, I think it would be worthwhile to mention here, because we've been talking at sort of an abstract level, that we're still talking about file servers. File servers, and we're Speaker 1 35:37 still talking about NAS RAIDs that are sitting in chassis that are, in many cases, rack-mounted. Speaker 0 35:47 Yeah. These things don't necessarily look like a little tabletop four- or five- or eight-drive RAID array. This might be, like, sixteen bays. Yeah. These are big pieces of equipment that typically get racked in, like, an IT server rack. They put out a lot of noise, and they have a lot of power requirements, and they generate a lot of heat. So these are serious pieces of gear. You are stepping it up a notch, typically, although there are some that are a little cheaper and less unwieldy to install, but we are moving to a class of IT equipment that, again, you might call enterprise-oriented. It's the stuff that lives in the rack, not the little drive that sits on your desktop. But, you know, Ethernet, as we've been talking about, using the typical file-sharing protocols, with a well-designed, optimized gigabit Ethernet network and maybe a 10-gig path to a file server that itself has 10-gig on it, is a very viable option.
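Here's the uplink arithmetic from a moment ago in sketch form; the five-editor count and throughput figure are illustrative assumptions:

```python
# The server's single uplink is what gets divided, not the editors' links.
effective_mbps = 750          # assumed usable gigabit throughput (see sketch above)
editors = 5                   # illustrative workgroup size

per_editor = effective_mbps / editors
print(f"5 editors behind a 1 GigE uplink:  ~{per_editor:.0f} Mbps each")

# A 10 GigE uplink on the server side restores FireWire-800-class speed per seat:
per_editor_10g = (effective_mbps * 10) / editors
print(f"5 editors behind a 10 GigE uplink: ~{per_editor_10g:.0f} Mbps each")
```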
Speaker 0 36:49 But, you know, again, we related it to FireWire 800. So you can probably pull three streams of ProRes high-def off of it, in real time, as layers, without having to render. You can probably pull two streams of ProRes HQ, maybe three if you're really lucky. You know, if you use a more highly compressed format, like 35 or even 50 megabit XDCAM HD off of an SxS card, or, you know, XDCAM EX, maybe you'll pull like five to six streams of that stuff, maybe up to seven. DVCPRO HD tends to be about seven streams. So you have to relate that stream count to the complexity of the projects you're working on and how many simultaneous video layers an editor might have to pull. You know, a cross dissolve is two layers: for that portion where it's the two layers of video dissolving across one another, before you render it, it needs to be able to give you two streams of performance. And if you're doing the Brady Bunch thing, and you've got nine little videos up at the same time, before you render it out and flatten it... Speaker 1 38:03 If you're doing an eight-by-eight, or rather a three... my math is terrible. The Brady Bunch was three by three. Yeah, three by three. I don't think there was anyone in the middle, though; I think that was the logo. It said "The Brady Bunch." Yeah, yeah. But that's when you're making a tic-tac-toe screen of videos. Yeah. We're not talking about throwing footballs at our little sisters and messing up their noses before the dance. Speaker 0 38:34 So that's the Ethernet file server reality.
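If you want to sanity-check stream counts like those against a link speed, the arithmetic is just division. The bitrates below are typical published figures, and the 600 Mbps of usable bandwidth is an assumption; real systems often deliver fewer streams than raw division suggests, because of seeks and protocol chatter, which is why the on-air numbers are conservative:

```python
import math

# How many real-time layers fit in the bandwidth you actually get?
codec_mbps = {
    "ProRes 422 HQ (1080)": 220,   # typical figures, not exact specs
    "ProRes 422 (1080)": 145,
    "DVCPRO HD": 100,
    "XDCAM HD 50": 50,
    "XDCAM EX 35": 35,
}
usable_mbps = 600  # assumed FireWire-800-class real-world throughput

for codec, rate in codec_mbps.items():
    print(f"{codec}: ~{math.floor(usable_mbps / rate)} streams")
# A cross dissolve needs 2 streams while it plays; a 3x3 grid needs 9.
```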
Now, here's the thing about Ethernet. We've been talking about NASes and file servers and the typical file-sharing protocols that you would use to connect to these things, like AFP, SMB, CIFS, or NFS, right? There's a little wrinkle that steps it up a notch, which is using a thing called iSCSI. And that's not a made-up phrase. No, it's not. And it has nothing to do with Apple, as a matter of fact; the "i" is not... no, though Apple are certainly litigious in that manner. So, iSCSI. And SCSI is S-C-S-I, or, if memory serves, the Small Computer System Interface. If you've been in this industry a while, you probably remember SCSI drives. How'd you conjure that? What's that? The Small Computer System Interface. 'Cause I'm, like, the superhuman of uber-intelligence, Merrel. I tell you this every day. We're not talking about these big old honking SCSI connectors, though. No, no. I mean, SCSI, as you might remember, used to use these big connectors; you'd have to terminate them and set SCSI IDs on every device that was part of your SCSI chain. But really, what SCSI was, at its core, was two things. It was both those physical connector types, but it was also the way that the data got written to those drives, using these things called SCSI drive commands. And if you imagine it, it's more direct and to the point than packetized data over a TCP/IP Ethernet network or internet network; SCSI is kind of one of the ways that a system will write data to a drive that it's directly connected to. Well, we don't use SCSI drives anymore, but the SCSI drive commands are what we call a block-level protocol that writes data more granularly and more directly, with certain levels of latency being guaranteed, and with the data always getting written out in the correct order, unlike packetized network traffic. Speaker 0 40:53 SCSI is still a useful protocol and command set for writing data to drives. And they figured out a way to kind of marry gigabit Ethernet and SCSI data. And what they do is they use normal, well, close-to-normal or higher-quality gigabit Ethernet switches that are optimized for iSCSI traffic, which is really SCSI drive commands being conveyed by the TCP/IP protocol. And so it's kind of like a marriage of both worlds. It's a way of passing SCSI data over an Ethernet network, and you get a little bit more performance out of that gigabit Ethernet connection, or it can be 10-gigabit Ethernet as well. We were saying how there's overhead to the file-sharing protocols; think about it like cholesterol in your arteries, and iSCSI allows you to kind of scrape it away and get more out of that gigabit Ethernet pipe, by using a type of data traffic that is more optimized for higher real-time performance, without that overhead. Recommended by nine out of ten doctors. Speaker 0 42:13 So, the thing about iSCSI is that not every file server allows users to connect to it as an iSCSI device; not every file server supports iSCSI as a way that people connect to it. And I talked earlier, Merrel, about how the nice thing about file servers and file-sharing protocols is that they allow multiple users to write to the same volume at the same time. The thing about iSCSI is, because it's the way that you talk to a local drive, you can't have multiple users using the iSCSI protocol on gigabit Ethernet writing to the same volume at the same time. Unless... unless what? Unless you have, like, a traffic director. Well, now we're going to start talking about SANs. SANs, SANs, SANs. Now let's talk about the difference between a SAN and a NAS. So with a NAS, or network-attached storage, or a file server, it's the file server and its software Speaker 0 43:23 that's mediating the multiple people writing to this volume, or volumes, at the same time, without corrupting each other's data or writing over each other's data. And it's using these file-sharing protocols that have evolved over the years. A SAN is a little different, in that SANs, and this is the crux of the difference in the definition, are oriented around multiple users writing to one or more volumes at the same time using these block-level SCSI command data sets. So it's more like talking to a local storage device than it is talking to a file server, but there needs to be some level of intelligent management involved to make sure that people aren't writing over each other's data, because you don't have those file-sharing protocols at play managing that. What a SAN means is multiple users writing to a volume, or multiple volumes, at the same time.
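As a toy illustration of the block-level idea (this is not a real iSCSI stack, just the shape of it): a NAS client asks the server's file system for a file, while an iSCSI initiator addresses disk blocks directly, with the SCSI command riding inside routable TCP/IP packets:

```python
from dataclasses import dataclass

# Toy model only: real iSCSI has sessions, logins, LUNs, and much more.
@dataclass
class ScsiCommand:
    op: str       # e.g. "WRITE(10)"
    lba: int      # logical block address on the target
    blocks: int   # number of blocks to transfer

@dataclass
class TcpIpPacket:
    src: str
    dst: str
    payload: ScsiCommand   # block-level command carried over ordinary Ethernet

# NAS-style request: "server, please append to this file" (file-level)
nas_request = ("afp://server/volume", "append", "capture_scratch/take42.mov")

# iSCSI-style request: the client speaks directly in blocks (block-level)
san_request = TcpIpPacket(src="edit-1", dst="iscsi-target",
                          payload=ScsiCommand("WRITE(10)", lba=204800, blocks=256))
print(nas_request)
print(san_request)
```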
Speaker 0 44:25 And there is something orchestrating this system: the SAN software and systems. This is a system that we're talking about here. So we're talking about a specific software component that, I think I used the phrase before, acts as a traffic director. Somebody who says stop, somebody who says go, if we have multiple people hitting the same volume and they need to achieve different goals. Maybe somebody just needs to access some content, non-destructively, and someone's on an ingest workstation, so they're always capturing to it. Sure. So there's all this stuff, SAN software, as you said, to manage this, to make sure that multiple people can be using these block-level protocols like iSCSI. And this is where Fibre Channel comes in. Fibre Channel is another SAN connectivity system that connects individual clients, and it's usually using fiber-optic cable. F-I-B-E-R is fiber optic, but Fibre Channel, F-I-B-R-E, Speaker 0 45:22 is actually not talking about fiber optics, even though it usually uses fiber optics. It's talking about what's called a Fibre Channel switch, which is, imagine it like an Ethernet switch, but built specifically to route block-level data amongst multiple users and storage "targets," as they're called. So you have initiators, and you have targets. And a Fibre Channel switch is a different beast, and a more expensive beast, and yet a significantly higher-performance beast, 'cause these days almost every port on a Fibre Channel switch is going to be about eight gigabits, and the higher-speed links on a Fibre Channel switch are going to be greater than eight gigs. And most Fibre Channel storage devices are going to allow you to connect to a switch with more than one eight-gig connection at the same time. We're talking about levels of performance that leave gigabit Ethernet in the dust. Speaker 0 46:20 And Fibre Channel has all these other requirements. Usually you're using fiber-optic cable. You don't have a Fibre Channel port built into your computer, so you need a Fibre Channel card, what's called an HBA, or host bus adapter. And then you need the management software, and likely management servers on top of the actual storage devices themselves, to orchestrate this whole thing, so you can present all of these spinning disks as a single volume, or several volumes, and make sure that people aren't writing over each other at the same time and corrupting each other's data. So there's two main types of SAN. A file-locking SAN, which is much like a file server in that it can be set up as one big drive, if you will, that people connect to. They're running SAN software on their end, they connect to this drive, and multiple people can be writing out data to the drive Speaker 0 47:14 at the same time. We can all be doing captures, we can all be doing renders, we can all be tweaking project files that are on there and re-saving them. Those are destructive kinds of operations that require write-level access to a volume. Write, W-R-I-T-E. Yeah. Right, right. Like, I'm writing data to it. Right. And thank you for clarifying that I'm not talking in moral terms, like "it's correct." No, we're writing data out to this drive. Or Stan knows about those moral terms, after his wife left him. But to Stan, the decrepit, immoral... Speaker 1 47:48 At this point, Stan's actually aged at least fifty years. Now he's an old, sad clown, you know, washed up.
This is 'cause he didn't... Speaker 0 47:56 You realize we've lost any potential business that we ever had with, like, the Clown Society of America's video editing department, right? The Clown Society of America's video editing department, East Coast division. The clowns. We're going to get so many letters. Send in the clowns. So, in order to get to the point where you're writing this SCSI data over, let's say, a Fibre Channel or iSCSI network, to a true centralized SAN system, using these high-bandwidth Fibre Channel connections, or 10-gig or gigabit iSCSI links to get more performance out of Ethernet, you need servers running the SAN software as well. These are often called metadata controllers. They add cost, they add complexity, they add engineering to the whole system. Moment to moment, they are managing who's writing data to which section of the centralized storage at the same time. In addition to those servers, you typically need a whole separate Ethernet network just for that moment-to-moment traffic information to flow across. Speaker 0 49:01 So now we need to set up a Fibre Channel switch to actually have the media data flow between the editors and the centralized storage system, the SAN, and then a whole separate Ethernet network, usually not the same one as your general network, because this still needs to be isolated. Even though it's not the media itself flowing over it, it needs to be a very high-performance, low-latency network just for the traffic data about who's writing to which section of the SAN at a given moment, so they don't interfere with one another. That's adding a whole second network to the SAN: more cost and more complexity. And yet they're getting cheaper and cheaper, and they're very easy for us to implement. I don't want to over-frighten anyone about how overwhelming... Speaker 1 49:47 They're not. They're not. I mean, cost really isn't the factor here. I think it certainly is a factor. I guess the point that I'm making here is that while it is not insubstantial how much you must pay for a setup like this, ultimately the type of workflow that you have, and the size of your environment, the number of editors, the flavor of video codec that you're operating in, all those things are ultimately the deciders. Am I the one adding all the syllables to words today? Yes. Yeah. Well, you know, like GW, he once said he was "the decider." So these are all the deciders for Speaker 0 50:28 why we would go one way versus another. But yes, it's a costly investment, but it's the kind of thing that you can maintain over multiple years. They're easy to scale. We can fold storage almost invisibly into a true SAN, or storage area network, technology very easily and quickly, in a way where an editor one day comes in and their SAN volume is a certain size, and the next day they come in and we've folded more storage into it behind the scenes, and it's just larger. But it's the same name, the same directory structure. We didn't have to back anything up or delete it. We can kind of expand it on the fly as you go. Yeah, it's a great, great technology. Xsan is one implementation of this, and StorNext is kind of the same fundamental software as Xsan.
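To recap the moving parts of a file-locking SAN in one place, here's a sketch of the inventory just described; the names and counts are illustrative, not a bill of materials:

```python
# The pieces a file-locking SAN (Xsan/StorNext style) typically needs.
file_locking_san = {
    "storage": "Fibre Channel RAID chassis, often multiple 8 Gb links to the fabric",
    "fabric": "Fibre Channel switch routing block-level data (initiators to targets)",
    "metadata_controllers": ["primary MDC", "standby MDC"],  # servers running SAN software
    "metadata_network": "separate low-latency Ethernet, isolated from the house LAN",
    "clients": [f"edit workstation {n} with Fibre Channel HBA" for n in range(1, 6)],
}
for part, detail in file_locking_san.items():
    print(f"{part}: {detail}")
```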
Yes, StorNext is the XN file system Apple licensed ex Sam from at the time a company called ADC that got bought by quantum and Andy, their original software was called StorNext. Basically the Apple version of StorNext is called Exxon. Most, a lot of people that were setting these types of sands up for now are, are putting in a store next metadata controller, but you've got a Linux-based metadata controller running a Linux server to manage the sand system, but you're still using the Exxon software. That's now built into iOS 10 as the software to connect to it. And then we're usually using a number of different manufacturers of fiber channel storage that goes on the fiber channel switch. You have these servers that are running the metadata controller software, the store, next software, they're on the switch. Speaker 0 52:17 They're also on the secondary ethernet network for the sand traffic data. And it's quite a production, but they are exceedingly high performance and, and wicked on every single level, if it's appropriate for your environment and you can afford it, speaking of quite a production. So this is probably the most ideal and most we use reality television production, um, as, as pretty much a GoTo for all these scenarios. Cause I think it is the most video heavy, certainly in, in as far as the ratio of our shot versus ours produced. But there is just if there was ever a time when you were just with multiple rolling cameras from multiple locations coming in, uh, and they need to be ingested and then assigned to, to, uh, you know, assistant editors to do string outs. These Sanders is ideal. Yes, this is to be giant high performance, hold lots and lots of video. Speaker 0 53:14 Cause again, the scalability is a very nice aspect of it and everyone can be freely reading and writing data to, and from the transcriptionist, just non-destructively pulling off, uh, video two to log hers. You've got ingest workstations, you've got editors, you've got producers. It's it's, it's, it's a very flexible system. Now there is a different type of sand system. It can be using gigabit, ethernet and ice Guzzy. It can be using 10 gigabit ethernet even to connect the clients and the storage, uh, using the IC protocol or it can be fiber channel and it's called a volume locking sand. So, whereas the file locking sand like Exxon and StorNext is often presented as one big volume. Everyone is reading and writing too. At the same time, if they have the permissions with a volume locking sand, you strip away the machines that are acting as the management servers, they no longer need to exist. Speaker 0 54:08 Each individual editor or client workstation is simply running a piece of sand software that you use to connect usually to more than one sand volume that you've carved out of your raid array and your raid. Even though it's like, you know, maybe call it 16 hard drives and we create a couple of raid sets out of it. We have very flexible ways about how we can actually carve those into the volumes or drives users Mount in a volume locking sand. We would probably actually present it as a number of drives because the rule of it is you have to use this volume locking sand software to connect to the sand volumes. And only one user can have read and write access over a volume at once. However, multiple users can have read only access off of a volume at once, even while someone may be writing to that same volume. 
Speaker 0 55:02 So think of having this little app. It gives you a list of volumes, and you have users, and they can choose a volume and mount it either read-only or read-write. And because of the way the permission system that you've defined works, whoever mounts it read-write first kind of locks the volume, and now people can only mount it read-only until that original person relinquishes the read-write access. But I think it's an important distinction to make here that simply because you do not have write access does not mean that you can't engage in work, because we're dealing in a purely digital world here, where editing, when it comes to making edits on your timeline, is non-destructive. Well, so let's talk about what things in the workflow of typical video require read-write access and what requires only read-level access. Ingest is read-write? Speaker 0 55:57
And just to clarify a little bit: when Merrel talks about "local," folks, he's talking about saving certain types of files out to storage that's built into, or directly connected to, just the system you're using,

Speaker 0 58:31 the editing system, say the workstation or laptop you're on, not the centralized volume that's part of the SAN. So you might keep your project file local, or make your local drive the render destination, which, personally, I would do. Rendering locally can be good; the only problem is that if someone else opens a project file you've sent them and connects to the media on the SAN, they may need to re-render, so sometimes there is value in keeping render files on the SAN. But let's say (and of course, again, if you need to capture to a SAN volume, you need that read/write access) you just want to use video footage that's already been captured in a project you're working on, and your project file is local and your renders point locally.

Speaker 0 59:24 You only need read... oh, you're not going to give me a quiz here? You're not going to ask me, you're just going to tell me now? Okay, I see how it is. Sorry, can you ask me the question then? Just pretend you didn't hear that, Merrel. And to make you feel a little better... oh, I'm feeling better already. Okay, so: your project file is stored on your own hard drive, and your renders are on your own hard drive, but there's this SAN volume that already has a bunch of footage on it, and you want to edit that footage.

Speaker 0 00:22 What type of access might you need to that volume? I only need read access. And why do you only need read-only access? Because the edits that I make (we're not splicing analog film here) are non-destructive, and I can save my edits even as an XML file; the edit is independent of the actual video. That's the joy of it: you're just making pointers back to the original media. Your render files certainly have to read the original footage in order to manipulate it and write out a new file, and your project files get changed and modified to refer to the in points and out points and graphics and transitions you want baked into the video as changes in your renders. But the source footage itself, to use it in an edit, is only something you need to read from, assuming it's already been captured.

Speaker 0 01:28 That's the biggest hurdle people have intellectually with this kind of setup. They say, "What do you mean I can't have read/write access? I need it." And the truth of the matter is, you only need it a little bit of the time; people who are editing during the day don't need that access.
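[Editor's note] A small illustration of the "pointers back to the original media" idea: a project file is essentially metadata that references source clips by path and in/out points. The XML structure below is a simplified invention, not Final Cut's actual project schema.

```python
import xml.etree.ElementTree as ET

# A cut is just metadata: which source file, and which frames to use.
# Nothing here modifies the media on the SAN, so read-only access suffices.
sequence = ET.Element("sequence", name="rough-cut-v1")
for path, in_f, out_f in [
    ("/Volumes/MEDIA01/raw/cam_a_0042.mov", 120, 480),
    ("/Volumes/MEDIA01/raw/cam_b_0007.mov", 0, 250),
]:
    clip = ET.SubElement(sequence, "clip")
    ET.SubElement(clip, "file").text = path       # pointer to source media
    ET.SubElement(clip, "in").text = str(in_f)    # in point, in frames
    ET.SubElement(clip, "out").text = str(out_f)  # out point, in frames

# Save the "project" locally; the SAN footage is never written to.
ET.ElementTree(sequence).write("rough-cut-v1.xml")
```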
So let's cut to the chase. There's a company we've had a relationship with for a long time; we've implemented a few different types of these volume-locking SANs over the years, but the one we've been working with for many, many years now is made by a company called SNS, or Studio Network Solutions, and the product is called SANmp. SANmp was originally just a piece of software: you would have, say, a fibre channel switch, and you would attach a fibre channel RAID to that switch.

Speaker 0 02:17 And you would attach your editors to the switch. They'd have a fibre channel card, they'd link up to the switch, and they'd be running the SANmp software, which is basically a list of volumes: are you connecting to this one read-only or read/write? It works really darn well if you bear in mind that fundamental set of rules about how volume-locking SANs work, and these systems are much cheaper than file-locking SANs because you don't need the management servers. The SANmp client program essentially sets little flags in an extra layer of information stored on the volumes themselves, and that's what bars other people from getting in; nothing more complex needs to be going on, management-wise, than the flipping of those little flags on and off. So you don't need management servers running their own version of the SAN software,

Speaker 0 03:10 or that secondary SAN traffic-control network. It's just the storage, a switch, your client systems with their connections to that switch, and the SANmp software. Now, Studio Network Solutions came out with their own storage appliance called the EVO. Don't ask me what it stands for; evolutionary, I don't know. The EVO is basically a system that is both a file server and volume-locking SAN storage. It can support pretty much everything we've talked about, but it doesn't do file-locking SAN: if you have multiple users writing to a volume at the same time, it's still using an ethernet file-sharing protocol, like a NAS. If you want people to connect using iSCSI over ethernet, or true fibre channel connections (all of which can be installed right into the back of the EVO storage device), they need to adhere to these SAN volume-locking rules,

Speaker 0 04:08 and they run the SANmp software on their editing workstations to choose which volume they're connecting to, and whether it's read-only or read/write. Now, we were also talking about rendering locally and storing project files locally, and here's the thing: you have to think about your workflow a little with a volume-locking SAN. Maybe you create one big volume for the raw capture files, and really only Frank and his ingest system are usually connected to that one read/write, because he's the one ingesting everything. Then you carve up a number of other, littler volumes, and for each of those you grant read/write access independently. So say you have five editors plus an ingest system: you create one big volume for the ingest guy,

Speaker 0 04:55 and he's usually connected to it read/write, because he needs to add new footage to it. Then you create five littler volumes for your editors, Edit 1 through Edit 5, and each editor is usually connected to his or her own volume with read/write access. That's where they put their project files, that's where they point their renders, and that's where they put any miscellaneous files they need to store.
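[Editor's note] The carve-up described above, one big ingest volume plus a small working volume per editor, amounts to a mount plan like the following. The volume names and the dictionary format are an editor's abstraction of the "little app with a list of volumes" the hosts describe, not SANmp's real interface.

```python
# Hypothetical mount plan for a five-editor shop with one ingest station.
# Each entry: volume -> {client: mode}; only one "rw" per volume at a time.
MOUNT_PLAN = {
    "MEDIA01": {                 # big shared volume of raw capture files
        "ingest-station": "rw",  # Frank adds new footage here
        "edit-1": "ro", "edit-2": "ro", "edit-3": "ro",
        "edit-4": "ro", "edit-5": "ro",
    },
    # Small per-editor working volumes: project files, renders, misc.
    "EDIT01": {"edit-1": "rw"},
    "EDIT02": {"edit-2": "rw"},
    "EDIT03": {"edit-3": "rw"},
    "EDIT04": {"edit-4": "rw"},
    "EDIT05": {"edit-5": "rw"},
}

# Sanity-check the single-writer rule before rolling the plan out.
for volume, mounts in MOUNT_PLAN.items():
    writers = [c for c, mode in mounts.items() if mode == "rw"]
    assert len(writers) <= 1, f"{volume} has multiple writers: {writers}"
```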
You could still have someone read off of somebody else's working-space volume, and you can change who's connected to which working-space volume at any given time, right through the SANmp application. It's a really workable way of operating. And again, the cost of these types of systems is a lot less than a true file-locking fibre channel SAN, yet the performance, if people are connected over fibre channel, is a lot greater than a typical gigabit ethernet system.

Speaker 1 05:49 You know, one thing we haven't talked much about, and I do want to wrap up here, but I think this might be a good way to close it out: when we talked about media asset management, we talked about how important it was to have somebody who is the point person within the organization, to really manage the process, what it is, what it isn't, and how it's used. Right? So do we need...

Speaker 0 06:15 You need a MOM, a media operations manager. We need a MOM. No, it's a good point. Moving to these centralized storage systems is a great opportunity to start thinking a little more strategically, and in a more organized fashion, about your data, because a pile of individual FireWire or desktop drives almost inherently leads to mismanagement: the spreading around of your data and losing track of which footage is on which drive.

Speaker 1 06:50 And this really is the precursor to media asset management. To get to that place, you first have to have everything centralized, even if you don't yet know how to manage it. It's very easy

Speaker 0 07:05 for the centralized storage system to become almost as confused, absent management, as to where the files are.

Speaker 1 07:14 I've seen file servers; in another life I was an IT guy, and I think this still holds true today: if you give a bunch of users unregulated access to storage space on a network, they will do whatever the hell they want. You need a taskmaster. You need someone who...

Speaker 0 07:35 They need to be management-oriented. They need to be able to say: guys, here's a directory structure. Here's the root level; here's a group of first-level subdirectories with very specific names for very specific types of stuff. For every client we'll always create a directory with a very specific client name, every project file we generate will carry that client name as part of the file name, and it will always live in that client's folder. And their folder is always going to have these same subdirectories: project files, raw footage, finished pieces...

Speaker 1 08:16 Sure, miscellaneous extra data, whatever. So, having a system. This is all about organization at the end of the day. Every single system we implement for video workgroup environments, and the workflow associated with editing and, ultimately, the management of that asset and its archive and/or backup, comes down to keeping your stuff organized.
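[Editor's note] The naming and folder discipline described above is easy to automate. A minimal sketch, assuming a folder-per-client convention with a fixed set of subdirectories; the subfolder list, paths, and names are invented for illustration.

```python
from pathlib import Path

# First-level subdirectories every client folder gets, per the convention
# described above; the exact list is an assumption for illustration.
SUBDIRS = ["project_files", "raw_footage", "finished_pieces", "misc"]


def scaffold_client(root: Path, client: str) -> Path:
    """Create <root>/<client>/ with the standard subdirectories."""
    client_dir = root / client
    for sub in SUBDIRS:
        (client_dir / sub).mkdir(parents=True, exist_ok=True)
    return client_dir


def project_filename(client: str, project: str, ext: str = "fcp") -> str:
    """Every project file carries the client name, per the convention."""
    return f"{client}_{project}.{ext}"


# Example: set up a new client. On the SAN this root might be something
# like /Volumes/MEDIA01/clients; a relative path keeps the demo runnable.
root = Path("clients")
scaffold_client(root, "acme_corp")
print(project_filename("acme_corp", "spring_promo"))  # acme_corp_spring_promo.fcp
```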
Speaker 0 08:44 Yeah, and all the benefits you get off a SAN: the RAID protection, the high performance, and, again, multiple people; I don't know if we've stressed this enough. When you have a video file that someone needs to manipulate for a project, on a SAN, or even on a file server, you don't need to duplicate it for multiple people to work with it at the same time. If they're working with that media non-destructively, they can all be reading off of it and manipulating it within the context of the various individual sequences and projects they're working on. All of these benefits are great, but you also need some organization in the way you manage these centralized file systems, or it just becomes a virtual ugly pile of FireWire drives: maybe a little less precarious physically, but not necessarily delivering the benefits a consolidated storage system can provide. It's centralized and organized.

Speaker 1 09:44 It is. And at the end of the day, what does that mean for you? It means you're spending less time looking for stuff, doing stuff that is not making money,

Speaker 0 09:53 copying files between drives, watching progress bars crawl across the screen while data copies and you can't really work effectively until that copy finishes.

Speaker 1 10:03 We all have billable hours, and at the end of the day, that's what we've got to focus on.

Speaker 0 10:08 Right. A SAN is not necessarily a luxury. When people really think about what eats up the hours in their day that could be billable, how much time is spent waiting for files to copy between drives or searching for a file when they don't know which of five FireWire drives it's on, and they weigh that against the billable time they could apply to their work, these SAN systems, or file servers, or whatever approach you take to workgroup storage, can really pay for themselves. No one would put these things in if all they were was a cost. And I don't stress this just as a sales guy whose living is based on this stuff; I say it as a consultant who is well aware of the realistic scenarios in a lot of these environments. These things can save you significant time and allow you to make more money,

Speaker 0 11:01 because your time can be spent on the creative tasks that, at the end of the day, your clients are paying you for. When they commission you to make a video, they are not paying you to copy data between drives, and they're not paying you to find a file on your third B-roll drive from 2008 that you need to reincorporate into the new project they're paying for. They're paying you for the product. They're paying you for your creativity, your skill set, your narrative and storytelling skills through moving pictures and audio and graphics. They don't care that you have to spend extra time in your day looking for stuff; they don't care that you have to copy stuff; they don't care that your FireWire drive died. They're paying you for the end product.

Speaker 0 11:51 And SANs can let you bring in more of that revenue and cut out more of those strenuous activities. That's what SANs do: SANs save money. They do; we love it. All right, so that was it. This is a complex subject, and we are always available to talk more about it, so give us a holler. I'm Nick Gold, and I'm Merrel Davis; we're with Chesapeake Systems, chesa.com. Please rate the podcast on iTunes.
We've made it into the New & Noteworthy sub-sub-subsection of the Technology section on iTunes. Thank you for listening, and until next week, we are The Workflow Show. Peace out. Later.
