#27 NAS vs. SAN Made Clear

The Workflow Show
December 19, 2014 | 01:33:16

Show Notes

What's behind that seemingly innocuous small hard drive icon on the desktop of client workstations within a collaborative video post-production environment? The answer to that question is fundamental in determining the foundation of a facility's operating procedures. In our work as an IT-centric media systems integrator, we at Chesapeake Systems typically present our clients with storage options that involve either a NAS (network attached storage) or SAN (storage area network) solution. Is knowing the difference between the two all that important for the client to understand? We have found that when clients are cognizant of the differences, they are doing themselves a big favor, because being aware of the intricacies helps us home in on the most effective system, one that sits at the ideal cross-point of budget and performance.

The reversed acronyms of NAS and SAN can give a false sense of clear-cut distinction between the two. While it would be correct to say that the differences generally lie in the way data is accessed, the factors in deciding which solution is best for a particular environment are many indeed, and it's certainly not all black and white. For example, we serve video clients that have separate NAS and SAN storage arrangements within their facilities, and sometimes incorporating a NAS as a component of a larger SAN is advisable as well. Hmmm, a bit confusing? You're not alone. You can of course comb the web for articles about the differences between NAS and SAN storage solutions, but you will likely not find a more thorough yet clearly presented explanation than in this 90-minute discussion between co-hosts Nick Gold and Jason Whetstone.

As always, this episode can also be accessed via iTunes. We welcome your comments below, or feel free to email us directly. We are in our third year of producing this audio podcast series, and we invite you to check out previous episodes. Want to discuss your particular media workflow situation with one of our expert consultants? Email the team or call 410-752-7729.

SHOW NOTES
NFS
throughput and latency
link aggregation
DAS (Direct Attached Storage)
HFS+
RAID 0 desktop drives
FireWire 800 drive
byte vs. bit
Elemental Technologies
Telestream Episode
XenData
StorageDNA
CPU vs. GPU
NFS, SMB, CIFS
AFP
StorNext
Xsan
fibre channel protocol
SFP transceivers
NIC
HBA (Host Bus Adaptor)
SNS’s ShareBrowser (providing Spotlight-like search for a SAN)
EMC Isilon
Nexsan E-Series RAID storage
Quantum QX storage
Open-E
NAS head

Episode Transcript

Speaker 0 00:01 Welcome to The Workflow Show. I'm Nick Gold and I'm here with my cohost, Jason Whetstone. Hello, everybody. And this is episode 303, "When to NAS and When to SAN," the differences between a NAS, a.k.a. a file server, and a SAN, a storage area network. And NAS is not just SAN spelled backwards, right? It stands for network attached; it's not like "god" and "dog" or anything like that. It's a difference in the entire way you build a storage network. There are a lot of differences between the two, but we hit this all the time, because both of these storage networking technologies are the number-one options for people building collaborative video storage environments. Right. And yet they're very different technologies, even though at the end of the day it's a not-terribly-sexy hard drive icon on your desktop. Speaker 0 01:10 Exactly. But man, what goes into that hard drive icon is so important. It's pretty crazy. Yeah, there's a lot behind that little hard drive icon. To your everyday user, one versus the other, looking at files in the Finder, it really doesn't fundamentally matter; one looks like one icon, the other looks like another. If it's working well and it's well engineered, frankly you don't want the user to ever think of it differently than any other hard drive icon on the desktop. It's there when they need it to be. It allows for the type of collaborative workflows they may need to participate in with their colleagues as far as sharing files, whether those are media files or project files, what have you. It needs to be compatible with the applications they're using at a given moment. It needs to be up to the performance tasks that the user needs. Speaker 0 02:03 Does it have the read speeds and the write speeds someone needs to be working with the formats of video and the type of projects they happen to find themselves in at a given moment? And it needs to be a reliable piece of storage. We talked about that: it's there when you want it to be. But is it ready to protect your data? Are there backup systems behind it? Is it compatible with your media asset management environment? All of these things. But at the end of the day, the user shouldn't have to worry about what's behind that hard drive icon. You want to make that almost as obfuscated as possible. You don't want them to have to think about it, other than that it's there, it's reliable, and it meets their needs. Right. That's for people like us to worry about. We've got to make sure there's just the right kind of mojo in that hard drive icon for it to do its thing day in, day out. Speaker 0 02:56 Absolutely. Are we talking about 720p footage? Are we talking about 1080? Are we talking 4K or 6K? How many layers of simultaneous video streams do you need to pull in that Premiere project at once, and can the storage keep up with those speed requirements? Are you doing live ingests? So no, we're not going to get into the details of all of those nuances today with every possible workflow. I think we're going to spend this time talking about what truly are the differences between SAN technology and NAS technology, a.k.a. file servers; when a user might opt for one versus the other; and when one is clearly out of the realm of possibility for a particular environment.
What goes into our thinking when we spec one out versus the other? Or, as is sometimes the case, when we say, "Well, listen, Mr. or Mrs. Customer, you who are asking us for a collaborative, high-performance storage networking solution, Speaker 0 04:06 you could go in the NAS direction or you could go in the SAN direction," and there's a lot of nuance to why a customer may go one way versus the other. So we're going to try to delve into this today so people can be armed with the knowledge to make the best decision, both for the organization today and as the right solution to build on. Because that hard drive icon may not be so special looking, but in some ways it is the foundation of your collaborative video environment. That is it. So let's start with a NAS. I mean, if I go to my System Preferences and I go into Sharing and I turn sharing on, and I give people the IP address or the name of my machine, and they connect to it and it shows up on their desktop, Speaker 0 04:59 there's that icon on the desktop. Is that a NAS? I mean, I can edit off of that. You are now a NAS, Mr. Whetstone; your computer has become a NAS. So let's first explain what NAS means. NAS simply means network attached storage, and that is just another term for the thing most people commonly call a file server. Right? And so yes, for many years people have gone into the sharing preferences on their computer, whether it's a Windows machine or a Mac, and re-shared some or all of the storage that might be inside of or tethered to their machine across an Ethernet network. And that's part of what makes a NAS a NAS: it's using Ethernet networking technology, whether that's single gigabit Ethernet or the newer, faster 10 gigabit Ethernet. NASes and file servers use Ethernet as the shared environment, and the network switches, Speaker 0 05:59 all of that, is Ethernet based. Now, Ethernet doesn't necessarily mean those copper cables most of us plug our computers into the wall with to get on the network when we're not doing wireless. So you can have Ethernet over fiber, then? That's right. And it's an important thing to point out: when we talk about fiber, we sometimes mean it in several different ways, and we'll talk about it more in the SAN context in a few minutes. But you can use fiber optic cables in an Ethernet network, plugging into Ethernet switches that have special types of modules installed in them that allow them to connect to other systems via fiber optics, versus those copper cables with the RJ45 connectors on the end that we'll call twisted pair. Speaker 0 06:45 So fiber isn't necessarily Fibre Channel? No, Fibre Channel we'll talk about later when we get to SANs. But you can have fiber optics involved in Ethernet networks, and while fiber optic cables aren't the only cable that supports 10 GigE, we most commonly use fiber optic cables for 10 gigabit Ethernet networks. Is that because of the distance? Yeah, fiber is both a bandwidth thing and a distance thing. Now, more recently there is a spec that actually uses the copper, twisted-pair, RJ45-terminated Ethernet cables to carry 10 gig Ethernet, but it's much more limited distance-wise than running that same protocol over a fiber optic cable.
Right, that makes sense. So let's talk about what you're doing when you've turned on sharing on your computer. Now it's available on the network as a share point, or, in a more robust environment, Speaker 0 07:39 you build a file server with its own block of storage. That becomes the file server, a dedicated file server system that you're connecting to over an Ethernet network. And that makes a lot more sense, because you would want that machine to only be doing file sharing. Yes, exactly. When we're building a shared storage system of any sort, it becomes a very dedicated system just to do that storage networking in people's environments. We don't tend to double-duty it; it's not a workstation as well as a file server. And again, there are always some caveats here and there for particular environments and workflows, but you typically want your file server to be a dedicated piece of equipment. And this starts to get into the nuts and bolts of what a file server really is, which will also differentiate it from a SAN. So a file server starts with a server. Speaker 0 08:29 Again, in the case you were outlining earlier, your computer becomes the server if you turn on sharing. But in this environment it's a dedicated server. It's a computer: it has a CPU and RAM and a motherboard and all of the things that make a computer a computer, but in a server form factor. It's typically something that's going to be rack mounted, although there are small-office/home-office servers that are more of a small form factor. And that's important, because the server is running, number one, an operating system. It could be a Mac mini running OS X; that's less common for us these days. It could be a Windows server with Windows file sharing turned on; we do do it. I know. At the root, it's a server that's sharing out a file system. Speaker 0 09:23 Exactly. And often now, the file servers we build for video workgroup environments are running some variant of Linux. There's a particular commercial variant of Linux called Open-E that we use on the file-server-appliance type of boxes we tend to sell, and Open-E is a version of Linux that is tuned specifically for storage boxes. It can share storage out in several ways, but one of the ways it does that is through file services. So you have to picture a server: there's this computer with an operating system and some software running on it, typically as part of the operating system, that is there specifically to share storage out to other users so they can utilize it by hitting it across an Ethernet network, or even, Speaker 0 10:16 theoretically, the internet at large if you VPN in. That's a set of software services that are running. And that file server, the computer running the file services, the NAS box itself, has a block of storage attached to it. The NAS box is the machine that's directly connected to that storage, and what the server sitting on top of that storage does is essentially present a virtualized version of the file system that's actually running on that storage. It turns it into a block of addressable storage instead of just a bunch of hard drives sitting under the hood. Virtual...
So when I hear the word "virtualized," I tend to think the machine is doing something else at the same time as it's doing that virtual thing. A little bit slower, maybe. Well, exactly. Speaker 0 11:12 Think about this. When you connect to a file server from your computer, a little shared-drive icon comes up on your desktop. I think on OS X it's the little people holding hands. It's a network share, but it looks like a volume; it looks like a drive icon. And so it's tempting to think about it like it's just another hard drive you've plugged in. Right, but it isn't. Do you want to know what it is, Jason? It's a lie. It's a lie because as you write files to that thing that looks like a drive icon, as you copy data to it, ingest video, render something out to it, it's tempting to think that you are addressing that storage directly. But you're not, because it's a virtualized file system. Speaker 0 12:04 Exactly. What you are really doing is talking to the file server computer over the network using one of several common file sharing protocols. The typical one these days is called SMB/CIFS, and CIFS for a while was kind of the default Windows file sharing protocol, but nowadays even Macs use it as the default because it's pretty much the industry standard. Another one that we sometimes use is called NFS, which has its roots in Linux- and Unix-based operating systems as their file sharing technology. But that drive you're seeing on your desktop is a virtual drive that the file server is presenting to your client system, your connected workstation, as if it were a volume of storage you can directly address. Really, that's not true. The thing that writes the data to the underlying storage in the file server is the file server computer itself. Speaker 0 13:07 It's running those file services, so it's almost like a middleman. That's a perfect way of looking at it. It is a middleman. It is saying, "You're really transferring data to me at a really high speed, and then I'm going to write it to my underlying storage, which really I'm the only one directly addressing, but I'm going to trick you into feeling like you're doing it." And it might suffice for your purposes. So, maybe five years back, when computers weren't as fast, when we didn't have highly optimized operating systems attuned to working specifically as file servers, when networking switches, especially the lower-priced switches in the hundreds-to-several-thousand-dollar range, just weren't as good and efficient, when a lot of factors weren't fully baked the way they are today, Speaker 0 14:02 we were very reticent to use file servers, and the underlying storage they were virtually presenting to clients, as real-time video production storage volumes. Because of the speed? The speed and the latency. It's important to differentiate between sheer throughput as one metric of speed, and latency, meaning are there little hiccups involved, as another.
Well, back then we were very conservative about it, because the act of this file server serving as a mediator to the underlying storage could affect both throughput speeds and latency, causing hiccups. And for real-time video ingest, for instance, which was a lot more common five or six years ago before file-based workflows almost completely took over, even little hiccups could mean dropped frames. Plus, 10 gigabit Ethernet really wasn't even an option at that point, Speaker 0 14:59 largely for price reasons and just availability of equipment. So we were squeezed into these one-gigabit pipes universally, and for all of these reasons we didn't think NASes, file servers, were suitable production storage. Well, what about link aggregation? So link aggregation was one way we dealt with this back in the day. Link aggregation is a way of taking several independent Ethernet links, cables really. Picture the file server as one box and your Ethernet switch as another box in the middle with some connections, and then the Ethernet switch goes out to all of your clients. Right. Well, to open up some of the bandwidth between potential bottlenecks, and a lot of the time the bottleneck was between the file server itself and the switch because you had one pipe going into the switch, we would aggregate multiple single-gigabit connections together, Speaker 0 15:52 so there was enough bandwidth between the actual storage system and the Ethernet switching environment. That way, all the users who might need to connect to it at once would have enough aggregate bandwidth between them to get to the storage at the same time, without anyone hogging the bandwidth. However, one of the trends that changed over the last five years or so is that 10 gigabit Ethernet became much more viable. So we don't have to aggregate individual gigabit links anymore to increase bandwidth across any part of this network. We can have one or several 10 gig links between a file server and a switch that has maybe a handful of 10 gig ports, and then have a bunch of clients connected via single gigabit ports. And that opened up a lot of bottlenecks. Speaker 0 16:40 Plus we have better file servers now, and better switches. There's a lot less of both of those latency hiccups and throughput bottlenecks. And so we've gotten to a point where file servers can give users, potentially, if it's well engineered, with some caveats we'll get into, enough bandwidth to do actual editing off of. Sure. It sounds like if you really do have a tuned box that's made for sharing files, it could be a pretty powerful system. Yes, and that's an important thing, right? You can buy a Drobo that presents itself as a file server and say, "Well, I could edit off of this, because I can mount it on my desktop," and some applications will be able to use it. The problem is, it's not designed for that. Speaker 0 17:27 There are a lot of prosumer, small-office/home-office, SOHO-type NAS boxes that maybe a single user could do some very basic cutting off of. But the moment it needs to be the foundational element of a workgroup, collaborative, high-performance production storage system (you can tell I've said that, in various iterations, 400 million times in my life, right?), they're not going to cut it.
They're going to fall apart under pressure. Those things are for hosting a few files in a small office environment that you don't need to manipulate in real time, without high bandwidth requirements. We're not talking about video. Not streams of hundred-to-200-megabit ProRes files that you need to be pulling three per workstation simultaneously across a workgroup of six to fifteen users. Those things are going to fall apart under that. Speaker 0 18:19 So, a well-engineered system: we tend to build our own, right? We spec out all the hardware down to the subcomponent. We choose the RAID controller cards, the hard drives. We run Open-E on it. We make sure there's enough bandwidth between what this unit will be and the Ethernet switch environment all the clients connect to it through, to meet an environment's needs. And it can work. Sure. But there are caveats. I can think of one: what happens to the trash can? The trash can. So what happens when you want to throw away a file on a NAS? That's a good question. What does happen? Speaker 0 19:07 I can tell you: you get a little dialog box that pops up and says, "Are you sure you want to delete this file? It will be gone for good." And if you say yes, the file goes away. It doesn't go into the trash can or the recycling bin; it just goes away. It's gone. And this is part of the byproduct of the fact that you're working with a virtualized file system. It doesn't work exactly the same way as built-in, what we call direct attached storage, storage that is being locally addressed by your computer through a standard file system. On the Mac that's usually HFS+, the Mac OS Extended file system. That's not what you're interacting with with a NAS. There is a file system under the hood, but the only box that's really touching that file system directly is the file server itself. So things like that behave differently. But there are also still a lot of performance ramifications, even with all the improvements. Well, let's talk about this for a moment. Speaker 0 19:55 You talked about 10 gigabit Ethernet, right? Yeah. So I've heard that with a SAN we're not really getting speeds above four to eight gigabit. So wouldn't a 10 gig Ethernet link be faster? Well, let's first talk about single gig, because single gigabit Ethernet, especially from the clients to the Ethernet switch that bridges the clients to the NAS: in a lot of environments we're doing, it's still single gig connectivity between those clients and that switch. Right. And some of that's for cost reasons; they don't want to run the fiber optics that 10 gig requires. We'll talk a little more about the networking side, but just as far as the speed of gigabit: gigabit, in real-world scenarios, on a user-by-user basis, and this assumes the back end of the system is well engineered and functioning properly, and that it's a dedicated Ethernet network. Speaker 0 20:57 This is an important point: a dedicated Ethernet network. So not the same one you hit Safari on and watch YouTube and all that kind of stuff, whether you use Wi-Fi for your general network access or you have an Ethernet port on a gigabit or 10/100 network that gives you access to the internet, your mail server, maybe a company file server, directory services, whether that's Active Directory or Open Directory.
That's not the network we're going to use for the shared storage network for video production purposes, because we don't want all the other traffic on that network (I'm pulling down a webpage, I'm updating something, whatever it is) to in any way, shape, or form get in the way of the video traffic that needs to meet certain real-time performance metrics. So when we do a file server, a NAS using Ethernet file sharing, as a workgroup storage system, we will run it as a totally separate network from your general network. That often means your client workstation will need a second gigabit run connected to it, Speaker 0 22:07 going to usually a dedicated switch that, again, is just to connect clients to the video shared-storage NAS. Okay. And again, that's one of the best practices we've developed over years of doing this stuff, because we found that the moment you try to combine general networking and this video-specific networking, that's one of the things that creates more issues and clamps down on the bandwidth. So a lot of people are still using single gig. And single gig, in the best possible situation (the shared storage system itself is high performance and can meet the aggregate needs of all of the users at once, it's on a dedicated network, and there isn't a performance bottleneck between the NAS itself and the switch), if single gig is how the clients are connecting to it, your performance is going to be about akin to editing off of a two-drive RAID 0 FireWire 800 drive, like a G-RAID. These are the desktop drives that actually have two hard drives built in, and because they use RAID 0, which is a stripe across those two drives, they combine some of the performance and all of the space of those two physical hard drive mechanisms, and you just see it as a single icon on your desktop. Speaker 0 23:26 A single drive. And gigabit Ethernet is, on paper, a little bit faster than the 800 megabits per second that FireWire 800 gives you (that's the 800 in FireWire 800). And frankly, a lot of the time the little RAID controller chips doing the RAID 0 in one of those desktop drives aren't the most high-performance things in the world. There are a lot of bottlenecks in a lot of the FireWire 800 chipsets out there that actually turn the drive into a FireWire 800 device. So really, in best-case conditions, a gigabit connection to a well-engineered NAS on a dedicated network is probably going to net you a little bit better performance than connecting to a two-drive RAID 0 FireWire 800 drive on your desktop. You know, I'll use megabytes a second just to confuse things, but I don't know, we kind of go back and forth. Speaker 0 24:24 Bits and bytes. Just remember, folks, a bit is one eighth of a byte; a byte is eight bits. That's how you do the math there. It's about all the arithmetic my mind is able to handle, and usually that requires a calculator. But let's talk about megabytes a second. So on a FireWire 800 drive, I've often seen the read speed top out at around 60 to 70 megabytes a second. In a gigabit environment, we see read speeds off the shared storage system probably more in the 70 to 85 megabytes a second range. And so what do you get over that, assuming, again, everything is all right in the world? Well, when we talk about guarantees, we try to throw kind of conservative numbers out there. Right, right.
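For readers following along with the arithmetic, here is a minimal sketch of the bits-versus-bytes conversion described above. The efficiency factors are illustrative assumptions chosen to land near the real-world figures quoted in the episode; they are not measured values.

```python
# Rough bits-vs-bytes arithmetic for the link speeds discussed above.
# Efficiency factors are illustrative assumptions (protocol overhead,
# NAS head latency, chipset limits), not measurements.

def usable_mb_per_sec(link_mbit: float, efficiency: float) -> float:
    """Convert a nominal link speed in megabits/sec to rough usable megabytes/sec."""
    return (link_mbit / 8) * efficiency  # 8 bits per byte

links = {
    "FireWire 800 (2-drive RAID 0)": (800, 0.65),    # ~60-70 MB/s observed, per the episode
    "1 Gigabit Ethernet to a NAS":   (1000, 0.65),   # ~70-85 MB/s observed, per the episode
    "10 Gigabit Ethernet to a NAS":  (10000, 0.35),  # ~4-6x single gig in practice, per the episode
}

for name, (mbit, eff) in links.items():
    print(f"{name}: ~{usable_mb_per_sec(mbit, eff):.0f} MB/s usable")
```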
For standard ProRes, ProRes 422, you'll probably be able to pull three HD ProRes 422 streams in real time over a gigabit Ethernet connection. Speaker 0 25:27 We're talking like 720p60, or 1080 at 59.94, frames... fields, rather. So maybe 1080p30, or 1080i59.94, or 720p60; you'll probably get three streams of that in real time through that connection. If you move up to the HQ variant of ProRes, you're probably talking about only a guaranteed two streams. So when we talk about stream counts, and why it may be better to have more streams, users really need to think about their video projects. This is something that is very specific to NLEs, to video editing. After Effects, because it caches so much in RAM and things like that, doesn't stress the storage the same way video editing itself does. It's not real time, usually. Exactly. But what's "real time" when you're editing, right? So people tend to know that when you render out, you're basically running your project file, which is essentially an instruction sheet. Speaker 0 26:30 It's a recipe, right? That's all a project file is: a recipe. And the stuff that you're manipulating, you're never destructively manipulating. The constituent elements, the video files and graphics and photos that are part of your editing project, are non-destructively being run through this set of directions; they're referenced. And what you do create, or spit out, whenever you render is a render file. A render file takes that raw stuff, runs it through your instructions, your recipe, and spits something out afterwards. But these days, NLEs, nonlinear editing programs like Premiere or Final Cut Pro X, or even back in the Final Cut 7 days, are really good at doing real-time renders in the background that show you what an effect is going to look like, what a graphic is going to look like, what picture-in-picture video is going to look like. Speaker 0 27:29 What it did, and continues to do, is real-time rendering in the background, but it's not spitting out a new file. It's doing it in the background using your computer's processing power, so you don't have to wait for that render. Exactly. And so what happens as a result is that when you're previewing these kinds of things, you haven't flattened everything out into a single layer of video the way you have if you do a Render All command and it actually creates true, dedicated render files. Because when you render something out, you're flattening all the constituent layers down to a single file, and so you only need to be able to pull one layer's worth of video performance in order to view it. But before you render, you're potentially having to pull multiple streams of video at the same time, in real time. Speaker 0 28:19 So I think the easiest way for people to think of this is your typical cross dissolve, right? Sure. Think about your project: you might have twenty layers of video, but what we're really talking about here is those moments in the project where, based on effects and other things you're doing, there are multiple layers of video actually playing to screen at the same time. So think about a cross dissolve. You've got one video and then another video, but there are, say, three seconds where they're overlaid, because there's a dissolve going on between one and the other.
While you're just previewing those effects, before you actually do a true render to generate a little separate file that represents that intermediary period, you're actually having to pull both of those streams of video at the same time for the three seconds of that overlap. Speaker 0 29:07 Okay, so it's not just like me opening up three different videos in the Finder or in QuickTime Player and playing them all alongside each other. That would also have those types of performance requirements, but all I'm saying is that it's something that actually affects users as they're editing. Exactly right. Because the moment it stops being able to pull the performance for those three seconds of the cross dissolve, both streams at once, you will get dropped frames on playback. Now, dropped frames on playback may or may not be really harmful. If it's dropping tons of frames, it may be so bad you can't even tell if that effect is working properly. If it's just one dropped frame, it would just be a quick stutter; if you turn those warnings off, you might not even visually see that the dropped frame occurred on playback. Speaker 0 29:55 Of course, on ingest, dropped frames are a real pain, because you may only have that one shot to ingest something. Right. Well, wait a second though, Nick. We have a lot of clients that are getting away from tape-based workflows, but what about those clients that need to deliver tapes to TV stations, or broadcast? And there are still people who ingest, especially legacy materials, right? People are going to be ingesting off of tape for decades, just because of all the legacy materials they've accumulated. Well, again, that's another time where you need to make sure that the write speed to the shared storage system you're on can keep up with the one stream of video you're ingesting. And most shared storage systems, and storage in general, have slightly slower write speeds than read speeds. Speaker 0 30:43 So you do need to bear that in mind. But the reality is, I said you can get two streams of ProRes HQ, or maybe three streams of regular ProRes 422, on a read off of a file server; you're probably not going to have a problem ingesting one stream of that. And if you're using lower-bit-rate codecs like XDCAM 35 or 50, your performance requirements are even lower; you can fit two to three of those streams in the bandwidth of a single stream of ProRes or ProRes HQ. So gigabit might be borderline for you if you have a ProRes workflow, but might be very easy to work with if the top codec you're ever using is 50-megabit-a-second XDCAM HD off of an SxS card. Speaker 0 31:31 So you really need to think through the codecs and the number of layers your projects may call for. Picture the Brady Bunch thing, right? Even if you had nine different streams of video on the same screen... Yeah, exactly. Before you render that, it's going to have to pull nine streams simultaneously.
So you have to multiply the bit rate of one stream times the number of streams, and that'll tell you whether you have enough headroom in your connection to the shared storage (assuming all of those streams are on that same shared storage device) to even be able to pull them through without it going crazy and dropping frames, and without you having to send it off to motion graphics. And what we find is that most of our clients are pretty comfortable. They say, well, normally we're at two to three streams, and maybe projects get crazy if they're at five streams at once. Speaker 0 32:22 Pre-render, right? Right. And so a client has to ask themselves: what if I told you that over this gigabit connection, if you're typically working in ProRes HQ, I can guarantee that you'll probably be able to preview a cross dissolve, or any two-layer thing, without having to render, but the moment you start getting into three-plus layers at once, you're probably going to have to do a render command to flatten it out into that single file, so it lowers the bandwidth requirements? So the thing about gigabit and NAS is that people still have to think about it. It's not unlimited bandwidth in this day and age, with HD being totally common and even things like 4K starting to hit. You can't really do much with 4K material, maybe one layer, over a gigabit connection. Speaker 0 33:19 So gigabit still requires some thinking, which is one reason... you know we hate thinking around here. No, but seriously, if you have to think about it even around today's requirements, that's probably going to represent a fairly low ceiling on tomorrow's endeavors. Sure. And you were talking about 10 gig. Well, now that we have 10 gig, what if we just have enough 10 gig pipes between the NAS and that dedicated video Ethernet switch to give you enough back-end performance to sustain the entire workgroup working at once, and then you run 10 gig out to the individual client workstations? Well, that helps a lot. That gives you way more headroom. Sure. It doesn't give you ten times as much, though, as the pure numbers on paper would seem to indicate, because remember, we were talking earlier about all of those latencies and overheads associated with Ethernet networks, Speaker 0 34:16 the file sharing protocols we use, the fact that you're going through a file server head node, that this is essentially a virtualized file system you're interacting with. Some of that overhead still comes into play even if you open that pipe up. So what we tell people, and this is probably a little on the conservative side, and again it's all dependent on the back end of the system being well engineered and there being enough dedicated, total bandwidth between the file server and the switch to sustain everyone: even with all of that in place, people should really only count on maybe a four-to-six-times, or roughly five-times, improvement in performance to a client workstation going from one gig to 10 gig. So not a ten-times performance improvement, or even nine times. Speaker 0 35:10 The reality is, even five times the performance is a lot. That starts to get into a zone where, from a bandwidth perspective, you don't have to think as often. Right.
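Here is a sketch of the stream-count check described above: multiply one stream's bit rate by the number of simultaneous, un-rendered layers and compare it to the usable bandwidth of the client's link. The codec bit rates are approximate 1080-class figures and the usable-link numbers follow the episode's conservative rules of thumb; treat all of them as planning assumptions, not guarantees.

```python
# "Bit rate x number of streams vs. link headroom," as described above.
# All figures are approximations for planning, not guarantees.

APPROX_CODEC_MBPS = {           # megabits per second, per stream (approximate)
    "XDCAM HD 35":   35,
    "XDCAM HD 50":   50,
    "ProRes 422":    147,
    "ProRes 422 HQ": 220,
}

USABLE_LINK_MBPS = {            # conservative usable throughput, not line rate
    "1 GbE":  70 * 8,           # ~70 MB/s to a well-built NAS, per the episode
    "10 GbE": 350 * 8,          # ~5x single gig in practice, per the episode
}

def fits(codec: str, streams: int, link: str) -> bool:
    """True if `streams` simultaneous un-rendered layers of `codec` fit within the link."""
    return APPROX_CODEC_MBPS[codec] * streams <= USABLE_LINK_MBPS[link]

print(fits("ProRes 422", 3, "1 GbE"))      # True  - roughly the episode's guarantee
print(fits("ProRes 422 HQ", 3, "1 GbE"))   # False - render first, or move to 10 GbE
```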
You know, now that it's six or seven streams of ProRes or ProRes HQ, or even more with lower-bit-rate codecs, over 10 gig, it's like, well, I'm not going to have to think about that, because those are some real outlier scenarios where a project might call for that. Right, right. Here's the flip side, though: 10 gig is still much more expensive than single gig. We've got some cheaper switches we can do that may have four 10 gig ports, and max out at four 10 gig ports, and then have a bunch of single gig ports. We typically are doing HP switches because we find that they work well in the real world. Speaker 0 36:02 And they're low enough cost that it's not a mind-bender budget-wise. But that gives you up to four 10 gig ports and up to 48 single gig ports. Well, we're typically going to recommend that you have at least two 10 gig ports going between your NAS and that dedicated video Ethernet switch. Right. So that at most leaves you two more 10 gig ports to run to your client workstations. Those client workstations will need to be outfitted with 10 GigE cards, right? Or, these days, Thunderbolt boxes: basically a 10 gig port on one side to connect to that fiber optic cable, probably, and go to that switch, and on the other side it just gives you a Thunderbolt connector to connect to your Mac via Thunderbolt. We also have PCIe card versions to install into older Macs and Windows machines. But those things are not free. Speaker 0 36:54 I'm thinking probably along the lines of what it would cost to plug in a Fibre Channel interface? They're not much different than the cost of a Fibre Channel interface. And here's the other rub, right? To use a Brian Summa phrase: here's the rub. Yes, he does use that Shakespearean phrase. I know, right? He's such a theatrical guy, although he doesn't hold a candle to you here. Oh, well, thank you. You're Mr. Theater. Thank you, Nick. He was doing some voices before we started recording; I was very impressed. I could do a Charlton Heston voice to tell you what the difference between SAN and NAS is today. Yeah, exactly. Now I feel like I'm really on to something. Speaker 0 37:40 So here's the rub. The moment you need a switch that either out of the gate has more than four 10 gig ports, or maybe even has a dozen of them, or twenty of them, or whatever it may be that you need, you're starting to get into a class of Ethernet switch that is much cheaper than it was a few years ago, but it still ain't cheap, right? So we're typically using Ciscos that come with eight 10 gig ports on board and then have two other module slots that can each take either twenty single gigabit ports or eight more 10 gig ports. And so maybe we have a workgroup that really needs a total of eight 10 gig ports and some single gig for other connections, and we'll sell one of those switches with the twenty-port single gig module board installed. Speaker 0 38:31 So you can mix these connections in your environment. You can look and say, maybe two editors over here need a 10 gig connection and everybody else can have single gig. Absolutely right. And it is important, if someone is looking at a NAS and having a conversation with us, to determine: I need single gig for this user; can I get away with that, or do I absolutely need 10 gig?
Maybe I've got a couple of producer workstations that are doing, at most, some cuts editing or very light stuff, and then I've got a couple of primary rooms that do the more complicated stuff. Then maybe you can get away with a cheaper Ethernet switch that has four 10 gig ports, two of them going to the NAS, two of them going to the high-caliber workstations, and then a number of the gigabit ports, single gig, going to the less robust systems. Speaker 0 39:16 We do that all the time, and a conversation with us can pretty quickly determine what that is, as long as we know what codecs you're working with, what the complexity of the projects is, yada yada yada. But a totally 10-gig NAS-based system, largely because of the cost of the switch that's going to be necessary to give you a good number of 10 gig ports, starts to get a little pricier than a lot of people realize. "I thought I was saving money with Ethernet," and it's like, well, you needed a lot of 10 gig, and that's still a lot of 10 gig. Hey, what about adding things like transcoders? What if I want to put an Elemental or an Episode box or something like that on a system like this, or a MAM, or an archive system, an application server for a MAM, something like that? Speaker 0 39:58 I mean, we're starting to look at other machines maybe needing some higher speed, or just a little more horsepower, to access that storage. Again, that's where an even deeper look at workflow is going to be necessary, right? Because we have to look at those various systems doing their thing. Maybe it's a tape archive server, so it's running something like Archiware, or Atempo Digital Archive, or one of many different tape archive packages, XenData or StorageDNA or whatever. Right. It is going to need to mostly read data off of the storage system and then write it to tape that it is typically hosting directly itself, because it's going to be tethered to a tape library or a tape drive. Right, right. Well, we know that tape drives write at certain speeds, and the reality is even a single tape drive can be written to faster than the speed that box will be able to read off of the NAS Speaker 0 41:02 if it's only connected via a single gigabit connection. Right. So then you throw in a tape library that has three or four drives, and they may all be reading and writing at the same time, depending on certain types of operations that may be going on. That single gig port will be a bottleneck. Right. It won't go as fast as it normally would unless you give it a 10 gig port. So we will often put archive servers on a 10 gig link to a NAS, or to the switch that acts as the intermediary to the NAS. We will often put MAM systems on a 10 gig port, because they may have to do broad ingesting of assets, moving assets around, bouncing assets between themselves and an archive system, transcoding assets. Transcoders by themselves, if you have something like an Episode Engine, are kind of an area where we have to think about it a little bit, because one of the things about boxes that move files and transcode files is that file movement will typically saturate a link, right? Speaker 0 42:04 If I'm copying something using some box as an intermediary, even a client workstation, right? Yeah.
Or some part of a MAM system that might automatically be moving files based on certain criteria being met. As long as the storage system or systems it's connected to can sustain the speeds behind the scenes, based on the number of drives and just the speed of the storage system in general, it'll just grab whatever it can grab. It'll pull as much data as it can while that transfer is occurring and saturate a one gig link, or saturate a 10 gig link. And that can have performance ramifications for users who may be simultaneously using that storage system, because the storage system only has so much performance, in terms of reads and writes, to dole out. Right. And so anything that is suddenly just mass-reading data off of it, because, say, a file is being transferred off of the system, or mass-writing data to it, because a file is being copied or ingested onto the system, Speaker 0 43:06 can potentially saturate those connections in a way that an editor typically wouldn't, even through normal video editing operations. So that's one thing we have to look at. Transcoders are kind of a funny use case, because depending on the nature of the jobs, the codecs, whether it's a bunch of smaller parallelized jobs happening simultaneously versus one big job, it may or may not be disk speed that is the bottleneck for those particular transcoding operations. So it could be the transcode itself that's the bottleneck. Maybe the transcoder can't read the files, or write them, fast enough. But what we found, and this was a little bit of a surprise to me probably a couple of years ago, is that we started to realize... well, I thought we were still at the point where transcodes were typically bottlenecked by CPU. Speaker 0 43:59 You know, your processing power, for transcoders that use CPUs as their main transcode engine, or, in the case of GPU-accelerated transcoders like Elemental, that the GPU still really represented the bottleneck for a lot of the types of transcode jobs. So we would never get into a situation of saturating our link to the storage with either reads or writes, because, gosh, isn't this stuff still really bound by processing power? Right. Well, it turns out that's actually not the case a lot of the time these days, now that we have single boxes that may have 20 or 40 cores. I was talking the other day about Intel's next generation of Xeon chips; they're going to have one that's 18 physical cores per chip that act like 36, Speaker 0 44:53 because of the hyperthreading function of these CPUs, and you can have two of them in a single 1RU box. So it's like having a 72-core computer in a single rack unit of rack space. So imagine putting any kind of multithreaded transcoding engine on something like that. There have been way more advancements in the last five years in CPU than in, well, "maybe I'm still stuck on the same gigabit connection that I was on five years ago." Right, right. And so the speed to the storage has started to matter much more for transcoding operations than the horsepower of the transcoder itself, because they can just chug through so much more work. Because it's like having 20 or 30 machines in one now, that link to the storage becomes the bottleneck. But that might be good.
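A rough budgeting sketch of the situation described above: a transcoder or MAM doing bulk file movement will pull whatever its link allows, and that draw comes out of the same aggregate read/write budget the editors depend on. Every number here is an illustrative assumption.

```python
# Illustrative throughput budgeting: editors + ingest + an uncapped transcoder
# all draw from the same aggregate storage performance. Numbers are assumptions.

AGGREGATE_STORAGE_MBPS = 8000           # what the NAS back end can sustain (assumed)

editors = 8 * 3 * 147                   # 8 editors x 3 ProRes 422 streams each
ingest = 1 * 147                        # one real-time ingest stream
transcoder_link_mbps = 10_000           # an uncapped 10 GbE transcoder link

demand = editors + ingest + transcoder_link_mbps
print(f"Demand: {demand} Mb/s vs. budget: {AGGREGATE_STORAGE_MBPS} Mb/s")
# Over budget: cap the transcoder, schedule it off-hours, or give it its own
# pool or share, as the hosts discuss next.
```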
You may want to hamstring the transcoder so it doesn't completely bog down the read/write performance of the underlying storage system just through doing transcoding activities. Speaker 0 45:57 You don't want that to disrupt an editor. You may decide you need a separate silo for transcoding, completely separated from the storage system that's being used by the editors, or at least a dedicated set of drives in a file server box that is re-shared as a separate share specifically for transcoding purposes. Because if we had the transcoder saturating a 10 gig link, that might dramatically impact the real-time editing environment: people are suddenly dropping frames, it's "what the hell is going on, no one's editing," all sorts of crazy stuff, and it's like, ooh, we've got that transcode running. One of the applications I can think of right off the top of my head would be a MAM doing proxy transcodes; that's happening most of the time as ingests are happening. Exactly. So MAMs are sucking that down and definitely taking a toll on both the read and write performance of the underlying storage system. Speaker 0 46:51 So splitting that out into a separate pool of storage would be good. I mean, again, it's not always necessary. These are things people have to look at when they look at the totality of the system. Can it keep up? Do we want to artificially limit anyone's links to it by putting them on single gig instead of 10 gig? There are a lot of different things we have to consider. Right. So those are NASes; there are definitely some bottlenecks and overheads. Another thing we didn't talk about are the protocols, right? The way in which the file server virtualizes the file system. So we're talking [inaudible]. We don't use AFP as much anymore, because it was the Apple-centric one that even Apple has kind of given up on. Really, the world has come down to NFS, and, I should mention, Macs support both of these from the client side. So you have NFS, if you want the more old-school, Unix-style one, and SMB/CIFS, which is a little more new-school and which Apple has kind of adopted as the standard. Speaker 0 47:48 Well, here's the thing: from a client perspective, first of all, connecting to those shares isn't exactly the same process depending on which one you're working with. There are ways to work around this with some third-party software and some scripting and all of that, but it's not like you turn on your machine and, the way the built-in hard drive is just there, the NAS automatically appears. There may be a procedure a user has to go through to connect to it. Sure, we can automate this, but there are still differences, and the reality is that SMB/CIFS, from a client perspective, is a little nicer to work with. From what I understand, it's a little more natural; I think it's a little easier for us to auto-mount an SMB/CIFS share when a user logs in than an NFS share. So someone might say, well, why would you ever use this kind of bare-bones NFS thing at all?
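For the auto-mount scripting just mentioned, here is a minimal sketch of what a login-time mount script for a Mac client might look like. The NAS hostname, share names, and mount points are placeholders; mount_smbfs and mount_nfs are the stock macOS mount helpers, and credential or Keychain handling is omitted.

```python
#!/usr/bin/env python3
# Minimal sketch of a login-time auto-mount script for a Mac NAS client.
# Hostname, shares, and mount points below are hypothetical placeholders.
import os
import subprocess

SERVER = "nas.example.local"     # placeholder NAS hostname
MOUNTS = [
    # (protocol, remote share, local mount point)
    ("smb", f"//editor@{SERVER}/Projects", "/Volumes/Projects"),
    ("nfs", f"{SERVER}:/export/Media",     "/Volumes/Media"),  # e.g. for Final Cut Pro X
]

for proto, remote, local in MOUNTS:
    os.makedirs(local, exist_ok=True)            # may require elevated privileges
    if proto == "smb":
        cmd = ["mount_smbfs", remote, local]
    else:
        cmd = ["mount_nfs", "-o", "resvport", remote, local]
    subprocess.run(cmd, check=True)              # prompts/Keychain handling omitted
```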
There are particular apps that don't respect or even allow you to work with an SMB sift share, but do work with NFS final cut 10 being a huge one. So granted, you know, not a ton of our users in a shared work group, you know, storage environments are using final cut 10 but there are some, sure. And if you're gonna use final cut 10 with shared NAS storage, you have to connect to the NAS using NFS, at least as of today. Maybe that could change in the future. Speaker 0 49:34 My thinking is that Apple probably did it because the performance of NFS tends to be a little better. It's again, because it's a little more bare bones as far as some of the deeper level functionality that these file sharing protocols can offer. For instance with SMB sifts, you can set up its own set of users and groups so you can kind of apply permissions using SMB sifts itself, right, and set up users or groups for permission purposes on the file server. NFS doesn't work that way. NFS would require that you have a separate directory services system. So yeah, if you have final cut 10 you're going to need to use NFS, but it doesn't have its own set of permissions. So you'll need directory services if you want to, you know, rule who can manipulate content in specific directory or which group of users. So yeah, there's just a lot of whole other discussion entirely is like I hear a lot of, I hear a lot of users and admin saying, well, we just want to have our San wide open, you know? Speaker 0 50:36 Yeah. That could be a whole other episode. We sure could. Write that one down. Yeah. The users groups, permissions, directory services, permissions to be one of the biggest misunderstood children of the, I'd say it's 50% of the support video production it. Yeah. 50% of our support phone calls at some in some way relate to permission. So we will address that in a, in a future one. But sure. Permission systems do have an impact when you're putting together a NAS or a San system. So again, you know we talked about the NAS is there's the file protocols, there's the latencies, there's one gig versus 10 gig. There's having it on a dedicated ethernet network that isn't your general ethernet network. Having a well constructed file server behind the scenes. There's just a lot of stuff to it and it requires some thought. And yet we now know how to execute them pretty well and they are a viable choice. Speaker 0 51:30 So that leaves, now let's talk about the sand. Mr gold, tell us what the difference is between a NAS and a sand. Yes. So they're not just the opposite of one another despite them the same three letters in a different order. So NAS was network attached storage and represents a virtualized file system that a file server is sharing out and tricking you into believing that you're manipulating directly. A San sounds very similar sort of, it's a storage area network, a storage area network. I don't know that it came up with that name. Well, the way that definitely implies something a little bit different than does it and what we just talked about, I don't know I, well okay, I'll put it this way. It does in that a NAS is a network attached storage device and you know in NAS when we talk about a NAS, we're talking about that box on an ethernet network that's acting as a file server. Speaker 0 52:36 Sure. With a sand, I think you're actually right, right? With a sand. A sand is the entire network including the storage, right? It is a network that is exclusively used for storage. It is purely a storage networking technology. 
Sure. With a SAN, I think you're actually right. A SAN is the entire network, including the storage. It is a network that is exclusively used for storage; it is purely a storage networking technology. What I've said before is that bringing a SAN into your environment is really changing your environment entirely. When you think of a storage area network, you are essentially giving everybody a local hard drive that they can all share. Well, and that's the first big difference, right? Because we talked about how a file server, at the end of the day, is lying to you. Sure. A SAN is not a lie. A SAN is truth. It is the essence of truth. That's deep. I know, I'm a deep guy. Yeah, we should talk about some deep stuff sometime. Okay, sure. Speaker 0 53:32 Okay. But really, a SAN is not a lie, because it works very differently than a file server. We'll talk about the building blocks of the actual network side of a SAN in a little bit, but here's the heart of it. And once again I have to thank Brian Summa, our intrepid senior video and media storage systems engineer; he's been here for like fifteen years, a hundred years, so that is pretty old school. But Brian explained it best to me one day. He said, with a file server you're having to write to the storage through the file server; with a SAN, clients are actually interacting with the underlying storage directly. When you mount a SAN volume on your desktop, you are writing to the storage. You are manipulating the file system directly. It is not a virtualized file system. Speaker 0 54:34 It's a specialized file system. It's not the same file system you use for a local storage device like your built-in hard drive or a Thunderbolt or FireWire or USB drive. It is a file system, though; it's running a SAN file system. Right. The one we use most often these days is called StorNext. It's also known as Xsan when you happen to be interacting with it from a Mac. Right. But Xsan and StorNext, despite some versioning differences, are really the same thing. It's kind of like the Mac version of Microsoft Office versus the Windows version of Microsoft Office. Gotcha. It's Microsoft Office; the document types are interchangeable. Well, in the SAN world, it's StorNext when it's a Linux or Windows server hosting the SAN (we'll talk about what that means in a little bit), or a Windows or Linux or Unix client connecting to the SAN; Speaker 0 55:30 they run a piece of software called StorNext. On a Mac, if you're hosting a StorNext SAN, or connecting to a StorNext SAN, the piece of software you do that with is called Xsan. And for several years Xsan has actually just been bundled into the operating system. Since Lion, as a matter of fact, 10.7, Lion. I can't believe I remembered that 10.7 was Lion; I can't keep all these cats straight. I'm so glad we're into landmarks in California now. Right. Great. So how does that work, Nick? You say it's a piece of software; is that something I have to open to be able to work with this file system, or is it running in the background? And, you know, another nice thing is that we can set a SAN volume up to just auto-mount at login, and the user doesn't have to do anything. Speaker 0 56:19 Right. That's actually been my experience with every SAN I've ever encountered. Yeah, you turn on your computer, and again, it does act a bit more like real storage that you're interacting with than a file share from a file server, because it is. I'll explain how the magic of multiple people writing to what essentially amounts to something the computer thinks of as local storage...
How that all works, in a moment — but that's a nice thing just to think about: there is no intermediary system. There are a lot of performance benefits to this, latency benefits, and in some ways fewer layers of complexity — though there is a lot of complexity behind a SAN. And there is a Trash. Oh, is there? Yes. When you delete files off your Mac, you do it the same way you delete any other file: you drag it to the Trash or press Command-Delete, it goes into the Trash, and you can largely go in there and get it back out.

Speaker 0 57:09 And that's largely because of how Xsan itself, as the client software for StorNext SANs, has been written by Apple. They wanted to make the experience of interacting with Xsan or StorNext as seamless as possible, because Apple was really pushing it quite heavily for a while — not as much anymore. Xsan is almost only used on the client side now; we typically host a StorNext SAN on Linux servers running the StorNext software. But again, a Mac uses the Xsan client built into the operating system to connect to it, and Apple has still tried to make that experience as seamless as possible for your average Mac user. You could also be a Windows user connecting to a StorNext SAN by running the StorNext software, or a Linux user, et cetera.

Speaker 0 57:58 And we do that, but most of our clients are using Macs to connect to it — that's what they're doing their video editing or their graphics on — and they don't even have to think about an extra thing to buy. We do a little configuration, but it's built right in. But you may be asking: how is it that multiple computers can be writing to a file system at the same time? That's exactly what I was thinking. The only way you can do that with a NAS is because you don't actually, technically, have multiple users writing to the file system at once. It's kind of like a mailroom, where you have one person sorting all the mail — we used that analogy when we were talking about NASes a few minutes ago.

Speaker 0 58:40 It's kind of like handing all of the packages to one person in the mail department, and they have to figure out what to do with them — running around with the cart, things flying off the sides. But not in a SAN. In a SAN, the users can put the packages where they belong themselves. So how is that possible? Because one of the things about file systems is that most of them aren't designed for multiple computers to interact with — especially to write data to — at the same time. With most file systems, that typically leads to corruption; it makes the whole thing go belly up. So there are a couple of ways this was addressed by StorNext and Xsan. Number one, the software that runs the file system itself is designed to have multiple users writing to it at the same time.

Speaker 0 59:33 But there's a key ingredient there. Everyone is writing their own data, but there is still a central orchestrator involved. The orchestrator is the thing we call a metadata controller. Oh, it sounds like we're talking about MAMs now — we're talking about metadata. No, different metadata. When we talk about MAMs, metadata is things like who shot this, and what's this?
Oh, and what's the codec, and what's the frame rate, and who's the client, and what's their job — things that either get pulled out of the files directly or that somebody tagging and logging assets puts in there. That's not what we're talking about here. Metadata really just means data about data, right? That's why we also hear the phrase metadata on the news when we're talking about Edward Snowden and phone records — I won't go into that — but there's lots of metadata, because in this complex IT world we live in, Jason, there's lots of data about data.

Speaker 0 00:34 So when we talk about metadata controllers in a SAN environment, the file system itself is the metadata we're talking about, along with the instructions flowing between clients and this orchestrator computer. It's really typically a pair of computers, because they act in a kind of high-availability arrangement: if one goes down, the other can immediately pick up the act so the whole system doesn't go down. But in a SAN, this metadata controller server is not the one all the information has to flow through in order to get written to the file system. You are writing to the file system. Think of it as standing in front of a big orchestra, waving the baton and saying: you write here, and you write to this little section over there, so you're not stomping over that other person's data and corrupting the file system. To get back to our mailroom analogy, it's kind of like a user walks into the mailroom to drop off a package and there's a barcode scanner that says, oh hey, this user has this package —

Speaker 0 01:40 and I'm going to quickly give you a set of directions to the mailbox you want to drop that in. It's a specific set of directions, generated at this moment in time, that keeps you from running into the other people trying to get to their mailboxes. That's what the metadata controller does. It's saying: write here, write on this little section of my storage, write on that section over there, and don't collide with what that other person is doing, because I'm only letting one person write to any individual block of storage. This is way deep under the covers of hard drive and SAN technology, but you're really preventing users from writing over each other's data. And the metadata we're talking about in a SAN actually flows over an Ethernet network.

Speaker 0 02:33 It's a dedicated network — gigabit, actually — and it's not the network the data flows through, nor is it your general network. There is yet another Ethernet network, and this one is purely called the metadata network. It's where the metadata controller servers send that moment-to-moment traffic-control routing information — "write here, don't write there" — to all of the clients of the SAN, so their data doesn't overlap and corrupt one another, there aren't collisions, and everything just works properly. Gotcha. So it's not the general network, and then you have the metadata network. And the other special layer of a SAN is that it doesn't use Ethernet for the actual data to flow over at all.
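To make the metadata controller's "traffic cop" role a bit more concrete, here's a toy sketch of handing out non-overlapping block ranges to clients. This is purely conceptual — it is not StorNext's actual allocation logic or wire protocol — but it captures the idea that each writer gets its own region of the shared storage, which it then writes directly over the data network.

```python
"""Toy illustration of a SAN metadata controller handing out block extents.

Conceptual sketch only, not StorNext's real protocol: the point is that a
central 'traffic cop' assigns each client a non-overlapping region of the
shared storage, and the clients then write those blocks directly.
"""
from dataclasses import dataclass

@dataclass
class Extent:
    client: str
    start_block: int
    length: int

class ToyMetadataController:
    def __init__(self, total_blocks: int) -> None:
        self.total_blocks = total_blocks
        self.next_free = 0          # simplest possible allocator: a bump pointer
        self.allocations: list[Extent] = []

    def allocate(self, client: str, blocks_needed: int) -> Extent:
        """Reserve a run of blocks for one client; nobody else gets them."""
        if self.next_free + blocks_needed > self.total_blocks:
            raise RuntimeError("volume is full")
        extent = Extent(client, self.next_free, blocks_needed)
        self.next_free += blocks_needed
        self.allocations.append(extent)
        return extent

if __name__ == "__main__":
    mdc = ToyMetadataController(total_blocks=1_000_000)
    # Two editors ask to write at the same time; each gets its own extent and
    # then writes those blocks over the data network, not through the controller.
    print(mdc.allocate("edit-bay-1", 4096))
    print(mdc.allocate("edit-bay-2", 8192))
```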
So a SAN uses a different type of network called Fibre Channel, and Fibre Channel refers to a specific type of networking switch that's used just for this kind of storage area network.

Speaker 0 03:37 They do sometimes use copper cables — it's very rare, and it's a totally different form factor, not RJ45 — but they're typically fiber optic, and that's where the data flows. So the traffic-control data goes over that metadata Ethernet network, the actual data flows over the fiber-optic Fibre Channel network, and then you still have your general network. So there are actually three networks involved in a SAN. And then there's the metadata controller — all these layers of complexity. It sounds like a lot of complexity and cost, but you want to know the nice thing? What's that? That big hard drive icon on your desktop that everybody shares, that you are directly addressing, really behaves like local storage. Pretty much any program you use is going to treat it as local storage.

Speaker 0 04:32 There aren't bizarre procedures you have to use to connect to it — and I shouldn't say bizarre, I don't want to over-hype it — it just behaves the way you'd expect. Especially for a Mac client, it's the closest thing to that hard drive icon you just never have to think about: very robust, high uptime, just there doing its thing. The Fibre Channel network uses fiber-optic connectors, and the "fibre" in Fibre Channel is typically spelled -re, versus the -er in fiber optics, because it refers to the switch fabric — it's a protocol name. It doesn't actually have to do with fiber optics per se; from what I gather, it has to do with the way the switches work under the hood. And all of these Fibre Channel devices need what we've been calling SFPs, which are transceivers.

Speaker 0 05:27 Yes, those little doohickeys that go into whatever your Fibre Channel adapter interface is, whether that's a Thunderbolt box or an internal card. When we say HBA — host bus adapter — we're usually talking about that Thunderbolt box or card that does the Fibre Channel. On Ethernet we call them NICs, network interface cards; in Fibre Channel parlance we call them HBAs, host bus adapters. But it's that card or box that puts you on that particular type of network. And because we're almost exclusively using fiber optics, you have to have a little doohickey in that box called a transceiver module that literally turns electrons into photons and back again, so you can communicate data over an optical cable. You're not transmitting electrons over a fiber-optic cable — it's all photons — but they get translated back into electrons on the switch side, and the RAIDs all speak electrons.

Speaker 0 06:26 So you need these little things, but they're really just a small part at the end of the cables that our clients don't have to worry about too much; they just need to make sure they have one for every piece of equipment. But this Fibre Channel network gets you, through all the complexity on the engineering and administrative side of things, to a level of simplicity on the user side that's very elegant.
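Purely as a bookkeeping illustration (not any vendor's configuration format), the connectivity story for a single SAN client can be summed up as a checklist: a general Ethernet connection, a metadata-network Ethernet connection, and a Fibre Channel HBA fitted with an SFP transceiver.

```python
"""Toy checklist of what a SAN client needs to sit on all three networks.

Illustrative bookkeeping only, not any vendor's configuration format: a
StorNext/Xsan client typically has a general Ethernet connection, a dedicated
metadata Ethernet connection, and a Fibre Channel HBA with an SFP transceiver.
"""
from dataclasses import dataclass

@dataclass
class SanClient:
    name: str
    general_ethernet_nic: bool   # everyday network: email, MAM web UI, etc.
    metadata_ethernet_nic: bool  # dedicated gigabit metadata network
    fibre_channel_hba: bool      # HBA (card or Thunderbolt box) on the FC fabric
    sfp_installed: bool          # transceiver module seated in the HBA

    def missing_for_san(self) -> list[str]:
        """Return anything still missing before this client can mount the SAN."""
        missing = []
        if not self.general_ethernet_nic:
            missing.append("general Ethernet connection")
        if not self.metadata_ethernet_nic:
            missing.append("metadata network connection")
        if not self.fibre_channel_hba:
            missing.append("Fibre Channel HBA")
        elif not self.sfp_installed:
            missing.append("SFP transceiver for the HBA")
        return missing

if __name__ == "__main__":
    edit_bay = SanClient("edit-bay-3", True, True, True, sfp_installed=False)
    print(edit_bay.missing_for_san())  # -> ['SFP transceiver for the HBA']
```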
And most users these days have one or two 8-gigabit fiber-optic Fibre Channel connections between themselves and a switch — two is usually used for redundancy purposes more than for speed. Exactly. But even if you just have one, that's 8 gigabits of performance, and if the system behind the scenes is capable of it, you get almost the full 8 gigabits' worth of bandwidth — roughly one gigabyte per second — for that user.

Speaker 0 07:23 So it's not hamstrung or virtualized — you're not giving up speed for the overhead of file system virtualization. It's a much rawer way for the computer to communicate with the storage, to write data to it and read data from it, because there's way less overhead involved — other than that metadata that's flowing, and that's handled on a totally separate network, so it's kept nicely apart from where the data flows. The network the data flows on is carrying literally just the raw data you're writing to or reading from the drives. The way I've tried to explain this to people: the Fibre Channel carries the block-level data; the Ethernet metadata network is like the directory — it's traffic control, or really, it's how you can see the data.

Speaker 0 08:16 Exactly — it tells you where to write the data. In the file system on your Mac, for example, those two things are merged into one and obfuscated from the user, but in an Xsan/StorNext environment they are two separate things, and that's one of the reasons it works so well. Exactly. And it's very highly reliable. We talked about how the metadata controllers are often run as a pair, in high-availability mode. The RAIDs on a Fibre Channel network typically have redundant controllers, and we typically have redundant switches. When we do a StorNext SAN there are usually — I won't say absolutely none, but usually — almost no single points of failure. A metadata controller can go down, it fails over to the other one, and a lot of users won't even notice the difference.

Speaker 0 09:07 You could have a RAID controller go down on a RAID and it's still going to be running. When we build file servers for people, there tend to be significantly more single points of failure. We don't run into those hardware failures that often, but if you have a really high uptime requirement in your environment — you're in broadcast, you have a lot of hard deadlines, you've got a ton of users working around the clock and you can't sustain outages — a SAN, with all the additional redundancies it tends to offer above and beyond your typical file server, is another great advantage. Sure, plus better speed and overall performance. The apps like it more because it behaves more like local storage; the users can think of it like a local drive, because it behaves like one. One thing I should say, though: we don't get Spotlight searching on it.

Speaker 0 10:04 That's true. Say that again, Nick: you don't get Spotlight searching on a SAN — specifically on a StorNext/Xsan SAN. There are applications out there that can get you that functionality — a MAM being one of them — or there are indexing tools we can use that give you a separate thing that's a lot like Spotlight search; we've put a few of those in, and Studio Network Solutions actually makes one. So those are all options. That's kind of the one caveat, the one place where it doesn't quite act like local storage. But the reality is, once you get to the capacities most of our SANs have, you're going to need a more sophisticated file management tool than Spotlight just to deal with it anyway.
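As a rough illustration of what those Spotlight-alternative indexing tools do conceptually (not how Studio Network Solutions' ShareBrowser or any MAM is actually implemented), here's a minimal sketch that walks a mounted SAN volume and builds a simple searchable index of file names; the volume path is a placeholder.

```python
"""Minimal sketch of a Spotlight-style filename index for a mounted SAN volume.

A conceptual stand-in for the commercial indexing tools mentioned in the
episode, not how any of them work internally. The volume path is a placeholder.
"""
import os
from collections import defaultdict

SAN_VOLUME = "/Volumes/ProductionSAN"   # hypothetical Xsan/StorNext mount point

def build_index(root: str) -> dict[str, list[str]]:
    """Map lower-cased file names to the full paths where they live."""
    index: dict[str, list[str]] = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            index[name.lower()].append(os.path.join(dirpath, name))
    return index

def search(index: dict[str, list[str]], term: str) -> list[str]:
    """Return every indexed path whose file name contains the search term."""
    term = term.lower()
    return [path for name, paths in index.items() if term in name for path in paths]

if __name__ == "__main__":
    idx = build_index(SAN_VOLUME)
    for hit in search(idx, "interview"):
        print(hit)
```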
Speaker 0 10:48 Right — you're probably going to want to be looking at a MAM, so things get tagged and indexed in a true database and all of that. Right, right. We hope so. We do like our MAMs, despite the fact that they sometimes pain us, and we like the fact that people are thinking about how they can find this wealth of material they have — and, in many cases, monetize it. Absolutely. We've talked about MAMs a lot on this show: there are the things the MAM does for your organization internally, and the things it can enable externally, as far as easing distribution, giving your clients a window into the content, monetizing, all that — plus helping you with workflow, being a workflow engine, automating some tasks.

Speaker 0 11:39 But here's the thing: you want a strong foundation under all of that. When someone wants the most robust storage platform possible for a high-performance video production workgroup environment, a SAN is what we push them towards. But a SAN like StorNext is probably one of the more expensive types of storage we sell. Now, here's what's interesting. There are manufacturers who sell systems that either are just really fancy NASes — which still present a lot of those same issues we talked about earlier with NAS, and may cost more than a StorNext SAN — or, and this is interesting, and we do this ourselves sometimes, systems that are really SANs behind the scenes but present themselves out to the end users through a NAS file share. This would be something like EMC Isilon. Sure. Isilon really is a SAN behind the scenes.

Speaker 0 12:40 It has a robust SAN-type file system, it writes data across many, many drives, and it can be scaled out, but most users connect to it through a file-sharing protocol like NFS — maybe over 10 gig, but a lot of those will be single-gig connections. So a lot of those same infrastructure caveats still apply. Exactly. So there's a lot to consider with SANs and NASes: is one required in your environment, and what are the reasons you might need to move from one to the other? Let's talk about that for a second. I've seen situations where an organization has decided the limitations of a NAS really aren't working for them. A lot of times it comes around the time they decide, hey, we need some more storage — we've used up all of our storage, we're at 85, 95 percent, and we need to get more.

Speaker 0 13:37 So they've been thinking for a while that they want all the benefits of a SAN, and maybe it's time to go in that direction. What parts of the currently existing NAS infrastructure could potentially be repurposed for the new SAN environment?
Well, we talked about how in a well-engineered video NAS environment you're going to have a separate Ethernet network from your general network. Maybe that switch becomes the metadata Ethernet switch in your SAN environment. It's no longer what the actual data flows over, but it's still a useful networking switch for the metadata to flow through, and these metadata switches don't need to be anything particularly spectacular. So in a lot of cases, if you're coming from a NAS environment, you've already got a switch and it's going to suit all those needs. So you've got that.

Speaker 0 14:22 You've also most likely already got the network runs done. The cabling issues associated with both of these networking types are nothing to scoff at. With a NAS, you're going to need at least one more Ethernet connection between the switch location and your client systems. With a SAN you need that and more: the general network, the Ethernet network for the metadata, and the data network, which is going to be fiber optic. So cabling is definitely a thing, and having that second Ethernet run already done — the one that can go from being the NAS storage network over to being the SAN metadata network — is useful. Yep. And then the big thing is usually the storage itself, right? It would be nice to be able to use those 30 or 50 or 100 terabytes that were previously shared out as a NAS and somehow turn them into a building block of a SAN.

Speaker 0 15:19 And this is one of the things we talked about: the NASes we build tend to be these self-constructed servers running the Open-E software. For SANs, because we like them to have high availability, we generally use a third-party commercial product — it has to be Fibre Channel storage, and it has to have RAID controllers with Fibre Channel ports on them. These days I'd say the two primary ones we use are Nexsan storage, which we've talked about on this program and often put into environments, and Quantum's own QX series of storage, which is also just Fibre Channel RAID. Yep. And that may have been the storage we sold you for a NAS: we may have taken that tier of storage, with its redundant RAID controllers and everything, and put it behind a server acting as the head node, the file server.

Speaker 0 16:12 And maybe that's what your NAS is, in which case it's pretty easy to repurpose the storage — although I have to be very clear that because we need to put the SAN file system on it, we're going to have to back up all your data and blow it away, figuratively speaking. We have to back up the data that's on it and then fold the unit back into the new SAN architecture, so there's a data migration step there. But we might do that. Or, if you had purchased one of our Open-E units — one of our file server appliances tuned for file storage, which we may have put in as a file server initially and outfitted with, say, several 10-gigabit Ethernet ports to connect it to that video-specific Ethernet network — we can actually convert those into Fibre Channel RAID storage as well, because one of the things Open-E can do, in addition to acting as a file server, is act as the management software for what essentially becomes a Fibre Channel RAID.
Speaker 0 17:15 Now, the one we sold you as a file server — which had a single motherboard and a single RAID controller, and several other single things in it — isn't going to give you the high availability of the dual redundant, what they call active-active, RAID controllers we would often put into a SAN. But we've definitely put Open-E boxes into SAN environments, single points of failure and all. Again, we'd have to back up all the data onto one of our RAIDs, wipe out the Open-E box, and put in a Fibre Channel card — because on a SAN we'll need a Fibre Channel card for the storage to connect to the Fibre Channel switches that the clients and the metadata controllers are also connected to. Then we could add it as a building block of the SAN, add more on top of it, and once that's done, transfer your data back onto it.

Speaker 0 18:11 And we may say — because SANs don't always present as just a single drive; we may have SANs that present several volumes, several drive icons, maybe a volume for each workgroup — we may set your old NAS storage up as one volume of the SAN, and then fold in new storage, maybe high-availability storage like a Nexsan or a Quantum QX unit with the redundant RAID controllers and all that. It's really like two computers in one. We might fold that in as the new storage in the SAN environment, so you get the benefits of that high availability and high performance, and then say: here's your original volume, and maybe you treat it as kind of a nearline tier, because it's potentially more subject to going offline.

Speaker 0 19:01 If a part fails, you might not want the projects being worked on right now sitting on it. Maybe it becomes a disk-based archive tier, or maybe you use it for your MAM proxies. It could act as a backup to a primary SAN volume if it's large enough, if we want to replicate your production volume over to another volume just for failover purposes. And we do find that a lot of the transitions we've done are more successful if we add a SAN pool of storage and migrate from the NAS to the SAN. That way you're migrating from the old storage to the newer storage, and there's never a point where you're down and can't use the storage while the migration is happening. Which leads, I think, to one of the last points we ought to make here.

Speaker 0 19:44 Which is that a lot of our clients' environments use both of these technologies, often at the same time. And that may be true if you started with a NAS and want to adopt a SAN: you don't have to stop using the NAS. You probably no longer have it on a dedicated Ethernet network for live video work — it may inherently become a tier-two storage pool sitting on your general network — but that doesn't mean it's not addressable; it just means you might not be doing live editing off of it all the time. It can still be a useful repository. So a lot of our clients will have a little bit of this and a little bit of that. They might have a SAN for production purposes and then a MAM that's really just used as the front end toward archive, toward tape archive.

Speaker 0 20:36 But it has a disk tier — a silo of storage that's kind of the disk side of the archive — with a MAM managing just that stuff and a tape library connected to it.
And so the NAS essentially becomes something you only interact with through the MAM. Right — it's there under the hood as the MAM's cache storage, if you will, the nearline tier in front of the tape. Exactly. There are a lot of ways you can do these environments. And again, sometimes you can get away with either technology; sometimes there's just a lot of reasons why one seems to be the appropriate choice for your environment, and it's not always the SAN — sometimes the NAS just makes the most sense for you. There's a client we're talking with now where there's a preponderance of factors, some of them budgetary, because a NAS can be done more cheaply than a SAN, and it just seems like a NAS might be a good fit for them.

Speaker 0 21:35 There are other environments where we wouldn't even think to sell them anything other than a StorNext SAN — some of our big broadcast customers with operations running around the clock. In fact, for some of those folks we might sell multiple SANs with multiple sets of metadata controller servers, and if a live feed is being ingested, we set it up so someone is ingesting onto both SAN volumes simultaneously. Because, for whatever reason, these things are not a hundred percent infallible, and if something went down they still wouldn't lose the capture. Right — I've seen SANs that are set up as sync volumes: completely separate SANs, one syncing to the other. Yep. So there's a lot we do, and obviously it's our clients talking to us, having some pretty deep conversations, that leads us to make one or even multiple recommendations.

Speaker 0 22:31 And we usually try to say: listen, we're not going to put something in front of you that we don't think will meet your needs. However, let's explain the differences and what will be affected day to day if you go this route versus that route. Here's a NAS — we think this will work, but here are your caveats, and you're really going to need to break down for us who your power users are, who might need 10 gig and who can get away with single gig. We're not going to make guesses about this; you need to be willing to engage with us in that conversation. If you don't want to think about it, then you're going to have to spend more money, so we can put something in that we know will have more headroom than you'll know what to do with.

Speaker 0 23:12 But if you want to have a nuanced conversation, which we're always happy to have, then maybe other things become possible. Sure. We really do demand those conversations of our customers, because we can't guess about this stuff. We don't want to guess that a NAS connecting to your clients over single gigabit is going to work — because who's going to get the first phone call when people start dropping frames because it wasn't specced out adequately, because you said you only needed three streams of XDCAM 50 at once per client, but, oh well, you decided to do this project in REDCODE? Well, that might make a difference. It sure would. There are some real performance differences between REDCODE and XDCAM HD. But people don't always think about this stuff.

Speaker 0 24:02 I mean, some do, but some don't.
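That "three streams of XDCAM 50 versus a REDCODE project" example is easy to put rough numbers on. In the sketch below the bitrates are placeholder round figures (roughly 50 Mb/s for XDCAM HD 50 and a few hundred Mb/s for many REDCODE flavors), and real sizing would leave additional headroom for protocol overhead, seeks, and simultaneous ingest.

```python
"""Back-of-the-envelope check: will N streams of a codec fit on a client's link?

The bitrates are placeholder round numbers, not exact specs; real planning
should leave headroom for protocol overhead, seeks, and ingest.
"""

def fits(link_gbps: float, streams: int, stream_mbps: float, headroom: float = 0.7) -> bool:
    """True if the requested streams fit within the usable share of the link."""
    usable_mbps = link_gbps * 1000 * headroom
    return streams * stream_mbps <= usable_mbps

if __name__ == "__main__":
    for codec, mbps in [("XDCAM HD 50 (~50 Mb/s)", 50), ("REDCODE (~300 Mb/s)", 300)]:
        for link in (1.0, 10.0):   # single-gig vs 10-gig Ethernet client connection
            ok = fits(link, streams=3, stream_mbps=mbps)
            print(f"3 streams of {codec} on {link:.0f} GbE: {'OK' if ok else 'too much'}")
```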
And we really like to make sure that the technology we put in is the bedrock of the organization — your storage — because your data is the most important thing. Your people and your data: you can replace computers, you can't necessarily replace people, and you definitely can't replace your data if it's lost. It may have been a unique thing that isn't going to get redone. So we take this stuff seriously, and we like to build some scaling into every system. You can start on a single-gig NAS system and move to 10 gig at some point, or start with just a couple of systems on 10 gig and fold in more 10 gig over time, when more budget opens up for a bigger 10-gig Ethernet switch. And as we talked about, you could start with a NAS and still move over to a SAN, either by adding one or by turning the NAS into part of a SAN later on.

Speaker 0 24:54 We can build smaller SANs at first, and one of the great things about SANs is that they're very easy to add to. We can add storage that both ups the capacity of the SAN and gives you greater back-end performance at the same time, with maybe an hour or two of downtime — the users come back and the SAN is just bigger and faster. That's one distinction I wanted to make. A lot of times with NASes — not all NAS systems, but the basic ones — what happens when you run out of storage and you're not going to archive, you're just going to add more? We can add storage to a NAS, right? But think about those shares, those virtual volumes you connect to using CIFS or NFS.

Speaker 0 25:39 Say those shares were originally set up on the file server we sell that holds 36 hard drives in a four-rack-unit chassis, and say we set them up as one big group of drives presented as a single share. We can add drives through an expansion chassis later on and even associate them with that same share. But the thing is, we're not really upping the aggregate performance of the share, because — and someone on my own team will correct me if I'm wrong here — I believe that when we add drives to a share after the fact, it fills up the original group and then flows over to the new group, and so the performance of the share becomes the performance of the new set of drives.

Speaker 0 26:34 You can of course still draw on some of the performance of the old set, but it's never really going to outperform the original set. Right. And I would think one of the reasons an organization might want to add storage is that they've grown their team a little, they've added a few workstations, so there may be an increased bandwidth requirement at that point too. So again, SANs give us the most flexibility, because the way those file systems are written to inherently allows for multiple users, and one of the other nice things about SAN file systems is that they're very good at letting you fold in additional storage, with a very elegant way of having it add to both the performance and the space of the whole file system — again, that thing that just looks like a drive.

Speaker 0 27:21 And we use these terms kind of interchangeably. When we say "drive," we're not always referring to the hard drives; we're referring to the drive icon on the desktop.
That drive icon is a volume, which we sometimes also call a file system, because there's one file system that determines the volume — the file system is what turns one or more hard drives into the volume. We may also refer to a namespace: a volume with a particular volume name associated with it. SAN file systems are typically really good at letting you add space and performance under that same namespace in a way that's totally seamless to the users. They show up tomorrow and, oh, the SAN is twice as big. There's more nuance involved in doing that with a NAS; it's typically a little less glorious.

Speaker 0 28:19 Sure. Well, let's talk really quickly about re-shares. When you have a SAN, you may have your production team that wants to access the SAN 24/7, and then you may have a couple of fringe users who just need to pull assets every once in a while — users for whom you maybe can't afford to invest in the infrastructure to make them full-on SAN clients. Can we have a machine somewhere on the network that is attached to the SAN share it out as a NAS to somebody else? Remember how I said there are some manufacturers that almost specialize in building SANs that only get presented out to the clients as NASes? Well, we actually do that ourselves with StorNext sometimes, under exactly the kinds of circumstances you just outlined.

Speaker 0 29:03 It might be a graphics user or an audio user who's doing more pushing and pulling of content, or maybe they're working locally — Pro Tools is at its most predictable when you're using local storage — but they still need to be able to pull a video file down. They don't necessarily need a Fibre Channel connection to do that. So sometimes we'll build a SAN and make one of the clients a server: it's connected to the SAN file system itself, and it reshares some of the SAN — or all of it, but usually some, maybe a particular directory or set of directories — as shares using SMB/CIFS and NFS. It's basically turning a portion of the SAN into a NAS for some users. And that's where the rules of the NAS come back into play: there are potentially more latencies involved.

Speaker 0 30:02 There is more of a performance bottleneck, but you may have an environment — and we've done this quite regularly over the years; in fact, I'd say at least half of the bigger environments we cater to are probably sharing out at least a small portion of the SAN over Ethernet as a NAS — where other fringe users at the periphery can still exchange data with the SAN users, even though they don't need all that high performance. So we create a hybrid environment where the SAN is a SAN to most users, but for some users there's this gateway in between. It is a file server; I've heard it referred to as the NAS head. It is a SAN client, but it then virtualizes a portion of the SAN out for another set of users.

Speaker 0 30:51 Right. And that gives us the most flexibility, because then on a user-by-user basis we can say: well, you clearly need to be a SAN user, you need to be a SAN user, you could be a NAS user, and you three over there could be NAS users.
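As a sketch of what the re-share side of a NAS head can involve on a Linux gateway: the gateway mounts the SAN like any other client and then exports a directory of it over NFS to the fringe users. The SAN directory, client subnet, and export options below are placeholders, and a real deployment would normally use the server's own admin tooling rather than a script like this.

```python
"""Minimal sketch of the 'NAS head' idea: a Linux server that mounts the SAN
and re-shares one directory of it over NFS for fringe users.

The SAN directory, client subnet, and export options are placeholders. Real
deployments would typically manage exports through the server's admin tools;
this assumes root privileges and a standard Linux NFS server.
"""
import subprocess

SAN_DIRECTORY = "/Volumes/ProductionSAN/graphics"   # portion of the SAN to re-share
CLIENT_SUBNET = "10.0.20.0/24"                      # fringe users' network
EXPORT_OPTIONS = "rw,sync,no_subtree_check"         # standard Linux NFS export options

def add_export() -> None:
    # Append the export definition, then tell the NFS server to re-read it.
    line = f"{SAN_DIRECTORY} {CLIENT_SUBNET}({EXPORT_OPTIONS})\n"
    with open("/etc/exports", "a") as exports:
        exports.write(line)
    subprocess.run(["exportfs", "-ra"], check=True)  # reload the export table

if __name__ == "__main__":
    add_export()
```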
And there is the cost of that file server box, but it's not hugely expensive — it's a server. Right. And that way we can bring down the cost for a lot of users by not having to give them Fibre Channel HBAs; less cabling needs to be run out to them, and they still get the performance they need, in a way that lets them seamlessly exchange data with the same SAN storage environment the SAN users are on. So we've got a whole lot of options. Sounds like it. We've got options, Mr. Gold. We have lots of options. So I think we've probably blown most people's minds here. NASes and SANs, SANs and NASes — they're not the same, but they can live in harmony together, with hugs. Just like the little icon on the desktop with the people holding hands: shared storage. Yay.

Speaker 0 32:00 So I thank everyone for listening. Thank you — and thank you for asking all these great questions. Hopefully it was good. Thank you for answering great questions; I tried. If I lied about anything, feel free to send an email to Jason Whetstone at Chesapeake, and we definitely will get into some of those issues we were hinting at — directory services, users, permissions. That's a biggie. Maybe we'll bring in the heavy artillery for that conversation. Absolutely. Well, thanks, folks, for listening to another episode of The Workflow Show. We will speak with you soon. Take care. Thanks. Bye.

Speaker 1 32:37 The Workflow Show is a production of Chesapeake Systems. We welcome your comments and suggestions for upcoming episodes. You can email [email protected] — that's workflow [email protected] — and if you'd like to talk with a member of our team of experts about your particular digital media workflow needs, email [email protected] — that's pro [email protected]. We appreciate your listening to The Workflow Show, and we'll catch you next time.
