#53 Shared NVMe Storage, Object Storage, and the Future of StorNext, with Eric Bassier

September 11, 2020 01:13:07
The Workflow Show

Show Notes

CHESA’s Ben and Jason chat with Eric Bassier, Senior Director of Product and Technical Marketing at Quantum. Ben and Jason ask Eric about the future of storage systems in and out of the cloud, and what changes within Quantum’s products are helping businesses adapt their workflow to the current climate of on-prem, cloud, and hybrid systems. Eric details the benefits and use cases of NVMe and how companies are migrating to it.
More From CHESA on NVMe: For more on NVMe, read Ben Kilburg's blog, Is NVMe Right for You Right Now?

Episode Transcript

Speaker 1 00:00:04 Hello, and welcome to the Workflow Show, where you get some workflow therapy whilst listening to discussions on development, deployment, and maintenance of secure media asset management solutions. I'm Jason Whetstone, senior workflow engineer and developer at Chesapeake Systems. And I'm Ben Kilburg, senior solutions architect. Speaker 1 00:00:25 Today you'll be hearing our interview with Eric Bassier, senior director of product and technical marketing at Quantum. Eric will be helping us guide you through some of the storage and networking technologies behind StorNext, the modern file system offered by Quantum. We'll be discussing storage technologies like NVMe, its performance benefits and advantages, and how well it pairs with networking technologies like RDMA. We'll talk about hybrid cloud and maybe even file systems in the cloud. And we'll cover some use cases for block level storage versus object storage. And finally, Eric will give us some vision into the future of StorNext by talking about the roadmap for StorNext 7 and beyond. Before we get to that, though, we have a few quick things to ask of our listeners. First, you can reach out to us directly with questions and thoughts on [email protected]. We're also trying to get to know you better. Speaker 1 00:01:16 So let us know how you found out about the Workflow Show. You can reach out over email or at ChesaPro on Twitter. And if you enjoy listening to the Workflow Show, then listen up. We've leveled up our content production schedule, which means more episodes, more guests, and more workflow therapy. So please subscribe to the podcast so you know when the next one is out. Before we start our discussion with Eric today, I wanted to call out some previous episodes that cover some of the same information that we are going to talk about today, but maybe in a little bit more depth. Uh, I wouldn't call them prerequisites for listening to this episode, but they do give you some more in-depth information on some of the subject matter. So the first one would be number 27, NAS versus SAN Made Clear. It's actually one of our most popular episodes, where we sort of compare and contrast the difference between a SAN and a NAS. Speaker 1 00:02:02 The next one would be number 33, File Systems and Beyond. Our guest there was Brian Summa, our senior systems engineer. We talked to him about what a file system is and how they interact with systems, workstations, servers, et cetera. And last but not least would be the episodes from part two of our five-part media and entertainment basics series. So those episode numbers would be 49 and 50. All right. And without further ado, let's get into our topic. I'd like to introduce our guest, Eric Bassier, senior director of product and technical marketing at Quantum. Welcome, Eric. Hi, Jason and Ben, thank you guys for having me. Absolutely. Thanks for joining us. Yeah, so we talked in our media and entertainment basics series, just a few episodes ago, about some different storage technologies. And what I'd like to do is just give a brief recap. I want to talk about the newest, latest and greatest one, the one that everybody's kind of talking about these days, which is NVMe. Speaker 2 00:03:00 So Eric, why don't you give us just a high level overview of what NVMe is and why it's so cool. Yeah, for sure. NVMe stands for Non-Volatile Memory Express. It's actually a new protocol for reading and writing data to flash.
And, you know, in this industry, we're seeing a lot of our customers deploy this really over the last year and a half for kind of that tier one, you know, production and post production. We can get more into the technology as we go, but it's orders of magnitude faster than traditional hard drive storage and even SSD. And what we're seeing is, as our customers in this space are dealing with more and more high-res content, and they have a need to add more VFX, do more rendering, and especially in today's climate, you know, a lot more transcoding to distribute content over different channels, the attributes of NVMe and the performance attributes that it has are making it kind of the go-forward choice for that tier one production storage. Speaker 3 00:04:07 Right? Absolutely. If I remember correctly, NVMe, um, its bus protocol is very different, right? It's connecting through the PCIe bus and it doesn't have a traditional storage controller like SAS or SATA, right? Speaker 2 00:04:21 Correct. Yeah. It bypasses some of those elements, and in some of the legacy kind of storage array architectures, you know, things like a storage controller or RAID controller can actually be a bottleneck. And NVMe kind of gets around that. It's reading and writing data directly to the storage devices. And that's what allows us with our products and others to kind of achieve some of those performance advantages versus, um, you know, most of the current technologies that are deployed out there today. Right. Speaker 3 00:04:51 The way I typically like to think of it is, you know, imagine your hard disk is your car and you're driving to work. In your typical commute with a traditional hard drive, or even a traditional SSD using a SATA or SAS backplane, there's a couple of stops, like Eric was mentioning, where we're going through a RAID controller, we're going through a device controller. And if we're driving, that's a car stopping at stop signs and, you know, waiting for various traffic. But with NVMe, it's kind of like you have an on-ramp to the highway in your backyard and you just go. Speaker 2 00:05:28 Yep. Yeah. Right. I like the analogy, and kind of building on it, I mean, one of the other new technologies that kind of goes hand in hand with NVMe is, um, RDMA, which is really more of a networking technology, but there's a strong parallel and the two work in concert. So Ben, with your analogy there, with a legacy, more TCP/IP based protocol, it's almost like you've got a lot of friends in your car, but you're stopping every so often just to make sure that they're all there and buckled in. And, uh, you know, with RDMA, what it's allowing our customers to do is achieve very low latencies on an Ethernet based network. Right? I mean, the spot we come to, a lot of our customers are still on a Fibre Channel SAN, and, you know, where customers have that infrastructure, Speaker 2 00:06:20 That's great. It's very low latency. But going forward, more and more customers are moving to an Ethernet based infrastructure. It's less expensive. It's less complex. And using RDMA with NVMe is one of the ways that they're able to do that and still do all the work they need to do and, you know, ingest all the content that they need to. So those two kind of work in concert and, um, we're seeing a lot of good adoption in this space.
A lot of interest, even through COVID actually, a lot of people investing in this tech. For sure. Yeah. Let's just go, Speaker 4 00:06:52 Let's just take a quick pause and define what RDMA is, just for our listeners. So RDMA stands for remote direct memory access, and it's kind of like a mind-meld protocol, Speaker 3 00:07:04 Right? We're fond of calling it that, for sure. If you're a Trekkie and you remember our friends the Vulcans, and they do the Vulcan mind meld, obviously: remote direct memory access. My thoughts to your thoughts, your mind to my mind, or my mind to your mind. You know, in this case, it's my memory buffer to your memory buffer, my data stream to your data stream. RDMA bypasses the CPU, and so it takes a much more direct route and can also bypass that networking layer and TCP/IP, like Eric was mentioning. So it's wickedly fast and super cool. Speaker 4 00:07:44 So it sounds like it really is a good pairing for NVMe, because we're sort of bypassing all of the things, uh, you know, the switching protocols as it were, that kind of can slow us down. Yep. Speaker 3 00:07:55 Yeah. And that doesn't mean that at some point in your storage stack you're not still going to be using TCP/IP somewhere, but it just means instead of using those traditional layers on the backend, like RAID controllers, we're going a heck of a lot faster through those gates than we would have otherwise. Speaker 2 00:08:18 Yeah. It's all about, you know, removing bottlenecks in either the network infrastructure or the storage infrastructure. And, you know, Ben, I think you said it, uh, hopefully we get a chance to talk a little bit more about this, but with our StorNext customers, what we're seeing is that, as you said, many users of the data on the storage may still use TCP/IP or traditional NFS or SMB access. It's fine. Maybe they just need to review low-res versions of footage and sign off on it. But where you have, you know, colorists working in 8K raw at very high frame rates, you get their workstations connected directly to this NVMe storage. And, you know, they're just able to produce more. And it is orders of magnitude faster. Speaker 3 00:09:06 When you're talking about connecting their workstations directly, are you saying that directly to the back of an F2000 or an F1000, we can just plug in our Ethernet NIC, and, you know, our colorist is maybe 25 or 50 or a hundred gigabit connected to the backend of the storage? Speaker 2 00:09:25 Yeah. I mean, yeah, direct connected can mean different things, but this is also part of the magic of what we do with StorNext. Using our StorNext file system client, they can actually connect their workstation over the network directly to that backend storage, and with the underlying protocols of RDMA as well as NVMe, like we talked about, you're bypassing all these other things in that chain that would have been a bottleneck. You're not using a chatty TCP/IP protocol, right? The network storage looks like local memory from the perspective of your workstation. Now, you need a new NIC to do that, right, there are new NICs to do that, and you need some switching infrastructure. And so that's part of the investment people are making, but for our products, at least, we're seeing NVMe deployed both in Fibre Channel SANs and Fibre Channel environments, as well as in these new Ethernet based environments with RDMA and those protocols. Are you seeing a lot of DLC with that? Yeah. It's a mixture of both.
It's some kind of direct connection and sometimes kind of through a proxy. So. Gotcha. So then, Speaker 3 00:10:35 DLC, what does the D stand for, Eric? I know it's the LAN client, but Speaker 2 00:10:40 Distributed. Yes. DLC is the StorNext Distributed LAN Client. It's a way that users can connect to a StorNext file system cluster over the LAN. Right. And it's something that I think we probably haven't talked about enough as a company. It's one of the things that makes StorNext very neat. And it's one of the reasons that the file system is so widely used in media production and post production, Speaker 3 00:11:03 Right? Because it gives you some of those benefits, um, that you traditionally get with a Fibre Channel SAN, meaning that we're talking at a block level. And, uh, for those of you who are new to Quantum and StorNext, StorNext is the file system, which is a block level storage area network, wickedly fast, probably one of the fastest file systems in the world. It's a clustered file system. So multiple RAID arrays, multiple storage devices are all available, sitting on your desktop, all humming together, hundreds of hard drives or, um, dozens of NVMe drives, or both altogether, as one icon sitting there that might go somewhere like 25 gigabytes per second, which is light speed. Right. And that's the key there, is that it's that single namespace, that single volume on your desktop that you'd be working off of in a production environment. Um, there are many other very, very fast file system types out there that can scale out like you can with StorNext. They just do it in different ways. Uh, you may end up adding separate volumes, you know, things like that. So, um, so DLC is a piece of software that is either part of your OS in the case of a Mac, or it's a piece of software that you would install for Windows or Linux. Um, and it allows you to access the block level storage over Ethernet switching, and Ethernet is a little bit cheaper than Fibre Channel and these days maybe even a little bit faster. Speaker 2 00:12:34 Yup. Yeah. The way I think about it is, it's maybe not a hundred percent technically accurate, but, you know, I think everybody is familiar with an NFS client or an SMB client because it's just built into the OS, for sure. And those clients use TCP/IP as kind of one of the underlying protocols they use to send data between, you know, folks' workstations and storage. And it's great for most types of data, but it's got overhead. It's hard to get really low latencies with that stack of things. Our alternative would be, we use a StorNext file system client. We also support NFS and SMB because people need that too. But for the really high speed work, the really low latency work, we would use our file system client. And then in the case of what we've been talking about, we're using a flavor of RDMA to send the data back and forth. We're using NVMe as the storage, the very high speed flash storage. And so it's just a much more efficient kind of stack or chain. And it's why we can achieve the streaming performances of, you know, 25 gig a second and upward. Actually, we're going to improve on that again this quarter. And I think, hopefully, say definitively we are the fastest file system on the planet by a long shot. So maybe that can be our next, uh, Workflow Show. Speaker 3 00:13:51 That sounds good. Right.
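To put some rough numbers behind the streaming figures mentioned above, here is a back-of-the-envelope sketch. The resolutions, bit depths, and frame rates are illustrative assumptions, not measurements from the episode:

```python
# Rough data-rate estimate for uncompressed video streams.
# All formats below are illustrative assumptions, not figures from the episode.

def stream_rate_gb_per_sec(width, height, bits_per_pixel, fps):
    """Approximate data rate of one uncompressed stream, in GB/s."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1e9

formats = {
    "4K DCI, 10-bit RGB, 24 fps": (4096, 2160, 30, 24),
    "8K UHD, 10-bit RGB, 60 fps": (7680, 4320, 30, 60),
}

file_system_gb_per_sec = 25  # the aggregate streaming figure mentioned in the conversation

for name, (w, h, bpp, fps) in formats.items():
    rate = stream_rate_gb_per_sec(w, h, bpp, fps)
    streams = file_system_gb_per_sec / rate
    print(f"{name}: ~{rate:.2f} GB/s per stream, "
          f"~{streams:.0f} streams at {file_system_gb_per_sec} GB/s aggregate")
```

Even a handful of uncompressed 8K streams lands in the tens of gigabytes per second, which is the kind of gap NVMe and RDMA are being used to close.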
So along with NVMe, obviously the physical footprint of a volume like that, by way of comparison to traditional hard disk drives, is radically different, right? So, um, if we had a Fibre Channel SAN, right, and we wanted to reach 20 gigabytes worth of performance, we might have something like, I don't know, four of the QXS 484s. Um, and that would take up 20U of rack space to provide that kind of storage, versus something like an F2000 with 24 NVMe drives that would give us 25 gigs of performance. And then it would maybe be, what, like Speaker 2 00:14:37 Yeah, yeah, you're right. I mean, it's one of the other benefits of NVMe. This is one of the other reasons that we're seeing companies in media invest in NVMe, right. So recapping a little bit, the first is just that everyone is dealing with more high resolution content, for sure. More 4K, more 8K and at high frame rates, and they're typically ingesting it raw. And then, you know, we create proxies and do different things. But one of the initial customers of ours that deployed NVMe, they upgraded their entire editing environment to all NVMe, a hundred percent NVMe. And you don't have to do that. We're going to talk about some things that are new in StorNext that allow people to kind of mix NVMe and hard drive storage. So you can kind of think about, you know, our customers can kind of deploy a little pool of NVMe storage, maybe for their colorists or for their, you know, high end editors or something, but keep the bulk of their production based on hard disk, which is less expensive. Anyway, this customer upgraded their entire production storage environment, and they were able to reduce their storage footprint from three full racks of hard drive storage to 14U of NVMe storage servers. Right. Wow. Seven of our F2000 series. But you know, when I'm out talking to customers and partners, Speaker 5 00:16:13 There's almost this initial perception, Speaker 2 00:16:15 People are like, yeah, I love NVMe, but it's just too expensive for me. Right. And what we're finding is it actually is driving some cost savings and business-line benefits in other areas that really offset that. Right. I mean, one of them is reducing that data center space. Yep. The correlated one is, think of the power and cooling savings that are associated, and it's maybe worth double clicking on that a little bit too, Ben, so the audience understands that the reason that customer had so many racks of disk was that they needed a very large number of spindles to achieve the performance that they needed to do their work. And because of the performance advantages of NVMe, everything we've been talking about, they were able to shrink that footprint by, you know, whatever that is, 80%. Yeah. And that's power and cooling savings. It's rack space savings, and that's expensive. Speaker 2 00:17:06 You know, data center real estate can be expensive. So, you know, we could talk about some of the other things like reducing Fibre Channel port counts, right, which has a real, you know, concrete cost savings, or maybe moving away from Fibre Channel altogether. Right, right. Let alone just the increase in productivity that it can drive for customers' workflows and customers' media pipelines. I mean, your creative teams can get their work done faster. You know, if you're a post house, you're completing projects faster, you can take on more projects, you know? So, I mean, these are even kind of through COVID.
These are some of the reasons why we're seeing such interest in NVMe. It just has a really good value proposition. Yeah. Yeah. Awesome. That's great. Speaker 3 00:17:50 Total cost of ownership, or TCO, goes way down when you're dealing with a whole lot less, uh, power and cooling, as well as things like your support contracts. Right. Because if you've got four RAIDs to deal with, as opposed to 20 RAIDs, the yearly upkeep on that is a whole lot lower. So there's a hidden benefit to consider. Speaker 4 00:18:11 Yeah. That's something that we should just make sure we call out specifically: listeners, if you're pricing out an NVMe storage solution, think about those things that might go away in that TCO, like we said, that rack space, that heating and cooling, I'm sorry, cooling mainly, and, uh, then that support contract cost. So just keep that in mind, right. Speaker 3 00:18:31 While we're playing the acronym game, I know we've thrown out TCP/IP a bunch over the last couple of minutes, so we should probably define that too. We're always up for making sure people know what the heck we're talking about, because you may be an IT expert, you might be a video editor, who knows. Right. Um, however you came here, thanks for spending time with us. So TCP/IP is Transmission Control Protocol over Internet Protocol. And that's how, um, how we're talking together over the internet right now, or how we search for all those great cat memes or even watch great videos and, um, enjoy Netflix like we do on a daily basis. Speaker 4 00:19:10 Yeah. Basically that backbone of communication that we are all used to seeing is all TCP/IP based. Speaker 3 00:19:16 Right. And Eric had mentioned, when we were talking a little bit using our car analogy, about checking to make sure everybody was belted in. TCP/IP, in terms of a protocol, its job is to make sure that those data packets arrive from point A to point B. So if the path or the street is not 100% clear and it runs into, you know, bumps in the road and it has to slow down, or in multiple hops across the internet, that's when we can drop frames, you know, if we're talking about a video editing workflow, or when we're watching each other on Zoom and our video and audio starts to get garbled, or we're watching Netflix and the quality goes down. That's why, largely in part, because the protocol was created back in the, what, uh, early seventies, maybe a little bit earlier, largely in part to make sure that our allies in Europe were still there after a nuclear war. Speaker 3 00:20:15 The idea is to make sure that those data packets go from point A to point B successfully, so that we can make sure that the information gets there. It's not about speed at all. So using it in places where we expect to have a lot of speed, we have to make sure that all of the conditions are perfect, meaning that we have a switch where we've got gobs and gobs of bandwidth, and if we need buffering in that switch to make sure that packets aren't lost between the storage and your workstation, that all of that happens. Anyway, I digress. Let's talk about more. Well, let's, I guess we should talk about the F-Series a little bit, right? Yeah. So Quantum has two models of that F-Series storage, um, Eric, right? The F2000 and F1000. And they're both a little bit different; performance is kind of the same.
The F2000s may be a little bit faster on writes, but why don't you tell us a little bit about them, Speaker 2 00:21:16 Yeah, I mean, so first off we introduced the StorNext F-Series last year at NAB. Um, so it's been out in market, uh, almost a year and a half now, very strong adoption, uh, almost all of it in media production and post production, but, um, we're now starting to see it being deployed outside of the media and entertainment space. We could talk about that more, but yeah, the initial product we introduced was the F2000, a 2U NVMe server designed to be deployed as part of a StorNext file system, two controllers, failover, kind of high availability everything. And a good example, you know, one of the initial customers there, like I said, they upgraded their entire editing environment to these boxes, you know, so they're seeing great results, very happy. And as we got into the market, we were starting to hear from customers and partners that said, you know, hey, I love the value proposition of NVMe. Speaker 2 00:22:11 Um, I don't always need it to be the most highly available, uh, device. So we have some customers, and this is another great real world example, where they were doing a large multi-node render job on a StorNext file system with a bunch of disks. And it was taking them almost a day to complete the render job. And they put in a little bit of NVMe, same render job gets done in 30 minutes. Just sprinkle on a little NVMe, and their job has been reduced by 23 and a half hours. Right? And in this particular case, this was a post house, you know, they're not rendering 24/7. And you guys know, sometimes people are using the cloud to do burst render and some of this stuff, but for them, they're like, look, it doesn't need to be highly available, but I'd love to just put a little bit of NVMe in my environment, get that boost, see that benefit. Speaker 2 00:23:07 And so right at the start of this year, we introduced the F1000. It uses the same software that's on our F2000. It's just a lower cost 1U server, 10 NVMe drives. And the entry point of that is about, oh, gosh, I don't remember exactly, I want to say it's about 35 to 40 terabytes. You know, it's not much. So we're now seeing a lot of customers that are just purchasing one of these, and they kind of almost put it in front of their current storage environment, if that makes sense. Like if their current disk-based storage or SSD storage is their tier one, call this a little tier zero, right. And they, you know, they give it to their colorists or they give it to their editors that are working in 8K, or they use it for render jobs. And it just gives them a boost for whatever step in their workflow they need. So I love it. Yeah. We've got the two models, and we're going to be enhancing them again toward the later part of this year with new software enhancements that actually speed up performance even more. So we've been really pleased with the adoption so far. Speaker 3 00:24:06 Software is one of the things that makes this possible, right, because we still need some traditional data protection services like RAID, and trying to make it highly available, you know, in the case of the F2000. So there is a software-defined layer, or a layer of virtualization, that happens on top of that, that you guys have really kind of jumped into and are kind of working in both the present and the future of storage topologies. Can you tell us anything about that?
Speaker 2 00:24:39 Yeah. This was actually one of the things that we did just kind of almost under the covers in the last year and a half, but we talked earlier about how, you know, dedicated RAID controllers, some of those things can be bottlenecks in more of the legacy architecture, but you still do need that data protection. So we developed our own software-defined block storage stack. Uh, we actually made a very small acquisition a little over, coming up on, two years ago, integrated some technology and some people, and then built our own software-defined block storage stack based on that. And the first place that came to market was with the F-Series. So we use that as kind of that underlying block stack. And then we use it with a different product line we have focused on video surveillance. And when we think about where we're going with StorNext, we are moving to a totally software-defined architecture for StorNext, and this was one of the technical building blocks or stairsteps that we needed to get there. And so you're actually gonna see that in the future StorNext, when we go to StorNext 7 and beyond, that we're moving to software-defined everything. Speaker 3 00:25:42 Yeah. I mean, that seems like it's very much been the present in a lot of the more advanced big data cloud workflows that we see, but seeing that it's coming to, um, something that we might use in M&E, and starting to trust it in environments like that, is really exciting. Speaker 2 00:26:02 Yeah. Yeah, for sure. Yep. Frankly, one of the things that makes it unique is, um, you know, this is a little technical, but if you think about most of the storage arrays that are out there, they were designed for enterprise IT workloads, right. They were designed to run a database really fast, and they have a bunch of really fantastic data services that were built for those types of data sets. But this has allowed us to build block storage software that was really optimized for this type of industry, where it's really about streaming performance, for sure. Things like compression, things like dedupe, I mean, these little data services that go in the data stream, we don't care about that here. We've stripped all that away. Right. We have a very, very fast block stack, and it's why we can achieve the performance that we can achieve. Right. It's allowed us to kind of design something that I think is more specific to this, uh, this industry. Right. It's gotta be a flamethrower, right? Speaker 3 00:26:59 Yeah. I mean, other good use cases for NVMe might be something like high-frequency trading or genomic sciences, anywhere we need to crunch a ton of numbers as fast as humanly possible. Right. And we can't have the storage be a bottleneck. Right? Speaker 2 00:27:17 Yep. And you know, our go-to-market with this product is that we sell it with StorNext. I mean, I would think about it that way. So this is NVMe storage for our StorNext file system. And our StorNext file system really excels with big files, when people are dealing with big unstructured data, and obviously the media and entertainment industry is a great example of that. The other industries or other types of data where we shine would be things like, you know, genomic sequencing data, which is basically a series of very high resolution images, MRI images, CAT scans, I mean, satellite imagery.
When you think about these use cases, it's customers, you know, they have some device that is creating these very large files and it needs to be ingested very quickly. And then they have a stage in their workflow, if you want to think of it that way, where that data needs to be worked on; you have a bunch of very highly paid, highly skilled creative people that need to collaborate based on that data. Speaker 2 00:28:19 They produce some results. In the case of media, you produce a finished copy and you distribute it, and then you need to archive it forever, you know, forever, and not for seven years for Sarbanes-Oxley, but like forever. Right? And that pattern, that use case, is what we've designed our whole product set for. So one of our customers, we were talking about them prepping for this Workflow Show, is actually in healthcare research. And they're now using NVMe. They have a repository of over 300 million MRI images and CAT scans, basically medical images. And they're using a GPU cluster now to analyze those images, to look for patterns, to kind of find cures for diseases and stuff like that. One of the studies always helps me kind of think about it, bring it to life, pretty interesting. You know, they have, um, MRI scans of a bunch of different, let's say, athletes that have had head trauma, so they can look at these scans and look for patterns to try to figure out maybe how to improve, um, helmet design, things like that. That's just one example. Speaker 2 00:29:32 So in that case, you know, with NVMe, it's really about the performance that it's able to feed to that GPU cluster. And again, it's outside of media and entertainment, but we're starting to see those use cases as well.
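For a rough sense of scale on a repository like the one described, a quick estimate helps. The image count comes from the conversation; the average image size and the throughput tiers below are assumptions for illustration only:

```python
# Back-of-the-envelope: time to read an entire image archive at different
# sustained throughputs. The image count comes from the conversation; the
# average image size and the throughput tiers are illustrative assumptions.

num_images = 300_000_000   # repository size mentioned in the episode
avg_image_mb = 10          # assumed average size per medical image
total_tb = num_images * avg_image_mb / 1e6   # ~3,000 TB under these assumptions

for gb_per_sec in (2, 10, 25):   # e.g. disk-array vs. NVMe-class tiers
    hours = total_tb * 1000 / gb_per_sec / 3600
    print(f"{total_tb:,.0f} TB at {gb_per_sec} GB/s is roughly {hours:,.0f} hours per full pass")
```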
So you kind of seen us built this stack of products, which we now have in market. And the use case I described is really applicable in almost every industry, you know, any industry that's dealing with massive unstructured data. We have, I think the best technology stack for that now in the market, you know, of anybody, Speaker 4 00:31:45 Eric you've mentioned that phrase before unstructured data while I was going to go there too. Yeah. Yeah. I mean, it's essentially anything that's not in a database, but why don't you go into it a little bit more, Speaker 2 00:31:57 You know, if you simply think of the data world, you have structured data, which is something like a database rows and columns, very searchable, pretty straightforward unstructured data is generally file data. And the biggest subset of unstructured data is video and images. Now a lot of video and images are for maybe entertainment purposes, but not all. I mean, there's a lot of video that comes off of a drone or satellite, or, you know, an autonomous vehicle produces a huge amount of video and video light data. And that's really our focus. I mean, there's some unstructured data, that's things like, you know, log or text files and things like that. That's not really where we play, where we hunt. We're really focused on video data image data. And that is the class of data that is growing much faster than anything else out there it's harder to manage. It's harder to search. It's harder to extract insight from it. And I'll say this one more time. The thing we're starting to emphasize more is we're finding it has to be kept forever. Right. And maybe not, you know, we're, we're starting to talk about this idea of like a hundred year archives, cause we've been dealing with this problem in media for, you know, for 20 years, how long do you need to keep the original digital content of star Wars? Speaker 4 00:33:18 Oh, forever indefinitely, come on. It's a masterpiece. Speaker 2 00:33:22 And it's like, you know, uh, one of our customers, the national Institute of health clinical center that we were talking about, you know, they want to keep those MRI images, those, that medical image archive forever either because of patient reasons or because they want to have a Corpus of data that they can mine in the future using AI and ML type of technique and, you know, artificial intelligence and machine learning type of techniques, for sure it get new insights and new results, right? Speaker 4 00:33:48 Show me everybody who had this thing wrong with a minute, let's try and correlate some sort of information to get to a curative response rapidly. Speaker 2 00:33:57 Correct. You know, Quantum's a member of the sports video group and I've been at many panels there. And one of the sayings that I've heard a lot in that industry is they're kind of like, you know, the content in your archive isn't valuable until it is. You never know when that's going to be, you know, I mean, um, and so this idea of being able to build these extremely durable, searchable forever archives, you know, that is like what we do. And then we're seeing more deployments of that. So, you know, we're kind of hopping around, but it's like, we definitely see that where data is going to work is going to be high speed file and block on NBME and where data's going to live is going to be these massively scalable searchable archives, you know, and, and being able to move data to and from and classify it. I mean, that's kind of where we're yeah. Yep. Speaker 4 00:34:48 Us too. Alright. 
So Eric, thank you for bringing us up to speed on sort of the future of production storage. That's really fascinating. Um, I want to switch the discussion a little bit to going into the object storage and sort of cloud integration part of where we're going. Um, so let's start with that sort of transitional phase of moving files in and out of object stores, from sort of production through to maybe an on-premise cloud, or maybe even just a full-on cloud object storage. Where are we going there? Speaker 2 00:35:21 A couple of different angles to that. I mean, one is, we know the future is hybrid cloud and we know the future is going to be multi-cloud. And we already have many customers that are doing both. So one of our StorNext customers, a case study that we're working on, not published, um, a large broadcaster over in Europe. And, uh, they have a StorNext environment at their main studio, uh, which happens to be in London. You know, they have Avid and they have Adobe editing suites that connect into that. They're using a Google Cloud archive service to house all of their kind of raw content. And they use their asset management software to direct the StorNext file system to move files and move directories between that cloud archive and their on-premise environment. So they actually, they might bring some raw content back on premise. Speaker 2 00:36:17 They might produce some new material from that, send it out over social channels and then, you know, expire that and kind of leave the raw content in the cloud, if that makes sense. And we have customers that actually do kind of almost the reverse in different ways, where they, you know, they may push some content up to the cloud and then maybe they do a quick render job on something or a quick transcode job, and then they just expire it. And I said a lot there, but StorNext for the last several years has had very good integration with both cloud-based object stores, as well as on-premise object stores. And, you know, obviously now with Quantum ActiveScale, we've got very tight integration with that object store, but, you know, we support AWS S3, we support Glacier and Glacier Deep Archive, Microsoft Azure Block Blob, Google Cloud, Wasabi, you know, the list goes on and on, but right. Speaker 2 00:37:12 So yeah, we have many customers kind of using StorNext as the means to move files and move folders between an on-premise environment and the cloud and back. And we think that increasingly people are gonna want to use different services from the different cloud providers, because that's a whole nother competitive battlefield. You might want to use the Google render engine, but you might want to use the Video Indexer from Azure because it's better. Being able to have that choice, that flexibility to move between on-prem and cloud, I mean, we think that StorNext and Quantum kind of being the orchestrator, or kind of the engine to help you do that, is kind of where a lot of the future is. Speaker 3 00:37:52 Yeah, I wouldn't disagree. We talked a little bit about the future being more software defined, and one of the things that we're a big fan of, and that we talk about here often, and we will talk about more in detail later, is REST APIs, right? So if we can talk to even the storage layer and say, hey, give me this, or I'm gonna give you this, give me this back. Right.
Um, and get more information and share more information at that command level. That just helps us automate everything and make everything that much easier to use, even though it might be a little bit more difficult to set up. Yeah. Imagine if your system had an API, a REST API, that you could send commands to and receive commands from, and that could be integrated in some sort of an automated workflow process. That would be pretty darn cool. Speaker 2 00:38:38 Yep. Yup. It's one of the things StorNext has had for many years, pretty tightly integrated with a lot of the leading asset management software applications out there. And, um, you know, then in terms of kind of a RESTful interface or API, that is one of the reasons why our customers in this space are looking at object storage. And storing content in an object format has some advantages. It can be more searchable because the metadata can be embedded within the object, and where you have applications or where you have steps in your workflow that lend themselves to using that type of an S3 interface, a RESTful interface, you know, object storage is great for that. And that's kind of where we're seeing it. I mean, some workloads still have a file interface, and I think they will for a long time, uh, some use an object interface, and we can kind of deploy the right solution based on what people are trying to do. Speaker 3 00:39:47 Right. Now it's worth redefining a little bit. We've talked about object storage in one of the last episodes in the M&E basics series. It's worth, again, stating that object storage is something that uses storage buckets, as they typically call them, because it's a flat file system, right, where we're throwing a bunch of objects or files into a container and then giving them unique identifiers to say, oh, I want this file back, and we've got this hash, so, hey, here's the key, please give me my file back. Um, one of the recent developments, I think in StorNext 6.4, is that you guys are adding a little bit more information in and around the object metadata that will make it easier for people to use object storage in the future, right, by allowing us now to, say, also include the file system path in the object data, right? Speaker 2 00:40:45 Correct. Yeah. With StorNext 6.4, we introduced two things related to object storage. Uh, the first is self-describing objects. So when StorNext moves a file to a cloud-based object store like S3, we're doing that file-to-object kind of translation, like you mentioned, right? Yep. And prior to StorNext 6.4, we were able to move objects and store them up in the cloud. But if someone wanted to use that object for some reason, if they wanted to search it or run some service on top of it, they kind of had to bring it back through StorNext first, right. With self-describing objects, they can now use just off-the-shelf browsers, different scripting tools or whatever, to be able to access that content right out of the cloud. They don't need to do that through StorNext. And so for use cases, maybe like DR, or maybe, you know, sharing content, in some cases, it can work. We'll come back and talk about that more. Just quickly, the second thing we introduced was we actually improved the way that we can do reads and writes of data to and from an object store. So we actually really improved the performance that users can expect for StorNext moving data in and out of an object store.
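A minimal sketch of the self-describing-objects idea from an application's point of view: once an object can stand on its own in an S3-compatible store, standard tooling can read it and its metadata directly, without going back through the file system tier. The endpoint, bucket, and key below are hypothetical placeholders, and this illustrates the general S3 access pattern rather than StorNext's exact object layout:

```python
# Sketch: reading a tiered file directly from an S3-compatible object store
# with standard tooling (boto3). Endpoint, bucket, and key are hypothetical.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example.com")  # hypothetical endpoint

bucket = "media-archive"                        # hypothetical bucket name
key = "projects/spot-2020/raw/clip_0001.mxf"    # hypothetical object key

# Look at the user-defined metadata carried with the object itself.
head = s3.head_object(Bucket=bucket, Key=key)
print(head.get("Metadata", {}))

# Pull the object down without involving the file system that archived it.
s3.download_file(bucket, key, "clip_0001.mxf")
```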
So both are related to that, and I think both could apply to either an on-premise object storage system like ActiveScale or public cloud object stores, you know, in more of a hybrid cloud workflow. Speaker 3 00:42:09 For sure. Jason, one of the first things that I thought about, given these self-describing objects, is something like iconik, right? Where we've got a cloud-based MAM that can be a really accessible front end. And so if we had a StorNext SAN in our production environment, but we also wanted to share things front-facing via the MAM, if the MAM could access that same bucket that the file system was moving things into or replicating things into, uh, that would make certain things really easy, wouldn't it? Speaker 4 00:42:46 Potentially, yeah. That sounds like a very solutions architect-y story you just told there, and that was a very systems engineer answer, exactly. Ben, that's it. Speaker 2 00:43:05 You know, just to interject there, this whole topic is so top of mind for people with COVID, right. I mean, everyone is focused on remote, online editing, and I think you've seen this in your industries. I mean, COVID has kind of been an accelerant for where the industry was going. It's just that I think we're going to get there in one year, not 10. Yeah, exactly. Speaker 4 00:43:28 We have actually been talking about this for the last few months here on the show, about how fast everything went. Um, Eric, we just did an episode in November, December of last year about, like, the future of cloud editing. Right. And it was outdated in a few months, like completely outdated. The episode was very much about what is possible today and, you know, the perception versus the reality, and like, when will this be possible? And we kind of just blew that out of the water three months later, you get it. Speaker 2 00:44:01 It forced people to get creative. And, uh, what our customers have found is, you know, they were forced to set up a studio in the cloud. Right. And now, you know, maybe that wasn't a blockbuster movie, but maybe for their news team. Right. And they were forced to do it, and guess what, it worked. So we talked about what we did recently with StorNext, and again, it's kind of these building blocks for the roadmap that we're on. I mean, our next step is to bring to market where you can actually set up a studio in the cloud, or a file system environment in one of these clouds, with StorNext. And we actually have that deployed at a couple of customers already, you know, I would say in kind of a beta, but I mean, they're running StorNext in the cloud, and that'll be something you'll hear more from us on, you know, as we go forward on our StorNext roadmap, because it's clearly an area of a lot of interest from a lot of people. Speaker 3 00:44:56 So, StorNext in the cloud. Are we allowed to ask questions? I was going to say, Speaker 2 00:45:02 Yeah. So yeah, Speaker 3 00:45:05 If I, yeah. I mean, uh, the imagination wanders, right. Um, if we're spinning up a SAN in the cloud, if we've created essentially virtual machines of our metadata controllers and we're talking to storage layers, because in a SAN file system with StorNext, we've got to have metadata, right?
Because the data about the data for the file system, all of the inodes, uh, essentially like the roadmap or the treasure map to where your files live in the file system, that's what makes it so fast. And that's what also protects people from overwriting each other's files, is that we have this wonderful file system metadata. That's got to live in its own little chunk of storage or on a stripe of storage. And then we've got the block storage, right? So all of these different storage components go into the StorNext file system to make a SAN environment. So in order to be able to press a button and spin up a SAN, which is essentially like the awesome futuristic robot version that we're talking about here, um, everything would have to be living on some sort of a hypervisor and spin up like magic. Tell me, tell me secrets, Eric, tell me. Speaker 2 00:46:19 Um, the way I think about that is as a series of stairsteps, and I think it will help point the path for where we're going. Right. Okay. We started talking about the F-Series, and one of the things we did was we developed our own software-defined block stack, right. And we've also developed a very basic hypervisor, and we're using that today on some other products. It was. So Speaker 3 00:46:37 Now, a hypervisor is what? Speaker 2 00:46:40 Um, a hypervisor, think about it as a layer of virtualization software that can host multiple guest VMs, right, and converge those and run them on a single physical server. I didn't say hyperconverged, but let's just say converged. Sure. Now we've got all those pieces, you can say, well, okay, it's pretty easy for me to imagine that when we go to a fully software-defined architecture in StorNext 7, we can actually run StorNext, along with the block software and all the components we need, on a single 2U server. Right. And we run StorNext, think about it, we run StorNext in a VM. Right? Earlier we talked about the F2000, which is a 2U NVMe server that runs Quantum software. It's got two controllers. You can actually envision running an entire StorNext file system on that F2000, one box. Right now that's for NVMe. Speaker 2 00:47:35 And we're going to have other boxes that have hard drives and all that, but that's not too far out. That might be our next Workflow Show where we talk about this, maybe next quarter. That sounds awesome. Once you get to that point, it's a pretty easy next step to say, okay, if I'm running StorNext as a VM on a hypervisor, it's actually not that hard to run it as an AMI in AWS, or run it as a VM in Azure. Right. And that's kind of the concept. How it's architected, how the MDCs work and all that, I'm not sure of yet, but I think hopefully that helps give you sort of the path, and for the listeners out there, you kind of understand where we're going a little bit technically. Yep. We'll be making some announcements here, I mean, the expectation would be, uh, early next quarter, we'll be making some announcements around that. Speaker 1 00:48:20 Super cool. Awesome. So speaking of the future, and I'm talking about StorNext 7, let's talk about file system pools a little bit, best of both worlds. What are we looking at there? Speaker 2 00:48:33 This is kind of a new data management capability that we're adding in StorNext 7, and it was really driven by NVMe, I have to say. I mean, so, okay.
Because we think the, you know, future of production storage is around NVMe, and we want to enable our customers to take advantage of that, but we don't necessarily want to make everybody have to upgrade their entire production environment to NVMe, right. It's not economically viable, maybe. And we had a number of customers that said, look, I want to deploy some NVMe in my production tier, but I may want to leave my existing storage investment in place, because the storage arrays I have today are fine for 80% of the users, but, you know, the 20% really need that performance boost. So, okay. Got it. Right. Um, what we developed was a new capability in StorNext that we call file system pools, and there's two components to it. Speaker 2 00:49:30 First is it allows a system administrator to define pools of storage within a StorNext file system. So for example, you may define an NVMe pool and you may define a nearline storage pool, right, or, you know, hard drives, right, so hard disk. Right? Right. Yeah. And then you can kind of assign specific users and clients to those using the features that are in StorNext. The second component of file system pools is a policy engine that allows an administrator, once the pools have kind of been defined, to define policies that say, okay, I want to move this file or this directory between these pools of storage. StorNext manages all of that under the covers. So to a user, to an editor or to a colorist or a visual effects technician, they don't see this stuff moving around. It just looks like their local drive, but the file system can be moving this data in the background. So one way that customers are using it today, because this is already deployed in production in some select accounts, is, I guess I talked about these examples already, you may have a colorist that needs to finish something and you just promote the files or directories that they need to work on. You promote them up to the NVMe pool. To them, it doesn't look any different, they just get a lot faster performance. When they're done, you can move it back down. Speaker 3 00:50:51 Right. Or maybe the ingest operation just always first goes to the NVMe pool, because we know it's a new, hot thing, a new hot project. And maybe we can set a policy on the back end that says it's been there for a month, nobody's touched it, demote it. Right? Speaker 2 00:51:06 Yeah. And then demote it. Yeah. And those are all examples of policies that administrators can set. You know, you can kick off a job and, you know, move data, move files or directories. You can use APIs and all that, you know, scripted and everything. But you can also set policies based on time, based on certain thresholds and things like that, to be able to move between pools of storage. And so really what it enables is it enables customers to start to use NVMe, get the benefits of it, and start to build a production storage environment that's, you know, partially NVMe, partially these lower cost types of storage, and then migrate to the future over time. So this is great. I mean, this opens up a lot of possibilities. You mentioned this can be done with REST
APIs. I mean, I imagine it could also be done with a GUI and, uh, you know, other ways too, but with the REST API, uh, that gives us the capability of hooking that into some sort of an automation orchestration platform, uh, or a MAM with that functionality, and, you know, really sort of drive that based on where things are in the process, if we're tracking the process through that project. Speaker 2 00:52:09 Exactly. Right. You know, there's so much we're doing in StorNext 7, there's so much we have going on. You know, we have a new simplified user interface. I mean, it is a dramatically simplified user interface, right. And setting these policies for file system pools, you know, we can show that today, we're demonstrating that today. So you've got that option. And then, um, yeah, Jason, right on, I mean, integrating this with asset management applications or other kind of orchestration tools makes it really easy to do. And we've got a good, long heritage of doing that. Actually, you know, we won a technical Emmy for that at the start of the year, which was kind of cool, for our contributions in the, uh, field of data management. So that was neat to be recognized that way. So that's great. Speaker 3 00:52:55 It's worth mentioning too, that you guys have had a policy-driven hierarchical storage management layer for a long time in Storage Manager, but this is something completely different, right? Speaker 2 00:53:10 Yeah. I was just going to ask, like, how is this different from, say, Storage Manager moving data to the cloud or to tape, or maybe an affinity. Yeah, this is something totally different. So Storage Manager was really designed many years ago, and that was really about moving data to tape. Okay. And, you know, we've got a lot of customers that still use digital tape for their archives, and it's an important part of what we do, but we've built on those capabilities over the years to be able to tier data off to different secondary storage targets, you know, and that might be ActiveScale object storage, it might be AWS or Azure. File system pools is really all about moving data within the primary file system. So it's a different layer. It's a different policy engine as well, actually. And yeah, I mean, the benefits of it are just for simplicity, and it's really all about enabling NVMe, I would say, more than anything else, as that go-forward tier of production storage, right. Being able to move data seamlessly between NVMe and hard drives and just boosting performance. Speaker 3 00:54:16 And this is going to be something that's part of StorNext straight up. It isn't an additional license. It's just going to be a huge value add for customers. Correct. Yeah. Awesome. I think people are going to be really excited. Speaker 2 00:54:28 One thing I will say, and you know, when we move to StorNext 7, we're moving to a software-defined architecture. Technically, we're going to move to more of a software-defined business model too, you know, we're actually going to be simplifying our licensing and, you know, allowing people to consume the software more like they're used to consuming maybe other software applications today, as a service and a subscription-based type of thing. So we're going to have some new licensing options that we'll introduce with StorNext. Speaker 3 00:54:56 SAN as a service, God help us.
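Tying the file system pools discussion above to the REST and automation point: a purely illustrative sketch of how a MAM or orchestration script might ask a storage management API to promote a project directory to an NVMe pool. The URL, endpoint path, and payload are hypothetical placeholders, not the actual StorNext API:

```python
# Hypothetical example only: the URL, endpoint, and payload below are invented
# placeholders to show the automation pattern, NOT the actual StorNext API.
import requests

API = "https://storage.example.com/api/v1"   # hypothetical management endpoint

payload = {
    "path": "/stornext/prod/projects/spot-2020/",  # directory to promote
    "target_pool": "nvme",                         # assumed pool name
}

resp = requests.post(f"{API}/pools/promote", json=payload,
                     auth=("admin", "example-password"), timeout=30)
resp.raise_for_status()
print(resp.json())
```

A MAM or workflow engine could fire a call like this when a project enters its color or finishing stage, and a matching call to demote the directory once the project wraps.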
Speaker 2 00:54:59 And you said that a couple of times. I don't refer to StorNext as a SAN; it's a file system. Right. So yeah, maybe Speaker 3 00:55:08 That's how we got into this game. That's what Quantum has always been in the back of my brain. So obviously the StorNext file system is the heart and soul of Quantum's business, but you guys do a whole lot more. So. Yeah, for sure. Speaker 2 00:55:20 Yeah. Yeah. And yeah, it's also, going back to the cloud conversation, I mean, it's easier to think about, I would never say our plan is to deploy a SAN in the cloud. I mean, maybe, but deploying a file system in the cloud, yeah, we're going to do that. Speaker 3 00:55:33 Right. And SAN is a storage area network, right. We still build them these days if people still need wickedly fast block level storage, in the case of Fibre Channel. But I always just think that the concept of a storage area network, for me, is akin to what we're talking about in terms of hyperconverged, you know, storage, right. We have all of these multiple ins and outs and various protocols that we can bring, and just the terminology kind of sets my mind to wondering. So that's kind of what my brain does with that term. Speaker 2 00:56:11 Yeah. Right on. Speaker 3 00:56:14 Um, so we talked a little bit about private cloud and ActiveScale. It's probably useful just to talk a little bit about a couple of use cases, we talked a little bit about that in the past in our second episode of the M&E basics series, but in terms of what you guys are using ActiveScale for, where do you see great user adoption for that product? Speaker 2 00:56:39 You know, in a kind of a market tiering way of talking about it, we're seeing that high-speed file and block is where data works, and object storage is where data lives. And it's also, you could say, it's where data lives and maybe, you know, lives for a really long time. Almost all of the use cases for ActiveScale that we see, it is a very large scale content archive. It is a content archive in media, it's a content archive outside. It may be an active archive, and this may be because of Quantum's portfolio, but most of the deployments with ActiveScale, we have a StorNext file system cluster in front of it, right? So people are using StorNext to do their work, to ingest the content quickly. When they're not actively working on it, we can move data to the object store and it's protected, right? It's a self-protecting object store. And if they need to bring it back, they can bring it back and work on it, and they can also directly access the object store for other things and other reasons. But most of the time, what we see is ActiveScale being a kind of very large scale content repository. Speaker 3 00:57:47 Yep. That makes total sense. Speaker 2 00:57:50 We've got, yeah, I mean, good adoption in media and entertainment as a content repository, um, sports and sports production. A lot of big sports content archives are based on object storage. And then outside of that, it's genomics research in life sciences. Um, we do see, you know, financial analytics and stuff where it's effectively like a big data lake, you know, and someone running the analytics on some type of, uh, you know, file cluster, but it's a big data lake. And in that regard, we think ActiveScale is the best, uh, it's got some unique technology in terms of how we place data and how we place the erasure codes that just gives it the highest levels of durability, highest levels of performance.
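Eric goes deeper on erasure coding across sites just below. As a toy illustration of the underlying idea only (split the data, add parity, rebuild what is lost), here is a single-parity example; real object stores use far stronger codes spread across many drives or geographic sites:

```python
# Toy erasure-coding illustration: two data chunks plus one XOR parity chunk,
# so any one missing chunk can be rebuilt. Production systems use stronger
# codes (e.g. Reed-Solomon) spread across many drives or geographic sites.

def encode(data: bytes):
    """Split data into two chunks and compute an XOR parity chunk."""
    half = -(-len(data) // 2)                       # ceiling division
    a, b = data[:half], data[half:].ljust(half, b"\0")
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def rebuild(surviving: bytes, parity: bytes) -> bytes:
    """Recover a lost chunk from the surviving chunk and the parity."""
    return bytes(x ^ y for x, y in zip(surviving, parity))

a, b, parity = encode(b"valuable digital asset")
print(rebuild(b, parity) == a)   # True: chunk "a" is recoverable after it is lost
```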
And, you know, we have customers managing over 200 petabytes of ActiveScale with a single administrator, because the system just takes care of itself. I don't think there are any other object stores that could say that, because our software is so intelligent and self-healing in the way it works. So that's really where we see it shining: billions and billions of files or objects, and massive, multi-petabyte-scale content repositories. Speaker 3 00:58:59 Yeah, that makes total sense. And correct me if I'm wrong, but having a single object space live across multiple geographies, for that kind of bulletproof layer, is a good way to think about it too. Speaker 2 00:59:17 One of the things that makes ActiveScale unique is that we can actually deploy a single object storage system across three geographic sites. Now, not every customer has three sites, but for those that do, the way our software works, you can actually lose one site, and in some cases multiple sites, and we can still rebuild all your data. That's the magic of the erasure coding software that we do. When you have really, really valuable digital assets, and for basically every customer we have in media the digital assets are typically the most valuable assets of the company, they need to be preserved and protected forever. Best practice says keep that data in three places; keep three copies, if you want to think of it that way. Now, when you do that with tape, tape is very low cost, but you've tripled up your tape capacity, right? Speaker 2 01:00:10 You put tape in three locations, you've tripled it. At that point the TCO is actually a little bit better if you use an object store system with software erasure coding to spread those objects across the three different sites, and you have a very, very durable, protected-against-disaster archive. That's really where we see ActiveScale being deployed, and we're going to be talking about it a lot more this year. For your customers in media, I think everybody should be looking at an object store based media archive because of its durability, and, Ben, because of what you said a couple of times: that object interface actually makes it easier to search and index your archive. It makes it more searchable, more accessible. Like our customers in sports right now: they've got no new content to show, so they're trying to get material back from the archive and repurpose it so they have things to broadcast. Speaker 4 01:01:07 And I also want to say that working with an object storage interface, even when we're talking about working with, say, a REST API, is much easier for me as a workflow engineer than working with, say, a piece of middleware that manages a tape library. Cool. Well, there's another product that you guys recently acquired that I'm really excited about. I wonder if we could talk a little bit about Atavium; I've heard it pronounced a couple of ways. Speaker 2 01:01:40 Yeah. Going back to the big picture storyline here: we build our own NVMe products, or the software to run our own NVMe products, and that's about production. We acquired ActiveScale to have the object store software with erasure coding.
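[Editor's note: to make the multi-site erasure coding idea above concrete, here is a toy sketch of the simplest possible scheme: split an object into two data fragments plus one parity fragment, place one fragment per site, and reconstruct after losing any single site. Real systems such as ActiveScale use far more sophisticated codes and placement policies; this only illustrates why spreading coded fragments can be cheaper than keeping three full copies.]

```python
# Toy 2+1 erasure code (single XOR parity), purely illustrative.
# Production object stores use general erasure codes (for example,
# Reed-Solomon) with many more fragments, but the recovery idea is the same.

def encode(data: bytes) -> list:
    """Split data into two halves and add an XOR parity fragment."""
    if len(data) % 2:
        data += b"\x00"                      # pad to an even length
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]                    # one fragment per "site"

def decode(fragments):
    """Rebuild the object even if any one fragment (site) is missing."""
    a, b, parity = fragments
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, parity))
    elif b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))
    return a + b

obj = b"priceless sports footage"
site1, site2, site3 = encode(obj)
# Simulate losing site 2 entirely and still recovering the object.
recovered = decode([site1, None, site3])
assert recovered.rstrip(b"\x00") == obj
# Overhead here is 1.5x raw capacity, versus 3x for three full copies.
```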
We obviously have the tape products, and we're going to bring all that together in a stack where we've got all the different tiers of storage you would need to build a hundred-year archive, right? And we've already got good data management with StorNext to move files and objects between all those tiers; we actually have a very good orchestration engine. The next big challenge is: how do you classify all of that data? How do you make it searchable? How do you make it browsable? We were fortunate enough in March to close a small acquisition of the Atavium source code, and we actually had a couple of their key team members join Quantum's team. Speaker 2 01:02:40 One of the cofounders there, a guy by the name of Ed Fiore, joined Quantum in March. He's the general manager of our StorNext business and our primary storage business. And the thing Atavium did that I think was very unique was a data classification and tagging engine. When files are ingested, they basically get classified and then tagged as they come in. Once you do that, it makes the entire file system, or the entire repository, much more searchable in a variety of the ways we've talked about, through web services APIs or through the user interface, where you can look at a bunch of statistics and so on. Our intent is to take that technology, integrate it into our StorNext roadmap, and bring it to market, and I think that's within the next six to nine months. Speaker 2 01:03:33 Maybe six months. This isn't that far out; this is pretty near term. You can then build a well classified and tagged archive where we've got the file, the object, the tape if you want to keep it there forever, and you can search and index everything, even as you're moving data between on premise and cloud. So this idea of providing that seamless bridge from on prem to cloud is another key technology piece that we're able to bolt on and that we're going to integrate into our roadmap. It's very exciting. You guys know one of the biggest challenges customers have: if they have a lot of these unstructured data files on a large scale-out NAS platform and they need to search it, it can take a long time, and getting insight into what's really happening with their data can take a long time. I think that's the next thing we're going to solve with this tech. Speaker 4 01:04:25 Yeah. Well, it certainly seems that way. Any kind of time savings from getting those tags up front and then being able to search on anything later, that's really the kind of thing people are looking for these days. So yeah, that's pretty cool conceptually. I mean, we talked about, Speaker 2 01:04:44 It's data being ingested into, let's say, a StorNext file system, right? Imagine that as that's happening, we're tagging that content, and then we can classify it a whole bunch of different ways, run analytics on it, and so on. That is the conceptual intent of how we bring those technologies together.
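[Editor's note: the ingest-time tagging Eric describes, where files get classified as they land so the whole repository is searchable later without a crawl, could be sketched along these lines. This is a generic illustration, not Atavium's or StorNext's actual engine; the rules, tag names, and the simple JSON index are made up for the example.]

```python
import json
import os
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical classification rules keyed on file extension; a real engine
# would also inspect file content, embedded metadata, and site policy.
RULES = {
    ".mxf": "camera_original",
    ".mov": "mezzanine",
    ".mp4": "proxy",
    ".wav": "audio",
}

def tag_file(path: Path) -> dict:
    """Build a small tag record for one file at ingest time."""
    stat = path.stat()
    return {
        "path": str(path),
        "class": RULES.get(path.suffix.lower(), "other"),
        "size_bytes": stat.st_size,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        # Naive guess: treat the second path component as the project name.
        "project": path.parts[1] if len(path.parts) > 1 else None,
    }

def ingest(root: str, index_file: str = "tag_index.json") -> None:
    """Walk an ingest directory, tag every file, and write a JSON index."""
    records = [tag_file(Path(dirpath) / name)
               for dirpath, _, names in os.walk(root)
               for name in names]
    with open(index_file, "w") as f:
        json.dump(records, f, indent=2)

# Later, a search is a filter over the index instead of a full crawl, e.g.
# [r for r in records if r["class"] == "camera_original" and r["project"] == "ep53"]
```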
So yeah, Speaker 4 01:05:01 I'm thinking that has really huge implications for the sorts of use cases where this is content that we need to bring in very quickly and just save, in case it needs to be found later for, say, legal proceedings or something like that. So maybe security footage, body cam Speaker 3 01:05:18 footage. It sounds like there's a lot of use case there. Speaker 2 01:05:20 Lots of good use cases there. And think about hybrid cloud and multi-cloud deployments, right? We have customers today that have files on premise, objects in AWS, and a small editing environment in Azure. We have customers doing that today. When you think about managing that content across multiple clouds, having that data classification engine, and then being able to manage and move data around and orchestrate it, if you will, based on those tags, that's what we're obsessed about now. Speaker 3 01:05:52 It is gold, because siloing is a thing. Yup. No, it's super exciting. It's like some of the things that we do with a MAM, in that we pull as much available metadata out of the files themselves: the file extensions, learning what type of media it might be upon ingest into a MAM, who created it, what the file path is, any relevant metadata in the file path. All of that comes from the file system. If the file system is already aware of it and can feed it forward into the MAM, and then if the StorNext file system understands multiple silos, multiple cloud repositories that are self-describing, it just makes the end user's experience so much richer and faster, which is just Speaker 2 01:06:48 awesome. Yep. Cool. Very cool. We're very excited about it. And yeah, we're just going to turn up the volume on everything we're doing, because we've made a tremendous amount of progress in the last two years, and I think over the next year we're really going to pull away from all the competition. I think people will see that, and I think we're going to blow people away with everything we've got. Sweet. I'm looking forward to being blown away. Yeah, Speaker 3 01:07:22 Absolutely. Something we've talked about recently, in terms of trying to have a regular segment, is asking people what some of their favorite workflows have been, or ones they've seen really helping people, as well as what media they've been enjoying. So Eric, let's ask you that: what's a cool workflow that you've seen recently that really helped someone? Speaker 2 01:07:47 The ones that come to mind for me recently are the ways we've worked with partners such as you guys to help customers set up remote, online editing. That was obviously a top priority, and it was an area I didn't know much about; I had to learn very quickly. But it was really cool for me to talk to customers. I remember speaking with a few customers specifically, it was in April, and they had gone through a few weeks of just total crisis mode, right? Businesses shut down, and they were just coming out of it.
And I was talking to them about how they had set up remote, online editing, using virtual desktop technologies to access the workstations that were at the studio with a StorNext client. Speaker 2 01:08:38 Right. So they'd figured out how to use their existing StorNext infrastructure with these virtual desktop technologies. There's a whole bunch of them out there; they essentially encode the pixels on a screen, encrypt them, and send them to the home office, so to speak. But it was neat to talk to them. They were sort of like, hey, we've got it figured out, people are working, we're still doing projects, and now we're cleaning up some things. I really remember those conversations because it was nice to feel like our customers were coming out of crisis mode and getting back to work, and for sure we're seeing that a little bit more even this quarter; we're starting to see new projects and all that. But yeah, those workflows, which we worked with partners such as you guys to help design and get running, and in some cases we had to get some new clients deployed and things like that, that was a pretty interesting learning area for me. Speaker 3 01:09:30 Yep. For us as well, like we said. Speaker 2 01:09:33 I think related to that is what people are doing in terms of studio workflows in the cloud. We're now working with a couple of our larger StorNext customers who are using StorNext to run editing suites in the cloud, and it seems to be working. I'm really encouraged by that; we've got to figure out how quickly we can bring that to market. But those two things are both in direct response to COVID, and the industry has just accelerated how quickly they can do that stuff. So that's been, Speaker 3 01:10:02 Yeah, for sure. What's some cool media you've been enjoying during the pandemic? Speaker 2 01:10:10 My favorite Quantum customer is Red Bull Media House. I've got the Red Bull app on my phone, and my little son, who is almost four, he and I love to watch biking and mountain biking videos from Red Bull. We do that on YouTube or on the Red Bull app and cast it. I love it, the content's awesome, and I watch it with my little son. We've been enjoying that usually once a night; we'll watch five or ten minutes of little biking videos. So I've been enjoying that. Speaker 3 01:10:44 Yeah. They do some really cool music stuff too. I've definitely enjoyed some of the music content that they've been making. Speaking of which, what's some cool music you've been listening to? Oh man. Speaker 2 01:10:55 I'm always listening to new music. I'm a big fan of punk rock and heavy metal. Oh gosh, I've got to think of some good recent examples. Speaker 3 01:11:08 Sounds like my house. I don't know what you're talking about, not a metalhead over here; I don't have five guitars sitting behind me. Speaker 2 01:11:14 It's funny, I subscribe to Apple Music, and they always have new stuff coming out, so I'm always downloading things and listening to them, and I kind of just put it all on shuffle.
So I'm trying to think of albums that have come out this year that I think have been really, really good. Speaker 1 01:11:28 Have you heard the latest Hum album, Inlet? Speaker 2 01:11:32 Yeah, you know, I did hear that. It was good. I downloaded it and listened to it, and it was pretty good. I remember those guys from the nineties or something, and I downloaded it again and thought, that's cool. I've also been liking this band called White Reaper, if you guys know those guys. They're not really punk rock, just kind of good rock and roll, these young kids with some good stuff. I hadn't listened to them before, and I like them. There's a bunch of new metal stuff that's supposed to be coming out later in the year that I'm looking forward to, but as you guys know, a whole bunch of releases have been pushed back. Speaker 1 01:12:09 For sure. If you can't tour and do the support for the album, you're kind of dead in the water. Well, awesome. Thank you for sharing. This has been great. Well, Eric Bassier, senior director of product and technical marketing for Quantum, thanks for Speaker 2 01:12:24 joining us, Eric. Yeah. Thank you guys again. Really appreciate the opportunity. Speaker 1 01:12:28 We appreciate having you here. This workflow therapy session, like all other Workflow Show episodes, is a production of Chesapeake Systems and More Banana Productions. I'm Jason Whetstone, senior workflow engineer, and I'm Ben Kilburg, senior solutions architect. Ben also records and edits the show and produces the original music. If you enjoy the show, please subscribe in your podcasting app of choice, and please tell a friend, coworker, or client about the show. We'd love to hear what you love about the show too, so email [email protected]. Thanks for listening.
