#69 Accessing Massive Video Storage Systems in the Cloud with George Dochev, Co-Founder and CTO at LucidLink

The Workflow Show
February 22, 2022 | 01:14:18

Show Notes

On this episode of The Workflow Show, Ben and Jason bring on George Dochev, the Co-Founder and CTO of LucidLink. LucidLink is a "Cloud Native NAS replacement" that provides a solution for cloud-based video storage, using block storage mechanics to do so. Jason and Ben ask George to explain how LucidLink enables certain cloud-based workflows that were previously not possible, and what kinds of innovative new workflows could spawn from LucidLink's capabilities. They also discuss specific workflows that have utilized LucidLink, as well as CHESA's experience implementing it on client systems.


Episode Transcript

Speaker 0 00:00:01 This is The Workflow Show, a podcast covering stories about media production technology, from planning to deployment to support and maintenance of secure media solutions. We cut through the hype in the media industry and discuss solutions to the human challenges in media production technology. This approach we call workflow therapy. I'm Jason Whetstone, senior workflow engineer and developer for CHESA, and I'm Ben Kilburg, senior solutions architect at Chesapeake Systems. Today we'll be discussing an innovative hybrid approach to managing file-based media in the cloud. We've talked in previous episodes about some of the differences between file systems and both cloud and on-premise object storage. The push to remote work in the last few years has necessitated a faster adoption of cloud-based technologies, but applications still require a performant file system to work in a satisfactory way for most production schedules. The file sizes we're working with here mean that upload and download workflows are not practical or timely for many use cases.

Speaker 0 00:01:05 Most cloud-based storage is object storage, and virtualized file systems backed by cloud storage often have performance issues that preclude them from being used for the work-in-progress media applications we use in the media industry. Enter LucidLink, an innovative technology that essentially treats object storage as block storage. Some organizations can adopt this technology to reduce their dependence on on-premise storage, and in some cases eliminate on-prem infrastructure altogether. At the same time, LucidLink can enable certain remote workflows that were previously not possible, as long as users have a good, reliable internet connection. Our guest today is LucidLink co-founder and CTO George Dochev. We'll talk with him about how this technology works and what kinds of workflows are possible, and we'll cover expectations about performance and security. Before we get started, a reminder to please subscribe to The Workflow Show so you know when we drop new episodes and new content. If you have suggestions for guests or episode topics, tweet @theworkflowshow and do the same on LinkedIn; workflowshow@chesa.com is our email address. And now, on to our discussion with George Dochev, co-founder and CTO of LucidLink. So let's just jump right in. Tell us a little about your journey. You're a storage guy, right? So how did you get into this cloud sort of storage?

Speaker 2 00:02:41 Well, that's an interesting question. I spent the bulk of my career at a company that was writing storage software. Storage virtualization is what we used to call it, which is now known as software-defined storage. Around 2013, 2014, I realized that the workloads were slowly going to move to the cloud, and the question then became: how do we respond to this new trend? Now, of course, it took another seven, eight years for that to happen, but now it is actually happening in earnest. As head of engineering at DataCore Software, I experienced the challenges of accessing remote data firsthand, because we had a very distributed team. That team was partially in Europe, part of it was in the US, we had some people in Japan, France, et cetera. And the challenge at the time was that we were working with these massive builds. We had a build farm in Florida, 20, 30 servers turning out builds on a daily basis, and we all had to work with these builds.
Those were massive datasets, in the 10-gig-plus range per build, and imagine you have 10, 20 of them per day. It very quickly becomes impossible to synchronize all of that data across that very highly distributed team. So I experienced this firsthand, and I thought to myself, there's got to be something we can do here. We looked around, we tried different existing technologies, and none of them fit the bill. And I said to myself, this cannot be an isolated problem; I am sure that other people have this problem. I didn't think about media and entertainment at the time; I was just trying to solve this fundamental storage problem. And then, fast forward, 2021 is when we actually found product-market fit in media and entertainment, in the rich content space.

Speaker 0 00:04:46 I think to talk about why that is, we should talk a little bit about what LucidLink is and what it does. So just give us the high-level overview.

Speaker 2 00:04:55 Sure. So LucidLink is a file system as a service, where the data is hosted in the cloud of your choice and streamed on demand back and forth from the end device, which is typically on the edge, but it can be a hybrid scenario where you might have certain devices or servers in the cloud itself and some of them outside. The typical scenario would be creative people collaborating on large data sets or large files. It could be individual large files like video content, or it could be millions of small files. In either of those situations, you run into a problem, because the existing technologies today are based fundamentally on synchronizing what's in the cloud down to your local machine, which is a form of copying files back and forth. Those are great, and I'm not trying to diminish the success that they have had for the industry as a whole.

Speaker 2 00:05:59 We all use them and love them, but they come up short when it comes to accessing large files or large data sets instantly. And so, unlike those technologies, what we do differently is we keep the data in the cloud and we stream it on demand, instantly, as the application requests it. This is very similar to how your local file system uses your hard drive. You're not moving your entire hard drive into memory before using it; you're reading on demand, right? There are caching techniques and other things that the operating system employs, and we do the same things at LucidLink. But that's the fundamental difference: we don't copy or synchronize any of the data that lives in the cloud before you can use it, before you can access it. And by virtue of doing that, we solve this problem of large files and large data sets, because with our technology you have instant access to, say, a petabyte's worth of data right from your laptop, without having to synchronize any of it.

Speaker 0 00:07:13 Right. You're not synchronizing the data, but you can still see the file system. That's one of the things that seems so magical about it. And I think people with less of a storage background focus on the fact that, oh, how do I see all this data? It's not really here; it's actually in the cloud. I mean, I think, George, the way you described it is great: it works like your computer streaming the data off of your hard drive. We have a past episode, it's several years old at this point, about storage and how file systems actually work.
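To make the on-demand model George describes a little more concrete, here is a minimal sketch of the read path, assuming a hypothetical object-store client and a file laid out as fixed-size blocks; none of these names are LucidLink's actual API.

```python
# Hypothetical sketch only: a read() for a byte range fetches just the
# objects (blocks) that cover that range, never the whole file.

BLOCK_SIZE = 256 * 1024  # assume the file is stored as fixed-size objects

def read(object_store, file_extents, offset, size):
    """Return `size` bytes starting at `offset`, fetching only needed blocks."""
    data = bytearray()
    first = offset // BLOCK_SIZE
    last = (offset + size - 1) // BLOCK_SIZE
    for index in range(first, last + 1):
        # One small GET per block, instead of downloading the entire file.
        data += object_store.get(file_extents[index])
    skip = offset - first * BLOCK_SIZE
    return bytes(data[skip:skip + size])
```

The point of the sketch is the access pattern: a playhead landing in the middle of a file triggers a handful of small object fetches, not a whole-file download.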
That might be something for listeners to go back to if you want a little bit of a refresher on how that works.

Speaker 0 00:07:54 But one of the things that I think is so fascinating about this technology is that it really does work like a hard drive. It works very much like some of the block-based storage that we are used to using in the industry on-prem. So, you know, you've got a SAN, and a lot of us in the industry are familiar with the idea that there is block-level communication, usually over fiber, between the machines and the metadata controllers, or between the machines and the storage, and then there's this Ethernet-based communication that's the metadata network. And we're not talking about MAM metadata; we're talking about file system metadata, things about the files. That's what you actually see when you open the Finder, right? Yeah.

Speaker 3 00:08:32 I think it's worth hammering home a little bit more about the file system metadata, the delineation there, and what we encapsulate there, right? Because it's an important distinction. We obviously talk about metadata all the time here, but usually that's custom user metadata within the context of a workflow. What we're talking about here with file system metadata is stuff like icons, names, the file system tree hierarchy, stuff like that. And I'm sure, George, you can tell us all sorts of amazing things about how LucidLink handles that metadata.

Speaker 0 00:09:06 Oh, well: where does the file start on the actual disk, where does it end, and how many parts is it broken up into? A lot of people don't realize that your files usually are broken up into several parts, depending on where the holes are, so to speak, where the space is available on the drives themselves. Yep.

Speaker 2 00:09:27 I made this analogy, and I should elaborate a little bit on it. So regular file systems, the way they function, is they put the data on your local drive and they layer the hierarchical model on top of that block device. And the block device has certain characteristics: it is partitioned into equal blocks, hence the term block device. So the blocks are fixed size, and of course it has a fixed capacity as a whole. Now, what we do is we host the data in the cloud, in an object store, and object storage has fantastic characteristics. Object storage is highly durable, it's very reliable, it's highly available, it's elastic. There are a couple of things that object storage gives you that your local disk could not give you. First of all, of course, accessibility from anywhere; this is a big thing. But it's also sheer durability: hard drives fail and you lose data.

Speaker 2 00:10:35 Object storage, especially at the hyperscaler level, is built using erasure coding and other techniques to provide that very, very high degree of durability. So the odds of losing data that's stored in object storage are slim to none; typically it's a human error, no longer the technology. That's not something you can recreate easily in a local data center environment. Even the high-end storage systems do not provide this level of reliability, availability, and durability that the hyperscalers provide for everybody these days.
So you have this giant, elastic, super reliable, super durable, always available hard drive sitting only one internet connection away. So the challenge becomes that internet connection. Instead of the local drive, we use the object storage that's in the cloud. Now, file systems create a structure that is much more easily digestible by humans: a hierarchical structure with folders and files and humanly readable names and all these nice things that we know and love. Object storage

Speaker 2 00:11:43 isn't a file system. It is fundamentally more akin to block storage as we know it, except that it's elastic and you don't have the size constraints and all these other things. But it's similar to block storage, not similar to a file system. And so what you've got to do is build the file system semantics on top of that object storage, and that's exactly what we do. The way we do it is we split the file system into two planes. One is the data plane, where the data goes into the object storage, and the other one is the metadata plane. And again, I want to make a distinction here: we're talking about file system metadata, not the media metadata that some people might think of.

Speaker 0 00:12:25 So again: where does the file start? Where does it end? How many extents does it have? How many pieces is it broken up into? How many objects, in this case, is it in the bucket? Which objects are they? How are they ordered? That kind of stuff. That's the metadata we're talking about.

Speaker 2 00:12:41 Including the user-generated metadata: file names, structure, access control, additional extended attributes, and all these things. They comprise the metadata. That metadata we keep separately, because object storage is just not a good repository for that kind of fine-grained information, which is better suited to a database than to object storage, right? And so by combining these two, we create this file system. Going back to my analogy with local file systems: there, the metadata is stored on the local disk. When the application reads something from a file, some of that data is brought into main memory, and operating systems typically cache these things so that frequently accessed data is not re-read from your disk all the time. It actually stays in memory.

Speaker 0 00:13:39 For our listeners, think of this as like your browser cache, right? When you go to a website, your browser will cache data, images, videos, things like that, that you are repeatedly accessing, so it doesn't have to keep downloading them from that website.

Speaker 2 00:13:52 Yeah, and all this caching is usually performed by the operating system environment itself, so all this magic happens automatically. In our world, we employ a similar technique, except that the data is not on your local disk, it's in the cloud, and the cache becomes your local disk. So we have a persistent cache that lives on your local device. The reason we do it is to solve this problem of latency, because we have a fantastic object storage platform with fantastic characteristics; however, we're very far away, typically thousands of miles away, and those are latencies in the range of 50 milliseconds to 100, 150 milliseconds. This poses a significant challenge to an application if you had to do a round trip to your object storage every time.
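The data-plane/metadata-plane split George outlines can be pictured as a small, database-like record per file that maps byte offsets to objects in the bucket. A hedged illustration with invented names, not LucidLink's actual schema:

```python
# Illustrative sketch of the two-plane split: the metadata plane is a small
# record per file; the data plane is the set of objects holding the bytes.

from dataclasses import dataclass, field

@dataclass
class Extent:
    object_key: str   # which object in the bucket holds this piece
    length: int       # how many bytes of the file it covers

@dataclass
class FileRecord:
    name: str                       # humanly readable name (kept in metadata)
    mode: int                       # permissions / access control
    extents: list[Extent] = field(default_factory=list)  # ordered pieces

    def object_for_offset(self, offset: int) -> tuple[str, int]:
        """Answer the 'where does byte N live?' question the hosts raise."""
        pos = 0
        for ext in self.extents:
            if pos <= offset < pos + ext.length:
                return ext.object_key, offset - pos
            pos += ext.length
        raise ValueError("offset beyond end of file")
```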
And so, in order to minimize the effects of latency, we employ all kinds of different techniques, but predominantly we use the local device, the local hard drive, as your sort of temporary cache. We do prefetching, we do all kinds of optimizations. We looked at what a local file system looks like and said: okay, you cannot take this and use the exact same architecture to build a file system that would function well in a cloud environment. We had to revisit everything from scratch and say, okay, what would a modern file system look like that runs in this cloud environment, that runs over the internet with these high-latency connections, et cetera? And that's what we've done here, right?

Speaker 0 00:15:35 Yeah, and it is, it's very cool. I will say, I think we at CHESA have really taken a liking to the technology. And like you said, George, object storage is really great; it's very analogous to block-level storage, with all of those benefits. I think you guys have done a really good job of taking the best things and doing the best that can be done with the limitations that we have, which are, like you said, the latency and the distance to the storage. We've also got another potential limitation, and this is something that I feel like we have to discuss with everyone that's interested in this technology: the actual user's bandwidth. So we're talking about a technology here that potentially lets you do remote editing and all kinds of things like that, all the things that we've been talking about for years. It really does bring you the ability to edit with Adobe Premiere with content that is living in the cloud.

Speaker 0 00:16:29 One of the limitations, obviously, that we don't have very much control over is the user's individual bandwidth. So let's just talk about that challenge a little bit. What does that look like? I know that's something that's difficult to socialize sometimes.

Speaker 2 00:16:41 For sure. So that's the other side of that coin: one side is latency and the other side is bandwidth, and you are sort of handicapped by both. That's exactly what the technology is trying to solve here: mitigate the effects of your limited bandwidth and the distance to data. And when it comes to bandwidth, there are certain limits that we have to abide by. So for instance, let's say you're streaming video. Video content is already compressed, and it's not subject to further compression. And so if you're streaming video content at a certain bit rate, let's say a hundred megabits per second, then the expectation is you're going to have to have a hundred-megabit internet connection. There is not much you can do about that. However, depending on the content, there are other files and other data sets that are highly compressible, or relatively compressible.

Speaker 2 00:17:37 And we employ on-the-fly compression for those. So depending on the content, you may be able to stream at a much higher rate than your bandwidth fundamentally allows, because everything is compressed on the fly.
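A minimal sketch of the persistent read-through cache with naive sequential prefetch that this passage describes; the fetch function, cache layout, and prefetch policy are all stand-ins (a real client would prefetch asynchronously and predictively):

```python
# Toy read-through cache: hits are served from local disk at local speed,
# misses pay one high-latency round trip, and we guess-ahead sequentially.

import os

class BlockCache:
    def __init__(self, fetch, cache_dir, prefetch_ahead=4):
        self.fetch = fetch              # function: block_id -> bytes (from cloud)
        self.cache_dir = cache_dir      # persistent cache lives on the local disk
        self.prefetch_ahead = prefetch_ahead
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, block_id):
        return os.path.join(self.cache_dir, str(block_id))

    def get(self, block_id):
        path = self._path(block_id)
        if os.path.exists(path):                   # cache hit: no latency penalty
            with open(path, "rb") as f:
                return f.read()
        data = self.fetch(block_id)                # cache miss: one round trip
        with open(path, "wb") as f:
            f.write(data)
        # Guess the reader is sequential and warm the next few blocks.
        for nxt in range(block_id + 1, block_id + 1 + self.prefetch_ahead):
            if os.path.exists(self._path(nxt)):
                continue
            try:
                blk = self.fetch(nxt)              # real clients do this asynchronously
            except KeyError:                       # past end of file in this sketch
                break
            with open(self._path(nxt), "wb") as f:
                f.write(blk)
        return data
```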
And those are some of the things that we do in order to improve that user experience and give you this sort of near-local user experience, as if everything is running off of your local disk, even though it may be thousands of miles away. When it comes to video content, I have to say video is one of the toughest nuts to crack. First of all, because of what we just said, the inability to really compress and push more data through the channel that you have available to you, but also because of the nature of the workflow. When it comes to video editing, you're very susceptible to interruptions in the data, right?

Speaker 2 00:18:31 The video has to flow; it cannot interrupt. And so solving that problem for video is basically a testament that this technology works well. In other scenarios, the end user sitting in front of the computer is not as sensitive to these latencies. Say I'm opening a large CAD/CAM file that maybe consists of many other small files and assets and whatnot; whether you open it in 10 seconds or 15 seconds, it doesn't really affect your workflow and user experience as much. But if you're streaming video, that video really needs to flow smoothly. And so we've proven the technology. One of our largest use cases is rich content creation: video editing, post-production workflows, all these nice things that you guys actually know much more about than me. I'm a storage guy, you know; I pretend to know your workflows, but I don't really.

Speaker 0 00:19:27 That's okay. And you've built us a marvelous playground to enjoy, that's right. Yeah. I think it would be useful, since we've talked about the underlying technology and how it works, to tease out maybe some of the most efficient ways to use this technology. One of the things that I have experienced recently, in thinking about how to use this technology, is that sometimes it might require just a rethinking of some workflows, or of how we're actually interacting with the content, maybe how we're using our media asset management platform and what kinds of workflows we're triggering in there. Actually, why don't we talk a little bit about pinning. This is a feature that the LucidLink software gives you. Tell us a little bit about what pinning does.

Speaker 2 00:20:06 We designed pinning to address exactly that limitation, where your internet connection does not allow you to work live with constantly streaming data back and forth, because your underlying internet connection isn't good enough. Let's say you're working in 4K video, but you're on a 50-megabit connection. Well, unfortunately, you won't be able to stream that directly from the internet. And that's where pinning comes in. It's sort of a hybrid between streaming everything on demand and synchronizing everything locally, like the file sync and share guys do. What we have is: I can right-click on a directory or a set of files and say, I want those files pinned locally, which means bringing all the data to my local cache just for that subset, so I can work off of the local disk, at the speed of my local disk, instead of bringing it live from the internet. We used to call this pre-loading.

Speaker 2 00:21:08 But in essence we said, okay, well, let's come up with a sort of user-friendly term, and we call this pinning.
And that's what it, in essence, does: it hydrates your local cache, which, as I mentioned earlier, lives on your local device, your local disk; it hydrates it with your hot working set, whatever you're working on. And you can do this today; you do this manually. We're going to extend this feature so that it's policy-driven. Let's say, Jason, you know that Ben is going to start working tomorrow on something. You're finishing work, and you can say, hey, I'm going to preload this onto Ben's computer if it's up and running, so that tomorrow morning when Ben wakes, he can start working on it immediately. Or it could be through administrative policies and all these things. Ultimately, we're trying to solve that internet bandwidth limitation that a lot of people have to deal with. Of course, we all want to have the gigabit internet connection; not everybody gets to enjoy that today, and things are improving, so the trends are in our favor for sure. But that's sort of the interim solution, the hybrid solution, if you wish.

Speaker 0 00:22:14 Great. Yeah, and that's really useful. Again, I've had some discussions about the bandwidth limitations. Hey, my internet connection: I personally have cellular at home. I live in the middle of nowhere, and cellular is the best I can do. So that's what I've got. It works; I'm here talking to you on Zoom, and it works fine. It doesn't always work fine, but it usually does; it's good enough, let's just put it that way. But I have tried doing some things with LucidLink, and I've had some pretty good success with it, really. I've got to say, surprisingly enough, I've had some pretty good success with it even over cellular, but you've got to keep your expectations realistic. That's right. Yeah. And that's really important to call out. I think if I were an editor and I was working on a project that was 500 gigabytes in size, I'd better make sure, number one, that I've got the time to pull that content down.

Speaker 0 00:23:01 Even if it was high-res content, I have to make sure that I have the time to pull that content down, or at least the parts I'll be working on, let's just put it that way. And I've got to make sure that I have the space to cache that content locally. So, to George's point, we're not pulling down, we're not going over to LucidLink and dragging the files over onto our local drive; that's not how this works. It's all sort of obfuscated and managed by the software, which again is really cool. But when we set up LucidLink on our workstation, we're identifying a location to say, this is where the cache will be stored, and you have to kind of dedicate that space.

Speaker 0 00:23:41 So you need to have some available storage for the cache. And that's a scenario where, at the beginning of a project, you might need to look at, well, how much content am I going to need to be working on?
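Pinning, as described here, can be modeled as hydrating a working set into the cache and exempting it from eviction. A toy, self-contained version follows; the LRU eviction policy is assumed purely for illustration and is not a statement about LucidLink's internals:

```python
from collections import OrderedDict

class PinnableCache:
    """Toy model: LRU cache of blocks where pinned blocks are never evicted."""

    def __init__(self, fetch, capacity_blocks):
        self.fetch = fetch                     # block_id -> bytes, from the cloud
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()            # block_id -> bytes, in LRU order
        self.pinned = set()

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # refresh LRU position
            return self.blocks[block_id]
        self._make_room()
        self.blocks[block_id] = self.fetch(block_id)
        return self.blocks[block_id]

    def pin(self, block_ids):
        """'Right-click > pin': hydrate a working set and exempt it from eviction."""
        for b in block_ids:
            self.get(b)
            self.pinned.add(b)

    def _make_room(self):
        while len(self.blocks) >= self.capacity:
            victim = next((b for b in self.blocks if b not in self.pinned), None)
            if victim is None:
                raise RuntimeError("cache is full of pinned data; unpin or grow it")
            del self.blocks[victim]
```

This also makes Jason's sizing point concrete: the pinned set plus working headroom has to fit inside the cache space you dedicated at setup time.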
How much of it do I need to be working on, you know, say, in real time? Is there a portion of it that I could work on at home, and then maybe be somewhere closer to the content, with a better connection, that kind of thing, later? Those are the kinds of considerations, I think, that are really helpful in making this kind of effort really successful, things that you might not have had to think about before, when you were sitting in the office connected to the SAN and it was all just there.

Speaker 3 00:24:18 Jason, it makes me think of a couple of things here. One: thinking through some of the codecs we use, right? Let's just use ProRes as an example. ProRes 422 is 147 megabits per second, right? So that means, if you want to stream ProRes via LucidLink and you're not going to pin things, so that you're downloading the portions of the file and pre-allocating that in your cache, you need to have enough internet bandwidth to be able to stream that. If you've got two streams of ProRes, you'd better have 300-plus megabits per second available. So that's where we get to: if you have a gigabit connection, then it's possible to really have an experience that is very similar to what most editors are used to, working in the office off of something like a NAS or SAN.
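Ben's bitrate math generalizes into a quick back-of-the-envelope check; the 147 Mb/s ProRes 422 figure is the one quoted in the conversation (it varies with resolution and frame rate), and the 25% headroom is an assumed safety margin:

```python
# Back-of-the-envelope stream-count math for an unpinned, streaming workflow.

PRORES_422_MBPS = 147  # figure quoted in the episode

def streams_supported(link_mbps, stream_mbps=PRORES_422_MBPS, headroom=0.75):
    """How many simultaneous streams a link can carry, leaving 25% headroom."""
    return int((link_mbps * headroom) // stream_mbps)

print(streams_supported(300))    # 1 -> two 147 Mb/s streams really need 300+ Mb/s raw
print(streams_supported(1000))   # 5 -> a gigabit link starts to feel like a local NAS
```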
There's a whole lot of technology underneath, but we're not beating the laws of physics. We're streaming based on what you have in bandwidths improve all the time. Speaker 2 00:27:32 So of course not everybody is going to get that right away. I live in the future. I have a gigabit internet at home and I'm paying 60 bucks with at and T fully symmetrical fiber to home. Of course not everybody gets that experience. I have true, fantastic experience. And when you get to that gigabit speed, plus this rivals, like you said, this rivals the experience that you're going to get from your local ness, working, working at the office. And what's interesting, in some cases it might be better and it might be better because like I said, we didn't take a regular file system and slapped it and made it work for, for the unit environment, where should we redesign everything? And we have our own unique architecture that takes advantage of certain things like taking advantage of your local disc as a caching device. Your NASA doesn't do that, but we do. And so what happens is that we have customers who compare the performance of their Nass. High-end Nass. I'm not gonna name names, but think of a high-end mass compared to the experience that loosening provides, and they get better user experience and better performance because of our caching itself. Of course, you have to have the one gigabit connection, but once you have that sure, we do things in some cases faster, even that kind of catches people by surprise initially. Speaker 3 00:28:59 Awesome. Right. Yeah. I guess imagine it, if you are able to sync the metadata for those files and so that all of that shows up immediately, and then it's just you asking for the blocks underneath of that and the pipes are wide enough to handle it. I'm sure it's just fantastic. Speaker 2 00:29:20 That's, that's another observation that we learn early on when we were sort of experimenting and trying to understand is that even feasible, that's something because a lot of people have tried and failed. So how are we going to be different? We realized quickly that the metadata portion is critically important. It has to be welcomed or at the bulk of it has to be local because the bullets of the requests that come from the operating system, actually our service by the metadata, and you need to have that metadata. You don't want to incur around for times over then. That's when your service metadata calls, there are 80% of all the calls. And so we said, you know what? We're going to build a system so that we synchronize the metadata and keep the data where it is. Unfortunately, we became very successful in the past year and a half. Speaker 2 00:30:13 And now we have customers who literally have 50, 60, a hundred million files in some extreme places and data sets that have gone beyond one petabyte, ready to get to that tail, to that magnitude. You can no longer even synchronize the metadata. It's just not realistic. The metadata becomes in some cases, a hundred gigabytes. Okay. And so now what we're doing is we're moving towards a new model where we're streaming the metadata as well, employing similar techniques that we employ for data in terms of looking at a user behavior, trying to figure out where the is going to go next, or, or the applications of a thing we're trying to, prefetch the metadata that we would need. We store the metadata locally. We cache it locally on the et cetera. 
So the same things we did for data, we're now doing for meta data, employing local caching, prefetching all these other things, compression, et cetera. Speaker 2 00:31:17 Right? Yep. And that allows the system to scale into the better by petabyte range and a hundred million files. I was surprised. I felt well initially, you know, a couple of million files, that's all, I've got 200,000 of my laptop. That's all that ought to be enough. And it turns out quickly that it isn't because what we did was we, we now allow hundreds of people literally to collaborate on the same cloud volume, which in our lingo is file space. What we call a file space. We have, we have hundreds of people collaborating the same on the same file space. You'll literally have millions and millions of files. And so keeping all of the metadata locally is no longer possible. And so that's what we're doing. We're when late beta stage for that and we'll release next month in what's interesting, by the way, I was going to mention the other comparison to your local ness, sort of on-premise storage. Speaker 2 00:32:16 There's one thing, very important that we're learning. When we set out to build a system, we said, we want the system to look and feel like your local ness in terms of user experience, but also in terms of performance. And we, by and large accomplished that there are certain exceptions. You're probably not going to put this highly transactional workload, like a database in the cloud. So I'm not saying that we can cover all use cases. I'm saying we can cover 80%, 90% of the use case. That's good enough. Right? Yep. Right. And so that's important. But the other interesting element is that once you move that into the cloud, you allow people to work from anywhere, including from home. God forbid, we got hit with a pandemic. Well, guess what, that's exactly what happened. And interestingly, people discovered not only can stay productive and work from home, but they can tap into talent. Speaker 2 00:33:12 That is not in the same geography where that office, where that your local storage is because the storage has gravity peoples say storage has gravity. It means you've got to keep your workloads where the storage boxes. And we're saying, no, you don't. You can actually work and be productive. Even the, even though the storage is in the cloud, and now you can tap into town everywhere. And that changed for a lot of organizations. That was an eye opening experience. Not only my people can work from home, I can now find people in other regions or in other countries, you know, the specialists, the creative people, those, we have certain specific skillsets that are not available where your office used to be. And that's a beautiful thing. And the another thing I would say, if we're talking about comparing to local storage boxes is your local high end storage box may be very performance. Speaker 2 00:34:02 You may be talking about hundreds of thousands of Iams and you know, tens of gigabytes per second. But here's the interesting thing. This is scale up system in our world. We're leverage, leveraging the clouds, which is a horizontal scalable system. And also in your local office environment, everybody's going through the same pipe. Okay. There might be some redundancy, but it's two X, four X, eight X, whatever it is, there is a limited bandwidth in aggregate and limited number of spindles or SSDs, wherever it is in our world, it is literally unlimited. 
And when you look at hundreds of people, everybody using their own individual internet connection, hitting thousands upon thousands of servers simultaneously this system scales horizontally. So your aggregate performance is always going to be much, much higher than any local on-prem storage system. It doesn't matter how expensive, how high end it is because just a different beast altogether and stolen from a performance perspective, you look at it at the aggregate. We actually have performance benefits, Speaker 3 00:35:18 Right? Yeah. Uncle Jeff's engine is always going to be bigger than yours because he built spaceships as well as data centers. Yeah. So going back for a second there, we're talking about globally distributed groups working together creatively. One question I know that we've we've had is, you know, when we were talking about multiple geographic locations, you know, say coast to coast or internationally, there's always a little bit more latency there, right across the numbers of hops that we have, um, going through the pipes of the internet, what experience can you tell us about people who do collaborate and are there ways that you guys are working to mitigate some of those latency issues internationally? Speaker 2 00:36:11 Right. So typically you got a good experience when you're a couple of thousand miles away, which is already good enough, but I'm not going to lie to you. If you, if you're on the other side of the planet, the internet infrastructure is just not quite there yet. So you're not gonna get the same user experience. And despite that, we have a lot of customers who do exactly that because they have these distributed teams and, and the alternatives to that, let's not forget is shipping data to another location, uh, putting it locally and then consuming it off of the local storage. And so even the experience may not be as good. It is still better than the alternatives. So that's one thing to keep in mind. And then the other thing is when it comes to how we can improve on what we already have, where we're going to take this technology, and next is to offer what we internally called multi-cloud support. Speaker 2 00:37:04 And that is the ability for a cloud volume to span multiple object storage locations or buckets and replicate data across multiple zones. Or it doesn't have to be the same internet provider. I mean, cloud provider, it could be different cloud providers with different performance and cost parameters. Okay. Because of that replication, we will now have the data in multiple locations around the world. And our software would be able to go to the closest location to solve that distance to data problem. And by virtue of addressing that we can solve other issues like additional redundancy, economics, keep the colder data onto say lower performance, cheaper storage tiers, or cloud vendors. Right? All these, all these good things. Speaker 0 00:38:02 Yeah. That is currently what we've been doing in these situations where we've got, you know, multi-regional situations is there's some sort of a sync happening, you know, manually it's happening automatically, but it's happening sort of outside of the band of this software to get that data sync to the other region, at least into the other cloud, you know, the closer cloud and then, you know, have that data available and the other location. So just hearing that you guys are working on that multiple cloud support, that's highly valuable for sure. Speaker 2 00:38:32 It's funny. 
We, we say internally that our biggest today is FedEx. So that is still state-of-the-art. I mean, people are shipping rice because nothing else actually works for, for some of these faults dealing with these large data sets. I think that there is a lot that can be done to improve, but the way we look at it is we believe that the future of storage is in the cloud one way or another. Okay. And I spend the bulk of my career building on premise storage systems. So I know what I'm, what I'm talking about. When I say I don't see very bright future there. Right. And we are maybe one of the first technologies that are pushing that into the cloud, but there is a, I'm sure a lot, a lot of other teams and people and interesting companies are working on similar problems and we'll see that trend becoming bigger and bigger, but the future of storage is definitely going to be in the cloud and with sufficient connectivity, there's virtually no reason economic performance, any other reason to, to buy these expensive boxes and put them in your data center anymore. Like I said, there are certain use cases that would require that I'm talking about the bulk of the data, and that's probably gonna, you know, in all likelihood, it's going to go to the, into the cloud the next couple of years. Speaker 3 00:39:57 So then that makes me think of the idea of having a work group. Not that much, many work groups are back in the office though. We're starting to hear that some of our clients are kind of going back hybrid, but the present is certainly hybrid. I think the future is certainly cloud-based as we were just discussing, and we're going to be migrating more to hybrid and then purely cloud in terms of being able to say, have used lucid link as more of an edge device. Right. I think you guys support something like being able to Mount that on a server internally and then maybe reshare that over NFS or SMB. Am I right about that? Or, um, is that something that you guys have been working on too? Because I know there's a lot of people who that could be very useful to, for some more internal distributed workloads for, you know, people who already have that Nass or sand, but also are using lucid link. So to be able to kind of have the best of both worlds, right. Speaker 2 00:41:00 Well, that's a great question. We don't know exactly. We know that it's going to be a hybrid work environment. We don't know exactly what the needs are going to be. As people start to move back to the office, but there's a case to be made that if you had a storage device locally, that can be utilized to as a sort of a giant cash and reduce egress, reduce the back and forth and the round trips to the cloud, there's some advantages there. The problem is that we're back to, you know, I have to buy equipment, I have to take care of it, manage it's administered, upgraded, burning, power supplies, this and all that stuff. You're kind of creating Speaker 3 00:41:44 The same problems. Speaker 2 00:41:45 We're recreating the same problem with the old days, right? And so I'm not sure to what extent the market really would like to go back to the old ways because the alternatives to that would be get a fatter pipe. So you have a one gig and a couple of hundred bucks for another one gig. Now you've doubled that. 
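The multi-cloud behavior George previews, replicas in several regions with the client reading from whichever one is closest, might look something like the sketch below. It was described as roadmap, not shipping behavior, so the endpoints and the latency-probe mechanism are purely illustrative:

```python
# Illustrative replica selection: probe each hypothetical endpoint and read
# from the one with the lowest measured round-trip time.

import time
import urllib.request

REPLICAS = [  # hypothetical endpoints for the same replicated file space
    "https://us-east.example-store.com",
    "https://eu-west.example-store.com",
    "https://ap-south.example-store.com",
]

def measure_rtt(endpoint, probe_path="/ping"):
    start = time.monotonic()
    try:
        urllib.request.urlopen(endpoint + probe_path, timeout=2).read()
    except OSError:
        return float("inf")     # unreachable replicas lose the race
    return time.monotonic() - start

def closest_replica(replicas=REPLICAS):
    """Pick the replica with the lowest measured round-trip time."""
    return min(replicas, key=measure_rtt)
```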
So we have very passionate debates internally, whether we should create a local cash, which is a relatively straightforward thing to do in our world, based on the log structure, fastest in design, in other things, it is a relatively low lying fruit for us. But the problem is, is again, we're pushing the problem to the end customer that we'll have to then, you know, size the right system, provision storage, et cetera. Sure. Some of the existing on-premise supports could be repurposed for that. Um, and that's probably a good way to utilize your existing investments that you've made. Speaker 2 00:42:42 Long-term I don't even believe this is where the industry is going to go. That's just a personal opinion of mine. This is something that we could, we could do, and we have been debating about doing. And in fact, I have to tell you a secret, this wasn't a roadmap that we actually removed because it was sort of an insurance for us when we said, well, if people start going back to the office, how are they going to collaborate? You have 20, 30 people all using the same internet connection that may become a bottleneck. So how do you solve that? Well, you put a sort of a gateway in between, right? Some caching device. Well, that hasn't happened because of our Macron and all the other, the other things people continue to work from, from home, uh, for the most part. But these are legitimate questions that customers are asking and you may see something like that. I think the key to success here is going to be working with a quality hardware vendor and provide a turnkey solution instead of providing Lego blocks for people to build, because that's just error prone. And yeah, like I said, it pushes the problem to the customer. You don't, you wouldn't want to do that. Speaker 3 00:43:50 Yeah. But my immediate thought is that I think it would definitely help drive adoption for some of the larger clients out there who already have the infrastructure who have dedicated transcoding farms and stuff like that for the larger operations that might be sitting idle. Now I know everybody thinking about jumping to the cloud, but everybody's being like, well, you know, we are gonna want the ability to be hybrid and to be able to do some of the transcoding there and yeah. The idea of having a gigabit or even 10 gigabits of, uh, internet connectivity. That's great. But then if we have an organization full of, you know, multiple of people in a building and you've got five, you know, editors who can saturate that just by the virtue of the media that they use. I think there's a real use case there that the industry would really benefit from that. So however we can help, we'd be happy to, Speaker 2 00:44:51 For sure, in the success of such addition to our product would hinge on making this completely transparent to the end user. Because one of the things that Jason mentioned earlier is that when we talk about pinning, pinning is completely transparent to the end-user you right? Click on a file, say pin, but the way you consume that file stays the same. So you don't change how, how you work with the system. You're not moving stuff in different directories on different drives, nothing changes you just say, right, click and the magic happens. And so in a similar fashion, if we can pull that off in a way that is totally transparent to the end user, and if you're in an office environment, you'll leverage the large cash. 
And as soon as you leave the premises and you go home, you continue to work the same way, except that there's no longer cash in the path. That's, that's the key to success. And if we can do that, interestingly, we would sort of combine the advantages of file sync and share, which is the pinning in offline mode and the storage gateway, which is another sort of adjacent technology, which addresses exactly that. How do you collaborate with a local office that connects to another remote location or connecting branch offices, those kinds of things. And so there's value that. Speaker 3 00:46:11 Absolutely. If we can still use the lucid link client in the office and at home and the client knows what you've got pinned and what you need, what's cashed and could intelligently say, all right, Ben, I know you've got this file. These portions of these files you were just using today in the office. I'm just going to go on and send it to your hard drive at home because you're probably going to want those there too. That would be awesome Speaker 0 00:46:33 For sure. One other thing that I am always curious about in the sort of workflow orchestration arena is this idea that this has made familiar to us by things like AWS Lambdas, the ability in Amazon web services to be able to run some code based on an event, essentially. So if the event is that somebody put a file in a directory and, or moved a file from one, from one location to another, that there would be an event admitted by the storage, for example, that we can hook into, you know, maybe for example, tell the ma'am that this file moved or that there's a new file, as opposed to having these like sort of processes that pull the directory continuously over and over again, to look for changes. How do you see something like that fitting into the storage solution? Speaker 2 00:47:21 Well, we only work with companies like iconic for instance, that where you can put there at least torch gateway SGI, ISG, ISG. Yeah, exactly. That actually monitors your soul listening to climb on that virtual machine and monitors directory and then say produces proximity would be a typical scenario. That would be the, the mechanism that you can use today. But what you're saying is we can do better, right? We can create a Lambda that triggers and does performs other things. And that's certainly possible. We think that as a technology, we need to move up the stack and provide more integration, better integration with additional applications and other declines, just like ma'ams and others. And so this is where eventually we're going to go. We're just not quite there yet because we have so many things to do. Oh, absolutely. I think originally we built this as a general purpose storage platform, which has now sort of become media entertainment, creative content platform, because that's the biggest pain point right there in the creative, stuck at home. And we said, you know what? We love being in that space. Why not stay here for now? And let's see what other value we can bring to customers. And we're going to build those layers on top, like triggers and other things and proxy generation maybe, and all these other things that creative people need on a daily basis. Speaker 3 00:48:57 Great. Awesome. We should probably talk a little, a bit about some of the other fundamentals that are really important, like security, right? 
One of the things that I've been really interested in learning as I learned more about lucid link is how you guys handle user data, specifically user metadata discreetly. Right? At first we should mentioned that everything's encrypted, right? It's encrypted in transit, it's encrypted at rest. You guys support things like SSO, right? Uh, secure sign on through Octa and ADFS. Right? So walk us through some of those security aspects, because I think that's also something you guys do really well and something that's critically important today as well. Speaker 2 00:49:46 Yeah. I absolutely agree. It is critically important because this is one of the biggest impediments to adopting a technology like ours, if we didn't have it. And so we said, this is going to be a basic, basic tenant to get the security right in getting the security right to us means that only the customer has access to their data. And so when we say end to end client site encryption, it may be obvious, but to a lot of people, but I want to elaborate on what that really means because everybody talks about security and everybody talks about a cryption, but there's a big difference. And that is who has access to that data or who has keys to the encrypted data, just about every other technology that I can think of right now, folks, employee server, side encryption, which means that the cloud provider or the storage provider, the service provider, they also have the keys to your data and therefore they can see the data. Speaker 2 00:50:49 Okay. Unlike those technologies, like I said, it's very central to us is that everything is encrypted in a way that only the end user or the organization has those holes, the encryption keys, and therefore has access to that. I can see the content. We as a service provider in the cloud storage, as the storage provider, we can see the data passing through, but it's encrypted data. And because we don't have access to those encryption keys, we don't have access to the content. And that's a very, very fundamentally different position. And that applies to both the data itself and also to all the user generated metadata. We don't see file names. We don't see directory names. We don't see extended attributes. All of that is encrypted in the same way. It's funny. I talked to large media companies on a daily basis. We used to say, guys, our security model is such that even us government uses our technology and they laugh. Speaker 2 00:51:57 They laugh because they say we don't care about the government. You know, use case use cases, security for us is way more important than that. So media companies take security seriously because it's their intellectual property. And putting that intellectual property in the cloud is a big deal. And so that's why we've been successful in opening up these doors with these large media companies, because they have this peace of mind. This is, this is fundamentally different. Now we can get into sort of the, the nitty gritty of how we, we accomplish that. I'm not sure if we'll have the time to get into that level of detail, but this is, this was the large sort of the point I was trying to drive home is that this is not server-side encryption that everybody says they have, because they have keys to your data. Speaker 3 00:52:51 Only you have the keys to your data, so don't lose them or your, Speaker 2 00:52:56 And, and as we say, don't lose your root password. There's no password reset in our world. 
And by the way, every once in awhile, we're asked, Hey, can you reset my root password? I'm like, no, your data is toast. There is nothing that can recover that data. Speaker 3 00:53:15 Very important listeners hear that loud and clear don't lose your password, Speaker 2 00:53:20 Right. That's Speaker 3 00:53:21 Right. Oh, the one thing I was wondering about is, um, the ability to Mount more than one file space. I know we can do that today via command line, but not quite yet with the lucid link application. Is that something that's in the works to, Speaker 2 00:53:37 This is I'm happy to say on our roadmap for this year. And in fact, if I'm not mistaken for Q2. So we're definitely doing that. This is a very, very frequently asked feature. I'll just give an example. I have my own sort of family file space and I have our corporate file space. And I'm constantly switching between the two. So I know that pain for scam. That's something we have to address in a lot of cases. You might have file spaces in different regions or different work groups, but you're working with multiple groups at the same time or, you know, shuffling data around that kind of thing. So it's, uh, it comes up a lot. So it is absolutely on our roadmap for this year. Awesome. Speaker 0 00:54:20 Um, any other, any other user stories that you can think of for people that are already familiar with lucid length that you'd like to sort of tease out as things we can look for, maybe coming up in the future recently and the recent future, Speaker 2 00:54:34 We have quite a few customers now in the architecture, engineering and construction. Okay. Cap cam revenue, all these folks, because they have slightly different set of issues. They also of course have the collaborative workflow scenarios, but in their world file locking in distributed file log is a must. It's a requirement. Why? Because the software that they use is designed or built to run on top of shared storage, and the way you deal with that is through file locking. And this is something we do, doesn't come up a lot in the media and entertainment space. I don't think you have two editors simultaneously working on the same project, even though Adobe actually, maybe I misspoke here because Adobe premier does address this to an extent, right. But when it comes to working on a design with multiple people, that is the norm, essentially you have multiple architects simultaneously working. Speaker 2 00:55:38 We address this in a very nice, uh, performance manner as well. So this is also another sort of vertical where we see a lot of interest. We also see some interest in the medical imaging space. These are huge, huge sets in. They're typically all that data and, and because of HIPAA and other regulations, et cetera, and compliance requirements, they have to keep that data for up to seven and 10 years. In some cases I'm forgetting the exact period. The, these are long periods of time. You have to keep that data. And these are stored on local PACS systems, right? It's gotta be very secure and it has to be secure, but these days it also has to be remotely accessible. You see? And so it becomes the combination of large data sets and remote access. Usually we are a good fit for those use cases in, so that's another good use case for us in general healthcare industries. Obviously they're not exactly, um, early adopters and put it this way. So there's a lot of roadblocks to overcome there, but I th we, we believe this could be a huge potential, uh, uh, use case for us. Yeah. 
Speaker 3 00:56:53 Yeah. I mean, if, if there's a huge image file of tons of cells, you know, that somebody is studying the ability to just go in there and look at the specific portions of the file that you need as opposed to download the entire file, you know, just like we deal with video, it's the same use case. It's just a different application of the technology. I want to go back just one second and talk a little bit about what you guys do with file locking in collision detection, because I think that's a good thing to talk about. I know it's much more of a, whoever saves last wins, kind of a situation, and it's more an aspect of what the software applications themselves do and less the file system, but just the simple fact that you guys are able to use file locking that enables things like Adobe premiere, like you were just mentioning George with something like team projects, where we can both be looking at the same timeline and editing the same thing and making sure we're not going to step on each other's toes or ruin each other's edits. That also gets us into kind of how you guys handle data in the file system. And some of the things you're doing with snapshots as well, and what allows us to do snapshots, right? Speaker 2 00:58:16 Um, firewalking is a complicated subject when it comes to a distributed system like ours, to answer your question, I guess there are two aspects to it, first of all, the consistency of data, and then the file locking aspect. So I'll say a few words about consistency is that the initial efforts was to replicate your local file system semantics in sort of in an internet environment and that doesn't work and it doesn't work for simple reason. Uh, this is known as the cap fear around consistency, availability and partition tolerance, and the cap theorem simply states that out of the three characteristics, you get to choose two. And so you can't have your cake and eat it too basic. You have to, you make, you have to make trade-offs. And so all these early efforts, we're going to do POSIX compliant file systems. That's going to work in the cloud. Speaker 2 00:59:10 You can build it. The user experience is not going to be that great. And so what, one of the first things that we said is we're not going to even attend that because that's not going to get us anywhere. So why don't we relax the requirements a little bit in the requirement that we were last was consistency. We said, you know what, if I write a file, you may take a couple of seconds or a couple of minutes before you can see the entire, all of my changes. And as it turns out, that's just fine. In an internet environment, there are a lot of cases where it doesn't matter with one or two exceptions. And those exceptions are typically applications that are built on sort of on the premise that they have shared storage. And they all see the same thing sort of at the same time. Speaker 2 00:59:58 And they employ file locking for isolation so that they don't stomp on each other. And so when it comes to walking, what we say is in the presence of locks, we cannot be sort of eventually consistent. We have to be strongly consistent. That means we have to push the data right away before we lock in, before synchronized, before we lock in, then push the data out, back to the cloud before we unlock. So another system can take the lock and they have to see the latest view of that file or set of files. 
Earlier we said that we keep the data in the object store and the metadata in a separate metadata service. That metadata service is also responsible for things like locking; it's the lock arbiter between all these different systems. This is a really tough nut to crack. We're now working on our file locking 3.0 implementation, which further improves the performance, because it's so difficult to get right. It's a trade-off between different things, like juggling several balls and trying to keep them all in the air. That's kind of how it feels building these distributed systems. Speaker 0 01:01:27 I think a lot of our listeners are familiar with the concept of the triad of values of which you can only pick two. And if you're really striving to get all three of those, you're going to end up with something pretty mediocre. Speaker 2 01:01:38 Exactly. Speaking of the metadata service, another thing I'd like to point out is that we also do snapshotting, and in our world snapshots are natural and quote-unquote easy because of how we lay out the data. We employ a log-structured design: all writes go into new objects, and if objects become stale, we collect them through a garbage collection service that runs in our metadata service, which is also responsible for maintaining snapshots. What snapshots give you is the ability to go back in time and restore the entire file system as it was at that point in time. That's important because it is different and distinct from file versioning. File versioning allows you to drill down on a particular individual file. We can do file versioning, and we actually will, because it's a convenience for end users; we don't have that feature exposed today, but we're going to do it. Snapshots, though, are a different beast, because they restore the entire system state as it was, and I'm talking about hundreds of millions of files and petabytes. It matters because typically you're not dealing with individual files. In the simplistic scenario where you and I are exchanging a doc file, it's one file; I need file versioning, and that's all I need. But if you're working on a complex project comprised of many, many files, restoring one single file ain't going to get you where you want to be. You really want the entire snapshot; it's very valuable. Object storage itself doesn't have snapshot characteristics, because snapshots aren't simple to do in a horizontally scalable environment like the object store, so we layer those semantics on top of the object store to give you those snapshot characteristics.
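As a rough illustration of why a log-structured layout makes whole-filesystem snapshots cheap, here is a toy model, a sketch under stated assumptions rather than LucidLink's design: every write creates a new immutable object, a snapshot is just a frozen copy of the metadata map, and garbage collection deletes objects that no snapshot or live file references.

```python
# Toy model of log-structured snapshots (illustrative, not LucidLink's code).
import itertools

class LogStructuredFS:
    def __init__(self):
        self.objects = {}        # object_id -> immutable bytes (the "log")
        self.live = {}           # path -> object_id (current metadata)
        self.snapshots = []      # frozen metadata maps
        self._ids = itertools.count()

    def write(self, path, data):
        oid = next(self._ids)    # every write lands in a brand-new object;
        self.objects[oid] = data # existing objects are never overwritten
        self.live[path] = oid

    def snapshot(self):
        # O(metadata) copy; no file data is duplicated
        self.snapshots.append(dict(self.live))

    def restore(self, n):
        # whole-filesystem rollback, distinct from per-file versioning
        self.live = dict(self.snapshots[n])

    def garbage_collect(self):
        # "stale" objects are unreachable from the live tree and all snapshots
        reachable = set(self.live.values())
        for snap in self.snapshots:
            reachable |= set(snap.values())
        self.objects = {o: d for o, d in self.objects.items() if o in reachable}
```

Restoring snapshot n swaps the entire metadata map back in one step, which is why the whole file system state comes back, not just a single file.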
Speaker 0 01:03:38 Oh, that's hugely valuable. That's pretty fantastic. I'm an engineer, and I don't often like to talk about costs, but I feel like it's something we should tease out. Not necessarily what things cost in particular, that's really tough to do, but something like egress out of the cloud: what does that look like? Just a high-level overview, because that's something people are always thinking about: what are we paying for, how does it look, that kind of thing. Speaker 2 01:04:03 So egress is a significant factor in the overall cost of a solution like ours, simply because AWS set the bar back in the day and all the other hyperscalers followed and replicated their pricing models. Egress as it exists today is very high, much higher than it should be. It is a form of keeping your audience captive, the roach-motel type of scenario: your data goes in, but it's hard to take out. In our world, unfortunately, that's exactly the problem we're trying to solve: the data is in the cloud and you're on the edge, so you're going to incur egress. That's where we are today. There are a lot of things that could be said about egress, but personally I think this is a race to zero, both on the storage side of things and on egress, because it is artificial. I think companies are going to start to erode this model, and we are already seeing this. I'm not at liberty to discuss the details, because I don't believe we've made the announcement yet, but we're working with one of the cloud service providers to eliminate egress costs altogether, which is a big deal. So while this is still a problem today, I don't think that long term this is going to be a big issue for the industry. Yeah. Speaker 3 01:05:25 Gotcha. Yeah. I mean, you guys have your own kind of bundled offering, right? I'm thinking about either the enterprise or the kind of basic one, with either IBM's object storage or Wasabi underneath it. And if we think about the average price for egress, which is what, like 9 cents per gigabyte if we're talking about Amazon or some of the other bigger vendors, whereas we still have to pay egress with your storage, but I think it's like 3 cents, right? Speaker 2 01:05:54 That is correct. From where Amazon is, with the list price at 9 cents, we brought that down 3x, but we're not happy with that result; we want to eliminate it. But you're absolutely right: if you decide to use the turnkey, all-in-one offering that we have, which is bundled IBM storage plus the LucidLink service, you get 3 cents, which is already much better. And another thing I need to point out is the fact that everything is cached, and the cache is persistent. Most of the hot data is going to be brought down once and then kept in the cache, so you're not incurring egress constantly. The other thing is, unlike other storage solutions that synchronize the entire dataset, we're not synchronizing entire datasets; we're just bringing down the bits and pieces that you need at the moment. So in general, everything else being equal, we will incur less egress.
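To put rough numbers on those two price points, here is a back-of-the-envelope comparison using the per-gigabyte figures quoted above, 9 cents versus 3 cents. The monthly volume is hypothetical, and real cloud pricing is tiered and changes over time.

```python
# Back-of-the-envelope egress math at the list prices quoted in the
# conversation (illustrative only; actual pricing is tiered and varies).
EGRESS_PER_GB = {"hyperscaler list price": 0.09, "bundled offering": 0.03}

team_pull_gb = 5_000  # hypothetical: a team pulling 5 TB of media per month
for tier, price in EGRESS_PER_GB.items():
    print(f"{tier}: ${team_pull_gb * price:,.2f}/month")
# hyperscaler list price: $450.00/month
# bundled offering: $150.00/month
```

And as the cache discussion below makes clear, a persistent cache lowers the effective volume further, because hot blocks are only downloaded once.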
Speaker 3 01:06:53 Just quickly, I think it's worth drilling into that point about the cache again, and it being persistent, meaning that the bits of the file that you download are going to stay there in the cache location, the hard drive you've designated as the cache location for the LucidLink file space, until the cache gets flushed, either by you flushing it or by it filling up. Because of the way the cache works, first in, first out, if you have a larger cache, more blocks will stay there for a longer period of time. And so that's the real benefit of having a larger cache: up to 10 terabytes if you're a larger organization, but for those of us working at home, maybe 500 gigabytes or a terabyte that you set aside as a cache location. That way, as George was saying, as you're downloading those chunks, they just stay there, and you never have to download them again until the cache gets flushed, which means the egress cost is much lower than constantly re-downloading things. And that's huge. Speaker 2 01:07:57 That's absolutely correct. And it's both minimizing the egress and improving the performance, which is a big component. I'll give you an example from the life sciences. We have a customer who is running DNA analysis; I don't know exactly the type of workloads they have, but they used to use a local on-prem NAS, and a full iteration took 24 hours. They would do this a couple of times a week. They got rid of their local NAS, moved everything to the cloud, and brought that down to 21 hours, and they couldn't believe their eyes. They're like, that's not physically possible. We were like, no, it's possible, because your hot working set is on your local disk. You're taking advantage of the local storage; you're not bringing everything back and forth. And that's for both reads and writes; both utilize the local cache. So there are situations where you're actually going to see improved performance over the high-end NAS you're accustomed to. Speaker 0 01:09:15 Yeah, right, that makes perfect sense. You know, another thing worth talking about as a use case is the remote workflow of having folks working from home on media projects. LucidLink is a great tool if, say, you've got some remote producers or camera folks out on the road connecting to their file space, maybe from a hotel or some sort of remote location; a really great use case there. One thing to be aware of, again, is that we've got to keep in mind how this works. In order for others to be able to access that content, me being the content producer who's placing that content in the file space, I need to be able to upload it, to get it into the cloud, before you can download it. We should call that out just to set expectations: my upload bandwidth is also a factor when we talk about accessibility and being able to get this content out to other people who need to work with it. Yeah. Speaker 3 01:10:12 Yeah. I mean, camera-to-cloud is a big buzzword these days, for sure. And I'm just thinking about all of our friends: one of the things we could do to potentially mitigate some of this on both ends is to generate proxies as a lighter-weight reference, for things like dailies. Those will go up a lot faster and be available across the file space really quickly, versus the heavyweight files. The heavyweight files will come up too; it's just that, as an organization, we're going to want to access and see things as soon as humanly possible. Same thing on the back end, right? Our editors who want to work might have slower bandwidth at home, doing an offline/online workflow where we've got a proxy edit, and the high-res files are still within the LucidLink file space, and somebody who has a faster connection might have those persistent in their cache. And that's where they can conform, right? If they've got the same project file, whoever has the faster connection can just open it up and do the conform, and then boom, there's our distributed workflow, and everybody is benefiting from the cloud, as our good friend Andy Shepherd might say.
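Picking up the persistent cache point from a moment ago, here is a toy sketch of that behavior, assuming a simple first-in, first-out eviction policy as described; LucidLink's real policy and interfaces may differ.

```python
# Toy persistent block cache (illustrative, not LucidLink's implementation):
# blocks fetched once stay on local disk until evicted, so repeat reads of
# the hot working set incur no additional egress.
from collections import OrderedDict

class PersistentBlockCache:
    def __init__(self, capacity_blocks, fetch_from_cloud):
        self.capacity = capacity_blocks
        self.fetch = fetch_from_cloud        # callable: block_id -> bytes
        self.blocks = OrderedDict()          # insertion order = eviction order
        self.egress_fetches = 0              # count of paid downloads

    def read(self, block_id):
        if block_id in self.blocks:          # cache hit: no egress incurred
            return self.blocks[block_id]
        data = self.fetch(block_id)          # cache miss: one paid download
        self.egress_fetches += 1
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the oldest block (FIFO)
        self.blocks[block_id] = data
        return data

# The larger the capacity, the more of the working set survives between
# sessions, so egress_fetches grows more slowly on repeated access.
```

This is also why the editor with the big cache and the fast connection is the natural person to do the conform: the high-res blocks are likely already local.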
Speaker 2 01:11:24 Certainly. And I would love to work with the camera manufacturers themselves and maybe get to a world where you shoot and it goes straight into LucidLink and straight to the cloud, and it's easily accessible. That's sort of the holy grail. Of course, that'll take some work with the camera manufacturers themselves. What we can do is provide those capabilities from your phone. So if you're shooting video from your iPhone 13, with the work that we're doing, which is something that's not available just yet, you'll be able to put it directly into LucidLink in the cloud. Speaker 0 01:12:03 And that sounds highly valuable. Speaker 2 01:12:04 That's sort of where we're going to go now. Speaker 3 01:12:06 Right, because your phone shoots ProRes and it's in your back pocket, especially if you're an electronic-news-gathering kind of person. Yeah. Hell yeah, George. Speaker 0 01:12:17 Yeah, these things have the ability to capture some pretty decent video for a wide variety of use cases. Speaker 3 01:12:24 Right. Yep. I think this has been a fantastic conversation. Any parting thoughts on your end? Speaker 2 01:12:32 I truly believe that the future is in the cloud. We've been talking about this for many, many years, but things started really moving in March of 2020, and we're now seeing that organizations no longer want to invest in on-premise storage. They have mandates to move to the cloud, and they turn to companies like ours. I believe that in the coming years you will see more and more of that transition taking place. As they approach a refresh cycle, people say, we're not going to spend another couple hundred thousand dollars; we'll just put everything in the cloud. And we clearly see this. The rest of the world may not be seeing it the way we do internally, but we see those trends playing out. So these are very exciting times for us as a company and for the whole cloud storage ecosystem. Like I said, we're very excited. There are so many interesting things we can do and will do in the coming years, and I'm sure we'll get to talk again and share some of the work that we're doing. Speaker 0 01:13:41 Awesome. George Dochev from LucidLink, thank you for joining us today on The Workflow Show. Speaker 3 01:13:47 It's been a pleasure. Thank you, Speaker 0 01:13:48 guys. Thanks for listening. The Workflow Show is a production of Chesa and More Banana Productions. Original music is created and produced by Ben Kilburg. Please subscribe to The Workflow Show, and shout out to [email protected] or at The Workflow Show on Twitter and LinkedIn. Thanks for listening. I'm Jason Whetstone.
