#31 File Transfer Acceleration with FileCatalyst

November 20, 2015 01:03:21
The Workflow Show

Show Notes

Should you invest in file transfer acceleration? Find out by listening to the CEO and Co-Founder of FileCatalyst. Getting ready for a long-distance commute for a Thanksgiving homecoming? Wouldn't it be nice if you could use technology to your advantage and ensure you would get there as fast as possible? We might not be able to send you from point A to point B faster, but we can get your files there faster! How can you use your existing Internet bandwidth to the best of its ability, and what are the bottlenecks that keep you from doing so? The Workflow Show talks with CEO and Co-Founder of FileCatalyst, Chris Bailey, to discuss the history of file transfer technology, how it works, and how we can exponentially accelerate those file transfers.

FileCatalyst technology is developed by Emmy award-winning Unlimi-Tech Software Inc., a world leader in fast file transfer solutions. Founded in 2000, the company has more than 1,000 clients with a user base of over 1 million. FileCatalyst provides software-based solutions designed to accelerate and optimize file transfers across global networks. Being immune to packet loss and latency, FileCatalyst can send files much faster than methods such as FTP, HTTP or CIFS, while adding security and reliability. Unlimi-Tech is a privately-owned company with headquarters and a product development center located in Ottawa, Canada.

After listening to this episode, you'll have a better understanding of the benefits of investing in file transfer acceleration and how easy it is to send files much more quickly. So if you need something interesting to listen to while you are plodding through holiday traffic this week, The Workflow Show is here for you! Your comments are welcome below, or feel free to email us. View a list of all the episodes of The Workflow Show. The Workflow Show is also available on iTunes.

SHOW NOTES: FileCatalyst, TCP, Streams, UDP, FTP

Episode Transcript

Speaker 0 00:00 Hello, welcome to the next episode of The Workflow Show. This is episode four-oh-two. I am your cohost, Nick Gold, and I am joined by my fellow cohost Jason Whetstone, also of Chesapeake Systems. Hey guys, how's it going? And of course we have our ever-reliable producer, Ben Kilburg, also Chesapeake's solutions architect. Howdy. Here with us and joining us today, we have a guest from outside our organization, so it'll be a little less of the typical echo chamber of The Workflow Show. We have Chris Bailey, who is the CEO and co-founder of the makers of FileCatalyst. So FileCatalyst, which a lot of our clients actually utilize or know about, is a file transfer acceleration technology, which is kind of a mouthful. We're going to certainly talk about what that is, both the technology and the challenges it addresses, um, in general, and then we'll relate that to FileCatalyst itself.

Speaker 0 01:02 So thank you for joining us today, Chris. Great to be here, and thanks for having me. Absolutely. So you're Skyping in today. Where are you Skyping in from? I'm Skyping in from balmy Ottawa, Canada. Ottawa, where it is 95 degrees right now. Wow. Seriously? No, it's probably like 45 degrees. Fahrenheit or Celsius? That's Fahrenheit. I'm adjusting this for your audience. It's really for us, because we have no idea. It's like six degrees Celsius, I think, something like that. It's cold for this time of year. There could be snow here in a few weeks; they're talking about flurries on the weekend. So, whoa. It goes from pretty warm to pretty cold pretty quick here. And here in the mid-Atlantic, in our Baltimore hometown, today we're having this really nice spell of autumn. It's just gorgeous.

Speaker 0 01:57 Chris, I know, it's really, really tough. You're missing something. So is Ottawa where you guys are based, or is this just where you happen to be located at the moment? We are based in Ottawa.
We're all true Canadians up here, all based in Ottawa. So you are one of the founders of the tech that we all know by the brand of FileCatalyst. Um, tell us a little bit about yourself, about your background, and what brought you to decide to create a company around accelerating moving files around on the internet. Well, um, I graduated with a computer science degree from Dalhousie University. Not sure if anybody's heard of that; it's on the east coast of Canada, in a town called Halifax. And right out of university I was, uh, recruited, and I was working for the Canadian government for a while, and met my now business partner, John Kuczewski. And we started collaborating in our spare time on some

Speaker 1 03:00 File transfer technologies. It wasn't accelerated; it was based around FTP. So what we did is we made a Java applet that could run in a web browser, that did FTP from your webpage. So instead of having to go around and install FTP clients everywhere, you could install an applet on your webpage that would allow all your users to simply browse in and upload and download their files. So that's how we kind of started out. And then we decided to quit our day jobs in the government and do it full time. And it started to evolve. Uh, we had a lot of companies like print shops, uh, graphic design houses start using the software. And they came to us and said, hey, can you allow us to submit some form data, metadata, along with these files, and can you make it a little simpler to use?

Speaker 1 03:44 Don't have the classic two-pane FileZilla look, just make it an upload. And so we started to make customized forms for people, and these features just started to evolve into products. And, uh, that kind of is the first generation of our company, from around 2000 to 2005: you know, web-based widgets for submitting online files.
Um, we started to notice around '04, '05 that there was a real uptick in the amount of media and broadcast customers coming in. And this sort of coincided with, uh, the move from tape to digital and the move from SD to HD. So files were getting larger, and it was a real hassle and cost money and took time to ship, uh, hard drives. And at the time a lot of people were still shipping film in the movie industry. So, you know, we had customers come to us and say, hey, you know, we really like your stuff, but can you make it go faster? Um, we were using FTP, as I mentioned, so we had to come up with a way of doing the same thing we were doing, just as easy to use, but do it a lot quicker. And that's when we started to look at our acceleration technology, FileCatalyst.

Speaker 0 04:50 Right. And FTP, you know, it stands for File Transfer Protocol, for those that don't know. It's been around for many, many, many years. FTP is, I think, a seventies technology, if memory serves.

Speaker 1 05:03 Yeah, the original protocol, File Transfer Protocol, was written back in the seventies, correct.

Speaker 0 05:09 Yeah. So, and it's interesting to hear you say that even in your earlier days, when you were building basically a web-based, you know, Java applet FTP uploader, concerns around workflow because of your creative base of users, it seems like, were kind of baked into your culture early on. Because, you know, they're giving you feedback like, well, we don't necessarily need every last feature; we need this to be as efficient as possible so it fits into the workflow of a typical creative outfit as they're passing files along to get a job done. Exactly. I just think it's interesting that, you know, from the get-go, those were essentially some of the concerns that you found yourself addressing.
It's not just throw every technical capability at the wall; it's how do we do this in a way that makes using this fundamental technology as smooth and easy as possible to fit into the workflows of a creative professional set of users. So, Chris, let's talk about some of the problems with FTP that you ran into, that, you know, caused you to want to move on beyond it.

Speaker 1 06:15 Well, the problem with FTP is, I mean, it's based on TCP, which is also a very old protocol, but it powers everything that you're using right now on the internet.

Speaker 0 06:24 TCP. And when we think of the internet, most people are thinking of TCP/IP, the Transmission Control Protocol kind of melded together with the original Internet Protocol, or IP. Right. So how does that work? Why is TCP not as ideal for sending a file across the internet, when TCP seems to power the vast majority of services on the internet?

Speaker 1 06:52 Well, TCP was built to not cause the internet to collapse with congestion. So if you're sending a lot of data, it's very quick to back off when it sees that other data is attempting to go through. So it plays very fair with other protocols. There are a lot of protocols out there like that which you would recognize: HTTP, for example, FTP obviously, SFTP. Almost everything, you know, is built on TCP, and they all have to coexist on the internet, and you only have a finite amount of bandwidth. So really it was built to allow everybody to coexist at one time. And when it starts to see packets lost, it interprets that as "there's congestion, let me slow down quickly." And in addition, uh, it has a speedup mechanism that relies on sending bits of data across, called the TCP window. And that's a finite amount.

Speaker 1 07:46 And what happens is it sends some data and then waits for a reply. And as geographic distance grows, sometimes it's sitting waiting for replies more than it's actually sending data.
But it does that to avoid these types of scenarios where, uh, there's a lot of congestion and everybody's applications start to slow down. So what happens is, back in the seventies when this was originally conceived (and over time there have been iterations to improve the performance), it might not have been foreseen that we'd be dealing with gigabit networks, or a hundred megs plus, you know, and now gigabit at the home sometimes. So it wasn't really designed to handle the bulk file transfers we're seeing now with, you know, high-definition 4K or uncompressed video or audio files, or even now, people snapping pictures with their new iPhone; you know, you quickly fill up your storage. If you've got to move those, you're moving them across, you know, using a transfer protocol that uses TCP, and so it's quite slow. So when you start getting into businesses, organizations that need to move this type of data around, they have massive amounts of data, they have big bandwidth, but FTP or TCP-based protocols just can't move the data fast enough. It just wasn't designed for it.

Speaker 0 08:54 So basically a lot of this handshaking going on between the multiple nodes that are communicating with one another. And TCP is very sensitive. It's a sensitive little protocol, and it's very fair and egalitarian. But sometimes, you know, your data just wants to kind of sit in the corner and do its own thing. Easy, we don't want to hurt its feelings. It's very sensitive, as I've said. And again, this really comes down to kind of what the internet was originally built around, which was reliability, getting the data from point A to point B at all costs, and a willingness to maybe sacrifice, you know, immediacy for that to happen. Right?
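The back-off behavior described here is TCP's additive-increase/multiplicative-decrease rule: grow the sending window slowly, and cut it sharply at any sign of loss. A toy sketch, with illustrative numbers only:

```python
# Toy model of TCP-style congestion control: add one segment to the window
# each round trip, halve the window whenever a loss is detected.

def aimd(window: int, lost: bool) -> int:
    """One round trip of additive-increase/multiplicative-decrease."""
    return max(1, window // 2) if lost else window + 1

w = 10
history = []
for lost in [False, False, False, True, False, False]:
    w = aimd(w, lost)
    history.append(w)
print(history)  # [11, 12, 13, 6, 7, 8]
```

A single loss event instantly halves the sending rate, while recovery happens only one step per round trip, which is why a lossy long-haul link keeps a TCP stream well below the pipe's capacity.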
And it was also kind of this very egalitarian view of the world, where all the bits should kind of be considered, you know, equal to one another, and we're sending them around and they're all going to be patient and wait on one another, and it's going to, is it fair to say,

Speaker 1 09:48 Almost kind of self-shape the traffic based on that very egalitarian approach. But once that's just happening all the time and you really need to get your big frickin' file from your office to, say, a colleague that needs to review a video, you know, that just starts making it go slow for you, despite whatever bandwidth you might have on either end. Right. Well, correct. And I mean, back in the mid-to-late nineties it was actually perfectly fine, even from a speed standpoint, for your average usage. But as bandwidth started to grow, or even with smaller bandwidth, as latency starts to grow, so geographic distance. As we know, people and companies are becoming global, more geographically dispersed. You might be doing some editing in one location and you might be doing production in other locations. So you need to collaborate across very large distances, which without the internet was impossible in the past. But now you have these scenarios, and so now, with the bigger bandwidth available, bigger file sizes. You know, before, people didn't care if it took an hour, an hour and a half. But now we're talking the difference between an hour and 20 hours to transfer a file with TCP versus an accelerated protocol.

Speaker 0 10:56 So, and this ties into a point I was essentially making for a different set of technologies, but I was making this point to a potential client I was on the horn with this morning. I said, when you look at speed and reliability, and at the time we were talking about storage systems, but I think the principle applies here.
It's not just, well, I've got a hundred megabits up and down at this location and they've got a hundred megabits up and down at their location, and so I'm going to send them a file and, gosh darn it, we can count on essentially hundred-megabit speeds. That's not what happens in the real world. Correct.

Speaker 1 11:35 If they both have the hundred megabits, in theory you would expect that. But this is where the latency problem kicks in. If you're going across town, then you might actually get that hundred megabits, but that all depends on: is that link shared amongst others in your office for other applications, is QoS involved that will throttle you back and not allow you to send that fast? If you had a clean network, you know, across town, very low latency, then yes, you might be able to saturate it.

Speaker 0 12:04 But what if I have an office in New York and we've got an LA office, so it has to jump across the public internet, across a continent, but I still have my hundred megabits up and down in each location? What starts to actually be the real-world experience when you're hopping across cities, continents, states, oceans, et cetera?

Speaker 1 12:24 Well, let's use your 60-to-70-millisecond, uh, latency there. Um, that's from LA to New York, approximately. And what's going to happen with that is the chitchatting is going to become a bottleneck. So as Jason was mentioning earlier, you know, it's a very chatty protocol, FTP, and I mentioned the TCP window, the chunks of data that get sent across. Now, when you're reaching that type of bandwidth, if you're sending a chunk of data and you're waiting for the reply, if you look at kind of the network graph of this transfer going on, it's going to be very spiky, a lot of peaks and valleys, because the valleys are going to be waiting for the replies and the peaks are going to be sending more data.
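The window-and-wait behavior being described puts a hard ceiling on a single TCP stream: best-case throughput is roughly the window size divided by the round-trip time, regardless of how big the pipe is. A back-of-the-napkin sketch with illustrative figures:

```python
# Throughput ceiling of one TCP stream: window_bytes / rtt_seconds.
# Numbers below are illustrative, not measurements of any product.

def tcp_throughput_mbps(window_bytes: float, rtt_seconds: float) -> float:
    """Best-case throughput (megabits/s) of a single TCP stream."""
    return window_bytes * 8 / rtt_seconds / 1e6

# Default 64 KiB window, LA-to-New-York latency of ~65 ms:
print(tcp_throughput_mbps(64 * 1024, 0.065))   # ~8 Mbit/s, even on a 100 Mbit link
# Same window across the Pacific at ~200 ms:
print(tcp_throughput_mbps(64 * 1024, 0.200))   # ~2.6 Mbit/s
```

Notice that the link speed never appears in the formula: once the window is in flight, the sender is idle until acknowledgments make the round trip, which is exactly the "valleys" in the spiky graph Chris describes.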
So as the geographic distance grows, you're going to see more valleys and fewer peaks.

Speaker 0 13:11 Ideally, you want to see as flat a line as possible, one that comes as close as possible to your maximum theoretical bandwidth. Sure. But at the moment you're having these momentary drops because of this handshaking and these confirmations that are constantly occurring. Even though you're peaking out maybe at decent speeds, it's like if you're trying to drive your car down the street and you're having to go from 60 miles an hour to five miles an hour, and you can only do 60 for five seconds at a time. That's just not an efficient way of driving down the street. You know, the difference between driving down the street on the highway versus on a parkway where there's lots of shopping malls. Yup.

Speaker 1 13:51 I have a great analogy that I like to use for this, and this will get into UDP a little bit, and it's this. Let's say you're trying to fill up a swimming pool and you have a bucket, and you're sitting there by the hose bib on your house, and you fill up the bucket and you can dump it right into the pool, and you keep doing that. That's great. You're right beside it. You're getting that water in there as fast as you can. Now let's push that pool, you know, a hundred feet away, and you have two or three people in between. Now you fill up your bucket, and you only have one. You have to pass it to the next guy. He passes it to the next guy or girl, the next guy or girl dumps it into the pool, passes the bucket back, he fills it up.

Speaker 1 14:28 Wouldn't it be more optimal if you could have multiple buckets? You fill one up, pass it along, fill one up, pass it along. There's a constant flow of buckets going across.
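The bucket analogy maps directly onto stop-and-wait versus pipelined transfers. A toy comparison (the timing model and numbers are illustrative, not a real network simulation):

```python
# Time to deliver N chunks over a path with a given round-trip time,
# comparing "one bucket at a time" (stop-and-wait) with a constant
# stream of buckets (pipelining).

def stop_and_wait_time(chunks: int, rtt: float, send_time: float) -> float:
    # Each chunk is sent, then we wait a full round trip before the next.
    return chunks * (send_time + rtt)

def pipelined_time(chunks: int, rtt: float, send_time: float) -> float:
    # Chunks follow each other down the pipe; only the first one pays the
    # propagation delay, the rest arrive back to back.
    return rtt / 2 + chunks * send_time

rtt = 0.065          # seconds, e.g. LA to New York
send_time = 0.001    # seconds to put one chunk on the wire
chunks = 10_000

print(stop_and_wait_time(chunks, rtt, send_time))  # ~660 s
print(pipelined_time(chunks, rtt, send_time))      # ~10 s
```

With pipelining, the distance shows up only once (the first bucket's trip), so the pool fills at nearly the same rate whether it is next to the hose or a hundred feet away.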
Now, it wouldn't make much difference when the pool is close to the hose bib and you can just dump it right in. But as you get more and more distance, and you need more and more people, and you're passing that one little bucket across, it's going to fill up the pool slower and slower as you get further. Whereas if you're sending a constant flow of buckets across with water, really it doesn't matter. Once the first bucket starts to reach there, there's always going to be another bucket to dump in. So you're going to be filling up that pool just as fast, even if it's far away, as if it's close. And that's kind of what we've done with FileCatalyst with the UDP protocol: we're sending multiple pieces of data concurrently with the other pieces. While we're waiting for the first piece to be acknowledged, we're sending the second piece, we're sending the third piece. And so all those valleys that you see as the distance grows with TCP-based protocols like FTP, we're filling them up by just sending more data at the same time.

Speaker 0 15:30 So UDP. We were talking about TCP, and most things we do on the internet are built around TCP. What is UDP, and how is it something you're able to use on the internet if so much of what we do on the internet is based on TCP, the Transmission Control Protocol?

Speaker 1 15:49 Well, I'm assuming that a lot of your listeners are technical people and they've probably played... I wouldn't assume that. Games: a lot of people play games, and a lot watch streaming video. And so if you've done those things, if you've played online gaming, a lot of that is using UDP in the background. And what UDP is great for is streaming. Um, and it's good for real-time applications where, if you lose a couple packets here and there (a packet is just a piece of data that's being sent across), it's not going to be the end of the world, because you can recover from that. It's real time, it's lost, you don't need to get it back.
Whereas with a file, typically you need every bit of data to arrive. What does UDP stand for, again? It's User Datagram Protocol.

Speaker 0 16:34 Got it. And that was actually an earlier protocol than TCP, correct?

Speaker 1 16:39 Right. It's kind of a precursor, and it doesn't have all the congestion control and the flow control that I was talking about earlier that stops the internet from kind of collapsing. If everybody was using UDP at one time, unbounded, you know, just sending as fast as they could, then immediately everybody would kind of grind to a halt. So you really have to be careful when you're using UDP, and applications built on it, like games or streaming utilities, can stream at the exact rate that they need, and, uh, they need to have some sort of mechanism to throttle things back as well. But go ahead.

Speaker 0 17:15 Here's a question. I mean, you were talking about streaming video, in that if there's packet loss when I'm watching a streaming video, say it's Netflix on my computer, or on my TV through my Apple TV, or whatever it is. Yeah, sure. We're all used to those moments where suddenly, you know, it obviously steps down in quality, or you get some macroblocking artifacts when it's obviously taken a hit on the data rate. You've got some weird visual distortions for a moment, and then it kind of snaps back. And, you know, that might be annoying as a user, but generally these days it doesn't dramatically hold up your enjoyment of whatever you're watching. But you said, you know, that would be bad if you're transporting a file, because a file needs to be intact at both ends. It needs to be the identical set of bits as it was on the transmission end as it is on the receiving end. So this seems contradictory to me.
How can one use UDP, with this kind of almost shotgun approach, where yeah, maybe some of the birdshot is going to hit and some of it's not, but it kind of doesn't matter if the bulk of it hits? How do you reliably send files using a system like that?

Speaker 1 18:24 Well, the nice thing about UDP is that you can send a perfect stream, so you get that flat graph. But as you mentioned, you can lose packets along the way. But the nice thing about file transfer is that you don't need to have every packet arrive in order. You don't need to have it arrive until, you know, later in the file, potentially. So if you build a system on top of UDP that can track what's arrived at the destination and what hasn't, then concurrently, while you're sending new data, you can be retransmitting any lost data. As opposed to TCP: what it does is it will sit and wait, because it makes sure everything arrived sequentially. So if something's lost with TCP, it'll actually stop transferring, send a negative acknowledgment, which is "hey, I missed packet nine, resend packet nine," and it will not progress. It doesn't have the concurrency built in to do that. So we've had to build, on top of this unreliable UDP protocol, a retry and retransmit mechanism that happens concurrently while we're sending that flatline, that smooth data stream, across. And so because of that, we're able to send at essentially any speed we want, reliably, um, just like TCP would. But since we've added our application-level features to retransmit and retry these lost packets, we don't have the same issues that you would if you were just using raw UDP.

Speaker 0 19:46 But that's still not adding the kind of overhead that would be associated with a pure TCP transfer, right?

Speaker 1 19:54 Right, exactly. There are still some things that TCP would do, like the congestion avoidance and that sort of thing, that we've also had to build on top.
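The track-and-retransmit idea Chris describes can be sketched in a few lines. This is a toy model of NAK-based reliability over a lossy channel, not FileCatalyst's actual protocol: keep streaming, note which pieces arrived, and re-send only the gaps instead of stalling.

```python
# Toy model: stream chunks over an unreliable channel, track the missing
# set, and keep re-sending outstanding chunks until the file is complete.
import random

def lossy_send(chunk_ids, loss_rate, rng):
    """Simulate a UDP-like channel: each chunk independently may be dropped."""
    return {c for c in chunk_ids if rng.random() > loss_rate}

def transfer(total_chunks, loss_rate, seed=42):
    rng = random.Random(seed)
    missing = set(range(total_chunks))
    rounds = 0
    while missing:
        # Stream everything still outstanding; retransmits share the pass
        # with new data rather than blocking it, as TCP's in-order rule would.
        missing -= lossy_send(missing, loss_rate, rng)
        rounds += 1
    return rounds

# Even with 10% packet loss, a handful of retransmit passes complete the file.
print(transfer(10_000, 0.10))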
But we do it in a little less aggressive way than TCP would. Uh, so I mean, we still have the same problem where, if you want to send data across the internet and it starts to encounter other data, let's say someone else on your network is just browsing the web and they start downloading a file, we still have to slow down to allow that traffic to go through. But we at least allow you to tune how much to slow down. Do we want it to be aggressive? Do we want it to be passive? And so we've also built that into the protocol that we've built on top of UDP, to allow that to happen. So we allow you to be almost as aggressive as TCP in slowing down, or way less aggressive. And we can slow down basically when we see other traffic, and then speed back up. So we've essentially created what we think is almost a better version of a data transport protocol than TCP ever could be, um, because we've started from scratch and tailored it specifically for bulk data transfer.

Speaker 0 21:04 Yeah. And the FileCatalyst application is very flexible in that regard. It's very, you know, tweakable, and you're able to really kind of use all the bits you can in your stream and your bandwidth. So how does that end up manifesting? Give us, you know, some back-of-the-napkin kind of scenarios. Like, what do your clientele who have been, let's say, just using an FTP client across the public internet, which many of our clients still do constantly... You know, they're using technology from the seventies, essentially. Sure, it's attached to a much bigger pipe, but it's like, well, if you have the biggest pipe in the world and it's still got a ball and chain on it at the same time, that's not going to do that much.
So when people who've been using FTP for, you know, distributed file transfer start to utilize this UDP-based file transfer acceleration technology that you guys have baked into the FileCatalyst product line, what happens to them? What happens to their data? What happens to their world? Like, are we talking about something where they get an extra 5%? Do they get, you know, say, twice as fast? I mean, how does it play out?

Speaker 1 22:16 Well, let's go back to your LA-to-New-York scenario, because everybody is aware of LA and New York. Like I mentioned,

Speaker 0 22:22 Media markets in the U.S., a lot of distributed workgroups. Our own clients have offices in both those locations. Sure.

Speaker 1 22:29 Um, so between those two locations, you'd have a typical round-trip time, or latency, of 60 to 70 milliseconds. And just as an example, uh, if you're sending a 10-gigabyte media file from LA to New York, it could take five or six hours on a hundred-megabit connection with FTP. It depends on if it's tuned, you know, it depends on what operating system you're using. But you'll get a steady transfer and get it there in about 14 minutes with FileCatalyst. Wait a minute, what?

Speaker 0 23:04 I mean, that's not like 5% or 20% faster. That's...

Speaker 1 23:09 I think you could just say that's, like, way faster. Some people... like, when we go to a trade show, for example, we have kind of a billboard we stick up that has these types of numbers, and of course we take an extreme number. And keep in mind, you know, if you had a one-meg file, then it might be, like, a minute, and then two minutes with FTP, so you don't see the difference. But when you stretch that difference across a 10-gig file, then it starts to really stretch out the FTP transfer. Whereas since we get that theoretical maximum speed across the network the entire time, we're able to get it there in the theoretical minimum time possible with that given bandwidth.
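The numbers in this exchange check out on the back of a napkin. A 10 GB file on a 100 Mbit/s link with ~65 ms of LA-to-New-York latency: an untuned TCP stream is window-limited, while a transfer that saturates the link is bound only by bandwidth. The 32 KiB window below is an illustrative assumption for an untuned stack, not a measured value.

```python
# Window-limited vs. bandwidth-limited transfer time for the scenario above.
FILE_BITS = 10e9 * 8            # 10 GB file
LINK_BPS = 100e6                # 100 Mbit/s link
RTT = 0.065                     # seconds, LA to New York
WINDOW_BITS = 32 * 1024 * 8     # a small, untuned TCP window (assumed)

tcp_bps = min(LINK_BPS, WINDOW_BITS / RTT)
print(FILE_BITS / tcp_bps / 3600)   # ~5.5 hours, window-limited
print(FILE_BITS / LINK_BPS / 60)    # ~13.3 minutes at full link rate
```

Those two results line up with the "five or six hours" versus "about 14 minutes" figures quoted in the conversation, and they also show why upgrading the link to a gigabit changes nothing for the window-limited case: the `min()` is decided by the window term, not the link term.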
And it really depends a lot; you can go into your registry and tune all these things with TCP, tune the window size, and both sides have to do that.

Speaker 1 24:00 So if one side hasn't done it, and if one side's Linux and one side's Windows or one side's Mac, they might get better performance than others. But in general, you're going to get those types of performance gains. And if you go into gigabit speeds, now you're not going to get much extra speed with FTP. So you could be upping your bandwidth from a hundred megs to a gigabit, for example, and if it's taking you that five hours to get it across, it's going to take you the same five hours to get it across. Whereas

Speaker 0 24:29 The bulk of what you're waiting on are these constant reconfirmations and the communication protocols.

Speaker 1 24:36 Well, I mean the latency, the network RTT, is not going to change. So your bottleneck is not the bandwidth at that point; your bottleneck is that latency in between. But with FileCatalyst, it doesn't care about that, so it will immediately claim all of that extra bandwidth. So when you're getting into, you know, the gigabit speeds, then you're talking, you know, a minute or two for the same file that would take five hours with FTP.

Speaker 0 25:02 Tens and tens and tens of times faster.

Speaker 1 25:06 Yeah. And now imagine taking those types of bandwidth and stretching it from LA to Hong Kong, or New York to... well, Ottawa is actually pretty supportive, but, uh, if you're going, you know, in Canadian terms... so yeah, if you stretch it further, you know, you're going to Asia, you get like 200 milliseconds of latency, then yeah, the speeds are just ridiculously slow with FTP. I mean, it becomes unusable. I mean, everybody's familiar with the concept of mirror sites for downloading.
The traditional scenario to remove this download bottleneck would be to just mirror the content to other download servers around the world, and you download from your local one, whichever one's closest. Whereas with FileCatalyst you could have one server in the world and everybody could just download at the same speed. So

Speaker 0 26:00 <inaudible> or Netflix, and you have edge servers placed at all of the key internet sites, these actual facilities that are the main nodes that all of the internet traffic goes through. Sure, edge servers might be a perfectly reasonable way of distributing your collection of, say, movie files or content for, you know, whatever the big broadcaster is, and sure, make it so everyone in the big cities can be grabbing something locally. But that's not how workflow tends to work in post-production. We're going point to point. It's literally: these guys in this office have created this unique file that literally just hatched into existence at that moment. And of course, what do our clients always want? Immediacy in everything, because they're up against deadlines. And so that one file that only exists in that one place, it needs to go across the country or the ocean or whatever it may be, and they just need it to get there.

Speaker 0 26:52 So you're saying that there's probably a lot of people dramatically overpaying for bandwidth that aren't getting a lot out of it, if a lot of file transfer is what they're up to and they're not using FileCatalyst? That would be our main return on investment right there. I mean, the wasted time, wasted bandwidth. I mean, you get a file from A to B, and then person B says, hey, that's not the right file, no, I need you to redo this. Well, if you can shrink those bottlenecks down, it takes only 15 minutes to resend a file as opposed to another five hours.
You can have that kind of back and forth: hey, can you review this? Nope, do this, resend it. It makes it possible within the span of a few hours.

Speaker 0 27:39 Whereas with FTP, you better get that file right the first time. And what's interesting is, you know, what we've seen in our industry, I think, in, let's call it the last 10-or-so-year period, where things went very file-oriented from baseband video signals or tapes, and, uh, you know, file sizes have obviously increased. But as the internet has, you know, expanded in our lives and in our infrastructures, it's allowed for people to be more geographically isolated and still kind of be part of the same workgroups. You know, what's happened as well in that time is we see a lot fewer people that we deal with in these environments who are kind of dedicated media management people. It's usually the fricking editor or the creative or a producer who's also having to do a lot of these types of file management and movement tasks, not the IT people, and they don't like having to just do this type of rote stuff over and over.

Speaker 0 28:40 And if it takes five hours to transfer a file and it was the wrong one, their head is going to explode when they have to do it again, because they'd rather be editing. They'd rather be cutting or doing graphics or putting together the next package that they're working on. So, you know, again, the fact that it tends to be only somewhat technical creatives who often are tasked with exactly these types of things, I think it's just all the more reason why you want it to be as simple as possible. You want it to be super reliable. You want it to be super fast, so you're not bogging down creative people throughout their day with these types of tasks versus, you know, the creative stuff you're probably actually paying them for. And you've found a way to make the technology just work much more optimally.
So, Chris, talk a little bit about what goes into this. What do we need to make this happen? Is it software, is it hardware, what's involved? Speaker 1 29:42 Well, it's a hundred percent software, and there are two avenues I can talk about; I guess we can get to both. To start with, we have our off-the-shelf products, and if you're familiar with FTP and how to deploy it with a client and a server, it looks just like that. We have a FileCatalyst server with user accounts you can create, with home directories. You can create groups that include multiple users and can have virtual folders, so everybody gets access to what they need. They log in and see a file system just like they would with FTP. In fact, you can connect to our server with a third-party FTP client and transfer just fine with FTP if you want to. Speaker 0 30:21 Though you won't see the speed benefit of it. Speaker 1 30:24 Correct. But if you plug our server in place of your FTP server and then replace specific strategic nodes or locations with FileCatalyst clients, that's the route some people take. They don't necessarily want to change the experience, or they don't need acceleration everywhere. That's the server side of things. On the client side, which is the side sending and receiving files to the server, we have a few different options. We have everything from an API and SDK where you can build your own applications, to a command line tool you can kick off via a script or use to transfer files manually, to watch folder applications that can watch a folder where you're saving data from your editing suite, and as soon as it sees a new file, it can start sending it across the network.
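The watch-folder pattern just described can be sketched in a few lines. This is a toy illustration of the general idea, not FileCatalyst's actual product: the folder names are hypothetical, and moving the file to a `sent` directory stands in for kicking off an accelerated upload.

```python
import shutil
import time
from pathlib import Path

WATCH_DIR = Path("outbox")   # hypothetical folder the editor saves into
DEST_DIR = Path("sent")      # stand-in for the accelerated transfer target
WATCH_DIR.mkdir(exist_ok=True)
DEST_DIR.mkdir(exist_ok=True)

def stable_size(path: Path, wait: float = 0.1) -> bool:
    """A file still being written changes size; wait briefly and re-check."""
    before = path.stat().st_size
    time.sleep(wait)
    return path.stat().st_size == before

def poll_once(seen: set) -> list:
    """Scan the watch folder and hand off any new, fully written files."""
    sent = []
    for f in sorted(WATCH_DIR.iterdir()):
        if f.is_file() and f.name not in seen and stable_size(f):
            # A real deployment would start an accelerated upload here;
            # moving the file stands in for that hand-off.
            shutil.move(str(f), str(DEST_DIR / f.name))
            seen.add(f.name)
            sent.append(f.name)
    return sent
```

A real tool would run `poll_once` in a loop (or use OS file-change notifications) and would need the stable-size check, since editors write large media files incrementally.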
And then we have web applications, so you can browse to a web page and use it directly in the browser. So we have a wide variety of applications to actually deploy this. Sure. Speaker 0 31:28 So platforms: Linux, Mac, Windows, all that? Speaker 1 31:33 You hit the nail right on the head: all those platforms. Commodore Amiga, um, yeah, we run on Apple IIs. Speaker 0 31:43 BeOS and VIC-20. Sorry, we're starting to date ourselves there. But the ST was the one that came with the built-in MIDI ports. Oh yeah, that was great. We're working on our TRS-80; you load the software off an analog cassette tape. That's right, the bottleneck starts to become the analog cassette. There's only so much you can do to accelerate that. Fast forward a little bit. So you have client-side applications that are thick-client apps, and you've got web-based applications, so there isn't even really something the client side needs to install. You talked about your server stack a little. Again, we're talking about software that runs on a physical server you've built out to be a file share server, but that uses the FileCatalyst technology. What platforms can that run on? Speaker 0 32:36 Is the server also cross-platform technology? Could it be anything from a Mac mini with a Thunderbolt hard drive on it all the way up to a burly Linux server running, you know, Red Hat or something like that? What are the options there? Correct, all of the above. It depends on your usage scenario. You can run it on a laptop if you wanted to. On the server side, generally people would have it on, like you said, an actual server.
But on the client side, with the watch folders, we have news gathering agencies out in the field with a laptop and a 3G connection, and they have a watch folder on the desktop that they use to send over cellular networks back to the servers. Speaker 0 33:19 So you don't really need a beefy piece of hardware. We also have Android and iPhone apps you can get in the app stores, so it's running on cell phone hardware; it doesn't have to be beefy. When you get into the very high speeds, 10 gigabits plus, then of course you need high-end machines to do that: high-end CPUs and high-end storage connected with fiber, to be able to handle all the reading and writing. So it just depends on your needs; it scales all the way up and all the way down. I'm actually interested; let's talk about cellular for a moment, because one of the trends I see quickly coming together, and you really just spoke to it yourself, involves a lot of traditional ENG (electronic news gathering) setups, where you might have a microwave truck or a satellite truck. That vehicle literally has one of those towers on it, and it either does line-of-sight microwave or maybe it's even a satellite dish bouncing off a bird somewhere. Speaker 0 34:22 Those ways of beaming video content from a remote news gathering crew, or a production crew in a sports broadcast environment, were typically done as real-time baseband video signals traveling across those links, going between a truck and a microwave transmitter, or a truck and a satellite and then another facility. It was very much oriented around "we can beam this back in real time," not really any faster, and it cost a hell of a lot of money to do.
You're literally talking about satellite time, or very expensive microwave hardware and towers and all sorts of crazy stuff. This move to utilizing off-the-shelf cellular networking technology, which we obviously have literally swimming around us and going through us at all minutes of the day at this point, because the internet is literally in the air all around us, I think that's a very interesting trend. So this idea that you can use off-the-shelf cellular technology as opposed to these much more expensive and proprietary transmission technologies is fascinating, but you run into the same kind of latency issues, because again, you're often using the public internet at that point, right? Speaker 1 35:49 Correct. And you hit packet loss issues too, especially as you're moving between cell towers; as you get away from one, you're going to have a lot of packet loss. That's a big issue with satellite and wireless technologies generally. Even with WiFi, as you get farther from your router you'll notice the speed starts dropping; it can't negotiate as high a rate, and that's because of interference. That interference manifests as packet loss when you're transferring files, and packet loss is interpreted by TCP, and therefore FTP, as congestion, even though in this case it's actually interference. TCP interprets it as congestion and throttles your speed way back. Several years ago this wasn't as big a deal; 3G, for example, was slow enough. But as you start to get into 4G LTE, the speeds are getting to the point where that TCP speed limit, caused by interference and packet loss, keeps you from transferring at the full speed of your LTE network.
Speaker 1 36:55 And that's where a technology like FileCatalyst, in a mobile app for example, or just running on a laptop using an LTE or 4G modem, is where you're going to start to see a speed increase. Speaker 0 37:07 That's this whole idea that there's a logical limit to what those traditional transfer technologies can give you, no matter how much raw bandwidth you have. That's my big takeaway here: you can pay for a hundred gigabits, but if the protocol itself, or the physical layer of the technology you're using, has these inherent limits, you can pay for as much bandwidth as you want and it's essentially wasted money. And one of the technologies I love hyping, that we're just on the threshold of, is what I call the first true 4G cellular technology: LTE Advanced. LTE Advanced is really still very much in the testing phase, even though the latest iPhone, the 6s for instance, has an LTE Advanced baseband chip in it. But with LTE Advanced, in the next three or four years we're going to have hundreds of megabits of connectivity to our cellular devices. Speaker 0 38:06 So this idea of leaning on cellular networks more and more for this type of work fits into this exact scenario. Even though the network is much faster, if you're using these old-school protocols that are subject to these issues, you're just not taking advantage of all this advancement in bandwidth and connectivity. It's funny, because this is kind of the opposite of my thinking going into this conversation: as the raw bandwidth gets bigger and bigger, it actually becomes more important to have these types of technologies in play. The raw bandwidth doesn't obviate the need for them; it creates even more of a need, so you can actually take advantage of the bandwidth.
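The ceiling the guests describe, where latency and loss cap a TCP transfer regardless of link capacity, can be estimated with the well-known Mathis et al. approximation for single-stream TCP throughput. The path numbers below are illustrative assumptions, not measurements from the episode:

```python
import math

def tcp_throughput_mbps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Mathis et al. approximation of steady-state single-stream TCP
    throughput: rate <= MSS / (RTT * sqrt(p)), independent of link size."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss)) / 1e6

# Illustrative transatlantic path (assumed numbers): 1460-byte MSS,
# 80 ms round trip, 0.1% packet loss.
rate = tcp_throughput_mbps(1460, 0.080, 0.001)
# A single TCP stream tops out under ~5 Mbps here, even on a 1 Gbps circuit.
```

This is exactly the "wasted bandwidth" argument: the formula contains no term for link capacity, so buying a fatter pipe does not move the cap; only lower latency, lower loss, more parallel streams, or a loss-tolerant UDP transport does.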
That's like a huge light bulb for me, because I didn't really think of it that way. Speaker 1 38:57 And that's exactly why, as an acceleration company using this type of technology in our products, we're excited. We see the future where files keep growing: they went from HD to 4K, and 8K content is going to start moving around. We see bandwidth growing, but these protocols aren't changing. The speed of light is not changing, so latency is always going to be there, and there's always going to be some sort of interference that causes these packet losses. So a technology like FileCatalyst is well positioned for the future. Getting it into the hands of the people that need it is the big challenge for us. Speaker 0 39:38 Well, we'll talk about the commercialization a little more in just a moment. But there's another point I wanted to make. You used the phrase APIs, and you spoke about how you have an implementation of the underlying FileCatalyst technology that can be baked into other systems and become part of more sophisticated workflow management systems. At Chesapeake Systems, one of the most active areas we're involved with literally every day, and Mr. Whetstone here is essentially one of our workflow engineers, is programming within media asset management and workflow automation environments, like Levels Beyond's Reach Engine or a number of other platforms; it could even be Telestream Vantage.
And we love hooking those types of software platforms into other pieces of enabling software through application programming interfaces, hooks that those third-party software makers make available, so we can have the software directly talking with each other instead of having to rely on things like watch folder workflows. Those not only require more manual input, but in a workflow you're tracking, you can lose track of things a little when they're hopping between folders. We like to have very central management and understanding of where things are in any given workflow as it unfolds, and when one piece of software is deeply integrated with another platform through APIs, Speaker 0 41:19 we get that capability. Can you talk a little more about how FileCatalyst, among some of your clients, has been fit into these types of overall workflow management or media management platforms? Speaker 1 41:35 Sure. We've spent probably the last three years building up an ecosystem of technology integration partners, and those range from asset management solutions to workflow management, storage and archive, QC, and transcoding. In order to do that, we really had to put a focus not on our off-the-shelf features but on our integration features, our APIs. We work closely with various vendors of these applications to see what kind of hooks they need into our technology in order to just sort of drop it in and plug it in. So we have an extensive REST API; it's a web-based technology where any program, even a web browser, can call one of these URLs, the same kind of URL you use to get to a web page. Speaker 1 42:30 We expose our technology through a series of URLs that another program can call to kick off a transfer.
Part of the URL says: transfer this file from this location to that location, here are the credentials, here's how fast I want you to do it. Then you can call another URL that says: give me the status on this transfer I just started. Using these types of hooks, third-party tools, workflow management solutions, and asset management solutions can now kick off transfers directly from their own application: start it up, send it where they need it, get the status, show a nice progress bar back inside their application, and get the file from A to B as fast as possible. Whereas previously, maybe the default mechanism they used to transfer files was FTP. Speaker 1 43:16 That would essentially be what we refer to as a file transfer bottleneck inside someone's workflow. Maybe someone has to send a file into a workflow, then it goes somewhere else for transcoding, and then three or four transcoded files need to come back and be sent somewhere else. Every time a file needs to move to a different geographic location, that's a file transfer bottleneck. If you have FileCatalyst software deployed in those locations, now you're just telling it to send between FileCatalyst servers, and you end up going as fast as possible, at least for the file transfer part of things. There are complementary technologies, like transcoding, that can become faster in different ways, but for us, reducing those file transfer bottlenecks is the focus, and providing APIs like the REST API I just mentioned has been one of our focuses. We've built up a large ecosystem of vendors you've probably heard of in order to accelerate those workflows. Speaker 0 44:16 Which is awesome.
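The start-transfer and poll-status calls the guest describes can be sketched as URL construction. Note the endpoint paths and parameter names below are hypothetical, invented for illustration; FileCatalyst's real REST API will differ, so consult the vendor's documentation before integrating.

```python
import urllib.parse

def build_transfer_request(base_url: str, src: str, dst: str,
                           user: str, rate_mbps: int) -> str:
    """Compose a start-transfer URL in the style the guest describes.
    Path and parameter names are hypothetical, for illustration only."""
    params = urllib.parse.urlencode({
        "source": src,
        "destination": dst,
        "user": user,
        "maxRateMbps": rate_mbps,
    })
    return f"{base_url}/transfers/start?{params}"

def build_status_request(base_url: str, transfer_id: str) -> str:
    """Compose the matching status-poll URL for a started transfer."""
    return f"{base_url}/transfers/{transfer_id}/status"

url = build_transfer_request("https://server.example", "/media/cut1.mxf",
                             "/incoming/", "editor1", 800)
```

The point of the design, as described in the episode, is that any HTTP-capable tool (a MAM, a workflow engine, even `curl`) can drive transfers and render its own progress UI from the status endpoint.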
That's what I spend a good part of my day doing: picking these APIs apart and figuring out how we can use them. You probably dream in REST APIs at this point. The other part of my job is listening to Nick. The enjoyable part. So here's another light bulb that just went off in my head, Chris. We've been talking a lot in this conversation about very point-to-point transfer: maybe you've got an LA office and a New York office, or New York and London, or New York and Singapore. What about the cloud? And when I say that, frankly, I hate the term "the cloud"; it's been overused and watered down, kind of like "Web 2.0," which just became meaningless. But the cloud does actually mean something. Speaker 0 45:07 When I think of the cloud, I think of services like Amazon Web Services: the actual idea that you're using someone else's physical IT infrastructure in their data center and essentially provisioning as much of it as you need at a given moment. And you're getting a lot along with that: they're handling file backup and disaster recovery and high-speed links and all of these things, and you just pay for it as a service, as needed. And you mentioned transcoding. We work with some vendors now like Telestream, which has Vantage Cloud, and then of course there's Encoding.com and people like that. Amazon themselves also just bought Elemental, the transcoding company, so obviously they're making even more investments in cloud-based transcode, and Amazon in fact already offered cloud-based transcode as a service. Speaker 0 46:01 Can we put FileCatalyst nodes on someone's cloud infrastructure?
So if you're creating content locally, and for the foreseeable future you're going to have to do that heavy-lifting post production locally, but let's say you need to do a lot of transcoding very fast and you don't want to build out a huge transcoding cluster: can we put FileCatalyst on our local side of things and FileCatalyst on the cloud, if you will, so we can really expedite moving files from our local infrastructure to the cloud for some type of operation, and then bring them back at a much faster rate than we normally would if we weren't using FileCatalyst? Speaker 1 46:51 We can. Amazon is simply running VMs; it's Windows and Linux, and you can install our software on any Windows or Linux box. But beyond that, consider the storage used on the back end. For example, if you're using Amazon S3 to store those files, and maybe your encoding is happening directly on the files in S3, our FileCatalyst server software is able to map user accounts directly into your S3 accounts. It will appear as a regular file system, which you can then accelerate into using UDP from your local location, landing in an S3 bucket that's perhaps being used by your transcoding software. And then you can pull the files back down directly from S3. We've done a tight integration using Amazon's REST APIs to make it appear as a nice file system, when in fact it's a bunch of blobs sitting there. Speaker 1 47:54 Using that kind of approach, you're able to accelerate files in and out of the cloud, not just from the cloud into local storage but in and out of the actual cloud storage. And we have a very similar offering for Microsoft Azure as well.
So you can upload files directly into Azure blob storage, download files directly from Azure blob storage, and then use any services built on top of those storage services from Amazon or Microsoft to access those files. That's our cloud integration at this point. Speaker 0 48:29 Wicked. And with all this talk of blobs, you know what, we should probably do an object storage episode soon, because we're talking about object storage more and more ourselves, and many people don't really know what it is. Object storage is not the traditional file system with a hard drive icon that you just connect to and mount, with folders and subdirectories. It's not necessarily a new paradigm, but it's a very different paradigm in file storage. It can still be utilized by transcoders and serve as part of the repository in an overall workflow. But having that accelerator on one hand, to get to it and from it if you're using, say, Amazon's S3, and on the other hand the tools you provide to treat that object storage a little more generically, a little more traditionally, if you will, so it's easier to fit into certain types of workflows: that sounds like a critically enabling technology that makes the cloud significantly more usable in these media- and video-centric workflows than it would otherwise be. Right? Speaker 1 49:42 Yeah. And in addition, the traditional tools that Amazon or Microsoft provide are all based on an HTTP-based REST API. So if you were to make those calls yourself, it's an HTTP-based upload, which has the same bottlenecks as FTP.
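The "blobs presented as a nice file system" idea the guest describes can be sketched with a small mapping function. Object stores like S3 or Azure Blob Storage hold flat keys such as `renders/day1/cut1.mxf`; a client can fold the `/`-delimited keys into a folder-like view. This is a toy illustration of the concept, not FileCatalyst's implementation, and the keys are made up:

```python
def keys_to_tree(keys):
    """Fold flat object-store keys into a nested dict that mimics a
    directory tree; leaf entries (the objects themselves) map to None."""
    tree = {}
    for key in keys:
        node = tree
        parts = key.split("/")
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # descend, creating "folders"
        node[parts[-1]] = None                # the blob itself
    return tree

tree = keys_to_tree([
    "renders/day1/cut1.mxf",
    "renders/day1/cut2.mxf",
    "renders/day2/cut1.mxf",
])
# tree["renders"]["day1"] now lists the two day-1 objects like a folder.
```

Real object-store APIs support this directly via prefix and delimiter listing parameters; the point is that the hierarchy is a client-side presentation over flat keys, which is why a gateway can make blobs look like a mountable file share.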
So by putting us in front of your S3, inside EC2, which is Amazon's compute area, you've taken the part that makes the REST calls and put it on a very low-latency network, so that bottleneck disappears as much as it can. And then the long-haul part, from your office into the Amazon infrastructure, is where we do the acceleration. Without us there, if you start to move a lot of data in, it's going to be very slow. And Amazon does have an import/export service. Speaker 1 50:30 They just released a new version at their re:Invent show in the last week, called Snowball. They ship you a big storage appliance, you install an application locally to move all your data onto the Snowball appliance, then you ship it back to Amazon and they'll put it onto S3 for you. That's their solution. It's funny when you think about it, but some people need to move petabytes of data, and with the bandwidth available today, even with FileCatalyst, that's going to take you weeks and weeks. So to a point, I can see it. But the funny thing is that they don't provide you a comparable way to get that data back out, and there's a reason for that: they want your data to live there forever. Exactly. FileCatalyst allows you, with reasonable amounts of data, to get the data in and out in the optimal amount of time, and that's not possible with the APIs they've provided. Speaker 0 51:29 So let's talk about that. We've talked about a number of different types of organizations that can use FileCatalyst, but let's relate this to the actual product line a little bit, in terms of costs and who this might be appropriate for.
Is this something that's inherently going to cost tens of thousands of dollars, so that a certain segment of the market just isn't going to be appropriate for it, given the costs and complexity of setting up FileCatalyst? Or do you have something for work groups of all sizes if they have this need? Speaker 1 52:01 Our software is pretty much all bring-your-own-license. It starts at a couple thousand dollars and goes up into the tens of thousands, depending on how many servers you want, et cetera. It's targeted all the way from SMEs up to large enterprises. You can have small post houses using it all the way up to, on the media side, one of our largest, NBC Sports, who used it for the Olympics. And we have it across verticals: companies like Dell use it enterprise-wide. So it really can scale down, and where the ROI comes in, as you mentioned earlier in the show, is that you're paying a lot of money every month for this bandwidth. Speaker 1 52:44 So you look at it and say: I'm spending maybe $500,000 a month on this high-bandwidth connectivity, I'm not able to utilize more than a fraction of it, and I need to get this file from A to B. You're recovering money from the bandwidth costs, but also, what about the lost time and productivity? That's where you sit down, do an analysis, and say: a perpetual license for this software is going to pay for itself within four or five months, and then I own the software.
Speaker 0 53:15 Or, again, if you're getting a tens-of-X improvement in transfer speed, you might say that gigabit internet link we have is overkill, and maybe we only need a hundred megabits, because now we're actually taking advantage of it; it's way faster than it was before even with the gigabit connection, and we can cut back on our bandwidth bill. I imagine in many circumstances it more than pays for itself essentially instantaneously. Speaker 1 53:44 Yeah. Well, the productivity alone; how valuable is it to get almost instant feedback? So there are many ways it can pay for itself. But we do understand there are smaller companies that can't even afford the initial capital cost, who are looking for something on a pay-as-you-go basis, maybe per gigabyte or per transaction. So we do provide monthly subscriptions as well, and we have several partners we work with. One of them is called Aframe, for example; they provide cloud services for these types of situations where you just pay per gigabyte as you go. We have several partners of that ilk we've partnered with to provide those types of services. That's not to say we won't come out with something in the future that's FileCatalyst-branded, off the shelf, pay-per-gigabyte, maybe tied into your cloud storage. But for now, we've partnered with several others to cover that lower-hanging fruit and get it to the masses. Speaker 0 54:45 Sure. But it really does sound like, for the type of clientele we deal with across the board, whether a mega media corporation or a smaller production company with a few field offices, you absolutely have something we can be putting in for those clients, and it's not necessarily going to break the bank.
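The payback reasoning in this exchange, recovering the license cost from a trimmed bandwidth bill plus recovered staff time, is simple arithmetic. The dollar figures below are invented purely to illustrate the shape of the calculation; they are not quotes from FileCatalyst:

```python
def payback_months(license_cost: float,
                   monthly_bandwidth_saving: float,
                   monthly_labor_saving: float) -> float:
    """Months until a one-time perpetual license is recouped from the
    combined monthly savings it enables."""
    return license_cost / (monthly_bandwidth_saving + monthly_labor_saving)

# Illustrative assumptions only: a $20,000 license, $3,000/month trimmed
# from an over-provisioned circuit, $2,000/month of recovered editor time.
months = payback_months(20_000, 3_000, 2_000)
```

With these made-up inputs the license is recouped in four months, the same order of magnitude as the "four or five months" the guest cites; the real exercise is plugging in your own bandwidth contract and labor numbers.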
In fact, it could be a massive, almost immediate return on investment. So let me ask you this. We don't work with them directly, but we do encounter a few other folks in the file transfer acceleration market who are somewhat well known as well, and I think you tend to get lumped in with them as one of the best known for these technologies. What would you say makes you unique from the competition, both in the way you approach the technology and the set of tools you have, maybe the price points in your offerings? Why FileCatalyst, when someone starts to say, "I'm going to explore this thing; this sounds useful"? Speaker 1 55:51 Well, there are two main competitors you're probably aware of. One of them has shifted more to the cloud side of things and is offering more of a SaaS product. The other one, we'll call them IBM; well, they are IBM now, so that's one reason not to work with them. But leading up to that: the technologies are very similar in terms of the transport. Where we differentiate, we feel, is the feature set. Not to say they have a bad feature set or ours is better; it depends on the scenario. What we've chosen to focus on is easy-to-use, tight applications for our off-the-shelf products, and then partnering with several industry experts in different areas like workflow and media asset management. We're focusing on making the technology able to integrate with the masses, whereas they've focused on building their own versions of workflow management and orchestration and that sort of thing. So that's really important. Yeah. I guess it's almost a closed ecosystem versus an open ecosystem.
Speaker 0 57:02 You want to specialize in your part and be experts of your domain without having to expand into all of those ancillary areas. Let's make it as easy as possible to hook into us, so you don't have to use, well, we can just say it: Aspera, which is owned by IBM, has a product called Orchestrator, which is kind of a workflow automation system. And maybe you want to use Reach Engine instead; there are plenty of good reasons why a person might want to do that. Why get pulled into an entire ecosystem of solutions when you really just need to find the perfect puzzle piece to plug into what you're doing cleanly, efficiently, and cost-effectively, and let an integrator, say Chesapeake Systems, because this is kind of what we do, put the ties together between the platforms you choose and make it as tight as possible? You can pick the best player for each component of your overall solution and tie them together in a way that frankly, a lot of the time, is more integrated than the so-called vertical stack coming from a more monolithic solution provider, because frankly, they can't be good at everything. Speaker 0 58:11 Right? You can be really, really good at file transfer acceleration technology, for either an individual user or as part of an automated workflow, and then let people have a lot more options in how they take advantage of it. Speaker 1 58:28 Exactly. And to an extent, our off-the-shelf products allow you to do what we call a loose integration with watch folders, as you mentioned, to get started. If that goes well and you like it, you can move on to taking the APIs and integrating more tightly. Then, instead of waiting for a third-party application to pick up your file and move it with no way to monitor that,
now you can fire off a transfer exactly when you want it, monitor it with the API, and get real-time feedback; you know exactly what's going on. Providing all those different levels of options is really what our focus is on. And then, dealing with IBM has kind of spooked a few people. So we find that our niche is, as I mentioned, the integrations, and also just being a company that's easy to deal with. Speaker 1 59:17 Aspera has been around a bit longer than us and has grown a little larger, so because of that, they maybe won't do the things that we will do: adding a feature here and there, dealing directly with our developers, not having to jump through all this red tape. We can serve the large guys but still act like the small guys when you talk to us, and a lot of our customers really appreciate that. That's not just the small post houses; that's all the way up to large networks. Speaker 0 59:49 And I was going to say, the integrators appreciate that too. I think customers want to have a relationship with their key vendors. All of our clients, whether they're small operations or mega corporations, want a bond with their key technology vendors. Yes, they also want a good bond with their integrator or support system, which Chesapeake might be for them, but they also want a relationship with the manufacturer, so to speak, of the core technologies. The fact that you're that nimbler, more sociable, approachable operation is why we've picked you as our go-to, to direct people to and make part of the solutions we craft, because that's more representative, frankly, of the type of partner we've always found huge success with.
It's those folks who are in a niche they really understand; they are deep-level experts, and you're not having to go through five layers of middle management just to get a basic answer, work on a technical issue, or get questions solved that come up during an integration. Speaker 0 00:53 You guys are easily approachable, and I think that's what really sets you apart, as you said. Is there anything we've missed in the conversation? It's been pretty comprehensive at this point. I think people have probably learned a lot. In fact, I know I have, and that's why I love doing this: we learn things doing the show. So it's been fantastic having you. How can people learn more about FileCatalyst? Obviously they can reach out to Chesapeake Systems at any time, but where would you direct someone who wants to learn a little bit more about your suite of solutions specifically? Speaker 1 01:25 I would just say go to filecatalyst.com. Right on the homepage there are three boxes, and we try to make it really easy. Do you need acceleration? It walks through a description of what the problem is with TCP and how we solve it, and then you can move on and see how we bundle that underlying transport with applications. And then, if you're still interested, we can send you a trial. So we kind of hold your hand through that process using our website, or you can just go to the menu bar and go wherever you want. We basically want people to be able to come in and understand what we do right away, see concrete examples of how we do it, and then request a trial if they want to.
So that's the best way: just through our website, filecatalyst.com. Speaker 0 02:12 And again, of course, folks can engage Chesapeake Systems directly as well, either through myself or by sending something to pro [email protected], or via our phone numbers; our listeners tend to know how to get ahold of us. And obviously we can be available to consult, bring people together with Chris and his team, and be part of an integration. If you want to tie this into, say, your media asset management platform, that's all within our skill set, because we don't keep Whetstone busy enough writing to REST APIs; he has to do more and more and more. This has just been fantastically enlightening, Chris. And it seems, for all the reasons we've talked about, we're going to have more and more reasons to be working with you guys as utilization of the cloud increases in this industry and people have bigger and bigger bandwidth available; the need for what you do just becomes all the more singular. So thank you so much, Chris Bailey, Co-Founder and CEO of FileCatalyst. You heard him here, folks, on The Workflow Show. Thanks so much for joining us today, Chris. Thanks for having me. Yeah, thank you.
