Episode Transcript
Speaker 0 00:01 Welcome to The Workflow Show. This is Nick Gold, and this is episode 209, an interview with Gary Watson, the chief technical officer of Nexsan. And I'm joined today by our new cohost, Jason Whetstone. Jason, welcome to our organization. Jason is our new solutions architect in our professional video group. Thank you, Nick. How are you today? I'm great, I'm great. So you've been here for about two months now? About two months. It's been a very exciting time, lots of ramp-up on technologies that I'm very interested in but not necessarily familiar with. Don't be so modest, Jason. We brought him into the organization after knowing him for a number of years. He worked for JPL Productions in Pennsylvania and has lots of hands-on experience with storage, archive systems, and media asset management as an administrator.
Speaker 0 00:56 And he seemed a good fit for the team. We are no longer with Merrill. He was a great host for almost, what, two years now, but Merrill is no longer with our organization, and he has moved along with his family to Nashville. I think it's to pursue a country music career, if I remember correctly. I believe that's what his LinkedIn profile said. So we wish him well. We'll probably see him on the Grand Ole Opry pretty soon. And don't worry, there'll be plenty of mentions of Vogon poetry and, you know, The Hitchhiker's Guide to the Galaxy on this end of things. Yeah, absolutely, we can keep that going with Jason, no problem. I've been a Hitchhiker's Guide fan ever since high school, so yeah, I definitely do the towel thing on May 25th. Yeah, exactly right. I mean, we are geeks and we like our geek references, to be sure.
Speaker 0 01:47 So speaking of geeks, Gary, I hope that's not offensive to you at all. Well, hey guys, this is Gary Watson from the Nexsan side. I'm actually sitting here teaching myself the banjo, because I'm going to also follow in the footsteps of going to Nashville. I didn't know that was an option for computer guys; now that I know it's an option, I'm right there. We encourage it, and maybe you can play a little during the episode today, Gary. We encourage that. So Gary, we met probably the first time, heck, almost two years ago, and maybe I'll give a little backstory. We had known about Nexsan; Nexsan is one of our primary and leading storage vendors of hard drive and disk-based storage solutions. They play into a lot of the storage area networks that we do.
Speaker 0 02:31 They have a line of network attached storage devices, or NASes, a.k.a. file servers, and they've been a fantastic vendor for us for a number of years. And despite having known of you guys for a great number of years, our first dealings were during the flooding in, where was that, Taiwan or Thailand? Thailand, sorry, the Thailand flooding that disrupted global hard drive supplies for what seemed like almost a year. And somehow you guys came to our attention as the ones who could still get hard drives and weren't having a problem with supply. You remember that, Gary? Oh, sure, I remember it. We had all kinds of problems; we just tried to make sure our customers never knew about it. See, that's what we love about you guys. You know, just obscure what's really going on in the background and keep us happy.
Speaker 0 03:23 But frankly that's generally a good thing. So we began working with you guys and we loved the products and again, we were mostly using, you know, your E-Series fiber channel raids that they were, they were going into a lot of the sands that we were doing and it's just been this marriage made in heaven ever since. And we've, we've done a lot with you guys and our clients that we've brought you guys into are generally quite ecstatic with their neck. SanDisk, I think our service department is quite, it's quite ecstatic with it as well. It's, um, it's a pretty neat, uh, series of, of storage that's very dense. There's a lot of, um, a lot of drives per square inch I would say. Oh yeah, no, they're fantastic. Our guys have loved them and, and they've just worked wonderfully. So Gary, maybe you can tell us versatile, just a little bit about yourself and your Genesis at Nick San, you, if I remember correctly, are one of the founders and that's right. Yeah. So how did that all begin? I mean, was this just a bunch of you engineers sitting around like, let's make a disc storage company? Actually the, the, the details are much more assorted than that. Let me tell you, let me tell you the story. So, uh, I had my,
Speaker 1 04:27 In the eighties, I designed tape and disk controllers for minicomputers. And then towards the end of that run, I designed what today would be called a RAID 0 controller. We didn't have that name, but that's what it was; it basically striped a bunch of drives together. And that was my last project at that company. Then another company wanted to get into the enclosure business, because they were making RAID enclosures for people like Hewlett-Packard and DEC, and they thought, well, hey, let's make our own brand of RAID system. So they hired me to come in and be the RAID product manager. I worked for that company for a long time, and the politics started getting a little bit thick for me there. So I asked to transfer to their operation in the United Kingdom. They had a factory in Nottingham, England, and I was single; that sounded like fun. So they sent me over there to start a European engineering team, and that worked really well for a couple of years. And then, in a sort of corporate shakeup, they fired my boss, the managing director of the UK facility. He called me the next day and said, do you want to start a new company? And that sounded really good to me. I mean, how hard could that possibly be?
Speaker 0 05:39 Yeah,
Speaker 1 05:40 Again, I was single, you know, and I had money, and so he and I started the company. My co-founder, his name is Martin Boddy, was with us as our first CEO for the first six or seven years or so; he handled the money side and the sales side, and I handled the design. So in the first couple of years, I designed all the boards and worked with a contractor to design the chassis. We ultimately hired that contractor as an employee. And then we started hiring actual engineers with more talent than me to do the boards and the software and stuff. That was in 1999, and then in 2000 we acquired a small company in the UK that had a RAID controller stack for ATA disk drives, which you'd call desktop drives.
Speaker 1 06:26 And we expanded that technology to do an eight-drive box. So it was an eight-drive rack-mounted system for ATA, like a data center style ATA disk box. No one had ever done that. Then we made a 14-drive version, and then when things really took off is when we made a 42-drive box, which was called the ATAbeast. The ATA drives in those days had six times the density and about ten times cheaper cost per terabyte than typical enterprise drives did. So the trick was trying to make sure you got enough reliability out of these drives so that they would fit into a data center. That's what our claim to fame was; we came up with a lot of techniques to do that. Then we updated the product and came out with the SATABeast, which a lot of people are very familiar with: 42 drives with SATA drives. And then the product you were talking about is the next generation of that, the E-Series, where we had three drawers of 20 drives each to give you 60 drives in one shelf. We just updated that family to what we call the E-Series V, so it's a midlife kicker with faster controllers and stuff. Still 60 drives, and we just released a six terabyte drive, so now you've got 360 terabytes in a 4U shelf, which is just nuts. It's just crazy. Yeah.
Speaker 0 07:40 So yes, we have a lot of experience with the E-Series; we'll get more into that. But was <inaudible> really always bent on high density from the get-go? Was that one of the fundamental principles, along with using the less expensive, highly dense drives that were available at the time? I mean, what was the philosophy of the company's storage? Well, really we wanted
Speaker 1 08:05 To, to make storage so inexpensive that it could be used to replace tape in most cases. So, um, if you think about the types of applications we routinely talk about today, I'm using it for video, using it for closed circuit television, recording using it for backup targets. That stuff was unheard of in 1999 and early two thousands, because disk systems are just too expensive. So by being disruptive with the density and the price, we could cut the power consumption. So it was, you know, within striking distance of a tape library of the same capacity, we could cut the Rackspace requirement. So it was a reasonable amount of space and we could make it cheap enough. So maybe it wasn't as cheap as tape, but it was in, it was cheap enough that people said, okay, this is cheap enough to be used for tape replacement. And so over time, as we grain gain more credibility in the data center, we also started attacking the performance side of the market. So that was a different skill set. And we started doing that maybe five years ago by starting to do work with 10,000 and 15,000 RPM drives and solid state drives and things like that.
Speaker 0 09:14 And the company, what was it, about a year and a half ago, underwent a corporate shift of sorts. It was acquired by Imation. We were acquired,
Speaker 1 09:26 Right. I mentioned, I mentioned is a, uh, about a billion dollar company. You may know them as the owners of the TDK brand. Uh, also they sell a lot of media under the automation brand. They private label, uh, tape media for a number of large companies. So, uh, you know, they make hundreds of millions of dollars worth of like LTO cartridges a year. So, um, they didn't really have much of a storage business, but they wanted to be in, in disk storage. Uh, they did a couple of small projects but nothing really serious. And so they wanted, uh, to buy a turnkey storage company and, uh, we were available to be acquired. Uh, we were doing pretty well, but we, you know, this gave us an opportunity to have a much larger parent corporation to bring in a bunch of capital to help us expand.
Speaker 1 10:11 The other alternative was doing an IPO, and that market is too tumultuous, to say the least. So we got with them, and they made us a good offer. It's been a very gentle transition. They kept everybody for the first year, and you know, we've had some management changes, but basically it's been a thing where they've added to us rather than subtracted. They've added facilities, they've added people. So it's been a very, very helpful transition. And as I like to tell people, we're all the same people doing the same stuff; we just drive nicer
Speaker 0 10:45 Cars now. Hey, I'm all about it. And you know, the biggest notice, the biggest change that we noticed was frankly, the color of the accent on the front of the units went from being kind of a yellowish gold to red. I mean, that was really the biggest change we saw from an integrator perspective. And I mean, essentially everybody that we interacted with was still there. And the quality, you know, new products have come out that are even better. And you know, so I mean it was a, you know, it sounds like it was really helpful to you guys on the back side of things, but we, you know, it was just a very consistent experience for us. And I can't always say that about acquisitions amongst vendors that we've worked with before.
Speaker 1 11:27 One of the differences, Nick, is that when, if we were acquired by a big company, let's say for example, EMC, uh, we would be assimilated into the board really fast, right? So, you know, and then within a short period of time, most of our products would probably disappear and Oliver people would dissipate within their organization. In this case, they didn't have a storage company already. So we became the storage company. Right. And, um, it's actually more the other way around. We've poaching their staff. So a lot of our added personnel came from my nation and they have a lot of really great people who were off selling tape cartridges and we taught them how to, how to sell raid systems. And they're doing way better doing that because it's more interesting and more fun. And, um, I mentioned also had a number of worldwide facilities that they gave us access to. And so you're, one of the things you're gonna see a lot more of is our support. 24 by seven worldwide is getting much better. And we have this thing where we want to make sure when you call the phone, you're always talking to an uh, an automation next, an employee as opposed to a third party call taking company.
Speaker 0 12:32 That's fantastic. And frankly, your support is one of the reasons why both our techs and our clients love you guys. To imagine you taking something that was already recognized as being extremely good and saying, how can we make it better? We love to hear that, and that's fantastic.
Speaker 1 12:52 You know, there was an excellent slide that one of our managers showed on one of our presentations recently where they basically said, okay, what are people's perceived quality of laptops? And Apple was right at the top and then everyone else sort of worked their way down to the bottom. Like Dell was near the bottom. And when they actually looked at the percentage of time, the systems fail, they're all pretty much the same. All laptops pretty much failed at the same rate. The difference was Apple support is so easy to use and so friendly. Everyone has bad experiences, but generally speaking, anything you ever do, you just go to the Apple store and they take care of it is in sharp contrast to the support you get from a number of other laptop vendors and that gives people the perception of a much higher quality product. Well
Speaker 0 13:36 That's right. And you talk about these things tend to happen at the same rate and support is very influential. But one thing I can speak to is your guys' products don't fail at the same rate as some of the other vendors we've worked with over the years. And I mean, yes, we know that when we're selling mass amounts of hard drives with complex raid controllers behind them, you know, things can go wrong, especially because we sell so many across so many clients, but you guys put more into them and you know, we can talk about the series now. You do a lot of pretesting in house to ensure that failure rates are going to be as low as possible. Can you go into some of kind of that, that TLC you apply to each and every hard drive and kind of what your methodology is to make sure people get, you know, the best possible experience out of those raids. So frankly they don't have to call you as often just for even standard types of failures.
Speaker 1 14:28 Now let me back you up the clock a little bit. You know, one of the things, um, that I like to remind people is this, this overnight success we've had with the series, we, you know, it's 15 years of overnight success. We, it's, it represents an iterative generation after generation of developing high density arrays. We shipped our first high density array in 2003 so we've been doing that for a long and we're getting very good at making those systems reliable. And it's not that there's one big secret thing. It's a whole bunch of little things that you learn along the way about you about designing the chassis, the right way about using the right kind of disk drives, developing the right kind of techniques for telling whether or not a hard drive will be reliable in a system. As an example, we developed something called stress test and stress test is a program that lives on the firmware of the rate controller and what it does is it automatically runs a test program against all the drives in the box.
Speaker 1 15:24 This is used in the engineering qualification and manufacturing and you run this test for a couple of days for for a typical test and after a couple of days you look at the results and normally none of the drivers will have actually had an error in a couple of days. Yet maybe one or two of the drives in that time period will exhibit signs of weakness and how we measure that is by measuring the amount of time that it takes. A drive to do commands drives to take a little bit longer. To finish a command, given that you give the same set of commands to the same, all the same drives, the slower drives are having trouble positioning on the head either because they're mechanically out of alignment or one of the heads is weak. And we have found that that's a really, really good way to tell if a drive is going to be reliable in a high density, high workload environment.
Speaker 1 16:14 And the, we take those drives out even though they don't have any errors or anything, we take them out and return them to the manufacturer and we call that our drive screening process. And that's something we, we, we do. Um, it's, it's one of the many ways that we do reliability. Another thing that's happening in the industry is if you buy a storage array from most companies these days, uh, it shows up on your dock as two cardboard boxes, one with a storage shelf in it. And the other with the hard drives, those two, those two boxes have never seen each other until that moment. And in our case, it's different when you actually buy the system. It's tested with the drives that are going in your system. This is going to guarantee that it's the test results and your and the chassis will match as closely as possible.
Speaker 1 16:57 I'll give you one example why that might matter. So you take a high density box, it's got like an RK 60 drives. Let's suppose you have a drive that vibrates more than it should. It works, but it vibrates more than it should at the disc drive manufacturing house. They'll never notice that because it's just sitting on a big Lucite block by itself being tested and as far as they're concerned, that drive is fine. They go ahead and ship it. The problem is when you stick it into a high density box, that extra vibration is bothering all the drives into vicinity and it will make all the other drives either go slower or possibly even fail. So by testing them together, you get to catch those kinds of scenarios or you catch drives that are unusually sensitive to vibration and the vibration. It's not a question of the amount of vibration.
Speaker 1 17:40 When you put a bunch of drives in a box together, the vibration frequency is higher than if they're on their own on a desk. And so the higher frequency vibration, this is a way of determining whether the drives have an issue with it. So not getting into the technical weeds here, but basically it's one of the things we do to screen out bad drives. And we also have very strict controls on drive vendors and revisions. We don't let the vendors change revisions of anything. If they change, if they want to change the revision of a part or a sub assembly or the firmware, it has to go back through qualification again. And this is not just a perfunctory thing. Very frequently they'll make a change to a drive, which actually causes it to fail qualifications. So that's why it's very important to be careful where you buy your drives and, and try to make sure that you're careful about the firmware revisions of the drives because it can make a huge difference to whether they're reliable or not.
Speaker 1 18:36 So that's some of the stuff we do. And we also learned some tricks, like we Mount the drives in back to back counter-rotating pairs to help cancel out the vibration. Uh, we, we were very careful about cooling and on the East 60 family we started doing something called active cooling where the fans are speed controlled. So, um, normally in a normal computer room is normal temperature, 70 degrees or so Fahrenheit. And if everything is working normally and you're not doing anything unusual to the system, the fans will be running about half speed. So running fans, half speed accomplishes four basic things. Number one, it uses less power. Number two, it's quieter. Number three, it reduces the amount of vibration the fans cause. And number four, it makes the fans last about 10 times longer. So those are sort of, there's about a hundred things like that I could list for you, but that's a quick little run through some of the things we have to think about when we make boxes like this.
Speaker 1 19:34 And then of course there's a couple of hundred thousand lines of firmware that sit on top of that, that take care of things like scanning the drives for errors and dealing with error situations when they occur. Wow. Yeah. I mean when people ask us what makes one raid different than another, I'm just going to have them listen to this episode of the podcast and we basically tell us a little light version of that story when, when we explain this to our customers, but we see it be so amazingly stringent. Just testing on, on, on your end and also then the relationship that you have with the drive vendors is also a really big piece of that puzzle as well. Well, and in the early days when we first started doing the enterprise ATA stuff, there was no such thing as enterprise ATA drives. So we worked with one manufacturer, was very willing to work with us to try to improve their product.
Speaker 1 20:20 So it would work well in that environment because they thought that was going to be strategic going forward. And it turned out they were right and we were early, very early on that, um, technology curve. And so we worked together and they made changes, we made changes, we very much cooperated and they've been our, mostly our, our dry vendor of choice, even though they've been acquired a couple of times. And then we occasionally qualify other drives and light. And lately we spend a lot of time trying to qualify solid state drives because that's the wild wild West. Again, the solid state drives are changing, you know, every few months they make changes to the drives and the technologies and the vendors pop up and pop out. So it really is keeping us busy trying to figure out what drives are good and what drives are not good in the SSD world.
Speaker 0 21:03 Well, let's, let's talk about drive technology for a moment. And let's, let's start with hard drives because we've obviously seen a lot of capacity expansion and you alluded earlier to six terabyte drives coming online. Now we've obviously seen the transition, uh, between ATA or I, you know, parallel ATA and IDE drives to Satta. And now you guys in the, the series V's and VTS are shipping near line. SAS is kind of the standard. And again, we're seeing, you know, things coming down the pike like, you know, helium drives and things like that. Can you, can you give us, you know, you're highly technical when it comes to this stuff and you've seen it for many years now. Where are we in the progression of hard drives? What should we expect to see? What is kind of the difference between especially Satta and nearline SaaS and what does that give to the customer? Um, you know, school less Gary school us.
Speaker 1 22:00 Okay. So here's the thing, there's a lot of questions there. Let's talk about the Santa versus nearline SAS thing. So we, we have been kind of sloppy around here in using a nomenclature we'd talk about Satta drives and by that we meant 7,200 RPM drives and we'd say SAS drives. And by that we meant either 10 or 15 K drives. We've supported both of those for a long time, but, um, a lot of our customers have pressured us to try to, to switch from Satta to nearline. SAS. The actual drive mechanism is the same, but the electronics are a little bit better. I mean it's, it's very technical differences, but the electronics on a nearline SAS drive a little bit more sophisticated, get that out from his class. That is what they're typically referred to that the enterprise class drives with the, but the, the actual physical drive is the same between those Satta drives that we were using in the near line.
Speaker 1 22:54 SAS drives we're using. The only difference is a different, uh, you know, IO card on the drive. So the, the, the, the actual thing that's you have to worry about is that the HDA mechanism, the heads and the disks and stuff. Um, and the read write channel, all that stuff is the same between the two types of drives. I mean, literally the same components. They, they, the difference is whether it has a Satta chip or a sass chip on the board sat. It gives you a few advantages and error correction and, um, being better on the bus when you are, when you have a large number of drives contending for access to the media, the sass media I'm talking about the, the SAS drives are better behaved so we can get a little bit more throughput. Uh, but the, the difference to the user is pretty negligible.
Speaker 1 23:38 But the main thing that happened was the price difference between the Santa and the nearline. SAS got down to just peanuts. So we're able to tell people, Hey, we'll switch to nearline SAS with a negligible cost impact. And because it's really more of a sales thing, it helps with sales because some sales tenders require the nearline SaaS as part of the spec. So, um, we figured there's no reason to fight that battle anymore. So even though we know that the drives are just, there's no reliability difference that you're ever going to notice between the Satta and the nearline SAS, but you know, they're the same price. You might as well go with the one everybody wants. Right? So, uh, we switched nearline SAS and then in a future generations of the product we might drop support for Sada. I mean officially we don't have any support for Satta in the, in the new box anyway, although the electronics would still work if you tried it.
Speaker 1 24:28 But in the, in the distant future we may be using Silicon that no longer supports the old fashioned Satta interface. So we had to make the transition at some point and it just seemed like this was a good time to do it. Now in terms of the helium, the new helium drives six terabyte drives, this is the tip of the iceberg. They're talking about some capacity points. I'm not allowed to repeat sort of the roadmap, but this is nothing. They're kind of, they've got some really super ambitious plans for hard drives. Gary, can you tell us like what makes a drive a helium drive? I think, I think some people could probably use that insight. So at some point the drive vendors decided that using helium instead of air inside of a hard drive would give them better aerodynamic properties. They could, they could make the head fly lower on the media.
Speaker 1 25:14 And uh, so they, the helium drive actually has a sealed chamber with helium inside and it no longer has a little filter, which, you know, allows outside air to go in and out of it. The drive, it's now a sealed container. Right. And that's just one, one vendor's approach to the problem. And it may not be the whole solution. They also have done a lot of other things in the drives to get the higher density, but there's a whole long series of things they've got planned to increase density way beyond the six terabyte number. And I get asked the question a lot, well don't you think the whole world is going to go solid state? And I keep seeing the difference in cost per terabyte of these new spinning drive systems versus SSD and they're still miles apart. Um, 2025 times difference. So I think for, for capacity optimized applications, spinning drives are going to be with us for quite some time, especially with the new roadmap that's underway.
Speaker 1 26:08 And especially considering the fact that these drives transfer, you know, a couple hundred megabytes a second per drive. So you know, the, the, the actual sequential transfer rate of spinning 7,200 RPM drives is, is very, very good. And, um, switching to solid state doesn't really help you with that much. So if it is a sequential workload, we're, we're seeing the spinning hard drives proceeding very well going into the future. And furthermore, the three and a half inch form factor appears to be the winner for quite some time. And just in terms of cost per terabyte or in density per cubic foot. Uh, although we do have also recently a, um, we, we have an enclosure that supports two and a half inch drives. But, um, really for the bulk of our market, especially in the video space, we expect the three and a half inch form factor to be the winner for a long time. Wow.
Speaker 0 26:57 Um, yeah, I know that's a, that's a pretty common misconception with these high end, um, video applications. A lot of people are thinking, you know, well, why, why not just do SSDs? I mean there's so much faster and this is really, uh, you know, this is a great discussion, you know, for the, for those people who are thinking along those lines,
Speaker 1 27:13 You know what though the word faster is an interesting word. We spend a lot of time around here trying to decide how to define what faster means. But I'll give you a couple of interesting factoids about solid state. Most solid state drives are pretty good at reading, but you will find if you take several different types of solid state drives and try to write to them, especially try to write to them for any length of time. So let, let's say like it's a 100 gigabyte drive. You write a hundred gigabytes of data to it in a row, then all of a sudden the performance is going to change. Now are really expensive. High tech, high end drive, it won't change very much, maybe 30 40% but an inexpensive high density drive, maybe the kind of thing you get from a retail establishment or on the internet, it may drop in performance 99% because it's having to do so much garbage collection in the background and it really can't accept new data very fast coming in when it's doing that.
Speaker 1 28:08 And that performance stays really low until such time as you leave it alone long enough for it to recover and that that can be quite some time. So you know like 30 minutes or something. So, uh, in a system that's going to be used heavily, you have to be very careful about what type of solid state you have to make sure that it's steady state, right? Performance is adequate for what you're trying to do. And remember that video is all about rights. So you, you got data coming in, you're going to load in, you know, 45 minutes of uncompressed HD video. You're writing and writing and writing and writing and writing it in a nice long string and you may very well run up to that limit of how much the drive can take without having to do garbage collection. And then all of a sudden the performance drops and you start dropping frames, you know, an hour into the project project while you've gone out to go get lunch. Right.
Speaker 0 28:57 <inaudible> that's, that's problematic. And you know, the other thing is just the cost per terabyte. I mean SSDs are still considerably more expensive and after what you're describing of them, not necessarily in any way shape or form being ideal for video applications, which again are very, uh, you know, sequential in nature can, we're writing out long streams of video files and reading off large chunks of video at once.
Speaker 1 29:20 And Nick, you know, the thing is Nick, also a writing is where his out drives. So a lot of the drive technologies, let me just tell you what the drive technologies are. You have the MLC multilevel sell flash drives. They, they typically have a durability of a few thousand erased cycles. So you take the capacity of the drive times of, you know, a few thousand depending on what the spec is, that's how much data you can write to that drive before it burns up. And by burns up, I mean it'll just brick, it'll stop working. So rights actually wear them out. It's like each one of these bits in the system is effectively like a little battery that you're charging and discharging and charging and discharging, just like a rechargeable battery in anything else. It can only do so many recharges before it stops working.
Speaker 1 30:02 Then they had something called enterprise multi-level cell, and the newest generation of that is actually very good. They've got those up into the hundred-thousand range of erase cycles, and that's usually enough for almost anything. And then the older technology, single-level cell, SLC, flash drives, which is what they used to call enterprise flash, is actually designed so that you can just pound on it day and night for five years without wearing it out. The trade-off is price and density. The SLC devices are very expensive, so expensive that they're essentially being phased out, because the new generation of enterprise MLC devices is good enough for most people, and there's no reason to spend tons of extra money for SLC technology except for the very most demanding applications. And then lastly, and this is just applicable to using it as a cache device.
Speaker 1 30:59 We have a product that uses a lot of solid-state flash, but it uses RAM for the most demanding part of the cache, because RAM does not wear out and RAM is way faster. The downside, of course, is RAM is very low density and very expensive. So we use RAM for the most demanding caching applications for solid state, then we use SLC or the best eMLC devices for the next tier down. And we're not currently doing anything with MLC in our type of storage systems. However, for example, I'm talking to you through an Apple MacBook Pro, which has an MLC solid-state drive inside, which I love, and it works perfectly for those applications. So you are seeing the spinning hard drives disappear from laptop-type situations. And of course nobody's MP3 player uses a hard drive anymore. Right, right. That's all flash. So flash is taking over bit by bit, so to speak. But it's not completely there yet. Yeah.
Speaker 0 32:00 Got it. And Jason, I think you were raising a question. I have another question. Let's go back to the six terabyte drives for a second, Gary. One of the concerns we always have with larger drives, especially when the next version comes out, you know, two to three to four to now six, is the rebuild times on a RAID. In the past, we've cautioned people against going to that next big step, because when you look at a potential failure, potentially rebuilding a RAID set, it can mean some pretty big rebuild times. Can you talk a little bit about how Nexsan addresses that, with rebuild times creeping up now?
Speaker 1 32:40 First, let's talk about the things that affect rebuild time. Number one, when you go, let's say, from four terabyte to six terabyte drives, that does not mean 50% more rebuild time. It actually means closer to 25% more rebuild time, because about half of the density increase is probably in the linear direction, which means the drives are transferring data about 25% faster, so they can rebuild faster. But the increase in the number of tracks does not make them go faster. So really, half of the difference is a requirement for additional bandwidth in your system. What we are seeing at companies like Nexsan is we generally release new controller technology at more or less the same time that new hard drives come out, so that the controller is faster and can rebuild larger RAID sets faster than before.
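Gary's arithmetic can be sketched like this, under the assumption he states: areal-density growth splits roughly evenly between linear density (which speeds up transfers) and track count (which doesn't), so transfer rate, and therefore rebuild time, scales with the square root of the capacity ratio:

```python
import math

# Sketch of the rebuild-time scaling argument. If half the density
# gain is linear, transfer rate grows by sqrt(capacity ratio), and
# rebuild time (capacity / transfer rate) grows by the same factor.
def rebuild_time_ratio(old_tb, new_tb):
    """Relative rebuild time of the new drive vs. the old one."""
    capacity_ratio = new_tb / old_tb
    transfer_speedup = math.sqrt(capacity_ratio)
    return capacity_ratio / transfer_speedup  # == sqrt(capacity_ratio)
```

For 4 TB to 6 TB, this gives a factor of about 1.22, roughly 22% more rebuild time, which matches Gary's "closer to 25% than 50%" point.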
Speaker 1 33:37 So the rebuild times are creeping up, but they're not going up as fast as the drive capacities are going up, because we do things to mitigate it, like making the controllers faster. But there are technologies coming out that we're looking at, different variations where you thinly spread the RAID stripes over multiple drives and then have a bunch of parallel processors work together on the rebuild. That kind of thing will be the next generation that cuts the rebuild times down. We're starting to get to a pain threshold for people. It's sort of 24-hour timescales for rebuilds now, and that's getting up to a magic number that people stress out over. So probably before we go above six, we're going to have to do some other mitigation to reduce the rebuild even further.
Speaker 0 34:26 How are prices shifting as these larger capacity drives come out? Are we seeing large swings for those larger capacity drives, or are they going to come down relatively quickly, in your opinion?
Speaker 1 34:39 There is certainly a premium to start with. I did a calculation the other day for somebody where I looked at the cost per terabyte of our four terabyte systems versus the pricing we came out with for the six terabyte systems, and the four terabyte systems were significantly cheaper per terabyte right now than the brand new six terabyte systems. Now, in some number of months those will be equal, and then they'll cross over, where the six terabyte becomes cheaper per terabyte. But right now there's a premium for the six terabytes.
Speaker 0 35:09 Although, you know, that may be mitigated by the physical space someone has access to, or the cost of power. Yeah, exactly. Those things may offset the increases in the drive costs themselves. Sure.
Speaker 1 35:25 Also, there is a certain amount of benefit to be had from just having one box, like a 360 terabyte box. You can say, okay, this is my shelf. I don't have to worry about managing multiple shelves, there's just a shelf. We can accommodate more people's storage requirements with a capacity like that than if we just had a 240 terabyte shelf. And with the 360 terabytes, now you put two expansions on there and you've got over a petabyte in 12U of rack space. And I don't know about you.
Speaker 0 36:01 Insane. Yeah, that's pretty neat. Well, I'm getting so old, I remember when a petabyte was a lot of data. Yeah. And it was, you know, a room full of machines. Right. And completely unattainable at one point. Right. When I started doing hard drives, my hard drives were five megabytes and they were 14 inches in diameter. Yeah, absolutely. Yep. He's going to start talking about, what was it, the little magnets on the wires? Well, actually, core memory, that's it. I did core memory. Core memory was cool because you could actually load a test program on one core card, take it out of your computer, walk it over to a different computer, plug it in, and run that test on that computer and fix it.
Speaker 0 36:48 It is amazing how quickly it's changing. And I mean, what you were just hinting at with what we might see with hard drive capacities over the next few years, it's still at that breakneck speed. And I think it's funny, because every time you get hints from the industry that we might be hitting a wall with certain capacities, it's like, oh, magically some new breakthrough has occurred and now we've got a whole bunch of extra headroom ahead of us, with more breakthroughs coming for several years.
Speaker 1 37:15 I mean, on the host interface side right now, I noticed that the new Mac Pro doesn't have a 10 gig option built in. That's sort of a shame. But of course ATTO popped up immediately with a 10 gig Thunderbolt 2 adapter, and they also did a Fibre Channel interface for it as well. So if you don't mind an outboard box, you can get your 10 gig port. We have 10 gig available on all of our products, and then 40 gig is coming pretty soon on the Ethernet side, and we're now shipping some 16 gig Fibre Channel. So Fibre Channel is getting faster. On the disk side, we're currently using SAS-interface SSDs. We had tried some SATA ones in the past, but they were pretty horrible, so we never really deployed those.
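For a rough sense of what those link speeds mean in file-transfer terms, nominal line rates can be converted to approximate usable bandwidth. The 64b/66b coding efficiency used here applies to 10/40 gig Ethernet; real-world throughput is lower still once protocol overhead is counted, so treat these as upper bounds:

```python
# Back-of-the-envelope usable bandwidth for the host links mentioned.
# Assumes 64b/66b line coding (as used by 10/40 GbE) and ignores
# protocol overhead, so these are optimistic ceilings.
def usable_gb_per_sec(line_rate_gbit, coding_efficiency=64 / 66):
    """Convert a nominal link rate (Gbit/s) to rough usable GB/s."""
    return line_rate_gbit * coding_efficiency / 8

ten_gbe = usable_gb_per_sec(10)    # roughly 1.2 GB/s
forty_gbe = usable_gb_per_sec(40)  # roughly 4.8 GB/s
```

That ceiling of a bit over a gigabyte a second per 10 gig port is why 10 gig Ethernet is "fast enough for most video workflows" but the heaviest shops start looking at 40 gig or Fibre Channel.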
Speaker 1 38:06 But th we're using SAS for that but there's a new generation of that coming. Um, the new generation of SSDs is using a two and a half inch form factor technology called nonvolatile memory express in VM E and in VM he uses a couple of lanes of PCI express gen three and it gives you a couple of gigabytes, a second of bandwidth each slot. But more importantly the latency is much less because now instead of doing the unnatural act of taking a memory device, converting it to a disc drive protocol like sat or SAS, and having a maximum number of IOPS that you can do to drive to be maybe a couple hundred thousand or something like that. Now with the NBME, uh, technology, uh, people have announced 1 million IOPS drives, and this is for a drive that's small enough. You could put three in your shirt
Speaker 0 38:57 Pocket. I mean, that's insane. That's insane. It's really something
Speaker 1 39:00 Now. And we actually have one in our possession that's doing 400, 400,000, so, and so it's a technology for late next year, but, um, people are, you'll see more announcements. Uh, Dell announced a server that had a couple of slots for it. Um, and then other people will be making announcements about NBME type devices. So there's a next generation of that coming. There'll be pretty expensive to start with and they're really for high IOPS requirements. By that I mean high random IOPS, uh, probably not the most cost effective thing for sequential transfer type applications, massive
Speaker 0 39:35 Database applications, things like that. That starts to become really useful. Yeah, yeah. Indexing, maybe transcoding awesomely. Yep. Absolutely. Now we've talked about the performance of the drive mechanisms. We've been talking now about the performance of the kind of internal, you know, interconnect technologies that the drives connect inside of the storage units to the raid controllers. You, we talked about the performance of the raid controllers in a moment ago. We were talking about increases in kind of the host side. Connectivity with increases from one gigabit to 10 gigabit and soon 40 gigabit and on the fiber channel side we've obviously seen four to eight and you alluded to 16 gig, you know, outfitted raid controllers coming soon. This is a lot of areas you have to balance when it comes to engineering a system that that is maximizing the possible performance. I mean I'm really curious to get your sense of things, you know, inside today's units and even tomorrow's units out of that whole string of potential performance, you know, bottlenecks across the system as a whole are some areas lagging behind others. What really represents the true performance bottleneck primarily across all of those various interconnects and points of limit, so to speak?
Speaker 1 41:00 Well, you'll often find that it's the RAID engine itself that's the bottleneck. But really what we're trying to do is balance all of these things together. So when we're proposing a product for the future, we sit down and try to balance the front-end performance, the back-end performance, and the engine performance by selecting the least expensive piece of technology that will balance the other pieces. I mean, I could stick a processor on there that was a hundred times more powerful than it needed to be, but that's wasting everybody's money if it's bottlenecked on the front end or back end. So the engineers sit down and say, okay, we need to do so many IOs a second and so many gigabytes a second. Let's make sure the memory can do it, let's make sure the PCI Express lanes can do it, let's make sure the host front end and back end can do it. And if you do your job right, all those things will be more or less matched. Never exactly, because there are variations in customer behavior, but you try to make sure that they more or less match each other. And then that represents a product point that you release, when you can do that.
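The balancing exercise Gary describes boils down to a simple fact: deliverable throughput is capped by the slowest stage, so over-building any one stage past the others just wastes money. A toy version, with made-up numbers rather than real product specs:

```python
# Toy model of a storage pipeline: whole-system throughput is the
# minimum across its stages. Numbers are illustrative, not specs.
def bottleneck(stages):
    """Return (name, MB/s) of the slowest stage in the pipeline."""
    name = min(stages, key=stages.get)
    return name, stages[name]

pipeline = {
    "host front end": 4000,
    "RAID engine": 2500,
    "PCIe lanes": 8000,
    "disk back end": 3000,
}
```

Here the RAID engine at 2500 MB/s caps the system: upgrading the already-fast PCIe lanes would buy nothing, which is exactly the "hundred times more powerful than it needed to be" waste Gary warns about.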
Speaker 0 42:00 Got it. And that makes perfect sense from an engineering end, balancing the engineering with the price-consciousness perspective. And again, another thing we've loved about the units that you guys sell: they are engineered extraordinarily well, kind of in synchrony internally, but you guys are also very price competitive with anyone else out there. I mean, frankly, it's what we consider enterprise-class storage, no question, at the price of some other manufacturers out there that I wouldn't necessarily apply that moniker to.
Speaker 1 42:35 Well certainly appreciate that. There's a lot of good people doing a lot of hard work. And I think one of the biggest advantages we have is we, we tend to make things in house. So we design our own raid controller, we write the firmware, you know, we designed the chassies, we build them, test them, ship them, right? So we don't bash the sheet metal here. We subcontract that. We, we don't make our own circuit boards. We subcontract that. But it's our design and we design the chips and the software on it and so forth. So that allows us a great deal of control over the product and it makes the customer closer to the engineer. Uh, there are a lot of companies out there that OEM everything that the controllers from one company and the boxes from another company and the software is three different companies bolted together and so forth.
Speaker 1 43:20 And, and we've sort of played in that kind of a world ourselves and any time we ever have, we've regretted it because it's very difficult to support our customer. Well when you've got sort of a Frankenstein system like that. Cool. Um, Gary, um, talk a little bit about your NST series, your file servers. Just tell us a little bit about that solution. We've been talking about the series. Each series is, is Easter is V now is, it's a sand storage, so it's fiber channel, ice Guzzy or SAS block storage. In other words, there's no file system on it. If you want to file system, you need to have something like StorNext or you need to attach it directly to a server and create a file system. We have a product called NST and this NST 5,000 NST, 6,000 some more models about to be released in a few months.
Speaker 1 44:05 And NST is a unified hybrid storage solution. So unified meaning that it's, it does block and file, so it does sifts or SMB, that's for your windows people. It does NFS for your Unix people and it does fiber channel. I scuzzy for the block storage sand type people, uh, as well as FTP. So it does all those protocols in one product and it's in that product. All those protocols are sharing one set of hardware and one set of storage. So you have a bunch of disks on the backend that are virtualized into a big resource that you carve out as you need for either the sand or the or the file storage. And the, the hybrid part of it is we have an option where you can stick in solid state storage and we will automatically cash your data into two different tiers of solid state storage.
Speaker 1 44:56 We have the Ram tier and the F and the flash tier. Typically the Ram tier is used for right cache data and the flash tier is used for read cash data. And this solution can scale from a few tens of terabytes to about five petabytes under one unified management console. And there's a ton of advanced features like snapshot replication, thin provisioning, compression, and other things like that that are there more to come. It can do synchronous replication in, in certain situations. Uh, and there's quite a variety of price points and performance points that are available. And since 10 gig ethernet is now fast enough for most video workflows, uh, you're starting to see more adoption of this type of a product. And the big video houses like ABC or NBC just did the Olympics in Russia. They had an NST system they talked about in some art press article we did, had nothing to do with it. They, they'd put it in the article, they talked about how they used it and they were happy it replaced their previous store. Next solution. And the neat thing about Nass is it is automatically a shared file system. So as long as you can talk to a ness device, you can talk to it. You don't need any client software to make that work.
Speaker 0 46:10 Yeah, we've done less with your MSTs and as you know, we, we often are putting StorNext you know, as the file system on top of the series. But we do certainly deploy Nazism file servers, uh, you know, with ethernet connectivity into some of the smaller shops that, that we catered to. But we've always kind of had the NST in the back of our minds as well. If, if a bigger type outfit was really, you know, really serious about doing an ethernet based, you know, video storage infrastructure, it's kind of probably what we would want to test with them. And it sounds like some folks are doing that very successfully. I mean, can some of those engineering features like snapshots, replication, the cashing tiers potentially cause problems in environments that are mostly oriented around sustained operation or sequential operations instead of random or you know, do you think the NST can truly handle it all so to speak?
Speaker 1 47:09 Well, I think that both the series and the NST support snapshot and replication, the challenge with replication with the video market is a, you video guys generate a lot of data and a lot of people's lands are not ready for that kind of data to be transferred across them. So, um, I mean you, it's, it's nothing to GE to generate terabytes of data for a little video project and transferring five terabytes over. Somebody shared when that may be the telephone system for the company and everything else. Uh, it might make you very unpopular in your organization. On the other hand, there are people who have tons of bandwidth between sites and or, or the sites are very close together and we can run a private optical cable between two buildings for example, and replicate between them then then you start seeing a more sensible use for it.
Speaker 1 47:56 For the video world. My experience video people don't typically use replication. It's not just the mechanics of replicating that much data. It's legal issues. They are, I recently went to Technicolor and, um, their security was more stringent than when I went to the NSA. I mean, they, they do not want data going out of that building and I mean, they are real serious about that. So letting the lending somebodies, you know, latest Disney film travel across the internet to some other location is something that they have to have a whole lot of assurance over before they'd ever let that happen. So I haven't seen a lot of people asking for us to do replication for them in the video space. But the technology is there. If it comes in handy snapshots, um, a work fine on video, there's no issue there. Uh, things like, you see a lot of people talk about data compression and deduplication gotta be a warn you that most video data is not deducible or compressible. And so a lot of vendors who's who the performance numbers, they give you assume a certain compression ratio or the capacity numbers assume a ratio. You got to discount all of that because that's not gonna work for you in the video space.
Speaker 0 49:08 Yeah, absolutely. Now you had mentioned 40 gig ethernet and you know, I envision a possible future storage scenario amongst some of our larger clients and postproduction work groups and only having a handful of 10 gig links between something like an NSTC head node server and the ethernet backbone, if you will, of their, their, their private video ethernet switching environment. You might not be enough. How right? How soon is 40 going to be out there available, you know, quasi inexpensively on, you know, some maybe medium or higher end switches. Is that something we're going to be seeing this year? Is it out now next year? And when are you guys going to be, you know, making it widely available across, especially the NST product range?
Speaker 1 49:57 Well, Nick, I, I really wish I could say that it was going to be this year, but I don't see it. Um, and I'm not even sure it's going to be next year. Uh, everything is so expensive and so, so few people have the infrastructure available right now, but we're watching the situation. It's just a different plugin card for from our perspective, but there's no point going through all the qualifications and putting in a million dollars worth of test equipment, uh, if no one hasn't used for it. And that's one of the downsides in the largest shops. Um, and why continues to be a great solution for a lot of people is if you need a lot of gigabytes a second, that's probably way cheaper than doing something with either net.
Speaker 0 50:34 Yeah. Yeah. We're having that dialogue now with several people that we work with who are, you know, really needing extreme high performance because they're building storage systems that involve both kind of quote unquote post-production activities, but also a lot of transcoding. And a lot of big file transfers in an off of the the file system, which as you know, can just saturate the performance that you give. Whatever machine is doing. Those transfers are, there's those trans codes. So yeah, I mean it's all about finding the right solution for the environment and thinking it through very, very deeply. And one of the things we love about having you guys in our arsenal of, of, you know, core vendors frankly, is you have the solutions that kind of run the gamut when it comes to disc storage. It's great. Um, you know, from an archival perspective, no, we don't have to go into a lot of detail on it, but it's something that's probably worth talking a little about.
Speaker 0 51:26 You know, we do still deal quite a lot with tape and I think as kind of a quasi active archive medium for, you know, especially in postproduction environments or even, you know, maybe just a longer term archival storage to us and a lot of our clients, it still makes a lot of sense. But you guys do have this other line of products called Assurion and it's kind of a disc based archival storage, which you know on one level seems a little, I don't know, counterintuitive to my mind, but maybe you can go into, I know that you guys have designed it in such a way where you're taking some of those things into account and you know some of discs issues with longevity into account when you design the system as a whole. Can you talk a little bit about Assurion? I know we get a lot of questions from people who want to use disc as archive, but you know, you guys have some experience building a system that's dedicated towards that and there's, there's certainly some kind of caveats and extra engineering that goes into that, right?
Speaker 1 52:31 Sure. Assuring is a product we we, we launched about nine years ago and we've continued to upgrade it. It's, I'm maturing at version seven now and eight's coming. It's effectively an archive using hard drives designed for protecting your most valuable digital assets. It goes to a whole bunch of things to make sure that all of your assets are accounted for and are not tampered with and are retained through the end of the retention period. So basically imagine how it, there's a couple different ways it could work, but basically let's take the NASA implementation as an example. So you write a file to what looks like a Nash share. It's not really, but that's what it looks like. So you write a file to it, uh, and as depending on which directory you started into, that will assign different retention rules. So retention rule might say, retain this file for five years, which basically means nobody, not even the administrator, not even an ex, an engineer can delete that file until five years has gone by.
Speaker 1 53:31 It's also protected at a very high level against any kind of malicious computer attack, worms, viruses, that sort of thing. So even if you're the super user of the system, you do not have the ability to delete a file until this retention period has expired. And this protects you against inadvertently deletion or inadvertently modifying a file before you're allowed to do so. And a lot of companies have, you know, some number of years as a retention policy. And then we also have for people who don't really have that strict of a policy, we have it so that you can have a user over rideable or administrator over rideable retention time. And if you were to try to write another version of that file with the same name in the same location, it would save the older one and create a new full file so you, you'd have versions that it would keep.
Speaker 1 54:21 So you have no ability to change or modify the data and that guarantees that it has integrity. And then everything has got fake digital fingerprints on it and there's an auditing process that goes through every 90 days or so and make sure every file is there cause they're serialized and it makes sure every file is readable and matches the original cryptographic hash that we took of the file and the Assurion, there's always at least two separate copies of your file in two different nodes. So if a file turned out to be corrupted or missing, we could repair it from there. One of the surviving nodes, so I'm assuring typically is used for medical records like cat scans and pet scans and stuff. Uh, it's used for financial records. Uh, the FAA uses it to store aircraft maintenance logs. It's used by some legal organizations to store legal documents and we have just started to sell it into the media and entertainment space where people have high value original content they want to guarantee, does not get inadvertently corrupted either due to user errors or you know, any other kind of mechanical or electrical problem.
Speaker 1 55:30 No matter what user mistake is made, no matter what administrator goes crazy, no matter what customer hacks in your network, whatever is designed to protect you against that kind of situation. And the integrity level is so high that we typically don't have people back up the Assurion to tape. It supports it, but that's not normally necessary. The amount of safeguards and protection in the system is sufficient that you no longer need to back this stuff up. And so that brings us to the other part of the assurance story is it's used for storage optimization. You migrate your unchanging data off of your tier one expensive storage to the Assurion for safekeeping. The data is much safer inside of an Assurion than it would be say sitting up on your um, you know, HDS system. Sure, sure. But from a pricing perspective, you've talked a lot about both the, from the redundancy standpoint, having multiple copies, so you're taking up more space
Speaker 0 56:24 For that data as well as some very kind of fancy engineering that goes into all of these policies and the policy driven retention systems and you know, all from an administrative standpoint, you know, so where does it price out versus just raw block storage?
Speaker 1 56:40 Yeah, I don't have the numbers in front of me, but just as a rule of thumb, it's probably three or four times the price of the amount of, for the same amount of eateries, storage because we've got a store, two copies, then there's servers attached to that and software and the price per terabyte. The file size makes a difference. So because there's overhead attached to every file, there's encryption keys, there's metadata records and so forth. The smaller the file, the bigger the overhead, uh, video. It's not an issue because your video files are so huge. But um, yeah, it's, it's, it's certainly significantly more expensive than either E-Series or in S T but it's less expensive than a lot of other systems that people use and it doesn't require a lot of labor from the customer. Once you've set it up, it just looks like a big bucket you throw your files into that are protected.
Speaker 0 57:27 Good. Gary, wait, what are you looking at as far as like a drive lifetime on something like that? Um, is that something you're, you're swapping out on a regular basis or? Uh,
Speaker 1 57:36 Yeah, it has built in migration tools. So the customer will typically, um, install a new generation Assurion node and simply transfer the files from the old one to the new one. Now is it object storage internally? Yes, but we hide that from the customer. So it's got a cast archive system inside. But we, we've always felt that the, the mapping layer between the customer friendly username, our file name and the cast object identifier sometimes called a token or a ticket or whatever. That is where the security protocol needs to be. So there needs to be, so that we keep under the covers. So the mapping between, you know, my file dot doc and the hash code for that file is protected internally to the Assurion to protect, prevent people from playing with that database. If you imagine if that database was outside in an unprotected, ordinary server that I could, I could, I could, I could modify a file or make it look like I modified a file simply by changing the token that was associated with the file name that you were looking up in the database. So, so that security, that security wall needs to be on the mapping layer between the file name and the, and the token.
Speaker 0 58:52 Absolutely. And I mean, you know, again, if you're spending millions of dollars on productions, um, either for broadcast, you know, scripted dramatic stuff or Hollywood theatrical cinema and you know, again, you may be spending tens or even hundreds of millions of dollars and this is where the studio master lives. It sounds like a really appropriate system, you know, if that's the type of video that you're, you're dealing with. But you know, for some customers who may be, you know, the, the cost of their productions is much lower. Other types of disk based archive or rather tape even or other just mediums or approaches in general might be more appropriate. Right? Sure. And I think what you see in the media world is there's still a mindset of Hey, if it's more reliable than tape, that's good enough. And that's a pretty low bar, right? So it's hard to sell a lot of video people on ultra super Uber reliability because they're so used to something that's so unreliable to start with.
Speaker 0 59:49 Yes. The problem is almost none of our clients shoot on video tapes. And usually the tape is on the other end of the process. Now it's on the archiving side. So that's where they, you know, that's where they invest in the, you know, the th the dual writing drives, you know, where you're writing the two tapes at once. Exactly the same data, you know, so your archive is also backed up. But yeah, most, uh, you'd be surprised even even frankly at the consumer end to be sure. Um, but pro-sumer and industrial video and corporate production all the way up to broadcast and theatrical, the vast majority of it exists as a file from the moment it's created. And so, you know, people, it's been a trying decade during my time here at Chesapeake. I mean, you know, again, people used to rely on those videotapes probably lasting a good 15 to 20 plus years and now there's this file that's this, just a morphous kind of hard to put your finger on thing that they need to preserve.
Speaker 0 00:49 Just the same way that they preserved those physical things or what they thought of is the physical media. But now there's really nothing physical. It's just this file somewhere and you know, this is coming back to our original thing about a flash media. And so okay great. You got this camera, it's got a flash stick that sticks in there, our flash cartridge. That's great. But they're so expensive. Everybody has to reuse them. Right? And the tape world, you simply took the tape out and you gave it to somebody and you put it in a vault. Cause the tape is five bucks right now you got this memory cassette, this three or $400 or something, maybe more. And so the impetus is to get the data off of bit and reuse the cassette, right? Absolutely. Absolutely. And it presents a lot of challenges. And again, as we wrap here, and this has been just an extraordinarily informative conversation.
Speaker 0 01:37 Amazing. Thank you. That's why I always love talking to you and picking your brain about these things. Cause you know, I walk away from the conversation at least allowing myself to present myself as being smarter than I really am. So thank you for that. Thank you. Um, but that's why we love your stuff. That's why we love the next hand stuff. Again, as people are in this realm now where their content is as valuable as ever, it's just, it's a file they need to trust that where it lives is going to be highly, highly reliable. They want to know that you've done, you know what you need to do as the, as the manufacturer of their storage to make sure that they don't have to know add or worry about any of that. Exactly. And so, and you know, everything we've talked about today is just all of that backend stuff that goes into making highly reliable and fantastic storage systems that we have sitting in our office and coming in and out of here on a pretty much daily basis.
Speaker 0 02:37 So we really, we appreciate that. We appreciate the partnership with you guys. So thank you so much for being on today, Gary Watson, CTO of NEC San, and we look forward to seeing him and speaking with you in the future, Gary, and thank you, Nick. And now I'm actually an automation fellow. That's, that's my new title now. Now that we've been good, our new corporate overlords have taken over. Well, the next time I see you, I'll say, uh, my Goodfellow. That's me. Awesome. Awesome. Thanks so much, Gary. Thank you, Nick. Thank you, Jason. Take care. Bye. Bye.