#19 "Beyond Metadata: The Role of Audio and Video Search"

November 04, 2013 00:37:28
#19 "Beyond Metadata: The Role of Audio and Video Search"
The Workflow Show
#19 "Beyond Metadata: The Role of Audio and Video Search"

Nov 04 2013 | 00:37:28

/

Show Notes

We recently conducted another of our small-group "think tank" digital video workflow symposia in New York City, at Quantum's offices in Lower Manhattan. We spend a lot of time at Chesapeake Systems talking with our clients about metadata. It's definitely an extraordinary tool in the world of digital video workflows, but some emerging technologies are giving us an additional set of tools.

Highlighted at this symposium were two search applications that we at Chesapeake are very impressed with: Nexidia and NerVve. Nexidia is a phonetic search application that can scan a library of literally thousands of hours of footage in seconds. NerVve is an application that works similarly for video imagery search. Both applications present easy-to-use interfaces, and the results typically blow away first-time users.

What role do these applications play in advanced video workflows? Do they threaten or complement metadata-driven media asset management systems? These are questions addressed by our panel of experts:

Chris Lacinak, Founder and President, AudioVisual Preservation Solutions
Drew Lanham, SVP/GM, Nexidia
Thomas Slowe, CEO, NerVve Technologies
Nick Gold (panel moderator), Director of Business Development, Chesapeake Systems

If you are interested in seeing a demo of the Nexidia or NerVve applications, either in person or via web, email Nick Gold or call him at 410.400.8932. Episode length: 37:34. You can also listen to this and other episodes of The Workflow Show in iTunes. Again, special thanks to our friends at Quantum for their assistance in facilitating this symposium.

Show Notes:
taxonomy
Avid PhraseFind
Boris Soundbite
DAM = Digital Asset Management
MAM = Media Asset Management
PAM = Production Asset Management (Avid, for example, is one entity that makes the distinction between MAM and PAM)
SAN Fusion

Episode Transcript

Speaker 0 00:00 So I'm going to give just a moment to each one of our fellow panelists here to introduce themselves, and then we will have this conversation about these advanced technologies and how they relate to metadata in our video production workflows. Take it away, Chris.

Speaker 1 00:14 Hello, I'm Chris Lacinak from AudioVisual Preservation Solutions, or, to save breath, AVPreserve. We're a consulting firm, and we work with organizations that have collections of audiovisual materials. Those encompass formal archives (Library of Congress, National Archives, folks like that), but also organizations that would not consider themselves a formal archive, yet generate, use, and have the need to preserve audiovisual content. And by preserve, we mean long-term access. This is not a distinction between preservation on one hand and access on the other; they are integrally linked. On one side, we help organizations figure out how to take warehouses or boxes or rooms full of legacy materials and create the blueprint to get them from not knowing what they have, how much it's going to cost, or what specifications to use, to doing all the wonderful things they want to do with their content. And on the flip side, we also do a lot of work helping with technology selection around digital asset management systems and repository systems, to help organizations in their workflows; again, doing all the wonderful things they want to do with their content. Cool, Tom?

Speaker 2 01:37 I'm the CEO of NerVve Technologies, and what we do is search video based upon images submitted to our system for objects of interest. And we do that with incredible speed. Current speeds you can see are us searching an hour's worth of video, every single frame of it, for the thing you're looking for, in about five seconds, and that's linearly scalable across multiple machines. We started about two and a half years ago, so we're a fresh company, with a significant presence in the intelligence community and Department of Defense, and over the past few months we've begun the process of moving into media and entertainment.

Speaker 0 02:19 Thank you. I'm Drew Lanham with Nexidia, and Nexidia enables the ability to search through spoken word content in audio and video, both live feeds and recorded files, and we do that at scale. We regularly search millions of hours of audio for keywords and phrases and combinations of keywords and phrases. As a company, we apply that technology to four key markets: speech analytics, government, legal discovery, and media and entertainment, and I'm responsible for the media and entertainment industry. We've recently released a product called Dialogue Search that's designed to attach to a media asset management system or file system, wherever there may be a repository of audio and video, rapidly enable that to be indexed, to the tune of thousands of times faster than real time, and then search that index at millions of times faster than real time, on a pretty discrete, very small hardware footprint. And then you find what you're looking for and get back into your media asset management workflow or your nonlinear editor workflow; it really enables rapid discovery of the media.
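To make the phonetic-search idea concrete: rather than matching query text against a transcript, a phonetic engine matches the query's sound sequence against a time-aligned phoneme stream derived from the audio. Nexidia's actual engine is proprietary; the sketch below is only a toy illustration of the general technique, with a hard-coded phoneme track standing in for real recognizer output.

```python
# Toy illustration of phonetic search (not Nexidia's proprietary engine).
# Assume a phoneme recognizer has already turned the audio into a
# time-aligned phoneme sequence; here that output is simply hard-coded.
from difflib import SequenceMatcher

# (start_time_seconds, phoneme) pairs: hypothetical recognizer output
track = [
    (0.0, "HH"), (0.1, "AH"), (0.2, "L"), (0.3, "OW"),   # "hello"
    (1.0, "M"), (1.1, "EH"), (1.2, "T"), (1.3, "AH"),    # "meta-"
    (1.4, "D"), (1.5, "EY"), (1.6, "T"), (1.7, "AH"),    # "-data"
]

def phonetic_search(query_phonemes, track, threshold=0.8):
    """Slide the query across the track; return (time, score) for close matches."""
    phones = [p for _, p in track]
    n = len(query_phonemes)
    hits = []
    for i in range(len(phones) - n + 1):
        score = SequenceMatcher(None, query_phonemes, phones[i:i + n]).ratio()
        if score >= threshold:
            hits.append((track[i][0], round(score, 2)))
    return hits

# "metadata" rendered as phonemes (hypothetical grapheme-to-phoneme output)
print(phonetic_search(["M", "EH", "T", "AH", "D", "EY", "T", "AH"], track))
# -> [(0.3, 0.88), (1.0, 1.0)]: an exact hit plus a fuzzy near-match
```

The fuzzy scoring is the point: a query still lands near the right timecode even when the recognized phonemes only partially overlap, which is why names with uncertain spellings remain findable.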
Speaker 3 03:22 Cool. So I'm going to start this off, because one of the challenges we run into at Chesapeake, when we're talking with our clients about media asset management and metadata, is this: yes, we all realize that metadata is a wonderful thing. If you have a well-thought-out metadata schema and taxonomy, there's a lot you can do with it, if people are using it. Maybe as an organization you even get to the point where you've thought through all of that, but there can be barriers to truly implementing the MAM within the organization, to getting people to actually use it. Then there's the issue of backlogs: what do we do about them? Okay, maybe we come up with a system moving forward, but what do we do about all the archives of legacy stuff? Chris, from your perspective, as you work with your clients to evaluate their archives, can you speak to some of the challenges organizations run into when wrestling with how to properly implement metadata and a metadata-handling database system for their materials? I'd love to hear some war stories about the challenges people face with a purely metadata-driven approach.

Speaker 1 04:41 Yeah, well, I would answer that question a little more broadly and say that a lot of what you're pointing to are change-management types of challenges. It's not hard to sell the value of metadata; organizations understand it, and various departments within organizations understand what they can do with it and are pretty quick to get on board. A lot of the challenges we see come from organizations not going through the selection process correctly. So we do a lot of work with requirements development and use-case development. And that's not just about ending up with the end product, the report that says "here are our requirements." The process of going through that with an organization is itself very valuable, because you workshop ideas out, you meet with people, and a lot of times these are people who have not really sat down together and talked about these things before. It builds buy-in; it builds a much greater kind of synergy than would be there otherwise. That process, and the outcome of that process, is really huge, and it makes things actually work, as opposed to, "oh, we got this system today, use this."

Speaker 3 05:56 But you deal with small clients, you deal with very large clients. You deal with some who are well versed in media management and library science, and some who say, "hey, we just want to be able to find all this stuff in this interesting collection of footage or material that we have." Especially in some of those smaller organizations, just looking at the manpower challenges, and assigning the correct roles to people in the organization to actually log things:
I mean, where do you run into challenges and stumbling blocks there? Because it takes effort to actually tag things with a well-thought-out set of metadata.

Speaker 1 06:33 Yeah. So there is a big misunderstanding, or a big disconnect, there. The first meeting often goes something like, "well, I want to be able to Google our stuff." That's kind of the first thing: "I want to be able to search it like I can search stuff in Google." And there's the thought that that's fairly easy; most people think it's essentially equivalent to digitization. So there's an educational component: figuring out taxonomies, doing taxonomy development, the complexities of workflows. Different workflows generate different types of metadata; different use cases mean people want to use content in different ways. So there is figuring out where metadata is generated, how it's generated, who's generating it, the trustworthiness of various metadata sets, and then how to consolidate those and present the ultimate end-user experience that people are actually imagining. You don't want to hit people over the head with a bat and say, "this is what you're in for," but there is a big educational component in letting people know what's behind actually getting to where they want to be.

Speaker 3 07:42 So it's not only implementing a system, but it's managing people's expectations of how they can realistically expect to use that system.

Speaker 1 07:50 Yeah, definitely. And what it takes, and the resources required to get there.

Speaker 3 07:54 Drew, you guys at Nexidia didn't necessarily originally come from the media and entertainment space, but as meetings as recently as yesterday indicate, you're obviously talking to a lot of people in the media and entertainment world, the broadcast world, folks sitting on very large volumes of media data. What are you hearing from them about the types of challenges they're running into with a purely metadata-driven approach to organizing their materials? Yeah, I would echo some of Chris's comments. I think they think the MAM is the end game, and I think it's the beginning. I think they often misunderstand the level of effort required to implement it and the expense of managing that media. They know they want to keep it, but the cost of digitization and retention, or the creation of the metadata, is prohibitively expensive.

Speaker 3 08:43 And I hear a lot that they want to Google: "how do we Google our media?" Google just made everything look too easy in search; they kind of screwed it up for the rest of us. But Tom and I are here now, and it's going to be fine. Tom, your guys' background has traditionally been much more in, we'll call it, the government space, and if I get any more specific, we'll have to lock all of you in a room and flash red lights so you don't remember this. But your set of traditional users obviously has challenges as well; they're probably monitoring more video than everyone else combined, I imagine. So what do you hear from them? I'm just curious about their use cases for NerVve's technology: what are they telling you about the challenges they've had in trying to tag their materials and search through them, maybe just based on the volumes, or not having access to the manpower, or the training of the manpower, to properly tag things? Why are they coming to you?

Speaker 3 09:44 What are the challenges with those other approaches that they see in their space?

Speaker 2 09:48 The major challenge they see is that they run up against a particular set of technology; people have been trying to crack this nut of visual analysis, or video analytics, in a serious way for about 20, 25 years. And the reason nobody has really come to the forefront is that, functionally, what they want is exactly what you guys both outlined, a Google-style type of search, but with one little tweak: a false negative, not seeing something of importance, is really painful for them. It's maybe not as big a deal in other types of industries, but for them that could be the critical missing piece that would allow them to solve the riddle or to prevent something bad from happening. So there's a big focus on accuracy, particularly with them. It's a detail-oriented understanding of exactly what's in that data, to assure as few false negatives as possible. False positive rates are obviously very interesting and important too, so as not to crowd the users with all these things that aren't relevant to what they're looking for, but they can't miss things.
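Tom's false-negative point is the classic recall-versus-precision trade-off. A quick worked example, with made-up counts, shows why the two move in opposite directions when you tune a match threshold:

```python
# Worked example of Tom's trade-off, with made-up counts. A false negative
# is a relevant clip the search missed; a false positive is clutter.
tp = 90   # relevant clips returned
fp = 30   # irrelevant clips returned
fn = 10   # relevant clips missed

precision = tp / (tp + fp)   # 90 / 120 = 0.75
recall    = tp / (tp + fn)   # 90 / 100 = 0.90
print(f"precision={precision:.2f}, recall={recall:.2f}")

# Loosening the match threshold typically converts misses into hits,
# raising recall toward 1.0 while precision drops: more triage work
# for the analyst, but fewer of the misses Tom says they cannot afford.
```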
Speaker 3 10:57 Yeah, because that can be very bad. Going back to you, Drew: you guys have this phonetic speech search technology, and you're having these conversations with folks in the media and entertainment space; you've been in that space for a little while. Some of you may know that their core technology is what Avid licensed for PhraseFind, and what's behind Boris Soundbite. So people have been using a phonetic speech search approach for a while now. I'm curious if you can paint some pictures for us of the use cases you run into specifically in media, entertainment, and broadcast. How do people use it? Because it is a different type of thing you're searching for than a more conceptual metadata field; now you're actually mining the data itself, and the data is the words that have been spoken. Paint a picture for me, or a few if you will: what are the types of searches they're doing? What problems is it solving for them today?

Speaker 0 12:01 Yeah. When we say media and entertainment, it's a very broad definition. We have CNN and Turner as customers; we have PBS as a customer; there's Gateway Church, a little church out of Texas; we have people doing reality shows that are utilizing our technology. The myriad of use cases would take the rest of the session, but generally, and I think CBC put this best: "we can use our MAM if we know exactly what we're searching for and the user is very experienced. We'll use your technology when we need to cast a wider net, or when the user needs a more simplistic user experience." So they're using our technology to go off and do very specific searches, but also the wildcard: "what do I have?"

Speaker 0 12:48 And the best, most recent example I have is Cox Media. They spent 18 months digitizing 40,000 hours of media; this is the WSB station out of Atlanta. They put it into a MAM, and then went to create a piece on civil rights for the 50th anniversary of the March on Washington. They spent a week looking for footage. Now, the media they digitized had time and date stamps on it, so they were able to go into the range of dates and start searching, but they weren't able to find anything. Nexidia came in, we installed our software, and in 80 hours it had processed the entire 40,000-hour library on a single server. And in, as they described it, their words, five minutes, they found all the footage they needed to create this anniversary piece on Martin Luther King and civil rights.

Speaker 0 13:34 And they found footage they didn't know they had. So it's "help me get to what I need faster," but also "help me find out what I have at all," which is the bigger problem. And I think that's the heart of what we're doing: material untagged, other than some fairly basic time-and-date-type metadata. Other than file name and date stamp, it was completely opaque: unsearchable, unstructured data. Big unstructured data, without a doubt, 40,000 hours of it.
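The Cox Media figures imply an indexing throughput worth spelling out. A back-of-envelope check, using only the numbers quoted above:

```python
# Back-of-envelope check on the numbers quoted above.
library_hours  = 40_000    # size of the digitized Cox archive
indexing_hours = 80        # stated wall-clock time on one server

speedup = library_hours / indexing_hours
print(f"about {speedup:.0f}x faster than real time")   # about 500x

# At that rate, a 1,000-hour archive would index in roughly two hours:
print(f"{1_000 / speedup:.1f} wall-clock hours per 1,000 hours of media")
```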
Speaker 3 14:05 Tom, you guys are talking with more media and entertainment accounts about your visual search technology. I'm curious what some of the feedback has been, either from folks who are actively using NerVve or trialing it, or even just what interesting use cases you're being told about by clients. I know you guys are very inquisitive as to what people can even imagine doing in their sector with your technology. What are some of the things people are telling you about the value or the use cases they see in visual search?

Speaker 2 14:38 While there have been a number of different use cases we've encountered so far, the main focus point has been being able to re-monetize the archives, for branding information and for new program assembly. Because we can search for any visually evident signature, whether you're looking for logos, not necessarily overlays, but literally branding that's evident just in the raw, clean footage in the background, not necessarily already-paid-for type stuff, we can find that information, and we can find it in conjunction with certain scenes and activities, things happening in the video that you may be interested in submitting into production. So those are the main applications: really being able to look back through those archives and understand what you've got, so that when you're assembling a new program, you've got options, and you've got them very, very fast.

Speaker 3 15:35 And one of the other use cases I've thought about: in the world of video, there are just reams of B-roll, your large collection of additional footage to use as filler or whatever. And people pay a lot of money to generate that stuff.
To be able to say, "do we have a shot of the White House, or a shot of an American flag, or a shot of a car that kind of looks like this car driving by?" seems like it could be really useful, just to limit your need to go and reshoot stuff that you may be sitting on but, if it's not tagged and it's there amongst thousands and thousands of hours of footage, you might not have access to.

Speaker 2 16:20 Sure. Yeah. For us it's a very similar type of problem, because while somebody telling you they work for the US federal government kind of brings to mind Predator feeds and that type of security footage around embassies and such, we see a tremendous amount of broadcast footage, YouTube footage, viral-type video that is searched for other reasons. Very, very similar problems, honestly, and somewhat similar applications.

Speaker 0 16:48 Can I add one thing on the video side? We do some work with the PGA, and we found that they log shots, but not dialogue. The media asset management folks really appreciated our technology for being able to jump to the media where something was said or spoken in the commentary. But one of the primary users within the PGA was the marketing team: they wanted to be able to go back and quickly show their sponsors where they were mentioned. Barclays wants to know, BMW wants to know: "where's my product being featured? How is it being featured? Show me that quickly." So the marketing organization really took to that. And I think that's a wonderful use for you, because there may not be audio references; there may be visual references. The finding of that sponsor information is what makes the financial wheel go around, the return on the investment of the shows. Ever seen logos on people's shirts fuzzed out? This is a way you could say, "pay us this much and we'll un-fuzz your logos." That's kind of the world we're moving towards.

Speaker 1 17:50 Well, the other thing that brings up is that I think we've started to see a lot of metadata generation tools, and they've really evolved; as you pointed out, we've been looking at visual analysis for so many years now. So it's great to start to see these come out. Where the biggest gap is right now is in starting to put these together on the platform and user experience side. As you just pointed out, your tools together are actually much more valuable. And people don't think, "I want to search audio," "I want to search video." They're looking for information, period. So I think we're at a point where audiovisual content is becoming much more a part of the fabric of information, in the same way that text always has been, based on our abilities to generate and search metadata.

Speaker 1 18:37 But it also brings up the point that it's not a replacement. Combining these various technologies is not a replacement for human metadata refinement and generation, subject matter expertise, and all these things. We get the most power when we combine various technologies and humans, in order to provide the appropriate context that's needed for various use cases. And what's stopping us there is really standardization and platforms that allow interoperability. I think we're starting to get there; I know in talking to you guys that integration is something you're really focused on. But it's where these data sets can really combine, where they can exchange information, in ways where we think about layers of information the same way we think about layers of audiovisual content. We'd have tracks of metadata, and one could be the embedded metadata track. I think about MLB: Major League Baseball does a fantastic job of capturing and layering metadata. What's the speed of the pitch, what stadium is that, what teams are playing: these all come in as streams of metadata that they layer. Being able to navigate through information in that way, I think, is where the gap is right now. So, combining them...
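One way to picture Chris's "tracks of metadata" idea is as time-coded layers over a single timeline, the way audio and video tracks are layered. A minimal sketch follows; all field and track names are illustrative, not any vendor's schema.

```python
# Minimal sketch of layered, time-coded metadata tracks. All field and
# track names here are illustrative, not any vendor's schema.
from dataclasses import dataclass, field

@dataclass
class Marker:
    start: float        # seconds from the start of the media
    end: float
    value: str          # e.g. "pitch_speed=95mph" or "logo=BMW"

@dataclass
class MetadataTrack:
    name: str           # e.g. "pitch-data", "dialogue-hits", "logo-hits"
    source: str         # who or what generated it: a logger, Nexidia, NerVve...
    markers: list = field(default_factory=list)

    def at(self, t: float):
        """All markers on this track covering time t."""
        return [m for m in self.markers if m.start <= t <= m.end]

# Two independent layers over the same timeline, as in the MLB example:
pitch = MetadataTrack("pitch-data", "stadium feed",
                      [Marker(12.0, 14.5, "pitch_speed=95mph")])
teams = MetadataTrack("game-info", "scheduling system",
                      [Marker(0.0, 10_800.0, "teams=NYY@BOS")])
print(pitch.at(13.0), teams.at(13.0))
```

Keeping each generator (human logger, phonetic search, visual search) on its own track preserves the trustworthiness distinctions Chris raised earlier, while still letting a user query across all layers at a given timecode.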
When we combine various technologies and humans in order to, uh, provide, you know, the appropriate context that's needed for various use cases and where we don't have that stopping us there is, is really platforms or standardization and platforms that allow interoperability. I think we're starting to get there. I know in talking to you guys, you guys are really focused on this is integration, but where these data sets can really combine, they can exchange, you know, exchange information and ways where we think about layers of information in the same way. We think about layers of audio, visual content. We have tracks of metadata and it could be the embedded metadata track. You know, I think about MLB, they have major league baseball does a fantastic job of capturing and layering metadata, right? So what's the speed of the pitch. What stadiums are that what's the teams that are playing and, you know, these are all come in on varied. They have lots of streams and metadata that come in that they layer. And so being able to navigate through information in that way, I think is, is kind of where the gap is right now. So combining them Speaker 0 19:52 Yep. Chris do it, especially with technologies that don't, uh, you know, the MLB has people that will pay them to log that data for them. Right? Yeah. So in the absence of that, yeah. Chris, do you guys have any customers right now who have started to look into, or are already utilizing search technologies for the media data itself, in addition to the various types of metadata logging tools that are out there? Speaker 1 20:15 Yeah. Well, I'd say that we have clients that are interested. The problem is that we run into is that these interest and the awareness tends to be at a level within the organization. That's typically not the budget holder or decision maker. So we're in a kind of a catch 22 where the vision that we have for what's possible has not been demonstrated. And so, you know, it's, it's a little bit like preservation and access, right? You, you talking to an upper level administrator or CEO or something of an organization, it's not very sexy to talk about preservation, everybody's focused on access. And as I said earlier, those are integrally related, but people think about them differently. And the same way, it's not very sexy to talk about the resources it takes to generate metadata. It's, it's quite, uh, sexy to talk about what that looks like from a user experience, but we haven't seen, I don't think we have the compelling user experiences to demonstrate to drive that the funding that it takes to do it in the way that we, we really need to do. Speaker 3 21:15 You've sent me up very well. Thank you for the segue I kind of wanted to make, which is kind of the ROI argument, because it's, it's great to have all of these really flashy, cool video media technologies at your disposal. And, you know, in an ideal world, you have all of these various lenses or filters you can apply. And there is the metadata layers. There's obviously the ability to delve into the, the raw data itself, but, you know, drew we've certainly talked about, and I know you've even worked with customers to kind of draft ROI arguments. What is the return on investment with the speech search specifically? 
You guys obviously live in a realm where you're sometimes compared to the ability to get at the data through closed captions that may be embedded in some of these data streams, or other approaches. Talk me through an ROI argument. Not necessarily spelling out the entire ROI right now, but how would a customer begin to craft an ROI case for implementing something like Dialogue Search?

Speaker 0 22:19 Sure. We scale and price our solution based on the maximum volume of media to be searched at any one time, as well as the number of concurrent users. As you scale up, it gets cheaper on a per-hour and per-user basis; it goes into the pennies per hour of media analyzed and made searchable. But even at its most expensive, we think it's one-tenth to one-fifteenth the cost of transcription, and probably available a hundred times faster. And on the logging side, we think we're probably one-eighth the expense of logging, and probably several hundred times faster. Our objective in pricing was to create something that is by far the most inexpensive way of actually getting access to your media at any scale, and it just gets cheaper as you get into larger archives.

Speaker 3 23:13 So for you guys, part of it is compare-and-contrast with some alternate approaches. And you're entirely right: searching dialogue on an as-needed basis, knowing that your results are going to be nearly instantaneous, is a very different approach. The entire thing is different from just sending footage out to be transcribed. So you have to get clients to understand that by taking this radically different approach to solving the same problem, they can get tremendous savings in accomplishing the same task. It's the end task that has to be framed, not all of the steps we take to get there.

Speaker 0 23:56 Well, you're right. The one thing I don't think we compare favorably to is if somebody is actually capturing, and has a comprehensive record of, all the closed captioning of all their media, if there wasn't any B-roll and it had all been broadcast. I don't know that we'd add a lot of value, or could compete with that on a cost basis. But what I find is that, of the tremendous number of MAMs out there, I don't think I've actually seen one yet that retains captioning and uses it to inform search, especially in a time-coded way. And even if you have it: PBS is a customer of ours, and they have 30 years of captioning. They decided the cost to extract it, and the time it would take, was too expensive and not sufficiently better than Nexidia. So it's not just that you have it; it's the ability to get at it and have it inform search.
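To put Drew's ratios in concrete terms, here is a rough cost model. The dollar rates are hypothetical placeholders, not vendor pricing; only the stated ratios come from the discussion.

```python
# Rough cost model using only the ratios quoted above. The dollar rates
# are hypothetical placeholders, not vendor pricing.
transcription_rate = 100.0   # assumed $/hour to transcribe media
logging_rate       = 40.0    # assumed $/hour to hand-log media

phonetic_vs_transcription = (transcription_rate / 15, transcription_rate / 10)
phonetic_vs_logging       = logging_rate / 8

print(f"vs transcription: ${phonetic_vs_transcription[0]:.2f}"
      f" to ${phonetic_vs_transcription[1]:.2f} per hour of media")
print(f"vs manual logging: about ${phonetic_vs_logging:.2f} per hour of media")
```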
You know, what are you hearing from people as far as, you know, when you present your core technologies to them, Speaker 2 25:09 The, the, the two major points are shortening production time, shortening workflows, fundamentally because they can find their data faster. That's a straightforward one. Uh, another one is really the meat re monetization of past recorded video in archives and such being able to use what you've already got is always going to be cheaper. Uh, and then from a D duplication perspective, if they're storing significant archives of video in many, many, many different copies, we can go in at very high speed and compare and understand whether or not that's, you know, to what degree that's the case. Speaker 3 25:43 So Chris, you've presented this vision of kind of, I think the ideal reality, which is you have the database, you have the metadata layers, um, you know, some of those are technical metadata. Some of those are maybe kind of very process oriented metadata. Some of them are very descriptive and qualitative metadata. And then you have these kind of selective again, lenses or filters that you can apply into the data itself. Drew, tell me about the integration story with dialogue search. I know you guys talk a lot about API APIs, which to me is a good thing because it means you, you you're bearing this in mind, but maybe you can just speak a little bit to how you see yourself being integrated in a way that I think kind of Chris was illustrating with that vision. Speaker 0 26:27 Sure. In our system, we want to ingest media, which means we want access one time to the audio or video, and that could be a high rise or a proxy, but we are generally agnostic as to where that lives is. So if that's at a network accessible location stored on a file system or a media asset management system, we'll create a media source connector. What we refer to that will feed media into our experience to allow it to be indexed, and then subsequently searched. We allow the user to search that index that media wants its index. And that can, I should describe it. That can be multiple sources of media that media could reside in many different locations and we can federate search across that. And so we're trying to be as open on how we get media into the system. And I'd say, we're, I'm not aware of a format we don't accept. Speaker 0 27:12 And then it's searchable within a web client and a portal that is Google, like simple. And then when the user they're able to find and preview results, they're able to do structured Boolean queries and kind of normal metadata filtering. And then they're able to export time coded markers that they found. Once I filed the five that they're looking for export time, coded markers back to the nonlinear editor or to the media asset management system or systems, as Nick said, we have built APIs that allow third parties to create media connectors to those media sources for us. But we've also built APIs that allow our client experience our user experience and integrate that directly into, um, into a third party application. And the first one is we've integrated into the Adobe premiere pro panel simply because it was available and open. And, uh, and so it was, and it was a good example of our API. We recognize that we're a secondary search experience and we want it to be as friendly and as integratable as possible for the end user. 
Speaker 3 28:14 Tom, where are you guys at with integrating with other people's platforms? Obviously a lot of people do have these uber-databases, the uber-MAM sitting above everything, and probably want to be able to find hits in your system but then publish them back into the main system. Has that been a need people have expressed? Are you guys supporting it currently, and what are the future plans, so NerVve can easily get hooked into the various other parties that are part of this whole ecosystem?

Speaker 2 28:43 Again, this is very analogous to the government space: you don't deploy a system like ours, very specific technology, without assuming it's going to be integrated into a larger system. That's what we've had to do in order to get deployed. That said, we are in the process of developing a more open RESTful API that will allow for much easier integration, particularly into the MAMs, which I think will support this area in a much more significant way.

Speaker 1 29:13 Can I peg you down on a rough time estimate for some of those developments? January. That's close; we've just got the holidays. Chris, please. So, building off a little of what I was talking about earlier and what they just mentioned: one way we think about this stuff is that tools that do automated metadata generation and search, in a lot of ways, give you the ability, because we are dealing with such massive amounts of information, to prioritize description. What these tools do is optimize expertise. They allow subject matter experts and catalogers to really apply their expertise where it counts: not describing things as "red car," "American flag," "White House," but actually using the tools to hone in on the areas that are really important at a given point in time, and to provide the much richer metadata set that only they could. I would also add that combining these things, adding things like linked data, doing named-entity extraction and tying into linked data and open data, builds so much amazing context that you can pull from and use, even before a human has to do anything. It creates a lot of important findability and context.

Speaker 3 30:38 So, AVPS: part of what you said you do, and I know you do it, you weren't lying, is help clients actually conduct technology evaluations, stepping through the process of evaluating different MAMs or DAMs. We're talking here about some fairly bleeding-edge technologies, at least as far as what the private sector has access to. NerVve is absolutely one of the strongest visual search technologies, and they have just begun to dip their toes into the water of media, entertainment, and broadcast private-sector use cases. Nexidia also has a government background but has been in this space a little longer; still, it's a pretty new type of capability.
What would you say to customers, and what are some points they should keep in mind, about what they might want to get out of a technology evaluation of a fairly cutting-edge toolset like these? Is there anything as far as the time they should allot to a technology evaluation, the kind of personnel that would go into it, the types of issues you see coming up when someone is even considering, "is this applicable to me?"

Speaker 1 31:49 Yeah, that's interesting. There are several projects that have started where we get a call and it starts with, "we want to redo our website, and we decided, geez, we should probably put audiovisual content on our website, and we realize we have all these tapes, and we also have all these files, and we need to get our heads around that." That often leads into, again, you want to be careful of scaring people away by telling them what a project they have ahead of them; you educate in a cautious way. But the reality is that several projects that started with that conversation ended up being several things. As for timelines: nothing happens in under six months, and usually it can be two, three years, if people get into things like taxonomy development and implementing integration of the source tools and DAMs and PAMs. These are multi-year projects that take a lot of resources.

Speaker 3 32:54 Can I quote you on that when the vendors come knocking and asking, "where's the PO?"

Speaker 1 33:03 We're working on a project where an organization wants to do this; I think we started two months ago, and it has to be done by the end of the year. Fortunately, they realized the risks and are willing to take them on.

Speaker 3 33:15 Now, one of the nice things is that these guys' technologies, maybe not in the final way a customer would deploy them, are at least usable somewhat off the shelf. You can deploy them in a non-integrated way relatively quickly and painlessly and throw your content at them. And both of you are open to giving people an evaluation period, maybe through a partner like Chesapeake, on a server that we have set up, or a server you're hosting, right? That's something all of you are open to, almost by default: giving someone maybe 30 or 60 days with a system to see how it actually does with their material. Yep. So again, what are some points you'd want to leave people with on how to get something out of that eval? Your 30 days are ticking, your 60 days are ticking; what are folks going to want to do to get the most out of a technology evaluation, other than obviously calling you guys and contracting you to help?

Speaker 1 34:14 I would say: know what you want to do. The best way to evaluate technology is when you have a use case. You've actually sat down and thought about not only "what do we do now" but "what do we want to be doing." If you end up with a set of use cases (here's what the recorder does, here's the next step), you actually have a workflow built out. Then you take a tool, evaluate it against that specific use case, and see how it performs. That's how you'll get the most knowledge and value out of a demo. Because otherwise, you get the tool, you mess around with it a little, you think "let me think about this," you get busy with other things, it goes away, and you haven't done yourself much good.

Speaker 3 34:56 So I want to get to the demos so you can actually see all of this cool stuff, but I want to leave a couple of minutes if anyone has questions for any of the panelists about the tech that you don't think will be immediately addressed by the demos. Any other questions, or even comments, about any of this fun stuff? Greg, and I'll introduce you: this is Greg Schiff of Quantum.

Speaker 4 35:21 This is for Tom and Drew. What percentage of your customers are actually using your, let's say, GUI application versus using the hooks? I would think that most of them would be using the API to tie it into a larger system, but do you actually have customers that are just using your web GUI to do searches?

Speaker 3 35:42 Drew, you want to start? We released our API in our 1.4 version of the product, which came out two or three weeks ago, so there are no customers using it currently; they're all using our web interface. The feedback we get is that it's super simple. But we publish the API for free, and support it for free, which I think is unusual, and we're doing everything we can to encourage integration with it. It's a REST-based interface and, I think, pretty easy to implement. Tom, you obviously probably can't tell me too much without killing me, but for some of the types of government projects and scenarios your customers are involved with, how many of them are getting a little more under the hood with you guys versus using your more off-the-shelf product?

Speaker 2 36:27 There are a number of them that have <inaudible> integrated solutions with our product inside, but in my mind, from a technological perspective, they're not quite as sophisticated as we want; that's why we're moving to the more full-featured API. Right now, to interact with the system, you do have to use the front end as it stands. What we will be offering as part of the API is the ability to take elements from the front end, in order to retain, at least to some level, the user experience we've crafted, which is kind of a core necessity, and when you see the demo, you'll see why.

Speaker 3 37:03 But integration obviously goes both ways, and as of today, both of you allow a user to use essentially your out-of-the-box application experience and publish hits into another system; that's very easy to achieve through XML-based workflows or whatnot, correct? Correct. Yes.
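Since the panel closes on that XML-based handoff, here is one generic illustration of serializing hits as time-coded markers for a MAM to import. The element and attribute names are invented, since every MAM defines its own marker schema.

```python
# One generic way to serialize hits as time-coded XML markers for a MAM
# to import. Element and attribute names are invented for illustration.
import xml.etree.ElementTree as ET

hits = [
    {"clip": "tape04.mov", "tc_in": "01:02:10:05",
     "tc_out": "01:02:44:12", "note": "march footage"},
]

root = ET.Element("markers")
for h in hits:
    m = ET.SubElement(root, "marker", clip=h["clip"],
                      tcIn=h["tc_in"], tcOut=h["tc_out"])
    ET.SubElement(m, "comment").text = h["note"]

ET.ElementTree(root).write("hits.xml", encoding="utf-8", xml_declaration=True)
```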
Great. Okay. Unless there are any other questions, we're going to do a quick little transition here, and we'll start with Drew and Nexidia, and then move on to the NerVve guys for demos. Thank you.

Other Episodes

January 19, 2015 | 01:08:41
#28 Marquis Builds Video Workflow Bridges with Daniel Faulkner, Business Development Manager for Marquis Broadcast
There was a time when it made sense for a video facility to obtain its NLE, storage, archive, MAM and other sundry post-production workflow...

June 04, 2020 | 00:24:49
#48 Media Workflow Basics: Part 1 of 5: Ingest Media and Ingest Video
Join Jason and Ben as they begin a multi-episode tour through the digital media workflow and entertainment workflow creation process. In this first episode,...

February 08, 2022 | 00:39:46
#68 API First Live Video and Media Ingest with Cinedeck
On this episode of The Workflow Show, hosts Jason and Ben welcome from Cinedeck Jane Sung, COO, Charles d’Autremont, Product Development, and Ilya Derets,...