As the General Manager of Lauretta.io, Yuvanesh oversees operations where his team turns existing CCTV cameras into smart cameras that can infer occupant activity, behaviour, and intent, allowing security personnel to react in real time.
D-Ron: From your perspective, what are the most significant trends shaping the future of the surveillance and security industry, especially in the context of advancements in Artificial Intelligence (AI) and Deep Learning?
Yuvanesh: That's a good question. So, one of the trends that I see is AI-based analytics coming to the edge. Innovation is happening on a daily basis; compute is improving, allowing us to deploy complex AI solutions on the edge. That means we can collect more data and visualise it more meaningfully.
Also, being able to deploy at the edge opens up new avenues for these solutions. An example is multimodal AI analytics. When deployed at the edge, we can connect to more sensors, like Internet of Things (IoT) devices within the facility, enabling us to collect even more data, process it, and provide it to clients.
So, being able to do that on the edge is a rising trend and a powerful way to provide value to clients, whether for security and surveillance or any industry, for that matter.
The second trend is ethical AI or responsible AI.
When we talk about using AI and deep learning in security and surveillance, there's a need to balance surveillance understanding with privacy protection. Deploying advanced AI solutions for security requires consideration of how to achieve objectives and security outcomes while protecting the personal interests of people in the secured space.
The third trend is advancements in AI and deep learning, allowing us to bring more capability to the ground. From basic detection a few years back, today we can detect specific actions in real time, like someone smoking in a non-smoking area or abandoning a bag—a sign of a potential threat. This advancement lets us deploy for specific use cases with meaningful accuracy.
Another crucial aspect of AI and deep learning advancement is the capability to integrate with other ecosystems, allowing us to make sense of events on a larger network, where predictive modelling comes into play.
With machine learning (ML) and data collected at the edge, we can integrate with more data sources, enabling predictive analysis and preventing incidents before they occur. This progress aligns with cybersecurity technologies using AI-ML to predict potential threats, saving costs, time, and potential damages for clients investing in these solutions.
D-Ron: And more importantly, it prevents a loss of trust because once you lose your clients’ trust, that’s it.
Yuvanesh: You're right. I look at security as an investment: you're investing to protect yourself from that one incident that could bring everything crumbling down.
D-Ron: I've spoken to other Key Opinion Leaders (KOLs) and they have also mentioned that apart from video analytics with the use of AI, they are starting to move into sound analytics as well, so that more sources of data can improve the accuracy of their predictions.
Do you feel that's also a big trend that things are moving towards, or is there even more out there?
Yuvanesh: Yeah, I think that's a very good point that you raised. So, I'll bring back my earlier point of multimodal AI analytics. Essentially, when you look at videos, you're just talking about computer vision from an edge-based sensor. But the reality is, there are many other sources we can collect data from, and we can combine those with vision to generate something more meaningful and accurate.
Sound is one great way to complement vision, providing more context for AI to understand potential events. Human behaviour is complex; vision alone may not be sufficient. For example, sound can judge tonality in a person's voice, differentiating between shouting and excitement. Integrating different data points, not just vision but sound and others, allows for more meaningful inferencing.
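The late-fusion idea described above can be sketched in a few lines. This is a minimal illustration with hypothetical per-modality confidence scores, weights, and function names; none of it comes from Lauretta.io's actual system:

```python
# A minimal sketch of late-fusion multimodal analytics: each modality
# produces an independent confidence score for the same candidate event,
# and a weighted combination decides whether to raise an alert.
# Weights, thresholds, and names are illustrative assumptions.

def fuse_scores(vision_score: float, audio_score: float,
                w_vision: float = 0.6, w_audio: float = 0.4) -> float:
    """Weighted late fusion of per-modality confidences (each in [0, 1])."""
    return w_vision * vision_score + w_audio * audio_score

def is_alert(vision_score: float, audio_score: float,
             threshold: float = 0.7) -> bool:
    """Raise an alert only when the fused confidence clears the threshold."""
    return fuse_scores(vision_score, audio_score) >= threshold

# Vision alone is ambiguous (0.6: raised arms could be a fight or a
# celebration), but an aggressive-tone audio score of 0.9 pushes the
# fused score to 0.72, past the alert threshold.
print(is_alert(0.6, 0.9))  # prints True
print(is_alert(0.6, 0.2))  # prints False (calm audio, no alert)
```

The point mirrors the shouting-versus-excitement example: the same visual evidence leads to different decisions depending on what the audio channel contributes.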
D-Ron: Implementing AI and security systems can present challenges. So, what are some common obstacles or concerns that organisations may face when adopting AI for security?
You've touched a little bit on that earlier with the privacy side of it, could you expand on that a little bit more? Are there any other challenges or obstacles that you feel that organisations may face?
Yuvanesh: Definitely, one of them is privacy. When we talk about privacy and ethics, they're distinct but related aspects of protecting the interests of individuals and of the general public.
The first challenge is usually data privacy and security concerns. Regulatory frameworks like the General Data Protection Regulation (GDPR) and Singapore's Personal Data Protection Act (PDPA) require organisations to protect the data they collect. AI, collecting a lot of data, needs assurance on how it will be protected.
Working with stakeholders to ensure the necessary steps are in place to protect the data collected is crucial. Compliance with data privacy acts is well-defined globally, and organisations need to ensure their AI solutions adhere to these regulations.
The second challenge is ethical considerations. Regulatory requirements must align with what is socially acceptable and morally right. Balancing these aspects is essential, and responsible AI becomes a guiding principle. Understanding why and how AI is deployed helps in compliance and making ethical decisions.
The third challenge is integration into existing ecosystems. Large organisations, with complex workflows and security standards, demand AI solutions that seamlessly integrate without disruption. Meeting ISO standards and fitting into existing workflows are concerns raised by organisations.
Scaling is another challenge. Deploying a small proof of concept is easy, but replicating and scaling solutions across multiple locations with diverse requirements, possibly in different geographies, is a complex task. Supporting these requirements is a challenge faced by organisations.
D-Ron: For businesses looking to enhance their security infrastructure, what advice would you give them in terms of selecting and implementing the right combination of technologies, considering both traditional and AI-based solutions?
Yuvanesh: I love this question. It's a very fun one to answer because there's no right or wrong answer.
For sure, the way I look at it — and I think it is very important — the first step is always to do a risk assessment. It's the age-old framework that everyone uses in any industry and you need to understand what your current risks are, and what you need to mitigate those risks.
And then, once you have that risk profile through the risk assessment, the next step is to define your objectives. So, like I mentioned earlier, really knowing what your use case is, and what the security objectives that you want to achieve from this project are, is very important because it defines how you're going to go about doing it, and what kind of technology you need to actually deploy to meet that outcome.
And once you clear those two steps, the next thing I would say is look at what's available out there, and see what would actually make sense. So, every organisation may have different constraints — some may be looking at a cost-benefit analysis where they need to define a budget and they're trying to understand: “OK, I got this budget. What can I do with that?”
Or it could be that you're building a new budget. And you want to understand what's out there so that you can actually build that budget, and then decide how you're going to go around implementing this new solution.
So, once you do your risk mitigation, your use-case analysis and you understand your objectives, the next step is to go out there, look at what's available, then that's where you start assessing different solutions to see what really, really works.
Now, AI is not always the solution. There are times when we even tell our clients, “You can just stick with your traditional setup of access control and physical barriers, because your requirements aren't as complex, or they don't really need AI to provide the level of surveillance and security you're after.” So, it's really about understanding the objective that you want to achieve. And once you know that, you roughly start to figure out what you need or what you don't need, and you can move from there.
And then, there is the other thing you would want to do, which I would say is very important given that AI is new and something many people may still not be familiar with. So, when you are looking to implement a new AI solution, it's very important to get all stakeholders involved so that they understand what is being implemented and are properly trained in using the system. This is because one of the failure points in any new initiative is a lack of onboarding, which leaves people unable to use the system.
So, you can spend all that money, you can have that fancy system which you know will work well once deployed, but if the very people meant to use the solution are against it, uncomfortable, unfamiliar, or untrained, whatever the reason (it's always a human reason), and they become the barrier, then whatever you deploy is not going to work. You're not going to get your return on investment (ROI), and you're not going to see any meaningful usage of it.
D-Ron: It becomes a huge white elephant in that case.
Yuvanesh: Yeah, exactly. And I see a lot of projects fail, not because the projects themselves were poorly implemented, but because the users were not given enough onboarding and training, and then boom, they became white elephants like you said.
D-Ron: I've also noticed that a lot of the time it can be multigenerational, in the sense that when people leave the company, a lot of things just fall through the cracks during the handover.
Yuvanesh: Yeah, you're right 100%. A lot of times, when the very experienced people leave, the person taking over is not onboarded properly and then, things just start to fall apart.
In summary, I would say when deploying an AI product or initiative, just like you would do with any other technology or solution, do it the right way and it will be deployed successfully. Then you would also have people using it in a way that makes sense for the organisation.
D-Ron: So, Lauretta.io, your company, emphasises anonymous AI design. So, in your opinion, how important is it for AI systems in surveillance to focus on understanding what is happening rather than identifying individuals?
Yuvanesh: Again, this is another question I’d like to answer because that's what we specialise in. It is a very interesting question because we are always trying to find the balance between understanding and identifying.
And the reason for that is because, as we mentioned earlier in the conversation, there is what is socially acceptable and there's what's regulatorily acceptable. So, for some people, everything is based on what is legally possible, and then there are organisations that want to strive for a perfect balance between the two.
So, in our case, we believe that it is very important for at least, a part of the surveillance to focus more on what is happening than identification because identification can be passed on to somebody else. Again, it depends on the use case. You don’t need to implement technologies designed to catch terrorists for use cases like retail insights.
So, if you take the extreme use cases, which include critical infrastructure, government buildings, and military camps, they are high risk and high security. There can be no compromise on being able to identify people, because these places hold very sensitive information that must not be leaked at any cost.
So, in that case, the trade-off might be the privacy of every individual entering that space. But then again, because this is already a socially acceptable norm in that context, you don't really need to focus much on understanding. But you can do both understanding and identifying.
But now, if we take it down a notch and go to public commercial spaces where the general public is going in and out, then generally in those situations, you don't assume that every person coming into your space is up to no good or is a bad actor. What you're trying to do is look at potentially identifying something if or when it happens.
In our opinion, it is better to focus more on what is happening in such situations, because when you know what is happening and you have enough information to make sense of a person's behaviour or the incident occurring in your space, you can ultimately use that information to identify the person.
A great example I would always give is a commercial setting where you have a camera and you catch somebody shoplifting. At that point in time, you don't really care who is shoplifting, you're not trying to understand who the person is. What's more important is being able to react to the situation, addressing the situation at hand, preventing it, or doing damage control. After that has been resolved, it is only then that you proceed to identify who he or she is.
In most cases, and in most deployments, you always want to first understand whether something is happening, and you want to be able to address the incident before you find out who he or she is.
So, what we believe in and the way we design our systems is to prioritise understanding a person’s actions. Then, we leave identification to the relevant authorities, or to somebody else who has a valid reason for deploying something that allows identification.
D-Ron: So, to summarise, it really depends on the specific situation and the specific security needs of the location itself.
Yuvanesh: Yeah, it always boils back down to your use case, your requirements, and what level of granularity you need.
And we found that in most cases, the priority is always understanding what's happening, before checking the identity of the person who is causing the incident to happen.
D-Ron: Would you be able to share with us any benefits to anonymous AI where you focus on understanding what's happening rather than identifying individuals?
Perhaps like less processing power being required and less memory usage?
Yuvanesh: You're right in some cases. Again, it depends on the use case. But I would bring you back towards the ethical, responsible side of things.
Each country has its own set of regulations around how software and AI solutions can be deployed due to data privacy. So, developing something that focuses more on the anonymity of people and on what is happening, rather than on who is causing it, allows us to be regulatory-compliant internationally across all countries.
We can see regulation coming, and we are already prepared for it: we hold no biometric data or personally identifiable data on our systems. And from our end, as solution providers, we automatically comply with all these requirements. That provides assurance to our clients that they do not need to worry about anything significant if there's a data leak.
Now, different types of data leaks have different impacts. For instance, a data leak of somebody's name, a single data point, is not a good thing. But the damage that can be done with that piece of information is a lot less compared to a data leak of a person's name and credit card details. That’s because the person's financials become public and can be compromised.
So, as for us, what we try to do is design a solution, not just to be anonymous, but also to minimise the amount of data that we collect. So, we look at it as a need-to-have basis in that regard, and we don’t include anything that we don't need. So, in such situations, we are able to provide clients with the assurance that on our end, we have done everything that we can to ensure that there is data integrity, data safety, and data protection.
Also, for companies that value data privacy more than anything else, we meet those requirements, including companies that, as part of their corporate culture, want solutions that will be socially acceptable when deployed.
Lastly, one big benefit we offer is the rich set of capabilities that comes from the ability to track over time: there are hundreds of AI systems that can detect footfall, crowds, and activity, but our capabilities allow us to answer questions like “where was person X over the past two hours?”
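The "where was person X over the past two hours" capability can be reconciled with anonymity by keying everything to an opaque track ID rather than an identity. This is a minimal sketch under that assumption; the class, zone, and method names are hypothetical, not Lauretta.io's API:

```python
# A minimal sketch of anonymous tracking over time: each person is an
# opaque track ID (no biometric or identity data stored), and we log
# (timestamp, zone) observations so we can answer location-history
# queries without ever knowing who the person is.
from collections import defaultdict

class AnonymousTracker:
    def __init__(self):
        # track_id -> chronological list of (timestamp, zone) observations
        self._history = defaultdict(list)

    def observe(self, track_id: str, ts: float, zone: str) -> None:
        """Record that a track was seen in a zone at a given time."""
        self._history[track_id].append((ts, zone))

    def where_was(self, track_id: str, since: float) -> list:
        """Zones visited by this track at or after `since`, in order."""
        return [zone for ts, zone in self._history[track_id] if ts >= since]

tracker = AnonymousTracker()
tracker.observe("track-42", 100.0, "lobby")
tracker.observe("track-42", 160.0, "atrium")
tracker.observe("track-42", 220.0, "loading-bay")
print(tracker.where_was("track-42", since=150.0))  # prints ['atrium', 'loading-bay']
```

If identification ever becomes necessary, the track ID and its timestamps can be handed to the relevant authorities, consistent with the separation of understanding and identification described earlier.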
D-Ron: With increasing reliance on surveillance technologies, how do you see the industry addressing privacy concerns?
You touched on this just now with your mentions of the PDPA, GDPR, and few other things that you mentioned. So, maybe we could specifically hone in on what role responsible AI plays in mitigating these concerns?
Yuvanesh: Well, responsible AI provides a kind of framework on how solutions should be built and deployed in a manner that allows a client or user to achieve those outcomes without compromising on the privacy of the everyday person.
So, responsible AI as a framework (I'll say “responsible” instead of “ethical” because ethics is a grey area and can be subjective) also requires multiple stakeholders. You need a collaborative effort between government bodies and industry partners.
So, you've got the government body that provides the starting narrative and the direction the country as a whole wants to follow; then you have the industry partners and domain experts in the space, writing academic papers or developing solutions for the market. All of them have to work together to decide and identify how AI can be used in a meaningful way without compromising on privacy.
Again, this depends on your existing regulations and what is prioritised in them — is it data privacy or security? There are so many things around this that it is very hard to touch on every point. But I would say the essence of responsible AI is really about how to deploy AI in a manner that addresses not just the legal requirements but also the socially acceptable ones. And that will involve things like data minimisation.
Even if you had to collect biometric data, for example, in a very specific use case, what do you do with that data? How long do you retain it for? So, things like data minimisation — limiting it to the purpose of collection, being able to encrypt it and protect it in a way where it's almost virtually inaccessible by bad actors, being able to anonymise it in a way where no random person would be able to make sense of the data — are all things that form part of what I would define as responsible AI.
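The data-handling principles listed here (keep only purpose-bound fields, pseudonymise identifiers, enforce a retention window) can be sketched as follows. The field names, salt handling, and 30-day window are illustrative assumptions, not any specific regulation's requirements:

```python
# A minimal sketch of data minimisation: keep only the fields needed
# for the stated purpose, replace any direct identifier with a salted
# one-way hash, and purge records past a retention window.
import hashlib
import time

RETENTION_SECONDS = 30 * 24 * 3600  # hypothetical 30-day retention policy
NEEDED_FIELDS = {"event_type", "zone", "timestamp"}

def pseudonymise(identifier: str, salt: bytes) -> str:
    """One-way salted hash: linkable within the system, not reversible."""
    return hashlib.sha256(salt + identifier.encode()).hexdigest()

def minimise(record: dict, salt: bytes) -> dict:
    """Keep only purpose-bound fields; hash the raw ID instead of storing it."""
    out = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    if "subject_id" in record:
        out["subject_ref"] = pseudonymise(record["subject_id"], salt)
    return out

def purge_expired(records: list, now: float = None) -> list:
    """Drop every record older than the retention window."""
    now = time.time() if now is None else now
    return [r for r in records if now - r["timestamp"] < RETENTION_SECONDS]

raw = {"event_type": "loitering", "zone": "carpark", "timestamp": 1700000000.0,
       "subject_id": "badge-1234", "face_crop": b"..."}
print(minimise(raw, salt=b"site-secret"))  # face_crop and the raw ID are dropped
```

The design choice is the one described in the interview: data that is never collected or retained cannot be leaked, so anonymisation and minimisation do much of the compliance work up front.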
Responsible AI is the thought process that goes into developing a solution that you want to deploy for the market. A point to note: regulations are lagging behind the technology by a fair bit, and the EU AI Act is likely the nearest to implementation.
D-Ron: Thank you so much for your time, you have given us a lot of information!
ABOUT YUVANESH TS
Yuvanesh is a brilliant and enterprising personality who combines his prowess in technology and marketing to develop and promote high-functionality products in the fields of Artificial Intelligence, surveillance and agriculture.
Yuvanesh’s dynamic skills, combined with his educational background in biomedical sciences and marketing, have helped him to make valuable contributions to the various organisations he has worked with, spanning the IT, business development, agriculture, and technology sectors. Presently, he serves as the General Manager at Lauretta.io, an innovative surveillance outfit using AI-powered cameras to infer occupant behaviour and intent for commercial facilities.
Yuvanesh is a creative individual who demonstrates a high level of industry knowledge and enjoys solving challenges. You can learn more about Yuvanesh by checking out his LinkedIn profile.