Okay, so thanks for joining. So who are we, actually? What is ssh.com? We developed the SSH protocol in 1995, and today it is still one of the fundamentals of internet security for interactive access, machine-to-machine access and file transfer. Today we offer zero trust solutions for safe electronic communications and secure access to hosts and between hosts. We have more than 100 patents in our portfolio, we have more than 5,000 good and demanding customers globally, and we have been doing internet security for more than two decades now.
So, challenges for easy and safe access. Easy and safe access is of course the holy grail of security. Oftentimes safety, or security, is seen as a hindrance to ease of use, but we believe it doesn't need to be so: products can be built in a way that helps the end users achieve their tasks.
Security can actually also make things easier, and not just for the end users.
The ease of use should also apply to the infrastructure engineers, managers, architects and CISOs whose job it is to understand the security of their estate. And of course, safety is also mandated by regulations: the EU and other data protection laws set a lot of rules on how companies should handle sensitive data. So what do we want to tackle?
We want to understand and control the secrets in our infrastructure, be those passwords or keys, because if we don't, we leave an opportunity for malicious parties to access our IPR, hold our business hostage, or even use our infrastructure to attack others. And often we also delegate the responsibility of having good, secure passwords and a secure working environment to our end users. But the reality is that people make mistakes and forget, and users also come and go.
There are joiners, movers and leavers.
There are different hosts, systems and services users need access to, and managing the complexity of the different entitlements is laborious and expensive. If you take developers, for example: back in the day, all you had to do was give them access to a few tens of hosts, which most likely were in your own data center. But the world has really changed.
First, the physical servers were virtualized. Then those virtualized servers were lifted and shifted to the cloud. Then the applications on those VMs were containerized, and now we are moving towards serverless. Compute power is actually becoming like electricity: it just comes out of the socket in the wall.
And that's only the infrastructure doing the computation for your application. Developers also need access to code repositories, CI/CD pipeline components, testing software and cloud management applications, not to mention RPA, robotic process automation.
The CI/CD pipeline components, for example, need to be able to authenticate to and access dependent systems, and so forth. There are secrets and credentials all over the place. A simple example, actually: what is the most important step in a CI/CD pipeline? Developers need access to code repositories. And how is that access most often controlled? With static SSH keys. Basically, developers have a private key, they upload the public key to the code repository, and they authenticate to the repository with the private key.
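The static-key flow described above can be sketched with stock OpenSSH tooling. The file name and comment below are purely illustrative, not tied to any particular repository service:

```shell
# A developer creates a long-lived key pair (no passphrase here, for brevity):
ssh-keygen -t ed25519 -f repo_key -N '' -q -C 'dev@example.com'
# repo_key.pub is what gets uploaded to the code-hosting service;
# repo_key stays on the developer's disk indefinitely -- this is the
# standing secret an attacker would go after.
cat repo_key.pub
```

The private half never expires on its own, which is exactly what makes it a target.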
But if a malicious party gains access to that private key, they can potentially commit code into that repository in the name of the developer, and that code then ends up being built into the product automatically. Another challenge, which is becoming more apparent all the time, is in operational technology, or industrial IoT, connectivity: often, between the maintenance engineers and, for example, the factory machinery, the OT protocols used are old and varied, often unencrypted and lacking sensible authentication. And moving away from them or upgrading them is really not an option, because there are huge investments behind them.
Software is just a small part of an industrial system. So, as things stand, vendors like us have to provide ways to enable secure, just-in-time access to various OT devices without actually disrupting or affecting the communications between the control applications, the workstations and the target systems in the factory network. We actually went quickly from modern serverless paradigms to legacy environments with very old remote access tools, because this is the reality organizations live in: it's a mix of old and new for all of us.
But the overall challenge, regardless of all this, is: how do we make sure that only the right persons have access to the right resources, at the right level and at the right time?
So the basic premise is that users need access to data and resources to perform their jobs. And especially with COVID, it's quite likely that they're connecting from home, and bring-your-own-device usage is growing. So the need for the PAM solution, the privileged access management solution, to be more dynamic and device-agnostic is growing.
So what we can do is pull users in from different identity management systems like Active Directory, or the users can be pushed in via SCIM, or logins can be federated to an OIDC provider. Based on their identity management system attributes and group memberships, they get mapped to roles. And then, when a user logs in, we can be sure that their entitlements and authorizations are up to date, as they are refreshed at login time and then periodically during the session. The role memberships can also have context-based restrictions, such as time and source address.
We of course also need targets. Again, hosts can be added manually or via the API, pushed in via SCIM, or we can index targets automatically from the different cloud providers. At import time, the targets and target accounts also get mapped to roles. Then, when a user opens up the browser to our solution and authenticates with their corporate credentials, they see an up-to-date list of the resources they are entitled to access, based on their role configuration. They can make a single-click connection via the browser or by using native clients.
But the most important idea is that, since the connections go through PrivX, the end users never gain access to the secrets needed to authenticate to the targets, even if the connection isn't using ephemeral certificate-based authentication. And of course, all the user and admin actions are logged, and the connections themselves are stored on disk as protocol dumps and can be replayed via the admin UI, so a record of what the user did on the target system is never lost. And audit events can be sent to an external SIEM for analysis.
And then you can do behavioral analytics on the data, since you have it. By default, our footprint on the workstations is zero: there are no agents and there are no browser plugins, just a standard, modern HTML5 web browser. We have built VT100, RDP and VNC web clients directly into our browser experience. We also support connecting using native clients.
And again, no custom software at the workstation is needed. For application-to-application access using, for example, SSH, connections can be transparently made through PrivX just by changing the SSH configuration on the client host. On the target side, again, we are zero footprint: no need to install agents. The authentication mechanisms the end users previously used can still be used, though of course we recommend certificates wherever possible.
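As a rough illustration of how client-side configuration alone can reroute SSH traffic through a gateway, here is a generic OpenSSH `ProxyJump` sketch. The host names are invented, and PrivX's actual mechanism may differ in detail:

```shell
# Write an example client-side SSH configuration (hypothetical hosts).
# Every connection to a prod-* host is transparently routed through the
# gateway -- no agent or custom software on the client beyond OpenSSH.
cat > ssh_config.example <<'EOF'
Host gateway
    HostName privx.example.com
    User alice

Host prod-*
    ProxyJump gateway
EOF
cat ssh_config.example
```

With that fragment in `~/.ssh/config`, a plain `ssh prod-web-1` would hop via the gateway without the user doing anything differently.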
So what is this kind of access, then? It's a method to authenticate a user to the target without the user ever handling or seeing any secrets.
What's more, it's passwordless, or keyless, of sorts. All the secrets required to authenticate the user are baked into a certificate that is created just in time for that specific authentication event. After the connection is established, the certificate expires automatically, leaving no secrets behind to lose, share, steal or misuse. It also eliminates the need to manage, store, vault or rotate any secrets. Not only is it more secure, but it's a huge boost to operations and it keeps access management simple.
Once our customers have established ephemeral certificate-based authentication, they really don't want to go back to a world of rotating secrets. Our product was actually most likely the first privileged access management solution to take advantage of OpenSSH certificate authentication and productize it.
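The certificate mechanics can be demonstrated with plain OpenSSH. This is a local sketch with a stand-in CA (in the product, the CA is managed by PrivX itself); the identity and principal names are made up, and the certificate is only valid for five minutes:

```shell
# Stand-in CA key (in a real deployment the CA lives inside the PAM product):
ssh-keygen -t ed25519 -f ca_key -N '' -q
# A user key pair created just for this session:
ssh-keygen -t ed25519 -f session_key -N '' -q
# Sign the user's public key into a certificate: identity "alice",
# allowed to log in as principal "ec2-user", valid for 5 minutes from now.
ssh-keygen -s ca_key -I alice -n ec2-user -V +5m session_key.pub
# Inspect the short-lived certificate (shows the Valid: from ... to ... window):
ssh-keygen -L -f session_key-cert.pub
```

Five minutes after issuance the certificate is useless, so there is nothing worth stealing from disk and nothing to rotate.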
And since then, we have created a similar system for Windows target hosts with what we call virtual smart card authentication.
And even though we strongly recommend jumping into certificates, we do recognize that in some environments, with legacy and more restricted devices, it is not possible. This is why we also support a fully-fledged secrets vault and vault API, so we can also use vaulted secrets to authenticate users to the targets. And in addition to that, we also support a shared password and secrets manager, so users and admins can easily share secrets through the product UI. But we don't really believe that secret rotation, if you have to do it, is a viable option in the long run.
We have seen infrastructures doing tens of thousands of secret rotations per hour, and you can imagine how fragile a system like that is.
So what is zero trust in a nutshell? Zero trust states that the trust between entities should never be implicit. In other words, even if both devices are in a secure corporate network, the devices should still mutually authenticate: the client should verify the server, and the server, in addition to authenticating the user, should also confirm the device identity and integrity. Just-in-time access, on the other hand, ensures that the clients connecting to the resources have access only when needed. There are no standing privileges.
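The "client verifies the server" half of mutual authentication can also be done with certificates rather than per-host key fingerprints. A minimal OpenSSH host-certificate sketch, with hypothetical names throughout:

```shell
# Stand-in host CA:
ssh-keygen -t ed25519 -f host_ca -N '' -q
# The server's own host key, signed by the CA (-h marks a host certificate):
ssh-keygen -t ed25519 -f host_key -N '' -q
ssh-keygen -s host_ca -h -I web01 -n web01.example.com -V +90d host_key.pub
# Client side: trust any example.com host whose certificate this CA signed,
# replacing individual known_hosts entries (and their trust-on-first-use risk):
echo "@cert-authority *.example.com $(cat host_ca.pub)" > known_hosts.example
cat known_hosts.example
```

The client then rejects any server in that domain that cannot present a CA-signed host certificate, so trust is explicit in both directions.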
So, using PrivX, and especially the certificate-based authentication, what can we achieve? Automation: the target systems can be configured either to trust the PrivX CA, or the secrets in the target systems can be configured into the PrivX vault, which enables immutable deployments. And immutable deployments mean infrastructure as code, through a cloud development kit or similar technologies, which in turn means that the target systems, after deployment, are never changed or configured outside of the CD pipeline.
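On OpenSSH targets, the "trust the CA" option boils down to a one-line sshd setting that an infrastructure-as-code pipeline can bake into every image. This sketch writes to local files with a stand-in CA key; a real deployment would install into `/etc/ssh/` instead:

```shell
# Stand-in for the user CA public key exported from the PAM product:
ssh-keygen -t ed25519 -f user_ca -N '' -q
# The fragment an image build or cloud-init step would install on each target
# (local paths here; in production: /etc/ssh/privx_ca.pub and sshd_config):
install -m 0644 user_ca.pub privx_ca.pub
printf 'TrustedUserCAKeys /etc/ssh/privx_ca.pub\n' > sshd_privx.conf
cat sshd_privx.conf
```

Once `TrustedUserCAKeys` is in place, sshd accepts any user certificate signed by that CA, so no per-user keys or passwords ever need to be provisioned onto the host.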
What happens if a security system is difficult to use? People try to figure out ways to bypass it. The most extreme case I've seen is developers ordering DSL lines to the office for internet connectivity, because the corporate network was too restrictive and impossible to work with. And we actually believe that we can make the user experience better: users have a single pane of glass, the browser, through which they can see all the systems they are entitled to access. They don't need to worry about passwords or accounts.
Everything is prepared for them through vaulting and keyless, certificate-based authentication. It is possible to gradually move the estate to ephemeral certificate-based authentication. This is especially true with our Universal SSH Key Manager product, which, for large SSH infrastructures, can scan the estate, find which entitlements are correct, and gradually move the estate over to ephemeral certificate-based authentication using PrivX. And instead of all the users having admin or root privileges, with role requests it is possible to create temporary entitlements.
So, having spoken a bit about just-in-time earlier: PrivX enforces just-in-time access by ensuring that when the user logs into PrivX they get mapped to roles, and subsequently, during the session, the roles are re-checked against the identity provider. And the second factor is that, since the connections are made using ephemeral certificates, they are short-lived and can only be used once. There are no standing privileges. And like I hinted earlier, the point of our solution is to be a single pane of glass to the whole estate, be it targets on Linux or Windows hosts, on-premises or in the cloud, network devices, or web services.
So we offer a seamless experience to connect to all of those, and naturally all the connections to the target systems are audited and recorded, and all the actions are persisted in logs and sent to the SIEM. And oftentimes the products in our category are quite heavy and built on commercial operating systems or databases. We built our product using a modern microservices architecture: it's lean, it can be deployed the way clouds were actually meant to be used, it scales horizontally, and it is completely driven through open components and APIs.
So, wrapping this up, I want to leave you with some thoughts. The complexity of modern IT systems and infrastructures is growing all the time, and there are more and more secrets, tokens and keys to manage. Instead of managing these things, focus your efforts on getting rid of them. The future is keyless and passwordless: even big players like Apple and Microsoft are moving towards biometric first-level authentication instead of PINs and passwords. Having standing credentials and secrets to manage means that your admins are tied up with processes.
Even if the processes are automated, they are unlikely to keep working through infrastructure changes and upgrades, and they need maintenance. I talked earlier about the journey from physical servers to serverless. In the past, you had two camps: on one hand, the application developers with their CI pipelines, the output of the pipeline being an application package, and on the other, the operations guys deploying it to the production servers. With the advent of DevOps and continuous delivery, that division is becoming blurred.
And having said all of this, it is not a perfect world; we realize that all too well.
There are legacy systems, there are services which can't be changed, and there are a lot of corner cases. Sometimes the vault is the only way to go, and that's fine, as long as you aspire to improve and reduce complexity. Basically, try to pick a product which lets you handle the necessary use cases and enables you to migrate at your own pace. All right, thanks. We actually have Andrea and David here at the venue, so if you have questions, have a chat with them.