K005: Identity Management in Modern Infrastructure
transcript
Kamalika: Any online seller today has two major objectives: become the best seller and the fastest seller in their sector. Their business is heavily dependent on a shorter time to market and on selling the best product or service in their sector. They can only achieve it through continuous and consistent delivery of quality products. But often fault lines in infrastructure, commonly labelled as infrastructure issues, pose bottlenecks in their delivery, most of which can be addressed proactively. When it comes to infrastructure, unfortunately, a reactive approach to solving these issues has become the norm, although proactive measures could prevent them; they get ignored since they are considered non-functional or operational. Hello, and welcome to Cloudkata, the modern infrastructure podcast. This is season one, Anatomy of Modern Infrastructure, and today is Episode Four: Identity Management. From this episode onwards, you are going to hear more about infrastructure management, and learn how to build fault-tolerant, cloud-ready modern infrastructure for your services and applications. I am your host and coach, Kamalika Majumder from Stacks LLP, and I am here to take you through a decade's experience of taking infrastructure through the school of life. So fasten your seat belts, and let's take off.
Intro Music
So far, we have learned about the three core factors: network, system and storage, which constitute the foundation of modern infrastructure. If you have missed these episodes, don't worry, they are all available with downloadable transcripts on cloudkata.com. So go to the podcast on cloudkata.com and catch up on the anatomy of modern infrastructure covered so far. Today, I'll take you through the fourth factor: identity management. Accessing modern infrastructure as a service, or cloud, requires identity management to ensure that systems are protected both logically and physically against unauthorised access. Now, you may ask: if you are on cloud, how do you manage physical access? In my first episode, I explained how to have a segregated, dedicated private network through virtual private clouds. So when you are on cloud, go with a virtual private cloud. That way you will be able to control your network and the systems, which are not
assigned to anyone else. And if you want to physically control access to your systems, you can go with dedicated servers, so that server access can be controlled by you. That way you can also control the physical access to your server. However, this again depends on your requirements; you may or may not need it. Only certain organisations in certain sectors will need to physically control access to their systems. However, systems should always be logically protected against any unauthorised access.
And if you're on a VPC, the cloud provider ensures that your virtual machines are dedicated to you and not to anyone else. The physical server may be shared, but the virtualised machines where your application is hosted are dedicated to you. Going with a reserved server will also dedicate the physical server on which the virtual machines are running. However, that can be costly, so it should only be adopted if your company's compliance needs require it. The second thing that identity management has to ensure is ease of access. Users are not always tech savvy; they may or may not be able to programmatically control everything. So they need a friendly user interface through which they can access the systems and services. And the third thing that is needed is ease of onboarding and offboarding: the faster you can onboard a new user, and the faster you can offboard them, determines how fast you can start your delivery and development work. So, three things: systems should be protected, they should be easy to access, and they should be easy to onboard and offboard users into. Now these can be achieved through a one-stop station for change management, tracking and tracing: centralised user management and standard access control policies and permissions. Once you have a centralised identity management system with well defined policies and permissions, you will be able to control who has access to which layer of the infrastructure, system or services, and you will also be able to ensure how easy it is to grant and to revoke access.
So let's look into how these can be achieved. In order to understand that, we need to first understand how access happens in identity management. Whenever you are accessing a system or server, it goes through two phases of validation. First is authentication, wherein it validates you as a user along with your secrets. So you provide your user name, you provide your password or a key, and then it validates: yes, this is a valid user whose ID is stored somewhere in my identity management system. And then it says authentication successful, which means you have successfully logged into the system or service.
No matter which mechanism you're using, the next phase comes once you are logged into the system or service: what areas in that system are you able to access? You might have noticed that, once you have logged into any centralised system or service, be it infrastructure as a service, software as a service or platform as a service, you may or may not have full access to everything that is available. You may have only restricted access to certain services. Let's say you're on cloud:
sometimes you may see that you only have access to view the machines or the instances, but you do not have access to edit anything. So that means there is an authorization layer: you are not authorised to edit anything, but you are authorised to view it or to go inside it. This mechanism is called authorization. So any kind of access management has two stages: first, authentication through username and secrets, and next, authorization. Authorization happens through two things: roles and permissions.
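To make the two phases concrete, here is a minimal sketch in Python. The user store, salt, roles and permission sets here are all made up for illustration; a real system would delegate authentication to an identity provider rather than hashing passwords itself.

```python
import hashlib
import hmac

# Hypothetical identity store: username -> (salted password hash, roles).
SALT = b"demo-salt"
USERS = {
    "alice": (hashlib.sha256(SALT + b"s3cret").hexdigest(), {"viewer"}),
    "bob": (hashlib.sha256(SALT + b"hunter2").hexdigest(), {"viewer", "editor"}),
}

# The authorization layer: role -> allowed actions.
ROLE_PERMISSIONS = {"viewer": {"view"}, "editor": {"view", "edit"}}

def authenticate(username: str, password: str) -> bool:
    """Phase 1: validate the user and their secret."""
    record = USERS.get(username)
    if record is None:
        return False
    candidate = hashlib.sha256(SALT + password.encode()).hexdigest()
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(candidate, record[0])

def authorize(username: str, action: str) -> bool:
    """Phase 2: check what the logged-in user is allowed to do."""
    _, roles = USERS.get(username, (None, set()))
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in roles)
```

So a "viewer" like alice passes authentication but is still refused an edit by the authorization layer, exactly the view-only situation described above.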
Now let's look into authentication. Authentication is done using identities. So what is the structure of an identity? It comprises two parts. The first part is your accounts, or usernames, or your IDs, whichever you may call them based on the system you're using. Accounts can be of various types. There can be user accounts, which are real human beings accessing the systems and services. There can be service accounts, which are used by applications or services to talk to each other or to integrate with each other. There can be system accounts, where virtual machines or instances
allow a user to log into those machines, or sometimes allow these machines to talk to each other, say for file transfer or something like that. So these accounts sit at the system level.
Then there are cloud accounts: when you are on a certain cloud provider, you will have to authenticate to that cloud provider's console or account to get inside it and access the infrastructure as a service, software as a service or platform as a service, whichever model you are using. These cloud accounts are typically used for cloud operations, let's say cloud billing, creating an organisation, or enabling or purchasing a service, and likewise.
Overall, you can divide identities into four kinds of accounts: user accounts used by users, service accounts used by applications, system accounts used by servers or systems, and cloud accounts used for cloud operations. Now, this is the first part, the account, which is the first entry you make whenever you are trying to access any service. The next part is the passwords, or secrets, or credentials, whichever you may like to call them. And the passwords can be of various types. They can be simple passwords typed in by users. For service accounts, they can be service passwords, or keys or tokens which are transmitted by the service during runtime. There can be encryption keys, which are used by an encryption management system during machine encryption or decryption. There can be SSH keys; SSH keys are referred to as a passwordless mechanism. They are also a kind of secret, some sort of password, but not a password which is typed in; it is more like a key in
either RSA or OpenSSH format, whichever, and it is much more secure. So these can be SSH keys. And there can be API secret tokens or access key tokens; these are most commonly used. The SSH keys are most commonly used on systems and servers; they are not specific to the cloud, they can also be used on any on-premise system. And these API secrets are more common on the cloud, when you are accessing any cloud service. Let's say you have an EC2 instance and you want to access an RDS instance from the EC2 instance: you would want to use API keys or access tokens between the services instead of using a password. Then, in the containerization world, these can be certain keys stored in files, like PEM files, which are used by Kubernetes-like systems and called kubeconfig. So kubeconfig, though it is called a config, is actually a set of keys which is used to access the Kubernetes API. Likewise, for other software systems there can be database passwords or keys, depending on which database you're using. If you're using Postgres, there will be a Postgres database password, or a MongoDB password. So, all of these are various kinds of passwords that are being used in our regular day-to-day life to access the systems, services and infrastructure that we set up on cloud.
Now, let's look into where these identities are stored, or should be stored. As I mentioned earlier, in order to achieve ease of access and ease of onboarding and offboarding, plus securely provide authorised access to systems, it is very important to manage all these identities well. On the whole, if I have to summarise it, there are at least four groups of identities, and if each one of them has 100 identities, then there will be 400 identities that you need to manage. So, it is very important that you store all the identities in a centralised location. Otherwise, you will have different systems handling different identities: for users you will use one kind of identity storage, for systems you will use something else, for
services, you will use something else, and for cloud accounts, something else again. It will be not just operationally messy, but also confusing to your users, because they will have to remember four different identities: one for user access, in cases like say email IDs, while your project management software will be using something else, then again your cloud account and then your cloud software. So, it will be very confusing for your users, plus it is also a headache for your operational activities. Because, as I mentioned, it should be not just easy to provide access, but it should also be easy to revoke that access. People will be leaving your organisation, right? Nothing is static here, everything is dynamic. So when your users are leaving, or are no longer
active, you need to revoke their access. And if you have 10 different places where you will have to go and clean up your users... trust me, I have seen organisations retaining, or not cleaning up, their identity server for 10 years and more. And it leads to security breaches, because the same credential will still be in use somewhere, and it is not revoked. So somebody, someday, will try to
brute force into your services, and it will compromise your service. So it's a big security risk. So all these identities should be stored in a centralised place. And today, there are services available for this, and these are called identity providers. Before the cloud era, there used to be one and only the Active Directory server, be it Microsoft Active Directory, or some other Linux directory server or LDAP. But today, we have Google G Suite, which is a centralised identity provider. And there are other
SaaS products available which can be used as identity providers. So no matter which one it is, make sure that all your identities are stored in a centralised, single identity provider. Don't use one identity provider for your users' emails and another for your systems. Again: centralise the identity provider. And whichever identity provider you use, make sure that it has Single Sign-On capability. Single Sign-On is a mechanism where your user needs to use only one user ID or email ID and password.
And that can be used through,
you know, federated authentication, across multiple systems. Let's say you have logged into your Gmail, and you want to log in to your cloud account, say an AWS account. Now, you have SSO enabled, and the AWS account is integrated with, say, G Suite.
So when you log in to AWS, it will ask you whether you want to log in through your Gmail ID. You just have to say yes, this is the email ID I want to use, and the two systems, G Suite and the AWS account identity management, will share the assertion: yes, a G Suite user with this email ID wants to access this, and I am validating it because we have both been integrated with a centralised IDP. And that's how you can automatically log in to your AWS account just by selecting your Gmail ID. Now, you have to understand very carefully that you are not sharing your password with AWS, or copying it to AWS; it's a federated authentication mechanism. The way it works is that the two organisations, let's say G Suite and AWS, have been integrated through your account, by you authorising them to integrate, and they have validated that this email ID is a valid user for this account in AWS. So this is called federated authentication. Likewise, single sign-on can be used for various other systems and software; many software products today provide a single sign-on mechanism with different types of IDPs, G Suite being one of
the first or most common ones that are supported. So have a centralised IDP where all the user IDs, email IDs or unique IDs are stored, and make sure it is enabled with single sign-on. Now, what is the other benefit of single sign-on? For users, they get ease of access: they don't have to enter a username and password every time they go to a different service, and they don't have to use a different username or password for each one. The second is ease of onboarding and offboarding. Say a user has left your organisation and you need to revoke that user. You just go ahead and revoke them in the centralised IDP, in this case, let's say G Suite. So you just delete that user from your G Suite, and their access is gone from all the systems which are integrated with G Suite via single sign-on. That is the beauty of having a centralised IDP with single sign-on: you can control access to all the systems which are integrated with it. But there are still some other secrets that you will hold. Let's say your API tokens, which are typically used for programmatic access between your services. Say you have a CI job for deployment, which deploys or creates infrastructure. You will have some CI agents which will be launched to create an RDS server or an EC2 instance, and they will use certain API keys or access tokens. Now, that is one form of secret. Then I mentioned the SSH keys, which you will use to log into any Linux system.
Then, as I also mentioned, there are encryption keys, which are typically used by a key management system. Let's say you have an encrypted disk: to encrypt and decrypt, you need a key pair, and that key pair has to be stored somewhere so that the key management system can decrypt it, or if you are going to decrypt it yourself, you also need to have it. Then there is another type of secret in the containerization world, like say Kubernetes: the kubeconfig secrets, very commonly used for application deployment, you know, container deployment in a Kubernetes cluster. Basically, if you have a microservices architecture, you need a kubeconfig to deploy any microservice as containers. So these are another form of secrets; mostly these are keys, tokens, PEM files and the like, and they need to be stored securely. And just like the user identity provider, these also need to be stored in a centralised place, because they will be used during runtime by applications. And sometimes even a user may want to programmatically access some of the services for logging, debugging or some other operational needs. So you need to store all of them in a centralised place, and it needs to be secured. What you need for them is a centralised secret management system, and the most common example of this is Vault; there can be different versions of Vault.
A vault for secret management should be used to store all your secrets: keys, tokens, or even passwords. If you have static passwords, if you have a software which does not have single sign-on with your identity provider, you will need to set up a user ID and password, so you need to save those also. For that, you store them in a centralised vault or secret management system. And what are the advantages of storing them in a centralised vault or secret management system? First, all these secrets can be stored in centralised, encrypted secret storage. And when you're storing these secrets, you can actually segregate them into different groups with different policies, so that one application does not accidentally end up using the secret of another application. Vault gives you various capabilities and config parameters like this. The second advantage of using a secret management system is key rotation. Now, you have created a key today; it is not a good practice to use the same key forever, or for your entire lifetime. It is important that this key gets rotated and refreshed. Because if attackers notice that you have been using a key for too long, it is very easy for them to launch a brute force attack into your system and get inside.
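A rotation policy like the one just described can be checked programmatically. This is a sketch, assuming you track when each secret was created; the secret types and maximum ages in the policy table are invented examples, not recommendations from any particular tool.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical rotation policy: maximum allowed age per secret type.
ROTATION_POLICY = {
    "api-token": timedelta(days=90),
    "ssh-key": timedelta(days=180),
    "db-password": timedelta(days=30),
}

def keys_due_for_rotation(inventory: dict, now: datetime) -> list:
    """Return the names of secrets older than their allowed maximum age.

    `inventory` maps secret name -> (secret type, created-at timestamp).
    """
    due = []
    for name, (kind, created_at) in inventory.items():
        max_age = ROTATION_POLICY.get(kind)
        if max_age is not None and now - created_at > max_age:
            due.append(name)
    return due
```

A scheduled job running a check like this, backed by the metadata your secret store already keeps, is what turns "we should rotate keys" into something that actually happens.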
Then there is another kind of secret. I won't say it is a key, but it's a kind of authentication mechanism: a cert, which is a certificate. Certificate-level authentication is used for API-to-API communication, especially between two parties. And it is very important: if you are purchasing a certificate from a known root authority, it has a certain time limit, like say one year or two years or three years, and after that you need to update the certificate. Now, if you have not stored the certificate in one particular place, and your application is not referring to a central management system to fetch it, you will have to manually go and change it in each and every place. You might automate that with scripts, but it is not that feasible. If you have it stored in a centralised secret management system, and your application is just given the link to that secret management system, the path and the file name, it will go ahead and fetch it. So you only need to go and change it in the secret management system, and the application will refresh it from there. Also, it enhances runtime usage. If you do not have a secret management system and you are using API tokens, SSH keys or SSL certificates, you will have to store them on the virtual machine itself. Now remember, in my second episode on system, I told you about immutable infrastructure: you need to make sure there is no stateful data sitting on your infrastructure, you need to make it truly immutable.
But if you store, say, passwords or secret tokens or SSH keys or SSL certificates only on your system, and depend on copying or pulling them from there, then if the system is gone, you will lose your SSH keys and tokens. And I have seen, in many cases, that systems as fancy as container orchestration platforms, if the secret management has not been configured properly, the entire platform stops working just because of an expired token. So it is very important that you keep these in a secret management system in a centralised way, where they can be accessed during runtime and you don't have to store them statically on the system. This also enhances
the renewal mechanism, because then you can set a scheduled renewal and you can set an alert: okay, this is going to expire, and you will get a notification ahead of time. Otherwise, if you're storing it on a static system, you will have to keep remembering the date. If you miss it, your certificates expire or passwords expire, and then you are gone; you will simply forget about it.
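The expiry alert just mentioned is easy to sketch once the expiry dates live in one place. The certificate names and the 30-day lead time below are assumptions for illustration; in practice the expiry timestamps would come from your secret store or from the certificates themselves.

```python
from datetime import datetime, timedelta

def expiry_alerts(certs: dict, now: datetime,
                  lead: timedelta = timedelta(days=30)) -> list:
    """Return (name, days_left) for certificates expiring within `lead`.

    `certs` maps certificate name -> expiry timestamp, as a central
    secret store could expose it. Soonest expiry comes first.
    """
    alerts = []
    for name, expires_at in certs.items():
        remaining = expires_at - now
        if remaining <= lead:
            alerts.append((name, remaining.days))
    return sorted(alerts, key=lambda item: item[1])
```

Wire the output of a daily run like this into your notification channel, and the "I forgot the date" failure mode disappears.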
Having a secret management system also enables configuration-managed infrastructure as code: you can programmatically manage the secret management system, you don't have to go into it manually. Plus, it also has a UI which can give read-only access to your users. So you don't have to send credentials over email or in files and folders, which risks data copies, data compromise, or violating your security policies. You should never share your passwords and credentials over chats or emails, etc. Instead, you can generate these passwords and tell users: okay, go to the secret management system URL and access or download your password there.
You can also have temporary credentials generated. Suppose somebody wants to have access to one system for just a day: you can have a temporary token generated for a day and provide it to them. So in short, identities can be stored in two places: one, a centralised identity provider, which holds all the account names, email IDs, IDs, etc., and two, a vault for secret management storage.
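Those day-long temporary credentials boil down to a random token plus an expiry check. Here is a minimal sketch; the in-memory store is purely illustrative, since a real setup would lean on a secret manager such as Vault, which issues leased, expiring tokens natively.

```python
import secrets
from datetime import datetime, timedelta, timezone

# Illustrative in-memory token store: token -> (user, expiry).
_tokens = {}

def issue_temporary_token(user: str, ttl: timedelta, now: datetime) -> str:
    """Generate a random URL-safe token for `user`, valid for `ttl`."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = (user, now + ttl)
    return token

def validate_token(token: str, now: datetime):
    """Return the user if the token exists and has not expired, else None."""
    entry = _tokens.get(token)
    if entry is None:
        return None
    user, expires_at = entry
    return user if now < expires_at else None
```

The key property is that nothing needs to be revoked by hand: once the day is over, the token simply stops validating.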
Next up, let's talk about the second stage of identity management, once authentication is completed: authorization. Authorization means providing selective access to your user. When you don't want your user to have access to everything, or you want to group users based on what they need to know, that's when you enable authorization. It is an added layer of security on your identity management, so as to prevent any risk of unnecessarily compromising your system. It also prevents any unauthorised user from accessing something they are not supposed to. Let's say you are in a bank, developing a banking system. Your developers do not need access to the core banking data, right? Your operations team may have access to infra, but they don't need access to the databases. And especially for production access: not everyone needs write access to production. In fact, production access should always be read-only for users, and write access should be given only to the applications and services which are authorised for it. So how do you achieve authorization? This can be achieved through centralised role-based access control for all users and service accounts. And when you're creating a service account, you should have a strong password policy covering all access keys and passwords, and you should have rotation policies; as I mentioned earlier, they should be rotated. It should not be that one password is used for years, forever. All accounts should be secured with MFA, multi-factor authentication. Earlier, it was two-factor authentication; before that, it was only one factor, which is username and password. Then came two factor, with username, password and some auth token or OTP. Now it is MFA, which is, say, OTP plus some security questions, etc.
You will most likely see these when you're logging into your bank's online banking account, where you have a username, an OTP, some secret question and answer, and likewise. So these are added layers of security for the system.
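The strong password policy mentioned a moment ago can be enforced with a simple check like this sketch. The minimum length and character-class rules here are assumptions; adapt them to your own compliance requirements.

```python
import string

# Hypothetical minimum policy for service-account passwords.
MIN_LENGTH = 16

def violates_policy(password: str) -> list:
    """Return a list of human-readable policy violations (empty means OK)."""
    problems = []
    if len(password) < MIN_LENGTH:
        problems.append(f"shorter than {MIN_LENGTH} characters")
    if not any(c in string.ascii_uppercase for c in password):
        problems.append("no uppercase letter")
    if not any(c in string.ascii_lowercase for c in password):
        problems.append("no lowercase letter")
    if not any(c in string.digits for c in password):
        problems.append("no digit")
    if not any(c in string.punctuation for c in password):
        problems.append("no symbol")
    return problems
```

Running this at account-creation time, together with the rotation check from earlier, gives you both halves of the policy: strong to begin with, and never too old.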
And MFA is especially important when you are dealing with cloud accounts. As I mentioned earlier, there are the operational accounts, say billing, which show your payment information, or infrastructure creation. And in cloud, there is one account that you create when you're registering with any cloud provider, commonly known as the root account, or admin account. That is, you know, the super admin account of the entire cloud
account that you have created. That should be enabled with MFA. And especially for any access that you are providing which is not programmatic, which is via the web console, enable MFA along with single sign-on with your IDP; this is very important. And for the root and super admin account which you create in the beginning, make sure that it is subscribed to a group email address, and not your CTO's email ID or your billing admin's email ID, because that will cause issues when that person is not around and you need to fix something or enable something. If you assign it to a group email ID, it will provide long-term, group-level operational access, and it will also avoid loss of control.
So in the case of role-based access control, you will need very well defined roles for cloud console, API and services. There should be separate policies for your IAM roles; this is specific to clouds. Say, you can have a developer and an admin role, and you can shape these roles based on your use cases: it can be developer, QA and admin, or maybe just users and admin, or devs and admin. And then you should have very well defined API access policies and service control policies. The mapping of IAM roles should be done to a group, so that you can manage the permissions in one place and reflect them in multiple roles.
And again, in this case also, for the roles, if possible and provided by your cloud provider, and if you're using G Suite, you can map them to Google Groups. You can then also do group management from G Suite; you don't have to do group management, role management or authorization management from your cloud account. So you have a centralised place where you have standardised roles and user accounts created for your organisation, and you map them to your cloud account.
I mentioned keys. When it comes to key management, make sure there is a key management system, and that access is controlled through individual rather than shared keys. Never share your keys or credentials. Role-based access control is only successful if your users are not sharing their credentials with someone else. Sometimes it happens that, because of some urgent operational demand, people tend to share their credentials with a teammate. Do not do that. Truly, there is no way of stopping it; it is only awareness that you can build. Shared credentials mean you will not know who is accessing the system, and that puts your system or service access at risk. So be very careful: do not share any keys or passwords among your teammates; create a unique ID for every user who is accessing the system. And that holds true even for services. Do not have one anonymous service username that is used across your compute systems, your API gateways and your databases. Especially in databases, do not just create one database username and password and distribute it across all system-level integrations; that is highly risky and, again, very prone to brute force attacks.
Now, after you define the roles, you identify them: okay, I have users, I have admins, I have super admins; or you can say I have engineers, I have ops people and I have billing people. These roles should be structured with proper policies and permissions. Now, what are the permissions and what do they look like? Permissions can be based on data access, especially at the database layer: they can be either read-only or read-write. Read-write should typically be given to the services which are writing to the databases, rather than to users. Nowadays, very rarely will you directly upload any table or data into your database; you should prevent that, and instead create a service which can do the upload. Direct access to the database for any user should be prevented; only service-level access should be given. Read-only access can be given to validate whether the data has been uploaded or written correctly. So one level of permission can be set based on the data and database.
This form of accessibility also applies to anything that you are giving to a user. Your user can be a read-only user or a read-write user, and this comes in when you are giving access to the user, especially in cloud. This can be programmatic access: users can get API access for specific services, and even within that API access, you can define which APIs you want to allow. Suppose you want somebody to only have access to create EC2 instances: you can define permissions in your policy which allow only creating EC2 instances. This can be filtered and narrowed down even further. Likewise, you can make them admin users, if these are your operations engineers who are creating infrastructure, or your CI agents which are doing the deployment; there has to be an admin user which can create these infrastructure services.
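A narrowly scoped permission like the "create EC2 instances only" example can be expressed as an IAM policy document. The sketch below builds one in AWS's JSON policy format; the exact action list is an assumption to adjust to what the role genuinely needs (and `Resource` can usually be narrowed further than `"*"`).

```python
import json

def ec2_create_only_policy() -> str:
    """Build a narrowly scoped AWS-style IAM policy document (as JSON)
    that allows launching and describing EC2 instances and nothing else.
    Action names follow AWS's service:Operation convention."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["ec2:RunInstances", "ec2:DescribeInstances"],
                "Resource": "*",
            }
        ],
    }
    return json.dumps(policy, indent=2)
```

Because the policy is plain JSON generated from code, it can live in version control alongside the rest of your infrastructure as code, which is exactly the traceability argument made below for avoiding console edits.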
And this is the programmatic access that you can grant, and you will have to define it in a granular way. When you're defining programmatic access for your users or services, make sure it is very well defined and granular. If that service does not need access to certain servers, do not give it. Most often, it is very easy to just choose, you know, AWS admin access and grant access to everything, but remember, you're taking a risk: if that system somehow gets compromised, the attacker can then go to your billing account, change things in your payment account and do some messy things over there. So make sure you define your policies and permissions in a very granular way. The second level of access or permission that needs to be set is for the web console. Some users may not need programmatic access through APIs, etc. Web console access is a more generic and easy access that can be given to users so that they have visibility into the status of
the system: they can look into logs, they can check whether their servers and systems are running or in some faulty state. So web console access should always be read-only. And again, as I mentioned earlier, it should be integrated with SSO, single sign-on, so that if a user no longer needs access, or is no longer present, you can just revoke access from one place and that will remove them from every application. Web console access can also be given for certain things like buckets, you know, object storage, like an S3 bucket or a GCS bucket, where you will store some kind of static files and folders. You can define policies for those too. Don't just make them open so that anybody and everybody can upload, download and copy files. You need to make sure that only authorised people are allowed to download and upload files in those buckets, and that those buckets are encrypted as well. Web console access can also be given for many services, let's say RDS or Kubernetes, to see, you know, utilisation, etc. But for all these accesses, the recommendation is to give read-only rather than creation or edit-level access. So do not enable any
write access from the web console, make sure that it is programmatically handled, because that way you will be able to create proper infrastructure as code if you are giving access over the web console to create an edit. That means it becomes untraceable because the web console is not version control. Only programmatic access or programmes can be version control. So if you're doing anything from the UI, it is lost and you'll forget what configuration changes you made. So it is very easy to use the web console and give editor access and it allows so but do not do it because in that way, you will lose track of what changes you have made. So make sure that we have console access is only provided with read only access and maybe some downloadable file access for bucket policies. And that's all.
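To make "granular" concrete, here is a minimal sketch of a least-privilege AWS IAM policy document, expressed as plain data. The bucket name and statement ID are hypothetical placeholders, not from any real account:

```python
import json

# Hypothetical least-privilege policy: read-only access to a single S3 bucket.
# The bucket name "example-app-assets" is a placeholder.
read_only_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyAppAssets",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-assets",
                "arn:aws:s3:::example-app-assets/*",
            ],
        }
    ],
}

# Contrast this with the risky shortcut the episode warns about:
# Action "*" on Resource "*", i.e. admin access to everything.
print(json.dumps(read_only_bucket_policy, indent=2))
```

The point of the sketch is the shape: a named statement, an explicit action list, and resources narrowed to exactly what the user or service needs.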
Now, as we are talking about cloud identity management, as I mentioned earlier, there are various ways you can define identity and access control rules and policies on cloud. Typically, per cloud provider, there is a service called IAM, whether it be AWS or GCP / G Suite, and that is the place where you can define your accounts and policies. Now, when you're doing identity and access control in cloud, the first thing you will have to decide, when you're registering with any cloud provider, is how to create an organisation. Various cloud providers have various models of creating an organisation: in AWS, they have an organisational unit (OU) model, whereas in GCP, they have a folder and project model.
The names can be different, but the idea is the same: they want to segregate the use cases. Let's say you have an organisation called XYZ. The first thing you do, whether via AWS or in G Suite, is register the organisation with your domain name. Now, under that, you will have to create two sections. When you are designing modern infrastructure, I recommend creating the sections based on your production and non-production units. If you are on AWS, create separate OUs for non-production and production. And under each OU, you can create as many accounts as you want, or you can create a single account. If you are running a single project, create a single account; but let's say you're running multiple projects, one each for, say, e-commerce, banking, or fintech, then create one account per project. The idea is that there will be a certain amount of expense related to each account, and you will have to come up with a mechanism to charge it back, or at least get an estimate of which project is consuming what. So when you have the accounts separated and segregated, you can get consolidated billing with a breakdown of what each project is spending. So in AWS, you have OUs like production and non-production, with different accounts based on which project you are in. Likewise, in GCP you can create folders; GCP does not have OUs, they have more of a folder model, and under a folder you can have projects. If you are a single tenant, it will be only a single project; if you're multi-tenant, say a SaaS business with multiple products being released, you can create a service project for each.
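The layout described above can be sketched as plain data. All names here (domain, project accounts) are made up for illustration:

```python
# Hypothetical organisation layout, AWS style: one OU per environment,
# one account per project, so consolidated billing can be broken down
# and charged back per project.
organisation = {
    "name": "xyz.com",  # the domain registered with the cloud provider
    "organisational_units": {
        "production": {
            "accounts": ["ecommerce-prod", "fintech-prod"],
        },
        "non-production": {
            "accounts": ["ecommerce-nonprod", "fintech-nonprod"],
        },
    },
}

def accounts_for(env: str) -> list[str]:
    """List the accounts under one organisational unit."""
    return organisation["organisational_units"][env]["accounts"]

print(accounts_for("production"))
```

In GCP the same shape would use folders instead of OUs and projects instead of accounts; the segregation idea is identical.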
Now, once you have created the accounts, you apply the IAM roles and policies. And you can define one standard for IAM roles and policies: there is a master or super admin role, which has access to all the systems and services; then there can be an instance role, which is only for server-to-server access; and then there can be a user role, which is like web console access for your users to go through the cloud account and its controls. You can define those roles once and apply them to all systems.
If you're on AWS, it also provides a mechanism to apply service control policies across the organisation and its units. So let's say you want to enforce that every user who logs into your cloud account has multi-factor authentication enabled: you can set it in one policy and apply it to the whole organisation, and it gets applied to all the OUs that you create, irrespective of whether it is the production OU or the non-production OU. So anybody and everybody gets it. Then there are certain other policies, like a deletion policy: you can say that nobody should be authorised to delete infrastructure from the web console, so web console deletion is banned. You can set those default policies; these are called service control policies (SCPs) on AWS.
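As a sketch of such an SCP, a deny-unless-MFA policy document could be built like this. The condition key `aws:MultiFactorAuthPresent` is the real AWS key for this check; the statement ID is a hypothetical name:

```python
import json

# Hypothetical service control policy: deny every action for any principal
# that has not authenticated with MFA. Attached at the organisation root,
# it covers every OU, production and non-production alike.
require_mfa_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllWithoutMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

print(json.dumps(require_mfa_scp, indent=2))
```

Note the pattern: SCPs that enforce a rule are usually written as a broad Deny with a condition, rather than an Allow, so they act as a guardrail over whatever the account-level policies grant.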
Likewise, in GCP, you have service accounts, which you can use for project-to-project access or service-to-service access. So you'll have to play around with those permissions, but the standard design remains the same: divide your access into two or three groups. You can have either master access and user access, or you can have master (super admin) access, then programmatic access, then web access. So define which roles you want to provide.
There is another aspect in AWS, and also in GCP, which is assume roles. Say you have one role defined for a server; the server can assume that role and integrate with some other service. That is called assuming roles within an account. And then if you have different accounts, let's say one for banking and one for fintech, but you want one of the banking services to access something in fintech, then it is cross-account access that you need. So what you can do is, again, use assume roles, wherein the user or account assumes a particular role in the other account, which works like cross-account access, and then they can access it.
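The cross-account case hinges on a trust policy attached to the role being assumed. Here is a minimal sketch; the account ID is a placeholder, not a real account:

```python
# Hypothetical trust policy for cross-account access: this role lives in the
# fintech account, and principals from the banking account (placeholder ID
# 111122223333) are allowed to assume it via STS.
cross_account_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # The banking account (placeholder ID); its principals may assume this role.
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}
```

A service in the banking account then calls STS AssumeRole against this role's ARN and receives temporary credentials scoped to whatever permissions the role carries in the fintech account.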
When it comes to cloud account management, there are certain aspects that should be kept in mind and taken care of. Web console accounts or access should be different for production and non-production. Whenever you're giving your users access, make sure they have different IDs, or different accounts, for production and non-production. Otherwise, if a user has one email ID, even if they are a super admin, and that email ID is given access in both production and non-production, there are chances that they can accidentally go to the production account while doing non-production work. So make sure their user accounts are separate; when you're creating them, you can label them as production users and non-production users. Likewise, only trusted admin and service accounts should have rights to access production. Console root or super admin accounts should be subscribed to a group email address; as I mentioned earlier, do not have an individual user account attached to them, because if that user is somehow unavailable, you will be stuck when you need to make any immediate change. Usage of the root account should be restricted to a minimum number of people and to emergency use only, like, say, changing a billing configuration or anything like that.
Multi-factor authentication should be enabled across all cloud accounts; make it a must. Even if you do not have a centralised IDP or an SSO, make sure multi-factor is enabled across all accounts. Cloud APIs should be restricted to only trusted admins and trusted service accounts. In fact, in my recent projects, I have prevented any user from having API access, even if that person is a super admin of the infrastructure. All the APIs and access tokens are used by our deployment services, for infrastructure provisioning only. When you are developing the cloud from scratch, maybe in the initial days you might have one user download access keys and API keys and use them, but once your deployment servers are ready, revoke that access and disable it. So in my projects, I do not use access keys or tokens as a user; if I have to do any kind of provisioning, I go to my deployment agents, and from there it is accessed programmatically. So cloud access keys should be provided only to trusted service accounts and disabled everywhere else.
API rights are set up on the least-privilege principle. API keys are only usable from selected egress points and trusted locations. Service accounts with access to console APIs are protected. As I mentioned earlier, all API keys, even those between services, should be rotated on a monthly or bimonthly basis; the best practice is 60 to 90 days, and the shorter you keep it, the better. All accounts should have strong password policies, and all accounts should have an identity provider membership integrated with them. So these are some of the things for cloud accounts that need to be taken care of.
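The rotation rule is easy to enforce mechanically. A minimal sketch, using only the standard library, that flags keys older than the rotation window (the key names and dates are made up):

```python
from datetime import datetime, timedelta, timezone

ROTATION_WINDOW_DAYS = 60  # best practice is 60-90 days; shorter is better

def keys_due_for_rotation(keys: dict[str, datetime], now: datetime) -> list[str]:
    """Return the IDs of keys created more than ROTATION_WINDOW_DAYS ago."""
    cutoff = now - timedelta(days=ROTATION_WINDOW_DAYS)
    return [key_id for key_id, created in keys.items() if created < cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
keys = {
    "deploy-agent-key": datetime(2024, 5, 20, tzinfo=timezone.utc),   # 12 days old: fine
    "legacy-service-key": datetime(2024, 2, 1, tzinfo=timezone.utc),  # ~4 months old: rotate
}
print(keys_due_for_rotation(keys, now))  # → ['legacy-service-key']
```

In practice you would feed this from your provider's key inventory (for example, a listing of access keys per service account) and run it on a schedule, alerting on or disabling anything it flags.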
So, to summarise: for secure, easy-to-access, easy-to-onboard-and-offboard identity management, what you need is a centralised identity provider and secret management, where you store your user IDs and your secret tokens. You need SSO on your identity provider, for ease of access, onboarding, and offboarding. You need logically segregated, role-based access control (RBAC) authorisation for the systems, and you need well-defined policies and permissions for each role. In short: a centralised IDP enabled with SSO and an RBAC feature. Once you have your core infra set up, integrate it with the centralised IDP and secret management before you enable access for any user or engineer.
These are the do's; I would also like to mention some of the don'ts. Don't use anonymous or generic credentials, like root or default system accounts; reset all default system or software passwords that are set. Do not transmit credentials in plain text; use a secret management system and a temporary encrypted URL to communicate a credential. Do not store credentials on machines or in static files. Do not share credentials like VPN profiles, access tokens, or root accounts over email, chat, or Slack. Do not keep temporary credentials longer than necessary; make them as short-lived as possible, a few hours up to 24 hours, not more than that.
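The "keep temporary credentials short-lived" rule can be sketched like this, using only the standard library; the 24-hour cap mirrors the guidance above, and the function name is a hypothetical example:

```python
import secrets
from datetime import datetime, timedelta, timezone

MAX_TTL = timedelta(hours=24)  # per the guidance: a few hours to 24 hours, never more

def issue_temporary_credential(requested_ttl: timedelta) -> dict:
    """Issue a random token with an expiry, capping the TTL at MAX_TTL."""
    ttl = min(requested_ttl, MAX_TTL)
    return {
        "token": secrets.token_urlsafe(32),
        "expires_at": datetime.now(timezone.utc) + ttl,
    }

# A request for a week-long credential gets silently capped to 24 hours.
cred = issue_temporary_credential(timedelta(days=7))
remaining = cred["expires_at"] - datetime.now(timezone.utc)
assert remaining <= MAX_TTL
```

Whatever system actually issues your credentials (a secret manager, STS, a VPN server), the design point is the same: the issuer, not the requester, enforces the upper bound on lifetime.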
I hope you liked today's session, and that it gave you a good idea of how you should design your identity management system so that you don't just give easy access to your users, but access that is also secure and does not compromise your system. The next session will be about logging, the fifth factor of modern infrastructure. I hope so far you are enjoying the anatomy of modern infrastructure. So join me back next Friday on Cloudkata, where I share more katas for modern infrastructure. With that note, I would like to conclude today's episode. Subscribe to the show on cloudkata.com; I repeat, that's cloudkata.com. If you have missed the earlier episodes, don't worry, you can get the transcripts and the playlist on the podcast page on cloudkata.com. So with that note, I'm signing off today. Enjoy your weekend, take care, stay healthy, stay safe, and keep learning. See you next week on the fifth factor of modern infrastructure: logging.
Transcribed by https://otter.ai