AWS - CLOUD PRACTITIONER NOTES - CHAPTER 2. EC2 AND COMPUTE
Amazon Elastic Compute Cloud (Amazon EC2) provides secure, resizable compute capacity in the cloud as Amazon EC2 instances. With an Amazon EC2 instance, you can use a virtual server to run applications in the AWS Cloud.
- You can provision and launch an Amazon EC2 instance within minutes.
- You can stop using it when you have finished running a workload.
- You pay only for the compute time you use when an instance is running, not when it is stopped or terminated.
- You can save costs by paying only for server capacity that you need or want.
Amazon EC2 instance types
Amazon EC2 instance types are optimized for different tasks. When selecting an instance type, consider the specific needs of your workloads and applications. This might include requirements for compute, memory, or storage capabilities.
1. General purpose instances
General purpose instances provide a balance of compute, memory, and networking resources, and can be used for a variety of diverse workloads. These instances are ideal for applications that use these resources in equal proportions, such as web servers and code repositories. Other use cases include:
- application servers
- gaming servers
- backend servers for enterprise applications
- small and medium databases
2. Compute optimized instances
Compute optimized instances are ideal for compute-bound applications that benefit from high-performance processors. Instances in this family are well suited for batch processing workloads, media transcoding, high-performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers, ad server engines, machine learning inference, and other compute-intensive applications.
Use cases: high performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modeling, distributed analytics, and CPU-based machine learning inference.
3. Memory optimized instances
Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory. They enable you to run workloads with high memory needs and receive great performance.
Use cases: memory-intensive applications such as open-source databases, in-memory caches, and real-time big data analytics.
4. Accelerated computing instances
Accelerated computing instances use hardware accelerators, or co-processors, to perform functions such as floating-point number calculations, graphics processing, or data pattern matching more efficiently than is possible in software running on CPUs.
Use cases: machine learning, high performance computing, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, and drug discovery.
5. Storage optimized instances
Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.
These instances maximize the number of transactions processed per second (TPS) for I/O-intensive and business-critical workloads that have medium-size data sets and can benefit from high compute performance and high network throughput, such as relational databases (MySQL, MariaDB, and PostgreSQL) and NoSQL databases (KeyDB, ScyllaDB, and Cassandra). They are also an ideal fit for workloads that require very fast access to medium-size data sets on local storage, such as search engines and data analytics workloads.
Amazon EC2 pricing
With Amazon EC2, you pay only for the compute time that you use.
1. On-Demand
With On-Demand Instances, you pay for compute capacity by the second, with no long-term commitment. You have complete control over the instance lifecycle: you decide when to start, stop, or terminate it.
- No upfront fee
- Charged by the hour or second
- No commitment
- Ideal for short-term needs or unpredictable workloads
2. Reserved Instances
- Payment options: All Upfront, Partial Upfront, or No Upfront
- One- or three-year commitment
- Once the term expires, billing automatically reverts to the On-Demand price
Reserved Instances are not physical instances; they are a billing discount applied to On-Demand Instances. To receive the discount, your On-Demand Instances must match certain criteria, which determine the price/billing discount.
Reserved Instance billing criteria:
- Instance type: for example, m4.large (m4 is the instance family, large is the instance size)
- Region: the Region in which the Reserved Instance is purchased
- Tenancy: shared or single-tenant hardware
- Platform: Windows or Linux
Types of Reserved Instances:
- Standard: provides a significant discount. Cannot be exchanged, but can be modified.
- Convertible: provides a lower discount, but can be exchanged for a Reserved Instance with different attributes. Can also be modified.
3. Savings Plans
- No Upfront, Partial Upfront, or All Upfront payment options
- Reduce costs by committing to one or three years of usage
- Save up to 66% (Compute Savings Plans) or up to 72% (EC2 Instance Savings Plans) compared to On-Demand rates
Types of Savings Plans
Compute Savings Plans: provide the most flexibility, at up to 66% off On-Demand rates. These plans automatically apply to your EC2 usage regardless of instance size, instance family, Region, operating system, or tenancy. They also apply to AWS Fargate and AWS Lambda usage.
EC2 Instance Savings Plans: provide savings of up to 72% in exchange for a commitment to a specific instance family in a chosen Region, regardless of size, tenancy, or operating system.
AWS Cost Explorer : a tool that enables you to visualize, understand, and manage your AWS costs and usage over time.
If you are considering your options for Savings Plans, AWS Cost Explorer can analyze your Amazon EC2 usage over the past 7, 30, or 60 days.
AWS Cost Explorer also provides customized recommendations for Savings Plans. These recommendations estimate how much you could save on your monthly Amazon EC2 costs, based on previous Amazon EC2 usage and the hourly commitment amount in a 1-year or 3-year plan.
4. Spot Instances
Spot Instances use spare Amazon EC2 capacity that is available for less than the On-Demand price, letting you request unused EC2 capacity at a steep discount. Spot Instances are charged by the hour and are suitable for workloads that can tolerate interruptions, because AWS can reclaim the capacity when it is needed elsewhere.
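To make the pricing models above concrete, here is a small sketch comparing a steady workload's monthly cost under On-Demand, a Reserved Instance discount, and a Spot discount. The hourly rate and discount percentages are made-up placeholders for illustration, not real AWS prices.

```python
# Illustrative comparison of EC2 pricing models for a steady workload.
# All rates and discount percentages below are hypothetical, not AWS prices.

HOURS_PER_MONTH = 730  # average number of hours in a month

def monthly_cost(on_demand_hourly: float, discount_pct: float = 0.0) -> float:
    """Monthly cost given an On-Demand hourly rate and a percentage discount."""
    return on_demand_hourly * HOURS_PER_MONTH * (1 - discount_pct / 100)

rate = 0.10  # hypothetical On-Demand $/hour

on_demand = monthly_cost(rate)                   # full price, no commitment
reserved = monthly_cost(rate, discount_pct=40)   # e.g. a 1-year Reserved Instance
spot = monthly_cost(rate, discount_pct=90)       # Spot discounts can be steep

print(f"On-Demand: ${on_demand:.2f}/month")
print(f"Reserved:  ${reserved:.2f}/month")
print(f"Spot:      ${spot:.2f}/month")
```

The point of the sketch: the deeper the commitment (or the more interruption you can tolerate), the lower the effective hourly rate for the same compute.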
5. Dedicated Hosts
An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software licenses.
6. Dedicated Instances
Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer.
“An important difference between a Dedicated Host and a Dedicated Instance is that a Dedicated Host gives you additional visibility and control over how instances are placed on a physical server, and you can consistently deploy your instances to the same physical server over time.”
Scalability
Scalability involves beginning with only the resources you need and designing your architecture to automatically respond to changing demand by scaling out or in. As a result, you pay for only the resources you use. You don’t have to worry about a lack of computing capacity to meet your customers’ needs.
Amazon EC2 Auto Scaling
-minimum size
-maximum size
-desired capacity
Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups.
You can specify the minimum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes below this size.
You can specify the maximum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes above this size.
If you specify the desired capacity, either when you create the group or at any time thereafter, Amazon EC2 Auto Scaling ensures that your group has this many instances.
If you specify scaling policies, then Amazon EC2 Auto Scaling can launch or terminate instances as demand on your application increases or decreases.
Within Amazon EC2 Auto Scaling, you can use two approaches: dynamic scaling and predictive scaling.
Dynamic scaling: responds to changing demand.
Predictive scaling: automatically schedules the right number of Amazon EC2 instances based on predicted demand.
To scale faster, you can use dynamic scaling and predictive scaling together.
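The minimum/maximum/desired sizing rules and dynamic scaling described above can be sketched as a toy model. This is not the AWS implementation; the function names and the simple load rule are invented for illustration.

```python
# Toy model of Auto Scaling group sizing: capacity is always clamped
# between the group's minimum and maximum sizes, and a simple dynamic
# scaling rule adds or removes one instance based on load per instance.

def clamp_capacity(desired: int, minimum: int, maximum: int) -> int:
    """The group never goes below its minimum or above its maximum size."""
    return max(minimum, min(desired, maximum))

def dynamic_scale(current: int, load_per_instance: float,
                  target_load: float, minimum: int, maximum: int) -> int:
    """Scale out when load per instance exceeds the target, in when it is low."""
    if load_per_instance > target_load:
        return clamp_capacity(current + 1, minimum, maximum)  # scale out
    if load_per_instance < target_load / 2:
        return clamp_capacity(current - 1, minimum, maximum)  # scale in
    return current

# Example: a group with min=2, max=6 under rising load
size = clamp_capacity(desired=1, minimum=2, maximum=6)   # raised to the minimum, 2
size = dynamic_scale(size, load_per_instance=0.9, target_load=0.7,
                     minimum=2, maximum=6)               # scales out to 3
```

Real scaling policies are richer (target tracking, step scaling, cooldowns), but the clamping behavior is exactly the min/max guarantee in the notes above.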
Adding Amazon EC2 Auto Scaling to your application architecture is one way to maximize the benefits of the AWS Cloud. When you use Amazon EC2 Auto Scaling, your applications gain the following benefits:
- Better fault tolerance. Amazon EC2 Auto Scaling can detect when an instance is unhealthy, terminate it, and launch an instance to replace it. You can also configure Amazon EC2 Auto Scaling to use multiple Availability Zones. If one Availability Zone becomes unavailable, Amazon EC2 Auto Scaling can launch instances in another one to compensate.
- Better availability. Amazon EC2 Auto Scaling helps ensure that your application always has the right amount of capacity to handle the current traffic demand.
- Better cost management. Amazon EC2 Auto Scaling can dynamically increase and decrease capacity as needed. Because you pay for the EC2 instances you use, you save money by launching instances when they are needed and terminating them when they aren't.
Elastic Load Balancing
Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets, and routes traffic only to the healthy targets. Elastic Load Balancing scales your load balancer as your incoming traffic changes over time. It can automatically scale to the vast majority of workloads.
Load balancer benefits
A load balancer distributes workloads across multiple compute resources, such as virtual servers. Using a load balancer increases the availability and fault tolerance of your applications.
You can add and remove compute resources from your load balancer as your needs change, without disrupting the overall flow of requests to your applications.
You can configure health checks, which monitor the health of the compute resources, so that the load balancer sends requests only to the healthy ones. You can also offload the work of encryption and decryption to your load balancer so that your compute resources can focus on their main work.
Elastic Load Balancing supports the following load balancers: Application Load Balancers, Network Load Balancers, Gateway Load Balancers, and Classic Load Balancers.
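The two core ideas above, distributing requests across targets and routing only to healthy ones, can be illustrated with a toy round-robin balancer. Real ELB does far more; the class and method names here are invented.

```python
# Toy load balancer: round-robin distribution across registered targets,
# with health checks so traffic only reaches healthy targets.

class ToyLoadBalancer:
    def __init__(self, targets):
        # every registered target starts out healthy
        self.health = {t: True for t in targets}
        self._next = 0

    def mark_unhealthy(self, target):
        """A failed health check removes the target from rotation."""
        self.health[target] = False

    def route(self):
        """Return the next healthy target, round-robin."""
        healthy = [t for t, ok in self.health.items() if ok]
        if not healthy:
            raise RuntimeError("no healthy targets registered")
        target = healthy[self._next % len(healthy)]
        self._next += 1
        return target

lb = ToyLoadBalancer(["i-aaa", "i-bbb", "i-ccc"])
lb.mark_unhealthy("i-bbb")     # health check failed: stop routing to it
print(lb.route(), lb.route())  # alternates between the two healthy targets
```

Adding or removing a target is just a dictionary update, which mirrors the point in the notes that you can change the resource pool without disrupting the flow of requests.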
Monolithic applications and microservices
In a microservices approach, application components are loosely coupled. In this case, if a single component fails, the other components continue to work because they are communicating with each other. The loose coupling prevents the entire application from failing.
When designing applications on AWS, you can take a microservices approach with services and components that fulfill different functions. Two services facilitate application integration: Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS).
Amazon Simple Notification Service (Amazon SNS)
Amazon Simple Notification Service (Amazon SNS) is a publish/subscribe service. Using Amazon SNS topics, a publisher publishes messages to subscribers. This is similar to the coffee shop; the cashier provides coffee orders to the barista who makes the drinks.
In Amazon SNS, subscribers can be web servers, email addresses, AWS Lambda functions, or several other options.
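The publish/subscribe model above can be sketched in a few lines: a topic fans each published message out to every subscriber. The `Topic` class and its methods are invented for illustration; they are not the SNS API.

```python
# Bare-bones publish/subscribe sketch of the SNS model: one publisher,
# a topic, and multiple subscribers that each receive every message.

class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []  # callables standing in for web servers, email, Lambda

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, message):
        # every subscriber receives its own copy of the message
        for handler in self.subscribers:
            handler(message)

received = []
orders = Topic("coffee-orders")
orders.subscribe(lambda msg: received.append(f"barista got: {msg}"))
orders.subscribe(lambda msg: received.append(f"display got: {msg}"))
orders.publish("1 latte")
# received now holds one entry per subscriber
```

The key property to notice: the publisher never knows who the subscribers are, which is the loose coupling the microservices section describes.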
Amazon Simple Queue Service (Amazon SQS)
Amazon Simple Queue Service (Amazon SQS) is a message queuing service.
Using Amazon SQS, you can send, store, and receive messages between software components, without losing messages or requiring other services to be available. In Amazon SQS, an application sends messages into a queue. A user or service retrieves a message from the queue, processes it, and then deletes it from the queue.
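The send/receive/delete flow above can be mirrored with a toy queue. Names are invented for illustration, and real SQS adds visibility timeouts, retries, and durability on top of this basic shape.

```python
# Toy message queue mirroring the SQS flow: a producer sends messages,
# a consumer receives one, processes it, then explicitly deletes it.
from collections import deque

class ToyQueue:
    def __init__(self):
        self._messages = deque()

    def send(self, body):
        self._messages.append(body)

    def receive(self):
        """Look at the next message without removing it, like an SQS receive."""
        return self._messages[0] if self._messages else None

    def delete(self, body):
        """Messages stay in the queue until the consumer deletes them."""
        self._messages.remove(body)

q = ToyQueue()
q.send("order #1")
msg = q.receive()   # consumer picks up the message
# ... process msg ...
q.delete(msg)       # deleted only after successful processing
```

The explicit delete step is the important detail: if the consumer crashes before deleting, the message is still in the queue, which is how SQS avoids losing messages.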
Serverless computing
Earlier in this module, you learned about Amazon EC2, a service that lets you run virtual servers in the cloud. If you have applications that you want to run in Amazon EC2, you must do the following:
Provision instances (virtual servers).
Upload your code.
Continue to manage the instances while your application is running.
The term “serverless” means that your code runs on servers, but you do not need to provision or manage these servers. With serverless computing, you can focus more on innovating new products and features instead of maintaining servers.
Another benefit of serverless computing is the flexibility to scale serverless applications automatically. Serverless computing can adjust the applications' capacity by modifying the units of consumption, such as throughput and memory.
An AWS service for serverless computing is AWS Lambda.
AWS Lambda
AWS Lambda is a service that lets you run code without needing to provision or manage servers.
While using AWS Lambda, you pay only for the compute time that you consume. Charges apply only when your code is running. You can also run code for virtually any type of application or backend service, all with zero administration.
For example, a simple Lambda function might involve automatically resizing images uploaded to the AWS Cloud. In this case, the function triggers when a new image is uploaded:
You upload your code to Lambda.
You set your code to trigger from an event source, such as AWS services, mobile applications, or HTTP endpoints.
Lambda runs your code only when triggered.
You pay only for the compute time that you use. In the previous example of resizing images, you would pay only for the compute time that you use when uploading new images. Uploading the images triggers Lambda to run code for the image resizing function.
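The image-resizing example above can be sketched as a handler function that runs only when invoked by an event. The event shape here is invented for illustration (a real S3-triggered event looks different), and the resize step is a stub; a real function might use a library like Pillow.

```python
# Sketch of the Lambda flow: a handler runs only when an event triggers it,
# and you are billed only for the time the handler actually runs.

def handler(event, context=None):
    """Hypothetical Lambda handler triggered by an image-upload event."""
    key = event["object_key"]               # e.g. the uploaded file's name
    width, height = event["width"], event["height"]
    new_size = (width // 2, height // 2)    # stand-in for real image resizing
    return {"resized": key, "size": new_size}

# Simulate the event source invoking the function when an image is uploaded:
result = handler({"object_key": "photo.jpg", "width": 800, "height": 600})
print(result)  # {'resized': 'photo.jpg', 'size': (400, 300)}
```

Between uploads, nothing runs and nothing is billed, which is the core of the pay-per-invocation model described above.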
In AWS, you can also build and run containerized applications.
Containers
Containers provide you with a standard way to package your application's code and dependencies into a single object. You can also use containers for processes and workflows in which there are essential requirements for security, reliability, and scalability.
Amazon Elastic Container Service (Amazon ECS)
* Highly scalable, high-performance container management system that enables you to run and scale containerized applications on AWS.
* Amazon ECS supports Docker containers.
* With Amazon ECS, you can use API calls to launch and stop Docker-enabled applications.
Docker is a software platform that enables you to build, test, and deploy applications quickly. AWS supports the use of open-source Docker Community Edition and subscription-based Docker Enterprise Edition.
Amazon Elastic Kubernetes Service (Amazon EKS)
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that you can use to run Kubernetes on AWS. Kubernetes is open-source software that enables you to deploy and manage containerized applications at scale. A large community of volunteers maintains Kubernetes, and AWS actively works together with the Kubernetes community. As new features and functionalities release for Kubernetes applications, you can easily apply these updates to your applications managed by Amazon EKS.
AWS Fargate
AWS Fargate is a serverless compute engine for containers. It works with both Amazon ECS and Amazon EKS.
When using AWS Fargate, you do not need to provision or manage servers. AWS Fargate manages your server infrastructure for you. You can focus more on innovating and developing your applications, and you pay only for the resources that are required to run your containers.
AWS - CLOUD PRACTITIONER NOTES - CHAPTER 7. MONITORING AND ANALYTICS
Amazon CloudWatch
- Monitor your AWS infrastructure and resources in real time
- View metrics and graphs to monitor the performance of resources
- Configure automatic actions and alerts in response to metrics
- Monitor your resources’ utilization and performance
- Access metrics from a single dashboard
Amazon CloudWatch is a web service that enables you to monitor and manage various metrics and configure alarm actions based on data from those metrics.
CloudWatch uses metrics to represent the data points for your resources. AWS services send metrics to CloudWatch. CloudWatch then uses these metrics to create graphs automatically that show how performance has changed over time.
CloudWatch alarms
With CloudWatch, you can create alarms that automatically perform actions if the value of your metric has gone above or below a predefined threshold.
For example, suppose that your company’s developers use Amazon EC2 instances for application development or testing purposes. If the developers occasionally forget to stop the instances, the instances will continue to run and incur charges.
In this scenario, you could create a CloudWatch alarm that automatically stops an Amazon EC2 instance when the CPU utilization percentage has remained below a certain threshold for a specified period. When configuring the alarm, you can specify to receive a notification whenever this alarm is triggered.
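The idle-instance alarm described above boils down to a simple rule: fire when a metric stays below a threshold for enough consecutive periods. Here is a toy version of that evaluation; the function name and data are invented for illustration.

```python
# Toy CloudWatch-style alarm evaluation: the alarm fires only when the
# last `periods` datapoints are all below the threshold, which avoids
# reacting to a single momentary dip.

def alarm_breached(datapoints, threshold, periods):
    """True if the most recent `periods` datapoints are all below `threshold`."""
    if len(datapoints) < periods:
        return False  # not enough data yet to evaluate the alarm
    return all(dp < threshold for dp in datapoints[-periods:])

cpu_percent = [55, 40, 3, 2, 1, 2]  # hypothetical CPU utilization samples
if alarm_breached(cpu_percent, threshold=5, periods=4):
    print("ALARM: stopping idle instance and sending a notification")
```

Requiring several consecutive low readings is what makes the alarm suitable for the "forgotten dev instance" scenario: brief idle moments do not trigger it, sustained idleness does.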
CloudWatch dashboard
The CloudWatch dashboard feature enables you to access all the metrics for your resources from a single location. For example, you can use a CloudWatch dashboard to monitor the CPU utilization of an Amazon EC2 instance, the total number of requests made to an Amazon S3 bucket, and more. You can even customize separate dashboards for different business purposes, applications, or resources.
AWS CloudTrail
AWS CloudTrail records API calls for your account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, and more. You can think of CloudTrail as a “trail” of breadcrumbs (or a log of actions) that someone has left behind them.
Recall that you can use API calls to provision, manage, and configure your AWS resources. With CloudTrail, you can view a complete history of user activity and API calls for your applications and resources.
Events are typically updated in CloudTrail within 15 minutes after an API call. You can filter events by specifying the time and date that an API call occurred, the user who requested the action, the type of resource that was involved in the API call, and more.
CloudTrail tasks:
Track user activities and API requests throughout your AWS infrastructure
Filter logs to assist with operational analysis and troubleshooting
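The filtering described above (by time, user, and resource) maps onto CloudTrail's LookupEvents API. A minimal sketch of the request parameters, assuming a hypothetical user name "dev-user":

```python
from datetime import datetime, timedelta, timezone

def build_cloudtrail_lookup(username: str, hours_back: int = 24) -> dict:
    """Return keyword arguments for CloudTrail's LookupEvents API that
    filter events to one user over a recent time window."""
    end = datetime.now(timezone.utc)
    return {
        "LookupAttributes": [
            {"AttributeKey": "Username", "AttributeValue": username},
        ],
        "StartTime": end - timedelta(hours=hours_back),
        "EndTime": end,
    }

params = build_cloudtrail_lookup("dev-user")
# With AWS credentials configured, you would run the lookup with boto3:
# import boto3
# for event in boto3.client("cloudtrail").lookup_events(**params)["Events"]:
#     print(event["EventName"], event["EventTime"])
```

Other supported attribute keys include `EventName` and `ResourceType`, matching the filters the notes mention.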
CloudTrail Insights
Within CloudTrail, you can also enable CloudTrail Insights. This optional feature allows CloudTrail to automatically detect unusual API activities in your AWS account.
For example, CloudTrail Insights might detect that a higher number of Amazon EC2 instances than usual have recently launched in your account. You can then review the full event details to determine which actions you need to take next.
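Enabling Insights on a trail is a single API call. A minimal sketch of the PutInsightSelectors request, where the trail name is an illustrative assumption:

```python
def build_insight_selectors(trail_name: str) -> dict:
    """Return keyword arguments for CloudTrail's PutInsightSelectors API,
    turning on anomaly detection for API call volume on one trail."""
    return {
        "TrailName": trail_name,
        # ApiCallRateInsight flags unusual spikes in API call volume,
        # such as the surge of EC2 launches described above.
        "InsightSelectors": [{"InsightType": "ApiCallRateInsight"}],
    }

selectors = build_insight_selectors("management-events-trail")
# With AWS credentials configured:
# import boto3
# boto3.client("cloudtrail").put_insight_selectors(**selectors)
```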
AWS Trusted Advisor
AWS Trusted Advisor is a web service that inspects your AWS environment and provides real-time recommendations in accordance with AWS best practices.
Trusted Advisor compares its findings to AWS best practices in five categories: cost optimization, performance, security, fault tolerance, and service limits.
For the checks in each category, Trusted Advisor offers a list of recommended actions and additional resources to learn more about AWS best practices.
The guidance provided by AWS Trusted Advisor can benefit your company at all stages of deployment. For example, you can use AWS Trusted Advisor to assist you while you are creating new workflows and developing new applications. Or you can use it while you are making ongoing improvements to existing applications and resources.
AWS Trusted Advisor dashboard
When you access the Trusted Advisor dashboard on the AWS Management Console, you can review completed checks for cost optimization, performance, security, fault tolerance, and service limits.
For each category:
The green check indicates the number of items for which Trusted Advisor detected no problems.
The orange triangle represents the number of recommended investigations.
The red circle represents the number of recommended actions.
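The three dashboard indicators above correspond to check statuses returned by the AWS Support API ("ok", "warning", "error"). As a sketch, the tallying the dashboard does can be expressed like this (the sample data is invented for illustration):

```python
from collections import Counter

def summarize_checks(check_results: list) -> dict:
    """Tally Trusted Advisor check statuses the way the dashboard does:
    'ok' -> green check, 'warning' -> orange triangle, 'error' -> red circle."""
    counts = Counter(result["status"] for result in check_results)
    return {
        "no_problems": counts.get("ok", 0),
        "investigate": counts.get("warning", 0),
        "action_recommended": counts.get("error", 0),
    }

# Hypothetical results, shaped like the 'status' field of the Support API's
# DescribeTrustedAdvisorCheckResult response:
sample = [{"status": "ok"}, {"status": "ok"},
          {"status": "warning"}, {"status": "error"}]
summary = summarize_checks(sample)
```

Note that programmatic access to Trusted Advisor checks through the Support API requires a Business, Enterprise On-Ramp, or Enterprise support plan.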