AWS CLOUD FOUNDATIONAL EXAM PRACTICE TESTS
1. AWS Skill Builder - 30 questions - FREE
2. AWS exam syllabus - 20 questions - FREE
3. Whizlabs - 25 questions - FREE
4. ExamTopics - 800 questions - FREE
PURCHASE
Digital Cloud - 20 questions
AWS CLOUD PRACTITIONER EXAM NOTES - 20
1. AWS Data Exchange - find and use third-party data for analytics
2. AWS DataSync - data transfer service that automates and accelerates moving/replicating data
between on-premises storage and AWS storage services, over the internet or Direct Connect.
1. AWS App Mesh - makes it easy to monitor and control microservices
2. Amazon AppFlow - no-code, API-based integration service, e.g. to move SaaS data for analytics
3. AWS App Runner - deploy a web application directly from source code or a Docker image
4. AWS AppSync - enterprise-level GraphQL service with real-time data synchronization
5. AWS AppConfig - used to quickly deploy application configurations to applications of any size
6. Amazon AppStream 2.0 - lets you move desktop applications to AWS without rewriting them
1. Amazon EMR (Elastic MapReduce) - managed Apache Hadoop/big data framework
2. Amazon ElastiCache - managed Memcached and Redis
3. Amazon OpenSearch Service (Elasticsearch) - search and operational analytics
1. CodeCommit - source control / versioning
2. CodeDeploy - automate code deployment
3. CodeBuild - build and test code
4. CodeGuru - automate code reviews and optimize application performance (with ML)
5. Cloud9 - write, run, and debug code
6. CodeStar - quickly develop, build, and deploy applications on AWS
7. CodePipeline - automate continuous delivery pipelines for fast and reliable updates
AWS Cloud Development Kit (AWS CDK) - an open-source software development framework to define your cloud application resources using familiar programming languages.
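For example, a minimal CDK sketch in Python (assuming CDK v2 is installed; the stack and bucket names here are hypothetical, not from these notes):

# Hedged AWS CDK v2 sketch (Python). "ExampleStack" and "NotesBucket" are
# illustrative names only.
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class ExampleStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Declare an S3 bucket as code; CDK synthesizes it to CloudFormation.
        s3.Bucket(self, "NotesBucket", versioned=True)

app = App()
ExampleStack(app, "ExampleStack")
app.synth()  # emits the CloudFormation template

Running cdk deploy would then provision the bucket through CloudFormation.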
AWS CLOUD PRACTITIONER EXAM NOTES - 19
AWS IQ - service for finding and working with AWS-certified experts on demand
DR - backup and restore is the cheapest disaster recovery option
Amazon Rekognition - image and video analysis
Amazon Elastic Transcoder - media file conversion
Amazon VPC (Transit Gateway, VPC peering)
1. The 6 Pillars of the AWS Well-Architected Framework
2. The AWS Cloud Adoption Framework (AWS CAF)
3. The 6 Strategies of Migration
1. Operational Excellence
There are five design principles for operational excellence in the cloud:
Perform operations as code
Make frequent, small, reversible changes
Refine operations procedures frequently
Anticipate failure
Learn from all operational failures
2. Security
There are seven design principles for security in the cloud:
Implement a strong identity foundation
Enable traceability
Apply security at all layers
Automate security best practices
Protect data in transit and at rest
Keep people away from data
Prepare for security events
3. Reliability
There are five design principles for reliability in the cloud:
Automatically recover from failure
Test recovery procedures
Scale horizontally to increase aggregate workload availability
Stop guessing capacity
Manage change through automation
4. Performance Efficiency
There are five design principles for performance efficiency in the cloud:
Democratize advanced technologies
Go global in minutes
Use serverless architectures
Experiment more often
Consider mechanical sympathy
5. Cost Optimization
There are five design principles for cost optimization in the cloud:
Implement cloud financial management
Adopt a consumption model
Measure overall efficiency
Stop spending money on undifferentiated heavy lifting
Analyze and attribute expenditure
6. Sustainability
There are six design principles for sustainability in the cloud:
Understand your impact
Establish sustainability goals
Maximize utilization
Anticipate and adopt new, more efficient hardware and software offerings
Use managed services
Reduce the downstream impact of your cloud workloads
_______________________________________________________________________________________
The AWS Cloud Adoption Framework (AWS CAF)
1. The Business Perspective
The Business Perspective ensures that IT aligns with business needs and that IT investments link to key business results.
Common stakeholders include chief executive officer (CEO), chief financial officer (CFO), chief operations officer (COO), chief information officer (CIO), and chief technology officer (CTO).
2. The People Perspective
The People Perspective serves as a bridge between technology and business, accelerating the cloud journey to help organizations evolve toward a culture of continuous growth and learning, where change becomes business-as-normal. It focuses on culture, organizational structure, leadership, and workforce. Common stakeholders include CIO, COO, CTO, cloud director, and cross-functional and enterprise-wide leaders.
3. The Governance Perspective
The Governance Perspective focuses on the skills and processes needed to align IT strategy with business strategy. This ensures that you maximize business value and minimize risks.
Use the Governance Perspective to understand how to update the staff skills and
processes necessary to ensure business governance in the cloud. Manage and
measure cloud investments to evaluate business outcomes.
Common stakeholders include chief transformation officer, CIO, CTO, CFO, chief data officer (CDO), and chief risk officer (CRO).
4. The Platform Perspective
includes principles and patterns for implementing new solutions on the cloud and migrating on-premises workloads to the cloud.
Use a variety of architectural models to understand and communicate the structure of IT systems and their relationships. Describe the architecture of the target state environment in detail.
Common stakeholders include CTO, technology leaders, architects, and engineers.
5. The Security Perspective
ensures that the organization meets security objectives for visibility, auditability, control, and agility.
Use the AWS CAF to structure the selection and implementation of security controls that meet the organization’s needs.
Common stakeholders include chief information security officer (CISO), chief compliance officer (CCO), internal audit leaders, and security architects and engineers.
6. The Operations Perspective
helps you to enable, run, use, operate, and recover IT workloads to the level agreed upon with your business stakeholders.
Define how day-to-day, quarter-to-quarter, and year-to-year business is conducted. Align with and support the operations of the business. The AWS CAF helps these stakeholders define current operating procedures and identify the process changes and training needed to implement successful cloud adoption.
Common stakeholders include infrastructure and operations leaders, site reliability engineers, and information technology service managers.
_____________________________________________________________________________________
6 Strategies of Migration
Rehosting
Replatforming
Refactoring/re-architecting
Repurchasing
Retaining
Retiring
_____________________________________________________________________________________
AWS CLOUD PRACTITIONER EXAM NOTES - 17
Amazon VPC
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account.
It is logically isolated from other virtual networks in the AWS Cloud.
A VPC spans all the Availability Zones in the region.
When you first create an AWS account, a default VPC is created for you in each AWS Region.
By default, you can create up to 5 VPCs per Region.
The default VPC has all-public subnets.
Instances in the default VPC always have both a public and private IP address.
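As a rough illustration of these building blocks, here is a hedged boto3 sketch (the region, CIDR blocks, and IDs are assumptions, not values from these notes):

# Hedged boto3 sketch: create a custom VPC, a subnet, and an internet gateway.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # illustrative region

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")        # illustrative CIDR
vpc_id = vpc["Vpc"]["VpcId"]

# A public subnet carved out of the VPC's address range
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# Internet gateway gives the public subnet a path to the internet
igw = ec2.create_internet_gateway()
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGateway"]["InternetGatewayId"],
    VpcId=vpc_id,
)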
_________________________________________________________________________________
Terms in VPC :
Virtual Private Cloud (VPC):
A logically isolated virtual network in the AWS cloud.
Subnet:
A segment of the VPC's IP address range where you can place groups
of isolated resources (public subnets, private subnets).
Internet Gateway:
Used to connect to the internet from a public subnet.
NAT Gateway: Used to connect to the internet from a private subnet
(previously a customer-managed NAT instance was used).
Direct Connect:
AWS Direct Connect is a network service that avoids the public internet
when connecting a customer's on-premises sites to AWS.
Data is transmitted through a private network connection between AWS and the customer's data center or corporate network.
_________________________________________________________________________________
Hardware VPN (Virtual Private Network) connection:
A hardware-based VPN connection between your Amazon VPC and your datacenter
Virtual Private Gateway:
The Amazon VPC side of a VPN connection.
Customer Gateway:
Your side of a VPN connection.
Peering Connection:
A peering connection enables you to route traffic via
private IP addresses between "two peered VPCs".
VPC Endpoints:
Enables private connectivity to services hosted in AWS, from within your VPC, without using an Internet Gateway, VPN, Network Address Translation (NAT) devices, or firewall.
Egress-only Internet Gateway:
A stateful gateway to provide egress only access for IPv6 traffic from the VPC to the Internet.
AWS Transit Gateway:
connects "Multiple VPCs" and on-premises networks through a central hub.
AWS PrivateLink:
AWS PrivateLink establishes private connectivity between VPCs and
services hosted on AWS or on-premises, without exposing data to the internet.
AWS VPN CloudHub:
uses an Amazon VPC virtual private gateway with multiple customer gateways, so multiple remote sites can connect to your VPC and to each other over VPN.
________________________________________________________________________________
Options for securely connecting to a VPC are:
AWS managed VPN – quick to set up.
Direct Connect – high bandwidth, low latency, but takes weeks to months to set up.
VPN CloudHub – used for connecting multiple sites to AWS.
Software VPN – use 3rd party software.
Firewalls:
Security group:
operates at the instance level
supports allow rules only
stateful (remembers who came in)
evaluates all rules before allowing traffic
applies to an instance only if it is associated with the group
Network ACL:
operates at the subnet (network) level
supports allow and deny rules
stateless (forgets who came in)
processes rules in order
automatically applies to all resources in the associated subnets
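A hedged boto3 sketch of the allow-rules-only behaviour of security groups (the VPC ID, group name, and port are placeholder assumptions):

import boto3

ec2 = boto3.client("ec2")

# Create a security group in an existing VPC (the VPC ID is a placeholder).
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTPS",
    VpcId="vpc-1234567890abcdef0",
)

# Security groups only support allow rules; this allows inbound 443 from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)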
The VPC wizard provides 4 types of VPC configuration:
VPC with a Single Public Subnet
VPC with Public and Private Subnets
VPC with Public and Private Subnets and Hardware VPN Access
VPC with a Private Subnet Only and Hardware VPN Access
AWS CLOUD PRACTITIONER EXAM NOTES - 18
Amazon S3
* S3 is "OBJECT" storage built to store and retrieve data.
* Files can be anywhere from 0 bytes to 5 TB. Files are stored in "buckets".
* Buckets are root-level folders. Any subfolder within a bucket is a "folder".
* S3 is a universal namespace so bucket names must be "unique globally".
* Configure a "lifecycle policy" to manage your objects and store
them cost-effectively throughout their lifecycle.
* Lifecycle policy - you can transition objects to other S3 storage
classes or expire objects that reach the end of their lifetimes
(see the sketch after the storage class list below).
* S3 Object Lock – Prevent Amazon S3 objects from being deleted or
overwritten for a fixed amount of time or indefinitely
* S3 provides query-in-place functionality, allowing you to run
powerful analytics directly on your data at rest in S3 (e.g. with Amazon Athena).
* S3 Block Public Access – block public access to S3 buckets and objects.
By default, Block Public Access settings are turned on at the account and bucket level.
* EBS snapshots are stored in S3.
CloudWatch Logs can be exported to S3.
CloudTrail logs are sent to S3.
* S3 charges are for:
- Storage class
- Storage size
- Requests and data retrievals
- Data transfer
- Management and replication
When you successfully upload a file to S3 you receive an HTTP 200 code.
An HTTP 200 code indicates a successful upload.
An HTTP 300 code indicates a redirection.
An HTTP 400 code indicates a client error.
An HTTP 500 code indicates a server error.
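A minimal boto3 sketch of an upload (the bucket name and key are placeholders) that surfaces the HTTP 200 status code:

import boto3

s3 = boto3.client("s3")

# Upload a small object; the response metadata carries the HTTP status code.
response = s3.put_object(
    Bucket="example-bucket-name",   # placeholder bucket
    Key="notes/hello.txt",
    Body=b"hello s3",
)
print(response["ResponseMetadata"]["HTTPStatusCode"])  # 200 on success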
S3 is a persistent, highly durable data store (data is retained when instances are powered off).
S3 use cases:
- Backup storage
- Application hosting
- Media hosting
- Software delivery
- Static website
There are seven S3 storage classes.
S3 Standard = durable, immediately available, frequently accessed
S3 Intelligent-Tiering = automatically moves data to the most cost-effective tier.
S3 Standard-IA = highly available, immediately available, infrequently accessed
S3 One Zone-IA = lower cost for infrequently accessed data with less resilience (single AZ)
S3 Glacier Instant Retrieval = rarely accessed data that requires retrieval in milliseconds
S3 Glacier Flexible Retrieval = archived data, retrieval times in minutes or hours
S3 Glacier Deep Archive = lowest cost storage class for long term retention
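As referenced above, a hedged boto3 sketch of a lifecycle rule (the bucket name, prefix, and day counts are illustrative assumptions):

import boto3

s3 = boto3.client("s3")

# Transition objects under "logs/" to Glacier after 90 days, expire after 365 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket-name",   # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]
    },
)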
AWS Snowball Edge (80 TB)
With AWS Snowball (Snowball), you can transfer hundreds of terabytes or petabytes of data between your on-premises data centers and Amazon S3.
They are well suited for large-scale data migrations and recurring transfer
workflows, in addition to local computing with higher capacity needs.
Snowball Edge Storage Optimized - used for recurring transfer workflows
* Storage - 80 TB HDD capacity for block volumes, 1 TB of SATA SSD for block volumes
* Compute - 40 vCPUs and 80 GB of memory to support sbe1 instances (equivalent to C5)
Snowball Edge Compute Optimized - used for machine learning, full-motion video, analytics
* Storage - 42 TB HDD and 7.68 TB of NVMe SSD
_________________________________________________________________________________________
AWS Storage Gateway
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises
access to virtually unlimited cloud storage.
Use cases include moving backups to the cloud, using on-premises file shares backed by cloud storage, and providing low-latency access to data in AWS for on-premises applications.
Storage Gateway Types:
1. File Gateway: on-premises file storage backed by S3 objects.
a> S3 File Gateway - Store and access objects in Amazon S3
from NFS or SMB file data with local caching.
b> FSx file Gateway - Access fully managed file shares in Amazon FSx
for Windows File Server using SMB.
2. Tape Gateway: virtual tape storage in S3 and Glacier with VTL management.
Store virtual tapes in Amazon S3 using iSCSI-VTL, and store archived
tapes in Amazon S3 Glacier or Amazon S3 Glacier Deep Archive.
You can also deploy a tape gateway on an AWS Snowball Edge device to
facilitate offline transfer of tape data.
3. Volume Gateway: on-premises block storage backed by S3 and EBS snapshots.
Store and access iSCSI block storage volumes in Amazon S3.
a> Cached volumes – Store your data in AWS and retain a copy of frequently
accessed data subsets locally
b> Stored volume – Store all your data locally and asynchronously
back up point-in-time snapshots to AWS.
----------------------------------------------------------------------------------------
AWS CLOUD PRACTITIONER EXAM NOTES - 16
1. Amazon DynamoDB
2. Aurora
3. ElastiCache
4. RDS
Amazon DynamoDB (schema-less.)
Amazon DynamoDB is a fully managed NoSQL database service that provides
fast and predictable performance with seamless scalability.
NoSQL (non-relational) type of database. Fast, highly available, and fully managed.
Used when data is fluid and can change; used in social networks and web analytics.
"Push button scaling" means that you can scale the DB at any
time without incurring downtime.
DynamoDB supports cross-region replication which
in its latest implementation is now known as Global Tables.
"A Global table" gives you the capability to replicate a single table across
1 or many alternate regions and in doing so protects your table from regional outages
All of your data is stored on solid-state disks (SSDs) and is automatically
replicated across multiple Availability Zones in an AWS Region, providing
built-in high availability and data durability
Amazon DynamoDB global tables provides a fully managed solution for
deploying a multi-region, multi-master database.
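A hedged boto3 sketch of a simple DynamoDB table (the table, key, and item names are placeholder assumptions):

import boto3

dynamodb = boto3.client("dynamodb")

# Schema-less apart from the key: only the partition key must be declared.
dynamodb.create_table(
    TableName="Users",                      # placeholder table name
    AttributeDefinitions=[{"AttributeName": "user_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "user_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",          # no capacity guessing
)

# Wait until the table is ACTIVE, then write an item.
dynamodb.get_waiter("table_exists").wait(TableName="Users")
dynamodb.put_item(
    TableName="Users",
    Item={"user_id": {"S": "u-001"}, "name": {"S": "Alice"}},
)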
_________________________________________________________________________________________
Amazon ElastiCache
ElastiCache is a web service that makes it easy to deploy and run
Memcached or Redis protocol-compliant server nodes in the cloud.
The in-memory caching provided by ElastiCache can be used to significantly
"improve latency" and throughput for many read-heavy application workloads or
compute-intensive workloads.
ElastiCache can be used for storing session state.
There are two types of ElastiCache engine:
Memcached - simple model; use to cache frequent queries in front of RDS.
Redis - more complex model; use with load-balanced web servers to store web session
information in Redis, so if a server is lost the session info is not lost and
another web server can pick it up.
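A hedged boto3 sketch of launching a single-node Redis cluster (the cluster ID and node type are placeholder assumptions):

import boto3

elasticache = boto3.client("elasticache")

# Single-node Redis cluster for caching or session storage (illustrative sizing).
elasticache.create_cache_cluster(
    CacheClusterId="session-cache",      # placeholder identifier
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheNodes=1,
)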
_________________________________________________________________________________________
Amazon Aurora
Amazon Aurora (Aurora) is a fully managed relational database engine
that's compatible with MySQL and PostgreSQL.
Amazon Aurora is designed to offer 99.99% availability, replicating 6 copies
of your data across 3 Availability Zones and backing up your data continuously
to Amazon S3.
Aurora can deliver up to 5 times the throughput of MySQL and up to 3 times
the throughput of PostgreSQL without requiring changes to most of your
existing applications.
Amazon Aurora is fully managed by Amazon Relational Database Service (RDS),
which automates time-consuming administration tasks like hardware
provisioning, database setup, patching, and backups.
Aurora management operations typically involve entire clusters of database servers
that are synchronized through replication, instead of individual database instances.
Aurora also automates and standardizes database clustering and replication.
_________________________________________________________________________________________
Amazon RDS
is a managed service that makes it easy to set up, operate, and scale a relational database (SQL) in the cloud.
RDS is an Online Transaction Processing (OLTP) type of database.
RDS features and benefits:
SQL type of database; can be used to perform complex queries and joins.
Easy to set up, highly available, fault tolerant, and scalable.
Used when data is clearly defined; used in online stores and banking systems.
Amazon RDS supports the following database engines:
SQL Server, Oracle, MySQL, PostgreSQL, Aurora, and MariaDB.
RDS is a fully managed service and you do not have access to the
underlying EC2 instance (no root access).
The RDS service includes the following:
You can use the database products you are already familiar with:
MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server.
Amazon RDS manages backups, software patching, automatic failure detection, and recovery.
You can turn on automated backups, or manually create your own backup snapshots.
You can use these backups to restore a database.
You can get high availability with a primary instance and a synchronous
secondary instance that you can fail over to when problems occur.
You can also use read replicas to increase read scaling.
In addition to the security in your database package, you can help control who
can access your RDS databases by using AWS Identity and Access Management (IAM)
to define users and permissions.
You can also help protect your databases by putting them in a virtual private cloud (VPC)
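A hedged boto3 sketch of creating a Multi-AZ MySQL instance (the identifier, credentials, and sizing are placeholder assumptions; in practice credentials belong in Secrets Manager):

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="example-mysql",    # placeholder identifier
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                     # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder credential
    MultiAZ=True,                            # synchronous standby for high availability
    BackupRetentionPeriod=7,                 # automated backups, in days
)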
____________________________________________________________________________________
AWS CLOUD PRACTITIONER EXAM NOTES - 15
1. Amazon Elastic Block Store (Amazon EBS)
2. Amazon Elastic File System (Amazon EFS)
3. Amazon Elastic Compute Cloud (Amazon EC2)
1.Amazon Elastic Block Store (Amazon EBS) - Block storage
Amazon Elastic Block Store (Amazon EBS) provides persistent block
storage volumes for use with Amazon EC2 instances in the AWS Cloud.
Each Amazon EBS volume is automatically replicated within its Availability Zone
to protect from component failure, offering high availability and durability.
EBS volume data persists independently of the life of the instance.(permanent)
You can attach multiple EBS volumes to an instance.
EBS volumes must be in the same AZ as the instances they are attached to.
EBS Snapshots:
Snapshots capture a point-in-time state of an EBS volume.
Snapshots are stored on S3.
Instance Store Volumes:
Instance store volumes are high performance local disks that are physically
attached to the host computer on which an EC2 instance runs.
Instance stores are ephemeral, which means the data is lost when the instance is stopped or terminated.
Charges for
1. Volume storage
2. snapshots
3. Data transfer
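A hedged boto3 sketch of creating a volume and snapshotting it (the Availability Zone, size, and description are placeholder assumptions):

import boto3

ec2 = boto3.client("ec2")

# Create a gp3 volume in the same AZ as the instance it will attach to.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # placeholder AZ
    Size=20,                         # GiB
    VolumeType="gp3",
)

# Point-in-time snapshot of the volume; snapshots are stored in S3.
ec2.create_snapshot(
    VolumeId=volume["VolumeId"],
    Description="example backup",
)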
_________________________________________________________________________________________
2.Amazon Elastic File System (Amazon EFS)
EFS is a fully managed service that makes it easy to set up and
scale file storage in the Amazon Cloud.
Good for big data and analytics, media processing workflows,
content management, web serving, home directories etc.
Can concurrently connect 1 to 1000s of EC2 instances, from multiple AZs.
Data is stored across "multiple AZs" within a region. - Regional service
_________________________________________________________________________________________
3.Amazon EC2
Amazon Elastic Compute Cloud (Amazon EC2) is a web service with which
you can run virtual server “instances” in the cloud.
User data (bootstrapping) is data that is supplied by the user at
instance launch in the form of a script.
Bootstrapping in AWS simply means adding commands or scripts to an EC2
instance's User Data section so that they are executed when the instance starts.
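A hedged boto3 sketch of bootstrapping an instance with user data (the AMI ID, instance type, and script are placeholder assumptions):

import boto3

ec2 = boto3.client("ec2")

# Shell script run by cloud-init on first boot (illustrative web server setup).
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-1234567890abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                # plain text here; base64-encoded before it reaches the API
)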
Pricing :
There are three fundamental drivers of cost with AWS:
compute, storage, and outbound data transfer.
EC2 pricing: On-Demand, Reserved, Savings Plans, Spot, Dedicated Host, Dedicated Instance.
[1]. on demand - unpredictable workloads that cannot be interrupted. No upfront cost.
[2]. Reserved - Applications with steady state or predictable usage.
No upfront, partial upfront, full upfront.
1-3yrs commitment.
Reservations provide you with greater discounts, up to 75%, by paying for
capacity ahead of time. Some of the services you can reserve include:
EC2, DynamoDB, ElastiCache, RDS, and RedShift
a>Standard RI - 75% off, cannot change attributes(family,platform,tenancy,instance type)
b>convertible RI - 54% off,provides the capability to change the attributes of the RI
c>Scheduled RIs - you can reserve capacity that is scheduled to recur daily, weekly, or monthly, with a specified start time and duration, for a one-year term
[3]. Savings plan: No upfront, partial upfront, full upfront.
1-3yrs commitment.
measured in $/hour
a>Compute savings plan:
provides more flexibility and 66% of demand rates. These plans automatically apply to your ec2 instance regardless of instance size, instance family, region , operating system, or tenancy. Also applies to Fargate n lambda usage
b>EC2 instance savings plan:
Provides savings upto 72% in exchange for commitment to Specific instance
family in a "chosen region", regardless of size, tenancy, OS
4. Spot - instances that use spare EC2 capacity, available for less than
the On-Demand price. Suitable for applications that have flexible
start and end times.
5. Dedicated host - Physical servers dedicated just for your use.
can use your own software licenses that use metrics
like per-core, per-socket, or per-VM.
6. Dedicated instances: Virtualized instances on hardware just for you.
Does not provide the additional visibility and controls
of dedicated hosts (e.g. how instances are placed on a server).
Amazon EC2 instance types:
1. General purpose instances:
General purpose instances provide a balance of compute, memory, and networking resources, and can be used for a variety of diverse workloads.
Use cases: application servers,
back-end servers for enterprise applications,
small and medium databases,
gaming servers.
2. compute optimised instance:
Compute Optimized instances are ideal for compute bound applications
that benefit from high performance processors
High performance computing (HPC)
batch processing,
ad serving,
video encoding,
scientific modelling,
distributed analytics,
CPU-based machine learning inference.
3. memory optimised instance :
Memory optimized instances are designed to deliver fast performance for
workloads that process large data sets in memory.
Memory-intensive applications such as
open-source databases,
in-memory caches,
and real time big data analytics
4. Storage optimised :
Storage optimized instances are designed for workloads that require high,
sequential read and write access to very large data sets on local storage.
They are an ideal fit for workloads that require very fast access to medium-size data sets on local storage and can benefit from high compute performance and high network throughput.
relational databases (MySQL, MariaDB, and PostgreSQL),
and NoSQL databases (KeyDB, ScyllaDB, and Cassandra)
search engines and
data analytics
5. Accelerated computing :
Accelerated computing instances use hardware accelerators,
or co-processors, to perform functions.
graphics processing,
data pattern matching,
Machine learning,
high performance computing,
speech recognition,
autonomous vehicles, and drug discovery.
AWS CLOUD PRACTITIONER EXAM NOTES - 14
Other Services
Amazon Elastic Transcoder [Media service]
AWS AppSync [Front-End Web & Mobile]
AWS Device Farm [Front-End Web & Mobile]
Amazon AppStream [End User Computing]
Amazon WorkLink [End User Computing]
Amazon WorkDocs [Business Applications]
Amazon Simple Email Service (Amazon SES) [Business Applications]
AWS IoT Core
AWS Managed Services
_______________________________________________________________________________________
Amazon Elastic Transcoder
lets you convert media files that you have stored in Amazon S3 into media
files in the formats required by consumer playback devices.
________________________________________________________________________________________
AWS AppSync
is an enterprise-level, fully managed GraphQL service with real-time data synchronization and offline programming features.
________________________________________________________________________________________
AWS Device Farm
is an app testing service that enables you to test your iOS, Android and Fire OS apps on real, physical phones and tablets that are hosted by AWS. The service allows you to upload your own tests or use built-in, script-free compatibility tests.
________________________________________________________________________________________
Amazon AppStream 2.0
lets you move your desktop applications to AWS without rewriting them. It's easy to install your applications on AppStream 2.0, set launch configurations, and make your applications available to users.
________________________________________________________________________________________
Amazon WorkLink
is a fully managed service that provides your employees and contractors secure, one-click access to your internal websites and web apps using their mobile phones.
________________________________________________________________________________________
Amazon WorkDocs
You can store virtually any type of file on Amazon WorkDocs. Each individual user account on Amazon WorkDocs includes 1 TB of storage capacity by default.
________________________________________________________________________________________
Amazon Simple Email Service (Amazon SES)
is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails. It is limited to sending email.
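A hedged boto3 sketch of sending an email with SES (the addresses are placeholders; the sender identity must be verified first, and in sandbox mode the recipient must be verified too):

import boto3

ses = boto3.client("ses")

# Send a simple transactional email (illustrative addresses and content).
ses.send_email(
    Source="sender@example.com",                      # placeholder, verified identity
    Destination={"ToAddresses": ["user@example.com"]},
    Message={
        "Subject": {"Data": "Order confirmation"},
        "Body": {"Text": {"Data": "Thanks for your order!"}},
    },
)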
________________________________________________________________________________________
AWS IoT Core
is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices.
AWS IoT Core can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely.
________________________________________________________________________________________
AWS Managed Services
AWS Managed Services provides ongoing management of your AWS
infrastructure so you can focus on your applications.
By implementing best practices (ITIL®) to maintain your infrastructure,
AWS Managed Services helps to reduce your operational overhead and risk.
AWS Managed Services automates common activities.
________________________________________________________________________________________
What is the Internet of Things, with an example?
The Internet of Things is a technology that allows us to add a device to an inert object (for example: vehicles, plant electronic systems, roofs, lighting, etc.) so that it can measure environmental parameters, generate associated data, and transmit it through a communications network.
AWS CLOUD PRACTITIONER EXAM NOTES - 13
Machine learning tools
1. Amazon SageMaker
2. Amazon CodeGuru
3. Amazon Comprehend
4. Amazon Rekognition
5. Amazon Textract
6. Amazon Transcribe
7. Amazon Lex
8. Amazon Polly
1.Amazon SageMaker
is a fully managed machine learning service.
With Amazon SageMaker, data scientists and developers can quickly build and train machine learning models, and then deploy them into a production-ready hosted environment.
2. Amazon CodeGuru
provides intelligent recommendations for improving application performance,
efficiency, and code quality in your Java applications.
3. Amazon Comprehend
uses natural language processing (NLP) to extract insights about the
content of documents without the need of any special preprocessing.
With Amazon Comprehend you can search social networking feeds for mentions
of products, scan an entire document repository for key phrases,
or determine the topics contained in a set of documents.
4. Amazon Rekognition
makes it easy to add image and video analysis to your applications (see the sketch after this list).
5.Amazon Textract
enables you to add document text detection and analysis to your applications
6. Amazon Transcribe
provides transcription services for your audio files and audio streams. It uses advanced machine learning technologies to recognize spoken words and transcribe them into text.
7. Amazon Lex
Conversational AI for Chatbots.
8. Amazon Polly
Turns text into lifelike speech.
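To make one of these services concrete, here is a hedged boto3 sketch for Amazon Rekognition (the bucket and image key are placeholder assumptions):

import boto3

rekognition = boto3.client("rekognition")

# Detect labels (objects/scenes) in an image already stored in S3.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-bucket-name", "Name": "photos/cat.jpg"}},
    MaxLabels=5,
    MinConfidence=80,
)
for label in response["Labels"]:
    print(label["Name"], label["Confidence"])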
_______________________________________________________________________________________
AWS CLOUD PRACTITIONER EXAM NOTES - 12
MIGRATION TOOLS
1. AWS Database Migration Service (AWS DMS)
2. AWS Server Migration Service (AWS SMS)
3. AWS Migration Hub (Migration Hub)
4. Migration Evaluator (formerly TSO Logic)
_________________________________________________________________________________________
When to right size when migrating?
1. Right Size Before Migrating
By right sizing before a migration, you can significantly reduce your infrastructure costs. If you skip right sizing to save time, your migration might be faster, but
you will end up with higher cloud infrastructure spend for a potentially long time.
2. Right Sizing is an Ongoing Process
_________________________________________________________________________________________
AWS Database Migration Service (AWS DMS)
is a web service you can use to migrate data from your database that is on-premises, on an Amazon RDS DB instance, or in a database on an Amazon EC2 instance to a database on an AWS service.
You can also migrate a database from an AWS service to an on-premises database. You can migrate between source and target endpoints that use the same database engine, or between source and target endpoints that use different database engines.
_______________________________________________________________________________________
AWS Server Migration Service (AWS SMS)
combines data collection tools with automated server replication to speed the migration of on-premises servers to AWS.
_______________________________________________________________________________________
AWS Migration Hub (Migration Hub)
provides a single location to track migration tasks across
multiple AWS tools and partner solutions.
With Migration Hub, you can choose the AWS and partner migration tools that
best fit your needs while providing visibility into the status of your migration
Migration Hub also provides key metrics and progress information for
individual applications, regardless of which tools are used to migrate them.
_______________________________________________________________________________________
Migration Evaluator (Formerly TSO Logic)
With Migration Evaluator you can gain access to insights and accelerate decision-making
for migration to AWS at no cost. Costs considered include:
- Server cost
- Storage cost
- Network cost
- Data centre cost
_______________________________________________________________________________________
AWS CLOUD PRACTITIONER EXAM NOTES - 11
Storage
1. AWS Backup
2. Amazon Elastic Block Store (Amazon EBS)
3. Amazon Elastic File System (Amazon EFS)
4. Amazon S3
5. Amazon S3 Glacier
6. AWS Snowball Edge
7. AWS Storage Gateway
a. AWS Snowcone
b. AWS Snowmobile
c. AWS Elastic Disaster Recovery (AWS DRS)
d. Amazon FSx
AWS Backup
AWS Backup is a fully managed service that enables you to centralize
and automate data protection across on-premises and AWS services.
AWS Backup also enables you to audit and report on the compliance of
your data protection policies with AWS Backup Audit Manager.
----------------------------------------------------------------------------------------
Amazon Elastic Block Store (Amazon EBS) - Block storage
Amazon Elastic Block Store (Amazon EBS) provides persistent block
storage volumes for use with Amazon EC2 instances in the AWS Cloud.
----------------------------------------------------------------------------------------
Amazon Elastic File System (Amazon EFS)
EFS is a fully managed service that makes it easy to set up and
scale file storage in the Amazon Cloud.
----------------------------------------------------------------------------------------
Amazon S3
S3 is OBJECT storage built to store and retrieve data.
Files can be anywhere from 0 bytes to 5 TB. Files are stored in buckets.
Buckets are root-level folders. Any subfolder within a bucket is a "folder".
S3 is a universal namespace so bucket names must be unique globally.
----------------------------------------------------------------------------------------
S3 Glacier
S3 Glacier Instant Retrieval = rarely accessed data that requires retrieval in milliseconds
S3 Glacier Flexible Retrieval = archived data, retrieval times in minutes or hours
S3 Glacier Deep Archive = lowest cost storage class for long term retention
----------------------------------------------------------------------------------------
AWS Snowball Edge (80 TB)
With AWS Snowball (Snowball), you can transfer hundreds of terabytes or
petabytes of data between your on-premises data centers and Amazon S3.
Snowball Edge Storage Optimized - used for recurring transfer workflows
Storage - 80 TB HDD; Compute - 40 vCPUs and 80 GB of memory
Snowball Edge Compute Optimized - used for machine learning, full-motion
video, and analytics; Storage - 42 TB HDD
----------------------------------------------------------------------------------------
AWS Storage Gateway
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises
access to virtually unlimited cloud storage.
Storage Gateway types: File Gateway, Tape Gateway, Volume Gateway
----------------------------------------------------------------------------------------
AWS Snowcone (8 TB)
is a small, rugged, and secure edge computing and data transfer device.
It features 2 CPUs, 4 GB of memory, and 8 TB of usable storage
----------------------------------------------------------------------------------------
AWS Snowmobile (<100 PB)
It is an exabyte-scale data transfer service used to move large amounts
of data to AWS. You can transfer up to "100 petabytes" of data per Snowmobile,
a 45-foot long ruggedized shipping container, pulled by a semi trailer truck
----------------------------------------------------------------------------------------
AWS Elastic Disaster Recovery (AWS DRS)
minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.
----------------------------------------------------------------------------------------
Amazon FSx
makes it easy and cost effective to launch, run, and scale feature-rich,
high-performance file systems in the cloud.
With Amazon FSx, you can choose between four widely-used file systems:
Lustre, NetApp ONTAP, OpenZFS, and Windows File Server.
----------------------------------------------------------------------------------------