AWS Architect Interview Questions and Answers - Jul 17, 2022

Section 1: What is Cloud Computing

Q1. I have some private servers on my premises, and I have also distributed some of my workload on the public cloud. What is this architecture called?

Virtual Private Network

Private Cloud

Virtual Private Cloud

Hybrid Cloud

Ans: 4.

Explanation: This type of architecture is a hybrid cloud. Why? Because we are using both the public cloud and your on-premises servers, i.e. the private cloud. To make this hybrid architecture easy to use, wouldn't it be better if your private and public clouds were all on the same network (virtually)? This is established by placing your public cloud servers in a Virtual Private Cloud and connecting that virtual cloud with your on-premises servers using a VPN (Virtual Private Network).

Section 2: Amazon EC2

Q2. What does the following command do with respect to Amazon EC2 security groups?

ec2-create-group CreateSecurityGroup

Groups the user-created security groups into a new group for easy access.

Creates a new security group for use with your account.

Creates a new group within the security group.

Creates a new rule within the security group.

Ans: 2.

Explanation: A security group acts much like a firewall: it controls the traffic into and out of your instance, or in AWS terms, the inbound and outbound traffic. The command mentioned is pretty straightforward: it says create security group, and it does exactly that. Once your security group is created, you can add different rules to it. For example, if you have an RDS instance, then to access it you need to add the public IP address of the machine from which you want to access the instance to its security group.
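For reference, the equivalent workflow with today's tooling might look like the following boto3 sketch; the group name, VPC ID and IP address are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a new security group (hypothetical name and VPC ID)
resp = ec2.create_security_group(
    GroupName="rds-access",
    Description="Allow MySQL access from one office IP",
    VpcId="vpc-0123456789abcdef0",
)
group_id = resp["GroupId"]

# Add an inbound rule: allow MySQL (port 3306) from a single public IP
ec2.authorize_security_group_ingress(
    GroupId=group_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{"CidrIp": "203.0.113.25/32"}],
    }],
)
```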

Q3. You have a video transcoding application. The videos are processed according to a queue. If the processing of a video is interrupted on one instance, it is resumed on another instance. Currently there is a huge backlog of videos that needs to be processed; for this you want to add more instances, but you need these instances only until the backlog is cleared. Which of these would be an efficient way to do it?

Ans: You should use On-Demand instances for this. Why? First, the workload has to be processed now, which means it is urgent. Second, you don't need the instances once the backlog is cleared, so Reserved Instances are out of the picture. And since the work is urgent, you cannot stop the work on your instances just because the spot price spiked, so Spot Instances cannot be used either. Hence, On-Demand instances are the right choice in this situation.

Q4. You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective manner.

Which of the following will meet your requirements?

Spot Instances

Reserved Instances

Dedicated Instances

On-Demand Instances

Ans: 1

Explanation: Since the work we are addressing here is not continuous, a Reserved Instance would be idle at times, and the same goes for On-Demand instances. Also, it does not make sense to launch an On-Demand instance whenever work comes up, since it is expensive. Hence Spot Instances are the right fit because of their low rates and no long-term commitments.

Q5. How are stopping and terminating an instance different from each other?

Ans: Starting, stopping and terminating are the three states of an EC2 instance. Let's discuss them in detail:

Stopping and starting an instance: When an instance is stopped, the instance performs a normal shutdown and then transitions to a stopped state. All of its Amazon EBS volumes remain attached, and you can start the instance again at a later time. You are not charged for additional instance hours while the instance is in a stopped state.

Terminating an instance: When an instance is terminated, the instance performs a normal shutdown, and then the attached Amazon EBS volumes are deleted unless the volume's deleteOnTermination attribute is set to false. The instance itself is also deleted, and you can't start the instance again at a later time.
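If you want a volume to survive termination, you can flip that attribute on a running instance. A minimal boto3 sketch, assuming a hypothetical instance ID and the common root device name /dev/xvda:

```python
import boto3

ec2 = boto3.client("ec2")

# Keep the root EBS volume when this instance is terminated
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",     # assumed root device name
        "Ebs": {"DeleteOnTermination": False},
    }],
)
```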

Q6. If I want my instance to run on single-tenant hardware, which value do I need to set the instance's tenancy attribute to?

Dedicated

Isolated

One

Reserved

Ans: 1.

Explanation: The instance tenancy attribute should be set to Dedicated. The rest of the values are invalid.
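Tenancy is normally set at launch time. A hedged boto3 sketch, with placeholder AMI and subnet IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch an instance on single-tenant hardware by setting Tenancy to "dedicated"
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # hypothetical subnet in a VPC
    Placement={"Tenancy": "dedicated"},
)
```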


Q7. When will you incur costs with an Elastic IP address (EIP)?

When an EIP is allocated.

When it is allocated and associated with a running instance.

When it is allocated and associated with a stopped instance.

Costs are incurred regardless of whether the EIP is associated with a running instance.

Ans: 3.

Explanation: You are not charged if only one Elastic IP address is attached to your running instance. But you do get charged in the following conditions:

When you use more than one Elastic IP with your instance.

When your Elastic IP is attached to a stopped instance.

When your Elastic IP is not attached to any instance.

Q8. How is a Spot instance different from an On-Demand instance or a Reserved Instance?

Ans:

First of all, let's keep in mind that Spot Instances, On-Demand instances and Reserved Instances are all pricing models. Spot instances provide the ability for customers to purchase compute capacity with no upfront commitment, at hourly rates usually lower than the On-Demand price in each region. Spot instances work like bidding; the bidding price is called the Spot Price. The Spot Price fluctuates based on supply and demand for instances, but customers will never pay more than the maximum price they have specified. If the Spot Price moves higher than a customer's maximum price, the customer's EC2 instance will be shut down automatically. But the reverse is not true: if the Spot Price comes down again, your EC2 instance will not be launched automatically; one has to do that manually. With Spot and On-Demand instances there is no commitment on the duration from the user's side, but with Reserved Instances one has to stick to the term that was chosen.
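A minimal boto3 sketch of a Spot request under the bidding model described above; the AMI ID and maximum price are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Request one Spot instance with a maximum price (bid) of $0.05 per hour
ec2.request_spot_instances(
    SpotPrice="0.05",  # the maximum price we are willing to pay
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # hypothetical AMI ID
        "InstanceType": "m5.large",
    },
)
```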

Q9. Are Reserved Instances available for Multi-AZ deployments?

Multi-AZ deployments are only available for Cluster Compute instance types

Available for all instance types

Only available for M3 instance types

Not available for Reserved Instances

Ans: 2.

Explanation: Reserved Instances is a pricing model that is available for all instance types in EC2.

Q10. How do you use the processor state control feature available on the c4.8xlarge instance?

Ans: Processor state control consists of two states:

The C state – the sleep state, varying from C0 to C6, with C6 being the deepest sleep state for a processor.

The P state – the performance state, with P0 being the highest and P15 being the lowest possible frequency.

Now, why the C state and P state? Processors have cores, and these cores need thermal headroom to boost their performance. Since all the cores are on the same processor, the temperature should be kept at an optimal level so that all the cores can perform at their highest.

How will these states help with that? If a core is put into a sleep state, it reduces the overall temperature of the processor, and hence the other cores can perform better. The same can be synchronized across cores, so that the processor can boost as many cores as it can by putting other cores to sleep at the right time, and therefore get an overall performance boost.

Concluding, the C and P states can be customized in some EC2 instances like the c4.8xlarge, and thus you can customize the processor according to your workload.


Q11. What kind of network performance parameters can you expect when you launch instances in a cluster placement group?

Ans: The network performance depends on the instance type and network performance specification. If launched in a placement group, you can expect up to:

10 Gbps in a single flow,

20 Gbps in multi-flow, i.e. full duplex

Network traffic outside the placement group will be limited to 5 Gbps (full duplex).

Q12. To deploy a 4-node Hadoop cluster in AWS, which instance types can be used?

Ans: First let's understand what actually happens in a Hadoop cluster. A Hadoop cluster follows a master-slave concept: the master machine processes all the data, while the slave machines store the data and act as data nodes. Since all the storage happens on the slaves, a higher-capacity hard disk is recommended for them, and since the master does all the processing, a higher RAM and a much better CPU are required. Therefore, you can select the configuration of your machines depending on your workload. For example, in this case c4.8xlarge would be preferred for the master machine, whereas for the slave machines we can select i2.large instances. If you don't want to deal with configuring your instances and installing a Hadoop cluster manually, you can directly launch an Amazon EMR (Elastic MapReduce) cluster, which automatically configures the servers for you. You dump the data to be processed into S3, EMR picks it up from there, processes it, and dumps it back into S3.
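The EMR route mentioned above could look roughly like this boto3 sketch; the cluster name, release label, instance types, bucket path and IAM roles are assumptions for illustration:

```python
import boto3

emr = boto3.client("emr")

# Launch a small managed Hadoop cluster: one master plus three core (data) nodes
emr.run_job_flow(
    Name="hadoop-demo",                      # hypothetical cluster name
    ReleaseLabel="emr-6.15.0",               # hypothetical EMR release
    Applications=[{"Name": "Hadoop"}],
    Instances={
        "MasterInstanceType": "c4.8xlarge",  # the master does the processing
        "SlaveInstanceType": "i2.xlarge",    # core nodes store the data
        "InstanceCount": 4,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    LogUri="s3://my-bucket/emr-logs/",       # hypothetical log bucket
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```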

Q13. Where do you think an AMI fits when you are designing an architecture for a solution?

Ans: AMIs (Amazon Machine Images) are like templates of virtual machines, and an instance is derived from an AMI. AWS offers pre-baked AMIs which you can choose while launching an instance; some AMIs are not free and have to be bought from the AWS Marketplace. You can also choose to create your own custom AMI, which can help you save space on AWS. For example, if you don't need a particular set of software in your installation, you can customize your AMI to leave it out. This makes it cost efficient, since you are removing the unwanted things.

 

Q14. How do you choose an Availability Zone?

Ans: Let's understand this through an example. Consider a company which has a user base in India as well as in the US.

Let us see how we can choose the region for this use case:

The regions to choose between here are Mumbai and North Virginia. Let us first compare the pricing: you have hourly prices, which can be converted to a per-month figure. Here North Virginia emerges as the winner. But pricing cannot be the only parameter to consider; performance should also be kept in mind, so let's look at latency as well. Latency is basically the time that a server takes to respond to your requests, i.e. the response time. North Virginia wins again!

So concluding, North Virginia should be chosen for this use case.

Q15. Is one Elastic IP address enough for every instance that I have running?

Ans: It depends! Every instance comes with its own private and public address. The private address is associated exclusively with the instance and is returned to Amazon EC2 only when the instance is stopped or terminated. Similarly, the public address is associated exclusively with the instance until it is stopped or terminated. However, the public address can be replaced by an Elastic IP address, which stays with the instance as long as the user doesn't manually detach it. But what if you are hosting multiple websites on your EC2 server? In that case you may require more than one Elastic IP address.

Q16. What are the best practices for security in Amazon EC2?

Ans: There are several best practices for securing Amazon EC2. A few of them are given below:

Use AWS Identity and Access Management (IAM) to control access to your AWS resources.

Restrict access by only allowing trusted hosts or networks to access ports on your instance.

Review the rules in your security groups regularly, and make sure you apply the principle of least privilege – only open up the permissions that you require.

Disable password-based logins for instances launched from your AMI. Passwords can be found or cracked, and are a security risk.

Section 3: Amazon Storage

Q17. You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which method will ensure that all objects uploaded to the bucket are set to public read?

Set permissions on the object to public read during upload.

Configure the bucket policy to set all objects to public read.

Use AWS Identity and Access Management roles to set the bucket to public read.

Amazon S3 objects default to public read, so no action is needed.

Ans: 2.

Explanation: Rather than making changes to every object, it is better to set the policy for the whole bucket. IAM is used to give more granular permissions; since this is a website, all objects need to be public, and a bucket policy applies that to every object by default.
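A hedged sketch of such a bucket policy applied with boto3; the bucket name is a placeholder:

```python
import json
import boto3

s3 = boto3.client("s3")

# Bucket policy granting public read on every object in the bucket
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadForStaticAssets",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-static-assets/*",  # hypothetical bucket
    }],
}

s3.put_bucket_policy(Bucket="my-static-assets", Policy=json.dumps(policy))
```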

Q18. A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third-party software to only the Amazon S3 bucket named “company-backup”?

A custom bucket policy limited to the Amazon S3 API in the Amazon Glacier archive “company-backup”.

A custom bucket policy limited to the Amazon S3 API in “company-backup”.

A custom IAM user policy limited to the Amazon S3 API for the Amazon Glacier archive “company-backup”.

A custom IAM user policy limited to the Amazon S3 API in “company-backup”.

Ans: 4.

Explanation: Taking a cue from the previous question, this use case involves more granular permissions, hence IAM would be used here.
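A minimal sketch of such an IAM user policy attached with boto3; the user name and policy name are hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")

# Inline policy restricting the third-party software's user to one bucket
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::company-backup",    # the bucket itself
            "arn:aws:s3:::company-backup/*",  # and every object in it
        ],
    }],
}

iam.put_user_policy(
    UserName="backup-software",        # hypothetical IAM user for the software
    PolicyName="company-backup-only",
    PolicyDocument=json.dumps(policy),
)
```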

Q19. Can S3 be used with EC2 instances? If yes, how?

Ans: Yes, it can be used for instances with root devices backed by local instance storage. By using Amazon S3, developers have access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. In order to execute systems in the Amazon EC2 environment, developers use the tools provided to load their Amazon Machine Images (AMIs) into Amazon S3 and to move them between Amazon S3 and Amazon EC2.

Another use case could be for websites hosted on EC2 to load their static content from S3.

For a detailed discussion on S3, please refer to our S3 AWS blog.

Q20. A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which method will enable the branch office to access their data?

Restore by implementing a lifecycle policy on the Amazon S3 bucket.

Make an Amazon Glacier Restore API call to load the files into another Amazon S3 bucket within four to six hours.

Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot.

Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance.

Ans: 3.

Explanation: The quickest way would be to launch a new Storage Gateway instance. Why? Since time is the key factor that drives every business, troubleshooting the broken link would take more time; instead we can simply restore the previous working state of the Storage Gateway on a new instance.

Q21. When you need to move data over long distances using the internet, for instance across countries or continents to your Amazon S3 bucket, which method or service will you use?

Amazon Glacier

Amazon CloudFront

Amazon S3 Transfer Acceleration

Amazon Snowball

Ans: 3.

Explanation: You would not use Snowball, because for now the Snowball service does not support cross-region data transfer, and since we are moving data across countries, Snowball cannot be used. Transfer Acceleration is the right choice here, as it speeds up your data transfer by up to 300% compared to regular transfer speed, using optimized network paths and Amazon's content delivery network.

Q22. How can you speed up data transfer in Snowball?

Ans: The data transfer can be sped up in the following ways:

By performing multiple copy operations at one time, i.e. if the workstation is powerful enough, you can initiate multiple cp commands, each from a different terminal, to the same Snowball device.

Copying from multiple workstations to the same Snowball.

Transferring large files, or creating a batch of small files; this will reduce the encryption overhead.

Eliminating unnecessary hops, i.e. setting things up so that the source machine(s) and the Snowball are the only machines active on the switch being used; this can greatly improve performance.

Section 4: AWS VPC

Q23. If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a predetermined private IP address, you should:

Launch the instances from a private Amazon Machine Image (AMI).

Assign a group of sequential Elastic IP addresses to the instances.

Launch the instances in an Amazon Virtual Private Cloud (VPC).

Launch the instances in a Placement Group.

Ans: 3.

Explanation: The best way of connecting to your cloud resources (e.g. EC2 instances) from your own data center (i.e. your private cloud) is a VPC. Once you connect your data center to the VPC in which your instances are present, each instance is assigned a private IP address which can be accessed from your data center. That way, you can access your public cloud resources as if they were on your own network.

Q24. Can I connect my corporate datacenter to the Amazon Cloud?

Ans: Yes, you can do this by establishing a VPN (Virtual Private Network) connection between your company's network and your VPC (Virtual Private Cloud); this will allow you to interact with your EC2 instances as if they were within your existing network.

Q25. Is it possible to change the private IP addresses of an EC2 instance while it is running/stopped in a VPC?

Ans: The primary private IP address is attached to the instance for its entire lifetime and cannot be changed. However, secondary private addresses can be unassigned, assigned or moved between interfaces or instances at any point.
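Assigning a secondary private IP could look like this boto3 sketch; the network interface ID and the address are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Attach a secondary private IP to a network interface; it can later be
# unassigned or moved to another interface at any time
ec2.assign_private_ip_addresses(
    NetworkInterfaceId="eni-0123456789abcdef0",  # hypothetical ENI ID
    PrivateIpAddresses=["10.0.1.25"],            # hypothetical address in the subnet
)
```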

Q26. Why do you make subnets?

Because there is a shortage of networks

To efficiently utilize networks that have a large number of hosts.

Because there is a shortage of hosts.

To efficiently utilize networks that have a small number of hosts.

Ans: 2.

Explanation: If there is a network which has a large number of hosts, managing all these hosts can be a tedious job. Therefore we divide this network into subnets (sub-networks) so that managing these hosts becomes easier.

Q27. Which of the following is true?

You can attach multiple route tables to a subnet

You can attach multiple subnets to a route table

Both A and B

None of these.

Ans: 2.

Explanation: Route tables are used to route network packets; if a subnet had multiple route tables, it would lead to confusion as to where the packet has to go. Therefore, there is only one route table per subnet, and since a route table can have any number of records or entries, attaching multiple subnets to a route table is possible.

Q28. In CloudFront, what happens when content is NOT present at an edge location and a request is made to it?

An error “404 not found” is returned

CloudFront delivers the content directly from the origin server and stores it in the cache of the edge location

The request is kept on hold until the content is delivered to the edge location

The request is routed to the next closest edge location

Ans: 2.

Explanation: CloudFront is a content delivery system which caches data at the edge location nearest to the user, to reduce latency. If the data is not present at an edge location, the first time the data is transferred from the origin server, but from the next time onwards it will be served from the cached edge.

Q29. If I'm using Amazon CloudFront, can I use Direct Connect to transfer objects from my own data center?

Ans: Yes. Amazon CloudFront supports custom origins, including origins outside of AWS. With AWS Direct Connect, you will be charged the respective data transfer rates.

Q30. If my AWS Direct Connect fails, will I lose my connectivity?

Ans: If a backup AWS Direct Connect has been configured, in the event of a failure it will switch over to the backup. It is recommended to enable Bidirectional Forwarding Detection (BFD) when configuring your connections to ensure faster detection and failover. On the other hand, if you have configured a backup IPsec VPN connection instead, all VPC traffic will fail over to the backup VPN connection automatically. Traffic to/from public resources such as Amazon S3 will be routed over the Internet. If you do not have a backup AWS Direct Connect link or an IPsec VPN link, then Amazon VPC traffic will be dropped in the event of a failure.

Section 5: Amazon Database

Q31. If I launch a standby RDS instance, will it be in the same Availability Zone as my primary?

Only for Oracle RDS types

Yes

Only if it is configured at launch

No

Ans: 4.

Explanation: No. Since the purpose of having a standby instance is to survive an infrastructure failure (if one occurs), the standby instance is kept in a different Availability Zone, which is a physically separate, independent infrastructure.

Q32. When would I choose Provisioned IOPS over Standard RDS storage?

If you have batch-oriented workloads

If you run production online transaction processing (OLTP) workloads.

If you have workloads that are not sensitive to consistent performance

All of the above

Ans: 1.

Explanation: Provisioned IOPS delivers high IO rates, but on the other hand it is expensive as well. Batch processing workloads do not require manual intervention and allow full utilization of systems, therefore Provisioned IOPS would be preferred for batch-oriented workloads.

Q33. How are Amazon RDS, DynamoDB and Redshift different?

Amazon RDS is a database management service for relational databases; it manages patching, upgrading, backing up of data and so on for you without your intervention. RDS is a database management service for structured data only.

DynamoDB, on the other hand, is a NoSQL database service; NoSQL deals with unstructured data.

Redshift is an entirely different service: it is a data warehouse product and is used in data analysis.

Q34. If I am running my DB instance as a Multi-AZ deployment, can I use the standby DB instance for read or write operations along with the primary DB instance?

Yes

Only with MySQL-based RDS

Only for Oracle RDS instances

No

Ans: 4.

Explanation: No, the standby DB instance cannot be used in parallel with the primary DB instance, as the former is solely used for standby purposes; it cannot be used unless the primary instance goes down.

Q35. Your company's branch offices are all over the world; they use a software with a multi-regional deployment on AWS, and they use MySQL 5.6 for data persistence.

The task is to run an hourly batch process that reads data from every region to compute cross-regional reports which will be distributed to all the branches. This should be done in the shortest time possible. How will you build the DB architecture in order to meet the requirements?

For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region

For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region

For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region

For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region

Ans: 1.

Explanation: For this we will take an RDS instance as the master, since it will manage our database for us, and since we have to read from every region, we'll put a read replica of each regional instance in the HQ region, where the data needs to be read. Option C is not correct, since placing a read replica is more efficient than shipping a snapshot: a read replica can be promoted to an independent DB instance if needed, whereas with a DB snapshot it becomes mandatory to launch a separate DB instance.

Q36. Can I run more than one DB instance for Amazon RDS for free?

Ans: Yes. You can run more than one Single-AZ Micro database instance, and that too for free! However, any use exceeding 750 instance hours, across all Amazon RDS Single-AZ Micro DB instances, across all eligible database engines and regions, will be billed at standard Amazon RDS prices. For example: if you run two Single-AZ Micro DB instances for 400 hours each in a single month, you will accumulate 800 instance hours of usage, of which 750 hours will be free. You will be billed for the remaining 50 hours at the standard Amazon RDS price.

For a detailed discussion on this topic, please refer to our RDS AWS blog.

Q37. Which AWS services will you use to collect and process e-commerce data for near real-time analysis?

Amazon ElastiCache

Amazon DynamoDB

Amazon Redshift

Amazon Elastic MapReduce

Ans: 2,3.

Explanation: DynamoDB is a fully managed NoSQL database service. DynamoDB can therefore be fed any kind of unstructured data, which can be data from e-commerce websites as well, and later analysis can be done on it using Amazon Redshift. We are not using Elastic MapReduce, since near real-time analysis is needed.

Q38. Can I retrieve only a specific element of the data if I have nested JSON data in DynamoDB?

Ans: Yes. When using the GetItem, BatchGetItem, Query or Scan APIs, you can define a projection expression to determine which attributes should be retrieved from the table. Those attributes can include scalars, sets, or elements of a JSON document.
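A small boto3 sketch of this, assuming a hypothetical users table whose items contain a nested address document:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Fetch only the name attribute and the nested address.city element of one item
resp = dynamodb.get_item(
    TableName="users",                        # hypothetical table
    Key={"user_id": {"S": "u-123"}},          # hypothetical key
    ProjectionExpression="#n, address.city",
    ExpressionAttributeNames={"#n": "name"},  # "name" is a DynamoDB reserved word
)
print(resp.get("Item"))
```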

Q39. A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company's requirements?

MySQL installed on two Amazon EC2 instances in a single Availability Zone

Amazon RDS for MySQL with Multi-AZ

Amazon ElastiCache

Amazon DynamoDB

Ans: 2.

Explanation: Complex queries and table joins call for a relational database, which rules DynamoDB out, and since the company has limited staff and needs high availability, a managed Multi-AZ RDS deployment is the apt choice.

Q40. What happens to my backups and DB Snapshots if I delete my DB Instance?

Ans: When you delete a DB instance, you have the option of creating a final DB snapshot; if you do that, you can restore your database from that snapshot. RDS retains this user-created DB snapshot along with all other manually created DB snapshots after the instance is deleted. Automated backups, however, are deleted; only manually created DB snapshots are retained.

Q41. Which of the following use cases are suitable for Amazon DynamoDB? Choose 2 answers

Managing web sessions.

Storing JSON documents.

Storing metadata for Amazon S3 objects.

Running relational joins and complex updates.

Ans: 2,3.

Explanation: DynamoDB is a natural fit for storing JSON documents and for storing metadata for Amazon S3 objects, since both are unstructured or semi-structured data. Running relational joins and complex updates, on the other hand, requires a relational database; DynamoDB does not support them.

Q42. How can I load my data into Amazon Redshift from different data sources like Amazon RDS, Amazon DynamoDB and Amazon EC2?

Ans: You can load the data in the following two ways:

You can use the COPY command to load data in parallel directly into Amazon Redshift from Amazon EMR, Amazon DynamoDB, or any SSH-enabled host (see the sketch after this list).

AWS Data Pipeline provides a high-performance, reliable, fault-tolerant solution to load data from a variety of AWS data sources. You can use AWS Data Pipeline to specify the data source, the desired data transformations, and then execute a pre-written import script to load your data into Amazon Redshift.
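A hedged sketch of the COPY route, issued through the Redshift Data API from boto3; the cluster, database, target table, DynamoDB source table and IAM role are all hypothetical placeholders:

```python
import boto3

rsd = boto3.client("redshift-data")

# Load a Redshift table in parallel from a DynamoDB table via COPY
copy_sql = """
    COPY sales
    FROM 'dynamodb://sales-events'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    READRATIO 50;
"""

rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    Database="dev",
    DbUser="admin",
    Sql=copy_sql,
)
```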

Q43. Your application has to retrieve data from your users' mobile devices every 5 minutes, and the data is stored in DynamoDB. Later, every day at a particular time, the data is extracted into S3 on a per-user basis, and your application is then used to visualize the data for the users. You are asked to optimize the architecture of the backend system to lower cost. What would you recommend?

Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.

Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.

Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.

Write data directly into an Amazon Redshift cluster, replacing both Amazon DynamoDB and Amazon S3.

Ans: 3.

Explanation: Since our work requires the data to be extracted and analyzed, one could use provisioned IO to optimize this process, but since that is expensive, using ElastiCache instead to cache the results in memory can reduce the provisioned read throughput and hence reduce cost without affecting performance.

Q44. You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)

Deploy an ElastiCache in-memory cache running in each Availability Zone

Implement sharding to distribute load to multiple RDS MySQL instances

Increase the RDS MySQL instance size and implement provisioned IOPS

Add an RDS MySQL read replica in each Availability Zone

Ans: 1,3.

Explanation: Since the site does a lot of reads and writes, provisioned IO may become expensive. But we need high performance as well, so the data can be cached using ElastiCache, which serves the frequently read data. As for RDS, since read contention is occurring, the instance size should be increased and provisioned IOPS should be implemented to increase performance.

Q45. A startup is running a pilot deployment of around 100 sensors to measure street noise and air quality in urban areas for 3 months. It was noted that every month around 4 GB of sensor data is generated. The company uses a load-balanced, auto-scaled layer of EC2 instances and an RDS database with 500 GB standard storage. The pilot was a success, and now they want to deploy at least 100K sensors, which need to be supported by the backend. You also need to store the data for at least 2 years to analyze it. Which setup of the following would you prefer?

Add an SQS queue to the ingestion layer to buffer writes to the RDS instance

Ingest data into a DynamoDB table and move old data to a Redshift cluster

Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage

Keep the current architecture but upgrade RDS storage to 3 TB and 10K provisioned IOPS

Ans: 3.

Explanation: A Redshift cluster would be preferred because it is easy to scale and the work is done in parallel across the nodes, which is ideal for a bigger workload like our use case. Since 100 sensors generate 4 GB of data every month, over 2 years that comes to around 96 GB; and since the sensors will be increased to 100K in number, i.e. a thousand times more, 96 GB roughly becomes 96 TB. Hence option 3 is the right answer.

Section 6: AWS Auto Scaling, AWS Load Balancer

Q46. Suppose you have an application where you have to render images and also do some general computing. Which of the following services will best fit your need?

Classic Load Balancer

Application Load Balancer

Both of them

None of these

Ans: 2.

Explanation: You will choose an Application Load Balancer, since it supports path-based routing, which means it can take decisions based on the URL. Hence, if your task requires image rendering, it will route the request to a specific instance, and for general computing it will route the request to a different instance.

Q47. What is the difference between Scalability and Elasticity?

Ans: Scalability is the ability of a system to increase its hardware resources to handle an increase in demand. It can be achieved by increasing the hardware specifications or by increasing the number of processing nodes.

Elasticity is the ability of a system to handle an increase in workload by adding additional hardware resources when demand increases (same as scaling), but also rolling back the scaled resources when they are no longer needed. This is particularly helpful in cloud environments, where a pay-per-use model is followed.

Q48. How will you change the instance type for instances which are running in your application tier and are using Auto Scaling? From which of the following areas will you change it?

Auto Scaling policy configuration

Auto Scaling group

Auto Scaling tags configuration

Auto Scaling launch configuration

Ans: 4.

Explanation: The Auto Scaling tags configuration is used to attach metadata to your instances; to change the instance type you have to use the Auto Scaling launch configuration, as in the sketch below.
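A hedged boto3 sketch of swapping the instance type by pointing the group at a new launch configuration; the names and AMI ID are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configurations are immutable, so create a new one with the new type...
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-tier-v2",
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="m5.xlarge",          # the new instance type
)

# ...then point the Auto Scaling group at it; newly launched instances use it
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-tier-asg",
    LaunchConfigurationName="app-tier-v2",
)
```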

Q49. You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance?

Create a load balancer, and register the Amazon EC2 instance with it

Create a CloudFront distribution, and configure the Amazon EC2 instance as the origin

Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action

Create a launch configuration from the instance using the CreateLaunchConfiguration action

Ans: 1.

Explanation: Creating an Auto Scaling group alone will not solve the problem unless you attach a load balancer to it. Once you attach a load balancer to an Auto Scaling group, it will efficiently distribute the load among all the instances. Option B – CloudFront is a CDN, a data transfer tool, so it will not help reduce the load on the EC2 instance. Similarly for the last option – a launch configuration is a template for configuration and has no connection with reducing load.

Q50. When should I use a Classic Load Balancer and when should I use an Application Load Balancer?

Ans: A Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while an Application Load Balancer is ideal for microservices or container-based architectures where there is a need to route traffic to multiple services or to load balance across multiple ports on the same EC2 instance.

For a detailed discussion on Auto Scaling and the Load Balancer, please refer to our EC2 AWS blog.

Q51. What does connection draining do?

Terminates instances which are not in use.

Re-routes traffic from instances which are to be updated or have failed a health check.

Re-routes traffic from instances that have more workload to instances that have less workload.

Drains all the connections from an instance with one click.

Ans: 2.

Explanation: Connection draining is a service under ELB which constantly monitors the health of the instances. If any instance fails a health check, or if any instance has to be patched with a software update, it pulls all the traffic away from that instance and re-routes it to the other instances.

Q52. When an instance is unhealthy, it is terminated and replaced with a new one. Which of the following services does that?

 Sticky Sessions

 Fault Tolerance

 Connection Draining

 Monitoring

Ans: 2.

Explanation: When ELB detects that an instance is unhealthy, it starts routing incoming traffic to the other healthy instances in the region. If all the instances in a region become unhealthy, and you have instances in some other Availability Zone/region, your traffic is directed to them. Once your instances become healthy again, traffic is re-routed back to the original instances.

Q53. What are lifecycle hooks used for in Auto Scaling?

They are used to do health checks on instances

They are used to put an additional wait time on a scale-in or scale-out event.

They are used to shorten the wait time of a scale-in or scale-out event

 None of these

Ans: 2.

Explanation: Lifecycle hooks are used to add a wait time before any lifecycle action, i.e. launching or terminating an instance, occurs. The purpose of this wait time can be anything, from extracting log files before terminating an instance to installing the necessary software on an instance before launching it.
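A minimal boto3 sketch of adding such a hook; the hook and group names are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Pause terminating instances for up to 5 minutes so logs can be collected
autoscaling.put_lifecycle_hook(
    LifecycleHookName="drain-logs-before-terminate",  # hypothetical name
    AutoScalingGroupName="app-tier-asg",              # hypothetical group
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,      # seconds to wait before the action proceeds
    DefaultResult="CONTINUE",  # what to do if the timeout expires
)
```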

Q54. A user has set up an Auto Scaling group. Due to some issue, the group has failed to launch a single instance for more than 24 hours. What will happen to Auto Scaling in this situation?

Auto Scaling will keep trying to launch the instance for 72 hours

Auto Scaling will suspend the scaling process

Auto Scaling will start an instance in a separate region

The Auto Scaling group will be terminated automatically

Ans: 2.

Explanation: Auto Scaling allows you to suspend and then resume one or more of the Auto Scaling processes in your Auto Scaling group. This is very useful when you want to investigate a configuration problem or another issue with your web application and then make changes to your application, without triggering the Auto Scaling process.

Section 7: CloudTrail, Route 53

Q55. You have an EC2 Security Group with several running EC2 instances. You changed the Security Group rules to allow inbound traffic on a new port and protocol, and then launched several new instances in the same Security Group. The new rules apply:

Immediately to all instances in the security group.

Immediately to the new instances only.

Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply.

To all instances, but it may take several minutes for old instances to see the changes.

Ans: 1.

Explanation: Any rule specified in an EC2 Security Group applies immediately to all the instances, irrespective of whether they were launched before or after the rule was added.

Q56. To create a mirror image of your environment in another region for disaster recovery, which of the following AWS resources do not need to be recreated in the second region? (Choose 2 answers)

Route 53 Record Sets

Elastic IP Addresses (EIP)

EC2 Key Pairs

Launch configurations

Security Groups

Ans: 1,2.

Explanation: Elastic IPs and Route 53 record sets are common assets, therefore there is no need to replicate them, since Elastic IPs and Route 53 are valid across regions.

Q57. A user wants to capture all client connection information from his load balancer at an interval of 5 minutes. Which of the following options should he choose for his application?

Enable AWS CloudTrail for the load balancer.

Enable access logs on the load balancer.

Install the Amazon CloudWatch Logs agent on the load balancer.

Enable Amazon CloudWatch metrics on the load balancer.

Ans: 2.

Explanation: ELB access logs capture detailed information about each client connection sent to the load balancer, and the logs can be published at 5-minute intervals, which matches this requirement. AWS CloudTrail, by contrast, logs API calls made against the load balancer, not client connections.

Q58. A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the customer's requirement?

Enable AWS CloudTrail to audit all Amazon S3 bucket access.

Enable server access logging for all required Amazon S3 buckets.

Enable the Requester Pays option to track access via AWS Billing.

Enable Amazon S3 event notifications for Put and Post.

Ans: 1.

Explanation: AWS CloudTrail has been designed for logging and tracking API calls, and it is available for S3, hence it should be used in this use case.

Q59. Which of the following are true regarding AWS CloudTrail? (Choose 2 answers)

CloudTrail is enabled globally

CloudTrail is enabled on a per-region and service basis

Logs can be delivered to a single Amazon S3 bucket for aggregation.

CloudTrail is enabled for all available services within a region.

Ans: 2,3.

Explanation: CloudTrail is not enabled for all the services and is also not available in all the regions, therefore option 2 is correct. The logs can be delivered to your S3 bucket, hence option 3 is correct as well.

Q60. What happens if CloudTrail is turned on for my account but my Amazon S3 bucket is not configured with the correct policy?

Ans: CloudTrail files are delivered according to S3 bucket policies. If the bucket is not configured or is misconfigured, CloudTrail might not be able to deliver the log files.

Q61. How do I transfer my existing domain name registration to Amazon Route 53 without disrupting my existing web traffic?

Ans: You will need to get a list of the DNS records for your domain name first; it is usually available in the form of a “zone file” that you can get from your existing DNS provider. Once you receive the DNS records, you can use Route 53's Management Console or simple web-services interface to create a hosted zone that will store your DNS records for your domain name, and then follow its transfer process. This also includes steps such as updating the nameservers for your domain name to the ones associated with your hosted zone. To complete the process, you have to contact the registrar with whom you registered your domain name and follow the transfer process. As soon as your registrar propagates the new name server delegations, your DNS queries will start to get answered.

 

Section 8: AWS SQS, AWS SNS, AWS SES, AWS Elastic Beanstalk

Q62. Which of the following services would you not use to deploy an app?

Elastic Beanstalk

Lambda

Opsworks

CloudFormation

Ans: 2.

Explanation: Lambda is used for running serverless applications. It can be used to deploy functions triggered by events. When we say serverless, we mean without you worrying about the computing resources running in the background. It is not designed for creating applications which are publicly accessed.

Q63. How does Elastic Beanstalk apply updates?

By having a duplicate ready with updates before swapping.

By updating on the instance while it is running

By taking the instance down in the maintenance window

Updates should be installed manually

Ans: 1.

Explanation: Elastic Beanstalk prepares a duplicate copy of the instance before updating the original instance, and routes your traffic to the duplicate instance, so that in case your updated application fails, it will switch back to the original instance and there will be no downtime experienced by the users who are using your application.

Q64. How is AWS Elastic Beanstalk different from AWS OpsWorks?

Ans: AWS Elastic Beanstalk is an application management platform, while OpsWorks is a configuration management platform. Beanstalk is an easy-to-use service which is used for deploying and scaling web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go and Docker. Customers upload their code and Elastic Beanstalk automatically handles the deployment. The application will be ready to use without any infrastructure or resource configuration.

In contrast, AWS OpsWorks is an integrated configuration management platform for IT administrators or DevOps engineers who want a high degree of customization and control over operations.

Q65. What happens if my application stops responding to requests in Beanstalk?

Ans: AWS Beanstalk applications have a system in place for avoiding failures in the underlying infrastructure. If an Amazon EC2 instance fails for any reason, Beanstalk will use Auto Scaling to automatically launch a new instance. Beanstalk can also detect if your application is not responding on the custom link, even though the infrastructure appears healthy; in that case, it will be logged as an environmental event (e.g. a bad version was deployed) so you can take the appropriate action.

Section 9: AWS OpsWorks, AWS KMS

Q66. How is AWS OpsWorks different from AWS CloudFormation?

Ans: OpsWorks and CloudFormation both support application modelling, deployment, configuration, management and related activities. Both support a wide variety of architectural patterns, from simple web applications to highly complex applications. AWS OpsWorks and AWS CloudFormation differ in abstraction level and areas of focus.

AWS CloudFormation is a building-block service which enables customers to manage almost any AWS resource via a JSON-based domain-specific language. It provides foundational capabilities for the full breadth of AWS, without prescribing a particular model for development and operations. Customers define templates and use them to provision and manage AWS resources, operating systems and application code.

In contrast, AWS OpsWorks is a higher-level service that focuses on providing highly productive and reliable DevOps experiences for IT administrators and ops-minded developers. To do this, AWS OpsWorks employs a configuration management model based on concepts such as stacks and layers, and provides integrated experiences for key activities like deployment, monitoring, auto-scaling, and automation. Compared to AWS CloudFormation, AWS OpsWorks supports a narrower range of application-oriented AWS resource types, including Amazon EC2 instances, Amazon EBS volumes, Elastic IPs, and Amazon CloudWatch metrics.

Q67. I created a key in the Oregon region to encrypt my data in the North Virginia region for security purposes. I added two users to the key, along with an external AWS account. Then, when I tried to encrypt an object in S3, the key that I had just created was not listed. What could be the reason?

External AWS accounts are not supported.

AWS S3 cannot be integrated with KMS.

The key should be in the same region.

New keys take some time to be reflected in the list.

Ans: 3.

Explanation: The key created and the data to be encrypted should be in the same region. Hence the approach taken here to secure the data is incorrect.

Q68. A company needs to monitor the read and write IOPS for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS service can accomplish this?

Amazon Simple Email Service

Amazon CloudWatch

Amazon Simple Queue Service

Amazon Route 53

Ans: 2.

Explanation: Amazon CloudWatch is a cloud monitoring tool, and hence it is the right service for the mentioned use case. The other options listed here are used for other purposes; for example, Route 53 is used for DNS services. Therefore CloudWatch is the apt choice.

Q69. What happens when one of the resources in a stack cannot be created successfully in AWS OpsWorks?

Ans: When an event like this occurs, the “automatic rollback on error” feature kicks in, which causes all the AWS resources which were created successfully up to the point where the error occurred to be deleted. This is helpful since it does not leave behind any erroneous data, and it guarantees that stacks are either created fully or not created at all. It is useful in situations where you may accidentally exceed your limit on the number of Elastic IP addresses, or you may not have access to an EC2 AMI that you are trying to run, etc.

Q70. What automation tools can you use to spin up servers?

Ans: Any of the following tools can be used:

Roll-your-own scripts, using the AWS API tools. Such scripts can be written in bash, Perl or another language of your choice (see the sketch after this list).

Use a configuration management and provisioning tool like Puppet or its successor Opscode Chef. You can also use a tool like Scalr.

Use a managed solution such as RightScale.
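The roll-your-own route from the first point might be as small as this hedged boto3 sketch; the AMI ID, key pair and tag are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Spin up a single server and tag it so later scripts can find it
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    KeyName="ops-key",                # hypothetical key pair
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "auto-spun-server"}],
    }],
)
print(resp["Instances"][0]["InstanceId"])
```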



