This article provides an overview of 50+ Amazon Web Services (AWS) offerings in 2023. AWS is the leading vendor of cloud services and infrastructure, dominating the cloud computing market:
Amazon net sales increased by 15% to $127.1 billion in Q3 2022, up from $110.8 billion in Q3 2021. AWS segment sales increased by 27% year-over-year to reach $20.5 billion in the quarter. AWS has announced new commitments for customers in many industries and countries, and it is continuing to expand its infrastructure around the world. In this quarter, AWS opened its second Region in the UAE and announced plans to launch a new Region in Thailand.
- AWS offers cloud web hosting solutions that provide businesses, non-profits, and governmental organizations with low-cost ways to deliver their websites and web applications.
- AWS offers more than 200 services to its clients, far more than other cloud service providers. In this article, we cover some of the critical AWS services, categorized under the following domains:
Compute: EC2, Lambda, Elastic Beanstalk, Elastic Load Balancer, and Auto Scaling.
Storage: S3, Glacier, CloudFront, Elastic File System, and Storage Gateway.
Databases: RDS, DynamoDB, ElastiCache, and Redshift.
Migration: this domain deals with transferring data to and from the AWS infrastructure. AWS Snowball is the service used when you need to physically transfer a considerable amount of data to the AWS infrastructure.
Example: SAP HANA on AWS DMO.
Networking: VPC, Direct Connect, and Route 53.
Management tools: CloudWatch, CloudFormation, CloudTrail, and OpsWorks.
Security: IAM and KMS.
Bottom Line: AWS offers a broad set of global cloud-based products including compute, storage, databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications: on-demand, available in seconds, with pay-as-you-go pricing. From data warehousing to deployment tools, directories to content delivery, over 200 AWS services are available.
- Amazon S3 -> Create bucket.
- Buckets are containers for data stored in S3.
- Key steps: General configuration, access management, bucket versioning, and encryption options.
- Upload objects to S3 and manage access using bucket policies and ACLs.
- Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access.
- Options: Create folders for images and text files.
- Bucket policy: An Amazon Resource Name (ARN) is a naming convention used to uniquely identify a particular resource in the Amazon Web Services (AWS) public cloud; bucket policies reference buckets and objects by their ARNs.
- S3 storage classes:
S3 Standard – Infrequent Access (Standard – IA)
S3 Intelligent-Tiering automatically moves objects between access tiers, making it a cost-effective choice when access patterns are unknown or changing.
Lifecycle management – a set of rules that transition or expire objects over time
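The bucket policy and lifecycle rules above can be sketched as plain request payloads. This is a minimal, illustrative sketch: the bucket name, prefix, and day thresholds are hypothetical, and the boto3 calls that would apply them are shown only in comments.

```python
import json

# Hypothetical bucket name for illustration.
BUCKET = "example-media-bucket"

# Bucket policy: allow public read of objects under images/ only.
# The "Resource" field uses the bucket's ARN (arn:aws:s3:::<bucket>/<key>).
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadImages",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/images/*",
    }],
}

# Lifecycle rule: move objects to Standard-IA after 30 days,
# to Glacier after 90 days, and expire them after 365 days.
lifecycle_config = {
    "Rules": [{
        "ID": "archive-then-expire",
        "Filter": {"Prefix": ""},
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 365},
    }],
}

# With boto3 these would be applied roughly as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
#   s3.put_bucket_lifecycle_configuration(
#       Bucket=BUCKET, LifecycleConfiguration=lifecycle_config)
print(json.dumps(bucket_policy, indent=2))
```

Keeping the policy and lifecycle rules as data like this also makes them easy to version-control alongside your infrastructure code.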
Amazon Kendra lets you build search for your own product or organization without spending time building a back-end search engine.
First, build an index – the storage location for your data.
There are more than 30 data connectors, starting with S3 object storage. The maximum file size is 50 MB.
Search functionality: you type a natural-language query. Kendra keeps learning whether each search result is relevant, you can sort results by multiple categories, and Kendra highlights the key words – all of which improves the user experience.
Add FAQs: We add FAQ as a feature on the search console.
Routing Policies with Route 53
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. Route 53 connects user requests to internet applications running on AWS or on-premises.
AWS Route 53 translates domain names, such as http://www.wordpress.com, into their corresponding numeric IP addresses (in this example, 126.96.36.199). In this way, AWS Route 53 simplifies how cloud architecture routes users to internet applications.
Creating Routing Policies to Handle Traffic with AWS Route 53:
From zero to hero with AWS Route 53, step by step:
- Creating a Free Domain Using Freenom
- Launching EC2 Instances and Creating a Simple Routing Policy.
- Creating a Weighted Routing Policy & Health Checks.
- Creating a Latency-based Routing Policy.
- Creating a Failover Routing Policy.
- Creating a GeoLocation Routing Policy.
- Creating a Multi-Value Answer Routing Policy.
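As one concrete example from the list above, a weighted routing policy can be expressed as the change batch payload that Route 53's ChangeResourceRecordSets API expects. This is a hedged sketch: the domain, IP addresses, set identifiers, and hosted zone ID are all hypothetical, and the boto3 call is shown only in a comment.

```python
# Hypothetical domain and instance IPs for illustration.
DOMAIN = "example.com"

def weighted_record(name, ip, set_id, weight, ttl=60):
    """One weighted A record; Route 53 answers queries in proportion to weight."""
    return {
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": set_id,   # distinguishes records with the same name
            "Weight": weight,
            "TTL": ttl,
            "ResourceRecords": [{"Value": ip}],
        },
    }

# Send roughly 70% of traffic to one instance, 30% to the other.
change_batch = {
    "Comment": "weighted routing demo",
    "Changes": [
        weighted_record(f"www.{DOMAIN}", "203.0.113.10", "primary", 70),
        weighted_record(f"www.{DOMAIN}", "203.0.113.20", "secondary", 30),
    ],
}

# With boto3 this would be submitted roughly as:
#   route53 = boto3.client("route53")
#   route53.change_resource_record_sets(
#       HostedZoneId="Z...", ChangeBatch=change_batch)
weights = [c["ResourceRecordSet"]["Weight"] for c in change_batch["Changes"]]
print(weights)  # [70, 30]
```

The same payload shape, with different record fields, underlies the latency, failover, geolocation, and multi-value policies as well.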
AWS CloudFront: serve content from multiple S3 buckets:
1. Create S3 buckets
2. Create a CloudFront distribution
3. Update origins & behaviors
4. Set up an error page
5. Set up URL invalidations
6. Set up restrictions & terminate the distribution
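The origins-and-behaviors step above boils down to mapping path patterns to origins. The sketch below models that mapping as plain data (bucket names and path patterns are hypothetical) and shows the routing decision CloudFront makes per request:

```python
# Sketch of the origins/behaviors part of a CloudFront distribution config
# serving two hypothetical S3 buckets under different path patterns.
origins = [
    {"Id": "images-origin", "DomainName": "images-bucket.s3.amazonaws.com"},
    {"Id": "text-origin",   "DomainName": "text-bucket.s3.amazonaws.com"},
]

cache_behaviors = [
    {"PathPattern": "/images/*", "TargetOriginId": "images-origin",
     "ViewerProtocolPolicy": "redirect-to-https"},
    {"PathPattern": "/text/*",   "TargetOriginId": "text-origin",
     "ViewerProtocolPolicy": "redirect-to-https"},
]

def route(path):
    """Pick the origin whose path pattern matches the request path."""
    for behavior in cache_behaviors:
        prefix = behavior["PathPattern"].rstrip("*")
        if path.startswith(prefix):
            return behavior["TargetOriginId"]
    return "default-origin"   # the distribution's default behavior

print(route("/images/logo.png"))  # images-origin
print(route("/text/readme.txt"))  # text-origin
```

In a real distribution these dictionaries would appear inside the `DistributionConfig` passed to CloudFront's create-distribution call.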
Amazon DynamoDB: Building NoSQL Database-Driven Applications. Key topics: recovery, SDKs, partition keys, security and encryption, global tables, stateless applications, and streams.
- DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale.
- It’s a fully managed, multiregion, multimaster database with built-in security, backup and restore, and in-memory caching for internet-scale applications.
- DynamoDB can handle more than 10 trillion requests per day and support peaks of more than 20 million requests per second.
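A partition key (and optional sort key) shapes every DynamoDB item. The sketch below shows a hypothetical item in the low-level attribute-value format that the PutItem API uses; the table name, key names, and values are illustrative, and the boto3 calls appear only as comments.

```python
# A minimal sketch of a DynamoDB item in the low-level attribute-value
# format: the partition key ("pk") groups related items, and the sort
# key ("sk") orders items within a partition.
item = {
    "pk": {"S": "USER#alice"},          # partition key (string)
    "sk": {"S": "ORDER#2023-01-15"},    # sort key (string)
    "total": {"N": "42.50"},            # numbers are transmitted as strings
    "items": {"L": [{"S": "book"}, {"S": "pen"}]},  # list attribute
}

# With boto3 the write and read would look roughly like:
#   ddb = boto3.client("dynamodb")
#   ddb.put_item(TableName="orders", Item=item)
#   resp = ddb.get_item(TableName="orders",
#                       Key={"pk": item["pk"], "sk": item["sk"]})
```

Choosing a high-cardinality partition key (here, one per user) is what lets DynamoDB spread load evenly and sustain the request rates quoted above.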
Launch an auto-scaling AWS EC2 virtual machine.
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
What is Amazon EC2 Auto Scaling?
- Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application.
- You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes below this size.
- You can specify the maximum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes above this size.
- If you specify the desired capacity, either when you create the group or at any time thereafter, Amazon EC2 Auto Scaling ensures that your group has this many instances.
- If you specify scaling policies, then Amazon EC2 Auto Scaling can launch or terminate instances as demand on your application increases or decreases.
- The scaling policies that you define adjust the number of instances, within your minimum and maximum number of instances, based on the criteria that you specify.
- Security groups act as a firewall for associated instances, controlling both inbound and outbound traffic at the instance level. You must add rules to a security group that enable you to connect to your instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere.
Example workflow:
- Create a security group
- Create a launch template
- Create an Auto Scaling group
- Verify your Auto Scaling group
- Delete your scaling infrastructure
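The launch-template and Auto Scaling group steps above can be sketched as the parameter payloads they would take. All names and IDs below are hypothetical placeholders, and the boto3 calls that would consume them appear only as comments:

```python
# Parameters as they might be passed to boto3 (all names/IDs are placeholders).
launch_template = {
    "LaunchTemplateName": "demo-template",
    "LaunchTemplateData": {
        "ImageId": "ami-0123456789abcdef0",          # placeholder AMI
        "InstanceType": "t3.micro",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # from the step above
    },
}

auto_scaling_group = {
    "AutoScalingGroupName": "demo-asg",
    "LaunchTemplate": {"LaunchTemplateName": "demo-template",
                       "Version": "$Latest"},
    "MinSize": 1,          # the group never goes below this
    "MaxSize": 4,          # the group never goes above this
    "DesiredCapacity": 2,  # target instance count right now
    "AvailabilityZones": ["us-east-1a", "us-east-1b"],
}

# Sanity check mirroring the min <= desired <= max invariant described above.
assert (auto_scaling_group["MinSize"]
        <= auto_scaling_group["DesiredCapacity"]
        <= auto_scaling_group["MaxSize"])

# With boto3:
#   ec2 = boto3.client("ec2"); ec2.create_launch_template(**launch_template)
#   asg = boto3.client("autoscaling")
#   asg.create_auto_scaling_group(**auto_scaling_group)
```

Deleting the scaling infrastructure is the reverse: delete the Auto Scaling group first, then the launch template and security group.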
Amazon Web Services offers a range of services in AI by leveraging Amazon’s internal experience with AI and machine learning. These services are separated here according to four layers:
- AI services
- AI platforms
- AI frameworks
- AI Infrastructure.
- Content personalization: use a predictive analytics model to recommend items.
- Categorization: unstructured content -> ML model -> Categorized documents.
- Customer service: analyze social media traffic to route customers to customer care specialists.
- Targeted marketing: use prior customer activity to choose the most relevant email campaigns for target customers.
- Unsupervised ML – ECG anomaly detection.
Conversational UI, speech recognition, and image analysis.
Neural Networks: ANN, CNN, and RNN.
Amazon Rekognition automates image recognition and video analysis for your applications without machine learning (ML) experience.
Other Useful Applications: celebrity recognition, video pathing suitable for post-game analysis, unsafe content detection, and text extraction.
- In reinforcement learning (RL), an agent, such as a physical or virtual AWS DeepRacer vehicle, with an objective to achieve an intended goal interacts with an environment to maximize the agent’s total reward.
- The agent takes an action, guided by a strategy referred to as a policy, at a given environment state and reaches a new state.
- There is an immediate reward associated with any action. The reward is a measure of the desirability of the action. This immediate reward is considered to be returned by the environment.
- The goal of RL is to learn the optimal policy in a given environment. Learning is an iterative process of trial and error. The agent takes a random initial action to arrive at a new state. Then the agent iterates the step from the new state to the next one. Over time, the agent discovers the actions that lead to the maximum long-term reward. The interaction of the agent from an initial state to a terminal state is called an episode.
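The trial-and-error loop above can be sketched with a toy tabular Q-learning agent (a deliberately simplified stand-in for the policy-optimization algorithms DeepRacer actually trains with). The five-state line environment, reward, and hyperparameters are all illustrative:

```python
import random

# Toy RL sketch: states 0..4 on a line, actions move left (-1) or right (+1),
# reward +1 only for reaching the terminal state 4.
random.seed(0)
N_STATES, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(200):                    # one episode = start state -> terminal
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy policy: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)         # environment transition
        r = 1.0 if s2 == N_STATES - 1 else 0.0        # immediate reward
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right (+1) in every non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

The episode structure, immediate reward, and policy improvement over time are exactly the concepts described in the bullets above, just in their simplest possible form.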
The following sketch illustrates this learning process:
AWS DeepRacer is an autonomous 1/18th-scale race car designed to test RL models by racing on a physical track. Using cameras to view the track and a reinforcement learning model to control throttle and steering, the car shows how a model trained in a simulated environment can be transferred to the real world.
Semantic Segmentation with Amazon SageMaker.
- Prepare data for SageMaker’s Semantic Segmentation algorithm
- Train a model using SageMaker
- Deploy a model using SageMaker
Use-case SageMaker workflow:
- Create a notebook instance
- Download the input dataset
- Visualize the data – a quick look at the images
- Set up the training image for the algorithm and SageMaker
- Prepare the train/test data for SageMaker
- Create an S3 bucket and upload the data to S3
- Create a SageMaker Estimator
- Set up the hyperparameters for semantic segmentation
- Create the S3 input data channels
- Train the model
- Deploy the model
- Use the deployed model for inference
- Delete the deployed endpoint.
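The hyperparameter step in the workflow above might look like the sketch below. The values are illustrative, not tuned recommendations, and the SageMaker SDK calls that would consume them are shown only as comments:

```python
# A hedged sketch of hyperparameters for SageMaker's built-in Semantic
# Segmentation algorithm (values are illustrative placeholders).
hyperparameters = {
    "backbone": "resnet-50",         # feature-extractor network
    "algorithm": "fcn",              # segmentation head (e.g. fcn, psp, deeplab)
    "use_pretrained_model": "True",  # start from pretrained backbone weights
    "num_classes": 21,               # e.g. Pascal VOC's 21 classes
    "epochs": 10,
    "learning_rate": 0.0001,
    "mini_batch_size": 16,
    "num_training_samples": 1464,    # size of the training channel
}

# With the SageMaker Python SDK, these would be attached roughly as:
#   estimator = sagemaker.estimator.Estimator(training_image, role,
#                                             instance_count=1, ...)
#   estimator.set_hyperparameters(**hyperparameters)
#   estimator.fit({"train": s3_train, "validation": s3_val,
#                  "train_annotation": s3_train_ann,
#                  "validation_annotation": s3_val_ann})
```

The four S3 channels in the comment correspond to the "create the S3 input data channels" step: images and per-pixel annotation masks for both train and validation splits.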
Example: Build an object detection model using images labeled with Ground Truth.
Computer vision problems – detection of multiple objects.
Scope: detection of objects, scenes, and activities; text-to-speech; NLP – extracting insights and relationships from text using AI; Amazon Lex – conversational interfaces for your applications.
AWS Marketplace – GluonCV
Image feature extraction and ImageNet category prediction using the extremely fast MobileNet models provided by GluonCV. GluonCV provides implementations of state-of-the-art (SOTA) deep learning algorithms in computer vision.
Example: the COCO dataset, focusing on the sports, foods, and household categories.
Building Python Apps
AWS services used: Amazon S3, Amazon API Gateway, Amazon Cognito, AWS Lambda, AWS Step Functions, AWS X-Ray, and Amazon Comprehend.
- Create a static Amazon S3 website with a bucket policy that restricts access to the website via IP Address. The website will be created using the AWS SDK and AWS CLI.
- Set up a mock backend API using Amazon API Gateway REST APIs. You will set up 3 API endpoints using the AWS SDK and AWS CLI; these endpoints will respond to requests with mocked data. You will test this mock API by having the website set up in step 1 make AJAX calls to it.
- Secure the API that was built in step 2 by adding authentication via Amazon Cognito User Pools.
- Create AWS Lambda functions to host the backend for your API. You will then configure the secured API built in step 3 to trigger the Lambda functions, instead of using mock integrations.
- Create an asynchronous state machine using AWS Step Functions for a reporting feature of the API. You will then configure the API to run this state machine when a request hits an API endpoint you built in the previous steps.
- Use AWS X-Ray to trace requests through your distributed application. You will also make improvements to your application using various AWS service features like Amazon API Gateway Response Caching, as well as code modifications. Then you will test and view the performance improvements in the AWS X-Ray Console.
- AWS Lambda administers the underlying compute resources for you, performing operational and administrative activities on your behalf. It can run different languages in the same execution environment.
- AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume – there is no charge when your code is not running.
- With AWS Lambda, you can run code for virtually any type of application or backend service – all with little to no administration with regard to environment provisioning and scaling.
Read the Developer Guide for AWS Lambda here: https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html
The Lambda Execution Environment
- When AWS Lambda executes your Lambda function, it provisions and manages the resources needed to run your Lambda function. When you create a Lambda function, you specify configuration information, such as the amount of memory and maximum execution time that you want to allow for your Lambda function.
- It takes time to set up an execution context and do the necessary “bootstrapping”, which adds some latency each time the Lambda function is invoked. You typically see this latency when a Lambda function is invoked for the first time or after it has been updated because AWS Lambda tries to reuse the execution context for subsequent invocations of the Lambda function.
- After a Lambda function is executed, AWS Lambda maintains the execution context for some time in anticipation of another Lambda function invocation. In effect, the service freezes the execution context after a Lambda function completes, and thaws the context for reuse, if AWS Lambda chooses to reuse the context when the Lambda function is invoked again. This execution context reuse approach has the following implications:
- Objects declared outside of the function’s handler method remain initialized, providing additional optimization when the function is invoked again. For example, if your Lambda function establishes a database connection, instead of reestablishing the connection, the original connection is used in subsequent invocations. We suggest adding logic in your code to check if a connection exists before creating one.
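The connection-reuse pattern described above can be sketched as a minimal handler. The "connection" here is a stand-in dictionary rather than a real database client, so the warm-container behavior can be demonstrated locally:

```python
import time

# Objects at module scope survive between warm invocations of the same
# execution environment; the handler checks before creating a new one.
_connection = None

def get_connection():
    """Create the (simulated) connection lazily, reusing it if it exists."""
    global _connection
    if _connection is None:
        _connection = {"created_at": time.time()}  # stand-in for a DB client
    return _connection

def handler(event, context):
    conn = get_connection()
    return {"reused": conn is _connection, "created_at": conn["created_at"]}

# Two "invocations" in the same environment share one connection object.
first = handler({}, None)
second = handler({}, None)
print(first["created_at"] == second["created_at"])  # True on a warm container
```

On a cold start (a fresh execution environment) the module-level code runs again and a new connection is created; that is precisely the bootstrapping latency described above.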
AWS X-Ray Terminology
AWS X-Ray is a service that collects data about requests that your application serves, and provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization. For any traced request to your application, you can see detailed information not only about the request and response, but also about calls that your application makes to downstream AWS resources, microservices, databases and HTTP web APIs.
For general information on AWS X-Ray click here: https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html
AWS Step Functions is a serverless function orchestrator that makes it easy to sequence AWS Lambda functions and multiple AWS services into business-critical applications. Through its visual interface, you can create and run a series of checkpointed and event-driven workflows that maintain the application state.
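A Step Functions workflow is defined in the Amazon States Language (ASL). Below is a minimal sketch of the kind of two-step reporting state machine described later in the serverless app walkthrough; the state names, account ID, and function ARNs are hypothetical:

```python
import json

# A minimal Amazon States Language (ASL) definition: two Lambda-backed
# Task states run in sequence, carrying the application state between them.
definition = {
    "Comment": "Generate a report asynchronously",
    "StartAt": "ExtractData",
    "States": {
        "ExtractData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract",
            "Next": "BuildReport",
        },
        "BuildReport": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:report",
            "End": True,
        },
    },
}

# With boto3 the state machine would be created roughly as:
#   sfn = boto3.client("stepfunctions")
#   sfn.create_state_machine(name="reporting", roleArn=role_arn,
#                            definition=json.dumps(definition))
print(definition["StartAt"])  # ExtractData
```

The visual interface mentioned above is essentially a rendering of this same JSON definition as a flowchart.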
- Use the SDK (software development kit) for the language of your choice to reach AWS via its API.
- Data are accessed by consumers via APIs, so expose your data securely via Amazon API Gateway.
- Default I/O formats: JSON, YAML, text, and ASCII.
- There are 4 ways to interact with AWS (the Management Console, the CLI, SDKs, and direct API calls); the CLI is suited to repeatable tasks.
- IAM is a way to do access control for an HTTP API in Amazon API Gateway.
- Lambda best practices – minimize the deployment package size when deploying the Lambda function (packages are limited to 50 MB zipped and 250 MB unzipped, with 75 GB of total storage per Region).
- Minimize the complexity of your dependencies, and avoid recursion.
- Initialize SDK clients and network and DB connections outside of the function handler.
- Cache static assets in locally available storage, such as the /tmp directory.
- Amazon API Gateway HTTP APIs are cheaper and faster than REST APIs. REST is a set of architectural constraints, not a protocol or a standard.
AWS Lambda Performance and Pricing
Following best practices with your AWS Lambda functions can help provide more streamlined and cost-efficient utilization of this component within your workflows. Keep the Lambda pricing calculator bookmarked to help estimate how changes in your function build and utilization might affect the cost of your service usage.
Find the pricing calculator here: https://s3.amazonaws.com/lambda-tools/pricing-calculator.html
AWS Lambda Power Tuning
The efficiency and cost of your Lambda function often depend on the amount of CPU and memory you have given your function. The more power you give a function, the more it costs to run per unit of time. That said, it can often be cheaper overall to run a function with more power, because your code runs faster with more CPU and memory available. It can therefore be a good exercise to do Lambda power tuning to find the best settings for your function.
AWS Lambda Power Tuning is an AWS Step Functions state machine that helps you optimize your Lambda functions in a data-driven way.
You can provide any Lambda function as input, and the state machine will run it with multiple power configurations (from 128 MB to 3 GB), analyze the execution logs, and suggest the best configuration to minimize cost or maximize performance.
The state machine will generate a dynamic visualization of average cost and speed for each power configuration.
Find the AWS Lambda Power Tuning project here: https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:451282441545:applications~aws-lambda-power-tuning
AWS Lambda Best Practices
To read a list of AWS Lambda Best Practices in detail click here: https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
AWS Global Infrastructure: Edge Locations
AWS Edge Locations are sites that Amazon CloudFront uses to cache copies of your content for faster delivery to users around the world. By deploying content to the edge locations, the latency in your application can be reduced for end users since they are accessing the resources at a location that is physically closer to them than the region you hosted your resources in originally.
Read more about Amazon CloudFront and Edge Locations here: https://aws.amazon.com/cloudfront/
Amazon API Gateway Response Caching
You can enable API caching in Amazon API Gateway to cache your endpoint’s responses. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API.
When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint. The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled.
To read more about Response Caching click here: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
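The TTL semantics described above can be illustrated with a toy in-memory cache: a response is served from cache until its TTL expires, after which the backend "endpoint" is called again. This is a local sketch of the concept, not API Gateway's actual implementation:

```python
import time

# A toy TTL cache: key -> (value, expiry_time), using a monotonic clock.
class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}

    def get(self, key, fetch):
        """Return the cached value if fresh; otherwise call fetch() and cache it."""
        value, expiry = self.store.get(key, (None, 0.0))
        if time.monotonic() < expiry:
            return value, True                # cache hit
        value = fetch()
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value, False                   # cache miss

calls = {"n": 0}
def endpoint():
    """Stand-in for the real backend; counts how often it is invoked."""
    calls["n"] += 1
    return {"status": 200, "body": "hello"}

cache = TTLCache(ttl_seconds=300)             # API Gateway's default TTL
r1, hit1 = cache.get("GET /hello", endpoint)  # miss: backend is called
r2, hit2 = cache.get("GET /hello", endpoint)  # hit: served from cache
print(hit1, hit2, calls["n"])  # False True 1
```

Setting `ttl_seconds=0` would make every lookup a miss, mirroring how TTL=0 disables caching in API Gateway.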
AWS Lambda @ Edge
Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency. With Lambda@Edge, you don’t have to provision or manage infrastructure in multiple locations around the world. You pay only for the compute time you consume – there is no charge when your code is not running.
Read about use cases of Lambda@Edge here: https://aws.amazon.com/lambda/edge/#Website_Security_and_Privacy
Read more general information about Lambda@Edge here: https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html
An API gateway is an API management tool that sits between a client and a collection of backend services. An API gateway acts as a reverse proxy to accept all application programming interface (API) calls, aggregate the various services required to fulfill them, and return the appropriate result.
This is normally the architecture used to upload data to a database.