
Tuesday, March 19, 2024

The VIP Lane for TCP/UDP Applications: Unveiling the Power of AWS Global Accelerator

 Imagine your application is a rockstar, performing sold-out shows worldwide! But instead of cheering fans, you have frustrated users dealing with lag and slow loading times. This is where AWS Global Accelerator comes in, your trusty roadie ensuring your app’s performance is smooth sailing across the globe.

Today, we’ll dive deep into AWS Global Accelerator’s key features, understand how it compares to CloudFront, and wrap it up with some pro tips to boost your AWS Certification journey. Buckle up and get ready to unleash the power of global application performance!

AWS Global Accelerator is a networking service that goes beyond traditional limitations by enhancing the performance of a wide range of applications that use the TCP or UDP protocol. This is achieved through a technique called “edge proxying.”

Imagine strategically positioned outposts along a global network. These outposts, known as edge locations, intercept incoming user requests. Global Accelerator then intelligently analyzes factors like user location, network conditions, and application health to determine the optimal endpoint within your AWS infrastructure (potentially across multiple regions).

You can easily get started with AWS Global Accelerator using these steps:

  • Create an accelerator using the AWS console. Two static IP addresses will be provisioned for you.
  • Configure endpoint groups. You choose one or more regional endpoint groups to associate with your accelerator’s listener by specifying the AWS Regions to which you want to distribute traffic. Your listener routes requests to the registered endpoints in this endpoint group. You can configure a traffic dial percentage for each endpoint group, which controls the amount of traffic that an endpoint group accepts.
  • Register endpoints for the endpoint groups: assign regional resources (Application Load Balancer, Network Load Balancer, EC2 instance, or Elastic IP address) to each endpoint group. You can also set weights to control how much traffic reaches each endpoint.
AWS Global Accelerator console
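
If you prefer scripting this same flow, here is a minimal sketch using boto3. The names, ARNs, and target Region below are hypothetical, and note that the Global Accelerator API is served from the us-west-2 Region.

```python
import boto3

# The Global Accelerator API endpoint lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Step 1: create the accelerator (two static anycast IPs are provisioned).
accelerator = ga.create_accelerator(
    Name="my-app-accelerator",  # hypothetical name
    IpAddressType="IPV4",
    Enabled=True,
)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

# Step 2: add a listener for the ports and protocol your application uses.
listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Step 3: create an endpoint group per Region and register endpoints with weights.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    TrafficDialPercentage=100.0,
    EndpointConfigurations=[
        # Hypothetical ALB ARN; could also be an NLB, EC2 instance, or Elastic IP.
        {"EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/my-alb/abc123", "Weight": 128},
    ],
)
```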

Key Features of AWS Global Accelerator:

  • Static IP Addresses: No more fumbling with complex regional addresses. Global Accelerator gives your app a permanent, recognizable stage presence.
  • Global Network Routing: Think of it as a teleport for your data! Global Accelerator whisks user requests to the closest AWS location, ensuring the fastest possible connection.
  • Instant Failover: Is one of your application’s servers having an off night? No worries! Global Accelerator seamlessly redirects traffic to healthy backups, keeping the show running smoothly.
  • Traffic Dial: Need to control the flow of users for A/B testing or a new feature rollout? Global Accelerator’s handy traffic dial lets you adjust the audience size for a specific region, like dimming the lights before a special announcement!
  • Weighted Traffic Distribution: Have multiple versions of your application across different regions? Global Accelerator acts like a spotlight operator, directing the right amount of users to each version based on your preferences.

Perfect, but wait: isn’t this very close to AWS CloudFront? I had the same feeling, and it’s a great question to ask.

Indeed, AWS Global Accelerator and AWS CloudFront are quite similar, and both use AWS edge locations. But here are a few differences that will help you decide which one to use:

  • AWS CloudFront is a content delivery network (CDN) that improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). CloudFront works over the TCP protocol. It provides Lambda@Edge and CloudFront Functions to intercept requests and run short pieces of code at the edge.
  • AWS Global Accelerator is a networking service that is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP. It’s also a good fit for HTTP use cases that require static IP addresses or deterministic, fast regional failover. Global Accelerator doesn’t provide any caching feature.

AWS Certification Champs, Take Note!

  • Global Accelerator is used with TCP and UDP traffic. It provides neither caching nor Lambda functions at the edge.
  • AWS Global Accelerator removes the hassle of managing a growing number of IP addresses and the related security rules.
  • With Global Accelerator, you can add or remove endpoints in AWS Regions, run blue/green deployment, and do A/B testing without having to update the IP addresses in your client applications.
  • AWS Global Accelerator significantly improves performance for latency-sensitive applications.
  • It’s possible to deterministically route multiple users to a specific endpoint IP address and port behind the accelerator. User traffic can be routed to a specific Amazon EC2 instance IP in one or more AWS Regions. An example is a multiplayer game where several players are assigned to the same session; another is a VoIP or social media app that assigns multiple users to a specific media server to start a voice or video call session.
  • Global Accelerator creates a peering connection with your Amazon Virtual Private Cloud using private IP addresses, keeping connections to your internal Application Load Balancers or private EC2 instances off the public internet.
  • AWS Global Accelerator improves performance for VoIP, online gaming, and IoT sensor apps.
  • You can’t directly configure on-premises resources as endpoints for your static IP addresses, but you can configure a Network Load Balancer (NLB) in each AWS Region to address your on-premises endpoints. Then you can register the NLBs as endpoints in your AWS Global Accelerator configuration.

P.S. That’s all for today. Don’t forget to share your thoughts and experience with AWS Global Accelerator in the comments below!


Monday, March 4, 2024

Dive into Delightful Message Queues with AWS SQS


 The heart of the AWS SQS show? It’s the messages, bustling with data like little digital bees! Each message finds a cozy home in a queue, waiting patiently to be processed by a single, dedicated subscriber.

Once a consumer picks up a message, Amazon SQS throws an invisibility cloak over it for a little while to prevent other consumers from grabbing it too soon. This cloak, called a visibility timeout, ensures each message gets its moment to shine and be processed properly. By default, this cloak lasts for 30 seconds, but you can adjust it to fit your needs: from a lightning-fast 0 seconds to a leisurely 12 hours.
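
As an illustration, here is a minimal boto3 sketch that sets the visibility timeout when creating a queue; the queue name is hypothetical.

```python
import boto3

sqs = boto3.client("sqs")

# Create a queue with a 60-second visibility timeout instead of the 30-second default.
queue_url = sqs.create_queue(
    QueueName="orders-queue",  # hypothetical queue name
    Attributes={"VisibilityTimeout": "60"},
)["QueueUrl"]

# The timeout can also be extended for a single in-flight message while it is being processed,
# using a ReceiptHandle obtained from a previous receive_message call:
# sqs.change_message_visibility(QueueUrl=queue_url, ReceiptHandle=receipt_handle, VisibilityTimeout=120)
```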

AWS SQS Queue Certification Tutorial

Once the subscriber finishes the job, the message bids farewell and departs the queue, leaving space for the next one in line.

But wait, there’s more to this queueing magic! SQS offers a nifty feature called long polling. Imagine a queue full of eager messages: instead of your consumer constantly checking whether new ones have arrived (which can be tiring!), with long polling Amazon SQS sends a response after it collects at least one available message, up to the maximum number of messages specified in the ReceiveMessage request. Amazon SQS sends an empty response only if the polling wait time expires.
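
A minimal long-polling sketch with boto3, assuming a hypothetical queue named orders-queue:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]  # hypothetical queue

# Long polling: wait up to 20 seconds for messages instead of returning immediately.
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,   # up to 10 messages per call
    WaitTimeSeconds=20,       # long-poll wait time (0 would mean short polling)
)

for message in response.get("Messages", []):
    print(message["Body"])
    # Delete the message once it has been processed successfully.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```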

Speaking of smooth, let’s talk about retries. Sometimes, even the most dedicated subscribers might encounter hiccups. That’s where maxReceiveCount comes in. You can use this setting to tell SQS how many times a message may be received before it is moved on; for a Lambda consumer, this limits the number of times Lambda will retry a failed execution. As SQS natively supports dead-letter queues (DLQ), you can create one and configure the source queue to transfer failed messages to that queue.
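
Here is a rough sketch of wiring a dead-letter queue with a redrive policy in boto3; the queue names are hypothetical, and maxReceiveCount is set to 5 as an example.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Create a dead-letter queue and read its ARN.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Attach a redrive policy to the source queue: after 5 failed receives,
# the message is moved to the dead-letter queue.
queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```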

But messages aren’t only about their content; they can carry extra information too! Think of it like a little luggage tag with details like timestamps, location data, or even a digital signature. These message attributes travel alongside the message itself (separately from the body), providing valuable context for the recipient.
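
For illustration, attaching attributes to a message with boto3 might look like this; the queue name, attribute names, and values are made up.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]  # hypothetical queue

# Attributes travel alongside the body and give the consumer extra context.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": 42}',
    MessageAttributes={
        "CreatedAt": {"DataType": "String", "StringValue": "2024-03-04T10:00:00Z"},
        "SourceRegion": {"DataType": "String", "StringValue": "eu-west-1"},
    },
)
```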

Delay queues add another layer of fun to the mix. When a message enters the queue, it takes a pre-defined break before it can be consumed. This is perfect for situations where tasks need to happen at a specific time. It’s also possible to set a delay on individual messages rather than on an entire queue: use message timers, and Amazon SQS applies the message timer’s DelaySeconds value instead of the delay queue’s DelaySeconds value.
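
A small sketch of both options in boto3: a delay queue with a 60-second default, and a message timer that overrides it for a single message. The queue name and delay values are hypothetical.

```python
import boto3

sqs = boto3.client("sqs")

# A delay queue: every message waits 60 seconds before becoming visible to consumers.
delay_queue_url = sqs.create_queue(
    QueueName="reminders-queue",          # hypothetical queue name
    Attributes={"DelaySeconds": "60"},
)["QueueUrl"]

# A message timer: this message's own DelaySeconds (120) overrides the queue default.
sqs.send_message(
    QueueUrl=delay_queue_url,
    MessageBody="send reminder e-mail",
    DelaySeconds=120,
)
```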

With these delightful features at your fingertips, AWS SQS can help you build resilient, efficient, and scalable applications. So, what are you waiting for? Start queuing up for some cloud-based magic!

Join the conversation and share your SQS experiences in the comments below!

Good To know:

  • The maximum message size is 256 KB.
  • The default message retention period in an SQS queue is 4 days.
  • For a Lambda function consuming from SQS, the recommended visibility timeout is 6x the function timeout. This gives time for retries with some extra buffer.
  • To support cross-Region or active-active setups, you can use SNS to deliver messages to SQS queues in different Regions. This is useful to route customers by location to the nearest Region.

Monday, February 26, 2024

Understand the VPC endpoint like a Pro and get ready for the AWS certification

Imagine sending your data on a private jet ✈️ instead of a bus 🚌 through rush hour traffic 🚦. That’s the magic of VPC endpoints: a secure, high-speed lane connecting your AWS resources directly to specific services, all within the safe confines of the AWS network.




This blog post is your first-class ticket to understanding these powerful tools and getting ready for your AWS certification. Buckle up as we explore the two types of VPC endpoints:

  • Interface endpoints: Think of them as private tunnels, shipping data directly to the service without ever touching the public internet.
  • Gateway endpoints: these act as secure gateways, allowing communication with Amazon S3 and DynamoDB.


As explained before, a VPC endpoint lets you privately connect your VPC to supported AWS services and VPC endpoint services.

VPC endpoints are virtual devices. They are horizontally scaled, redundant, and highly available Amazon VPC components that allow communication between instances in an Amazon VPC and services without imposing availability risks or bandwidth constraints on network traffic.

Gateway endpoints

A gateway endpoint is a virtual connection within your VPC that allows resources in your VPC to directly access Amazon S3 and DynamoDB. The gateway targets specific IP routes in an Amazon VPC route table, in the form of a prefix list, used for traffic destined to Amazon DynamoDB or Amazon Simple Storage Service (Amazon S3).

When you create a gateway endpoint, you select the VPC route tables for the subnets you enable. A route with a prefix-list is automatically added to each selected route table. Those routes can’t be modified or deleted unless you disassociate the route table from the gateway or you delete the gateway itself.
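
For example, creating an S3 gateway endpoint and attaching it to a route table could look like this in boto3; the VPC ID, route table ID, and Region are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create a gateway endpoint for S3 and attach it to a route table.
# vpc-0abc... and rtb-0abc... are hypothetical IDs.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.eu-west-1.s3",
    RouteTableIds=["rtb-0abc1234def567890"],
)
```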

There are no additional charges for using a gateway endpoint, which makes it preferable over an interface endpoint when you only need access to S3 or DynamoDB. Nevertheless, a gateway endpoint doesn’t allow access from on-premises networks, from peered VPCs in other Regions, or through a transit gateway. For those use cases, you must use an interface endpoint.

Here are a few short tips when working with gateway endpoints:

  • A gateway endpoint is available only in the Region where you created it. Be sure it’s the same Region as your S3 bucket.
  • You can have routes to S3 and/or DynamoDB in the same route table. But you can’t have multiple routes to the same service in a single route table.
  • The instances access the S3 or DynamoDB service using its public endpoints. Their security groups must allow outbound traffic to the service, using the endpoint’s prefix list ID as the destination, over TCP on port 443 (see the sketch after this list).
  • The network ACLs for the subnets of these instances must also allow traffic to and from the services.
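
As a sketch of the security group rule mentioned above: an outbound HTTPS rule whose destination is the gateway endpoint’s prefix list. The security group and prefix list IDs are hypothetical; the actual prefix list ID is displayed on the endpoint in the console.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the instances' security group to reach S3/DynamoDB through the gateway endpoint.
ec2.authorize_security_group_egress(
    GroupId="sg-0abc1234def567890",  # hypothetical security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "PrefixListIds": [{"PrefixListId": "pl-6da54004"}],  # hypothetical prefix list ID
    }],
)
```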

You also need to consider how AWS routes traffic. AWS uses the most specific route that matches the traffic to determine how to route it (longest prefix match). In route tables with an endpoint route, this means:

  • If there is a route that sends all internet traffic (0.0.0.0/0) to an internet gateway, the endpoint route still takes precedence for traffic sent to S3 or DynamoDB in the current Region.
  • Traffic destined for S3 or DynamoDB in a different Region goes to the internet gateway, as prefix lists are specific to a Region.
  • If there is a route that specifies the exact IP address range of S3 or DynamoDB in the same Region, that route takes precedence over the endpoint route.

Interface endpoints

Interface endpoints enable connectivity to services over AWS PrivateLink.

But wait, what is PrivateLink? It’s an amazing service that allows you to securely connect resources in an Amazon Virtual Private Cloud (VPC) to specific AWS services, other VPCs, and even on-premises applications, without ever exposing your data to the public internet.

Here are some benefits of PrivateLink:

  • AWS PrivateLink gives on-premises networks private access to AWS services through Direct Connect.
  • You can make services available to other accounts and VPCs that are accessed securely as private endpoints.
  • If you use AWS PrivateLink with a Network Load Balancer to route traffic to your service or application, clients can connect to any service you host.
  • Services configured to support AWS PrivateLink can be offered as a subscription service through the AWS Marketplace.

Interface endpoints are powered by AWS PrivateLink. When you configure an interface VPC endpoint, an elastic network interface (ENI) with a private IP address is deployed in your subnet. This ENI acts as the entry point for traffic destined to the supported service. Remember, attaching a security group to the endpoint is crucial for access control.
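
A minimal sketch of creating an interface endpoint with boto3, here for SQS as an example; the VPC, subnet, and security group IDs are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create an interface endpoint for SQS in two subnets, protected by a security group.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.eu-west-1.sqs",
    SubnetIds=["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],
    SecurityGroupIds=["sg-0abc1234def567890"],
    PrivateDnsEnabled=True,  # resolve the service's default DNS name to the endpoint's private IPs
)
```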

Provisioning an interface endpoint incurs hourly charges based on its uptime and data processing.

To optimize costs and simplify management, consider hosting interface endpoints in a centralized VPC. All the “spoke” VPCs can then leverage these centralized endpoints via Transit Gateway or VPC peering, eliminating the need for individual endpoints in each VPC. This approach, known as the centralized VPC endpoint architecture pattern, minimizes costs and streamlines access control with centralized security groups.

P.S. Don’t forget to share your thoughts and experience with VPC endpoints in the comments below!

Sunday, February 25, 2024

Unlocking the Power of AWS Virtual Private Cloud VPC: Essential Notes for your AWS Certification

Ever felt overwhelmed by the vast amount of information you need to study for the AWS certification exam, specifically when it comes to VPC? I’ve been there, and like many others, struggled to grasp the intricacies of this vital service. Fortunately, while preparing for my exam, I found some key notes that significantly improved my understanding.

In this post, I’m excited to share these notes with you, offering a clear and concise overview of VPC. We’ll dive into its core components, explore connectivity options, and wrap up with essential “good to know” tips. Whether you’re a complete beginner or someone seeking a refresher, this post aims to demystify VPC and prepare you for your AWS certification journey.




Well, let’s begin with the definition of a VPC. A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch AWS resources, such as Amazon EC2 instances and databases, into your VPC.

VPC is a regional service. To get you started, AWS provides a default VPC that comes with a public subnet in each Availability Zone, a default network ACL, a default security group, a main route table, an internet gateway already configured, and settings that enable DNS resolution.

A non-default VPC that you create is an isolated network that does not allow anything in or out without explicit configuration.


AWS VPC tutorial for certification

As shown in the AWS VPC console, Subnets are how you add structure and functionality to your Amazon VPCs, and they are an Availability Zone resilient feature of AWS.

A subnet is a sub-network of your Amazon VPC CIDR range that is created in one Availability Zone. Subnets are a range of IP addresses in your VPC. You can launch AWS resources, such as EC2 instances, into your subnets. Each subnet resides entirely within one Availability Zone.

When you assign a CIDR block to a subnet, the first four IP addresses and the last IP address are reserved by AWS: the network address, the VPC router, the mapping to the Amazon-provided DNS, one address reserved for future use, and the last address for network broadcast.

You can create the following resources for your VPC: custom network ACLs, route tables, security groups, an internet gateway, and NAT gateways. A minimal provisioning sketch follows the component list below.

Components in a VPC

  • CIDR: CIDR is based on the idea of subnet masks. A mask is placed over an IP address to create a sub-network: it signals to the router which part of the IP address identifies the network and which part is assigned to hosts.
  • Subnet: a subnet runs in one Availability Zone (AZ). In case of AZ failure, the subnet and all services running inside it will fail. For high availability, it’s recommended to place services in subnets from different AZs.
  • EC2: EC2 instances are virtual servers that provide scalable computing capacity while eliminating the need to invest in hardware. An EC2 instance is launched in a private or public subnet within the VPC and receives a primary private IP address from the IPv4 range configured for the subnet.
  • VPC router: a highly available router that moves traffic from A to B. The router runs in all the Availability Zones that your Amazon VPC uses, has a network interface in each subnet (using one of the subnet’s reserved IP addresses), and is fully managed by AWS to route traffic between subnets in your VPC.
  • Route table: contains a set of rules, called routes, that determine where network traffic from a subnet or gateway is directed. Route tables are created at the VPC level but are associated with subnets. If you don’t associate a route table with your subnet, the main route table is used. Only one route table can be associated with a subnet, but the same route table can be shared by many subnets at a time.
  • Internet gateway: sits on the edge of your VPC and allows traffic to and from the internet. A VPC can have only one internet gateway (IGW), and it’s regional: it works across all Availability Zones. Once created, the IGW must be attached to the VPC and referenced in the route table of the subnet that becomes public.
  • Network Access Control List (NACL): a security filter that filters traffic entering and leaving a subnet. A NACL is attached to a subnet, so it only manages traffic crossing the subnet boundary, not traffic inside the subnet. Network ACLs are stateless, which means that if you add a rule to allow or deny inbound traffic, you must also add a matching rule for the outbound traffic.
  • Security groups: another security feature that complements NACLs. They handle traffic leaving the subnet as well as interactions inside the subnet. Security groups are attached to the elastic network interfaces of AWS resources and sit at the boundary of EC2 instances. Security groups are stateful and view traffic as one stream: if traffic is allowed in, the return traffic is automatically allowed back out.
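
To tie these components together, here is the minimal provisioning sketch mentioned above, using boto3 to create a VPC, one public subnet, an internet gateway, and a route table. The CIDR blocks, Region, and Availability Zone are arbitrary examples.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create a VPC and one subnet in a single AZ.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="eu-west-1a"
)["Subnet"]["SubnetId"]

# Attach an internet gateway and route 0.0.0.0/0 to it, which makes the subnet public.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)
```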

Connectivity

The different connectivity options for a VPC are:

  • The internet, via an internet gateway (one per VPC)
  • Your corporate data center, using a Site-to-Site VPN connection (via a virtual private gateway)
  • Other AWS services (via an internet gateway, VPC endpoints, or a virtual private gateway)
  • Other VPCs, in the same or a different Region and the same or a different account, via VPC peering or a transit gateway

Good to know:

  • The CIDR (Classless Inter-Domain Routing) ranges allocated to a VPC should be the private ones (Class A: 10.0.0.0/8, Class B: 172.16.0.0/12, Class C: 192.168.0.0/16).
  • Subnets can’t span across multiple AZs
  • A VPC can have at most 5 CIDR blocks (one primary plus up to four secondary).
  • CIDR should not overlap within your other networks
  • For resiliency, it is recommended to provision subnets in at least 2 AZs.
  • You cannot increase or decrease the size of an existing CIDR block that’s associated with the VPC.
  • You assign a single CIDR IP address range as the primary CIDR block for the VPC and can add up to 4 secondary CIDR blocks. You address the subnets within the VPC from these CIDR ranges.
  • It’s also possible to shrink the network by removing secondary CIDRs.
  • While you can create multiple VPCs with overlapping IP address ranges, doing so will prevent you from connecting these VPCs (for example via VPC peering). For this reason, it’s recommended to use non-overlapping IP address ranges.
  • The CIDR IP address range for a VPC can be between /28 (smallest) and /16 (largest) in size.
  • The minimum size of a subnet is a /28 (16 IP addresses) for IPv4. Subnets cannot be larger than the VPC in which they are created.
  • Amazon reserves the first four (4) IP addresses and the last one (1) IP address of every subnet for IP networking purposes.
