The Response from Generative AI depends on Our Intelligence more than the Intelligence within It

In-Short

CaveatWisdom

Caveat:

It is easy to type a question and get a response from Generative AI; however, it is important to get the right answer for the given context, because the Large Language Models (LLMs) behind Generative AI are designed only to predict the next word, and they can hallucinate if they don't get the context right or if they don't have the required information within them.

Below is a screenshot of the example above and the response from a Gen AI model in Amazon Bedrock.

Wisdom:

  1. Don't assume that whatever Generative AI says is true; it can often be false, and you need to apply your own critical thinking.
  2. It is important to understand the limitations of Generative AI and frame your prompts to get the right answers.
  3. Be specific about what you need and provide detailed and precise prompts.
  4. Give clear instructions instead of jargon and overly complex phrases.
  5. Ask open-ended questions instead of questions whose answers could be yes or no.
  6. Give proper context along with the purpose of your request.
  7. Break down complicated tasks into simple tasks.
  8. Choose the right model for the task.
  9. Consider the cost factor of different models for different tasks. Sometimes traditional AI is much less costly than Generative AI.

In-Detail

In this post I will be using different LLMs available in the Amazon Bedrock service to demonstrate where the models can go wrong and show you how to write prompts in the right manner to get meaningful answers from the appropriate model.

Amazon Bedrock

Amazon Bedrock is a fully managed, serverless, pay-as-you-go service offering multiple Generative AI foundation models through a single API.

In this repo I have discussed how to develop an Angular web app for any scenario that accesses Amazon Bedrock from the backend. In this post I will be using the Bedrock Playground in the AWS Console.
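For readers who prefer the API to the console, below is a minimal boto3 sketch of invoking a Bedrock model; the region, model ID and request-body schema are assumptions that vary by model and by the access enabled in your account.

import boto3
import json

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption

# Request body format for Titan Text models; other models use different schemas
body = json.dumps({
    "inputText": "Classify the sentiment of this text: I love the new design!",
    "textGenerationConfig": {"temperature": 0.0, "topP": 1.0, "maxTokenCount": 100},
})

response = bedrock.invoke_model(modelId="amazon.titan-text-express-v1", body=body)
print(json.loads(response["body"].read())["results"][0]["outputText"])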

Understanding Basics

Tokens – These are basic units of text or code that LLMs use to process and generate language. These can be individual characters, parts of words, words, or parts of sentences. These tokens are then assigned numbers which, in turn, are put into a vector that becomes the actual input to the first neural network of the LLM.

Some rules of thumb with respect to tokens:

  • 1 token ~= 4 chars in English
  • 1 token ~= ¾ words
  • 100 tokens ~= 75 words

Or

  • 1-2 sentences ~= 30 tokens
  • 1 paragraph ~= 100 tokens
  • 1,500 words ~= 2048 tokens

Some common configuration options you find in Amazon Bedrock playground across models are Temperature, Top P and Maximum Length.

Temperature – Increasing the temperature makes the model more creative; decreasing it makes the model more stable, and you get more repetitive completions. Setting the temperature to zero disables random sampling and gives deterministic results.

Top P – The cumulative probability cutoff from which tokens are sampled. If the value is less than 1.0, only the tokens in the corresponding top share of probability are considered, which gives more stable and repetitive completions.

Maximum Length – You can control the number of tokens generated by the model by defining the maximum number of tokens.

I have used the above parameters with different variations in different models and am posting only the important examples here.

The art of writing prompts to get the right answers from Generative AI is called Prompt Engineering.

Some of the Prompting Techniques are as follows:

Zero-Shot Prompting

Large LLMs are trained to follow instructions, so we can start the prompt with instructions on what to do with the given information.

In the example below, we start the prompt with an instruction to classify the given text for sentiment analysis.
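A representative zero-shot prompt of this kind (the wording is illustrative, not the exact text in the screenshot):

Classify the following text as positive, negative or neutral.
Text: The hotel room was clean, but the check-in took forever.
Sentiment: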

Few-Shot Prompting

Here we enable in-context learning: we provide examples in the prompt which serve as conditioning for the subsequent input for which we would like the model to generate a response.
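A representative few-shot prompt for the same sentiment task (illustrative; the labeled examples condition the model for the final, unlabeled line):

Text: The movie was absolutely fantastic! // Positive
Text: The package arrived late and the box was damaged. // Negative
Text: The staff ignored us for the whole evening. //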

In the example below, the Titan Text G1 model is unable to predict the right sentiment with few-shot prompts. It says it is unable to predict a negative opinion!

The same prompt works with another model, AI21 Labs' Jurassic-2 Mid. So you need to test and be wise before choosing the right model for your task.

Few-Shot Prompting Limitation 

When dealing with complex reasoning tasks, Few-Shot Prompting is not a perfect technique.

You can see below, even after giving examples, the model fails to give the right answer.

In the last group of numbers (15, 32, 5, 13, 82, 7, 1), which is the question, the odd numbers are 15, 5, 13, 7 and 1, and their sum (15+5+13+7+1 = 41) is an odd number; but the model (Jurassic-2 Mid) says, "The answer is True," agreeing that it is even.

The above failure leads us to the next technique, Chain-of-Thought.

Chain-Of-Thought Prompting

In the Chain-of-Thought (CoT) technique we show the model, with examples, how to solve a problem in a step-by-step process.

I am repeating the same example discussed in Few-Shot Prompting above, with the variation of explaining to the model how to pick out the odd numbers, sum them, and check whether the sum is even or not, and only after that state the answer True or False.
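A Chain-of-Thought version of the prompt looks roughly like this (wording illustrative):

Q: The odd numbers in this group add up to an even number: 4, 8, 9, 15, 12, 2, 1.
A: The odd numbers are 9, 15 and 1. Their sum is 9 + 15 + 1 = 25. 25 is odd. The answer is False.
Q: The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.
A: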

In the screenshots below, you can see that the models Jurassic-2 Mid, Titan Text G1 – Express and Jurassic-2 Ultra do not do well even after being given the example with Chain-of-Thought. This shows their limitations. At the same time, we can see that Claude v2 does an excellent job of reasoning and arriving at the answer with the step-by-step process, that is, Chain-of-Thought.

Zero-Shot Chain-Of-Thought Prompting

In this technique we directly instruct the model to think step by step and give the answer for complex reasoning tasks.
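In practice this can be as simple as appending an instruction like "Let's think step by step" to the question (illustrative):

Q: The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1. True or False?
A: Let's think step by step.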

One of the nice features of Amazon Bedrock is that we can compare models side by side and decide on the appropriate one for our use case.

In the example below I compared Jurassic-2 Ultra, Titan Text G1 – Express and Claude v2. We can see that Claude v2 does an excellent job; however, its cost is also on the higher side.

So, it's again our intelligence that defines which model to use for the task at hand, considering the cost factor.

Prompt Chaining

Breaking a complex task down into smaller tasks and using the output of one small task as the input to the next task is called Prompt Chaining.

Tree-Of-Thoughts

This technique extends prompt chaining by asking the Gen AI to act as different personas or SMEs and then chaining the response from one persona as input to another persona.

Below are the screenshots of an example in which I have given a document describing a complex cloud migration project of a global renewable energy company.

In the first step I have asked the model to act as a Business Analyst and give the functional and non-functional requirements of the project.

Next, the response from the first step is given as input, and the model is asked to act as a Cloud Architect and give the architecture considerations as per the functional and non-functional requirements.
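The two persona steps above can be chained programmatically. Below is a minimal boto3 sketch using Claude v2; the region, model ID and prompt wording are assumptions, and project_doc stands in for the migration document.

import boto3
import json

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption

def ask_claude(prompt):
    # Claude v2 expects the Human/Assistant prompt format
    body = json.dumps({"prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
                       "max_tokens_to_sample": 1000})
    response = bedrock.invoke_model(modelId="anthropic.claude-v2", body=body)
    return json.loads(response["body"].read())["completion"]

project_doc = "..."  # the cloud migration project document goes here

# Persona 1: Business Analyst extracts the requirements
requirements = ask_claude("Act as a Business Analyst. Give the functional and "
                          "non-functional requirements of this project:\n" + project_doc)

# Persona 2: Cloud Architect consumes the first persona's response
architecture = ask_claude("Act as a Cloud Architect. Give the architecting "
                          "considerations for these requirements:\n" + requirements)
print(architecture)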

Applying Mechanical Sympathy with Built-In Algorithms of Amazon SageMaker

Mechanical Sympathy

The term Mechanical Sympathy was coined by racing driver Jackie Stewart, who said, "You don't have to be an engineer to be a racing driver, but you do have to have Mechanical Sympathy." He meant that understanding how a car works makes you a better driver. In the case of Machine Learning, by understanding when to use a specific algorithm, we can get maximum efficiency from the resources we provision in the cloud.

In-Short

CaveatWisdom

Caveat

It is important to note that there is no one-size-fits-all solution when it comes to selecting a machine learning algorithm. The best algorithm for your problem will depend on a variety of factors, including the size and structure of your dataset, the complexity of the problem, and the trade-offs between accuracy, training time, and ease of use. Choosing the wrong algorithm can easily get you into cost overruns and low model performance.

Wisdom

  • It is important to understand the problem you are trying to solve and the type of data you are working with. This will help you determine whether you need a classification, regression, or clustering algorithm.
  • Consider the size of your dataset and the complexity of the algorithm, along with their input formats.
  • Evaluate the performance of multiple algorithms on your dataset and choose the one that performs best.
  • Be aware of the trade-offs between accuracy, training time, and ease of use when selecting an algorithm.
  • Use K-Fold cross-validation techniques to assess the performance of your models and prevent overfitting (a quick sketch follows this list).
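As an illustration of the last point, here is a minimal K-Fold cross-validation sketch with scikit-learn; the synthetic dataset and logistic regression model are placeholders, and any estimator works the same way.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder dataset; substitute your own features X and labels y
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
model = LogisticRegression(max_iter=1000)

# 5-fold CV: train on 4 folds, validate on the held-out fold, rotate through all 5
scores = cross_val_score(model, X, y, cv=5)
print("Accuracy per fold:", scores, "mean:", scores.mean())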

In-Detail

Amazon SageMaker

Amazon SageMaker is a one-stop solution to develop, train and deploy machine learning models on AWS. It gives four algorithm options, as follows:

  1. Bring your own algorithm.
  2. Write a script in your framework like TensorFlow, MXNet, PyTorch, etc.
  3. Get an Algorithm from AWS Marketplace.
  4. Use Built-In Algorithms.

Built-In algorithms enable us to quickly train and deploy machine learning models. Understanding when to use these algorithms is important to get the best results for the specific problem which we are trying to solve with machine learning.

AWS is always evolving, and more built-in algorithms and JumpStart models are added to the SageMaker service frequently. In this post I will discuss some of the important algorithms which, in my experience, are heavily used and referred to in the AWS Machine Learning Specialty Exam.

Following are mind maps to easily remember these algorithms:

Amazon SageMaker’s Built-In Algorithms:

Linear Learner

Linear Learner fits linear models for classification and regression problems, training multiple model variants with different hyperparameters in parallel and selecting the best one. The algorithm automatically handles feature normalization, making it easy to use. It supports large datasets and can be trained quickly, making it suitable for real-time applications.

 Use Cases:

Fraud detection: By training the Linear Learner algorithm on historical data that includes both fraudulent and legitimate transactions, it can learn to classify new transactions as either fraudulent or legitimate. The algorithm can analyze various features of the transactions, such as transaction amount, location, and time, to make accurate predictions.
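A minimal sketch of training the built-in Linear Learner with the SageMaker Python SDK; the IAM role, S3 bucket and data paths are placeholders you must supply.

import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder role

# Resolve the built-in Linear Learner container image for this region
container = image_uris.retrieve("linear-learner", session.boto_region_name)

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://your-bucket/linear-learner/output",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(predictor_type="binary_classifier", mini_batch_size=100)

# CSV training data (label in the first column) uploaded to S3 beforehand
estimator.fit({"train": TrainingInput("s3://your-bucket/linear-learner/train/",
                                      content_type="text/csv")})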

 

XGBoost

XGBoost can deal with both classification and regression problems. It is particularly useful when you have structured data with a large number of features. XGBoost is known for its ability to handle complex relationships between variables and its capability to handle missing values.

Use Cases:

Customer Churn Prediction

By training the algorithm on historical customer data, including factors such as demographics, purchase history, and customer interactions, it can learn to predict which customers are likely to churn or cancel their subscriptions. This information can help businesses take proactive measures to retain customers and improve customer satisfaction.

Anomaly detection

XGBoost can be used to identify unusual patterns or outliers in data. This can be applied in various domains such as fraud detection, network intrusion detection, or equipment failure prediction. By training the algorithm on normal data patterns, it can effectively identify deviations from the norm and flag potential anomalies.

 

Seq2Seq

Seq2Seq (Sequence-to-Sequence) can work with tasks that involve sequential data, such as language translation, text summarization, or speech recognition. It is specifically designed to handle problems where the input and output are both sequences of varying lengths.

Use Cases:

Machine Translation

By training the algorithm on pairs of sentences in different languages, it can learn to translate text from one language to another. For example, it can be used to translate English sentences into French or vice versa. The Seq2Seq algorithm is capable of capturing the contextual information and dependencies between words in a sentence, allowing it to generate accurate translations.

Text Summarization

By training the algorithm on pairs of long documents and their corresponding summaries, it can learn to generate concise summaries of text. This can be particularly useful in scenarios where there is a need to extract key information from lengthy documents, such as news articles or research papers.

DeepAR

DeepAR can be used when working with time series forecasting problems. It is specifically designed to handle tasks where the goal is to predict future values based on historical data.

Use Cases:

Demand Forecasting

By training the algorithm on historical sales data, it can learn to predict future demand for products or services. This can be particularly useful for businesses to optimize inventory management, production planning, and resource allocation.

Energy Load Forecasting

By training the algorithm on historical energy consumption data, it can learn to predict future energy demand. This can help utility companies optimize energy generation and distribution, as well as enable consumers to make informed decisions about energy usage.

Other Use Cases

The DeepAR algorithm is also applicable to other time series forecasting tasks such as stock market prediction, weather forecasting, and traffic flow prediction. It can capture complex patterns and dependencies in the data, making accurate predictions based on historical trends and seasonality.

 

BlazingText

BlazingText helps with text classification and natural language processing tasks. It is specifically designed to handle large-scale text data and can efficiently train models on massive datasets.

Use Cases:

Sentiment Analysis

By training the algorithm on a large corpus of text data labeled with sentiment (positive, negative, or neutral), it can learn to classify new text inputs based on their sentiment. This can be particularly useful for businesses to analyze customer feedback, social media posts, or product reviews to gain insights into customer sentiment and make data-driven decisions.

Document Classification

By training the algorithm on a diverse set of documents labeled with different categories, it can learn to classify new documents into relevant categories. This can be applied in various domains such as news categorization, spam detection, or topic classification.

Object2Vec

Object2Vec can be used with tasks that involve embedding and similarity analysis of objects or entities. It is specifically designed to handle scenarios where the goal is to learn meaningful representations of objects in a high-dimensional space.

Use Cases:

Recommendation Systems

By training the algorithm on user-item interaction data, it can learn to generate embeddings for users and items. These embeddings can then be used to calculate similarity scores between users and items, enabling personalized recommendations. For example, in an e-commerce setting, the algorithm can learn to recommend products to users based on their browsing and purchase history.

Document Similarity Analysis

By training the algorithm on a collection of documents, it can learn to generate embeddings for each document. These embeddings can be used to measure the similarity between documents, enabling tasks such as document clustering or search result ranking.

Other Use Cases

The Object2Vec algorithm is also applicable to tasks such as image similarity analysis, fraud detection, and anomaly detection. It can learn meaningful representations of objects or entities, allowing for efficient comparison and identification of similar instances.

 

Object Detection

Object Detection can help in detecting and localizing objects within images or videos. It is specifically designed to handle scenarios where the goal is to identify and locate multiple objects of interest within an image or video frame.

Use Cases:

Autonomous Driving

By training the algorithm on a dataset of labeled images or videos, it can learn to detect and localize various objects on the road, such as cars, pedestrians, traffic signs, and traffic lights. This can be crucial for developing advanced driver assistance systems (ADAS) or autonomous vehicles, enabling them to perceive and respond to their surroundings.

Inventory Management and Loss Prevention

By training the algorithm on images or videos of store shelves, it can learn to detect and locate products, ensuring accurate inventory counts and identifying instances of theft or misplaced items.

Other Use Cases

The Object Detection algorithm is also applicable to tasks such as surveillance, object tracking, and medical imaging. It can detect and localize objects of interest within complex scenes, providing valuable insights and enabling automated analysis.

 

Image Classification

Image Classification can categorize images into different classes or labels. It is specifically designed to handle scenarios where the goal is to classify images based on their visual content.

Use Cases:

Medical Imaging

By training the algorithm on a dataset of labeled medical images, it can learn to classify images into different categories such as normal or abnormal, or specific medical conditions. This can assist healthcare professionals in diagnosing diseases, identifying abnormalities, and making informed treatment decisions.

Product Categorization

By training the algorithm on a dataset of labeled product images, it can learn to classify images into different categories such as clothing, electronics, or home goods. This can help automate the process of organizing and categorizing products, improving search and recommendation systems for online retailers.

Other Use Cases

The Image Classification algorithm is also applicable to tasks such as facial recognition, object recognition, and quality control in manufacturing. It can accurately classify images based on their visual features, enabling a wide range of applications in various industries.

Semantic Segmentation

Semantic Segmentation can do pixel-level segmentation of images. It is specifically designed to handle scenarios where the goal is to assign a class label to each pixel in an image, thereby segmenting the image into meaningful regions.

Use Cases:

Autonomous Driving

By training the algorithm on a dataset of labeled images, it can learn to segment the images into different classes such as road, vehicles, pedestrians, and buildings. This can be crucial for developing advanced driver assistance systems (ADAS) or autonomous vehicles, enabling them to understand and navigate their environment.

Medical Imaging

By training the algorithm on a dataset of labeled medical images, it can learn to segment the images into different anatomical structures or regions of interest. This can assist healthcare professionals in accurate diagnosis, treatment planning, and surgical interventions.

Other Use Cases

The Semantic Segmentation algorithm is also applicable to tasks such as object detection, scene understanding, and image editing.

Random Cut Forest

Random Cut Forest (RCF) can do anomaly detection in high-dimensional data. It is specifically designed to handle scenarios where the goal is to identify unusual patterns or outliers within a dataset.

Use Cases:

Fraud Detection

By training the algorithm on a dataset of normal transactions, it can learn to identify anomalous transactions that deviate from the normal patterns. This can help businesses detect fraudulent activities, such as credit card fraud or money laundering, and take appropriate actions to mitigate risks.

Cybersecurity

By training the algorithm on a dataset of normal network traffic patterns, it can learn to detect abnormal network behaviors that may indicate a cyber attack or intrusion. This can help organizations identify and respond to security threats in real-time, enhancing their overall cybersecurity posture.

Other Use Cases

Random Cut Forest algorithm can also be used for tasks such as equipment failure prediction, sensor data analysis, and quality control in manufacturing. It can effectively identify anomalies or outliers within high-dimensional data, enabling proactive maintenance, process optimization, and early detection of potential issues.
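A minimal sketch of training RCF with the SageMaker Python SDK on a synthetic one-dimensional series; the IAM role, instance type and hyperparameters are assumptions.

import numpy as np
from sagemaker import RandomCutForest, Session

session = Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder role

# Synthetic sine wave with injected spikes the model should flag as anomalies
data = np.sin(np.linspace(0, 100, 5000)).reshape(-1, 1).astype("float32")
data[::500] += 5.0

rcf = RandomCutForest(role=role,
                      instance_count=1,
                      instance_type="ml.m5.xlarge",
                      num_samples_per_tree=512,
                      num_trees=50,
                      sagemaker_session=session)

# record_set converts the array into the RecordIO-protobuf format RCF expects
rcf.fit(rcf.record_set(data))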

Neural Topic Model

Neural Topic Model algorithm is specifically designed for topic modeling tasks, which involve discovering latent topics within a collection of documents. It utilizes a neural network-based approach to learn the underlying structure and relationships between words and topics in the text data.

Use Cases:

Content Analysis and Recommendation Systems

By training the algorithm on a large corpus of documents, it can learn to identify and extract meaningful topics from the text. This can be useful for organizing and categorizing large document collections, enabling efficient search and recommendation systems.

Market Research and Customer Feedback Analysis

By training the algorithm on customer reviews, surveys, or social media data, it can uncover the main topics and themes discussed by customers. This can provide valuable insights into customer preferences, sentiment analysis, and help businesses make data-driven decisions.

 

Latent Dirichlet Allocation – LDA

LDA can help in topic modeling. It is specifically designed to uncover latent topics within a collection of documents and assign topic probabilities to each document.

Use Cases:

Text Mining and Document Clustering

By training the algorithm on a dataset of documents, it can learn to identify the underlying topics present in the text. This can be useful for organizing and categorizing large document collections, enabling efficient search, recommendation systems, or content analysis.

Social Media Analysis and Sentiment Analysis

By training the algorithm on social media posts or customer reviews, it can uncover the main topics being discussed and analyze the sentiment associated with each topic.

Other Use Cases

The LDA algorithm is also applicable to tasks such as information retrieval, document summarization, and content recommendation. It can uncover the hidden thematic structure within text data, allowing for efficient organization, summarization, and retrieval of relevant information.

  

K Nearest Neighbors – KNN

KNN can help with both classification and regression tasks based on similarity measures. It is specifically designed to handle scenarios where the goal is to predict the class or value of a new data point based on its proximity to neighboring data points.

Use Cases:

Recommendation Systems

By training the algorithm on a dataset of user-item interactions, it can learn to predict user preferences or recommend items based on the similarity of users or items. This can be useful for personalized recommendations in e-commerce, content streaming platforms, or social media.

Anomaly Detection

By training the algorithm on a dataset of normal data points, it can learn to identify anomalies or outliers based on their dissimilarity to the majority of the data. This can be applied in various domains such as fraud detection, network intrusion detection, or equipment failure prediction.

Other Use Cases

The KNN algorithm is also applicable to tasks such as image recognition, text classification, and customer segmentation. It can classify or predict based on the similarity of features or patterns, making it suitable for a wide range of applications.

 

K-Means

K-Means can work with tasks that involve clustering or grouping similar data points together. It is specifically designed to handle scenarios where the goal is to partition data into K distinct clusters based on their similarity.

Use Cases:

Customer Segmentation

By training the algorithm on customer data, such as demographics, purchase history, or browsing behavior, it can learn to group customers into distinct segments based on their similarities. This can help businesses tailor marketing strategies, personalize recommendations, or optimize customer experiences based on the characteristics of each segment.

Image Compression or Image Recognition

By training the algorithm on a dataset of images, it can learn to group similar images together based on their visual features. This can be useful for tasks such as image compression, where similar images can be represented by a single representative image, or for image recognition, where images can be classified into different categories based on their similarities.

Other Use Cases

K-Means can also help with document clustering, anomaly detection, and market segmentation. It can group data points based on their similarity, allowing for efficient organization, analysis, and decision-making.

 

Principal Component Analysis – PCA

Principal Component Analysis (PCA) can do dimensionality reduction and feature extraction. It is designed to handle scenarios where the goal is to transform high-dimensional data into a lower-dimensional representation while preserving the most important information.

Use Cases:

Data Visualization

By applying PCA to a high-dimensional dataset, it can reduce the dimensionality of the data while retaining the most significant features. This allows for visualizing the data in a lower-dimensional space, making it easier to understand and interpret complex relationships or patterns.

Feature Extraction

By applying PCA to a dataset with a large number of features, it can identify the most informative features and create a reduced set of features that capture the most important information. This can be useful for improving the efficiency and performance of machine learning models by reducing the dimensionality of the input data.

Factorization Machines

Factorization Machines mainly work for recommendation systems, personalized marketing, or collaborative filtering. They are designed to handle scenarios where the goal is to predict user preferences or make recommendations based on interactions between users and items.

Use Cases:

Recommendation Systems

By training the algorithm on user-item interaction data, such as ratings or purchase history, it can learn to predict user preferences and make personalized recommendations. This can be useful for e-commerce platforms, content streaming services, or social media platforms to enhance user experiences and drive engagement.

Personalized Marketing

By training the algorithm on customer data, such as demographics, browsing behavior, or past purchases, it can learn to predict customer preferences and tailor marketing campaigns accordingly. This can help businesses deliver targeted advertisements, personalized offers, or product recommendations to individual customers, improving conversion rates and customer satisfaction.

Other Use Cases

Factorization Machines are also applicable to tasks such as click-through rate prediction, sentiment analysis, and fraud detection. They can capture complex interactions between features and make accurate predictions based on the learned factorization model.

 

IP Insights

IP Insights is an unsupervised built-in algorithm in Amazon SageMaker that learns the usage patterns of IPv4 addresses.

It is designed to capture associations between entities, such as user IDs or account numbers, and the IPv4 addresses they use. When queried with an (entity, IP address) pair, it returns a score indicating how anomalous the pairing is, for example a user logging in from an IP address they are unlikely to use. Such anomalies may point to compromised accounts or other suspicious activities.
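The training data for IP Insights is simply (entity, IPv4 address) pairs in CSV form, for example (values illustrative, using documentation IP ranges):

user_01,192.0.2.10
user_01,192.0.2.11
user_02,198.51.100.7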

Use Cases:

Cybersecurity and Network Security

By utilizing IP Insights, organizations can score incoming logins or requests and identify potential threats when an entity appears from an IP address that does not match its learned usage pattern. This can help in detecting and mitigating malicious activities, such as unauthorized access attempts or account takeovers.

Online Fraud Detection

By training the algorithm on historical (entity, IP address) pairs, such as customer IDs and the addresses they usually transact from, businesses can flag transactions that originate from unusual IP addresses and route them for additional verification.

 

Reinforcement Learning

Reinforcement Learning (RL) can be used for sequential decision-making and learning from interactions with an environment. It is specifically designed to handle scenarios where the goal is to optimize an agent’s actions to maximize a reward signal over time.

Use Cases:

Autonomous Robotics

By training the algorithm on simulated or real-world environments, it can learn to control robotic systems to perform complex tasks. This can include tasks such as object manipulation, navigation, or even playing games. RL enables the agent to learn from trial and error, improving its performance over time through exploration and exploitation of the environment.

Recommendation Systems

By training the algorithm on user interactions and feedback, it can learn to make personalized recommendations that maximize user engagement or satisfaction. This can be applied in various domains such as e-commerce, content streaming platforms, or online advertising, where the goal is to optimize user experiences and increase conversion rates.

Other Use Cases

The Reinforcement Learning algorithm is also applicable to tasks such as resource allocation, portfolio management, and energy optimization. It can learn to make optimal decisions in dynamic and uncertain environments, leading to efficient resource utilization, investment strategies, or energy consumption.

Best Practices in Implementing Security Groups for Web Application on AWS

In-Short

CaveatWisdom

Caveat: It's easy to assign a large, VPC-wide CIDR range (e.g., 10.0.0.0/16) as the source in Security Groups for private instances and avoid painful debugging of data flows; however, doing so opens our systems to a plethora of security vulnerabilities. For example, a compromised system in the network can affect all other systems in the network.

Wisdom:

  1. Create and maintain separate private subnets for each tier of the application.
  2. Only allow the required traffic to instances; you can do this easily by assigning the previous tier's security group as the source (from where the traffic is allowed) in the inbound rule of the present tier's security group.
  3. Keep web servers private and always front them with a managed external Elastic Load Balancer.
  4. Access the servers through Session Manager in AWS Systems Manager.

In-Detail

Some Basics

A Security Group is an instance-level firewall where we can apply allow rules for inbound and outbound traffic. In fact, security groups are associated with the Elastic Network Interfaces (ENIs) of EC2 instances, through which data flows.

We can apply multiple security groups to an instance; the rules from all the security groups associated with the instance are aggregated and applied.

Connection Tracking

Security Groups are stateful: when a request is allowed by the inbound rules, the corresponding response is automatically allowed, and there is no need to apply outbound rules explicitly. This is achieved by tracking the connection state of the traffic to and from the instance.

Note that connections can be throttled if traffic increases beyond the maximum number of tracked connections. If all traffic is allowed on all ports (0.0.0.0/0 or ::/0) for both inbound and outbound rules, then that traffic is not tracked.

Scenario

Let's take a three-tier web application where the front end or the API receiving traffic from users is the Web tier, the application logic API lies in the App tier, and the database is in the third tier.

Directly exposing the web servers to the open internet is a big vulnerability; it is always better to keep them in a private subnet and front them with a load balancer in a public subnet.

It is better to maintain separate private subnets for each tier with their own auto scaling groups.

Overall, we can have one public subnet and three private subnets in each availability zone where we host the application. It is recommended to use at least two availability zones for high availability.

The architecture for our three-tier web application can be as below.

    Architecture for 3-tier Web Application

    Chaining Security Groups

    In the above architecture, Security Groups are chained from one tier to the next. We need to create a separate security group for each tier and a security group for the load balancer in the public subnet. For an Application Load Balancer, we need to select at least 2 subnets, 1 in each availability zone.

    Implementing Chaining of Security Groups

    1. A security group ALB-SG for an external Application Load Balancer should be created with the source open to the internet (0.0.0.0/0) in the inbound rule for all traffic on HTTPS port 443. TLS/SSL can be terminated at the ALB, which can take on the heavy lifting of encryption and decryption. An ID for ALB-SG will be created automatically, let's say sgr-0a123.
    2. For the Web tier, a security group Web-SG with the source ALB-SG sgr-0a123 in the inbound rule on HTTP port 80 should be created. With this rule, only connections from the ALB are allowed to the web servers. Let the ID created for Web-SG be sgr-0b321.
    3. For the App tier, a security group App-SG with the source Web-SG sgr-0b321 in the inbound rule on custom port 8080 should be created. With this rule, only connections from instances with the Web-SG security group are allowed to the app servers. Let the ID created for App-SG be sgr-0c456.
    4. For the Database tier, a security group DB-SG with the source App-SG sgr-0c456 in the inbound rule on MySQL/Aurora port 3306 should be created. With this rule, only connections from instances with the App-SG security group are allowed to the database servers. Let the ID created for DB-SG be sgr-0d654.
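    The four rules above can be scripted. Below is a minimal boto3 sketch; the region, VPC ID and group names are placeholders matching the example, and error handling is omitted.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")  # region is an assumption
    vpc_id = "vpc-0abc1234"  # placeholder VPC ID

    def create_sg(name, description):
        return ec2.create_security_group(GroupName=name, Description=description,
                                         VpcId=vpc_id)["GroupId"]

    alb_sg = create_sg("ALB-SG", "External ALB")
    web_sg = create_sg("Web-SG", "Web tier")
    app_sg = create_sg("App-SG", "App tier")
    db_sg = create_sg("DB-SG", "Database tier")

    # Rule 1: ALB accepts HTTPS 443 from the internet
    ec2.authorize_security_group_ingress(GroupId=alb_sg, IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

    def chain(sg_id, port, source_sg):
        # Allow traffic on this port only from members of the previous tier's group
        ec2.authorize_security_group_ingress(GroupId=sg_id, IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
             "UserIdGroupPairs": [{"GroupId": source_sg}]}])

    chain(web_sg, 80, alb_sg)    # Rule 2: Web tier only from ALB-SG
    chain(app_sg, 8080, web_sg)  # Rule 3: App tier only from Web-SG
    chain(db_sg, 3306, app_sg)   # Rule 4: DB tier only from App-SG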

    Running Containers on AWS as per Business Requirements and Capabilities

    We can run containers with EKS, ECS, Fargate, Lambda, App Runner, Lightsail, OpenShift, or on plain EC2 instances on the AWS Cloud. In this post I will discuss how to choose the AWS service based on our organization's requirements and capabilities.

    In-Short

    CaveatWisdom

    Caveat: Meeting the business objectives and goals can become difficult if we don’t choose the right service based on our requirements and capabilities.

    Wisdom:

    1. Understand the complexity of your application based on how many microservices it has and how they interact with each other.
    2. Estimate how your application scales based on the business.
    3. Analyse the skillset and capabilities of your team and how much time you can spend on administration and learning.
    4. Understand the policies and priorities of your organization in the long term.

    In-Detail

    You may wonder why we have so many services for running containers on AWS. One size does not fit all. We need to understand our business goals and requirements and our team's capabilities before choosing a service.

    Let us understand each service one by one.

    All the services which are discussed below require the knowledge of building containerized images with Docker and running them.

    Running Containers on Amazon EC2 Manually

    You can deploy and run containers on EC2 Instances manually if you have just 1 to 4 applications like a website or any processing application without any scaling requirements.

    Organization Objectives:

    1. Run just 1 to 4 applications on the cloud with high availability.
    2. Have full control at the OS level.
    3. Have standard workload all the time without any scaling requirements.

    Capabilities Required:

    1. Team should have full understanding of AWS networking at VPC level including load balancers.
    2. Configure and run a container runtime like the Docker daemon.
    3. Deploy application containers manually on the EC2 instances by accessing them through SSH.
    4. Knowledge of maintaining OS on EC2 instances.

    The cost is predictable if there is no scaling requirement.

    The disadvantages in this option are:

    1. We need to keep the OS and Docker updated manually.
    2. We need to constantly monitor the health of running containers manually.

    What if you don't want the headache of managing EC2 instances and monitoring the health of your containers? – Enter Amazon Lightsail.

     Running Containers with Amazon Lightsail

    The easiest way to run containers is Amazon Lightsail. To run containers on Lightsail, we just need to define the power of the node (EC2 instance) required and the scale, that is, how many nodes. If the number of container instances is more than 1, Lightsail copies the container across the multiple nodes you specify. Lightsail uses ECS under the hood and manages the networking.

    Organization Objectives:

    1. Run multiple applications on the cloud with high availability.
    2. Infrastructure should be fully managed by AWS with no maintenance.
    3. Have standard workload and scale dynamically when there is need.
    4. Minimal and predictable cost with bundled services including load balancer and CDN.

    Capabilities Required:

    1. Team should have just knowledge of running containers.

    Lightsail can scale dynamically, but scaling must be managed manually; we cannot implement auto scaling based on triggers like an increase in traffic.

    What if you need more features, like building a CI/CD pipeline or integrating with a Web Application Firewall (WAF) at the edge locations? – Enter AWS App Runner.

     

    Running Containers with AWS App Runner

    AWS App Runner is one more easy service for running containers. We can implement auto scaling and secure the traffic with AWS WAF and other services like private endpoints in a VPC. App Runner directly connects to the image repository and deploys the containers. We can also integrate with other AWS services like CloudWatch, CloudTrail and X-Ray for advanced monitoring capabilities.

    Organization Objectives:

    1. Run multiple applications on the cloud with high availability.
    2. Infrastructure should be fully managed by AWS with no maintenance.
    3. Auto Scale as per the varying workloads.
    4. Implement high security features like traffic filtering and isolating workloads in a private secured environment.

    Capabilities Required:

    1. Team should have just knowledge of running containers.
    2. AWS knowledge of services like WAF, VPC, CloudWatch is required to handle the advanced requirements.

    App Runner supports full-stack web applications, including front-end and backend services. At present App Runner supports only stateless applications; stateful applications are not supported.

    What if you need to run containers in a serverless fashion, i.e., an event-driven architecture in which you run a container only when needed (invoked by an event) and pay only for the time the process runs to service the request? – Enter AWS Lambda.

    Running Containers with AWS Lambda

    With Lambda, you pay only for the time your container function runs (in milliseconds) and for how much RAM you allocate to the function; if your function runs for 300 milliseconds to process a request, then you pay only for that time. You need to build your container image with a base image provided by AWS. The base images are open-source, made by AWS, and preloaded with a language runtime and the other components required to run a container image on Lambda. If we choose our own base image, then we need to add the appropriate runtime interface client for our function so that it can receive invocation events and respond accordingly.

    Organization Objectives:

    1. Run multiple applications on the cloud with high availability.
    2. Infrastructure should be fully managed by AWS with no maintenance.
    3. Auto Scale as per the varying workloads.
    4. Implement high security features like traffic filtering and isolating workloads in a private secured environment.
    5. Implement event-based architecture.
    6. Pay only for the requests processed, with no idle-time cost for apps.
    7. Seamlessly integrate with other services like API Gateway where throttling is needed.

    Capabilities Required:

    1. Team should have just knowledge of running containers.
    2. Team should have deep understanding of AWS Lambda and event-based architectures on AWS and other AWS services.
    3. Existing applications may need to be modified to handle the event notifications and integrate with runtime client interfaces provided by the Lambda Base images.

    We need to be aware of Lambda's limitations: it is stateless, the maximum time a function can run is 15 minutes, and it provides only temporary storage for buffer operations.

    What if you need more transparency, i.e., access to the underlying infrastructure, while the infrastructure is still managed by AWS? – Enter AWS Elastic Beanstalk.

    Running Containers with AWS Elastic Beanstalk

    We can run any containerized application on AWS Elastic Beanstalk, which deploys and manages the infrastructure on your behalf. We can create and manage separate environments for development, testing, and production, and deploy any version of our application to any environment. We can do rolling deployments or Blue/Green deployments. Elastic Beanstalk provisions the infrastructure, i.e., VPC, EC2 instances and load balancers, with CloudFormation templates developed with best practices.

    For running containers, Elastic Beanstalk uses ECS under the hood. ECS provides the cluster running the Docker containers, and Elastic Beanstalk manages the tasks running on the cluster.

    Organization Objectives:

    1. Run multiple applications on the cloud with high availability.
    2. Infrastructure should be fully managed by AWS with no maintenance.
    3. Auto Scale as per the varying workloads.
    4. Implement high security features like traffic filtering and isolating workloads in a private secured environment.
    5. Implement multiple environments for development, staging and production.
    6. Deploy with strategies like Blue / Green and Rolling updates.
    7. Access to the underlying instances.

    Capabilities Required:

    1. Team should have just knowledge of running containers.
    2. Foundational knowledge of AWS and Elastic Beanstalk is enough.

    What if you need to implement a more complex microservices architecture with advanced functionality like a service mesh and orchestration? – Enter Amazon Elastic Container Service directly.

    Running Containers with Amazon Elastic Container Service (Amazon ECS)

    When we want to implement a complex microservices architecture with orchestration of containers, ECS is the right choice. Amazon ECS is a fully managed service with built-in best practices for operations and configuration. It removes the complexity of managing the control plane and gives the option to run our workloads anywhere, in the cloud or on-premises.

    ECS gives two launch types to run tasks: Fargate and EC2. Fargate is a serverless option with low overhead, with which we can run containers without managing infrastructure. EC2 is suitable for large workloads which require consistently high CPU and memory.

    A task in ECS is a blueprint of our microservice; it can run one or more containers. We can run tasks manually for applications like batch jobs, or with the Service Scheduler, which ensures the scheduling strategy for long-running stateless microservices. The Service Scheduler orchestrates containers across multiple availability zones by default, using task placement strategies and constraints.

    Organization Objectives:

    1. Run complex microservices architecture with high availability and scalability.
    2. Orchestrate the containers as per complex business requirements.
    3. Integrate with AWS services seamlessly.
    4. Low learning curve for the team which can take advantage of cloud.
    5. Infrastructure should be fully managed by AWS with no maintenance.
    6. Auto Scale as per the varying workloads.
    7. Implement high security features like traffic filtering and isolating workloads in a private secured environment.
    8. Implement complex DevOps strategies with managed services for CI/CD pipelines.
    9. Access to the underlying instances for some applications and at the same time have a serverless option for some other workloads.
    10. Implement service mesh for microservices with a managed service like App Mesh.

    Capabilities Required:

    1. Team should have knowledge of running containers.
    2. Intermediate level of understanding of AWS services is required.
    3. Good knowledge of ECS orchestration and scheduling configuration will add much value.
    4. Optionally Developers should have knowledge of services mesh implementation with App mesh if it is required.

    What if you need to migrate existing on-premises container workloads running on Kubernetes to the cloud, or what if your organization's policy states that you must adopt open-source technologies? – Enter Amazon Elastic Kubernetes Service.

     

    Running Containers with Amazon Elastic Kubernetes Service (Amazon EKS)

    Amazon EKS is a fully managed service for the Kubernetes control plane, and it gives the option to run workloads on self-managed EC2 instances, managed EC2 instances, or the fully managed serverless Fargate service. It removes the headache of managing and configuring the Kubernetes control plane, with built-in high availability and scalability. EKS runs an upstream implementation of the CNCF-released Kubernetes version, so all the workloads presently running on on-premises K8s will work on EKS. It also gives the option to extend the same EKS console to on-premises environments with EKS Anywhere.

    Organization Objectives:

    1. Adopt open-source technologies as a policy.
    2. Migrate existing workloads on Kubernetes.
    3. Run complex microservices architecture with high availability and scalability.
    4. Orchestrate the containers as per complex business requirements.
    5. Integrate with AWS services seamlessly.
    6. Infrastructure should be fully managed by AWS with no maintenance.
    7. Auto Scale as per the varying workloads.
    8. Implement high security features like traffic filtering and isolating workloads in a private secured environment.
    9. Implement complex DevOps strategies with managed services for CI/CD pipelines.
    10. Access to the underlying instances for some applications and at the same time have a serverless option for some other workloads.
    11. Implement service mesh for microservices with a managed service like App Mesh.

    Capabilities Required:

    1. Team should have knowledge of running containers.
    2. An intermediate level of understanding of AWS services is required, and a deep understanding of networking on AWS for Kubernetes will help a lot; you can read my previous blog here.
    3. The learning curve with Kubernetes is high, and the team should spend sufficient time learning it.
    4. Good knowledge of EKS orchestration and scheduling configuration.
    5. Optionally Developers should have knowledge of services mesh implementation with App mesh if it is required.
    6. Team should have knowledge of handling Kubernetes updates; you can refer to my vlog here.

     

    Running Containers with Red Hat OpenShift Service on AWS (ROSA)

    If the organization manages its existing workloads on Red Hat OpenShift and wants to take advantage of the AWS Cloud, then we can migrate easily to Red Hat OpenShift Service on AWS (ROSA), which is a managed service. We can use ROSA to create Kubernetes clusters using the Red Hat OpenShift APIs and tools, and have access to the full breadth and depth of AWS services. We can also access Red Hat OpenShift licensing, billing, and support, all directly through AWS.

     

    I have seen many organizations adopt multiple services to run their container workloads on AWS; it is not necessary to stick to one kind of service. In a complex enterprise architecture, it is recommended to keep all options open and adapt as business needs change.

    Interweaving Purpose-Built Databases in the Microservices Architecture

    It is a best practice to have a separate database for each microservice, based on its purpose. In this post we will understand how to analyse the purpose based on a scenario and choose the right database.

    In-Short

    CaveatWisdom

    Caveat: We can easily run into cost overruns if we do not choose the right database and design it properly based on the purpose of our application.

    Wisdom:

    1. Understand the access patterns (queries) which you make on your database.
    2. Understand how your database storage scales; will it be in terabytes or petabytes?
    3. Analyse what is most important for your application among Consistency, Availability and Partition Tolerance.
    4. Choose purpose-built databases on the AWS cloud based on the purpose of your application.

    In-Detail

    Scenario

    Let us consider we are developing a loan processing application for a Bank. The Loan could be an Auto Loan, Home Loan, Personal Loan or any other loan.

    Requirements of Scenario

    1. A customer visits the loan application portal, reviews the loan products and interest rates, and applies for a loan. The portal should give the customer a smooth experience without any latencies. Once an application is submitted, it should be acknowledged immediately and queued for processing.
    2. The bank expects a huge volume of loan applications across regions from its marketing efforts; loan application data should be stored in a scalable database, and as it is very important data, it should be replicated in multiple regions for high availability.
    3. While processing the loan, the creditworthiness of the customer has to be analysed.
    4. Customer profiles and loan documents should be stored in a secured, scalable database with a content management system.
    5. Based on the creditworthiness of the customer, loan documents should be sent for final manual approval.
    6. A loan account for the customer should be created, and loan transaction data should be maintained in it. We also need to run ad hoc queries on the transaction data in relation to floating interest rates and repayment schedules for generating different statements and reports.
    7. Loan application data should be sent to a data warehouse for marketing and customer analytics. Data from other sources and products will be ingested into the same data warehouse to improve marketing strategies and showcase relevant products to customers.
    8. Immutable records of loan application events should be maintained for regulatory and compliance purposes, and these records should be securely shared with the insurer of the asset created by the customer with the loan.

    Note: The architecture is simplified for purpose of discussion, real production scenario architecture could be much more complex.

    Architecture Brief

    The customer-facing loan application portal is a static website hosted on Amazon S3 with CloudFront integration. API calls are made to API Gateway, which passes the request data to Lambda functions for processing. A fanout mechanism combining SNS and SQS is adopted to process and ingest data into multiple databases in parallel. The process workflow, including manual approval, is handled by AWS Step Functions. SNS is used for internal notifications, and SES is used to notify the customer of the loan status by email.

    The Command Query Responsibility Segregation (CQRS) pattern is adopted in the above architecture, with separate Lambda functions for ingesting data and for processing it.

    Analysing Purpose and Choosing the Database

    1. When handling customer queries, especially at the time of acquiring a new customer or selling a new product to an existing customer, the response time should be very low. To give the customer a very good experience, all frequent general query responses, session state, and data which can afford to be stale should be cached. Amazon ElastiCache for Redis is a managed, distributed in-memory data store built for exactly this purpose. It gives a high-performance, microsecond-latency caching solution, and it comes with Multi-AZ capability for high availability.
    2. Loan application data will be mostly key-value pairs like loan amount, loan type, customer ID, etc. As per requirement 2, a huge volume of loan data has to be stored and retrieved for processing, and at the same time it should be replicated to multiple regions for very high availability. Amazon DynamoDB is a key-value database which can give single-digit millisecond latency even at petabyte scale. It has an inherent capability to replicate data to other regions with Global Tables by enabling DynamoDB streams. So, it is suitable for storing loan application data and for triggering the Loan Processing Lambda function with DynamoDB streams (a small write sketch follows this list).
    3. As per requirement 3 of the scenario, the creditworthiness of the customer has to be analysed before arriving at a decision to sanction the loan amount. It is assumed that the bank collects data about customers from various sources and also maintains the data of its relationships with existing customers. Creditworthiness is calculated on many factors, especially the history of the relationship the customer has had with the bank across products like savings accounts, credit cards, income and repayment history. When it comes to querying relationships and analysing such data, we need a graph database. Amazon Neptune is a fully managed graph database service that works with highly connected datasets. It can scale to handle billions of relationships and lets you query them with millisecond latency. It stores data items as vertices of the graph and the relationships between them as edges. Loan application data can be ingested into Amazon Neptune and creditworthiness can be analysed.
    4. As per requirement 4, customer profiles and loan documents should be maintained with a content management system. Loan documents contain critical legal information which can change based on various products, and the documents can differ based on the law of the different states in which the bank operates. To address these requirements, the schema of the database should be dynamic, and we may need to query and process these documents within milliseconds. We need a No-SQL document database that scales for the content management system of loan documents. Amazon DocumentDB (with MongoDB compatibility) is a fully managed database service which supports both instance-based clusters, which can scale up to 128 TB, and Elastic Clusters, which can scale even to petabytes of data. We can put loan documents with dynamic schema as JSON documents in DocumentDB, and we can use MongoDB drivers to develop our application against DocumentDB. Additionally, signed and scanned copies of the documents can be maintained in an S3 bucket with a reference to each document in DocumentDB.
    5. As per requirement 6, a loan account should be opened for the customer, where transaction data should be maintained. Here we need to maintain the integrity of the transaction data with a fixed schema and Online Transaction Processing (OLTP), and SQL is more suitable for the ad hoc queries that generate statements, so a relational database fits this purpose. Amazon RDS, which supports six SQL database engines (Aurora, MySQL, PostgreSQL, MariaDB, Oracle and MS SQL Server), is a managed service for relational databases. Amazon RDS manages backups, software patching, automatic failure detection, and recovery, which are tedious manual tasks if we maintain the database ourselves; we can focus on application development instead. If we are comfortable with MySQL or PostgreSQL, we can choose Amazon Aurora based on version compatibility. Aurora gives more throughput than the standard MySQL and PostgreSQL engines as it uses clustered volume storage which is native to the cloud.
    6. As per requirement 7, loan application data has to be stored for marketing and customer analytics. The data warehouse also stores data from multiple sources, which could run into petabytes, and the data can be analysed with machine learning algorithms to help with targeted marketing. Amazon Redshift is a fully managed, petabyte-scale data warehouse service built on a group of nodes called a cluster. The Amazon Redshift service manages provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine, which can run one or more databases. We can run SQL commands for analysis on the database, and Amazon Redshift supports SQL client tools connecting through Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC). Amazon Redshift also gives a serverless option where we need not provision any clusters; it automatically provisions data warehouse capacity and scales the underlying resources, and we pay only when the data warehouse is in use. We can use Amazon Redshift ML to train and deploy machine learning models with SQL, and we can use Amazon SageMaker to train models with data in Amazon Redshift for customer analytics.
    7. As per requirement 8, immutable records of loan application events should be maintained for regulatory and compliance purposes. We need a ledger database to maintain immutable records and also to securely share the data with other stakeholders in blockchain applications. The insurer who is insuring the asset created out of the loan taken by the customer may need this data for insurance purposes. With Amazon Quantum Ledger Database (Amazon QLDB) we can maintain all the activities with respect to the loan in an immutable and cryptographically verifiable transaction log owned by the bank. We can track the history of credits and debits in loan transactions and also verify the data lineage of an insurance claim on the asset. Amazon QLDB is a fully managed service, and we pay only for what we use.
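    As a small illustration of point 2 above, below is a boto3 sketch of writing a loan application item to DynamoDB; the table name, key schema and attributes are assumptions.

    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")  # region is an assumption
    table = dynamodb.Table("LoanApplications")  # placeholder table, partition key applicationId

    # A loan application is a natural fit for a key-value item
    table.put_item(Item={
        "applicationId": "LA-2023-000123",
        "customerId": "CUST-98765",
        "loanType": "AUTO",
        "loanAmount": 25000,
        "status": "SUBMITTED",
    })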

    In this post I have discussed how to choose a purpose-built database based on your application. I will be discussing the design and implementation of these databases in future posts.

    Building a Docker Container for a Java App and Deploying It on Amazon EKS

    GitHub link: https://github.com/getramki/Deploy-JavaApp-On-EKS.git

    This repo contains a sample Spring Boot Java app with a Dockerfile that uses Amazon Corretto 17 as the base image, along with manifests for creating an Amazon EKS cluster, deploying the sample app to the cluster as a container, and exposing it with a service and a Classic Load Balancer.

    Prerequisites

    Docker, an AWS account, an IAM user with the necessary permissions for creating an EKS cluster, the AWS CLI configured with that IAM user's programmatic credentials, eksctl, and kubectl. Please install and configure the above before going further.

    • You can incur charges in your AWS account by following the steps below.
    • The code will deploy in the us-west-2 region; change it wherever necessary if deploying in another region.

    After downloading the repo, cd to the repo directory in the terminal and follow the steps for:

    1. Building a Docker image for the Java app and pushing it to Amazon ECR.
    2. Creating an Amazon EKS cluster with eksctl.
    3. Deploying the sample app to the EKS cluster.

    Steps for Building a Docker Image and Pushing it to Amazon ECR

    • Change directory to sample
    cd sample
    • Run docker daemon
    sudo dockerd 
    • Build an image
    docker build --tag sample . 
    • View local images
    docker images
    • Build the build stage of the Docker image
    docker build -t sample-build --target build .
    • Build the production stage of the Docker image
    docker build -t sample-production --target production .
    • Get ECR Login and pass it to docker
    aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin Replace-With-AWS-Account-ID.dkr.ecr.us-west-2.amazonaws.com
    • Create ECR repo
    aws ecr create-repository --repository-name sample-repo --image-scanning-configuration scanOnPush=true --region us-west-2
    • Tag the image
    docker tag sample-production:latest Replace-With-AWS-Account-ID.dkr.ecr.us-west-2.amazonaws.com/sample-repo
    • Push the Image to ECR Repo
    docker push Replace-With-AWS-Account-ID.dkr.ecr.us-west-2.amazonaws.com/sample-repo

    Create EKS Cluster

    Create an Amazon EKS cluster in the us-west-2 region with 2 t3.micro instances. Creation of the EKS cluster can take up to 20 minutes.

    eksctl create cluster -f devcluster-addons-us-west-2.yaml

    Deploy Image to EKS Cluster

    Update the image URL in the deployment.yaml file by replacing Replace-With-AWS-Account-ID with your AWS account ID.

    • Deploy Java Sample-App
    kubectl apply -f deployment.yaml
    • Deploy Java Sample-App Service
    kubectl apply -f service.yaml
    • Deploy ingress (optional)
    kubectl apply -f ingress.yaml
    • Verify the deployment, service and pods
    kubectl get deployment sample-app
    kubectl get deployments
    kubectl get service sample-app -o wide
    kubectl get pods -n default

    Delete Resources

    • Delete Deployments
    kubectl delete deployment sample-app
    • Delete services
    kubectl delete service sample-app
    • Delete ingress if you have created it
    kubectl delete ingress sample-app
    • Delete Amazon EKS Cluster
    eksctl delete cluster -f devcluster-addons-us-west-2.yaml