Cloud Security Measures in Practice with AWS & GCP: Optimizing WAF Configuration, DDoS Protection, and Access Control
Introduction
Security measures in cloud environments are essential regardless of the scale or purpose of the system. Without appropriate defenses, you expose yourself to various risks such as DDoS attacks, unauthorized access, increased load from bot traffic, and eavesdropping on communications due to poor certificate management.
This article explains practical security hardening measures for systems that use AWS and GCP. Specifically, it covers topics such as WAF (Web Application Firewall) configuration, bot countermeasures, IP address restrictions, proper VPC design, DDoS protection, log monitoring, TLS certificate management, SSH protection, and IAM policy optimization. By implementing these measures properly, you can significantly improve the security level of your cloud environment.
If you are unsure where to start or want to know the minimum security measures you should take, use this article as a reference.
Introducing a Web Application Firewall (WAF) and Proper Rule Configuration
What is a WAF?
A WAF (Web Application Firewall) is a firewall that protects web applications from attacks. Unlike a general firewall, it monitors and filters traffic at the HTTP/HTTPS level and prevents attacks such as SQL injection and cross-site scripting (XSS).
The main types of web attacks that a WAF defends against are:
- SQL Injection (SQLi): Sending malicious SQL statements to a web application to manipulate the database illegally
- Cross-Site Scripting (XSS): Embedding scripts to steal user information
- Cross-Site Request Forgery (CSRF): Tricking authenticated users into sending malicious requests
- Remote File Inclusion (RFI): Forcing the loading of external malicious scripts
- Directory Traversal: Specifying illegal paths to access confidential files on the server
- Brute-force attacks: Exhaustive attacks against login forms
- DoS/DDoS attacks: Sending a large number of requests to bring down the server
How to deploy a WAF
There are several ways to deploy a WAF:
- Cloud-based WAF
  A WAF that filters traffic via a cloud service.
  - Advantages:
    - Easy to configure and deploy
    - Highly scalable and capable of handling large volumes of traffic
    - Regular signature updates are provided, reducing the operational burden
  - Disadvantages:
    - There may be limitations when applying certain custom rules
    - Risk of outages in the cloud provider you depend on
  - Representative cloud-based WAFs:
    - Cloudflare WAF
    - AWS WAF
    - Azure WAF
    - Google Cloud Armor
Cloud-based WAFs are easy to adopt and offer scalability and a reduced operational load. By contrast, appliance-based and software-based WAFs allow fine-grained tuning and can be used in on-premises environments.
- Software-based WAF
  A WAF deployed as software running on a server.
  - Advantages:
    - Some are open source, so the initial cost is low
    - Can be customized to fit your own environment
  - Disadvantages:
    - High operational and tuning burden
    - Depends on the host's hardware specs, so not suited to very large traffic volumes
  - Representative software-based WAFs:
    - ModSecurity (runs on Apache/Nginx)
    - NAXSI (for Nginx)
Proper rule configuration
To maximize the effectiveness of a WAF, proper rule configuration is required.
Signature-based filtering
Blocks requests based on known attack patterns (signatures).
Example with ModSecurity (preventing SQL injection)
SecRule ARGS "@rx select.*from" "id:'1001',msg:'SQL Injection Attempt',deny,status:403"
This rule returns a 403 error if the request parameters (ARGS) contain an SQL fragment such as select ... from. Note that this simplified pattern is case-sensitive and easy to bypass; production deployments typically rely on a maintained rule set such as the OWASP Core Rule Set (CRS) rather than hand-written signatures.
IP whitelist / blacklist
Allows or denies access from specific IP addresses.
IP blacklist rule in AWS WAF
{
"IPSetDescriptors": [
{
"Type": "IPV4",
"Value": "192.168.1.100/32"
}
]
}
This configuration blocks access from 192.168.1.100.
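The matching logic behind such an IPSet can be sketched in a few lines with Python's standard `ipaddress` module. The blocklist entry mirrors the single /32 from the example above; `is_blocked` is a helper name invented for this sketch:

```python
import ipaddress

# Hypothetical blocklist mirroring the IPSet example: one /32 entry.
BLOCKED_NETWORKS = [ipaddress.ip_network("192.168.1.100/32")]

def is_blocked(client_ip: str) -> bool:
    """Return True if client_ip falls inside any blocked CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

is_blocked("192.168.1.100")  # True: exactly the blocked address
is_blocked("192.168.1.101")  # False: outside the /32
```

Because entries are CIDR ranges, widening the block to a whole network is just a matter of adding, say, `192.168.1.0/24` to the list.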
Rate limiting
To prevent DoS attacks that send a large number of requests, limit the number of requests within a certain period.
Rate limit configuration in Cloudflare WAF
{
"action": "block",
"match": {
"request": {
"rate_limit": {
"threshold": 100,
"period": 60
}
}
}
}
This configuration blocks clients that send more than 100 requests within 60 seconds.
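The idea behind such a rule can be illustrated with a small, self-contained sliding-window limiter. This is a sketch of the technique, not Cloudflare's implementation; the class and method names are invented for the example:

```python
from collections import deque

class SlidingWindowRateLimiter:
    """Allow at most `threshold` requests per client within `period` seconds."""

    def __init__(self, threshold: int = 100, period: float = 60.0):
        self.threshold = threshold
        self.period = period
        self._hits: dict[str, deque] = {}

    def allow(self, client_ip: str, now: float) -> bool:
        q = self._hits.setdefault(client_ip, deque())
        # Evict timestamps that have fallen out of the window.
        while q and now - q[0] >= self.period:
            q.popleft()
        if len(q) >= self.threshold:
            return False  # over the limit -- a WAF would answer 403/429 here
        q.append(now)
        return True

limiter = SlidingWindowRateLimiter(threshold=3, period=60.0)
results = [limiter.allow("198.51.100.7", t) for t in (0.0, 1.0, 2.0, 3.0)]
# results == [True, True, True, False]: the fourth request exceeds the limit
```

Real WAFs track this per edge node and aggregate counts, but the block/allow decision follows the same shape.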
Logging and monitoring
Monitor WAF logs and analyze suspicious access to update rules as needed.
CloudWatch logging configuration for AWS WAF
{
"LoggingConfiguration": {
"LogDestinationConfigs": [
"arn:aws:logs:us-east-1:123456789012:log-group:waf-logs"
]
}
}
With this configuration, WAF logs are stored in AWS CloudWatch and can be analyzed.
Filtering bot traffic (reCAPTCHA / Cloudflare Turnstile)
Bot traffic can cause spam and unauthorized access, so it is important to filter it appropriately. Bots often affect scenarios such as form submissions and login authentication, so solutions like Google reCAPTCHA and Cloudflare Turnstile are commonly used.
reCAPTCHA v2
This method uses image-based challenges, where users answer prompts such as “Select all traffic lights” to determine whether they are bots.
- Characteristics
- Requires explicit user actions (clicking/selecting)
- The “I’m not a robot” checkbox format is common
- Has few false positives but places a higher burden on users
- Implementation steps (Next.js example)
- Register with Google reCAPTCHA and obtain a site key and secret key
- Add the reCAPTCHA widget to the frontend
- Perform reCAPTCHA verification on the backend
Frontend
Note that React cannot pass a function through a data-callback attribute (data-* props only accept strings), so the widget is rendered explicitly instead. This assumes the reCAPTCHA script is loaded with https://www.google.com/recaptcha/api.js?render=explicit, which defines the global grecaptcha object.
import { useEffect, useRef, useState } from "react";

declare const grecaptcha: any; // provided by the reCAPTCHA script tag

export default function ContactForm() {
  const [recaptchaToken, setRecaptchaToken] = useState("");
  const widgetRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    // Render the v2 checkbox widget and capture the token via the callback
    grecaptcha.ready(() => {
      grecaptcha.render(widgetRef.current!, {
        sitekey: "YOUR_SITE_KEY",
        callback: (token: string) => setRecaptchaToken(token),
      });
    });
  }, []);

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    const response = await fetch("/api/verify-recaptcha", {
      method: "POST",
      body: JSON.stringify({ token: recaptchaToken }),
      headers: { "Content-Type": "application/json" },
    });
    const data = await response.json();
    alert(data.success ? "Sent successfully" : "reCAPTCHA failure");
  };

  return (
    <form onSubmit={handleSubmit}>
      <div ref={widgetRef}></div>
      <button type="submit">Send</button>
    </form>
  );
}
Backend
export default async function handler(req, res) {
const { token } = req.body;
const secretKey = "YOUR_SECRET_KEY";
const response = await fetch(
`https://www.google.com/recaptcha/api/siteverify?secret=${secretKey}&response=${token}`,
{ method: "POST" }
);
const data = await response.json();
if (data.success) {
res.json({ success: true });
} else {
res.status(400).json({ success: false });
}
}
reCAPTCHA v3
Performs score-based evaluation and distinguishes bots without user interaction.
Characteristics
- Bot detection using a score (0.0–1.0)
- Can be applied seamlessly to form submissions and logins
- Can specify actions to support different behaviors
Frontend
A v3 token expires after about two minutes, so it should be fetched at submit time rather than once on mount. This assumes the script https://www.google.com/recaptcha/api.js?render=YOUR_SITE_KEY has been loaded, which defines the global grecaptcha object.
declare const grecaptcha: any; // provided by the reCAPTCHA script tag

export default function ContactForm() {
  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    // Request a fresh, action-scoped token just before verification
    const token = await grecaptcha.execute("YOUR_SITE_KEY", { action: "submit" });
    const response = await fetch("/api/verify-recaptcha", {
      method: "POST",
      body: JSON.stringify({ token }),
      headers: { "Content-Type": "application/json" },
    });
    const data = await response.json();
    alert(data.success ? "Sent successfully" : "reCAPTCHA failure");
  };
  return (
    <form onSubmit={handleSubmit}>
      <button type="submit">Send</button>
    </form>
  );
}
Backend
export default async function handler(req, res) {
const { token } = req.body;
const secretKey = "YOUR_SECRET_KEY";
// Call Google's reCAPTCHA site verification API
const response = await fetch(
`https://www.google.com/recaptcha/api/siteverify?secret=${secretKey}&response=${token}`,
{ method: "POST" }
);
const data = await response.json();
if (!data.success) {
return res.status(400).json({ success: false, message: "reCAPTCHA verification failure" });
}
const score = data.score; // Retrieved score
console.log("reCAPTCHA score:", score); // For debugging
// Set a threshold to determine whether it is a bot
if (score >= 0.5) {
res.json({ success: true, message: "Determined to be human", score });
} else {
res.status(403).json({ success: false, message: "Possible bot", score });
}
}
Cloudflare Turnstile
Turnstile is Cloudflare's privacy-focused alternative to reCAPTCHA, and the integration pattern is almost identical.
Frontend
As with reCAPTCHA, React cannot pass a function through a data-callback attribute, so the widget is rendered explicitly. This assumes the script https://challenges.cloudflare.com/turnstile/v0/api.js?render=explicit has been loaded, which defines the global turnstile object.
import { useEffect, useRef, useState } from "react";

declare const turnstile: any; // provided by the Turnstile script tag

export default function ContactForm() {
  const [turnstileToken, setTurnstileToken] = useState("");
  const widgetRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    // Render the Turnstile widget and capture the token via the callback
    turnstile.render(widgetRef.current!, {
      sitekey: "YOUR_SITE_KEY",
      callback: (token: string) => setTurnstileToken(token),
    });
  }, []);

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    const response = await fetch("/api/verify-turnstile", {
      method: "POST",
      body: JSON.stringify({ token: turnstileToken }),
      headers: { "Content-Type": "application/json" },
    });
    const data = await response.json();
    alert(data.success ? "Sent successfully" : "Turnstile failure");
  };

  return (
    <form onSubmit={handleSubmit}>
      <div ref={widgetRef}></div>
      <button type="submit">Send</button>
    </form>
  );
}
Backend
export default async function handler(req, res) {
const { token } = req.body;
const secretKey = "YOUR_SECRET_KEY";
const response = await fetch("https://challenges.cloudflare.com/turnstile/v0/siteverify", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ secret: secretKey, response: token }),
});
const data = await response.json();
if (data.success) {
res.json({ success: true });
} else {
res.status(400).json({ success: false });
}
}
Comparison: reCAPTCHA vs. Cloudflare Turnstile
| | reCAPTCHA v2 | reCAPTCHA v3 | Cloudflare Turnstile |
|---|---|---|---|
| User interaction | Required (image selection) | Not required (score-based) | Not required |
| Privacy | Depends on Google | Depends on Google | Depends on Cloudflare (privacy-first) |
| Processing speed | Rather slow | Normal | Fast |
| Ease of implementation | Easy | Slightly harder (score tuning) | Easy |
IP address restrictions (restricting access to admin panels and admin APIs)
IP address restriction is a mechanism that allows access only from specific IP addresses (or ranges) and denies access from all other IP addresses.
It is mainly used in the following cases:
- Admin panels (e.g., web application admin dashboards)
- Admin APIs (e.g., internal APIs or APIs for external services)
- Databases (e.g., restricting remote access)
- Cloud services (e.g., management consoles for AWS, GCP, Azure)
By applying IP address restrictions, only allowed IP addresses can access the system, minimizing the risk of unauthorized access.
Setting an IP whitelist
By setting an IP whitelist, you can allow access only from permitted IP addresses.
- Apply to web application admin panels and APIs
- Use AWS WAF, Cloudflare, Azure WAF, etc.
- Restrict via Nginx or Apache configuration
- Restrict via VPC (Virtual Private Cloud) security groups
Example: IP restriction using AWS WAF
{
"IPSet": {
"Name": "AllowedIPs",
"Scope": "REGIONAL",
"Addresses": ["203.0.113.1/32", "198.51.100.2/32"]
}
}
Example: IP restriction in Nginx
location /admin {
allow 203.0.113.1;
allow 198.51.100.2;
deny all;
}
Using VPN / Zero Trust solutions
By using VPN or Zero Trust solutions, you can allow access only via the internal network.
- Use AWS Client VPN
- OpenVPN
- WireGuard, etc.
Combining with Multi-Factor Authentication (MFA)
Introducing MFA further strengthens security.
- Google Authenticator (TOTP-based MFA)
- Duo Security (enterprise MFA solution)
- Okta MFA (combined with cloud ID management)
Example: AWS IAM MFA configuration
{
"Effect": "Deny",
"Action": "*",
"Resource": "*",
"Condition": {
"BoolIfExists": {
"aws:MultiFactorAuthPresent": "false"
}
}
}
By applying a policy like the above to IAM users, you can restrict access for users who have not enabled MFA. Note that, as written, this also blocks the calls needed to enroll an MFA device in the first place; AWS's published sample for this pattern adds NotAction exceptions for the IAM MFA-management actions so that first-time setup still works.
Advantages and disadvantages of IP address restrictions
Advantages
- Prevents unauthorized access
- Allows access only from specific locations or networks
- Can strengthen security by combining with WAF or VPN
Disadvantages
- Inconvenient for remote work, etc., because access from IPs not on the allowlist is blocked
- Hard to manage in environments with changing IP addresses (ISP changes, dynamic IPs, etc.)
- If the allowlist is misconfigured, administrators themselves may be unable to log in
Proper VPC (virtual network) design and subnet separation
A VPC (Virtual Private Cloud) is a virtual network in a cloud environment. Proper network design can minimize security risks.
Separation of public and private subnets
- Public subnet
- Can communicate with the outside via an Internet Gateway (IGW)
- Main use cases: Load balancers (ALB), web servers
- Private subnet
- Cannot be accessed directly from the internet
- Main use cases: Databases, application servers
- NAT Gateway
- Used for resources in private subnets to access external APIs
Example of VPC and subnet design
| Subnet name | CIDR range | Role | Route table |
|---|---|---|---|
| Public-1 | 10.0.1.0/24 | Load balancer, bastion | Internet access via IGW |
| Public-2 | 10.0.2.0/24 | Load balancer, bastion | Internet access via IGW |
| Private-1 | 10.0.3.0/24 | Application servers | External access via NAT GW |
| Private-2 | 10.0.4.0/24 | Databases | No external access |
Applying security groups and network ACLs
Security Groups (SG)
- Stateful (return traffic is automatically allowed)
- Applied per server
- Examples:
- Web servers: Allow only ports 80 and 443
- DB servers: Allow port 3306 (MySQL) only from application servers
Network ACLs (NACL)
- Stateless (return traffic must also be explicitly allowed)
- Applied per subnet
- Examples:
- Allow only ports 80 and 443 (public subnets)
- Allow port 3306 only from the application server subnet
Comparison of stateless vs. stateful
| Function | Network ACL (NACL) | Security Group (SG) |
|---|---|---|
| State | Stateless | Stateful |
| Request permission | Required | Required |
| Response permission | Must be explicitly allowed | Automatically allowed |
| Scope of application | Per subnet | Per instance |
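The stateful/stateless distinction in the table can be made concrete with a toy simulation (illustrative classes invented for this sketch, not any cloud SDK):

```python
class StatefulFilter:
    """Security-group-like: track admitted flows; return traffic is allowed
    automatically, with no explicit outbound rule."""

    def __init__(self, allowed_inbound_ports: set[int]):
        self.allowed_inbound_ports = allowed_inbound_ports
        self._tracked: set[tuple[str, int]] = set()

    def inbound(self, src_ip: str, dst_port: int) -> bool:
        if dst_port not in self.allowed_inbound_ports:
            return False
        self._tracked.add((src_ip, dst_port))  # remember the admitted flow
        return True

    def outbound(self, dst_ip: str, port: int) -> bool:
        # Replies to a tracked flow pass without their own rule.
        return (dst_ip, port) in self._tracked


class StatelessFilter:
    """NACL-like: every direction needs its own explicit rule."""

    def __init__(self, inbound_ports: set[int], outbound_ports: set[int]):
        self.inbound_ports = inbound_ports
        self.outbound_ports = outbound_ports

    def inbound(self, src_ip: str, dst_port: int) -> bool:
        return dst_port in self.inbound_ports

    def outbound(self, dst_ip: str, port: int) -> bool:
        return port in self.outbound_ports


sg = StatefulFilter({443})
sg.inbound("203.0.113.1", 443)    # admitted: port 443 is allowed
sg.outbound("203.0.113.1", 443)   # reply allowed automatically (stateful)

nacl = StatelessFilter({443}, set())  # no outbound rule configured
nacl.inbound("203.0.113.1", 443)      # admitted
nacl.outbound("203.0.113.1", 443)     # reply dropped: stateless needs a rule
```

This is exactly why forgetting the ephemeral-port outbound rules on a NACL silently breaks return traffic, while the same omission on a security group does not.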
Using a NAT Gateway
Used so that resources in private subnets can access external APIs.
- Place the NAT Gateway in a public subnet
- Resources in private subnets communicate externally via the NAT Gateway
- Direct inbound connections from the outside are not possible
Countermeasures against DDoS attacks (introducing AWS Shield / Cloud Armor)
A DDoS (Distributed Denial of Service) attack is a type of cyberattack in which a malicious attacker sends a large number of requests to a specific system or network to bring down the service. In cloud environments in particular, it is important to implement dedicated security measures to prevent service outages caused by DDoS attacks. AWS and GCP provide the following DDoS protection services.
AWS Shield
AWS Shield is a managed DDoS protection service provided by AWS that protects AWS resources from DDoS attacks. There are two types: Standard (free) and Advanced (paid).
AWS Shield Standard (free)
- Provides basic DDoS protection
- Applied free of charge to all AWS users
- Automatically protects against attacks at layer 3 (network layer) and layer 4 (transport layer)
- Integrated with AWS’s global infrastructure to automatically filter DDoS traffic
AWS Shield Advanced (paid)
- Provides advanced DDoS protection and support
- Integrates with AWS WAF and AWS Firewall Manager to apply DDoS-specific custom rules
- 24/7 support from the AWS DDoS Response Team (DRT)
- Provides real-time attack detection and detailed reports
- Economic protection (DDoS cost protection) can be applied for additional AWS resource consumption caused by DDoS attacks
Applying AWS Shield custom rules to AWS WAF
AWS Shield integrates with AWS WAF to apply custom rules.
Example AWS WAF configuration
{
"Name": "DDoSProtectionRule",
"Priority": 1,
"Action": {
"Block": {}
},
"Statement": {
"RateBasedStatement": {
"Limit": 1000,
"AggregateKeyType": "IP"
}
},
"VisibilityConfig": {
"SampledRequestsEnabled": true,
"CloudWatchMetricsEnabled": true,
"MetricName": "DDoSRateLimitRule"
}
}
Key points
- RateBasedStatement: Blocks IPs that exceed 1,000 requests within the 5-minute evaluation window
- VisibilityConfig: Emits CloudWatch metrics and stores samples of requests that match the rule
- Integration with AWS WAF: Enables more fine-grained control not only with AWS Shield but also via WAF
Cloud Armor (Google Cloud)
Cloud Armor is a DDoS protection and Web Application Firewall (WAF) provided by Google Cloud that uses Google’s global edge network to mitigate DDoS attacks.
Characteristics
- Protects against DDoS attacks at layers 3/4 (network/transport layers) and layer 7 (application layer)
- Can use preconfigured WAF rules
- Integrates with Google Cloud Load Balancer (GCLB) to automatically filter attack traffic
- Adaptive Protection automatically detects abnormal traffic patterns
Automating log monitoring and unauthorized access detection (using SIEM tools)
As part of security measures in cloud environments, automating log monitoring and unauthorized access detection is essential. In particular, by using SIEM (Security Information and Event Management) tools, you can efficiently analyze large volumes of logs and build a system that immediately detects and responds to anomalies.
Main functions
- Centralized log management: manage logs from cloud services and on-premises environments in one place
- Real-time monitoring and anomaly detection: detect anomalies using predefined rules and machine learning
- Automated incident response: issue alerts for detected threats and automatically execute countermeasures
- Compliance support: retain and analyze logs in accordance with regulations (GDPR, ISO 27001, PCI-DSS, etc.)
AWS Security Hub
Features:
- Integrates with various AWS services (CloudTrail, GuardDuty, Inspector, etc.)
- Automates anomaly detection and compliance assessment
- Integrates with AWS Organizations to monitor multiple accounts
Use cases:
- Detects suspicious API calls based on CloudTrail logs
- Detects abnormal IAM privilege escalation and automatically generates alerts
Google Chronicle
Features:
- Ultra-fast analysis of large-scale data using Google Cloud’s scalability
- Built-in threat intelligence
- Allows creation of custom rules using the YARA-L query language
Use cases:
- Analyzes GCP Cloud Audit Logs to detect abnormal privilege changes
- Identifies anomalies by correlating with known attack patterns (MITRE ATT&CK)
Key points for automation
- Send cloud logs (AWS CloudTrail, GCP Cloud Audit Logs) to the SIEM
- Use machine learning to detect abnormal traffic in real time
- Configure alerts and build a mechanism to take action immediately after detection
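As a minimal illustration of the first two points, the sketch below scans a batch of CloudTrail-style ConsoleLogin records (schema simplified for the example) and flags source IPs with repeated failures; a real pipeline would stream the logs into the SIEM and use its rule engine instead:

```python
import json
from collections import Counter

def flag_failed_logins(log_lines: list[str], threshold: int = 5) -> set[str]:
    """Flag source IPs with more than `threshold` failed ConsoleLogin events
    in a batch of CloudTrail-style JSON records (simplified schema)."""
    failures: Counter = Counter()
    for line in log_lines:
        event = json.loads(line)
        if (event.get("eventName") == "ConsoleLogin"
                and event.get("responseElements", {}).get("ConsoleLogin") == "Failure"):
            failures[event.get("sourceIPAddress", "unknown")] += 1
    return {ip for ip, count in failures.items() if count > threshold}

# Six synthetic failure records from the same address
logs = [json.dumps({"eventName": "ConsoleLogin",
                    "sourceIPAddress": "198.51.100.9",
                    "responseElements": {"ConsoleLogin": "Failure"}})] * 6
flag_failed_logins(logs)  # {"198.51.100.9"}
```

The flagged set would then feed the alerting step: open a ticket, add the IP to a WAF blocklist, or trigger an automated response runbook.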
Proper management of TLS / SSL certificates (automating Let’s Encrypt renewal)
TLS (Transport Layer Security) / SSL (Secure Sockets Layer) certificates are essential for encrypting data communications over the internet and strengthening security. In particular, the free Let’s Encrypt CA is widely used, and proper management and renewal are important.
Let’s Encrypt is a certificate authority (CA) that issues free SSL/TLS certificates and has the following advantages:
- Free to use
- Uses the ACME (Automated Certificate Management Environment) protocol to automate certificate issuance and renewal
- Easy to introduce using tools such as Certbot
- Highly trusted and recognized by many browsers
Obtaining certificates with Let’s Encrypt
The common way to obtain Let’s Encrypt certificates is to use a tool called Certbot.
Certbot is the official Let’s Encrypt client and can automate certificate issuance and renewal.
Install Certbot
sudo apt update
sudo apt install certbot python3-certbot-nginx
Obtain a certificate
sudo certbot --nginx -d example.com -d www.example.com
Check certificates
sudo certbot certificates
Automating certificate renewal
Let’s Encrypt certificates are valid for only 90 days, so they must be renewed regularly. With Certbot, this renewal can be automated.
Test automatic renewal with Certbot
sudo certbot renew --dry-run
Configure automatic renewal
0 2 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
- Add the job to cron
- Renewal is attempted every day at 2:00
- certbot renew: exits without doing anything if the certificate still has more than 30 days of validity remaining
- --post-hook "systemctl reload nginx": reloads Nginx after a renewal so the new certificate takes effect
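The 30-day renewal window can be expressed as a tiny helper (an illustrative sketch mirroring certbot's default threshold, not certbot's actual code; `should_renew` is an invented name):

```python
from datetime import datetime, timedelta

RENEW_WINDOW = timedelta(days=30)  # certbot's default renewal threshold

def should_renew(not_after: datetime, now: datetime) -> bool:
    """Act only when fewer than 30 days of validity remain."""
    return not_after - now < RENEW_WINDOW

issued = datetime(2024, 1, 1)
expiry = issued + timedelta(days=90)               # Let's Encrypt lifetime
should_renew(expiry, issued + timedelta(days=10))  # False: 80 days left
should_renew(expiry, issued + timedelta(days=65))  # True: 25 days left
```

Because the daily cron run is a no-op for roughly two months of each certificate's life, a failed renewal still leaves about 30 days of slack in which monitoring can catch the problem.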
Certificate management using cloud services
AWS and GCP also provide services that automate SSL certificate management.
- AWS Certificate Manager (ACM)
  By using AWS ACM, you can automatically manage certificates for load balancers (ALB/ELB) and CloudFront.
  - Certificates are automatically renewed
  - Cannot be used directly on EC2 or Lambda (only via ALB/CloudFront)
  - You can request certificates with the aws acm request-certificate command
- GCP Managed SSL Certificates
  By using GCP Managed SSL Certificates, you can manage certificates on Google Cloud Load Balancer.
  - Fully automatic certificate issuance and renewal
  - Can be attached to and used with Cloud Load Balancer
  - Can be created with the gcloud compute ssl-certificates create command
Protecting SSH access using Fail2Ban
Fail2Ban is a tool that strengthens server security by detecting unauthorized access attempts and automatically blocking specific IP addresses. It is widely used to prevent brute-force attacks on SSH in particular.
Basic functions of Fail2Ban
- Detecting and blocking unauthorized access
- Detects login failures that exceed a specified number of attempts and blocks the corresponding IP for a certain period or permanently
- Monitors /var/log/auth.log (Ubuntu, Debian) and /var/log/secure (CentOS, RHEL)
- Integration with firewalls
- Integrates with firewalls such as iptables and nftables to block malicious IPs
- Can also integrate with firewalld
- Flexible rule configuration
- Can configure bantime (block duration), findtime (time window for violation detection), and maxretry (maximum number of attempts)
- Can create custom rules for each service (SSH, HTTP, FTP, etc.)
Configuration steps
- Install Fail2Ban
sudo apt update
sudo apt install fail2ban -y
- Add rules for SSH
Fail2Ban’s configuration lives in /etc/fail2ban/jail.conf, but you should not edit this file directly. Instead, create /etc/fail2ban/jail.local and customize that.
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 5
findtime = 600
bantime = 3600
ignoreip = 192.168.1.0/24  # Specify trusted IPs
Explanation of configuration items
- enabled = true: Enables SSH monitoring by Fail2Ban
- port = ssh: Port to monitor (the default 22, or a custom port)
- filter = sshd: Filter applied to SSH (uses the default sshd.conf)
- logpath: Path to the SSH log file (/var/log/auth.log on Ubuntu, /var/log/secure on CentOS)
- maxretry = 5: Block after 5 failed login attempts
- findtime = 600: Block when maxretry failures occur within the past 600 seconds (10 minutes)
- bantime = 3600: Block the IP for 1 hour (3600 seconds)
- ignoreip: Excludes the specified IPs from blocking (use for trusted networks)
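The interaction of maxretry, findtime, and bantime can be modeled in a few lines (a toy sketch, not Fail2Ban's implementation; the `Jail` class is invented for the example):

```python
from collections import defaultdict, deque

class Jail:
    """Toy model of Fail2Ban's maxretry / findtime / bantime interaction."""

    def __init__(self, maxretry=5, findtime=600, bantime=3600, ignoreip=()):
        self.maxretry, self.findtime, self.bantime = maxretry, findtime, bantime
        self.ignoreip = set(ignoreip)
        self._failures = defaultdict(deque)
        self._banned_until: dict[str, float] = {}

    def is_banned(self, ip: str, now: float) -> bool:
        return self._banned_until.get(ip, float("-inf")) > now

    def record_failure(self, ip: str, now: float) -> bool:
        """Register a failed login; return True if this failure triggers a ban."""
        if ip in self.ignoreip:
            return False  # trusted IPs are never banned
        window = self._failures[ip]
        window.append(now)
        while window and now - window[0] > self.findtime:
            window.popleft()  # only failures inside findtime count
        if len(window) >= self.maxretry:
            self._banned_until[ip] = now + self.bantime
            return True
        return False

jail = Jail(maxretry=5, findtime=600, bantime=3600, ignoreip={"192.168.1.10"})
banned = False
for t in range(5):  # five rapid failures from one address
    banned = jail.record_failure("203.0.113.5", float(t))
# banned == True: the 5th failure inside the 600 s window triggers the ban
```

The same mental model explains the tuning trade-off: a short findtime tolerates slow typos, while a long bantime raises the cost of each brute-force round.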
Additional configuration and tuning
- Block permanently
  If you set bantime = -1, the target IP is blocked permanently.
  bantime = -1
- Enable Fail2Ban logging
  Configure Fail2Ban to write its own log to /var/log/fail2ban.log.
  logtarget = /var/log/fail2ban.log
- Enable email notifications
  To receive email notifications when unauthorized SSH access is detected, add the following to jail.local:
  destemail = your@email.com
  sender = fail2ban@example.com
  mta = sendmail
  action = %(action_mwl)s
  This configuration requires sendmail to be installed:
  sudo apt install sendmail -y  # Ubuntu/Debian
Optimizing IAM policies in cloud environments (principle of least privilege)
Proper IAM (Identity and Access Management) configuration is a key factor in maintaining security in cloud environments. In particular, by practicing the Principle of Least Privilege (PoLP), you can reduce risks caused by unnecessary permissions. The main benefits are:
- Preventing insider threats: Reduces the risk that accounts with unnecessary permissions accidentally delete important resources or are misused
- Minimizing the impact of attacks: If a compromised account has only minimal permissions, the damage is limited
- Improved auditing and compliance: Makes it easier to comply with corporate security guidelines and regulations (ISO 27001, SOC 2, GDPR, etc.)
Use IAM roles
❌ Bad example (granting permissions directly to an IAM user)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
✅ Good example (using IAM roles)
Instead of IAM users, use IAM roles and control access rights on a per-service basis.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::my-secure-bucket",
"arn:aws:s3:::my-secure-bucket/*"
]
}
]
}
- Allows only listing (ListBucket) and retrieving files (GetObject) for the specified S3 bucket.
Avoid broad permissions and narrow the scope
❌ Bad example (overly broad permissions)
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
- This policy allows all operations related to S3.
✅ Good example (limited to necessary operations)
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": "arn:aws:s3:::my-secure-bucket/*"
}
- By allowing only specific operations, you can prevent granting unnecessary permissions.
Regularly review access permissions
Permissions for IAM users and roles tend to remain unchanged once set. It is recommended to regularly remove unused permissions.
- AWS IAM Access Analyzer: analyzes which resources are shared externally and can generate least-privilege policies from recorded access activity
- IAM Access Advisor: shows when each service was last accessed by a user or role, making unused permissions easy to spot and remove
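A first-pass audit for patterns like the "bad examples" above can even be automated. This sketch (the `find_wildcard_grants` helper is invented for the example, and is no substitute for AWS's own analysis tools) flags Allow statements that pair wildcard actions with Resource "*":

```python
def find_wildcard_grants(policy: dict) -> list[str]:
    """Flag Allow statements that pair Action '*' or 'service:*' with Resource '*'."""
    def as_list(value):
        return [value] if isinstance(value, str) else list(value)

    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = as_list(stmt.get("Action", []))
        resources = as_list(stmt.get("Resource", []))
        if "*" in resources:
            # Full-admin ("*") and service-wide ("s3:*") grants are both flagged
            findings.extend(a for a in actions if a == "*" or a.endswith(":*"))
    return findings

admin_policy = {"Version": "2012-10-17",
                "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}
scoped_policy = {"Version": "2012-10-17",
                 "Statement": [{"Effect": "Allow",
                                "Action": ["s3:GetObject"],
                                "Resource": "arn:aws:s3:::my-secure-bucket/*"}]}
find_wildcard_grants(admin_policy)   # ["*"]
find_wildcard_grants(scoped_policy)  # []
```

Running a check like this over exported policies in CI gives a cheap guardrail between scheduled manual reviews.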
Conclusion
Security measures in cloud environments are not something you can set once and forget; they require continuous monitoring and maintenance. Attack techniques evolve daily, and to keep up, it is important to continuously review log monitoring and access control, tune WAF rules, and strengthen DDoS countermeasures.
Another major challenge in security measures is “striking the right balance.” Excessive access restrictions can impair the convenience of legitimate users, so you must find the optimal balance between usability and security.
Use the measures introduced in this article as a reference to design a security strategy that best fits your own systems and achieve safe cloud operations.