Robust Authorization Design for GraphQL and REST APIs: Best Practices for RBAC, ABAC, and OAuth 2.0
Introduction
In modern web applications, API security is a critical concern: without an appropriate authorization mechanism, you risk unintended data leaks and unauthorized access.
This article explains in detail methods to enhance API security, such as applying Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC), authorization design for GraphQL and REST APIs, introducing rate limiting, access control using an API gateway, and monitoring best practices. Learn practical approaches to strengthen security and aim for more robust API design.
Methods for strengthening API access control introduced in this article
- Introducing Role-Based Access Control (RBAC) (distinguishing between administrators and regular users)
- Applying Attribute-Based Access Control (ABAC)
- Implementing authorization checks at the GraphQL resolver level (graphql-shield)
- Restricting scopes in REST APIs (applying OAuth 2.0)
- Introducing an API gateway (AWS API Gateway / Kong)
- Restricting data access per user (multi-tenancy design)
- Restricting GraphQL introspection (disable in production)
- Introducing rate limiting to prevent excessive data retrieval
- Proper version management of OpenAPI / GraphQL schemas
- Monitoring API logs (using Datadog / Sentry)
Introducing RBAC (Role-Based Access Control)
RBAC (Role-Based Access Control) is a mechanism that manages system access control based on user “roles.”
This enables the following kinds of access control:
- Only administrators can change settings
- Regular users can only view
- Only specific user groups can execute specific functions
RBAC not only improves security, but also reduces management costs and enables consistent access control.
Basic concepts of RBAC
RBAC mainly consists of the following three elements:
- User: An individual who uses the system (e.g., user1, admin1)
- Role: The role a user has (e.g., admin, user, editor)
- Permission: Actions that can be performed, defined per role (e.g., view articles, manage users, change settings)
By combining these, you can implement flexible access control.
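As a minimal sketch of how these three elements combine (the role and permission names below are illustrative, not part of any standard), the mapping can be expressed as plain lookup tables:

```javascript
// Hypothetical role → permission mapping; names are illustrative only.
const rolePermissions = {
  admin: ['view_articles', 'manage_users', 'change_settings'],
  editor: ['view_articles', 'edit_articles'],
  user: ['view_articles'],
};

// Returns true when the user's role grants the requested permission.
function hasPermission(user, permission) {
  const permissions = rolePermissions[user.role] || [];
  return permissions.includes(permission);
}

console.log(hasPermission({ role: 'admin' }, 'change_settings')); // true
console.log(hasPermission({ role: 'user' }, 'manage_users'));     // false
```

In a real system the mapping would come from the database tables shown later, but the lookup logic stays the same.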
Implementation example
As one way to apply RBAC to a database, consider the following data model.
Data model (example: PostgreSQL)
CREATE TABLE roles (
  id SERIAL PRIMARY KEY,
  name VARCHAR(50) UNIQUE NOT NULL
);

CREATE TABLE permissions (
  id SERIAL PRIMARY KEY,
  name VARCHAR(100) UNIQUE NOT NULL
);

-- roles must be created first, since users references roles(id)
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  username VARCHAR(255) UNIQUE NOT NULL,
  -- primary role; the user_roles table below additionally supports multiple roles per user
  role_id INTEGER NOT NULL REFERENCES roles(id)
);

CREATE TABLE role_permissions (
  role_id INT REFERENCES roles(id) ON DELETE CASCADE,
  permission_id INT REFERENCES permissions(id) ON DELETE CASCADE,
  PRIMARY KEY (role_id, permission_id)
);

CREATE TABLE user_roles (
  user_id INT REFERENCES users(id) ON DELETE CASCADE,
  role_id INT REFERENCES roles(id) ON DELETE CASCADE,
  PRIMARY KEY (user_id, role_id)
);
Key points of this data model
- users: manage which role a user belongs to via role_id
- roles: define roles such as admin and user
- permissions: define concrete permissions such as editing articles
- role_permissions: manage which role has which permissions
- user_roles: manage which roles a user has
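To illustrate how the junction tables combine (a sketch only; the in-memory row shapes below are assumptions standing in for query results, not the article's API), a user's effective permission set can be derived like this:

```javascript
// Assumed in-memory rows mirroring the user_roles and role_permissions tables.
const userRoleRows = [
  { user_id: 1, role_id: 10 },
  { user_id: 1, role_id: 20 },
];
const rolePermissionRows = [
  { role_id: 10, permission_id: 100 }, // e.g. view articles
  { role_id: 20, permission_id: 200 }, // e.g. manage users
];

// Collect every permission_id reachable from the user's roles.
function effectivePermissions(userId) {
  const roleIds = userRoleRows
    .filter((ur) => ur.user_id === userId)
    .map((ur) => ur.role_id);
  const permissionIds = rolePermissionRows
    .filter((rp) => roleIds.includes(rp.role_id))
    .map((rp) => rp.permission_id);
  return [...new Set(permissionIds)]; // deduplicate overlapping grants
}

console.log(effectivePermissions(1)); // [ 100, 200 ]
```

In production the same join would normally run in SQL, but the shape of the computation is identical.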
Implementing RBAC (Express + Middleware)
Here is an example of implementing RBAC as middleware using Node.js (Express).
- Authentication (user information based on JWT)
Before applying RBAC, you need to obtain user information using JWT.
const jwt = require('jsonwebtoken');

const authenticateUser = (req, res, next) => {
  // Expect "Authorization: Bearer <token>"
  const token = req.headers.authorization?.split(' ')[1];
  if (!token) {
    return res.status(401).json({ message: 'Unauthorized' });
  }
  try {
    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    req.user = decoded; // make the decoded payload available downstream
    next();
  } catch (error) {
    res.status(403).json({ message: 'Invalid token' });
  }
};
- RBAC middleware
Define a function that allows access only to users with a specific role.
const authorizeRole = (requiredRole) => {
  return (req, res, next) => {
    if (!req.user || req.user.role !== requiredRole) {
      return res.status(403).json({ message: 'Forbidden' });
    }
    next();
  };
};
- Restricting routes
For example, to create an admin-only API endpoint, you can apply the RBAC middleware as follows:
app.get('/admin', authenticateUser, authorizeRole('admin'), (req, res) => {
  res.json({ message: 'Admin-only page' });
});
To add an endpoint that regular users can access:
app.get('/user', authenticateUser, authorizeRole('user'), (req, res) => {
  res.json({ message: 'Regular user-only page' });
});
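If an endpoint should be open to several roles at once, the same pattern extends naturally. A sketch (`authorizeRoles` is a hypothetical helper, not part of the article's codebase):

```javascript
// Allow access when the user's role is in the permitted list.
const authorizeRoles = (...allowedRoles) => (req, res, next) => {
  if (!req.user || !allowedRoles.includes(req.user.role)) {
    return res.status(403).json({ message: 'Forbidden' });
  }
  next();
};

// Usage in Express would look like:
// app.get('/reports', authenticateUser, authorizeRoles('admin', 'editor'), handler);
```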
Applying ABAC (Attribute-Based Access Control)
ABAC is a method of dynamically controlling access based on user attributes (such as department, position, group, etc.). While RBAC (Role-Based Access Control) is based on “roles,” ABAC allows more flexible rule settings.
Basic structure of ABAC
ABAC access control considers the following four elements:
| Element | Description |
|---|---|
| User attributes | Information related to the user (position, department, group, age, etc.) |
| Resource attributes | Type and confidentiality level of the target data (e.g., public information, confidential information) |
| Action | Permitted operations (e.g., read, write, delete) |
| Environment context | Conditions of access (e.g., IP address, time of day, device) |
Benefits of ABAC
ABAC enables more flexible access control than RBAC.
- ✅ Dynamic access control:
- Can determine access permission dynamically by considering user attributes such as position, department, and time of day.
- For example, you can enforce rules like “access allowed only during business hours” or “allowed only if the user has admin privileges and is accessing via VPN.”
- ✅ Scalable management:
- With RBAC, you need to add roles whenever new positions or departments are added, but with ABAC you can handle this by changing rules.
- ✅ Fine-grained control:
- You can control access by combining user attributes × resource attributes × action × environment context.
Setting ABAC rules
In ABAC, policies are often defined in JSON format.
For example, the rule “allow users in the engineering department to read reports” can be expressed as follows:
{
  "rules": [
    {
      "attribute": "department",
      "value": "engineering",
      "action": "read",
      "resource": "reports"
    }
  ]
}
In this rule, users whose department is "engineering" are allowed to "read" reports.
ABAC implementation in Node.js
Using the simple function below, you can allow or deny access based on ABAC rules.
const rules = [
  {
    attribute: "department",
    value: "engineering",
    action: "read",
    resource: "reports"
  }
];

const checkAccess = (user, action, resource) => {
  return rules.some(rule =>
    user[rule.attribute] === rule.value &&
    rule.action === action &&
    rule.resource === resource
  );
};

// User information
const user = { department: 'engineering' };

// Determine whether access is allowed
console.log(checkAccess(user, 'read', 'reports')); // true
console.log(checkAccess(user, 'write', 'reports')); // false
In this code:
- ABAC rules are defined in rules
- The checkAccess() function compares user information with the rules
- A user with department: 'engineering' is allowed to "read" "reports"
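The checker can also cover the fourth ABAC element, environment context. A sketch (the rule shape and the `condition`/`hour` fields are assumptions for illustration):

```javascript
// An ABAC rule with an added environment condition: only during business hours.
const rules = [
  {
    attribute: 'department',
    value: 'engineering',
    action: 'read',
    resource: 'reports',
    // Hypothetical environment condition: hour of day on a 24h clock.
    condition: (env) => env.hour >= 9 && env.hour < 18,
  },
];

function checkAccess(user, action, resource, env) {
  return rules.some(
    (rule) =>
      user[rule.attribute] === rule.value &&
      rule.action === action &&
      rule.resource === resource &&
      // Rules without a condition apply unconditionally.
      (!rule.condition || rule.condition(env))
  );
}

const user = { department: 'engineering' };
console.log(checkAccess(user, 'read', 'reports', { hour: 10 })); // true
console.log(checkAccess(user, 'read', 'reports', { hour: 22 })); // false
```

This is the kind of rule that RBAC alone cannot express: the same user is allowed or denied depending on when the request arrives.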
Drawbacks of ABAC
- Management can become complex
- Because flexible settings are possible, if rules increase too much, management can become complicated.
- Therefore, it is good to use centralized policy management tools (e.g., AWS IAM Policy, OPA (Open Policy Agent)).
- Potential performance issues
- Since user attributes and environmental conditions must be evaluated, real-time processing load may increase.
- You need to apply caching and optimizations to improve performance.
Authorization checks at the GraphQL resolver level (graphql-shield)
graphql-shield is a library that makes it easy to manage authorization in GraphQL. You can apply rules per resolver and implement access control.
Benefits
- Separation of resolver logic and authorization logic
  By separating authorization processing from the resolver itself, you can improve code readability.
- Flexible rule settings
  You can implement fine-grained access control based on user roles (admin, regular user, etc.) and specific conditions.
- Unified error handling
  Since authorization errors can be returned in a consistent format, error handling on the frontend becomes easier.
Installing graphql-shield
npm install graphql-shield
Defining rules
const { rule, shield } = require('graphql-shield');

// Allow access only when an authenticated user has the admin role.
const isAdmin = rule()(async (parent, args, { user }) => {
  return user != null && user.role === 'admin';
});

const permissions = shield({
  Query: {
    sensitiveData: isAdmin,
  },
});

module.exports = { permissions };
Key points
- Use rule() to create an authorization rule (isAdmin).
- Use shield() to apply rules to specific resolvers.
Applying to Apollo Server
To apply graphql-shield in a GraphQL server, use applyMiddleware.
const { ApolloServer } = require('apollo-server');
const { applyMiddleware } = require('graphql-middleware');
const { makeExecutableSchema } = require('@graphql-tools/schema');

const typeDefs = require('./schema');
const resolvers = require('./resolvers');
const { permissions } = require('./permissions');

// Create GraphQL schema
const schema = makeExecutableSchema({ typeDefs, resolvers });

// Apply authorization middleware
const schemaWithPermissions = applyMiddleware(schema, permissions);

// Configure Apollo Server
const server = new ApolloServer({
  schema: schemaWithPermissions,
  context: ({ req }) => {
    // Obtain authentication information (e.g., parse JWT token)
    const user = getUserFromToken(req.headers.authorization);
    return { user };
  },
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
Key points
- Create the schema with makeExecutableSchema().
- Apply graphql-shield rules with applyMiddleware().
- Obtain authentication information (user information) in context so it can be used in resolvers.
Error handling
By default, when an authorization error occurs, graphql-shield returns the error message "Not Authorised!". However, you can also set a custom error message.
const permissions = shield(
  {
    Query: {
      sensitiveData: isAdmin,
    },
  },
  {
    fallbackError: 'You do not have access rights',
  }
);
Scope restriction for REST APIs (applying OAuth 2.0)
By using OAuth 2.0 to restrict API scopes, you can ensure that only users with appropriate permissions can call specific APIs. This prevents inappropriate access and strengthens security.
What is a scope in OAuth 2.0?
In OAuth 2.0, a scope is used to limit the range of operations that a client holding an access token can perform. By setting scopes, you can finely control access permissions to APIs.
For example, you can restrict viewing and editing of user information by setting scopes like the following:
{
  "scopes": {
    "read:users": "View user information",
    "write:users": "Edit user information"
  }
}
- read:users: Permission to view user information
- write:users: Permission to edit user information
By defining scopes in detail like this, you can clearly specify access permissions for specific API endpoints.
Flow of API access using scopes
- The client (such as a frontend) obtains an access token from the OAuth 2.0 authorization server.
- The access token (JWT) contains scope information.
- The client sends a request to the API with the access token in the Authorization header.
- The Express server verifies the JWT and checks the scope information.
- The API allows access only if the required scope is present.
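To make the token flow concrete: the scope claim inside a JWT is just base64url-encoded JSON in the token's middle segment. A minimal decode-and-check sketch (deliberately without signature verification; a real server must verify the signature with a library such as jsonwebtoken, as shown later):

```javascript
// Decode a JWT payload (the middle segment) WITHOUT verifying the signature.
// For illustration only; never skip verification in production.
function decodePayload(token) {
  const payload = token.split('.')[1];
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}

function tokenHasScope(token, scope) {
  const { scopes = [] } = decodePayload(token);
  return scopes.includes(scope);
}

// Build a fake (unsigned) token for demonstration.
const body = Buffer.from(
  JSON.stringify({ sub: 'user1', scopes: ['read:users'] })
).toString('base64url');
const fakeToken = `header.${body}.signature`;

console.log(tokenHasScope(fakeToken, 'read:users'));  // true
console.log(tokenHasScope(fakeToken, 'write:users')); // false
```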
Applying scope restrictions in Express
Here is how to apply OAuth 2.0 scopes in an Express application.
- Scope-checking middleware
Use the following checkScope middleware to check the scope of the request and return 403 Forbidden if it is insufficient.
const checkScope = (scope) => {
  return (req, res, next) => {
    // Guard against unauthenticated requests as well as missing scopes
    if (!req.user || !Array.isArray(req.user.scopes) || !req.user.scopes.includes(scope)) {
      return res.status(403).json({ message: 'Insufficient scope' });
    }
    next();
  };
};
- It assumes that req.user.scopes contains the list of scopes granted to the user.
- If the specified scope is not in the list, it returns a 403 error.
- Applying to API routes
Use this checkScope middleware so that only requests with appropriate scopes can access the API.
app.get('/users', checkScope('read:users'), (req, res) => {
  res.json({ users: [{ id: 1, name: 'Alice' }] });
});

app.post('/users', checkScope('write:users'), (req, res) => {
  res.status(201).json({ message: 'User created' });
});
- GET /users can only be executed by users with the read:users scope.
- POST /users can only be executed by users with the write:users scope.
Managing scopes using JWT (JSON Web Token)
In OAuth 2.0, it is common to manage scopes using access tokens (JWT). By including scope information in the JWT, you can check user permissions for each request.
- Decoding JWT and obtaining scopes
Here is how to decode a JWT and obtain scopes.
const jwt = require('jsonwebtoken');

const authenticateJWT = (req, res, next) => {
  const authHeader = req.headers.authorization;
  if (authHeader) {
    const token = authHeader.split(' ')[1];
    jwt.verify(token, process.env.JWT_SECRET, (err, user) => {
      if (err) {
        return res.sendStatus(403);
      }
      req.user = user; // Store user information in the request
      next();
    });
  } else {
    res.sendStatus(401);
  }
};

// Apply JWT authentication to all routes
app.use(authenticateJWT);
- Obtain the JWT from the Authorization header and verify it.
- By setting the scopes information contained in the JWT to req.user, it becomes available to subsequent middleware.
Introducing an API gateway (AWS API Gateway)
An API gateway is a component that sits between clients and backend APIs. It mainly plays the following roles:
- Unified management of authentication and authorization
  You can apply JWT authentication, OAuth 2.0, API keys, etc. at the API layer.
- Routing and load balancing
  It forwards requests to appropriate backend services and supports scaling.
- Rate limiting and monitoring
  It can protect against DDoS attacks and monitor API usage.
JWT authentication with AWS API Gateway
AWS API Gateway can implement JWT (JSON Web Token) authentication by integrating with Amazon Cognito or a Lambda Authorizer.
- JWT authentication integrated with Cognito
AWS API Gateway can use Cognito User Pools as an ID provider.
Setup steps
- Create a Cognito User Pool
- Create a Cognito User Pool to manage and authenticate users
- Obtain the app client ID
- Configure a Cognito Authorizer in API Gateway
- From “Authorizers” in API Gateway, add a Cognito Authorizer
- Set the User Pool ID and app client ID
- Pass the JWT token when making requests
On the client side, obtain an access token from Cognito and include it in the Authorization header of API requests:
curl -X GET https://your-api-id.execute-api.region.amazonaws.com/prod/resource \
-H "Authorization: Bearer YOUR_JWT_ACCESS_TOKEN"
- JWT authentication using a Lambda Authorizer
If you use ID providers other than Cognito (Auth0, Firebase, etc.), you can use a Lambda Authorizer.
Setup steps
- Create a Lambda function and verify the JWT
- Verify the JWT signature
- Check claims (e.g., iss, aud)
- Implement access control based on user permissions (role)
import json
import jwt

def lambda_handler(event, context):
    token = event['headers']['Authorization'].split(" ")[1]
    try:
        decoded_token = jwt.decode(token, "YOUR_PUBLIC_KEY", algorithms=["RS256"])
        return {
            "principalId": decoded_token["sub"],
            "policyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow",
                    "Resource": event["methodArn"]
                }]
            }
        }
    except Exception as e:
        return {"message": "Unauthorized"}
- Set the Lambda function as an Authorizer in API Gateway
- From “Authorizers” in API Gateway, add a Lambda Authorizer
- Set the Lambda function you created
- Configure it to treat the Authorization header as the token
Restricting data access per user (multi-tenancy design)
Multi-tenancy is a design that allows multiple users (tenants) to use a single application.
It is often used in enterprise SaaS, and each tenant’s data must be properly isolated from other tenants.
- Single-tenancy: Provide a dedicated app/DB for each customer
- Multi-tenancy: Multiple customers share a single app while data is properly separated
There are three main patterns of data isolation design to achieve multi-tenancy.
Design patterns
- Database-per-tenant
Create an independent database for each tenant.
- Advantages
  - Complete data isolation → No risk of accessing other tenants’ data
  - Ensured performance → Easy to manage resources per tenant
- Disadvantages
  - High operational cost → Since you create and manage a DB per tenant, the operational burden grows as you scale
  - Difficult migrations → When changing tables, you must apply the change to every DB
- Use cases
  - When strict data separation is required per company
  - Financial or medical apps (where data security is the top priority)
- Schema-per-tenant
Prepare a different schema for each tenant within a single database.
- Advantages
- Ensures data separation while reducing management costs
- Ensures performance (you can optimize per tenant at the schema level)
- Migrations are relatively easy (apply per schema)
- Disadvantages
- Need to manage schemas (management becomes complex as they increase)
- Need to manage DB connections (logic to switch schemas per tenant is required)
- Use cases
- SaaS services (medium or larger multi-tenant apps)
- When some data separation is required and scalability is also needed
- Row-Level Security (RLS)
Multiple tenants share a single database and schema, and data is controlled at the row level.
You can implement this using PostgreSQL’s RLS (Row-Level Security).
- Advantages
  - Most scalable (easy to manage because all data is stored in a single DB)
  - Cost reduction (no need for additional DBs or schemas even as the number of tenants grows)
  - Easy migrations (can be applied to the entire DB)
- Disadvantages
  - If security settings are incorrect, there is a risk of data leaking to other tenants
  - Query overhead increases (proper index design is essential)
- Use cases
  - Small to medium-sized SaaS (easy to scale and cost-effective)
  - Enterprise apps (where access control is required but complete separation per tenant is not necessary)
RLS implementation example
Here is how to control each tenant’s data at the row level using PostgreSQL RLS.
- Create an RLS policy
ALTER TABLE users ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation_policy
  ON users
  FOR SELECT
  USING (tenant_id = current_setting('app.current_tenant')::uuid);
With the above settings, PostgreSQL returns only rows whose tenant_id matches the tenant_id set in app.current_tenant.
- Application-side settings
By setting SET app.current_tenant per tenant, the DB automatically filters appropriate data when executing SQL queries.
async function setTenantContext(tenantId: string) {
  // SET does not accept bind parameters in PostgreSQL, so use set_config() instead
  await db.query("SELECT set_config('app.current_tenant', $1, false)", [tenantId]);
}
Using this method, developers do not need to manually filter with WHERE tenant_id = xxx; the DB automatically returns only the appropriate data.
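One caveat when a connection pool is involved: a session-level setting persists on the pooled connection after the request finishes and can leak into the next tenant's request. The safer pattern is to scope the setting to a transaction. A sketch, assuming a node-postgres-style `client` exposing a `query(text, params)` method:

```javascript
// Run a callback with the tenant context applied only for this transaction.
// `client` is an assumed interface modeled loosely on node-postgres.
async function withTenant(client, tenantId, callback) {
  await client.query('BEGIN');
  try {
    // The third argument `true` makes set_config() transaction-local
    // (equivalent to SET LOCAL), so the value cannot leak to the next
    // request that reuses this pooled connection.
    await client.query(
      "SELECT set_config('app.current_tenant', $1, true)",
      [tenantId]
    );
    const result = await callback(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  }
}
```

Every tenant-scoped query then runs inside `withTenant(client, tenantId, ...)`, and the RLS policy filters rows automatically.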
Comparison of design patterns
Selection points
- Data separation is top priority → Database-per-tenant
- Want a balance → Schema-per-tenant
- Emphasis on scalability and cost → RLS (Row-Level Security)
| Design pattern | Strength of data separation | Scalability | Operational cost | Main use cases |
|---|---|---|---|---|
| Database-per-tenant | High | Low (load increases as tenants increase) | High | High-security SaaS (finance, medical) |
| Schema-per-tenant | Medium | Medium | Medium | Medium-scale SaaS (enterprise) |
| RLS (row-level) | Low | High | Low | Small to medium SaaS (startups) |
Restricting GraphQL introspection (disable in production)
GraphQL introspection is a mechanism that allows clients to query details of the schema.
For example, by executing the following query, you can obtain the schema of the API:
{
  __schema {
    types {
      name
    }
  }
}
This feature allows developers to check the schema structure during development, but allowing unrestricted introspection in production introduces security risks.
Why you should disable introspection in production
- The structure of the API becomes known to third parties
  Malicious users can learn the full schema and its operations, making it easier to identify attack targets.
- Attackers can more easily look for API vulnerabilities
  For example, if unintended fields or old schemas are exposed, attackers may exploit them to target vulnerabilities.
- Unnecessary resource consumption
  Introspection queries executed needlessly place extra load on the API server.
How to disable introspection in production
This depends on the GraphQL server implementation, but here is how to disable it in Apollo Server, a representative example.
- In Apollo Server
In Apollo Server, you can disable introspection in production by setting the introspection option to false.
import { ApolloServer } from "apollo-server";
import { ApolloServerPluginLandingPageDisabled } from "apollo-server-core";

const server = new ApolloServer({
  schema,
  plugins: [
    // Disable the GraphQL Playground / landing page as well
    ApolloServerPluginLandingPageDisabled(),
  ],
  // Disable introspection in production
  introspection: process.env.NODE_ENV !== "production",
});
- In Express (express-graphql)
If you are using express-graphql, disable GraphiQL in production and block introspection with a validation rule: graphqlHTTP itself has no introspection option, but graphql-js provides NoSchemaIntrospectionCustomRule for exactly this purpose.

import express from "express";
import { graphqlHTTP } from "express-graphql";
import { NoSchemaIntrospectionCustomRule } from "graphql";
import schema from "./schema";

const app = express();
const isProduction = process.env.NODE_ENV === "production";

app.use(
  "/graphql",
  graphqlHTTP({
    schema,
    // Disable GraphiQL in production
    graphiql: !isProduction,
    // Reject introspection queries in production
    validationRules: isProduction ? [NoSchemaIntrospectionCustomRule] : [],
    customFormatErrorFn: (err) => {
      // Do not leak detailed error information
      return { message: "Internal Server Error" };
    },
  })
);

app.listen(4000, () => {
  console.log("Server running on port 4000");
});
Cases where you do not need to completely disable introspection in production
There are cases where you do not necessarily have to “completely disable” it.
- Internal tools or closed APIs
  You can allow introspection only for authenticated users so that internal developers can use it.
- Allow only specific IP addresses or users with certain JWT tokens
  Determine the request source in context and allow only appropriate users.
Example: Allow introspection only from specific IP addresses
Apollo Server's introspection option is a plain boolean, so per-request control has to happen elsewhere, for example by inspecting the incoming query in context and rejecting introspection requests from non-allowlisted IPs:

const allowedIPs = ["192.168.1.1", "203.0.113.5"];

const server = new ApolloServer({
  schema,
  introspection: true, // enabled globally; gated per request below
  context: ({ req }) => {
    const clientIP = req.ip || req.socket.remoteAddress;
    const query = req.body?.query || "";
    // Reject queries touching __schema / __type from non-allowlisted IPs
    if (!allowedIPs.includes(clientIP) && /__schema|__type\b/.test(query)) {
      throw new Error("Introspection is not allowed from this address");
    }
    return {};
  },
});
Introducing rate limiting to prevent excessive data retrieval
Rate limiting is important to reduce API load and prevent unauthorized access and DDoS attacks. Here, we explain in detail how to implement rate limiting using Express + Redis.
What is rate limiting?
Rate limiting is a mechanism that limits the number of API requests allowed within a certain period of time.
This prevents the following problems:
- Reducing server load: Prevents the server from becoming overloaded due to a large number of requests in a short time.
- Suppressing DDoS attacks: Prevents malicious requests from overwhelming the server.
- Fair resource allocation: Prevents specific users from consuming excessive resources and provides fair resources to other users.
Implementation method (Express + Redis)
We use the following libraries:
| Library name | Description |
|---|---|
| express-rate-limit | Rate limiting middleware for Express |
| rate-limit-redis | Library for using Redis as a store for rate limiting |
| ioredis | Redis client (manages connections) |
Installation command
npm install express-rate-limit rate-limit-redis ioredis
Implementation code
import express from 'express';
import rateLimit from 'express-rate-limit';
import RedisStore from 'rate-limit-redis';
import Redis from 'ioredis';

const app = express();

// Create Redis client
const redisClient = new Redis({
  host: 'redis',
  port: 6379,
  enableOfflineQueue: false,
});

// Configure rate limiter
const limiter = rateLimit({
  store: new RedisStore({
    client: redisClient,
    expiry: 60, // key lifetime in seconds (rate-limit-redis v2 API); keep in sync with windowMs
  }),
  windowMs: 1 * 60 * 1000, // 1-minute window
  max: 100, // Allow up to 100 requests per minute
  message: 'Too many requests, please try again later.',
  standardHeaders: true, // Add the standard RateLimit headers
  legacyHeaders: false, // Disable the legacy X-RateLimit-* headers
});

// Apply rate limiting to specific API routes
app.use('/api/', limiter);

app.get('/api/test', (req, res) => {
  res.send('API response');
});

// Start server
app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
How rate limiting works
- Count per client IP address.
- Store the number of requests in Redis and reset it periodically.
- If the limit is exceeded, return 429 Too Many Requests.
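The mechanism itself is small. A minimal in-memory fixed-window counter, for illustration only (a shared store such as Redis is needed once you run more than one server process, which is why the example above uses it):

```javascript
// Fixed-window rate limiter: at most `max` requests per `windowMs` per key.
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function isAllowed(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      // Start a fresh window for this key.
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= max; // false → respond with 429 Too Many Requests
  };
}

const isAllowed = createRateLimiter({ windowMs: 60_000, max: 3 });
console.log(isAllowed('203.0.113.5')); // true (1st request)
console.log(isAllowed('203.0.113.5')); // true
console.log(isAllowed('203.0.113.5')); // true
console.log(isAllowed('203.0.113.5')); // false (4th request in the window)
```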
Proper version management of GraphQL / OpenAPI schemas
API version management is essential to maintain compatibility with clients while adding new features and improving existing ones. This section explains in detail how to manage versions in each approach (GraphQL and OpenAPI).
Why API version management is necessary
When operating an API, proper version management is required for the following reasons:
- Maintaining compatibility
  You need to apply schema changes without affecting clients that use the API (frontend apps, mobile apps, third parties).
- Safe migration
  You need to smoothly migrate to new APIs and deprecate old APIs in stages.
- Improving development speed
  By organizing changes per version, the development team can safely add and modify new features.
Version management in GraphQL
In GraphQL, instead of separating endpoints per version like REST APIs, you manage versions through schema evolution.
GraphQL version management strategies
- Adding fields
GraphQL is highly backward compatible: you can add new fields without changing existing ones. For example (the added field name below is illustrative):

type Query {
  user: User
  # Newly added field; existing queries that do not request it are unaffected
  serverVersion: String
}
- Deprecating fields
Instead of deleting existing fields, deprecate them with @deprecated and encourage migration to new fields.
type Query {
  userV1: UserV1 @deprecated(reason: "Use userV2")
  userV2: UserV2
}
- Introducing new types
By versioning types such as UserV1 → UserV2, you can evolve the schema while maintaining compatibility.
type UserV1 {
  id: ID
  name: String
}

type UserV2 {
  id: ID
  fullName: String
}
Notifying clients of deprecated fields in GraphQL
GraphQL clients (such as Apollo Client) display warnings when deprecated fields are requested, which encourages developers to migrate to appropriate versions.
Version management in OpenAPI (Swagger)
OpenAPI (formerly Swagger) is a standard specification for defining REST API schemas, and there are several ways to manage versions.
OpenAPI version management strategies
- Include version numbers in URLs (recommended)
Separate endpoints per version.
Effective when API changes are large.
openapi: 3.0.0
info:
  title: Example API
  version: 2.0.0
paths:
  /v1/users:
    get:
      summary: Get users (v1, deprecated)
      deprecated: true
  /v2/users:
    get:
      summary: Get users (v2)
- Specify version in HTTP headers
Use headers to switch versions.
Allows version management without changing endpoint URLs.
GET /users
Headers:
X-API-Version: 2
- Specify version in query parameters
Manage versions by including ?version=2 in query parameters.
GET /users?version=2
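Whichever strategy you choose, the server needs a single place that resolves the requested version. A small sketch combining the header and query-parameter approaches (the function name and fallback default are assumptions):

```javascript
// Resolve the API version from header, then query parameter, then a default.
function resolveApiVersion(req, defaultVersion = '1') {
  const header = req.headers && req.headers['x-api-version'];
  const query = req.query && req.query.version;
  return String(header || query || defaultVersion);
}

console.log(resolveApiVersion({ headers: { 'x-api-version': '2' }, query: {} })); // '2'
console.log(resolveApiVersion({ headers: {}, query: { version: '2' } }));         // '2'
console.log(resolveApiVersion({ headers: {}, query: {} }));                       // '1'
```

In Express this would typically run as middleware that attaches the resolved version to the request before routing.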
Monitoring API logs (using Datadog / Sentry)
Purpose of monitoring
By monitoring API logs, you can achieve the following objectives:
- Performance optimization
- Identify causes of response time fluctuations and slowdowns
- Analyze and optimize bottlenecks
- Early detection and response to errors
- Receive immediate alerts when exceptions or failures occur
- Quickly identify where problems occur and their impact
- Security monitoring
- Detect suspicious requests and signs of attacks
- Monitor abnormal API usage patterns
- Improving system reliability
- Minimize the impact of failures and maintain SLAs (Service Level Agreements)
- Speed up incident response
Monitoring with Datadog
Datadog is a tool with strengths in real-time monitoring and visualization.
Main features
- APM (Application Performance Monitoring)
  Records API request/response times in detail.
  Visualizes request flows using spans and identifies bottlenecks.
- Log management
  Collects and analyzes API request/response data.
  Supports filtering and setting custom metrics.
- Alert configuration
  Sends alerts when API response delays or error rates exceed thresholds.
  Integrates with notification systems such as Slack and PagerDuty.
Introduction steps (overview)
- Create a Datadog account and obtain an API key
- Install the Datadog Agent in your application
- Configure your API logs to be sent to Datadog (e.g., using winston-datadog)
- Create dashboards and monitor in real time
Monitoring error logs with Sentry
Sentry is a tool specialized in application error tracking.
Main features
- Automatic collection of error logs
  Records API error logs in real time and displays stack traces.
  Makes it easy to identify where errors occur and their causes.
- User impact analysis
  Analyzes which users are affected by specific errors.
  Visualizes error frequency and impact.
- Release management
  Manages error reports per deployment and identifies issues occurring in specific versions.
- Alert notifications
  Sends notifications when important errors occur.
  Can integrate with Slack and email.
Introduction steps (overview)
- Create a Sentry account and obtain a DSN (Data Source Name)
- Configure your API to send error logs to Sentry (e.g., using @sentry/node)
- Create dashboards and monitor/analyze errors
- Configure alerts to receive notifications when critical errors occur
There is a blog post on introducing Sentry into a Next.js project, which you may find helpful.
Conclusion
Authorization and security measures for APIs are not something you can finish once and forget; they require continuous improvement. By combining multiple layers of approaches—fine-grained access control using RBAC and ABAC, resource protection through rate limiting, restricting GraphQL introspection, and strengthening log monitoring—you can build more secure APIs.
Put the methods introduced in this article into practice and establish security measures suited to your own environment. Continue working to build scalable systems while maintaining API security in the future.