Complete Guide to Refactoring React: Improve Your Code with Modularization, Render Optimization, and Design Patterns

  • typescript
  • javascript
  • sonarqube
Published on 2025/01/13

Introduction

In frontend development, as a project grows in size, code readability and maintainability often decline, and development speed tends to slow down. Code that started out simple becomes more complex as features are added and specifications change repeatedly, and you start running into issues like “I feel like I’ve seen this logic somewhere else…” or “It’s hard to understand what this function is doing…”.

To solve these problems, continuous review, organization, and optimization of code—i.e., refactoring—is essential. When refactoring is done properly, it improves readability, reduces duplicate code, and enhances performance, thereby raising the overall quality of the project.

In this article, we introduce practical refactoring approaches such as revisiting naming conventions, splitting functions, modularizing common logic, improving React performance, optimizing asynchronous processing, organizing dependencies, strengthening type usage, and applying design patterns. We also explain how to create a plan for applying refactoring in stages.

Let’s aim for a more maintainable and extensible frontend by improving code quality.

Improving Code Readability

Improving code readability is extremely important from the perspective of team development and long-term maintainability. Highly readable code makes it easier for developers to understand each other, reduces bugs, and makes refactoring easier. Here, we explain concrete points for improving readability in detail and also touch on differences in naming conventions between languages.

Applying Naming Conventions

Applying unified rules across a project improves code readability and maintainability. Below are examples of naming conventions by language.

  • JavaScript / TypeScript
    • Variables / functions: camelCase (e.g., fetchUserData)
    • Classes / components: PascalCase (e.g., UserProfileComponent)
    • Constants: UPPER_SNAKE_CASE (e.g., MAX_RETRY_COUNT)
  • Python
    • Variables / functions: snake_case (e.g., fetch_user_data)
    • Classes: PascalCase (e.g., UserProfile)
    • Constants: UPPER_SNAKE_CASE (e.g., MAX_RETRY_COUNT)
  • Java / C#
    • Variables / methods: camelCase (e.g., fetchUserData)
    • Classes / interfaces: PascalCase (e.g., UserProfile)
    • Constants: UPPER_SNAKE_CASE (e.g., MAX_RETRY_COUNT)
  • Go
    • Variables / functions: camelCase for unexported identifiers, PascalCase for exported ones (e.g., fetchUserData / FetchUserData)
    • There are no classes; struct types follow the same casing rule (e.g., UserProfile)
    • Constants: MixedCaps with the same visibility rule (e.g., maxRetryCount or MaxRetryCount); UPPER_SNAKE_CASE is not idiomatic Go

Use Meaningful Names

Avoid abstract names like data or temp, and give them concrete meaning.

  • Bad examples: getData(), processTemp()
  • Good examples: fetchUserList(), convertTemperatureToCelsius()

For variables that represent state, using prefixes like is, has, should makes the intent easier to understand.

  • Examples: isAdminUser, hasPendingRequests, shouldRetry
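As a small sketch of this convention, a boolean with an is prefix makes the condition self-describing at the call site (the function and type names here are hypothetical):

```typescript
interface SimpleUser {
  role: 'admin' | 'member';
}

function canAccessDashboard(user: SimpleUser): boolean {
  // ❌ A name like `flag` would hide the meaning of this condition
  // ✅ The `is` prefix makes the intent readable without checking the logic
  const isAdminUser = user.role === 'admin';
  return isAdminUser;
}

console.log(canAccessDashboard({ role: 'admin' })); // true
console.log(canAccessDashboard({ role: 'member' })); // false
```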

Applying Function Splitting

Design functions so that each has a single responsibility (Single Responsibility Principle, SRP) to improve code readability and maintainability.

  • Give each function a single responsibility
    If one function has multiple roles, the intent of the code becomes unclear and testing becomes difficult.

Function naming rules

  • Define function names in the form verb + noun to make the function’s role clear.
    • Examples: fetchUserProfile(), validateEmail(), sendNotification()
  • Bad example: processData() → Unclear what is being processed
  • Good example: calculateOrderTotal() → Clear what is being calculated
// ❌ Bad example: One function has multiple responsibilities
function getUserDataAndFormat(userId: string) {
  const user = fetchFromDatabase(userId);
  user.name = `${user.firstName} ${user.lastName}`;
  return user;
}

// ✅ Good example: Split functions and clarify each responsibility
function fetchUserData(userId: string): User {
  return fetchFromDatabase(userId);
}

function formatUserName(user: User): string {
  return `${user.firstName} ${user.lastName}`;
}

Tidying Up Code (Formatting and Style)

To improve readability, it is also important to use appropriate formatting and follow style guides.

Unifying indentation

  • Standardize indentation within the team (typically 2 spaces or 4 spaces)
  • Spaces are the common convention for JavaScript / TypeScript / Python
  • Tabs are standard in Go (gofmt enforces them); conventions for C vary by project

Appropriate line breaks and whitespace

  • Insert blank lines between major processing blocks

Using comments

  • Add comments where necessary, but avoid being overly verbose
  • Write comments that explain the intent of the code (explain “why” rather than “what”)

Bad example

// Get user data
const user = fetchUserData(userId);

Good example

// Fetch user data from API (only request when cache is invalid)
const user = fetchUserData(userId);

Modularizing Common Logic

To reduce code duplication and improve maintainability and reusability, it is important to properly modularize common logic. Here, we explain in detail, with concrete examples, how to modularize common logic.

Collect commonly used functions into a shared location such as a utils folder to prevent code duplication.

Key points include the following three:

  • Extract frequently used logic (e.g., date formatting, data conversion, validation)
  • Organize by category within the utils folder (e.g., dateUtils.ts, stringUtils.ts, etc.)
  • Write tests to guarantee the intended behavior

Example) Date formatting function

utils/dateUtils.ts
// utils/dateUtils.ts
export function formatDate(date: unknown): string {
  if (!(date instanceof Date) || isNaN(date.getTime())) {
    return '-';
  }

  // Formats as "YYYY/MM/DD" in the ja-JP locale
  return new Intl.DateTimeFormat('ja-JP', { dateStyle: 'short' }).format(date);
}

Usage example

import { formatDate } from '@/utils/dateUtils';

console.log(formatDate(new Date())); // Example: "2024/03/10"

Test code

tests/utils/dateUtils.spec.ts
import { formatDate } from '@/utils/dateUtils';

describe('formatDate', () => {
  it('converts to the correct date format', () => {
    const date = new Date(2024, 2, 10);
    expect(formatDate(date)).toBe('2024/03/10');
  });

  it('returns "-" when an invalid date is passed', () => {
    expect(formatDate(new Date('invalid-date'))).toBe('-');
  });

  it('returns "-" when null is passed', () => {
    expect(formatDate(null)).toBe('-');
  });

  it('returns "-" when undefined is passed', () => {
    expect(formatDate(undefined)).toBe('-');
  });

  it('returns "-" when a string is passed', () => {
    expect(formatDate('2024-03-10')).toBe('-');
  });

  it('returns "-" when a number is passed', () => {
    expect(formatDate(0)).toBe('-');
  });
});

Example) Function for API requests

utils/apiClient.ts
export async function fetchData<T>(url: string, options?: RequestInit): Promise<T> {
  const response = await fetch(url, options);
  if (!response.ok) {
    throw new Error(`HTTP error! Status: ${response.status}`);
  }
  return response.json();
}
  • By centralizing communication with the API, you can manage error handling and requests in one place.

Usage example

import { fetchData } from '@/utils/apiClient';

async function getUserData() {
  try {
    const user = await fetchData<{ name: string; age: number }>('https://api.example.com/user');
    console.log(user);
  } catch (error) {
    console.error('Error fetching user data:', error);
  }
}

Test code

tests/utils/apiClient.spec.ts
import { fetchData } from '@/utils/apiClient';

global.fetch = jest.fn();

describe('fetchData', () => {
  it('can fetch data from API', async () => {
    (fetch as jest.Mock).mockResolvedValueOnce({
      ok: true,
      json: async () => ({ name: 'John Doe' }),
    });

    const data = await fetchData<{ name: string }>('https://api.example.com/user');
    expect(data).toEqual({ name: 'John Doe' });
  });

  it('throws an error when the API request fails', async () => {
    (fetch as jest.Mock).mockResolvedValueOnce({
      ok: false,
      status: 500,
    });

    await expect(fetchData('https://api.example.com/user')).rejects.toThrow('HTTP error! Status: 500');
  });

  it('throws an error when a network error occurs', async () => {
    (fetch as jest.Mock).mockRejectedValueOnce(new Error('Network Error'));

    await expect(fetchData('https://api.example.com/user')).rejects.toThrow('Network Error');
  });
});

Example) Email address validation

utils/validation.ts
export function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

Usage example

import { isValidEmail } from '@/utils/validation';

console.log(isValidEmail('test@example.com')); // true
console.log(isValidEmail('invalid-email')); // false

Test code

tests/utils/validation.spec.ts
import { isValidEmail } from '@/utils/validation';

describe('isValidEmail', () => {
  it('accepts valid email formats', () => {
    expect(isValidEmail('test@example.com')).toBe(true);
    expect(isValidEmail('user.name+alias@domain.co.jp')).toBe(true);
  });

  it('rejects invalid email formats', () => {
    expect(isValidEmail('invalid-email')).toBe(false);
    expect(isValidEmail('user@domain,com')).toBe(false);
    expect(isValidEmail('user@.com')).toBe(false);
  });
});

Example) Local storage utilities

utils/storage.ts
export function getStorageItem<T>(key: string): T | null {
  const item = localStorage.getItem(key);
  if (item === null) {
    return null;
  }
  try {
    return JSON.parse(item) as T;
  } catch {
    // Guard against corrupted or non-JSON values in storage
    return null;
  }
}

export function setStorageItem<T>(key: string, value: T): void {
  localStorage.setItem(key, JSON.stringify(value));
}

export function removeStorageItem(key: string): void {
  localStorage.removeItem(key);
}

Usage example

import { getStorageItem, setStorageItem } from '@/utils/storage';

setStorageItem('user', { name: 'John Doe', age: 30 });

const user = getStorageItem<{ name: string; age: number }>('user');
console.log(user); // { name: 'John Doe', age: 30 }

Test code

tests/utils/storage.spec.ts
import { getStorageItem, setStorageItem, removeStorageItem } from '@/utils/storage';

describe('localStorage utils', () => {
  beforeEach(() => {
    localStorage.clear();
  });

  it('can save data correctly', () => {
    setStorageItem('testKey', { foo: 'bar' });
    expect(localStorage.getItem('testKey')).toBe(JSON.stringify({ foo: 'bar' }));
  });

  it('can retrieve data correctly', () => {
    localStorage.setItem('testKey', JSON.stringify({ foo: 'bar' }));
    expect(getStorageItem<{ foo: string }>('testKey')).toEqual({ foo: 'bar' });
  });

  it('can delete data', () => {
    localStorage.setItem('testKey', JSON.stringify({ foo: 'bar' }));
    removeStorageItem('testKey');
    expect(localStorage.getItem('testKey')).toBeNull();
  });
});

Example) Error handling function

utils/errorHandler.ts
export function handleError(error: unknown): string {
  if (error instanceof Error) {
    return error.message;
  }
  return 'An unknown error occurred';
}

Usage example

import { handleError } from '@/utils/errorHandler';

try {
  throw new Error('Something went wrong');
} catch (error) {
  console.error(handleError(error)); // "Something went wrong"
}

Test code

tests/utils/errorHandler.spec.ts
import { handleError } from '@/utils/errorHandler';

describe('handleError', () => {
  it('can get the message from an Error object', () => {
    const error = new Error('Something went wrong');
    expect(handleError(error)).toBe('Something went wrong');
  });

  it('returns the default message for unknown errors', () => {
    expect(handleError(null)).toBe('An unknown error occurred');
    expect(handleError(undefined)).toBe('An unknown error occurred');
  });
});

Splitting Large Components (Separation of Concerns)

When React components grow too large, the following problems arise:

  • Decreased readability: The code becomes harder to scan and understand.
  • Decreased reusability: A specific component becomes tied to a single use case and is hard to reuse elsewhere.
  • Testing becomes difficult: When a single component has many responsibilities, unit testing becomes harder.

Separating Presentation and Logic

By splitting components into Container Components and Presentational Components, you can clarify roles and limit responsibilities.

  • Container Component
    • Handles data fetching and state management
    • Fetches data from external APIs or custom hooks
    • Processes business logic
    • Wraps presentational components and passes necessary data and functions
  • Presentational Component
    • Responsible only for rendering the UI
    • Displays the props it receives as-is
    • Receives user input and event handlers, but does not manage state

Example) User profile component before splitting

// ❌ Bad example
function UserProfile() {
  const { data: user, isLoading, error } = useUserData();

  if (isLoading) {
    return <Loading />;
  }

  if (error) {
    return <ErrorMessage message="Could not fetch user information." />;
  }

  return (
    <div className="p-4 border rounded-lg">
      <h2 className="text-xl font-bold">{user.name}</h2>
      <p className="text-gray-600">{user.email}</p>
    </div>
  );
}

Problems before splitting

  1. Too many responsibilities

    • Calls useUserData to fetch data
    • Handles loading and error states
    • Renders the UI
  2. Hard to reuse

    • It’s difficult to reuse UserProfile on other screens
    • For example, even if you just want to display user information on another page, it still calls useUserData internally, so you can’t freely pass in a user
  3. Hard to test

    • Since data fetching and UI rendering are mixed, you have to consider the data fetching part when writing UI tests

Example) User profile component after splitting

// ✅ Container Component (data fetching and state management)
function UserProfileContainer() {
  const { data: user, isLoading, error } = useUserData();

  if (isLoading) return <Loading />;
  if (error) return <ErrorMessage message="Could not fetch user information." />;

  return <UserProfile user={user} />;
}

// ✅ Presentational Component (UI rendering)
function UserProfile({ user }: { user: User }) {
  return (
    <div className="p-4 border rounded-lg">
      <h2 className="text-xl font-bold">{user.name}</h2>
      <p className="text-gray-600">{user.email}</p>
    </div>
  );
}

Benefits

  1. Component roles become clear

    • UserProfileContainer is responsible only for data fetching and state management
    • UserProfile is responsible only for display
  2. Improved reusability

    • UserProfile can be reused as long as you pass in different data (e.g., another user’s information)
  3. Easier testing

    • UserProfile works with just props, so it’s easy to test
    • UserProfileContainer can be tested using mock data for the data fetching part

Applying Design Patterns

In software development, design patterns are an important factor in improving system maintainability and extensibility. In this article, we focus on “Clean Architecture” and the “Repository Pattern,” which are particularly useful in frontend and backend development, and introduce how to actually apply them.

Clean Architecture

Clean Architecture is an architectural pattern proposed by Robert C. Martin and consists of the following four main layers:

  • Entities: The most important part that represents business rules
  • Use Cases: Application-specific business rules
  • Interface Adapters: Bridges to frameworks and databases
  • Frameworks & Drivers: External systems and UI implementation details

Benefits

  • High maintainability: Since each layer is clearly separated, the impact of changes is limited.
  • Easy testing: Because business logic is separated from the UI and DB, unit testing is easier.
  • High flexibility in technology choices: It’s easy to change databases or frameworks.

Example of applying Clean Architecture

// Entity layer (domain model)
class User {
  constructor(
    public id: string,
    public name: string,
    public email: string
  ) {}
}

// Use case layer (application business logic)
class GetUserUseCase {
  constructor(private userRepository: UserRepository) {}

  async execute(userId: string): Promise<User> {
    return this.userRepository.findById(userId);
  }
}

// Interface adapter layer (repository implementation)
class UserRepositoryImpl implements UserRepository {
  constructor(private db: Database) {}

  async findById(id: string): Promise<User> {
    const userData = await this.db.query("SELECT * FROM users WHERE id = ?", [id]);
    return new User(userData.id, userData.name, userData.email);
  }
}

Repository Pattern

The Repository Pattern abstracts data access so that business logic does not depend on the concrete implementation of the database.

Benefits

  • Separation of data access: Separates business logic from data access logic, making them loosely coupled.
  • Easy testing: Makes it easier to mock the database.
  • Easy to change data sources: For example, it reduces the impact when changing from an SQL DB to a NoSQL DB.

Example of applying the Repository Pattern

interface UserRepository {
  findById(id: string): Promise<User | null>;
  save(user: User): Promise<void>;
}

class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, User>();

  async findById(id: string): Promise<User | null> {
    return this.users.get(id) || null;
  }

  async save(user: User): Promise<void> {
    this.users.set(user.id, user);
  }
}

Combining Clean Architecture and the Repository Pattern

By combining the use case layer of Clean Architecture with the Repository Pattern, you can achieve a more flexible design.

class CreateUserUseCase {
  constructor(private userRepository: UserRepository) {}

  async execute(name: string, email: string): Promise<User> {
    const user = new User(uuid(), name, email);
    await this.userRepository.save(user);
    return user;
  }
}
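Wiring the pieces from the two examples above together, a use case can be executed against the in-memory repository. The sketch below inlines the definitions so it is self-contained, and replaces uuid() with a simple counter for illustration:

```typescript
class User {
  constructor(
    public id: string,
    public name: string,
    public email: string
  ) {}
}

interface UserRepository {
  findById(id: string): Promise<User | null>;
  save(user: User): Promise<void>;
}

class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, User>();

  async findById(id: string): Promise<User | null> {
    return this.users.get(id) ?? null;
  }

  async save(user: User): Promise<void> {
    this.users.set(user.id, user);
  }
}

// Simple stand-in for uuid(), for illustration only
let nextId = 0;
const generateId = () => `user-${++nextId}`;

class CreateUserUseCase {
  constructor(private userRepository: UserRepository) {}

  async execute(name: string, email: string): Promise<User> {
    const user = new User(generateId(), name, email);
    await this.userRepository.save(user);
    return user;
  }
}

async function main() {
  const repository = new InMemoryUserRepository();
  const createUser = new CreateUserUseCase(repository);

  const created = await createUser.execute('Alice', 'alice@example.com');
  const found = await repository.findById(created.id);
  console.log(found?.name); // "Alice"
}

main();
```

Because the use case only depends on the UserRepository interface, the in-memory implementation can later be swapped for a real database-backed one without touching CreateUserUseCase.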

Improving React Performance (Reducing Unnecessary Renders, Applying Memoization)

To improve the performance of React applications, it is important to reduce unnecessary renders and apply memoization appropriately. In this article, we introduce techniques that help optimize React rendering.

Reducing Unnecessary Renders

  • Use React.memo
    By using React.memo, you can prevent re-renders as long as the component’s props do not change.
import React from 'react';

const ExpensiveComponent = React.memo(({ value }: { value: number }) => {
  console.log('Rendering ExpensiveComponent');
  return <div>{value}</div>;
});

export default ExpensiveComponent;

Key points

  • React.memo compares props and suppresses re-renders when there are no changes.

  • Simply wrapping with React.memo is not enough; if function props (such as event handlers) change, it will still re-render, so use it together with useCallback.

  • Memoizing functions with useCallback
    If a function is recreated on every render, it can cause child components to re-render. Use useCallback to memoize functions.

import { useState, useCallback } from 'react';
import ExpensiveComponent from './ExpensiveComponent';

const ParentComponent = () => {
  const [count, setCount] = useState(0);
  const [value] = useState(10);

  const increment = useCallback(() => setCount((prev) => prev + 1), []);

  return (
    <div>
      <button onClick={increment}>Increment: {count}</button>
      <ExpensiveComponent value={value} />
    </div>
  );
};

export default ParentComponent;

Key points

  • With useCallback, a new function is not generated as long as the dependency array does not change.

  • It helps prevent unnecessary re-renders of components wrapped with React.memo.

  • Caching computation results with useMemo
    Use useMemo to memoize expensive computations and prevent unnecessary recalculations.

import { useState, useMemo } from 'react';

const expensiveCalculation = (num: number) => {
  console.log('Calculating...');
  return num * 2;
};

const MemoizedComponent = () => {
  const [count, setCount] = useState(0);
  const [value] = useState(10);

  const computedValue = useMemo(() => expensiveCalculation(value), [value]);

  return (
    <div>
      <button onClick={() => setCount(count + 1)}>Increment: {count}</button>
      <div>Computed Value: {computedValue}</div>
    </div>
  );
};

export default MemoizedComponent;

Key points

  • useMemo caches the result as long as the values in the dependency array do not change.
  • Use useMemo only when the computation cost is high.

Setting key Properly

When rendering lists, setting key properly allows React’s virtual DOM to perform diffing efficiently.

const items = ['Apple', 'Banana', 'Orange'];

const ItemList = () => {
  return (
    <ul>
      {items.map((item) => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  );
};

Key points

  • Use a unique and stable value for key (indexes are not recommended).
  • Setting appropriate keys helps prevent unnecessary re-renders.

Optimizing Asynchronous JavaScript (Proper Use of Promise.all, Stronger Error Handling)

Optimizing asynchronous processing in JavaScript is directly tied to application performance and stability. By using Promise.all appropriately and strengthening error handling, you can implement efficient and robust asynchronous processing.

Basics of Promise.all

Promise.all waits for multiple Promises concurrently and resolves with an array of their results once all of them fulfill.

const promises = [fetchData1(), fetchData2(), fetchData3()];
Promise.all(promises)
  .then(results => {
    console.log("All processes completed", results);
  })
  .catch(error => {
    console.error("Error occurred", error);
  });

Benefits

  • Parallel execution: Running processes simultaneously improves performance.
  • Simple code: You can get all results together in an array.

Drawbacks

  • If one Promise rejects, the whole thing rejects: Promise.all fails fast on the first rejection and goes straight into catch, so the results of the Promises that succeeded are lost.
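This fail-fast behavior can be seen in a small self-contained sketch: two of the three promises resolve, but their values are never delivered because the rejection wins (the helpers and delays here are illustrative stand-ins for real requests):

```typescript
// Simulated async work: one resolves, one rejects after a delay
const ok = (value: string, ms: number) =>
  new Promise<string>((resolve) => setTimeout(() => resolve(value), ms));
const fail = (ms: number) =>
  new Promise<string>((_, reject) =>
    setTimeout(() => reject(new Error('request failed')), ms)
  );

async function demo(): Promise<string> {
  try {
    const results = await Promise.all([ok('A', 10), fail(20), ok('C', 30)]);
    return results.join(',');
  } catch (error) {
    // The values of ok('A') and ok('C') are lost here
    return `caught: ${(error as Error).message}`;
  }
}

demo().then(console.log); // "caught: request failed"
```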

Using Promise.allSettled for Error Handling

Promise.allSettled waits until all Promises have finished and returns results regardless of success or failure.

const promises = [fetchData1(), fetchData2(), fetchData3()];
Promise.allSettled(promises)
  .then(results => {
    results.forEach((result, index) => {
      if (result.status === "fulfilled") {
        console.log(`Success: ${index}`, result.value);
      } else {
        console.error(`Failure: ${index}`, result.reason);
      }
    });
  });

Benefits of Promise.allSettled

  • Waits until all processes finish: Other processes continue even if some fail.
  • Allows per-promise error handling: You can handle success and failure for each Promise individually.

Strengthening Error Handling with Promise.all

You can also use catch on each Promise to handle errors individually when using Promise.all.

const safeFetch = (promise) => {
  return promise.catch(error => ({ error }));
};

const promises = [
  safeFetch(fetchData1()),
  safeFetch(fetchData2()),
  safeFetch(fetchData3()),
];

Promise.all(promises).then(results => {
  results.forEach((result, index) => {
    if (result.error) {
      console.error(`Failure: ${index}`, result.error);
    } else {
      console.log(`Success: ${index}`, result);
    }
  });
});

Applying Per-Task Error Handling

Instead of catching all errors in one place, it’s important to handle them appropriately for each specific process.

const fetchDataWithRetry = async (fetchFunction, retries = 3) => {
  for (let i = 0; i < retries; i++) {
    try {
      return await fetchFunction();
    } catch (error) {
      console.warn(`Retry ${i + 1} failed`, error);
    }
  }
  throw new Error("Maximum retry count exceeded");
};

const promises = [
  fetchDataWithRetry(fetchData1),
  fetchDataWithRetry(fetchData2),
  fetchDataWithRetry(fetchData3),
];

Promise.all(promises)
  .then(results => console.log("All successful", results))
  .catch(error => console.error("Error", error));

Organizing Dependencies: Removing Unused Libraries and Updating to the Latest Versions

In software development, properly managing project dependencies is extremely important. By removing unnecessary libraries and updating necessary ones to the latest stable versions, you can expect improved performance, reduced security risks, and better maintainability.

Benefits

  • Performance improvements
    If unnecessary libraries increase, build time and application startup time become longer. Runtime memory usage also increases, which can cause sluggish behavior.

  • Reduced security risks
    Old libraries may contain vulnerabilities. Updating to the latest version can fix known vulnerabilities and improve security.

  • Improved maintainability
    Having many unused or outdated libraries makes code management complicated. It can also become an obstacle when other developers take over the project in the future.

Step 1: List Current Dependencies

First, check the project’s dependencies.

npm list --depth=0  # For npm
pnpm list --depth=0  # For pnpm
yarn list --depth=0  # For yarn

Also, check dependencies and devDependencies in package.json to understand which libraries are actually being used.

Step 2: Identify and Remove Unnecessary Libraries

To find unnecessary libraries, try the following steps:

  • Search for libraries that are not used in the project

    npx depcheck
    

    depcheck is a useful tool for identifying unused dependencies.

  • Manually check package.json

    • Libraries that were introduced experimentally in the past but are no longer used
    • Libraries that became unnecessary due to framework changes
    • Libraries in devDependencies that are not used in production

Once you identify unnecessary libraries, remove them.

npm uninstall <package_name>
pnpm remove <package_name>
yarn remove <package_name>

After removing libraries, run tests to confirm that the application still works correctly.

npm test  # Run your preconfigured test script

Step 3: Update Dependencies to the Latest Versions

To update old libraries to the latest stable versions, run the following commands:

npm outdated  # Check which packages can be updated
npm update    # Update within the semver ranges in package.json (mainly minor/patch versions)
npm install <package_name>@latest  # Explicitly move to a new major version after checking its release notes

Step 4: Verification and Regression Testing

After updating libraries, run tests to confirm that the application still works correctly.

npm test  # Run your preconfigured test script

It is also important to run E2E tests and manual tests to ensure that there are no issues caused by version upgrades.

Step 5: Regular Review of Dependencies

Depending on the size of the project, it is recommended to review dependencies regularly.

Step 6: Managing Lock Files

Manage package-lock.json (npm) or yarn.lock (Yarn) properly to prevent unintended version upgrades.

Step 7: Check Release Notes for Major Libraries

Major libraries such as React and Next.js may include breaking changes in new versions. Before updating, check the release notes and follow the migration guides.

Eliminating any in TypeScript and Expanding the Scope of Types

When using TypeScript, the any type offers flexibility but also undermines type safety. In this article, we explain in detail how to eliminate any and how to expand the scope of type application.

Why Should We Eliminate any?

Using the any type disables TypeScript’s type checking and can cause the following problems:

  • Lack of type safety: With any, type mismatches cannot be detected at compile time and can lead to runtime errors.
  • Reduced code readability: Losing type information makes the structure of functions and objects unclear.
  • No type inference: You cannot take advantage of TypeScript’s powerful type inference.
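A minimal sketch of the first problem: with any, a mistake compiles silently and only surfaces at runtime, while unknown forces a check before the property is touched (the function names here are hypothetical):

```typescript
function getLength(value: any): number {
  // Compiles even when `value` has no `length`;
  // getLength(123) would silently return undefined at runtime
  return value.length;
}

function getLengthSafely(value: unknown): number {
  // With `unknown`, accessing `.length` without narrowing is a compile error,
  // so a runtime check is required first
  if (typeof value === 'string' || Array.isArray(value)) {
    return value.length;
  }
  return 0;
}

console.log(getLength('hello'));       // 5
console.log(getLengthSafely('hello')); // 5
console.log(getLengthSafely(123));     // 0
```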

Techniques for Reducing any

Here are several techniques for safely eliminating any and expanding the scope of type application.

  1. Use unknown
    unknown can be used as an alternative to any and preserves type safety until proper type checks are performed.
function parseJson(json: string): unknown {
  return JSON.parse(json);
}

const data: unknown = parseJson('{"name":"Alice"}');
if (typeof data === 'object' && data !== null && 'name' in data) {
  console.log((data as { name: string }).name); // Cast after type check
}
  2. Use generics

By using generics for function return values and arguments, you can broaden the scope of type application.

function identity<T>(value: T): T {
  return value;
}

const result = identity<string>("Hello"); // Type becomes string
  3. Add explicit type annotations

By specifying appropriate types explicitly, you can prevent the use of any.

interface User {
  id: number;
  name: string;
}

const getUser = (id: number): User => {
  return { id, name: "John Doe" };
};
  4. Use utility types

TypeScript provides utility types for extending and constraining types.

type PartialUser = Partial<User>; // Make all properties optional

const user: PartialUser = { name: "Alice" }; // OK
  5. Use type inference

Leverage TypeScript’s type inference to avoid explicit use of any.

const numbers = [1, 2, 3];
const doubled = numbers.map(num => num * 2); // `num` is inferred as number

When You Have No Choice but to Use any

Even when you must use any, it is important to control it properly.

  1. Use type guards

Perform type checks and cast any safely.

function isUser(obj: any): obj is User {
  return typeof obj.id === "number" && typeof obj.name === "string";
}

const maybeUser: any = { id: 1, name: "Alice" };
if (isUser(maybeUser)) {
  console.log(maybeUser.name); // Type can be applied safely
}
  2. Do not overuse as

Avoid using as any as much as possible and set appropriate types instead.

// ❌ Bad example
const data: any = getData();
const user = data as User; // Type safety is lost
  3. Use library type definitions

When using external libraries, install appropriate type definition files (@types/...) to expand the scope of type application.

npm install --save-dev @types/lodash

There is also an article summarizing points to note when migrating from JavaScript to TypeScript.

It introduces various techniques related to types. Please refer to it together.

https://shinagawa-web.com/en/blogs/typescript-migration-support

Planning Refactoring (Supporting Gradual Application)

In software development, refactoring is essential to improve code readability and maintainability. However, in large-scale projects, it is difficult to change everything at once, and a proper plan is required. In this article, we explain in detail how to plan refactoring with staged application in mind.

Define the Goals of Refactoring

  • Improve code readability
  • Improve maintainability
  • Optimize performance
  • Reduce bugs
  • Eliminate technical debt

Decide Priorities

  • Prioritize areas with a large impact (code that changes frequently or tends to be a hotbed of bugs)
  • Prioritize code for features with high business value
  • Prioritize areas with large technical debt (parts where old designs are slowing down development)

Analyze the Current Codebase

  • Analyze code quality (use tools such as SonarQube, ESLint, Stylelint)
  • Visualize dependencies (use Dependency Cruiser or Graphviz)
  • Check test coverage (Jest, React Testing Library, etc.)

Design Refactoring with Impact in Mind

  • Split by module (apply changes in stages by component or feature)
  • Start with parts that have fewer dependencies
  • Design with coexistence of old and new code in mind (use Feature Toggles, Strangler Pattern)

How to Proceed with Refactoring

  1. Conduct a PoC (Proof of Concept)
    • Make small changes to check the impact
  2. Apply small changes in stages
    • Example: Start with renaming functions or tidying up code
  3. Refactor major features or highly coupled parts
    • Example: Migrating from legacy class components to function components
  4. Improve architecture
    • Example: Migrating from Redux to Zustand, organizing APIs

Points for Safe Refactoring

  • Enhance tests
    • Prepare unit tests, integration tests, and E2E tests
  • Use feature flags
    • Introduce a mechanism that allows you to switch behavior while applying changes in stages
  • Use code reviews and pair programming
    • Incorporate third-party perspectives to improve the quality of refactoring
  • Minimize the impact range of each release
    • Adopt a micro-release strategy
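As one possible sketch of a feature flag, a simple lookup lets the legacy and refactored implementations coexist while the change is rolled out in stages (the flag name and the two price functions are hypothetical):

```typescript
type FeatureFlags = Record<string, boolean>;

// In practice these values would come from config or a remote flag service
const featureFlags: FeatureFlags = {
  useNewPriceCalculation: false,
};

const isFeatureEnabled = (name: string): boolean =>
  featureFlags[name] ?? false;

// Legacy implementation, kept untouched during the migration
function calculatePriceLegacy(base: number): number {
  return Math.round(base * 1.1);
}

// Refactored implementation, rolled out behind the flag
function calculatePriceNew(base: number): number {
  return Math.round(base * 1.1 * 100) / 100;
}

function calculatePrice(base: number): number {
  return isFeatureEnabled('useNewPriceCalculation')
    ? calculatePriceNew(base)
    : calculatePriceLegacy(base);
}

console.log(calculatePrice(19.99)); // 22 (legacy path while the flag is off)
```

Flipping the flag switches callers to the new implementation without a code change, and turning it back off is an immediate rollback if a problem is found.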

Managing Code After Refactoring

To confirm that refactoring has been done properly, evaluate the results using the following metrics:

  • Measure code quality metrics

    • Cyclomatic Complexity: Measure the complexity of code branches and evaluate simplification through refactoring
    • Maintainability Index: Quantify how easy the code is to understand and change
    • Code duplication rate: Reduce code duplication and improve maintainability
  • Confirm improved development speed

    • Changes in PR merge speed: Check whether changes are reviewed and integrated smoothly
    • Decrease in bug reports: Measure the effect of refactoring on bug reduction
    • Shorter release cycles: Check whether new feature development proceeds smoothly

Record refactoring changes properly and share them with the entire team to improve future maintainability.

  • Keep records of refactoring

    • Reason for change: Clarify why refactoring was done
    • Impact range: Organize which modules or features are affected
    • Migration steps: Describe the steps to migrate from old code to new code
    • Backward compatibility: Note considerations for making changes without affecting existing features
  • Update code comments and README

    • API specification changes: Reflect changes to endpoint specifications and parameters in documentation
    • Component usage: Clearly describe how to use changed components
    • Environment configuration updates: Record changes to configuration files and CI/CD scripts

Conclusion

Refactoring is not something that ends after being done once; it is important to continuously accumulate improvements. By organizing code with a focus on readability, reusability, and performance, you can improve the development experience and make it easier to add new features while keeping bugs under control.

In particular, keeping the following points in mind will help you refactor effectively:

  • Accumulate small improvements: Don’t change too much at once; apply refactoring in stages
  • Clarify the intent of the code: Improve readability through naming and function splitting
  • Share refactoring policies within the team: Maintain consistency and keep development efficient
  • Use automated tests: Guarantee behavior after refactoring and proceed safely

Code changes continuously as the project grows. To control that change and evolve it into a better form, make refactoring a habit and build a “culture of maintaining maintainable code.”

