
Parallelize TSConfig resolution per file to improve performance #554

@no-yan

Description


Currently, tsgolint resolves the TSConfig for each file sequentially. While the BFS inside each resolution is parallelized, this strategy leads to poor CPU utilization as the cache hit rate increases.
This issue proposes changing the parallelization strategy.

Note

Disclosure: I used some AI tools: Claude for the prototype, Gemini to improve my English.

Problem

In large repositories, TSConfig resolution takes more than 10 seconds.

  • Current Strategy: Sequential processing per file. Parallelism is applied only inside the BFS search for tsconfig.json.
  • Bottleneck: Once the cache is warm, the BFS returns almost instantly. This effectively makes the process single-threaded, leaving the other CPU cores idle.

Trace

The following trace (viewed in Elastic/Kibana) shows poor CPU utilization.
TSConfig resolution runs from 55 ms onwards, and in the latter half, single-threaded discovery persists for about 1.5 seconds.
[Trace screenshot]

Proposed changes

Instead of iterating through files sequentially, we should process them in parallel using a worker pool.

A secondary benefit is that per-file goroutine creation is eliminated, reducing scheduling overhead.
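
As a rough illustration, here is a minimal sketch of the worker-pool approach. It is not tsgolint's actual code: `resolveTSConfig` and `resolveAll` are hypothetical stand-ins for the existing per-file resolution step and the driver loop, and the worker count is simply set to GOMAXPROCS.

```go
package main

import (
	"runtime"
	"sync"
)

// resolveTSConfig is a placeholder for the existing cached per-file
// resolution step (BFS for the nearest tsconfig.json, cache lookup, etc.).
func resolveTSConfig(file string) string {
	// ...
	return ""
}

// resolveAll fans the file list out to a bounded pool of workers instead of
// iterating sequentially or spawning one goroutine per file.
func resolveAll(files []string) map[string]string {
	numWorkers := runtime.GOMAXPROCS(0)
	jobs := make(chan string)

	var mu sync.Mutex
	results := make(map[string]string, len(files))

	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for file := range jobs {
				cfg := resolveTSConfig(file)
				mu.Lock()
				results[file] = cfg
				mu.Unlock()
			}
		}()
	}

	for _, f := range files {
		jobs <- f
	}
	close(jobs)
	wg.Wait()
	return results
}
```

With a bounded pool, throughput stays high even when most resolutions are cache hits, since the work of iterating the file list itself is spread across cores.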

Future work

While this change provides an immediate performance boost, even better solutions could be explored in the future, such as:

  • Pipelining the linting process to improve overall throughput.
  • Inverting the resolution logic: instead of finding a TSConfig for each file, we could identify the files belonging to each TSConfig (see the sketch after this list).
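
For the inverted direction, a hypothetical sketch (not an existing tsgolint API) of the basic idea: discover tsconfig.json locations once, then assign each source file to its nearest enclosing config directory, instead of resolving a config per file. A real implementation would also have to honor include/exclude/files and project references.

```go
package main

import (
	"path/filepath"
	"sort"
	"strings"
)

// groupFilesByTSConfig maps each discovered tsconfig.json directory to the
// source files that fall under it, picking the nearest enclosing directory.
func groupFilesByTSConfig(configDirs []string, files []string) map[string][]string {
	// Sort config directories longest-first so the deepest (nearest) ancestor wins.
	sort.Slice(configDirs, func(i, j int) bool {
		return len(configDirs[i]) > len(configDirs[j])
	})

	groups := make(map[string][]string)
	for _, f := range files {
		for _, dir := range configDirs {
			if strings.HasPrefix(f, dir+string(filepath.Separator)) {
				groups[dir] = append(groups[dir], f)
				break
			}
		}
	}
	return groups
}
```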
