The Temporal Server consists of four independently scalable services:
Frontend gateway: rate limiting, routing, and authorization
History subsystem: maintains data (mutable state, queues, and timers)
Matching subsystem: hosts Task Queues for dispatching
Worker Service: runs internal background Workflows
The Frontend Service is stateless; it requires no sharding or partitioning.
For example, a real life production deployment can have 5 Frontend, 15 History, 17 Matching, and 3 Worker services per cluster.
Each service is aware of the others, including scaled instances, through a membership protocol via Ringpop.
Types of inbound calls include the following:
Admin operations via the CLI
Multi-cluster Replication related calls from a remote Cluster
Every inbound request related to a Workflow Execution must have a Workflow Id, which is hashed for routing purposes.
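A minimal Go sketch of this routing idea, using an FNV hash and an illustrative shard count (Temporal's actual hash function and shard count are deployment details, not shown here):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor shows how a Workflow Id can be hashed onto a fixed number
// of History shards. The same Workflow Id always routes to the same
// shard, which is what keeps a Workflow Execution's state on one owner.
func shardFor(workflowID string, numShards int) int {
	h := fnv.New32a()
	h.Write([]byte(workflowID))
	return int(h.Sum32()) % numShards
}

func main() {
	// Deterministic: repeated requests for the same Id hit the same shard.
	fmt.Println(shardFor("order-12345", 512) == shardFor("order-12345", 512))
}
```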
The Frontend Service has access to the hash rings that maintain service membership information,
including how many nodes (instances of each service) are in the Cluster.
The Frontend service talks to the Matching service, History service, Worker service, the database, and Elasticsearch (if in use).
It uses the grpcPort 7233 to host the service handler.
It uses port 6933 for membership related communication.
The History Service tracks the state of Workflow Executions.
The History Service scales horizontally via individual shards, configured during the Cluster's creation.
Each shard maintains data (routing Ids, mutable state) and queues.
There are three types of queues that a History shard maintains:
Transfer queue: This is used to transfer internal tasks to the Matching Service. Whenever a new Workflow Task needs to be scheduled, the History Service transactionally dispatches it to the Matching Service.
Timer queue: This is used to durably persist Timers.
Replicator queue: This is used only for the experimental Multi-Cluster feature.
The History service talks to the Matching Service and the Database.
The Matching Service is responsible for hosting Task Queues for Task dispatching.
It talks to the Frontend service, History service, and the database.
It is responsible for matching Workers to Tasks and routing new tasks to the appropriate queue.
This service can scale internally by having multiple instances.
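The matching job can be sketched with a buffered Go channel standing in for one in-memory Task Queue: History enqueues tasks and Worker pollers take them off. The real service backs this with persistence and long-poll RPCs; the function names here are illustrative:

```go
package main

import "fmt"

// enqueue models History handing a task to the Matching Service.
func enqueue(q chan string, task string) {
	q <- task
}

// poll models a Worker's poller receiving the next available task.
func poll(q chan string) string {
	return <-q
}

func main() {
	taskQueue := make(chan string, 10) // one Task Queue partition, in memory
	enqueue(taskQueue, "workflow-task:order-12345")
	fmt.Println(poll(taskQueue))
}
```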
The Worker Service runs background processing for the replication queue and system Workflows.
It talks to the Frontend service.
The database stores the following types of data:
Tasks: Tasks to be dispatched.
State of Workflow Executions:
Execution table: A capture of the mutable state of Workflow Executions.
History table: An append only log of Workflow Execution History Events.
Namespace metadata: Metadata of each Namespace in the Cluster.
Visibility data: Enables operations like "show all running Workflow Executions".
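The Execution table and History table above embody two different persistence shapes: a mutable-state snapshot overwritten in place, next to an append-only event log. A sketch of that distinction, with field names that are illustrative rather than Temporal's actual schema:

```go
package main

import "fmt"

// historyEvent is one entry in the append-only log.
type historyEvent struct {
	eventID int
	kind    string
}

// execution pairs mutable state (overwritten in place) with an
// append-only history (events are never rewritten).
type execution struct {
	workflowID string
	status     string
	history    []historyEvent
}

func (e *execution) apply(kind string) {
	e.history = append(e.history, historyEvent{eventID: len(e.history) + 1, kind: kind})
	if kind == "WorkflowExecutionCompleted" {
		e.status = "Completed"
	}
}

func main() {
	e := &execution{workflowID: "order-12345", status: "Running"}
	e.apply("WorkflowExecutionStarted")
	e.apply("WorkflowTaskScheduled")
	e.apply("WorkflowExecutionCompleted")
	fmt.Println(e.status, len(e.history))
}
```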
For production environments, we recommend using Elasticsearch.