## Native distributed architecture

`Cluster` is now a first-class citizen of CabloyJS. In other words, a CabloyJS project is ready to be deployed in a clustered environment.

The original `Worker + Agent` process model of EggJS is very convenient on a single machine. However, in multi-machine clusters, especially Docker-based cluster deployments, the `Agent` process loses its usefulness. More importantly, if development relies on the `Agent` process from the beginning, it is difficult to transition smoothly to a distributed scenario later.

Therefore, the CabloyJS backend uses `Redis` to build a native distributed architecture from the bottom of the framework, and derives a series of distributed development components from it, such as `Broadcast`, `Queue`, `Schedule`, and `Startup`. This makes development distributed-ready from the beginning, so that when the system grows, cluster expansion can be done easily.

* See Also: [Broadcast](https://cabloy.com/articles/broadcast.html), [Queue](https://cabloy.com/articles/queue.html), [Schedule](https://cabloy.com/articles/schedule.html), [Startup](https://cabloy.com/articles/startup.html)

## Redis Config

Since CabloyJS has three built-in running environments, we need to configure different Redis parameters for each running environment.

> Take notice: The following are the default configurations, which generally do not need to be changed.
> Just ensure that the `host` and `port` conform to the actual values.

`src/backend/config/config.{env}.js`

``` javascript
// redis
const __redisConnectionDefault = {
  host: '127.0.0.1',
  port: 6379,
  password: '',
  db: 0,
};
const __redisConnectionDefaultCache = Object.assign({}, __redisConnectionDefault, {
  keyPrefix: `cache_${appInfo.name}:`,
});
const __redisConnectionDefaultIO = Object.assign({}, __redisConnectionDefault, {
  keyPrefix: `io_${appInfo.name}:`,
});
config.redisConnection = {
  default: __redisConnectionDefault,
  cache: __redisConnectionDefaultCache,
  io: __redisConnectionDefaultIO,
};
config.redis = {
  clients: {
    redlock: config.redisConnection.default,
    limiter: config.redisConnection.default,
    queue: config.redisConnection.default,
    broadcast: config.redisConnection.default,
    cache: config.redisConnection.cache,
    io: config.redisConnection.io,
  },
};
```

The code above defines multiple Redis client instances for different scenarios, such as `redlock`, `limiter`, and so on, but all of these client instances ultimately point to the same Redis service.

When traffic increases, you can point different client instances at different Redis services to share the system load.

* \__redisConnectionDefault

| Name | Description |
|----|----|
| host | IP address |
| port | Port |
| password | Password |
| db | Database index, default is `0` |

* config.redis.clients

| Name | Description |
|----|----|
| redlock | Distributed lock |
| limiter | Limiter |
| queue | Queue |
| broadcast | Broadcast |
| cache | Cache |
| io | SocketIO |
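As a minimal sketch of splitting the load, the snippet below keeps the same `clients` layout but routes queue/broadcast traffic and cache traffic to dedicated Redis services. The host names `redis-queue.internal` and `redis-cache.internal` are hypothetical stand-ins, and `appInfo` is stubbed here since the framework normally provides it:

``` javascript
// Stand-in for the appInfo object that the framework passes to the config function
const appInfo = { name: 'myapp' };

// Shared base connection parameters
const __redisConnectionDefault = {
  host: '127.0.0.1',
  port: 6379,
  password: '',
  db: 0,
};

// Hypothetical dedicated Redis service for queue and broadcast traffic
const __redisConnectionQueue = Object.assign({}, __redisConnectionDefault, {
  host: 'redis-queue.internal',
});

// Hypothetical dedicated Redis service for cache traffic
const __redisConnectionCache = Object.assign({}, __redisConnectionDefault, {
  host: 'redis-cache.internal',
  keyPrefix: `cache_${appInfo.name}:`,
});

const config = {};
config.redis = {
  clients: {
    redlock: __redisConnectionDefault,
    limiter: __redisConnectionDefault,
    queue: __redisConnectionQueue,
    broadcast: __redisConnectionQueue,
    cache: __redisConnectionCache,
    io: Object.assign({}, __redisConnectionDefault, {
      keyPrefix: `io_${appInfo.name}:`,
    }),
  },
};
```

Because each entry in `config.redis.clients` is just a connection object, the split is transparent to the components: `queue` and `broadcast` simply open connections to a different Redis service than `redlock` and `limiter`.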