lua-resty-upstream-plus
Name
lua-resty-upstream-plus
Status
This library is currently considered production ready.
Description
This Lua library provides dynamic upstream load balancing with built-in health checking, and is intended to be used with balancer_by_lua*.
Synopsis
nginx.conf
http {
    # shared dictionary is required for upstream healthcheck
    lua_shared_dict healthcheck 64m;
}
resty.upstream
lua_package_path "/path/to/lua-resty-upstream/lib/?.lua;;";

upstream backend {
    # placeholder address; the actual peer is chosen in balancer_by_lua
    server 0.0.0.1;

    balancer_by_lua_block {
        local upstream = ngx.ctx.upstream
        upstream:balancer(ngx.ctx.algorithm, ngx.ctx.key)
    }
}
server {
    location /t {
        rewrite_by_lua_block {
            local resty_upstream = require "resty.upstream"

            local nodes = {
                { id = "server1", ip = "127.0.0.1", port = 1990, weight = 4 },
                { id = "server2", ip = "127.0.0.1", port = 1991, weight = 2 },
                { id = "server3", ip = "127.0.0.1", port = 1992, weight = 1 },
            }

            local checker_opts = {
                enable = 1,
                shm = "healthcheck",
                app_id = "app_id",
                upstream_id = "upstream_id",
                get_latest_version = function ()
                    return "version"
                end,
                type = "http",
                http_req = "GET /status HTTP/1.0\r\nHost: foo.com\r\n\r\n",
                interval = 1,
                timeout = 1 * 1000,
                fall = 2,
                rise = 2,
                -- valid_statuses = {200, 302},
                concurrency = 1,
            }

            ngx.ctx.upstream = resty_upstream.init("test", "v001", nodes, checker_opts)
            ngx.ctx.algorithm = "chash"
            ngx.ctx.key = ngx.var.uri
        }

        proxy_pass http://backend;
    }
}
}
resty.healthcheck
It can also be used standalone.
local healthcheck = require "resty.healthcheck"

local nodes = {
    { id = "xx", ip = "120.24.93.123", port = 89, weight = 1 },
}

local checker_opts = {
    enable = true,
    shm = "healthcheck",
    get_latest_version = function()
        return "cluster.version"
    end,
    type = "http",               -- tcp, tls, http, https, mysql, postgresql
    http_req_host = "openresty.org", -- server name for the SSL handshake
    http_req = "GET /cn/ HTTP/1.0\r\nHost: openresty.org\r\n\r\n",
    resp_body_match = "^match$", -- optional; checks whether the HTTP response body matches
    valid_statuses = {200, 302}, -- optional
    interval = 1,                -- health check interval (seconds)
    timeout = 1 * 1000,          -- ms
    fall = 2,
    rise = 2,
    concurrency = 1,             -- concurrency for health checking servers in a single group
    report_status = function(nodes_status)
        ngx.log(ngx.ERR, require("cjson").encode(nodes_status))
    end,
    report_interval = 10,        -- seconds
}

local health_version = nil
local version = "version"

local checker = {
    opts = checker_opts,
    u_version = version,
    nodes = nodes,
    org_nodes = nodes, -- only used in fetch_health_nodes
    check_time = 0,
    is_checking = false,
    health_version = health_version or 0,
}

local checkers = { checker }
healthcheck.batch_check(checkers)
ngx.sleep(0.1)
Methods
init
syntax: upstream = resty_upstream.init(name, version, nodes, checker_opts)
resty_upstream.init("test", "v001", nodes, checker_opts)
- name: This is a string parameter that represents the identifier or name of the upstream group. It’s used to distinguish different upstream configurations.
- version: This is a string that represents the version of the upstream configuration. It’s used to track changes to the upstream configuration over time.
- nodes: This is a table (array) containing information about the backend servers in the upstream. Each node in the array is a table with properties describing the server:
- id: A unique identifier for the server
- ip: The IP address of the server
- port: The port number on which the server is listening
- weight: The weight assigned to this server (for weighted load balancing algorithms)
- checker_opts: This is a table containing opts for health checking of the upstream servers. When provided, it enables active health checking of the backend servers. The table can contain the following fields:
- enable: Whether health checking is enabled (1/0 or true/false)
- shm: The name of the shared memory zone used for health checking (must match the lua_shared_dict directive in nginx.conf)
- shm_report: The name of the shared memory zone used for health check status reports.
- report_interval: The interval (in seconds) at which health check results are reported to shared memory. Defaults to 180 seconds (3 minutes).
- app_id and upstream_id: Identifiers for the application and upstream
- get_latest_version: A function that returns the current version of the upstream configuration
- keepalive: If set to true, the health checker will use keepalive connections for http and https.
- resp_body_match: Optional; a pattern that the response body must match for the server to be considered healthy.
- sni: The server name to send during the TLS handshake; required for tls health checks.
- ssl_verify: Whether to enable SSL certificate verification.
- db_database: The database name for mysql and postgresql health checks.
- db_user: The database user name for mysql and postgresql health checks.
- db_password: The database password for mysql and postgresql health checks.
- db_ssl: Whether to use SSL for the database connection.
- db_ssl_verify: Whether to verify the database server's SSL certificate.
- db_sql: The SQL query to execute for mysql and postgresql health checks.
- db_keepalive: Whether to use keepalive connections for mysql and postgresql health checks.
- db_keepalive_timeout: The timeout (in seconds) for idle keepalive connections (60 seconds by default).
- db_result_match: A string that must be contained in the result of the SQL query for the server to be considered healthy.
- type: The type of health check.
- tcp
- tls
- http
- https
- mysql
- postgresql
- http_req_host: The server name to use in the SSL handshake.
- http_req: The HTTP request string to send during HTTP health checks.
- interval: The interval (in seconds) between health checks.
- timeout: The timeout (in milliseconds) for health check requests.
- fall: The number of consecutive failures required to mark a server as unhealthy.
- rise: The number of consecutive successes required to mark a server as healthy.
- concurrency: The number of concurrent health check connections.
- valid_statuses: An array of HTTP status codes that indicate a healthy server.
- disable_errmsg: If set, health check error messages will not be stored in the shm_report shared memory zone.
local nodes = {
    { id = "xx", ip = "127.0.0.1", port = 80, weight = 1 },
    { id = "xxx", ip = "127.0.0.1", port = 8080, weight = 2 },
}

local checker_opts = {
    enable = 1,
    shm = "healthcheck",
    app_id = "app_id",
    upstream_id = "upstream_id",
    get_latest_version = function ()
        return "version"
    end,
    type = "http",
    http_req = "GET /status HTTP/1.0\r\nHost: foo.com\r\n\r\n",
    interval = 1,
    timeout = 1 * 1000,
    fall = 2,
    rise = 2,
    -- valid_statuses = {200, 302},
    concurrency = 1,
}

resty_upstream.init("test", "v001", nodes, checker_opts)
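For the database check types, the db_* fields documented above apply. Below is a minimal sketch of a mysql health check configuration; the database name, credentials, and query are placeholders, not values mandated by the library:

local db_checker_opts = {
    enable = 1,
    shm = "healthcheck",
    get_latest_version = function ()
        return "version"
    end,
    type = "mysql",
    db_database = "test_db",    -- placeholder database name
    db_user = "monitor",        -- placeholder user
    db_password = "secret",     -- placeholder password
    db_sql = "SELECT 1",        -- query executed against each node
    db_result_match = "1",      -- string expected in the query result
    db_keepalive = true,
    db_keepalive_timeout = 60,  -- seconds
    interval = 1,
    timeout = 1 * 1000,         -- ms
    fall = 2,
    rise = 2,
    concurrency = 1,
}

resty_upstream.init("db-backend", "v001", nodes, db_checker_opts)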
balancer
syntax: upstream:balancer(algorithm, key, ctx, host?, sticky?)
The balancer method is used to select an appropriate backend server from the upstream based on the specified load balancing algorithm and parameters. This method is typically called within a balancer_by_lua_block to dynamically route requests to backend servers.
Parameters:
- algorithm: A string that specifies the load balancing algorithm to use. Supported algorithms include:
- chash: Consistent hash algorithm, which selects a backend server based on the hash of the key.
- hash: Simple hash algorithm, which also selects a backend based on the hash of the key.
- roundrobin: Round-robin algorithm, which distributes requests evenly among backend servers.
- key: A string used as input for the hash algorithms (chash or hash). For these algorithms, the key determines which backend server will be selected. This could be a URI, IP address, or other request-specific value.
- ctx: A table containing context information used during the load balancing process. This is typically ngx.ctx or a custom context table that may carry additional information needed for the balancing decision.
- host (optional): A string representing the Host header value to be used in the request to the backend server. If specified, it can be used to override the original Host header.
- sticky (optional): A table containing sticky session configuration. When specified, it ensures that requests from the same client are consistently routed to the same backend server.
The method will set the appropriate nginx variables to direct the request to the selected backend server. It works with nginx’s internal balancer mechanism to accomplish this.
upstream:balancer(ngx.ctx.algorithm, ngx.ctx.key)
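As a sketch, the same call can also pass the request context and an explicit Host header via the optional parameters; the host value below is a placeholder, not something the library requires:

balancer_by_lua_block {
    local upstream = ngx.ctx.upstream
    -- consistent hashing on a request-specific key; ngx.ctx is passed as the
    -- balancing context and "backend.example.com" is a placeholder Host override
    upstream:balancer("chash", ngx.ctx.key, ngx.ctx, "backend.example.com")
}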
init_worker
syntax: init_worker(run_interval, check_interval)
The init_worker method is used to initialize worker processes and set up background health checks for upstream servers. This method should be called in the init_worker_by_lua_block context to properly initialize the health checking mechanism and related timers.
Parameters:
- run_interval: An optional number specifying the interval (in seconds) for running health checks. Defaults to 1 second if not provided.
- check_interval: An optional number specifying the interval (in seconds) for checking the status of upstream servers. This controls how frequently the system polls server status.
When called, this method initializes the background health checking process which periodically verifies the availability of upstream servers according to the intervals specified. The health check mechanism helps ensure that only healthy servers are included in the load balancing process.
Example:
init_worker(1, 5) -- Run health checks every 1 second, check server status every 5 seconds
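A minimal sketch of calling this from nginx.conf, assuming init_worker is exported by the resty.upstream module:

# nginx.conf
http {
    init_worker_by_lua_block {
        local resty_upstream = require "resty.upstream"
        -- run health checks every 1 second, poll server status every 5 seconds
        resty_upstream.init_worker(1, 5)
    }
}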
enable_strict_health_check
syntax: enable_strict_health_check(flag)
The enable_strict_health_check method is used to enable or disable strict health checking mode for upstream servers.
When enabled, only online (healthy) upstream nodes will be used for load balancing,
filtering out nodes that are marked as down by the health check mechanism.
When disabled, unhealthy upstream nodes will still be used for load balancing if there are no online nodes available.
Example:
upstream.enable_strict_health_check(true) -- Enable strict health checking
set_max_running_health_timer
syntax: set_max_running_health_timer(num)
The set_max_running_health_timer method sets the maximum number of concurrent health check timers that can run simultaneously. This allows for control over system resources used by the health checking mechanism, preventing excessive resource consumption when monitoring a large number of upstream servers.
Parameters:
num: A positive integer specifying the maximum number of concurrent health check timers allowed. This limits how many health check operations can be performed simultaneously.
This method is particularly useful when managing a large number of upstream servers, as it helps prevent resource exhaustion by limiting the number of concurrent health check operations. Setting an appropriate value helps balance between liveness (checking server health frequently) and system resource usage.
Example:
upstream.set_max_running_health_timer(10) -- Limit to 10 concurrent health check timers
Installation
First you need to configure the lua_package_path directive
to add the path of your lua-resty-upstream source tree to ngx_lua’s LUA_PATH search
path, as in
# nginx.conf
http {
    lua_package_path "/path/to/lua-resty-upstream/lib/?.lua;;";

    # shared dictionary is required for upstream healthcheck
    lua_shared_dict healthcheck 64m;
    ...
}
Ensure that the system account running your Nginx worker processes has
enough permission to read the .lua files.
Copyright and License
Copyright (C) 2022 ~ 2025 by OpenResty Inc. All rights reserved.
License: Proprietary.
See Also
- the ngx_lua module: http://wiki.nginx.org/HttpLuaModule