diff --git a/README.md b/README.md index 73f08419..4da0389a 100644 --- a/README.md +++ b/README.md @@ -1,20 +1,18 @@ -BOSH Release for haproxy -=========================== +# BOSH Release for HAProxy Questions? Pop in our [slack channel](https://cloudfoundry.slack.com/messages/haproxy-boshrelease/)! -This BOSH release is an attempt to get a more customizable/secure haproxy release than what +This BOSH release is an attempt to get a more customizable/secure HAProxy release than what is provided in [cf-release](https://github.com/cloudfoundry/cf-release). It allows users to -blacklist internal-only domains, preventing potential Host header spoofing from allowing -unauthorized access of internal APIs. It also allows for better control over haproxy's +blocklist internal-only domains, preventing potential Host header spoofing from allowing +unauthorized access of internal APIs. It also allows for better control over HAProxy's timeouts, for greater resiliency under heavy load. -Usage ------ +## Usage To deploy this BOSH release: -``` +```bash git clone https://github.com/cloudfoundry-community/haproxy-boshrelease.git cd haproxy-boshrelease @@ -27,6 +25,16 @@ bosh deploy manifests/haproxy.yml \ To make alterations to the deployment you can use the `bosh deploy [-o operator-file.yml]` flag to provide [operations files](https://bosh.io/docs/cli-ops-files.html). 
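As an illustration of the `-o` flag, an operations file is a small YAML document of patch operations applied to the base manifest. The file name and property below are hypothetical; adjust the `path` to match the job properties in your manifest:

```yaml
# ops/custom-timeouts.yml (hypothetical example file)
# Patches the haproxy job's properties in manifests/haproxy.yml.
- type: replace
  path: /instance_groups/name=haproxy/jobs/name=haproxy/properties/ha_proxy/connect_timeout?
  value: 10
```

It would then be applied with `bosh deploy manifests/haproxy.yml -o ops/custom-timeouts.yml`.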
+## Documentation + +- [External Certificates](/docs/external_certs.md) - Using HAProxy with additional external certificates +- [Mutual TLS](/docs/mutual_tls.md) - Mutual TLS configuration +- [Rate Limiting](/docs/rate_limiting.md) - Client IP based rate limiting +- [Keepalived](/docs/keepalived.md) - Keepalived integration for high availability +- [Core Dumps](/docs/coredumps.md) - Enabling core dumps for HAProxy debugging +- [Dependency Updates](/docs/version-bumps.md) - How to bump dependency versions +- [Release Process](/docs/release-process.md) - How to create a new release + ## Development Feel free to contribute back to this via a pull request on a feature branch! Once merged, we'll @@ -35,11 +43,11 @@ cut a new final release for you. ### Unit Tests and Linting #### PR Validation -PRs will be automatically tested by https://concourse.arp.cloudfoundry.org/teams/main/pipelines/haproxy-boshrelease once a maintainer has labelled the PR with the `run-ci` label +PRs will be automatically tested by https://concourse.arp.cloudfoundry.org/teams/main/pipelines/haproxy-boshrelease once a maintainer has labeled the PR with the `run-ci` label #### Local Test Execution -Unit/rspec Tests and linters can be run locally to verify correct functionality before pushing to the CI system. -If you change any erb logic in the jobs directory please add a corresponding test to `spec`. +Unit/RSpec tests and linters can be run locally to verify correct functionality before pushing to the CI system. +If you change any ERB logic in the jobs directory, please add a corresponding test to `spec`. ```bash # install the necessary dependencies, once @@ -68,12 +76,8 @@ bundle exec guard ``` #### Test Debugging -Unit/rspec Tests can also be debugged/stepped through when needed. See for example the [VSCode rdbg Ruby Debugger](https://marketplace.visualstudio.com/items?itemName=KoichiSasada.vscode-rdbg) extension. 
You can follow the "Launch without configuration" instructions for the extension, just set the "Debug command line" input to `bundle exec rspec `. +Unit/RSpec tests can also be debugged/stepped through when needed. See, for example, the [VSCode rdbg Ruby Debugger](https://marketplace.visualstudio.com/items?itemName=KoichiSasada.vscode-rdbg) extension. You can follow the "Launch without configuration" instructions for the extension; just set the "Debug command line" input to `bundle exec rspec `. -### Acceptance tests +### Acceptance Tests See [acceptance-tests README](/acceptance-tests/README.md). - -### Certificate reloads during runtime - -See [external_certs README](/docs/external_certs.md) diff --git a/docs/coredumps.md b/docs/coredumps.md new file mode 100644 index 00000000..611ea1f5 --- /dev/null +++ b/docs/coredumps.md @@ -0,0 +1,67 @@ +# Enabling Core Dumps for HAProxy + +When debugging crashes or unexpected behavior in HAProxy, it can be useful to enable core dumps for post-mortem analysis. + +## Required Changes + +Enabling core dumps requires a few modifications to the BOSH release. + +Depending on the BPM (BOSH Process Manager) version, BPM needs to be either configured or disabled: + +### 1a. Disable BPM (BPM <= 1.4.29) + +BPM <= 1.4.29 restricts the process environment in ways that prevent core dumps from being written.
To work around this, the monit configuration must be changed to manage HAProxy directly via `haproxy_wrapper` instead of BPM: + +- **Start program** in `jobs/haproxy/monit`: Change from `/var/vcap/jobs/bpm/bin/bpm start haproxy` to `/var/vcap/jobs/haproxy/bin/haproxy_wrapper` +- **Stop program** in `jobs/haproxy/monit`: Change from `/var/vcap/jobs/bpm/bin/bpm stop haproxy` to `/bin/bash -c 'kill $(cat /var/vcap/sys/run/haproxy/haproxy.pid)'` +- **PID file** in `jobs/haproxy/monit`: Change from `/var/vcap/sys/run/bpm/haproxy/haproxy.pid` to `/var/vcap/sys/run/haproxy/haproxy.pid` +- **PID file** in `jobs/haproxy/templates/drain.erb`: Change the `pidfile=` variable from `/var/vcap/sys/run/bpm/haproxy/haproxy.pid` to `/var/vcap/sys/run/haproxy/haproxy.pid` +- **PID file** in `jobs/haproxy/templates/reload.erb`: Change the `pidfile=` variable from `/var/vcap/sys/run/bpm/haproxy/haproxy.pid` to `/var/vcap/sys/run/haproxy/haproxy.pid` + +### 1b. Set BPM core_file_size limit (BPM > 1.4.29) + +BPM > 1.4.29 allows setting the core dump file size limit. In `jobs/haproxy/templates/bpm.yml`, add `core_file_size` to the existing `limits:` block (alongside `open_files`): + +```yaml + limits: + open_files: <%= p("ha_proxy.max_open_files") %> + core_file_size: 1073741824 +``` + +### 2. Configure the HAProxy wrapper script + +The following must be added to `haproxy_wrapper.erb` before HAProxy is started: + +```bash +ulimit -c unlimited # Allow unlimited core dump file size +ulimit -n 256000 # Ensure sufficient file descriptors +echo /var/vcap/data/haproxy/core.%e.%p.%t > /proc/sys/kernel/core_pattern +``` + +The core pattern places dumps in `/var/vcap/data/haproxy/` with the filename format `core.<executable>.<pid>.<timestamp>`. + +### 3. Enable `set-dumpable` in HAProxy config + +Add the `set-dumpable` directive to the `global` section in `haproxy.config.erb`. This is required because HAProxy drops privileges to the `vcap` user after startup, which by default causes the kernel to disable core dumps.
`set-dumpable` calls `prctl(PR_SET_DUMPABLE, 1)` to re-enable them. + +## Analyzing Core Dumps + +Core dump files are written to `/var/vcap/data/haproxy/`. + +To analyze a core dump, use `gdb` with the HAProxy binary and the core file: + +```bash +gdb /var/vcap/packages/haproxy/bin/haproxy /var/vcap/data/haproxy/core.<executable>.<pid>.<timestamp> +``` + +Useful GDB commands once loaded: + +``` +bt # Print backtrace of the crashing thread +bt full # Print backtrace with local variables +info threads # List all threads +thread <n> # Switch to thread N +thread apply all bt # Print backtraces for all threads +``` + +> **Note:** For meaningful stack traces, HAProxy should be compiled with debug symbols. To enable this, add `DEBUG_CFLAGS="-g"` to the `make` command in `packages/haproxy/packaging`. Without debug symbols, the backtrace will show only addresses without function names. \ No newline at end of file diff --git a/docs/external_certs.md b/docs/external_certs.md index 3ad6b690..36e2260b 100644 --- a/docs/external_certs.md +++ b/docs/external_certs.md @@ -1,35 +1,35 @@ # Using HAProxy with additional External Certificates -By default, the HAproxy BOSH manifest contains all certificates to be used during runtime. +By default, the HAProxy BOSH manifest contains all certificates to be used during runtime. The way to pass the certificates can be either via the `ha_proxy.ssl_pem` property that sets one chain for -the hostname HAproxy is running on. Or they can be passed via the `ha_proxy.crt_list` property, which is essentially +the hostname HAProxy is running on. Or they can be passed via the `ha_proxy.crt_list` property, which is essentially a list of `ssl_pem` properties that allows to configure multiple entries for different hostnames using SNI. ## What are External Certificates and why are they needed?
-If you are planning to use more than one certificate on your HAproxy you are most likely going to use the `ha_proxy.crt_list` +If you are planning to use more than one certificate on your HAProxy, you are most likely going to use the `ha_proxy.crt_list` property. The main use-case for this property is to register different trust configurations for different hosts. -For example, if your HAproxy services both the secure.example.com and www.example.com hosts, they both might have different +For example, if your HAProxy services both the secure.example.com and www.example.com hosts, they both might have different requirements towards security. One could be using mTLS and a more secure certificate than the other or they could be using different CAs. The problems start when you are using a very large `ha_proxy.crt_list` with dozens or even hundreds of entries while using BOSH to deploy them. The way BOSH works is that all certificates will become part of the manifest during rendering and those certificates will then be extracted -from the manifest and onto the HAproxy disk during deployment. If the manifest becomes very large (> 20M) the time BOSH needs to render and deploy increases significantly. At the same time, providing your customers the capability to register custom domains and certificates tends to be a very dynamic process, i.e. you never know when a customer will register a domain and upload a certificate to deploy but you will want to deploy the new certificate as quickly as possible so the customer can use it right away. Using the given way, you'll end up deploying HAproxy all the time. The major downside of this is that every time you deploy HAproxy, there will be a brief moment where the old process exits and the new process has not yet started. This will drop all existing connections to HAproxy and any client connected at that moment will receive a disruption. +from the manifest and onto the HAProxy disk during deployment.
If the manifest becomes very large (> 20 MB), the time BOSH needs to render and deploy increases significantly. At the same time, giving your customers the ability to register custom domains and certificates tends to be a very dynamic process: you never know when a customer will register a domain and upload a certificate, but you will want to deploy the new certificate as quickly as possible so the customer can use it right away. With this approach, you'll end up redeploying HAProxy all the time. The major downside is that every time you deploy HAProxy, there is a brief moment where the old process has exited and the new process has not yet started. This drops all existing connections to HAProxy, and any client connected at that moment will experience a disruption. How can this dilemma be solved? External certificates to the rescue! ## How does it all work? -The HAproxy BOSH release provides an additional property `ha_proxy.ext_crt_list` that enables the use of a second source of certificates. -When used, HAproxy will expect an additional `crt-list` file to be present in a specific folder (by default: `/var/vcap/jobs/haproxy/config/ssl/ext`). -If the file exists its contents will be merged with the existing certificates from the manifest before HAproxy is started. -Since the list of certificates is now provided by two decoupled sources, those sources need to be synchronized in order to avoid starting HAproxy with an incomplete set of certificates. During startup, HAproxy will wait for the second `crt-list` file to appear. This allows an external service (e.g. another BOSH release) to generate the file and place it in the directory where it is expected. +The HAProxy BOSH release provides an additional property `ha_proxy.ext_crt_list` that enables the use of a second source of certificates. +When used, HAProxy will expect an additional `crt-list` file to be present in a specific folder (by default: `/var/vcap/jobs/haproxy/config/ssl/ext`).
+If the file exists, its contents will be merged with the existing certificates from the manifest before HAProxy is started. +Since the list of certificates is now provided by two decoupled sources, those sources need to be synchronized in order to avoid starting HAProxy with an incomplete set of certificates. During startup, HAProxy will wait for the second `crt-list` file to appear. This allows an external service (e.g. another BOSH release) to generate the file and place it in the directory where it is expected. -At runtime, when a new certificate needs to be added, the external service can simply update the second `crt-list` file and trigger a [hitless reload](https://www.haproxy.com/blog/hitless-reloads-with-haproxy-howto/) of HAproxy using the `/var/vcap/jobs/haproxy/bin/reload` command. No connections will be dropped. +At runtime, when a new certificate needs to be added, the external service can simply update the second `crt-list` file and trigger a [hitless reload](https://www.haproxy.com/blog/hitless-reloads-with-haproxy-howto/) of HAProxy using the `/var/vcap/jobs/haproxy/bin/reload` command. No connections will be dropped. -Depending on your configuration, HAproxy will refuse to start without external certificates or it will continue without them after a timeout. +Depending on your configuration, HAProxy will refuse to start without external certificates, or it will continue without them after a timeout.
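For orientation, the external file uses HAProxy's standard crt-list format: one certificate path per line, optionally followed by SSL options in square brackets and the SNI filters the entry serves. A sketch with made-up paths and hostnames:

```
# Hypothetical entries for /var/vcap/jobs/haproxy/config/ssl/ext/crt-list
/var/vcap/data/external-certs/www-example.pem www.example.com
/var/vcap/data/external-certs/secure-example.pem [verify required ca-file /var/vcap/data/external-certs/client-ca.pem] secure.example.com
```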
-## Configuring HAproxy to use External Certificates +## Configuring HAProxy to use External Certificates The feature is controlled by these properties: ``` @@ -42,10 +42,10 @@ ha_proxy.ext_crt_list_file: The location from which to load additional external certificates list default: "/var/vcap/jobs/haproxy/config/ssl/ext/crt-list" ha_proxy.ext_crt_list_timeout: - Timeout (in seconds) to wait for the external certificates list located at `ha_proxy.ext_crt_list_file` to appear during HAproxy startup + Timeout (in seconds) to wait for the external certificates list located at `ha_proxy.ext_crt_list_file` to appear during HAProxy startup default: 60 ha_proxy.ext_crt_list_policy: What to do if the external certificates list located at `ha_proxy.ext_crt_list_file` does not appear within the time - denoted by `ha_proxy.ext_crt_list_timeout`. Set to either 'fail' (HAproxy will not start) or 'continue' (HAproxy will start without external certificates) + denoted by `ha_proxy.ext_crt_list_timeout`. Set to either 'fail' (HAProxy will not start) or 'continue' (HAProxy will start without external certificates) default: "fail" ``` diff --git a/docs/keepalived.md b/docs/keepalived.md index 82c4d50f..78249811 100644 --- a/docs/keepalived.md +++ b/docs/keepalived.md @@ -1,32 +1,32 @@ -# Purpose of keepalived implementation +# Purpose of Keepalived Implementation -Adding support for [keepalived](http://www.keepalived.org/documentation.html) to enable high availability in an haproxy deployment, leveraging the https://en.wikipedia.org/wiki/Virtual_Router_Redundancy_Protocol more formally [RFC 5798](https://tools.ietf.org/html/rfc5798). 
See [keep-alived man page](https://linux.die.net/man/5/keepalived.conf) for more precision over capabilities of keep-alived as well as the [keep-alived user manual](http://www.keepalived.org/pdf/UserGuide.pdf) +This adds support for [keepalived](http://www.keepalived.org/documentation.html) to enable high availability in an HAProxy deployment, leveraging the [Virtual Router Redundancy Protocol](https://en.wikipedia.org/wiki/Virtual_Router_Redundancy_Protocol), formally specified in [RFC 5798](https://tools.ietf.org/html/rfc5798). See the [keepalived man page](https://linux.die.net/man/5/keepalived.conf) for more details on keepalived capabilities, as well as the [keepalived user manual](http://www.keepalived.org/pdf/UserGuide.pdf). -This enables declaring an virtual IP (``keepalived.vip``) that will automatically fail over between the multiple haproxy VMs: the master will initially be the bosh vm for haproxy job instance 0. The default IP addresses assigned by bosh to vms on eth0 are used within the VRRP protocol. +This enables declaring a virtual IP (`keepalived.vip`) that will automatically fail over between the multiple HAProxy VMs: the master will initially be the BOSH VM for HAProxy job instance 0. The default IP addresses assigned by BOSH to VMs on eth0 are used within the VRRP protocol. -Prereqs: - * The haproxy VMs must be within the same broadcast domain, i.e. receive multicast traffic sent to the 224.0.0.18 broadcast and IP protocol number 112. -* The clients using this VIP must be within the [same broadcast domain](https://en.wikipedia.org/wiki/Broadcast_domain) as the haproxy vms and accepting ARP gratuitious. +Prerequisites: + * The HAProxy VMs must be within the same broadcast domain, i.e. receive multicast traffic sent to the multicast address 224.0.0.18 with IP protocol number 112. + * The clients using this VIP must be within the [same broadcast domain](https://en.wikipedia.org/wiki/Broadcast_domain) as the HAProxy VMs and must accept gratuitous ARP.
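Conceptually, the rendered keepalived configuration behaves like the following sketch (illustrative only; the actual template is rendered by the job from `keepalived.vip` and the other properties):

```
# Hypothetical keepalived.conf sketch, not the exact rendered template
vrrp_script check_haproxy {
    script "pidof haproxy"    # health check; on failure the priority drops
    interval 2                # 2s check period (currently hardcoded)
}

vrrp_instance haproxy_vip {
    interface eth0            # VRRP runs over the VMs' eth0 addresses
    virtual_router_id 1
    priority 101              # instance 0 starts with the highest priority (master)
    advert_int 1              # 1s VRRP advertisement interval
    virtual_ipaddress {
        10.234.250.201        # the configured keepalived.vip
    }
    track_script {
        check_haproxy
    }
}
```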
-# This feature has been successfully tested on the following IAAS : +# This feature has been successfully tested on the following IaaS: * Cloudstack w/ XenServer -# Limitations and future enhancements -* logs collection and monitoring/alerting : keepalived logs are sent to syslog and can t be retrieved using `bosh logs` you have to tail /var/log/syslog to get info -* Health check period is hardcoded to 2s : we will add parameter for this -* mcast_src_ip @IP is 224.0.0.18 : we will add parameter for this -* Not yet email notification : we will add parameter for this -* Hardcoded VRRP advertisement to 1 S (advert_int) triggering a new VRRP election and fail over. Not yet drain script handling to prevent downtime while bosh upgrades. -* For the moment, KeepAlived is configured to use broadcast for network communication between nodes. Future versions will be able to use unicast to expose a VIP or control a distinct SDN system such as an AWS ElasticIP (through custom VRRP failover notification scripts) +# Limitations and Future Enhancements +* Log collection and monitoring/alerting: keepalived logs are sent to syslog and cannot be retrieved using `bosh logs`. You have to tail `/var/log/syslog` to get info. +* Health check period is hardcoded to 2s. We will add a parameter for this. +* mcast_src_ip address is 224.0.0.18. We will add a parameter for this. +* No email notification yet. We will add a parameter for this. +* Hardcoded VRRP advertisement interval to 1s (advert_int), triggering a new VRRP election and failover. No drain script handling yet to prevent downtime while BOSH upgrades. +* For the moment, keepalived is configured to use broadcast for network communication between nodes. Future versions will be able to use unicast to expose a VIP or control a distinct SDN system such as an AWS Elastic IP (through custom VRRP failover notification scripts). 
-# testing -## First verification -* after setting up keepalived.vip parameter, connect to the instance with index 0 of your AZ. BOSH will configure this one as master -* run `sudo ip a` -* you should see the VIP (in example above, VIP is set as 10.234.250.201) +# Testing +## First Verification +* After setting up the `keepalived.vip` parameter, connect to the instance with index 0 of your AZ. BOSH will configure this one as master. +* Run `sudo ip a`. +* You should see the VIP (in the example above, VIP is set to 10.234.250.201): ``` 2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 @@ -36,18 +36,18 @@ Prereqs: inet 10.234.250.201/32 scope global eth0 valid_lft forever preferred_lft forever ``` -* The VIP is up, you can perform further testing and access your backend services using the VIP +* The VIP is up. You can perform further testing and access your backend services using the VIP. -## Failover scenario -* Let s stop haproxy on first node by running `monit stop haproxy` -* Let s run `ip a` on first node +## Failover Scenario +* Stop HAProxy on the first node by running `monit stop haproxy`. +* Run `ip a` on the first node: ``` 2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 06:c9:f6:00:0a:38 brd ff:ff:ff:ff:ff:ff inet 10.234.250.199/26 brd 10.234.250.255 scope global eth0 valid_lft forever preferred_lft forever ``` -* no more VIP, let s look at our second node +* No more VIP. Let's look at the second node: ``` 2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 06:38:ce:00:0a:39 brd ff:ff:ff:ff:ff:ff @@ -56,44 +56,44 @@ Prereqs: inet 10.234.250.201/32 scope global eth0 valid_lft forever preferred_lft forever ``` -* It works ! If we look at logs on first node : +* It works! 
If we look at logs on the first node: ``` Dec 7 12:47:34 localhost Keepalived_vrrp[4558]: VRRP_Script(check_haproxy) failed Dec 7 12:47:34 localhost Keepalived_vrrp[4558]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Effective priority = 101 Dec 7 12:47:35 localhost Keepalived_vrrp[4558]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Received higher prio advert 102 Dec 7 12:47:35 localhost Keepalived_vrrp[4558]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Entering BACKUP STATE ``` -and second node : +and the second node: ``` Dec 7 12:47:35 localhost Keepalived_vrrp[4544]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) forcing a new MASTER election Dec 7 12:47:36 localhost Keepalived_vrrp[4544]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Transition to MASTER STATE Dec 7 12:47:37 localhost Keepalived_vrrp[4544]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Entering MASTER STATE ``` -* Same scenario if you stop the master node : +* Same scenario if you stop the master node: ``` Dec 7 12:55:52 localhost Keepalived_vrrp[4544]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Received higher prio advert 103 Dec 7 12:55:52 localhost Keepalived_vrrp[4544]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Entering BACKUP STATE Dec 7 12:58:22 localhost Keepalived_vrrp[4544]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Transition to MASTER STATE Dec 7 12:58:23 localhost Keepalived_vrrp[4544]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Entering MASTER STATE ``` -* If you kill the VM running the master node (using IAAS) : +* If you kill the VM running the master node (using the IaaS): ``` Dec 8 14:01:34 localhost Keepalived_vrrp[11463]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Transition to MASTER STATE Dec 8 14:01:35 localhost Keepalived_vrrp[11463]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) 
Entering MASTER STATE ``` -and after restarting the master node : +and after restarting the master node: ``` Dec 8 14:02:55 localhost Keepalived_vrrp[11463]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Received lower prio advert 101, forcing new election Dec 8 14:02:56 localhost Keepalived_vrrp[11463]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Received higher prio advert 103 Dec 8 14:02:56 localhost Keepalived_vrrp[11463]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Entering BACKUP STATE ``` -* Running the canary on master node : -master node : +* Running the canary on the master node: +Master node: ``` Dec 8 14:13:20 localhost Keepalived_vrrp[1046]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Effective priority = 101 ``` -slave node : +Backup node: ``` Dec 8 14:13:24 localhost Keepalived_vrrp[11463]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Transition to MASTER STATE Dec 8 14:13:25 localhost Keepalived_vrrp[11463]: VRRP_Instance(haproxy_keepalived_mysql_infra_check_haproxy) Entering MASTER STATE diff --git a/docs/mutual_tls.md b/docs/mutual_tls.md index 7b074267..7d5f90e1 100644 --- a/docs/mutual_tls.md +++ b/docs/mutual_tls.md @@ -29,7 +29,7 @@ it's connecting to, add the following properties to the mix: properties: haproxy: backend_ssl: verify - backend_ca: | + backend_ca_file: | ----- BEGIN CERTIFICATE ----- CA Certificate for validating backend certs ----- END CERTIFICATE ----- @@ -40,7 +40,7 @@ properties: ## Configuring HAProxy to Pass Client Certificates to Apps HAProxy can be configured to pass client certificates on to apps requiring them on the backend. -This does not enforce mutual TLS at the HAPrcxy level, nor does it enable it at the app level. +This does not enforce mutual TLS at the HAProxy level, nor does it enable it at the app level. 
Instead, it allows for HAProxy to accept client certificates optionally, which are then passed to backend apps via the `X-Forwarded-Client-Cert` HTTP Header. Apps must then be written to inspect that header, and perform a manual certificate validation based on the value of the `X-Forwarded-Client-Cert` @@ -61,8 +61,8 @@ were used to issue the client certs. certs, to ensure client certs have not been revoked. If HAProxy has trouble validating a client cert, it will refuse to serve the request, unless -that specific error has been ignored. This can be configured via `ha_proxy.client_cert_ignore_err` -An exhaustive list of these error codes can be found here][4] +that specific error has been ignored. This can be configured via `ha_proxy.client_cert_ignore_err`. +An exhaustive list of these error codes can be found [here][4]. [1]: https://github.com/cloudfoundry/haproxy-boshrelease [2]: #using-haproxy-in-front-of-backends-that-require-mutual-tls diff --git a/docs/rate_limiting.md b/docs/rate_limiting.md index 2a948bef..b6e63ea6 100644 --- a/docs/rate_limiting.md +++ b/docs/rate_limiting.md @@ -10,21 +10,21 @@ There are two rate limit configuration groups: - `connections_rate_limit` for connection based rate limiting on OSI layer 4/TCP - `requests_rate_limit` for request based rate limiting on OSI layer 7/HTTP -Both groups contain the (roughly) same attributes : -- `requests` (for `requests_rate_limit`) and `connections` (for `connections_rate_limit`): the amount of requests/connections that are allowed within a time window (see `window_size`) before further incoming requests/connections are denied/blocked +Both groups contain roughly the same attributes: +- `requests` (for `requests_rate_limit`) and `connections` (for `connections_rate_limit`): the number of requests/connections allowed within a time window (see `window_size`) before further incoming requests/connections are denied/blocked - `window_size`: Window size for counting connections - `table_size`: Size of 
the stick table in which the IPs and counters are stored. - `block`: Whether or not to block connections. If `block` is disabled (or not provided), incoming requests/connections will still be tracked in the respective stick-tables, but will not be denied. ## Effects of Rate Limiting -Once a rate-limit is reached, haproxy-boshrelease will no longer proxy incoming request from the rate-limited client IP to a backend. Depending on the type of rate limiting, haproxy will respond with one of the following: +Once a rate limit is reached, haproxy-boshrelease will no longer proxy incoming requests from the rate-limited client IP to a backend. Depending on the type of rate limiting, HAProxy will respond with one of the following: ### Request based Rate Limiting -HAProxy responds the client with HTTP Status Code: `429: Too Many Requests`. +HAProxy responds to the client with HTTP Status Code: `429: Too Many Requests`. ### Connection based Rate Limiting The TCP connection will be rejected. This would for example show up as `Empty reply from server` for a `curl`-client. -This will not result in a log statement on HAProxy side, which can make tracing issues more difficult. +This will not result in a log statement on the HAProxy side, which can make tracing issues more difficult. > Note: > If both rate-limits are reached simultaneously (e.g. if they are configured identically and every incoming HTTP request uses a new TCP connection), connection based rate-limiting will come into effect first, resulting in a dropped TCP connection. @@ -32,7 +32,7 @@ This will not result in a log statement on HAProxy side, which can make tracing ## Configuration Examples > Note: -> The following example assume only a `http-in` frontend is configured, a `https-in` frontend would behave identically +> The following examples assume only an `http-in` frontend is configured; an `https-in` frontend would behave identically. 
### Count Incoming Requests Only (No blocking) #### Configuration (`deployments/haproxy/config.yml`) @@ -72,7 +72,7 @@ backend st_http_req_rate # [...] frontend http-in http-request track-sc1 src table st_http_req_rate - http-request deny status 429 content-type "text/plain" string "429: Too Many Requests" if { sc_http_req_rate(1) gt <%= p("ha_proxy.requests_rate_limit.requests") %> } + http-request deny status 429 if { sc_http_req_rate(1) gt 10 } ``` @@ -103,14 +103,14 @@ backend st_tcp_conn_rate frontend http-in # [...] http-request track-sc1 src table st_http_req_rate - http-request deny status 429 content-type "text/plain" string "429: Too Many Requests" if { sc_http_req_rate(1) gt 10 } + http-request deny status 429 if { sc_http_req_rate(1) gt 10 } tcp-request content track-sc0 src table st_tcp_conn_rate tcp-request connection reject if { sc_conn_rate(0) gt 10} ``` -## Querying current stick-table status -To give us more insights into what is going on inside HAProxy regarding its rate limits we can query the stats socket to get the raw table data: +## Querying Current Stick-Table Status +To get more insight into what is going on inside HAProxy regarding its rate limits, you can query the stats socket to get the raw table data: ```bash $ echo "show table st_http_req_rate" | socat /var/vcap/sys/run/haproxy/stats.sock - @@ -118,4 +118,4 @@ $ echo "show table st_http_req_rate" | socat /var/vcap/sys/run/haproxy/stats.soc 0x56495f3dc3d0: key=172.18.0.1 use=0 exp=7618 http_req_rate(10000)=10 ``` -> Please note you will likely need 'sudo' permission to run socat. +> Note: You will likely need `sudo` permission to run socat. 
diff --git a/docs/release-process.md b/docs/release-process.md index 05e88359..e6c3129f 100644 --- a/docs/release-process.md +++ b/docs/release-process.md @@ -6,7 +6,7 @@ Only approvers in the [Networking area of the ARP working group](https://github.com/cloudfoundry/community/blob/main/toc/working-groups/app-runtime-platform.md#roles--technical-assets) can create new releases. First, a draft release is prepared via running some jobs in the [haproxy-boshrelease pipeline](https://concourse.arp.cloudfoundry.org/teams/main/pipelines/haproxy-boshrelease) of the community concourse. -Afterwards, the release notes are written and the draft release is finalized in the Github Web UI. +Afterwards, the release notes are written and the draft release is finalized in the GitHub Web UI. Here are the detailed steps: 1. The version number is controlled by the concourse pipeline and can be automatically incremented via the `patch`, `minor` and `major` steps. Please refer to [Versioning Guide](https://github.com/cloudfoundry/haproxy-boshrelease/tree/master/ci#versioning-guide). @@ -17,12 +17,12 @@ Here are the detailed steps: 2. After configuring the version by running one of the three jobs, run the `rc` job. 3. When the `rc` job has succeeded, trigger the `shipit` job to create a new draft release. -4. Using the GitHub UI, finalise the release note and release: - * Use the "Generate Release Note" button to get a list of all changes. Remove all CI and test related commits as those don't impact the resulting release bundle. Retain information that changes the release itself (e.g. HAProxy version bumps). - * Add information about noteworthy fixes, changes and features. Look at the overall changes list to ensure you didn't miss important changes by other committers. - * Add information about shipped version bumps in the "Upgrades" section (HAProxy, keepalived, etc.). The versions table is generated automatically and shows the versions contained in this release already. +4. 
Using the GitHub UI, finalize the release notes and publish the release: + * Use the "Generate Release Note" button to get a list of all changes. Remove all CI and test related commits, as those don't impact the resulting release bundle. Retain information that changes the release itself (e.g. HAProxy version bumps). + * Add information about noteworthy fixes, changes, and features. Look at the overall changes list to ensure you didn't miss important changes by other committers. + * Add information about shipped version bumps in the "Upgrades" section (HAProxy, keepalived, etc.). The versions table is generated automatically and shows the versions contained in this release. - Once the release note text is complete, finalise the release via "Publish Release". Leave the "Set as the latest release" checkbox ticked. + Once the release note text is complete, finalize the release via "Publish Release". Leave the "Set as the latest release" checkbox ticked. ## Access to Concourse diff --git a/docs/version-bumps.md b/docs/version-bumps.md index f0d39f69..5091efb3 100644 --- a/docs/version-bumps.md +++ b/docs/version-bumps.md @@ -20,7 +20,7 @@ decided to fully automate those updates: Dependabot, if so the PR is approved and auto-merge is enabled. 4. Once all checks pass the pull request is merged by GitHub. -Note: currently this only active for Dependabot as we are missing a separate user to approve pull +Note: Currently this is only active for Dependabot, as we are missing a separate user to approve pull requests generated by CFN-CI. ## Known Pitfalls