Commit Graph

6 Commits

Author SHA1 Message Date
Jakob Borg
ad2d3702ae all: Upgrade github.com/gogo/protobuf and regenerate (fixes #6085) 2019-10-18 09:53:59 +02:00
Jakob Borg
80894948f6
build: Upgrade github.com/gogo/protobuf (#5994)
This is the result of:

- Changing build.go to take the protobuf version from the modules
  instead of hardcoding it (a rough sketch of the idea follows below)
- `go get github.com/gogo/protobuf@v1.3.0` to upgrade
- `go run build.go proto` to regenerate our code
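
A minimal sketch of the first point, assuming a hypothetical helper (not the
actual build.go code): ask the module system for the pinned gogo/protobuf
version via `go list -m` instead of keeping a version string in the source.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// protobufVersion asks the Go module machinery which version of
// github.com/gogo/protobuf this build is pinned to, so no version
// string needs to be hardcoded here.
func protobufVersion() (string, error) {
	out, err := exec.Command("go", "list", "-m", "-f", "{{.Version}}",
		"github.com/gogo/protobuf").Output()
	if err != nil {
		return "", fmt.Errorf("listing protobuf module: %v", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	v, err := protobufVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("gogo/protobuf", v)
}
```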
2019-09-04 07:33:29 +01:00
Jakob Borg
b01edca420
all: Update protobuf package 1.0.0 -> 1.2.0 (#5452)
Also adds a few file-level global options to keep the generated code similar
to what we already had.
2019-01-14 11:53:36 +01:00
Jakob Borg
944ddcf768
all: Become a Go module (fixes #5148) (#5384)
* go mod init; rm -rf vendor

* tweak proto files and generation

* go mod vendor

* clean up build.go

* protobuf literals in tests

* downgrade gogo/protobuf
2018-12-18 12:36:38 +01:00
Jakob Borg
71fab4d250 cmd/stdiscosrv: Record time of failed lookup
So that we can eventually garbage collect keys that no one is asking
about any more.
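
A hedged sketch of what that enables (field and function names are
hypothetical, not the actual stdiscosrv code): stamp each record with the
time of its last failed lookup, and let a later cleanup pass drop records
that nobody has asked about for a long time.

```go
package discosrv

import "time"

// record is a hypothetical database entry for a device key.
type record struct {
	Addresses  []string
	Misses     int32
	LastMissed time.Time // time of the most recent failed lookup
}

// markMissed notes that a lookup for this key just failed.
func markMissed(r *record) {
	r.Misses++
	r.LastMissed = time.Now()
}

// expired reports whether a record that has no addresses, and that
// nobody has asked about for longer than maxAge, can be collected.
func expired(r *record, maxAge time.Duration) bool {
	return len(r.Addresses) == 0 && time.Since(r.LastMissed) > maxAge
}
```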
2018-03-06 16:15:29 +01:00
Jakob Borg
916ec63af6 cmd/stdiscosrv: New discovery server (fixes #4618)
This is a new revision of the discovery server. Relevant changes and
non-changes:

- Protocol towards clients is unchanged.

- Recommended large scale design is still to be deployed behind nginx (I
  tested, and it's still a lot faster at terminating TLS).

- Database backend is leveldb again, only. It scales well enough, is easy
  to set up, and there is no separate database server to take care of.

- Server supports replication. This is a simple TCP channel - protect it
  with a firewall when deploying over the internet. (We deploy this within
  the same datacenter, and with a firewall.) Any incoming client announces
  are sent over the replication channel(s) to other peer discosrvs.
  Incoming replication changes are applied to the database as if they came
  from clients, but without the TLS/certificate overhead. (A rough sketch
  of this channel follows after this list.)

- Metrics are exposed using the prometheus library, when enabled.

- The database values and replication protocol are protobuf, because JSON
  was quite CPU intensive when I tried and benchmarked it.

- The "Retry-After" value for failed lookups gets slowly increased from
  a default of 120 seconds, by 5 seconds for each failed lookup,
  independently by each discosrv. This lowers the query load over time for
  clients that are never seen. The Retry-After maxes out at 3600 after a
  couple of weeks of this increase. The number of failed lookups is
  stored in the database, now and then (avoiding making each lookup a
  database put).
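
A simplified sketch of how the replication channel can be pictured (names
and framing are assumptions, not the actual implementation): announces are
marshalled and written, length-prefixed, to a plain TCP connection to each
peer discosrv.

```go
package discosrv

import (
	"encoding/binary"
	"net"
)

// replicator forwards already-marshalled announce records to one peer
// discosrv over a plain TCP connection (no TLS; firewall it instead).
type replicator struct {
	conn net.Conn
	send chan []byte
}

func dialReplicator(addr string) (*replicator, error) {
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		return nil, err
	}
	r := &replicator{conn: conn, send: make(chan []byte, 256)}
	go r.loop()
	return r, nil
}

// loop writes each record with a 4-byte big-endian length prefix.
func (r *replicator) loop() {
	var hdr [4]byte
	for msg := range r.send {
		binary.BigEndian.PutUint32(hdr[:], uint32(len(msg)))
		if _, err := r.conn.Write(hdr[:]); err != nil {
			return
		}
		if _, err := r.conn.Write(msg); err != nil {
			return
		}
	}
}
```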
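
And a minimal sketch of the Retry-After backoff described above, using the
constants from this message (function name hypothetical):

```go
package discosrv

// retryAfter returns the Retry-After value in seconds for a failed
// lookup: a 120 second base plus 5 seconds per previously failed
// lookup, capped at 3600 seconds.
func retryAfter(misses int) int {
	const (
		base     = 120
		perMiss  = 5
		maxRetry = 3600
	)
	v := base + misses*perMiss
	if v > maxRetry {
		return maxRetry
	}
	return v
}
```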

All in all this means clients can be pointed towards a cluster using
just multiple A / AAAA records to gain both load sharing and redundancy
(if one is down, clients will talk to the remaining ones).

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4648
2018-01-14 08:52:31 +00:00