syncthing/vendor/manifest

{
"version": 0,
"dependencies": [
{
"importpath": "code.cloudfoundry.org/bytefmt",
"repository": "https://github.com/cloudfoundry/bytefmt",
"vcs": "git",
"revision": "a052d587819f45f719a22e344a8ad7858deb3733",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/AudriusButkevicius/cli",
"repository": "https://github.com/AudriusButkevicius/cli",
"vcs": "git",
"revision": "7f561c78b5a4aad858d9fd550c92b5da6d55efbb",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/AudriusButkevicius/go-nat-pmp",
"repository": "https://github.com/AudriusButkevicius/go-nat-pmp",
"vcs": "git",
"revision": "452c97607362b2ab5a7839b8d1704f0396b640ca",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/AudriusButkevicius/pfilter",
"repository": "https://github.com/AudriusButkevicius/pfilter",
"vcs": "git",
"revision": "9dca34a5b530bfc9843fa8aa2ff08ff9821032cb",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/BurntSushi/toml",
"repository": "https://github.com/BurntSushi/toml",
"vcs": "git",
"revision": "a368813c5e648fee92e5f6c30e3944ff9d5e8895",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/a8m/mark",
"repository": "https://github.com/a8m/mark",
"vcs": "git",
"revision": "44f2db6188458162890ca13980819247418d8e45",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/beorn7/perks/quantile",
"repository": "https://github.com/beorn7/perks",
"vcs": "git",
"revision": "4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9",
"branch": "master",
"path": "/quantile",
"notests": true
},
{
"importpath": "github.com/bkaradzic/go-lz4",
"repository": "https://github.com/bkaradzic/go-lz4",
"vcs": "git",
"revision": "7224d8d8f27ef618c0a95f1ae69dbb0488abc33a",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/calmh/du",
"repository": "https://github.com/calmh/du",
"vcs": "git",
"revision": "dd9dc2043353249b2910b29dcfd6f6d4e64f39be",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/calmh/xdr",
"repository": "https://github.com/calmh/xdr",
"vcs": "git",
"revision": "08e072f9cb164f943a92eb59f90f3abc64ac6e8f",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/cheggaaa/pb",
"repository": "https://github.com/cheggaaa/pb",
"vcs": "git",
"revision": "18d384da9bdc1e5a08fc2a62a494c321d9ae74ea",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/chmduquesne/rollinghash",
"repository": "https://github.com/chmduquesne/rollinghash",
"vcs": "git",
"revision": "abb8cbaf9915e48ee20cae94bcd94221b61707a2",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/d4l3k/messagediff",
"repository": "https://github.com/d4l3k/messagediff",
"vcs": "git",
"revision": "29f32d820d112dbd66e58492a6ffb7cc3106312b",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/dustin/go-humanize",
"repository": "https://github.com/dustin/go-humanize",
"vcs": "git",
"revision": "bb3d318650d48840a39aa21a027c6630e198e626",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/edsrzf/mmap-go",
"repository": "https://github.com/edsrzf/mmap-go",
"vcs": "git",
"revision": "0bce6a6887123b67a60366d2c9fe2dfb74289d2e",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/gernest/wow",
"repository": "https://github.com/gernest/wow",
"vcs": "git",
"revision": "7e0b2a2398989a5d220eebac5742d45422ba7de8",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/go-ini/ini",
"repository": "https://github.com/go-ini/ini",
"vcs": "git",
"revision": "32e4c1e6bc4e7d0d8451aa6b75200d19e37a536a",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/gobwas/glob",
"repository": "https://github.com/gobwas/glob",
"vcs": "git",
"revision": "51eb1ee00b6d931c66d229ceeb7c31b985563420",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/gogo/protobuf",
"repository": "https://github.com/gogo/protobuf",
"vcs": "git",
"revision": "160de10b2537169b5ae3e7e221d28269ef40d311",
"branch": "master",
"notests": true,
"allfiles": true
},
{
"importpath": "github.com/golang/groupcache/lru",
"repository": "https://github.com/golang/groupcache",
"vcs": "git",
"revision": "84a468cf14b4376def5d68c722b139b881c450a4",
"branch": "master",
"path": "/lru",
"notests": true
},
{
"importpath": "github.com/golang/protobuf/proto",
"repository": "https://github.com/golang/protobuf",
"vcs": "git",
"revision": "1e59b77b52bf8e4b449a57e6f79f21226d571845",
"branch": "master",
"path": "/proto",
"notests": true
},
{
"importpath": "github.com/golang/protobuf/ptypes/any",
"repository": "https://github.com/golang/protobuf",
"vcs": "git",
"revision": "1e59b77b52bf8e4b449a57e6f79f21226d571845",
"branch": "master",
"path": "ptypes/any",
"notests": true
},
{
"importpath": "github.com/golang/snappy",
"repository": "https://github.com/golang/snappy",
"vcs": "git",
"revision": "553a641470496b2327abcac10b36396bd98e45c9",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/jackpal/gateway",
"repository": "https://github.com/jackpal/gateway",
"vcs": "git",
"revision": "5795ac81146e01d3fab7bcf21c043c3d6a32b006",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/kballard/go-shellquote",
"repository": "https://github.com/kballard/go-shellquote",
"vcs": "git",
"revision": "cd60e84ee657ff3dc51de0b4f55dd299a3e136f2",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/klauspost/cpuid",
"repository": "https://github.com/klauspost/cpuid",
"vcs": "git",
"revision": "eae9b3e628d72774e13bdf024e78c0802f85a5b9",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/klauspost/reedsolomon",
"repository": "https://github.com/klauspost/reedsolomon",
"vcs": "git",
"revision": "0b30fa71cc8e4e9010c9aba6d0320e2e5b163b29",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/magefile/mage/mg",
"repository": "https://github.com/magefile/mage",
"vcs": "git",
"revision": "63768081a3236a7c6c53ef72e402ae1fe1664b61",
"branch": "master",
"path": "/mg",
"notests": true
},
{
"importpath": "github.com/magefile/mage/sh",
"repository": "https://github.com/magefile/mage",
"vcs": "git",
"revision": "63768081a3236a7c6c53ef72e402ae1fe1664b61",
"branch": "master",
"path": "sh",
"notests": true
},
{
"importpath": "github.com/magefile/mage/types",
"repository": "https://github.com/magefile/mage",
"vcs": "git",
"revision": "63768081a3236a7c6c53ef72e402ae1fe1664b61",
"branch": "master",
"path": "types",
"notests": true
},
{
"importpath": "github.com/mattn/go-runewidth",
"repository": "https://github.com/mattn/go-runewidth",
"vcs": "git",
"revision": "97311d9f7767e3d6f422ea06661bc2c7a19e8a5d",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/matttproud/golang_protobuf_extensions/pbutil",
"repository": "https://github.com/matttproud/golang_protobuf_extensions",
"vcs": "git",
"revision": "c12348ce28de40eed0136aa2b644d0ee0650e56c",
"branch": "master",
"path": "/pbutil",
"notests": true
},
{
"importpath": "github.com/minio/cli",
"repository": "https://github.com/minio/cli",
"vcs": "git",
"revision": "45db1f8a055198ad8c12754026cb2c51c584c756",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/minio/minio-go",
"repository": "https://github.com/minio/minio-go",
"vcs": "git",
"revision": "17b9efe2ee358a550ff2d414160b75fc85c86f2e",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/minio/sha256-simd",
"repository": "https://github.com/minio/sha256-simd",
"vcs": "git",
"revision": "ad98a36ba0da87206e3378c556abbfeaeaa98668",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/mitchellh/go-homedir",
"repository": "https://github.com/mitchellh/go-homedir",
"vcs": "git",
"revision": "b8bc1bf767474819792c23f32d8286a45736f1c6",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/onsi/ginkgo",
"repository": "https://github.com/onsi/ginkgo",
"vcs": "git",
"revision": "6c46eb8334b30dc55b42f1a1c725d5ce97375390",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/onsi/gomega",
"repository": "https://github.com/onsi/gomega",
"vcs": "git",
"revision": "ba3724c94e4dd5d5690d37c190f1c54b2c1b4e64",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/oschwald/geoip2-golang",
"repository": "https://github.com/oschwald/geoip2-golang",
"vcs": "git",
"revision": "5b1dc16861f81d05d9836bb21c2d0d65282fc0b8",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/oschwald/maxminddb-golang",
"repository": "https://github.com/oschwald/maxminddb-golang",
"vcs": "git",
"revision": "26fe5ace1c706491c2936119e1dc69c1a9c04d7f",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/petermattis/goid",
"repository": "https://github.com/petermattis/goid",
"vcs": "git",
"revision": "3db12ebb2a599ba4a96bea1c17b61c2f78a40e02",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/pkg/errors",
"repository": "https://github.com/pkg/errors",
"vcs": "git",
"revision": "e881fd58d78e04cf6d0de1217f8707c8cc2249bc",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/prometheus/client_golang/prometheus",
"repository": "https://github.com/prometheus/client_golang",
"vcs": "git",
"revision": "180b8fdc22b4ea7750bcb43c925277654a1ea2f3",
"branch": "master",
"path": "/prometheus",
"notests": true
},
{
"importpath": "github.com/prometheus/client_model/go",
"repository": "https://github.com/prometheus/client_model",
"vcs": "git",
"revision": "99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c",
"branch": "master",
"path": "/go",
"notests": true
},
{
"importpath": "github.com/prometheus/common/expfmt",
"repository": "https://github.com/prometheus/common",
"vcs": "git",
"revision": "2e54d0b93cba2fd133edc32211dcc32c06ef72ca",
"branch": "master",
"path": "expfmt",
"notests": true
},
{
"importpath": "github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg",
"repository": "https://github.com/prometheus/common",
"vcs": "git",
"revision": "2e54d0b93cba2fd133edc32211dcc32c06ef72ca",
"branch": "master",
"path": "internal/bitbucket.org/ww/goautoneg",
"notests": true
},
{
"importpath": "github.com/prometheus/common/model",
"repository": "https://github.com/prometheus/common",
"vcs": "git",
"revision": "2e54d0b93cba2fd133edc32211dcc32c06ef72ca",
"branch": "master",
"path": "/model",
"notests": true
},
{
"importpath": "github.com/prometheus/procfs",
"repository": "https://github.com/prometheus/procfs",
"vcs": "git",
"revision": "b15cd069a83443be3154b719d0cc9fe8117f09fb",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/rcrowley/go-metrics",
"repository": "https://github.com/rcrowley/go-metrics",
"vcs": "git",
"revision": "e181e095bae94582363434144c61a9653aff6e50",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/remyoudompheng/bigfft",
"repository": "https://github.com/remyoudompheng/bigfft",
"vcs": "git",
"revision": "52369c62f4463a21c8ff8531194c5526322b8521",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/sasha-s/go-deadlock",
"repository": "https://github.com/sasha-s/go-deadlock",
"vcs": "git",
"revision": "03d40e5dbd5488667a13b3c2600b2f7c2886f02f",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/sirupsen/logrus",
"repository": "https://github.com/sirupsen/logrus",
"vcs": "git",
"revision": "d682213848ed68c0a260ca37d6dd5ace8423f5ba",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/stathat/go",
"repository": "https://github.com/stathat/go",
"vcs": "git",
"revision": "74669b9f388d9d788c97399a0824adbfee78400e",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/syncthing/notify",
"repository": "https://github.com/syncthing/notify",
"vcs": "git",
"revision": "b9ceffc925039c77cd9e0d38f248279ccc4399e2",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/syndtr/goleveldb/leveldb",
"repository": "https://github.com/syndtr/goleveldb",
"vcs": "git",
"revision": "34011bf325bce385408353a30b101fe5e923eb6e",
"branch": "master",
"path": "/leveldb",
"notests": true
},
{
"importpath": "github.com/templexxx/cpufeat",
"repository": "https://github.com/templexxx/cpufeat",
"vcs": "git",
"revision": "3794dfbfb04749f896b521032f69383f24c3687e",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/templexxx/xor",
"repository": "https://github.com/templexxx/xor",
"vcs": "git",
"revision": "0af8e873c554da75f37f2049cdffda804533d44c",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/thejerf/suture",
"repository": "https://github.com/thejerf/suture",
"vcs": "git",
"revision": "87e298c9891673c9ae76e10c2c9be589127e5f49",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/tjfoc/gmsm/sm4",
"repository": "https://github.com/tjfoc/gmsm",
"vcs": "git",
"revision": "98aa888b79d8de04afe0fccf45ed10594efc858b",
"branch": "master",
"path": "/sm4",
"notests": true
},
{
"importpath": "github.com/vitrun/qart/coding",
"repository": "https://github.com/vitrun/qart",
"vcs": "git",
"revision": "bf64b92db6b05651d6c25a3dabf2d543b360c0aa",
"branch": "master",
"path": "coding",
"notests": true
},
{
"importpath": "github.com/vitrun/qart/gf256",
"repository": "https://github.com/vitrun/qart",
"vcs": "git",
"revision": "bf64b92db6b05651d6c25a3dabf2d543b360c0aa",
"branch": "master",
"path": "gf256",
"notests": true
},
{
"importpath": "github.com/vitrun/qart/qr",
"repository": "https://github.com/vitrun/qart",
"vcs": "git",
"revision": "bf64b92db6b05651d6c25a3dabf2d543b360c0aa",
"branch": "master",
"path": "/qr",
"notests": true
},
{
"importpath": "golang.org/x/crypto/bcrypt",
"repository": "https://go.googlesource.com/crypto",
"vcs": "git",
"revision": "95a4943f35d008beabde8c11e5075a1b714e6419",
"branch": "master",
"path": "/bcrypt",
"notests": true
},
{
"importpath": "golang.org/x/crypto/blowfish",
"repository": "https://go.googlesource.com/crypto",
"vcs": "git",
"revision": "95a4943f35d008beabde8c11e5075a1b714e6419",
"branch": "master",
"path": "blowfish",
"notests": true
},
{
"importpath": "golang.org/x/crypto/cast5",
"repository": "https://go.googlesource.com/crypto",
"vcs": "git",
"revision": "95a4943f35d008beabde8c11e5075a1b714e6419",
"branch": "master",
"path": "cast5",
"notests": true
},
{
"importpath": "golang.org/x/crypto/pbkdf2",
"repository": "https://go.googlesource.com/crypto",
"vcs": "git",
"revision": "95a4943f35d008beabde8c11e5075a1b714e6419",
"branch": "master",
"path": "pbkdf2",
"notests": true
},
{
"importpath": "golang.org/x/crypto/salsa20",
"repository": "https://go.googlesource.com/crypto",
"vcs": "git",
"revision": "95a4943f35d008beabde8c11e5075a1b714e6419",
"branch": "master",
"path": "salsa20",
"notests": true
},
{
"importpath": "golang.org/x/crypto/ssh/terminal",
"repository": "https://go.googlesource.com/crypto",
"vcs": "git",
"revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
"branch": "master",
"path": "/ssh/terminal",
"notests": true
},
{
"importpath": "golang.org/x/crypto/tea",
"repository": "https://go.googlesource.com/crypto",
"vcs": "git",
"revision": "95a4943f35d008beabde8c11e5075a1b714e6419",
"branch": "master",
"path": "tea",
"notests": true
},
{
"importpath": "golang.org/x/crypto/twofish",
"repository": "https://go.googlesource.com/crypto",
"vcs": "git",
"revision": "95a4943f35d008beabde8c11e5075a1b714e6419",
"branch": "master",
"path": "/twofish",
"notests": true
},
{
"importpath": "golang.org/x/crypto/xtea",
"repository": "https://go.googlesource.com/crypto",
"vcs": "git",
"revision": "95a4943f35d008beabde8c11e5075a1b714e6419",
"branch": "master",
"path": "xtea",
"notests": true
},
{
"importpath": "golang.org/x/net/bpf",
"repository": "https://go.googlesource.com/net",
"vcs": "git",
"revision": "d866cfc389cec985d6fda2859936a575a55a3ab6",
"branch": "master",
"path": "bpf",
"notests": true
},
{
"importpath": "golang.org/x/net/html",
"repository": "https://go.googlesource.com/net",
"vcs": "git",
"revision": "d866cfc389cec985d6fda2859936a575a55a3ab6",
"branch": "master",
"path": "html",
"notests": true
},
{
"importpath": "golang.org/x/net/internal/iana",
"repository": "https://go.googlesource.com/net",
"vcs": "git",
"revision": "d866cfc389cec985d6fda2859936a575a55a3ab6",
"branch": "master",
"path": "internal/iana",
"notests": true
},
{
"importpath": "golang.org/x/net/internal/socket",
"repository": "https://go.googlesource.com/net",
"vcs": "git",
"revision": "d866cfc389cec985d6fda2859936a575a55a3ab6",
"branch": "master",
"path": "internal/socket",
"notests": true
},
{
"importpath": "golang.org/x/net/ipv4",
"repository": "https://go.googlesource.com/net",
"vcs": "git",
"revision": "d866cfc389cec985d6fda2859936a575a55a3ab6",
"branch": "master",
"path": "/ipv4",
"notests": true
},
{
"importpath": "golang.org/x/net/ipv6",
"repository": "https://go.googlesource.com/net",
"vcs": "git",
"revision": "d866cfc389cec985d6fda2859936a575a55a3ab6",
"branch": "master",
"path": "/ipv6",
"notests": true
},
{
"importpath": "golang.org/x/net/proxy",
"repository": "https://go.googlesource.com/net",
"vcs": "git",
"revision": "d866cfc389cec985d6fda2859936a575a55a3ab6",
"branch": "master",
"path": "/proxy",
"notests": true
},
{
"importpath": "golang.org/x/sys/unix",
"repository": "https://go.googlesource.com/sys",
"vcs": "git",
"revision": "83801418e1b59fb1880e363299581ee543af32ca",
"branch": "master",
"path": "/unix",
"notests": true
},
{
"importpath": "golang.org/x/sys/windows",
"repository": "https://go.googlesource.com/sys",
"vcs": "git",
"revision": "83801418e1b59fb1880e363299581ee543af32ca",
"branch": "master",
"path": "windows",
"notests": true
},
{
"importpath": "golang.org/x/text/encoding",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "encoding",
"notests": true
},
{
"importpath": "golang.org/x/text/internal/format",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "internal/format",
"notests": true
},
{
"importpath": "golang.org/x/text/internal/gen",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "internal/gen",
"notests": true
},
{
"importpath": "golang.org/x/text/internal/tag",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "internal/tag",
"notests": true
},
{
"importpath": "golang.org/x/text/internal/triegen",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "internal/triegen",
"notests": true
},
{
"importpath": "golang.org/x/text/internal/ucd",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "internal/ucd",
"notests": true
},
{
"importpath": "golang.org/x/text/internal/utf8internal",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "internal/utf8internal",
"notests": true
},
{
"importpath": "golang.org/x/text/language",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "language",
"notests": true
},
{
"importpath": "golang.org/x/text/runes",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "runes",
"notests": true
},
{
"importpath": "golang.org/x/text/transform",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "/transform",
"notests": true
},
{
"importpath": "golang.org/x/text/unicode/cldr",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "unicode/cldr",
"notests": true
},
{
"importpath": "golang.org/x/text/unicode/norm",
"repository": "https://go.googlesource.com/text",
"vcs": "git",
"revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
"branch": "master",
"path": "/unicode/norm",
"notests": true
},
{
"importpath": "golang.org/x/time/rate",
"repository": "https://go.googlesource.com/time",
"vcs": "git",
"revision": "6dc17368e09b0e8634d71cac8168d853e869a0c7",
"branch": "master",
"path": "/rate",
"notests": true
},
{
"importpath": "gopkg.in/urfave/cli.v1",
"repository": "https://gopkg.in/urfave/cli.v1",
"vcs": "git",
"revision": "cfb38830724cc34fedffe9a2a29fb54fa9169cd1",
"branch": "master",
"notests": true
},
{
"importpath": "gopkg.in/yaml.v2",
"repository": "https://gopkg.in/yaml.v2",
"vcs": "git",
"revision": "287cf08546ab5e7e37d55a84f7ed3fd1db036de5",
"branch": "v2",
"notests": true
}
]
}