traefik/provider/k8s
Phil Kates 87eac1dc1a Fix deadlock in k8s provider
On a reasonably sized cluster:
63 nodes
87 services
90 endpoints

The initialization of the k8s provider would hang.

I tracked this down to the ResourceEventHandlerFuncs. Once the channel
buffer size (10) is reached, the k8s Informer gets stuck: you can no
longer read from or write to the channel. This is probably a lock issue
somewhere in k8s, but the more reasonable solution for the traefik use
case is to simply drop events when the queue is full. We only use the
events for signalling, not for their content, so dropping an event
doesn't matter.
2016-12-07 20:12:14 +01:00
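The fix described above amounts to a non-blocking channel send: instead of letting the informer callback block once the buffered channel is full, a select with a default case drops the event. The sketch below illustrates that pattern with standard-library Go only; the names (eventHandler, watchCh, notify, the OnAdd/OnUpdate/OnDelete methods) are hypothetical and stand in for the callbacks wired into cache.ResourceEventHandlerFuncs, not the actual client.go code.

```go
package main

import "fmt"

// eventHandler forwards informer notifications to watchCh.
// Names are illustrative, not the real traefik identifiers.
type eventHandler struct {
	watchCh chan interface{}
}

// notify performs a non-blocking send: if the buffered channel is full,
// the event is dropped instead of blocking the informer goroutine.
// Events are only a "something changed" signal, so losing one is
// harmless; a later event triggers the same reload.
func (h *eventHandler) notify(obj interface{}) {
	select {
	case h.watchCh <- obj:
	default: // buffer full: drop the event rather than deadlock
	}
}

// The three callbacks expected by cache.ResourceEventHandlerFuncs
// (AddFunc, UpdateFunc, DeleteFunc) can all delegate to notify.
func (h *eventHandler) OnAdd(obj interface{})         { h.notify(obj) }
func (h *eventHandler) OnUpdate(old, new interface{}) { h.notify(new) }
func (h *eventHandler) OnDelete(obj interface{})      { h.notify(obj) }

func main() {
	h := &eventHandler{watchCh: make(chan interface{}, 10)}
	// Simulate a burst larger than the buffer; without select/default
	// the 11th send would block forever if nothing is reading.
	for i := 0; i < 25; i++ {
		h.OnAdd(i)
	}
	fmt.Printf("buffered %d of 25 events, rest dropped\n", len(h.watchCh))
}
```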
client.go Fix deadlock in k8s provider 2016-12-07 20:12:14 +01:00
namespace.go Switched Kubernetes provider to new client implementation: https://github.com/kubernetes/client-go 2016-11-30 19:16:48 +01:00