15 Commits
v1.0.2 ... main

| Author | SHA1 | Message | Checks (Gitea Docker Build Demo workflow, push) | Date |
|---|---|---|---|---|
| Julian Haseleu | 4805faf9db | chore(): added docs | All successful: Test 1m1s, Build_Image 1m18s | 2025-10-07 09:38:38 +00:00 |
| | 4f81479069 | fix(): also drop api server response to debug in case there are no errors | All successful: Test 1m0s, Build_Image 1m20s; Release (release): Test 1m5s, Build_Image 1m19s | 2025-10-07 11:18:39 +02:00 |
| | d3682557b1 | cleanup and refactor health service | All successful: Test 1m2s, Build_Image 1m22s | 2025-10-07 11:13:15 +02:00 |
| | 60844be81b | fix(): catch errors and increase verbosity | All successful: Test 1m0s, Build_Image 1m19s | 2025-10-07 10:54:45 +02:00 |
| | 76fb779d08 | bloaded put | All successful: Test 1m0s, Build_Image 1m18s | 2025-10-07 10:39:42 +02:00 |
| | bafd97fbaf | fix(): back to post | All successful: Test 1m1s, Build_Image 1m26s | 2025-10-07 10:22:49 +02:00 |
| | 49252a5f7a | feat(): more verbosity | All successful: Test 1m1s, Build_Image 1m20s | 2025-10-07 10:14:49 +02:00 |
| | 091ab2eb2f | fix(): switch to post | Test failing after 1m2s; Build_Image successful in 1m23s | 2025-10-07 09:52:58 +02:00 |
| | cd1dca0d5e | fix(): test !! | Test failing after 1m3s; Build_Image successful in 1m25s | 2025-10-07 09:49:54 +02:00 |
| | b43b31335e | fix(): test | Test failing after 1m0s; Build_Image successful in 1m18s | 2025-10-07 09:46:36 +02:00 |
| | 6fdb629cc9 | fix(): use correct resource path | Test failing after 1m1s; Build_Image cancelled | 2025-10-07 09:44:50 +02:00 |
| | 5e3ee60c91 | fix(): add cidr range to template | Build_Image successful in 1m22s; Test failing after 1m5s | 2025-10-07 09:33:52 +02:00 |
| | eacc8ac9f2 | fix(): just use raw json | All successful: Test 1m4s, Build_Image 1m24s | 2025-10-07 09:25:28 +02:00 |
| | 79fd7ff3b7 | fix(): explicitly register cilium schema | All successful: Test 1m4s, Build_Image 1m23s | 2025-10-07 09:17:22 +02:00 |
| | 6083039648 | fix(): explicit object identifier | All successful: Test 1m4s, Build_Image 1m23s | 2025-10-07 08:59:30 +02:00 |
9 changed files with 456 additions and 30 deletions


```diff
@@ -40,7 +40,7 @@ jobs:
         with:
           context: .
           file: ./Dockerfile
-          push: false
+          push: true
           tags: |
             lerentis/canada-kaktus:${{ github.sha }}
       # - name: Sign the published Docker image
```

README.md

@@ -1,2 +1,334 @@
# Canada Kaktus Documentation
[![Build Status](https://git.uploadfilter24.eu/covidnetes/canada-kaktus/actions/workflows/main.yaml/badge.svg?branch=main)](https://git.uploadfilter24.eu/covidnetes/canada-kaktus/actions)
## Overview
Canada Kaktus is a Kubernetes controller that automatically manages Cilium LoadBalancer IP pools by synchronizing them with Hetzner Cloud server instances. It continuously monitors Hetzner Cloud servers matching a configurable label selector and updates a `CiliumLoadBalancerIPPool` custom resource to keep the IP pool for load-balancing services up to date.
## Purpose
The application serves as a bridge between Hetzner Cloud infrastructure and Kubernetes/Cilium networking, ensuring that load balancer IP pools always reflect the current set of available server instances. This automation eliminates the need for manual IP pool management when servers are added or removed from the cluster.
## Architecture
### Components
1. **Configuration Management** (`config.go`)
   - Environment-based configuration with default values (see the sketch below)
   - Supports JSON configuration files with auto-reload capability
   - Manages Hetzner Cloud API tokens and label selectors
2. **Health Monitoring** (`health.go`)
   - HTTP health endpoint on port 8080
   - Thread-safe health state management
   - RESTful health checks for Kubernetes probes
3. **Hetzner Cloud Integration** (`hetzner.go`)
   - Interacts with the Hetzner Cloud API
   - Discovers servers based on label selectors
   - Extracts public IPv4 addresses from server instances
4. **Kubernetes Integration** (`k8s.go`)
   - Manages the Cilium LoadBalancer IP Pool custom resource
   - In-cluster Kubernetes client configuration
   - Template-based resource generation and updates
5. **Logging** (`utils/logging.go`)
   - Structured JSON logging with configurable levels
   - Contextual logging with caller information
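The configuration surface is deliberately small. As a rough sketch (field names, struct tags, and the constructor are assumptions derived from the documented environment variables, not the actual contents of `config.go`):

```go
package internal

import "github.com/jinzhu/configor"

// Config mirrors the documented settings; defaults match the
// Configuration table below.
type Config struct {
	LogLevel      string `default:"Info" env:"CANADA_KAKTUS_LOGLEVEL"`
	LabelSelector string `default:"kops.k8s.io/instance-role=Node" env:"CANADA_KAKTUS_LABELSELECTOR"`
	HcloudToken   string `required:"true" env:"CANADA_KAKTUS_HCLOUD_TOKEN"`
}

// NewConfig loads config.json with auto-reload enabled; environment
// variables take precedence over file values.
func NewConfig() (*Config, error) {
	cfg := &Config{}
	err := configor.New(&configor.Config{AutoReload: true}).Load(cfg, "config.json")
	return cfg, err
}
```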
## Processing Flow
```mermaid
graph TD
A[Application Start] --> B[Load Configuration]
B --> C[Configure Logger]
C --> D[Start Health Server]
D --> E[Enter Main Loop]
E --> F[Query Hetzner Cloud API]
F --> G{Servers Found?}
G -->|No| H[Log Error & Set Unhealthy]
G -->|Yes| I[Extract IP Addresses]
I --> J{IPs Valid?}
J -->|No| K[Log Error & Set Unhealthy]
J -->|Yes| L[Get Current CRD Resource Version]
L --> M[Generate IP Pool Template]
M --> N[Update Kubernetes CRD]
N --> O{Update Successful?}
O -->|No| P[Log Error & Set Unhealthy]
O -->|Yes| Q[Log Success & Set Healthy]
H --> R[Wait 15 minutes]
K --> R
P --> R
Q --> R
R --> E
subgraph "Health Endpoint"
S[HTTP GET /health] --> T[Return Health Status]
end
subgraph "Hetzner Cloud"
U[Server Instances] --> V[Label Selector Filter]
V --> W[Public IPv4 Addresses]
end
subgraph "Kubernetes"
X[CiliumLoadBalancerIPPool CRD] --> Y[IP Pool Configuration]
Y --> Z[Load Balancer Services]
end
```
## Configuration
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `CANADA_KAKTUS_LOGLEVEL` | `Info` | Logging level (Debug, Info, Warn, Error) |
| `CANADA_KAKTUS_LABELSELECTOR` | `kops.k8s.io/instance-role=Node` | Label selector for Hetzner Cloud servers |
| `CANADA_KAKTUS_HCLOUD_TOKEN` | *(required)* | Hetzner Cloud API token |
### Configuration File
Optionally, a `config.json` file can be used with auto-reload capability:
```json
{
  "LogLevel": "Info",
  "LabelSelector": "kops.k8s.io/instance-role=Node",
  "HcloudToken": "your-hetzner-token-here"
}
```
## Deployment
### Prerequisites
- Kubernetes cluster with Cilium CNI
- Hetzner Cloud API token with read access to servers
- Proper RBAC permissions for CRD management
### Required Kubernetes Permissions
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: canada-kaktus
rules:
  - apiGroups: ["cilium.io"]
    resources: ["ciliumloadbalancerippools"]
    verbs: ["get", "create", "update", "patch"]
```
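The ClusterRole only takes effect once it is bound to the service account the controller runs as. A companion sketch, assuming a `canada-kaktus` service account in `kube-system` (both names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: canada-kaktus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: canada-kaktus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: canada-kaktus
subjects:
  - kind: ServiceAccount
    name: canada-kaktus
    namespace: kube-system
```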
### Kubernetes Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canada-kaktus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: canada-kaktus
  template:
    metadata:
      labels:
        app: canada-kaktus
    spec:
      serviceAccountName: canada-kaktus  # bound to the ClusterRole above
      containers:
        - name: canada-kaktus
          image: your-registry/canada-kaktus:latest
          env:
            - name: CANADA_KAKTUS_HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  name: hetzner-credentials
                  key: token
          ports:
            - containerPort: 8080
              name: health
          livenessProbe:
            httpGet:
              path: /health
              port: health
            initialDelaySeconds: 30
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /health
              port: health
            initialDelaySeconds: 5
            periodSeconds: 10
```
## API Endpoints
### Health Check
- **URL**: `GET /health`
- **Port**: `8080`
- **Response Codes**:
- `200 OK`: All operations successful
- `503 Service Unavailable`: Error in processing loop
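For a quick manual check, port-forward the Deployment and query the endpoint (namespace and resource names follow the examples above):

```bash
kubectl -n kube-system port-forward deploy/canada-kaktus 8080:8080 &

# Returns 200 while the last cycle succeeded, 503 after an error
curl -i http://localhost:8080/health
```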
## Generated Resources
### Cilium LoadBalancer IP Pool CRD
The application generates and maintains a `CiliumLoadBalancerIPPool` resource:
```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: covidnetes-pool
  annotations:
    argocd.argoproj.io/tracking-id: "cilium-lb:cilium.io/CiliumLoadBalancerIPPool:kube-system/covidnetes-pool"
    managed-by: "canada-kaktus"
spec:
  blocks:
    - cidr: "192.168.1.100/32"
    - cidr: "192.168.1.101/32"
  disabled: false
```
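To see what the controller last wrote, query the resource directly:

```bash
kubectl get ciliumloadbalancerippools.cilium.io covidnetes-pool -o yaml
```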
## Operation Details
### Main Loop Behavior
1. **Interval**: Runs every 15 minutes
2. **Error Handling**: Non-fatal errors are logged and health status is updated
3. **Resilience**: Continues operation despite temporary failures
4. **State Management**: Maintains health status for monitoring systems
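Condensed, the loop amounts to the sketch below (based on `cmd/main.go` as shown in the diff further down; resetting the state to 200 on success is an assumption):

```go
for {
	servers, err := internal.GetAllNodes(cfg)
	if err != nil {
		hs.SetHealthState(http.StatusServiceUnavailable) // mark unhealthy, keep running
	}
	ips, err := internal.GetAllIps(servers)
	if err != nil {
		hs.SetHealthState(http.StatusServiceUnavailable)
	}
	if err := internal.RecreateIPPoolCrd(cfg, "covidnetes-pool", ips); err != nil {
		hs.SetHealthState(http.StatusServiceUnavailable)
	} else {
		hs.SetHealthState(http.StatusOK) // assumed: healthy again after a clean cycle
	}
	time.Sleep(15 * time.Minute)
}
```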
### Error Scenarios
- **Hetzner API Failures**: Network issues, authentication problems, rate limiting
- **Kubernetes API Failures**: RBAC issues, CRD not found, API server unavailable
- **Configuration Issues**: Invalid tokens, missing permissions, malformed templates
### Logging
All operations are logged with structured JSON format including:
- Timestamp
- Log level
- Caller information
- Contextual details
- Error messages

Example log entry:
```json
{
  "Caller": "Main",
  "level": "info",
  "msg": "Successfully recreated IP Pool CRD",
  "time": "2025-10-07T10:30:00Z"
}
```
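A plausible setup in `utils/logging.go` that would produce this shape of output (the actual implementation may differ):

```go
package utils

import log "github.com/sirupsen/logrus"

// ConfigureLogger switches logrus to JSON output and applies the
// configured level, falling back to Info on unknown values.
func ConfigureLogger(level string) {
	log.SetFormatter(&log.JSONFormatter{})
	lvl, err := log.ParseLevel(level)
	if err != nil {
		lvl = log.InfoLevel
	}
	log.SetLevel(lvl)
}
```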
## Dependencies
### Go Modules
- **Hetzner Cloud SDK**: `github.com/hetznercloud/hcloud-go` - Hetzner Cloud API client
- **Kubernetes Client**: `k8s.io/client-go` - Kubernetes API interactions
- **Configuration**: `github.com/jinzhu/configor` - Environment and file-based config
- **Logging**: `github.com/sirupsen/logrus` - Structured logging
- **HTTP Router**: `github.com/gorilla/mux` - Health endpoint routing
### External Services
- **Hetzner Cloud API**: Server discovery and metadata retrieval
- **Kubernetes API**: CRD management and cluster integration
- **Cilium**: LoadBalancer IP pool consumption
## Monitoring and Observability
### Health Monitoring
- HTTP health endpoint for liveness/readiness probes
- Health status reflects the success of the last operation cycle
- Automatic health status updates on errors
### Logging
- Configurable log levels (Debug, Info, Warn, Error)
- Structured JSON output for log aggregation
- Contextual information for debugging
### Metrics
Currently, the application provides health status via HTTP endpoint. For production deployments, consider adding:
- Prometheus metrics for operation success/failure rates
- Timing metrics for API calls
- Counter metrics for IP pool updates
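As a starting point, a hypothetical instrumentation with `github.com/prometheus/client_golang` (not currently a dependency of this project) could look like:

```go
import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// poolUpdates counts IP pool update attempts, labelled by outcome.
var poolUpdates = promauto.NewCounterVec(prometheus.CounterOpts{
	Name: "canada_kaktus_pool_updates_total",
	Help: "IP pool update attempts by outcome.",
}, []string{"outcome"})

// In the reconcile loop:
//   poolUpdates.WithLabelValues("success").Inc()  // or "error"
// Exposed on the existing mux router:
//   r.Handle("/metrics", promhttp.Handler())
```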
## Troubleshooting
### Common Issues
1. **Authentication Failures**
   - Verify Hetzner Cloud token is valid and has necessary permissions
   - Check token is correctly set in environment variable
2. **No Servers Found**
   - Verify label selector matches your server configuration
   - Check servers exist in the configured Hetzner project
3. **Kubernetes Permission Errors**
   - Ensure proper RBAC permissions for CRD access
   - Verify service account has necessary cluster roles
4. **Health Endpoint Unavailable**
   - Check port 8080 is accessible
   - Verify no port conflicts in the cluster
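The first two checks can be scripted; a sketch assuming the `hcloud` CLI is installed and the service account naming used above:

```bash
# Do any servers match the selector the controller uses?
hcloud server list --selector 'kops.k8s.io/instance-role=Node'

# May the controller's service account update the pool resource?
kubectl auth can-i update ciliumloadbalancerippools.cilium.io \
  --as system:serviceaccount:kube-system:canada-kaktus
```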
### Debug Mode
Enable debug logging by setting:
```bash
export CANADA_KAKTUS_LOGLEVEL=Debug
```
This provides detailed information about:
- Server discovery process
- IP address extraction
- CRD template generation
- Kubernetes API interactions
## Development
### Building
```bash
go mod download
go build -o canada-kaktus ./cmd/main.go
```
### Testing
```bash
go test ./internal/...
```
### Local Development
For local testing, ensure you have:
- Valid Hetzner Cloud token
- Kubernetes cluster access (can use kind/minikube)
- Cilium installed in the cluster

Set environment variables and run:
```bash
export CANADA_KAKTUS_HCLOUD_TOKEN="your-token"
go run ./cmd/main.go
```

cmd/main.go

```diff
@@ -2,6 +2,7 @@ package main

 import (
 	"fmt"
+	"net/http"
 	"time"

 	"git.uploadfilter24.eu/covidnetes/canada-kaktus/internal"
@@ -20,11 +21,13 @@ func main() {
 		}).Fatal(fmt.Sprintf("Error generating Config: %s", err.Error()))
 	}

+	hs := internal.NewHealthServer()
 	go func() {
 		log.WithFields(log.Fields{
 			"Caller": "Main",
 		}).Info("Starting Health Endpoint")
-		internal.StartHealthEndpoint()
+		hs.Start()
 	}()

 	log.WithFields(log.Fields{
@@ -37,18 +40,21 @@ func main() {
 			log.WithFields(log.Fields{
 				"Caller": "Main",
 			}).Error(fmt.Sprintf("Error getting all Nodes: %s", err.Error()))
+			hs.SetHealthState(http.StatusServiceUnavailable)
 		}
 		ips, err := internal.GetAllIps(servers)
 		if err != nil {
 			log.WithFields(log.Fields{
 				"Caller": "Main",
 			}).Error(fmt.Sprintf("Error getting all IPs: %s", err.Error()))
+			hs.SetHealthState(http.StatusServiceUnavailable)
 		}
 		err = internal.RecreateIPPoolCrd(cfg, "covidnetes-pool", ips)
 		if err != nil {
 			log.WithFields(log.Fields{
 				"Caller": "Main",
 			}).Error(fmt.Sprintf("Error recreating IP Pool CRD: %s", err.Error()))
+			hs.SetHealthState(http.StatusServiceUnavailable)
 		} else {
 			log.WithFields(log.Fields{
 				"Caller": "Main",
```

go.mod

```diff
@@ -7,6 +7,7 @@ require (
 	github.com/hetznercloud/hcloud-go v1.59.2
 	github.com/jinzhu/configor v1.2.2
 	github.com/sirupsen/logrus v1.9.3
+	k8s.io/apimachinery v0.34.1
 	k8s.io/client-go v0.34.1
 )

@@ -38,7 +39,6 @@ require (
 	gopkg.in/inf.v0 v0.9.1 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
 	k8s.io/api v0.34.1 // indirect
-	k8s.io/apimachinery v0.34.1 // indirect
 	k8s.io/klog/v2 v2.130.1 // indirect
 	k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect
 	sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
```

internal/health.go

```diff
@@ -3,29 +3,54 @@ package internal

 import (
 	"fmt"
 	"net/http"
+	"sync"

 	"github.com/gorilla/mux"
 	log "github.com/sirupsen/logrus"
 )

-func StartHealthEndpoint() {
+type HealthServer struct {
+	mu    sync.RWMutex
+	state int
+}
+
+func NewHealthServer() *HealthServer {
+	return &HealthServer{
+		state: http.StatusOK,
+	}
+}
+
+func (hs *HealthServer) SetHealthState(code int) {
+	hs.mu.Lock()
+	defer hs.mu.Unlock()
+	hs.state = code
+}
+
+func (hs *HealthServer) GetHealthState() int {
+	hs.mu.RLock()
+	defer hs.mu.RUnlock()
+	return hs.state
+}
+
+func (hs *HealthServer) Start() {
 	r := mux.NewRouter()
 	r.Use(mux.CORSMethodMiddleware(r))
-	r.HandleFunc("/health", send200).Methods(http.MethodGet)
+	r.HandleFunc("/health", hs.sendHealth).Methods(http.MethodGet)
 	err := http.ListenAndServe("0.0.0.0:8080", r)
 	if err != nil {
 		log.WithFields(log.Fields{
-			"Caller": "StartHealthEndpoint",
+			"Caller": "HealthServer.Start",
 		}).Error(fmt.Sprintf("Error creating health endpoint: %s", err.Error()))
 	}
 }

-func send200(w http.ResponseWriter, r *http.Request) {
-	w.WriteHeader(http.StatusOK)
+func (hs *HealthServer) sendHealth(w http.ResponseWriter, r *http.Request) {
+	code := hs.GetHealthState()
+	w.WriteHeader(code)
 	_, err := w.Write([]byte{})
 	if err != nil {
 		log.WithFields(log.Fields{
-			"Caller": "send200",
+			"Caller": "HealthServer.sendHealth",
 		}).Error(fmt.Sprintf("Error answering health endpoint: %s", err.Error()))
 	}
 }
```

internal/health_test.go

```diff
@@ -7,8 +7,9 @@ import (
 )

 func TestHealth(t *testing.T) {
+	hs := NewHealthServer()
 	go func() {
-		StartHealthEndpoint()
+		hs.Start()
 	}()
 	request, _ := http.NewRequest(http.MethodGet, "http://localhost:8080/health", strings.NewReader(""))
 	resp, err := http.DefaultClient.Do(request)
```

internal/hetzner.go

```diff
@@ -18,11 +18,17 @@ func GetAllNodes(cfg *Config) ([]*hcloud.Server, error) {
 		return nil, fmt.Errorf("error listing Hetzner Nodes: %s", err.Error())
 	}

-	if servers == nil {
+	if len(servers) == 0 {
 		return nil, fmt.Errorf("no Nodes found with label selector: %s", cfg.LabelSelector)
 	}
-	return servers, nil

+	for _, instance := range servers {
+		log.WithFields(log.Fields{
+			"Caller": "GetAllNodes",
+		}).Debugf("Found server: %s", instance.Name)
+	}
+
+	return servers, nil
 }

 func GetAllIps(servers []*hcloud.Server) ([]string, error) {
@@ -33,7 +39,7 @@ func GetAllIps(servers []*hcloud.Server) ([]string, error) {
 		}
 		log.WithFields(log.Fields{
 			"Caller": "GetAllIps",
-		}).Info(fmt.Sprintf("Found IP: %s", instance.PrivateNet[0].IP.String()))
+		}).Debugf("Found IP: %s", instance.PublicNet.IPv4.IP.String())
 		ips[i] = instance.PublicNet.IPv4.IP.String()
 	}
 	return ips, nil
```

internal/k8s.go

```diff
@@ -3,9 +3,11 @@ package internal

 import (
 	"bytes"
 	"context"
+	"encoding/json"
 	"fmt"
 	"html/template"

+	log "github.com/sirupsen/logrus"
 	"k8s.io/apimachinery/pkg/runtime/schema"
 	"k8s.io/client-go/kubernetes/scheme"
 	"k8s.io/client-go/rest"
@@ -23,7 +25,8 @@ var IP_POOL_TEMPLATE = `
 	"metadata": {
 		"name": "{{ .Name }}",
 		"annotations": {
-			"argocd.argoproj.io/tracking-id": "cilium-lb:cilium.io/CiliumLoadBalancerIPPool:kube-system/covidnetes-pool"
+			"argocd.argoproj.io/tracking-id": "cilium-lb:cilium.io/CiliumLoadBalancerIPPool:kube-system/covidnetes-pool",
+			"managed-by": "canada-kaktus"
 		}
 	},
 	"spec": {
@@ -31,7 +34,7 @@ var IP_POOL_TEMPLATE = `
 		{{- range $i, $ip := .IPs }}
 		{{- if $i}},{{ end }}
 		{
-			"cidr": "{{ $ip }}"
+			"cidr": "{{ $ip }}/32"
 		}
 		{{- end }}
 	],
@@ -47,40 +50,68 @@ type CrdConfig struct {

 func RecreateIPPoolCrd(cfg *Config, name string, ips []string) error {
-	routeclient, err := createRestClient()
+	if len(ips) == 0 {
+		return fmt.Errorf("no IPs provided to create IP Pool CRD")
+	}
+
+	routeclient, err := createRestClient()
 	if err != nil {
 		return fmt.Errorf("error creating REST Client: %v", err.Error())
 	}

-	body, err := generateIpPool(name, ips)
+	resourceVersion, err := getResourceVersion(routeclient, name)
+	if err != nil {
+		return fmt.Errorf("error getting resourceVersion: %v", err.Error())
+	}
+
+	body, err := generateIpPool(name, ips)
 	if err != nil {
 		return fmt.Errorf("error generating CRD: %v", err.Error())
 	}

-	decode := scheme.Codecs.UniversalDeserializer().Decode
-	obj, _, err := decode([]byte(body), nil, nil)
-	if err != nil {
-		return fmt.Errorf("could not deserialize CRD: %v", err.Error())
+	// Inject resourceVersion into the JSON
+	var obj map[string]interface{}
+	if err := json.Unmarshal([]byte(body), &obj); err != nil {
+		return fmt.Errorf("could not unmarshal generated CRD: %v", err)
+	}
+	if meta, ok := obj["metadata"].(map[string]interface{}); ok {
+		meta["resourceVersion"] = resourceVersion
+	}
+	finalBody, err := json.Marshal(obj)
+	if err != nil {
+		return fmt.Errorf("could not marshal final CRD: %v", err)
 	}

-	res := routeclient.Post().
-		Resource("routes").
-		Body(&obj).
+	res := routeclient.Put().
+		Resource("ciliumloadbalancerippools").
+		Name(name).
+		Body(finalBody).
 		Do(context.TODO())

 	var status int
 	res.StatusCode(&status)
-	if status >= 200 && status <= 400 {
-		return fmt.Errorf("failed to post CRD to kube api: %v", res.Error().Error())
+	raw, rawErr := res.Raw()
+
+	if status < 200 || status >= 400 {
+		log.WithFields(log.Fields{
+			"Caller": "RecreateIPPoolCrd",
+		}).Warnf("Response from k8s api server: %s", string(raw))
+		return fmt.Errorf("failed to post CRD to kube api: %v", res.Error())
+	}
+
+	log.WithFields(log.Fields{
+		"Caller": "RecreateIPPoolCrd",
+	}).Debugf("Response from k8s api server: %s", string(raw))
+
+	if rawErr != nil {
+		log.WithFields(log.Fields{
+			"Caller": "RecreateIPPoolCrd",
+		}).Warnf("Could not get raw response from k8s api server: %v", rawErr)
+	}
 	return nil
 }

 func createRestClient() (*rest.RESTClient, error) {
 	k8s_config, err := rest.InClusterConfig()
 	if err != nil {
@@ -116,3 +147,27 @@ func generateIpPool(name string, ips []string) (string, error) {
 	}
 	return buf.String(), nil
 }
+
+func getResourceVersion(client *rest.RESTClient, name string) (string, error) {
+	res := client.Get().
+		Resource("ciliumloadbalancerippools").
+		Name(name).
+		Do(context.TODO())
+	raw, err := res.Raw()
+	if err != nil {
+		return "", fmt.Errorf("could not fetch CRD: %v", err)
+	}
+	var obj map[string]interface{}
+	if err := json.Unmarshal(raw, &obj); err != nil {
+		return "", fmt.Errorf("could not unmarshal CRD: %v", err)
+	}
+	meta, ok := obj["metadata"].(map[string]interface{})
+	if !ok {
+		return "", fmt.Errorf("metadata missing in CRD")
+	}
+	rv, ok := meta["resourceVersion"].(string)
+	if !ok {
+		return "", fmt.Errorf("resourceVersion missing in metadata")
+	}
+	return rv, nil
+}
```

internal/k8s_test.go

```diff
@@ -12,7 +12,8 @@ func TestGenerateIpPoolCRD(t *testing.T) {
 	"metadata": {
 		"name": "covidnetes-pool",
 		"annotations": {
-			"argocd.argoproj.io/tracking-id": "cilium-lb:cilium.io/CiliumLoadBalancerIPPool:kube-system/covidnetes-pool"
+			"argocd.argoproj.io/tracking-id": "cilium-lb:cilium.io/CiliumLoadBalancerIPPool:kube-system/covidnetes-pool",
+			"managed-by": "canada-kaktus"
 		}
 	},
 	"spec": {
@@ -28,7 +29,7 @@ func TestGenerateIpPoolCRD(t *testing.T) {
 	}
 }
 `
-	got, err := generateIpPool("covidnetes-pool", []string{"49.13.48.9/32", "91.107.211.117/32"})
+	got, err := generateIpPool("covidnetes-pool", []string{"49.13.48.9", "91.107.211.117"})
 	if err != nil {
 		t.Errorf("%s", err.Error())
 	}
```