# NovaNet Configuration Reference

This document covers all configuration options for NovaNet: Helm values, the `novanet.json` config file, and environment variables.

## Helm Values Reference

The following tables list all configurable values in the NovaNet Helm chart (`deploy/helm/novanet/values.yaml`).
### Image Configuration

| Key | Default | Description |
|---|---|---|
| `image.agent.repository` | `ghcr.io/azrtydxb/novanet/novanet-agent` | Container image for the Go management plane agent |
| `image.agent.tag` | `latest` | Image tag for the agent |
| `image.agent.pullPolicy` | `IfNotPresent` | Image pull policy for the agent. Use `Always` during development. |
| `image.dataplane.repository` | `ghcr.io/azrtydxb/novanet/novanet-dataplane` | Container image for the Rust eBPF dataplane |
| `image.dataplane.tag` | `latest` | Image tag for the dataplane |
| `image.dataplane.pullPolicy` | `IfNotPresent` | Image pull policy for the dataplane |
### Core Configuration

| Key | Default | Description |
|---|---|---|
| `config.clusterCIDR` | `"10.42.0.0/16"` | The cluster-wide CIDR from which PodCIDRs are allocated. Must match the cluster's `--cluster-cidr` setting. |
| `config.nodeCIDRMaskSize` | `24` | Subnet mask size for per-node PodCIDR allocation. A /24 provides 254 pod IPs per node. |
| `config.tunnelProtocol` | `"geneve"` | Tunnel encapsulation protocol for overlay mode. `"geneve"` supports identity metadata in TLV options; `"vxlan"` provides broader hardware offload compatibility. |
| `config.routingMode` | `"overlay"` | Networking mode. `"overlay"` creates tunnels between nodes; `"native"` uses underlay routing via the integrated routing manager and FRR sidecar. |
| `config.logLevel` | `"info"` | Log verbosity level. One of `"debug"`, `"info"`, `"warn"`, `"error"`. |
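The 254 figure for a /24 follows from the mask arithmetic: a /24 leaves 8 host bits, so 2^8 = 256 addresses, minus the network and broadcast addresses. A minimal sketch of the calculation (`usablePodIPs` is an illustrative helper, not part of NovaNet):

```go
package main

import "fmt"

// usablePodIPs returns the number of assignable pod IPs for an IPv4
// subnet of the given mask size: 2^(32-mask) minus the network and
// broadcast addresses.
func usablePodIPs(maskSize int) int {
	if maskSize < 0 || maskSize > 30 {
		return 0 // a /31 or /32 leaves no room for the two reserved addresses
	}
	return (1 << (32 - maskSize)) - 2
}

func main() {
	fmt.Println(usablePodIPs(24)) // 254 pod IPs per node with the default mask
	fmt.Println(usablePodIPs(26)) // 62 with a tighter /26
}
```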
### Native Routing

| Key | Default | Description |
|---|---|---|
| `routing.enabled` | `false` | Enable native routing. Must be `true` when `config.routingMode` is `"native"`. |
| `routing.protocol` | `"bgp"` | Routing protocol to use: `"bgp"` for eBGP peering or `"ospf"` for OSPF area injection. |
| `routing.frr_socket_dir` | `"/run/frr"` | Path to the FRR management socket directory. The FRR sidecar must mount this path. |
| `routing.controlPlaneVIP` | `""` | Virtual IP for Kubernetes API server HA. Advertised via BGP with health checks. |
| `routing.bfd.enabled` | `false` | Enable BFD (Bidirectional Forwarding Detection) for fast failure detection. |
| `routing.bfd.minRxMs` | `300` | BFD minimum receive interval in milliseconds. |
| `routing.bfd.minTxMs` | `300` | BFD minimum transmit interval in milliseconds. |
| `routing.bfd.detectMultiplier` | `3` | BFD detect multiplier (missed packets before declaring the peer down). |
| `routing.peers` | `[]` | List of BGP peers to configure. Each entry has `neighbor_address`, `remote_as`, `description`, and optionally `bfd_enabled`, `bfd_min_rx_ms`, `bfd_min_tx_ms`, `bfd_detect_multiplier`. |
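With the defaults above, worst-case failure detection is the negotiated interval times the multiplier: 300 ms × 3 = 900 ms. A sketch of the standard BFD arithmetic (the function name is illustrative, not NovaNet's code):

```go
package main

import "fmt"

// bfdDetectionTimeMs computes the worst-case BFD failure detection time:
// the negotiated interval (the larger of the local minimum-receive and the
// peer's minimum-transmit interval) multiplied by the detect multiplier.
func bfdDetectionTimeMs(localMinRxMs, peerMinTxMs, detectMultiplier int) int {
	interval := localMinRxMs
	if peerMinTxMs > interval {
		interval = peerMinTxMs
	}
	return interval * detectMultiplier
}

func main() {
	// Chart defaults: minRxMs=300, minTxMs=300, detectMultiplier=3
	fmt.Println(bfdDetectionTimeMs(300, 300, 3)) // 900 ms
}
```

Tightening `minRxMs`/`minTxMs` speeds up failover at the cost of more BFD control traffic.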
**Input validation:** All string parameters passed to FRR (neighbor addresses, descriptions, passwords, source addresses, network prefixes, route-map names, OSPF interface names, OSPF area IDs, and BFD peer interfaces) are validated before VTY commands are constructed. Neighbor addresses, source addresses, and BFD peer addresses must be valid IPv4 or IPv6 addresses. Address-family identifiers (AFI) are validated against a fixed allowlist (`ipv4-unicast`, `ipv4`, `ipv6-unicast`, `ipv6`); unrecognized values are rejected. All other string fields are rejected if they contain control characters (newlines, tabs, etc.) to prevent VTY command injection.
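The checks described above can be sketched roughly as follows; the helper names and exact rune ranges are illustrative, not NovaNet's actual implementation:

```go
package main

import (
	"fmt"
	"net/netip"
	"strings"
)

// validAFIs mirrors the fixed address-family allowlist described above.
var validAFIs = map[string]bool{
	"ipv4-unicast": true, "ipv4": true,
	"ipv6-unicast": true, "ipv6": true,
}

// safeVTYString rejects strings containing control characters
// (newlines, tabs, etc.), which could otherwise smuggle extra
// commands into the FRR VTY session.
func safeVTYString(s string) bool {
	return strings.IndexFunc(s, func(r rune) bool {
		return r < 0x20 || r == 0x7f
	}) == -1
}

// validNeighbor requires a syntactically valid IPv4 or IPv6 address.
func validNeighbor(addr string) bool {
	_, err := netip.ParseAddr(addr)
	return err == nil
}

func main() {
	fmt.Println(validNeighbor("192.168.100.2")) // true
	fmt.Println(validNeighbor("not-an-ip"))     // false
	fmt.Println(safeVTYString("TOR-1"))         // true
	fmt.Println(safeVTYString("TOR-1\nexit"))   // false: embedded newline
}
```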
### L4 Load Balancing

| Key | Default | Description |
|---|---|---|
| `l4lb.enabled` | `false` | Enable L4 socket-based load balancing (kube-proxy replacement). Uses eBPF cgroup programs for Kubernetes Service load balancing. |
### CNI Configuration

| Key | Default | Description |
|---|---|---|
| `cni.binPath` | `"/opt/cni/bin"` | Directory where the CNI binary is installed on the host. |
| `cni.confPath` | `"/etc/cni/net.d"` | Directory where the CNI configuration file is written on the host. |
### Egress Configuration

| Key | Default | Description |
|---|---|---|
| `egress.masqueradeEnabled` | `true` | Enable SNAT/masquerade for pod-to-external traffic. When enabled, pod source IPs are rewritten to the node IP for traffic leaving the cluster. |
### Policy Configuration

| Key | Default | Description |
|---|---|---|
| `policy.defaultDeny` | `false` | Enable cluster-wide default-deny policy. When `false` (the default), pods without any selecting NetworkPolicy allow all traffic (standard Kubernetes behavior). When `true`, all traffic is denied unless explicitly allowed by a NetworkPolicy. |
### eBPF Services

| Key | Default | Description |
|---|---|---|
| `ebpfServices.enabled` | `true` | Enable the eBPF Services gRPC API for external consumers (e.g., NovaEdge). Exposes SOCKMAP acceleration, mesh redirects, rate limiting, and backend health monitoring via a dedicated Unix socket. |
| `ebpfServices.socketPath` | `"/run/novanet/ebpf-services.sock"` | Unix socket path for the EBPFServices gRPC server. External consumers connect to this socket to request kernel-level eBPF operations. |
### Metrics Configuration

| Key | Default | Description |
|---|---|---|
| `metrics.enabled` | `true` | Enable the Prometheus metrics endpoint on the agent. |
| `metrics.port` | `9103` | Port for the Prometheus metrics HTTP endpoint. |
### Resource Management

| Key | Default | Description |
|---|---|---|
| `resources.agent.requests.cpu` | `"100m"` | CPU request for the agent container. |
| `resources.agent.requests.memory` | `"128Mi"` | Memory request for the agent container. |
| `resources.agent.limits.cpu` | `"500m"` | CPU limit for the agent container. |
| `resources.agent.limits.memory` | `"256Mi"` | Memory limit for the agent container. |
| `resources.dataplane.requests.cpu` | `"100m"` | CPU request for the dataplane container. |
| `resources.dataplane.requests.memory` | `"64Mi"` | Memory request for the dataplane container. |
| `resources.dataplane.limits.cpu` | `"500m"` | CPU limit for the dataplane container. |
| `resources.dataplane.limits.memory` | `"128Mi"` | Memory limit for the dataplane container. |
### Scheduling

| Key | Default | Description |
|---|---|---|
| `tolerations` | `[{"operator": "Exists", "effect": "NoSchedule"}, {"operator": "Exists", "effect": "NoExecute"}]` | Tolerations applied to the DaemonSet pods. The defaults tolerate all taints with `NoSchedule` and `NoExecute` effects so NovaNet runs on every node. |
| `nodeSelector` | `{kubernetes.io/os: linux}` | Node selector for the DaemonSet. Defaults to Linux nodes only. |
| `priorityClassName` | `"system-node-critical"` | Priority class for NovaNet pods. CNI pods must be high priority to ensure networking is available for other workloads. |
| `updateStrategy.type` | `"RollingUpdate"` | DaemonSet update strategy. |
| `updateStrategy.rollingUpdate.maxUnavailable` | `1` | Maximum number of nodes updated simultaneously during a rolling update. |
## novanet.json Config File

The Helm chart generates a ConfigMap that is mounted as `/etc/novanet/novanet.json` inside the agent container. The file is generated from the Helm values and should not be edited directly in production, but understanding its format is useful for debugging.

### Full Schema
```json
{
  "listen_socket": "/run/novanet/novanet.sock",
  "cni_socket": "/run/novanet/cni.sock",
  "dataplane_socket": "/run/novanet/dataplane.sock",
  "cluster_cidr": "10.42.0.0/16",
  "node_cidr_mask_size": 24,
  "tunnel_protocol": "geneve",
  "routing_mode": "overlay",
  "routing": {
    "protocol": "bgp",
    "frr_socket_dir": "/run/frr"
  },
  "egress": {
    "masquerade_enabled": true
  },
  "policy": {
    "default_deny": false
  },
  "ebpf_services": {
    "enabled": true,
    "socket_path": "/run/novanet/ebpf-services.sock"
  },
  "log_level": "info",
  "metrics_address": "127.0.0.1:9103"
}
```
### Field Descriptions

| Field | Type | Description |
|---|---|---|
| `listen_socket` | string | Unix socket path where the agent listens for CLI connections. |
| `cni_socket` | string | Unix socket path where the agent listens for CNI binary requests. |
| `dataplane_socket` | string | Unix socket path for agent-to-dataplane gRPC communication. |
| `cluster_cidr` | string | Cluster-wide pod CIDR in notation like `"10.42.0.0/16"`. |
| `node_cidr_mask_size` | int | Subnet mask size for per-node PodCIDR allocation (e.g., 24). |
| `tunnel_protocol` | string | `"geneve"` or `"vxlan"`. Only used in overlay mode. |
| `routing_mode` | string | `"overlay"` or `"native"`. |
| `routing.protocol` | string | Routing protocol: `"bgp"` or `"ospf"`. Only used in native mode. |
| `routing.frr_socket_dir` | string | Path to the FRR management socket directory. Only used in native mode. |
| `egress.masquerade_enabled` | bool | Enable SNAT/masquerade for pod-to-external traffic. |
| `policy.default_deny` | bool | Enable cluster-wide default-deny policy. |
| `log_level` | string | One of `"debug"`, `"info"`, `"warn"`, `"error"`. |
| `ebpf_services.enabled` | bool | Enable the eBPF Services gRPC API. |
| `ebpf_services.socket_path` | string | Unix socket path for the EBPFServices gRPC server. |
| `metrics_address` | string | Address for the Prometheus metrics HTTP endpoint (default: `127.0.0.1:9103`, localhost only). |
### Validation Rules

The agent validates the config on startup and exits with a clear error if any rule is violated:

- `cluster_cidr` must be valid CIDR notation
- `tunnel_protocol` must be `"geneve"` or `"vxlan"`
- `routing_mode` must be `"overlay"` or `"native"`
- If `routing_mode` is `"native"`, `routing.protocol` must be set
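A rough sketch of these startup checks; the `Config` type and `Validate` method here are illustrative, not the agent's actual code:

```go
package main

import (
	"fmt"
	"net/netip"
)

// Config holds the subset of novanet.json fields covered by the
// validation rules listed above.
type Config struct {
	ClusterCIDR     string
	TunnelProtocol  string
	RoutingMode     string
	RoutingProtocol string
}

// Validate applies each rule in order and returns the first violation.
func (c Config) Validate() error {
	if _, err := netip.ParsePrefix(c.ClusterCIDR); err != nil {
		return fmt.Errorf("cluster_cidr %q is not valid CIDR notation: %w", c.ClusterCIDR, err)
	}
	if c.TunnelProtocol != "geneve" && c.TunnelProtocol != "vxlan" {
		return fmt.Errorf("tunnel_protocol must be %q or %q, got %q", "geneve", "vxlan", c.TunnelProtocol)
	}
	if c.RoutingMode != "overlay" && c.RoutingMode != "native" {
		return fmt.Errorf("routing_mode must be %q or %q, got %q", "overlay", "native", c.RoutingMode)
	}
	if c.RoutingMode == "native" && c.RoutingProtocol == "" {
		return fmt.Errorf("routing.protocol must be set when routing_mode is %q", "native")
	}
	return nil
}

func main() {
	ok := Config{ClusterCIDR: "10.42.0.0/16", TunnelProtocol: "geneve", RoutingMode: "overlay"}
	fmt.Println(ok.Validate() == nil) // true

	bad := Config{ClusterCIDR: "10.42.0.0/16", TunnelProtocol: "geneve", RoutingMode: "native"}
	fmt.Println(bad.Validate() == nil) // false: native mode without routing.protocol
}
```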
## Environment Variables

The following environment variables are set automatically by the Helm chart via the DaemonSet pod spec (using the downward API and ConfigMap). They can also be used to override config file values.

| Variable | Source | Description |
|---|---|---|
| `NOVANET_NODE_NAME` | Downward API (`spec.nodeName`) | Name of the Kubernetes node this agent is running on. Used to identify the node in Kubernetes API queries. |
| `NOVANET_NODE_IP` | Downward API (`status.hostIP`) | IP address of the node. Used as the tunnel source IP in overlay mode and as the router ID in native mode. |
| `NOVANET_POD_CIDR` | Kubernetes Node spec | The PodCIDR allocated to this node by the cluster. Discovered via the Kubernetes API using `NOVANET_NODE_NAME`. |
| `NOVANET_CLUSTER_CIDR` | ConfigMap | The cluster-wide pod CIDR (e.g., `10.42.0.0/16`). Overrides `cluster_cidr` in the config file. |
| `NOVANET_ROUTING_MODE` | ConfigMap | Routing mode (`overlay` or `native`). Overrides `routing_mode` in the config file. |
| `NOVANET_TUNNEL_PROTOCOL` | ConfigMap | Tunnel protocol (`geneve` or `vxlan`). Overrides `tunnel_protocol` in the config file. |
The config file supports environment variable expansion: any value in `novanet.json` containing `${VAR_NAME}` is replaced with the environment variable's value at load time.
## Example Configurations

### Overlay Mode with Geneve (Default)

This is the simplest configuration and works on any network without special requirements.
```yaml
# values-overlay-geneve.yaml
config:
  clusterCIDR: "10.42.0.0/16"
  nodeCIDRMaskSize: 24
  tunnelProtocol: "geneve"
  routingMode: "overlay"
  logLevel: "info"
routing:
  enabled: false
egress:
  masqueradeEnabled: true
policy:
  defaultDeny: false
metrics:
  enabled: true
  port: 9103
```
Install:

```shell
helm install novanet ./deploy/helm/novanet \
  -n nova-system \
  --create-namespace \
  -f values-overlay-geneve.yaml
```
### Native Routing with BGP

A high-performance configuration using BGP to distribute pod routes. Requires a BGP-capable network.
```yaml
# values-native-bgp.yaml
config:
  clusterCIDR: "10.42.0.0/16"
  nodeCIDRMaskSize: 24
  routingMode: "native"
  logLevel: "info"
routing:
  enabled: true
  protocol: "bgp"
  frr_socket_dir: "/run/frr"
  controlPlaneVIP: "192.168.100.10"
  bfd:
    enabled: true
    minRxMs: 300
    minTxMs: 300
    detectMultiplier: 3
  peers:
    - neighbor_address: "192.168.100.2"
      remote_as: 65000
      description: "TOR-1"
      bfd_enabled: true
    - neighbor_address: "192.168.100.3"
      remote_as: 65000
      description: "TOR-2"
      bfd_enabled: true
l4lb:
  enabled: true
egress:
  masqueradeEnabled: true
policy:
  defaultDeny: false
metrics:
  enabled: true
  port: 9103
```
Install:

```shell
helm install novanet ./deploy/helm/novanet \
  -n nova-system \
  --create-namespace \
  -f values-native-bgp.yaml
```
### Security-Hardened with Default Deny
For production environments requiring strict network isolation.
```yaml
# values-secure.yaml
config:
  clusterCIDR: "10.42.0.0/16"
  nodeCIDRMaskSize: 24
  tunnelProtocol: "geneve"
  routingMode: "overlay"
  logLevel: "warn"
routing:
  enabled: false
egress:
  masqueradeEnabled: true
policy:
  defaultDeny: true
metrics:
  enabled: true
  port: 9103
resources:
  agent:
    requests:
      cpu: "200m"
      memory: "256Mi"
    limits:
      cpu: "1000m"
      memory: "1Gi"
  dataplane:
    requests:
      cpu: "200m"
      memory: "128Mi"
    limits:
      cpu: "1000m"
      memory: "512Mi"
```
**Important:** When `policy.defaultDeny` is `true`, all pod-to-pod traffic is denied by default. You must create NetworkPolicy objects to allow required communication paths. At minimum, you will need policies for:
- DNS resolution (allow pods to reach CoreDNS on port 53 UDP/TCP)
- Kubernetes API server access (for pods that need it)
- Application-specific ingress and egress rules
### Custom Egress Policies

This example restricts egress for a specific namespace while leaving others unrestricted:
```yaml
# values-egress-restricted.yaml
config:
  clusterCIDR: "10.42.0.0/16"
  routingMode: "overlay"
egress:
  masqueradeEnabled: true
```
Egress restrictions are applied via standard Kubernetes NetworkPolicy objects:
```yaml
# Deny all egress from the "restricted" namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: restricted
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress: []
---
# Allow only DNS and specific external CIDRs
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-and-api
  namespace: restricted
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24
      ports:
        - protocol: TCP
          port: 443
```
## Admission Webhooks

The NovaNet operator registers validating admission webhooks for the following CRDs. These webhooks reject malformed resources at creation/update time rather than letting them fail silently at runtime.

| CRD | Webhook | Validations |
|---|---|---|
| `HostEndpointPolicy` | `vhostendpointpolicy.kb.io` | Action must be `Allow` or `Deny`; CIDRs must be valid and canonical; ports must be in range 1-65535; `endPort` must be >= `port`; protocol must be TCP, UDP, or SCTP |
| `NovaNetworkPolicy` | `vnovanetworkpolicy.kb.io` | PolicyTypes must be `Ingress` or `Egress`; IPBlock CIDRs and exceptions must be valid, and exceptions must be contained within the parent CIDR; port numbers must be in range; `endPort` requires `port`; protocol must be TCP, UDP, or SCTP |
| `IPPool` | `vippool.kb.io` | Type must be a valid IPPoolType; CIDRs must be valid and non-overlapping; addresses must be valid IPs; at least one CIDR or address is required |
| `EgressGatewayPolicy` | `vegressgatewaypolicy.kb.io` | Destination and excluded CIDRs must be valid and canonical; `egressIP` must be a valid IP address |
| `NovaNetCluster` | `vnovanetcluster.kb.io` | `ClusterCIDR` and `ClusterCIDRv6` must be valid CIDRs; `ControlPlaneVIP` must be a valid IP; MTU must be 0 (auto) or 1280-9000; agent ports must be in range 1-65535 |
**Note:** `IPAllocation` is an internal-only CRD managed exclusively by the IPAM controller and does not have a validating webhook.

The webhook manifests are located at `config/webhook/manifests.yaml`. The operator wires them up automatically via controller-runtime's webhook builder.
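"Valid and canonical" in the table above means a CIDR's address must equal its prefix's network address (no host bits set). A sketch of how such checks might look; the function names are illustrative, not the operator's actual code:

```go
package main

import (
	"fmt"
	"net/netip"
)

// canonicalCIDR reports whether s parses as a CIDR whose address is the
// prefix's network address. "10.42.1.0/24" is canonical; "10.42.1.5/24"
// is valid CIDR syntax but not canonical.
func canonicalCIDR(s string) bool {
	p, err := netip.ParsePrefix(s)
	return err == nil && p == p.Masked()
}

// validPortRange mirrors the port checks: both values in 1-65535,
// with endPort >= port.
func validPortRange(port, endPort int) bool {
	return port >= 1 && port <= 65535 && endPort >= port && endPort <= 65535
}

func main() {
	fmt.Println(canonicalCIDR("10.42.1.0/24")) // true
	fmt.Println(canonicalCIDR("10.42.1.5/24")) // false: host bits set
	fmt.Println(validPortRange(80, 8080))      // true
}
```

Rejecting non-canonical CIDRs at admission time avoids ambiguity later: `10.42.1.5/24` and `10.42.1.0/24` denote the same network but would not compare equal as strings.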
## Socket Paths

NovaNet uses Unix domain sockets for all inter-component communication:

| Socket | Default Path | Purpose |
|---|---|---|
| Agent listen | `/run/novanet/novanet.sock` | CLI (`novanetctl`) connects here |
| CNI | `/run/novanet/cni.sock` | CNI binary connects here during pod setup |
| Dataplane | `/run/novanet/dataplane.sock` | Agent-to-dataplane gRPC communication |
| eBPF Services | `/run/novanet/ebpf-services.sock` | External consumers (e.g., NovaEdge) connect here for eBPF operations |
| FRR | `/run/frr/` | Agent routing manager communicates with the FRR sidecar |

All sockets under `/run/novanet/` are created by the NovaNet agent. The FRR socket directory is managed by the FRR sidecar container within the same DaemonSet pod.
## Next Steps
- Installation Guide -- Getting started
- Native Routing Guide -- Native routing setup
- Troubleshooting Guide -- Debugging issues