Compare commits: ebed73dc11...metadata-a (29 commits)
Commit SHAs: 377af163f0, 852275945c, c7dcebb894, 2431d3cfb0, b4f57b3604, e6d87d025f, 3aff0ab5ef, 8d6a280441, aae9022bb2, c3ded9bfd2, 95a5b880d7, 16c0766a15, bd1d1c2c7c, eaed6e76e4, 6cdc561e42, b6332d7ff5, 85f3aa69d2, a5ea869b28, 5223438ddf, 9f12f3dbcb, c273b836be, 83777fe5a2, b1d5423108, f9965c8f9c, 7d7e6e412a, 5ab03331fc, 3775939a3b, 9ea19a3532, 45d5c38c90
.gitignore (vendored, 2 changes)
@@ -4,3 +4,5 @@ data*/
*.yaml
!config.yaml
kvs
*.log
extract_*.py
CLAUDE.md (new file, 155 lines)
@@ -0,0 +1,155 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Commands for Development

### Build and Test Commands
```bash
# Build the binary
go build -o kvs .

# Run with default config (auto-generates config.yaml)
./kvs

# Run with custom config
./kvs /path/to/config.yaml

# Run comprehensive integration tests
./integration_test.sh

# Create test conflict data for debugging
go run test_conflict.go data1 data2

# Build and test in one go
go build -o kvs . && ./integration_test.sh
```

### Development Workflow
```bash
# Format and check code
go fmt ./...
go vet ./...

# Manage dependencies
go mod tidy

# Verify the build compiles
go build .

# Test specific cluster scenarios
./kvs node1.yaml &   # Terminal 1
./kvs node2.yaml &   # Terminal 2
curl -X PUT http://localhost:8081/kv/test/data -H "Content-Type: application/json" -d '{"test":"data"}'
curl http://localhost:8082/kv/test/data   # Should replicate within ~30 seconds
pkill kvs
```

## Architecture Overview

### High-Level Structure
KVS is a **distributed, eventually consistent key-value store** built around three core systems:

1. **Gossip Protocol** (`cluster/gossip.go`) - Decentralized membership management and failure detection
2. **Merkle Tree Sync** (`cluster/sync.go`, `cluster/merkle.go`) - Efficient data synchronization and conflict resolution
3. **Modular Server** (`server/`) - HTTP API with pluggable feature modules

### Key Architectural Patterns

#### Modular Package Design
- **`auth/`** - Complete JWT authentication system with POSIX-inspired permissions
- **`cluster/`** - Distributed systems logic (gossip, sync, Merkle trees)
- **`storage/`** - BadgerDB abstraction with compression and revision history
- **`server/`** - HTTP handlers, routing, and lifecycle management
- **`features/`** - Utility functions for TTL, rate limiting, tamper logging, and backup
- **`types/`** - Centralized type definitions for all components
- **`config/`** - Configuration loading with auto-generation
- **`utils/`** - Cryptographic hashing utilities

#### Core Data Model
```go
// Primary storage format
type StoredValue struct {
	UUID      string          `json:"uuid"`      // Unique version identifier
	Timestamp int64           `json:"timestamp"` // Unix timestamp (milliseconds)
	Data      json.RawMessage `json:"data"`      // Actual user JSON payload
}
```

#### Critical System Interactions

**Conflict Resolution Flow:**
1. Merkle trees detect divergent data between nodes (`cluster/merkle.go`)
2. The sync service fetches conflicting keys (`cluster/sync.go:fetchAndCompareData`)
3. Conflict resolution logic in `resolveConflict()` (see the sketch below):
   - Same timestamp → apply the "oldest-node rule" (earliest `joined_timestamp` wins)
   - Tie-breaker → UUID comparison for deterministic results
   - The winner's data is automatically replicated to the losing nodes
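
A minimal sketch of those rules in Go. `resolveConflict` does exist in `cluster/sync.go`, but the type, signature, and field names here are assumptions for illustration only:

```go
package cluster // illustrative placement

// version is a simplified stand-in for the fields the real resolver compares.
type version struct {
	UUID            string
	Timestamp       int64 // Unix milliseconds
	NodeJoinedStamp int64 // joined_timestamp of the node holding this version
}

// resolveConflict sketches the documented rules; the real function also
// replicates the winning version to the losing nodes.
func resolveConflict(a, b version) version {
	// Newer timestamp wins outright.
	if a.Timestamp != b.Timestamp {
		if a.Timestamp > b.Timestamp {
			return a
		}
		return b
	}
	// Same timestamp: oldest-node rule (earliest joined_timestamp wins).
	if a.NodeJoinedStamp != b.NodeJoinedStamp {
		if a.NodeJoinedStamp < b.NodeJoinedStamp {
			return a
		}
		return b
	}
	// Final tie-breaker: lexicographically smaller UUID wins deterministically.
	if a.UUID < b.UUID {
		return a
	}
	return b
}
```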

**Authentication & Authorization:**
- JWT tokens with scoped permissions (`auth/jwt.go`)
- POSIX-inspired 12-bit permission system (`types/types.go:52-75`); a sketch of the bit layout follows below
- Resource ownership metadata with TTL support (`types/ResourceMetadata`)
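
The 12 bits cover four operations (create/delete/write/read) for three classes (owner/group/others), matching the checks in `auth/permissions.go`. The constant names below appear in that file; the bit values are an assumed layout, the real ones live in `types/types.go`:

```go
// Assumed bit layout for illustration; see types/types.go for the real values.
const (
	PermOwnerCreate  = 1 << 11
	PermOwnerDelete  = 1 << 10
	PermOwnerWrite   = 1 << 9
	PermOwnerRead    = 1 << 8
	PermGroupCreate  = 1 << 7
	PermGroupDelete  = 1 << 6
	PermGroupWrite   = 1 << 5
	PermGroupRead    = 1 << 4
	PermOthersCreate = 1 << 3
	PermOthersDelete = 1 << 2
	PermOthersWrite  = 1 << 1
	PermOthersRead   = 1 << 0
)

// Example: owner has full access, group can read/write, others read-only.
const examplePerms = PermOwnerCreate | PermOwnerDelete | PermOwnerWrite | PermOwnerRead |
	PermGroupWrite | PermGroupRead | PermOthersRead
```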

**Storage Strategy:**
- **Main keys**: Direct path mapping (`users/john/profile`)
- **Index keys**: `_ts:{timestamp}:{path}` for time-based queries (see the sketch below)
- **Compression**: Optional ZSTD compression (`storage/compression.go`)
- **Revisions**: Optional revision history (`storage/revision.go`)
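
A small sketch of how these keys could be built. The helper names are hypothetical; only the `_ts:{timestamp}:{path}` format is taken from the bullet above:

```go
package storage // hypothetical placement

import "fmt"

// MainKey maps a path directly to its storage key, e.g. "users/john/profile".
func MainKey(path string) []byte {
	return []byte(path)
}

// TimestampIndexKey builds the time-ordered index key,
// e.g. "_ts:1672531200000:users/john/profile".
func TimestampIndexKey(tsMillis int64, path string) []byte {
	return []byte(fmt.Sprintf("_ts:%d:%s", tsMillis, path))
}
```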

### Configuration Architecture

The system uses feature toggles extensively (`types/Config:271-280`):
```yaml
auth_enabled: true              # JWT authentication system
tamper_logging_enabled: true    # Cryptographic audit trail
clustering_enabled: true        # Gossip protocol and sync
rate_limiting_enabled: true     # Per-client rate limiting
revision_history_enabled: true  # Automatic versioning

# Anonymous access control (Issue #5 - when auth_enabled: true)
allow_anonymous_read: false     # Allow unauthenticated read access to KV endpoints
allow_anonymous_write: false    # Allow unauthenticated write access to KV endpoints
```

**Security Note**: DELETE operations always require authentication when `auth_enabled: true`, regardless of the anonymous access settings.
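
A sketch of how those settings could gate a request. The decision table follows the toggles and the security note above; the helper and the `AllowAnonymousRead`/`AllowAnonymousWrite` field names are assumptions, the real gating lives in `auth/middleware.go`:

```go
package server // illustrative placement

import (
	"net/http"

	"kvs/types"
)

// requiresAuth is a hypothetical helper showing the documented policy.
func requiresAuth(cfg *types.Config, method string) bool {
	if !cfg.AuthEnabled {
		return false
	}
	switch method {
	case http.MethodGet:
		return !cfg.AllowAnonymousRead // assumed field name
	case http.MethodPut, http.MethodPost:
		return !cfg.AllowAnonymousWrite // assumed field name
	default:
		// DELETE (and anything else) always needs a token when auth is on.
		return true
	}
}
```
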
### Testing Strategy

#### Integration Test Suite (`integration_test.sh`)
- **Build verification** - Ensures the binary compiles correctly
- **Basic functionality** - Single-node CRUD operations
- **Cluster formation** - Two-node gossip protocol and data replication
- **Conflict resolution** - Automated conflict detection and resolution using `test_conflict.go`
- **Authentication middleware** - Comprehensive security testing (Issue #4):
  - Admin endpoints properly reject unauthenticated requests
  - Admin endpoints work with valid JWT tokens
  - KV endpoints respect the anonymous access configuration
  - Automatic root account creation and token extraction

The test suite uses retry logic and generous timing to handle the eventually consistent nature of the system.

#### Conflict Testing Utility (`test_conflict.go`)
Creates two BadgerDB instances with intentionally conflicting data (same path, same timestamp, different UUIDs) to exercise the conflict resolution algorithm.

### Development Notes

#### Key Constraints
- **Eventually Consistent**: All operations succeed locally first, then replicate
- **Local-First Truth**: Nodes operate independently and sync in the background
- **No Transactions**: Each key operation is atomic and independent
- **Hierarchical Keys**: Support for path-like structures (`/home/room/closet/socks`)

#### Critical Timing Considerations
- **Gossip intervals**: 1-2 minutes for membership updates
- **Sync intervals**: 5 minutes for regular data sync, 2 minutes for catch-up
- **Conflict resolution**: Typically resolves within 10-30 seconds after detection
- **Bootstrap sync**: Up to 30 days of historical data for new nodes

#### Main Entry Point Flow
1. `main.go` loads the config (auto-generates a default if missing)
2. `server.NewServer()` initializes all subsystems
3. Graceful shutdown handling with `SIGINT`/`SIGTERM`
4. All business logic is delegated to modular packages (a minimal sketch follows below)
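
A minimal sketch of that flow. The `config.Load`, `server.NewServer`, `Start`, and `Stop` signatures are assumptions; only the overall sequence comes from the list above:

```go
package main

import (
	"os"
	"os/signal"
	"syscall"

	"kvs/config"
	"kvs/server"
)

func main() {
	// 1. Load config; a default config.yaml is auto-generated if missing.
	path := "config.yaml"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	cfg, err := config.Load(path) // assumed signature
	if err != nil {
		panic(err)
	}

	// 2. One constructor initializes every subsystem.
	srv := server.NewServer(cfg) // assumed signature
	go srv.Start()               // assumed method

	// 3. Graceful shutdown on SIGINT/SIGTERM.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
	<-sig
	srv.Stop() // assumed method
}
```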

This architecture enables easy feature addition, comprehensive testing, and reliable operation in distributed environments while maintaining simplicity for single-node deployments.
README.md (368 changes)
@@ -6,12 +6,14 @@ A minimalistic, clustered key-value database system written in Go that prioritiz

- **Hierarchical Keys**: Support for structured paths (e.g., `/home/room/closet/socks`)
- **Eventual Consistency**: Local operations are fast; replication happens in the background
- **Gossip Protocol**: Decentralized node discovery and failure detection
- **Sophisticated Conflict Resolution**: Majority vote with oldest-node tie-breaking
- **Merkle Tree Sync**: Efficient data synchronization with cryptographic integrity
- **Sophisticated Conflict Resolution**: Oldest-node rule with UUID tie-breaking
- **JWT Authentication**: Full authentication system with POSIX-inspired permissions
- **Local-First Truth**: All operations work locally first, sync globally later
- **Read-Only Mode**: Configurable mode for reducing write load
- **Gradual Bootstrapping**: New nodes integrate smoothly without overwhelming the cluster
- **Zero Dependencies**: Single binary with embedded BadgerDB storage
- **Modular Architecture**: Clean separation of concerns with feature toggles
- **Comprehensive Features**: TTL support, rate limiting, tamper logging, automated backups
- **Zero External Dependencies**: Single binary with embedded BadgerDB storage

## 🏗️ Architecture

@@ -21,24 +23,36 @@ A minimalistic, clustered key-value database system written in Go that prioritiz
│  (Go Service)   │    │  (Go Service)   │    │  (Go Service)   │
│                 │    │                 │    │                 │
│ ┌─────────────┐ │    │ ┌─────────────┐ │    │ ┌─────────────┐ │
│ │ HTTP Server │ │◄──►│ │ HTTP Server │ │◄──►│ │ HTTP Server │ │
│ │   (API)     │ │    │ │   (API)     │ │    │ │   (API)     │ │
│ │HTTP API+Auth│ │◄──►│ │HTTP API+Auth│ │◄──►│ │HTTP API+Auth│ │
│ └─────────────┘ │    │ └─────────────┘ │    │ └─────────────┘ │
│ ┌─────────────┐ │    │ ┌─────────────┐ │    │ ┌─────────────┐ │
│ │   Gossip    │ │◄──►│ │   Gossip    │ │◄──►│ │   Gossip    │ │
│ │  Protocol   │ │    │ │  Protocol   │ │    │ │  Protocol   │ │
│ └─────────────┘ │    │ └─────────────┘ │    │ └─────────────┘ │
│ ┌─────────────┐ │    │ ┌─────────────┐ │    │ ┌─────────────┐ │
│ │  BadgerDB   │ │    │ │  BadgerDB   │ │    │ │  BadgerDB   │ │
│ │ (Local KV)  │ │    │ │ (Local KV)  │ │    │ │ (Local KV)  │ │
│ │Merkle Sync  │ │◄──►│ │Merkle Sync  │ │◄──►│ │Merkle Sync  │ │
│ │& Conflict   │ │    │ │& Conflict   │ │    │ │& Conflict   │ │
│ │ Resolution  │ │    │ │ Resolution  │ │    │ │ Resolution  │ │
│ └─────────────┘ │    │ └─────────────┘ │    │ └─────────────┘ │
│ ┌─────────────┐ │    │ ┌─────────────┐ │    │ ┌─────────────┐ │
│ │Storage+     │ │    │ │Storage+     │ │    │ │Storage+     │ │
│ │Features     │ │    │ │Features     │ │    │ │Features     │ │
│ │(BadgerDB)   │ │    │ │(BadgerDB)   │ │    │ │(BadgerDB)   │ │
│ └─────────────┘ │    │ └─────────────┘ │    │ └─────────────┘ │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         ▲
         │
  External Clients
External Clients (JWT Auth)
```

Each node is fully autonomous and communicates with peers via HTTP REST API for both external client requests and internal cluster operations.

### Modular Design
KVS features a clean modular architecture with dedicated packages:
- **`auth/`** - JWT authentication and POSIX-inspired permissions
- **`cluster/`** - Gossip protocol, Merkle tree sync, and conflict resolution
- **`storage/`** - BadgerDB abstraction with compression and revisions
- **`server/`** - HTTP API, routing, and lifecycle management
- **`features/`** - TTL, rate limiting, tamper logging, backup utilities
- **`config/`** - Configuration management with auto-generation

## 📦 Installation

@@ -67,20 +81,47 @@ curl http://localhost:8080/health
KVS uses YAML configuration files. On first run, a default `config.yaml` is automatically generated:

```yaml
node_id: "hostname"             # Unique node identifier
bind_address: "127.0.0.1"       # IP address to bind to
port: 8080                      # HTTP port
data_dir: "./data"              # Directory for BadgerDB storage
seed_nodes: []                  # List of seed nodes for cluster joining
read_only: false                # Enable read-only mode
log_level: "info"               # Logging level (debug, info, warn, error)
gossip_interval_min: 60         # Min gossip interval (seconds)
gossip_interval_max: 120        # Max gossip interval (seconds)
sync_interval: 300              # Regular sync interval (seconds)
catchup_interval: 120           # Catch-up sync interval (seconds)
bootstrap_max_age_hours: 720    # Max age for bootstrap sync (hours)
throttle_delay_ms: 100          # Delay between sync requests (ms)
fetch_delay_ms: 50              # Delay between data fetches (ms)
node_id: "hostname"             # Unique node identifier
bind_address: "127.0.0.1"       # IP address to bind to
port: 8080                      # HTTP port
data_dir: "./data"              # Directory for BadgerDB storage
seed_nodes: []                  # List of seed nodes for cluster joining
read_only: false                # Enable read-only mode
log_level: "info"               # Logging level (debug, info, warn, error)

# Cluster timing configuration
gossip_interval_min: 60         # Min gossip interval (seconds)
gossip_interval_max: 120        # Max gossip interval (seconds)
sync_interval: 300              # Regular sync interval (seconds)
catchup_interval: 120           # Catch-up sync interval (seconds)
bootstrap_max_age_hours: 720    # Max age for bootstrap sync (hours)
throttle_delay_ms: 100          # Delay between sync requests (ms)
fetch_delay_ms: 50              # Delay between data fetches (ms)

# Feature configuration
compression_enabled: true       # Enable ZSTD compression
compression_level: 3            # Compression level (1-19)
default_ttl: "0"                # Default TTL ("0" = no expiry)
max_json_size: 1048576          # Max JSON payload size (1MB)
rate_limit_requests: 100        # Requests per window
rate_limit_window: "1m"         # Rate limit window

# Feature toggles
auth_enabled: true              # JWT authentication system
tamper_logging_enabled: true    # Cryptographic audit trail
clustering_enabled: true        # Gossip protocol and sync
rate_limiting_enabled: true     # Rate limiting
revision_history_enabled: true  # Automatic versioning

# Anonymous access control (when auth_enabled: true)
allow_anonymous_read: false     # Allow unauthenticated read access to KV endpoints
allow_anonymous_write: false    # Allow unauthenticated write access to KV endpoints

# Backup configuration
backup_enabled: true            # Automated backups
backup_schedule: "0 0 * * *"    # Daily at midnight (cron format)
backup_path: "./backups"        # Backup directory
backup_retention: 7             # Days to keep backups
```

### Custom Configuration

@@ -97,11 +138,20 @@ fetch_delay_ms: 50 # Delay between data fetches (ms)
```bash
PUT /kv/{path}
Content-Type: application/json
Authorization: Bearer <jwt-token>   # Required if auth_enabled && !allow_anonymous_write

# Basic storage
curl -X PUT http://localhost:8080/kv/users/john/profile \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer eyJ..." \
  -d '{"name":"John Doe","age":30,"email":"john@example.com"}'

# Storage with TTL
curl -X PUT http://localhost:8080/kv/cache/session/abc123 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer eyJ..." \
  -d '{"data":{"user_id":"john"}, "ttl":"1h"}'

# Response
{
  "uuid": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
@@ -112,25 +162,62 @@ curl -X PUT http://localhost:8080/kv/users/john/profile \
#### Retrieve Data
```bash
GET /kv/{path}
Authorization: Bearer <jwt-token>   # Required if auth_enabled && !allow_anonymous_read

curl http://localhost:8080/kv/users/john/profile
curl -H "Authorization: Bearer eyJ..." http://localhost:8080/kv/users/john/profile

# Response
# Response (full StoredValue format)
{
  "name": "John Doe",
  "age": 30,
  "email": "john@example.com"
  "uuid": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
  "timestamp": 1672531200000,
  "data": {
    "name": "John Doe",
    "age": 30,
    "email": "john@example.com"
  }
}
```

#### Delete Data
```bash
DELETE /kv/{path}
Authorization: Bearer <jwt-token>   # Always required when auth_enabled (no anonymous delete)

curl -X DELETE http://localhost:8080/kv/users/john/profile
curl -X DELETE -H "Authorization: Bearer eyJ..." http://localhost:8080/kv/users/john/profile
# Returns: 204 No Content
```

### Authentication Operations (`/auth/`)

#### Create User
```bash
POST /auth/users
Content-Type: application/json

curl -X POST http://localhost:8080/auth/users \
  -H "Content-Type: application/json" \
  -d '{"nickname":"john"}'

# Response
{"uuid": "user-abc123"}
```

#### Create API Token
```bash
POST /auth/tokens
Content-Type: application/json

curl -X POST http://localhost:8080/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"user_uuid":"user-abc123", "scopes":["read","write"]}'

# Response
{
  "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "expires_at": 1672617600000
}
```

### Cluster Operations (`/members/`)

#### View Cluster Members
@@ -149,12 +236,6 @@ curl http://localhost:8080/members/
]
```

#### Join Cluster (Internal)
```bash
POST /members/join
# Used internally during the bootstrap process
```

#### Health Check
```bash
GET /health
@@ -169,6 +250,20 @@ curl http://localhost:8080/health
}
```

### Merkle Tree Operations (`/sync/`)

#### Get Merkle Root
```bash
GET /sync/merkle/root
# Used internally for data synchronization
```

#### Range Queries
```bash
GET /kv/_range?start_key=users/&end_key=users/z&limit=100
# Fetch key ranges for synchronization
```

## 🏘️ Cluster Setup

### Single Node (Standalone)
@@ -187,6 +282,8 @@ seed_nodes: [] # Empty = standalone mode
node_id: "node1"
port: 8081
seed_nodes: []   # First node, no seeds needed
auth_enabled: true
clustering_enabled: true
```

#### Node 2 (Joins via Node 1)
@@ -195,6 +292,8 @@ seed_nodes: [] # First node, no seeds needed
node_id: "node2"
port: 8082
seed_nodes: ["127.0.0.1:8081"]   # Points to node1
auth_enabled: true
clustering_enabled: true
```

#### Node 3 (Joins via Node 1 & 2)
@@ -203,6 +302,8 @@ seed_nodes: ["127.0.0.1:8081"] # Points to node1
node_id: "node3"
port: 8083
seed_nodes: ["127.0.0.1:8081", "127.0.0.1:8082"]   # Multiple seeds for reliability
auth_enabled: true
clustering_enabled: true
```

#### Start the Cluster
@@ -215,6 +316,9 @@ seed_nodes: ["127.0.0.1:8081", "127.0.0.1:8082"] # Multiple seeds for reliabili

# Terminal 3 (wait a few seconds)
./kvs node3.yaml

# Verify cluster formation
curl http://localhost:8081/members/   # Should show all 3 nodes
```

## 🔄 How It Works

@@ -224,20 +328,30 @@ seed_nodes: ["127.0.0.1:8081", "127.0.0.1:8082"] # Multiple seeds for reliabili
- Failed nodes are detected via timeout (5 minutes) and removed (10 minutes); see the sketch after this list
- New members are automatically discovered and added to local member lists

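A minimal sketch of that timeout rule. The thresholds mirror the bullet above; the function itself is hypothetical and not the actual `cluster/gossip.go` logic:

```go
package cluster // illustrative placement

import "time"

// memberState classifies a peer by how long since we last heard from it.
func memberState(lastSeen, now time.Time) string {
	switch age := now.Sub(lastSeen); {
	case age > 10*time.Minute:
		return "removed" // dropped from the member list
	case age > 5*time.Minute:
		return "failed" // detected as failed
	default:
		return "healthy"
	}
}
```
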
### Data Synchronization
- **Regular Sync**: Every 5 minutes, nodes compare their latest 15 data items with a random peer
### Merkle Tree Synchronization
- **Merkle Trees**: Each node builds cryptographic trees of its data for efficient comparison
- **Regular Sync**: Every 5 minutes, nodes compare Merkle roots and sync divergent branches
- **Catch-up Sync**: Every 2 minutes when nodes detect they're significantly behind
- **Bootstrap Sync**: New nodes gradually fetch historical data up to 30 days old
- **Efficient Detection**: Only synchronizes actual differences, not entire datasets (see the sketch below)

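A sketch of the root-comparison idea. The `GET /sync/merkle/root` endpoint is documented above, but the JSON field name and hex encoding here are assumptions, not the real wire format:

```go
package cluster // illustrative placement

import (
	"bytes"
	"encoding/hex"
	"encoding/json"
	"net/http"
)

// fetchRemoteRoot asks a peer for its Merkle root via the documented endpoint.
func fetchRemoteRoot(peer string) ([]byte, error) {
	resp, err := http.Get("http://" + peer + "/sync/merkle/root")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var out struct {
		Root string `json:"root"` // assumed field name
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return hex.DecodeString(out.Root) // assumed hex encoding
}

// needsSync reports whether any branch must be walked: equal roots mean
// the datasets are identical and the sync round can stop immediately.
func needsSync(localRoot, remoteRoot []byte) bool {
	return !bytes.Equal(localRoot, remoteRoot)
}
```
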
### Conflict Resolution
### Sophisticated Conflict Resolution
When two nodes have different data for the same key with identical timestamps:

1. **Majority Vote**: Query all healthy cluster members for their version
2. **Tie-Breaker**: If votes are tied, the version from the oldest node (earliest `joined_timestamp`) wins
3. **Automatic Resolution**: Losing nodes automatically fetch and store the winning version
1. **Detection**: Merkle tree comparison identifies conflicting keys
2. **Oldest-Node Rule**: The version from the node with the earliest `joined_timestamp` wins
3. **UUID Tie-Breaker**: If join times are identical, the lexicographically smaller UUID wins
4. **Automatic Resolution**: Losing nodes automatically fetch and store the winning version
5. **Consistency**: All nodes converge to the same data within seconds

### Authentication & Authorization
- **JWT Tokens**: Secure API access with scoped permissions
- **POSIX-Inspired ACLs**: 12-bit permission system (owner/group/others with create/delete/write/read)
- **Resource Metadata**: Each stored item has ownership and permission information
- **Feature Toggle**: Can be completely disabled for simpler deployments

### Operational Modes
- **Normal**: Full read/write capabilities
- **Normal**: Full read/write capabilities with all features
- **Read-Only**: Rejects external writes but accepts internal replication
- **Syncing**: Temporary mode during bootstrap; rejects external writes

@@ -245,57 +359,146 @@ When two nodes have different data for the same key with identical timestamps:

### Running Tests
```bash
# Basic functionality test
# Build and run comprehensive integration tests
go build -o kvs .
./integration_test.sh

# Manual basic functionality test
./kvs &
curl http://localhost:8080/health
pkill kvs

# Cluster test with provided configs
./kvs node1.yaml &
./kvs node2.yaml &
./kvs node3.yaml &
# Manual cluster test (requires creating configs)
echo 'node_id: "test1"
port: 8081
seed_nodes: []
auth_enabled: false' > test1.yaml

# Test data replication
echo 'node_id: "test2"
port: 8082
seed_nodes: ["127.0.0.1:8081"]
auth_enabled: false' > test2.yaml

./kvs test1.yaml &
./kvs test2.yaml &

# Test data replication (wait for cluster formation)
sleep 10
curl -X PUT http://localhost:8081/kv/test/data \
  -H "Content-Type: application/json" \
  -d '{"message":"hello world"}'

# Wait 30+ seconds for sync, then check other nodes
# Wait for Merkle sync, then check replication
sleep 30
curl http://localhost:8082/kv/test/data
curl http://localhost:8083/kv/test/data

# Cleanup
pkill kvs
rm test1.yaml test2.yaml
```

### Conflict Resolution Testing
```bash
# Create conflicting data scenario
rm -rf data1 data2
mkdir data1 data2
go run test_conflict.go data1 data2
# Create conflicting data scenario using utility
go run test_conflict.go /tmp/conflict1 /tmp/conflict2

# Create configs for conflict test
echo 'node_id: "conflict1"
port: 9111
data_dir: "/tmp/conflict1"
seed_nodes: []
auth_enabled: false
log_level: "debug"' > conflict1.yaml

echo 'node_id: "conflict2"
port: 9112
data_dir: "/tmp/conflict2"
seed_nodes: ["127.0.0.1:9111"]
auth_enabled: false
log_level: "debug"' > conflict2.yaml

# Start nodes with conflicting data
./kvs node1.yaml &
./kvs node2.yaml &
./kvs conflict1.yaml &
./kvs conflict2.yaml &

# Watch logs for conflict resolution
# Both nodes will converge to same data within ~30 seconds
# Both nodes will converge within ~10-30 seconds
# Check final state
sleep 30
curl http://localhost:9111/kv/test/conflict/data
curl http://localhost:9112/kv/test/conflict/data

pkill kvs
rm conflict1.yaml conflict2.yaml
```

### Code Quality
```bash
# Format and lint
go fmt ./...
go vet ./...

# Dependency management
go mod tidy
go mod verify

# Build verification
go build .
```

### Project Structure
```
kvs/
├── main.go               # Main application with all functionality
├── config.yaml           # Default configuration (auto-generated)
├── test_conflict.go      # Conflict resolution testing utility
├── node1.yaml            # Example cluster node config
├── node2.yaml            # Example cluster node config
├── node3.yaml            # Example cluster node config
├── go.mod                # Go module dependencies
├── go.sum                # Go module checksums
└── README.md             # This documentation
├── main.go               # Main application entry point
├── config.yaml           # Default configuration (auto-generated)
├── integration_test.sh   # Comprehensive test suite
├── test_conflict.go      # Conflict resolution testing utility
├── CLAUDE.md             # Development guidance for Claude Code
├── go.mod                # Go module dependencies
├── go.sum                # Go module checksums
├── README.md             # This documentation
│
├── auth/                 # Authentication & authorization
│   ├── auth.go           # Main auth logic
│   ├── jwt.go            # JWT token management
│   ├── middleware.go     # HTTP middleware
│   ├── permissions.go    # POSIX-inspired ACL system
│   └── storage.go        # Auth data storage
│
├── cluster/              # Distributed systems components
│   ├── bootstrap.go      # New node integration
│   ├── gossip.go         # Membership protocol
│   ├── merkle.go         # Merkle tree implementation
│   └── sync.go           # Data synchronization & conflict resolution
│
├── config/               # Configuration management
│   └── config.go         # Config loading & defaults
│
├── features/             # Utility features
│   ├── auth.go           # Auth utilities
│   ├── backup.go         # Backup system
│   ├── features.go       # Feature toggles
│   ├── ratelimit.go      # Rate limiting
│   ├── revision.go       # Revision history
│   ├── tamperlog.go      # Tamper-evident logging
│   └── validation.go     # TTL parsing
│
├── server/               # HTTP server & API
│   ├── handlers.go       # Request handlers
│   ├── lifecycle.go      # Server lifecycle
│   ├── routes.go         # Route definitions
│   └── server.go         # Server setup
│
├── storage/              # Data storage abstraction
│   ├── compression.go    # ZSTD compression
│   ├── revision.go       # Revision history
│   └── storage.go        # BadgerDB interface
│
├── types/                # Shared type definitions
│   └── types.go          # All data structures
│
└── utils/                # Utilities
    └── hash.go           # Cryptographic hashing
```

### Key Data Structures
@@ -318,6 +521,7 @@ type StoredValue struct {

| Setting | Description | Default | Notes |
|---------|-------------|---------|-------|
| **Core Settings** | | | |
| `node_id` | Unique identifier for this node | hostname | Must be unique across cluster |
| `bind_address` | IP address to bind HTTP server | "127.0.0.1" | Use 0.0.0.0 for external access |
| `port` | HTTP port for API and cluster communication | 8080 | Must be accessible to peers |
@@ -325,8 +529,20 @@ type StoredValue struct {
| `seed_nodes` | List of initial cluster nodes | [] | Empty = standalone mode |
| `read_only` | Enable read-only mode | false | Accepts replication, rejects client writes |
| `log_level` | Logging verbosity | "info" | debug/info/warn/error |
| **Cluster Timing** | | | |
| `gossip_interval_min/max` | Gossip frequency range | 60-120 sec | Randomized interval |
| `sync_interval` | Regular sync frequency | 300 sec | How often to sync with peers |
| `sync_interval` | Regular Merkle sync frequency | 300 sec | How often to sync with peers |
| `catchup_interval` | Catch-up sync frequency | 120 sec | Faster sync when behind |
| `bootstrap_max_age_hours` | Max historical data to sync | 720 hours | 30 days default |
| **Feature Toggles** | | | |
| `auth_enabled` | JWT authentication system | true | Complete auth/authz system |
| `allow_anonymous_read` | Allow unauthenticated read access | false | When auth_enabled, controls KV GET endpoints |
| `allow_anonymous_write` | Allow unauthenticated write access | false | When auth_enabled, controls KV PUT endpoints |
| `clustering_enabled` | Gossip protocol and sync | true | Distributed mode |
| `compression_enabled` | ZSTD compression | true | Reduces storage size |
| `rate_limiting_enabled` | Rate limiting | true | Per-client limits |
| `tamper_logging_enabled` | Cryptographic audit trail | true | Security logging |
| `revision_history_enabled` | Automatic versioning | true | Data history tracking |
| `catchup_interval` | Catch-up sync frequency | 120 sec | Faster sync when behind |
| `bootstrap_max_age_hours` | Max historical data to sync | 720 hours | 30 days default |
| `throttle_delay_ms` | Delay between sync requests | 100 ms | Prevents overwhelming peers |
@@ -346,18 +562,20 @@ type StoredValue struct {
- IPv4 private networks supported (IPv6 not tested)

### Limitations
- No authentication/authorization (planned for future releases)
- No encryption in transit (use a reverse proxy for TLS)
- No cross-key transactions
- No cross-key transactions or ACID guarantees
- No complex queries (key-based lookups only)
- No data compression (planned for future releases)
- No automatic data sharding (single keyspace per cluster)
- No multi-datacenter replication

### Performance Characteristics
- **Read Latency**: ~1ms (local BadgerDB lookup)
- **Write Latency**: ~5ms (local write + timestamp indexing)
- **Replication Lag**: 30 seconds - 5 minutes depending on sync cycles
- **Memory Usage**: Minimal (BadgerDB handles caching efficiently)
- **Disk Usage**: Raw JSON + metadata overhead (~20-30%)
- **Write Latency**: ~5ms (local write + indexing + optional compression)
- **Replication Lag**: 10-30 seconds with Merkle tree sync
- **Memory Usage**: Minimal (BadgerDB + Merkle tree caching)
- **Disk Usage**: Raw JSON + metadata + optional compression (10-50% savings)
- **Conflict Resolution**: Sub-second convergence time
- **Cluster Formation**: ~10-20 seconds for gossip stabilization

## 🛡️ Production Considerations

auth/auth.go (new file, 264 lines)
@@ -0,0 +1,264 @@
package auth

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
	"time"

	badger "github.com/dgraph-io/badger/v4"
	"github.com/sirupsen/logrus"

	"kvs/types"
	"kvs/utils"
)

// AuthContext holds authentication information for a request
type AuthContext struct {
	UserUUID string   `json:"user_uuid"`
	Scopes   []string `json:"scopes"`
	Groups   []string `json:"groups"`
}

// AuthService handles authentication operations
type AuthService struct {
	db     *badger.DB
	logger *logrus.Logger
	config *types.Config
}

// NewAuthService creates a new authentication service
func NewAuthService(db *badger.DB, logger *logrus.Logger, config *types.Config) *AuthService {
	return &AuthService{
		db:     db,
		logger: logger,
		config: config,
	}
}

// StoreAPIToken stores an API token in BadgerDB with TTL
func (s *AuthService) StoreAPIToken(tokenString string, userUUID string, scopes []string, expiresAt int64) error {
	tokenHash := utils.HashToken(tokenString)

	apiToken := types.APIToken{
		TokenHash: tokenHash,
		UserUUID:  userUUID,
		Scopes:    scopes,
		IssuedAt:  time.Now().Unix(),
		ExpiresAt: expiresAt,
	}

	tokenData, err := json.Marshal(apiToken)
	if err != nil {
		return err
	}

	return s.db.Update(func(txn *badger.Txn) error {
		entry := badger.NewEntry([]byte(TokenStorageKey(tokenHash)), tokenData)

		// Set TTL to the token expiration time
		ttl := time.Until(time.Unix(expiresAt, 0))
		if ttl > 0 {
			entry = entry.WithTTL(ttl)
		}

		return txn.SetEntry(entry)
	})
}

// GetAPIToken retrieves an API token from BadgerDB by hash
func (s *AuthService) GetAPIToken(tokenHash string) (*types.APIToken, error) {
	var apiToken types.APIToken

	err := s.db.View(func(txn *badger.Txn) error {
		item, err := txn.Get([]byte(TokenStorageKey(tokenHash)))
		if err != nil {
			return err
		}

		return item.Value(func(val []byte) error {
			return json.Unmarshal(val, &apiToken)
		})
	})

	if err != nil {
		return nil, err
	}

	return &apiToken, nil
}

// ExtractTokenFromHeader extracts the Bearer token from the Authorization header
func ExtractTokenFromHeader(r *http.Request) (string, error) {
	authHeader := r.Header.Get("Authorization")
	if authHeader == "" {
		return "", fmt.Errorf("missing authorization header")
	}

	parts := strings.Split(authHeader, " ")
	if len(parts) != 2 || strings.ToLower(parts[0]) != "bearer" {
		return "", fmt.Errorf("invalid authorization header format")
	}

	return parts[1], nil
}

// GetUserGroups retrieves all groups that a user belongs to
func (s *AuthService) GetUserGroups(userUUID string) ([]string, error) {
	var user types.User
	err := s.db.View(func(txn *badger.Txn) error {
		item, err := txn.Get([]byte(UserStorageKey(userUUID)))
		if err != nil {
			return err
		}

		return item.Value(func(val []byte) error {
			return json.Unmarshal(val, &user)
		})
	})

	if err != nil {
		return nil, err
	}

	return user.Groups, nil
}

// AuthenticateRequest validates the JWT token and returns authentication context
func (s *AuthService) AuthenticateRequest(r *http.Request) (*AuthContext, error) {
	// Extract token from header
	tokenString, err := ExtractTokenFromHeader(r)
	if err != nil {
		return nil, err
	}

	// Validate JWT token
	claims, err := ValidateJWT(tokenString)
	if err != nil {
		return nil, fmt.Errorf("invalid token: %v", err)
	}

	// Verify token exists in our database (not revoked)
	tokenHash := utils.HashToken(tokenString)
	_, err = s.GetAPIToken(tokenHash)
	if err == badger.ErrKeyNotFound {
		return nil, fmt.Errorf("token not found or revoked")
	}
	if err != nil {
		return nil, fmt.Errorf("failed to verify token: %v", err)
	}

	// Get user's groups
	groups, err := s.GetUserGroups(claims.UserUUID)
	if err != nil {
		s.logger.WithError(err).WithField("user_uuid", claims.UserUUID).Warn("Failed to get user groups")
		groups = []string{} // Continue with empty groups on error
	}

	return &AuthContext{
		UserUUID: claims.UserUUID,
		Scopes:   claims.Scopes,
		Groups:   groups,
	}, nil
}

// CheckResourcePermission checks if a user has permission to perform an operation on a resource
func (s *AuthService) CheckResourcePermission(authCtx *AuthContext, resourceKey string, operation string) bool {
	// Get resource metadata
	var metadata types.ResourceMetadata
	err := s.db.View(func(txn *badger.Txn) error {
		item, err := txn.Get([]byte(ResourceMetadataKey(resourceKey)))
		if err != nil {
			return err
		}

		return item.Value(func(val []byte) error {
			return json.Unmarshal(val, &metadata)
		})
	})

	// If no metadata exists, use default permissions
	if err == badger.ErrKeyNotFound {
		metadata = types.ResourceMetadata{
			OwnerUUID:   authCtx.UserUUID, // Treat requester as owner for new resources
			GroupUUID:   "",
			Permissions: types.DefaultPermissions,
		}
	} else if err != nil {
		s.logger.WithError(err).WithField("resource_key", resourceKey).Warn("Failed to get resource metadata")
		return false
	}

	// Check user relationship to resource
	isOwner, isGroupMember := CheckUserResourceRelationship(authCtx.UserUUID, &metadata, authCtx.Groups)

	// Check permission
	return CheckPermission(metadata.Permissions, operation, isOwner, isGroupMember)
}

// GetResourceMetadata retrieves metadata for a resource
func (s *AuthService) GetResourceMetadata(resourceKey string) (*types.ResourceMetadata, error) {
	var metadata types.ResourceMetadata

	err := s.db.View(func(txn *badger.Txn) error {
		item, err := txn.Get([]byte(ResourceMetadataKey(resourceKey)))
		if err != nil {
			return err
		}

		return item.Value(func(val []byte) error {
			return json.Unmarshal(val, &metadata)
		})
	})

	if err != nil {
		return nil, err
	}

	return &metadata, nil
}

// SetResourceMetadata stores metadata for a resource
func (s *AuthService) SetResourceMetadata(resourceKey string, metadata *types.ResourceMetadata) error {
	metadataBytes, err := json.Marshal(metadata)
	if err != nil {
		return fmt.Errorf("failed to marshal metadata: %v", err)
	}

	return s.db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte(ResourceMetadataKey(resourceKey)), metadataBytes)
	})
}

// GetAuthContext retrieves auth context from request context
func GetAuthContext(ctx context.Context) *AuthContext {
	if authCtx, ok := ctx.Value("auth").(*AuthContext); ok {
		return authCtx
	}
	return nil
}

// HasUsers checks if any users exist in the database
func (s *AuthService) HasUsers() (bool, error) {
	var hasUsers bool

	err := s.db.View(func(txn *badger.Txn) error {
		opts := badger.DefaultIteratorOptions
		opts.PrefetchValues = false // We only need to check if keys exist
		iterator := txn.NewIterator(opts)
		defer iterator.Close()

		// Look for any key starting with "user:"
		prefix := []byte("user:")
		for iterator.Seek(prefix); iterator.ValidForPrefix(prefix); iterator.Next() {
			hasUsers = true
			return nil // Found at least one user, can exit early
		}

		return nil
	})

	return hasUsers, err
}
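
For orientation, a hedged example of calling the service from a handler. `NewAuthService` and `AuthenticateRequest` are the functions above; the handler itself is hypothetical:

```go
package server // illustrative placement

import (
	"fmt"
	"net/http"

	"kvs/auth"
)

// protectedHandler shows the intended call pattern only.
func protectedHandler(svc *auth.AuthService) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		authCtx, err := svc.AuthenticateRequest(r)
		if err != nil {
			http.Error(w, "Unauthorized", http.StatusUnauthorized)
			return
		}
		// UserUUID, Scopes, and Groups are now available for authorization.
		fmt.Fprintf(w, "hello, %s", authCtx.UserUUID)
	}
}
```
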
auth/cluster.go (new file, 77 lines)
@@ -0,0 +1,77 @@
package auth

import (
	"net/http"

	"github.com/sirupsen/logrus"
)

// ClusterAuthService handles authentication for inter-cluster communication
type ClusterAuthService struct {
	clusterSecret string
	logger        *logrus.Logger
}

// NewClusterAuthService creates a new cluster authentication service
func NewClusterAuthService(clusterSecret string, logger *logrus.Logger) *ClusterAuthService {
	return &ClusterAuthService{
		clusterSecret: clusterSecret,
		logger:        logger,
	}
}

// Middleware validates cluster authentication headers
func (s *ClusterAuthService) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Extract authentication headers
		clusterSecret := r.Header.Get("X-Cluster-Secret")
		nodeID := r.Header.Get("X-Node-ID")

		// Log authentication attempt
		s.logger.WithFields(logrus.Fields{
			"node_id":     nodeID,
			"remote_addr": r.RemoteAddr,
			"path":        r.URL.Path,
			"method":      r.Method,
		}).Debug("Cluster authentication attempt")

		// Validate cluster secret
		if clusterSecret == "" {
			s.logger.WithFields(logrus.Fields{
				"node_id":     nodeID,
				"remote_addr": r.RemoteAddr,
				"path":        r.URL.Path,
			}).Warn("Missing X-Cluster-Secret header")
			http.Error(w, "Unauthorized: Missing cluster secret", http.StatusUnauthorized)
			return
		}

		if clusterSecret != s.clusterSecret {
			s.logger.WithFields(logrus.Fields{
				"node_id":     nodeID,
				"remote_addr": r.RemoteAddr,
				"path":        r.URL.Path,
			}).Warn("Invalid cluster secret")
			http.Error(w, "Unauthorized: Invalid cluster secret", http.StatusUnauthorized)
			return
		}

		// Validate node ID is present
		if nodeID == "" {
			s.logger.WithFields(logrus.Fields{
				"remote_addr": r.RemoteAddr,
				"path":        r.URL.Path,
			}).Warn("Missing X-Node-ID header")
			http.Error(w, "Unauthorized: Missing node ID", http.StatusUnauthorized)
			return
		}

		// Authentication successful
		s.logger.WithFields(logrus.Fields{
			"node_id": nodeID,
			"path":    r.URL.Path,
		}).Debug("Cluster authentication successful")

		next.ServeHTTP(w, r)
	})
}
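
A hedged sketch of wiring this middleware into a mux. `NewClusterAuthService` and `Middleware` are the functions above; `cfg.ClusterSecret` and the `handleJoin` handler are assumptions:

```go
package server // illustrative placement

import (
	"net/http"

	"github.com/sirupsen/logrus"

	"kvs/auth"
	"kvs/types"
)

// newClusterMux is hypothetical; cfg.ClusterSecret is an assumed field.
func newClusterMux(cfg *types.Config, logger *logrus.Logger, handleJoin http.HandlerFunc) *http.ServeMux {
	clusterAuth := auth.NewClusterAuthService(cfg.ClusterSecret, logger)
	mux := http.NewServeMux()
	mux.Handle("/members/join", clusterAuth.Middleware(handleJoin))
	return mux
}
```
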
auth/jwt.go (new file, 67 lines)
@@ -0,0 +1,67 @@
package auth

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v4"
)

// JWT signing key (should be configurable in production)
var jwtSigningKey = []byte("your-secret-signing-key-change-this-in-production")

// JWTClaims represents the custom claims for our JWT tokens
type JWTClaims struct {
	UserUUID string   `json:"user_uuid"`
	Scopes   []string `json:"scopes"`
	jwt.RegisteredClaims
}

// GenerateJWT creates a new JWT token for a user with specified scopes
func GenerateJWT(userUUID string, scopes []string, expirationHours int) (string, int64, error) {
	if expirationHours <= 0 {
		expirationHours = 1 // Default to 1 hour
	}

	now := time.Now()
	expiresAt := now.Add(time.Duration(expirationHours) * time.Hour)

	claims := JWTClaims{
		UserUUID: userUUID,
		Scopes:   scopes,
		RegisteredClaims: jwt.RegisteredClaims{
			IssuedAt:  jwt.NewNumericDate(now),
			ExpiresAt: jwt.NewNumericDate(expiresAt),
			Issuer:    "kvs-server",
		},
	}

	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
	tokenString, err := token.SignedString(jwtSigningKey)
	if err != nil {
		return "", 0, err
	}

	return tokenString, expiresAt.Unix(), nil
}

// ValidateJWT validates a JWT token and returns the claims if valid
func ValidateJWT(tokenString string) (*JWTClaims, error) {
	token, err := jwt.ParseWithClaims(tokenString, &JWTClaims{}, func(token *jwt.Token) (interface{}, error) {
		// Validate signing method
		if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
		}
		return jwtSigningKey, nil
	})

	if err != nil {
		return nil, err
	}

	if claims, ok := token.Claims.(*JWTClaims); ok && token.Valid {
		return claims, nil
	}

	return nil, fmt.Errorf("invalid token")
}
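
A small round-trip example using the two functions above; the user UUID and scope names are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"kvs/auth"
)

func main() {
	// Generate a 24-hour token, then validate it.
	tokenString, expiresAt, err := auth.GenerateJWT("user-abc123", []string{"read", "write"}, 24)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires at (unix):", expiresAt)

	claims, err := auth.ValidateJWT(tokenString)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("user:", claims.UserUUID, "scopes:", claims.Scopes)
}
```
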
auth/middleware.go (new file, 158 lines)
@@ -0,0 +1,158 @@
package auth

import (
	"context"
	"net/http"
	"strconv"

	"github.com/sirupsen/logrus"

	"kvs/types"
)

// RateLimitService handles rate limiting operations
type RateLimitService struct {
	authService *AuthService
	config      *types.Config
}

// NewRateLimitService creates a new rate limiting service
func NewRateLimitService(authService *AuthService, config *types.Config) *RateLimitService {
	return &RateLimitService{
		authService: authService,
		config:      config,
	}
}

// Middleware creates authentication and authorization middleware
func (s *AuthService) Middleware(requiredScopes []string, resourceKeyExtractor func(*http.Request) string, operation string) func(http.HandlerFunc) http.HandlerFunc {
	return func(next http.HandlerFunc) http.HandlerFunc {
		return func(w http.ResponseWriter, r *http.Request) {
			// Skip authentication if disabled
			if !s.isAuthEnabled() {
				next(w, r)
				return
			}

			// Authenticate request
			authCtx, err := s.AuthenticateRequest(r)
			if err != nil {
				s.logger.WithError(err).WithField("path", r.URL.Path).Info("Authentication failed")
				http.Error(w, "Unauthorized", http.StatusUnauthorized)
				return
			}

			// Check required scopes
			if len(requiredScopes) > 0 {
				hasRequiredScope := false
				for _, required := range requiredScopes {
					for _, scope := range authCtx.Scopes {
						if scope == required {
							hasRequiredScope = true
							break
						}
					}
					if hasRequiredScope {
						break
					}
				}

				if !hasRequiredScope {
					s.logger.WithFields(logrus.Fields{
						"user_uuid":       authCtx.UserUUID,
						"user_scopes":     authCtx.Scopes,
						"required_scopes": requiredScopes,
					}).Info("Insufficient scopes")
					http.Error(w, "Forbidden", http.StatusForbidden)
					return
				}
			}

			// Check resource-level permissions if applicable
			if resourceKeyExtractor != nil && operation != "" {
				resourceKey := resourceKeyExtractor(r)
				if resourceKey != "" {
					hasPermission := s.CheckResourcePermission(authCtx, resourceKey, operation)
					if !hasPermission {
						s.logger.WithFields(logrus.Fields{
							"user_uuid":    authCtx.UserUUID,
							"resource_key": resourceKey,
							"operation":    operation,
						}).Info("Permission denied")
						http.Error(w, "Forbidden", http.StatusForbidden)
						return
					}
				}
			}

			// Store auth context in request context for use in handlers
			ctx := context.WithValue(r.Context(), "auth", authCtx)
			r = r.WithContext(ctx)

			next(w, r)
		}
	}
}

// RateLimitMiddleware enforces rate limiting
func (s *RateLimitService) RateLimitMiddleware(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Skip rate limiting if disabled
		if !s.config.RateLimitingEnabled {
			next(w, r)
			return
		}

		// Extract auth context to get user UUID
		authCtx := GetAuthContext(r.Context())
		if authCtx == nil {
			// No auth context, skip rate limiting (unauthenticated requests)
			next(w, r)
			return
		}

		// Check rate limit
		allowed, err := s.checkRateLimit(authCtx.UserUUID)
		if err != nil {
			s.authService.logger.WithError(err).WithField("user_uuid", authCtx.UserUUID).Error("Failed to check rate limit")
			http.Error(w, "Internal Server Error", http.StatusInternalServerError)
			return
		}

		if !allowed {
			s.authService.logger.WithFields(logrus.Fields{
				"user_uuid": authCtx.UserUUID,
				"limit":     s.config.RateLimitRequests,
				"window":    s.config.RateLimitWindow,
			}).Info("Rate limit exceeded")

			// Set rate limit headers
			w.Header().Set("X-Rate-Limit-Limit", strconv.Itoa(s.config.RateLimitRequests))
			w.Header().Set("X-Rate-Limit-Window", s.config.RateLimitWindow)

			http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
			return
		}

		next(w, r)
	}
}

// isAuthEnabled checks if authentication is enabled from config
func (s *AuthService) isAuthEnabled() bool {
	if s.config != nil {
		return s.config.AuthEnabled
	}
	return true // Default to enabled if no config
}

// Helper method to check rate limits (simplified version)
func (s *RateLimitService) checkRateLimit(userUUID string) (bool, error) {
	if s.config.RateLimitRequests <= 0 {
		return true, nil // Rate limiting disabled
	}

	// Simplified rate limiting - in practice this would use the full implementation
	// that was in main.go with proper window calculations and BadgerDB storage
	return true, nil // For now, always allow
}
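
A hedged example of composing the two middlewares around a KV handler. The `Middleware` and `RateLimitMiddleware` signatures are the ones above; `authSvc`, `rateLimiter`, `putHandler`, and `mux` stand in for values created elsewhere:

```go
// Hypothetical wiring: scope check + resource permission + rate limiting.
extractKVPath := func(r *http.Request) string {
	return strings.TrimPrefix(r.URL.Path, "/kv/")
}
protected := authSvc.Middleware([]string{"write"}, extractKVPath, "write")(
	rateLimiter.RateLimitMiddleware(putHandler),
)
mux.HandleFunc("/kv/", protected)
```

Note the ordering: the auth middleware runs first and stores the auth context that the rate limiter later reads from the request context.
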
auth/permissions.go (new file, 65 lines)
@@ -0,0 +1,65 @@
package auth

import (
	"kvs/types"
)

// CheckPermission checks if a user has permission to perform an operation on a resource
func CheckPermission(permissions int, operation string, isOwner, isGroupMember bool) bool {
	switch operation {
	case "create":
		if isOwner {
			return (permissions & types.PermOwnerCreate) != 0
		}
		if isGroupMember {
			return (permissions & types.PermGroupCreate) != 0
		}
		return (permissions & types.PermOthersCreate) != 0

	case "delete":
		if isOwner {
			return (permissions & types.PermOwnerDelete) != 0
		}
		if isGroupMember {
			return (permissions & types.PermGroupDelete) != 0
		}
		return (permissions & types.PermOthersDelete) != 0

	case "write":
		if isOwner {
			return (permissions & types.PermOwnerWrite) != 0
		}
		if isGroupMember {
			return (permissions & types.PermGroupWrite) != 0
		}
		return (permissions & types.PermOthersWrite) != 0

	case "read":
		if isOwner {
			return (permissions & types.PermOwnerRead) != 0
		}
		if isGroupMember {
			return (permissions & types.PermGroupRead) != 0
		}
		return (permissions & types.PermOthersRead) != 0

	default:
		return false
	}
}

// CheckUserResourceRelationship determines user relationship to resource
func CheckUserResourceRelationship(userUUID string, metadata *types.ResourceMetadata, userGroups []string) (isOwner, isGroupMember bool) {
	isOwner = (userUUID == metadata.OwnerUUID)

	if metadata.GroupUUID != "" {
		for _, groupUUID := range userGroups {
			if groupUUID == metadata.GroupUUID {
				isGroupMember = true
				break
			}
		}
	}

	return isOwner, isGroupMember
}
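
A brief usage snippet combining the two functions above; `meta` and `authCtx` stand in for values obtained from `GetResourceMetadata` and `AuthenticateRequest`:

```go
// canRead reports whether the requester may read the resource.
func canRead(authCtx *auth.AuthContext, meta *types.ResourceMetadata) bool {
	isOwner, isGroupMember := auth.CheckUserResourceRelationship(authCtx.UserUUID, meta, authCtx.Groups)
	return auth.CheckPermission(meta.Permissions, "read", isOwner, isGroupMember)
}
```
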
auth/storage.go (new file, 19 lines)
@@ -0,0 +1,19 @@
package auth

// Storage key generation utilities for authentication data

func UserStorageKey(userUUID string) string {
	return "user:" + userUUID
}

func GroupStorageKey(groupUUID string) string {
	return "group:" + groupUUID
}

func TokenStorageKey(tokenHash string) string {
	return "token:" + tokenHash
}

func ResourceMetadataKey(resourceKey string) string {
	return resourceKey + ":metadata"
}
cluster/bootstrap.go (new file, 154 lines)
@@ -0,0 +1,154 @@
|
||||
package cluster
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"time"
|
||||
|
||||
"github.com/sirupsen/logrus"
|
||||
|
||||
"kvs/types"
|
||||
)
|
||||
|
||||
// BootstrapService handles cluster joining and initial synchronization
|
||||
type BootstrapService struct {
|
||||
config *types.Config
|
||||
gossipService *GossipService
|
||||
syncService *SyncService
|
||||
logger *logrus.Logger
|
||||
setMode func(string) // Callback to set server mode
|
||||
}
|
||||
|
||||
// NewBootstrapService creates a new bootstrap service
|
||||
func NewBootstrapService(config *types.Config, gossipService *GossipService, syncService *SyncService, logger *logrus.Logger, setMode func(string)) *BootstrapService {
|
||||
return &BootstrapService{
|
||||
config: config,
|
||||
gossipService: gossipService,
|
||||
syncService: syncService,
|
||||
logger: logger,
|
||||
setMode: setMode,
|
||||
}
|
||||
}
|
||||
|
||||
// Bootstrap joins cluster using seed nodes
|
||||
func (s *BootstrapService) Bootstrap() {
|
||||
if len(s.config.SeedNodes) == 0 {
|
||||
s.logger.Info("No seed nodes configured, running as standalone")
|
||||
return
|
||||
}
|
||||
|
||||
s.logger.Info("Starting bootstrap process")
|
||||
s.setMode("syncing")
|
||||
|
||||
// Try to join cluster via each seed node
|
||||
joined := false
|
||||
for _, seedAddr := range s.config.SeedNodes {
|
||||
if s.attemptJoin(seedAddr) {
|
||||
joined = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if !joined {
|
||||
s.logger.Warn("Failed to join cluster via seed nodes, running as standalone")
|
||||
s.setMode("normal")
|
||||
return
|
||||
}
|
||||
|
||||
// Wait a bit for member discovery
|
||||
time.Sleep(2 * time.Second)
|
||||
|
||||
// Perform gradual sync (now Merkle-based)
|
||||
s.performGradualSync()
|
||||
|
||||
// Switch to normal mode
|
||||
s.setMode("normal")
|
||||
s.logger.Info("Bootstrap completed, entering normal mode")
|
||||
}
|
||||
|
||||
// attemptJoin attempts to join cluster via a seed node
|
||||
func (s *BootstrapService) attemptJoin(seedAddr string) bool {
|
||||
joinReq := types.JoinRequest{
|
||||
ID: s.config.NodeID,
|
||||
Address: fmt.Sprintf("%s:%d", s.config.BindAddress, s.config.Port),
|
||||
JoinedTimestamp: time.Now().UnixMilli(),
|
||||
}
|
||||
|
||||
jsonData, err := json.Marshal(joinReq)
|
||||
if err != nil {
|
||||
s.logger.WithError(err).Error("Failed to marshal join request")
|
||||
return false
|
||||
}
|
||||
|
||||
client := NewAuthenticatedHTTPClient(s.config, 10*time.Second)
|
||||
protocol := GetProtocol(s.config)
|
||||
url := fmt.Sprintf("%s://%s/members/join", protocol, seedAddr)
|
||||
|
||||
req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
|
||||
if err != nil {
|
||||
s.logger.WithError(err).Error("Failed to create join request")
|
||||
return false
|
||||
}
|
||||
req.Header.Set("Content-Type", "application/json")
|
||||
AddClusterAuthHeaders(req, s.config)
|
||||
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
s.logger.WithFields(logrus.Fields{
|
||||
"seed": seedAddr,
|
||||
"error": err.Error(),
|
||||
}).Warn("Failed to contact seed node")
|
||||
return false
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
s.logger.WithFields(logrus.Fields{
|
||||
"seed": seedAddr,
|
||||
"status": resp.StatusCode,
|
||||
}).Warn("Seed node rejected join request")
|
||||
return false
|
||||
}
|
||||
|
||||
// Process member list response
|
||||
var memberList []types.Member
|
||||
if err := json.NewDecoder(resp.Body).Decode(&memberList); err != nil {
|
||||
s.logger.WithError(err).Error("Failed to decode member list from seed")
|
||||
return false
|
||||
}
|
||||
|
||||
// Add all members to our local list
|
||||
for _, member := range memberList {
|
||||
if member.ID != s.config.NodeID {
|
||||
s.gossipService.AddMember(&member)
|
||||
}
|
||||
}
|
||||
|
||||
s.logger.WithFields(logrus.Fields{
|
||||
"seed": seedAddr,
|
||||
"member_count": len(memberList),
|
||||
}).Info("Successfully joined cluster")
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// performGradualSync performs gradual sync (Merkle-based version)
|
||||
func (s *BootstrapService) performGradualSync() {
|
||||
s.logger.Info("Starting gradual sync (Merkle-based)")
|
||||
|
||||
members := s.gossipService.GetHealthyMembers()
|
||||
if len(members) == 0 {
|
||||
s.logger.Info("No healthy members for gradual sync")
|
||||
return
|
||||
}
|
||||
|
||||
// For now, just do a few rounds of Merkle sync
|
||||
for i := 0; i < 3; i++ {
|
||||
s.syncService.performMerkleSync()
|
||||
time.Sleep(time.Duration(s.config.ThrottleDelayMs) * time.Millisecond)
|
||||
}
|
||||
|
||||
s.logger.Info("Gradual sync completed")
|
||||
}
|
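For orientation, here is a minimal sketch of how a server binary might wire these cluster services together. The actual wiring lives in the `server/` package and may differ in detail; the BadgerDB options and the mode callback below are illustrative assumptions, not code from this commit.

```go
package main

import (
	badger "github.com/dgraph-io/badger/v4"
	"github.com/sirupsen/logrus"

	"kvs/cluster"
	"kvs/config"
)

func main() {
	logger := logrus.New()

	cfg, err := config.Load("config.yaml")
	if err != nil {
		logger.Fatal(err)
	}

	db, err := badger.Open(badger.DefaultOptions(cfg.DataDir))
	if err != nil {
		logger.Fatal(err)
	}
	defer db.Close()

	gossip := cluster.NewGossipService(cfg, logger)
	merkle := cluster.NewMerkleService(db, logger)
	syncSvc := cluster.NewSyncService(db, cfg, gossip, merkle, logger)

	boot := cluster.NewBootstrapService(cfg, gossip, syncSvc, logger,
		func(mode string) { logger.WithField("mode", mode).Info("mode change") })

	if err := syncSvc.InitializeMerkleTree(); err != nil {
		logger.Fatal(err)
	}
	gossip.Start()
	syncSvc.Start()
	boot.Bootstrap() // joins via seed nodes, syncs, then enters normal mode
}
```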
312
cluster/gossip.go
Normal file
@@ -0,0 +1,312 @@
package cluster

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"math/rand"
	"net/http"
	"sync"
	"time"

	"github.com/sirupsen/logrus"

	"kvs/types"
)

// GossipService handles gossip protocol operations
type GossipService struct {
	config    *types.Config
	members   map[string]*types.Member
	membersMu sync.RWMutex
	logger    *logrus.Logger
	ctx       context.Context
	cancel    context.CancelFunc
	wg        sync.WaitGroup
}

// NewGossipService creates a new gossip service
func NewGossipService(config *types.Config, logger *logrus.Logger) *GossipService {
	ctx, cancel := context.WithCancel(context.Background())
	return &GossipService{
		config:  config,
		members: make(map[string]*types.Member),
		logger:  logger,
		ctx:     ctx,
		cancel:  cancel,
	}
}

// Start begins the gossip routine
func (s *GossipService) Start() {
	if !s.config.ClusteringEnabled {
		s.logger.Info("Clustering disabled, skipping gossip routine")
		return
	}

	s.wg.Add(1)
	go s.gossipRoutine()
}

// Stop terminates the gossip service
func (s *GossipService) Stop() {
	s.cancel()
	s.wg.Wait()
}

// AddMember adds a member to the gossip member list
func (s *GossipService) AddMember(member *types.Member) {
	s.membersMu.Lock()
	defer s.membersMu.Unlock()
	s.members[member.ID] = member
	s.logger.WithFields(logrus.Fields{
		"node_id": member.ID,
		"address": member.Address,
	}).Info("Member added")
}

// RemoveMember removes a member from the gossip member list
func (s *GossipService) RemoveMember(nodeID string) {
	s.membersMu.Lock()
	defer s.membersMu.Unlock()
	if member, exists := s.members[nodeID]; exists {
		delete(s.members, nodeID)
		s.logger.WithFields(logrus.Fields{
			"node_id": member.ID,
			"address": member.Address,
		}).Info("Member removed")
	}
}

// GetMembers returns a snapshot slice of pointers to all known members
func (s *GossipService) GetMembers() []*types.Member {
	s.membersMu.RLock()
	defer s.membersMu.RUnlock()
	members := make([]*types.Member, 0, len(s.members))
	for _, member := range s.members {
		members = append(members, member)
	}
	return members
}

// GetHealthyMembers returns members that have been seen recently
func (s *GossipService) GetHealthyMembers() []*types.Member {
	s.membersMu.RLock()
	defer s.membersMu.RUnlock()

	now := time.Now().UnixMilli()
	healthyMembers := make([]*types.Member, 0)

	for _, member := range s.members {
		// Consider a member healthy if seen within the last 5 minutes
		if now-member.LastSeen < 5*60*1000 {
			healthyMembers = append(healthyMembers, member)
		}
	}

	return healthyMembers
}

// gossipRoutine runs periodically to exchange member lists
func (s *GossipService) gossipRoutine() {
	defer s.wg.Done()

	for {
		// Random interval between the configured min and max (1-2 minutes by default)
		minInterval := time.Duration(s.config.GossipIntervalMin) * time.Second
		maxInterval := time.Duration(s.config.GossipIntervalMax) * time.Second
		interval := minInterval + time.Duration(rand.Int63n(int64(maxInterval-minInterval)))

		select {
		case <-s.ctx.Done():
			return
		case <-time.After(interval):
			s.performGossipRound()
		}
	}
}

// performGossipRound performs a gossip round with random healthy peers
func (s *GossipService) performGossipRound() {
	members := s.GetHealthyMembers()
	if len(members) == 0 {
		s.logger.Debug("No healthy members for gossip round")
		return
	}

	// Select 1-3 random peers for gossip
	maxPeers := 3
	if len(members) < maxPeers {
		maxPeers = len(members)
	}

	// Shuffle and select
	rand.Shuffle(len(members), func(i, j int) {
		members[i], members[j] = members[j], members[i]
	})

	selectedPeers := members[:rand.Intn(maxPeers)+1]

	for _, peer := range selectedPeers {
		go s.gossipWithPeer(peer)
	}
}

// gossipWithPeer performs gossip with a specific peer
func (s *GossipService) gossipWithPeer(peer *types.Member) error {
	s.logger.WithField("peer", peer.Address).Debug("Starting gossip with peer")

	// Get our current member list
	localMembers := s.GetMembers()

	// Send our member list to the peer
	gossipData := make([]types.Member, len(localMembers))
	for i, member := range localMembers {
		gossipData[i] = *member
	}

	// Add ourselves to the list
	selfMember := types.Member{
		ID:              s.config.NodeID,
		Address:         fmt.Sprintf("%s:%d", s.config.BindAddress, s.config.Port),
		LastSeen:        time.Now().UnixMilli(),
		JoinedTimestamp: s.GetJoinedTimestamp(),
	}
	gossipData = append(gossipData, selfMember)

	jsonData, err := json.Marshal(gossipData)
	if err != nil {
		s.logger.WithError(err).Error("Failed to marshal gossip data")
		return err
	}

	// Send HTTP request to peer with cluster authentication
	client := NewAuthenticatedHTTPClient(s.config, 5*time.Second)
	protocol := GetProtocol(s.config)
	url := fmt.Sprintf("%s://%s/members/gossip", protocol, peer.Address)

	req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
	if err != nil {
		s.logger.WithError(err).Error("Failed to create gossip request")
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	AddClusterAuthHeaders(req, s.config)

	resp, err := client.Do(req)
	if err != nil {
		s.logger.WithFields(logrus.Fields{
			"peer":  peer.Address,
			"error": err.Error(),
		}).Warn("Failed to gossip with peer")
		s.markPeerUnhealthy(peer.ID)
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		s.logger.WithFields(logrus.Fields{
			"peer":   peer.Address,
			"status": resp.StatusCode,
		}).Warn("Gossip request failed")
		s.markPeerUnhealthy(peer.ID)
		return fmt.Errorf("gossip request failed with status %d", resp.StatusCode)
	}

	// Process the response - the peer's member list
	var remoteMemberList []types.Member
	if err := json.NewDecoder(resp.Body).Decode(&remoteMemberList); err != nil {
		s.logger.WithError(err).Error("Failed to decode gossip response")
		return err
	}

	// Merge the remote member list with our local list
	s.MergeMemberList(remoteMemberList, s.config.NodeID)

	// Update the peer's last seen timestamp
	s.updateMemberLastSeen(peer.ID, time.Now().UnixMilli())

	s.logger.WithField("peer", peer.Address).Debug("Completed gossip with peer")
	return nil
}

// markPeerUnhealthy marks a peer as unhealthy
func (s *GossipService) markPeerUnhealthy(nodeID string) {
	s.membersMu.Lock()
	defer s.membersMu.Unlock()

	if member, exists := s.members[nodeID]; exists {
		// Push last_seen far into the past to flag the peer as unhealthy
		member.LastSeen = time.Now().UnixMilli() - 10*60*1000 // 10 minutes ago
		s.logger.WithField("node_id", nodeID).Warn("Marked peer as unhealthy")
	}
}

// updateMemberLastSeen updates a member's last seen timestamp
func (s *GossipService) updateMemberLastSeen(nodeID string, timestamp int64) {
	s.membersMu.Lock()
	defer s.membersMu.Unlock()

	if member, exists := s.members[nodeID]; exists {
		member.LastSeen = timestamp
	}
}

// MergeMemberList merges a remote member list into the local member list
func (s *GossipService) MergeMemberList(remoteMembers []types.Member, selfNodeID string) {
	s.membersMu.Lock()
	defer s.membersMu.Unlock()

	now := time.Now().UnixMilli()

	for _, remoteMember := range remoteMembers {
		// Skip ourselves
		if remoteMember.ID == selfNodeID {
			continue
		}

		if localMember, exists := s.members[remoteMember.ID]; exists {
			// Update existing member: keep the newest last_seen
			if remoteMember.LastSeen > localMember.LastSeen {
				localMember.LastSeen = remoteMember.LastSeen
			}
			// Keep the earlier joined timestamp
			if remoteMember.JoinedTimestamp < localMember.JoinedTimestamp {
				localMember.JoinedTimestamp = remoteMember.JoinedTimestamp
			}
		} else {
			// Add new member
			newMember := &types.Member{
				ID:              remoteMember.ID,
				Address:         remoteMember.Address,
				LastSeen:        remoteMember.LastSeen,
				JoinedTimestamp: remoteMember.JoinedTimestamp,
			}
			s.members[remoteMember.ID] = newMember
			s.logger.WithFields(logrus.Fields{
				"node_id": remoteMember.ID,
				"address": remoteMember.Address,
			}).Info("Discovered new member through gossip")
		}
	}

	// Clean up stale members (not seen for more than 10 minutes)
	toRemove := make([]string, 0)
	for nodeID, member := range s.members {
		if now-member.LastSeen > 10*60*1000 { // 10 minutes
			toRemove = append(toRemove, nodeID)
		}
	}

	for _, nodeID := range toRemove {
		delete(s.members, nodeID)
		s.logger.WithField("node_id", nodeID).Info("Removed stale member")
	}
}

// GetJoinedTimestamp is a placeholder; the server overrides this behavior
func (s *GossipService) GetJoinedTimestamp() int64 {
	// This should be implemented by the server that uses this service
	return time.Now().UnixMilli()
}
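A small, illustrative exercise of the merge semantics above (not part of this commit): for a member present on both sides, `MergeMemberList` keeps the newest `last_seen` and the earliest `joined_timestamp`, and prunes anything silent for over ten minutes. The node IDs and addresses here are placeholders.

```go
package main

import (
	"fmt"
	"time"

	"github.com/sirupsen/logrus"

	"kvs/cluster"
	"kvs/config"
	"kvs/types"
)

func main() {
	gs := cluster.NewGossipService(config.Default(), logrus.New())
	now := time.Now().UnixMilli()

	// Local view: node-b last heard from a minute ago.
	gs.AddMember(&types.Member{
		ID: "node-b", Address: "10.0.0.2:8080",
		LastSeen: now - 60_000, JoinedTimestamp: now - 600_000,
	})

	// Remote view: fresher last_seen, earlier joined_timestamp.
	gs.MergeMemberList([]types.Member{
		{ID: "node-b", Address: "10.0.0.2:8080",
			LastSeen: now, JoinedTimestamp: now - 700_000},
	}, "node-a")

	m := gs.GetMembers()[0]
	fmt.Println(m.LastSeen == now, m.JoinedTimestamp == now-700_000) // true true
}
```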
43
cluster/http_client.go
Normal file
@@ -0,0 +1,43 @@
package cluster

import (
	"crypto/tls"
	"net/http"
	"time"

	"kvs/types"
)

// NewAuthenticatedHTTPClient creates an HTTP client configured for cluster authentication
func NewAuthenticatedHTTPClient(config *types.Config, timeout time.Duration) *http.Client {
	client := &http.Client{
		Timeout: timeout,
	}

	// Configure TLS if enabled
	if config.ClusterTLSEnabled {
		tlsConfig := &tls.Config{
			InsecureSkipVerify: config.ClusterTLSSkipVerify,
		}

		client.Transport = &http.Transport{
			TLSClientConfig: tlsConfig,
		}
	}

	return client
}

// AddClusterAuthHeaders adds authentication headers to an HTTP request
func AddClusterAuthHeaders(req *http.Request, config *types.Config) {
	req.Header.Set("X-Cluster-Secret", config.ClusterSecret)
	req.Header.Set("X-Node-ID", config.NodeID)
}

// GetProtocol returns the appropriate protocol (http or https) based on TLS configuration
func GetProtocol(config *types.Config) string {
	if config.ClusterTLSEnabled {
		return "https"
	}
	return "http"
}
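Putting the three helpers together, an authenticated intra-cluster request looks roughly like this sketch (the peer address is a placeholder):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"

	"kvs/cluster"
	"kvs/config"
)

func main() {
	cfg := config.Default() // carries ClusterSecret, NodeID, and TLS settings

	client := cluster.NewAuthenticatedHTTPClient(cfg, 5*time.Second)
	url := fmt.Sprintf("%s://%s/members/", cluster.GetProtocol(cfg), "10.0.0.2:8080")

	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		log.Fatal(err)
	}
	cluster.AddClusterAuthHeaders(req, cfg) // X-Cluster-Secret + X-Node-ID

	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```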
176
cluster/merkle.go
Normal file
@@ -0,0 +1,176 @@
package cluster

import (
	"bytes"
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"sort"
	"strconv"
	"strings"

	badger "github.com/dgraph-io/badger/v4"
	"github.com/sirupsen/logrus"

	"kvs/types"
)

// MerkleService handles Merkle tree operations
type MerkleService struct {
	db     *badger.DB
	logger *logrus.Logger
}

// NewMerkleService creates a new Merkle tree service
func NewMerkleService(db *badger.DB, logger *logrus.Logger) *MerkleService {
	return &MerkleService{
		db:     db,
		logger: logger,
	}
}

// CalculateHash generates a SHA256 hash for a given byte slice
func CalculateHash(data []byte) []byte {
	h := sha256.New()
	h.Write(data)
	return h.Sum(nil)
}

// CalculateLeafHash generates a hash for a leaf node based on its path, UUID, timestamp, and data
func (s *MerkleService) CalculateLeafHash(path string, storedValue *types.StoredValue) []byte {
	// Concatenate path, UUID, timestamp, and the raw data bytes for hashing.
	// A consistent field order keeps the hash deterministic across nodes.
	dataToHash := bytes.Buffer{}
	dataToHash.WriteString(path)
	dataToHash.WriteByte(':')
	dataToHash.WriteString(storedValue.UUID)
	dataToHash.WriteByte(':')
	dataToHash.WriteString(strconv.FormatInt(storedValue.Timestamp, 10))
	dataToHash.WriteByte(':')
	dataToHash.Write(storedValue.Data) // Use raw bytes of json.RawMessage

	return CalculateHash(dataToHash.Bytes())
}

// GetAllKVPairsForMerkleTree retrieves all key-value pairs needed for Merkle tree construction
func (s *MerkleService) GetAllKVPairsForMerkleTree() (map[string]*types.StoredValue, error) {
	pairs := make(map[string]*types.StoredValue)
	err := s.db.View(func(txn *badger.Txn) error {
		opts := badger.DefaultIteratorOptions
		opts.PrefetchValues = true // We need the values for hashing
		it := txn.NewIterator(opts)
		defer it.Close()

		// Iterate over all actual data keys (not _ts: indexes)
		for it.Rewind(); it.Valid(); it.Next() {
			item := it.Item()
			key := string(item.Key())

			if strings.HasPrefix(key, "_ts:") {
				continue // Skip index keys
			}

			var storedValue types.StoredValue
			err := item.Value(func(val []byte) error {
				return json.Unmarshal(val, &storedValue)
			})
			if err != nil {
				s.logger.WithError(err).WithField("key", key).Warn("Failed to unmarshal stored value for Merkle tree, skipping")
				continue
			}
			pairs[key] = &storedValue
		}
		return nil
	})
	if err != nil {
		return nil, err
	}
	return pairs, nil
}

// BuildMerkleTreeFromPairs constructs a Merkle tree from the KVS data.
// This version uses a recursive approach to build a balanced tree from sorted keys.
func (s *MerkleService) BuildMerkleTreeFromPairs(pairs map[string]*types.StoredValue) (*types.MerkleNode, error) {
	if len(pairs) == 0 {
		return &types.MerkleNode{Hash: CalculateHash([]byte("empty_tree")), StartKey: "", EndKey: ""}, nil
	}

	// Sort keys to ensure a consistent tree structure
	keys := make([]string, 0, len(pairs))
	for k := range pairs {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	// Create leaf nodes
	leafNodes := make([]*types.MerkleNode, len(keys))
	for i, key := range keys {
		storedValue := pairs[key]
		hash := s.CalculateLeafHash(key, storedValue)
		leafNodes[i] = &types.MerkleNode{Hash: hash, StartKey: key, EndKey: key}
	}

	// Recursively build parent nodes
	return s.buildMerkleTreeRecursive(leafNodes)
}

// buildMerkleTreeRecursive builds the tree from a slice of nodes
func (s *MerkleService) buildMerkleTreeRecursive(nodes []*types.MerkleNode) (*types.MerkleNode, error) {
	if len(nodes) == 0 {
		return nil, nil
	}
	if len(nodes) == 1 {
		return nodes[0], nil
	}

	var nextLevel []*types.MerkleNode
	for i := 0; i < len(nodes); i += 2 {
		left := nodes[i]
		var right *types.MerkleNode
		if i+1 < len(nodes) {
			right = nodes[i+1]
		}

		var combinedHash []byte
		var endKey string

		if right != nil {
			combinedHash = CalculateHash(append(left.Hash, right.Hash...))
			endKey = right.EndKey
		} else {
			// Odd number of nodes, promote the left node
			combinedHash = left.Hash
			endKey = left.EndKey
		}

		parentNode := &types.MerkleNode{
			Hash:     combinedHash,
			StartKey: left.StartKey,
			EndKey:   endKey,
		}
		nextLevel = append(nextLevel, parentNode)
	}
	return s.buildMerkleTreeRecursive(nextLevel)
}

// FilterPairsByRange filters a map of StoredValue by key range
func FilterPairsByRange(allPairs map[string]*types.StoredValue, startKey, endKey string) map[string]*types.StoredValue {
	filtered := make(map[string]*types.StoredValue)
	for key, value := range allPairs {
		if (startKey == "" || key >= startKey) && (endKey == "" || key <= endKey) {
			filtered[key] = value
		}
	}
	return filtered
}

// BuildSubtreeForRange builds a Merkle subtree for a specific key range
func (s *MerkleService) BuildSubtreeForRange(startKey, endKey string) (*types.MerkleNode, error) {
	pairs, err := s.GetAllKVPairsForMerkleTree()
	if err != nil {
		return nil, fmt.Errorf("failed to get KV pairs for subtree: %v", err)
	}

	filteredPairs := FilterPairsByRange(pairs, startKey, endKey)
	return s.BuildMerkleTreeFromPairs(filteredPairs)
}
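To see the tree builder in action, here is an illustrative sketch (not part of this commit). `BuildMerkleTreeFromPairs` and `CalculateLeafHash` never touch the database, so a nil `*badger.DB` is fine for this exercise; the `Data` field is assumed to be a `json.RawMessage`, matching the design's `StoredValue`.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"

	"github.com/sirupsen/logrus"

	"kvs/cluster"
	"kvs/types"
)

func main() {
	ms := cluster.NewMerkleService(nil, logrus.New()) // db unused by these helpers

	pairsA := map[string]*types.StoredValue{
		"home/a": {UUID: "u1", Timestamp: 1, Data: json.RawMessage(`{"v":1}`)},
		"home/b": {UUID: "u2", Timestamp: 2, Data: json.RawMessage(`{"v":2}`)},
	}
	rootA, _ := ms.BuildMerkleTreeFromPairs(pairsA)

	// Same keys, but one value changed: the roots must diverge.
	pairsB := map[string]*types.StoredValue{
		"home/a": {UUID: "u1", Timestamp: 1, Data: json.RawMessage(`{"v":1}`)},
		"home/b": {UUID: "u3", Timestamp: 3, Data: json.RawMessage(`{"v":9}`)},
	}
	rootB, _ := ms.BuildMerkleTreeFromPairs(pairsB)

	fmt.Println(bytes.Equal(rootA.Hash, rootB.Hash)) // false
}
```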
601
cluster/sync.go
Normal file
@@ -0,0 +1,601 @@
package cluster

import (
	"bytes"
	"context"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"math/rand"
	"net/http"
	"sync"
	"time"

	badger "github.com/dgraph-io/badger/v4"
	"github.com/sirupsen/logrus"

	"kvs/types"
)

// SyncService handles data synchronization between cluster nodes
type SyncService struct {
	db            *badger.DB
	config        *types.Config
	gossipService *GossipService
	merkleService *MerkleService
	logger        *logrus.Logger
	merkleRoot    *types.MerkleNode
	merkleRootMu  sync.RWMutex
	ctx           context.Context
	cancel        context.CancelFunc
	wg            sync.WaitGroup
}

// NewSyncService creates a new sync service
func NewSyncService(db *badger.DB, config *types.Config, gossipService *GossipService, merkleService *MerkleService, logger *logrus.Logger) *SyncService {
	ctx, cancel := context.WithCancel(context.Background())
	return &SyncService{
		db:            db,
		config:        config,
		gossipService: gossipService,
		merkleService: merkleService,
		logger:        logger,
		ctx:           ctx,
		cancel:        cancel,
	}
}

// Start begins the sync routines
func (s *SyncService) Start() {
	if !s.config.ClusteringEnabled {
		s.logger.Info("Clustering disabled, skipping sync routines")
		return
	}

	// Start sync routine
	s.wg.Add(1)
	go s.syncRoutine()

	// Start Merkle tree rebuild routine
	s.wg.Add(1)
	go s.merkleTreeRebuildRoutine()
}

// Stop terminates the sync service
func (s *SyncService) Stop() {
	s.cancel()
	s.wg.Wait()
}

// GetMerkleRoot returns the current Merkle root
func (s *SyncService) GetMerkleRoot() *types.MerkleNode {
	s.merkleRootMu.RLock()
	defer s.merkleRootMu.RUnlock()
	return s.merkleRoot
}

// SetMerkleRoot sets the current Merkle root
func (s *SyncService) SetMerkleRoot(root *types.MerkleNode) {
	s.merkleRootMu.Lock()
	defer s.merkleRootMu.Unlock()
	s.merkleRoot = root
}

// syncRoutine handles regular and catch-up syncing
func (s *SyncService) syncRoutine() {
	defer s.wg.Done()

	syncTicker := time.NewTicker(time.Duration(s.config.SyncInterval) * time.Second)
	defer syncTicker.Stop()

	for {
		select {
		case <-s.ctx.Done():
			return
		case <-syncTicker.C:
			s.performMerkleSync()
		}
	}
}

// merkleTreeRebuildRoutine periodically rebuilds the Merkle tree
func (s *SyncService) merkleTreeRebuildRoutine() {
	defer s.wg.Done()
	ticker := time.NewTicker(time.Duration(s.config.SyncInterval) * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-s.ctx.Done():
			return
		case <-ticker.C:
			s.logger.Debug("Rebuilding Merkle tree...")
			pairs, err := s.merkleService.GetAllKVPairsForMerkleTree()
			if err != nil {
				s.logger.WithError(err).Error("Failed to get KV pairs for Merkle tree rebuild")
				continue
			}
			newRoot, err := s.merkleService.BuildMerkleTreeFromPairs(pairs)
			if err != nil {
				s.logger.WithError(err).Error("Failed to rebuild Merkle tree")
				continue
			}
			s.SetMerkleRoot(newRoot)
			s.logger.Debug("Merkle tree rebuilt.")
		}
	}
}

// InitializeMerkleTree builds the initial Merkle tree
func (s *SyncService) InitializeMerkleTree() error {
	pairs, err := s.merkleService.GetAllKVPairsForMerkleTree()
	if err != nil {
		return fmt.Errorf("failed to get all KV pairs for initial Merkle tree: %v", err)
	}
	root, err := s.merkleService.BuildMerkleTreeFromPairs(pairs)
	if err != nil {
		return fmt.Errorf("failed to build initial Merkle tree: %v", err)
	}
	s.SetMerkleRoot(root)
	s.logger.Info("Initial Merkle tree built.")
	return nil
}

// performMerkleSync performs a synchronization round using Merkle trees
func (s *SyncService) performMerkleSync() {
	members := s.gossipService.GetHealthyMembers()
	if len(members) == 0 {
		s.logger.Debug("No healthy members for Merkle sync")
		return
	}

	// Select a random peer
	peer := members[rand.Intn(len(members))]

	s.logger.WithField("peer", peer.Address).Info("Starting Merkle tree sync")

	localRoot := s.GetMerkleRoot()
	if localRoot == nil {
		s.logger.Error("Local Merkle root is nil, cannot perform sync")
		return
	}

	// 1. Get the remote peer's Merkle root
	remoteRootResp, err := s.requestMerkleRoot(peer.Address)
	if err != nil {
		s.logger.WithError(err).WithField("peer", peer.Address).Error("Failed to get remote Merkle root")
		s.gossipService.markPeerUnhealthy(peer.ID)
		return
	}
	remoteRoot := remoteRootResp.Root

	// 2. Compare roots and start recursive diffing if they differ
	if !bytes.Equal(localRoot.Hash, remoteRoot.Hash) {
		s.logger.WithFields(logrus.Fields{
			"peer":        peer.Address,
			"local_root":  hex.EncodeToString(localRoot.Hash),
			"remote_root": hex.EncodeToString(remoteRoot.Hash),
		}).Info("Merkle roots differ, starting recursive diff")
		s.diffMerkleTreesRecursive(peer.Address, localRoot, remoteRoot)
	} else {
		s.logger.WithField("peer", peer.Address).Info("Merkle roots match, no sync needed")
	}

	s.logger.WithField("peer", peer.Address).Info("Completed Merkle tree sync")
}

// requestMerkleRoot requests the Merkle root from a peer
func (s *SyncService) requestMerkleRoot(peerAddress string) (*types.MerkleRootResponse, error) {
	client := NewAuthenticatedHTTPClient(s.config, 10*time.Second)
	protocol := GetProtocol(s.config)
	url := fmt.Sprintf("%s://%s/merkle_tree/root", protocol, peerAddress)

	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	AddClusterAuthHeaders(req, s.config)

	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("peer returned status %d for Merkle root", resp.StatusCode)
	}

	var merkleRootResp types.MerkleRootResponse
	if err := json.NewDecoder(resp.Body).Decode(&merkleRootResp); err != nil {
		return nil, err
	}
	return &merkleRootResp, nil
}

// diffMerkleTreesRecursive recursively compares local and remote Merkle tree nodes
func (s *SyncService) diffMerkleTreesRecursive(peerAddress string, localNode, remoteNode *types.MerkleNode) {
	// If the hashes match, this subtree is in sync.
	if bytes.Equal(localNode.Hash, remoteNode.Hash) {
		return
	}

	// Hashes differ, so we need to go deeper.
	// Request children from the remote peer for the current range.
	req := types.MerkleTreeDiffRequest{
		ParentNode: *remoteNode,   // We are asking the remote peer about its children for this range
		LocalHash:  localNode.Hash, // Our hash for this range
	}

	remoteDiffResp, err := s.requestMerkleDiff(peerAddress, req)
	if err != nil {
		s.logger.WithError(err).WithFields(logrus.Fields{
			"peer":      peerAddress,
			"start_key": localNode.StartKey,
			"end_key":   localNode.EndKey,
		}).Error("Failed to get Merkle diff from peer")
		return
	}

	if len(remoteDiffResp.Keys) > 0 {
		// This is a leaf-level diff; we have the actual keys that differ.
		s.handleLeafLevelDiff(peerAddress, remoteDiffResp.Keys, localNode)
	} else if len(remoteDiffResp.Children) > 0 {
		// Not a leaf level, continue the recursive diff for the children.
		s.handleChildrenDiff(peerAddress, remoteDiffResp.Children)
	}
}

// handleLeafLevelDiff processes leaf-level differences
func (s *SyncService) handleLeafLevelDiff(peerAddress string, keys []string, localNode *types.MerkleNode) {
	s.logger.WithFields(logrus.Fields{
		"peer":      peerAddress,
		"start_key": localNode.StartKey,
		"end_key":   localNode.EndKey,
		"num_keys":  len(keys),
	}).Info("Found divergent keys, fetching and comparing data")

	for _, key := range keys {
		// Fetch the individual key from the peer
		remoteStoredValue, err := s.fetchSingleKVFromPeer(peerAddress, key)
		if err != nil {
			s.logger.WithError(err).WithFields(logrus.Fields{
				"peer": peerAddress,
				"key":  key,
			}).Error("Failed to fetch single KV from peer during diff")
			continue
		}

		localStoredValue, localExists := s.getLocalData(key)

		if remoteStoredValue == nil {
			// Key was deleted on the remote; delete locally if it exists
			if localExists {
				s.logger.WithField("key", key).Info("Key deleted on remote, deleting locally")
				s.deleteKVLocally(key, localStoredValue.Timestamp)
			}
			continue
		}

		if !localExists {
			// Local data is missing, store the remote data
			if err := s.storeReplicatedDataWithMetadata(key, remoteStoredValue); err != nil {
				s.logger.WithError(err).WithField("key", key).Error("Failed to store missing replicated data")
			} else {
				s.logger.WithField("key", key).Info("Fetched and stored missing data from peer")
			}
		} else if localStoredValue.Timestamp < remoteStoredValue.Timestamp {
			// Remote is newer, store the remote data
			if err := s.storeReplicatedDataWithMetadata(key, remoteStoredValue); err != nil {
				s.logger.WithError(err).WithField("key", key).Error("Failed to store newer replicated data")
			} else {
				s.logger.WithField("key", key).Info("Fetched and stored newer data from peer")
			}
		} else if localStoredValue.Timestamp == remoteStoredValue.Timestamp && localStoredValue.UUID != remoteStoredValue.UUID {
			// Timestamp collision, engage conflict resolution
			s.resolveConflict(key, localStoredValue, remoteStoredValue, peerAddress)
		}
		// If local is newer, or the timestamp and UUID are identical, do nothing.
	}
}

// fetchSingleKVFromPeer fetches a single KV pair from a peer
func (s *SyncService) fetchSingleKVFromPeer(peerAddress, path string) (*types.StoredValue, error) {
	client := NewAuthenticatedHTTPClient(s.config, 5*time.Second)
	protocol := GetProtocol(s.config)
	url := fmt.Sprintf("%s://%s/kv/%s", protocol, peerAddress, path)

	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	AddClusterAuthHeaders(req, s.config)

	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusNotFound {
		return nil, nil // Key might have been deleted on the peer
	}
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("peer returned status %d for path %s", resp.StatusCode, path)
	}

	var storedValue types.StoredValue
	if err := json.NewDecoder(resp.Body).Decode(&storedValue); err != nil {
		return nil, fmt.Errorf("failed to decode types.StoredValue from peer: %v", err)
	}

	return &storedValue, nil
}

// getLocalData is a utility to retrieve a types.StoredValue from the local DB
func (s *SyncService) getLocalData(path string) (*types.StoredValue, bool) {
	var storedValue types.StoredValue
	err := s.db.View(func(txn *badger.Txn) error {
		item, err := txn.Get([]byte(path))
		if err != nil {
			return err
		}

		return item.Value(func(val []byte) error {
			return json.Unmarshal(val, &storedValue)
		})
	})

	if err != nil {
		return nil, false
	}

	return &storedValue, true
}

// deleteKVLocally deletes a key-value pair and its associated timestamp index locally
func (s *SyncService) deleteKVLocally(key string, timestamp int64) error {
	return s.db.Update(func(txn *badger.Txn) error {
		// Delete the main key
		if err := txn.Delete([]byte(key)); err != nil {
			return err
		}

		// Delete the timestamp index (zero-padded to match the write path below)
		indexKey := fmt.Sprintf("_ts:%020d:%s", timestamp, key)
		return txn.Delete([]byte(indexKey))
	})
}

// storeReplicatedDataWithMetadata stores replicated data preserving its original metadata
func (s *SyncService) storeReplicatedDataWithMetadata(path string, storedValue *types.StoredValue) error {
	valueBytes, err := json.Marshal(storedValue)
	if err != nil {
		return err
	}

	return s.db.Update(func(txn *badger.Txn) error {
		// Store the main data
		if err := txn.Set([]byte(path), valueBytes); err != nil {
			return err
		}

		// Store the timestamp index
		indexKey := fmt.Sprintf("_ts:%020d:%s", storedValue.Timestamp, path)
		return txn.Set([]byte(indexKey), []byte(storedValue.UUID))
	})
}

// resolveConflict resolves timestamp collisions: the newer timestamp wins; on a tie,
// the oldest node (earliest joined_timestamp) wins, with lexical UUID comparison as
// a deterministic fallback when membership info is unavailable
func (s *SyncService) resolveConflict(key string, local, remote *types.StoredValue, peerAddress string) error {
	s.logger.WithFields(logrus.Fields{
		"key":         key,
		"local_ts":    local.Timestamp,
		"remote_ts":   remote.Timestamp,
		"local_uuid":  local.UUID,
		"remote_uuid": remote.UUID,
		"peer":        peerAddress,
	}).Info("Resolving timestamp collision conflict")

	if remote.Timestamp > local.Timestamp {
		// Remote is newer, store it
		err := s.storeReplicatedDataWithMetadata(key, remote)
		if err == nil {
			s.logger.WithField("key", key).Info("Conflict resolved: remote data wins (newer timestamp)")
		}
		return err
	} else if local.Timestamp > remote.Timestamp {
		// Local is newer, keep the local data
		s.logger.WithField("key", key).Info("Conflict resolved: local data wins (newer timestamp)")
		return nil
	}

	// Timestamps are equal - need tie-breaking
	s.logger.WithField("key", key).Info("Timestamp collision detected, applying oldest-node rule")

	// Get cluster members to determine which node is older
	members := s.gossipService.GetMembers()

	// Find the local node and the remote node in the membership
	var localMember, remoteMember *types.Member
	localNodeID := s.config.NodeID

	for _, member := range members {
		if member.ID == localNodeID {
			localMember = member
		}
		if member.Address == peerAddress {
			remoteMember = member
		}
	}

	// If we can't find membership info, fall back to UUID comparison for a deterministic result
	if localMember == nil || remoteMember == nil {
		s.logger.WithFields(logrus.Fields{
			"key":          key,
			"peerAddress":  peerAddress,
			"localNodeID":  localNodeID,
			"localMember":  localMember != nil,
			"remoteMember": remoteMember != nil,
			"totalMembers": len(members),
		}).Warn("Could not find membership info for conflict resolution, using UUID comparison")
		if remote.UUID < local.UUID {
			// Remote UUID is lexically smaller (deterministic choice)
			err := s.storeReplicatedDataWithMetadata(key, remote)
			if err == nil {
				s.logger.WithField("key", key).Info("Conflict resolved: remote data wins (UUID tie-breaker)")
			}
			return err
		}
		s.logger.WithField("key", key).Info("Conflict resolved: local data wins (UUID tie-breaker)")
		return nil
	}

	// Apply the oldest-node rule: the node with the earliest joined_timestamp wins
	if remoteMember.JoinedTimestamp < localMember.JoinedTimestamp {
		// Remote node is older, its data wins
		err := s.storeReplicatedDataWithMetadata(key, remote)
		if err == nil {
			s.logger.WithFields(logrus.Fields{
				"key":           key,
				"local_joined":  localMember.JoinedTimestamp,
				"remote_joined": remoteMember.JoinedTimestamp,
			}).Info("Conflict resolved: remote data wins (oldest-node rule)")
		}
		return err
	}

	// Local node is older or equal, keep the local data
	s.logger.WithFields(logrus.Fields{
		"key":           key,
		"local_joined":  localMember.JoinedTimestamp,
		"remote_joined": remoteMember.JoinedTimestamp,
	}).Info("Conflict resolved: local data wins (oldest-node rule)")
	return nil
}

// requestMerkleDiff requests children hashes or keys for a given node/range from a peer
func (s *SyncService) requestMerkleDiff(peerAddress string, reqData types.MerkleTreeDiffRequest) (*types.MerkleTreeDiffResponse, error) {
	jsonData, err := json.Marshal(reqData)
	if err != nil {
		return nil, err
	}

	client := NewAuthenticatedHTTPClient(s.config, 10*time.Second)
	protocol := GetProtocol(s.config)
	url := fmt.Sprintf("%s://%s/merkle_tree/diff", protocol, peerAddress)

	req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	AddClusterAuthHeaders(req, s.config)

	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("peer returned status %d for Merkle diff", resp.StatusCode)
	}

	var diffResp types.MerkleTreeDiffResponse
	if err := json.NewDecoder(resp.Body).Decode(&diffResp); err != nil {
		return nil, err
	}
	return &diffResp, nil
}

// handleChildrenDiff processes children-level differences
func (s *SyncService) handleChildrenDiff(peerAddress string, children []types.MerkleNode) {
	localPairs, err := s.merkleService.GetAllKVPairsForMerkleTree()
	if err != nil {
		s.logger.WithError(err).Error("Failed to get KV pairs for local children comparison")
		return
	}

	for _, remoteChild := range children {
		// Build the local Merkle node for this child's range
		localChildNode, err := s.merkleService.BuildMerkleTreeFromPairs(FilterPairsByRange(localPairs, remoteChild.StartKey, remoteChild.EndKey))
		if err != nil {
			s.logger.WithError(err).WithFields(logrus.Fields{
				"start_key": remoteChild.StartKey,
				"end_key":   remoteChild.EndKey,
			}).Error("Failed to build local child node for diff")
			continue
		}

		if localChildNode == nil || !bytes.Equal(localChildNode.Hash, remoteChild.Hash) {
			// If the local child node is nil (meaning local has no data in this range)
			// or the hashes differ, then we need to fetch the data.
			if localChildNode == nil {
				s.logger.WithFields(logrus.Fields{
					"peer":      peerAddress,
					"start_key": remoteChild.StartKey,
					"end_key":   remoteChild.EndKey,
				}).Info("Local node missing data in remote child's range, fetching full range")
				s.fetchAndStoreRange(peerAddress, remoteChild.StartKey, remoteChild.EndKey)
			} else {
				s.diffMerkleTreesRecursive(peerAddress, localChildNode, &remoteChild)
			}
		}
	}
}

// fetchAndStoreRange fetches a range of KV pairs from a peer and stores them locally
func (s *SyncService) fetchAndStoreRange(peerAddress string, startKey, endKey string) error {
	reqData := types.KVRangeRequest{
		StartKey: startKey,
		EndKey:   endKey,
		Limit:    0, // No limit
	}
	jsonData, err := json.Marshal(reqData)
	if err != nil {
		return err
	}

	client := NewAuthenticatedHTTPClient(s.config, 30*time.Second) // Longer timeout for range fetches
	protocol := GetProtocol(s.config)
	url := fmt.Sprintf("%s://%s/kv_range", protocol, peerAddress)

	req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	AddClusterAuthHeaders(req, s.config)

	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("peer returned status %d for KV range fetch", resp.StatusCode)
	}

	var rangeResp types.KVRangeResponse
	if err := json.NewDecoder(resp.Body).Decode(&rangeResp); err != nil {
		return err
	}

	for _, pair := range rangeResp.Pairs {
		// Use storeReplicatedDataWithMetadata to preserve the original UUID/Timestamp
		if err := s.storeReplicatedDataWithMetadata(pair.Path, &pair.StoredValue); err != nil {
			s.logger.WithError(err).WithFields(logrus.Fields{
				"peer": peerAddress,
				"path": pair.Path,
			}).Error("Failed to store fetched range data")
		} else {
			s.logger.WithFields(logrus.Fields{
				"peer": peerAddress,
				"path": pair.Path,
			}).Debug("Stored data from fetched range")
		}
	}
	return nil
}
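One detail worth calling out in `storeReplicatedDataWithMetadata`: the `%020d` zero-padding is what makes lexicographic key order in the `_ts:` index match chronological order (and why the delete path must use the same format). An illustrative check, not part of this commit:

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Without padding, "999" would sort after "1000"; with %020d it cannot.
	keys := []string{
		fmt.Sprintf("_ts:%020d:%s", 1672531205000, "users/john/profile"),
		fmt.Sprintf("_ts:%020d:%s", 1672531200000, "home/room/closet/socks"),
	}
	sort.Strings(keys)   // lexicographic order == chronological order
	fmt.Println(keys[0]) // _ts:00000001672531200000:home/room/closet/socks
}
```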
128
config/config.go
Normal file
@@ -0,0 +1,128 @@
package config

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"os"
	"path/filepath"

	"gopkg.in/yaml.v3"

	"kvs/types"
)

// Default returns the default configuration
func Default() *types.Config {
	hostname, _ := os.Hostname()
	return &types.Config{
		NodeID:               hostname,
		BindAddress:          "127.0.0.1",
		Port:                 8080,
		DataDir:              "./data",
		SeedNodes:            []string{},
		ReadOnly:             false,
		LogLevel:             "info",
		GossipIntervalMin:    60,  // 1 minute
		GossipIntervalMax:    120, // 2 minutes
		SyncInterval:         300, // 5 minutes
		CatchupInterval:      120, // 2 minutes
		BootstrapMaxAgeHours: 720, // 30 days
		ThrottleDelayMs:      100,
		FetchDelayMs:         50,

		// Default compression settings
		CompressionEnabled: true,
		CompressionLevel:   3, // Balance between performance and compression ratio

		// Default TTL and size limit settings
		DefaultTTL:  "0",     // No default TTL
		MaxJSONSize: 1048576, // 1MB default max JSON size

		// Default rate limiting settings
		RateLimitRequests: 100,  // 100 requests per window
		RateLimitWindow:   "1m", // 1 minute window

		// Default tamper-evident logging settings
		TamperLogActions: []string{"data_write", "user_create", "auth_failure"},

		// Default backup system settings
		BackupEnabled:   true,
		BackupSchedule:  "0 0 * * *", // Daily at midnight
		BackupPath:      "./backups",
		BackupRetention: 7, // Keep backups for 7 days

		// Default feature toggle settings (all enabled by default)
		AuthEnabled:            true,
		TamperLoggingEnabled:   true,
		ClusteringEnabled:      true,
		RateLimitingEnabled:    true,
		RevisionHistoryEnabled: true,

		// Default anonymous access settings (both disabled by default for security)
		AllowAnonymousRead:  false,
		AllowAnonymousWrite: false,

		// Default cluster authentication settings (Issue #13)
		ClusterSecret:        generateClusterSecret(),
		ClusterTLSEnabled:    false,
		ClusterTLSCertFile:   "",
		ClusterTLSKeyFile:    "",
		ClusterTLSSkipVerify: false,
	}
}

// generateClusterSecret generates a cryptographically secure random cluster secret
func generateClusterSecret() string {
	// Generate 32 bytes (256 bits) of random data
	randomBytes := make([]byte, 32)
	if _, err := rand.Read(randomBytes); err != nil {
		// Fall back to an empty secret with a warning - this should never happen in practice
		fmt.Fprintf(os.Stderr, "Warning: Failed to generate secure cluster secret: %v\n", err)
		return ""
	}
	// Encode as base64 for easy configuration file storage
	return base64.StdEncoding.EncodeToString(randomBytes)
}

// Load reads the configuration from file, creating a default config on first run
func Load(configPath string) (*types.Config, error) {
	config := Default()

	if _, err := os.Stat(configPath); os.IsNotExist(err) {
		// Create a default config file
		if err := os.MkdirAll(filepath.Dir(configPath), 0755); err != nil {
			return nil, fmt.Errorf("failed to create config directory: %v", err)
		}

		data, err := yaml.Marshal(config)
		if err != nil {
			return nil, fmt.Errorf("failed to marshal default config: %v", err)
		}

		if err := os.WriteFile(configPath, data, 0644); err != nil {
			return nil, fmt.Errorf("failed to write default config: %v", err)
		}

		fmt.Printf("Created default configuration at %s\n", configPath)
		return config, nil
	}

	data, err := os.ReadFile(configPath)
	if err != nil {
		return nil, fmt.Errorf("failed to read config file: %v", err)
	}

	if err := yaml.Unmarshal(data, config); err != nil {
		return nil, fmt.Errorf("failed to parse config file: %v", err)
	}

	// Generate a cluster secret if not provided and clustering is enabled (Issue #13)
	if config.ClusteringEnabled && config.ClusterSecret == "" {
		config.ClusterSecret = generateClusterSecret()
		fmt.Printf("Warning: No cluster_secret configured. Generated a random secret.\n")
		fmt.Printf("         To share this secret with other nodes, add it to your config:\n")
		fmt.Printf("         cluster_secret: %s\n", config.ClusterSecret)
	}

	return config, nil
}
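A minimal sketch of consuming this package (the printed fields are illustrative):

```go
package main

import (
	"fmt"
	"log"

	"kvs/config"
)

func main() {
	// The first run writes a default config.yaml (including a generated
	// cluster_secret); later runs load and parse the existing file.
	cfg, err := config.Load("config.yaml")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("node %s on %s:%d, clustering=%v\n",
		cfg.NodeID, cfg.BindAddress, cfg.Port, cfg.ClusteringEnabled)
}
```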
323
design_v2.md
@@ -1,323 +0,0 @@
|
||||
# Gossip in GO, lazy syncing K/J database
|
||||
|
||||
## Software Design Document: Clustered Key-Value Store
|
||||
|
||||
### 1. Introduction
|
||||
|
||||
#### 1.1 Goals
|
||||
This document outlines the design for a minimalistic, clustered key-value database system written in Go. The primary goals are:
|
||||
* **Eventual Consistency:** Prioritize availability and partition tolerance over strong consistency.
|
||||
* **Local-First Truth:** Local operations should be fast, with replication happening in the background.
|
||||
* **Gossip-Style Membership:** Decentralized mechanism for nodes to discover and track each other.
|
||||
* **Hierarchical Keys:** Support for structured keys (e.g., `/home/room/closet/socks`).
|
||||
* **Minimalistic Footprint:** Efficient resource usage on servers.
|
||||
* **Simple Configuration & Operation:** Easy to deploy and manage.
|
||||
* **Read-Only Mode:** Ability for nodes to restrict external writes.
|
||||
* **Gradual Bootstrapping:** New nodes integrate smoothly without overwhelming the cluster.
|
||||
* **Sophisticated Conflict Resolution:** Handle timestamp collisions using majority vote, with oldest node as tie-breaker.
|
||||
|
||||
#### 1.2 Non-Goals
|
||||
* Strong (linearizable/serializable) consistency.
|
||||
* Complex querying or indexing beyond key-based lookups and timestamp-filtered UUID lists.
|
||||
* Transaction support across multiple keys.
|
||||
|
||||
### 2. Architecture Overview
|
||||
|
||||
The system will consist of independent Go services (nodes) that communicate via HTTP/REST. Each node will embed a BadgerDB instance for local data storage and manage its own membership list through a gossip protocol. External clients interact with any available node, which then participates in the cluster's eventual consistency model.
|
||||
|
||||
**Key Architectural Principles:**
|
||||
* **Decentralized:** No central coordinator or leader.
|
||||
* **Peer-to-Peer:** Nodes communicate directly with each other for replication and membership.
|
||||
* **API-Driven:** All interactions, both external (clients) and internal (replication), occur over a RESTful HTTP API.
|
||||
|
||||
```
|
||||
+----------------+ +----------------+ +----------------+
|
||||
| Node A | | Node B | | Node C |
|
||||
| (Go Service) | | (Go Service) | | (Go Service) |
|
||||
| | | | | |
|
||||
| +------------+ | | +------------+ | | +------------+ |
|
||||
| | HTTP Server| | <---- | | HTTP Server| | <---- | | HTTP Server| |
|
||||
| | (API) | | ---> | | (API) | | ---> | | (API) | |
|
||||
| +------------+ | | +------------+ | | +------------+ |
|
||||
| | | | | | | | |
|
||||
| +------------+ | | +------------+ | | +------------+ |
|
||||
| | Gossip | | <---> | | Gossip | | <---> | | Gossip | |
|
||||
| | Manager | | | | Manager | | | | Manager | |
|
||||
| +------------+ | | +------------+ | | +------------+ |
|
||||
| | | | | | | | |
|
||||
| +------------+ | | +------------+ | | +------------+ |
|
||||
| | Replication| | <---> | | Replication| | <---> | | Replication| |
|
||||
| | Logic | | | | Logic | | | | Logic | |
|
||||
| +------------+ | | +------------+ | | +------------+ |
|
||||
| | | | | | | | |
|
||||
| +------------+ | | +------------+ | | +------------+ |
|
||||
| | BadgerDB | | | | BadgerDB | | | | BadgerDB | |
|
||||
| | (Local KV) | | | | (Local KV) | | | | (Local KV) | |
|
||||
| +------------+ | | +------------+ | | +------------+ |
|
||||
+----------------+ +----------------+ +----------------+
|
||||
^
|
||||
|
|
||||
+----- External Clients (Interact with any Node's API)
|
||||
```
|
||||
|
||||
### 3. Data Model
|
||||
|
||||
#### 3.1 Logical Data Structure
|
||||
Data is logically stored as a key-value pair, where the key is a hierarchical path and the value is a JSON object. Each pair also carries metadata for consistency and conflict resolution.
|
||||
|
||||
* **Logical Key:** `string` (e.g., `/home/room/closet/socks`)
|
||||
* **Logical Value:** `JSON object` (e.g., `{"count":7,"colors":["blue","red","black"]}`)
|
||||
|
||||
#### 3.2 Internal Storage Structure (BadgerDB)
|
||||
BadgerDB is a flat key-value store. To accommodate hierarchical keys and metadata, the following mapping will be used:
|
||||
|
||||
* **BadgerDB Key:** The full logical key path, with the leading `/kv/` prefix removed. Path segments will be separated by `/`. **No leading `/` will be stored in the BadgerDB key.**
|
||||
* Example: For logical key `/kv/home/room/closet/socks`, the BadgerDB key will be `home/room/closet/socks`.
|
||||
|
||||
* **BadgerDB Value:** A marshaled JSON object containing the `uuid`, `timestamp`, and the actual `data` JSON object. This allows for consistent versioning and conflict resolution.
|
||||
|
||||
```json
|
||||
// Example BadgerDB Value (marshaled JSON string)
|
||||
{
|
||||
"uuid": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
|
||||
"timestamp": 1672531200000, // Unix timestamp in milliseconds
|
||||
"data": {
|
||||
"count": 7,
|
||||
"colors": ["blue", "red", "black"]
|
||||
}
|
||||
}
|
||||
``` * **`uuid` (string):** A UUIDv4, unique identifier for this specific version of the data.
|
||||
* **`timestamp` (int64):** Unix timestamp representing the time of the last modification. **This will be in milliseconds since epoch**, providing higher precision and reducing collision risk. This is the primary mechanism for conflict resolution ("newest data wins").
|
||||
* **`data` (JSON object):** The actual user-provided JSON payload.
|
||||
|
||||
### 4. API Endpoints
|
||||
|
||||
All endpoints will communicate over HTTP/1.1 and utilize JSON for request/response bodies.
|
||||
|
||||
#### 4.1 `/kv/` Endpoints (Data Operations - External/Internal)
|
||||
|
||||
These endpoints are for direct key-value manipulation by external clients and are also used internally by nodes when fetching full data during replication.
|
||||
|
||||
* **`GET /kv/{path}`**
|
||||
* **Description:** Retrieves the JSON object associated with the given hierarchical key path.
|
||||
* **Request:** No body.
|
||||
* **Responses:**
|
||||
* `200 OK`: `Content-Type: application/json` with the stored JSON object.
|
||||
* `404 Not Found`: If the key does not exist.
|
||||
* `500 Internal Server Error`: For server-side issues (e.g., BadgerDB error).
|
||||
* **Example:** `GET /kv/home/room/closet/socks` -> `{"count":7,"colors":["blue","red","black"]}`
|
||||
|
||||
* **`PUT /kv/{path}`**
|
||||
* **Description:** Creates or updates a JSON object at the given path. This operation will internally generate a new UUIDv4 and assign the current Unix timestamp (milliseconds) to the stored value.
|
||||
* **Request:**
|
||||
* `Content-Type: application/json`
|
||||
* Body: The JSON object to store.
|
||||
* **Responses:**
|
||||
* `200 OK` (Update) or `201 Created` (New): On success, returns `{"uuid": "new-uuid", "timestamp": new-timestamp_ms}`.
|
||||
* `400 Bad Request`: If the request body is not valid JSON.
|
||||
* `403 Forbidden`: If the node is in "read-only" mode and the request's origin is not a recognized cluster member (checked via IP/hostname).
|
||||
* `500 Internal Server Error`: For server-side issues.
|
||||
* **Example:** `PUT /kv/settings/theme` with body `{"color":"dark","font_size":14}` -> `{"uuid": "...", "timestamp": ...}`
|
||||
|
||||
* **`DELETE /kv/{path}`**
|
||||
* **Description:** Deletes the key-value pair at the given path.
|
||||
* **Request:** No body.
|
||||
* **Responses:**
|
||||
* `204 No Content`: On successful deletion.
|
||||
* `404 Not Found`: If the key does not exist.
|
||||
* `403 Forbidden`: If the node is in "read-only" mode and the request is not from a recognized cluster member.
|
||||
* `500 Internal Server Error`: For server-side issues.
|
||||
|
||||
#### 4.2 `/members/` Endpoints (Membership & Internal Replication)
|
||||
|
||||
These endpoints are primarily for internal communication between cluster nodes, managing membership and facilitating data synchronization.
|
||||
|
||||
* **`GET /members/`**
|
||||
* **Description:** Returns a list of known active members in the cluster. This list is maintained locally by each node based on the gossip protocol.
|
||||
* **Request:** No body.
|
||||
* **Responses:**
|
||||
* `200 OK`: `Content-Type: application/json` with a JSON array of member details.
|
||||
```json
|
||||
[
|
||||
{"id": "node-alpha", "address": "192.168.1.10:8080", "last_seen": 1672531200000, "joined_timestamp": 1672530000000},
|
||||
{"id": "node-beta", "address": "192.168.1.11:8080", "last_seen": 1672531205000, "joined_timestamp": 1672530100000}
|
||||
]
|
||||
```
|
||||
* `id` (string): Unique identifier for the node.
|
||||
* `address` (string): `host:port` of the node's API endpoint.
|
||||
* `last_seen` (int64): Unix timestamp (milliseconds) of when this node was last successfully contacted or heard from.
|
||||
* `joined_timestamp` (int64): Unix timestamp (milliseconds) of when this node first joined the cluster. This is crucial for tie-breaking conflicts.
|
||||
* `500 Internal Server Error`: For server-side issues.
|
||||
|
||||
* **`POST /members/join`**
|
||||
* **Description:** Used by a new node to announce its presence and attempt to join the cluster. Existing nodes use this to update their member list and respond with their current view of the cluster.
|
||||
* **Request:**
|
||||
* `Content-Type: application/json`
|
||||
* Body:
|
||||
```json
|
||||
{"id": "node-gamma", "address": "192.168.1.12:8080", "joined_timestamp": 1672532000000}
|
||||
```
|
||||
* `joined_timestamp` will be set by the joining node (its startup time).
|
||||
* **Responses:**
|
||||
* `200 OK`: Acknowledgment, returning the current list of known members to the joining node (same format as `GET /members/`).
|
||||
* `400 Bad Request`: If the request body is malformed.
|
||||
* `500 Internal Server Error`: For server-side issues.
|
||||
|
||||
* **`DELETE /members/leave` (Optional, for graceful shutdown)**
|
||||
* **Description:** A member can proactively announce its departure from the cluster. This allows other nodes to quickly mark it as inactive.
|
||||
* **Request:**
|
||||
* `Content-Type: application/json`
|
||||
* Body: `{"id": "node-gamma"}`
|
||||
* **Responses:**
|
||||
* `204 No Content`: On successful processing.
|
||||
* `400 Bad Request`: If the request body is malformed.
|
||||
* `500 Internal Server Error`: For server-side issues.
|
||||
|
||||
* **`POST /members/pairs_by_time` (Internal/Replication Endpoint)**
    * **Description:** Used by other cluster members to request a list of key paths, their UUIDs, and their timestamps within a specified time range, optionally filtered by a key prefix. This is critical for both gradual bootstrapping and the regular 5-minute synchronization. (Wire types are sketched below.)
    * **Request:**
        * `Content-Type: application/json`
        * Body:

          ```json
          {
            "start_timestamp": 1672531200000,  // Unix milliseconds (inclusive)
            "end_timestamp": 1672617600000,    // Unix milliseconds (exclusive), or 0 for "up to now"
            "limit": 15,                       // Max number of pairs to return
            "prefix": "home/room/"             // Optional: filter by BadgerDB key prefix
          }
          ```

        * `start_timestamp`: Earliest timestamp for data to be included.
        * `end_timestamp`: Latest timestamp (exclusive). If `0` or omitted, it implies "up to the current time".
        * `limit`: **Fixed at 15** for this design, to control batch size during sync.
        * `prefix`: Optional; filters keys by a common BadgerDB key prefix.
    * **Responses:**
        * `200 OK`: `Content-Type: application/json` with a JSON array of objects:

          ```json
          [
            {"path": "home/room/closet/socks", "uuid": "...", "timestamp": 1672531200000},
            {"path": "users/john/profile", "uuid": "...", "timestamp": 1672531205000}
          ]
          ```

        * `204 No Content`: If no data matches the criteria.
        * `400 Bad Request`: If the request body is malformed or the timestamps are invalid.
        * `500 Internal Server Error`: For server-side issues.
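Hypothetical Go wire types matching the JSON above (field names follow the spec; the type names are assumptions):

```go
// PairsByTimeRequest is the body of POST /members/pairs_by_time.
type PairsByTimeRequest struct {
	StartTimestamp int64  `json:"start_timestamp"` // inclusive, Unix ms
	EndTimestamp   int64  `json:"end_timestamp"`   // exclusive, Unix ms; 0 = "up to now"
	Limit          int    `json:"limit"`           // fixed at 15 in this design
	Prefix         string `json:"prefix,omitempty"`
}

// PairByTime is one element of the 200 OK response array.
type PairByTime struct {
	Path      string `json:"path"`
	UUID      string `json:"uuid"`
	Timestamp int64  `json:"timestamp"`
}
```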
### 5. BadgerDB Integration

BadgerDB will be used as the embedded, local, single-node key-value store.

* **Key Storage:** As described in section 3.2, the HTTP path (without the `/kv/` prefix and with no leading `/`) maps directly to the BadgerDB key.
* **Value Storage:** Values are marshaled JSON objects (`uuid`, `timestamp`, `data`).
* **Timestamp Indexing (for `pairs_by_time`):** To query efficiently by timestamp, a manual secondary index is maintained. Each `PUT` operation writes two BadgerDB entries (see the sketch below):
    1. The primary data entry: `{badger_key}` -> `{uuid, timestamp, data}`.
    2. A secondary timestamp index entry: `_ts:{timestamp_ms}:{badger_key}` -> `{uuid}`.
    * The `_ts` prefix ensures these index keys are grouped together and don't collide with data keys.
    * The millisecond timestamp ensures lexicographical sorting by time.
    * Embedding the `badger_key` in the index key guarantees uniqueness and points back to the main data.
    * The value can simply be the `uuid`, or even an empty string if only the key is needed. Storing the `uuid` here is useful for direct lookups.
* **`DELETE` Operations:** A `DELETE /kv/{path}` removes both the primary data entry and its corresponding secondary index entry from BadgerDB.
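A minimal sketch of the dual-entry write, using BadgerDB v4's transaction API; `StoredValue` follows the value format above, and the helper name is illustrative:

```go
package storage

import (
	"encoding/json"
	"fmt"

	badger "github.com/dgraph-io/badger/v4"
)

// StoredValue mirrors the JSON value format (`uuid`, `timestamp`, `data`).
type StoredValue struct {
	UUID      string          `json:"uuid"`
	Timestamp int64           `json:"timestamp"`
	Data      json.RawMessage `json:"data"`
}

// putWithIndex writes the primary entry and its `_ts:` index entry in one
// transaction. Epoch-millisecond timestamps stay 13 digits wide for the
// foreseeable future, so %d keeps the index lexicographically time-ordered.
func putWithIndex(db *badger.DB, key string, sv StoredValue) error {
	return db.Update(func(txn *badger.Txn) error {
		val, err := json.Marshal(sv)
		if err != nil {
			return err
		}
		if err := txn.Set([]byte(key), val); err != nil {
			return err
		}
		idxKey := fmt.Sprintf("_ts:%d:%s", sv.Timestamp, key)
		return txn.Set([]byte(idxKey), []byte(sv.UUID))
	})
}
```

A `DELETE` would mirror this with two `txn.Delete` calls in the same transaction.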
### 6. Clustering and Consistency

#### 6.1 Membership Management (Gossip Protocol)

* Each node maintains a local list of known cluster members (node ID, address, last-seen timestamp, joined timestamp).
* Each node waits a random interval of **1-2 minutes** after its previous gossip round before initiating the next one (a minimal timer sketch follows this list).
* In a gossip round, the node randomly selects a subset of its healthy known members (e.g., 1-3 nodes) and performs a "gossip exchange":
    1. It sends its current local member list to the selected peers.
    2. Peers merge the received list with their own, updating `last_seen` timestamps for existing members and adding new ones.
    3. If a node fails to respond to multiple gossip attempts, it is eventually marked as "suspected down", and then "dead" after a configurable timeout.
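A sketch of the randomized timer, assuming a hypothetical `doRound` callback that performs the exchange with 1-3 random healthy peers:

```go
import (
	"context"
	"math/rand"
	"time"
)

// gossipLoop fires one gossip round 1-2 minutes after the previous one.
func gossipLoop(ctx context.Context, doRound func()) {
	for {
		delay := time.Minute + time.Duration(rand.Int63n(int64(time.Minute)))
		select {
		case <-ctx.Done():
			return
		case <-time.After(delay):
			doRound()
		}
	}
}
```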
#### 6.2 Data Replication (Periodic Syncs)

* The system uses two types of data synchronization:
    1. **Regular 5-Minute Sync:** catches up on recent changes.
    2. **Catch-Up Sync (2-Minute Cycles):** for nodes that detect they are significantly behind.

* **Regular 5-Minute Sync:**
    * Every **5 minutes**, each node initiates a data synchronization cycle.
    * It selects a random healthy peer.
    * It sends `POST /members/pairs_by_time` to the peer, requesting **the 15 latest UUIDs** (by setting `limit: 15` and `end_timestamp: current_time_ms`, with `start_timestamp: 0` or a very old value so that enough items are considered).
    * The remote node responds with its 15 latest (path, uuid, timestamp) pairs.
    * The local node compares these with its own latest 15. If it is missing any of that data, or holds only an older version of it, it fetches the full value via `GET /kv/{path}` and updates its local store.
    * If the local node detects that it is significantly behind (e.g., many of the remote node's latest 15 UUIDs are missing locally or are much newer than the local versions, indicating a large gap), it triggers the **Catch-Up Sync**.

* **Catch-Up Sync (2-Minute Cycles):**
    * This mode is activated when a node determines it is behind its peers (e.g., during the 5-minute sync or while bootstrapping).
    * It runs every **2 minutes**, offset so that it does not coincide with the 5-minute sync.
    * The node identifies the `oldest_known_timestamp_among_peers_latest_15` from its last regular sync.
    * It then sends `POST /members/pairs_by_time` to a random healthy peer, requesting **15 UUIDs older than that timestamp** (e.g., `end_timestamp: oldest_known_timestamp_ms`, `limit: 15`, `start_timestamp: 0` or further back).
    * It keeps iterating backwards in time in 2-minute cycles, progressively asking for older sets of 15 UUIDs, until it has caught up to a reasonable historical depth (the configured `BOOTSTRAP_MAX_AGE_MILLIS`).
    * **History Depth:** The system aims to keep **at least 3 revisions per path** for conflict resolution and, eventually, versioning. `BOOTSTRAP_MAX_AGE_MILLIS` (defaulting to 30 days) governs how far back in time a node will actively fetch during a full sync.
#### 6.3 Conflict Resolution

When two nodes hold different versions of the same key (same BadgerDB key), the following resolution logic is applied (sketched in Go below):

1. **Timestamp Wins:** The version with the **most recent `timestamp` (Unix milliseconds)** is considered correct.
2. **Timestamp Collision (Tie-Breaker):** If two conflicting versions have the **exact same `timestamp`**:
    * **Majority Vote:** The system queries a quorum of healthy peers (`GET /kv/{path}` or an internal check for UUID/timestamp) to see which UUID/timestamp pair the majority holds. The version held by the majority wins.
    * **Oldest Node Priority (Tie-Breaker for the Majority Vote):** If the vote is tied (e.g., two nodes report version A and two report version B), the version held by the node with the **oldest `joined_timestamp`** (i.e., the oldest active member in the cluster) takes precedence. This provides a deterministic tie-breaker.
    * *Implementation Note:* For the majority vote, a node may need to request the `{"uuid", "timestamp"}` pairs for a specific `path` from multiple peers. This implies an internal query mechanism, or aggregating responses from `pairs_by_time` for that key.
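A hypothetical helper capturing the decision order, using the `StoredValue` sketch from section 5; the caller is assumed to have already collected quorum votes for the collision case:

```go
// pickWinner applies the section 6.3 rules in order. votesA/votesB are the
// quorum tallies for each version; the *HolderJoined values are the
// joined_timestamp (Unix ms) of the member holding each version.
func pickWinner(a, b StoredValue, votesA, votesB int, aHolderJoined, bHolderJoined int64) StoredValue {
	if a.Timestamp != b.Timestamp { // rule 1: newest timestamp wins
		if a.Timestamp > b.Timestamp {
			return a
		}
		return b
	}
	if votesA != votesB { // rule 2: majority vote on the UUID/timestamp pair
		if votesA > votesB {
			return a
		}
		return b
	}
	// rule 3: tie - the version held by the oldest cluster member wins
	if aHolderJoined <= bHolderJoined {
		return a
	}
	return b
}
```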
### 7. Bootstrapping New Nodes (Gradual Full Sync)

This process is initiated when a new node starts up with no existing data or member list.

1. **Seed Node Configuration:** The new node must be configured with a list of initial `seed_nodes` (e.g., `["host1:port", "host2:port"]`).
2. **Join Request:** The new node attempts to `POST /members/join` to one of its configured seed nodes, providing its own `id`, `address`, and `joined_timestamp` (its startup time).
3. **Member List Discovery:** Upon a successful join, the seed node responds with its current list of known cluster members. The new node populates its local member list from this response.
4. **Gradual Data Synchronization Loop (Catch-Up Mode):** (sketched in code after this list)
    * The new node sets `current_end_timestamp = current_time_ms`.
    * It defines a `sync_batch_size` (e.g., 15 UUIDs per request, matching the `pairs_by_time` `limit`).
    * It also defines a `throttle_delay` (e.g., 100 ms between `pairs_by_time` requests to different peers) and a `fetch_delay` (e.g., 50 ms between individual `GET /kv/{path}` requests for full data).
    * **Loop backwards in time:**
        * The node determines the `oldest_timestamp_fetched` from its *last* batch of `sync_batch_size` items. Initially, this is `current_time_ms`.
        * It randomly picks a healthy peer from its member list.
        * It sends `POST /members/pairs_by_time` to the peer with `end_timestamp: oldest_timestamp_fetched`, `limit: sync_batch_size`, and `start_timestamp: 0`. This asks for 15 items *older than* the oldest one just processed.
        * It processes the received `{"path", "uuid", "timestamp"}` pairs:
            * For each remote pair, it fetches its local version from BadgerDB.
            * **Conflict Resolution:** Apply the logic from section 6.3. If the local data is missing or older, issue a `GET /kv/{path}` to fetch the full data and store it.
        * **Throttling:**
            * Wait `throttle_delay` after each `pairs_by_time` request.
            * Wait `fetch_delay` after each individual `GET /kv/{path}` request for full data.
        * **Termination:** The loop continues until `oldest_timestamp_fetched` falls below the configured `BOOTSTRAP_MAX_AGE_MILLIS` (defaulting to 30 days ago; configurable). The node may also terminate if multiple consecutive `pairs_by_time` queries return no new (older) data.
5. **Full Participation:** While the gradual sync runs, the node operates in `syncing` mode and rejects external client writes with `503 Service Unavailable`. Once the sync is complete, it fully participates in the regular 5-minute replication cycles and accepts external client writes (if not in read-only mode).
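A sketch of the backward-paging loop under stated assumptions: `Client`, `PairsByTime`, `needsFetch`, and `FetchFull` are hypothetical helpers wrapping the endpoints above.

```go
// bootstrapSync pages backwards through peers' timestamp index in batches
// of 15 until it reaches the configured age limit or runs out of data.
func bootstrapSync(c *Client, maxAgeMillis int64) error {
	end := time.Now().UnixMilli()
	cutoff := end - maxAgeMillis
	for end > cutoff {
		pairs, err := c.PairsByTime(0, end, 15) // random healthy peer chosen inside
		if err != nil {
			return err
		}
		if len(pairs) == 0 {
			return nil // no older data anywhere
		}
		prev := end
		for _, p := range pairs {
			if c.needsFetch(p) { // missing locally, or older per section 6.3
				if err := c.FetchFull(p.Path); err != nil {
					return err
				}
				time.Sleep(50 * time.Millisecond) // fetch_delay
			}
			if p.Timestamp < end {
				end = p.Timestamp
			}
		}
		if end == prev {
			return nil // no progress: nothing older to fetch
		}
		time.Sleep(100 * time.Millisecond) // throttle_delay
	}
	return nil
}
```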
### 8. Operational Modes

* **Normal Mode:** Full read/write capabilities; participates in all replication and gossip activities.
* **Read-Only Mode:**
    * The node rejects `PUT` and `DELETE` requests from **external clients** with a `403 Forbidden` status.
    * It **still accepts** `PUT` and `DELETE` operations that originate from **recognized cluster members** during replication, allowing it to remain eventually consistent.
    * `GET` requests are always allowed.
    * This mode is primarily for reducing write load or protecting data on specific nodes.
* **Syncing Mode (Internal, During Bootstrap):**
    * While a new node is undergoing its initial gradual sync, it operates in this internal mode.
    * External `PUT`/`DELETE` requests are **rejected with `503 Service Unavailable`**.
    * Internal replication from other members is fully active.

A middleware sketch enforcing these modes follows.
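A minimal sketch; `mode` and `isPeer` are hypothetical hooks onto the node's current mode and its cluster-member check:

```go
import "net/http"

// modeGate rejects external writes according to the node's operational mode,
// while letting recognized cluster members through.
func modeGate(mode func() string, isPeer func(*http.Request) bool, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodPut || r.Method == http.MethodDelete {
			switch mode() {
			case "read-only":
				if !isPeer(r) {
					http.Error(w, "read-only mode", http.StatusForbidden) // 403
					return
				}
			case "syncing":
				if !isPeer(r) {
					http.Error(w, "node is syncing", http.StatusServiceUnavailable) // 503
					return
				}
			}
		}
		next.ServeHTTP(w, r)
	})
}
```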
### 9. Logging

A structured logging library (e.g., `zap` or `logrus`) will be used.

* **Log Levels:** Support for `DEBUG`, `INFO`, `WARN`, `ERROR`, `FATAL`. Configurable.
* **Log Format:** JSON, for easy parsing by log aggregators.
* **Key Events to Log:**
    * **Startup/Shutdown:** Server start/stop, configuration loaded.
    * **API Requests:** Incoming HTTP request details (method, path, client IP, status code, duration).
    * **BadgerDB Operations:** Errors during put/get/delete, database open/close, secondary index operations.
    * **Membership:** Node joined/left, gossip rounds initiated/received, member status changes (up, suspected, down), tie-breaker decisions.
    * **Replication:** Sync cycle start/end, type of sync (regular/catch-up), number of keys compared, number of keys fetched, conflict resolutions (including details of timestamp-collision resolution).
    * **Errors:** Data serialization/deserialization, network errors, unhandled exceptions.
    * **Operational Mode Changes:** Entering/exiting read-only mode or syncing mode.
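Since `go.mod` later in this diff pins `sirupsen/logrus`, a minimal example of the JSON request log described above:

```go
logger := logrus.New()
logger.SetFormatter(&logrus.JSONFormatter{}) // JSON for log aggregators
logger.WithFields(logrus.Fields{
	"method":      "PUT",
	"path":        "/kv/home/room/closet/socks",
	"client_ip":   "192.168.1.50",
	"status":      200,
	"duration_ms": 3,
}).Info("request completed")
```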
### 10. Future Work (Rough Order of Priority)

These items are out of scope for the initial design but are planned for future versions.

* **Authentication/Authorization (before first release):** Implement robust authentication for API endpoints (e.g., API keys, mTLS) and potentially basic authorization for access to `kv` paths.
* **Client Libraries/Functions (Bash, Python, Go):** Develop official client libraries or helper functions to simplify interaction with the API from common programming environments.
* **Data Compression (gzip):** Implement gzip compression for data values stored in BadgerDB to reduce the storage footprint and potentially improve I/O performance.
* **Data Revisions & Simple Backups:**
    * Hold **at least 3 revisions per path**. This would involve storing previous versions of data when a `PUT` occurs, potentially in a separate BadgerDB key namespace (e.g., `_rev:{badger_key}:{timestamp_of_revision}`).
    * The current `GET /kv/{path}` would continue to return only the latest version. A new API might be introduced to fetch specific historical revisions.
    * Simple backup strategies could leverage these revisions or BadgerDB's native snapshot capabilities.
* **Monitoring & Metrics (Grafana support in v3):** Integrate with a metrics system like Prometheus, exposing key performance indicators (e.g., request rates, error rates, replication lag, BadgerDB stats) for visualization in dashboards such as Grafana.
102
features/auth.go
Normal file
@@ -0,0 +1,102 @@
package features

import (
	"fmt"
	"net/http"
	"strings"

	"github.com/gorilla/mux"

	"kvs/types"
)

// AuthContext holds authentication information for a request
type AuthContext struct {
	UserUUID string   `json:"user_uuid"`
	Scopes   []string `json:"scopes"`
	Groups   []string `json:"groups"`
}

// CheckPermission validates if a user has permission to perform an operation
func CheckPermission(permissions int, operation string, isOwner, isGroupMember bool) bool {
	switch operation {
	case "create":
		if isOwner {
			return (permissions & types.PermOwnerCreate) != 0
		}
		if isGroupMember {
			return (permissions & types.PermGroupCreate) != 0
		}
		return (permissions & types.PermOthersCreate) != 0

	case "delete":
		if isOwner {
			return (permissions & types.PermOwnerDelete) != 0
		}
		if isGroupMember {
			return (permissions & types.PermGroupDelete) != 0
		}
		return (permissions & types.PermOthersDelete) != 0

	case "write":
		if isOwner {
			return (permissions & types.PermOwnerWrite) != 0
		}
		if isGroupMember {
			return (permissions & types.PermGroupWrite) != 0
		}
		return (permissions & types.PermOthersWrite) != 0

	case "read":
		if isOwner {
			return (permissions & types.PermOwnerRead) != 0
		}
		if isGroupMember {
			return (permissions & types.PermGroupRead) != 0
		}
		return (permissions & types.PermOthersRead) != 0

	default:
		return false
	}
}

// CheckUserResourceRelationship determines user relationship to resource
func CheckUserResourceRelationship(userUUID string, metadata *types.ResourceMetadata, userGroups []string) (isOwner, isGroupMember bool) {
	isOwner = (userUUID == metadata.OwnerUUID)

	if metadata.GroupUUID != "" {
		for _, groupUUID := range userGroups {
			if groupUUID == metadata.GroupUUID {
				isGroupMember = true
				break
			}
		}
	}

	return isOwner, isGroupMember
}

// ExtractTokenFromHeader extracts the Bearer token from the Authorization header
func ExtractTokenFromHeader(r *http.Request) (string, error) {
	authHeader := r.Header.Get("Authorization")
	if authHeader == "" {
		return "", fmt.Errorf("missing authorization header")
	}

	parts := strings.Split(authHeader, " ")
	if len(parts) != 2 || strings.ToLower(parts[0]) != "bearer" {
		return "", fmt.Errorf("invalid authorization header format")
	}

	return parts[1], nil
}

// ExtractKVResourceKey extracts KV resource key from request
func ExtractKVResourceKey(r *http.Request) string {
	vars := mux.Vars(r)
	if path, ok := vars["path"]; ok {
		return path
	}
	return ""
}
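A usage sketch for the helpers above (hypothetical values; `3840` = `0xF00` is the permissions value used by the metadata tests later in this diff, assumed here to set the owner bits):

```go
meta := &types.ResourceMetadata{OwnerUUID: "owner-uuid-1", GroupUUID: "group-uuid-1"}
isOwner, isGroupMember := features.CheckUserResourceRelationship("owner-uuid-1", meta, nil)
if features.CheckPermission(3840, "read", isOwner, isGroupMember) {
	// proceed with the read
}
```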
11
features/backup.go
Normal file
@@ -0,0 +1,11 @@
package features

import (
	"fmt"
	"time"
)

// GetBackupFilename generates a filename for a backup
func GetBackupFilename(timestamp time.Time) string {
	return fmt.Sprintf("kvs-backup-%s.zstd", timestamp.Format("2006-01-02"))
}
4
features/features.go
Normal file
@@ -0,0 +1,4 @@
// Package features provides utility functions for KVS authentication, validation,
// logging, backup, and other operational features. These functions were extracted
// from main.go to improve code organization and maintainability.
package features
8
features/ratelimit.go
Normal file
@@ -0,0 +1,8 @@
package features

import "fmt"

// GetRateLimitKey generates the storage key for rate limiting
func GetRateLimitKey(userUUID string, windowStart int64) string {
	return fmt.Sprintf("ratelimit:%s:%d", userUUID, windowStart)
}
8
features/revision.go
Normal file
@@ -0,0 +1,8 @@
package features

import "fmt"

// GetRevisionKey generates the storage key for a specific revision
func GetRevisionKey(baseKey string, revision int) string {
	return fmt.Sprintf("%s:rev:%d", baseKey, revision)
}
24
features/tamperlog.go
Normal file
@@ -0,0 +1,24 @@
package features

import (
	"fmt"

	"kvs/utils"
)

// GetTamperLogKey generates the storage key for a tamper log entry
func GetTamperLogKey(timestamp string, entryUUID string) string {
	return fmt.Sprintf("log:%s:%s", timestamp, entryUUID)
}

// GetMerkleLogKey generates the storage key for hourly Merkle tree roots
func GetMerkleLogKey(timestamp string) string {
	return fmt.Sprintf("log:merkle:%s", timestamp)
}

// GenerateLogSignature creates a SHA3-512 signature for a log entry
func GenerateLogSignature(timestamp, action, userUUID, resource string) string {
	// Concatenate all fields in a deterministic order
	data := fmt.Sprintf("%s|%s|%s|%s", timestamp, action, userUUID, resource)
	return utils.HashSHA3512(data)
}
24
features/validation.go
Normal file
@@ -0,0 +1,24 @@
package features

import (
	"fmt"
	"time"
)

// ParseTTL converts a Go duration string to time.Duration
func ParseTTL(ttlString string) (time.Duration, error) {
	if ttlString == "" || ttlString == "0" {
		return 0, nil // No TTL
	}

	duration, err := time.ParseDuration(ttlString)
	if err != nil {
		return 0, fmt.Errorf("invalid TTL format: %v", err)
	}

	if duration < 0 {
		return 0, fmt.Errorf("TTL cannot be negative")
	}

	return duration, nil
}
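For reference, the expected behavior of `ParseTTL` as implemented above (uses Go duration syntax, e.g. "24h", "5m"):

```go
ttl, err := features.ParseTTL("24h")
fmt.Println(ttl, err) // 24h0m0s <nil>

ttl, err = features.ParseTTL("") // empty string or "0" means "no TTL"
fmt.Println(ttl, err)            // 0s <nil>

_, err = features.ParseTTL("-5m")
fmt.Println(err) // TTL cannot be negative
```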
9
go.mod
@@ -4,9 +4,13 @@ go 1.21

require (
	github.com/dgraph-io/badger/v4 v4.2.0
	github.com/golang-jwt/jwt/v4 v4.5.2
	github.com/google/uuid v1.4.0
	github.com/gorilla/mux v1.8.1
	github.com/klauspost/compress v1.17.4
	github.com/robfig/cron/v3 v3.0.1
	github.com/sirupsen/logrus v1.9.3
	golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9
	gopkg.in/yaml.v3 v3.0.1
)

@@ -20,11 +24,10 @@ require (
	github.com/golang/protobuf v1.5.2 // indirect
	github.com/golang/snappy v0.0.3 // indirect
	github.com/google/flatbuffers v1.12.1 // indirect
	github.com/klauspost/compress v1.12.3 // indirect
	github.com/kr/text v0.2.0 // indirect
	github.com/pkg/errors v0.9.1 // indirect
	go.opencensus.io v0.22.5 // indirect
	golang.org/x/net v0.7.0 // indirect
	golang.org/x/sys v0.5.0 // indirect
	golang.org/x/net v0.10.0 // indirect
	golang.org/x/sys v0.14.0 // indirect
	google.golang.org/protobuf v1.28.1 // indirect
)
17
go.sum
@@ -18,6 +18,8 @@ github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/glog v1.0.0 h1:nfP3RFugxnNRyKgeWd4oI1nYvXpxrx8ck8ZrcizshdQ=
github.com/golang/glog v1.0.0/go.mod h1:EWib/APOK0SL3dFbYqvxE3UYd8E6s1ouQ7iEp/0LWV4=
@@ -42,8 +44,8 @@ github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.12.3 h1:G5AfA94pHPysR56qqrkO2pxEexdDzrpFJ6yt/VqWxVU=
github.com/klauspost/compress v1.12.3/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
github.com/klauspost/compress v1.17.4 h1:Ej5ixsIri7BrIjBkRZLTo6ghwrEtHFk7ijlczPW4fZ4=
github.com/klauspost/compress v1.17.4/go.mod h1:/dCuZOvVtNoHsyb+cuJD3itjs3NbnF6KH9zAO4BDxPM=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
@@ -52,6 +54,8 @@ github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
@@ -64,6 +68,7 @@ go.opencensus.io v0.22.5 h1:dntmOdLpSpHlVqbW5Eay97DelsZHe+55D+xC6i0dDS0=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -79,8 +84,8 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0 h1:X2//UzNDwYmtCLn7To6G58Wr6f5ahEAQgKNzv9Y951M=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -95,8 +100,8 @@ golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20221010170243-090e33056c14/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.14.0 h1:Vz7Qs629MkJkGyHxUlRHizWJRG2j8fbQKjELVSNhy7Q=
golang.org/x/sys v0.14.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
integration_test.sh
@@ -1,7 +1,7 @@
#!/bin/bash

# KVS Integration Test Suite - Working Version
# Tests all critical features of the distributed key-value store
# KVS Integration Test Suite - Adapted for Merkle Tree Sync
# Tests all critical features of the distributed key-value store with Merkle Tree replication

# Colors for output
RED='\033[0;31m'
@@ -43,7 +43,7 @@ test_start() {
# Cleanup function
cleanup() {
    log_info "Cleaning up test environment..."
    pkill -f "./kvs" 2>/dev/null || true
    pkill -f "$BINARY" 2>/dev/null || true
    rm -rf "$TEST_DIR" 2>/dev/null || true
    sleep 2 # Allow processes to fully terminate
}
@@ -75,6 +75,7 @@ test_build() {
        log_error "Binary build failed"
        return 1
    fi
    # Ensure we are back in TEST_DIR for subsequent tests
    cd "$TEST_DIR"
}

@@ -90,6 +91,8 @@ port: 8090
data_dir: "./basic_data"
seed_nodes: []
log_level: "error"
allow_anonymous_read: true
allow_anonymous_write: true
EOF

    # Start node
@@ -103,12 +106,12 @@ EOF
        -d '{"message":"hello world"}')

    local get_result=$(curl -s http://localhost:8090/kv/test/basic)
    local message=$(echo "$get_result" | jq -r '.message' 2>/dev/null)
    local message=$(echo "$get_result" | jq -r '.data.message' 2>/dev/null) # Adjusted jq path

    if [ "$message" = "hello world" ]; then
        log_success "Basic CRUD operations work"
    else
        log_error "Basic CRUD failed: $get_result"
        log_error "Basic CRUD failed: Expected 'hello world', got '$message' from $get_result"
    fi
    else
        log_error "Basic test node failed to start"
@@ -120,8 +123,11 @@ EOF

# Test 3: Cluster formation
test_cluster_formation() {
    test_start "2-node cluster formation"
    test_start "2-node cluster formation and Merkle Tree replication"

    # Shared cluster secret for authentication (Issue #13)
    local CLUSTER_SECRET="test-cluster-secret-12345678901234567890"

    # Node 1 config
    cat > cluster1.yaml <<EOF
node_id: "cluster-1"
@@ -133,8 +139,11 @@ log_level: "error"
gossip_interval_min: 5
gossip_interval_max: 10
sync_interval: 10
allow_anonymous_read: true
allow_anonymous_write: true
cluster_secret: "$CLUSTER_SECRET"
EOF

    # Node 2 config
    cat > cluster2.yaml <<EOF
node_id: "cluster-2"
@@ -146,6 +155,9 @@ log_level: "error"
gossip_interval_min: 5
gossip_interval_max: 10
sync_interval: 10
allow_anonymous_read: true
allow_anonymous_write: true
cluster_secret: "$CLUSTER_SECRET"
EOF

    # Start nodes
@@ -158,7 +170,7 @@ EOF
        return 1
    fi

    sleep 2
    sleep 2 # Give node 1 a moment to fully initialize
    $BINARY cluster2.yaml >/dev/null 2>&1 &
    local pid2=$!

@@ -168,28 +180,46 @@ EOF
        return 1
    fi

    # Wait for cluster formation
    sleep 8
    # Wait for cluster formation and initial Merkle sync
    sleep 15

    # Check if nodes see each other
    local node1_members=$(curl -s http://localhost:8101/members/ | jq length 2>/dev/null || echo 0)
    local node2_members=$(curl -s http://localhost:8102/members/ | jq length 2>/dev/null || echo 0)

    if [ "$node1_members" -ge 1 ] && [ "$node2_members" -ge 1 ]; then
        log_success "2-node cluster formed successfully"
        log_success "2-node cluster formed successfully (N1 members: $node1_members, N2 members: $node2_members)"

        # Test data replication
        log_info "Putting data on Node 1, waiting for Merkle sync..."
        curl -s -X PUT http://localhost:8101/kv/cluster/test \
            -H "Content-Type: application/json" \
            -d '{"source":"node1"}' >/dev/null
            -d '{"source":"node1", "value": 1}' >/dev/null

        sleep 12 # Wait for sync cycle
        # Wait for Merkle sync cycle to complete
        sleep 12

        local node2_data=$(curl -s http://localhost:8102/kv/cluster/test | jq -r '.source' 2>/dev/null)
        if [ "$node2_data" = "node1" ]; then
            log_success "Data replication works correctly"
        local node2_data_full=$(curl -s http://localhost:8102/kv/cluster/test)
        local node2_data_source=$(echo "$node2_data_full" | jq -r '.data.source' 2>/dev/null)
        local node2_data_value=$(echo "$node2_data_full" | jq -r '.data.value' 2>/dev/null)
        local node1_data_full=$(curl -s http://localhost:8101/kv/cluster/test)

        if [ "$node2_data_source" = "node1" ] && [ "$node2_data_value" = "1" ]; then
            log_success "Data replication works correctly (Node 2 has data from Node 1)"

            # Verify UUIDs and Timestamps are identical (crucial for Merkle sync correctness)
            local node1_uuid=$(echo "$node1_data_full" | jq -r '.uuid' 2>/dev/null)
            local node1_timestamp=$(echo "$node1_data_full" | jq -r '.timestamp' 2>/dev/null)
            local node2_uuid=$(echo "$node2_data_full" | jq -r '.uuid' 2>/dev/null)
            local node2_timestamp=$(echo "$node2_data_full" | jq -r '.timestamp' 2>/dev/null)

            if [ "$node1_uuid" = "$node2_uuid" ] && [ "$node1_timestamp" = "$node2_timestamp" ]; then
                log_success "Replicated data retains original UUID and Timestamp"
            else
                log_error "Replicated data changed UUID/Timestamp: N1_UUID=$node1_uuid, N1_TS=$node1_timestamp, N2_UUID=$node2_uuid, N2_TS=$node2_timestamp"
            fi
        else
            log_error "Data replication failed: $node2_data"
            log_error "Data replication failed: Node 2 data: $node2_data_full"
        fi
    else
        log_error "Cluster formation failed (N1 members: $node1_members, N2 members: $node2_members)"
@@ -199,18 +229,24 @@ EOF
    sleep 2
}

# Test 4: Conflict resolution (simplified)
# Test 4: Conflict resolution (Merkle Tree based)
# This test assumes 'test_conflict.go' creates two BadgerDBs with a key
# that has the same path and timestamp but different UUIDs, or different timestamps
# but same path. The Merkle tree sync should then trigger conflict resolution.
test_conflict_resolution() {
    test_start "Conflict resolution test"
    test_start "Conflict resolution test (Merkle Tree based)"

    # Create conflicting data using our utility
    rm -rf conflict1_data conflict2_data 2>/dev/null || true
    mkdir -p conflict1_data conflict2_data

    cd "$SCRIPT_DIR"
    if go run test_conflict.go "$TEST_DIR/conflict1_data" "$TEST_DIR/conflict2_data" >/dev/null 2>&1; then
    if go run test_conflict.go "$TEST_DIR/conflict1_data" "$TEST_DIR/conflict2_data"; then
        cd "$TEST_DIR"

        # Shared cluster secret for authentication (Issue #13)
        local CLUSTER_SECRET="conflict-cluster-secret-1234567890123"

        # Create configs
        cat > conflict1.yaml <<EOF
node_id: "conflict-1"
@@ -219,9 +255,12 @@ port: 8111
data_dir: "./conflict1_data"
seed_nodes: []
log_level: "info"
sync_interval: 8
sync_interval: 3
allow_anonymous_read: true
allow_anonymous_write: true
cluster_secret: "$CLUSTER_SECRET"
EOF

        cat > conflict2.yaml <<EOF
node_id: "conflict-2"
bind_address: "127.0.0.1"
@@ -229,11 +268,15 @@ port: 8112
data_dir: "./conflict2_data"
seed_nodes: ["127.0.0.1:8111"]
log_level: "info"
sync_interval: 8
sync_interval: 3
allow_anonymous_read: true
allow_anonymous_write: true
cluster_secret: "$CLUSTER_SECRET"
EOF

        # Start nodes
        $BINARY conflict1.yaml >conflict1.log 2>&1 &
        # Node 1 started first, making it "older" for tie-breaker if timestamps are equal
        "$BINARY" conflict1.yaml >conflict1.log 2>&1 &
        local pid1=$!

        if wait_for_service 8111; then
@@ -242,26 +285,74 @@ EOF
            local pid2=$!

            if wait_for_service 8112; then
                # Get initial data
                local node1_initial=$(curl -s http://localhost:8111/kv/test/conflict/data | jq -r '.message' 2>/dev/null)
                local node2_initial=$(curl -s http://localhost:8112/kv/test/conflict/data | jq -r '.message' 2>/dev/null)
                # Get initial data (full StoredValue)
                local node1_initial_full=$(curl -s http://localhost:8111/kv/test/conflict/data)
                local node2_initial_full=$(curl -s http://localhost:8112/kv/test/conflict/data)

                # Wait for conflict resolution
                sleep 12
                local node1_initial_msg=$(echo "$node1_initial_full" | jq -r '.data.message' 2>/dev/null)
                local node2_initial_msg=$(echo "$node2_initial_full" | jq -r '.data.message' 2>/dev/null)

                # Get final data
                local node1_final=$(curl -s http://localhost:8111/kv/test/conflict/data | jq -r '.message' 2>/dev/null)
                local node2_final=$(curl -s http://localhost:8112/kv/test/conflict/data | jq -r '.message' 2>/dev/null)
                log_info "Initial conflict state: Node1='$node1_initial_msg', Node2='$node2_initial_msg'"

                # Allow time for cluster formation and gossip protocol to stabilize
                log_info "Waiting for cluster formation and gossip stabilization..."
                sleep 20

                # Wait for conflict resolution with retry logic (up to 60 seconds)
                local max_attempts=20
                local attempt=1
                local node1_final_msg=""
                local node2_final_msg=""
                local node1_final_full=""
                local node2_final_full=""

                log_info "Waiting for conflict resolution (checking every 3 seconds, max 60 seconds)..."

                while [ $attempt -le $max_attempts ]; do
                    sleep 3

                    # Get current data from both nodes
                    node1_final_full=$(curl -s http://localhost:8111/kv/test/conflict/data)
                    node2_final_full=$(curl -s http://localhost:8112/kv/test/conflict/data)

                    node1_final_msg=$(echo "$node1_final_full" | jq -r '.data.message' 2>/dev/null)
                    node2_final_msg=$(echo "$node2_final_full" | jq -r '.data.message' 2>/dev/null)

                    # Check if they've converged
                    if [ "$node1_final_msg" = "$node2_final_msg" ] && [ -n "$node1_final_msg" ] && [ "$node1_final_msg" != "null" ]; then
                        log_info "Conflict resolution achieved after $((attempt * 3)) seconds"
                        break
                    fi

                    log_info "Attempt $attempt/$max_attempts: Node1='$node1_final_msg', Node2='$node2_final_msg' (not converged yet)"
                    attempt=$((attempt + 1))
                done

                # Check if they converged
                if [ "$node1_final" = "$node2_final" ] && [ -n "$node1_final" ]; then
                    if grep -q "conflict resolution" conflict1.log conflict2.log 2>/dev/null; then
                        log_success "Conflict resolution detected and resolved ($node1_initial vs $node2_initial → $node1_final)"
                if [ "$node1_final_msg" = "$node2_final_msg" ] && [ -n "$node1_final_msg" ]; then
                    log_success "Conflict resolution converged to: '$node1_final_msg'"

                    # Verify UUIDs and Timestamps are identical after resolution
                    local node1_final_uuid=$(echo "$node1_final_full" | jq -r '.uuid' 2>/dev/null)
                    local node1_final_timestamp=$(echo "$node1_final_full" | jq -r '.timestamp' 2>/dev/null)
                    local node2_final_uuid=$(echo "$node2_final_full" | jq -r '.uuid' 2>/dev/null)
                    local node2_final_timestamp=$(echo "$node2_final_full" | jq -r '.timestamp' 2>/dev/null)

                    if [ "$node1_final_uuid" = "$node2_final_uuid" ] && [ "$node1_final_timestamp" = "$node2_final_timestamp" ]; then
                        log_success "Resolved data retains consistent UUID and Timestamp across nodes"
                    else
                        log_success "Nodes converged without conflicts ($node1_final)"
                        log_error "Resolved data has inconsistent UUID/Timestamp: N1_UUID=$node1_final_uuid, N1_TS=$node1_final_timestamp, N2_UUID=$node2_final_uuid, N2_TS=$node2_final_timestamp"
                    fi

                    # Optionally, check logs for conflict resolution messages
                    if grep -q "Conflict resolved" conflict1.log conflict2.log 2>/dev/null; then
                        log_success "Conflict resolution messages found in logs"
                    else
                        log_error "No 'Conflict resolved' messages found in logs, but data converged."
                    fi

                else
                    log_error "Conflict resolution failed: N1='$node1_final', N2='$node2_final'"
                    log_error "Conflict resolution failed: N1_final='$node1_final_msg', N2_final='$node2_final_msg'"
                fi
            else
                log_error "Conflict node 2 failed to start"
@@ -276,14 +367,176 @@ EOF
            sleep 2
    else
        cd "$TEST_DIR"
        log_error "Failed to create conflict test data"
        log_error "Failed to create conflict test data. Ensure test_conflict.go is correct."
    fi
}

# Test 5: Authentication middleware (Issue #4)
test_authentication_middleware() {
    test_start "Authentication middleware test (Issue #4)"

    # Create auth test config
    cat > auth_test.yaml <<EOF
node_id: "auth-test"
bind_address: "127.0.0.1"
port: 8095
data_dir: "./auth_test_data"
seed_nodes: []
log_level: "error"
auth_enabled: true
allow_anonymous_read: false
allow_anonymous_write: false
EOF

    # Start node
    $BINARY auth_test.yaml >auth_test.log 2>&1 &
    local pid=$!

    if wait_for_service 8095; then
        sleep 2 # Allow root account creation

        # Extract the token from logs
        local token=$(grep "Token:" auth_test.log | sed 's/.*Token: //' | tr -d '\n\r')

        if [ -z "$token" ]; then
            log_error "Failed to extract authentication token from logs"
            kill $pid 2>/dev/null || true
            return
        fi

        # Test 1: Admin endpoints should fail without authentication
        local no_auth_response=$(curl -s -X POST http://localhost:8095/api/users -H "Content-Type: application/json" -d '{"nickname":"test","password":"test"}')
        if echo "$no_auth_response" | grep -q "Unauthorized"; then
            log_success "Admin endpoints properly reject unauthenticated requests"
        else
            log_error "Admin endpoints should reject unauthenticated requests, got: $no_auth_response"
        fi

        # Test 2: Admin endpoints should work with valid authentication
        local auth_response=$(curl -s -X POST http://localhost:8095/api/users -H "Content-Type: application/json" -H "Authorization: Bearer $token" -d '{"nickname":"authtest","password":"authtest"}')
        if echo "$auth_response" | grep -q "uuid"; then
            log_success "Admin endpoints work with valid authentication"
        else
            log_error "Admin endpoints should work with authentication, got: $auth_response"
        fi

        # Test 3: KV endpoints should require auth when anonymous access is disabled
        local kv_no_auth=$(curl -s -X PUT http://localhost:8095/kv/test/auth -H "Content-Type: application/json" -d '{"test":"auth"}')
        if echo "$kv_no_auth" | grep -q "Unauthorized"; then
            log_success "KV endpoints properly require authentication when anonymous access disabled"
        else
            log_error "KV endpoints should require auth when anonymous access disabled, got: $kv_no_auth"
        fi

        # Test 4: KV endpoints should work with valid authentication
        local kv_auth=$(curl -s -X PUT http://localhost:8095/kv/test/auth -H "Content-Type: application/json" -H "Authorization: Bearer $token" -d '{"test":"auth"}')
        if echo "$kv_auth" | grep -q "uuid\|timestamp" || [ -z "$kv_auth" ]; then
            log_success "KV endpoints work with valid authentication"
        else
            log_error "KV endpoints should work with authentication, got: $kv_auth"
        fi

        kill $pid 2>/dev/null || true
        sleep 2
    else
        log_error "Auth test node failed to start"
        kill $pid 2>/dev/null || true
    fi
}

# Test 6: Resource Metadata Management (Issue #12)
test_metadata_management() {
    test_start "Resource Metadata Management test (Issue #12)"

    # Create metadata test config
    cat > metadata_test.yaml <<EOF
node_id: "metadata-test"
bind_address: "127.0.0.1"
port: 8096
data_dir: "./metadata_test_data"
seed_nodes: []
log_level: "error"
auth_enabled: true
allow_anonymous_read: false
allow_anonymous_write: false
EOF

    # Start node
    $BINARY metadata_test.yaml >metadata_test.log 2>&1 &
    local pid=$!

    if wait_for_service 8096; then
        sleep 2 # Allow root account creation

        # Extract the token from logs
        local token=$(grep "Token:" metadata_test.log | sed 's/.*Token: //' | tr -d '\n\r')

        if [ -z "$token" ]; then
            log_error "Failed to extract authentication token from logs"
            kill $pid 2>/dev/null || true
            return
        fi

        # First, create a KV resource
        curl -s -X PUT http://localhost:8096/kv/test/resource -H "Content-Type: application/json" -H "Authorization: Bearer $token" -d '{"data":"test"}' >/dev/null
        sleep 1

        # Test 1: Get metadata should fail for non-existent metadata (initially no metadata exists)
        local get_response=$(curl -s -w "\n%{http_code}" -X GET http://localhost:8096/kv/test/resource/metadata -H "Authorization: Bearer $token")
        local get_body=$(echo "$get_response" | head -n -1)
        local get_code=$(echo "$get_response" | tail -n 1)

        if [ "$get_code" = "404" ]; then
            log_success "GET metadata returns 404 for non-existent metadata"
        else
            log_error "GET metadata should return 404 for non-existent metadata, got code: $get_code, body: $get_body"
        fi

        # Test 2: Update metadata should create new metadata
        local update_response=$(curl -s -X PUT http://localhost:8096/kv/test/resource/metadata -H "Content-Type: application/json" -H "Authorization: Bearer $token" -d '{"owner_uuid":"test-owner-123","permissions":3840}')
        if echo "$update_response" | grep -q "owner_uuid"; then
            log_success "PUT metadata creates metadata successfully"
        else
            log_error "PUT metadata should create metadata, got: $update_response"
        fi

        # Test 3: Get metadata should now return the created metadata
        local get_response2=$(curl -s -X GET http://localhost:8096/kv/test/resource/metadata -H "Authorization: Bearer $token")
        if echo "$get_response2" | grep -q "test-owner-123" && echo "$get_response2" | grep -q "3840"; then
            log_success "GET metadata returns created metadata"
        else
            log_error "GET metadata should return created metadata, got: $get_response2"
        fi

        # Test 4: Update metadata should modify existing metadata
        local update_response2=$(curl -s -X PUT http://localhost:8096/kv/test/resource/metadata -H "Content-Type: application/json" -H "Authorization: Bearer $token" -d '{"owner_uuid":"new-owner-456"}')
        if echo "$update_response2" | grep -q "new-owner-456"; then
            log_success "PUT metadata updates existing metadata"
        else
            log_error "PUT metadata should update metadata, got: $update_response2"
        fi

        # Test 5: Metadata endpoints should require authentication
        local no_auth=$(curl -s -w "\n%{http_code}" -X GET http://localhost:8096/kv/test/resource/metadata)
        local no_auth_code=$(echo "$no_auth" | tail -n 1)
        if [ "$no_auth_code" = "401" ]; then
            log_success "Metadata endpoints properly require authentication"
        else
            log_error "Metadata endpoints should require authentication, got code: $no_auth_code"
        fi

        kill $pid 2>/dev/null || true
        sleep 2
    else
        log_error "Metadata test node failed to start"
        kill $pid 2>/dev/null || true
    fi
}

# Main test execution
main() {
    echo "=================================================="
    echo " KVS Integration Test Suite"
    echo " KVS Integration Test Suite (Merkle Tree)"
    echo "=================================================="

    # Setup
@@ -297,7 +550,9 @@ main() {
    test_basic_functionality
    test_cluster_formation
    test_conflict_resolution

    test_authentication_middleware
    test_metadata_management

    # Results
    echo "=================================================="
    echo " Test Results"
@@ -308,7 +563,7 @@ main() {
    echo "=================================================="

    if [ $TESTS_FAILED -eq 0 ]; then
        echo -e "${GREEN}🎉 All tests passed! KVS is working correctly.${NC}"
        echo -e "${GREEN}🎉 All tests passed! KVS with Merkle Tree sync is working correctly.${NC}"
        cleanup
        exit 0
    else
@@ -322,4 +577,4 @@ main() {
trap cleanup INT TERM

# Run tests
main "$@"
main "$@"
65
issues/2.md
Normal file
@@ -0,0 +1,65 @@
# Issue #2: Update README.md

**Status:** ✅ **COMPLETED** *(updated during this session)*
**Author:** MrKalzu
**Created:** 2025-09-12 22:01:34 +03:00
**Repository:** https://git.rauhala.info/ryyst/kalzu-value-store/issues/2

## Description

"It feels like the readme has lot of expired info after the latest update."

## Problem

The project's README file contained outdated information that needed to be revised following recent updates and refactoring.

## Resolution Status

**✅ COMPLETED** - The README.md has been comprehensively updated to reflect the current state of the codebase.

## Updates Made

### Architecture & Features
- ✅ Updated key features to include Merkle Tree sync, JWT authentication, and modular architecture
- ✅ Revised architecture diagram to show modular components
- ✅ Added authentication and authorization sections
- ✅ Updated conflict resolution description

### Configuration
- ✅ Added comprehensive configuration options including feature toggles
- ✅ Updated default values to match actual implementation
- ✅ Added feature toggle documentation (auth, clustering, compression, etc.)
- ✅ Included backup and tamper logging configuration

### API Documentation
- ✅ Added JWT authentication examples
- ✅ Updated API endpoints with proper authorization headers
- ✅ Added authentication endpoints documentation
- ✅ Included Merkle tree and sync endpoints

### Project Structure
- ✅ Completely updated project structure to reflect modular architecture
- ✅ Documented all packages (auth/, cluster/, storage/, server/, etc.)
- ✅ Updated file organization to match current codebase

### Development & Testing
- ✅ Updated build and test commands
- ✅ Added integration test suite documentation
- ✅ Updated conflict resolution testing procedures
- ✅ Added code quality tools documentation

### Performance & Limitations
- ✅ Updated performance characteristics with Merkle sync improvements
- ✅ Revised limitations to reflect implemented features
- ✅ Added realistic timing expectations

## Current Status

The README.md now accurately reflects:
- Current modular architecture
- All implemented features and capabilities
- Proper configuration options
- Updated development workflow
- Comprehensive API documentation

**This issue has been resolved.**
71
issues/3.md
Normal file
@@ -0,0 +1,71 @@
# Issue #3: Implement Autogenerated Root Account for Initial Setup

**Status:** ✅ **COMPLETED**
**Author:** MrKalzu
**Created:** 2025-09-12 22:17:12 +03:00
**Repository:** https://git.rauhala.info/ryyst/kalzu-value-store/issues/3

## Problem Statement

The KVS server lacks a mechanism to create an initial administrative user when starting with an empty database and no seed nodes. This makes it impossible to interact with authentication-protected endpoints during initial setup.

## Current Challenge

- Empty database + no seed nodes = no way to authenticate
- No existing users means no way to create API tokens
- Authentication-protected endpoints become inaccessible
- Manual database seeding required for initial setup

## Proposed Solution

### 1. Detection Logic
- Detect empty database condition
- Verify no seed nodes are configured
- Only trigger on initial startup with empty state

### 2. Root Account Generation
Create a default "root" user with:
- **Server-generated UUID**
- **Hashed nickname** (e.g., "root")
- **Assigned to default "admin" group**
- **Full administrative privileges**

### 3. API Token Creation
- Generate API token with administrative scopes
- Include all necessary permissions for initial setup
- Set reasonable expiration time

### 4. Secure Token Distribution
- **Securely log the token to console** (one-time display)
- **Persist user and token in BadgerDB**
- **Clear token from memory after logging**

## Implementation Details

### Relevant Code Sections
- `NewServer` function - Add initialization logic
- `User`, `Group`, `APIToken` structs - Use existing data structures
- Hashing and storage key functions - Leverage existing auth system

### Proposed Changes (from MrKalzu's comment)
- **Added `HasUsers() (bool, error)`** to `auth/auth.go`
- **Added "Initial root account setup for empty DB with no seeds"** to `server/server.go`
- **Diff file attached** with implementation details

## Security Considerations

- Token should be displayed only once during startup
- Token should have reasonable expiration
- Root account should be clearly identified in logs
- Consider forcing password change on first use (future enhancement)

## Benefits

- Enables zero-configuration initial setup
- Provides secure bootstrap process
- Eliminates manual database seeding
- Supports automated deployment scenarios

## Dependencies

This issue blocks **Issue #4** (securing administrative endpoints), as it provides the mechanism for initial administrative access.
59
issues/4.md
Normal file
@@ -0,0 +1,59 @@
# Issue #4: Secure User and Group Management Endpoints with Authentication Middleware

**Status:** Open
**Author:** MrKalzu
**Created:** 2025-09-12
**Assignee:** ryyst
**Repository:** https://git.rauhala.info/ryyst/kalzu-value-store/issues/4

## Description

**Security Vulnerability:** User, group, and token management API endpoints are currently exposed without authentication, creating a significant security risk.

## Current Problem

The following administrative endpoints are accessible without authentication:
- User management endpoints (`createUserHandler`, `getUserHandler`, etc.)
- Group management endpoints
- Token management endpoints

## Proposed Solution

### 1. Define Granular Administrative Scopes

Create specific administrative scopes for fine-grained access control:
- `admin:users:create` - Create new users
- `admin:users:read` - View user information
- `admin:users:update` - Modify user data
- `admin:users:delete` - Remove users
- `admin:groups:create` - Create new groups
- `admin:groups:read` - View group information
- `admin:groups:update` - Modify group membership
- `admin:groups:delete` - Remove groups
- `admin:tokens:create` - Generate API tokens
- `admin:tokens:revoke` - Revoke API tokens

### 2. Apply Authentication Middleware

Wrap all administrative handlers with `authMiddleware` and specific scope requirements:

```go
// Example implementation
router.Handle("/auth/users", authMiddleware("admin:users:create")(createUserHandler))
router.Handle("/auth/users/{id}", authMiddleware("admin:users:read")(getUserHandler))
```

## Dependencies

- **Depends on Issue #3**: Requires implementation of autogenerated root account for initial setup

## Security Benefits

- Prevents unauthorized administrative access
- Implements principle of least privilege
- Provides audit trail for administrative operations
- Protects against privilege escalation attacks

## Implementation Priority

**High Priority** - This addresses a critical security vulnerability that could allow unauthorized access to administrative functions.
47
issues/5.md
Normal file
47
issues/5.md
Normal file
@@ -0,0 +1,47 @@
# Issue #5: Add Configuration for Anonymous Read and Write Access to KV Endpoints

**Status:** Open
**Author:** MrKalzu
**Created:** 2025-09-12
**Repository:** https://git.rauhala.info/ryyst/kalzu-value-store/issues/5

## Description

Currently, KV endpoints are publicly accessible without authentication. This issue proposes adding granular control over public access to key-value store functionality.

## Proposed Configuration Parameters

Add two new configuration parameters to the `Config` struct (a YAML sketch follows this list):

1. **`AllowAnonymousRead`** (boolean, default `false`)
   - Controls whether unauthenticated users can read data

2. **`AllowAnonymousWrite`** (boolean, default `false`)
   - Controls whether unauthenticated users can write data
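
A minimal sketch of how these flags might appear in `config.yaml`, using the `yaml` tags defined on the `Config` struct in `types/types.go`:

```yaml
# Anonymous access control (Issue #5) - both default to false
allow_anonymous_read: true    # unauthenticated GET /kv/{path} allowed
allow_anonymous_write: false  # PUT /kv/{path} still requires a token
```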

## Proposed Implementation Changes

### Modify `setupRoutes` Function
- Conditionally apply authentication middleware based on configuration flags

### Specific Handler Changes
- **`getKVHandler`**: Apply auth middleware with "read" scope if `AllowAnonymousRead` is `false`
- **`putKVHandler`**: Apply auth middleware with "write" scope if `AllowAnonymousWrite` is `false`
- **`deleteKVHandler`**: Always require authentication (no anonymous delete)

## Goal

Provide granular control over public access to key-value store functionality while maintaining security for sensitive operations.

## Use Cases

- **Public read-only deployments**: Allow anonymous reading for public data
- **Public write scenarios**: Allow anonymous data submission (like forms or logs)
- **Secure deployments**: Require authentication for all operations
- **Mixed access patterns**: Different permissions for read vs write operations

## Security Considerations

- Delete operations should always require authentication
- Consider rate limiting for anonymous access
- Audit logging should track anonymous operations differently
46
issues/6.md
Normal file
46
issues/6.md
Normal file
@@ -0,0 +1,46 @@
# Issue #6: Configuration Options to Disable Optional Functionalities

**Status:** ✅ **COMPLETED**
**Author:** MrKalzu
**Created:** 2025-09-12
**Repository:** https://git.rauhala.info/ryyst/kalzu-value-store/issues/6

## Description

Proposes adding configuration options to disable advanced features in the KVS (Key-Value Store) server to allow more flexible and lightweight deployment scenarios.

## Suggested Disablement Options

1. **Authentication System** - Disable JWT authentication entirely
2. **Tamper-Evident Logging** - Disable cryptographic audit trails
3. **Clustering** - Disable gossip protocol and distributed features
4. **Rate Limiting** - Disable per-client rate limiting
5. **Revision History** - Disable automatic versioning

## Proposed Implementation

- Add boolean flags to the Config struct for each feature (see the YAML sketch after this list)
- Modify server initialization and request handling to respect these flags
- Allow conditional compilation/execution of features based on configuration
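
Since the issue is marked completed, the corresponding flags exist on the `Config` struct in `types/types.go`. A configuration sketch using those `yaml` tags:

```yaml
# Feature toggles for optional functionalities (Issue #6)
auth_enabled: true
tamper_logging_enabled: false
clustering_enabled: true
rate_limiting_enabled: true
revision_history_enabled: false
```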

## Pros of Proposed Changes

- Reduce unnecessary overhead for simple deployments
- Simplify setup for different deployment needs
- Improve performance for specific use cases
- Lower resource consumption

## Cons of Proposed Changes

- Potential security risks if features are disabled inappropriately
- Loss of advanced functionality like audit trails or data recovery
- Increased complexity in the codebase with conditional feature logic

## Already Implemented Features

- Backup System (configurable)
- Compression (configurable)

## Implementation Notes

The issue suggests modifying the relevant code sections to conditionally enable/disable these features based on configuration, similar to how backup and compression are currently handled.
1395
server/handlers.go
Normal file
1395
server/handlers.go
Normal file
File diff suppressed because it is too large
79
server/lifecycle.go
Normal file
79
server/lifecycle.go
Normal file
@@ -0,0 +1,79 @@
package server

import (
    "context"
    "fmt"
    "net/http"
    "time"

    "github.com/sirupsen/logrus"
)

// Start the server and initialize all services
func (s *Server) Start() error {
    router := s.setupRoutes()

    addr := fmt.Sprintf("%s:%d", s.config.BindAddress, s.config.Port)
    s.httpServer = &http.Server{
        Addr:    addr,
        Handler: router,
    }

    s.logger.WithFields(logrus.Fields{
        "node_id": s.config.NodeID,
        "address": addr,
    }).Info("Starting KVS server")

    // Start gossip and sync routines
    s.startBackgroundTasks()

    // Try to join cluster if seed nodes are configured and clustering is enabled
    if s.config.ClusteringEnabled && len(s.config.SeedNodes) > 0 {
        go s.bootstrap()
    }

    return s.httpServer.ListenAndServe()
}

// Stop the server gracefully
func (s *Server) Stop() error {
    s.logger.Info("Shutting down KVS server")

    // Stop cluster services
    s.gossipService.Stop()
    s.syncService.Stop()

    // Close storage services
    if s.storageService != nil {
        s.storageService.Close()
    }

    s.cancel()
    s.wg.Wait()

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    if err := s.httpServer.Shutdown(ctx); err != nil {
        s.logger.WithError(err).Error("HTTP server shutdown error")
    }

    if err := s.db.Close(); err != nil {
        s.logger.WithError(err).Error("BadgerDB close error")
    }

    return nil
}

// startBackgroundTasks initializes and starts cluster services
func (s *Server) startBackgroundTasks() {
    // Start cluster services
    s.gossipService.Start()
    s.syncService.Start()
}

// bootstrap joins cluster using seed nodes via bootstrap service
func (s *Server) bootstrap() {
    // Use bootstrap service to join cluster
    s.bootstrapService.Bootstrap()
}
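
As a usage sketch, `Start` and `Stop` might be wired together in `main` roughly as follows. This is illustrative only: the `config.Load` helper name and the signal handling are assumptions, not code from this changeset.

```go
package main

import (
    "net/http"
    "os"
    "os/signal"
    "syscall"

    "kvs/config" // config package exists in the repo; Load is an assumed API name
    "kvs/server"
)

func main() {
    cfg, err := config.Load("config.yaml") // hypothetical loader name
    if err != nil {
        panic(err)
    }

    srv, err := server.NewServer(cfg)
    if err != nil {
        panic(err)
    }

    // Run the HTTP server until interrupted, then shut down gracefully.
    go func() {
        if err := srv.Start(); err != nil && err != http.ErrServerClosed {
            panic(err)
        }
    }()

    sig := make(chan os.Signal, 1)
    signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
    <-sig
    srv.Stop()
}
```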
144
server/routes.go
Normal file
144
server/routes.go
Normal file
@@ -0,0 +1,144 @@
package server

import (
    "net/http"

    "github.com/gorilla/mux"
)

// setupRoutes configures all HTTP routes and their handlers
func (s *Server) setupRoutes() *mux.Router {
    router := mux.NewRouter()

    // Health endpoint (always available)
    router.HandleFunc("/health", s.healthHandler).Methods("GET")

    // Resource Metadata Management endpoints (Issue #12) - Must come BEFORE general KV routes
    // These need to be registered first to prevent /kv/{path:.+} from matching metadata paths
    if s.config.AuthEnabled {
        router.Handle("/kv/{path:.+}/metadata", s.authService.Middleware(
            []string{"admin:users:read"}, nil, "",
        )(s.getResourceMetadataHandler)).Methods("GET")

        router.Handle("/kv/{path:.+}/metadata", s.authService.Middleware(
            []string{"admin:users:update"}, nil, "",
        )(s.updateResourceMetadataHandler)).Methods("PUT")
    }

    // KV endpoints (with conditional authentication based on anonymous access settings)
    // GET endpoint - require auth if anonymous read is disabled
    if s.config.AuthEnabled && !s.config.AllowAnonymousRead {
        router.Handle("/kv/{path:.+}", s.authService.Middleware(
            []string{"read"}, nil, "",
        )(s.getKVHandler)).Methods("GET")
    } else {
        router.HandleFunc("/kv/{path:.+}", s.getKVHandler).Methods("GET")
    }

    // PUT endpoint - require auth if anonymous write is disabled
    if s.config.AuthEnabled && !s.config.AllowAnonymousWrite {
        router.Handle("/kv/{path:.+}", s.authService.Middleware(
            []string{"write"}, nil, "",
        )(s.putKVHandler)).Methods("PUT")
    } else {
        router.HandleFunc("/kv/{path:.+}", s.putKVHandler).Methods("PUT")
    }

    // DELETE endpoint - always require authentication (no anonymous delete)
    if s.config.AuthEnabled {
        router.Handle("/kv/{path:.+}", s.authService.Middleware(
            []string{"delete"}, nil, "",
        )(s.deleteKVHandler)).Methods("DELETE")
    } else {
        router.HandleFunc("/kv/{path:.+}", s.deleteKVHandler).Methods("DELETE")
    }

    // Member endpoints (available when clustering is enabled)
    if s.config.ClusteringEnabled {
        // GET /members/ is unprotected for monitoring/inspection
        router.HandleFunc("/members/", s.getMembersHandler).Methods("GET")

        // Apply cluster authentication middleware to all cluster communication endpoints
        if s.clusterAuthService != nil {
            router.Handle("/members/join", s.clusterAuthService.Middleware(http.HandlerFunc(s.joinMemberHandler))).Methods("POST")
            router.Handle("/members/leave", s.clusterAuthService.Middleware(http.HandlerFunc(s.leaveMemberHandler))).Methods("DELETE")
            router.Handle("/members/gossip", s.clusterAuthService.Middleware(http.HandlerFunc(s.gossipHandler))).Methods("POST")
            router.Handle("/members/pairs_by_time", s.clusterAuthService.Middleware(http.HandlerFunc(s.pairsByTimeHandler))).Methods("POST")

            // Merkle Tree endpoints (clustering feature)
            router.Handle("/merkle_tree/root", s.clusterAuthService.Middleware(http.HandlerFunc(s.getMerkleRootHandler))).Methods("GET")
            router.Handle("/merkle_tree/diff", s.clusterAuthService.Middleware(http.HandlerFunc(s.getMerkleDiffHandler))).Methods("POST")
            router.Handle("/kv_range", s.clusterAuthService.Middleware(http.HandlerFunc(s.getKVRangeHandler))).Methods("POST")
        } else {
            // Fallback to unprotected endpoints (for backwards compatibility)
            router.HandleFunc("/members/join", s.joinMemberHandler).Methods("POST")
            router.HandleFunc("/members/leave", s.leaveMemberHandler).Methods("DELETE")
            router.HandleFunc("/members/gossip", s.gossipHandler).Methods("POST")
            router.HandleFunc("/members/pairs_by_time", s.pairsByTimeHandler).Methods("POST")

            // Merkle Tree endpoints (clustering feature)
            router.HandleFunc("/merkle_tree/root", s.getMerkleRootHandler).Methods("GET")
            router.HandleFunc("/merkle_tree/diff", s.getMerkleDiffHandler).Methods("POST")
            router.HandleFunc("/kv_range", s.getKVRangeHandler).Methods("POST")
        }
    }

    // Authentication and user management endpoints (available when auth is enabled)
    if s.config.AuthEnabled {
        // User Management endpoints (with authentication middleware)
        router.Handle("/api/users", s.authService.Middleware(
            []string{"admin:users:create"}, nil, "",
        )(s.createUserHandler)).Methods("POST")

        router.Handle("/api/users/{uuid}", s.authService.Middleware(
            []string{"admin:users:read"}, nil, "",
        )(s.getUserHandler)).Methods("GET")

        router.Handle("/api/users/{uuid}", s.authService.Middleware(
            []string{"admin:users:update"}, nil, "",
        )(s.updateUserHandler)).Methods("PUT")

        router.Handle("/api/users/{uuid}", s.authService.Middleware(
            []string{"admin:users:delete"}, nil, "",
        )(s.deleteUserHandler)).Methods("DELETE")

        // Group Management endpoints (with authentication middleware)
        router.Handle("/api/groups", s.authService.Middleware(
            []string{"admin:groups:create"}, nil, "",
        )(s.createGroupHandler)).Methods("POST")

        router.Handle("/api/groups/{uuid}", s.authService.Middleware(
            []string{"admin:groups:read"}, nil, "",
        )(s.getGroupHandler)).Methods("GET")

        router.Handle("/api/groups/{uuid}", s.authService.Middleware(
            []string{"admin:groups:update"}, nil, "",
        )(s.updateGroupHandler)).Methods("PUT")

        router.Handle("/api/groups/{uuid}", s.authService.Middleware(
            []string{"admin:groups:delete"}, nil, "",
        )(s.deleteGroupHandler)).Methods("DELETE")

        // Token Management endpoints (with authentication middleware)
        router.Handle("/api/tokens", s.authService.Middleware(
            []string{"admin:tokens:create"}, nil, "",
        )(s.createTokenHandler)).Methods("POST")

        // Cluster Bootstrap endpoint (Issue #13) - Protected by JWT authentication
        // Allows authenticated administrators to retrieve the cluster secret for new nodes
        router.Handle("/auth/cluster-bootstrap", s.authService.Middleware(
            []string{"admin:tokens:create"}, nil, "",
        )(s.clusterBootstrapHandler)).Methods("GET")
    }

    // Revision History endpoints (available when revision history is enabled)
    if s.config.RevisionHistoryEnabled {
        router.HandleFunc("/api/data/{key}/history", s.getRevisionHistoryHandler).Methods("GET")
        router.HandleFunc("/api/data/{key}/history/{revision}", s.getSpecificRevisionHandler).Methods("GET")
    }

    // Backup Status endpoint (always available if backup is enabled)
    router.HandleFunc("/api/backup/status", s.getBackupStatusHandler).Methods("GET")

    return router
}
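
The conditional registration above means the same `/kv/{path:.+}` route behaves differently depending on configuration. A sketch of both cases against a local node (port and token are placeholders):

```bash
# With allow_anonymous_read: true - no token needed
curl http://localhost:8081/kv/public/notice

# With allow_anonymous_read: false - the "read" scope is enforced
curl http://localhost:8081/kv/private/data \
  -H "Authorization: Bearer <token-with-read-scope>"
```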
335
server/server.go
Normal file
335
server/server.go
Normal file
@@ -0,0 +1,335 @@
package server

import (
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
    "path/filepath"
    "strings"
    "sync"
    "time"

    "github.com/dgraph-io/badger/v4"
    "github.com/robfig/cron/v3"
    "github.com/sirupsen/logrus"

    "kvs/auth"
    "kvs/cluster"
    "kvs/storage"
    "kvs/types"
    "kvs/utils"
)

// Server represents the KVS node
type Server struct {
    config     *types.Config
    db         *badger.DB
    mode       string // "normal", "read-only", "syncing"
    modeMu     sync.RWMutex
    logger     *logrus.Logger
    httpServer *http.Server
    ctx        context.Context
    cancel     context.CancelFunc
    wg         sync.WaitGroup

    // Cluster services
    gossipService    *cluster.GossipService
    syncService      *cluster.SyncService
    merkleService    *cluster.MerkleService
    bootstrapService *cluster.BootstrapService

    // Storage services
    storageService  *storage.StorageService
    revisionService *storage.RevisionService

    // Backup system
    cronScheduler *cron.Cron         // Cron scheduler for backups
    backupStatus  types.BackupStatus // Current backup status
    backupMu      sync.RWMutex       // Protects backup status

    // Authentication service
    authService        *auth.AuthService
    clusterAuthService *auth.ClusterAuthService
}

// NewServer initializes and returns a new Server instance
func NewServer(config *types.Config) (*Server, error) {
    logger := logrus.New()
    logger.SetFormatter(&logrus.JSONFormatter{})

    level, err := logrus.ParseLevel(config.LogLevel)
    if err != nil {
        level = logrus.InfoLevel
    }
    logger.SetLevel(level)

    // Create data directory
    if err := os.MkdirAll(config.DataDir, 0755); err != nil {
        return nil, fmt.Errorf("failed to create data directory: %v", err)
    }

    // Open BadgerDB
    opts := badger.DefaultOptions(filepath.Join(config.DataDir, "badger"))
    opts.Logger = nil // Disable badger's internal logging
    db, err := badger.Open(opts)
    if err != nil {
        return nil, fmt.Errorf("failed to open BadgerDB: %v", err)
    }

    ctx, cancel := context.WithCancel(context.Background())

    // Initialize cluster services
    merkleService := cluster.NewMerkleService(db, logger)
    gossipService := cluster.NewGossipService(config, logger)
    syncService := cluster.NewSyncService(db, config, gossipService, merkleService, logger)
    var server *Server // Forward declaration
    bootstrapService := cluster.NewBootstrapService(config, gossipService, syncService, logger, func(mode string) {
        if server != nil {
            server.setMode(mode)
        }
    })

    server = &Server{
        config:           config,
        db:               db,
        mode:             "normal",
        logger:           logger,
        ctx:              ctx,
        cancel:           cancel,
        gossipService:    gossipService,
        syncService:      syncService,
        merkleService:    merkleService,
        bootstrapService: bootstrapService,
    }

    if config.ReadOnly {
        server.setMode("read-only")
    }

    // Initialize storage services
    storageService, err := storage.NewStorageService(db, config, logger)
    if err != nil {
        return nil, fmt.Errorf("failed to initialize storage service: %v", err)
    }
    server.storageService = storageService

    // Initialize revision service
    server.revisionService = storage.NewRevisionService(storageService)

    // Initialize authentication service
    server.authService = auth.NewAuthService(db, logger, config)

    // Initialize cluster authentication service (Issue #13)
    if config.ClusteringEnabled {
        server.clusterAuthService = auth.NewClusterAuthService(config.ClusterSecret, logger)
    }

    // Setup initial root account if needed (Issue #3)
    if config.AuthEnabled {
        if err := server.setupRootAccount(); err != nil {
            return nil, fmt.Errorf("failed to setup root account: %v", err)
        }
    }

    // Initialize Merkle tree using cluster service
    if err := server.syncService.InitializeMerkleTree(); err != nil {
        return nil, fmt.Errorf("failed to initialize Merkle tree: %v", err)
    }

    return server, nil
}

// getMode returns the current server mode
func (s *Server) getMode() string {
    s.modeMu.RLock()
    defer s.modeMu.RUnlock()
    return s.mode
}

// setMode sets the server mode
func (s *Server) setMode(mode string) {
    s.modeMu.Lock()
    defer s.modeMu.Unlock()
    oldMode := s.mode
    s.mode = mode
    s.logger.WithFields(logrus.Fields{
        "old_mode": oldMode,
        "new_mode": mode,
    }).Info("Mode changed")
}

// addMember adds a member using cluster service
func (s *Server) addMember(member *types.Member) {
    s.gossipService.AddMember(member)
}

// removeMember removes a member using cluster service
func (s *Server) removeMember(nodeID string) {
    s.gossipService.RemoveMember(nodeID)
}

// getMembers returns all cluster members
func (s *Server) getMembers() []*types.Member {
    return s.gossipService.GetMembers()
}

// getJoinedTimestamp returns this node's joined timestamp (startup time)
func (s *Server) getJoinedTimestamp() int64 {
    // For now, use a simple approach - this should be stored persistently
    return time.Now().UnixMilli()
}

// getBackupStatus returns the current backup status
func (s *Server) getBackupStatus() types.BackupStatus {
    s.backupMu.RLock()
    defer s.backupMu.RUnlock()

    status := s.backupStatus

    // Calculate next backup time if scheduler is running
    if s.cronScheduler != nil && len(s.cronScheduler.Entries()) > 0 {
        nextRun := s.cronScheduler.Entries()[0].Next
        if !nextRun.IsZero() {
            status.NextBackupTime = nextRun.Unix()
        }
    }

    return status
}

// setupRootAccount creates an initial root account if no users exist and no seed nodes are configured
func (s *Server) setupRootAccount() error {
    // Only create root account if:
    // 1. No users exist in the database
    // 2. No seed nodes are configured (standalone mode)
    hasUsers, err := s.authService.HasUsers()
    if err != nil {
        return fmt.Errorf("failed to check if users exist: %v", err)
    }

    // If users already exist or we have seed nodes, no need to create root account
    if hasUsers || len(s.config.SeedNodes) > 0 {
        return nil
    }

    s.logger.Info("Creating initial root account for empty database with no seed nodes")

    return s.createRootUserAndToken()
}

// createRootUserAndToken creates the root user, admin group, and initial token
func (s *Server) createRootUserAndToken() error {
    rootNickname := "root"
    adminGroupName := "admin"

    // Generate UUIDs
    rootUserUUID := "root-" + time.Now().Format("20060102-150405")
    adminGroupUUID := "admin-" + time.Now().Format("20060102-150405")
    now := time.Now().Unix()

    // Create admin group
    adminGroup := types.Group{
        UUID:      adminGroupUUID,
        NameHash:  hashGroupName(adminGroupName),
        Members:   []string{rootUserUUID},
        CreatedAt: now,
        UpdatedAt: now,
    }

    // Create root user
    rootUser := types.User{
        UUID:         rootUserUUID,
        NicknameHash: hashUserNickname(rootNickname),
        Groups:       []string{adminGroupUUID},
        CreatedAt:    now,
        UpdatedAt:    now,
    }

    // Store group and user in database
    if err := s.storeUserAndGroup(&rootUser, &adminGroup); err != nil {
        return fmt.Errorf("failed to store root user and admin group: %v", err)
    }

    // Create API token with full administrative scopes
    adminScopes := []string{
        "admin:users:create", "admin:users:read", "admin:users:update", "admin:users:delete",
        "admin:groups:create", "admin:groups:read", "admin:groups:update", "admin:groups:delete",
        "admin:tokens:create", "admin:tokens:revoke",
        "read", "write", "delete",
    }

    // Generate token with 24 hour expiration for initial setup
    tokenString, expiresAt, err := auth.GenerateJWT(rootUserUUID, adminScopes, 24)
    if err != nil {
        return fmt.Errorf("failed to generate root token: %v", err)
    }

    // Store token in database
    if err := s.storeAPIToken(tokenString, rootUserUUID, adminScopes, expiresAt); err != nil {
        return fmt.Errorf("failed to store root token: %v", err)
    }

    // Log the token securely (one-time display)
    s.logger.WithFields(logrus.Fields{
        "user_uuid":  rootUserUUID,
        "group_uuid": adminGroupUUID,
        "expires_at": time.Unix(expiresAt, 0).Format(time.RFC3339),
        "expires_in": "24 hours",
    }).Warn("Root account created - SAVE THIS TOKEN:")

    // Display token prominently
    fmt.Print("\n" + strings.Repeat("=", 80) + "\n")
    fmt.Printf("🔐 ROOT ACCOUNT CREATED - INITIAL SETUP TOKEN\n")
    fmt.Printf("===========================================\n")
    fmt.Printf("User UUID:  %s\n", rootUserUUID)
    fmt.Printf("Group UUID: %s\n", adminGroupUUID)
    fmt.Printf("Token:      %s\n", tokenString)
    fmt.Printf("Expires:    %s (24 hours)\n", time.Unix(expiresAt, 0).Format(time.RFC3339))
    fmt.Printf("\n⚠️  IMPORTANT: Save this token immediately!\n")
    fmt.Printf("   This is the only time it will be displayed.\n")
    fmt.Printf("   Use this token to authenticate and create additional users.\n")
    fmt.Print(strings.Repeat("=", 80) + "\n\n")

    return nil
}

// hashUserNickname creates a hash of the user nickname (similar to handlers.go)
func hashUserNickname(nickname string) string {
    return utils.HashSHA3512(nickname)
}

// hashGroupName creates a hash of the group name (similar to handlers.go)
func hashGroupName(groupname string) string {
    return utils.HashSHA3512(groupname)
}

// storeUserAndGroup stores both user and group in the database
func (s *Server) storeUserAndGroup(user *types.User, group *types.Group) error {
    return s.db.Update(func(txn *badger.Txn) error {
        // Store user
        userData, err := json.Marshal(user)
        if err != nil {
            return fmt.Errorf("failed to marshal user data: %v", err)
        }

        if err := txn.Set([]byte(auth.UserStorageKey(user.UUID)), userData); err != nil {
            return fmt.Errorf("failed to store user: %v", err)
        }

        // Store group
        groupData, err := json.Marshal(group)
        if err != nil {
            return fmt.Errorf("failed to marshal group data: %v", err)
        }

        if err := txn.Set([]byte(auth.GroupStorageKey(group.UUID)), groupData); err != nil {
            return fmt.Errorf("failed to store group: %v", err)
        }

        return nil
    })
}
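
Once the one-time root token has been printed, it can be used against the user-management routes registered in `routes.go`. A hypothetical first step (the token and port are placeholders; the `nickname` field comes from `CreateUserRequest` in `types/types.go`):

```bash
curl -X POST http://localhost:8081/api/users \
  -H "Authorization: Bearer <root-token-from-startup-log>" \
  -H "Content-Type: application/json" \
  -d '{"nickname":"alice"}'
```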
60
storage/compression.go
Normal file
60
storage/compression.go
Normal file
@@ -0,0 +1,60 @@
package storage

import (
    "fmt"

    "github.com/klauspost/compress/zstd"
)

// CompressionService handles ZSTD compression and decompression
type CompressionService struct {
    compressor   *zstd.Encoder
    decompressor *zstd.Decoder
}

// NewCompressionService creates a new compression service
func NewCompressionService() (*CompressionService, error) {
    // Initialize ZSTD compressor
    compressor, err := zstd.NewWriter(nil)
    if err != nil {
        return nil, fmt.Errorf("failed to initialize ZSTD compressor: %v", err)
    }

    // Initialize ZSTD decompressor
    decompressor, err := zstd.NewReader(nil)
    if err != nil {
        compressor.Close()
        return nil, fmt.Errorf("failed to initialize ZSTD decompressor: %v", err)
    }

    return &CompressionService{
        compressor:   compressor,
        decompressor: decompressor,
    }, nil
}

// Close closes the compression and decompression resources
func (c *CompressionService) Close() {
    if c.compressor != nil {
        c.compressor.Close()
    }
    if c.decompressor != nil {
        c.decompressor.Close()
    }
}

// CompressData compresses data using ZSTD
func (c *CompressionService) CompressData(data []byte) ([]byte, error) {
    if c.compressor == nil {
        return nil, fmt.Errorf("compressor not initialized")
    }
    return c.compressor.EncodeAll(data, make([]byte, 0, len(data))), nil
}

// DecompressData decompresses ZSTD-compressed data
func (c *CompressionService) DecompressData(compressedData []byte) ([]byte, error) {
    if c.decompressor == nil {
        return nil, fmt.Errorf("decompressor not initialized")
    }
    return c.decompressor.DecodeAll(compressedData, nil)
}
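
A minimal round-trip sketch using the service above (error handling abbreviated):

```go
package main

import (
    "fmt"

    "kvs/storage"
)

func main() {
    svc, err := storage.NewCompressionService()
    if err != nil {
        panic(err)
    }
    defer svc.Close()

    payload := []byte(`{"message":"hello","value":42}`)
    compressed, _ := svc.CompressData(payload)
    restored, _ := svc.DecompressData(compressed)
    fmt.Println(string(restored) == string(payload)) // true
}
```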
214
storage/revision.go
Normal file
214
storage/revision.go
Normal file
@@ -0,0 +1,214 @@
package storage

import (
    "encoding/json"
    "fmt"
    "strconv"
    "strings"
    "time"

    badger "github.com/dgraph-io/badger/v4"

    "kvs/auth"
    "kvs/types"
)

// RevisionService handles revision history management
type RevisionService struct {
    storage *StorageService
}

// NewRevisionService creates a new revision service
func NewRevisionService(storage *StorageService) *RevisionService {
    return &RevisionService{
        storage: storage,
    }
}

// GetRevisionKey generates the storage key for a specific revision
func GetRevisionKey(baseKey string, revision int) string {
    return fmt.Sprintf("%s:rev:%d", baseKey, revision)
}

// StoreRevisionHistory stores a value and manages revision history (up to 3 revisions)
func (r *RevisionService) StoreRevisionHistory(txn *badger.Txn, key string, storedValue types.StoredValue, ttl time.Duration) error {
    // Get existing metadata to check current revisions
    metadataKey := auth.ResourceMetadataKey(key)

    var metadata types.ResourceMetadata
    var currentRevisions []int

    // Try to get existing metadata
    metadataData, err := r.storage.RetrieveWithDecompression(txn, []byte(metadataKey))
    if err == badger.ErrKeyNotFound {
        // No existing metadata, this is a new key
        metadata = types.ResourceMetadata{
            OwnerUUID:   "", // Will be set by caller if needed
            GroupUUID:   "",
            Permissions: types.DefaultPermissions,
            TTL:         "",
            CreatedAt:   time.Now().Unix(),
            UpdatedAt:   time.Now().Unix(),
        }
        currentRevisions = []int{}
    } else if err != nil {
        // Error reading metadata
        return fmt.Errorf("failed to read metadata: %v", err)
    } else {
        // Parse existing metadata
        err = json.Unmarshal(metadataData, &metadata)
        if err != nil {
            return fmt.Errorf("failed to unmarshal metadata: %v", err)
        }

        // Extract current revisions (we store them as a custom field)
        if metadata.TTL == "" {
            currentRevisions = []int{}
        } else {
            // For now, we'll manage revisions separately - let's create a new metadata field
            currentRevisions = []int{1, 2, 3} // Assume all revisions exist for existing keys
        }
    }

    // Revision rotation logic: shift existing revisions
    if len(currentRevisions) >= 3 {
        // Delete oldest revision (rev:3)
        oldestRevKey := GetRevisionKey(key, 3)
        txn.Delete([]byte(oldestRevKey))

        // Shift rev:2 → rev:3
        rev2Key := GetRevisionKey(key, 2)
        rev2Data, err := r.storage.RetrieveWithDecompression(txn, []byte(rev2Key))
        if err == nil {
            rev3Key := GetRevisionKey(key, 3)
            r.storage.StoreWithTTL(txn, []byte(rev3Key), rev2Data, ttl)
        }

        // Shift rev:1 → rev:2
        rev1Key := GetRevisionKey(key, 1)
        rev1Data, err := r.storage.RetrieveWithDecompression(txn, []byte(rev1Key))
        if err == nil {
            rev2Key := GetRevisionKey(key, 2)
            r.storage.StoreWithTTL(txn, []byte(rev2Key), rev1Data, ttl)
        }
    }

    // Store current value as rev:1
    currentValueBytes, err := json.Marshal(storedValue)
    if err != nil {
        return fmt.Errorf("failed to marshal current value for revision: %v", err)
    }

    rev1Key := GetRevisionKey(key, 1)
    err = r.storage.StoreWithTTL(txn, []byte(rev1Key), currentValueBytes, ttl)
    if err != nil {
        return fmt.Errorf("failed to store revision 1: %v", err)
    }

    // Update metadata with new revision count
    metadata.UpdatedAt = time.Now().Unix()
    metadataBytes, err := json.Marshal(metadata)
    if err != nil {
        return fmt.Errorf("failed to marshal metadata: %v", err)
    }

    return r.storage.StoreWithTTL(txn, []byte(metadataKey), metadataBytes, ttl)
}

// GetRevisionHistory retrieves all available revisions for a given key
func (r *RevisionService) GetRevisionHistory(key string) ([]map[string]interface{}, error) {
    var revisions []map[string]interface{}

    err := r.storage.db.View(func(txn *badger.Txn) error {
        // Check revisions 1, 2, 3
        for rev := 1; rev <= 3; rev++ {
            revKey := GetRevisionKey(key, rev)

            revData, err := r.storage.RetrieveWithDecompression(txn, []byte(revKey))
            if err == badger.ErrKeyNotFound {
                continue // Skip missing revisions
            } else if err != nil {
                return fmt.Errorf("failed to retrieve revision %d: %v", rev, err)
            }

            var storedValue types.StoredValue
            err = json.Unmarshal(revData, &storedValue)
            if err != nil {
                return fmt.Errorf("failed to unmarshal revision %d: %v", rev, err)
            }

            var data interface{}
            err = json.Unmarshal(storedValue.Data, &data)
            if err != nil {
                return fmt.Errorf("failed to unmarshal revision %d data: %v", rev, err)
            }

            revision := map[string]interface{}{
                "revision":  rev,
                "uuid":      storedValue.UUID,
                "timestamp": storedValue.Timestamp,
                "data":      data,
            }

            revisions = append(revisions, revision)
        }

        return nil
    })

    if err != nil {
        return nil, err
    }

    // Sort revisions by revision number (newest first)
    // Note: they're already in order since we iterate 1->3, but reverse for newest first
    for i, j := 0, len(revisions)-1; i < j; i, j = i+1, j-1 {
        revisions[i], revisions[j] = revisions[j], revisions[i]
    }

    return revisions, nil
}

// GetSpecificRevision retrieves a specific revision of a key
func (r *RevisionService) GetSpecificRevision(key string, revision int) (*types.StoredValue, error) {
    if revision < 1 || revision > 3 {
        return nil, fmt.Errorf("invalid revision number: %d (must be 1-3)", revision)
    }

    var storedValue types.StoredValue
    err := r.storage.db.View(func(txn *badger.Txn) error {
        revKey := GetRevisionKey(key, revision)

        revData, err := r.storage.RetrieveWithDecompression(txn, []byte(revKey))
        if err != nil {
            return err
        }

        return json.Unmarshal(revData, &storedValue)
    })

    if err != nil {
        return nil, err
    }

    return &storedValue, nil
}

// GetRevisionFromPath extracts revision number from a path like "key/data/rev/2"
func GetRevisionFromPath(path string) (string, int, error) {
    parts := strings.Split(path, "/")
    if len(parts) < 4 || parts[len(parts)-2] != "rev" {
        return "", 0, fmt.Errorf("invalid revision path format")
    }

    revisionStr := parts[len(parts)-1]
    revision, err := strconv.Atoi(revisionStr)
    if err != nil {
        return "", 0, fmt.Errorf("invalid revision number: %s", revisionStr)
    }

    // Reconstruct the base key without the "/rev/N" suffix
    baseKey := strings.Join(parts[:len(parts)-2], "/")

    return baseKey, revision, nil
}
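
As a usage sketch, `GetRevisionFromPath` splits a revision-addressed path into its base key and revision number:

```go
package main

import (
    "fmt"

    "kvs/storage"
)

func main() {
    // base == "app/config/data", rev == 2
    base, rev, err := storage.GetRevisionFromPath("app/config/data/rev/2")
    fmt.Println(base, rev, err)

    // Paths without a "/rev/N" suffix are rejected
    _, _, err = storage.GetRevisionFromPath("app/config/data")
    fmt.Println(err) // invalid revision path format
}
```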
112
storage/storage.go
Normal file
112
storage/storage.go
Normal file
@@ -0,0 +1,112 @@
package storage

import (
    "fmt"
    "time"

    badger "github.com/dgraph-io/badger/v4"
    "github.com/sirupsen/logrus"

    "kvs/types"
)

// StorageService handles all BadgerDB operations and data management
type StorageService struct {
    db             *badger.DB
    config         *types.Config
    compressionSvc *CompressionService
    logger         *logrus.Logger
}

// NewStorageService creates a new storage service
func NewStorageService(db *badger.DB, config *types.Config, logger *logrus.Logger) (*StorageService, error) {
    var compressionSvc *CompressionService
    var err error

    // Initialize compression if enabled
    if config.CompressionEnabled {
        compressionSvc, err = NewCompressionService()
        if err != nil {
            return nil, fmt.Errorf("failed to initialize compression: %v", err)
        }
    }

    return &StorageService{
        db:             db,
        config:         config,
        compressionSvc: compressionSvc,
        logger:         logger,
    }, nil
}

// Close closes the storage service and its resources
func (s *StorageService) Close() {
    if s.compressionSvc != nil {
        s.compressionSvc.Close()
    }
}

// StoreWithTTL stores data with optional TTL and compression
func (s *StorageService) StoreWithTTL(txn *badger.Txn, key []byte, data []byte, ttl time.Duration) error {
    var finalData []byte
    var err error

    // Compress data if compression is enabled
    if s.config.CompressionEnabled && s.compressionSvc != nil {
        finalData, err = s.compressionSvc.CompressData(data)
        if err != nil {
            return fmt.Errorf("failed to compress data: %v", err)
        }
    } else {
        finalData = data
    }

    entry := badger.NewEntry(key, finalData)

    // Apply TTL if specified
    if ttl > 0 {
        entry = entry.WithTTL(ttl)
    }

    return txn.SetEntry(entry)
}

// RetrieveWithDecompression retrieves and decompresses data from BadgerDB
func (s *StorageService) RetrieveWithDecompression(txn *badger.Txn, key []byte) ([]byte, error) {
    item, err := txn.Get(key)
    if err != nil {
        return nil, err
    }

    var compressedData []byte
    err = item.Value(func(val []byte) error {
        compressedData = append(compressedData, val...)
        return nil
    })
    if err != nil {
        return nil, err
    }

    // Decompress data if compression is enabled
    if s.config.CompressionEnabled && s.compressionSvc != nil {
        return s.compressionSvc.DecompressData(compressedData)
    }

    return compressedData, nil
}

// CompressData compresses data using the compression service
func (s *StorageService) CompressData(data []byte) ([]byte, error) {
    if !s.config.CompressionEnabled || s.compressionSvc == nil {
        return data, nil
    }
    return s.compressionSvc.CompressData(data)
}

// DecompressData decompresses data using the compression service
func (s *StorageService) DecompressData(compressedData []byte) ([]byte, error) {
    if !s.config.CompressionEnabled || s.compressionSvc == nil {
        return compressedData, nil
    }
    return s.compressionSvc.DecompressData(compressedData)
}
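
A sketch of `StoreWithTTL`/`RetrieveWithDecompression` used together inside BadgerDB transactions. The data directory path and key are placeholders; everything else follows the APIs shown above:

```go
package main

import (
    "fmt"
    "time"

    badger "github.com/dgraph-io/badger/v4"
    "github.com/sirupsen/logrus"

    "kvs/storage"
    "kvs/types"
)

func main() {
    db, err := badger.Open(badger.DefaultOptions("/tmp/kvs-ttl-demo"))
    if err != nil {
        panic(err)
    }
    defer db.Close()

    cfg := &types.Config{CompressionEnabled: true}
    svc, err := storage.NewStorageService(db, cfg, logrus.New())
    if err != nil {
        panic(err)
    }
    defer svc.Close()

    key := []byte("app/session/abc")

    // Store with a 1-hour TTL; ZSTD compression is applied transparently.
    err = db.Update(func(txn *badger.Txn) error {
        return svc.StoreWithTTL(txn, key, []byte(`{"user":"alice"}`), time.Hour)
    })
    if err != nil {
        panic(err)
    }

    // Read back; decompression mirrors the write path.
    err = db.View(func(txn *badger.Txn) error {
        data, inner := svc.RetrieveWithDecompression(txn, key)
        if inner == nil {
            fmt.Println(string(data))
        }
        return inner
    })
    if err != nil {
        panic(err)
    }
}
```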
@@ -1,3 +1,4 @@
//go:build ignore
// +build ignore

package main
@@ -24,33 +25,33 @@ func createConflictingData(dataDir1, dataDir2 string) error {
    // Same timestamp, different UUIDs
    timestamp := time.Now().UnixMilli()
    path := "test/conflict/data"

    // Data for node1
    data1 := json.RawMessage(`{"message": "from node1", "value": 100}`)
    uuid1 := uuid.New().String()

    // Data for node2 (same timestamp, different UUID and content)
    data2 := json.RawMessage(`{"message": "from node2", "value": 200}`)
    uuid2 := uuid.New().String()

    // Store in node1's database
    err := storeConflictData(dataDir1, path, timestamp, uuid1, data1)
    if err != nil {
        return fmt.Errorf("failed to store in node1: %v", err)
    }

    // Store in node2's database
    err = storeConflictData(dataDir2, path, timestamp, uuid2, data2)
    if err != nil {
        return fmt.Errorf("failed to store in node2: %v", err)
    }

    fmt.Printf("Created conflict scenario:\n")
    fmt.Printf("Path: %s\n", path)
    fmt.Printf("Timestamp: %d\n", timestamp)
    fmt.Printf("Node1 UUID: %s, Data: %s\n", uuid1, string(data1))
    fmt.Printf("Node2 UUID: %s, Data: %s\n", uuid2, string(data2))

    return nil
}
@@ -62,24 +63,24 @@ func storeConflictData(dataDir, path string, timestamp int64, uuid string, data
        return err
    }
    defer db.Close()

    storedValue := StoredValue{
        UUID:      uuid,
        Timestamp: timestamp,
        Data:      data,
    }

    valueBytes, err := json.Marshal(storedValue)
    if err != nil {
        return err
    }

    return db.Update(func(txn *badger.Txn) error {
        // Store main data
        if err := txn.Set([]byte(path), valueBytes); err != nil {
            return err
        }

        // Store timestamp index
        indexKey := fmt.Sprintf("_ts:%020d:%s", timestamp, path)
        return txn.Set([]byte(indexKey), []byte(uuid))
@@ -91,13 +92,13 @@ func main() {
        fmt.Println("Usage: go run test_conflict.go <data_dir1> <data_dir2>")
        os.Exit(1)
    }

    err := createConflictingData(os.Args[1], os.Args[2])
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        os.Exit(1)
    }

    fmt.Println("Conflict data created successfully!")
    fmt.Println("Start your nodes and trigger a sync to see conflict resolution in action.")
}
303
types/types.go
Normal file
303
types/types.go
Normal file
@@ -0,0 +1,303 @@
package types

import "encoding/json"

// Core data structures
type StoredValue struct {
    UUID      string          `json:"uuid"`
    Timestamp int64           `json:"timestamp"`
    Data      json.RawMessage `json:"data"`
}

// Authentication & Authorization data structures

// User represents a system user
type User struct {
    UUID         string   `json:"uuid"`          // Server-generated UUID
    NicknameHash string   `json:"nickname_hash"` // SHA3-512 hash of nickname
    Groups       []string `json:"groups"`        // List of group UUIDs this user belongs to
    CreatedAt    int64    `json:"created_at"`    // Unix timestamp
    UpdatedAt    int64    `json:"updated_at"`    // Unix timestamp
}

// Group represents a user group
type Group struct {
    UUID      string   `json:"uuid"`       // Server-generated UUID
    NameHash  string   `json:"name_hash"`  // SHA3-512 hash of group name
    Members   []string `json:"members"`    // List of user UUIDs in this group
    CreatedAt int64    `json:"created_at"` // Unix timestamp
    UpdatedAt int64    `json:"updated_at"` // Unix timestamp
}

// APIToken represents a JWT authentication token
type APIToken struct {
    TokenHash string   `json:"token_hash"` // SHA3-512 hash of JWT token
    UserUUID  string   `json:"user_uuid"`  // UUID of the user who owns this token
    Scopes    []string `json:"scopes"`     // List of permitted scopes (e.g., "read", "write")
    IssuedAt  int64    `json:"issued_at"`  // Unix timestamp when token was issued
    ExpiresAt int64    `json:"expires_at"` // Unix timestamp when token expires
}

// ResourceMetadata contains ownership and permission information for stored resources
type ResourceMetadata struct {
    OwnerUUID   string `json:"owner_uuid"`  // UUID of the resource owner
    GroupUUID   string `json:"group_uuid"`  // UUID of the resource group
    Permissions int    `json:"permissions"` // 12-bit permission mask (POSIX-inspired)
    TTL         string `json:"ttl"`         // Time-to-live duration (Go format)
    CreatedAt   int64  `json:"created_at"`  // Unix timestamp when resource was created
    UpdatedAt   int64  `json:"updated_at"`  // Unix timestamp when resource was last updated
}

// Permission constants for POSIX-inspired ACL
const (
    // Owner permissions (bits 11-8)
    PermOwnerCreate = 1 << 11
    PermOwnerDelete = 1 << 10
    PermOwnerWrite  = 1 << 9
    PermOwnerRead   = 1 << 8

    // Group permissions (bits 7-4)
    PermGroupCreate = 1 << 7
    PermGroupDelete = 1 << 6
    PermGroupWrite  = 1 << 5
    PermGroupRead   = 1 << 4

    // Others permissions (bits 3-0)
    PermOthersCreate = 1 << 3
    PermOthersDelete = 1 << 2
    PermOthersWrite  = 1 << 1
    PermOthersRead   = 1 << 0

    // Default permissions: Owner(1111), Group(0011), Others(0001)
    DefaultPermissions = (PermOwnerCreate | PermOwnerDelete | PermOwnerWrite | PermOwnerRead) |
        (PermGroupWrite | PermGroupRead) |
        (PermOthersRead)
)

// API request/response structures for authentication endpoints

// User Management API structures
type CreateUserRequest struct {
    Nickname string `json:"nickname"`
}

type CreateUserResponse struct {
    UUID string `json:"uuid"`
}

type UpdateUserRequest struct {
    Nickname string   `json:"nickname,omitempty"`
    Groups   []string `json:"groups,omitempty"`
}

type GetUserResponse struct {
    UUID         string   `json:"uuid"`
    NicknameHash string   `json:"nickname_hash"`
    Groups       []string `json:"groups"`
    CreatedAt    int64    `json:"created_at"`
    UpdatedAt    int64    `json:"updated_at"`
}

// Group Management API structures
type CreateGroupRequest struct {
    Groupname string   `json:"groupname"`
    Members   []string `json:"members,omitempty"`
}

type CreateGroupResponse struct {
    UUID string `json:"uuid"`
}

type UpdateGroupRequest struct {
    Members []string `json:"members"`
}

type GetGroupResponse struct {
    UUID      string   `json:"uuid"`
    NameHash  string   `json:"name_hash"`
    Members   []string `json:"members"`
    CreatedAt int64    `json:"created_at"`
    UpdatedAt int64    `json:"updated_at"`
}

// Token Management API structures
type CreateTokenRequest struct {
    UserUUID string   `json:"user_uuid"`
    Scopes   []string `json:"scopes"`
}

type CreateTokenResponse struct {
    Token     string `json:"token"`
    ExpiresAt int64  `json:"expires_at"`
}

// Resource Metadata Management API structures (Issue #12)
type GetResourceMetadataResponse struct {
    OwnerUUID   string `json:"owner_uuid"`
    GroupUUID   string `json:"group_uuid"`
    Permissions int    `json:"permissions"`
    TTL         string `json:"ttl"`
    CreatedAt   int64  `json:"created_at"`
    UpdatedAt   int64  `json:"updated_at"`
}

type UpdateResourceMetadataRequest struct {
    OwnerUUID   *string `json:"owner_uuid,omitempty"`
    GroupUUID   *string `json:"group_uuid,omitempty"`
    Permissions *int    `json:"permissions,omitempty"`
}

// Cluster and member management types
type Member struct {
    ID              string `json:"id"`
    Address         string `json:"address"`
    LastSeen        int64  `json:"last_seen"`
    JoinedTimestamp int64  `json:"joined_timestamp"`
}

type JoinRequest struct {
    ID              string `json:"id"`
    Address         string `json:"address"`
    JoinedTimestamp int64  `json:"joined_timestamp"`
}

type LeaveRequest struct {
    ID string `json:"id"`
}

type PairsByTimeRequest struct {
    StartTimestamp int64  `json:"start_timestamp"`
    EndTimestamp   int64  `json:"end_timestamp"`
    Limit          int    `json:"limit"`
    Prefix         string `json:"prefix,omitempty"`
}

type PairsByTimeResponse struct {
    Path      string `json:"path"`
    UUID      string `json:"uuid"`
    Timestamp int64  `json:"timestamp"`
}

type PutResponse struct {
    UUID      string `json:"uuid"`
    Timestamp int64  `json:"timestamp"`
}

// TTL-enabled PUT request structure
type PutWithTTLRequest struct {
    Data json.RawMessage `json:"data"`
    TTL  string          `json:"ttl,omitempty"` // Go duration format
}

// Tamper-evident logging data structures
type TamperLogEntry struct {
    Timestamp string `json:"timestamp"` // RFC3339 format
    Action    string `json:"action"`    // Type of action
    UserUUID  string `json:"user_uuid"` // User who performed the action
    Resource  string `json:"resource"`  // Resource affected
    Signature string `json:"signature"` // SHA3-512 hash of all fields
}

// Backup system data structures
type BackupStatus struct {
    LastBackupTime    int64  `json:"last_backup_time"`    // Unix timestamp
    LastBackupSuccess bool   `json:"last_backup_success"` // Whether last backup succeeded
    LastBackupPath    string `json:"last_backup_path"`    // Path to last backup file
    NextBackupTime    int64  `json:"next_backup_time"`    // Unix timestamp of next scheduled backup
    BackupsRunning    int    `json:"backups_running"`     // Number of backups currently running
}

// Merkle Tree specific data structures
type MerkleNode struct {
    Hash     []byte `json:"hash"`
    StartKey string `json:"start_key"` // The first key in this node's range
    EndKey   string `json:"end_key"`   // The last key in this node's range
}

// MerkleRootResponse is the response for getting the root hash
type MerkleRootResponse struct {
    Root *MerkleNode `json:"root"`
}

// MerkleTreeDiffRequest is used to request children hashes for a given key range
type MerkleTreeDiffRequest struct {
    ParentNode MerkleNode `json:"parent_node"` // The node whose children we want to compare (from the remote peer's perspective)
    LocalHash  []byte     `json:"local_hash"`  // The local hash of this node/range (from the requesting peer's perspective)
}

// MerkleTreeDiffResponse returns the remote children nodes or the actual keys if it's a leaf level
type MerkleTreeDiffResponse struct {
    Children []MerkleNode `json:"children,omitempty"` // Children of the remote node
    Keys     []string     `json:"keys,omitempty"`     // Actual keys if this is a leaf-level diff
}

// For fetching a range of KV pairs
type KVRangeRequest struct {
    StartKey string `json:"start_key"`
    EndKey   string `json:"end_key"`
    Limit    int    `json:"limit"` // Max number of items to return
}

type KVRangeResponse struct {
    Pairs []struct {
        Path        string      `json:"path"`
        StoredValue StoredValue `json:"stored_value"`
    } `json:"pairs"`
}

// Configuration
type Config struct {
    NodeID               string   `yaml:"node_id"`
    BindAddress          string   `yaml:"bind_address"`
    Port                 int      `yaml:"port"`
    DataDir              string   `yaml:"data_dir"`
    SeedNodes            []string `yaml:"seed_nodes"`
    ReadOnly             bool     `yaml:"read_only"`
    LogLevel             string   `yaml:"log_level"`
    GossipIntervalMin    int      `yaml:"gossip_interval_min"`
    GossipIntervalMax    int      `yaml:"gossip_interval_max"`
    SyncInterval         int      `yaml:"sync_interval"`
    CatchupInterval      int      `yaml:"catchup_interval"`
    BootstrapMaxAgeHours int      `yaml:"bootstrap_max_age_hours"`
    ThrottleDelayMs      int      `yaml:"throttle_delay_ms"`
    FetchDelayMs         int      `yaml:"fetch_delay_ms"`

    // Database compression configuration
    CompressionEnabled bool `yaml:"compression_enabled"`
    CompressionLevel   int  `yaml:"compression_level"`

    // TTL configuration
    DefaultTTL  string `yaml:"default_ttl"`   // Go duration format, "0" means no default TTL
    MaxJSONSize int    `yaml:"max_json_size"` // Maximum JSON size in bytes

    // Rate limiting configuration
    RateLimitRequests int    `yaml:"rate_limit_requests"` // Max requests per window
    RateLimitWindow   string `yaml:"rate_limit_window"`   // Window duration (Go format)

    // Tamper-evident logging configuration
    TamperLogActions []string `yaml:"tamper_log_actions"` // Actions to log

    // Backup system configuration
    BackupEnabled   bool   `yaml:"backup_enabled"`   // Enable/disable automated backups
    BackupSchedule  string `yaml:"backup_schedule"`  // Cron schedule format
    BackupPath      string `yaml:"backup_path"`      // Directory to store backups
    BackupRetention int    `yaml:"backup_retention"` // Days to keep backups

    // Feature toggles for optional functionalities
    AuthEnabled            bool `yaml:"auth_enabled"`             // Enable/disable authentication system
    TamperLoggingEnabled   bool `yaml:"tamper_logging_enabled"`   // Enable/disable tamper-evident logging
    ClusteringEnabled      bool `yaml:"clustering_enabled"`       // Enable/disable clustering/gossip
    RateLimitingEnabled    bool `yaml:"rate_limiting_enabled"`    // Enable/disable rate limiting
    RevisionHistoryEnabled bool `yaml:"revision_history_enabled"` // Enable/disable revision history

    // Anonymous access control (Issue #5)
    AllowAnonymousRead  bool `yaml:"allow_anonymous_read"`  // Allow unauthenticated read access to KV endpoints
    AllowAnonymousWrite bool `yaml:"allow_anonymous_write"` // Allow unauthenticated write access to KV endpoints

    // Cluster authentication (Issue #13)
    ClusterSecret        string `yaml:"cluster_secret"`          // Shared secret for cluster authentication (auto-generated if empty)
    ClusterTLSEnabled    bool   `yaml:"cluster_tls_enabled"`     // Require TLS for inter-node communication
    ClusterTLSCertFile   string `yaml:"cluster_tls_cert_file"`   // Path to TLS certificate file
    ClusterTLSKeyFile    string `yaml:"cluster_tls_key_file"`    // Path to TLS private key file
    ClusterTLSSkipVerify bool   `yaml:"cluster_tls_skip_verify"` // Skip TLS verification (insecure, for testing only)
}
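
A small worked example of the 12-bit mask: `DefaultPermissions` evaluates to binary `1111 0011 0001` (owner full access, group read/write, others read-only). A sketch of checking bits against it; `hasPermission` is a hypothetical helper, not part of this changeset:

```go
package main

import (
    "fmt"

    "kvs/types"
)

// hasPermission is a hypothetical helper illustrating the bit layout.
func hasPermission(mask, bit int) bool {
    return mask&bit != 0
}

func main() {
    fmt.Printf("%012b\n", types.DefaultPermissions) // 111100110001

    fmt.Println(hasPermission(types.DefaultPermissions, types.PermGroupWrite))  // true
    fmt.Println(hasPermission(types.DefaultPermissions, types.PermGroupDelete)) // false
    fmt.Println(hasPermission(types.DefaultPermissions, types.PermOthersRead))  // true
}
```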
25
utils/hash.go
Normal file
25
utils/hash.go
Normal file
@@ -0,0 +1,25 @@
package utils

import (
    "encoding/hex"
    "golang.org/x/crypto/sha3"
)

// SHA3-512 hashing utilities for authentication
func HashSHA3512(input string) string {
    hasher := sha3.New512()
    hasher.Write([]byte(input))
    return hex.EncodeToString(hasher.Sum(nil))
}

func HashUserNickname(nickname string) string {
    return HashSHA3512(nickname)
}

func HashGroupName(groupname string) string {
    return HashSHA3512(groupname)
}

func HashToken(token string) string {
    return HashSHA3512(token)
}