Compare commits

..

74 Commits

Author SHA1 Message Date
henrygd
75b372437c add small end buffer to chart x axis 2025-10-05 21:18:16 -04:00
henrygd
b661d00159 release 0.13.1 2025-10-05 20:09:49 -04:00
henrygd
898dbf73c8 update agent dockerfile volume 2025-10-05 20:06:17 -04:00
Marrrrrrrrry
e099304948 Add VOLUME to preserve config across container recreations (#1235) 2025-10-05 20:05:00 -04:00
Maximilian Krause
b61b7a12dc New translations en.po (German) 2025-10-05 19:40:44 -04:00
henrygd
37769050e5 fix loading system with direct id url 2025-10-05 19:38:37 -04:00
henrygd
d81e137291 update system permalinks to use id instead of name (#1231)
maintains backward compatibility with old permalinks
2025-10-05 14:18:00 -04:00
henrygd
ae820d348e fix one minute chart on systems without docker (#1237) 2025-10-05 13:19:35 -04:00
henrygd
ddb298ac7c 0.13.0 release 2025-10-03 13:53:12 -04:00
henrygd
cca7b36039 add SYSTEM_NAME env var (#1184) 2025-10-03 13:44:10 -04:00
henrygd
adda381d9d update language files 2025-10-03 13:21:02 -04:00
zoixc
1630b1558f New translations en.po (Russian) 2025-10-03 13:08:58 -04:00
itssloplayz
733c10ff31 New translations en.po (Slovenian) 2025-10-03 13:08:00 -04:00
henrygd
ed3fd185d3 update pocketbase 2025-10-03 12:44:20 -04:00
henrygd
b1fd7e6695 fix intel engine delta tracking across cache keys
- plus a couple tiny lil refactors
2025-10-02 20:24:54 -04:00
henrygd
7d6230de74 add one minute chart + refactor rpc
- add one minute charts
- update disk io to use bytes
- update hub and agent connection interfaces / handlers to be more flexible
- change agent cache to use cache time instead of session id
- refactor collection of metrics which require deltas to track separately per cache time
2025-10-02 17:56:51 -04:00
henrygd
f9a39c6004 add noindex meta tag to html (#1218) 2025-09-30 19:16:15 -04:00
henrygd
f21a6d15fe update agent install script to use get.beszel.dev/latest-version (#1212) 2025-09-29 13:37:51 -04:00
Timothy Pillow
bf38716095 Modify GPU usage section in readme (#1216)
Updated GPU metrics to include Intel support and removed temperature. Synced the section with the current wording at https://beszel.dev/guide/what-is-beszel#supported-metrics
2025-09-29 12:27:20 -04:00
henrygd
45816e7de6 agent install script: refactor mirror handling (#1212)
- add --mirror flag
- use mirror url for api.github.com
- remove prompt confirmation for mirror usage
2025-09-28 13:49:41 -04:00
evrial
2a6946906e Fixed OpenWRT agent restarter logic (#1210)
* Fixed OpenWRT restarter logic

* Update update.go
2025-09-26 12:15:42 -04:00
henrygd
ca58ff66ba 0.12.12 release 2025-09-25 19:37:26 -04:00
henrygd
133d229361 add fallback cache/buff memory calculation when cache/buff isn't available (#1198) 2025-09-25 19:19:32 -04:00
henrygd
960cac4060 fix intel_gpu_top restart loop and add intel gpu pkg power (#1203) 2025-09-25 19:15:36 -04:00
henrygd
d83865cb4f remove NoNewPrivileges from systemd agent service configuration (#1203)

NoNewPrivileges prevented the service from running `intel_gpu_top`
2025-09-25 15:06:17 -04:00
henrygd
4b43d68da6 add SKIP_GPU=true (#1203) 2025-09-25 14:10:28 -04:00
henrygd
c790d76211 fix command arguments for OpenRC agent restart functionality (#1199) 2025-09-24 23:14:15 -04:00
henrygd
29b182fd7b 0.12.11 release :) 2025-09-24 18:08:54 -04:00
henrygd
fc78b959aa update colors for gpu power chart 2025-09-24 18:01:51 -04:00
henrygd
b8b3604aec update language files 2025-09-24 17:41:11 -04:00
henrygd
e45606fdec New Croatian translations by nikola.smis on Crowdin 2025-09-24 17:40:22 -04:00
aroxu
640afd82ad New Korean translations 2025-09-24 17:31:04 -04:00
henrygd
d025e51c67 make sure agent connection title works in grid layout 2025-09-24 17:15:17 -04:00
henrygd
f70c30345a fix sticky header z-index 2025-09-24 17:07:38 -04:00
henrygd
63bdac83a1 hide interfaces chart legend if interfaces.length > 15 2025-09-24 16:45:43 -04:00
Sven van Ginkel
65897a8df6 add cali to the default nics skip list (#1195) 2025-09-24 16:29:11 -04:00
henrygd
0dc9b3e273 add pattern matching and blacklist functionality to NICS env var. (#1190) 2025-09-24 16:27:37 -04:00
Sven van Ginkel
c1c0d8d672 Fix hub executable (#1193) 2025-09-24 15:13:24 -04:00
henrygd
1811ab64be add migration to fix bad cached mem values (#1196) 2025-09-24 15:07:11 -04:00
henrygd
5578520054 add title to agent connection type in all systems table 2025-09-24 14:18:20 -04:00
henrygd
7b128d09ac Update Intel GPU collector to parse plain text (-l) instead of JSON output (#1150) 2025-09-24 13:24:48 -04:00
henrygd
d295507c0b adjust calculation of cached memory (#1187, #1196) 2025-09-24 13:23:59 -04:00
henrygd
79fbbb7ad0 add ghcr.io image configuration for beszel-agent-intel in gh workflow 2025-09-23 20:16:19 -04:00
henrygd
e7325b23c4 simplify filter bar component 2025-09-23 20:15:42 -04:00
henrygd
c5eba6547a comments 2025-09-22 20:48:37 -04:00
henrygd
82e7c04b25 0.12.10 release :) 2025-09-22 18:32:30 -04:00
henrygd
a9ce16cfdd update language files 2025-09-22 18:28:39 -04:00
henrygd
2af8b6057f new Polish translations 2025-09-22 18:18:39 -04:00
henrygd
3fae4360a8 update changelog 2025-09-22 18:14:36 -04:00
henrygd
10073d85e1 update go deps 2025-09-22 18:13:50 -04:00
henrygd
e240ced018 add support for henrygd/beszel-agent-intel in docker workflow 2025-09-22 17:47:32 -04:00
henrygd
ae1e17f5ed add dockerfile for henrygd/beszel-agent-intel 2025-09-22 17:41:44 -04:00
henrygd
3abb7c213b initial support for one intel gpu with intel_gpu_top 2025-09-22 16:36:10 -04:00
henrygd
240e75f025 add sorted style to home table header buttons 2025-09-21 19:23:34 -04:00
henrygd
ea984844ff update changelog 2025-09-21 17:56:35 -04:00
henrygd
0d157b5857 display agent connection type in hub (ssh, websocket) 2025-09-21 17:49:22 -04:00
henrygd
d0b6e725c8 fix positioning of bandwidth chart button 2025-09-19 12:08:41 -04:00
henrygd
ffe7f8547a fix: update temperature and byte formatting functions to use loose equality checks (#1180) 2025-09-19 11:51:27 -04:00
henrygd
37817b0f15 add --auto-update flag to hub install script 2025-09-18 17:51:22 -04:00
henrygd
a66ac418ae install: remove additional service restart for openwrt 2025-09-18 14:05:19 -04:00
henrygd
2ee2f53267 fix: resolve mipsle architecture detection for install script (#1176)
- Add proper endianness detection using ELF header inspection
- Prevent mipsle devices from downloading incorrect mips binaries
- Maintain backward compatibility for all other architectures
2025-09-18 13:48:24 -04:00
henrygd
e5c766c00b refactoring
- network interface delta
- string concatenation
2025-09-17 21:36:05 -04:00
henrygd
da43ba10e1 add aria-label to button in NetworkSheet for improved accessibility 2025-09-17 16:23:02 -04:00
henrygd
fca13004bd release 0.12.9 2025-09-17 16:06:20 -04:00
henrygd
376a86829c fix divide by zero error (#1175) 2025-09-17 16:00:50 -04:00
henrygd
ef48613f3f improve style of chart sheet button on mobile
- also update changelog
2025-09-17 15:13:10 -04:00
henrygd
49976c6f61 fix nvidia agent dockerfile after project reorganization 2025-09-17 14:10:02 -04:00
henrygd
d68f1f0985 0.12.8 release :) 2025-09-17 14:02:17 -04:00
henrygd
273a090200 update translations 2025-09-17 14:01:20 -04:00
henrygd
59057a2ba4 add check for status alerts which are not properly resolved (#1052) 2025-09-17 13:31:49 -04:00
henrygd
1b9e781d45 refactor deltatracker
- embed mutex
- add example function
2025-09-16 22:09:46 -04:00
henrygd
4e0ca7c2ba formatting (biome) 2025-09-15 18:04:13 -04:00
henrygd
a9e7bcd37f add per-interface and cumulative network traffic charts (#926)
Co-authored-by: Sven van Ginkel <svenvanginkel@icloud.com>
2025-09-15 17:59:21 -04:00
henrygd
4635f24fb2 fix entr arg in makefile dev server 2025-09-15 17:26:07 -04:00
130 changed files with 8884 additions and 1413 deletions

View File

@@ -33,6 +33,14 @@ jobs:
registry: docker.io
username_secret: DOCKERHUB_USERNAME
password_secret: DOCKERHUB_TOKEN
- image: henrygd/beszel-agent-intel
context: ./
dockerfile: ./internal/dockerfile_agent_intel
platforms: linux/amd64
registry: docker.io
username_secret: DOCKERHUB_USERNAME
password_secret: DOCKERHUB_TOKEN
- image: ghcr.io/${{ github.repository }}/beszel
context: ./
@@ -56,6 +64,14 @@ jobs:
username: ${{ github.actor }}
password_secret: GITHUB_TOKEN
- image: ghcr.io/${{ github.repository }}/beszel-agent-intel
context: ./
dockerfile: ./internal/dockerfile_agent_intel
platforms: linux/amd64
registry: ghcr.io
username: ${{ github.actor }}
password_secret: GITHUB_TOKEN
permissions:
contents: read
packages: write

View File

@@ -77,7 +77,7 @@ dev-hub: export ENV=dev
dev-hub:
mkdir -p ./internal/site/dist && touch ./internal/site/dist/index.html
@if command -v entr >/dev/null 2>&1; then \
find ./internal/cmd/hub/*.go ./internal/{alerts,hub,records,users}/*.go | entr -r -s "cd ./internal/cmd/hub && go run -tags development . serve --http 0.0.0.0:8090"; \
find ./internal -type f -name '*.go' | entr -r -s "cd ./internal/cmd/hub && go run -tags development . serve --http 0.0.0.0:8090"; \
else \
cd ./internal/cmd/hub && go run -tags development . serve --http 0.0.0.0:8090; \
fi

View File

@@ -12,33 +12,36 @@ import (
"path/filepath"
"strings"
"sync"
"time"
"github.com/gliderlabs/ssh"
"github.com/henrygd/beszel"
"github.com/henrygd/beszel/agent/deltatracker"
"github.com/henrygd/beszel/internal/entities/system"
"github.com/shirou/gopsutil/v4/host"
gossh "golang.org/x/crypto/ssh"
)
type Agent struct {
sync.Mutex // Used to lock agent while collecting data
debug bool // true if LOG_LEVEL is set to debug
zfs bool // true if system has arcstats
memCalc string // Memory calculation formula
fsNames []string // List of filesystem device names being monitored
fsStats map[string]*system.FsStats // Keeps track of disk stats for each filesystem
netInterfaces map[string]struct{} // Stores all valid network interfaces
netIoStats system.NetIoStats // Keeps track of bandwidth usage
dockerManager *dockerManager // Manages Docker API requests
sensorConfig *SensorConfig // Sensors config
systemInfo system.Info // Host system info
gpuManager *GPUManager // Manages GPU data
cache *SessionCache // Cache for system stats based on primary session ID
connectionManager *ConnectionManager // Channel to signal connection events
server *ssh.Server // SSH server
dataDir string // Directory for persisting data
keys []gossh.PublicKey // SSH public keys
sync.Mutex // Used to lock agent while collecting data
debug bool // true if LOG_LEVEL is set to debug
zfs bool // true if system has arcstats
memCalc string // Memory calculation formula
fsNames []string // List of filesystem device names being monitored
fsStats map[string]*system.FsStats // Keeps track of disk stats for each filesystem
diskPrev map[uint16]map[string]prevDisk // Previous disk I/O counters per cache interval
netInterfaces map[string]struct{} // Stores all valid network interfaces
netIoStats map[uint16]system.NetIoStats // Keeps track of bandwidth usage per cache interval
netInterfaceDeltaTrackers map[uint16]*deltatracker.DeltaTracker[string, uint64] // Per-cache-time NIC delta trackers
dockerManager *dockerManager // Manages Docker API requests
sensorConfig *SensorConfig // Sensors config
systemInfo system.Info // Host system info
gpuManager *GPUManager // Manages GPU data
cache *systemDataCache // Cache for system stats based on cache time
connectionManager *ConnectionManager // Channel to signal connection events
handlerRegistry *HandlerRegistry // Registry for routing incoming messages
server *ssh.Server // SSH server
dataDir string // Directory for persisting data
keys []gossh.PublicKey // SSH public keys
}
// NewAgent creates a new agent with the given data directory for persisting data.
@@ -46,9 +49,15 @@ type Agent struct {
func NewAgent(dataDir ...string) (agent *Agent, err error) {
agent = &Agent{
fsStats: make(map[string]*system.FsStats),
cache: NewSessionCache(69 * time.Second),
cache: NewSystemDataCache(),
}
// Initialize disk I/O previous counters storage
agent.diskPrev = make(map[uint16]map[string]prevDisk)
// Initialize per-cache-time network tracking structures
agent.netIoStats = make(map[uint16]system.NetIoStats)
agent.netInterfaceDeltaTrackers = make(map[uint16]*deltatracker.DeltaTracker[string, uint64])
agent.dataDir, err = getDataDir(dataDir...)
if err != nil {
slog.Warn("Data directory not found")
@@ -79,6 +88,9 @@ func NewAgent(dataDir ...string) (agent *Agent, err error) {
// initialize connection manager
agent.connectionManager = newConnectionManager(agent)
// initialize handler registry
agent.handlerRegistry = NewHandlerRegistry()
// initialize disk info
agent.initializeDiskInfo()
@@ -97,7 +109,7 @@ func NewAgent(dataDir ...string) (agent *Agent, err error) {
// if debugging, print stats
if agent.debug {
slog.Debug("Stats", "data", agent.gatherStats(""))
slog.Debug("Stats", "data", agent.gatherStats(0))
}
return agent, nil
@@ -112,24 +124,24 @@ func GetEnv(key string) (value string, exists bool) {
return os.LookupEnv(key)
}
func (a *Agent) gatherStats(sessionID string) *system.CombinedData {
func (a *Agent) gatherStats(cacheTimeMs uint16) *system.CombinedData {
a.Lock()
defer a.Unlock()
data, isCached := a.cache.Get(sessionID)
data, isCached := a.cache.Get(cacheTimeMs)
if isCached {
slog.Debug("Cached data", "session", sessionID)
slog.Debug("Cached data", "cacheTimeMs", cacheTimeMs)
return data
}
*data = system.CombinedData{
Stats: a.getSystemStats(),
Stats: a.getSystemStats(cacheTimeMs),
Info: a.systemInfo,
}
slog.Debug("System data", "data", data)
// slog.Info("System data", "data", data, "cacheTimeMs", cacheTimeMs)
if a.dockerManager != nil {
if containerStats, err := a.dockerManager.getDockerStats(); err == nil {
if containerStats, err := a.dockerManager.getDockerStats(cacheTimeMs); err == nil {
data.Containers = containerStats
slog.Debug("Containers", "data", data.Containers)
} else {
@@ -145,7 +157,7 @@ func (a *Agent) gatherStats(sessionID string) *system.CombinedData {
}
slog.Debug("Extra FS", "data", data.Stats.ExtraFs)
a.cache.Set(sessionID, data)
a.cache.Set(data, cacheTimeMs)
return data
}

View File

@@ -1,37 +1,55 @@
package agent
import (
"sync"
"time"
"github.com/henrygd/beszel/internal/entities/system"
)
// Not thread safe since we only access from gatherStats which is already locked
type SessionCache struct {
data *system.CombinedData
lastUpdate time.Time
primarySession string
leaseTime time.Duration
type systemDataCache struct {
sync.RWMutex
cache map[uint16]*cacheNode
}
func NewSessionCache(leaseTime time.Duration) *SessionCache {
return &SessionCache{
leaseTime: leaseTime,
data: &system.CombinedData{},
type cacheNode struct {
data *system.CombinedData
lastUpdate time.Time
}
// NewSystemDataCache creates a cache keyed by the polling interval in milliseconds.
func NewSystemDataCache() *systemDataCache {
return &systemDataCache{
cache: make(map[uint16]*cacheNode),
}
}
func (c *SessionCache) Get(sessionID string) (stats *system.CombinedData, isCached bool) {
if sessionID != c.primarySession && time.Since(c.lastUpdate) < c.leaseTime {
return c.data, true
// Get returns cached combined data when the entry is still considered fresh.
func (c *systemDataCache) Get(cacheTimeMs uint16) (stats *system.CombinedData, isCached bool) {
c.RLock()
defer c.RUnlock()
node, ok := c.cache[cacheTimeMs]
if !ok {
return &system.CombinedData{}, false
}
return c.data, false
// allowedSkew := time.Second
// isFresh := time.Since(node.lastUpdate) < time.Duration(cacheTimeMs)*time.Millisecond-allowedSkew
// allow a 50% skew of the cache time
isFresh := time.Since(node.lastUpdate) < time.Duration(cacheTimeMs/2)*time.Millisecond
return node.data, isFresh
}
func (c *SessionCache) Set(sessionID string, data *system.CombinedData) {
if data != nil {
*c.data = *data
// Set stores the latest combined data snapshot for the given interval.
func (c *systemDataCache) Set(data *system.CombinedData, cacheTimeMs uint16) {
c.Lock()
defer c.Unlock()
node, ok := c.cache[cacheTimeMs]
if !ok {
node = &cacheNode{}
c.cache[cacheTimeMs] = node
}
c.primarySession = sessionID
c.lastUpdate = time.Now()
node.data = data
node.lastUpdate = time.Now()
}
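The Get/Set pair above keys entries by the polling interval in milliseconds and treats an entry as stale once half of that interval has elapsed (the 50% skew noted in the comment). A minimal standalone sketch of that freshness rule, using a placeholder payload type rather than the real system.CombinedData:

package main

import (
	"fmt"
	"time"
)

// entry mirrors the cacheNode idea: a payload plus the time it was stored.
type entry struct {
	payload    string
	lastUpdate time.Time
}

// intervalCache is a hypothetical, simplified stand-in for systemDataCache.
type intervalCache map[uint16]entry

// fresh reports whether the entry for cacheTimeMs is younger than half the
// interval, matching the 50% skew rule used in systemDataCache.Get.
func (c intervalCache) fresh(cacheTimeMs uint16) bool {
	e, ok := c[cacheTimeMs]
	if !ok {
		return false
	}
	return time.Since(e.lastUpdate) < time.Duration(cacheTimeMs/2)*time.Millisecond
}

func main() {
	c := intervalCache{}
	c[1000] = entry{payload: "stats", lastUpdate: time.Now()}
	fmt.Println(c.fresh(1000)) // true right after Set
	time.Sleep(600 * time.Millisecond)
	fmt.Println(c.fresh(1000)) // false once >500ms (50% of 1s) has passed
}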

View File

@@ -8,82 +8,239 @@ import (
"testing/synctest"
"time"
"github.com/henrygd/beszel/internal/entities/container"
"github.com/henrygd/beszel/internal/entities/system"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestSessionCache_GetSet(t *testing.T) {
synctest.Test(t, func(t *testing.T) {
cache := NewSessionCache(69 * time.Second)
func createTestCacheData() *system.CombinedData {
return &system.CombinedData{
Stats: system.Stats{
Cpu: 50.5,
Mem: 8192,
DiskTotal: 100000,
},
Info: system.Info{
Hostname: "test-host",
},
Containers: []*container.Stats{
{
Name: "test-container",
Cpu: 25.0,
},
},
}
}
testData := &system.CombinedData{
Info: system.Info{
Hostname: "test-host",
Cores: 4,
},
func TestNewSystemDataCache(t *testing.T) {
cache := NewSystemDataCache()
require.NotNil(t, cache)
assert.NotNil(t, cache.cache)
assert.Empty(t, cache.cache)
}
func TestCacheGetSet(t *testing.T) {
cache := NewSystemDataCache()
data := createTestCacheData()
// Test setting data
cache.Set(data, 1000) // 1 second cache
// Test getting fresh data
retrieved, isCached := cache.Get(1000)
assert.True(t, isCached)
assert.Equal(t, data, retrieved)
// Test getting non-existent cache key
_, isCached = cache.Get(2000)
assert.False(t, isCached)
}
func TestCacheFreshness(t *testing.T) {
cache := NewSystemDataCache()
data := createTestCacheData()
testCases := []struct {
name string
cacheTimeMs uint16
sleepMs time.Duration
expectFresh bool
}{
{
name: "fresh data - well within cache time",
cacheTimeMs: 1000, // 1 second
sleepMs: 100, // 100ms
expectFresh: true,
},
{
name: "fresh data - at 50% of cache time boundary",
cacheTimeMs: 1000, // 1 second, 50% = 500ms
sleepMs: 499, // just under 500ms
expectFresh: true,
},
{
name: "stale data - exactly at 50% cache time",
cacheTimeMs: 1000, // 1 second, 50% = 500ms
sleepMs: 500, // exactly 500ms
expectFresh: false,
},
{
name: "stale data - well beyond cache time",
cacheTimeMs: 1000, // 1 second
sleepMs: 800, // 800ms
expectFresh: false,
},
{
name: "short cache time",
cacheTimeMs: 200, // 200ms, 50% = 100ms
sleepMs: 150, // 150ms > 100ms
expectFresh: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
synctest.Test(t, func(t *testing.T) {
// Set data
cache.Set(data, tc.cacheTimeMs)
// Wait for the specified duration
if tc.sleepMs > 0 {
time.Sleep(tc.sleepMs * time.Millisecond)
}
// Check freshness
_, isCached := cache.Get(tc.cacheTimeMs)
assert.Equal(t, tc.expectFresh, isCached)
})
})
}
}
func TestCacheMultipleIntervals(t *testing.T) {
synctest.Test(t, func(t *testing.T) {
cache := NewSystemDataCache()
data1 := createTestCacheData()
data2 := &system.CombinedData{
Stats: system.Stats{
Cpu: 50.0,
MemPct: 30.0,
DiskPct: 40.0,
Cpu: 75.0,
Mem: 16384,
},
Info: system.Info{
Hostname: "test-host-2",
},
Containers: []*container.Stats{},
}
// Test initial state - should not be cached
data, isCached := cache.Get("session1")
assert.False(t, isCached, "Expected no cached data initially")
assert.NotNil(t, data, "Expected data to be initialized")
// Set data for session1
cache.Set("session1", testData)
// Set data for different intervals
cache.Set(data1, 500) // 500ms cache
cache.Set(data2, 1000) // 1000ms cache
time.Sleep(15 * time.Second)
// Both should be fresh immediately
retrieved1, isCached1 := cache.Get(500)
assert.True(t, isCached1)
assert.Equal(t, data1, retrieved1)
// Get data for a different session - should be cached
data, isCached = cache.Get("session2")
assert.True(t, isCached, "Expected data to be cached for non-primary session")
require.NotNil(t, data, "Expected cached data to be returned")
assert.Equal(t, "test-host", data.Info.Hostname, "Hostname should match test data")
assert.Equal(t, 4, data.Info.Cores, "Cores should match test data")
assert.Equal(t, 50.0, data.Stats.Cpu, "CPU should match test data")
assert.Equal(t, 30.0, data.Stats.MemPct, "Memory percentage should match test data")
assert.Equal(t, 40.0, data.Stats.DiskPct, "Disk percentage should match test data")
retrieved2, isCached2 := cache.Get(1000)
assert.True(t, isCached2)
assert.Equal(t, data2, retrieved2)
time.Sleep(10 * time.Second)
// Wait 300ms - 500ms cache should be stale (250ms threshold), 1000ms should still be fresh (500ms threshold)
time.Sleep(300 * time.Millisecond)
// Get data for the primary session - should not be cached
data, isCached = cache.Get("session1")
assert.False(t, isCached, "Expected data not to be cached for primary session")
require.NotNil(t, data, "Expected data to be returned even if not cached")
assert.Equal(t, "test-host", data.Info.Hostname, "Hostname should match test data")
// if not cached, agent will update the data
cache.Set("session1", testData)
_, isCached1 = cache.Get(500)
assert.False(t, isCached1)
time.Sleep(45 * time.Second)
_, isCached2 = cache.Get(1000)
assert.True(t, isCached2)
// Get data for a different session - should still be cached
_, isCached = cache.Get("session2")
assert.True(t, isCached, "Expected data to be cached for non-primary session")
// Wait for the lease to expire
time.Sleep(30 * time.Second)
// Get data for session2 - should not be cached
_, isCached = cache.Get("session2")
assert.False(t, isCached, "Expected data not to be cached after lease expiration")
// Wait another 300ms (total 600ms) - now 1000ms cache should also be stale
time.Sleep(300 * time.Millisecond)
_, isCached2 = cache.Get(1000)
assert.False(t, isCached2)
})
}
func TestSessionCache_NilData(t *testing.T) {
// Create a new SessionCache
cache := NewSessionCache(30 * time.Second)
func TestCacheOverwrite(t *testing.T) {
cache := NewSystemDataCache()
data1 := createTestCacheData()
data2 := &system.CombinedData{
Stats: system.Stats{
Cpu: 90.0,
Mem: 32768,
},
Info: system.Info{
Hostname: "updated-host",
},
Containers: []*container.Stats{},
}
// Test setting nil data (should not panic)
assert.NotPanics(t, func() {
cache.Set("session1", nil)
}, "Setting nil data should not panic")
// Set initial data
cache.Set(data1, 1000)
retrieved, isCached := cache.Get(1000)
assert.True(t, isCached)
assert.Equal(t, data1, retrieved)
// Get data - should not be nil even though we set nil
data, _ := cache.Get("session2")
assert.NotNil(t, data, "Expected data to not be nil after setting nil data")
// Overwrite with new data
cache.Set(data2, 1000)
retrieved, isCached = cache.Get(1000)
assert.True(t, isCached)
assert.Equal(t, data2, retrieved)
assert.NotEqual(t, data1, retrieved)
}
func TestCacheMiss(t *testing.T) {
synctest.Test(t, func(t *testing.T) {
cache := NewSystemDataCache()
// Test getting from empty cache
_, isCached := cache.Get(1000)
assert.False(t, isCached)
// Set data for one interval
data := createTestCacheData()
cache.Set(data, 1000)
// Test getting different interval
_, isCached = cache.Get(2000)
assert.False(t, isCached)
// Test getting after data has expired
time.Sleep(600 * time.Millisecond) // 600ms > 500ms (50% of 1000ms)
_, isCached = cache.Get(1000)
assert.False(t, isCached)
})
}
func TestCacheZeroInterval(t *testing.T) {
cache := NewSystemDataCache()
data := createTestCacheData()
// Set with zero interval - should allow immediate cache
cache.Set(data, 0)
// With 0 interval, 50% is 0, so it should never be considered fresh
// (time.Since(lastUpdate) >= 0, which is not < 0)
_, isCached := cache.Get(0)
assert.False(t, isCached)
}
func TestCacheLargeInterval(t *testing.T) {
synctest.Test(t, func(t *testing.T) {
cache := NewSystemDataCache()
data := createTestCacheData()
// Test with maximum uint16 value
cache.Set(data, 65535) // ~65 seconds
// Should be fresh immediately
_, isCached := cache.Get(65535)
assert.True(t, isCached)
// Should still be fresh after a short time
time.Sleep(100 * time.Millisecond)
_, isCached = cache.Get(65535)
assert.True(t, isCached)
})
}

View File

@@ -15,6 +15,7 @@ import (
"github.com/henrygd/beszel"
"github.com/henrygd/beszel/internal/common"
"github.com/henrygd/beszel/internal/entities/system"
"github.com/fxamacker/cbor/v2"
"github.com/lxzan/gws"
@@ -156,11 +157,15 @@ func (client *WebSocketClient) OnMessage(conn *gws.Conn, message *gws.Message) {
return
}
if err := cbor.NewDecoder(message.Data).Decode(client.hubRequest); err != nil {
var HubRequest common.HubRequest[cbor.RawMessage]
err := cbor.Unmarshal(message.Data.Bytes(), &HubRequest)
if err != nil {
slog.Error("Error parsing message", "err", err)
return
}
if err := client.handleHubRequest(client.hubRequest); err != nil {
if err := client.handleHubRequest(&HubRequest, HubRequest.Id); err != nil {
slog.Error("Error handling message", "err", err)
}
}
@@ -173,7 +178,7 @@ func (client *WebSocketClient) OnPing(conn *gws.Conn, message []byte) {
}
// handleAuthChallenge verifies the authenticity of the hub and returns the system's fingerprint.
func (client *WebSocketClient) handleAuthChallenge(msg *common.HubRequest[cbor.RawMessage]) (err error) {
func (client *WebSocketClient) handleAuthChallenge(msg *common.HubRequest[cbor.RawMessage], requestID *uint32) (err error) {
var authRequest common.FingerprintRequest
if err := cbor.Unmarshal(msg.Data, &authRequest); err != nil {
return err
@@ -191,12 +196,13 @@ func (client *WebSocketClient) handleAuthChallenge(msg *common.HubRequest[cbor.R
}
if authRequest.NeedSysInfo {
response.Name, _ = GetEnv("SYSTEM_NAME")
response.Hostname = client.agent.systemInfo.Hostname
serverAddr := client.agent.connectionManager.serverOptions.Addr
_, response.Port, _ = net.SplitHostPort(serverAddr)
}
return client.sendMessage(response)
return client.sendResponse(response, requestID)
}
// verifySignature verifies the signature of the token using the public keys.
@@ -221,25 +227,17 @@ func (client *WebSocketClient) Close() {
}
}
// handleHubRequest routes the request to the appropriate handler.
// It ensures the hub is verified before processing most requests.
func (client *WebSocketClient) handleHubRequest(msg *common.HubRequest[cbor.RawMessage]) error {
if !client.hubVerified && msg.Action != common.CheckFingerprint {
return errors.New("hub not verified")
// handleHubRequest routes the request to the appropriate handler using the handler registry.
func (client *WebSocketClient) handleHubRequest(msg *common.HubRequest[cbor.RawMessage], requestID *uint32) error {
ctx := &HandlerContext{
Client: client,
Agent: client.agent,
Request: msg,
RequestID: requestID,
HubVerified: client.hubVerified,
SendResponse: client.sendResponse,
}
switch msg.Action {
case common.GetData:
return client.sendSystemData()
case common.CheckFingerprint:
return client.handleAuthChallenge(msg)
}
return nil
}
// sendSystemData gathers and sends current system statistics to the hub.
func (client *WebSocketClient) sendSystemData() error {
sysStats := client.agent.gatherStats(client.token)
return client.sendMessage(sysStats)
return client.agent.handlerRegistry.Handle(ctx)
}
// sendMessage encodes the given data to CBOR and sends it as a binary message over the WebSocket connection to the hub.
@@ -251,6 +249,36 @@ func (client *WebSocketClient) sendMessage(data any) error {
return client.Conn.WriteMessage(gws.OpcodeBinary, bytes)
}
// sendResponse sends a response with optional request ID for the new protocol
func (client *WebSocketClient) sendResponse(data any, requestID *uint32) error {
if requestID != nil {
// New format with ID - use typed fields
response := common.AgentResponse{
Id: requestID,
}
// Set the appropriate typed field based on data type
switch v := data.(type) {
case *system.CombinedData:
response.SystemData = v
case *common.FingerprintResponse:
response.Fingerprint = v
// case []byte:
// response.RawBytes = v
// case string:
// response.RawBytes = []byte(v)
default:
// For any other type, convert to error
response.Error = fmt.Sprintf("unsupported response type: %T", data)
}
return client.sendMessage(response)
} else {
// Legacy format - send data directly
return client.sendMessage(data)
}
}
// getUserAgent returns one of two User-Agent strings based on current time.
// This is used to avoid being blocked by Cloudflare or other anti-bot measures.
func getUserAgent() string {
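sendResponse above distinguishes the new ID-carrying envelope from the legacy bare payload. A hedged, self-contained sketch of that envelope idea using fxamacker/cbor; the Reply type and its fields are illustrative stand-ins, not the real common.AgentResponse:

package main

import (
	"fmt"

	"github.com/fxamacker/cbor/v2"
)

// Reply is a hypothetical response envelope: an optional request ID plus a payload.
type Reply struct {
	Id      *uint32
	Payload string
}

func encodeReply(payload string, requestID *uint32) ([]byte, error) {
	if requestID != nil {
		// New format: wrap the payload with the request ID so the hub can correlate it.
		return cbor.Marshal(Reply{Id: requestID, Payload: payload})
	}
	// Legacy format: send the payload directly.
	return cbor.Marshal(payload)
}

func main() {
	id := uint32(7)
	withID, _ := encodeReply("system stats", &id)
	legacy, _ := encodeReply("system stats", nil)
	fmt.Println(len(withID) > 0, len(legacy) > 0)

	var decoded Reply
	_ = cbor.Unmarshal(withID, &decoded)
	fmt.Println(*decoded.Id, decoded.Payload) // 7 system stats
}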

View File

@@ -301,7 +301,7 @@ func TestWebSocketClient_HandleHubRequest(t *testing.T) {
Data: cbor.RawMessage{},
}
err := client.handleHubRequest(hubRequest)
err := client.handleHubRequest(hubRequest, nil)
if tc.expectError {
assert.Error(t, err)

View File

@@ -9,19 +9,21 @@ import (
"time"
"github.com/henrygd/beszel/agent/health"
"github.com/henrygd/beszel/internal/entities/system"
)
// ConnectionManager manages the connection state and events for the agent.
// It handles both WebSocket and SSH connections, automatically switching between
// them based on availability and managing reconnection attempts.
type ConnectionManager struct {
agent *Agent // Reference to the parent agent
State ConnectionState // Current connection state
eventChan chan ConnectionEvent // Channel for connection events
wsClient *WebSocketClient // WebSocket client for hub communication
serverOptions ServerOptions // Configuration for SSH server
wsTicker *time.Ticker // Ticker for WebSocket connection attempts
isConnecting bool // Prevents multiple simultaneous reconnection attempts
agent *Agent // Reference to the parent agent
State ConnectionState // Current connection state
eventChan chan ConnectionEvent // Channel for connection events
wsClient *WebSocketClient // WebSocket client for hub communication
serverOptions ServerOptions // Configuration for SSH server
wsTicker *time.Ticker // Ticker for WebSocket connection attempts
isConnecting bool // Prevents multiple simultaneous reconnection attempts
ConnectionType system.ConnectionType
}
// ConnectionState represents the current connection state of the agent.
@@ -144,15 +146,18 @@ func (c *ConnectionManager) handleStateChange(newState ConnectionState) {
switch newState {
case WebSocketConnected:
slog.Info("WebSocket connected", "host", c.wsClient.hubURL.Host)
c.ConnectionType = system.ConnectionTypeWebSocket
c.stopWsTicker()
_ = c.agent.StopServer()
c.isConnecting = false
case SSHConnected:
// stop new ws connection attempts
slog.Info("SSH connection established")
c.ConnectionType = system.ConnectionTypeSSH
c.stopWsTicker()
c.isConnecting = false
case Disconnected:
c.ConnectionType = system.ConnectionTypeNone
if c.isConnecting {
// Already handling reconnection, avoid duplicate attempts
return

agent/cpu.go (new file, 66 lines)
View File

@@ -0,0 +1,66 @@
package agent
import (
"math"
"runtime"
"github.com/shirou/gopsutil/v4/cpu"
)
var lastCpuTimes = make(map[uint16]cpu.TimesStat)
// init initializes the CPU monitoring by storing the initial CPU times
// for the default 60-second cache interval.
func init() {
if times, err := cpu.Times(false); err == nil {
lastCpuTimes[60000] = times[0]
}
}
// getCpuPercent calculates the CPU usage percentage using cached previous measurements.
// It uses the specified cache time interval to determine the time window for calculation.
// Returns the CPU usage percentage (0-100) and any error encountered.
func getCpuPercent(cacheTimeMs uint16) (float64, error) {
times, err := cpu.Times(false)
if err != nil || len(times) == 0 {
return 0, err
}
// if cacheTimeMs is not in lastCpuTimes, use 60000 as fallback lastCpuTime
if _, ok := lastCpuTimes[cacheTimeMs]; !ok {
lastCpuTimes[cacheTimeMs] = lastCpuTimes[60000]
}
delta := calculateBusy(lastCpuTimes[cacheTimeMs], times[0])
lastCpuTimes[cacheTimeMs] = times[0]
return delta, nil
}
// calculateBusy calculates the CPU busy percentage between two time points.
// It computes the ratio of busy time to total time elapsed between t1 and t2,
// returning a percentage clamped between 0 and 100.
func calculateBusy(t1, t2 cpu.TimesStat) float64 {
t1All, t1Busy := getAllBusy(t1)
t2All, t2Busy := getAllBusy(t2)
if t2Busy <= t1Busy {
return 0
}
if t2All <= t1All {
return 100
}
return math.Min(100, math.Max(0, (t2Busy-t1Busy)/(t2All-t1All)*100))
}
// getAllBusy calculates the total CPU time and busy CPU time from CPU times statistics.
// On Linux, it excludes guest and guest_nice time from the total to match kernel behavior.
// Returns total CPU time and busy CPU time (total minus idle and I/O wait time).
func getAllBusy(t cpu.TimesStat) (float64, float64) {
tot := t.Total()
if runtime.GOOS == "linux" {
tot -= t.Guest // Linux 2.6.24+
tot -= t.GuestNice // Linux 3.2.0+
}
busy := tot - t.Idle - t.Iowait
return tot, busy
}
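To make the delta arithmetic concrete, here is a hedged, self-contained example of the busy-percent calculation between two snapshots (made-up values; it omits the Linux guest-time adjustment handled by getAllBusy above):

package main

import (
	"fmt"
	"math"

	"github.com/shirou/gopsutil/v4/cpu"
)

// busyPercent computes the share of elapsed CPU time spent busy between two snapshots.
func busyPercent(t1, t2 cpu.TimesStat) float64 {
	t1Tot, t2Tot := t1.Total(), t2.Total()
	t1Busy := t1Tot - t1.Idle - t1.Iowait
	t2Busy := t2Tot - t2.Idle - t2.Iowait
	if t2Busy <= t1Busy {
		return 0
	}
	if t2Tot <= t1Tot {
		return 100
	}
	return math.Min(100, math.Max(0, (t2Busy-t1Busy)/(t2Tot-t1Tot)*100))
}

func main() {
	// Hypothetical snapshots one polling interval apart (seconds of CPU time).
	prev := cpu.TimesStat{User: 100, System: 50, Idle: 850, Iowait: 0}
	curr := cpu.TimesStat{User: 130, System: 60, Idle: 910, Iowait: 0}
	// Busy delta = 40s, total delta = 100s -> 40% busy.
	fmt.Printf("%.1f%%\n", busyPercent(prev, curr))
}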

View File

@@ -0,0 +1,81 @@
// Package deltatracker provides a tracker for calculating differences in numeric values over time.
package deltatracker
import (
"sync"
"golang.org/x/exp/constraints"
)
// Numeric is a constraint that permits any integer or floating-point type.
type Numeric interface {
constraints.Integer | constraints.Float
}
// DeltaTracker is a generic, thread-safe tracker for calculating differences
// in numeric values over time.
// K is the key type (e.g., int, string).
// V is the value type (e.g., int, int64, float32, float64).
type DeltaTracker[K comparable, V Numeric] struct {
sync.RWMutex
current map[K]V
previous map[K]V
}
// NewDeltaTracker creates a new generic tracker.
func NewDeltaTracker[K comparable, V Numeric]() *DeltaTracker[K, V] {
return &DeltaTracker[K, V]{
current: make(map[K]V),
previous: make(map[K]V),
}
}
// Set records the current value for a given ID.
func (t *DeltaTracker[K, V]) Set(id K, value V) {
t.Lock()
defer t.Unlock()
t.current[id] = value
}
// Deltas returns a map of all calculated deltas for the current interval.
func (t *DeltaTracker[K, V]) Deltas() map[K]V {
t.RLock()
defer t.RUnlock()
deltas := make(map[K]V)
for id, currentVal := range t.current {
if previousVal, ok := t.previous[id]; ok {
deltas[id] = currentVal - previousVal
} else {
deltas[id] = 0
}
}
return deltas
}
// Delta returns the delta for a single key.
// Returns 0 if the key doesn't exist or has no previous value.
func (t *DeltaTracker[K, V]) Delta(id K) V {
t.RLock()
defer t.RUnlock()
currentVal, currentOk := t.current[id]
if !currentOk {
return 0
}
previousVal, previousOk := t.previous[id]
if !previousOk {
return 0
}
return currentVal - previousVal
}
// Cycle prepares the tracker for the next interval.
func (t *DeltaTracker[K, V]) Cycle() {
t.Lock()
defer t.Unlock()
t.previous = t.current
t.current = make(map[K]V)
}
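Elsewhere in this changeset the agent keeps one tracker per cache interval (map[uint16]*DeltaTracker[string, uint64]) so pollers with different cadences do not consume each other's deltas. A hedged sketch of that per-interval lookup pattern with hypothetical NIC counters:

package main

import (
	"fmt"

	"github.com/henrygd/beszel/agent/deltatracker"
)

// trackerFor returns the per-interval tracker, creating it on first use —
// the same lazy-initialization pattern used for the NIC and Docker network trackers.
func trackerFor(m map[uint16]*deltatracker.DeltaTracker[string, uint64], cacheTimeMs uint16) *deltatracker.DeltaTracker[string, uint64] {
	if m[cacheTimeMs] == nil {
		m[cacheTimeMs] = deltatracker.NewDeltaTracker[string, uint64]()
	}
	return m[cacheTimeMs]
}

func main() {
	trackers := make(map[uint16]*deltatracker.DeltaTracker[string, uint64])

	// First poll at the 60s interval: seed counters, no previous values yet.
	t := trackerFor(trackers, 60_000)
	t.Set("eth0", 1_000_000)
	t.Cycle()

	// Second poll at the same interval: deltas are relative to the previous cycle.
	t = trackerFor(trackers, 60_000)
	t.Set("eth0", 1_750_000)
	fmt.Println(t.Delta("eth0")) // 750000

	// A different interval gets its own tracker, so its deltas are independent.
	fmt.Println(trackerFor(trackers, 1_000).Delta("eth0")) // 0
}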

View File

@@ -0,0 +1,217 @@
package deltatracker
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
)
func ExampleDeltaTracker() {
tracker := NewDeltaTracker[string, int]()
tracker.Set("key1", 10)
tracker.Set("key2", 20)
tracker.Cycle()
tracker.Set("key1", 15)
tracker.Set("key2", 30)
fmt.Println(tracker.Delta("key1"))
fmt.Println(tracker.Delta("key2"))
fmt.Println(tracker.Deltas())
// Output: 5
// 10
// map[key1:5 key2:10]
}
func TestNewDeltaTracker(t *testing.T) {
tracker := NewDeltaTracker[string, int]()
assert.NotNil(t, tracker)
assert.Empty(t, tracker.current)
assert.Empty(t, tracker.previous)
}
func TestSet(t *testing.T) {
tracker := NewDeltaTracker[string, int]()
tracker.Set("key1", 10)
tracker.RLock()
defer tracker.RUnlock()
assert.Equal(t, 10, tracker.current["key1"])
}
func TestDeltas(t *testing.T) {
tracker := NewDeltaTracker[string, int]()
// Test with no previous values
tracker.Set("key1", 10)
tracker.Set("key2", 20)
deltas := tracker.Deltas()
assert.Equal(t, 0, deltas["key1"])
assert.Equal(t, 0, deltas["key2"])
// Cycle to move current to previous
tracker.Cycle()
// Set new values and check deltas
tracker.Set("key1", 15) // Delta should be 5 (15-10)
tracker.Set("key2", 25) // Delta should be 5 (25-20)
tracker.Set("key3", 30) // New key, delta should be 0
deltas = tracker.Deltas()
assert.Equal(t, 5, deltas["key1"])
assert.Equal(t, 5, deltas["key2"])
assert.Equal(t, 0, deltas["key3"])
}
func TestCycle(t *testing.T) {
tracker := NewDeltaTracker[string, int]()
tracker.Set("key1", 10)
tracker.Set("key2", 20)
// Verify current has values
tracker.RLock()
assert.Equal(t, 10, tracker.current["key1"])
assert.Equal(t, 20, tracker.current["key2"])
assert.Empty(t, tracker.previous)
tracker.RUnlock()
tracker.Cycle()
// After cycle, previous should have the old current values
// and current should be empty
tracker.RLock()
assert.Empty(t, tracker.current)
assert.Equal(t, 10, tracker.previous["key1"])
assert.Equal(t, 20, tracker.previous["key2"])
tracker.RUnlock()
}
func TestCompleteWorkflow(t *testing.T) {
tracker := NewDeltaTracker[string, int]()
// First interval
tracker.Set("server1", 100)
tracker.Set("server2", 200)
// Get deltas for first interval (should be zero)
firstDeltas := tracker.Deltas()
assert.Equal(t, 0, firstDeltas["server1"])
assert.Equal(t, 0, firstDeltas["server2"])
// Cycle to next interval
tracker.Cycle()
// Second interval
tracker.Set("server1", 150) // Delta: 50
tracker.Set("server2", 180) // Delta: -20
tracker.Set("server3", 300) // New server, delta: 300
secondDeltas := tracker.Deltas()
assert.Equal(t, 50, secondDeltas["server1"])
assert.Equal(t, -20, secondDeltas["server2"])
assert.Equal(t, 0, secondDeltas["server3"])
}
func TestDeltaTrackerWithDifferentTypes(t *testing.T) {
// Test with int64
intTracker := NewDeltaTracker[string, int64]()
intTracker.Set("pid1", 1000)
intTracker.Cycle()
intTracker.Set("pid1", 1200)
intDeltas := intTracker.Deltas()
assert.Equal(t, int64(200), intDeltas["pid1"])
// Test with float64
floatTracker := NewDeltaTracker[string, float64]()
floatTracker.Set("cpu1", 1.5)
floatTracker.Cycle()
floatTracker.Set("cpu1", 2.7)
floatDeltas := floatTracker.Deltas()
assert.InDelta(t, 1.2, floatDeltas["cpu1"], 0.0001)
// Test with int keys
pidTracker := NewDeltaTracker[int, int64]()
pidTracker.Set(101, 20000)
pidTracker.Cycle()
pidTracker.Set(101, 22500)
pidDeltas := pidTracker.Deltas()
assert.Equal(t, int64(2500), pidDeltas[101])
}
func TestDelta(t *testing.T) {
tracker := NewDeltaTracker[string, int]()
// Test getting delta for non-existent key
result := tracker.Delta("nonexistent")
assert.Equal(t, 0, result)
// Test getting delta for key with no previous value
tracker.Set("key1", 10)
result = tracker.Delta("key1")
assert.Equal(t, 0, result)
// Cycle to move current to previous
tracker.Cycle()
// Test getting delta for key with previous value
tracker.Set("key1", 15)
result = tracker.Delta("key1")
assert.Equal(t, 5, result)
// Test getting delta for key that exists in previous but not current
result = tracker.Delta("key1")
assert.Equal(t, 5, result) // Should still return 5
// Test getting delta for key that exists in current but not previous
tracker.Set("key2", 20)
result = tracker.Delta("key2")
assert.Equal(t, 0, result)
}
func TestDeltaWithDifferentTypes(t *testing.T) {
// Test with int64
intTracker := NewDeltaTracker[string, int64]()
intTracker.Set("pid1", 1000)
intTracker.Cycle()
intTracker.Set("pid1", 1200)
result := intTracker.Delta("pid1")
assert.Equal(t, int64(200), result)
// Test with float64
floatTracker := NewDeltaTracker[string, float64]()
floatTracker.Set("cpu1", 1.5)
floatTracker.Cycle()
floatTracker.Set("cpu1", 2.7)
floatResult := floatTracker.Delta("cpu1")
assert.InDelta(t, 1.2, floatResult, 0.0001)
// Test with int keys
pidTracker := NewDeltaTracker[int, int64]()
pidTracker.Set(101, 20000)
pidTracker.Cycle()
pidTracker.Set(101, 22500)
pidResult := pidTracker.Delta(101)
assert.Equal(t, int64(2500), pidResult)
}
func TestDeltaConcurrentAccess(t *testing.T) {
tracker := NewDeltaTracker[string, int]()
// Set initial values
tracker.Set("key1", 10)
tracker.Set("key2", 20)
tracker.Cycle()
// Set new values
tracker.Set("key1", 15)
tracker.Set("key2", 25)
// Test concurrent access safety
result1 := tracker.Delta("key1")
result2 := tracker.Delta("key2")
assert.Equal(t, 5, result1)
assert.Equal(t, 5, result2)
}

View File

@@ -189,3 +189,96 @@ func (a *Agent) initializeDiskIoStats(diskIoCounters map[string]disk.IOCountersS
a.fsNames = append(a.fsNames, device)
}
}
// Updates disk usage statistics for all monitored filesystems
func (a *Agent) updateDiskUsage(systemStats *system.Stats) {
// disk usage
for _, stats := range a.fsStats {
if d, err := disk.Usage(stats.Mountpoint); err == nil {
stats.DiskTotal = bytesToGigabytes(d.Total)
stats.DiskUsed = bytesToGigabytes(d.Used)
if stats.Root {
systemStats.DiskTotal = bytesToGigabytes(d.Total)
systemStats.DiskUsed = bytesToGigabytes(d.Used)
systemStats.DiskPct = twoDecimals(d.UsedPercent)
}
} else {
// reset stats if error (likely unmounted)
slog.Error("Error getting disk stats", "name", stats.Mountpoint, "err", err)
stats.DiskTotal = 0
stats.DiskUsed = 0
stats.TotalRead = 0
stats.TotalWrite = 0
}
}
}
// Updates disk I/O statistics for all monitored filesystems
func (a *Agent) updateDiskIo(cacheTimeMs uint16, systemStats *system.Stats) {
// disk i/o (cache-aware per interval)
if ioCounters, err := disk.IOCounters(a.fsNames...); err == nil {
// Ensure map for this interval exists
if _, ok := a.diskPrev[cacheTimeMs]; !ok {
a.diskPrev[cacheTimeMs] = make(map[string]prevDisk)
}
now := time.Now()
for name, d := range ioCounters {
stats := a.fsStats[d.Name]
if stats == nil {
// skip devices not tracked
continue
}
// Previous snapshot for this interval and device
prev, hasPrev := a.diskPrev[cacheTimeMs][name]
if !hasPrev {
// Seed from agent-level fsStats if present, else seed from current
prev = prevDisk{readBytes: stats.TotalRead, writeBytes: stats.TotalWrite, at: stats.Time}
if prev.at.IsZero() {
prev = prevDisk{readBytes: d.ReadBytes, writeBytes: d.WriteBytes, at: now}
}
}
msElapsed := uint64(now.Sub(prev.at).Milliseconds())
if msElapsed < 100 {
// Avoid division by zero or clock issues; update snapshot and continue
a.diskPrev[cacheTimeMs][name] = prevDisk{readBytes: d.ReadBytes, writeBytes: d.WriteBytes, at: now}
continue
}
diskIORead := (d.ReadBytes - prev.readBytes) * 1000 / msElapsed
diskIOWrite := (d.WriteBytes - prev.writeBytes) * 1000 / msElapsed
readMbPerSecond := bytesToMegabytes(float64(diskIORead))
writeMbPerSecond := bytesToMegabytes(float64(diskIOWrite))
// validate values
if readMbPerSecond > 50_000 || writeMbPerSecond > 50_000 {
slog.Warn("Invalid disk I/O. Resetting.", "name", d.Name, "read", readMbPerSecond, "write", writeMbPerSecond)
// Reset interval snapshot and seed from current
a.diskPrev[cacheTimeMs][name] = prevDisk{readBytes: d.ReadBytes, writeBytes: d.WriteBytes, at: now}
// also refresh agent baseline to avoid future negatives
a.initializeDiskIoStats(ioCounters)
continue
}
// Update per-interval snapshot
a.diskPrev[cacheTimeMs][name] = prevDisk{readBytes: d.ReadBytes, writeBytes: d.WriteBytes, at: now}
// Update global fsStats baseline for cross-interval correctness
stats.Time = now
stats.TotalRead = d.ReadBytes
stats.TotalWrite = d.WriteBytes
stats.DiskReadPs = readMbPerSecond
stats.DiskWritePs = writeMbPerSecond
stats.DiskReadBytes = diskIORead
stats.DiskWriteBytes = diskIOWrite
if stats.Root {
systemStats.DiskReadPs = stats.DiskReadPs
systemStats.DiskWritePs = stats.DiskWritePs
systemStats.DiskIO[0] = diskIORead
systemStats.DiskIO[1] = diskIOWrite
}
}
}
}
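The per-interval rate above is plain counter arithmetic: the byte delta scaled by 1000/msElapsed. A tiny worked example with hypothetical counters:

package main

import "fmt"

func main() {
	// Hypothetical read counters from two polls of the same cache interval.
	prevReadBytes := uint64(5_000_000_000)
	curReadBytes := uint64(5_600_000_000)
	msElapsed := uint64(60_000) // one minute between polls

	// Same formula as updateDiskIo: bytes per second for this interval.
	readBytesPerSec := (curReadBytes - prevReadBytes) * 1000 / msElapsed
	fmt.Println(readBytesPerSec) // 10000000 (~10 MB/s)
}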

View File

@@ -14,17 +14,25 @@ import (
"sync"
"time"
"github.com/henrygd/beszel/agent/deltatracker"
"github.com/henrygd/beszel/internal/entities/container"
"github.com/blang/semver"
)
const (
// Docker API timeout in milliseconds
dockerTimeoutMs = 2100
// Maximum realistic network speed (5 GB/s) to detect bad deltas
maxNetworkSpeedBps uint64 = 5e9
)
type dockerManager struct {
client *http.Client // Client to query Docker API
wg sync.WaitGroup // WaitGroup to wait for all goroutines to finish
sem chan struct{} // Semaphore to limit concurrent container requests
containerStatsMutex sync.RWMutex // Mutex to prevent concurrent access to containerStatsMap
apiContainerList []*container.ApiInfo // List of containers from Docker API (no pointer)
apiContainerList []*container.ApiInfo // List of containers from Docker API
containerStatsMap map[string]*container.Stats // Keeps track of container stats
validIds map[string]struct{} // Map of valid container ids, used to prune invalid containers from containerStatsMap
goodDockerVersion bool // Whether docker version is at least 25.0.0 (one-shot works correctly)
@@ -32,6 +40,17 @@ type dockerManager struct {
buf *bytes.Buffer // Buffer to store and read response bodies
decoder *json.Decoder // Reusable JSON decoder that reads from buf
apiStats *container.ApiStats // Reusable API stats object
// Cache-time-aware tracking for CPU stats (similar to cpu.go)
// Maps cache time intervals to container-specific CPU usage tracking
lastCpuContainer map[uint16]map[string]uint64 // cacheTimeMs -> containerId -> last cpu container usage
lastCpuSystem map[uint16]map[string]uint64 // cacheTimeMs -> containerId -> last cpu system usage
lastCpuReadTime map[uint16]map[string]time.Time // cacheTimeMs -> containerId -> last read time (Windows)
// Network delta trackers - one per cache time to avoid interference
// cacheTimeMs -> DeltaTracker for network bytes sent/received
networkSentTrackers map[uint16]*deltatracker.DeltaTracker[string, uint64]
networkRecvTrackers map[uint16]*deltatracker.DeltaTracker[string, uint64]
}
// userAgentRoundTripper is a custom http.RoundTripper that adds a User-Agent header to all requests
@@ -62,8 +81,8 @@ func (d *dockerManager) dequeue() {
}
}
// Returns stats for all running containers
func (dm *dockerManager) getDockerStats() ([]*container.Stats, error) {
// Returns stats for all running containers with cache-time-aware delta tracking
func (dm *dockerManager) getDockerStats(cacheTimeMs uint16) ([]*container.Stats, error) {
resp, err := dm.client.Get("http://localhost/containers/json")
if err != nil {
return nil, err
@@ -87,8 +106,7 @@ func (dm *dockerManager) getDockerStats() ([]*container.Stats, error) {
var failedContainers []*container.ApiInfo
for i := range dm.apiContainerList {
ctr := dm.apiContainerList[i]
for _, ctr := range dm.apiContainerList {
ctr.IdShort = ctr.Id[:12]
dm.validIds[ctr.IdShort] = struct{}{}
// check if container is less than 1 minute old (possible restart)
@@ -98,9 +116,9 @@ func (dm *dockerManager) getDockerStats() ([]*container.Stats, error) {
dm.deleteContainerStatsSync(ctr.IdShort)
}
dm.queue()
go func() {
go func(ctr *container.ApiInfo) {
defer dm.dequeue()
err := dm.updateContainerStats(ctr)
err := dm.updateContainerStats(ctr, cacheTimeMs)
// if error, delete from map and add to failed list to retry
if err != nil {
dm.containerStatsMutex.Lock()
@@ -108,7 +126,7 @@ func (dm *dockerManager) getDockerStats() ([]*container.Stats, error) {
failedContainers = append(failedContainers, ctr)
dm.containerStatsMutex.Unlock()
}
}()
}(ctr)
}
dm.wg.Wait()
@@ -119,13 +137,12 @@ func (dm *dockerManager) getDockerStats() ([]*container.Stats, error) {
for i := range failedContainers {
ctr := failedContainers[i]
dm.queue()
go func() {
go func(ctr *container.ApiInfo) {
defer dm.dequeue()
err = dm.updateContainerStats(ctr)
if err != nil {
slog.Error("Error getting container stats", "err", err)
if err2 := dm.updateContainerStats(ctr, cacheTimeMs); err2 != nil {
slog.Error("Error getting container stats", "err", err2)
}
}()
}(ctr)
}
dm.wg.Wait()
}
@@ -140,18 +157,156 @@ func (dm *dockerManager) getDockerStats() ([]*container.Stats, error) {
}
}
// prepare network trackers for next interval for this cache time
dm.cycleNetworkDeltasForCacheTime(cacheTimeMs)
return stats, nil
}
// Updates stats for individual container
func (dm *dockerManager) updateContainerStats(ctr *container.ApiInfo) error {
// initializeCpuTracking initializes CPU tracking maps for a specific cache time interval
func (dm *dockerManager) initializeCpuTracking(cacheTimeMs uint16) {
// Initialize cache time maps if they don't exist
if dm.lastCpuContainer[cacheTimeMs] == nil {
dm.lastCpuContainer[cacheTimeMs] = make(map[string]uint64)
}
if dm.lastCpuSystem[cacheTimeMs] == nil {
dm.lastCpuSystem[cacheTimeMs] = make(map[string]uint64)
}
// Ensure the outer map exists before indexing
if dm.lastCpuReadTime == nil {
dm.lastCpuReadTime = make(map[uint16]map[string]time.Time)
}
if dm.lastCpuReadTime[cacheTimeMs] == nil {
dm.lastCpuReadTime[cacheTimeMs] = make(map[string]time.Time)
}
}
// getCpuPreviousValues returns previous CPU values for a container and cache time interval
func (dm *dockerManager) getCpuPreviousValues(cacheTimeMs uint16, containerId string) (uint64, uint64) {
return dm.lastCpuContainer[cacheTimeMs][containerId], dm.lastCpuSystem[cacheTimeMs][containerId]
}
// setCpuCurrentValues stores current CPU values for a container and cache time interval
func (dm *dockerManager) setCpuCurrentValues(cacheTimeMs uint16, containerId string, cpuContainer, cpuSystem uint64) {
dm.lastCpuContainer[cacheTimeMs][containerId] = cpuContainer
dm.lastCpuSystem[cacheTimeMs][containerId] = cpuSystem
}
// calculateMemoryUsage calculates memory usage from Docker API stats
func calculateMemoryUsage(apiStats *container.ApiStats, isWindows bool) (uint64, error) {
if isWindows {
return apiStats.MemoryStats.PrivateWorkingSet, nil
}
// Check if container has valid data, otherwise may be in restart loop (#103)
if apiStats.MemoryStats.Usage == 0 {
return 0, fmt.Errorf("no memory stats available")
}
memCache := apiStats.MemoryStats.Stats.InactiveFile
if memCache == 0 {
memCache = apiStats.MemoryStats.Stats.Cache
}
return apiStats.MemoryStats.Usage - memCache, nil
}
// getNetworkTracker returns the DeltaTracker for a specific cache time, creating it if needed
func (dm *dockerManager) getNetworkTracker(cacheTimeMs uint16, isSent bool) *deltatracker.DeltaTracker[string, uint64] {
var trackers map[uint16]*deltatracker.DeltaTracker[string, uint64]
if isSent {
trackers = dm.networkSentTrackers
} else {
trackers = dm.networkRecvTrackers
}
if trackers[cacheTimeMs] == nil {
trackers[cacheTimeMs] = deltatracker.NewDeltaTracker[string, uint64]()
}
return trackers[cacheTimeMs]
}
// cycleNetworkDeltasForCacheTime cycles the network delta trackers for a specific cache time
func (dm *dockerManager) cycleNetworkDeltasForCacheTime(cacheTimeMs uint16) {
if dm.networkSentTrackers[cacheTimeMs] != nil {
dm.networkSentTrackers[cacheTimeMs].Cycle()
}
if dm.networkRecvTrackers[cacheTimeMs] != nil {
dm.networkRecvTrackers[cacheTimeMs].Cycle()
}
}
// calculateNetworkStats calculates network sent/receive deltas using DeltaTracker
func (dm *dockerManager) calculateNetworkStats(ctr *container.ApiInfo, apiStats *container.ApiStats, stats *container.Stats, initialized bool, name string, cacheTimeMs uint16) (uint64, uint64) {
var total_sent, total_recv uint64
for _, v := range apiStats.Networks {
total_sent += v.TxBytes
total_recv += v.RxBytes
}
// Get the DeltaTracker for this specific cache time
sentTracker := dm.getNetworkTracker(cacheTimeMs, true)
recvTracker := dm.getNetworkTracker(cacheTimeMs, false)
// Set current values in the cache-time-specific DeltaTracker
sentTracker.Set(ctr.IdShort, total_sent)
recvTracker.Set(ctr.IdShort, total_recv)
// Get deltas (bytes since last measurement)
sent_delta_raw := sentTracker.Delta(ctr.IdShort)
recv_delta_raw := recvTracker.Delta(ctr.IdShort)
// Calculate bytes per second independently for Tx and Rx if we have previous data
var sent_delta, recv_delta uint64
if initialized {
millisecondsElapsed := uint64(time.Since(stats.PrevReadTime).Milliseconds())
if millisecondsElapsed > 0 {
if sent_delta_raw > 0 {
sent_delta = sent_delta_raw * 1000 / millisecondsElapsed
if sent_delta > maxNetworkSpeedBps {
slog.Warn("Bad network delta", "container", name)
sent_delta = 0
}
}
if recv_delta_raw > 0 {
recv_delta = recv_delta_raw * 1000 / millisecondsElapsed
if recv_delta > maxNetworkSpeedBps {
slog.Warn("Bad network delta", "container", name)
recv_delta = 0
}
}
}
}
return sent_delta, recv_delta
}
// validateCpuPercentage checks if CPU percentage is within valid range
func validateCpuPercentage(cpuPct float64, containerName string) error {
if cpuPct > 100 {
return fmt.Errorf("%s cpu pct greater than 100: %+v", containerName, cpuPct)
}
return nil
}
// updateContainerStatsValues updates the final stats values
func updateContainerStatsValues(stats *container.Stats, cpuPct float64, usedMemory uint64, sent_delta, recv_delta uint64, readTime time.Time) {
stats.Cpu = twoDecimals(cpuPct)
stats.Mem = bytesToMegabytes(float64(usedMemory))
stats.NetworkSent = bytesToMegabytes(float64(sent_delta))
stats.NetworkRecv = bytesToMegabytes(float64(recv_delta))
stats.PrevReadTime = readTime
}
// Updates stats for individual container with cache-time-aware delta tracking
func (dm *dockerManager) updateContainerStats(ctr *container.ApiInfo, cacheTimeMs uint16) error {
name := ctr.Names[0][1:]
resp, err := dm.client.Get("http://localhost/containers/" + ctr.IdShort + "/stats?stream=0&one-shot=1")
if err != nil {
return err
}
defer resp.Body.Close()
dm.containerStatsMutex.Lock()
defer dm.containerStatsMutex.Unlock()
@@ -169,72 +324,58 @@ func (dm *dockerManager) updateContainerStats(ctr *container.ApiInfo) error {
stats.NetworkSent = 0
stats.NetworkRecv = 0
// docker host container stats response
// res := dm.getApiStats()
// defer dm.putApiStats(res)
//
res := dm.apiStats
res.Networks = nil
if err := dm.decode(resp, res); err != nil {
return err
}
// calculate cpu and memory stats
var usedMemory uint64
// Initialize CPU tracking for this cache time interval
dm.initializeCpuTracking(cacheTimeMs)
// Get previous CPU values
prevCpuContainer, prevCpuSystem := dm.getCpuPreviousValues(cacheTimeMs, ctr.IdShort)
// Calculate CPU percentage based on platform
var cpuPct float64
// store current cpu stats
prevCpuContainer, prevCpuSystem := stats.CpuContainer, stats.CpuSystem
stats.CpuContainer = res.CPUStats.CPUUsage.TotalUsage
stats.CpuSystem = res.CPUStats.SystemUsage
if dm.isWindows {
usedMemory = res.MemoryStats.PrivateWorkingSet
cpuPct = res.CalculateCpuPercentWindows(prevCpuContainer, stats.PrevReadTime)
prevRead := dm.lastCpuReadTime[cacheTimeMs][ctr.IdShort]
cpuPct = res.CalculateCpuPercentWindows(prevCpuContainer, prevRead)
} else {
// check if container has valid data, otherwise may be in restart loop (#103)
if res.MemoryStats.Usage == 0 {
return fmt.Errorf("%s - no memory stats - see https://github.com/henrygd/beszel/issues/144", name)
}
memCache := res.MemoryStats.Stats.InactiveFile
if memCache == 0 {
memCache = res.MemoryStats.Stats.Cache
}
usedMemory = res.MemoryStats.Usage - memCache
cpuPct = res.CalculateCpuPercentLinux(prevCpuContainer, prevCpuSystem)
}
if cpuPct > 100 {
return fmt.Errorf("%s cpu pct greater than 100: %+v", name, cpuPct)
// Calculate memory usage
usedMemory, err := calculateMemoryUsage(res, dm.isWindows)
if err != nil {
return fmt.Errorf("%s - %w - see https://github.com/henrygd/beszel/issues/144", name, err)
}
// network
// Store current CPU stats for next calculation
currentCpuContainer := res.CPUStats.CPUUsage.TotalUsage
currentCpuSystem := res.CPUStats.SystemUsage
dm.setCpuCurrentValues(cacheTimeMs, ctr.IdShort, currentCpuContainer, currentCpuSystem)
// Validate CPU percentage
if err := validateCpuPercentage(cpuPct, name); err != nil {
return err
}
// Calculate network stats using DeltaTracker
sent_delta, recv_delta := dm.calculateNetworkStats(ctr, res, stats, initialized, name, cacheTimeMs)
// Store current network values for legacy compatibility
var total_sent, total_recv uint64
for _, v := range res.Networks {
total_sent += v.TxBytes
total_recv += v.RxBytes
}
var sent_delta, recv_delta uint64
millisecondsElapsed := uint64(time.Since(stats.PrevReadTime).Milliseconds())
if initialized && millisecondsElapsed > 0 {
// get bytes per second
sent_delta = (total_sent - stats.PrevNet.Sent) * 1000 / millisecondsElapsed
recv_delta = (total_recv - stats.PrevNet.Recv) * 1000 / millisecondsElapsed
// check for unrealistic network values (> 5GB/s)
if sent_delta > 5e9 || recv_delta > 5e9 {
slog.Warn("Bad network delta", "container", name)
sent_delta, recv_delta = 0, 0
}
}
stats.PrevNet.Sent, stats.PrevNet.Recv = total_sent, total_recv
stats.Cpu = twoDecimals(cpuPct)
stats.Mem = bytesToMegabytes(float64(usedMemory))
stats.NetworkSent = bytesToMegabytes(float64(sent_delta))
stats.NetworkRecv = bytesToMegabytes(float64(recv_delta))
stats.PrevReadTime = res.Read
// Update final stats values
updateContainerStatsValues(stats, cpuPct, usedMemory, sent_delta, recv_delta, res.Read)
// store per-cache-time read time for Windows CPU percent calc
dm.lastCpuReadTime[cacheTimeMs][ctr.IdShort] = res.Read
return nil
}
@@ -244,6 +385,15 @@ func (dm *dockerManager) deleteContainerStatsSync(id string) {
dm.containerStatsMutex.Lock()
defer dm.containerStatsMutex.Unlock()
delete(dm.containerStatsMap, id)
for ct := range dm.lastCpuContainer {
delete(dm.lastCpuContainer[ct], id)
}
for ct := range dm.lastCpuSystem {
delete(dm.lastCpuSystem[ct], id)
}
for ct := range dm.lastCpuReadTime {
delete(dm.lastCpuReadTime[ct], id)
}
}
// Creates a new http client for Docker or Podman API
@@ -283,7 +433,7 @@ func newDockerManager(a *Agent) *dockerManager {
}
// configurable timeout
timeout := time.Millisecond * 2100
timeout := time.Millisecond * time.Duration(dockerTimeoutMs)
if t, set := GetEnv("DOCKER_TIMEOUT"); set {
timeout, err = time.ParseDuration(t)
if err != nil {
@@ -308,6 +458,13 @@ func newDockerManager(a *Agent) *dockerManager {
sem: make(chan struct{}, 5),
apiContainerList: []*container.ApiInfo{},
apiStats: &container.ApiStats{},
// Initialize cache-time-aware tracking structures
lastCpuContainer: make(map[uint16]map[string]uint64),
lastCpuSystem: make(map[uint16]map[string]uint64),
lastCpuReadTime: make(map[uint16]map[string]time.Time),
networkSentTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
networkRecvTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
}
// If using podman, return client
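
The DOCKER_TIMEOUT handling a few hunks up accepts any string time.ParseDuration understands (e.g. "5s", "2100ms"). A standalone sketch of the same pattern, using plain os.Getenv rather than the agent's GetEnv helper and ignoring parse errors for brevity:

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	timeout := 2100 * time.Millisecond // default, as in the diff above
	if t := os.Getenv("DOCKER_TIMEOUT"); t != "" {
		if d, err := time.ParseDuration(t); err == nil {
			timeout = d
		}
	}
	fmt.Println(timeout) // e.g. DOCKER_TIMEOUT=5s -> 5s, otherwise 2.1s
}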

agent/docker_test.go (new file)

@@ -0,0 +1,875 @@
//go:build testing
// +build testing
package agent
import (
"encoding/json"
"os"
"testing"
"time"
"github.com/henrygd/beszel/agent/deltatracker"
"github.com/henrygd/beszel/internal/entities/container"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var defaultCacheTimeMs = uint16(60_000)
// cycleCpuDeltas cycles the CPU tracking data for a specific cache time interval
func (dm *dockerManager) cycleCpuDeltas(cacheTimeMs uint16) {
// Clear the CPU tracking maps for this cache time interval
if dm.lastCpuContainer[cacheTimeMs] != nil {
clear(dm.lastCpuContainer[cacheTimeMs])
}
if dm.lastCpuSystem[cacheTimeMs] != nil {
clear(dm.lastCpuSystem[cacheTimeMs])
}
}
func TestCalculateMemoryUsage(t *testing.T) {
tests := []struct {
name string
apiStats *container.ApiStats
isWindows bool
expected uint64
expectError bool
}{
{
name: "Linux with valid memory stats",
apiStats: &container.ApiStats{
MemoryStats: container.MemoryStats{
Usage: 1048576, // 1MB
Stats: container.MemoryStatsStats{
Cache: 524288, // 512KB
InactiveFile: 262144, // 256KB
},
},
},
isWindows: false,
expected: 786432, // 1MB - 256KB (inactive_file takes precedence) = 768KB
expectError: false,
},
{
name: "Linux with zero cache uses inactive_file",
apiStats: &container.ApiStats{
MemoryStats: container.MemoryStats{
Usage: 1048576, // 1MB
Stats: container.MemoryStatsStats{
Cache: 0,
InactiveFile: 262144, // 256KB
},
},
},
isWindows: false,
expected: 786432, // 1MB - 256KB = 768KB
expectError: false,
},
{
name: "Windows with valid memory stats",
apiStats: &container.ApiStats{
MemoryStats: container.MemoryStats{
PrivateWorkingSet: 524288, // 512KB
},
},
isWindows: true,
expected: 524288,
expectError: false,
},
{
name: "Linux with zero usage returns error",
apiStats: &container.ApiStats{
MemoryStats: container.MemoryStats{
Usage: 0,
Stats: container.MemoryStatsStats{
Cache: 0,
InactiveFile: 0,
},
},
},
isWindows: false,
expected: 0,
expectError: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result, err := calculateMemoryUsage(tt.apiStats, tt.isWindows)
if tt.expectError {
assert.Error(t, err)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expected, result)
}
})
}
}
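
These cases imply that on Linux the helper subtracts inactive_file from usage, falling back to cache when inactive_file is zero, and that Windows simply reports private_working_set. A flattened sketch consistent with the old inline logic and the expectations above (the real calculateMemoryUsage takes *container.ApiStats):

package main

import "fmt"

func calcMemSketch(usage, cache, inactiveFile, privateWorkingSet uint64, isWindows bool) (uint64, error) {
	if isWindows {
		return privateWorkingSet, nil
	}
	if usage == 0 {
		return 0, fmt.Errorf("no memory stats")
	}
	memCache := inactiveFile
	if memCache == 0 {
		memCache = cache
	}
	return usage - memCache, nil
}

func main() {
	used, _ := calcMemSketch(1048576, 524288, 262144, 0, false)
	fmt.Println(used) // 786432: inactive_file is preferred over cache
}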
func TestValidateCpuPercentage(t *testing.T) {
tests := []struct {
name string
cpuPct float64
containerName string
expectError bool
expectedError string
}{
{
name: "valid CPU percentage",
cpuPct: 50.5,
containerName: "test-container",
expectError: false,
},
{
name: "zero CPU percentage",
cpuPct: 0.0,
containerName: "test-container",
expectError: false,
},
{
name: "CPU percentage over 100",
cpuPct: 150.5,
containerName: "test-container",
expectError: true,
expectedError: "test-container cpu pct greater than 100: 150.5",
},
{
name: "CPU percentage exactly 100",
cpuPct: 100.0,
containerName: "test-container",
expectError: false,
},
{
name: "negative CPU percentage",
cpuPct: -10.0,
containerName: "test-container",
expectError: false, // Function only checks for > 100, not negative
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := validateCpuPercentage(tt.cpuPct, tt.containerName)
if tt.expectError {
assert.Error(t, err)
assert.Contains(t, err.Error(), tt.expectedError)
} else {
assert.NoError(t, err)
}
})
}
}
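
A sketch matching the behaviour asserted above: only values over 100 are rejected, while zero and negative percentages pass through (the real helper's error wording may differ slightly):

package main

import "fmt"

func validateCpuSketch(pct float64, name string) error {
	if pct > 100 {
		return fmt.Errorf("%s cpu pct greater than 100: %v", name, pct)
	}
	return nil
}

func main() {
	fmt.Println(validateCpuSketch(150.5, "test-container")) // error
	fmt.Println(validateCpuSketch(-10, "test-container"))   // <nil>
}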
func TestUpdateContainerStatsValues(t *testing.T) {
stats := &container.Stats{
Name: "test-container",
Cpu: 0.0,
Mem: 0.0,
NetworkSent: 0.0,
NetworkRecv: 0.0,
PrevReadTime: time.Time{},
}
testTime := time.Now()
updateContainerStatsValues(stats, 75.5, 1048576, 524288, 262144, testTime)
// Check CPU percentage (should be rounded to 2 decimals)
assert.Equal(t, 75.5, stats.Cpu)
// Check memory (should be converted to MB: 1048576 bytes = 1 MB)
assert.Equal(t, 1.0, stats.Mem)
// Check network sent (should be converted to MB: 524288 bytes = 0.5 MB)
assert.Equal(t, 0.5, stats.NetworkSent)
// Check network recv (should be converted to MB: 262144 bytes = 0.25 MB)
assert.Equal(t, 0.25, stats.NetworkRecv)
// Check read time
assert.Equal(t, testTime, stats.PrevReadTime)
}
func TestTwoDecimals(t *testing.T) {
tests := []struct {
name string
input float64
expected float64
}{
{"round down", 1.234, 1.23},
{"round half up", 1.235, 1.24}, // math.Round rounds half up
{"no rounding needed", 1.23, 1.23},
{"negative number", -1.235, -1.24}, // math.Round rounds half up (more negative)
{"zero", 0.0, 0.0},
{"large number", 123.456, 123.46}, // rounds 5 up
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := twoDecimals(tt.input)
assert.Equal(t, tt.expected, result)
})
}
}
func TestBytesToMegabytes(t *testing.T) {
tests := []struct {
name string
input float64
expected float64
}{
{"1 MB", 1048576, 1.0},
{"512 KB", 524288, 0.5},
{"zero", 0, 0},
{"large value", 1073741824, 1024}, // 1 GB = 1024 MB
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := bytesToMegabytes(tt.input)
assert.Equal(t, tt.expected, result)
})
}
}
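
Sketches consistent with the rounding and unit conversions asserted above (the repo's own twoDecimals/bytesToMegabytes helpers live elsewhere in the agent package):

package main

import (
	"fmt"
	"math"
)

func twoDecimalsSketch(v float64) float64 { return math.Round(v*100) / 100 }

func bytesToMBSketch(b float64) float64 { return b / 1048576 }

func main() {
	fmt.Println(twoDecimalsSketch(1.234)) // 1.23
	fmt.Println(bytesToMBSketch(524288))  // 0.5
}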
func TestInitializeCpuTracking(t *testing.T) {
dm := &dockerManager{
lastCpuContainer: make(map[uint16]map[string]uint64),
lastCpuSystem: make(map[uint16]map[string]uint64),
lastCpuReadTime: make(map[uint16]map[string]time.Time),
}
cacheTimeMs := uint16(30000)
// Test initializing a new cache time
dm.initializeCpuTracking(cacheTimeMs)
// Check that maps were created
assert.NotNil(t, dm.lastCpuContainer[cacheTimeMs])
assert.NotNil(t, dm.lastCpuSystem[cacheTimeMs])
assert.NotNil(t, dm.lastCpuReadTime[cacheTimeMs])
assert.Empty(t, dm.lastCpuContainer[cacheTimeMs])
assert.Empty(t, dm.lastCpuSystem[cacheTimeMs])
// Test initializing existing cache time (should not overwrite)
dm.lastCpuContainer[cacheTimeMs]["test"] = 100
dm.lastCpuSystem[cacheTimeMs]["test"] = 200
dm.initializeCpuTracking(cacheTimeMs)
// Should still have the existing values
assert.Equal(t, uint64(100), dm.lastCpuContainer[cacheTimeMs]["test"])
assert.Equal(t, uint64(200), dm.lastCpuSystem[cacheTimeMs]["test"])
}
func TestGetCpuPreviousValues(t *testing.T) {
dm := &dockerManager{
lastCpuContainer: map[uint16]map[string]uint64{
30000: {"container1": 100, "container2": 200},
},
lastCpuSystem: map[uint16]map[string]uint64{
30000: {"container1": 150, "container2": 250},
},
}
// Test getting existing values
container, system := dm.getCpuPreviousValues(30000, "container1")
assert.Equal(t, uint64(100), container)
assert.Equal(t, uint64(150), system)
// Test getting non-existing container
container, system = dm.getCpuPreviousValues(30000, "nonexistent")
assert.Equal(t, uint64(0), container)
assert.Equal(t, uint64(0), system)
// Test getting non-existing cache time
container, system = dm.getCpuPreviousValues(60000, "container1")
assert.Equal(t, uint64(0), container)
assert.Equal(t, uint64(0), system)
}
func TestSetCpuCurrentValues(t *testing.T) {
dm := &dockerManager{
lastCpuContainer: make(map[uint16]map[string]uint64),
lastCpuSystem: make(map[uint16]map[string]uint64),
}
cacheTimeMs := uint16(30000)
containerId := "test-container"
// Initialize the cache time maps first
dm.initializeCpuTracking(cacheTimeMs)
// Set values
dm.setCpuCurrentValues(cacheTimeMs, containerId, 500, 750)
// Check that values were set
assert.Equal(t, uint64(500), dm.lastCpuContainer[cacheTimeMs][containerId])
assert.Equal(t, uint64(750), dm.lastCpuSystem[cacheTimeMs][containerId])
}
func TestCalculateNetworkStats(t *testing.T) {
// Create docker manager with tracker maps
dm := &dockerManager{
networkSentTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
networkRecvTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
}
cacheTimeMs := uint16(30000)
// Pre-populate tracker for this cache time with initial values
sentTracker := deltatracker.NewDeltaTracker[string, uint64]()
recvTracker := deltatracker.NewDeltaTracker[string, uint64]()
sentTracker.Set("container1", 1000)
recvTracker.Set("container1", 800)
sentTracker.Cycle() // Move to previous
recvTracker.Cycle()
dm.networkSentTrackers[cacheTimeMs] = sentTracker
dm.networkRecvTrackers[cacheTimeMs] = recvTracker
ctr := &container.ApiInfo{
IdShort: "container1",
}
apiStats := &container.ApiStats{
Networks: map[string]container.NetworkStats{
"eth0": {TxBytes: 2000, RxBytes: 1800}, // New values
},
}
stats := &container.Stats{
PrevReadTime: time.Now().Add(-time.Second), // 1 second ago
}
// Test with initialized container
sent, recv := dm.calculateNetworkStats(ctr, apiStats, stats, true, "test-container", cacheTimeMs)
// Should return calculated byte rates per second
assert.GreaterOrEqual(t, sent, uint64(0))
assert.GreaterOrEqual(t, recv, uint64(0))
// Cycle and test one-direction change (Tx only) is reflected independently
dm.cycleNetworkDeltasForCacheTime(cacheTimeMs)
apiStats.Networks["eth0"] = container.NetworkStats{TxBytes: 2500, RxBytes: 1800} // +500 Tx only
sent, recv = dm.calculateNetworkStats(ctr, apiStats, stats, true, "test-container", cacheTimeMs)
assert.Greater(t, sent, uint64(0))
assert.Equal(t, uint64(0), recv)
}
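
The Set/Cycle/Delta protocol these tests rely on: Set records the current absolute counter, Cycle snapshots current values as the baseline, and Delta returns current minus that baseline (zero when nothing has been cycled yet). A small usage sketch:

package main

import (
	"fmt"

	"github.com/henrygd/beszel/agent/deltatracker"
)

func main() {
	tr := deltatracker.NewDeltaTracker[string, uint64]()
	tr.Set("ctr", 1000)
	fmt.Println(tr.Delta("ctr")) // 0: nothing cycled yet
	tr.Cycle()                   // current values become the baseline
	tr.Set("ctr", 1800)
	fmt.Println(tr.Delta("ctr")) // 800
}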
func TestDockerManagerCreation(t *testing.T) {
// Test that dockerManager can be created without panicking
dm := &dockerManager{
lastCpuContainer: make(map[uint16]map[string]uint64),
lastCpuSystem: make(map[uint16]map[string]uint64),
lastCpuReadTime: make(map[uint16]map[string]time.Time),
networkSentTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
networkRecvTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
}
assert.NotNil(t, dm)
assert.NotNil(t, dm.lastCpuContainer)
assert.NotNil(t, dm.lastCpuSystem)
assert.NotNil(t, dm.networkSentTrackers)
assert.NotNil(t, dm.networkRecvTrackers)
}
func TestCycleCpuDeltas(t *testing.T) {
dm := &dockerManager{
lastCpuContainer: map[uint16]map[string]uint64{
30000: {"container1": 100, "container2": 200},
},
lastCpuSystem: map[uint16]map[string]uint64{
30000: {"container1": 150, "container2": 250},
},
lastCpuReadTime: map[uint16]map[string]time.Time{
30000: {"container1": time.Now()},
},
}
cacheTimeMs := uint16(30000)
// Verify values exist before cycling
assert.Equal(t, uint64(100), dm.lastCpuContainer[cacheTimeMs]["container1"])
assert.Equal(t, uint64(200), dm.lastCpuContainer[cacheTimeMs]["container2"])
// Cycle the CPU deltas
dm.cycleCpuDeltas(cacheTimeMs)
// Verify values are cleared
assert.Empty(t, dm.lastCpuContainer[cacheTimeMs])
assert.Empty(t, dm.lastCpuSystem[cacheTimeMs])
// lastCpuReadTime is not affected by cycleCpuDeltas
assert.NotEmpty(t, dm.lastCpuReadTime[cacheTimeMs])
}
func TestCycleNetworkDeltas(t *testing.T) {
// Create docker manager with tracker maps
dm := &dockerManager{
networkSentTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
networkRecvTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
}
cacheTimeMs := uint16(30000)
// Get trackers for this cache time (creates them)
sentTracker := dm.getNetworkTracker(cacheTimeMs, true)
recvTracker := dm.getNetworkTracker(cacheTimeMs, false)
// Set some test data
sentTracker.Set("test", 100)
recvTracker.Set("test", 200)
// This should not panic
assert.NotPanics(t, func() {
dm.cycleNetworkDeltasForCacheTime(cacheTimeMs)
})
// Verify that cycle worked by checking deltas are now zero (no previous values)
assert.Equal(t, uint64(0), sentTracker.Delta("test"))
assert.Equal(t, uint64(0), recvTracker.Delta("test"))
}
func TestConstants(t *testing.T) {
// Test that constants are properly defined
assert.Equal(t, uint16(60000), defaultCacheTimeMs)
assert.Equal(t, uint64(5e9), maxNetworkSpeedBps)
assert.Equal(t, 2100, dockerTimeoutMs)
}
func TestDockerStatsWithMockData(t *testing.T) {
// Create a docker manager with initialized tracking
dm := &dockerManager{
lastCpuContainer: make(map[uint16]map[string]uint64),
lastCpuSystem: make(map[uint16]map[string]uint64),
lastCpuReadTime: make(map[uint16]map[string]time.Time),
networkSentTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
networkRecvTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
containerStatsMap: make(map[string]*container.Stats),
}
cacheTimeMs := uint16(30000)
// Test that initializeCpuTracking works
dm.initializeCpuTracking(cacheTimeMs)
assert.NotNil(t, dm.lastCpuContainer[cacheTimeMs])
assert.NotNil(t, dm.lastCpuSystem[cacheTimeMs])
// Test that we can set and get CPU values
dm.setCpuCurrentValues(cacheTimeMs, "test-container", 1000, 2000)
container, system := dm.getCpuPreviousValues(cacheTimeMs, "test-container")
assert.Equal(t, uint64(1000), container)
assert.Equal(t, uint64(2000), system)
}
func TestMemoryStatsEdgeCases(t *testing.T) {
tests := []struct {
name string
usage uint64
cache uint64
inactive uint64
isWindows bool
expected uint64
hasError bool
}{
{"Linux normal case", 1000, 200, 0, false, 800, false},
{"Linux with inactive file", 1000, 0, 300, false, 700, false},
{"Windows normal case", 0, 0, 0, true, 500, false},
{"Linux zero usage error", 0, 0, 0, false, 0, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
apiStats := &container.ApiStats{
MemoryStats: container.MemoryStats{
Usage: tt.usage,
Stats: container.MemoryStatsStats{
Cache: tt.cache,
InactiveFile: tt.inactive,
},
},
}
if tt.isWindows {
apiStats.MemoryStats.PrivateWorkingSet = tt.expected
}
result, err := calculateMemoryUsage(apiStats, tt.isWindows)
if tt.hasError {
assert.Error(t, err)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expected, result)
}
})
}
}
func TestContainerStatsInitialization(t *testing.T) {
stats := &container.Stats{Name: "test-container"}
// Verify initial values
assert.Equal(t, "test-container", stats.Name)
assert.Equal(t, 0.0, stats.Cpu)
assert.Equal(t, 0.0, stats.Mem)
assert.Equal(t, 0.0, stats.NetworkSent)
assert.Equal(t, 0.0, stats.NetworkRecv)
assert.Equal(t, time.Time{}, stats.PrevReadTime)
// Test updating values
testTime := time.Now()
updateContainerStatsValues(stats, 45.67, 2097152, 1048576, 524288, testTime)
assert.Equal(t, 45.67, stats.Cpu)
assert.Equal(t, 2.0, stats.Mem)
assert.Equal(t, 1.0, stats.NetworkSent)
assert.Equal(t, 0.5, stats.NetworkRecv)
assert.Equal(t, testTime, stats.PrevReadTime)
}
// Test with real Docker API test data
func TestCalculateMemoryUsageWithRealData(t *testing.T) {
// Load minimal container stats from test data
data, err := os.ReadFile("test-data/container.json")
require.NoError(t, err)
var apiStats container.ApiStats
err = json.Unmarshal(data, &apiStats)
require.NoError(t, err)
// Test memory calculation with real data
usedMemory, err := calculateMemoryUsage(&apiStats, false)
require.NoError(t, err)
// From the real data: usage - inactive_file = 507400192 - 165130240 = 342269952
expected := uint64(507400192 - 165130240)
assert.Equal(t, expected, usedMemory)
}
func TestCpuPercentageCalculationWithRealData(t *testing.T) {
// Load minimal container stats from test data
data1, err := os.ReadFile("test-data/container.json")
require.NoError(t, err)
data2, err := os.ReadFile("test-data/container2.json")
require.NoError(t, err)
var apiStats1, apiStats2 container.ApiStats
err = json.Unmarshal(data1, &apiStats1)
require.NoError(t, err)
err = json.Unmarshal(data2, &apiStats2)
require.NoError(t, err)
// Calculate delta manually: 314891801000 - 312055276000 = 2836525000
// System delta: 1368474900000000 - 1366399830000000 = 2075070000000
// Expected %: (2836525000 / 2075070000000) * 100 ≈ 0.1367%
expectedPct := float64(2836525000) / float64(2075070000000) * 100.0
actualPct := apiStats2.CalculateCpuPercentLinux(apiStats1.CPUStats.CPUUsage.TotalUsage, apiStats1.CPUStats.SystemUsage)
assert.InDelta(t, expectedPct, actualPct, 0.01)
}
func TestNetworkStatsCalculationWithRealData(t *testing.T) {
// Create synthetic test data to avoid timing issues
apiStats1 := &container.ApiStats{
Networks: map[string]container.NetworkStats{
"eth0": {TxBytes: 1000000, RxBytes: 500000},
},
}
apiStats2 := &container.ApiStats{
Networks: map[string]container.NetworkStats{
"eth0": {TxBytes: 3000000, RxBytes: 1500000}, // 2MB sent, 1MB received increase
},
}
// Create docker manager with tracker maps
dm := &dockerManager{
networkSentTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
networkRecvTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
}
ctr := &container.ApiInfo{IdShort: "test-container"}
cacheTimeMs := uint16(30000) // Test with 30 second cache
// Use exact timing for deterministic results
exactly1000msAgo := time.Now().Add(-1000 * time.Millisecond)
stats := &container.Stats{
PrevReadTime: exactly1000msAgo,
}
// First call sets baseline
sent1, recv1 := dm.calculateNetworkStats(ctr, apiStats1, stats, true, "test", cacheTimeMs)
assert.Equal(t, uint64(0), sent1)
assert.Equal(t, uint64(0), recv1)
// Cycle to establish baseline for this cache time
dm.cycleNetworkDeltasForCacheTime(cacheTimeMs)
// Calculate expected results precisely
deltaSent := uint64(2000000) // 3000000 - 1000000
deltaRecv := uint64(1000000) // 1500000 - 500000
expectedElapsedMs := uint64(1000) // Exactly 1000ms
expectedSentRate := deltaSent * 1000 / expectedElapsedMs // Should be exactly 2000000
expectedRecvRate := deltaRecv * 1000 / expectedElapsedMs // Should be exactly 1000000
// Second call with changed data
sent2, recv2 := dm.calculateNetworkStats(ctr, apiStats2, stats, true, "test", cacheTimeMs)
// Should be exactly the expected rates (no tolerance needed)
assert.Equal(t, expectedSentRate, sent2)
assert.Equal(t, expectedRecvRate, recv2)
// Bad speed cap: set absurd delta over 1ms and expect 0 due to cap
dm.cycleNetworkDeltasForCacheTime(cacheTimeMs)
stats.PrevReadTime = time.Now().Add(-1 * time.Millisecond)
apiStats1.Networks["eth0"] = container.NetworkStats{TxBytes: 0, RxBytes: 0}
apiStats2.Networks["eth0"] = container.NetworkStats{TxBytes: 10 * 1024 * 1024 * 1024, RxBytes: 0} // 10GB delta
_, _ = dm.calculateNetworkStats(ctr, apiStats1, stats, true, "test", cacheTimeMs) // baseline
dm.cycleNetworkDeltasForCacheTime(cacheTimeMs)
sent3, recv3 := dm.calculateNetworkStats(ctr, apiStats2, stats, true, "test", cacheTimeMs)
assert.Equal(t, uint64(0), sent3)
assert.Equal(t, uint64(0), recv3)
}
func TestContainerStatsEndToEndWithRealData(t *testing.T) {
// Load minimal container stats
data, err := os.ReadFile("test-data/container.json")
require.NoError(t, err)
var apiStats container.ApiStats
err = json.Unmarshal(data, &apiStats)
require.NoError(t, err)
// Create a docker manager with proper initialization
dm := &dockerManager{
lastCpuContainer: make(map[uint16]map[string]uint64),
lastCpuSystem: make(map[uint16]map[string]uint64),
lastCpuReadTime: make(map[uint16]map[string]time.Time),
networkSentTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
networkRecvTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
containerStatsMap: make(map[string]*container.Stats),
}
// Initialize CPU tracking
cacheTimeMs := uint16(30000)
dm.initializeCpuTracking(cacheTimeMs)
// Create container info
ctr := &container.ApiInfo{
IdShort: "abc123",
}
// Initialize container stats
stats := &container.Stats{Name: "jellyfin"}
dm.containerStatsMap[ctr.IdShort] = stats
// Test individual components that we can verify
usedMemory, memErr := calculateMemoryUsage(&apiStats, false)
assert.NoError(t, memErr)
assert.Greater(t, usedMemory, uint64(0))
// Test CPU percentage validation
cpuPct := 85.5
err = validateCpuPercentage(cpuPct, "jellyfin")
assert.NoError(t, err)
err = validateCpuPercentage(150.0, "jellyfin")
assert.Error(t, err)
// Test stats value updates
testStats := &container.Stats{}
testTime := time.Now()
updateContainerStatsValues(testStats, cpuPct, usedMemory, 1000000, 500000, testTime)
assert.Equal(t, cpuPct, testStats.Cpu)
assert.Equal(t, bytesToMegabytes(float64(usedMemory)), testStats.Mem)
assert.Equal(t, bytesToMegabytes(1000000), testStats.NetworkSent)
assert.Equal(t, bytesToMegabytes(500000), testStats.NetworkRecv)
assert.Equal(t, testTime, testStats.PrevReadTime)
}
func TestEdgeCasesWithRealData(t *testing.T) {
// Test with minimal container stats
minimalStats := &container.ApiStats{
CPUStats: container.CPUStats{
CPUUsage: container.CPUUsage{TotalUsage: 1000},
SystemUsage: 50000,
},
MemoryStats: container.MemoryStats{
Usage: 1000000,
Stats: container.MemoryStatsStats{
Cache: 0,
InactiveFile: 0,
},
},
Networks: map[string]container.NetworkStats{
"eth0": {TxBytes: 1000, RxBytes: 500},
},
}
// Test memory calculation with zero cache/inactive
usedMemory, err := calculateMemoryUsage(minimalStats, false)
assert.NoError(t, err)
assert.Equal(t, uint64(1000000), usedMemory) // Should equal usage when no cache
// Test CPU percentage calculation
cpuPct := minimalStats.CalculateCpuPercentLinux(0, 0) // First run
assert.Equal(t, 0.0, cpuPct)
// Test with Windows data
minimalStats.MemoryStats.PrivateWorkingSet = 800000
usedMemory, err = calculateMemoryUsage(minimalStats, true)
assert.NoError(t, err)
assert.Equal(t, uint64(800000), usedMemory)
}
func TestDockerStatsWorkflow(t *testing.T) {
// Test the complete workflow that can be tested without HTTP calls
dm := &dockerManager{
lastCpuContainer: make(map[uint16]map[string]uint64),
lastCpuSystem: make(map[uint16]map[string]uint64),
networkSentTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
networkRecvTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
containerStatsMap: make(map[string]*container.Stats),
}
cacheTimeMs := uint16(30000)
// Test CPU tracking workflow
dm.initializeCpuTracking(cacheTimeMs)
assert.NotNil(t, dm.lastCpuContainer[cacheTimeMs])
// Test setting and getting CPU values
dm.setCpuCurrentValues(cacheTimeMs, "test-container", 1000, 50000)
containerVal, systemVal := dm.getCpuPreviousValues(cacheTimeMs, "test-container")
assert.Equal(t, uint64(1000), containerVal)
assert.Equal(t, uint64(50000), systemVal)
// Test network tracking workflow (multi-interface summation)
sentTracker := dm.getNetworkTracker(cacheTimeMs, true)
recvTracker := dm.getNetworkTracker(cacheTimeMs, false)
// Simulate two interfaces summed by setting combined totals
sentTracker.Set("test-container", 1000+2000)
recvTracker.Set("test-container", 500+700)
deltaSent := sentTracker.Delta("test-container")
deltaRecv := recvTracker.Delta("test-container")
assert.Equal(t, uint64(0), deltaSent) // No previous value
assert.Equal(t, uint64(0), deltaRecv)
// Cycle and test again
dm.cycleNetworkDeltasForCacheTime(cacheTimeMs)
// Increase each interface total (combined totals go up by 1500 and 800)
sentTracker.Set("test-container", (1000+2000)+1500)
recvTracker.Set("test-container", (500+700)+800)
deltaSent = sentTracker.Delta("test-container")
deltaRecv = recvTracker.Delta("test-container")
assert.Equal(t, uint64(1500), deltaSent)
assert.Equal(t, uint64(800), deltaRecv)
}
func TestNetworkRateCalculationFormula(t *testing.T) {
// Test the exact formula used in calculateNetworkStats
testCases := []struct {
name string
deltaBytes uint64
elapsedMs uint64
expectedRate uint64
}{
{"1MB over 1 second", 1000000, 1000, 1000000},
{"2MB over 1 second", 2000000, 1000, 2000000},
{"1MB over 2 seconds", 1000000, 2000, 500000},
{"500KB over 500ms", 500000, 500, 1000000},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// This is the exact formula from calculateNetworkStats
actualRate := tc.deltaBytes * 1000 / tc.elapsedMs
assert.Equal(t, tc.expectedRate, actualRate,
"Rate calculation should be exact: %d bytes * 1000 / %d ms = %d",
tc.deltaBytes, tc.elapsedMs, tc.expectedRate)
})
}
}
func TestDeltaTrackerCacheTimeIsolation(t *testing.T) {
// Test that different cache times have separate DeltaTracker instances
dm := &dockerManager{
networkSentTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
networkRecvTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
}
ctr := &container.ApiInfo{IdShort: "web-server"}
cacheTime1 := uint16(30000)
cacheTime2 := uint16(60000)
// Get trackers for different cache times (creates separate instances)
sentTracker1 := dm.getNetworkTracker(cacheTime1, true)
recvTracker1 := dm.getNetworkTracker(cacheTime1, false)
sentTracker2 := dm.getNetworkTracker(cacheTime2, true)
recvTracker2 := dm.getNetworkTracker(cacheTime2, false)
// Verify they are different instances
assert.NotSame(t, sentTracker1, sentTracker2)
assert.NotSame(t, recvTracker1, recvTracker2)
// Set values for cache time 1
sentTracker1.Set(ctr.IdShort, 1000000)
recvTracker1.Set(ctr.IdShort, 500000)
// Set values for cache time 2
sentTracker2.Set(ctr.IdShort, 2000000)
recvTracker2.Set(ctr.IdShort, 1000000)
// Verify they don't interfere (both should return 0 since no previous values)
assert.Equal(t, uint64(0), sentTracker1.Delta(ctr.IdShort))
assert.Equal(t, uint64(0), recvTracker1.Delta(ctr.IdShort))
assert.Equal(t, uint64(0), sentTracker2.Delta(ctr.IdShort))
assert.Equal(t, uint64(0), recvTracker2.Delta(ctr.IdShort))
// Cycle cache time 1 trackers
dm.cycleNetworkDeltasForCacheTime(cacheTime1)
// Set new values for cache time 1
sentTracker1.Set(ctr.IdShort, 3000000) // 2MB increase
recvTracker1.Set(ctr.IdShort, 1500000) // 1MB increase
// Cache time 1 should show deltas, cache time 2 should still be 0
assert.Equal(t, uint64(2000000), sentTracker1.Delta(ctr.IdShort))
assert.Equal(t, uint64(1000000), recvTracker1.Delta(ctr.IdShort))
assert.Equal(t, uint64(0), sentTracker2.Delta(ctr.IdShort)) // Unaffected
assert.Equal(t, uint64(0), recvTracker2.Delta(ctr.IdShort)) // Unaffected
// Cycle cache time 2 and verify it works independently
dm.cycleNetworkDeltasForCacheTime(cacheTime2)
sentTracker2.Set(ctr.IdShort, 2500000) // 0.5MB increase
recvTracker2.Set(ctr.IdShort, 1200000) // 0.2MB increase
assert.Equal(t, uint64(500000), sentTracker2.Delta(ctr.IdShort))
assert.Equal(t, uint64(200000), recvTracker2.Delta(ctr.IdShort))
}
func TestConstantsAndUtilityFunctions(t *testing.T) {
// Test constants are properly defined
assert.Equal(t, uint16(60000), defaultCacheTimeMs)
assert.Equal(t, uint64(5e9), maxNetworkSpeedBps)
assert.Equal(t, 2100, dockerTimeoutMs)
// Test utility functions
assert.Equal(t, 1.5, twoDecimals(1.499))
assert.Equal(t, 1.5, twoDecimals(1.5))
assert.Equal(t, 1.5, twoDecimals(1.501))
assert.Equal(t, 1.0, bytesToMegabytes(1048576)) // 1 MB
assert.Equal(t, 0.5, bytesToMegabytes(524288)) // 512 KB
assert.Equal(t, 0.0, bytesToMegabytes(0))
}


@@ -5,6 +5,7 @@ import (
"bytes"
"encoding/json"
"fmt"
"maps"
"os/exec"
"regexp"
"strconv"
@@ -27,13 +28,10 @@ const (
nvidiaSmiInterval string = "4" // in seconds
tegraStatsInterval string = "3700" // in milliseconds
rocmSmiInterval time.Duration = 4300 * time.Millisecond
// Command retry and timeout constants
retryWaitTime time.Duration = 5 * time.Second
maxFailureRetries int = 5
cmdBufferSize uint16 = 10 * 1024
// Unit Conversions
mebibytesInAMegabyte float64 = 1.024 // nvidia-smi reports memory in MiB
milliwattsInAWatt float64 = 1000.0 // tegrastats reports power in mW
@@ -42,10 +40,26 @@ const (
// GPUManager manages data collection for GPUs (either Nvidia or AMD)
type GPUManager struct {
sync.Mutex
nvidiaSmi bool
rocmSmi bool
tegrastats bool
GpuDataMap map[string]*system.GPUData
nvidiaSmi bool
rocmSmi bool
tegrastats bool
intelGpuStats bool
GpuDataMap map[string]*system.GPUData
// lastAvgData stores the last calculated averages for each GPU
// Used when a collection happens before new data arrives (Count == 0)
lastAvgData map[string]system.GPUData
// Per-cache-key tracking for delta calculations
// cacheKey -> gpuId -> snapshot of last count/usage/power values
lastSnapshots map[uint16]map[string]*gpuSnapshot
}
// gpuSnapshot stores the last observed incremental values for delta tracking
type gpuSnapshot struct {
count uint32
usage float64
power float64
powerPkg float64
engines map[string]float64
}
// RocmSmiJson represents the JSON structure of rocm-smi output
@@ -66,6 +80,7 @@ type gpuCollector struct {
cmdArgs []string
parse func([]byte) bool // returns true if valid data was found
buf []byte
bufSize uint16
}
var errNoValidData = fmt.Errorf("no valid GPU data found") // Error for missing data
@@ -99,7 +114,7 @@ func (c *gpuCollector) collect() error {
scanner := bufio.NewScanner(stdout)
if c.buf == nil {
c.buf = make([]byte, 0, cmdBufferSize)
c.buf = make([]byte, 0, c.bufSize)
}
scanner.Buffer(c.buf, bufio.MaxScanTokenSize)
@@ -230,36 +245,21 @@ func (gm *GPUManager) parseAmdData(output []byte) bool {
return true
}
// sums and resets the current GPU utilization data since the last update
func (gm *GPUManager) GetCurrentData() map[string]system.GPUData {
// GetCurrentData returns GPU utilization data averaged since the last call with this cacheKey
func (gm *GPUManager) GetCurrentData(cacheKey uint16) map[string]system.GPUData {
gm.Lock()
defer gm.Unlock()
// check for GPUs with the same name
nameCounts := make(map[string]int)
for _, gpu := range gm.GpuDataMap {
nameCounts[gpu.Name]++
}
gm.initializeSnapshots(cacheKey)
nameCounts := gm.countGPUNames()
// copy / reset the data
gpuData := make(map[string]system.GPUData, len(gm.GpuDataMap))
for id, gpu := range gm.GpuDataMap {
gpuAvg := *gpu
gpuAvg := gm.calculateGPUAverage(id, gpu, cacheKey)
gm.updateInstantaneousValues(&gpuAvg, gpu)
gm.storeSnapshot(id, gpu, cacheKey)
gpuAvg.Temperature = twoDecimals(gpu.Temperature)
gpuAvg.MemoryUsed = twoDecimals(gpu.MemoryUsed)
gpuAvg.MemoryTotal = twoDecimals(gpu.MemoryTotal)
// avoid division by zero
if gpu.Count > 0 {
gpuAvg.Usage = twoDecimals(gpu.Usage / gpu.Count)
gpuAvg.Power = twoDecimals(gpu.Power / gpu.Count)
}
// reset accumulators in the original
gpu.Usage, gpu.Power, gpu.Count = 0, 0, 0
// append id to the name if there are multiple GPUs with the same name
// Append id to name if there are multiple GPUs with the same name
if nameCounts[gpu.Name] > 1 {
gpuAvg.Name = fmt.Sprintf("%s %s", gpu.Name, id)
}
@@ -269,6 +269,115 @@ func (gm *GPUManager) GetCurrentData() map[string]system.GPUData {
return gpuData
}
// initializeSnapshots ensures snapshot maps are initialized for the given cache key
func (gm *GPUManager) initializeSnapshots(cacheKey uint16) {
if gm.lastAvgData == nil {
gm.lastAvgData = make(map[string]system.GPUData)
}
if gm.lastSnapshots == nil {
gm.lastSnapshots = make(map[uint16]map[string]*gpuSnapshot)
}
if gm.lastSnapshots[cacheKey] == nil {
gm.lastSnapshots[cacheKey] = make(map[string]*gpuSnapshot)
}
}
// countGPUNames returns a map of GPU names to their occurrence count
func (gm *GPUManager) countGPUNames() map[string]int {
nameCounts := make(map[string]int)
for _, gpu := range gm.GpuDataMap {
nameCounts[gpu.Name]++
}
return nameCounts
}
// calculateGPUAverage computes the average GPU metrics since the last snapshot for this cache key
func (gm *GPUManager) calculateGPUAverage(id string, gpu *system.GPUData, cacheKey uint16) system.GPUData {
lastSnapshot := gm.lastSnapshots[cacheKey][id]
currentCount := uint32(gpu.Count)
deltaCount := gm.calculateDeltaCount(currentCount, lastSnapshot)
// If no new data arrived, use last known average
if deltaCount == 0 {
return gm.lastAvgData[id] // zero value if not found
}
// Calculate new average
gpuAvg := *gpu
deltaUsage, deltaPower, deltaPowerPkg := gm.calculateDeltas(gpu, lastSnapshot)
gpuAvg.Power = twoDecimals(deltaPower / float64(deltaCount))
if gpu.Engines != nil {
// make fresh map for averaged engine metrics to avoid mutating
// the accumulator map stored in gm.GpuDataMap
gpuAvg.Engines = make(map[string]float64, len(gpu.Engines))
gpuAvg.Usage = gm.calculateIntelGPUUsage(&gpuAvg, gpu, lastSnapshot, deltaCount)
gpuAvg.PowerPkg = twoDecimals(deltaPowerPkg / float64(deltaCount))
} else {
gpuAvg.Usage = twoDecimals(deltaUsage / float64(deltaCount))
}
gm.lastAvgData[id] = gpuAvg
return gpuAvg
}
// calculateDeltaCount returns the change in count since the last snapshot
func (gm *GPUManager) calculateDeltaCount(currentCount uint32, lastSnapshot *gpuSnapshot) uint32 {
if lastSnapshot != nil {
return currentCount - lastSnapshot.count
}
return currentCount
}
// calculateDeltas computes the change in usage, power, and powerPkg since the last snapshot
func (gm *GPUManager) calculateDeltas(gpu *system.GPUData, lastSnapshot *gpuSnapshot) (deltaUsage, deltaPower, deltaPowerPkg float64) {
if lastSnapshot != nil {
return gpu.Usage - lastSnapshot.usage,
gpu.Power - lastSnapshot.power,
gpu.PowerPkg - lastSnapshot.powerPkg
}
return gpu.Usage, gpu.Power, gpu.PowerPkg
}
// calculateIntelGPUUsage computes Intel GPU usage from engine metrics and returns max engine usage
func (gm *GPUManager) calculateIntelGPUUsage(gpuAvg, gpu *system.GPUData, lastSnapshot *gpuSnapshot, deltaCount uint32) float64 {
maxEngineUsage := 0.0
for name, engine := range gpu.Engines {
var deltaEngine float64
if lastSnapshot != nil && lastSnapshot.engines != nil {
deltaEngine = engine - lastSnapshot.engines[name]
} else {
deltaEngine = engine
}
gpuAvg.Engines[name] = twoDecimals(deltaEngine / float64(deltaCount))
maxEngineUsage = max(maxEngineUsage, deltaEngine/float64(deltaCount))
}
return twoDecimals(maxEngineUsage)
}
// updateInstantaneousValues updates values that should reflect current state, not averages
func (gm *GPUManager) updateInstantaneousValues(gpuAvg *system.GPUData, gpu *system.GPUData) {
gpuAvg.Temperature = twoDecimals(gpu.Temperature)
gpuAvg.MemoryUsed = twoDecimals(gpu.MemoryUsed)
gpuAvg.MemoryTotal = twoDecimals(gpu.MemoryTotal)
}
// storeSnapshot saves the current GPU state for this cache key
func (gm *GPUManager) storeSnapshot(id string, gpu *system.GPUData, cacheKey uint16) {
snapshot := &gpuSnapshot{
count: uint32(gpu.Count),
usage: gpu.Usage,
power: gpu.Power,
powerPkg: gpu.PowerPkg,
}
if gpu.Engines != nil {
snapshot.engines = make(map[string]float64, len(gpu.Engines))
maps.Copy(snapshot.engines, gpu.Engines)
}
gm.lastSnapshots[cacheKey][id] = snapshot
}
// detectGPUs checks for the presence of GPU management tools (nvidia-smi, rocm-smi, tegrastats)
// in the system path. It sets the corresponding flags in the GPUManager struct if any of these
// tools are found. If none of the tools are found, it returns an error indicating that no GPU
@@ -284,18 +393,37 @@ func (gm *GPUManager) detectGPUs() error {
gm.tegrastats = true
gm.nvidiaSmi = false
}
if gm.nvidiaSmi || gm.rocmSmi || gm.tegrastats {
if _, err := exec.LookPath(intelGpuStatsCmd); err == nil {
gm.intelGpuStats = true
}
if gm.nvidiaSmi || gm.rocmSmi || gm.tegrastats || gm.intelGpuStats {
return nil
}
return fmt.Errorf("no GPU found - install nvidia-smi, rocm-smi, or tegrastats")
return fmt.Errorf("no GPU found - install nvidia-smi, rocm-smi, tegrastats, or intel_gpu_top")
}
// startCollector starts the appropriate GPU data collector based on the command
func (gm *GPUManager) startCollector(command string) {
collector := gpuCollector{
name: command,
name: command,
bufSize: 10 * 1024,
}
switch command {
case intelGpuStatsCmd:
go func() {
failures := 0
for {
if err := gm.collectIntelStats(); err != nil {
failures++
if failures > maxFailureRetries {
break
}
slog.Warn("Error collecting Intel GPU data; see https://beszel.dev/guide/gpu", "err", err)
time.Sleep(retryWaitTime)
continue
}
}
}()
case nvidiaSmiCmd:
collector.cmdArgs = []string{
"-l", nvidiaSmiInterval,
@@ -329,6 +457,9 @@ func (gm *GPUManager) startCollector(command string) {
// NewGPUManager creates and initializes a new GPUManager
func NewGPUManager() (*GPUManager, error) {
if skipGPU, _ := GetEnv("SKIP_GPU"); skipGPU == "true" {
return nil, nil
}
var gm GPUManager
if err := gm.detectGPUs(); err != nil {
return nil, err
@@ -344,6 +475,9 @@ func NewGPUManager() (*GPUManager, error) {
if gm.tegrastats {
gm.startCollector(tegraStatsCmd)
}
if gm.intelGpuStats {
gm.startCollector(intelGpuStatsCmd)
}
return &gm, nil
}
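
The cache-key snapshotting above turns the ever-growing accumulators into per-interval averages: for a given key, avg = (usage_now - usage_at_last_snapshot) / (count_now - count_at_last_snapshot), falling back to the last stored average when the count has not moved. Worked numbers matching the test further below: with totals usage=100, power=200 after 2 samples, a first read on key A gives 100/2 = 50 and 200/2 = 100; after three more samples (totals 160, 350, count 5), key A gets (160-100)/3 = 20 and (350-200)/3 = 50, while key B, reading for the first time, gets 160/5 = 32 and 350/5 = 70.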

agent/gpu_intel.go (new file)

@@ -0,0 +1,199 @@
package agent
import (
"bufio"
"io"
"os/exec"
"strconv"
"strings"
"github.com/henrygd/beszel/internal/entities/system"
)
const (
intelGpuStatsCmd string = "intel_gpu_top"
intelGpuStatsInterval string = "3300" // in milliseconds
)
type intelGpuStats struct {
PowerGPU float64
PowerPkg float64
Engines map[string]float64
}
// updateIntelFromStats updates aggregated GPU data from a single intelGpuStats sample
func (gm *GPUManager) updateIntelFromStats(sample *intelGpuStats) bool {
gm.Lock()
defer gm.Unlock()
// only one gpu for now - cmd doesn't provide all by default
gpuData, ok := gm.GpuDataMap["0"]
if !ok {
gpuData = &system.GPUData{Name: "GPU", Engines: make(map[string]float64)}
gm.GpuDataMap["0"] = gpuData
}
gpuData.Power += sample.PowerGPU
gpuData.PowerPkg += sample.PowerPkg
if gpuData.Engines == nil {
gpuData.Engines = make(map[string]float64, len(sample.Engines))
}
for name, engine := range sample.Engines {
gpuData.Engines[name] += engine
}
gpuData.Count++
return true
}
// collectIntelStats executes intel_gpu_top in text mode (-l) and parses the output
func (gm *GPUManager) collectIntelStats() (err error) {
cmd := exec.Command(intelGpuStatsCmd, "-s", intelGpuStatsInterval, "-l")
// Avoid blocking if intel_gpu_top writes to stderr
cmd.Stderr = io.Discard
stdout, err := cmd.StdoutPipe()
if err != nil {
return err
}
if err := cmd.Start(); err != nil {
return err
}
// Ensure we always reap the child to avoid zombies on any return path and
// propagate a non-zero exit code if no other error was set.
defer func() {
// Best-effort close of the pipe (unblock the child if it writes)
_ = stdout.Close()
if cmd.ProcessState == nil || !cmd.ProcessState.Exited() {
_ = cmd.Process.Kill()
}
if waitErr := cmd.Wait(); err == nil && waitErr != nil {
err = waitErr
}
}()
scanner := bufio.NewScanner(stdout)
var header1 string
var engineNames []string
var friendlyNames []string
var preEngineCols int
var powerIndex int
var hadDataRow bool
// skip first data row because it sometimes has erroneous data
var skippedFirstDataRow bool
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
if line == "" {
continue
}
// first header line
if strings.HasPrefix(line, "Freq") {
header1 = line
continue
}
// second header line
if strings.HasPrefix(line, "req") {
engineNames, friendlyNames, powerIndex, preEngineCols = gm.parseIntelHeaders(header1, line)
continue
}
// Data row
if !skippedFirstDataRow {
skippedFirstDataRow = true
continue
}
sample, err := gm.parseIntelData(line, engineNames, friendlyNames, powerIndex, preEngineCols)
if err != nil {
return err
}
hadDataRow = true
gm.updateIntelFromStats(&sample)
}
if scanErr := scanner.Err(); scanErr != nil {
return scanErr
}
if !hadDataRow {
return errNoValidData
}
return nil
}
func (gm *GPUManager) parseIntelHeaders(header1 string, header2 string) (engineNames []string, friendlyNames []string, powerIndex int, preEngineCols int) {
// Build indexes
h1 := strings.Fields(header1)
h2 := strings.Fields(header2)
powerIndex = -1 // Initialize to -1, will be set to actual index if found
// Collect engine names from header1
for _, col := range h1 {
key := strings.TrimRightFunc(col, func(r rune) bool { return r >= '0' && r <= '9' })
var friendly string
switch key {
case "RCS":
friendly = "Render/3D"
case "BCS":
friendly = "Blitter"
case "VCS":
friendly = "Video"
case "VECS":
friendly = "VideoEnhance"
case "CCS":
friendly = "Compute"
default:
continue
}
engineNames = append(engineNames, key)
friendlyNames = append(friendlyNames, friendly)
}
// find power gpu index among pre-engine columns
if n := len(engineNames); n > 0 {
preEngineCols = max(len(h2)-3*n, 0)
limit := min(len(h2), preEngineCols)
for i := range limit {
if strings.EqualFold(h2[i], "gpu") {
powerIndex = i
break
}
}
}
return engineNames, friendlyNames, powerIndex, preEngineCols
}
func (gm *GPUManager) parseIntelData(line string, engineNames []string, friendlyNames []string, powerIndex int, preEngineCols int) (sample intelGpuStats, err error) {
fields := strings.Fields(line)
if len(fields) == 0 {
return sample, errNoValidData
}
// Make sure row has enough columns for engines
if need := preEngineCols + 3*len(engineNames); len(fields) < need {
return sample, errNoValidData
}
if powerIndex >= 0 && powerIndex < len(fields) {
if v, perr := strconv.ParseFloat(fields[powerIndex], 64); perr == nil {
sample.PowerGPU = v
}
if v, perr := strconv.ParseFloat(fields[powerIndex+1], 64); perr == nil {
sample.PowerPkg = v
}
}
if len(engineNames) > 0 {
sample.Engines = make(map[string]float64, len(engineNames))
for k := range engineNames {
base := preEngineCols + 3*k
if base < len(fields) {
busy := 0.0
if v, e := strconv.ParseFloat(fields[base], 64); e == nil {
busy = v
}
cur := sample.Engines[friendlyNames[k]]
sample.Engines[friendlyNames[k]] = cur + busy
} else {
sample.Engines[friendlyNames[k]] = 0
}
}
}
return sample, nil
}
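
Putting the two GPU files together: collectIntelStats feeds each parsed sample into updateIntelFromStats, which only accumulates sums and a sample count; the averaging happens later in GetCurrentData per cache key. A hedged sketch of that flow with two fabricated samples (field names as in the diffs above; with no prior snapshot the average is simply sum/count):

package main

import (
	"fmt"

	"github.com/henrygd/beszel/internal/entities/system"
)

func main() {
	// Accumulator as updateIntelFromStats maintains it.
	gpu := &system.GPUData{Name: "GPU", Engines: map[string]float64{}}

	// Two hypothetical samples (engine busy % and watts) folded in the same way.
	for _, s := range []struct{ render, power float64 }{{80, 10}, {40, 12}} {
		gpu.Engines["Render/3D"] += s.render
		gpu.Power += s.power
		gpu.Count++
	}

	fmt.Println(gpu.Engines["Render/3D"]/gpu.Count, gpu.Power/gpu.Count) // 60 11
}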


@@ -332,7 +332,7 @@ func TestParseJetsonData(t *testing.T) {
}
func TestGetCurrentData(t *testing.T) {
t.Run("calculates averages and resets accumulators", func(t *testing.T) {
t.Run("calculates averages with per-cache-key delta tracking", func(t *testing.T) {
gm := &GPUManager{
GpuDataMap: map[string]*system.GPUData{
"0": {
@@ -365,7 +365,8 @@ func TestGetCurrentData(t *testing.T) {
},
}
result := gm.GetCurrentData()
cacheKey := uint16(5000)
result := gm.GetCurrentData(cacheKey)
// Verify name disambiguation
assert.Equal(t, "GPU1 0", result["0"].Name)
@@ -378,13 +379,19 @@ func TestGetCurrentData(t *testing.T) {
assert.InDelta(t, 30.0, result["1"].Usage, 0.01)
assert.InDelta(t, 60.0, result["1"].Power, 0.01)
// Verify that accumulators in the original map are reset
assert.Equal(t, float64(0), gm.GpuDataMap["0"].Count, "GPU 0 Count should be reset")
assert.Equal(t, float64(0), gm.GpuDataMap["0"].Usage, "GPU 0 Usage should be reset")
assert.Equal(t, float64(0), gm.GpuDataMap["0"].Power, "GPU 0 Power should be reset")
assert.Equal(t, float64(0), gm.GpuDataMap["1"].Count, "GPU 1 Count should be reset")
assert.Equal(t, float64(0), gm.GpuDataMap["1"].Usage, "GPU 1 Usage should be reset")
assert.Equal(t, float64(0), gm.GpuDataMap["1"].Power, "GPU 1 Power should be reset")
// Verify that accumulators in the original map are NOT reset (they keep growing)
assert.EqualValues(t, 2, gm.GpuDataMap["0"].Count, "GPU 0 Count should remain at 2")
assert.EqualValues(t, 100, gm.GpuDataMap["0"].Usage, "GPU 0 Usage should remain at 100")
assert.Equal(t, 200.0, gm.GpuDataMap["0"].Power, "GPU 0 Power should remain at 200")
assert.Equal(t, 1.0, gm.GpuDataMap["1"].Count, "GPU 1 Count should remain at 1")
assert.Equal(t, 30.0, gm.GpuDataMap["1"].Usage, "GPU 1 Usage should remain at 30")
assert.Equal(t, 60.0, gm.GpuDataMap["1"].Power, "GPU 1 Power should remain at 60")
// Verify snapshots were stored for this cache key
assert.NotNil(t, gm.lastSnapshots[cacheKey]["0"])
assert.Equal(t, uint32(2), gm.lastSnapshots[cacheKey]["0"].count)
assert.Equal(t, 100.0, gm.lastSnapshots[cacheKey]["0"].usage)
assert.Equal(t, 200.0, gm.lastSnapshots[cacheKey]["0"].power)
})
t.Run("handles zero count without panicking", func(t *testing.T) {
@@ -399,17 +406,543 @@ func TestGetCurrentData(t *testing.T) {
},
}
cacheKey := uint16(5000)
var result map[string]system.GPUData
assert.NotPanics(t, func() {
result = gm.GetCurrentData()
result = gm.GetCurrentData(cacheKey)
})
// Check that usage and power are 0
assert.Equal(t, 0.0, result["0"].Usage)
assert.Equal(t, 0.0, result["0"].Power)
// Verify reset count
assert.Equal(t, float64(0), gm.GpuDataMap["0"].Count)
// Verify count remains 0
assert.EqualValues(t, 0, gm.GpuDataMap["0"].Count)
})
t.Run("uses last average when no new data arrives", func(t *testing.T) {
gm := &GPUManager{
GpuDataMap: map[string]*system.GPUData{
"0": {
Name: "TestGPU",
Temperature: 55.0,
MemoryUsed: 1500,
MemoryTotal: 8000,
Usage: 100, // Will average to 50
Power: 200, // Will average to 100
Count: 2,
},
},
}
cacheKey := uint16(5000)
// First collection - should calculate averages and store them
result1 := gm.GetCurrentData(cacheKey)
assert.InDelta(t, 50.0, result1["0"].Usage, 0.01)
assert.InDelta(t, 100.0, result1["0"].Power, 0.01)
assert.EqualValues(t, 2, gm.GpuDataMap["0"].Count, "Count should remain at 2")
// Update temperature but no new usage/power data (count stays same)
gm.GpuDataMap["0"].Temperature = 60.0
gm.GpuDataMap["0"].MemoryUsed = 1600
// Second collection - should use last averages since count hasn't changed (delta = 0)
result2 := gm.GetCurrentData(cacheKey)
assert.InDelta(t, 50.0, result2["0"].Usage, 0.01, "Should use last average")
assert.InDelta(t, 100.0, result2["0"].Power, 0.01, "Should use last average")
assert.InDelta(t, 60.0, result2["0"].Temperature, 0.01, "Should use current temperature")
assert.InDelta(t, 1600.0, result2["0"].MemoryUsed, 0.01, "Should use current memory")
assert.EqualValues(t, 2, gm.GpuDataMap["0"].Count, "Count should still be 2")
})
t.Run("tracks separate averages per cache key", func(t *testing.T) {
gm := &GPUManager{
GpuDataMap: map[string]*system.GPUData{
"0": {
Name: "TestGPU",
Temperature: 55.0,
MemoryUsed: 1500,
MemoryTotal: 8000,
Usage: 100, // Initial: 100 over 2 counts = 50 avg
Power: 200, // Initial: 200 over 2 counts = 100 avg
Count: 2,
},
},
}
cacheKey1 := uint16(5000)
cacheKey2 := uint16(10000)
// First check with cacheKey1 - baseline
result1 := gm.GetCurrentData(cacheKey1)
assert.InDelta(t, 50.0, result1["0"].Usage, 0.01, "CacheKey1: Initial average should be 50")
assert.InDelta(t, 100.0, result1["0"].Power, 0.01, "CacheKey1: Initial average should be 100")
// Simulate GPU activity - accumulate more data
gm.GpuDataMap["0"].Usage += 60 // Now total: 160
gm.GpuDataMap["0"].Power += 150 // Now total: 350
gm.GpuDataMap["0"].Count += 3 // Now total: 5
// Check with cacheKey1 again - should get delta since last cacheKey1 check
result2 := gm.GetCurrentData(cacheKey1)
assert.InDelta(t, 20.0, result2["0"].Usage, 0.01, "CacheKey1: Delta average should be 60/3 = 20")
assert.InDelta(t, 50.0, result2["0"].Power, 0.01, "CacheKey1: Delta average should be 150/3 = 50")
// Check with cacheKey2 for the first time - should get average since beginning
result3 := gm.GetCurrentData(cacheKey2)
assert.InDelta(t, 32.0, result3["0"].Usage, 0.01, "CacheKey2: Total average should be 160/5 = 32")
assert.InDelta(t, 70.0, result3["0"].Power, 0.01, "CacheKey2: Total average should be 350/5 = 70")
// Simulate more GPU activity
gm.GpuDataMap["0"].Usage += 80 // Now total: 240
gm.GpuDataMap["0"].Power += 160 // Now total: 510
gm.GpuDataMap["0"].Count += 2 // Now total: 7
// Check with cacheKey1 - should get delta since last cacheKey1 check
result4 := gm.GetCurrentData(cacheKey1)
assert.InDelta(t, 40.0, result4["0"].Usage, 0.01, "CacheKey1: New delta average should be 80/2 = 40")
assert.InDelta(t, 80.0, result4["0"].Power, 0.01, "CacheKey1: New delta average should be 160/2 = 80")
// Check with cacheKey2 - should get delta since last cacheKey2 check
result5 := gm.GetCurrentData(cacheKey2)
assert.InDelta(t, 40.0, result5["0"].Usage, 0.01, "CacheKey2: Delta average should be 80/2 = 40")
assert.InDelta(t, 80.0, result5["0"].Power, 0.01, "CacheKey2: Delta average should be 160/2 = 80")
// Verify snapshots exist for both cache keys
assert.NotNil(t, gm.lastSnapshots[cacheKey1])
assert.NotNil(t, gm.lastSnapshots[cacheKey2])
assert.NotNil(t, gm.lastSnapshots[cacheKey1]["0"])
assert.NotNil(t, gm.lastSnapshots[cacheKey2]["0"])
})
}
func TestCalculateDeltaCount(t *testing.T) {
gm := &GPUManager{}
t.Run("with no previous snapshot", func(t *testing.T) {
delta := gm.calculateDeltaCount(10, nil)
assert.Equal(t, uint32(10), delta, "Should return current count when no snapshot exists")
})
t.Run("with previous snapshot", func(t *testing.T) {
snapshot := &gpuSnapshot{count: 5}
delta := gm.calculateDeltaCount(15, snapshot)
assert.Equal(t, uint32(10), delta, "Should return difference between current and snapshot")
})
t.Run("with same count", func(t *testing.T) {
snapshot := &gpuSnapshot{count: 10}
delta := gm.calculateDeltaCount(10, snapshot)
assert.Equal(t, uint32(0), delta, "Should return zero when count hasn't changed")
})
}
func TestCalculateDeltas(t *testing.T) {
gm := &GPUManager{}
t.Run("with no previous snapshot", func(t *testing.T) {
gpu := &system.GPUData{
Usage: 100.5,
Power: 250.75,
PowerPkg: 300.25,
}
deltaUsage, deltaPower, deltaPowerPkg := gm.calculateDeltas(gpu, nil)
assert.Equal(t, 100.5, deltaUsage)
assert.Equal(t, 250.75, deltaPower)
assert.Equal(t, 300.25, deltaPowerPkg)
})
t.Run("with previous snapshot", func(t *testing.T) {
gpu := &system.GPUData{
Usage: 150.5,
Power: 300.75,
PowerPkg: 400.25,
}
snapshot := &gpuSnapshot{
usage: 100.5,
power: 250.75,
powerPkg: 300.25,
}
deltaUsage, deltaPower, deltaPowerPkg := gm.calculateDeltas(gpu, snapshot)
assert.InDelta(t, 50.0, deltaUsage, 0.01)
assert.InDelta(t, 50.0, deltaPower, 0.01)
assert.InDelta(t, 100.0, deltaPowerPkg, 0.01)
})
}
func TestCalculateIntelGPUUsage(t *testing.T) {
gm := &GPUManager{}
t.Run("with no previous snapshot", func(t *testing.T) {
gpuAvg := &system.GPUData{
Engines: make(map[string]float64),
}
gpu := &system.GPUData{
Engines: map[string]float64{
"Render/3D": 80.0,
"Video": 40.0,
"Compute": 60.0,
},
}
maxUsage := gm.calculateIntelGPUUsage(gpuAvg, gpu, nil, 2)
assert.Equal(t, 40.0, maxUsage, "Should return max engine usage (80/2=40)")
assert.Equal(t, 40.0, gpuAvg.Engines["Render/3D"])
assert.Equal(t, 20.0, gpuAvg.Engines["Video"])
assert.Equal(t, 30.0, gpuAvg.Engines["Compute"])
})
t.Run("with previous snapshot", func(t *testing.T) {
gpuAvg := &system.GPUData{
Engines: make(map[string]float64),
}
gpu := &system.GPUData{
Engines: map[string]float64{
"Render/3D": 180.0,
"Video": 100.0,
"Compute": 140.0,
},
}
snapshot := &gpuSnapshot{
engines: map[string]float64{
"Render/3D": 80.0,
"Video": 40.0,
"Compute": 60.0,
},
}
maxUsage := gm.calculateIntelGPUUsage(gpuAvg, gpu, snapshot, 5)
// Deltas: Render/3D=100, Video=60, Compute=80 over 5 counts
assert.Equal(t, 20.0, maxUsage, "Should return max engine delta (100/5=20)")
assert.Equal(t, 20.0, gpuAvg.Engines["Render/3D"])
assert.Equal(t, 12.0, gpuAvg.Engines["Video"])
assert.Equal(t, 16.0, gpuAvg.Engines["Compute"])
})
t.Run("handles missing engine in snapshot", func(t *testing.T) {
gpuAvg := &system.GPUData{
Engines: make(map[string]float64),
}
gpu := &system.GPUData{
Engines: map[string]float64{
"Render/3D": 100.0,
"NewEngine": 50.0,
},
}
snapshot := &gpuSnapshot{
engines: map[string]float64{
"Render/3D": 80.0,
// NewEngine doesn't exist in snapshot
},
}
maxUsage := gm.calculateIntelGPUUsage(gpuAvg, gpu, snapshot, 2)
assert.Equal(t, 25.0, maxUsage)
assert.Equal(t, 10.0, gpuAvg.Engines["Render/3D"], "Should use delta for existing engine")
assert.Equal(t, 25.0, gpuAvg.Engines["NewEngine"], "Should use full value for new engine")
})
}
func TestUpdateInstantaneousValues(t *testing.T) {
gm := &GPUManager{}
t.Run("updates temperature, memory used and total", func(t *testing.T) {
gpuAvg := &system.GPUData{
Temperature: 50.123,
MemoryUsed: 1000.456,
MemoryTotal: 8000.789,
}
gpu := &system.GPUData{
Temperature: 75.567,
MemoryUsed: 2500.891,
MemoryTotal: 8192.234,
}
gm.updateInstantaneousValues(gpuAvg, gpu)
assert.Equal(t, 75.57, gpuAvg.Temperature, "Should update and round temperature")
assert.Equal(t, 2500.89, gpuAvg.MemoryUsed, "Should update and round memory used")
assert.Equal(t, 8192.23, gpuAvg.MemoryTotal, "Should update and round memory total")
})
}
func TestStoreSnapshot(t *testing.T) {
gm := &GPUManager{
lastSnapshots: make(map[uint16]map[string]*gpuSnapshot),
}
t.Run("stores standard GPU snapshot", func(t *testing.T) {
cacheKey := uint16(5000)
gm.lastSnapshots[cacheKey] = make(map[string]*gpuSnapshot)
gpu := &system.GPUData{
Count: 10.0,
Usage: 150.5,
Power: 250.75,
PowerPkg: 300.25,
}
gm.storeSnapshot("0", gpu, cacheKey)
snapshot := gm.lastSnapshots[cacheKey]["0"]
assert.NotNil(t, snapshot)
assert.Equal(t, uint32(10), snapshot.count)
assert.Equal(t, 150.5, snapshot.usage)
assert.Equal(t, 250.75, snapshot.power)
assert.Equal(t, 300.25, snapshot.powerPkg)
assert.Nil(t, snapshot.engines, "Should not have engines for standard GPU")
})
t.Run("stores Intel GPU snapshot with engines", func(t *testing.T) {
cacheKey := uint16(10000)
gm.lastSnapshots[cacheKey] = make(map[string]*gpuSnapshot)
gpu := &system.GPUData{
Count: 5.0,
Usage: 100.0,
Power: 200.0,
PowerPkg: 250.0,
Engines: map[string]float64{
"Render/3D": 80.0,
"Video": 40.0,
},
}
gm.storeSnapshot("0", gpu, cacheKey)
snapshot := gm.lastSnapshots[cacheKey]["0"]
assert.NotNil(t, snapshot)
assert.Equal(t, uint32(5), snapshot.count)
assert.NotNil(t, snapshot.engines, "Should have engines for Intel GPU")
assert.Equal(t, 80.0, snapshot.engines["Render/3D"])
assert.Equal(t, 40.0, snapshot.engines["Video"])
assert.Len(t, snapshot.engines, 2)
})
t.Run("overwrites existing snapshot", func(t *testing.T) {
cacheKey := uint16(5000)
gm.lastSnapshots[cacheKey] = make(map[string]*gpuSnapshot)
// Store initial snapshot
gpu1 := &system.GPUData{Count: 5.0, Usage: 100.0, Power: 200.0}
gm.storeSnapshot("0", gpu1, cacheKey)
// Store updated snapshot
gpu2 := &system.GPUData{Count: 10.0, Usage: 250.0, Power: 400.0}
gm.storeSnapshot("0", gpu2, cacheKey)
snapshot := gm.lastSnapshots[cacheKey]["0"]
assert.Equal(t, uint32(10), snapshot.count, "Should overwrite previous count")
assert.Equal(t, 250.0, snapshot.usage, "Should overwrite previous usage")
assert.Equal(t, 400.0, snapshot.power, "Should overwrite previous power")
})
}
func TestCountGPUNames(t *testing.T) {
t.Run("returns empty map for no GPUs", func(t *testing.T) {
gm := &GPUManager{
GpuDataMap: make(map[string]*system.GPUData),
}
counts := gm.countGPUNames()
assert.Empty(t, counts)
})
t.Run("counts unique GPU names", func(t *testing.T) {
gm := &GPUManager{
GpuDataMap: map[string]*system.GPUData{
"0": {Name: "GPU A"},
"1": {Name: "GPU B"},
"2": {Name: "GPU C"},
},
}
counts := gm.countGPUNames()
assert.Equal(t, 1, counts["GPU A"])
assert.Equal(t, 1, counts["GPU B"])
assert.Equal(t, 1, counts["GPU C"])
assert.Len(t, counts, 3)
})
t.Run("counts duplicate GPU names", func(t *testing.T) {
gm := &GPUManager{
GpuDataMap: map[string]*system.GPUData{
"0": {Name: "RTX 4090"},
"1": {Name: "RTX 4090"},
"2": {Name: "RTX 4090"},
"3": {Name: "RTX 3080"},
},
}
counts := gm.countGPUNames()
assert.Equal(t, 3, counts["RTX 4090"])
assert.Equal(t, 1, counts["RTX 3080"])
assert.Len(t, counts, 2)
})
}
func TestInitializeSnapshots(t *testing.T) {
t.Run("initializes all maps from scratch", func(t *testing.T) {
gm := &GPUManager{}
cacheKey := uint16(5000)
gm.initializeSnapshots(cacheKey)
assert.NotNil(t, gm.lastAvgData)
assert.NotNil(t, gm.lastSnapshots)
assert.NotNil(t, gm.lastSnapshots[cacheKey])
})
t.Run("initializes only missing maps", func(t *testing.T) {
gm := &GPUManager{
lastAvgData: make(map[string]system.GPUData),
}
cacheKey := uint16(5000)
gm.initializeSnapshots(cacheKey)
assert.NotNil(t, gm.lastAvgData, "Should preserve existing lastAvgData")
assert.NotNil(t, gm.lastSnapshots)
assert.NotNil(t, gm.lastSnapshots[cacheKey])
})
t.Run("adds new cache key to existing snapshots", func(t *testing.T) {
existingKey := uint16(5000)
newKey := uint16(10000)
gm := &GPUManager{
lastSnapshots: map[uint16]map[string]*gpuSnapshot{
existingKey: {"0": {count: 10}},
},
}
gm.initializeSnapshots(newKey)
assert.NotNil(t, gm.lastSnapshots[existingKey], "Should preserve existing cache key")
assert.NotNil(t, gm.lastSnapshots[newKey], "Should add new cache key")
assert.NotNil(t, gm.lastSnapshots[existingKey]["0"], "Should preserve existing snapshot data")
})
}
func TestCalculateGPUAverage(t *testing.T) {
t.Run("returns zero value when deltaCount is zero", func(t *testing.T) {
gm := &GPUManager{
lastSnapshots: map[uint16]map[string]*gpuSnapshot{
5000: {
"0": {count: 10, usage: 100, power: 200},
},
},
lastAvgData: map[string]system.GPUData{
"0": {Usage: 50.0, Power: 100.0},
},
}
gpu := &system.GPUData{
Count: 10.0, // Same as snapshot, so delta = 0
Usage: 100.0,
Power: 200.0,
}
result := gm.calculateGPUAverage("0", gpu, 5000)
assert.Equal(t, 50.0, result.Usage, "Should return cached average")
assert.Equal(t, 100.0, result.Power, "Should return cached average")
})
t.Run("calculates average for standard GPU", func(t *testing.T) {
gm := &GPUManager{
lastSnapshots: map[uint16]map[string]*gpuSnapshot{
5000: {},
},
lastAvgData: make(map[string]system.GPUData),
}
gpu := &system.GPUData{
Name: "Test GPU",
Count: 4.0,
Usage: 200.0, // 200 / 4 = 50
Power: 400.0, // 400 / 4 = 100
}
result := gm.calculateGPUAverage("0", gpu, 5000)
assert.Equal(t, 50.0, result.Usage)
assert.Equal(t, 100.0, result.Power)
assert.Equal(t, "Test GPU", result.Name)
})
t.Run("calculates average for Intel GPU with engines", func(t *testing.T) {
gm := &GPUManager{
lastSnapshots: map[uint16]map[string]*gpuSnapshot{
5000: {},
},
lastAvgData: make(map[string]system.GPUData),
}
gpu := &system.GPUData{
Name: "Intel GPU",
Count: 5.0,
Power: 500.0,
PowerPkg: 600.0,
Engines: map[string]float64{
"Render/3D": 100.0, // 100 / 5 = 20
"Video": 50.0, // 50 / 5 = 10
},
}
result := gm.calculateGPUAverage("0", gpu, 5000)
assert.Equal(t, 100.0, result.Power)
assert.Equal(t, 120.0, result.PowerPkg)
assert.Equal(t, 20.0, result.Usage, "Should use max engine usage")
assert.Equal(t, 20.0, result.Engines["Render/3D"])
assert.Equal(t, 10.0, result.Engines["Video"])
})
t.Run("calculates delta from previous snapshot", func(t *testing.T) {
gm := &GPUManager{
lastSnapshots: map[uint16]map[string]*gpuSnapshot{
5000: {
"0": {
count: 2,
usage: 50.0,
power: 100.0,
powerPkg: 120.0,
},
},
},
lastAvgData: make(map[string]system.GPUData),
}
gpu := &system.GPUData{
Name: "Test GPU",
Count: 7.0, // Delta = 7 - 2 = 5
Usage: 200.0, // Delta = 200 - 50 = 150, avg = 150/5 = 30
Power: 350.0, // Delta = 350 - 100 = 250, avg = 250/5 = 50
PowerPkg: 420.0, // Delta = 420 - 120 = 300, avg = 300/5 = 60
}
result := gm.calculateGPUAverage("0", gpu, 5000)
assert.Equal(t, 30.0, result.Usage)
assert.Equal(t, 50.0, result.Power)
})
t.Run("stores result in lastAvgData", func(t *testing.T) {
gm := &GPUManager{
lastSnapshots: map[uint16]map[string]*gpuSnapshot{
5000: {},
},
lastAvgData: make(map[string]system.GPUData),
}
gpu := &system.GPUData{
Count: 2.0,
Usage: 100.0,
Power: 200.0,
}
result := gm.calculateGPUAverage("0", gpu, 5000)
assert.Equal(t, result, gm.lastAvgData["0"], "Should store calculated average")
})
}
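The cases above pin down the delta-average arithmetic; as a compact reference, here is a minimal sketch of that formula under the same assumptions (the helper name is illustrative and assumes package agent, not the internal implementation):
// deltaAverage mirrors the arithmetic the tests above expect:
// average = (accumulated value - previous snapshot) / (count delta).
// Illustrative only; the real calculateGPUAverage also averages engines and
// PowerPkg and falls back to the cached lastAvgData when the delta is zero.
func deltaAverage(curCount, curValue, prevCount, prevValue float64) float64 {
	deltaCount := curCount - prevCount
	if deltaCount == 0 {
		return 0 // the real code returns the cached average instead
	}
	return (curValue - prevValue) / deltaCount
}
// e.g. the "calculates delta from previous snapshot" case:
// deltaAverage(7, 200, 2, 50) == (200-50)/(7-2) == 30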
@@ -756,16 +1289,17 @@ func TestAccumulation(t *testing.T) {
continue
}
assert.InDelta(t, expected.temperature, gpu.Temperature, 0.01, "Temperature should match")
assert.InDelta(t, expected.memoryUsed, gpu.MemoryUsed, 0.01, "Memory used should match")
assert.InDelta(t, expected.memoryTotal, gpu.MemoryTotal, 0.01, "Memory total should match")
assert.InDelta(t, expected.usage, gpu.Usage, 0.01, "Usage should match")
assert.InDelta(t, expected.power, gpu.Power, 0.01, "Power should match")
assert.EqualValues(t, expected.temperature, gpu.Temperature, "Temperature should match")
assert.EqualValues(t, expected.memoryUsed, gpu.MemoryUsed, "Memory used should match")
assert.EqualValues(t, expected.memoryTotal, gpu.MemoryTotal, "Memory total should match")
assert.EqualValues(t, expected.usage, gpu.Usage, "Usage should match")
assert.EqualValues(t, expected.power, gpu.Power, "Power should match")
assert.Equal(t, expected.count, gpu.Count, "Count should match")
}
// Verify average calculation in GetCurrentData
result := gm.GetCurrentData()
cacheKey := uint16(5000)
result := gm.GetCurrentData(cacheKey)
for id, expected := range tt.expectedValues {
gpu, exists := result[id]
assert.True(t, exists, "GPU with ID %s should exist in GetCurrentData result", id)
@@ -773,22 +1307,320 @@ func TestAccumulation(t *testing.T) {
continue
}
assert.InDelta(t, expected.temperature, gpu.Temperature, 0.01, "Temperature in GetCurrentData should match")
assert.InDelta(t, expected.avgUsage, gpu.Usage, 0.01, "Average usage in GetCurrentData should match")
assert.InDelta(t, expected.avgPower, gpu.Power, 0.01, "Average power in GetCurrentData should match")
assert.EqualValues(t, expected.temperature, gpu.Temperature, "Temperature in GetCurrentData should match")
assert.EqualValues(t, expected.avgUsage, gpu.Usage, "Average usage in GetCurrentData should match")
assert.EqualValues(t, expected.avgPower, gpu.Power, "Average power in GetCurrentData should match")
}
// Verify that accumulators in the original map are reset
for id := range tt.expectedValues {
// Verify that accumulators in the original map are NOT reset (they keep growing)
for id, expected := range tt.expectedValues {
gpu, exists := gm.GpuDataMap[id]
assert.True(t, exists, "GPU with ID %s should still exist after GetCurrentData", id)
if !exists {
continue
}
assert.Equal(t, float64(0), gpu.Count, "Count should be reset for GPU ID %s", id)
assert.Equal(t, float64(0), gpu.Usage, "Usage should be reset for GPU ID %s", id)
assert.Equal(t, float64(0), gpu.Power, "Power should be reset for GPU ID %s", id)
assert.EqualValues(t, expected.count, gpu.Count, "Count should remain at accumulated value for GPU ID %s", id)
assert.EqualValues(t, expected.usage, gpu.Usage, "Usage should remain at accumulated value for GPU ID %s", id)
assert.EqualValues(t, expected.power, gpu.Power, "Power should remain at accumulated value for GPU ID %s", id)
}
})
}
}
func TestIntelUpdateFromStats(t *testing.T) {
gm := &GPUManager{
GpuDataMap: make(map[string]*system.GPUData),
}
// First sample with power and two engines
sample1 := intelGpuStats{
PowerGPU: 10.5,
Engines: map[string]float64{
"Render/3D": 20.0,
"Video": 5.0,
},
}
ok := gm.updateIntelFromStats(&sample1)
assert.True(t, ok)
gpu := gm.GpuDataMap["0"]
require.NotNil(t, gpu)
assert.Equal(t, "GPU", gpu.Name)
assert.EqualValues(t, 10.5, gpu.Power)
assert.EqualValues(t, 20.0, gpu.Engines["Render/3D"])
assert.EqualValues(t, 5.0, gpu.Engines["Video"])
assert.Equal(t, float64(1), gpu.Count)
// Second sample with zero power (should not add) and additional engine busy
sample2 := intelGpuStats{
PowerGPU: 0.0,
Engines: map[string]float64{
"Render/3D": 10.0,
"Video": 2.5,
"Blitter": 1.0,
},
}
// zero power should not increment power accumulator
ok = gm.updateIntelFromStats(&sample2)
assert.True(t, ok)
gpu = gm.GpuDataMap["0"]
require.NotNil(t, gpu)
assert.EqualValues(t, 10.5, gpu.Power)
assert.EqualValues(t, 30.0, gpu.Engines["Render/3D"]) // 20 + 10
assert.EqualValues(t, 7.5, gpu.Engines["Video"]) // 5 + 2.5
assert.EqualValues(t, 1.0, gpu.Engines["Blitter"])
assert.Equal(t, float64(2), gpu.Count)
}
func TestIntelCollectorStreaming(t *testing.T) {
// Save and override PATH
origPath := os.Getenv("PATH")
defer os.Setenv("PATH", origPath)
dir := t.TempDir()
os.Setenv("PATH", dir)
// Create a fake intel_gpu_top that prints -l format with four samples (first will be skipped) and exits
scriptPath := filepath.Join(dir, "intel_gpu_top")
script := `#!/bin/sh
echo "Freq MHz IRQ RC6 Power W IMC MiB/s RCS BCS VCS"
echo " req act /s % gpu pkg rd wr % se wa % se wa % se wa"
echo "373 373 224 45 1.50 4.13 2554 714 12.34 0 0 0.00 0 0 5.00 0 0"
echo "226 223 338 58 2.00 2.69 1820 965 0.00 0 0 0.00 0 0 0.00 0 0"
echo "189 187 412 67 1.80 2.45 1950 823 8.50 2 1 15.00 1 0 22.00 0 1"
echo "298 295 278 51 2.20 3.12 1675 942 5.75 1 2 9.50 3 1 12.00 1 0"`
if err := os.WriteFile(scriptPath, []byte(script), 0755); err != nil {
t.Fatal(err)
}
gm := &GPUManager{
GpuDataMap: make(map[string]*system.GPUData),
}
// Run the collector once; it should read four samples but skip the first and return
if err := gm.collectIntelStats(); err != nil {
t.Fatalf("collectIntelStats error: %v", err)
}
gpu := gm.GpuDataMap["0"]
require.NotNil(t, gpu)
// Power should be sum of samples 2-4 (first is skipped): 2.0 + 1.8 + 2.2 = 6.0
assert.EqualValues(t, 6.0, gpu.Power)
assert.InDelta(t, 8.26, gpu.PowerPkg, 0.01) // Allow small floating point differences
// Engines aggregated from samples 2-4
assert.EqualValues(t, 14.25, gpu.Engines["Render/3D"]) // 0.00 + 8.50 + 5.75
assert.EqualValues(t, 34.0, gpu.Engines["Video"]) // 0.00 + 22.00 + 12.00
assert.EqualValues(t, 24.5, gpu.Engines["Blitter"]) // 0.00 + 15.00 + 9.50
// Count should be 3 samples (first is skipped)
assert.Equal(t, float64(3), gpu.Count)
}
func TestParseIntelHeaders(t *testing.T) {
tests := []struct {
name string
header1 string
header2 string
wantEngineNames []string
wantFriendlyNames []string
wantPowerIndex int
wantPreEngineCols int
}{
{
name: "basic headers with RCS BCS VCS",
header1: "Freq MHz IRQ RC6 Power W IMC MiB/s RCS BCS VCS",
header2: " req act /s % gpu pkg rd wr % se wa % se wa % se wa",
wantEngineNames: []string{"RCS", "BCS", "VCS"},
wantFriendlyNames: []string{"Render/3D", "Blitter", "Video"},
wantPowerIndex: 4, // "gpu" is at index 4
wantPreEngineCols: 8, // 17 total cols - 3*3 = 8
},
{
name: "headers with only RCS",
header1: "Freq MHz IRQ RC6 Power W IMC MiB/s RCS",
header2: " req act /s % gpu pkg rd wr % se wa",
wantEngineNames: []string{"RCS"},
wantFriendlyNames: []string{"Render/3D"},
wantPowerIndex: 4,
wantPreEngineCols: 8, // 11 total - 3*1 = 8
},
{
name: "headers with VECS and CCS",
header1: "Freq MHz IRQ RC6 Power W IMC MiB/s VECS CCS",
header2: " req act /s % gpu pkg rd wr % se wa % se wa",
wantEngineNames: []string{"VECS", "CCS"},
wantFriendlyNames: []string{"VideoEnhance", "Compute"},
wantPowerIndex: 4,
wantPreEngineCols: 8, // 14 total - 3*2 = 8
},
{
name: "no engines",
header1: "Freq MHz IRQ RC6 Power W IMC MiB/s",
header2: " req act /s % gpu pkg rd wr",
wantEngineNames: nil, // no engines found, slices remain nil
wantFriendlyNames: nil,
wantPowerIndex: -1, // no engines, so no search
wantPreEngineCols: 0,
},
{
name: "power index not found",
header1: "Freq MHz IRQ RC6 Power W IMC MiB/s RCS",
header2: " req act /s % pkg cpu rd wr % se wa", // no "gpu"
wantEngineNames: []string{"RCS"},
wantFriendlyNames: []string{"Render/3D"},
wantPowerIndex: -1, // "gpu" not found
wantPreEngineCols: 8, // 11 total - 3*1 = 8
},
{
name: "empty headers",
header1: "",
header2: "",
wantEngineNames: nil, // empty input, slices remain nil
wantFriendlyNames: nil,
wantPowerIndex: -1,
wantPreEngineCols: 0,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gm := &GPUManager{}
engineNames, friendlyNames, powerIndex, preEngineCols := gm.parseIntelHeaders(tt.header1, tt.header2)
assert.Equal(t, tt.wantEngineNames, engineNames)
assert.Equal(t, tt.wantFriendlyNames, friendlyNames)
assert.Equal(t, tt.wantPowerIndex, powerIndex)
assert.Equal(t, tt.wantPreEngineCols, preEngineCols)
})
}
}
func TestParseIntelData(t *testing.T) {
tests := []struct {
name string
line string
engineNames []string
friendlyNames []string
powerIndex int
preEngineCols int
wantPowerGPU float64
wantEngines map[string]float64
wantErr error
}{
{
name: "basic data with power and engines",
line: "373 373 224 45 1.50 4.13 2554 714 12.34 0 0 0.00 0 0 5.00 0 0",
engineNames: []string{"RCS", "BCS", "VCS"},
friendlyNames: []string{"Render/3D", "Blitter", "Video"},
powerIndex: 4,
preEngineCols: 8,
wantPowerGPU: 1.50,
wantEngines: map[string]float64{
"Render/3D": 12.34,
"Blitter": 0.00,
"Video": 5.00,
},
},
{
name: "data with zero power",
line: "226 223 338 58 0.00 2.69 1820 965 0.00 0 0 0.00 0 0 0.00 0 0",
engineNames: []string{"RCS", "BCS", "VCS"},
friendlyNames: []string{"Render/3D", "Blitter", "Video"},
powerIndex: 4,
preEngineCols: 8,
wantPowerGPU: 0.00,
wantEngines: map[string]float64{
"Render/3D": 0.00,
"Blitter": 0.00,
"Video": 0.00,
},
},
{
name: "data with no power index",
line: "373 373 224 45 1.50 4.13 2554 714 12.34 0 0 0.00 0 0 5.00 0 0",
engineNames: []string{"RCS", "BCS", "VCS"},
friendlyNames: []string{"Render/3D", "Blitter", "Video"},
powerIndex: -1,
preEngineCols: 8,
wantPowerGPU: 0.0, // no power parsed
wantEngines: map[string]float64{
"Render/3D": 12.34,
"Blitter": 0.00,
"Video": 5.00,
},
},
{
name: "data with insufficient columns",
line: "373 373 224 45 1.50", // too few columns
engineNames: []string{"RCS", "BCS", "VCS"},
friendlyNames: []string{"Render/3D", "Blitter", "Video"},
powerIndex: 4,
preEngineCols: 8,
wantPowerGPU: 0.0,
wantEngines: nil, // empty sample returned
wantErr: errNoValidData,
},
{
name: "empty line",
line: "",
engineNames: []string{"RCS"},
friendlyNames: []string{"Render/3D"},
powerIndex: 4,
preEngineCols: 8,
wantPowerGPU: 0.0,
wantEngines: nil,
wantErr: errNoValidData,
},
{
name: "data with invalid power value",
line: "373 373 224 45 N/A 4.13 2554 714 12.34 0 0 0.00 0 0 5.00 0 0",
engineNames: []string{"RCS", "BCS", "VCS"},
friendlyNames: []string{"Render/3D", "Blitter", "Video"},
powerIndex: 4,
preEngineCols: 8,
wantPowerGPU: 0.0, // N/A can't be parsed
wantEngines: map[string]float64{
"Render/3D": 12.34,
"Blitter": 0.00,
"Video": 5.00,
},
},
{
name: "data with invalid engine value",
line: "373 373 224 45 1.50 4.13 2554 714 N/A 0 0 0.00 0 0 5.00 0 0",
engineNames: []string{"RCS", "BCS", "VCS"},
friendlyNames: []string{"Render/3D", "Blitter", "Video"},
powerIndex: 4,
preEngineCols: 8,
wantPowerGPU: 1.50,
wantEngines: map[string]float64{
"Render/3D": 0.0, // N/A becomes 0
"Blitter": 0.00,
"Video": 5.00,
},
},
{
name: "data with no engines",
line: "373 373 224 45 1.50 4.13 2554 714",
engineNames: []string{},
friendlyNames: []string{},
powerIndex: 4,
preEngineCols: 8,
wantPowerGPU: 1.50,
wantEngines: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gm := &GPUManager{}
sample, err := gm.parseIntelData(tt.line, tt.engineNames, tt.friendlyNames, tt.powerIndex, tt.preEngineCols)
assert.Equal(t, tt.wantErr, err)
assert.Equal(t, tt.wantPowerGPU, sample.PowerGPU)
assert.Equal(t, tt.wantEngines, sample.Engines)
})
}
}
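Read together, the header and data tests imply a simple column layout: each engine contributes three trailing columns ("% se wa"), so preEngineCols is the total column count minus three per engine, and the busy percentage for engine i sits at column preEngineCols + 3*i. A tiny sketch of that index arithmetic, drawn only from the fixtures above:
// engineBusyIndex returns the column holding the busy % for engine i,
// assuming each engine occupies three columns ("% se wa") after the fixed
// pre-engine columns. With preEngineCols = 8 this matches the fixtures:
// Render/3D at column 8, Blitter at 11, Video at 14.
func engineBusyIndex(preEngineCols, i int) int {
	return preEngineCols + 3*i
}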

101
agent/handlers.go Normal file
View File

@@ -0,0 +1,101 @@
package agent
import (
"errors"
"fmt"
"github.com/fxamacker/cbor/v2"
"github.com/henrygd/beszel/internal/common"
)
// HandlerContext provides context for request handlers
type HandlerContext struct {
Client *WebSocketClient
Agent *Agent
Request *common.HubRequest[cbor.RawMessage]
RequestID *uint32
HubVerified bool
// SendResponse abstracts how a handler sends responses (WS or SSH)
SendResponse func(data any, requestID *uint32) error
}
// RequestHandler defines the interface for handling specific websocket request types
type RequestHandler interface {
// Handle processes the request and returns an error if unsuccessful
Handle(hctx *HandlerContext) error
}
// Responder sends handler responses back to the hub (over WS or SSH)
type Responder interface {
SendResponse(data any, requestID *uint32) error
}
// HandlerRegistry manages the mapping between actions and their handlers
type HandlerRegistry struct {
handlers map[common.WebSocketAction]RequestHandler
}
// NewHandlerRegistry creates a new handler registry with default handlers
func NewHandlerRegistry() *HandlerRegistry {
registry := &HandlerRegistry{
handlers: make(map[common.WebSocketAction]RequestHandler),
}
registry.Register(common.GetData, &GetDataHandler{})
registry.Register(common.CheckFingerprint, &CheckFingerprintHandler{})
return registry
}
// Register registers a handler for a specific action type
func (hr *HandlerRegistry) Register(action common.WebSocketAction, handler RequestHandler) {
hr.handlers[action] = handler
}
// Handle routes the request to the appropriate handler
func (hr *HandlerRegistry) Handle(hctx *HandlerContext) error {
handler, exists := hr.handlers[hctx.Request.Action]
if !exists {
return fmt.Errorf("unknown action: %d", hctx.Request.Action)
}
// Check verification requirement - default to requiring verification
if hctx.Request.Action != common.CheckFingerprint && !hctx.HubVerified {
return errors.New("hub not verified")
}
// Log handler execution for debugging
// slog.Debug("Executing handler", "action", hctx.Request.Action)
return handler.Handle(hctx)
}
// GetHandler returns the handler for a specific action
func (hr *HandlerRegistry) GetHandler(action common.WebSocketAction) (RequestHandler, bool) {
handler, exists := hr.handlers[action]
return handler, exists
}
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
// GetDataHandler handles system data requests
type GetDataHandler struct{}
func (h *GetDataHandler) Handle(hctx *HandlerContext) error {
var options common.DataRequestOptions
_ = cbor.Unmarshal(hctx.Request.Data, &options)
sysStats := hctx.Agent.gatherStats(options.CacheTimeMs)
return hctx.SendResponse(sysStats, hctx.RequestID)
}
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
// CheckFingerprintHandler handles authentication challenges
type CheckFingerprintHandler struct{}
func (h *CheckFingerprintHandler) Handle(hctx *HandlerContext) error {
return hctx.Client.handleAuthChallenge(hctx.Request, hctx.RequestID)
}
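The registry is open-ended: any hub action can be wired in by implementing RequestHandler. Below is a minimal sketch of a custom handler, assuming package agent; EchoHandler and the action value 200 are hypothetical and not part of this diff.
// EchoHandler is a hypothetical handler that echoes the raw request payload
// back through whatever responder the context carries (WS or SSH).
type EchoHandler struct{}

func (h *EchoHandler) Handle(hctx *HandlerContext) error {
	return hctx.SendResponse(hctx.Request.Data, hctx.RequestID)
}

// Registration (the action value 200 is assumed to be unused):
//
//	registry := NewHandlerRegistry()
//	registry.Register(common.WebSocketAction(200), &EchoHandler{})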

112
agent/handlers_test.go Normal file
View File

@@ -0,0 +1,112 @@
//go:build testing
// +build testing
package agent
import (
"testing"
"github.com/fxamacker/cbor/v2"
"github.com/henrygd/beszel/internal/common"
"github.com/stretchr/testify/assert"
)
// MockHandler for testing
type MockHandler struct {
requiresVerification bool
description string
handleFunc func(ctx *HandlerContext) error
}
func (m *MockHandler) Handle(ctx *HandlerContext) error {
if m.handleFunc != nil {
return m.handleFunc(ctx)
}
return nil
}
func (m *MockHandler) RequiresVerification() bool {
return m.requiresVerification
}
// TestHandlerRegistry tests the handler registry functionality
func TestHandlerRegistry(t *testing.T) {
t.Run("default registration", func(t *testing.T) {
registry := NewHandlerRegistry()
// Check default handlers are registered
getDataHandler, exists := registry.GetHandler(common.GetData)
assert.True(t, exists)
assert.IsType(t, &GetDataHandler{}, getDataHandler)
fingerprintHandler, exists := registry.GetHandler(common.CheckFingerprint)
assert.True(t, exists)
assert.IsType(t, &CheckFingerprintHandler{}, fingerprintHandler)
})
t.Run("custom handler registration", func(t *testing.T) {
registry := NewHandlerRegistry()
mockHandler := &MockHandler{
requiresVerification: true,
description: "Test handler",
}
// Register a custom handler for a mock action
const mockAction common.WebSocketAction = 99
registry.Register(mockAction, mockHandler)
// Verify registration
handler, exists := registry.GetHandler(mockAction)
assert.True(t, exists)
assert.Equal(t, mockHandler, handler)
})
t.Run("unknown action", func(t *testing.T) {
registry := NewHandlerRegistry()
ctx := &HandlerContext{
Request: &common.HubRequest[cbor.RawMessage]{
Action: common.WebSocketAction(255), // Unknown action
},
HubVerified: true,
}
err := registry.Handle(ctx)
assert.Error(t, err)
assert.Contains(t, err.Error(), "unknown action: 255")
})
t.Run("verification required", func(t *testing.T) {
registry := NewHandlerRegistry()
ctx := &HandlerContext{
Request: &common.HubRequest[cbor.RawMessage]{
Action: common.GetData, // Requires verification
},
HubVerified: false, // Not verified
}
err := registry.Handle(ctx)
assert.Error(t, err)
assert.Contains(t, err.Error(), "hub not verified")
})
}
// TestCheckFingerprintHandler tests the CheckFingerprint handler
func TestCheckFingerprintHandler(t *testing.T) {
handler := &CheckFingerprintHandler{}
t.Run("handle with invalid data", func(t *testing.T) {
client := &WebSocketClient{}
ctx := &HandlerContext{
Client: client,
HubVerified: false,
Request: &common.HubRequest[cbor.RawMessage]{
Action: common.CheckFingerprint,
Data: cbor.RawMessage{}, // Empty/invalid data
},
}
// Should fail to decode the fingerprint request
err := handler.Handle(ctx)
assert.Error(t, err)
})
}

View File

@@ -1,54 +1,225 @@
package agent
import (
"fmt"
"log/slog"
"path"
"strings"
"time"
"github.com/henrygd/beszel/agent/deltatracker"
"github.com/henrygd/beszel/internal/entities/system"
psutilNet "github.com/shirou/gopsutil/v4/net"
)
// NicConfig controls inclusion/exclusion of network interfaces via the NICS env var
//
// Behavior mirrors SensorConfig's matching logic:
// - Leading '-' means blacklist mode; otherwise whitelist mode
// - Supports '*' wildcards using path.Match
// - In whitelist mode with an empty list, no NICs are selected
// - In blacklist mode with an empty list, all NICs are selected
type NicConfig struct {
nics map[string]struct{}
isBlacklist bool
hasWildcards bool
}
func newNicConfig(nicsEnvVal string) *NicConfig {
cfg := &NicConfig{
nics: make(map[string]struct{}),
}
if strings.HasPrefix(nicsEnvVal, "-") {
cfg.isBlacklist = true
nicsEnvVal = nicsEnvVal[1:]
}
for nic := range strings.SplitSeq(nicsEnvVal, ",") {
nic = strings.TrimSpace(nic)
if nic != "" {
cfg.nics[nic] = struct{}{}
if strings.Contains(nic, "*") {
cfg.hasWildcards = true
}
}
}
return cfg
}
// isValidNic determines if a NIC should be included based on NicConfig rules
func isValidNic(nicName string, cfg *NicConfig) bool {
// Empty list behavior differs by mode: blacklist: allow all; whitelist: allow none
if len(cfg.nics) == 0 {
return cfg.isBlacklist
}
// Exact match: return true if whitelist, false if blacklist
if _, exactMatch := cfg.nics[nicName]; exactMatch {
return !cfg.isBlacklist
}
// If no wildcards, return true if blacklist, false if whitelist
if !cfg.hasWildcards {
return cfg.isBlacklist
}
// Check for wildcard patterns
for pattern := range cfg.nics {
if !strings.Contains(pattern, "*") {
continue
}
if match, _ := path.Match(pattern, nicName); match {
return !cfg.isBlacklist
}
}
return cfg.isBlacklist
}
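// Example (illustrative sketch, not part of this change): with NICS="-eth*,lo"
// newNicConfig builds a blacklist containing a wildcard pattern, so eth0 and
// lo are excluded while anything else is kept.
func nicConfigExample() (eth0, lo, wlan0 bool) {
	cfg := newNicConfig("-eth*,lo")
	eth0 = isValidNic("eth0", cfg)   // false: matches the eth* wildcard
	lo = isValidNic("lo", cfg)       // false: exact match in the blacklist
	wlan0 = isValidNic("wlan0", cfg) // true: not listed, blacklist allows it
	return eth0, lo, wlan0
}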
func (a *Agent) updateNetworkStats(cacheTimeMs uint16, systemStats *system.Stats) {
// network stats
a.ensureNetInterfacesInitialized()
a.ensureNetworkInterfacesMap(systemStats)
if netIO, err := psutilNet.IOCounters(true); err == nil {
nis, msElapsed := a.loadAndTickNetBaseline(cacheTimeMs)
totalBytesSent, totalBytesRecv := a.sumAndTrackPerNicDeltas(cacheTimeMs, msElapsed, netIO, systemStats)
bytesSentPerSecond, bytesRecvPerSecond := a.computeBytesPerSecond(msElapsed, totalBytesSent, totalBytesRecv, nis)
a.applyNetworkTotals(cacheTimeMs, netIO, systemStats, nis, totalBytesSent, totalBytesRecv, bytesSentPerSecond, bytesRecvPerSecond)
}
}
func (a *Agent) initializeNetIoStats() {
// reset valid network interfaces
a.netInterfaces = make(map[string]struct{}, 0)
// map of network interface names passed in via NICS env var
var nicsMap map[string]struct{}
nics, nicsEnvExists := GetEnv("NICS")
// parse NICS env var for whitelist / blacklist
nicsEnvVal, nicsEnvExists := GetEnv("NICS")
var nicCfg *NicConfig
if nicsEnvExists {
nicsMap = make(map[string]struct{}, 0)
for nic := range strings.SplitSeq(nics, ",") {
nicsMap[nic] = struct{}{}
}
nicCfg = newNicConfig(nicsEnvVal)
}
// reset network I/O stats
a.netIoStats.BytesSent = 0
a.netIoStats.BytesRecv = 0
// get initial network I/O stats
// get current network I/O stats and record valid interfaces
if netIO, err := psutilNet.IOCounters(true); err == nil {
a.netIoStats.Time = time.Now()
for _, v := range netIO {
switch {
// skip if nics exists and the interface is not in the list
case nicsEnvExists:
if _, nameInNics := nicsMap[v.Name]; !nameInNics {
continue
}
// otherwise run the interface name through the skipNetworkInterface function
default:
if a.skipNetworkInterface(v) {
continue
}
if nicsEnvExists && !isValidNic(v.Name, nicCfg) {
continue
}
if a.skipNetworkInterface(v) {
continue
}
slog.Info("Detected network interface", "name", v.Name, "sent", v.BytesSent, "recv", v.BytesRecv)
a.netIoStats.BytesSent += v.BytesSent
a.netIoStats.BytesRecv += v.BytesRecv
// store as a valid network interface
a.netInterfaces[v.Name] = struct{}{}
}
}
// Reset per-cache-time trackers and baselines so they will reinitialize on next use
a.netInterfaceDeltaTrackers = make(map[uint16]*deltatracker.DeltaTracker[string, uint64])
a.netIoStats = make(map[uint16]system.NetIoStats)
}
// ensureNetInterfacesInitialized re-initializes NICs if none are currently tracked
func (a *Agent) ensureNetInterfacesInitialized() {
if len(a.netInterfaces) == 0 {
// if no network interfaces, initialize again
// this is a fix if agent started before network is online (#466)
// maybe refactor this in the future to not cache interface names at all so we
// don't miss an interface that's been added after agent started in any circumstance
a.initializeNetIoStats()
}
}
// ensureNetworkInterfacesMap ensures systemStats.NetworkInterfaces map exists
func (a *Agent) ensureNetworkInterfacesMap(systemStats *system.Stats) {
if systemStats.NetworkInterfaces == nil {
systemStats.NetworkInterfaces = make(map[string][4]uint64, 0)
}
}
// loadAndTickNetBaseline returns the NetIoStats baseline and milliseconds elapsed, updating time
func (a *Agent) loadAndTickNetBaseline(cacheTimeMs uint16) (netIoStat system.NetIoStats, msElapsed uint64) {
netIoStat = a.netIoStats[cacheTimeMs]
if netIoStat.Time.IsZero() {
netIoStat.Time = time.Now()
msElapsed = 0
} else {
msElapsed = uint64(time.Since(netIoStat.Time).Milliseconds())
netIoStat.Time = time.Now()
}
return netIoStat, msElapsed
}
// sumAndTrackPerNicDeltas accumulates totals and records per-NIC up/down deltas into systemStats
func (a *Agent) sumAndTrackPerNicDeltas(cacheTimeMs uint16, msElapsed uint64, netIO []psutilNet.IOCountersStat, systemStats *system.Stats) (totalBytesSent, totalBytesRecv uint64) {
tracker := a.netInterfaceDeltaTrackers[cacheTimeMs]
if tracker == nil {
tracker = deltatracker.NewDeltaTracker[string, uint64]()
a.netInterfaceDeltaTrackers[cacheTimeMs] = tracker
}
tracker.Cycle()
for _, v := range netIO {
if _, exists := a.netInterfaces[v.Name]; !exists {
continue
}
totalBytesSent += v.BytesSent
totalBytesRecv += v.BytesRecv
var upDelta, downDelta uint64
upKey, downKey := fmt.Sprintf("%sup", v.Name), fmt.Sprintf("%sdown", v.Name)
tracker.Set(upKey, v.BytesSent)
tracker.Set(downKey, v.BytesRecv)
if msElapsed > 0 {
upDelta = tracker.Delta(upKey) * 1000 / msElapsed
downDelta = tracker.Delta(downKey) * 1000 / msElapsed
}
systemStats.NetworkInterfaces[v.Name] = [4]uint64{upDelta, downDelta, v.BytesSent, v.BytesRecv}
}
return totalBytesSent, totalBytesRecv
}
// computeBytesPerSecond calculates per-second totals from elapsed time and totals
func (a *Agent) computeBytesPerSecond(msElapsed, totalBytesSent, totalBytesRecv uint64, nis system.NetIoStats) (bytesSentPerSecond, bytesRecvPerSecond uint64) {
if msElapsed > 0 {
bytesSentPerSecond = (totalBytesSent - nis.BytesSent) * 1000 / msElapsed
bytesRecvPerSecond = (totalBytesRecv - nis.BytesRecv) * 1000 / msElapsed
}
return bytesSentPerSecond, bytesRecvPerSecond
}
// applyNetworkTotals validates and writes computed network stats, or resets on anomaly
func (a *Agent) applyNetworkTotals(
cacheTimeMs uint16,
netIO []psutilNet.IOCountersStat,
systemStats *system.Stats,
nis system.NetIoStats,
totalBytesSent, totalBytesRecv uint64,
bytesSentPerSecond, bytesRecvPerSecond uint64,
) {
networkSentPs := bytesToMegabytes(float64(bytesSentPerSecond))
networkRecvPs := bytesToMegabytes(float64(bytesRecvPerSecond))
if networkSentPs > 10_000 || networkRecvPs > 10_000 {
slog.Warn("Invalid net stats. Resetting.", "sent", networkSentPs, "recv", networkRecvPs)
for _, v := range netIO {
if _, exists := a.netInterfaces[v.Name]; !exists {
continue
}
slog.Info(v.Name, "recv", v.BytesRecv, "sent", v.BytesSent)
}
a.initializeNetIoStats()
delete(a.netIoStats, cacheTimeMs)
delete(a.netInterfaceDeltaTrackers, cacheTimeMs)
}
systemStats.NetworkSent = networkSentPs
systemStats.NetworkRecv = networkRecvPs
systemStats.Bandwidth[0], systemStats.Bandwidth[1] = bytesSentPerSecond, bytesRecvPerSecond
nis.BytesSent = totalBytesSent
nis.BytesRecv = totalBytesRecv
a.netIoStats[cacheTimeMs] = nis
}
func (a *Agent) skipNetworkInterface(v psutilNet.IOCountersStat) bool {
@@ -58,6 +229,7 @@ func (a *Agent) skipNetworkInterface(v psutilNet.IOCountersStat) bool {
strings.HasPrefix(v.Name, "br-"),
strings.HasPrefix(v.Name, "veth"),
strings.HasPrefix(v.Name, "bond"),
strings.HasPrefix(v.Name, "cali"),
v.BytesRecv == 0,
v.BytesSent == 0:
return true

462
agent/network_test.go Normal file
View File

@@ -0,0 +1,462 @@
//go:build testing
package agent
import (
"testing"
"time"
"github.com/henrygd/beszel/agent/deltatracker"
"github.com/henrygd/beszel/internal/entities/system"
psutilNet "github.com/shirou/gopsutil/v4/net"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestIsValidNic(t *testing.T) {
tests := []struct {
name string
nicName string
config *NicConfig
expectedValid bool
}{
{
name: "Whitelist - NIC in list",
nicName: "eth0",
config: &NicConfig{
nics: map[string]struct{}{"eth0": {}},
isBlacklist: false,
},
expectedValid: true,
},
{
name: "Whitelist - NIC not in list",
nicName: "wlan0",
config: &NicConfig{
nics: map[string]struct{}{"eth0": {}},
isBlacklist: false,
},
expectedValid: false,
},
{
name: "Blacklist - NIC in list",
nicName: "eth0",
config: &NicConfig{
nics: map[string]struct{}{"eth0": {}},
isBlacklist: true,
},
expectedValid: false,
},
{
name: "Blacklist - NIC not in list",
nicName: "wlan0",
config: &NicConfig{
nics: map[string]struct{}{"eth0": {}},
isBlacklist: true,
},
expectedValid: true,
},
{
name: "Whitelist with wildcard - matching pattern",
nicName: "eth1",
config: &NicConfig{
nics: map[string]struct{}{"eth*": {}},
isBlacklist: false,
hasWildcards: true,
},
expectedValid: true,
},
{
name: "Whitelist with wildcard - non-matching pattern",
nicName: "wlan0",
config: &NicConfig{
nics: map[string]struct{}{"eth*": {}},
isBlacklist: false,
hasWildcards: true,
},
expectedValid: false,
},
{
name: "Blacklist with wildcard - matching pattern",
nicName: "eth1",
config: &NicConfig{
nics: map[string]struct{}{"eth*": {}},
isBlacklist: true,
hasWildcards: true,
},
expectedValid: false,
},
{
name: "Blacklist with wildcard - non-matching pattern",
nicName: "wlan0",
config: &NicConfig{
nics: map[string]struct{}{"eth*": {}},
isBlacklist: true,
hasWildcards: true,
},
expectedValid: true,
},
{
name: "Empty whitelist config - no NICs allowed",
nicName: "eth0",
config: &NicConfig{
nics: map[string]struct{}{},
isBlacklist: false,
},
expectedValid: false,
},
{
name: "Empty blacklist config - all NICs allowed",
nicName: "eth0",
config: &NicConfig{
nics: map[string]struct{}{},
isBlacklist: true,
},
expectedValid: true,
},
{
name: "Multiple patterns - exact match",
nicName: "eth0",
config: &NicConfig{
nics: map[string]struct{}{"eth0": {}, "wlan*": {}},
isBlacklist: false,
},
expectedValid: true,
},
{
name: "Multiple patterns - wildcard match",
nicName: "wlan1",
config: &NicConfig{
nics: map[string]struct{}{"eth0": {}, "wlan*": {}},
isBlacklist: false,
hasWildcards: true,
},
expectedValid: true,
},
{
name: "Multiple patterns - no match",
nicName: "bond0",
config: &NicConfig{
nics: map[string]struct{}{"eth0": {}, "wlan*": {}},
isBlacklist: false,
hasWildcards: true,
},
expectedValid: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := isValidNic(tt.nicName, tt.config)
assert.Equal(t, tt.expectedValid, result)
})
}
}
func TestNewNicConfig(t *testing.T) {
tests := []struct {
name string
nicsEnvVal string
expectedCfg *NicConfig
}{
{
name: "Empty string",
nicsEnvVal: "",
expectedCfg: &NicConfig{
nics: map[string]struct{}{},
isBlacklist: false,
hasWildcards: false,
},
},
{
name: "Single NIC whitelist",
nicsEnvVal: "eth0",
expectedCfg: &NicConfig{
nics: map[string]struct{}{"eth0": {}},
isBlacklist: false,
hasWildcards: false,
},
},
{
name: "Multiple NICs whitelist",
nicsEnvVal: "eth0,wlan0",
expectedCfg: &NicConfig{
nics: map[string]struct{}{"eth0": {}, "wlan0": {}},
isBlacklist: false,
hasWildcards: false,
},
},
{
name: "Blacklist mode",
nicsEnvVal: "-eth0,wlan0",
expectedCfg: &NicConfig{
nics: map[string]struct{}{"eth0": {}, "wlan0": {}},
isBlacklist: true,
hasWildcards: false,
},
},
{
name: "With wildcards",
nicsEnvVal: "eth*,wlan0",
expectedCfg: &NicConfig{
nics: map[string]struct{}{"eth*": {}, "wlan0": {}},
isBlacklist: false,
hasWildcards: true,
},
},
{
name: "Blacklist with wildcards",
nicsEnvVal: "-eth*,wlan0",
expectedCfg: &NicConfig{
nics: map[string]struct{}{"eth*": {}, "wlan0": {}},
isBlacklist: true,
hasWildcards: true,
},
},
{
name: "With whitespace",
nicsEnvVal: "eth0, wlan0 , eth1",
expectedCfg: &NicConfig{
nics: map[string]struct{}{"eth0": {}, "wlan0": {}, "eth1": {}},
isBlacklist: false,
hasWildcards: false,
},
},
{
name: "Only wildcards",
nicsEnvVal: "eth*,wlan*",
expectedCfg: &NicConfig{
nics: map[string]struct{}{"eth*": {}, "wlan*": {}},
isBlacklist: false,
hasWildcards: true,
},
},
{
name: "Leading dash only",
nicsEnvVal: "-",
expectedCfg: &NicConfig{
nics: map[string]struct{}{},
isBlacklist: true,
hasWildcards: false,
},
},
{
name: "Mixed exact and wildcard",
nicsEnvVal: "eth0,br-*",
expectedCfg: &NicConfig{
nics: map[string]struct{}{"eth0": {}, "br-*": {}},
isBlacklist: false,
hasWildcards: true,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cfg := newNicConfig(tt.nicsEnvVal)
require.NotNil(t, cfg)
assert.Equal(t, tt.expectedCfg.isBlacklist, cfg.isBlacklist)
assert.Equal(t, tt.expectedCfg.hasWildcards, cfg.hasWildcards)
assert.Equal(t, tt.expectedCfg.nics, cfg.nics)
})
}
}
func TestEnsureNetworkInterfacesMap(t *testing.T) {
var a Agent
var stats system.Stats
// Initially nil
assert.Nil(t, stats.NetworkInterfaces)
// Ensure map is created
a.ensureNetworkInterfacesMap(&stats)
assert.NotNil(t, stats.NetworkInterfaces)
// Idempotent
a.ensureNetworkInterfacesMap(&stats)
assert.NotNil(t, stats.NetworkInterfaces)
}
func TestLoadAndTickNetBaseline(t *testing.T) {
a := &Agent{netIoStats: make(map[uint16]system.NetIoStats)}
// First call initializes time and returns 0 elapsed
ni, elapsed := a.loadAndTickNetBaseline(100)
assert.Equal(t, uint64(0), elapsed)
assert.False(t, ni.Time.IsZero())
// Store back what loadAndTick returns to mimic updateNetworkStats behavior
a.netIoStats[100] = ni
time.Sleep(2 * time.Millisecond)
// Next call should produce >= 0 elapsed and update time
ni2, elapsed2 := a.loadAndTickNetBaseline(100)
assert.True(t, elapsed2 > 0)
assert.False(t, ni2.Time.IsZero())
}
func TestComputeBytesPerSecond(t *testing.T) {
a := &Agent{}
// No elapsed -> zero rate
bytesUp, bytesDown := a.computeBytesPerSecond(0, 2000, 3000, system.NetIoStats{BytesSent: 1000, BytesRecv: 1000})
assert.Equal(t, uint64(0), bytesUp)
assert.Equal(t, uint64(0), bytesDown)
// With elapsed -> per-second calculation
bytesUp, bytesDown = a.computeBytesPerSecond(500, 6000, 11000, system.NetIoStats{BytesSent: 1000, BytesRecv: 1000})
// (6000-1000)*1000/500 = 10000; (11000-1000)*1000/500 = 20000
assert.Equal(t, uint64(10000), bytesUp)
assert.Equal(t, uint64(20000), bytesDown)
}
func TestSumAndTrackPerNicDeltas(t *testing.T) {
a := &Agent{
netInterfaces: map[string]struct{}{"eth0": {}, "wlan0": {}},
netInterfaceDeltaTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
}
// Two samples for same cache interval to verify delta behavior
cache := uint16(42)
net1 := []psutilNet.IOCountersStat{{Name: "eth0", BytesSent: 1000, BytesRecv: 2000}}
stats1 := &system.Stats{}
a.ensureNetworkInterfacesMap(stats1)
tx1, rx1 := a.sumAndTrackPerNicDeltas(cache, 0, net1, stats1)
assert.Equal(t, uint64(1000), tx1)
assert.Equal(t, uint64(2000), rx1)
// Second cycle with elapsed, larger counters -> deltas computed inside
net2 := []psutilNet.IOCountersStat{{Name: "eth0", BytesSent: 4000, BytesRecv: 9000}}
stats := &system.Stats{}
a.ensureNetworkInterfacesMap(stats)
tx2, rx2 := a.sumAndTrackPerNicDeltas(cache, 1000, net2, stats)
assert.Equal(t, uint64(4000), tx2)
assert.Equal(t, uint64(9000), rx2)
// Up/Down deltas per second should be (4000-1000)/1s = 3000 and (9000-2000)/1s = 7000
ni, ok := stats.NetworkInterfaces["eth0"]
assert.True(t, ok)
assert.Equal(t, uint64(3000), ni[0])
assert.Equal(t, uint64(7000), ni[1])
}
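The test above leans on the deltatracker semantics used by sumAndTrackPerNicDeltas: Cycle() closes out the previous interval, Set records the latest counter, and Delta returns the growth since the value recorded in the prior cycle. A standalone sketch with the same illustrative numbers (the helper name is hypothetical):
func deltaTrackerSketch() uint64 {
	tracker := deltatracker.NewDeltaTracker[string, uint64]()
	tracker.Cycle()
	tracker.Set("eth0up", 1000) // first cycle only records the baseline
	tracker.Cycle()
	tracker.Set("eth0up", 4000)
	return tracker.Delta("eth0up") // 3000: bytes sent since the previous cycle
}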
func TestApplyNetworkTotals(t *testing.T) {
tests := []struct {
name string
bytesSentPerSecond uint64
bytesRecvPerSecond uint64
totalBytesSent uint64
totalBytesRecv uint64
expectReset bool
expectedNetworkSent float64
expectedNetworkRecv float64
expectedBandwidthSent uint64
expectedBandwidthRecv uint64
}{
{
name: "Valid network stats - normal values",
bytesSentPerSecond: 1000000, // 1 MB/s
bytesRecvPerSecond: 2000000, // 2 MB/s
totalBytesSent: 10000000,
totalBytesRecv: 20000000,
expectReset: false,
expectedNetworkSent: 0.95, // ~1 MB/s rounded to 2 decimals
expectedNetworkRecv: 1.91, // ~2 MB/s rounded to 2 decimals
expectedBandwidthSent: 1000000,
expectedBandwidthRecv: 2000000,
},
{
name: "Invalid network stats - sent exceeds threshold",
bytesSentPerSecond: 11000000000, // ~10.5 GB/s > 10 GB/s threshold
bytesRecvPerSecond: 1000000, // 1 MB/s
totalBytesSent: 10000000,
totalBytesRecv: 20000000,
expectReset: true,
},
{
name: "Invalid network stats - recv exceeds threshold",
bytesSentPerSecond: 1000000, // 1 MB/s
bytesRecvPerSecond: 11000000000, // ~10.5 GB/s > 10 GB/s threshold
totalBytesSent: 10000000,
totalBytesRecv: 20000000,
expectReset: true,
},
{
name: "Invalid network stats - both exceed threshold",
bytesSentPerSecond: 12000000000, // ~11.4 GB/s
bytesRecvPerSecond: 13000000000, // ~12.4 GB/s
totalBytesSent: 10000000,
totalBytesRecv: 20000000,
expectReset: true,
},
{
name: "Valid network stats - at threshold boundary",
bytesSentPerSecond: 10485750000, // ~9999.99 MB/s (rounds to 9999.99)
bytesRecvPerSecond: 10485750000, // ~9999.99 MB/s (rounds to 9999.99)
totalBytesSent: 10000000,
totalBytesRecv: 20000000,
expectReset: false,
expectedNetworkSent: 9999.99,
expectedNetworkRecv: 9999.99,
expectedBandwidthSent: 10485750000,
expectedBandwidthRecv: 10485750000,
},
{
name: "Zero values",
bytesSentPerSecond: 0,
bytesRecvPerSecond: 0,
totalBytesSent: 0,
totalBytesRecv: 0,
expectReset: false,
expectedNetworkSent: 0.0,
expectedNetworkRecv: 0.0,
expectedBandwidthSent: 0,
expectedBandwidthRecv: 0,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Setup agent with initialized maps
a := &Agent{
netInterfaces: make(map[string]struct{}),
netIoStats: make(map[uint16]system.NetIoStats),
netInterfaceDeltaTrackers: make(map[uint16]*deltatracker.DeltaTracker[string, uint64]),
}
cacheTimeMs := uint16(100)
netIO := []psutilNet.IOCountersStat{
{Name: "eth0", BytesSent: 1000, BytesRecv: 2000},
}
systemStats := &system.Stats{}
nis := system.NetIoStats{}
a.applyNetworkTotals(
cacheTimeMs,
netIO,
systemStats,
nis,
tt.totalBytesSent,
tt.totalBytesRecv,
tt.bytesSentPerSecond,
tt.bytesRecvPerSecond,
)
if tt.expectReset {
// Should have reset network tracking state - delta trackers should be cleared
// Note: initializeNetIoStats resets the maps, then applyNetworkTotals sets nis back
assert.Contains(t, a.netIoStats, cacheTimeMs, "cache entry should exist after reset")
assert.NotContains(t, a.netInterfaceDeltaTrackers, cacheTimeMs, "tracker should be cleared on reset")
} else {
// Should have applied stats
assert.Equal(t, tt.expectedNetworkSent, systemStats.NetworkSent)
assert.Equal(t, tt.expectedNetworkRecv, systemStats.NetworkRecv)
assert.Equal(t, tt.expectedBandwidthSent, systemStats.Bandwidth[0])
assert.Equal(t, tt.expectedBandwidthRecv, systemStats.Bandwidth[1])
// Should have updated NetIoStats
updatedNis := a.netIoStats[cacheTimeMs]
assert.Equal(t, tt.totalBytesSent, updatedNis.BytesSent)
assert.Equal(t, tt.totalBytesRecv, updatedNis.BytesRecv)
}
})
}
}

View File

@@ -127,15 +127,75 @@ func (a *Agent) handleSession(s ssh.Session) {
hubVersion := a.getHubVersion(sessionID, sessionCtx)
stats := a.gatherStats(sessionID)
err := a.writeToSession(s, stats, hubVersion)
if err != nil {
slog.Error("Error encoding stats", "err", err, "stats", stats)
s.Exit(1)
} else {
s.Exit(0)
// Legacy one-shot behavior for older hubs
if hubVersion.LT(beszel.MinVersionAgentResponse) {
if err := a.handleLegacyStats(s, hubVersion); err != nil {
slog.Error("Error encoding stats", "err", err)
s.Exit(1)
return
}
s.Exit(0)
return
}
var req common.HubRequest[cbor.RawMessage]
if err := cbor.NewDecoder(s).Decode(&req); err != nil {
// Fallback to legacy one-shot if the first decode fails
if err2 := a.handleLegacyStats(s, hubVersion); err2 != nil {
slog.Error("Error encoding stats (fallback)", "err", err2)
s.Exit(1)
return
}
s.Exit(0)
return
}
if err := a.handleSSHRequest(s, &req); err != nil {
slog.Error("SSH request handling failed", "err", err)
s.Exit(1)
return
}
s.Exit(0)
}
// handleSSHRequest builds a handler context and dispatches to the shared registry
func (a *Agent) handleSSHRequest(w io.Writer, req *common.HubRequest[cbor.RawMessage]) error {
// SSH does not support fingerprint auth action
if req.Action == common.CheckFingerprint {
return cbor.NewEncoder(w).Encode(common.AgentResponse{Error: "unsupported action"})
}
// responder that writes AgentResponse to stdout
sshResponder := func(data any, requestID *uint32) error {
response := common.AgentResponse{Id: requestID}
switch v := data.(type) {
case *system.CombinedData:
response.SystemData = v
default:
response.Error = fmt.Sprintf("unsupported response type: %T", data)
}
return cbor.NewEncoder(w).Encode(response)
}
ctx := &HandlerContext{
Client: nil,
Agent: a,
Request: req,
RequestID: nil,
HubVerified: true,
SendResponse: sshResponder,
}
if handler, ok := a.handlerRegistry.GetHandler(req.Action); ok {
if err := handler.Handle(ctx); err != nil {
return cbor.NewEncoder(w).Encode(common.AgentResponse{Error: err.Error()})
}
return nil
}
return cbor.NewEncoder(w).Encode(common.AgentResponse{Error: fmt.Sprintf("unknown action: %d", req.Action)})
}
// handleLegacyStats serves the legacy one-shot stats payload for older hubs
func (a *Agent) handleLegacyStats(w io.Writer, hubVersion semver.Version) error {
stats := a.gatherStats(60_000)
return a.writeToSession(w, stats, hubVersion)
}
// writeToSession encodes and writes system statistics to the session.
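For orientation, a hedged sketch of the hub side of this exchange for hubs at or above MinVersionAgentResponse: encode a HubRequest carrying CBOR-encoded DataRequestOptions, then read back the AgentResponse envelope. Only the HubRequest, DataRequestOptions and AgentResponse shapes are taken from this diff; the helper itself is an assumption.
func requestStatsOverSSH(rw io.ReadWriter, cacheTimeMs uint16) (*common.AgentResponse, error) {
	opts, err := cbor.Marshal(common.DataRequestOptions{CacheTimeMs: cacheTimeMs})
	if err != nil {
		return nil, err
	}
	req := common.HubRequest[cbor.RawMessage]{Action: common.GetData, Data: opts}
	if err := cbor.NewEncoder(rw).Encode(req); err != nil {
		return nil, err
	}
	var resp common.AgentResponse
	if err := cbor.NewDecoder(rw).Decode(&resp); err != nil {
		return nil, err
	}
	return &resp, nil
}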

View File

@@ -14,13 +14,18 @@ import (
"github.com/henrygd/beszel/internal/entities/system"
"github.com/shirou/gopsutil/v4/cpu"
"github.com/shirou/gopsutil/v4/disk"
"github.com/shirou/gopsutil/v4/host"
"github.com/shirou/gopsutil/v4/load"
"github.com/shirou/gopsutil/v4/mem"
psutilNet "github.com/shirou/gopsutil/v4/net"
)
// prevDisk stores previous per-device disk counters for a given cache interval
type prevDisk struct {
readBytes uint64
writeBytes uint64
at time.Time
}
// Sets initial / non-changing values about the host system
func (a *Agent) initializeSystemInfo() {
a.systemInfo.AgentVersion = beszel.Version
@@ -32,7 +37,7 @@ func (a *Agent) initializeSystemInfo() {
a.systemInfo.KernelVersion = version
a.systemInfo.Os = system.Darwin
} else if strings.Contains(platform, "indows") {
a.systemInfo.KernelVersion = strings.Replace(platform, "Microsoft ", "", 1) + " " + version
a.systemInfo.KernelVersion = fmt.Sprintf("%s %s", strings.Replace(platform, "Microsoft ", "", 1), version)
a.systemInfo.Os = system.Windows
} else if platform == "freebsd" {
a.systemInfo.Os = system.Freebsd
@@ -69,8 +74,8 @@ func (a *Agent) initializeSystemInfo() {
}
// Returns current info, stats about the host system
func (a *Agent) getSystemStats() system.Stats {
systemStats := system.Stats{}
func (a *Agent) getSystemStats(cacheTimeMs uint16) system.Stats {
var systemStats system.Stats
// battery
if battery.HasReadableBattery() {
@@ -78,11 +83,11 @@ func (a *Agent) getSystemStats() system.Stats {
}
// cpu percent
cpuPct, err := cpu.Percent(0, false)
if err != nil {
cpuPercent, err := getCpuPercent(cacheTimeMs)
if err == nil {
systemStats.Cpu = twoDecimals(cpuPercent)
} else {
slog.Error("Error getting cpu percent", "err", err)
} else if len(cpuPct) > 0 {
systemStats.Cpu = twoDecimals(cpuPct[0])
}
// load average
@@ -101,14 +106,22 @@ func (a *Agent) getSystemStats() system.Stats {
systemStats.Swap = bytesToGigabytes(v.SwapTotal)
systemStats.SwapUsed = bytesToGigabytes(v.SwapTotal - v.SwapFree - v.SwapCached)
// cache + buffers value for default mem calculation
cacheBuff := v.Total - v.Free - v.Used
// htop memory calculation overrides
// note: gopsutil automatically adds SReclaimable to v.Cached
cacheBuff := v.Cached + v.Buffers - v.Shared
if cacheBuff <= 0 {
cacheBuff = max(v.Total-v.Free-v.Used, 0)
}
// htop memory calculation overrides (likely outdated as of mid 2025)
if a.memCalc == "htop" {
// note: gopsutil automatically adds SReclaimable to v.Cached
cacheBuff = v.Cached + v.Buffers - v.Shared
// cacheBuff = v.Cached + v.Buffers - v.Shared
v.Used = v.Total - (v.Free + cacheBuff)
v.UsedPercent = float64(v.Used) / float64(v.Total) * 100.0
}
// if a.memCalc == "legacy" {
// v.Used = v.Total - v.Free - v.Buffers - v.Cached
// cacheBuff = v.Total - v.Free - v.Used
// v.UsedPercent = float64(v.Used) / float64(v.Total) * 100.0
// }
// subtract ZFS ARC size from used memory and add as its own category
if a.zfs {
if arcSize, _ := getARCSize(); arcSize > 0 && arcSize < v.Used {
@@ -124,104 +137,13 @@ func (a *Agent) getSystemStats() system.Stats {
}
// disk usage
for _, stats := range a.fsStats {
if d, err := disk.Usage(stats.Mountpoint); err == nil {
stats.DiskTotal = bytesToGigabytes(d.Total)
stats.DiskUsed = bytesToGigabytes(d.Used)
if stats.Root {
systemStats.DiskTotal = bytesToGigabytes(d.Total)
systemStats.DiskUsed = bytesToGigabytes(d.Used)
systemStats.DiskPct = twoDecimals(d.UsedPercent)
}
} else {
// reset stats if error (likely unmounted)
slog.Error("Error getting disk stats", "name", stats.Mountpoint, "err", err)
stats.DiskTotal = 0
stats.DiskUsed = 0
stats.TotalRead = 0
stats.TotalWrite = 0
}
}
a.updateDiskUsage(&systemStats)
// disk i/o
if ioCounters, err := disk.IOCounters(a.fsNames...); err == nil {
for _, d := range ioCounters {
stats := a.fsStats[d.Name]
if stats == nil {
continue
}
secondsElapsed := time.Since(stats.Time).Seconds()
readPerSecond := bytesToMegabytes(float64(d.ReadBytes-stats.TotalRead) / secondsElapsed)
writePerSecond := bytesToMegabytes(float64(d.WriteBytes-stats.TotalWrite) / secondsElapsed)
// check for invalid values and reset stats if so
if readPerSecond < 0 || writePerSecond < 0 || readPerSecond > 50_000 || writePerSecond > 50_000 {
slog.Warn("Invalid disk I/O. Resetting.", "name", d.Name, "read", readPerSecond, "write", writePerSecond)
a.initializeDiskIoStats(ioCounters)
break
}
stats.Time = time.Now()
stats.DiskReadPs = readPerSecond
stats.DiskWritePs = writePerSecond
stats.TotalRead = d.ReadBytes
stats.TotalWrite = d.WriteBytes
// if root filesystem, update system stats
if stats.Root {
systemStats.DiskReadPs = stats.DiskReadPs
systemStats.DiskWritePs = stats.DiskWritePs
}
}
}
// disk i/o (cache-aware per interval)
a.updateDiskIo(cacheTimeMs, &systemStats)
// network stats
if len(a.netInterfaces) == 0 {
// if no network interfaces, initialize again
// this is a fix if agent started before network is online (#466)
// maybe refactor this in the future to not cache interface names at all so we
// don't miss an interface that's been added after agent started in any circumstance
a.initializeNetIoStats()
}
if netIO, err := psutilNet.IOCounters(true); err == nil {
msElapsed := uint64(time.Since(a.netIoStats.Time).Milliseconds())
a.netIoStats.Time = time.Now()
totalBytesSent := uint64(0)
totalBytesRecv := uint64(0)
// sum all bytes sent and received
for _, v := range netIO {
// skip if not in valid network interfaces list
if _, exists := a.netInterfaces[v.Name]; !exists {
continue
}
totalBytesSent += v.BytesSent
totalBytesRecv += v.BytesRecv
}
// add to systemStats
var bytesSentPerSecond, bytesRecvPerSecond uint64
if msElapsed > 0 {
bytesSentPerSecond = (totalBytesSent - a.netIoStats.BytesSent) * 1000 / msElapsed
bytesRecvPerSecond = (totalBytesRecv - a.netIoStats.BytesRecv) * 1000 / msElapsed
}
networkSentPs := bytesToMegabytes(float64(bytesSentPerSecond))
networkRecvPs := bytesToMegabytes(float64(bytesRecvPerSecond))
// add check for issue (#150) where sent is a massive number
if networkSentPs > 10_000 || networkRecvPs > 10_000 {
slog.Warn("Invalid net stats. Resetting.", "sent", networkSentPs, "recv", networkRecvPs)
for _, v := range netIO {
if _, exists := a.netInterfaces[v.Name]; !exists {
continue
}
slog.Info(v.Name, "recv", v.BytesRecv, "sent", v.BytesSent)
}
// reset network I/O stats
a.initializeNetIoStats()
} else {
systemStats.NetworkSent = networkSentPs
systemStats.NetworkRecv = networkRecvPs
systemStats.Bandwidth[0], systemStats.Bandwidth[1] = bytesSentPerSecond, bytesRecvPerSecond
// update netIoStats
a.netIoStats.BytesSent = totalBytesSent
a.netIoStats.BytesRecv = totalBytesRecv
}
}
// network stats (per cache interval)
a.updateNetworkStats(cacheTimeMs, &systemStats)
// temperatures
// TODO: maybe refactor to methods on systemStats
@@ -232,7 +154,7 @@ func (a *Agent) getSystemStats() system.Stats {
// reset high gpu percent
a.systemInfo.GpuPct = 0
// get current GPU data
if gpuData := a.gpuManager.GetCurrentData(); len(gpuData) > 0 {
if gpuData := a.gpuManager.GetCurrentData(cacheTimeMs); len(gpuData) > 0 {
systemStats.GPUData = gpuData
// add temperatures
@@ -261,6 +183,7 @@ func (a *Agent) getSystemStats() system.Stats {
}
// update base system info
a.systemInfo.ConnectionType = a.connectionManager.ConnectionType
a.systemInfo.Cpu = systemStats.Cpu
a.systemInfo.LoadAvg = systemStats.LoadAvg
// TODO: remove these in future release in favor of load avg array
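As a worked example of the new cache/buffers calculation above (figures made up for illustration): with Cached = 4 GiB, Buffers = 512 MiB and Shared = 1 GiB the preferred formula yields 3.5 GiB, and only a non-positive result would trigger the max(Total-Free-Used, 0) fallback.
// Illustrative values only; mirrors the preference order shown above.
const gib = uint64(1 << 30)

// Cached + Buffers - Shared = 4 GiB + 0.5 GiB - 1 GiB = 3.5 GiB.
var exampleCacheBuff = 4*gib + gib/2 - 1*gib // 3_758_096_384 bytes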

View File

@@ -0,0 +1,24 @@
{
"cpu_stats": {
"cpu_usage": {
"total_usage": 312055276000
},
"system_cpu_usage": 1366399830000000
},
"memory_stats": {
"usage": 507400192,
"stats": {
"inactive_file": 165130240
}
},
"networks": {
"eth0": {
"tx_bytes": 20376558,
"rx_bytes": 537029455
},
"eth1": {
"tx_bytes": 2003766,
"rx_bytes": 6241
}
}
}

View File

@@ -0,0 +1,24 @@
{
"cpu_stats": {
"cpu_usage": {
"total_usage": 314891801000
},
"system_cpu_usage": 1368474900000000
},
"memory_stats": {
"usage": 507400192,
"stats": {
"inactive_file": 165130240
}
},
"networks": {
"eth0": {
"tx_bytes": 20376558,
"rx_bytes": 537029455
},
"eth1": {
"tx_bytes": 2003766,
"rx_bytes": 6241
}
}
}
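These two fixtures differ only in the CPU counters, which is enough to drive a delta-based calculation. Assuming the usual Docker convention (CPU delta divided by system delta, times 100) and the common usage minus inactive_file memory adjustment — the exact formula the agent applies is not shown in this excerpt — the numbers work out as below.
// Worked arithmetic from the two fixtures above; the formula is the common
// Docker convention and is an assumption, not taken from this diff.
func dockerFixtureSketch() (cpuPct float64, memBytes uint64) {
	cpuDelta := float64(314891801000 - 312055276000)            // 2_836_525_000
	systemDelta := float64(1368474900000000 - 1366399830000000) // 2_075_070_000_000
	cpuPct = cpuDelta / systemDelta * 100                       // ≈ 0.137 %
	memBytes = 507400192 - 165130240                            // 342_269_952 ≈ 326.4 MiB
	return cpuPct, memBytes
}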

View File

@@ -30,21 +30,22 @@ func (s *systemdRestarter) Restart() error {
type openRCRestarter struct{ cmd string }
func (o *openRCRestarter) Restart() error {
if err := exec.Command(o.cmd, "status", "beszel-agent").Run(); err != nil {
if err := exec.Command(o.cmd, "beszel-agent", "status").Run(); err != nil {
return nil
}
ghupdate.ColorPrint(ghupdate.ColorYellow, "Restarting beszel-agent via OpenRC…")
return exec.Command(o.cmd, "restart", "beszel-agent").Run()
return exec.Command(o.cmd, "beszel-agent", "restart").Run()
}
type openWRTRestarter struct{ cmd string }
func (w *openWRTRestarter) Restart() error {
if err := exec.Command(w.cmd, "running", "beszel-agent").Run(); err != nil {
// https://openwrt.org/docs/guide-user/base-system/managing_services?s[]=service
if err := exec.Command("/etc/init.d/beszel-agent", "running").Run(); err != nil {
return nil
}
ghupdate.ColorPrint(ghupdate.ColorYellow, "Restarting beszel-agent via procd…")
return exec.Command(w.cmd, "restart", "beszel-agent").Run()
return exec.Command("/etc/init.d/beszel-agent", "restart").Run()
}
type freeBSDRestarter struct{ cmd string }
@@ -64,11 +65,13 @@ func detectRestarter() restarter {
if path, err := exec.LookPath("rc-service"); err == nil {
return &openRCRestarter{cmd: path}
}
if path, err := exec.LookPath("procd"); err == nil {
return &openWRTRestarter{cmd: path}
}
if path, err := exec.LookPath("service"); err == nil {
if runtime.GOOS == "freebsd" {
return &freeBSDRestarter{cmd: path}
}
return &openWRTRestarter{cmd: path}
}
return nil
}

View File

@@ -6,10 +6,13 @@ import "github.com/blang/semver"
const (
// Version is the current version of the application.
Version = "0.12.7"
Version = "0.13.1"
// AppName is the name of the application.
AppName = "beszel"
)
// MinVersionCbor is the minimum supported version for CBOR compatibility.
var MinVersionCbor = semver.MustParse("0.12.0")
// MinVersionAgentResponse is the minimum supported version for AgentResponse compatibility.
var MinVersionAgentResponse = semver.MustParse("0.13.0")

40
go.mod
View File

@@ -3,7 +3,7 @@ module github.com/henrygd/beszel
go 1.25.1
// lock shoutrrr to specific version to allow review before updating
replace github.com/nicholas-fedor/shoutrrr => github.com/nicholas-fedor/shoutrrr v0.8.8
replace github.com/nicholas-fedor/shoutrrr => github.com/nicholas-fedor/shoutrrr v0.9.1
require (
github.com/blang/semver v3.5.1+incompatible
@@ -12,16 +12,16 @@ require (
github.com/gliderlabs/ssh v0.3.8
github.com/google/uuid v1.6.0
github.com/lxzan/gws v1.8.9
github.com/nicholas-fedor/shoutrrr v0.8.17
github.com/nicholas-fedor/shoutrrr v0.10.0
github.com/pocketbase/dbx v1.11.0
github.com/pocketbase/pocketbase v0.29.3
github.com/shirou/gopsutil/v4 v4.25.6
github.com/spf13/cast v1.9.2
github.com/spf13/cobra v1.9.1
github.com/spf13/pflag v1.0.7
github.com/stretchr/testify v1.11.0
golang.org/x/crypto v0.41.0
golang.org/x/exp v0.0.0-20250819193227-8b4c13bb791b
github.com/pocketbase/pocketbase v0.30.1
github.com/shirou/gopsutil/v4 v4.25.9
github.com/spf13/cast v1.10.0
github.com/spf13/cobra v1.10.1
github.com/spf13/pflag v1.0.10
github.com/stretchr/testify v1.11.1
golang.org/x/crypto v0.42.0
golang.org/x/exp v0.0.0-20251002181428-27f1f14c8bb9
gopkg.in/yaml.v3 v3.0.1
)
@@ -33,9 +33,9 @@ require (
github.com/dolthub/maphash v0.1.0 // indirect
github.com/domodwyer/mailyak/v3 v3.6.2 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/ebitengine/purego v0.8.4 // indirect
github.com/ebitengine/purego v0.9.0 // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.9 // indirect
github.com/gabriel-vasile/mimetype v1.4.10 // indirect
github.com/ganigeorgiev/fexpr v0.5.0 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-ozzo/ozzo-validation/v4 v4.3.0 // indirect
@@ -43,7 +43,7 @@ require (
github.com/golang-jwt/jwt/v5 v5.3.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/lufia/plan9stats v0.0.0-20250821153705-5981dea3221d // indirect
github.com/lufia/plan9stats v0.0.0-20250827001030-24949be3fa54 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/ncruces/go-strftime v0.1.9 // indirect
@@ -54,16 +54,16 @@ require (
github.com/tklauser/numcpus v0.10.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
golang.org/x/image v0.30.0 // indirect
golang.org/x/net v0.43.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sys v0.35.0 // indirect
golang.org/x/text v0.28.0 // indirect
golang.org/x/image v0.31.0 // indirect
golang.org/x/net v0.44.0 // indirect
golang.org/x/oauth2 v0.31.0 // indirect
golang.org/x/sync v0.17.0 // indirect
golang.org/x/sys v0.36.0 // indirect
golang.org/x/text v0.29.0 // indirect
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect
howett.net/plist v1.0.1 // indirect
modernc.org/libc v1.66.3 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
modernc.org/sqlite v1.38.2 // indirect
modernc.org/sqlite v1.39.0 // indirect
)

132
go.sum
View File

@@ -1,5 +1,7 @@
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0=
github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/asaskevich/govalidator v0.0.0-20200108200545-475eaeb16496/go.mod h1:oGkLhpf+kjZl6xBf758TQhh5XrAeiJv/7FRz/2spLIg=
@@ -21,22 +23,22 @@ github.com/domodwyer/mailyak/v3 v3.6.2 h1:x3tGMsyFhTCaxp6ycgR0FE/bu5QiNp+hetUuCO
github.com/domodwyer/mailyak/v3 v3.6.2/go.mod h1:lOm/u9CyCVWHeaAmHIdF4RiKVxKUT/H5XX10lIKAL6c=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/ebitengine/purego v0.8.4 h1:CF7LEKg5FFOsASUj0+QwaXf8Ht6TlFxg09+S9wz0omw=
github.com/ebitengine/purego v0.8.4/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/ebitengine/purego v0.9.0 h1:mh0zpKBIXDceC63hpvPuGLiJ8ZAa3DfrFTudmfi8A4k=
github.com/ebitengine/purego v0.9.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
github.com/gabriel-vasile/mimetype v1.4.9 h1:5k+WDwEsD9eTLL8Tz3L0VnmVh9QxGjRmjBvAG7U/oYY=
github.com/gabriel-vasile/mimetype v1.4.9/go.mod h1:WnSQhFKJuBlRyLiKohA/2DtIlPFAbguNaG7QCHcyGok=
github.com/gabriel-vasile/mimetype v1.4.10 h1:zyueNbySn/z8mJZHLt6IPw0KoZsiQNszIpU+bX4+ZK0=
github.com/gabriel-vasile/mimetype v1.4.10/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
github.com/ganigeorgiev/fexpr v0.5.0 h1:XA9JxtTE/Xm+g/JFI6RfZEHSiQlk+1glLvRK1Lpv/Tk=
github.com/ganigeorgiev/fexpr v0.5.0/go.mod h1:RyGiGqmeXhEQ6+mlGdnUleLHgtzzu/VGO2WtJkF5drE=
github.com/gliderlabs/ssh v0.3.8 h1:a4YXD1V7xMF9g5nTkdfnja3Sxy1PVDCj1Zg4Wb8vY6c=
github.com/gliderlabs/ssh v0.3.8/go.mod h1:xYoytBv1sV0aL3CavoDuJIQNURXkkfPA/wxQ1pL1fAU=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
@@ -52,14 +54,14 @@ github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArs
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J0b1vyeLSOYI8bm5wbJM/8yDe8=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pIsMjahcegHh6rmhqxzIRQIyepY=
github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jarcoal/httpmock v1.4.0 h1:BvhqnH0JAYbNudL2GMJKgOHe2CtKlzJ/5rWKyp+hc2k=
github.com/jarcoal/httpmock v1.4.0/go.mod h1:ftW1xULwo+j0R0JJkJIIi7UKigZUXCLLanykgjwBXL0=
github.com/jarcoal/httpmock v1.4.1 h1:0Ju+VCFuARfFlhVXFc2HxlcQkfB+Xq12/EotHko+x2A=
github.com/jarcoal/httpmock v1.4.1/go.mod h1:ftW1xULwo+j0R0JJkJIIi7UKigZUXCLLanykgjwBXL0=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
@@ -67,8 +69,8 @@ github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lufia/plan9stats v0.0.0-20250821153705-5981dea3221d h1:vFzYZc8yji+9DmNRhpEbs8VBK4CgV/DPfGzeVJSSp/8=
github.com/lufia/plan9stats v0.0.0-20250821153705-5981dea3221d/go.mod h1:autxFIvghDt3jPTLoqZ9OZ7s9qTGNAWmYCjVFWPX/zg=
github.com/lufia/plan9stats v0.0.0-20250827001030-24949be3fa54 h1:mFWunSatvkQQDhpdyuFAYwyAan3hzCuma+Pz8sqvOfg=
github.com/lufia/plan9stats v0.0.0-20250827001030-24949be3fa54/go.mod h1:autxFIvghDt3jPTLoqZ9OZ7s9qTGNAWmYCjVFWPX/zg=
github.com/lxzan/gws v1.8.9 h1:VU3SGUeWlQrEwfUSfokcZep8mdg/BrUF+y73YYshdBM=
github.com/lxzan/gws v1.8.9/go.mod h1:d9yHaR1eDTBHagQC6KY7ycUOaz5KWeqQtP3xu7aMK8Y=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
@@ -77,19 +79,19 @@ github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWE
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/nicholas-fedor/shoutrrr v0.8.8 h1:F/oyoatWK5cbHPPgkjRZrA0262TP7KWuUQz9KskRtR8=
github.com/nicholas-fedor/shoutrrr v0.8.8/go.mod h1:T30Y+eoZFEjDk4HtOItcHQioZSOe3Z6a6aNfSz6jc5c=
github.com/onsi/ginkgo/v2 v2.23.4 h1:ktYTpKJAVZnDT4VjxSbiBenUjmlL/5QkBEocaWXiQus=
github.com/onsi/ginkgo/v2 v2.23.4/go.mod h1:Bt66ApGPBFzHyR+JO10Zbt0Gsp4uWxu5mIOTusL46e8=
github.com/onsi/gomega v1.37.0 h1:CdEG8g0S133B4OswTDC/5XPSzE1OeP29QOioj2PID2Y=
github.com/onsi/gomega v1.37.0/go.mod h1:8D9+Txp43QWKhM24yyOBEdpkzN8FvJyAwecBgsU4KU0=
github.com/nicholas-fedor/shoutrrr v0.9.1 h1:SEBhM6P1favzILO0f55CY3P9JwvM9RZ7B1ZMCl+Injs=
github.com/nicholas-fedor/shoutrrr v0.9.1/go.mod h1:khue5m8LYyMzdPWuJxDTJeT89l9gjwjA+a+r0e8qxxk=
github.com/onsi/ginkgo/v2 v2.25.3 h1:Ty8+Yi/ayDAGtk4XxmmfUy4GabvM+MegeB4cDLRi6nw=
github.com/onsi/ginkgo/v2 v2.25.3/go.mod h1:43uiyQC4Ed2tkOzLsEYm7hnrb7UJTWHYNsuy3bG/snE=
github.com/onsi/gomega v1.38.2 h1:eZCjf2xjZAqe+LeWvKb5weQ+NcPwX84kqJ0cZNxok2A=
github.com/onsi/gomega v1.38.2/go.mod h1:W2MJcYxRGV63b418Ai34Ud0hEdTVXq9NW9+Sx6uXf3k=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pocketbase/dbx v1.11.0 h1:LpZezioMfT3K4tLrqA55wWFw1EtH1pM4tzSVa7kgszU=
github.com/pocketbase/dbx v1.11.0/go.mod h1:xXRCIAKTHMgUCyCKZm55pUOdvFziJjQfXaWKhu2vhMs=
github.com/pocketbase/pocketbase v0.29.3 h1:Mj8o5awsbVJIdIoTuQNhfC2oL/c4aImQ3RyfFZlzFVg=
github.com/pocketbase/pocketbase v0.29.3/go.mod h1:oGpT67LObxCFK4V2fSL7J9YnPbBnnshOpJ5v3zcneww=
github.com/pocketbase/pocketbase v0.30.1 h1:8lgfhH+HiSw1PyKVMq2sjtC4ZNvda2f/envTAzWMLOA=
github.com/pocketbase/pocketbase v0.30.1/go.mod h1:sUI+uekXZam5Wa0eh+DClc+HieKMCeqsHA7Ydd9vwyE=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
@@ -97,19 +99,19 @@ github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qq
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shirou/gopsutil/v4 v4.25.6 h1:kLysI2JsKorfaFPcYmcJqbzROzsBWEOAtw6A7dIfqXs=
github.com/shirou/gopsutil/v4 v4.25.6/go.mod h1:PfybzyydfZcN+JMMjkF6Zb8Mq1A/VcogFFg7hj50W9c=
github.com/spf13/cast v1.9.2 h1:SsGfm7M8QOFtEzumm7UZrZdLLquNdzFYfIbEXntcFbE=
github.com/spf13/cast v1.9.2/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo=
github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo=
github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0=
github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.7 h1:vN6T9TfwStFPFM5XzjsvmzZkLuaLX+HS+0SeFLRgU6M=
github.com/spf13/pflag v1.0.7/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/shirou/gopsutil/v4 v4.25.9 h1:JImNpf6gCVhKgZhtaAHJ0serfFGtlfIlSC08eaKdTrU=
github.com/shirou/gopsutil/v4 v4.25.9/go.mod h1:gxIxoC+7nQRwUl/xNhutXlD8lq+jxTgpIkEf3rADHL8=
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
github.com/spf13/cast v1.10.0/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo=
github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.11.0 h1:ib4sjIrwZKxE5u/Japgo/7SJV3PvgjGiRNAvTVGqQl8=
github.com/stretchr/testify v1.11.0/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tklauser/go-sysconf v0.3.15 h1:VE89k0criAymJ/Os65CSn1IXaol+1wrsFHEB8Ol49K4=
github.com/tklauser/go-sysconf v0.3.15/go.mod h1:Dmjwr6tYFIseJw7a3dRLJfsHAMXZ3nEnL/aZY+0IuI4=
github.com/tklauser/numcpus v0.10.0 h1:18njr6LDBk1zuna922MgdjQuJFjrdppsZG60sHGfjso=
@@ -120,42 +122,44 @@ github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=
go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/exp v0.0.0-20250819193227-8b4c13bb791b h1:DXr+pvt3nC887026GRP39Ej11UATqWDmWuS99x26cD0=
golang.org/x/exp v0.0.0-20250819193227-8b4c13bb791b/go.mod h1:4QTo5u+SEIbbKW1RacMZq1YEfOBqeXa19JeshGi+zc4=
golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI=
golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8=
golang.org/x/exp v0.0.0-20251002181428-27f1f14c8bb9 h1:TQwNpfvNkxAVlItJf6Cr5JTsVZoC/Sj7K3OZv2Pc14A=
golang.org/x/exp v0.0.0-20251002181428-27f1f14c8bb9/go.mod h1:TwQYMMnGpvZyc+JpB/UAuTNIsVJifOlSkrZkhcvpVUk=
golang.org/x/image v0.0.0-20191009234506-e7c1f5e7dbb8/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/image v0.30.0 h1:jD5RhkmVAnjqaCUXfbGBrn3lpxbknfN9w2UhHHU+5B4=
golang.org/x/image v0.30.0/go.mod h1:SAEUTxCCMWSrJcCy/4HwavEsfZZJlYxeHLc6tTiAe/c=
golang.org/x/mod v0.27.0 h1:kb+q2PyFnEADO2IEF935ehFUXlWiNjJWtRNgBLSfbxQ=
golang.org/x/mod v0.27.0/go.mod h1:rWI627Fq0DEoudcK+MBkNkCe0EetEaDSwJJkCcjpazc=
golang.org/x/image v0.31.0 h1:mLChjE2MV6g1S7oqbXC0/UcKijjm5fnJLUYKIYrLESA=
golang.org/x/image v0.31.0/go.mod h1:R9ec5Lcp96v9FTF+ajwaH3uGxPH4fKfHHAVbUILxghA=
golang.org/x/mod v0.28.0 h1:gQBtGhjxykdjY9YhZpSlZIsbnaE2+PgjfLWUQTnoZ1U=
golang.org/x/mod v0.28.0/go.mod h1:yfB/L0NOf/kmEbXjzCPOx1iK1fRutOydrCMsqRhEBxI=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/net v0.44.0 h1:evd8IRDyfNBMBTTY5XRF1vaZlD+EmWx6x8PkhR04H/I=
golang.org/x/net v0.44.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/oauth2 v0.31.0 h1:8Fq0yVZLh4j4YA47vHKFTa9Ew5XIrCP8LC6UeNZnLxo=
golang.org/x/oauth2 v0.31.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.34.0 h1:O/2T7POpk0ZZ7MAzMeWFSg6S5IpWd/RXDlM9hgM3DR4=
golang.org/x/term v0.34.0/go.mod h1:5jC53AEywhIVebHgPVeg0mj8OD3VO9OzclacVrqpaAw=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.35.0 h1:bZBVKBudEyhRcajGcNc3jIfWPqV4y/Kt2XcoigOWtDQ=
golang.org/x/term v0.35.0/go.mod h1:TPGtkTLesOwf2DE8CgVYiZinHAOuy5AYUYT1lENIZnA=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/text v0.29.0 h1:1neNs90w9YzJ9BocxfsQNHKuAT4pkghyXc4nhZ6sJvk=
golang.org/x/text v0.29.0/go.mod h1:7MhJOA9CD2qZyOKYazxdYMF85OwPdEr9jTtBpO7ydH4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.36.0 h1:kWS0uv/zsvHEle1LbV5LE8QujrxB3wfQyxHfhOk0Qkg=
golang.org/x/tools v0.36.0/go.mod h1:WBDiHKJK8YgLHlcQPYQzNCkUxUypCaa5ZegCVutKm+s=
golang.org/x/tools v0.37.0 h1:DVSRzp7FwePZW356yEAChSdNcQo6Nsp+fex1SUW09lE=
golang.org/x/tools v0.37.0/go.mod h1:MBN5QPQtLMHVdvsbtarmTNukZDdgwdwlO5qGacAzF0w=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
google.golang.org/protobuf v1.36.7 h1:IgrO7UwFQGJdRNXH/sQux4R1Dj1WAKcLElzeeRaXV2A=
google.golang.org/protobuf v1.36.7/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -165,18 +169,20 @@ gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
howett.net/plist v1.0.1 h1:37GdZ8tP09Q35o9ych3ehygcsL+HqKSwzctveSlarvM=
howett.net/plist v1.0.1/go.mod h1:lqaXoTrLY4hg8tnEzNru53gicrbv7rrk+2xJA/7hw9g=
modernc.org/cc/v4 v4.26.2 h1:991HMkLjJzYBIfha6ECZdjrIYz2/1ayr+FL8GN+CNzM=
modernc.org/cc/v4 v4.26.2/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.28.0 h1:rjznn6WWehKq7dG4JtLRKxb52Ecv8OUGah8+Z/SfpNU=
modernc.org/ccgo/v4 v4.28.0/go.mod h1:JygV3+9AV6SmPhDasu4JgquwU81XAKLd3OKTUDNOiKE=
modernc.org/fileutil v1.3.8 h1:qtzNm7ED75pd1C7WgAGcK4edm4fvhtBsEiI/0NQ54YM=
modernc.org/fileutil v1.3.8/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/cc/v4 v4.26.5 h1:xM3bX7Mve6G8K8b+T11ReenJOT+BmVqQj0FY5T4+5Y4=
modernc.org/cc/v4 v4.26.5/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.28.1 h1:wPKYn5EC/mYTqBO373jKjvX2n+3+aK7+sICCv4Fjy1A=
modernc.org/ccgo/v4 v4.28.1/go.mod h1:uD+4RnfrVgE6ec9NGguUNdhqzNIeeomeXf6CL0GTE5Q=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.66.3 h1:cfCbjTUcdsKyyZZfEUKfoHcP3S0Wkvz3jgSzByEWVCQ=
modernc.org/libc v1.66.3/go.mod h1:XD9zO8kt59cANKvHPXpx7yS2ELPheAey0vjIuZOhOU8=
modernc.org/libc v1.66.10 h1:yZkb3YeLx4oynyR+iUsXsybsX4Ubx7MQlSYEw4yj59A=
modernc.org/libc v1.66.10/go.mod h1:8vGSEwvoUoltr4dlywvHqjtAqHBaw0j1jI7iFBTAr2I=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
@@ -185,8 +191,8 @@ modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.38.2 h1:Aclu7+tgjgcQVShZqim41Bbw9Cho0y/7WzYptXqkEek=
modernc.org/sqlite v1.38.2/go.mod h1:cPTJYSlgg3Sfg046yBShXENNtPrWrDX8bsbAQBzgQ5E=
modernc.org/sqlite v1.39.0 h1:6bwu9Ooim0yVYA7IZn9demiQk/Ejp0BtTjBWFLymSeY=
modernc.org/sqlite v1.39.0/go.mod h1:cPTJYSlgg3Sfg046yBShXENNtPrWrDX8bsbAQBzgQ5E=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=

View File

@@ -25,7 +25,12 @@ type alertInfo struct {
// startWorker is a long-running goroutine that processes alert tasks
// every x seconds. It must be running to process status alerts.
func (am *AlertManager) startWorker() {
tick := time.Tick(15 * time.Second)
processPendingAlerts := time.Tick(15 * time.Second)
// check for status alerts that were not resolved when the system came back up
// (can be removed once we figure out the core bug in #1052)
checkStatusAlerts := time.Tick(561 * time.Second)
for {
select {
case <-am.stopChan:
@@ -41,7 +46,9 @@ func (am *AlertManager) startWorker() {
case "cancel":
am.pendingAlerts.Delete(task.alertRecord.Id)
}
case <-tick:
case <-checkStatusAlerts:
resolveStatusAlerts(am.hub)
case <-processPendingAlerts:
// Check for expired alerts every tick
now := time.Now()
for key, value := range am.pendingAlerts.Range {
@@ -170,3 +177,35 @@ func (am *AlertManager) sendStatusAlert(alertStatus string, systemName string, a
LinkText: "View " + systemName,
})
}
// resolveStatusAlerts resolves any status alerts that weren't resolved
// when the system came up (https://github.com/henrygd/beszel/issues/1052)
func resolveStatusAlerts(app core.App) error {
db := app.DB()
// Find all active status alerts where the system is actually up
var alertIds []string
err := db.NewQuery(`
SELECT a.id
FROM alerts a
JOIN systems s ON a.system = s.id
WHERE a.name = 'Status'
AND a.triggered = true
AND s.status = 'up'
`).Column(&alertIds)
if err != nil {
return err
}
// resolve all matching alert records
for _, alertId := range alertIds {
alert, err := app.FindRecordById("alerts", alertId)
if err != nil {
return err
}
alert.Set("triggered", false)
err = app.Save(alert)
if err != nil {
return err
}
}
return nil
}

View File

@@ -13,6 +13,7 @@ import (
"testing/synctest"
"time"
"github.com/henrygd/beszel/internal/alerts"
beszelTests "github.com/henrygd/beszel/internal/tests"
"github.com/pocketbase/dbx"
@@ -369,33 +370,9 @@ func TestUserAlertsApi(t *testing.T) {
}
}
func getHubWithUser(t *testing.T) (*beszelTests.TestHub, *core.Record) {
hub, err := beszelTests.NewTestHub(t.TempDir())
assert.NoError(t, err)
hub.StartHub()
// Manually initialize the system manager to bind event hooks
err = hub.GetSystemManager().Initialize()
assert.NoError(t, err)
// Create a test user
user, err := beszelTests.CreateUser(hub, "test@example.com", "password")
assert.NoError(t, err)
// Create user settings for the test user (required for alert notifications)
userSettingsData := map[string]any{
"user": user.Id,
"settings": `{"emails":[test@example.com],"webhooks":[]}`,
}
_, err = beszelTests.CreateRecord(hub, "user_settings", userSettingsData)
assert.NoError(t, err)
return hub, user
}
func TestStatusAlerts(t *testing.T) {
synctest.Test(t, func(t *testing.T) {
hub, user := getHubWithUser(t)
hub, user := beszelTests.GetHubWithUser(t)
defer hub.Cleanup()
systems, err := beszelTests.CreateSystems(hub, 4, user.Id, "paused")
@@ -476,7 +453,7 @@ func TestStatusAlerts(t *testing.T) {
func TestAlertsHistory(t *testing.T) {
synctest.Test(t, func(t *testing.T) {
hub, user := getHubWithUser(t)
hub, user := beszelTests.GetHubWithUser(t)
defer hub.Cleanup()
// Create systems and alerts
@@ -602,3 +579,102 @@ func TestAlertsHistory(t *testing.T) {
assert.EqualValues(t, 2, totalHistoryCount, "Should have 2 total alert history records")
})
}
func TestResolveStatusAlerts(t *testing.T) {
hub, user := beszelTests.GetHubWithUser(t)
defer hub.Cleanup()
// Create two test systems: one that will stay up, one that will go down
systemUp, err := beszelTests.CreateRecord(hub, "systems", map[string]any{
"name": "test-system",
"users": []string{user.Id},
"host": "127.0.0.1",
"status": "up",
})
assert.NoError(t, err)
systemDown, err := beszelTests.CreateRecord(hub, "systems", map[string]any{
"name": "test-system-2",
"users": []string{user.Id},
"host": "127.0.0.2",
"status": "up",
})
assert.NoError(t, err)
// Create a status alert for each system
alertUp, err := beszelTests.CreateRecord(hub, "alerts", map[string]any{
"name": "Status",
"system": systemUp.Id,
"user": user.Id,
"min": 1,
})
assert.NoError(t, err)
alertDown, err := beszelTests.CreateRecord(hub, "alerts", map[string]any{
"name": "Status",
"system": systemDown.Id,
"user": user.Id,
"min": 1,
})
assert.NoError(t, err)
// Verify alert is not triggered initially
assert.False(t, alertUp.GetBool("triggered"), "Alert should not be triggered initially")
// Set the system to 'up' (this should not trigger the alert)
systemUp.Set("status", "up")
err = hub.SaveNoValidate(systemUp)
assert.NoError(t, err)
systemDown.Set("status", "down")
err = hub.SaveNoValidate(systemDown)
assert.NoError(t, err)
// Wait a moment for any processing
time.Sleep(10 * time.Millisecond)
// Verify alertUp is still not triggered after setting system to up
alertUp, err = hub.FindFirstRecordByFilter("alerts", "id={:id}", dbx.Params{"id": alertUp.Id})
assert.NoError(t, err)
assert.False(t, alertUp.GetBool("triggered"), "Alert should not be triggered when system is up")
// Manually set both alerts triggered to true
alertUp.Set("triggered", true)
err = hub.SaveNoValidate(alertUp)
assert.NoError(t, err)
alertDown.Set("triggered", true)
err = hub.SaveNoValidate(alertDown)
assert.NoError(t, err)
// Verify we have exactly two alerts with triggered true
triggeredCount, err := hub.CountRecords("alerts", dbx.HashExp{"triggered": true})
assert.NoError(t, err)
assert.EqualValues(t, 2, triggeredCount, "Should have exactly two alerts with triggered true")
// Verify the specific alertUp is triggered
alertUp, err = hub.FindFirstRecordByFilter("alerts", "id={:id}", dbx.Params{"id": alertUp.Id})
assert.NoError(t, err)
assert.True(t, alertUp.GetBool("triggered"), "Alert should be triggered")
// Verify we have two unresolved alert history records
alertHistoryCount, err := hub.CountRecords("alerts_history", dbx.HashExp{"resolved": ""})
assert.NoError(t, err)
assert.EqualValues(t, 2, alertHistoryCount, "Should have exactly two unresolved alert history records")
err = alerts.ResolveStatusAlerts(hub)
assert.NoError(t, err)
// Verify alertUp is not triggered after resolving
alertUp, err = hub.FindFirstRecordByFilter("alerts", "id={:id}", dbx.Params{"id": alertUp.Id})
assert.NoError(t, err)
assert.False(t, alertUp.GetBool("triggered"), "Alert should not be triggered after resolving")
// Verify alertDown is still triggered
alertDown, err = hub.FindFirstRecordByFilter("alerts", "id={:id}", dbx.Params{"id": alertDown.Id})
assert.NoError(t, err)
assert.True(t, alertDown.GetBool("triggered"), "Alert should still be triggered after resolving")
// Verify we have one unresolved alert history record
alertHistoryCount, err = hub.CountRecords("alerts_history", dbx.HashExp{"resolved": ""})
assert.NoError(t, err)
assert.EqualValues(t, 1, alertHistoryCount, "Should have exactly one unresolved alert history record")
}

View File

@@ -1,3 +1,6 @@
//go:build testing
// +build testing
package alerts
import (
@@ -53,3 +56,7 @@ func (am *AlertManager) ForceExpirePendingAlerts() {
return true
})
}
func ResolveStatusAlerts(app core.App) error {
return resolveStatusAlerts(app)
}

View File

@@ -1,22 +1,33 @@
package common
type WebSocketAction = uint8
import (
"github.com/henrygd/beszel/internal/entities/system"
)
// Not implemented yet
// type AgentError = uint8
type WebSocketAction = uint8
const (
// Request system data from agent
GetData WebSocketAction = iota
// Check the fingerprint of the agent
CheckFingerprint
// Add new actions here...
)
// HubRequest defines the structure for requests sent from hub to agent.
type HubRequest[T any] struct {
Action WebSocketAction `cbor:"0,keyasint"`
Data T `cbor:"1,keyasint,omitempty,omitzero"`
// Error AgentError `cbor:"error,omitempty,omitzero"`
Id *uint32 `cbor:"2,keyasint,omitempty"`
}
// AgentResponse defines the structure for responses sent from agent to hub.
type AgentResponse struct {
Id *uint32 `cbor:"0,keyasint,omitempty"`
SystemData *system.CombinedData `cbor:"1,keyasint,omitempty,omitzero"`
Fingerprint *FingerprintResponse `cbor:"2,keyasint,omitempty,omitzero"`
Error string `cbor:"3,keyasint,omitempty,omitzero"`
// RawBytes []byte `cbor:"4,keyasint,omitempty,omitzero"`
}
type FingerprintRequest struct {
@@ -27,6 +38,12 @@ type FingerprintRequest struct {
type FingerprintResponse struct {
Fingerprint string `cbor:"0,keyasint"`
// Optional system info for universal token system creation
Hostname string `cbor:"1,keyasint,omitempty,omitzero"`
Port string `cbor:"2,keyasint,omitempty,omitzero"`
Hostname string `cbor:"1,keyasint,omitzero"`
Port string `cbor:"2,keyasint,omitzero"`
Name string `cbor:"3,keyasint,omitzero"`
}
type DataRequestOptions struct {
CacheTimeMs uint16 `cbor:"0,keyasint"`
// ResourceType uint8 `cbor:"1,keyasint,omitempty,omitzero"`
}
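For reference, a self-contained sketch (not part of the diff) of how the integer-keyed "keyasint" CBOR tags above keep hub-to-agent messages compact. The local types mirror HubRequest and DataRequestOptions with the field numbers taken from the tags; the only dependency is github.com/fxamacker/cbor/v2, which the module already pulls in.
package main
import (
	"fmt"
	"github.com/fxamacker/cbor/v2"
)
// Local mirrors of HubRequest / DataRequestOptions for the sketch;
// the field numbers match the keyasint tags in the diff above.
type dataRequestOptions struct {
	CacheTimeMs uint16 `cbor:"0,keyasint"`
}
type hubRequest struct {
	Action uint8              `cbor:"0,keyasint"`
	Data   dataRequestOptions `cbor:"1,keyasint"`
	Id     *uint32            `cbor:"2,keyasint,omitempty"`
}
func main() {
	id := uint32(1)
	req := hubRequest{Action: 0 /* GetData */, Data: dataRequestOptions{CacheTimeMs: 1000}, Id: &id}
	// keyasint encodes struct fields as a CBOR map with small integer keys
	payload, err := cbor.Marshal(req)
	if err != nil {
		panic(err)
	}
	fmt.Printf("encoded %d bytes\n", len(payload))
	var decoded hubRequest
	if err := cbor.Unmarshal(payload, &decoded); err != nil {
		panic(err)
	}
	fmt.Println(decoded.Data.CacheTimeMs, *decoded.Id) // 1000 1
}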

View File

@@ -23,4 +23,7 @@ COPY --from=builder /agent /agent
# this is so we don't need to create the /tmp directory in the scratch container
COPY --from=builder /tmp /tmp
# Ensure data persistence across container recreations
VOLUME ["/var/lib/beszel-agent"]
ENTRYPOINT ["/agent"]

View File

@@ -0,0 +1,28 @@
FROM --platform=$BUILDPLATFORM golang:alpine AS builder
WORKDIR /app
COPY ../go.mod ../go.sum ./
RUN go mod download
# Copy source files
COPY . ./
# Build
ARG TARGETOS TARGETARCH
RUN CGO_ENABLED=0 GOGC=75 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -ldflags "-w -s" -o /agent ./internal/cmd/agent
# --------------------------
# Final image
# Note: must cap_add: [CAP_PERFMON] and mount /dev/dri/ as volume
# --------------------------
FROM alpine:edge
COPY --from=builder /agent /agent
RUN apk add --no-cache -X https://dl-cdn.alpinelinux.org/alpine/edge/testing igt-gpu-tools
# Ensure data persistence across container recreations
VOLUME ["/var/lib/beszel-agent"]
ENTRYPOINT ["/agent"]

View File

@@ -2,15 +2,18 @@ FROM --platform=$BUILDPLATFORM golang:alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
# RUN go mod download
COPY *.go ./
COPY cmd ./cmd
COPY internal ./internal
COPY ../go.mod ../go.sum ./
RUN go mod download
# Copy source files
COPY . ./
# Build
ARG TARGETOS TARGETARCH
RUN CGO_ENABLED=0 GOGC=75 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -ldflags "-w -s" -o /agent ./cmd/agent
RUN CGO_ENABLED=0 GOGC=75 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -ldflags "-w -s" -o /agent ./internal/cmd/agent
RUN rm -rf /tmp/*
# --------------------------
# Final image: GPU-enabled agent with nvidia-smi
@@ -18,4 +21,10 @@ RUN CGO_ENABLED=0 GOGC=75 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -ldflags "-
FROM nvidia/cuda:12.2.2-base-ubuntu22.04
COPY --from=builder /agent /agent
# this is so we don't need to create the /tmp directory in the scratch container
COPY --from=builder /tmp /tmp
# Ensure data persistence across container recreations
VOLUME ["/var/lib/beszel-agent"]
ENTRYPOINT ["/agent"]

View File

@@ -25,6 +25,9 @@ FROM scratch
COPY --from=builder /beszel /
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Ensure data persistence across container recreations
VOLUME ["/beszel_data"]
EXPOSE 8090
ENTRYPOINT [ "/beszel" ]

View File

@@ -38,19 +38,24 @@ type Stats struct {
Bandwidth [2]uint64 `json:"b,omitzero" cbor:"26,keyasint,omitzero"` // [sent bytes, recv bytes]
MaxBandwidth [2]uint64 `json:"bm,omitzero" cbor:"27,keyasint,omitzero"` // [sent bytes, recv bytes]
// TODO: remove other load fields in future release in favor of load avg array
LoadAvg [3]float64 `json:"la,omitempty" cbor:"28,keyasint"`
Battery [2]uint8 `json:"bat,omitzero" cbor:"29,keyasint,omitzero"` // [percent, charge state, current]
MaxMem float64 `json:"mm,omitempty" cbor:"30,keyasint,omitempty"`
LoadAvg [3]float64 `json:"la,omitempty" cbor:"28,keyasint"`
Battery [2]uint8 `json:"bat,omitzero" cbor:"29,keyasint,omitzero"` // [percent, charge state, current]
MaxMem float64 `json:"mm,omitempty" cbor:"30,keyasint,omitempty"`
NetworkInterfaces map[string][4]uint64 `json:"ni,omitempty" cbor:"31,keyasint,omitempty"` // [upload bytes, download bytes, total upload, total download]
DiskIO [2]uint64 `json:"dio,omitzero" cbor:"32,keyasint,omitzero"` // [read bytes, write bytes]
MaxDiskIO [2]uint64 `json:"diom,omitzero" cbor:"-"` // [max read bytes, max write bytes]
}
type GPUData struct {
Name string `json:"n" cbor:"0,keyasint"`
Temperature float64 `json:"-"`
MemoryUsed float64 `json:"mu,omitempty" cbor:"1,keyasint,omitempty"`
MemoryTotal float64 `json:"mt,omitempty" cbor:"2,keyasint,omitempty"`
Usage float64 `json:"u" cbor:"3,keyasint"`
Power float64 `json:"p,omitempty" cbor:"4,keyasint,omitempty"`
Count float64 `json:"-"`
Name string `json:"n" cbor:"0,keyasint"`
Temperature float64 `json:"-"`
MemoryUsed float64 `json:"mu,omitempty,omitzero" cbor:"1,keyasint,omitempty,omitzero"`
MemoryTotal float64 `json:"mt,omitempty,omitzero" cbor:"2,keyasint,omitempty,omitzero"`
Usage float64 `json:"u" cbor:"3,keyasint,omitempty"`
Power float64 `json:"p,omitempty" cbor:"4,keyasint,omitempty"`
Count float64 `json:"-"`
Engines map[string]float64 `json:"e,omitempty" cbor:"5,keyasint,omitempty"`
PowerPkg float64 `json:"pp,omitempty" cbor:"6,keyasint,omitempty"`
}
type FsStats struct {
@@ -65,6 +70,11 @@ type FsStats struct {
DiskWritePs float64 `json:"w" cbor:"3,keyasint"`
MaxDiskReadPS float64 `json:"rm,omitempty" cbor:"4,keyasint,omitempty"`
MaxDiskWritePS float64 `json:"wm,omitempty" cbor:"5,keyasint,omitempty"`
// TODO: remove DiskReadPs and DiskWritePs in future release in favor of DiskReadBytes and DiskWriteBytes
DiskReadBytes uint64 `json:"rb" cbor:"6,keyasint,omitempty"`
DiskWriteBytes uint64 `json:"wb" cbor:"7,keyasint,omitempty"`
MaxDiskReadBytes uint64 `json:"rbm,omitempty" cbor:"-"`
MaxDiskWriteBytes uint64 `json:"wbm,omitempty" cbor:"-"`
}
type NetIoStats struct {
@@ -83,6 +93,14 @@ const (
Freebsd
)
type ConnectionType = uint8
const (
ConnectionTypeNone ConnectionType = iota
ConnectionTypeSSH
ConnectionTypeWebSocket
)
type Info struct {
Hostname string `json:"h" cbor:"0,keyasint"`
KernelVersion string `json:"k,omitempty" cbor:"1,keyasint,omitempty"`
@@ -104,7 +122,8 @@ type Info struct {
LoadAvg15 float64 `json:"l15,omitempty" cbor:"17,keyasint,omitempty"`
BandwidthBytes uint64 `json:"bb" cbor:"18,keyasint"`
// TODO: remove load fields in future release in favor of load avg array
LoadAvg [3]float64 `json:"la,omitempty" cbor:"19,keyasint"`
LoadAvg [3]float64 `json:"la,omitempty" cbor:"19,keyasint"`
ConnectionType ConnectionType `json:"ct,omitempty" cbor:"20,keyasint,omitempty,omitzero"`
}
// Final data structure to return to the hub

View File

@@ -1,6 +1,7 @@
package hub
import (
"context"
"errors"
"net"
"net/http"
@@ -93,7 +94,7 @@ func (acr *agentConnectRequest) agentConnect() (err error) {
// verifyWsConn verifies the WebSocket connection using the agent's fingerprint and
// SSH key signature, then adds the system to the system manager.
func (acr *agentConnectRequest) verifyWsConn(conn *gws.Conn, fpRecords []ws.FingerprintRecord) (err error) {
wsConn := ws.NewWsConnection(conn)
wsConn := ws.NewWsConnection(conn, acr.agentSemVer)
// must set wsConn in connection store before the read loop
conn.Session().Store("wsConn", wsConn)
@@ -112,7 +113,7 @@ func (acr *agentConnectRequest) verifyWsConn(conn *gws.Conn, fpRecords []ws.Fing
return err
}
agentFingerprint, err := wsConn.GetFingerprint(acr.token, signer, acr.isUniversalToken)
agentFingerprint, err := wsConn.GetFingerprint(context.Background(), acr.token, signer, acr.isUniversalToken)
if err != nil {
return err
}
@@ -267,9 +268,12 @@ func (acr *agentConnectRequest) createSystem(agentFingerprint common.Fingerprint
if agentFingerprint.Port == "" {
agentFingerprint.Port = "45876"
}
if agentFingerprint.Name == "" {
agentFingerprint.Name = agentFingerprint.Hostname
}
// create new record
systemRecord := core.NewRecord(systemsCollection)
systemRecord.Set("name", agentFingerprint.Hostname)
systemRecord.Set("name", agentFingerprint.Name)
systemRecord.Set("host", remoteAddr)
systemRecord.Set("port", agentFingerprint.Port)
systemRecord.Set("users", []string{acr.userId})

View File

@@ -10,6 +10,7 @@ import (
"strings"
"time"
"github.com/henrygd/beszel/internal/common"
"github.com/henrygd/beszel/internal/hub/ws"
"github.com/henrygd/beszel/internal/entities/system"
@@ -107,7 +108,7 @@ func (sys *System) update() error {
sys.handlePaused()
return nil
}
data, err := sys.fetchDataFromAgent()
data, err := sys.fetchDataFromAgent(common.DataRequestOptions{CacheTimeMs: uint16(interval)})
if err == nil {
_, err = sys.createRecords(data)
}
@@ -209,13 +210,13 @@ func (sys *System) getContext() (context.Context, context.CancelFunc) {
// fetchDataFromAgent attempts to fetch data from the agent,
// prioritizing WebSocket if available.
func (sys *System) fetchDataFromAgent() (*system.CombinedData, error) {
func (sys *System) fetchDataFromAgent(options common.DataRequestOptions) (*system.CombinedData, error) {
if sys.data == nil {
sys.data = &system.CombinedData{}
}
if sys.WsConn != nil && sys.WsConn.IsConnected() {
wsData, err := sys.fetchDataViaWebSocket()
wsData, err := sys.fetchDataViaWebSocket(options)
if err == nil {
return wsData, nil
}
@@ -223,18 +224,18 @@ func (sys *System) fetchDataFromAgent() (*system.CombinedData, error) {
sys.closeWebSocketConnection()
}
sshData, err := sys.fetchDataViaSSH()
sshData, err := sys.fetchDataViaSSH(options)
if err != nil {
return nil, err
}
return sshData, nil
}
func (sys *System) fetchDataViaWebSocket() (*system.CombinedData, error) {
func (sys *System) fetchDataViaWebSocket(options common.DataRequestOptions) (*system.CombinedData, error) {
if sys.WsConn == nil || !sys.WsConn.IsConnected() {
return nil, errors.New("no websocket connection")
}
err := sys.WsConn.RequestSystemData(sys.data)
err := sys.WsConn.RequestSystemData(context.Background(), sys.data, options)
if err != nil {
return nil, err
}
@@ -244,7 +245,7 @@ func (sys *System) fetchDataViaWebSocket() (*system.CombinedData, error) {
// fetchDataViaSSH handles fetching data using SSH.
// This function encapsulates the original SSH logic.
// It updates sys.data directly upon successful fetch.
func (sys *System) fetchDataViaSSH() (*system.CombinedData, error) {
func (sys *System) fetchDataViaSSH(options common.DataRequestOptions) (*system.CombinedData, error) {
maxRetries := 1
for attempt := 0; attempt <= maxRetries; attempt++ {
if sys.client == nil || sys.Status == down {
@@ -269,12 +270,31 @@ func (sys *System) fetchDataViaSSH() (*system.CombinedData, error) {
if err != nil {
return nil, err
}
stdin, stdinErr := session.StdinPipe()
if err := session.Shell(); err != nil {
return nil, err
}
*sys.data = system.CombinedData{}
if sys.agentVersion.GTE(beszel.MinVersionAgentResponse) && stdinErr == nil {
req := common.HubRequest[any]{Action: common.GetData, Data: options}
_ = cbor.NewEncoder(stdin).Encode(req)
// Close write side to signal end of request
_ = stdin.Close()
var resp common.AgentResponse
if decErr := cbor.NewDecoder(stdout).Decode(&resp); decErr == nil && resp.SystemData != nil {
*sys.data = *resp.SystemData
// wait for the session to complete
if err := session.Wait(); err != nil {
return nil, err
}
return sys.data, nil
}
// If decoding failed, fall back to the legacy decoding below
}
if sys.agentVersion.GTE(beszel.MinVersionCbor) {
err = cbor.NewDecoder(stdout).Decode(sys.data)
} else {
@@ -379,11 +399,11 @@ func extractAgentVersion(versionString string) (semver.Version, error) {
}
// getJitter returns a channel that will be triggered after a random delay
// between 40% and 90% of the interval.
// between 51% and 95% of the interval.
// This is used to stagger the initial WebSocket connections to prevent clustering.
func getJitter() <-chan time.Time {
minPercent := 40
maxPercent := 90
minPercent := 51
maxPercent := 95
jitterRange := maxPercent - minPercent
msDelay := (interval * minPercent / 100) + rand.Intn(interval*jitterRange/100)
return time.After(time.Duration(msDelay) * time.Millisecond)
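A standalone sketch (not part of the diff) of the new 51-95% jitter window, assuming the package-level interval is in milliseconds as the time.Duration conversion suggests; with a 60 000 ms interval the delay falls roughly between 30.6 s and 57 s.
package main
import (
	"fmt"
	"math/rand"
	"time"
)
// jitterDelay mirrors getJitter's math with the new 51-95% window.
func jitterDelay(intervalMs int) time.Duration {
	minPercent, maxPercent := 51, 95
	jitterRange := maxPercent - minPercent
	ms := (intervalMs * minPercent / 100) + rand.Intn(intervalMs*jitterRange/100)
	return time.Duration(ms) * time.Millisecond
}
func main() {
	// e.g. 42.317s for a one-minute interval
	fmt.Println(jitterDelay(60_000))
}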

View File

@@ -106,6 +106,8 @@ func (sm *SystemManager) bindEventHooks() {
sm.hub.OnRecordAfterUpdateSuccess("systems").BindFunc(sm.onRecordAfterUpdateSuccess)
sm.hub.OnRecordAfterDeleteSuccess("systems").BindFunc(sm.onRecordAfterDeleteSuccess)
sm.hub.OnRecordAfterUpdateSuccess("fingerprints").BindFunc(sm.onTokenRotated)
sm.hub.OnRealtimeSubscribeRequest().BindFunc(sm.onRealtimeSubscribeRequest)
sm.hub.OnRealtimeConnectRequest().BindFunc(sm.onRealtimeConnectRequest)
}
// onTokenRotated handles fingerprint token rotation events.

View File

@@ -0,0 +1,187 @@
package systems
import (
"encoding/json"
"strings"
"sync"
"time"
"github.com/henrygd/beszel/internal/common"
"github.com/pocketbase/pocketbase/core"
"github.com/pocketbase/pocketbase/tools/subscriptions"
)
type subscriptionInfo struct {
subscription string
connectedClients uint8
}
var (
activeSubscriptions = make(map[string]*subscriptionInfo)
workerRunning bool
realtimeTicker *time.Ticker
tickerStopChan chan struct{}
realtimeMutex sync.Mutex
)
// onRealtimeConnectRequest handles client connection events for realtime subscriptions.
// It cleans up existing subscriptions when a client connects.
func (sm *SystemManager) onRealtimeConnectRequest(e *core.RealtimeConnectRequestEvent) error {
// code after e.Next() runs when the client disconnects
e.Next()
subscriptions := e.Client.Subscriptions()
for k := range subscriptions {
sm.removeRealtimeSubscription(k, subscriptions[k])
}
return nil
}
// onRealtimeSubscribeRequest handles client subscription events for realtime metrics.
// It tracks new subscriptions and unsubscriptions to manage the realtime worker lifecycle.
func (sm *SystemManager) onRealtimeSubscribeRequest(e *core.RealtimeSubscribeRequestEvent) error {
oldSubs := e.Client.Subscriptions()
// after e.Next(), Subscriptions() reflects the result of the subscribe request
err := e.Next()
newSubs := e.Client.Subscriptions()
// handle new subscriptions
for k, options := range newSubs {
if _, ok := oldSubs[k]; !ok {
if strings.HasPrefix(k, "rt_metrics") {
systemId := options.Query["system"]
if _, ok := activeSubscriptions[systemId]; !ok {
activeSubscriptions[systemId] = &subscriptionInfo{
subscription: k,
}
}
activeSubscriptions[systemId].connectedClients += 1
sm.onRealtimeSubscriptionAdded()
}
}
}
// handle unsubscriptions
for k := range oldSubs {
if _, ok := newSubs[k]; !ok {
sm.removeRealtimeSubscription(k, oldSubs[k])
}
}
return err
}
// onRealtimeSubscriptionAdded initializes or starts the realtime worker when the first subscription is added.
// It ensures only one worker runs at a time and creates the ticker for periodic data fetching.
func (sm *SystemManager) onRealtimeSubscriptionAdded() {
realtimeMutex.Lock()
defer realtimeMutex.Unlock()
// Start the worker if it's not already running
if !workerRunning {
workerRunning = true
// Create a new stop channel for this worker instance
tickerStopChan = make(chan struct{})
go sm.startRealtimeWorker()
}
// If no ticker exists, create one
if realtimeTicker == nil {
realtimeTicker = time.NewTicker(1 * time.Second)
}
}
// checkSubscriptions stops the realtime worker when there are no active subscriptions.
// This prevents unnecessary resource usage when no clients are listening for realtime data.
func (sm *SystemManager) checkSubscriptions() {
if !workerRunning || len(activeSubscriptions) > 0 {
return
}
realtimeMutex.Lock()
defer realtimeMutex.Unlock()
// Signal the worker to stop
if tickerStopChan != nil {
select {
case tickerStopChan <- struct{}{}:
default:
}
}
if realtimeTicker != nil {
realtimeTicker.Stop()
realtimeTicker = nil
}
// Mark worker as stopped (will be reset when next subscription comes in)
workerRunning = false
}
// removeRealtimeSubscription removes a realtime subscription and checks if the worker should be stopped.
// It only processes subscriptions with the "rt_metrics" prefix and triggers cleanup when subscriptions are removed.
func (sm *SystemManager) removeRealtimeSubscription(subscription string, options subscriptions.SubscriptionOptions) {
if strings.HasPrefix(subscription, "rt_metrics") {
systemId := options.Query["system"]
if info, ok := activeSubscriptions[systemId]; ok {
info.connectedClients -= 1
if info.connectedClients <= 0 {
delete(activeSubscriptions, systemId)
}
}
sm.checkSubscriptions()
}
}
// startRealtimeWorker runs the main loop for fetching realtime data from agents.
// It continuously fetches system data and broadcasts it to subscribed clients via WebSocket.
func (sm *SystemManager) startRealtimeWorker() {
sm.fetchRealtimeDataAndNotify()
for {
select {
case <-tickerStopChan:
return
case <-realtimeTicker.C:
// Check if ticker is still valid (might have been stopped)
if realtimeTicker == nil || len(activeSubscriptions) == 0 {
return
}
// slog.Debug("activeSubscriptions", "count", len(activeSubscriptions))
sm.fetchRealtimeDataAndNotify()
}
}
}
// fetchRealtimeDataAndNotify fetches realtime data for all active subscriptions and notifies the clients.
func (sm *SystemManager) fetchRealtimeDataAndNotify() {
for systemId, info := range activeSubscriptions {
system, ok := sm.systems.GetOk(systemId)
if ok {
go func() {
data, err := system.fetchDataFromAgent(common.DataRequestOptions{CacheTimeMs: 1000})
if err != nil {
return
}
bytes, err := json.Marshal(data)
if err == nil {
notify(sm.hub, info.subscription, bytes)
}
}()
}
}
}
// notify broadcasts realtime data to all clients subscribed to a specific subscription.
// It iterates through all connected clients and sends the data only to those with matching subscriptions.
func notify(app core.App, subscription string, data []byte) error {
message := subscriptions.Message{
Name: subscription,
Data: data,
}
for _, client := range app.SubscriptionsBroker().Clients() {
if !client.HasSubscription(subscription) {
continue
}
client.Send(message)
}
return nil
}

View File

@@ -22,6 +22,12 @@ func Update(cmd *cobra.Command, _ []string) {
// Check if china-mirrors flag is set
useMirror, _ := cmd.Flags().GetBool("china-mirrors")
// Get the executable path before update
exePath, err := os.Executable()
if err != nil {
log.Fatal(err)
}
updated, err := ghupdate.Update(ghupdate.Config{
ArchiveExecutable: "beszel",
DataDir: dataDir,
@@ -35,11 +41,8 @@ func Update(cmd *cobra.Command, _ []string) {
}
// make sure the file is executable
exePath, err := os.Executable()
if err == nil {
if err := os.Chmod(exePath, 0755); err != nil {
fmt.Printf("Warning: failed to set executable permissions: %v\n", err)
}
if err := os.Chmod(exePath, 0755); err != nil {
fmt.Printf("Warning: failed to set executable permissions: %v\n", err)
}
// Try to restart the service if it's running

internal/hub/ws/handlers.go (new file, 107 lines)
View File

@@ -0,0 +1,107 @@
package ws
import (
"context"
"errors"
"github.com/fxamacker/cbor/v2"
"github.com/henrygd/beszel/internal/common"
"github.com/henrygd/beszel/internal/entities/system"
"github.com/lxzan/gws"
"golang.org/x/crypto/ssh"
)
// ResponseHandler defines interface for handling agent responses
type ResponseHandler interface {
Handle(agentResponse common.AgentResponse) error
HandleLegacy(rawData []byte) error
}
// BaseHandler provides a default implementation that can be embedded to make HandleLegacy optional
// type BaseHandler struct{}
// func (h *BaseHandler) HandleLegacy(rawData []byte) error {
// return errors.New("legacy format not supported")
// }
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
// systemDataHandler implements ResponseHandler for system data requests
type systemDataHandler struct {
data *system.CombinedData
}
func (h *systemDataHandler) HandleLegacy(rawData []byte) error {
return cbor.Unmarshal(rawData, h.data)
}
func (h *systemDataHandler) Handle(agentResponse common.AgentResponse) error {
if agentResponse.SystemData != nil {
*h.data = *agentResponse.SystemData
}
return nil
}
// RequestSystemData requests system metrics from the agent and unmarshals the response.
func (ws *WsConn) RequestSystemData(ctx context.Context, data *system.CombinedData, options common.DataRequestOptions) error {
if !ws.IsConnected() {
return gws.ErrConnClosed
}
req, err := ws.requestManager.SendRequest(ctx, common.GetData, options)
if err != nil {
return err
}
handler := &systemDataHandler{data: data}
return ws.handleAgentRequest(req, handler)
}
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
// fingerprintHandler implements ResponseHandler for fingerprint requests
type fingerprintHandler struct {
result *common.FingerprintResponse
}
func (h *fingerprintHandler) HandleLegacy(rawData []byte) error {
return cbor.Unmarshal(rawData, h.result)
}
func (h *fingerprintHandler) Handle(agentResponse common.AgentResponse) error {
if agentResponse.Fingerprint != nil {
*h.result = *agentResponse.Fingerprint
return nil
}
return errors.New("no fingerprint data in response")
}
// GetFingerprint authenticates with the agent using SSH signature and returns the agent's fingerprint.
func (ws *WsConn) GetFingerprint(ctx context.Context, token string, signer ssh.Signer, needSysInfo bool) (common.FingerprintResponse, error) {
if !ws.IsConnected() {
return common.FingerprintResponse{}, gws.ErrConnClosed
}
challenge := []byte(token)
signature, err := signer.Sign(nil, challenge)
if err != nil {
return common.FingerprintResponse{}, err
}
req, err := ws.requestManager.SendRequest(ctx, common.CheckFingerprint, common.FingerprintRequest{
Signature: signature.Blob,
NeedSysInfo: needSysInfo,
})
if err != nil {
return common.FingerprintResponse{}, err
}
var result common.FingerprintResponse
handler := &fingerprintHandler{result: &result}
err = ws.handleAgentRequest(req, handler)
return result, err
}

View File

@@ -0,0 +1,186 @@
package ws
import (
"context"
"fmt"
"sync"
"sync/atomic"
"time"
"github.com/fxamacker/cbor/v2"
"github.com/henrygd/beszel/internal/common"
"github.com/lxzan/gws"
)
// RequestID uniquely identifies a request
type RequestID uint32
// PendingRequest tracks an in-flight request
type PendingRequest struct {
ID RequestID
ResponseCh chan *gws.Message
Context context.Context
Cancel context.CancelFunc
CreatedAt time.Time
}
// RequestManager handles concurrent requests to an agent
type RequestManager struct {
sync.RWMutex
conn *gws.Conn
pendingReqs map[RequestID]*PendingRequest
nextID atomic.Uint32
}
// NewRequestManager creates a new request manager for a WebSocket connection
func NewRequestManager(conn *gws.Conn) *RequestManager {
rm := &RequestManager{
conn: conn,
pendingReqs: make(map[RequestID]*PendingRequest),
}
return rm
}
// SendRequest sends a request and returns a channel for the response
func (rm *RequestManager) SendRequest(ctx context.Context, action common.WebSocketAction, data any) (*PendingRequest, error) {
reqID := RequestID(rm.nextID.Add(1))
reqCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
req := &PendingRequest{
ID: reqID,
ResponseCh: make(chan *gws.Message, 1),
Context: reqCtx,
Cancel: cancel,
CreatedAt: time.Now(),
}
rm.Lock()
rm.pendingReqs[reqID] = req
rm.Unlock()
hubReq := common.HubRequest[any]{
Id: (*uint32)(&reqID),
Action: action,
Data: data,
}
// Send the request
if err := rm.sendMessage(hubReq); err != nil {
rm.cancelRequest(reqID)
return nil, fmt.Errorf("failed to send request: %w", err)
}
// Start cleanup watcher for timeout/cancellation
go rm.cleanupRequest(req)
return req, nil
}
// sendMessage encodes and sends a message over WebSocket
func (rm *RequestManager) sendMessage(data any) error {
if rm.conn == nil {
return gws.ErrConnClosed
}
bytes, err := cbor.Marshal(data)
if err != nil {
return fmt.Errorf("failed to marshal request: %w", err)
}
return rm.conn.WriteMessage(gws.OpcodeBinary, bytes)
}
// handleResponse processes a single response message
func (rm *RequestManager) handleResponse(message *gws.Message) {
var response common.AgentResponse
if err := cbor.Unmarshal(message.Data.Bytes(), &response); err != nil {
// Legacy response without ID - route to the oldest pending request
rm.routeLegacyResponse(message)
return
}
reqID := RequestID(*response.Id)
rm.RLock()
req, exists := rm.pendingReqs[reqID]
rm.RUnlock()
if !exists {
// Request not found (might have timed out) - close the message
message.Close()
return
}
select {
case req.ResponseCh <- message:
// Message successfully delivered - the receiver will close it
rm.deleteRequest(reqID)
case <-req.Context.Done():
// Request was cancelled/timed out - close the message
message.Close()
}
}
// routeLegacyResponse handles responses that don't have request IDs (backwards compatibility)
func (rm *RequestManager) routeLegacyResponse(message *gws.Message) {
// Snapshot the oldest pending request without holding the lock during send
rm.RLock()
var oldestReq *PendingRequest
for _, req := range rm.pendingReqs {
if oldestReq == nil || req.CreatedAt.Before(oldestReq.CreatedAt) {
oldestReq = req
}
}
rm.RUnlock()
if oldestReq != nil {
select {
case oldestReq.ResponseCh <- message:
// Message successfully delivered - the receiver will close it
rm.deleteRequest(oldestReq.ID)
case <-oldestReq.Context.Done():
// Request was cancelled - close the message
message.Close()
}
} else {
// No pending requests - close the message
message.Close()
}
}
// cleanupRequest handles request timeout and cleanup
func (rm *RequestManager) cleanupRequest(req *PendingRequest) {
<-req.Context.Done()
rm.cancelRequest(req.ID)
}
// cancelRequest removes a request and cancels its context
func (rm *RequestManager) cancelRequest(reqID RequestID) {
rm.Lock()
defer rm.Unlock()
if req, exists := rm.pendingReqs[reqID]; exists {
req.Cancel()
delete(rm.pendingReqs, reqID)
}
}
// deleteRequest removes a request from the pending map without cancelling its context.
func (rm *RequestManager) deleteRequest(reqID RequestID) {
rm.Lock()
defer rm.Unlock()
delete(rm.pendingReqs, reqID)
}
// Close shuts down the request manager
func (rm *RequestManager) Close() {
rm.Lock()
defer rm.Unlock()
// Cancel all pending requests
for _, req := range rm.pendingReqs {
req.Cancel()
}
rm.pendingReqs = make(map[RequestID]*PendingRequest)
}
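A minimal, standard-library-only illustration of the correlation pattern RequestManager implements (names like toyManager are invented for the sketch): each request gets a unique ID, a buffered reply channel, and a deadline, and the reader routes responses back by ID.
package main
import (
	"context"
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)
type toyResponse struct {
	id   uint32
	body string
}
type toyManager struct {
	mu      sync.Mutex
	pending map[uint32]chan toyResponse
	nextID  atomic.Uint32
}
// send registers a pending request and returns its ID, reply channel, and deadline context.
func (m *toyManager) send(ctx context.Context) (uint32, chan toyResponse, context.Context, context.CancelFunc) {
	id := m.nextID.Add(1)
	ch := make(chan toyResponse, 1)
	reqCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
	m.mu.Lock()
	m.pending[id] = ch
	m.mu.Unlock()
	return id, ch, reqCtx, cancel
}
// handleResponse matches a response to its pending request by ID, like OnMessage does.
func (m *toyManager) handleResponse(resp toyResponse) {
	m.mu.Lock()
	ch, ok := m.pending[resp.id]
	if ok {
		delete(m.pending, resp.id)
	}
	m.mu.Unlock()
	if ok {
		ch <- resp
	}
}
func main() {
	m := &toyManager{pending: make(map[uint32]chan toyResponse)}
	id, ch, reqCtx, cancel := m.send(context.Background())
	defer cancel()
	// Simulate the agent replying out of band.
	go m.handleResponse(toyResponse{id: id, body: "system data"})
	select {
	case resp := <-ch:
		fmt.Println("got response for", resp.id, ":", resp.body)
	case <-reqCtx.Done():
		fmt.Println("request timed out:", reqCtx.Err())
	}
}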

View File

@@ -0,0 +1,81 @@
//go:build testing
// +build testing
package ws
import (
"context"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
// TestRequestManager_BasicFunctionality tests the request manager without mocking gws.Conn
func TestRequestManager_BasicFunctionality(t *testing.T) {
// We'll test the core logic without mocking the connection
// since the gws.Conn interface is complex to mock properly
t.Run("request ID generation", func(t *testing.T) {
// Test that request IDs are generated sequentially and uniquely
rm := &RequestManager{}
// Simulate multiple ID generations
id1 := rm.nextID.Add(1)
id2 := rm.nextID.Add(1)
id3 := rm.nextID.Add(1)
assert.NotEqual(t, id1, id2)
assert.NotEqual(t, id2, id3)
assert.Greater(t, id2, id1)
assert.Greater(t, id3, id2)
})
t.Run("pending request tracking", func(t *testing.T) {
rm := &RequestManager{
pendingReqs: make(map[RequestID]*PendingRequest),
}
// Initially no pending requests
assert.Equal(t, 0, rm.GetPendingCount())
// Add some fake pending requests
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
req1 := &PendingRequest{
ID: RequestID(1),
Context: ctx,
Cancel: cancel,
}
req2 := &PendingRequest{
ID: RequestID(2),
Context: ctx,
Cancel: cancel,
}
rm.pendingReqs[req1.ID] = req1
rm.pendingReqs[req2.ID] = req2
assert.Equal(t, 2, rm.GetPendingCount())
// Remove one
delete(rm.pendingReqs, req1.ID)
assert.Equal(t, 1, rm.GetPendingCount())
// Remove all
delete(rm.pendingReqs, req2.ID)
assert.Equal(t, 0, rm.GetPendingCount())
})
t.Run("context cancellation", func(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Millisecond)
defer cancel()
// Wait for context to timeout
<-ctx.Done()
// Verify context was cancelled
assert.Equal(t, context.DeadlineExceeded, ctx.Err())
})
}

View File

@@ -5,13 +5,13 @@ import (
"time"
"weak"
"github.com/henrygd/beszel/internal/entities/system"
"github.com/blang/semver"
"github.com/henrygd/beszel"
"github.com/henrygd/beszel/internal/common"
"github.com/fxamacker/cbor/v2"
"github.com/lxzan/gws"
"golang.org/x/crypto/ssh"
)
const (
@@ -25,9 +25,10 @@ type Handler struct {
// WsConn represents a WebSocket connection to an agent.
type WsConn struct {
conn *gws.Conn
responseChan chan *gws.Message
DownChan chan struct{}
conn *gws.Conn
requestManager *RequestManager
DownChan chan struct{}
agentVersion semver.Version
}
// FingerprintRecord is fingerprints collection record data in the hub
@@ -50,21 +51,22 @@ func GetUpgrader() *gws.Upgrader {
return upgrader
}
// NewWsConnection creates a new WebSocket connection wrapper.
func NewWsConnection(conn *gws.Conn) *WsConn {
// NewWsConnection creates a new WebSocket connection wrapper with agent version.
func NewWsConnection(conn *gws.Conn, agentVersion semver.Version) *WsConn {
return &WsConn{
conn: conn,
responseChan: make(chan *gws.Message, 1),
DownChan: make(chan struct{}, 1),
conn: conn,
requestManager: NewRequestManager(conn),
DownChan: make(chan struct{}, 1),
agentVersion: agentVersion,
}
}
// OnOpen sets a deadline for the WebSocket connection.
// OnOpen sets a deadline for the WebSocket connection and extracts agent version.
func (h *Handler) OnOpen(conn *gws.Conn) {
conn.SetDeadline(time.Now().Add(deadline))
}
// OnMessage routes incoming WebSocket messages to the response channel.
// OnMessage routes incoming WebSocket messages to the request manager.
func (h *Handler) OnMessage(conn *gws.Conn, message *gws.Message) {
conn.SetDeadline(time.Now().Add(deadline))
if message.Opcode != gws.OpcodeBinary || message.Data.Len() == 0 {
@@ -75,12 +77,7 @@ func (h *Handler) OnMessage(conn *gws.Conn, message *gws.Message) {
_ = conn.WriteClose(1000, nil)
return
}
select {
case wsConn.(*WsConn).responseChan <- message:
default:
// close if the connection is not expecting a response
wsConn.(*WsConn).Close(nil)
}
wsConn.(*WsConn).requestManager.handleResponse(message)
}
// OnClose handles WebSocket connection closures and triggers system down status after delay.
@@ -106,6 +103,9 @@ func (ws *WsConn) Close(msg []byte) {
if ws.IsConnected() {
ws.conn.WriteClose(1000, msg)
}
if ws.requestManager != nil {
ws.requestManager.Close()
}
}
// Ping sends a ping frame to keep the connection alive.
@@ -115,6 +115,7 @@ func (ws *WsConn) Ping() error {
}
// sendMessage encodes data to CBOR and sends it as a binary message to the agent.
// This is kept for backwards compatibility but new actions should use RequestManager.
func (ws *WsConn) sendMessage(data common.HubRequest[any]) error {
if ws.conn == nil {
return gws.ErrConnClosed
@@ -126,54 +127,34 @@ func (ws *WsConn) sendMessage(data common.HubRequest[any]) error {
return ws.conn.WriteMessage(gws.OpcodeBinary, bytes)
}
// RequestSystemData requests system metrics from the agent and unmarshals the response.
func (ws *WsConn) RequestSystemData(data *system.CombinedData) error {
var message *gws.Message
ws.sendMessage(common.HubRequest[any]{
Action: common.GetData,
})
// handleAgentRequest processes a request to the agent, handling both legacy and new formats.
func (ws *WsConn) handleAgentRequest(req *PendingRequest, handler ResponseHandler) error {
// Wait for response
select {
case <-time.After(10 * time.Second):
ws.Close(nil)
return gws.ErrConnClosed
case message = <-ws.responseChan:
case message := <-req.ResponseCh:
defer message.Close()
// Cancel request context to stop timeout watcher promptly
defer req.Cancel()
data := message.Data.Bytes()
// Legacy format - unmarshal directly
if ws.agentVersion.LT(beszel.MinVersionAgentResponse) {
return handler.HandleLegacy(data)
}
// New format with AgentResponse wrapper
var agentResponse common.AgentResponse
if err := cbor.Unmarshal(data, &agentResponse); err != nil {
return err
}
if agentResponse.Error != "" {
return errors.New(agentResponse.Error)
}
return handler.Handle(agentResponse)
case <-req.Context.Done():
return req.Context.Err()
}
defer message.Close()
return cbor.Unmarshal(message.Data.Bytes(), data)
}
// GetFingerprint authenticates with the agent using SSH signature and returns the agent's fingerprint.
func (ws *WsConn) GetFingerprint(token string, signer ssh.Signer, needSysInfo bool) (common.FingerprintResponse, error) {
var clientFingerprint common.FingerprintResponse
challenge := []byte(token)
signature, err := signer.Sign(nil, challenge)
if err != nil {
return clientFingerprint, err
}
err = ws.sendMessage(common.HubRequest[any]{
Action: common.CheckFingerprint,
Data: common.FingerprintRequest{
Signature: signature.Blob,
NeedSysInfo: needSysInfo,
},
})
if err != nil {
return clientFingerprint, err
}
var message *gws.Message
select {
case message = <-ws.responseChan:
case <-time.After(10 * time.Second):
return clientFingerprint, errors.New("request expired")
}
defer message.Close()
err = cbor.Unmarshal(message.Data.Bytes(), &clientFingerprint)
return clientFingerprint, err
}
// IsConnected returns true if the WebSocket connection is active.
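
handleAgentRequest above branches on the agent version: agents older than MinVersionAgentResponse send the payload bare, newer ones wrap it in an AgentResponse whose Error field is checked before the data reaches the handler. A hedged sketch of that branching, using local stand-in types because the real ResponseHandler interface and the AgentResponse payload field are not shown in this diff:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/fxamacker/cbor/v2"
)

// agentResponse is a stand-in for the wrapped response format used by newer
// agents; the real common.AgentResponse may carry different fields.
type agentResponse struct {
	Error string          `cbor:"error,omitempty"`
	Data  cbor.RawMessage `cbor:"data,omitempty"` // hypothetical payload field
}

type systemData struct {
	CPU float64 `cbor:"cpu"`
}

// decodeSystemData branches the same way handleAgentRequest does: legacy
// agents send the payload on its own, newer agents wrap it with an error
// field that must be checked first.
func decodeSystemData(raw []byte, legacy bool, out *systemData) error {
	if legacy {
		return cbor.Unmarshal(raw, out) // legacy format: payload only
	}
	var resp agentResponse
	if err := cbor.Unmarshal(raw, &resp); err != nil {
		return err
	}
	if resp.Error != "" {
		return errors.New(resp.Error)
	}
	return cbor.Unmarshal(resp.Data, out)
}

func main() {
	payload, _ := cbor.Marshal(systemData{CPU: 12.5})
	wrapped, _ := cbor.Marshal(agentResponse{Data: payload})

	var out systemData
	_ = decodeSystemData(wrapped, false, &out)
	fmt.Println(out.CPU) // 12.5
}
```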

View File

@@ -8,6 +8,7 @@ import (
"testing"
"time"
"github.com/blang/semver"
"github.com/henrygd/beszel/internal/common"
"github.com/fxamacker/cbor/v2"
@@ -36,26 +37,25 @@ func TestGetUpgrader(t *testing.T) {
// TestNewWsConnection tests WebSocket connection creation
func TestNewWsConnection(t *testing.T) {
// We can't easily mock gws.Conn, so we'll pass nil and test the structure
wsConn := NewWsConnection(nil)
wsConn := NewWsConnection(nil, semver.MustParse("0.12.10"))
assert.NotNil(t, wsConn, "WebSocket connection should not be nil")
assert.Nil(t, wsConn.conn, "Connection should be nil as passed")
assert.NotNil(t, wsConn.responseChan, "Response channel should be initialized")
assert.NotNil(t, wsConn.requestManager, "Request manager should be initialized")
assert.NotNil(t, wsConn.DownChan, "Down channel should be initialized")
assert.Equal(t, 1, cap(wsConn.responseChan), "Response channel should have capacity of 1")
assert.Equal(t, 1, cap(wsConn.DownChan), "Down channel should have capacity of 1")
}
// TestWsConn_IsConnected tests the connection status check
func TestWsConn_IsConnected(t *testing.T) {
// Test with nil connection
wsConn := NewWsConnection(nil)
wsConn := NewWsConnection(nil, semver.MustParse("0.12.10"))
assert.False(t, wsConn.IsConnected(), "Should not be connected when conn is nil")
}
// TestWsConn_Close tests the connection closing with nil connection
func TestWsConn_Close(t *testing.T) {
wsConn := NewWsConnection(nil)
wsConn := NewWsConnection(nil, semver.MustParse("0.12.10"))
// Should handle nil connection gracefully
assert.NotPanics(t, func() {
@@ -65,7 +65,7 @@ func TestWsConn_Close(t *testing.T) {
// TestWsConn_SendMessage_CBOR tests CBOR encoding in sendMessage
func TestWsConn_SendMessage_CBOR(t *testing.T) {
wsConn := NewWsConnection(nil)
wsConn := NewWsConnection(nil, semver.MustParse("0.12.10"))
testData := common.HubRequest[any]{
Action: common.GetData,
@@ -194,7 +194,7 @@ func TestHandler(t *testing.T) {
// TestWsConnChannelBehavior tests channel behavior without WebSocket connections
func TestWsConnChannelBehavior(t *testing.T) {
wsConn := NewWsConnection(nil)
wsConn := NewWsConnection(nil, semver.MustParse("0.12.10"))
// Test that channels are properly initialized and can be used
select {
@@ -212,11 +212,6 @@ func TestWsConnChannelBehavior(t *testing.T) {
t.Error("Should be able to read from DownChan")
}
// Response channel should be empty initially
select {
case <-wsConn.responseChan:
t.Error("Response channel should be empty initially")
default:
// Expected - channel should be empty
}
// Request manager should have no pending requests initially
assert.Equal(t, 0, wsConn.requestManager.GetPendingCount(), "Should have no pending requests initially")
}

View File

@@ -0,0 +1,11 @@
//go:build testing
// +build testing
package ws
// GetPendingCount returns the number of pending requests (for monitoring)
func (rm *RequestManager) GetPendingCount() int {
rm.RLock()
defer rm.RUnlock()
return len(rm.pendingReqs)
}
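
This helper sits behind the `testing` build tag, so it is only compiled when tests run with that tag (e.g. `go test -tags testing ./...`) and the extra accessor never reaches the release binary. A minimal sketch of the same pattern with hypothetical names:

```go
//go:build testing
// +build testing

// Sketch of a tag-gated helper file: it exposes otherwise unexported state
// for assertions and only exists in builds that pass -tags testing.
package sketch

import "sync"

type requestTracker struct {
	sync.RWMutex
	pending map[string]struct{}
}

// pendingCount mirrors GetPendingCount above: take the read lock, then
// report the size of the pending-request map.
func (rt *requestTracker) pendingCount() int {
	rt.RLock()
	defer rt.RUnlock()
	return len(rt.pending)
}
```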

View File

@@ -0,0 +1,50 @@
package migrations
import (
"github.com/henrygd/beszel/internal/entities/system"
"github.com/pocketbase/pocketbase/core"
m "github.com/pocketbase/pocketbase/migrations"
)
// This can be deleted after Nov 2025 or so
func init() {
m.Register(func(app core.App) error {
app.RunInTransaction(func(txApp core.App) error {
var systemIds []string
txApp.DB().NewQuery("SELECT id FROM systems").Column(&systemIds)
for _, systemId := range systemIds {
var statRecordIds []string
txApp.DB().NewQuery("SELECT id FROM system_stats WHERE system = {:system} AND created > {:created}").Bind(map[string]any{"system": systemId, "created": "2025-09-21"}).Column(&statRecordIds)
for _, statRecordId := range statRecordIds {
statRecord, err := txApp.FindRecordById("system_stats", statRecordId)
if err != nil {
return err
}
var systemStats system.Stats
err = statRecord.UnmarshalJSONField("stats", &systemStats)
if err != nil {
return err
}
// if mem buff cache is less than total mem, we don't need to fix it
if systemStats.MemBuffCache < systemStats.Mem {
continue
}
systemStats.MemBuffCache = 0
statRecord.Set("stats", systemStats)
err = txApp.SaveNoValidate(statRecord)
if err != nil {
return err
}
}
}
return nil
})
return nil
}, func(app core.App) error {
return nil
})
}
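
The migration's only repair rule is the comparison above: a cache/buffer value that is not smaller than total memory is treated as bogus and reset to zero. A tiny worked example of that check (field types and units here are illustrative):

```go
package main

import "fmt"

// fixBuffCache applies the migration's sanity check: keep the recorded
// cache/buffer figure only if it is smaller than total memory, otherwise
// reset it to zero.
func fixBuffCache(mem, buffCache float64) float64 {
	if buffCache < mem {
		return buffCache
	}
	return 0
}

func main() {
	fmt.Println(fixBuffCache(16, 3.2)) // 3.2 — plausible, kept
	fmt.Println(fixBuffCache(16, 64))  // 0   — impossible, reset
}
```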

View File

@@ -213,6 +213,8 @@ func (rm *RecordManager) AverageSystemStats(db dbx.Builder, records RecordIds) *
sum.LoadAvg[2] += stats.LoadAvg[2]
sum.Bandwidth[0] += stats.Bandwidth[0]
sum.Bandwidth[1] += stats.Bandwidth[1]
sum.DiskIO[0] += stats.DiskIO[0]
sum.DiskIO[1] += stats.DiskIO[1]
batterySum += int(stats.Battery[0])
sum.Battery[1] = stats.Battery[1]
// Set peak values
@@ -224,6 +226,21 @@ func (rm *RecordManager) AverageSystemStats(db dbx.Builder, records RecordIds) *
sum.MaxDiskWritePs = max(sum.MaxDiskWritePs, stats.MaxDiskWritePs, stats.DiskWritePs)
sum.MaxBandwidth[0] = max(sum.MaxBandwidth[0], stats.MaxBandwidth[0], stats.Bandwidth[0])
sum.MaxBandwidth[1] = max(sum.MaxBandwidth[1], stats.MaxBandwidth[1], stats.Bandwidth[1])
sum.MaxDiskIO[0] = max(sum.MaxDiskIO[0], stats.MaxDiskIO[0], stats.DiskIO[0])
sum.MaxDiskIO[1] = max(sum.MaxDiskIO[1], stats.MaxDiskIO[1], stats.DiskIO[1])
// Accumulate network interfaces
if sum.NetworkInterfaces == nil {
sum.NetworkInterfaces = make(map[string][4]uint64, len(stats.NetworkInterfaces))
}
for key, value := range stats.NetworkInterfaces {
sum.NetworkInterfaces[key] = [4]uint64{
sum.NetworkInterfaces[key][0] + value[0],
sum.NetworkInterfaces[key][1] + value[1],
max(sum.NetworkInterfaces[key][2], value[2]),
max(sum.NetworkInterfaces[key][3], value[3]),
}
}
// Accumulate temperatures
if stats.Temperatures != nil {
@@ -271,6 +288,16 @@ func (rm *RecordManager) AverageSystemStats(db dbx.Builder, records RecordIds) *
gpu.Usage += value.Usage
gpu.Power += value.Power
gpu.Count += value.Count
if value.Engines != nil {
if gpu.Engines == nil {
gpu.Engines = make(map[string]float64, len(value.Engines))
}
for engineKey, engineValue := range value.Engines {
gpu.Engines[engineKey] += engineValue
}
}
sum.GPUData[id] = gpu
}
}
@@ -291,6 +318,8 @@ func (rm *RecordManager) AverageSystemStats(db dbx.Builder, records RecordIds) *
sum.DiskPct = twoDecimals(sum.DiskPct / count)
sum.DiskReadPs = twoDecimals(sum.DiskReadPs / count)
sum.DiskWritePs = twoDecimals(sum.DiskWritePs / count)
sum.DiskIO[0] = sum.DiskIO[0] / uint64(count)
sum.DiskIO[1] = sum.DiskIO[1] / uint64(count)
sum.NetworkSent = twoDecimals(sum.NetworkSent / count)
sum.NetworkRecv = twoDecimals(sum.NetworkRecv / count)
sum.LoadAvg[0] = twoDecimals(sum.LoadAvg[0] / count)
@@ -299,6 +328,19 @@ func (rm *RecordManager) AverageSystemStats(db dbx.Builder, records RecordIds) *
sum.Bandwidth[0] = sum.Bandwidth[0] / uint64(count)
sum.Bandwidth[1] = sum.Bandwidth[1] / uint64(count)
sum.Battery[0] = uint8(batterySum / int(count))
// Average network interfaces
if sum.NetworkInterfaces != nil {
for key := range sum.NetworkInterfaces {
sum.NetworkInterfaces[key] = [4]uint64{
sum.NetworkInterfaces[key][0] / uint64(count),
sum.NetworkInterfaces[key][1] / uint64(count),
sum.NetworkInterfaces[key][2],
sum.NetworkInterfaces[key][3],
}
}
}
// Average temperatures
if sum.Temperatures != nil && tempCount > 0 {
for key := range sum.Temperatures {
@@ -327,6 +369,13 @@ func (rm *RecordManager) AverageSystemStats(db dbx.Builder, records RecordIds) *
gpu.Usage = twoDecimals(gpu.Usage / count)
gpu.Power = twoDecimals(gpu.Power / count)
gpu.Count = twoDecimals(gpu.Count / count)
if gpu.Engines != nil {
for engineKey := range gpu.Engines {
gpu.Engines[engineKey] = twoDecimals(gpu.Engines[engineKey] / count)
}
}
sum.GPUData[id] = gpu
}
}
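
The per-interface handling above applies two different reductions to the same [4]uint64 value: the first two slots are summed and later divided by the record count, while the last two only ever keep the maximum seen. A small self-contained sketch of that accumulate-then-average pattern (slot semantics are not spelled out in this diff, so the indices are illustrative):

```go
package main

import "fmt"

// averageInterfaces reproduces the pattern used in AverageSystemStats for
// map-valued metrics: sum slots 0-1 and divide by the record count, keep
// the per-record maximum for slots 2-3.
func averageInterfaces(records []map[string][4]uint64) map[string][4]uint64 {
	sum := make(map[string][4]uint64)
	for _, rec := range records {
		for key, v := range rec {
			cur := sum[key]
			sum[key] = [4]uint64{cur[0] + v[0], cur[1] + v[1], max(cur[2], v[2]), max(cur[3], v[3])}
		}
	}
	count := uint64(len(records))
	if count == 0 {
		return sum
	}
	for key, v := range sum {
		sum[key] = [4]uint64{v[0] / count, v[1] / count, v[2], v[3]}
	}
	return sum
}

func main() {
	out := averageInterfaces([]map[string][4]uint64{
		{"eth0": {100, 200, 10, 20}},
		{"eth0": {300, 400, 50, 5}},
	})
	fmt.Println(out["eth0"]) // [200 300 50 20]
}
```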

View File

@@ -175,7 +175,7 @@ func TestDeleteOldSystemStats(t *testing.T) {
}
// Run deletion
err = records.TestDeleteOldSystemStats(hub)
err = records.DeleteOldSystemStats(hub)
require.NoError(t, err)
// Verify results
@@ -268,7 +268,7 @@ func TestDeleteOldAlertsHistory(t *testing.T) {
assert.Equal(t, int64(tc.alertCount), countBefore, "Initial count should match")
// Run deletion
err = records.TestDeleteOldAlertsHistory(hub, tc.countToKeep, tc.countBeforeDeletion)
err = records.DeleteOldAlertsHistory(hub, tc.countToKeep, tc.countBeforeDeletion)
require.NoError(t, err)
// Count after deletion
@@ -332,7 +332,7 @@ func TestDeleteOldAlertsHistoryEdgeCases(t *testing.T) {
}
// Should not error and should not delete anything
err = records.TestDeleteOldAlertsHistory(hub, 10, 20)
err = records.DeleteOldAlertsHistory(hub, 10, 20)
require.NoError(t, err)
count, err := hub.CountRecords("alerts_history")
@@ -346,7 +346,7 @@ func TestDeleteOldAlertsHistoryEdgeCases(t *testing.T) {
require.NoError(t, err)
// Should not error with empty table
err = records.TestDeleteOldAlertsHistory(hub, 10, 20)
err = records.DeleteOldAlertsHistory(hub, 10, 20)
require.NoError(t, err)
})
}
@@ -376,7 +376,7 @@ func TestTwoDecimals(t *testing.T) {
}
for _, tc := range testCases {
result := records.TestTwoDecimals(tc.input)
result := records.TwoDecimals(tc.input)
assert.InDelta(t, tc.expected, result, 0.02, "twoDecimals(%f) should equal %f", tc.input, tc.expected)
}
}

View File

@@ -7,17 +7,17 @@ import (
"github.com/pocketbase/pocketbase/core"
)
// TestDeleteOldSystemStats exposes deleteOldSystemStats for testing
func TestDeleteOldSystemStats(app core.App) error {
// DeleteOldSystemStats exposes deleteOldSystemStats for testing
func DeleteOldSystemStats(app core.App) error {
return deleteOldSystemStats(app)
}
// TestDeleteOldAlertsHistory exposes deleteOldAlertsHistory for testing
func TestDeleteOldAlertsHistory(app core.App, countToKeep, countBeforeDeletion int) error {
// DeleteOldAlertsHistory exposes deleteOldAlertsHistory for testing
func DeleteOldAlertsHistory(app core.App, countToKeep, countBeforeDeletion int) error {
return deleteOldAlertsHistory(app, countToKeep, countBeforeDeletion)
}
// TestTwoDecimals exposes twoDecimals for testing
func TestTwoDecimals(value float64) float64 {
// TwoDecimals exposes twoDecimals for testing
func TwoDecimals(value float64) float64 {
return twoDecimals(value)
}

View File

@@ -1,41 +1,83 @@
{
"$schema": "https://biomejs.dev/schemas/2.2.3/schema.json",
"vcs": {
"enabled": false,
"enabled": true,
"clientKind": "git",
"useIgnoreFile": false
},
"files": {
"ignoreUnknown": false
"useIgnoreFile": true,
"defaultBranch": "main"
},
"formatter": {
"enabled": true,
"indentStyle": "tab",
"indentWidth": 2,
"lineWidth": 120
"lineWidth": 120,
"formatWithErrors": true
},
"assist": { "actions": { "source": { "organizeImports": "on" } } },
"linter": {
"enabled": true,
"rules": {
"recommended": true,
"complexity": {
"noUselessStringConcat": "error",
"noUselessUndefinedInitialization": "error",
"noVoid": "error",
"useDateNow": "error"
},
"correctness": {
"useUniqueElementIds": "off"
"noConstantMathMinMaxClamp": "error",
"noUndeclaredVariables": "error",
"noUnusedImports": "error",
"noUnusedFunctionParameters": "error",
"noUnusedPrivateClassMembers": "error",
"useExhaustiveDependencies": {
"level": "error",
"options": {
"reportUnnecessaryDependencies": false
}
},
"noUnusedVariables": "error"
},
"style": {
"noParameterProperties": "error",
"noYodaExpression": "error",
"useConsistentBuiltinInstantiation": "error",
"useFragmentSyntax": "error",
"useShorthandAssign": "error",
"useArrayLiterals": "error"
},
"suspicious": {
"useAwait": "error",
"noEvolvingTypes": "error"
}
}
},
"javascript": {
"formatter": {
"quoteStyle": "double",
"semicolons": "asNeeded",
"trailingCommas": "es5"
"trailingCommas": "es5",
"semicolons": "asNeeded"
}
},
"assist": {
"enabled": true,
"actions": {
"source": {
"organizeImports": "on"
"overrides": [
{
"includes": ["**/*.jsx", "**/*.tsx"],
"linter": {
"rules": {
"style": {
"noParameterAssign": "error"
}
}
}
},
{
"includes": ["**/*.ts", "**/*.tsx"],
"linter": {
"rules": {
"correctness": {
"noUnusedVariables": "off"
}
}
}
}
}
]
}

View File

@@ -5,6 +5,7 @@
<link rel="manifest" href="./static/manifest.json" />
<link rel="icon" type="image/svg+xml" href="./static/favicon.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0,maximum-scale=1.0, user-scalable=no, viewport-fit=cover" />
<meta name="robots" content="noindex, nofollow" />
<title>Beszel</title>
<script>
globalThis.BESZEL = {

View File

@@ -1,12 +1,12 @@
{
"name": "beszel",
"version": "0.12.7",
"version": "0.13.1",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "beszel",
"version": "0.12.7",
"version": "0.13.1",
"dependencies": {
"@henrygd/queue": "^1.0.7",
"@henrygd/semaphore": "^0.0.2",
@@ -46,6 +46,7 @@
"valibot": "^0.42.1"
},
"devDependencies": {
"@biomejs/biome": "2.2.3",
"@lingui/cli": "^5.4.1",
"@lingui/swc-plugin": "^5.6.1",
"@lingui/vite-plugin": "^5.4.1",
@@ -330,6 +331,169 @@
"node": ">=6.9.0"
}
},
"node_modules/@biomejs/biome": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/@biomejs/biome/-/biome-2.2.3.tgz",
"integrity": "sha512-9w0uMTvPrIdvUrxazZ42Ib7t8Y2yoGLKLdNne93RLICmaHw7mcLv4PPb5LvZLJF3141gQHiCColOh/v6VWlWmg==",
"dev": true,
"license": "MIT OR Apache-2.0",
"bin": {
"biome": "bin/biome"
},
"engines": {
"node": ">=14.21.3"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/biome"
},
"optionalDependencies": {
"@biomejs/cli-darwin-arm64": "2.2.3",
"@biomejs/cli-darwin-x64": "2.2.3",
"@biomejs/cli-linux-arm64": "2.2.3",
"@biomejs/cli-linux-arm64-musl": "2.2.3",
"@biomejs/cli-linux-x64": "2.2.3",
"@biomejs/cli-linux-x64-musl": "2.2.3",
"@biomejs/cli-win32-arm64": "2.2.3",
"@biomejs/cli-win32-x64": "2.2.3"
}
},
"node_modules/@biomejs/cli-darwin-arm64": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-darwin-arm64/-/cli-darwin-arm64-2.2.3.tgz",
"integrity": "sha512-OrqQVBpadB5eqzinXN4+Q6honBz+tTlKVCsbEuEpljK8ASSItzIRZUA02mTikl3H/1nO2BMPFiJ0nkEZNy3B1w==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT OR Apache-2.0",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-darwin-x64": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-darwin-x64/-/cli-darwin-x64-2.2.3.tgz",
"integrity": "sha512-OCdBpb1TmyfsTgBAM1kPMXyYKTohQ48WpiN9tkt9xvU6gKVKHY4oVwteBebiOqyfyzCNaSiuKIPjmHjUZ2ZNMg==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT OR Apache-2.0",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-linux-arm64": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-linux-arm64/-/cli-linux-arm64-2.2.3.tgz",
"integrity": "sha512-g/Uta2DqYpECxG+vUmTAmUKlVhnGEcY7DXWgKP8ruLRa8Si1QHsWknPY3B/wCo0KgYiFIOAZ9hjsHfNb9L85+g==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT OR Apache-2.0",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-linux-arm64-musl": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-linux-arm64-musl/-/cli-linux-arm64-musl-2.2.3.tgz",
"integrity": "sha512-q3w9jJ6JFPZPeqyvwwPeaiS/6NEszZ+pXKF+IczNo8Xj6fsii45a4gEEicKyKIytalV+s829ACZujQlXAiVLBQ==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT OR Apache-2.0",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-linux-x64": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-linux-x64/-/cli-linux-x64-2.2.3.tgz",
"integrity": "sha512-LEtyYL1fJsvw35CxrbQ0gZoxOG3oZsAjzfRdvRBRHxOpQ91Q5doRVjvWW/wepgSdgk5hlaNzfeqpyGmfSD0Eyw==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT OR Apache-2.0",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-linux-x64-musl": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-linux-x64-musl/-/cli-linux-x64-musl-2.2.3.tgz",
"integrity": "sha512-y76Dn4vkP1sMRGPFlNc+OTETBhGPJ90jY3il6jAfur8XWrYBQV3swZ1Jo0R2g+JpOeeoA0cOwM7mJG6svDz79w==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT OR Apache-2.0",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-win32-arm64": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-win32-arm64/-/cli-win32-arm64-2.2.3.tgz",
"integrity": "sha512-Ms9zFYzjcJK7LV+AOMYnjN3pV3xL8Prxf9aWdDVL74onLn5kcvZ1ZMQswE5XHtnd/r/0bnUd928Rpbs14BzVmA==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT OR Apache-2.0",
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-win32-x64": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-win32-x64/-/cli-win32-x64-2.2.3.tgz",
"integrity": "sha512-gvCpewE7mBwBIpqk1YrUqNR4mCiyJm6UI3YWQQXkedSSEwzRdodRpaKhbdbHw1/hmTWOVXQ+Eih5Qctf4TCVOQ==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT OR Apache-2.0",
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@esbuild/aix-ppc64": {
"version": "0.25.6",
"resolved": "https://registry.npmjs.org/@esbuild/aix-ppc64/-/aix-ppc64-0.25.6.tgz",
@@ -5763,14 +5927,14 @@
"license": "MIT"
},
"node_modules/tinyglobby": {
"version": "0.2.14",
"resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.14.tgz",
"integrity": "sha512-tX5e7OM1HnYr2+a2C/4V0htOcSQcoSTH9KgJnVvNm5zm/cyEWKJ7j7YutsH9CxMdtOkkLFy2AHrMci9IM8IPZQ==",
"version": "0.2.15",
"resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz",
"integrity": "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"fdir": "^6.4.4",
"picomatch": "^4.0.2"
"fdir": "^6.5.0",
"picomatch": "^4.0.3"
},
"engines": {
"node": ">=12.0.0"
@@ -5957,9 +6121,9 @@
}
},
"node_modules/vite": {
"version": "7.1.3",
"resolved": "https://registry.npmjs.org/vite/-/vite-7.1.3.tgz",
"integrity": "sha512-OOUi5zjkDxYrKhTV3V7iKsoS37VUM7v40+HuwEmcrsf11Cdx9y3DIr2Px6liIcZFwt3XSRpQvFpL3WVy7ApkGw==",
"version": "7.1.5",
"resolved": "https://registry.npmjs.org/vite/-/vite-7.1.5.tgz",
"integrity": "sha512-4cKBO9wR75r0BeIWWWId9XK9Lj6La5X846Zw9dFfzMRw38IlTk2iCcUt6hsyiDRcPidc55ZParFYDXi0nXOeLQ==",
"dev": true,
"license": "MIT",
"dependencies": {
@@ -5968,7 +6132,7 @@
"picomatch": "^4.0.3",
"postcss": "^8.5.6",
"rollup": "^4.43.0",
"tinyglobby": "^0.2.14"
"tinyglobby": "^0.2.15"
},
"bin": {
"vite": "bin/vite.js"

View File

@@ -1,7 +1,7 @@
{
"name": "beszel",
"private": true,
"version": "0.12.7",
"version": "0.13.1",
"type": "module",
"scripts": {
"dev": "vite --host",
@@ -76,4 +76,4 @@
"optionalDependencies": {
"@esbuild/linux-arm64": "^0.21.5"
}
}
}

View File

@@ -1,5 +1,9 @@
import { Trans } from "@lingui/react/macro"
import { t } from "@lingui/core/macro"
import { Trans } from "@lingui/react/macro"
import { useStore } from "@nanostores/react"
import { getPagePath } from "@nanostores/router"
import { ChevronDownIcon, ExternalLinkIcon, PlusIcon } from "lucide-react"
import { memo, useEffect, useRef, useState } from "react"
import { Button } from "@/components/ui/button"
import {
Dialog,
@@ -10,34 +14,30 @@ import {
DialogTitle,
DialogTrigger,
} from "@/components/ui/dialog"
import { Tabs, TabsContent, TabsList, TabsTrigger } from "@/components/ui/tabs"
import { Input } from "@/components/ui/input"
import { Label } from "@/components/ui/label"
import { Tabs, TabsContent, TabsList, TabsTrigger } from "@/components/ui/tabs"
import { isReadOnlyUser, pb } from "@/lib/api"
import { SystemStatus } from "@/lib/enums"
import { $publicKey } from "@/lib/stores"
import { cn, generateToken, tokenMap, useBrowserStorage } from "@/lib/utils"
import { pb, isReadOnlyUser } from "@/lib/api"
import { useStore } from "@nanostores/react"
import { ChevronDownIcon, ExternalLinkIcon, PlusIcon } from "lucide-react"
import { memo, useEffect, useRef, useState } from "react"
import { $router, basePath, Link, navigate } from "./router"
import { SystemRecord } from "@/types"
import { SystemStatus } from "@/lib/enums"
import { AppleIcon, DockerIcon, FreeBsdIcon, TuxIcon, WindowsIcon } from "./ui/icons"
import { InputCopy } from "./ui/input-copy"
import { getPagePath } from "@nanostores/router"
import type { SystemRecord } from "@/types"
import {
copyDockerCompose,
copyDockerRun,
copyLinuxCommand,
copyWindowsCommand,
DropdownItem,
type DropdownItem,
InstallDropdown,
} from "./install-dropdowns"
import { $router, basePath, Link, navigate } from "./router"
import { DropdownMenu, DropdownMenuTrigger } from "./ui/dropdown-menu"
import { AppleIcon, DockerIcon, FreeBsdIcon, TuxIcon, WindowsIcon } from "./ui/icons"
import { InputCopy } from "./ui/input-copy"
export function AddSystemButton({ className }: { className?: string }) {
const [open, setOpen] = useState(false)
let opened = useRef(false)
const opened = useRef(false)
if (open) {
opened.current = true
}

View File

@@ -1,11 +1,11 @@
import { ColumnDef } from "@tanstack/react-table"
import { AlertsHistoryRecord } from "@/types"
import { Button } from "@/components/ui/button"
import { Badge } from "@/components/ui/badge"
import { formatShortDate, toFixedFloat, formatDuration, cn } from "@/lib/utils"
import { alertInfo } from "@/lib/alerts"
import { Trans } from "@lingui/react/macro"
import { t } from "@lingui/core/macro"
import { Trans } from "@lingui/react/macro"
import type { ColumnDef } from "@tanstack/react-table"
import { Badge } from "@/components/ui/badge"
import { Button } from "@/components/ui/button"
import { alertInfo } from "@/lib/alerts"
import { cn, formatDuration, formatShortDate, toFixedFloat } from "@/lib/utils"
import type { AlertsHistoryRecord } from "@/types"
export const alertsHistoryColumns: ColumnDef<AlertsHistoryRecord>[] = [
{
@@ -38,7 +38,7 @@ export const alertsHistoryColumns: ColumnDef<AlertsHistoryRecord>[] = [
</Button>
),
cell: ({ getValue, row }) => {
let name = getValue() as string
const name = getValue() as string
const info = alertInfo[row.original.name]
const Icon = info?.icon

View File

@@ -1,13 +1,13 @@
import { t } from "@lingui/core/macro"
import { memo, useMemo, useState } from "react"
import { useStore } from "@nanostores/react"
import { $alerts } from "@/lib/stores"
import { BellIcon } from "lucide-react"
import { cn } from "@/lib/utils"
import { memo, useMemo, useState } from "react"
import { Button } from "@/components/ui/button"
import { SystemRecord } from "@/types"
import { AlertDialogContent } from "./alerts-sheet"
import { Sheet, SheetContent, SheetTrigger } from "@/components/ui/sheet"
import { $alerts } from "@/lib/stores"
import { cn } from "@/lib/utils"
import type { SystemRecord } from "@/types"
import { AlertDialogContent } from "./alerts-sheet"
export default memo(function AlertsButton({ system }: { system: SystemRecord }) {
const [opened, setOpened] = useState(false)

View File

@@ -1,21 +1,20 @@
import { t } from "@lingui/core/macro"
import { Trans, Plural } from "@lingui/react/macro"
import { $alerts, $systems } from "@/lib/stores"
import { cn, debounce } from "@/lib/utils"
import { alertInfo } from "@/lib/alerts"
import { Switch } from "@/components/ui/switch"
import { AlertInfo, AlertRecord, SystemRecord } from "@/types"
import { lazy, memo, Suspense, useMemo, useState } from "react"
import { toast } from "@/components/ui/use-toast"
import { Plural, Trans } from "@lingui/react/macro"
import { useStore } from "@nanostores/react"
import { getPagePath } from "@nanostores/router"
import { Checkbox } from "@/components/ui/checkbox"
import { DialogTitle, DialogDescription } from "@/components/ui/dialog"
import { Tabs, TabsList, TabsTrigger, TabsContent } from "@/components/ui/tabs"
import { ServerIcon, GlobeIcon } from "lucide-react"
import { GlobeIcon, ServerIcon } from "lucide-react"
import { lazy, memo, Suspense, useMemo, useState } from "react"
import { $router, Link } from "@/components/router"
import { DialogHeader } from "@/components/ui/dialog"
import { Checkbox } from "@/components/ui/checkbox"
import { DialogDescription, DialogHeader, DialogTitle } from "@/components/ui/dialog"
import { Switch } from "@/components/ui/switch"
import { Tabs, TabsContent, TabsList, TabsTrigger } from "@/components/ui/tabs"
import { toast } from "@/components/ui/use-toast"
import { alertInfo } from "@/lib/alerts"
import { pb } from "@/lib/api"
import { $alerts, $systems } from "@/lib/stores"
import { cn, debounce } from "@/lib/utils"
import type { AlertInfo, AlertRecord, SystemRecord } from "@/types"
const Slider = lazy(() => import("@/components/ui/slider"))
@@ -172,7 +171,7 @@ export function AlertContent({
const [checked, setChecked] = useState(global ? false : !!alert)
const [min, setMin] = useState(alert?.min || 10)
const [value, setValue] = useState(alert?.value || (singleDescription ? 0 : alertData.start ?? 80))
const [value, setValue] = useState(alert?.value || (singleDescription ? 0 : (alertData.start ?? 80)))
const Icon = alertData.icon

View File

@@ -1,9 +1,16 @@
import { Area, AreaChart, CartesianGrid, YAxis } from "recharts"
import { ChartContainer, ChartTooltip, ChartTooltipContent, xAxis } from "@/components/ui/chart"
import { cn, formatShortDate, chartMargin } from "@/lib/utils"
import { useYAxisWidth } from "./hooks"
import { ChartData, SystemStatsRecord } from "@/types"
import { useMemo } from "react"
import { Area, AreaChart, CartesianGrid, YAxis } from "recharts"
import {
ChartContainer,
ChartLegend,
ChartLegendContent,
ChartTooltip,
ChartTooltipContent,
xAxis,
} from "@/components/ui/chart"
import { chartMargin, cn, formatShortDate } from "@/lib/utils"
import type { ChartData, SystemStatsRecord } from "@/types"
import { useYAxisWidth } from "./hooks"
export type DataPoint = {
label: string
@@ -20,6 +27,8 @@ export default function AreaChartDefault({
contentFormatter,
dataPoints,
domain,
legend,
itemSorter,
}: // logRender = false,
{
chartData: ChartData
@@ -29,10 +38,13 @@ export default function AreaChartDefault({
contentFormatter: ({ value, payload }: { value: number; payload: SystemStatsRecord }) => string
dataPoints?: DataPoint[]
domain?: [number, number]
legend?: boolean
itemSorter?: (a: any, b: any) => number
// logRender?: boolean
}) {
const { yAxisWidth, updateYAxisWidth } = useYAxisWidth()
// biome-ignore lint/correctness/useExhaustiveDependencies: ignore
return useMemo(() => {
if (chartData.systemStats.length === 0) {
return null
@@ -63,6 +75,8 @@ export default function AreaChartDefault({
<ChartTooltip
animationEasing="ease-out"
animationDuration={150}
// @ts-expect-error
itemSorter={itemSorter}
content={
<ChartTooltipContent
labelFormatter={(_, data) => formatShortDate(data[0].payload.created)}
@@ -70,11 +84,14 @@ export default function AreaChartDefault({
/>
}
/>
{dataPoints?.map((dataPoint, i) => {
const color = `var(--chart-${dataPoint.color})`
{dataPoints?.map((dataPoint) => {
let { color } = dataPoint
if (typeof color === "number") {
color = `var(--chart-${color})`
}
return (
<Area
key={i}
key={dataPoint.label}
dataKey={dataPoint.dataKey}
name={dataPoint.label}
type="monotoneX"
@@ -85,7 +102,7 @@ export default function AreaChartDefault({
/>
)
})}
{/* <ChartLegend content={<ChartLegendContent />} /> */}
{legend && <ChartLegend content={<ChartLegendContent />} />}
</AreaChart>
</ChartContainer>
</div>

View File

@@ -1,13 +1,28 @@
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from "@/components/ui/select"
import { $chartTime } from "@/lib/stores"
import { chartTimeData, cn } from "@/lib/utils"
import { ChartTimes } from "@/types"
import { useStore } from "@nanostores/react"
import { HistoryIcon } from "lucide-react"
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from "@/components/ui/select"
import { $chartTime } from "@/lib/stores"
import { chartTimeData, cn, compareSemVer, parseSemVer } from "@/lib/utils"
import type { ChartTimes, SemVer } from "@/types"
import { memo } from "react"
export default function ChartTimeSelect({ className }: { className?: string }) {
export default memo(function ChartTimeSelect({
className,
agentVersion,
}: {
className?: string
agentVersion: SemVer
}) {
const chartTime = useStore($chartTime)
// remove chart times that are not supported by the system agent version
const availableChartTimes = Object.entries(chartTimeData).filter(([_, { minVersion }]) => {
if (!minVersion) {
return true
}
return compareSemVer(agentVersion, parseSemVer(minVersion)) >= 0
})
return (
<Select defaultValue="1h" value={chartTime} onValueChange={(value: ChartTimes) => $chartTime.set(value)}>
<SelectTrigger className={cn(className, "relative ps-10 pe-5")}>
@@ -15,7 +30,7 @@ export default function ChartTimeSelect({ className }: { className?: string }) {
<SelectValue />
</SelectTrigger>
<SelectContent>
{Object.entries(chartTimeData).map(([value, { label }]) => (
{availableChartTimes.map(([value, { label }]) => (
<SelectItem key={value} value={value}>
{label()}
</SelectItem>
@@ -23,4 +38,4 @@ export default function ChartTimeSelect({ className }: { className?: string }) {
</SelectContent>
</Select>
)
}
})

View File

@@ -1,13 +1,13 @@
import { Area, AreaChart, CartesianGrid, YAxis } from "recharts"
import { type ChartConfig, ChartContainer, ChartTooltip, ChartTooltipContent, xAxis } from "@/components/ui/chart"
import { memo, useMemo } from "react"
import { cn, formatShortDate, chartMargin, toFixedFloat, formatBytes, decimalString } from "@/lib/utils"
// import Spinner from '../spinner'
import { useStore } from "@nanostores/react"
import { memo, useMemo } from "react"
import { Area, AreaChart, CartesianGrid, YAxis } from "recharts"
import { type ChartConfig, ChartContainer, ChartTooltip, ChartTooltipContent, xAxis } from "@/components/ui/chart"
import { ChartType, Unit } from "@/lib/enums"
import { $containerFilter, $userSettings } from "@/lib/stores"
import { chartMargin, cn, decimalString, formatBytes, formatShortDate, toFixedFloat } from "@/lib/utils"
import type { ChartData } from "@/types"
import { Separator } from "../ui/separator"
import { ChartType, Unit } from "@/lib/enums"
import { useYAxisWidth } from "./hooks"
export default memo(function ContainerChart({

View File

@@ -1,10 +1,10 @@
import { useLingui } from "@lingui/react/macro"
import { memo } from "react"
import { Area, AreaChart, CartesianGrid, YAxis } from "recharts"
import { ChartContainer, ChartTooltip, ChartTooltipContent, xAxis } from "@/components/ui/chart"
import { cn, formatShortDate, decimalString, chartMargin, formatBytes, toFixedFloat } from "@/lib/utils"
import { ChartData } from "@/types"
import { memo } from "react"
import { useLingui } from "@lingui/react/macro"
import { Unit } from "@/lib/enums"
import { chartMargin, cn, decimalString, formatBytes, formatShortDate, toFixedFloat } from "@/lib/utils"
import type { ChartData } from "@/types"
import { useYAxisWidth } from "./hooks"
export default memo(function DiskChart({

View File

@@ -1,5 +1,5 @@
import { memo, useMemo } from "react"
import { CartesianGrid, Line, LineChart, YAxis } from "recharts"
import {
ChartContainer,
ChartLegend,
@@ -8,48 +8,59 @@ import {
ChartTooltipContent,
xAxis,
} from "@/components/ui/chart"
import { cn, formatShortDate, toFixedFloat, decimalString, chartMargin } from "@/lib/utils"
import { ChartData } from "@/types"
import { memo, useMemo } from "react"
import { chartMargin, cn, decimalString, formatShortDate, toFixedFloat } from "@/lib/utils"
import type { ChartData, GPUData } from "@/types"
import { useYAxisWidth } from "./hooks"
import type { DataPoint } from "./line-chart"
export default memo(function GpuPowerChart({ chartData }: { chartData: ChartData }) {
const { yAxisWidth, updateYAxisWidth } = useYAxisWidth()
const packageKey = " package"
const { gpuData, dataPoints } = useMemo(() => {
const dataPoints = [] as DataPoint[]
const gpuData = [] as Record<string, GPUData | string>[]
const addedKeys = new Map<string, number>()
const addKey = (key: string, value: number) => {
addedKeys.set(key, (addedKeys.get(key) ?? 0) + value)
}
for (const stats of chartData.systemStats) {
const gpus = stats.stats?.g ?? {}
const data = { created: stats.created } as Record<string, GPUData | string>
for (const id in gpus) {
const gpu = gpus[id] as GPUData
data[gpu.n] = gpu
addKey(gpu.n, gpu.p ?? 0)
if (gpu.pp) {
data[`${gpu.n}${packageKey}`] = gpu
addKey(`${gpu.n}${packageKey}`, gpu.pp ?? 0)
}
}
gpuData.push(data)
}
const sortedKeys = Array.from(addedKeys.entries())
.sort(([, a], [, b]) => b - a)
.map(([key]) => key)
for (let i = 0; i < sortedKeys.length; i++) {
const id = sortedKeys[i]
dataPoints.push({
label: id,
dataKey: (gpuData: Record<string, GPUData>) => {
return id.endsWith(packageKey) ? (gpuData[id]?.pp ?? 0) : (gpuData[id]?.p ?? 0)
},
color: `hsl(${226 + (((i * 360) / addedKeys.size) % 360)}, 65%, 52%)`,
})
}
return { gpuData, dataPoints }
}, [chartData])
if (chartData.systemStats.length === 0) {
return null
}
/** Format temperature data for chart and assign colors */
const newChartData = useMemo(() => {
const newChartData = { data: [], colors: {} } as {
data: Record<string, number | string>[]
colors: Record<string, string>
}
const powerSums = {} as Record<string, number>
for (let data of chartData.systemStats) {
let newData = { created: data.created } as Record<string, number | string>
for (let gpu of Object.values(data.stats?.g ?? {})) {
if (gpu.p) {
const name = gpu.n
newData[name] = gpu.p
powerSums[name] = (powerSums[name] ?? 0) + newData[name]
}
}
newChartData.data.push(newData)
}
const keys = Object.keys(powerSums).sort((a, b) => powerSums[b] - powerSums[a])
for (let key of keys) {
newChartData.colors[key] = `hsl(${((keys.indexOf(key) * 360) / keys.length) % 360}, 60%, 55%)`
}
return newChartData
}, [chartData])
const colors = Object.keys(newChartData.colors)
// console.log('rendered at', new Date())
return (
<div>
<ChartContainer
@@ -57,7 +68,7 @@ export default memo(function GpuPowerChart({ chartData }: { chartData: ChartData
"opacity-100": yAxisWidth,
})}
>
<LineChart accessibilityLayer data={newChartData.data} margin={chartMargin}>
<LineChart accessibilityLayer data={gpuData} margin={chartMargin}>
<CartesianGrid vertical={false} />
<YAxis
direction="ltr"
@@ -67,7 +78,7 @@ export default memo(function GpuPowerChart({ chartData }: { chartData: ChartData
width={yAxisWidth}
tickFormatter={(value) => {
const val = toFixedFloat(value, 2)
return updateYAxisWidth(val + "W")
return updateYAxisWidth(`${val}W`)
}}
tickLine={false}
axisLine={false}
@@ -76,29 +87,29 @@ export default memo(function GpuPowerChart({ chartData }: { chartData: ChartData
<ChartTooltip
animationEasing="ease-out"
animationDuration={150}
// @ts-ignore
// @ts-expect-error
itemSorter={(a, b) => b.value - a.value}
content={
<ChartTooltipContent
labelFormatter={(_, data) => formatShortDate(data[0].payload.created)}
contentFormatter={(item) => decimalString(item.value) + "W"}
contentFormatter={(item) => `${decimalString(item.value)}W`}
// indicator="line"
/>
}
/>
{colors.map((key) => (
{dataPoints.map((dataPoint) => (
<Line
key={key}
dataKey={key}
name={key}
key={dataPoint.label}
dataKey={dataPoint.dataKey}
name={dataPoint.label}
type="monotoneX"
dot={false}
strokeWidth={1.5}
stroke={newChartData.colors[key]}
stroke={dataPoint.color as string}
isAnimationActive={false}
/>
))}
{colors.length > 1 && <ChartLegend content={<ChartLegendContent />} />}
{dataPoints.length > 1 && <ChartLegend content={<ChartLegendContent />} />}
</LineChart>
</ChartContainer>
</div>

View File

@@ -1,6 +1,6 @@
import { useMemo, useState } from "react"
import { ChartConfig } from "@/components/ui/chart"
import { ChartData } from "@/types"
import type { ChartConfig } from "@/components/ui/chart"
import type { ChartData, SystemStats, SystemStatsRecord } from "@/types"
/** Chart configurations for CPU, memory, and network usage charts */
export interface ContainerChartConfigs {
@@ -105,3 +105,21 @@ export function useYAxisWidth() {
}
return { yAxisWidth, updateYAxisWidth }
}
// Assures consistent colors for network interfaces
export function useNetworkInterfaces(interfaces: SystemStats["ni"]) {
const keys = Object.keys(interfaces ?? {})
const sortedKeys = keys.sort((a, b) => (interfaces?.[b]?.[3] ?? 0) - (interfaces?.[a]?.[3] ?? 0))
return {
length: sortedKeys.length,
data: (index = 3) => {
return sortedKeys.map((key) => ({
label: key,
dataKey: ({ stats }: SystemStatsRecord) => stats?.ni?.[key]?.[index],
color: `hsl(${220 + (((sortedKeys.indexOf(key) * 360) / sortedKeys.length) % 360)}, 70%, 50%)`,
opacity: 0.3,
}))
},
}
}

View File

@@ -0,0 +1,110 @@
import { useMemo } from "react"
import { CartesianGrid, Line, LineChart, YAxis } from "recharts"
import {
ChartContainer,
ChartLegend,
ChartLegendContent,
ChartTooltip,
ChartTooltipContent,
xAxis,
} from "@/components/ui/chart"
import { chartMargin, cn, formatShortDate } from "@/lib/utils"
import type { ChartData, SystemStatsRecord } from "@/types"
import { useYAxisWidth } from "./hooks"
export type DataPoint = {
label: string
dataKey: (data: SystemStatsRecord) => number | undefined
color: number | string
}
export default function LineChartDefault({
chartData,
max,
maxToggled,
tickFormatter,
contentFormatter,
dataPoints,
domain,
legend,
itemSorter,
}: // logRender = false,
{
chartData: ChartData
max?: number
maxToggled?: boolean
tickFormatter: (value: number, index: number) => string
contentFormatter: ({ value, payload }: { value: number; payload: SystemStatsRecord }) => string
dataPoints?: DataPoint[]
domain?: [number, number]
legend?: boolean
itemSorter?: (a: any, b: any) => number
// logRender?: boolean
}) {
const { yAxisWidth, updateYAxisWidth } = useYAxisWidth()
// biome-ignore lint/correctness/useExhaustiveDependencies: ignore
return useMemo(() => {
if (chartData.systemStats.length === 0) {
return null
}
// if (logRender) {
// console.log("Rendered at", new Date())
// }
return (
<div>
<ChartContainer
className={cn("h-full w-full absolute aspect-auto bg-card opacity-0 transition-opacity", {
"opacity-100": yAxisWidth,
})}
>
<LineChart accessibilityLayer data={chartData.systemStats} margin={chartMargin}>
<CartesianGrid vertical={false} />
<YAxis
direction="ltr"
orientation={chartData.orientation}
className="tracking-tighter"
width={yAxisWidth}
domain={domain ?? [0, max ?? "auto"]}
tickFormatter={(value, index) => updateYAxisWidth(tickFormatter(value, index))}
tickLine={false}
axisLine={false}
/>
{xAxis(chartData)}
<ChartTooltip
animationEasing="ease-out"
animationDuration={150}
// @ts-expect-error
itemSorter={itemSorter}
content={
<ChartTooltipContent
labelFormatter={(_, data) => formatShortDate(data[0].payload.created)}
contentFormatter={contentFormatter}
/>
}
/>
{dataPoints?.map((dataPoint) => {
let { color } = dataPoint
if (typeof color === "number") {
color = `var(--chart-${color})`
}
return (
<Line
key={dataPoint.label}
dataKey={dataPoint.dataKey}
name={dataPoint.label}
type="monotoneX"
dot={false}
strokeWidth={1.5}
stroke={color}
isAnimationActive={false}
/>
)
})}
{legend && <ChartLegend content={<ChartLegendContent />} />}
</LineChart>
</ChartContainer>
</div>
)
}, [chartData.systemStats.at(-1), yAxisWidth, maxToggled])
}

View File

@@ -1,5 +1,6 @@
import { t } from "@lingui/core/macro"
import { memo } from "react"
import { CartesianGrid, Line, LineChart, YAxis } from "recharts"
import {
ChartContainer,
ChartLegend,
@@ -8,10 +9,8 @@ import {
ChartTooltipContent,
xAxis,
} from "@/components/ui/chart"
import { cn, formatShortDate, toFixedFloat, decimalString, chartMargin } from "@/lib/utils"
import { ChartData, SystemStats } from "@/types"
import { memo } from "react"
import { t } from "@lingui/core/macro"
import { chartMargin, cn, decimalString, formatShortDate, toFixedFloat } from "@/lib/utils"
import type { ChartData, SystemStats } from "@/types"
import { useYAxisWidth } from "./hooks"
export default memo(function LoadAverageChart({ chartData }: { chartData: ChartData }) {
@@ -60,8 +59,6 @@ export default memo(function LoadAverageChart({ chartData }: { chartData: ChartD
<ChartTooltip
animationEasing="ease-out"
animationDuration={150}
// @ts-ignore
// itemSorter={(a, b) => b.value - a.value}
content={
<ChartTooltipContent
labelFormatter={(_, data) => formatShortDate(data[0].payload.created)}
@@ -71,14 +68,15 @@ export default memo(function LoadAverageChart({ chartData }: { chartData: ChartD
/>
{keys.map(({ legacy, color, label }, i) => {
const dataKey = (value: { stats: SystemStats }) => {
if (chartData.agentVersion.patch < 1) {
const { minor, patch } = chartData.agentVersion
if (minor <= 12 && patch < 1) {
return value.stats?.[legacy]
}
return value.stats?.la?.[i] ?? value.stats?.[legacy]
}
return (
<Line
key={i}
key={label}
dataKey={dataKey}
name={label}
type="monotoneX"

View File

@@ -1,10 +1,10 @@
import { useLingui } from "@lingui/react/macro"
import { memo } from "react"
import { Area, AreaChart, CartesianGrid, YAxis } from "recharts"
import { ChartContainer, ChartTooltip, ChartTooltipContent, xAxis } from "@/components/ui/chart"
import { cn, decimalString, formatShortDate, chartMargin, formatBytes, toFixedFloat } from "@/lib/utils"
import { memo } from "react"
import { ChartData } from "@/types"
import { useLingui } from "@lingui/react/macro"
import { Unit } from "@/lib/enums"
import { chartMargin, cn, decimalString, formatBytes, formatShortDate, toFixedFloat } from "@/lib/utils"
import type { ChartData } from "@/types"
import { useYAxisWidth } from "./hooks"
export default memo(function MemChart({ chartData, showMax }: { chartData: ChartData; showMax: boolean }) {
@@ -53,7 +53,7 @@ export default memo(function MemChart({ chartData, showMax }: { chartData: Chart
animationDuration={150}
content={
<ChartTooltipContent
// @ts-ignore
// @ts-expect-error
itemSorter={(a, b) => a.order - b.order}
labelFormatter={(_, data) => formatShortDate(data[0].payload.created)}
contentFormatter={({ value }) => {

View File

@@ -1,12 +1,11 @@
import { t } from "@lingui/core/macro"
import { useStore } from "@nanostores/react"
import { memo } from "react"
import { Area, AreaChart, CartesianGrid, YAxis } from "recharts"
import { ChartContainer, ChartTooltip, ChartTooltipContent, xAxis } from "@/components/ui/chart"
import { cn, formatShortDate, decimalString, chartMargin, formatBytes, toFixedFloat } from "@/lib/utils"
import { ChartData } from "@/types"
import { memo } from "react"
import { $userSettings } from "@/lib/stores"
import { useStore } from "@nanostores/react"
import { chartMargin, cn, decimalString, formatBytes, formatShortDate, toFixedFloat } from "@/lib/utils"
import type { ChartData } from "@/types"
import { useYAxisWidth } from "./hooks"
export default memo(function SwapChart({ chartData }: { chartData: ChartData }) {

View File

@@ -1,5 +1,6 @@
import { useStore } from "@nanostores/react"
import { memo, useMemo } from "react"
import { CartesianGrid, Line, LineChart, YAxis } from "recharts"
import {
ChartContainer,
ChartLegend,
@@ -8,11 +9,9 @@ import {
ChartTooltipContent,
xAxis,
} from "@/components/ui/chart"
import { cn, formatShortDate, toFixedFloat, chartMargin, formatTemperature, decimalString } from "@/lib/utils"
import { ChartData } from "@/types"
import { memo, useMemo } from "react"
import { $temperatureFilter, $userSettings } from "@/lib/stores"
import { useStore } from "@nanostores/react"
import { chartMargin, cn, decimalString, formatShortDate, formatTemperature, toFixedFloat } from "@/lib/utils"
import type { ChartData } from "@/types"
import { useYAxisWidth } from "./hooks"
export default memo(function TemperatureChart({ chartData }: { chartData: ChartData }) {
@@ -31,18 +30,18 @@ export default memo(function TemperatureChart({ chartData }: { chartData: ChartD
colors: Record<string, string>
}
const tempSums = {} as Record<string, number>
for (let data of chartData.systemStats) {
let newData = { created: data.created } as Record<string, number | string>
let keys = Object.keys(data.stats?.t ?? {})
for (const data of chartData.systemStats) {
const newData = { created: data.created } as Record<string, number | string>
const keys = Object.keys(data.stats?.t ?? {})
for (let i = 0; i < keys.length; i++) {
let key = keys[i]
const key = keys[i]
newData[key] = data.stats.t![key]
tempSums[key] = (tempSums[key] ?? 0) + newData[key]
}
newChartData.data.push(newData)
}
const keys = Object.keys(tempSums).sort((a, b) => tempSums[b] - tempSums[a])
for (let key of keys) {
for (const key of keys) {
newChartData.colors[key] = `hsl(${((keys.indexOf(key) * 360) / keys.length) % 360}, 60%, 55%)`
}
return newChartData
@@ -78,7 +77,7 @@ export default memo(function TemperatureChart({ chartData }: { chartData: ChartD
<ChartTooltip
animationEasing="ease-out"
animationDuration={150}
// @ts-ignore
// @ts-expect-error
itemSorter={(a, b) => b.value - a.value}
content={
<ChartTooltipContent
@@ -93,7 +92,7 @@ export default memo(function TemperatureChart({ chartData }: { chartData: ChartD
/>
{colors.map((key) => {
const filtered = filter && !key.toLowerCase().includes(filter.toLowerCase())
let strokeOpacity = filtered ? 0.1 : 1
const strokeOpacity = filtered ? 0.1 : 1
return (
<Line
key={key}

View File

@@ -1,3 +1,7 @@
import { t } from "@lingui/core/macro"
import { Trans } from "@lingui/react/macro"
import { getPagePath } from "@nanostores/router"
import { DialogDescription } from "@radix-ui/react-dialog"
import {
AlertOctagonIcon,
BookIcon,
@@ -10,7 +14,7 @@ import {
SettingsIcon,
UsersIcon,
} from "lucide-react"
import { memo, useEffect, useMemo } from "react"
import {
CommandDialog,
CommandEmpty,
@@ -21,15 +25,10 @@ import {
CommandSeparator,
CommandShortcut,
} from "@/components/ui/command"
import { memo, useEffect, useMemo } from "react"
import { isAdmin } from "@/lib/api"
import { $systems } from "@/lib/stores"
import { getHostDisplayValue, listen } from "@/lib/utils"
import { $router, basePath, navigate, prependBasePath } from "./router"
import { Trans } from "@lingui/react/macro"
import { t } from "@lingui/core/macro"
import { getPagePath } from "@nanostores/router"
import { DialogDescription } from "@radix-ui/react-dialog"
import { isAdmin } from "@/lib/api"
export default memo(function CommandPalette({ open, setOpen }: { open: boolean; setOpen: (open: boolean) => void }) {
useEffect(() => {
@@ -66,7 +65,7 @@ export default memo(function CommandPalette({ open, setOpen }: { open: boolean;
<CommandItem
key={system.id}
onSelect={() => {
navigate(getPagePath($router, "system", { name: system.name }))
navigate(getPagePath($router, "system", { id: system.id }))
setOpen(false)
}}
>

View File

@@ -1,8 +1,8 @@
import { Trans } from "@lingui/react/macro";
import { Trans } from "@lingui/react/macro"
import { useEffect, useMemo, useRef } from "react"
import { $copyContent } from "@/lib/stores"
import { Dialog, DialogContent, DialogDescription, DialogHeader, DialogTitle } from "./ui/dialog"
import { Textarea } from "./ui/textarea"
import { $copyContent } from "@/lib/stores"
export default function CopyToClipboard({ content }: { content: string }) {
return (

View File

@@ -1,7 +1,7 @@
import { memo } from "react"
import { DropdownMenuContent, DropdownMenuItem } from "./ui/dropdown-menu"
import { copyToClipboard, getHubURL } from "@/lib/utils"
import { i18n } from "@lingui/core"
import { memo } from "react"
import { copyToClipboard, getHubURL } from "@/lib/utils"
import { DropdownMenuContent, DropdownMenuItem } from "./ui/dropdown-menu"
// const isbeta = beszel.hub_version.includes("beta")
// const imagetag = isbeta ? ":edge" : ""

View File

@@ -1,11 +1,10 @@
import { useLingui } from "@lingui/react/macro"
import { LanguagesIcon } from "lucide-react"
import { Button } from "@/components/ui/button"
import { DropdownMenu, DropdownMenuContent, DropdownMenuItem, DropdownMenuTrigger } from "@/components/ui/dropdown-menu"
import { dynamicActivate } from "@/lib/i18n"
import languages from "@/lib/languages"
import { cn } from "@/lib/utils"
import { useLingui } from "@lingui/react/macro"
import { dynamicActivate } from "@/lib/i18n"
export function LangToggle() {
const { i18n } = useLingui()

View File

@@ -1,19 +1,19 @@
import { t } from "@lingui/core/macro"
import { Trans } from "@lingui/react/macro"
import { cn } from "@/lib/utils"
import { getPagePath } from "@nanostores/router"
import { KeyIcon, LoaderCircle, LockIcon, LogInIcon, MailIcon } from "lucide-react"
import type { AuthMethodsList, AuthProviderInfo, OAuth2AuthConfig } from "pocketbase"
import { useCallback, useEffect, useState } from "react"
import * as v from "valibot"
import { buttonVariants } from "@/components/ui/button"
import { Dialog, DialogContent, DialogHeader, DialogTitle, DialogTrigger } from "@/components/ui/dialog"
import { Input } from "@/components/ui/input"
import { Label } from "@/components/ui/label"
import { KeyIcon, LoaderCircle, LockIcon, LogInIcon, MailIcon } from "lucide-react"
import { $authenticated } from "@/lib/stores"
import * as v from "valibot"
import { toast } from "../ui/use-toast"
import { Dialog, DialogContent, DialogTrigger, DialogHeader, DialogTitle } from "@/components/ui/dialog"
import { useCallback, useEffect, useState } from "react"
import { AuthMethodsList, AuthProviderInfo, OAuth2AuthConfig } from "pocketbase"
import { $router, Link, prependBasePath } from "../router"
import { getPagePath } from "@nanostores/router"
import { pb } from "@/lib/api"
import { $authenticated } from "@/lib/stores"
import { cn } from "@/lib/utils"
import { $router, Link, prependBasePath } from "../router"
import { toast } from "../ui/use-toast"
import { OtpInputForm } from "./otp-forms"
const honeypot = v.literal("")
@@ -83,9 +83,9 @@ export function UserAuthForm({
const result = v.safeParse(Schema, data)
if (!result.success) {
console.log(result)
let errors = {}
const errors = {}
for (const issue of result.issues) {
// @ts-ignore
// @ts-expect-error
errors[issue.path[0].key] = issue.message
}
setErrors(errors)
@@ -96,7 +96,7 @@ export function UserAuthForm({
if (isFirstRun) {
// check that passwords match
if (password !== passwordConfirm) {
let msg = "Passwords do not match"
const msg = "Passwords do not match"
setErrors({ passwordConfirm: msg })
return
}

View File

@@ -1,15 +1,14 @@
import { Trans } from "@lingui/react/macro"
import { t } from "@lingui/core/macro"
import { Trans } from "@lingui/react/macro"
import { LoaderCircle, MailIcon, SendHorizonalIcon } from "lucide-react"
import { useCallback, useState } from "react"
import { pb } from "@/lib/api"
import { cn } from "@/lib/utils"
import { buttonVariants } from "../ui/button"
import { Dialog, DialogContent, DialogHeader, DialogTitle, DialogTrigger } from "../ui/dialog"
import { Input } from "../ui/input"
import { Label } from "../ui/label"
import { useCallback, useState } from "react"
import { toast } from "../ui/use-toast"
import { buttonVariants } from "../ui/button"
import { cn } from "@/lib/utils"
import { Dialog, DialogHeader } from "../ui/dialog"
import { DialogContent, DialogTrigger, DialogTitle } from "../ui/dialog"
import { pb } from "@/lib/api"
const showLoginFaliedToast = () => {
toast({

View File

@@ -1,14 +1,14 @@
import { t } from "@lingui/core/macro"
import { UserAuthForm } from "@/components/login/auth-form"
import { Logo } from "../logo"
import { useEffect, useMemo, useState } from "react"
import { useStore } from "@nanostores/react"
import ForgotPassword from "./forgot-pass-form"
import { $router } from "../router"
import { AuthMethodsList } from "pocketbase"
import { useTheme } from "../theme-provider"
import type { AuthMethodsList } from "pocketbase"
import { useEffect, useMemo, useState } from "react"
import { UserAuthForm } from "@/components/login/auth-form"
import { pb } from "@/lib/api"
import { Logo } from "../logo"
import { ModeToggle } from "../mode-toggle"
import { $router } from "../router"
import { useTheme } from "../theme-provider"
import ForgotPassword from "./forgot-pass-form"
import { OtpRequestForm } from "./otp-forms"
export default function () {
@@ -53,7 +53,7 @@ export default function () {
<div className="min-h-svh grid items-center py-12">
<div
className="grid gap-5 w-full px-4 mx-auto"
// @ts-ignore
// @ts-expect-error
style={{ maxWidth: "21.5em", "--border": theme == "light" ? "hsl(30, 8%, 70%)" : "hsl(220, 3%, 25%)" }}
>
<div className="absolute top-3 right-3">

View File

@@ -1,15 +1,15 @@
import { Trans } from "@lingui/react/macro"
import { LoaderCircle, MailIcon, SendHorizonalIcon } from "lucide-react"
import { useCallback, useState } from "react"
import { InputOTP, InputOTPGroup, InputOTPSlot } from "@/components/ui/otp"
import { pb } from "@/lib/api"
import { $authenticated } from "@/lib/stores"
import { InputOTP, InputOTPGroup, InputOTPSlot } from "@/components/ui/otp"
import { Trans } from "@lingui/react/macro"
import { showLoginFaliedToast } from "./auth-form"
import { cn } from "@/lib/utils"
import { MailIcon, LoaderCircle, SendHorizonalIcon } from "lucide-react"
import { Label } from "../ui/label"
import { $router } from "../router"
import { buttonVariants } from "../ui/button"
import { Input } from "../ui/input"
import { $router } from "../router"
import { Label } from "../ui/label"
import { showLoginFaliedToast } from "./auth-form"
export function OtpInputForm({ otpId, mfaId }: { otpId: string; mfaId: string }) {
const [value, setValue] = useState("")

View File

@@ -1,8 +1,7 @@
import { t } from "@lingui/core/macro"
import { MoonStarIcon, SunIcon } from "lucide-react"
import { Button } from "@/components/ui/button"
import { useTheme } from "@/components/theme-provider"
import { Button } from "@/components/ui/button"
export function ModeToggle() {
const { theme, setTheme } = useTheme()

View File

@@ -1,6 +1,5 @@
import { Trans } from "@lingui/react/macro"
import { useState, lazy, Suspense } from "react"
import { Button, buttonVariants } from "@/components/ui/button"
import { getPagePath } from "@nanostores/router"
import {
DatabaseBackupIcon,
LogOutIcon,
@@ -11,23 +10,24 @@ import {
UserIcon,
UsersIcon,
} from "lucide-react"
import { $router, basePath, Link, prependBasePath } from "./router"
import { LangToggle } from "./lang-toggle"
import { ModeToggle } from "./mode-toggle"
import { Logo } from "./logo"
import { cn, runOnce } from "@/lib/utils"
import { isReadOnlyUser, isAdmin, logOut, pb } from "@/lib/api"
import { lazy, Suspense, useState } from "react"
import { Button, buttonVariants } from "@/components/ui/button"
import {
DropdownMenu,
DropdownMenuTrigger,
DropdownMenuContent,
DropdownMenuLabel,
DropdownMenuSeparator,
DropdownMenuGroup,
DropdownMenuItem,
DropdownMenuLabel,
DropdownMenuSeparator,
DropdownMenuTrigger,
} from "@/components/ui/dropdown-menu"
import { isAdmin, isReadOnlyUser, logOut, pb } from "@/lib/api"
import { cn, runOnce } from "@/lib/utils"
import { AddSystemButton } from "./add-system"
import { getPagePath } from "@nanostores/router"
import { LangToggle } from "./lang-toggle"
import { Logo } from "./logo"
import { ModeToggle } from "./mode-toggle"
import { $router, basePath, Link, prependBasePath } from "./router"
const CommandPalette = lazy(() => import("./command-palette"))

View File

@@ -2,7 +2,7 @@ import { createRouter } from "@nanostores/router"
const routes = {
home: "/",
system: `/system/:name`,
system: `/system/:id`,
settings: `/settings/:name?`,
forgot_password: `/forgot-password`,
request_otp: `/request-otp`,
@@ -23,7 +23,7 @@ export const prependBasePath = (path: string) => (basePath + path).replaceAll("/
// prepend base path to routes
for (const route in routes) {
// @ts-ignore need as const above to get nanostores to parse types properly
// @ts-expect-error need as const above to get nanostores to parse types properly
routes[route] = prependBasePath(routes[route])
}

View File

@@ -112,7 +112,7 @@ const ActiveAlerts = () => {
)}
</AlertDescription>
<Link
href={getPagePath($router, "system", { name: systems[alert.system]?.name })}
href={getPagePath($router, "system", { id: systems[alert.system]?.id })}
className="absolute inset-0 w-full h-full"
aria-label="View system"
></Link>

View File

@@ -3,13 +3,13 @@ import { Trans, useLingui } from "@lingui/react/macro"
import { redirectPage } from "@nanostores/router"
import {
CopyIcon,
ExternalLinkIcon,
FingerprintIcon,
KeyIcon,
MoreHorizontalIcon,
RotateCwIcon,
ServerIcon,
Trash2Icon,
ExternalLinkIcon,
} from "lucide-react"
import { memo, useEffect, useMemo, useState } from "react"
import {

View File

@@ -3,10 +3,18 @@ import { Plural, Trans, useLingui } from "@lingui/react/macro"
import { useStore } from "@nanostores/react"
import { getPagePath } from "@nanostores/router"
import { timeTicks } from "d3-time"
import { ClockArrowUp, CpuIcon, GlobeIcon, LayoutGridIcon, MonitorIcon, XIcon } from "lucide-react"
import {
ChevronRightSquareIcon,
ClockArrowUp,
CpuIcon,
GlobeIcon,
LayoutGridIcon,
MonitorIcon,
XIcon,
} from "lucide-react"
import { subscribeKeys } from "nanostores"
import React, { type JSX, memo, useCallback, useEffect, useMemo, useRef, useState } from "react"
import AreaChartDefault from "@/components/charts/area-chart"
import AreaChartDefault, { type DataPoint } from "@/components/charts/area-chart"
import ContainerChart from "@/components/charts/container-chart"
import DiskChart from "@/components/charts/disk-chart"
import GpuPowerChart from "@/components/charts/gpu-power-chart"
@@ -16,9 +24,10 @@ import MemChart from "@/components/charts/mem-chart"
import SwapChart from "@/components/charts/swap-chart"
import TemperatureChart from "@/components/charts/temperature-chart"
import { getPbTimestamp, pb } from "@/lib/api"
import { ChartType, Os, SystemStatus, Unit } from "@/lib/enums"
import { ChartType, ConnectionType, connectionTypeLabels, Os, SystemStatus, Unit } from "@/lib/enums"
import { batteryStateTranslations } from "@/lib/i18n"
import {
$allSystemsById,
$allSystemsByName,
$chartTime,
$containerFilter,
@@ -41,17 +50,28 @@ import {
toFixedFloat,
useBrowserStorage,
} from "@/lib/utils"
import type { ChartData, ChartTimes, ContainerStatsRecord, GPUData, SystemRecord, SystemStatsRecord } from "@/types"
import type {
ChartData,
ChartTimes,
ContainerStatsRecord,
GPUData,
SystemInfo,
SystemRecord,
SystemStats,
SystemStatsRecord,
} from "@/types"
import ChartTimeSelect from "../charts/chart-time-select"
import { $router, navigate } from "../router"
import Spinner from "../spinner"
import { Button } from "../ui/button"
import { Card, CardDescription, CardHeader, CardTitle } from "../ui/card"
import { AppleIcon, ChartAverage, ChartMax, FreeBsdIcon, Rows, TuxIcon, WindowsIcon } from "../ui/icons"
import { AppleIcon, ChartAverage, ChartMax, FreeBsdIcon, Rows, TuxIcon, WebSocketIcon, WindowsIcon } from "../ui/icons"
import { Input } from "../ui/input"
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from "../ui/select"
import { Separator } from "../ui/separator"
import { Tooltip, TooltipContent, TooltipProvider, TooltipTrigger } from "../ui/tooltip"
import NetworkSheet from "./system/network-sheet"
import LineChartDefault from "../charts/line-chart"
type ChartTimeData = {
time: number
@@ -73,7 +93,8 @@ function getTimeData(chartTime: ChartTimes, lastCreated: number) {
}
}
const now = new Date()
const buffer = chartTime === "1m" ? 400 : 20_000
const now = new Date(Date.now() + buffer)
const startTime = chartTimeData[chartTime].getOffset(now)
const ticks = timeTicks(startTime, now, chartTimeData[chartTime].ticks ?? 12).map((date) => date.getTime())
const data = {
@@ -85,25 +106,28 @@ function getTimeData(chartTime: ChartTimes, lastCreated: number) {
}
// add empty values between records to make gaps if interval is too large
function addEmptyValues<T extends SystemStatsRecord | ContainerStatsRecord>(
function addEmptyValues<T extends { created: string | number | null }>(
prevRecords: T[],
newRecords: T[],
expectedInterval: number
) {
): T[] {
const modifiedRecords: T[] = []
let prevTime = (prevRecords.at(-1)?.created ?? 0) as number
for (let i = 0; i < newRecords.length; i++) {
const record = newRecords[i]
record.created = new Date(record.created).getTime()
if (prevTime) {
if (record.created !== null) {
record.created = new Date(record.created).getTime()
}
if (prevTime && record.created !== null) {
const interval = record.created - prevTime
// if interval is too large, add a null record
if (interval > expectedInterval / 2 + expectedInterval) {
// @ts-expect-error
modifiedRecords.push({ created: null, stats: null })
modifiedRecords.push({ created: null, ...("stats" in record ? { stats: null } : {}) } as T)
}
}
prevTime = record.created
if (record.created !== null) {
prevTime = record.created
}
modifiedRecords.push(record)
}
return modifiedRecords
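For clarity, a worked example of the gap rule above (illustrative only, not part of this diff): with an expected interval of 60 000 ms, any gap larger than 1.5× that (90 000 ms) gets a null placeholder so the chart shows a break instead of interpolating across missing data.
// Hypothetical records, assuming a 60s expected interval
const gapped = addEmptyValues(
  [{ created: 10_000, stats: {} }],
  [{ created: 110_000, stats: {} }], // 100s after the previous record, above the 90s threshold
  60_000
)
// gapped => [{ created: null, stats: null }, { created: 110_000, stats: {} }]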
@@ -127,14 +151,14 @@ async function getStats<T extends SystemStatsRecord | ContainerStatsRecord>(
})
}
function dockerOrPodman(str: string, system: SystemRecord) {
function dockerOrPodman(str: string, system: SystemRecord): string {
if (system.info.p) {
str = str.replace("docker", "podman").replace("Docker", "Podman")
return str.replace("docker", "podman").replace("Docker", "Podman")
}
return str
}
export default memo(function SystemDetail({ name }: { name: string }) {
export default memo(function SystemDetail({ id }: { id: string }) {
const direction = useStore($direction)
const { t } = useLingui()
const systems = useStore($systems)
@@ -146,15 +170,13 @@ export default memo(function SystemDetail({ name }: { name: string }) {
const [containerData, setContainerData] = useState([] as ChartData["containerData"])
const netCardRef = useRef<HTMLDivElement>(null)
const persistChartTime = useRef(false)
const [containerFilterBar, setContainerFilterBar] = useState(null as null | JSX.Element)
const [bottomSpacing, setBottomSpacing] = useState(0)
const [chartLoading, setChartLoading] = useState(true)
const isLongerChart = chartTime !== "1h"
const isLongerChart = !["1m", "1h"].includes(chartTime) // true if chart time is not 1m or 1h
const userSettings = $userSettings.get()
const chartWrapRef = useRef<HTMLDivElement>(null)
useEffect(() => {
document.title = `${name} / Beszel`
return () => {
if (!persistChartTime.current) {
$chartTime.set($userSettings.get().chartTime)
@@ -162,18 +184,71 @@ export default memo(function SystemDetail({ name }: { name: string }) {
persistChartTime.current = false
setSystemStats([])
setContainerData([])
setContainerFilterBar(null)
$containerFilter.set("")
}
}, [name])
}, [id])
// find matching system and update when it changes
useEffect(() => {
return subscribeKeys($allSystemsByName, [name], (newSystems) => {
const sys = newSystems[name]
sys?.id && setSystem(sys)
if (!systems.length) {
return
}
// allow old system-name slug to work
const store = $allSystemsById.get()[id] ? $allSystemsById : $allSystemsByName
return subscribeKeys(store, [id], (newSystems) => {
const sys = newSystems[id]
if (sys) {
setSystem(sys)
document.title = `${sys?.name} / Beszel`
}
})
}, [name])
}, [id, systems.length])
// hide 1m chart time if system agent version is less than 0.13.0
useEffect(() => {
if (compareSemVer(parseSemVer(system?.info?.v), parseSemVer("0.13.0")) < 0) {
$chartTime.set("1h")
}
}, [system?.info?.v])
// subscribe to realtime metrics if chart time is 1m
// biome-ignore lint/correctness/useExhaustiveDependencies: not necessary
useEffect(() => {
let unsub = () => {}
if (!system.id || chartTime !== "1m") {
return
}
if (system.status !== SystemStatus.Up || parseSemVer(system?.info?.v).minor < 13) {
$chartTime.set("1h")
return
}
pb.realtime
.subscribe(
`rt_metrics`,
(data: { container: ContainerStatsRecord[]; info: SystemInfo; stats: SystemStats }) => {
if (data.container?.length > 0) {
const newContainerData = makeContainerData([
{ created: Date.now(), stats: data.container } as unknown as ContainerStatsRecord,
])
setContainerData((prevData) => addEmptyValues(prevData, prevData.slice(-59).concat(newContainerData), 1000))
}
setSystemStats((prevStats) =>
addEmptyValues(
prevStats,
prevStats.slice(-59).concat({ created: Date.now(), stats: data.stats } as SystemStatsRecord),
1000
)
)
},
{ query: { system: system.id } }
)
.then((us) => {
unsub = us
})
return () => {
unsub?.()
}
}, [chartTime, system.id])
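The realtime branch above keeps a rolling one-minute buffer: slicing to the last 59 samples before appending caps the array at 60 points, i.e. one minute of data at the 1-second push interval. A minimal sketch of that windowing (hypothetical helper, not part of this diff):
// Keep at most `cap` samples: drop the oldest before appending the newest.
function pushCapped<T>(buffer: T[], next: T, cap = 60): T[] {
  return buffer.slice(-(cap - 1)).concat(next)
}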
// biome-ignore lint/correctness/useExhaustiveDependencies: not necessary
const chartData: ChartData = useMemo(() => {
@@ -211,13 +286,13 @@ export default memo(function SystemDetail({ name }: { name: string }) {
}
containerData.push(containerStats)
}
setContainerData(containerData)
return containerData
}, [])
// get stats
// biome-ignore lint/correctness/useExhaustiveDependencies: not necessary
useEffect(() => {
if (!system.id || !chartTime) {
if (!system.id || !chartTime || chartTime === "1m") {
return
}
// loading: true
@@ -251,12 +326,7 @@ export default memo(function SystemDetail({ name }: { name: string }) {
}
cache.set(cs_cache_key, containerData)
}
if (containerData.length) {
!containerFilterBar && setContainerFilterBar(<FilterBar />)
} else if (containerFilterBar) {
setContainerFilterBar(null)
}
makeContainerData(containerData)
setContainerData(makeContainerData(containerData))
})
}, [system, chartTime])
@@ -354,7 +424,7 @@ export default memo(function SystemDetail({ name }: { name: string }) {
) {
return
}
const currentIndex = systems.findIndex((s) => s.name === name)
const currentIndex = systems.findIndex((s) => s.id === id)
if (currentIndex === -1 || systems.length <= 1) {
return
}
@@ -363,18 +433,18 @@ export default memo(function SystemDetail({ name }: { name: string }) {
case "h": {
const prevIndex = (currentIndex - 1 + systems.length) % systems.length
persistChartTime.current = true
return navigate(getPagePath($router, "system", { name: systems[prevIndex].name }))
return navigate(getPagePath($router, "system", { id: systems[prevIndex].id }))
}
case "ArrowRight":
case "l": {
const nextIndex = (currentIndex + 1) % systems.length
persistChartTime.current = true
return navigate(getPagePath($router, "system", { name: systems[nextIndex].name }))
return navigate(getPagePath($router, "system", { id: systems[nextIndex].id }))
}
}
}
return listen(document, "keyup", handleKeyUp)
}, [name, systems])
}, [id, systems])
if (!system.id) {
return null
@@ -382,13 +452,15 @@ export default memo(function SystemDetail({ name }: { name: string }) {
// select field for switching between avg and max values
const maxValSelect = isLongerChart ? <SelectAvgMax max={maxValues} /> : null
const showMax = chartTime !== "1h" && maxValues
const showMax = maxValues && isLongerChart
const containerFilterBar = containerData.length ? <FilterBar /> : null
// if no data, show empty message
const dataEmpty = !chartLoading && chartData.systemStats.length === 0
const lastGpuVals = Object.values(systemStats.at(-1)?.stats.g ?? {})
const hasGpuData = lastGpuVals.length > 0
const hasGpuPowerData = lastGpuVals.some((gpu) => gpu.p !== undefined)
const hasGpuPowerData = lastGpuVals.some((gpu) => gpu.p !== undefined || gpu.pp !== undefined)
const hasGpuEnginesData = lastGpuVals.some((gpu) => gpu.e !== undefined)
let translatedStatus: string = system.status
if (system.status === SystemStatus.Up) {
@@ -406,25 +478,44 @@ export default memo(function SystemDetail({ name }: { name: string }) {
<div>
<h1 className="text-[1.6rem] font-semibold mb-1.5">{system.name}</h1>
<div className="flex flex-wrap items-center gap-3 gap-y-2 text-sm opacity-90">
<div className="capitalize flex gap-2 items-center">
<span className={cn("relative flex h-3 w-3")}>
{system.status === SystemStatus.Up && (
<span
className="animate-ping absolute inline-flex h-full w-full rounded-full bg-green-400 opacity-75"
style={{ animationDuration: "1.5s" }}
></span>
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<div className="capitalize flex gap-2 items-center">
<span className={cn("relative flex h-3 w-3")}>
{system.status === SystemStatus.Up && (
<span
className="animate-ping absolute inline-flex h-full w-full rounded-full bg-green-400 opacity-75"
style={{ animationDuration: "1.5s" }}
></span>
)}
<span
className={cn("relative inline-flex rounded-full h-3 w-3", {
"bg-green-500": system.status === SystemStatus.Up,
"bg-red-500": system.status === SystemStatus.Down,
"bg-primary/40": system.status === SystemStatus.Paused,
"bg-yellow-500": system.status === SystemStatus.Pending,
})}
></span>
</span>
{translatedStatus}
</div>
</TooltipTrigger>
{system.info.ct && (
<TooltipContent>
<div className="flex gap-1 items-center">
{system.info.ct === ConnectionType.WebSocket ? (
<WebSocketIcon className="size-4" />
) : (
<ChevronRightSquareIcon className="size-4" strokeWidth={2} />
)}
{connectionTypeLabels[system.info.ct as ConnectionType]}
</div>
</TooltipContent>
)}
<span
className={cn("relative inline-flex rounded-full h-3 w-3", {
"bg-green-500": system.status === SystemStatus.Up,
"bg-red-500": system.status === SystemStatus.Down,
"bg-primary/40": system.status === SystemStatus.Paused,
"bg-yellow-500": system.status === SystemStatus.Pending,
})}
></span>
</span>
{translatedStatus}
</div>
</Tooltip>
</TooltipProvider>
{systemInfo.map(({ value, label, Icon, hide }) => {
if (hide || !value) {
return null
@@ -453,7 +544,7 @@ export default memo(function SystemDetail({ name }: { name: string }) {
</div>
</div>
<div className="xl:ms-auto flex items-center gap-2 max-sm:-mb-1">
<ChartTimeSelect className="w-full xl:w-40" />
<ChartTimeSelect className="w-full xl:w-40" agentVersion={chartData.agentVersion} />
<TooltipProvider delayDuration={100}>
<Tooltip>
<TooltipTrigger asChild>
@@ -564,23 +655,33 @@ export default memo(function SystemDetail({ name }: { name: string }) {
dataPoints={[
{
label: t({ message: "Write", comment: "Disk write" }),
dataKey: ({ stats }) => (showMax ? stats?.dwm : stats?.dw),
dataKey: ({ stats }: SystemStatsRecord) => {
if (showMax) {
return stats?.diom?.[1] ?? (stats?.dwm ?? 0) * 1024 * 1024
}
return stats?.dio?.[1] ?? (stats?.dw ?? 0) * 1024 * 1024
},
color: 3,
opacity: 0.3,
},
{
label: t({ message: "Read", comment: "Disk read" }),
dataKey: ({ stats }) => (showMax ? stats?.drm : stats?.dr),
dataKey: ({ stats }: SystemStatsRecord) => {
if (showMax) {
return stats?.diom?.[0] ?? (stats?.drm ?? 0) * 1024 * 1024
}
return stats?.dio?.[0] ?? (stats?.dr ?? 0) * 1024 * 1024
},
color: 1,
opacity: 0.3,
},
]}
tickFormatter={(val) => {
const { value, unit } = formatBytes(val, true, userSettings.unitDisk, true)
const { value, unit } = formatBytes(val, true, userSettings.unitDisk, false)
return `${toFixedFloat(value, value >= 10 ? 0 : 1)} ${unit}`
}}
contentFormatter={({ value }) => {
const { value: convertedValue, unit } = formatBytes(value, true, userSettings.unitDisk, true)
const { value: convertedValue, unit } = formatBytes(value, true, userSettings.unitDisk, false)
return `${decimalString(convertedValue, convertedValue >= 100 ? 1 : 2)} ${unit}`
}}
/>
@@ -590,7 +691,12 @@ export default memo(function SystemDetail({ name }: { name: string }) {
empty={dataEmpty}
grid={grid}
title={t`Bandwidth`}
cornerEl={maxValSelect}
cornerEl={
<div className="flex gap-2">
{maxValSelect}
<NetworkSheet chartData={chartData} dataEmpty={dataEmpty} grid={grid} maxValues={maxValues} />
</div>
}
description={t`Network traffic of public interfaces`}
>
<AreaChartDefault
@@ -600,7 +706,7 @@ export default memo(function SystemDetail({ name }: { name: string }) {
{
label: t`Sent`,
// use bytes if available, otherwise multiply old MB (can remove in future)
dataKey(data) {
dataKey(data: SystemStatsRecord) {
if (showMax) {
return data?.stats?.bm?.[0] ?? (data?.stats?.nsm ?? 0) * 1024 * 1024
}
@@ -611,7 +717,7 @@ export default memo(function SystemDetail({ name }: { name: string }) {
},
{
label: t`Received`,
dataKey(data) {
dataKey(data: SystemStatsRecord) {
if (showMax) {
return data?.stats?.bm?.[1] ?? (data?.stats?.nrm ?? 0) * 1024 * 1024
}
@@ -620,7 +726,9 @@ export default memo(function SystemDetail({ name }: { name: string }) {
color: 2,
opacity: 0.2,
},
]}
]
// try to place the lesser number in front for better visibility
.sort(() => (systemStats.at(-1)?.stats.b?.[1] ?? 0) - (systemStats.at(-1)?.stats.b?.[0] ?? 0))}
tickFormatter={(val) => {
const { value, unit } = formatBytes(val, true, userSettings.unitNet, false)
return `${toFixedFloat(value, value >= 10 ? 0 : 1)} ${unit}`
@@ -674,6 +782,7 @@ export default memo(function SystemDetail({ name }: { name: string }) {
grid={grid}
title={t`Load Average`}
description={t`System load averages over time`}
legend={true}
>
<LoadAverageChart chartData={chartData} />
</ChartCard>
@@ -687,6 +796,7 @@ export default memo(function SystemDetail({ name }: { name: string }) {
title={t`Temperature`}
description={t`Temperatures of system sensors`}
cornerEl={<FilterBar store={$temperatureFilter} />}
legend={Object.keys(systemStats.at(-1)?.stats.t ?? {}).length < 12}
>
<TemperatureChart chartData={chartData} />
</ChartCard>
@@ -720,7 +830,6 @@ export default memo(function SystemDetail({ name }: { name: string }) {
/>
</ChartCard>
)}
{/* GPU power draw chart */}
{hasGpuPowerData && (
<ChartCard
@@ -734,14 +843,26 @@ export default memo(function SystemDetail({ name }: { name: string }) {
)}
</div>
{/* GPU charts */}
{/* Non-power GPU charts */}
{hasGpuData && (
<div className="grid xl:grid-cols-2 gap-4">
{hasGpuEnginesData && (
<ChartCard
legend={true}
empty={dataEmpty}
grid={grid}
title={t`GPU Engines`}
description={t`Average utilization of GPU engines`}
>
<GpuEnginesChart chartData={chartData} />
</ChartCard>
)}
{Object.keys(systemStats.at(-1)?.stats.g ?? {}).map((id) => {
const gpu = systemStats.at(-1)?.stats.g?.[id] as GPUData
return (
<div key={id} className="contents">
<ChartCard
className={cn(grid && "!col-span-1")}
empty={dataEmpty}
grid={grid}
title={`${gpu.n} ${t`Usage`}`}
@@ -761,33 +882,36 @@ export default memo(function SystemDetail({ name }: { name: string }) {
contentFormatter={({ value }) => `${decimalString(value)}%`}
/>
</ChartCard>
<ChartCard
empty={dataEmpty}
grid={grid}
title={`${gpu.n} VRAM`}
description={t`Precise utilization at the recorded time`}
>
<AreaChartDefault
chartData={chartData}
dataPoints={[
{
label: t`Usage`,
dataKey: ({ stats }) => stats?.g?.[id]?.mu ?? 0,
color: 2,
opacity: 0.25,
},
]}
max={gpu.mt}
tickFormatter={(val) => {
const { value, unit } = formatBytes(val, false, Unit.Bytes, true)
return `${toFixedFloat(value, value >= 10 ? 0 : 1)} ${unit}`
}}
contentFormatter={({ value }) => {
const { value: convertedValue, unit } = formatBytes(value, false, Unit.Bytes, true)
return `${decimalString(convertedValue)} ${unit}`
}}
/>
</ChartCard>
{(gpu.mt ?? 0) > 0 && (
<ChartCard
empty={dataEmpty}
grid={grid}
title={`${gpu.n} VRAM`}
description={t`Precise utilization at the recorded time`}
>
<AreaChartDefault
chartData={chartData}
dataPoints={[
{
label: t`Usage`,
dataKey: ({ stats }) => stats?.g?.[id]?.mu ?? 0,
color: 2,
opacity: 0.25,
},
]}
max={gpu.mt}
tickFormatter={(val) => {
const { value, unit } = formatBytes(val, false, Unit.Bytes, true)
return `${toFixedFloat(value, value >= 10 ? 0 : 1)} ${unit}`
}}
contentFormatter={({ value }) => {
const { value: convertedValue, unit } = formatBytes(value, false, Unit.Bytes, true)
return `${decimalString(convertedValue)} ${unit}`
}}
/>
</ChartCard>
)}
</div>
)
})}
@@ -824,24 +948,36 @@ export default memo(function SystemDetail({ name }: { name: string }) {
dataPoints={[
{
label: t`Write`,
dataKey: ({ stats }) => stats?.efs?.[extraFsName]?.[showMax ? "wm" : "w"] ?? 0,
dataKey: ({ stats }) => {
if (showMax) {
return stats?.efs?.[extraFsName]?.wb ?? (stats?.efs?.[extraFsName]?.wm ?? 0) * 1024 * 1024
}
return stats?.efs?.[extraFsName]?.wb ?? (stats?.efs?.[extraFsName]?.w ?? 0) * 1024 * 1024
},
color: 3,
opacity: 0.3,
},
{
label: t`Read`,
dataKey: ({ stats }) => stats?.efs?.[extraFsName]?.[showMax ? "rm" : "r"] ?? 0,
dataKey: ({ stats }) => {
if (showMax) {
return (
stats?.efs?.[extraFsName]?.rbm ?? (stats?.efs?.[extraFsName]?.rm ?? 0) * 1024 * 1024
)
}
return stats?.efs?.[extraFsName]?.rb ?? (stats?.efs?.[extraFsName]?.r ?? 0) * 1024 * 1024
},
color: 1,
opacity: 0.3,
},
]}
maxToggled={maxValues}
tickFormatter={(val) => {
const { value, unit } = formatBytes(val, true, userSettings.unitDisk, true)
const { value, unit } = formatBytes(val, true, userSettings.unitDisk, false)
return `${toFixedFloat(value, value >= 10 ? 0 : 1)} ${unit}`
}}
contentFormatter={({ value }) => {
const { value: convertedValue, unit } = formatBytes(value, true, userSettings.unitDisk, true)
const { value: convertedValue, unit } = formatBytes(value, true, userSettings.unitDisk, false)
return `${decimalString(convertedValue, convertedValue >= 100 ? 1 : 2)} ${unit}`
}}
/>
@@ -859,27 +995,47 @@ export default memo(function SystemDetail({ name }: { name: string }) {
)
})
function GpuEnginesChart({ chartData }: { chartData: ChartData }) {
const dataPoints: DataPoint[] = []
const engines = Object.keys(chartData.systemStats?.at(-1)?.stats.g?.[0]?.e ?? {}).sort()
for (const engine of engines) {
dataPoints.push({
label: engine,
dataKey: ({ stats }: SystemStatsRecord) => stats?.g?.[0]?.e?.[engine] ?? 0,
color: `hsl(${140 + (((engines.indexOf(engine) * 360) / engines.length) % 360)}, 65%, 52%)`,
opacity: 0.35,
})
}
return (
<LineChartDefault
legend={true}
chartData={chartData}
dataPoints={dataPoints}
tickFormatter={(val) => `${toFixedFloat(val, 2)}%`}
contentFormatter={({ value }) => `${decimalString(value)}%`}
/>
)
}
function FilterBar({ store = $containerFilter }: { store?: typeof $containerFilter }) {
const containerFilter = useStore(store)
const { t } = useLingui()
const inputRef = useRef<HTMLInputElement>(null)
const debouncedStoreSet = useMemo(() => debounce((value: string) => store.set(value), 150), [store])
const debouncedStoreSet = useMemo(() => debounce((value: string) => store.set(value), 80), [store])
const handleChange = useCallback(
(e: React.ChangeEvent<HTMLInputElement>) => {
const value = e.target.value
if (inputRef.current) {
inputRef.current.value = value
}
debouncedStoreSet(value)
},
(e: React.ChangeEvent<HTMLInputElement>) => debouncedStoreSet(e.target.value),
[debouncedStoreSet]
)
return (
<>
<Input placeholder={t`Filter...`} className="ps-4 pe-8" onChange={handleChange} ref={inputRef} />
<Input
placeholder={t`Filter...`}
className="ps-4 pe-8 w-full sm:w-44"
onChange={handleChange}
value={containerFilter}
/>
{containerFilter && (
<Button
type="button"
@@ -887,12 +1043,7 @@ function FilterBar({ store = $containerFilter }: { store?: typeof $containerFilt
size="icon"
aria-label="Clear"
className="absolute right-1 top-1/2 -translate-y-1/2 h-7 w-7 text-gray-500 hover:text-gray-900 dark:text-gray-400 dark:hover:text-gray-100"
onClick={() => {
if (inputRef.current) {
inputRef.current.value = ""
}
store.set("")
}}
onClick={() => store.set("")}
>
<XIcon className="h-4 w-4" />
</Button>
@@ -905,7 +1056,7 @@ const SelectAvgMax = memo(({ max }: { max: boolean }) => {
const Icon = max ? ChartMax : ChartAverage
return (
<Select value={max ? "max" : "avg"} onValueChange={(e) => $maxValues.set(e === "max")}>
<SelectTrigger className="relative ps-10 pe-5">
<SelectTrigger className="relative ps-10 pe-5 w-full sm:w-44">
<Icon className="h-4 w-4 absolute start-4 top-1/2 -translate-y-1/2 opacity-85" />
<SelectValue />
</SelectTrigger>
@@ -921,13 +1072,15 @@ const SelectAvgMax = memo(({ max }: { max: boolean }) => {
)
})
function ChartCard({
export function ChartCard({
title,
description,
children,
grid,
empty,
cornerEl,
legend,
className,
}: {
title: string
description: string
@@ -935,17 +1088,22 @@ function ChartCard({
grid?: boolean
empty?: boolean
cornerEl?: JSX.Element | null
legend?: boolean
className?: string
}) {
const { isIntersecting, ref } = useIntersectionObserver()
return (
<Card className={cn("pb-2 sm:pb-4 odd:last-of-type:col-span-full", { "col-span-full": !grid })} ref={ref}>
<Card
className={cn("pb-2 sm:pb-4 odd:last-of-type:col-span-full min-h-full", { "col-span-full": !grid }, className)}
ref={ref}
>
<CardHeader className="pb-5 pt-4 gap-1 relative max-sm:py-3 max-sm:px-4">
<CardTitle className="text-xl sm:text-2xl">{title}</CardTitle>
<CardDescription>{description}</CardDescription>
{cornerEl && <div className="relative py-1 block sm:w-44 sm:absolute sm:top-3.5 sm:end-3.5">{cornerEl}</div>}
{cornerEl && <div className="py-1 grid sm:justify-end sm:absolute sm:top-3.5 sm:end-3.5">{cornerEl}</div>}
</CardHeader>
<div className="ps-0 w-[calc(100%-1.5em)] h-48 md:h-52 relative group">
<div className={cn("ps-0 w-[calc(100%-1.5em)] relative group", legend ? "h-54 md:h-56" : "h-48 md:h-52")}>
{
<Spinner
msg={empty ? t`Waiting for enough records to display` : undefined}

View File

@@ -0,0 +1,156 @@
import { t } from "@lingui/core/macro"
import { useStore } from "@nanostores/react"
import { MoreHorizontalIcon } from "lucide-react"
import { memo, useRef, useState } from "react"
import AreaChartDefault from "@/components/charts/area-chart"
import ChartTimeSelect from "@/components/charts/chart-time-select"
import { useNetworkInterfaces } from "@/components/charts/hooks"
import { Button } from "@/components/ui/button"
import { Sheet, SheetContent, SheetTrigger } from "@/components/ui/sheet"
import { DialogTitle } from "@/components/ui/dialog"
import { $userSettings } from "@/lib/stores"
import { decimalString, formatBytes, toFixedFloat } from "@/lib/utils"
import type { ChartData } from "@/types"
import { ChartCard } from "../system"
export default memo(function NetworkSheet({
chartData,
dataEmpty,
grid,
maxValues,
}: {
chartData: ChartData
dataEmpty: boolean
grid: boolean
maxValues: boolean
}) {
const [netInterfacesOpen, setNetInterfacesOpen] = useState(false)
const userSettings = useStore($userSettings)
const netInterfaces = useNetworkInterfaces(chartData.systemStats.at(-1)?.stats?.ni ?? {})
const showNetLegend = netInterfaces.length > 0 && netInterfaces.length < 15
const hasOpened = useRef(false)
if (netInterfacesOpen && !hasOpened.current) {
hasOpened.current = true
}
if (!netInterfaces.length) {
return null
}
return (
<Sheet open={netInterfacesOpen} onOpenChange={setNetInterfacesOpen}>
<DialogTitle className="sr-only">{t`Network traffic of public interfaces`}</DialogTitle>
<SheetTrigger asChild>
<Button
title={t`View more`}
variant="outline"
size="icon"
className="shrink-0 max-sm:absolute max-sm:top-3 max-sm:end-3"
>
<MoreHorizontalIcon />
</Button>
</SheetTrigger>
{hasOpened.current && (
<SheetContent aria-describedby={undefined} className="overflow-auto w-200 !max-w-full p-4 sm:p-6">
<ChartTimeSelect className="w-[calc(100%-2em)]" agentVersion={chartData.agentVersion} />
<ChartCard
empty={dataEmpty}
grid={grid}
title={t`Download`}
description={t`Network traffic of public interfaces`}
legend={showNetLegend}
className="min-h-auto"
>
<AreaChartDefault
chartData={chartData}
maxToggled={maxValues}
itemSorter={(a, b) => b.value - a.value}
dataPoints={netInterfaces.data(1)}
legend={showNetLegend}
tickFormatter={(val) => {
const { value, unit } = formatBytes(val, true, userSettings.unitNet, false)
return `${toFixedFloat(value, value >= 10 ? 0 : 1)} ${unit}`
}}
contentFormatter={({ value }) => {
const { value: convertedValue, unit } = formatBytes(value, true, userSettings.unitNet, false)
return `${decimalString(convertedValue, convertedValue >= 100 ? 1 : 2)} ${unit}`
}}
/>
</ChartCard>
<ChartCard
empty={dataEmpty}
grid={grid}
title={t`Upload`}
description={t`Network traffic of public interfaces`}
legend={showNetLegend}
className="min-h-auto"
>
<AreaChartDefault
chartData={chartData}
maxToggled={maxValues}
itemSorter={(a, b) => b.value - a.value}
legend={showNetLegend}
dataPoints={netInterfaces.data(0)}
tickFormatter={(val) => {
const { value, unit } = formatBytes(val, true, userSettings.unitNet, false)
return `${toFixedFloat(value, value >= 10 ? 0 : 1)} ${unit}`
}}
contentFormatter={({ value }) => {
const { value: convertedValue, unit } = formatBytes(value, true, userSettings.unitNet, false)
return `${decimalString(convertedValue, convertedValue >= 100 ? 1 : 2)} ${unit}`
}}
/>
</ChartCard>
<ChartCard
empty={dataEmpty}
grid={grid}
title={t`Cumulative Download`}
description={t`Total data received for each interface`}
legend={showNetLegend}
className="min-h-auto"
>
<AreaChartDefault
chartData={chartData}
legend={showNetLegend}
dataPoints={netInterfaces.data(3)}
tickFormatter={(val) => {
const { value, unit } = formatBytes(val, false, userSettings.unitNet, false)
return `${toFixedFloat(value, value >= 10 ? 0 : 1)} ${unit}`
}}
contentFormatter={({ value }) => {
const { value: convertedValue, unit } = formatBytes(value, false, userSettings.unitNet, false)
return `${decimalString(convertedValue, convertedValue >= 100 ? 1 : 2)} ${unit}`
}}
/>
</ChartCard>
<ChartCard
empty={dataEmpty}
grid={grid}
title={t`Cumulative Upload`}
description={t`Total data sent for each interface`}
legend={showNetLegend}
className="min-h-auto"
>
<AreaChartDefault
chartData={chartData}
legend={showNetLegend}
dataPoints={netInterfaces.data(2)}
tickFormatter={(val) => {
const { value, unit } = formatBytes(val, false, userSettings.unitNet, false)
return `${toFixedFloat(value, value >= 10 ? 0 : 1)} ${unit}`
}}
contentFormatter={({ value }) => {
const { value: convertedValue, unit } = formatBytes(value, false, userSettings.unitNet, false)
return `${decimalString(convertedValue, convertedValue >= 100 ? 1 : 2)} ${unit}`
}}
/>
</ChartCard>
</SheetContent>
)}
</Sheet>
)
})

View File

@@ -1,5 +1,5 @@
import { cn } from "@/lib/utils"
import { LoaderCircleIcon } from "lucide-react"
import { cn } from "@/lib/utils"
export default function ({ msg, className }: { msg?: string; className?: string }) {
return (

View File

@@ -1,8 +1,12 @@
import { SystemRecord } from "@/types"
import { CellContext, ColumnDef, HeaderContext } from "@tanstack/react-table"
import { ClassValue } from "clsx"
import { t } from "@lingui/core/macro"
import { Trans, useLingui } from "@lingui/react/macro"
import { useStore } from "@nanostores/react"
import { getPagePath } from "@nanostores/router"
import type { CellContext, ColumnDef, HeaderContext } from "@tanstack/react-table"
import type { ClassValue } from "clsx"
import {
ArrowUpDownIcon,
ChevronRightSquareIcon,
CopyIcon,
CpuIcon,
HardDriveIcon,
@@ -15,7 +19,10 @@ import {
Trash2Icon,
WifiIcon,
} from "lucide-react"
import { Button } from "../ui/button"
import { memo, useMemo, useRef, useState } from "react"
import { isReadOnlyUser, pb } from "@/lib/api"
import { ConnectionType, connectionTypeLabels, MeterState, SystemStatus } from "@/lib/enums"
import { $longestSystemNameLen, $userSettings } from "@/lib/stores"
import {
cn,
copyToClipboard,
@@ -25,24 +32,12 @@ import {
getMeterState,
parseSemVer,
} from "@/lib/utils"
import { EthernetIcon, GpuIcon, HourglassIcon, ThermometerIcon } from "../ui/icons"
import { useStore } from "@nanostores/react"
import { $longestSystemNameLen, $userSettings } from "@/lib/stores"
import { Trans, useLingui } from "@lingui/react/macro"
import { useMemo, useRef, useState } from "react"
import { memo } from "react"
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuSeparator,
DropdownMenuTrigger,
} from "../ui/dropdown-menu"
import AlertButton from "../alerts/alert-button"
import { Dialog } from "../ui/dialog"
import type { SystemRecord } from "@/types"
import { SystemDialog } from "../add-system"
import { AlertDialog } from "../ui/alert-dialog"
import AlertButton from "../alerts/alert-button"
import { $router, Link } from "../router"
import {
AlertDialog,
AlertDialogAction,
AlertDialogCancel,
AlertDialogContent,
@@ -51,12 +46,16 @@ import {
AlertDialogHeader,
AlertDialogTitle,
} from "../ui/alert-dialog"
import { buttonVariants } from "../ui/button"
import { t } from "@lingui/core/macro"
import { MeterState, SystemStatus } from "@/lib/enums"
import { $router, Link } from "../router"
import { getPagePath } from "@nanostores/router"
import { isReadOnlyUser, pb } from "@/lib/api"
import { Button, buttonVariants } from "../ui/button"
import { Dialog } from "../ui/dialog"
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuSeparator,
DropdownMenuTrigger,
} from "../ui/dropdown-menu"
import { EthernetIcon, GpuIcon, HourglassIcon, ThermometerIcon, WebSocketIcon } from "../ui/icons"
const STATUS_COLORS = {
[SystemStatus.Up]: "bg-green-500",
@@ -78,6 +77,7 @@ export default function SystemsTableColumns(viewMode: "table" | "grid"): ColumnD
accessorKey: "name",
id: "system",
name: () => t`System`,
sortingFn: (a, b) => a.original.name.localeCompare(b.original.name),
filterFn: (() => {
let filterInput = ""
let filterInputLower = ""
@@ -111,7 +111,7 @@ export default function SystemsTableColumns(viewMode: "table" | "grid"): ColumnD
invertSorting: false,
Icon: ServerIcon,
cell: (info) => {
const { name } = info.row.original
const { name, id } = info.row.original
const longestName = useStore($longestSystemNameLen)
return (
<>
@@ -123,7 +123,7 @@ export default function SystemsTableColumns(viewMode: "table" | "grid"): ColumnD
</span>
</span>
<Link
href={getPagePath($router, "system", { name })}
href={getPagePath($router, "system", { id })}
className="inset-0 absolute size-full"
aria-label={name}
></Link>
@@ -273,24 +273,37 @@ export default function SystemsTableColumns(viewMode: "table" | "grid"): ColumnD
return null
}
const system = info.row.original
const color = {
"text-green-500": version === globalThis.BESZEL.HUB_VERSION,
"text-yellow-500": version !== globalThis.BESZEL.HUB_VERSION,
"text-red-500": system.status !== SystemStatus.Up,
}
return (
<span className={cn("flex gap-1.5 items-center md:pe-5 tabular-nums", viewMode === "table" && "ps-0.5")}>
<IndicatorDot
system={system}
className={
(system.status !== SystemStatus.Up && STATUS_COLORS[SystemStatus.Paused]) ||
(version === globalThis.BESZEL.HUB_VERSION && STATUS_COLORS[SystemStatus.Up]) ||
STATUS_COLORS[SystemStatus.Pending]
}
/>
<Link
href={getPagePath($router, "system", { id: system.id })}
className={cn(
"flex gap-1.5 items-center md:pe-5 tabular-nums relative z-10",
viewMode === "table" && "ps-0.5"
)}
tabIndex={-1}
title={connectionTypeLabels[system.info.ct as ConnectionType]}
role="none"
>
{system.info.ct === ConnectionType.WebSocket && (
<WebSocketIcon className={cn("size-3 pointer-events-none", color)} />
)}
{system.info.ct === ConnectionType.SSH && (
<ChevronRightSquareIcon className={cn("size-3 pointer-events-none", color)} />
)}
{!system.info.ct && <IndicatorDot system={system} className={cn(color, "bg-current mx-0.5")} />}
<span className="truncate max-w-14">{info.getValue() as string}</span>
</span>
</Link>
)
},
},
{
id: "actions",
// @ts-ignore
// @ts-expect-error
name: () => t({ message: "Actions", comment: "Table column" }),
size: 50,
cell: ({ row }) => (
@@ -305,12 +318,13 @@ export default function SystemsTableColumns(viewMode: "table" | "grid"): ColumnD
function sortableHeader(context: HeaderContext<SystemRecord, unknown>) {
const { column } = context
// @ts-ignore
// @ts-expect-error
const { Icon, hideSort, name }: { Icon: React.ElementType; name: () => string; hideSort: boolean } = column.columnDef
const isSorted = column.getIsSorted()
return (
<Button
variant="ghost"
className="h-9 px-3 flex"
className={cn("h-9 px-3 flex duration-50", isSorted && "bg-accent/70 light:bg-accent text-accent-foreground/90")}
onClick={() => column.toggleSorting(column.getIsSorted() === "asc")}
>
{Icon && <Icon className="me-2 size-4" />}
@@ -353,7 +367,7 @@ export function IndicatorDot({ system, className }: { system: SystemRecord; clas
export const ActionsButton = memo(({ system }: { system: SystemRecord }) => {
const [deleteOpen, setDeleteOpen] = useState(false)
const [editOpen, setEditOpen] = useState(false)
let editOpened = useRef(false)
const editOpened = useRef(false)
const { t } = useLingui()
const { id, status, host, name } = system

View File

@@ -1,17 +1,31 @@
import { Trans, useLingui } from "@lingui/react/macro"
import { useStore } from "@nanostores/react"
import { getPagePath } from "@nanostores/router"
import {
ColumnDef,
ColumnFiltersState,
getFilteredRowModel,
SortingState,
getSortedRowModel,
type ColumnDef,
type ColumnFiltersState,
flexRender,
VisibilityState,
getCoreRowModel,
getFilteredRowModel,
getSortedRowModel,
type Row,
type SortingState,
type Table as TableType,
useReactTable,
Row,
Table as TableType,
type VisibilityState,
} from "@tanstack/react-table"
import { TableBody, TableCell, TableHead, TableHeader, TableRow } from "@/components/ui/table"
import { useVirtualizer, type VirtualItem } from "@tanstack/react-virtual"
import {
ArrowDownIcon,
ArrowUpDownIcon,
ArrowUpIcon,
EyeIcon,
FilterIcon,
LayoutGridIcon,
LayoutListIcon,
Settings2Icon,
} from "lucide-react"
import { memo, useEffect, useMemo, useRef, useState } from "react"
import { Button } from "@/components/ui/button"
import {
DropdownMenu,
@@ -24,30 +38,16 @@ import {
DropdownMenuSeparator,
DropdownMenuTrigger,
} from "@/components/ui/dropdown-menu"
import { SystemRecord } from "@/types"
import {
ArrowUpDownIcon,
LayoutGridIcon,
LayoutListIcon,
ArrowDownIcon,
ArrowUpIcon,
Settings2Icon,
EyeIcon,
FilterIcon,
} from "lucide-react"
import { memo, useEffect, useMemo, useRef, useState } from "react"
import { $pausedSystems, $downSystems, $upSystems, $systems } from "@/lib/stores"
import { useStore } from "@nanostores/react"
import { cn, runOnce, useBrowserStorage } from "@/lib/utils"
import { $router, Link } from "../router"
import { useLingui, Trans } from "@lingui/react/macro"
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "../ui/card"
import { Input } from "@/components/ui/input"
import { getPagePath } from "@nanostores/router"
import SystemsTableColumns, { ActionsButton, IndicatorDot } from "./systems-table-columns"
import AlertButton from "../alerts/alert-button"
import { TableBody, TableCell, TableHead, TableHeader, TableRow } from "@/components/ui/table"
import { SystemStatus } from "@/lib/enums"
import { useVirtualizer, VirtualItem } from "@tanstack/react-virtual"
import { $downSystems, $pausedSystems, $systems, $upSystems } from "@/lib/stores"
import { cn, runOnce, useBrowserStorage } from "@/lib/utils"
import type { SystemRecord } from "@/types"
import AlertButton from "../alerts/alert-button"
import { $router, Link } from "../router"
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "../ui/card"
import SystemsTableColumns, { ActionsButton, IndicatorDot } from "./systems-table-columns"
type ViewMode = "table" | "grid"
type StatusFilter = "all" | SystemRecord["status"]
@@ -131,7 +131,6 @@ export default function SystemsTable() {
return [Object.values(upSystems).length, Object.values(downSystems).length, Object.values(pausedSystems).length]
}, [upSystems, downSystems, pausedSystems])
// TODO: hiding temp then gpu messes up table headers
const CardHead = useMemo(() => {
return (
<CardHeader className="pb-4.5 px-2 sm:px-6 max-sm:pt-5 max-sm:pb-1">
@@ -309,128 +308,121 @@ export default function SystemsTable() {
)
}
const AllSystemsTable = memo(function ({
table,
rows,
colLength,
}: {
table: TableType<SystemRecord>
rows: Row<SystemRecord>[]
colLength: number
}) {
// The virtualizer will need a reference to the scrollable container element
const scrollRef = useRef<HTMLDivElement>(null)
const AllSystemsTable = memo(
({ table, rows, colLength }: { table: TableType<SystemRecord>; rows: Row<SystemRecord>[]; colLength: number }) => {
// The virtualizer will need a reference to the scrollable container element
const scrollRef = useRef<HTMLDivElement>(null)
const virtualizer = useVirtualizer<HTMLDivElement, HTMLTableRowElement>({
count: rows.length,
estimateSize: () => (rows.length > 10 ? 56 : 60),
getScrollElement: () => scrollRef.current,
overscan: 5,
})
const virtualRows = virtualizer.getVirtualItems()
const virtualizer = useVirtualizer<HTMLDivElement, HTMLTableRowElement>({
count: rows.length,
estimateSize: () => (rows.length > 10 ? 56 : 60),
getScrollElement: () => scrollRef.current,
overscan: 5,
})
const virtualRows = virtualizer.getVirtualItems()
const paddingTop = Math.max(0, virtualRows[0]?.start ?? 0 - virtualizer.options.scrollMargin)
const paddingBottom = Math.max(0, virtualizer.getTotalSize() - (virtualRows[virtualRows.length - 1]?.end ?? 0))
const paddingTop = Math.max(0, virtualRows[0]?.start ?? 0 - virtualizer.options.scrollMargin)
const paddingBottom = Math.max(0, virtualizer.getTotalSize() - (virtualRows[virtualRows.length - 1]?.end ?? 0))
return (
<div
className={cn(
"h-min max-h-[calc(100dvh-17rem)] max-w-full relative overflow-auto border rounded-md",
// don't set min height if there are less than 2 rows, do set if we need to display the empty state
(!rows.length || rows.length > 2) && "min-h-50"
)}
ref={scrollRef}
>
{/* add header height to table size */}
<div style={{ height: `${virtualizer.getTotalSize() + 50}px`, paddingTop, paddingBottom }}>
<table className="text-sm w-full h-full">
<SystemsTableHead table={table} colLength={colLength} />
<TableBody onMouseEnter={preloadSystemDetail}>
{rows.length ? (
virtualRows.map((virtualRow) => {
const row = rows[virtualRow.index] as Row<SystemRecord>
return (
<SystemTableRow
key={row.id}
row={row}
virtualRow={virtualRow}
length={rows.length}
colLength={colLength}
/>
)
})
) : (
<TableRow>
<TableCell colSpan={colLength} className="h-37 text-center pointer-events-none">
<Trans>No systems found.</Trans>
</TableCell>
</TableRow>
)}
</TableBody>
</table>
</div>
</div>
)
})
function SystemsTableHead({ table, colLength }: { table: TableType<SystemRecord>; colLength: number }) {
const { i18n } = useLingui()
return useMemo(() => {
return (
<TableHeader className="sticky top-0 z-20 w-full border-b-2">
{table.getHeaderGroups().map((headerGroup) => (
<tr key={headerGroup.id}>
{headerGroup.headers.map((header) => {
return (
<TableHead className="px-1.5" key={header.id}>
{flexRender(header.column.columnDef.header, header.getContext())}
</TableHead>
)
})}
</tr>
))}
</TableHeader>
<div
className={cn(
"h-min max-h-[calc(100dvh-17rem)] max-w-full relative overflow-auto border rounded-md",
// don't set min height if there are less than 2 rows, do set if we need to display the empty state
(!rows.length || rows.length > 2) && "min-h-50"
)}
ref={scrollRef}
>
{/* add header height to table size */}
<div style={{ height: `${virtualizer.getTotalSize() + 50}px`, paddingTop, paddingBottom }}>
<table className="text-sm w-full h-full">
<SystemsTableHead table={table} />
<TableBody onMouseEnter={preloadSystemDetail}>
{rows.length ? (
virtualRows.map((virtualRow) => {
const row = rows[virtualRow.index] as Row<SystemRecord>
return (
<SystemTableRow
key={row.id}
row={row}
virtualRow={virtualRow}
length={rows.length}
colLength={colLength}
/>
)
})
) : (
<TableRow>
<TableCell colSpan={colLength} className="h-37 text-center pointer-events-none">
<Trans>No systems found.</Trans>
</TableCell>
</TableRow>
)}
</TableBody>
</table>
</div>
</div>
)
}, [i18n.locale, colLength])
}
)
function SystemsTableHead({ table }: { table: TableType<SystemRecord> }) {
const { t } = useLingui()
return (
<TableHeader className="sticky top-0 z-50 w-full border-b-2">
{table.getHeaderGroups().map((headerGroup) => (
<tr key={headerGroup.id}>
{headerGroup.headers.map((header) => {
return (
<TableHead className="px-1.5" key={header.id}>
{flexRender(header.column.columnDef.header, header.getContext())}
</TableHead>
)
})}
</tr>
))}
</TableHeader>
)
}
const SystemTableRow = memo(function ({
row,
virtualRow,
colLength,
}: {
row: Row<SystemRecord>
virtualRow: VirtualItem
length: number
colLength: number
}) {
const system = row.original
const { t } = useLingui()
return useMemo(() => {
return (
<TableRow
// data-state={row.getIsSelected() && "selected"}
className={cn("cursor-pointer transition-opacity relative safari:transform-3d", {
"opacity-50": system.status === SystemStatus.Paused,
})}
>
{row.getVisibleCells().map((cell) => (
<TableCell
key={cell.id}
style={{
width: cell.column.getSize(),
height: virtualRow.size,
}}
className="py-0"
>
{flexRender(cell.column.columnDef.cell, cell.getContext())}
</TableCell>
))}
</TableRow>
)
}, [system, system.status, colLength, t])
})
const SystemTableRow = memo(
({
row,
virtualRow,
colLength,
}: {
row: Row<SystemRecord>
virtualRow: VirtualItem
length: number
colLength: number
}) => {
const system = row.original
const { t } = useLingui()
return useMemo(() => {
return (
<TableRow
// data-state={row.getIsSelected() && "selected"}
className={cn("cursor-pointer transition-opacity relative safari:transform-3d", {
"opacity-50": system.status === SystemStatus.Paused,
})}
>
{row.getVisibleCells().map((cell) => (
<TableCell
key={cell.id}
style={{
width: cell.column.getSize(),
height: virtualRow.size,
}}
className="py-0"
>
{flexRender(cell.column.columnDef.cell, cell.getContext())}
</TableCell>
))}
</TableRow>
)
}, [system, system.status, colLength, t])
}
)
const SystemCard = memo(
({ row, table, colLength }: { row: Row<SystemRecord>; table: TableType<SystemRecord>; colLength: number }) => {
@@ -471,7 +463,7 @@ const SystemCard = memo(
if (!column.getIsVisible() || column.id === "system" || column.id === "actions") return null
const cell = row.getAllCells().find((cell) => cell.column.id === column.id)
if (!cell) return null
// @ts-ignore
// @ts-expect-error
const { Icon, name } = column.columnDef as ColumnDef<SystemRecord, unknown>
return (
<>
@@ -494,7 +486,7 @@ const SystemCard = memo(
</div>
</CardContent>
<Link
href={getPagePath($router, "system", { name: row.original.name })}
href={getPagePath($router, "system", { id: row.original.id })}
className="inset-0 absolute w-full h-full"
>
<span className="sr-only">{row.original.name}</span>

View File

@@ -130,3 +130,12 @@ export function HourglassIcon(props: SVGProps<SVGSVGElement>) {
</svg>
)
}
export function WebSocketIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg viewBox="0 0 256 193" {...props} fill="currentColor">
<title>WebSocket</title>
<path d="M192 145h32V68l-36-35-22 22 26 27zm32 16H113l-26-27 11-11 22 22h45l-44-45 11-11 44 44V88l-21-22 11-11-55-55H0l32 32h65l24 23-34 34-24-23V48H32v31l55 55-23 22 36 36h156z" />
</svg>
)
}

View File

@@ -26,7 +26,7 @@ export const verifyAuth = () => {
}
/** Logs the user out by clearing the auth store and unsubscribing from realtime updates. */
export async function logOut() {
export function logOut() {
$allSystemsByName.set({})
$alerts.set({})
$userSettings.set({} as UserSettings)

View File

@@ -53,3 +53,11 @@ export enum HourFormat {
"12h" = "12h",
"24h" = "24h",
}
/** Connection type */
export enum ConnectionType {
SSH = 1,
WebSocket,
}
export const connectionTypeLabels = ["", "SSH", "WebSocket"] as const
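Because ConnectionType values start at 1, the leading empty string keeps connectionTypeLabels indexed directly by enum value. Illustrative lookups (not part of this diff):
const sshLabel = connectionTypeLabels[ConnectionType.SSH] // "SSH"
const wsLabel = connectionTypeLabels[ConnectionType.WebSocket] // "WebSocket"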

View File

@@ -1,6 +1,7 @@
import { t } from "@lingui/core/macro"
import { type ClassValue, clsx } from "clsx"
import { timeDay, timeHour } from "d3-time"
import { listenKeys } from "nanostores"
import { timeDay, timeHour, timeMinute } from "d3-time"
import { useEffect, useState } from "react"
import { twMerge } from "tailwind-merge"
import { prependBasePath } from "@/components/router"
@@ -8,7 +9,6 @@ import { toast } from "@/components/ui/use-toast"
import type { ChartTimeData, FingerprintRecord, SemVer, SystemRecord } from "@/types"
import { HourFormat, MeterState, Unit } from "./enums"
import { $copyContent, $userSettings } from "./stores"
import { listenKeys } from "nanostores"
export const FAVICON_DEFAULT = "favicon.svg"
export const FAVICON_GREEN = "favicon-green.svg"
@@ -54,9 +54,18 @@ const createShortDateFormatter = (hour12?: boolean) =>
hour12,
})
const createHourWithSecondsFormatter = (hour12?: boolean) =>
new Intl.DateTimeFormat(undefined, {
hour: "numeric",
minute: "numeric",
second: "numeric",
hour12,
})
// Initialize formatters with default values
let hourWithMinutesFormatter = createHourWithMinutesFormatter()
let shortDateFormatter = createShortDateFormatter()
let hourWithSecondsFormatter = createHourWithSecondsFormatter()
export const currentHour12 = () => shortDateFormatter.resolvedOptions().hour12
@@ -68,6 +77,10 @@ export const formatShortDate = (timestamp: string) => {
return shortDateFormatter.format(new Date(timestamp))
}
export const hourWithSeconds = (timestamp: string) => {
return hourWithSecondsFormatter.format(new Date(timestamp))
}
// Update the time formatters if user changes hourFormat
listenKeys($userSettings, ["hourFormat"], ({ hourFormat }) => {
if (!hourFormat) return
@@ -75,6 +88,7 @@ listenKeys($userSettings, ["hourFormat"], ({ hourFormat }) => {
if (currentHour12() !== newHour12) {
hourWithMinutesFormatter = createHourWithMinutesFormatter(newHour12)
shortDateFormatter = createShortDateFormatter(newHour12)
hourWithSecondsFormatter = createHourWithSecondsFormatter(newHour12)
}
})
@@ -91,6 +105,15 @@ export const updateFavicon = (newIcon: string) => {
}
export const chartTimeData: ChartTimeData = {
"1m": {
type: "1m",
expectedInterval: 1000,
label: () => t`1 minute`,
format: (timestamp: string) => hourWithSeconds(timestamp),
ticks: 3,
getOffset: (endTime: Date) => timeMinute.offset(endTime, -1),
minVersion: "0.13.0",
},
"1h": {
type: "1m",
expectedInterval: 60_000,
@@ -179,8 +202,8 @@ export function formatTemperature(celsius: number, unit?: Unit): { value: number
if (!unit) {
unit = $userSettings.get().unitTemp || Unit.Celsius
}
// need loose equality check due to form data being strings
if (unit === Unit.Fahrenheit) {
// biome-ignore lint/suspicious/noDoubleEquals: need loose equality check due to form data being strings
if (unit == Unit.Fahrenheit) {
return {
value: celsius * 1.8 + 32,
unit: "°F",
@@ -202,8 +225,8 @@ export function formatBytes(
// Convert MB to bytes if isMegabytes is true
if (isMegabytes) size *= 1024 * 1024
// need loose equality check due to form data being strings
if (unit === Unit.Bits) {
// biome-ignore lint/suspicious/noDoubleEquals: need loose equality check due to form data being strings
if (unit == Unit.Bits) {
const bits = size * 8
const suffix = perSecond ? "ps" : ""
if (bits < 1000) return { value: bits, unit: `b${suffix}` }
@@ -278,7 +301,7 @@ export const generateToken = () => {
}
/** Get the hub URL from the global BESZEL object */
export const getHubURL = () => BESZEL?.HUB_URL || window.location.origin
export const getHubURL = () => globalThis.BESZEL?.HUB_URL || window.location.origin
/** Map of system IDs to their corresponding tokens (used to avoid fetching in add-system dialog) */
export const tokenMap = new Map<SystemRecord["id"], FingerprintRecord["token"]>()
@@ -333,6 +356,17 @@ export const parseSemVer = (semVer = ""): SemVer => {
return { major: parts?.[0] ?? 0, minor: parts?.[1] ?? 0, patch: parts?.[2] ?? 0 }
}
/** Compare two semver strings. Returns -1 if a is less than b, 0 if a is equal to b, and 1 if a is greater than b. */
export function compareSemVer(a: SemVer, b: SemVer) {
if (a.major !== b.major) {
return a.major - b.major
}
if (a.minor !== b.minor) {
return a.minor - b.minor
}
return a.patch - b.patch
}
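A hedged usage sketch (not part of this diff; agentVersion is a hypothetical string such as "0.13.1") showing how compareSemVer pairs with parseSemVer to gate features on a minimum agent version, as with the "1m" chart's minVersion:
// True when the agent is at or above 0.13.0 (assumed minimum for the 1m chart).
const supportsOneMinute = compareSemVer(parseSemVer(agentVersion), parseSemVer("0.13.0")) >= 0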
/** Get meter state from 0-100 value. Used for color coding meters. */
export function getMeterState(value: number): MeterState {
const { colorWarn = 65, colorCrit = 90 } = $userSettings.get()

View File

@@ -48,6 +48,10 @@ msgstr "1 ساعة"
msgid "1 min"
msgstr "دقيقة واحدة"
#: src/lib/utils.ts
msgid "1 minute"
msgstr "1 دقيقة"
#: src/lib/utils.ts
msgid "1 week"
msgstr "1 أسبوع"
@@ -173,6 +177,10 @@ msgstr "متوسط استخدام وحدة المعالجة المركزية ع
msgid "Average utilization of {0}"
msgstr "متوسط ​​استخدام {0}"
#: src/components/routes/system.tsx
msgid "Average utilization of GPU engines"
msgstr "متوسط استغلال محركات GPU"
#: src/components/command-palette.tsx
#: src/components/navbar.tsx
msgid "Backups"
@@ -363,6 +371,14 @@ msgstr "أنشئت"
msgid "Critical (%)"
msgstr "حرج (%)"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Download"
msgstr "التنزيل التراكمي"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Upload"
msgstr "الرفع التراكمي"
#. Context: Battery state
#: src/components/routes/system.tsx
msgid "Current state"
@@ -441,6 +457,10 @@ msgstr "معطل"
msgid "Down ({downSystemsLength})"
msgstr "معطل ({downSystemsLength})"
#: src/components/routes/system/network-sheet.tsx
msgid "Download"
msgstr "تنزيل"
#: src/components/alerts-history-columns.tsx
msgid "Duration"
msgstr "المدة"
@@ -452,6 +472,7 @@ msgstr "تعديل"
#: src/components/login/auth-form.tsx
#: src/components/login/forgot-pass-form.tsx
#: src/components/login/otp-forms.tsx
msgid "Email"
msgstr "البريد الإشباكي"
@@ -472,6 +493,10 @@ msgstr "أدخل عنوان البريد الإشباكي لإعادة تعيي
msgid "Enter email address..."
msgstr "أدخل عنوان البريد الإشباكي..."
#: src/components/login/otp-forms.tsx
msgid "Enter your one-time password."
msgstr "أدخل كلمة المرور لمرة واحدة الخاصة بك."
#: src/components/login/auth-form.tsx
#: src/components/routes/settings/alerts-history-data-table.tsx
#: src/components/routes/settings/config-yaml.tsx
@@ -542,6 +567,12 @@ msgstr "لمدة <0>{min}</0> {min, plural, one {دقيقة} other {دقائق}}
msgid "Forgot password?"
msgstr "هل نسيت كلمة المرور؟"
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgctxt "Button to copy install command"
msgid "FreeBSD command"
msgstr "أمر FreeBSD"
#. Context: Battery state
#: src/lib/i18n.ts
msgid "Full"
@@ -553,6 +584,10 @@ msgstr "ممتلئة"
msgid "General"
msgstr "عام"
#: src/components/routes/system.tsx
msgid "GPU Engines"
msgstr "محركات GPU"
#: src/components/routes/system.tsx
msgid "GPU Power Draw"
msgstr "استهلاك طاقة وحدة معالجة الرسوميات"
@@ -645,6 +680,7 @@ msgid "Manage display and notification preferences."
msgstr "إدارة تفضيلات العرض والإشعارات."
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgid "Manual setup instructions"
msgstr "تعليمات الإعداد اليدوي"
@@ -680,6 +716,9 @@ msgid "Network traffic of docker containers"
msgstr "حركة مرور الشبكة لحاويات الدوكر"
#: src/components/routes/system.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
msgid "Network traffic of public interfaces"
msgstr "حركة مرور الشبكة للواجهات العامة"
@@ -715,6 +754,10 @@ msgstr "دعم OAuth 2 / OIDC"
msgid "On each restart, systems in the database will be updated to match the systems defined in the file."
msgstr "في كل إعادة تشغيل، سيتم تحديث الأنظمة في قاعدة البيانات لتتطابق مع الأنظمة المعرفة في الملف."
#: src/components/login/auth-form.tsx
msgid "One-time password"
msgstr "كلمة مرور لمرة واحدة"
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/systems-table/systems-table-columns.tsx
@@ -833,6 +876,14 @@ msgstr "قراءة"
msgid "Received"
msgstr "تم الاستلام"
#: src/components/login/login.tsx
msgid "Request a one-time password"
msgstr "طلب كلمة مرور لمرة واحدة"
#: src/components/login/otp-forms.tsx
msgid "Request OTP"
msgstr "طلب OTP"
#: src/components/login/forgot-pass-form.tsx
msgid "Reset Password"
msgstr "إعادة تعيين كلمة المرور"
@@ -888,10 +939,6 @@ msgstr "تم الإرسال"
msgid "Set percentage thresholds for meter colors."
msgstr "تعيين عتبات النسبة المئوية لألوان العداد."
#: src/components/routes/settings/general.tsx
msgid "Sets the default time range for charts when a system is viewed."
msgstr "يحدد النطاق الزمني الافتراضي للرسوم البيانية عند عرض النظام."
#: src/components/command-palette.tsx
#: src/components/command-palette.tsx
#: src/components/routes/settings/layout.tsx
@@ -1002,6 +1049,10 @@ msgstr "معدل نقل {extraFsName}"
msgid "Throughput of root filesystem"
msgstr "معدل نقل نظام الملفات الجذر"
#: src/components/routes/settings/general.tsx
msgid "Time format"
msgstr "تنسيق الوقت"
#: src/components/routes/settings/notifications.tsx
msgid "To email(s)"
msgstr "إلى البريد الإشباكي"
@@ -1034,6 +1085,14 @@ msgstr "تسمح الرموز المميزة للوكلاء بالاتصال و
msgid "Tokens and fingerprints are used to authenticate WebSocket connections to the hub."
msgstr "تُستخدم الرموز المميزة والبصمات للمصادقة على اتصالات WebSocket إلى المحور."
#: src/components/routes/system/network-sheet.tsx
msgid "Total data received for each interface"
msgstr "إجمالي البيانات المستلمة لكل واجهة"
#: src/components/routes/system/network-sheet.tsx
msgid "Total data sent for each interface"
msgstr "إجمالي البيانات المرسلة لكل واجهة"
#: src/lib/alerts.ts
msgid "Triggers when 1 minute load average exceeds a threshold"
msgstr "يتم التفعيل عندما يتجاوز متوسط التحميل لمدة دقيقة واحدة عتبة معينة"
@@ -1048,7 +1107,7 @@ msgstr "يتم التفعيل عندما يتجاوز متوسط التحميل
#: src/lib/alerts.ts
msgid "Triggers when any sensor exceeds a threshold"
msgstr "يتم التفعيل عندما <EFBFBD><EFBFBD>تجاوز أي مستشعر عتبة معينة"
msgstr "يتم التفعيل عندما يتجاوز أي مستشعر عتبة معينة"
#: src/lib/alerts.ts
msgid "Triggers when combined up/down exceeds a threshold"
@@ -1095,6 +1154,10 @@ msgstr "قيد التشغيل"
msgid "Up ({upSystemsLength})"
msgstr "قيد التشغيل ({upSystemsLength})"
#: src/components/routes/system/network-sheet.tsx
msgid "Upload"
msgstr "رفع"
#: src/components/routes/system.tsx
msgid "Uptime"
msgstr "مدة التشغيل"
@@ -1128,6 +1191,10 @@ msgstr "القيمة"
msgid "View"
msgstr "عرض"
#: src/components/routes/system/network-sheet.tsx
msgid "View more"
msgstr "عرض المزيد"
#: src/components/routes/settings/alerts-history-data-table.tsx
msgid "View your 200 most recent alerts."
msgstr "عرض أحدث 200 تنبيه."


@@ -48,6 +48,10 @@ msgstr "1 час"
msgid "1 min"
msgstr "1 минута"
#: src/lib/utils.ts
msgid "1 minute"
msgstr "1 минута"
#: src/lib/utils.ts
msgid "1 week"
msgstr "1 седмица"
@@ -173,6 +177,10 @@ msgstr "Средно използване на процесора на цяла
msgid "Average utilization of {0}"
msgstr "Средно използване на {0}"
#: src/components/routes/system.tsx
msgid "Average utilization of GPU engines"
msgstr "Средно използване на GPU двигатели"
#: src/components/command-palette.tsx
#: src/components/navbar.tsx
msgid "Backups"
@@ -363,6 +371,14 @@ msgstr "Създаден"
msgid "Critical (%)"
msgstr "Критично (%)"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Download"
msgstr "Кумулативно изтегляне"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Upload"
msgstr "Кумулативно качване"
#. Context: Battery state
#: src/components/routes/system.tsx
msgid "Current state"
@@ -441,6 +457,10 @@ msgstr "Офлайн"
msgid "Down ({downSystemsLength})"
msgstr "Офлайн ({downSystemsLength})"
#: src/components/routes/system/network-sheet.tsx
msgid "Download"
msgstr "Изтегляне"
#: src/components/alerts-history-columns.tsx
msgid "Duration"
msgstr "Продължителност"
@@ -452,6 +472,7 @@ msgstr "Редактирай"
#: src/components/login/auth-form.tsx
#: src/components/login/forgot-pass-form.tsx
#: src/components/login/otp-forms.tsx
msgid "Email"
msgstr "Имейл"
@@ -472,6 +493,10 @@ msgstr "Въведи имейл адрес за да нулираш парола
msgid "Enter email address..."
msgstr "Въведи имейл адрес..."
#: src/components/login/otp-forms.tsx
msgid "Enter your one-time password."
msgstr "Въведете Вашата еднократна парола."
#: src/components/login/auth-form.tsx
#: src/components/routes/settings/alerts-history-data-table.tsx
#: src/components/routes/settings/config-yaml.tsx
@@ -542,6 +567,12 @@ msgstr "За <0>{min}</0> {min, plural, one {минута} other {минути}}
msgid "Forgot password?"
msgstr "Забравена парола?"
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgctxt "Button to copy install command"
msgid "FreeBSD command"
msgstr "FreeBSD команда"
#. Context: Battery state
#: src/lib/i18n.ts
msgid "Full"
@@ -553,6 +584,10 @@ msgstr "Пълна"
msgid "General"
msgstr "Общо"
#: src/components/routes/system.tsx
msgid "GPU Engines"
msgstr "GPU двигатели"
#: src/components/routes/system.tsx
msgid "GPU Power Draw"
msgstr "Консумация на ток от графична карта"
@@ -645,6 +680,7 @@ msgid "Manage display and notification preferences."
msgstr "Управление на предпочитанията за показване и уведомяване."
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgid "Manual setup instructions"
msgstr "Инструкции за ръчна настройка"
@@ -680,6 +716,9 @@ msgid "Network traffic of docker containers"
msgstr "Мрежов трафик на docker контейнери"
#: src/components/routes/system.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
msgid "Network traffic of public interfaces"
msgstr "Мрежов трафик на публични интерфейси"
@@ -715,6 +754,10 @@ msgstr "Поддръжка на OAuth 2 / OIDC"
msgid "On each restart, systems in the database will be updated to match the systems defined in the file."
msgstr "На всеки рестарт, системите в датабазата ще бъдат обновени да съвпадат със системите зададени във файла."
#: src/components/login/auth-form.tsx
msgid "One-time password"
msgstr "Еднократна парола"
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/systems-table/systems-table-columns.tsx
@@ -833,6 +876,14 @@ msgstr "Прочети"
msgid "Received"
msgstr "Получени"
#: src/components/login/login.tsx
msgid "Request a one-time password"
msgstr "Заявка за еднократна парола"
#: src/components/login/otp-forms.tsx
msgid "Request OTP"
msgstr "Заявка OTP"
#: src/components/login/forgot-pass-form.tsx
msgid "Reset Password"
msgstr "Нулиране на парола"
@@ -888,10 +939,6 @@ msgstr "Изпратени"
msgid "Set percentage thresholds for meter colors."
msgstr "Задайте процентни прагове за цветовете на измервателните уреди."
#: src/components/routes/settings/general.tsx
msgid "Sets the default time range for charts when a system is viewed."
msgstr "Задава диапазона за време за диаграмите, когато се разглежда система."
#: src/components/command-palette.tsx
#: src/components/command-palette.tsx
#: src/components/routes/settings/layout.tsx
@@ -1002,6 +1049,10 @@ msgstr "Пропускателна способност на {extraFsName}"
msgid "Throughput of root filesystem"
msgstr "Пропускателна способност на root файловата система"
#: src/components/routes/settings/general.tsx
msgid "Time format"
msgstr "Формат на времето"
#: src/components/routes/settings/notifications.tsx
msgid "To email(s)"
msgstr "До имейл(ите)"
@@ -1034,6 +1085,14 @@ msgstr "Токените позволяват на агентите да се с
msgid "Tokens and fingerprints are used to authenticate WebSocket connections to the hub."
msgstr "Токените и пръстовите отпечатъци се използват за удостоверяване на WebSocket връзките към концентратора."
#: src/components/routes/system/network-sheet.tsx
msgid "Total data received for each interface"
msgstr "Общо получени данни за всеки интерфейс"
#: src/components/routes/system/network-sheet.tsx
msgid "Total data sent for each interface"
msgstr "Общо изпратени данни за всеки интерфейс"
#: src/lib/alerts.ts
msgid "Triggers when 1 minute load average exceeds a threshold"
msgstr "Задейства се, когато употребата на паметта за 1 минута надвиши зададен праг"
@@ -1095,6 +1154,10 @@ msgstr "Нагоре"
msgid "Up ({upSystemsLength})"
msgstr "Нагоре ({upSystemsLength})"
#: src/components/routes/system/network-sheet.tsx
msgid "Upload"
msgstr "Качване"
#: src/components/routes/system.tsx
msgid "Uptime"
msgstr "Време на работа"
@@ -1128,6 +1191,10 @@ msgstr "Стойност"
msgid "View"
msgstr "Изглед"
#: src/components/routes/system/network-sheet.tsx
msgid "View more"
msgstr "Виж повече"
#: src/components/routes/settings/alerts-history-data-table.tsx
msgid "View your 200 most recent alerts."
msgstr "Прегледайте последните си 200 сигнала."


@@ -48,6 +48,10 @@ msgstr "1 hodina"
msgid "1 min"
msgstr "1 min"
#: src/lib/utils.ts
msgid "1 minute"
msgstr "1 minuta"
#: src/lib/utils.ts
msgid "1 week"
msgstr "1 týden"
@@ -173,6 +177,10 @@ msgstr "Průměrné využití CPU v celém systému"
msgid "Average utilization of {0}"
msgstr "Průměrné využití {0}"
#: src/components/routes/system.tsx
msgid "Average utilization of GPU engines"
msgstr "Průměrné využití GPU engine"
#: src/components/command-palette.tsx
#: src/components/navbar.tsx
msgid "Backups"
@@ -363,6 +371,14 @@ msgstr "Vytvořeno"
msgid "Critical (%)"
msgstr "Kritické (%)"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Download"
msgstr "Kumulativní stažení"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Upload"
msgstr "Kumulativní odeslání"
#. Context: Battery state
#: src/components/routes/system.tsx
msgid "Current state"
@@ -441,6 +457,10 @@ msgstr "Nefunkční"
msgid "Down ({downSystemsLength})"
msgstr "Nefunkční ({downSystemsLength})"
#: src/components/routes/system/network-sheet.tsx
msgid "Download"
msgstr "Stažení"
#: src/components/alerts-history-columns.tsx
msgid "Duration"
msgstr "Doba trvání"
@@ -452,6 +472,7 @@ msgstr "Upravit"
#: src/components/login/auth-form.tsx
#: src/components/login/forgot-pass-form.tsx
#: src/components/login/otp-forms.tsx
msgid "Email"
msgstr "Email"
@@ -472,6 +493,10 @@ msgstr "Zadejte e-mailovou adresu pro obnovu hesla"
msgid "Enter email address..."
msgstr "Zadejte e-mailovou adresu..."
#: src/components/login/otp-forms.tsx
msgid "Enter your one-time password."
msgstr "Zadejte Vaše jednorázové heslo."
#: src/components/login/auth-form.tsx
#: src/components/routes/settings/alerts-history-data-table.tsx
#: src/components/routes/settings/config-yaml.tsx
@@ -542,6 +567,12 @@ msgstr "Za <0>{min}</0> {min, plural, one {minutu} few {minuty} other {minut}}"
msgid "Forgot password?"
msgstr "Zapomněli jste heslo?"
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgctxt "Button to copy install command"
msgid "FreeBSD command"
msgstr "FreeBSD příkaz"
#. Context: Battery state
#: src/lib/i18n.ts
msgid "Full"
@@ -553,6 +584,10 @@ msgstr "Plná"
msgid "General"
msgstr "Obecné"
#: src/components/routes/system.tsx
msgid "GPU Engines"
msgstr "GPU enginy"
#: src/components/routes/system.tsx
msgid "GPU Power Draw"
msgstr "Spotřeba energie GPU"
@@ -645,6 +680,7 @@ msgid "Manage display and notification preferences."
msgstr "Správa nastavení zobrazení a oznámení."
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgid "Manual setup instructions"
msgstr "Pokyny k manuálnímu nastavení"
@@ -680,6 +716,9 @@ msgid "Network traffic of docker containers"
msgstr "Síťový provoz kontejnerů docker"
#: src/components/routes/system.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
msgid "Network traffic of public interfaces"
msgstr "Síťový provoz veřejných rozhraní"
@@ -715,6 +754,10 @@ msgstr "Podpora OAuth 2 / OIDC"
msgid "On each restart, systems in the database will be updated to match the systems defined in the file."
msgstr "Při každém restartu budou systémy v databázi aktualizovány tak, aby odpovídaly systémům definovaným v souboru."
#: src/components/login/auth-form.tsx
msgid "One-time password"
msgstr "Jednorázové heslo"
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/systems-table/systems-table-columns.tsx
@@ -833,6 +876,14 @@ msgstr "Číst"
msgid "Received"
msgstr "Přijato"
#: src/components/login/login.tsx
msgid "Request a one-time password"
msgstr "Požádat o jednorázové heslo"
#: src/components/login/otp-forms.tsx
msgid "Request OTP"
msgstr "Požádat OTP"
#: src/components/login/forgot-pass-form.tsx
msgid "Reset Password"
msgstr "Obnovit heslo"
@@ -888,10 +939,6 @@ msgstr "Odeslat"
msgid "Set percentage thresholds for meter colors."
msgstr "Nastavte procentuální prahové hodnoty pro barvy měřičů."
#: src/components/routes/settings/general.tsx
msgid "Sets the default time range for charts when a system is viewed."
msgstr "Nastaví výchozí časový rozsah grafů, když je systém zobrazen."
#: src/components/command-palette.tsx
#: src/components/command-palette.tsx
#: src/components/routes/settings/layout.tsx
@@ -1002,6 +1049,10 @@ msgstr "Propustnost {extraFsName}"
msgid "Throughput of root filesystem"
msgstr "Propustnost kořenového souborového systému"
#: src/components/routes/settings/general.tsx
msgid "Time format"
msgstr "Formát času"
#: src/components/routes/settings/notifications.tsx
msgid "To email(s)"
msgstr "Na email(y)"
@@ -1034,6 +1085,14 @@ msgstr "Tokeny umožňují agentům připojení a registraci. Otisky jsou stabil
msgid "Tokens and fingerprints are used to authenticate WebSocket connections to the hub."
msgstr "Tokeny a otisky slouží k ověření připojení WebSocket k uzlu."
#: src/components/routes/system/network-sheet.tsx
msgid "Total data received for each interface"
msgstr "Celkový přijatý objem dat pro každé rozhraní"
#: src/components/routes/system/network-sheet.tsx
msgid "Total data sent for each interface"
msgstr "Celkový odeslaný objem dat pro každé rozhraní"
#: src/lib/alerts.ts
msgid "Triggers when 1 minute load average exceeds a threshold"
msgstr "Spustí se, když využití paměti během 1 minuty překročí prahovou hodnotu"
@@ -1095,6 +1154,10 @@ msgstr "Funkční"
msgid "Up ({upSystemsLength})"
msgstr "Funkční ({upSystemsLength})"
#: src/components/routes/system/network-sheet.tsx
msgid "Upload"
msgstr "Odeslání"
#: src/components/routes/system.tsx
msgid "Uptime"
msgstr "Doba provozu"
@@ -1128,6 +1191,10 @@ msgstr "Hodnota"
msgid "View"
msgstr "Zobrazení"
#: src/components/routes/system/network-sheet.tsx
msgid "View more"
msgstr "Zobrazit více"
#: src/components/routes/settings/alerts-history-data-table.tsx
msgid "View your 200 most recent alerts."
msgstr "Zobrazit vašich 200 nejnovějších upozornění."


@@ -48,6 +48,10 @@ msgstr "1 time"
msgid "1 min"
msgstr ""
#: src/lib/utils.ts
msgid "1 minute"
msgstr "1 minut"
#: src/lib/utils.ts
msgid "1 week"
msgstr "1 uge"
@@ -173,6 +177,10 @@ msgstr "Gennemsnitlig systembaseret CPU-udnyttelse"
msgid "Average utilization of {0}"
msgstr "Gennemsnitlig udnyttelse af {0}"
#: src/components/routes/system.tsx
msgid "Average utilization of GPU engines"
msgstr "Gennemsnitlig udnyttelse af GPU-motorer"
#: src/components/command-palette.tsx
#: src/components/navbar.tsx
msgid "Backups"
@@ -363,6 +371,14 @@ msgstr ""
msgid "Critical (%)"
msgstr "Kritisk (%)"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Download"
msgstr "Kumulativ download"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Upload"
msgstr "Kumulativ upload"
#. Context: Battery state
#: src/components/routes/system.tsx
msgid "Current state"
@@ -441,6 +457,10 @@ msgstr "Nede"
msgid "Down ({downSystemsLength})"
msgstr ""
#: src/components/routes/system/network-sheet.tsx
msgid "Download"
msgstr "Download"
#: src/components/alerts-history-columns.tsx
msgid "Duration"
msgstr ""
@@ -452,6 +472,7 @@ msgstr "Rediger"
#: src/components/login/auth-form.tsx
#: src/components/login/forgot-pass-form.tsx
#: src/components/login/otp-forms.tsx
msgid "Email"
msgstr "E-mail"
@@ -472,6 +493,10 @@ msgstr "Indtast e-mailadresse for at nulstille adgangskoden"
msgid "Enter email address..."
msgstr "Indtast e-mailadresse..."
#: src/components/login/otp-forms.tsx
msgid "Enter your one-time password."
msgstr "Indtast din engangsadgangskode."
#: src/components/login/auth-form.tsx
#: src/components/routes/settings/alerts-history-data-table.tsx
#: src/components/routes/settings/config-yaml.tsx
@@ -542,6 +567,12 @@ msgstr "For <0>{min}</0> {min, plural, one {minut} other {minutter}}"
msgid "Forgot password?"
msgstr "Glemt adgangskode?"
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgctxt "Button to copy install command"
msgid "FreeBSD command"
msgstr "FreeBSD kommando"
#. Context: Battery state
#: src/lib/i18n.ts
msgid "Full"
@@ -553,6 +584,10 @@ msgstr "Fuldt opladt"
msgid "General"
msgstr "Generelt"
#: src/components/routes/system.tsx
msgid "GPU Engines"
msgstr "GPU-motorer"
#: src/components/routes/system.tsx
msgid "GPU Power Draw"
msgstr "Gpu Strøm Træk"
@@ -645,6 +680,7 @@ msgid "Manage display and notification preferences."
msgstr "Administrer display og notifikationsindstillinger."
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgid "Manual setup instructions"
msgstr "Manuel opsætningsvejledning"
@@ -680,6 +716,9 @@ msgid "Network traffic of docker containers"
msgstr "Netværkstrafik af dockercontainere"
#: src/components/routes/system.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
msgid "Network traffic of public interfaces"
msgstr "Netværkstrafik af offentlige grænseflader"
@@ -715,6 +754,10 @@ msgstr "OAuth 2 / OIDC understøttelse"
msgid "On each restart, systems in the database will be updated to match the systems defined in the file."
msgstr "Ved hver genstart vil systemer i databasen blive opdateret til at matche de systemer, der er defineret i filen."
#: src/components/login/auth-form.tsx
msgid "One-time password"
msgstr "Engangsadgangskode"
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/systems-table/systems-table-columns.tsx
@@ -833,6 +876,14 @@ msgstr "Læs"
msgid "Received"
msgstr "Modtaget"
#: src/components/login/login.tsx
msgid "Request a one-time password"
msgstr "Anmod om engangsadgangskode"
#: src/components/login/otp-forms.tsx
msgid "Request OTP"
msgstr "Anmod OTP"
#: src/components/login/forgot-pass-form.tsx
msgid "Reset Password"
msgstr "Nulstil adgangskode"
@@ -888,10 +939,6 @@ msgstr "Sendt"
msgid "Set percentage thresholds for meter colors."
msgstr "Indstil procentvise tærskler for målerfarver."
#: src/components/routes/settings/general.tsx
msgid "Sets the default time range for charts when a system is viewed."
msgstr "Sætter standardtidsintervallet for diagrammer når et system vises."
#: src/components/command-palette.tsx
#: src/components/command-palette.tsx
#: src/components/routes/settings/layout.tsx
@@ -1002,6 +1049,10 @@ msgstr "Gennemløb af {extraFsName}"
msgid "Throughput of root filesystem"
msgstr "Gennemløb af rodfilsystemet"
#: src/components/routes/settings/general.tsx
msgid "Time format"
msgstr "Tidsformat"
#: src/components/routes/settings/notifications.tsx
msgid "To email(s)"
msgstr "Til email(s)"
@@ -1034,6 +1085,14 @@ msgstr ""
msgid "Tokens and fingerprints are used to authenticate WebSocket connections to the hub."
msgstr ""
#: src/components/routes/system/network-sheet.tsx
msgid "Total data received for each interface"
msgstr "Samlet modtaget data for hver interface"
#: src/components/routes/system/network-sheet.tsx
msgid "Total data sent for each interface"
msgstr "Samlet sendt data for hver interface"
#: src/lib/alerts.ts
msgid "Triggers when 1 minute load average exceeds a threshold"
msgstr ""
@@ -1095,6 +1154,10 @@ msgstr "Oppe"
msgid "Up ({upSystemsLength})"
msgstr ""
#: src/components/routes/system/network-sheet.tsx
msgid "Upload"
msgstr "Upload"
#: src/components/routes/system.tsx
msgid "Uptime"
msgstr "Oppetid"
@@ -1128,6 +1191,10 @@ msgstr ""
msgid "View"
msgstr "Vis"
#: src/components/routes/system/network-sheet.tsx
msgid "View more"
msgstr "Se mere"
#: src/components/routes/settings/alerts-history-data-table.tsx
msgid "View your 200 most recent alerts."
msgstr ""


@@ -8,7 +8,7 @@ msgstr ""
"Language: de\n"
"Project-Id-Version: beszel\n"
"Report-Msgid-Bugs-To: \n"
"PO-Revision-Date: 2025-08-28 23:21\n"
"PO-Revision-Date: 2025-10-05 16:13\n"
"Last-Translator: \n"
"Language-Team: German\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
@@ -48,6 +48,10 @@ msgstr "1 Stunde"
msgid "1 min"
msgstr "1 Min"
#: src/lib/utils.ts
msgid "1 minute"
msgstr "1 Minute"
#: src/lib/utils.ts
msgid "1 week"
msgstr "1 Woche"
@@ -173,6 +177,10 @@ msgstr "Durchschnittliche systemweite CPU-Auslastung"
msgid "Average utilization of {0}"
msgstr "Durchschnittliche Auslastung von {0}"
#: src/components/routes/system.tsx
msgid "Average utilization of GPU engines"
msgstr "Durchschnittliche Auslastung der GPU-Engines"
#: src/components/command-palette.tsx
#: src/components/navbar.tsx
msgid "Backups"
@@ -363,6 +371,14 @@ msgstr "Erstellt"
msgid "Critical (%)"
msgstr "Kritisch (%)"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Download"
msgstr "Kumulativer Download"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Upload"
msgstr "Kumulativer Upload"
#. Context: Battery state
#: src/components/routes/system.tsx
msgid "Current state"
@@ -441,6 +457,10 @@ msgstr "Offline"
msgid "Down ({downSystemsLength})"
msgstr "Offline ({downSystemsLength})"
#: src/components/routes/system/network-sheet.tsx
msgid "Download"
msgstr "Herunterladen"
#: src/components/alerts-history-columns.tsx
msgid "Duration"
msgstr "Dauer"
@@ -452,6 +472,7 @@ msgstr "Bearbeiten"
#: src/components/login/auth-form.tsx
#: src/components/login/forgot-pass-form.tsx
#: src/components/login/otp-forms.tsx
msgid "Email"
msgstr "E-Mail"
@@ -472,6 +493,10 @@ msgstr "E-Mail-Adresse eingeben, um das Passwort zurückzusetzen"
msgid "Enter email address..."
msgstr "E-Mail-Adresse eingeben..."
#: src/components/login/otp-forms.tsx
msgid "Enter your one-time password."
msgstr "Geben Sie Ihr Einmalpasswort ein."
#: src/components/login/auth-form.tsx
#: src/components/routes/settings/alerts-history-data-table.tsx
#: src/components/routes/settings/config-yaml.tsx
@@ -542,6 +567,12 @@ msgstr "Für <0>{min}</0> {min, plural, one {Minute} other {Minuten}}"
msgid "Forgot password?"
msgstr "Passwort vergessen?"
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgctxt "Button to copy install command"
msgid "FreeBSD command"
msgstr "FreeBSD Befehl"
#. Context: Battery state
#: src/lib/i18n.ts
msgid "Full"
@@ -553,6 +584,10 @@ msgstr "Voll"
msgid "General"
msgstr "Allgemein"
#: src/components/routes/system.tsx
msgid "GPU Engines"
msgstr "GPU-Engines"
#: src/components/routes/system.tsx
msgid "GPU Power Draw"
msgstr "GPU-Leistungsaufnahme"
@@ -645,6 +680,7 @@ msgid "Manage display and notification preferences."
msgstr "Anzeige- und Benachrichtigungseinstellungen verwalten."
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgid "Manual setup instructions"
msgstr "Anleitung zur manuellen Einrichtung"
@@ -680,6 +716,9 @@ msgid "Network traffic of docker containers"
msgstr "Netzwerkverkehr der Docker-Container"
#: src/components/routes/system.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
msgid "Network traffic of public interfaces"
msgstr "Netzwerkverkehr der öffentlichen Schnittstellen"
@@ -715,6 +754,10 @@ msgstr "OAuth 2 / OIDC-Unterstützung"
msgid "On each restart, systems in the database will be updated to match the systems defined in the file."
msgstr "Bei jedem Neustart werden die Systeme in der Datenbank aktualisiert, um den in der Datei definierten Systemen zu entsprechen."
#: src/components/login/auth-form.tsx
msgid "One-time password"
msgstr "Einmalpasswort"
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/systems-table/systems-table-columns.tsx
@@ -833,6 +876,14 @@ msgstr "Lesen"
msgid "Received"
msgstr "Empfangen"
#: src/components/login/login.tsx
msgid "Request a one-time password"
msgstr "Einmalpasswort anfordern"
#: src/components/login/otp-forms.tsx
msgid "Request OTP"
msgstr "OTP anfordern"
#: src/components/login/forgot-pass-form.tsx
msgid "Reset Password"
msgstr "Passwort zurücksetzen"
@@ -888,10 +939,6 @@ msgstr "Gesendet"
msgid "Set percentage thresholds for meter colors."
msgstr "Prozentuale Schwellenwerte für Zählerfarben festlegen."
#: src/components/routes/settings/general.tsx
msgid "Sets the default time range for charts when a system is viewed."
msgstr "Legt den Standardzeitraum für Diagramme fest, wenn ein System angezeigt wird."
#: src/components/command-palette.tsx
#: src/components/command-palette.tsx
#: src/components/routes/settings/layout.tsx
@@ -1002,6 +1049,10 @@ msgstr "Durchsatz von {extraFsName}"
msgid "Throughput of root filesystem"
msgstr "Durchsatz des Root-Dateisystems"
#: src/components/routes/settings/general.tsx
msgid "Time format"
msgstr "Zeitformat"
#: src/components/routes/settings/notifications.tsx
msgid "To email(s)"
msgstr "An E-Mail(s)"
@@ -1034,6 +1085,14 @@ msgstr "Tokens ermöglichen es Agents, sich zu verbinden und zu registrieren. Fi
msgid "Tokens and fingerprints are used to authenticate WebSocket connections to the hub."
msgstr "Tokens und Fingerabdrücke werden verwendet, um WebSocket-Verbindungen zum Hub zu authentifizieren."
#: src/components/routes/system/network-sheet.tsx
msgid "Total data received for each interface"
msgstr "Gesamtdatenmenge für jede Schnittstelle empfangen"
#: src/components/routes/system/network-sheet.tsx
msgid "Total data sent for each interface"
msgstr "Gesamtdatenmenge für jede Schnittstelle gesendet"
#: src/lib/alerts.ts
msgid "Triggers when 1 minute load average exceeds a threshold"
msgstr "Löst aus, wenn der Lastdurchschnitt der letzten Minute einen Schwellenwert überschreitet"
@@ -1095,6 +1154,10 @@ msgstr "aktiv"
msgid "Up ({upSystemsLength})"
msgstr "aktiv ({upSystemsLength})"
#: src/components/routes/system/network-sheet.tsx
msgid "Upload"
msgstr "Hochladen"
#: src/components/routes/system.tsx
msgid "Uptime"
msgstr "Betriebszeit"
@@ -1128,6 +1191,10 @@ msgstr "Wert"
msgid "View"
msgstr "Ansicht"
#: src/components/routes/system/network-sheet.tsx
msgid "View more"
msgstr "Mehr anzeigen"
#: src/components/routes/settings/alerts-history-data-table.tsx
msgid "View your 200 most recent alerts."
msgstr "Sieh dir die neusten 200 Alarme an."


@@ -43,6 +43,10 @@ msgstr "1 hour"
msgid "1 min"
msgstr "1 min"
#: src/lib/utils.ts
msgid "1 minute"
msgstr "1 minute"
#: src/lib/utils.ts
msgid "1 week"
msgstr "1 week"
@@ -168,6 +172,10 @@ msgstr "Average system-wide CPU utilization"
msgid "Average utilization of {0}"
msgstr "Average utilization of {0}"
#: src/components/routes/system.tsx
msgid "Average utilization of GPU engines"
msgstr "Average utilization of GPU engines"
#: src/components/command-palette.tsx
#: src/components/navbar.tsx
msgid "Backups"
@@ -358,6 +366,14 @@ msgstr "Created"
msgid "Critical (%)"
msgstr "Critical (%)"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Download"
msgstr "Cumulative Download"
#: src/components/routes/system/network-sheet.tsx
msgid "Cumulative Upload"
msgstr "Cumulative Upload"
#. Context: Battery state
#: src/components/routes/system.tsx
msgid "Current state"
@@ -436,6 +452,10 @@ msgstr "Down"
msgid "Down ({downSystemsLength})"
msgstr "Down ({downSystemsLength})"
#: src/components/routes/system/network-sheet.tsx
msgid "Download"
msgstr "Download"
#: src/components/alerts-history-columns.tsx
msgid "Duration"
msgstr "Duration"
@@ -447,6 +467,7 @@ msgstr "Edit"
#: src/components/login/auth-form.tsx
#: src/components/login/forgot-pass-form.tsx
#: src/components/login/otp-forms.tsx
msgid "Email"
msgstr "Email"
@@ -467,6 +488,10 @@ msgstr "Enter email address to reset password"
msgid "Enter email address..."
msgstr "Enter email address..."
#: src/components/login/otp-forms.tsx
msgid "Enter your one-time password."
msgstr "Enter your one-time password."
#: src/components/login/auth-form.tsx
#: src/components/routes/settings/alerts-history-data-table.tsx
#: src/components/routes/settings/config-yaml.tsx
@@ -537,6 +562,12 @@ msgstr "For <0>{min}</0> {min, plural, one {minute} other {minutes}}"
msgid "Forgot password?"
msgstr "Forgot password?"
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgctxt "Button to copy install command"
msgid "FreeBSD command"
msgstr "FreeBSD command"
#. Context: Battery state
#: src/lib/i18n.ts
msgid "Full"
@@ -548,6 +579,10 @@ msgstr "Full"
msgid "General"
msgstr "General"
#: src/components/routes/system.tsx
msgid "GPU Engines"
msgstr "GPU Engines"
#: src/components/routes/system.tsx
msgid "GPU Power Draw"
msgstr "GPU Power Draw"
@@ -640,6 +675,7 @@ msgid "Manage display and notification preferences."
msgstr "Manage display and notification preferences."
#: src/components/add-system.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
msgid "Manual setup instructions"
msgstr "Manual setup instructions"
@@ -675,6 +711,9 @@ msgid "Network traffic of docker containers"
msgstr "Network traffic of docker containers"
#: src/components/routes/system.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
#: src/components/routes/system/network-sheet.tsx
msgid "Network traffic of public interfaces"
msgstr "Network traffic of public interfaces"
@@ -710,6 +749,10 @@ msgstr "OAuth 2 / OIDC support"
msgid "On each restart, systems in the database will be updated to match the systems defined in the file."
msgstr "On each restart, systems in the database will be updated to match the systems defined in the file."
#: src/components/login/auth-form.tsx
msgid "One-time password"
msgstr "One-time password"
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/routes/settings/tokens-fingerprints.tsx
#: src/components/systems-table/systems-table-columns.tsx
@@ -828,6 +871,14 @@ msgstr "Read"
msgid "Received"
msgstr "Received"
#: src/components/login/login.tsx
msgid "Request a one-time password"
msgstr "Request a one-time password"
#: src/components/login/otp-forms.tsx
msgid "Request OTP"
msgstr "Request OTP"
#: src/components/login/forgot-pass-form.tsx
msgid "Reset Password"
msgstr "Reset Password"
@@ -883,10 +934,6 @@ msgstr "Sent"
msgid "Set percentage thresholds for meter colors."
msgstr "Set percentage thresholds for meter colors."
#: src/components/routes/settings/general.tsx
msgid "Sets the default time range for charts when a system is viewed."
msgstr "Sets the default time range for charts when a system is viewed."
#: src/components/command-palette.tsx
#: src/components/command-palette.tsx
#: src/components/routes/settings/layout.tsx
@@ -997,6 +1044,10 @@ msgstr "Throughput of {extraFsName}"
msgid "Throughput of root filesystem"
msgstr "Throughput of root filesystem"
#: src/components/routes/settings/general.tsx
msgid "Time format"
msgstr "Time format"
#: src/components/routes/settings/notifications.tsx
msgid "To email(s)"
msgstr "To email(s)"
@@ -1029,6 +1080,14 @@ msgstr "Tokens allow agents to connect and register. Fingerprints are stable ide
msgid "Tokens and fingerprints are used to authenticate WebSocket connections to the hub."
msgstr "Tokens and fingerprints are used to authenticate WebSocket connections to the hub."
#: src/components/routes/system/network-sheet.tsx
msgid "Total data received for each interface"
msgstr "Total data received for each interface"
#: src/components/routes/system/network-sheet.tsx
msgid "Total data sent for each interface"
msgstr "Total data sent for each interface"
#: src/lib/alerts.ts
msgid "Triggers when 1 minute load average exceeds a threshold"
msgstr "Triggers when 1 minute load average exceeds a threshold"
@@ -1090,6 +1149,10 @@ msgstr "Up"
msgid "Up ({upSystemsLength})"
msgstr "Up ({upSystemsLength})"
#: src/components/routes/system/network-sheet.tsx
msgid "Upload"
msgstr "Upload"
#: src/components/routes/system.tsx
msgid "Uptime"
msgstr "Uptime"
@@ -1123,6 +1186,10 @@ msgstr "Value"
msgid "View"
msgstr "View"
#: src/components/routes/system/network-sheet.tsx
msgid "View more"
msgstr "View more"
#: src/components/routes/settings/alerts-history-data-table.tsx
msgid "View your 200 most recent alerts."
msgstr "View your 200 most recent alerts."

Some files were not shown because too many files have changed in this diff.