Compare commits

...

171 Commits

Author SHA1 Message Date
CI
dee8d0dae8 chore: release version v1.87.0 2025-11-25 20:01:36 +00:00
Dmitry Popov
147dd5880e Merge pull request #559 from wanderer-industries/markdown-description
feat: Add support markdown for system description
2025-11-26 00:01:09 +04:00
DanSylvest
69991fff72 feat: Add support markdown for system description 2025-11-25 22:50:11 +03:00
CI
de4e1f859f chore: [skip ci] 2025-11-25 19:07:31 +00:00
CI
8e2a19540c chore: release version v1.86.1 2025-11-25 19:07:31 +00:00
Dmitry Popov
855c596672 Merge pull request #558 from wanderer-industries/show-passage-direction
fix(Map): Add ability to see character passage direction in list of p…
2025-11-25 23:06:45 +04:00
DanSylvest
36d3c0937b chore: Add ability to see character passage direction in list of passages - remove unnecessary log 2025-11-25 22:04:12 +03:00
CI
d8fb1f78cf chore: [skip ci] 2025-11-25 19:03:24 +00:00
CI
98fa7e0235 chore: release version v1.86.0 2025-11-25 19:03:24 +00:00
Dmitry Popov
e4396fe2f9 Merge pull request #557 from guarzo/guarzo/filteractivity
feat: add date filter for character activity
2025-11-25 23:02:58 +04:00
DanSylvest
1c117903f6 fix(Map): Add ability to see character passage direction in list of passages 2025-11-25 21:51:01 +03:00
Guarzo
88ed9cd39e feat: add date filter for character activity 2025-11-25 01:52:06 +00:00
CI
b7c0b45c15 chore: [skip ci] 2025-11-24 11:23:10 +00:00
CI
0874e3c51c chore: release version v1.85.5 2025-11-24 11:23:10 +00:00
Dmitry Popov
369b08a9ae fix(core): fixed connections cleanup and rally points delete issues 2025-11-24 12:22:40 +01:00
CI
01192dc637 chore: [skip ci] 2025-11-22 11:25:53 +00:00
CI
957cbcc561 chore: release version v1.85.4 2025-11-22 11:25:53 +00:00
Dmitry Popov
7eb6d093cf fix(core): invalidate map characters every 1 hour for any missing/revoked permissions 2025-11-22 12:25:24 +01:00
CI
a23e544a9f chore: [skip ci] 2025-11-22 09:42:11 +00:00
CI
845ea7a576 chore: release version v1.85.3 2025-11-22 09:42:11 +00:00
Dmitry Popov
ae8fbf30e4 fix(core): fixed connection time status issues. fixed character alliance update issues 2025-11-22 10:41:35 +01:00
CI
3de385c902 chore: [skip ci] 2025-11-20 10:57:05 +00:00
CI
5f3d4dba37 chore: release version v1.85.2 2025-11-20 10:57:05 +00:00
Dmitry Popov
8acc7ddc25 fix(core): increased API pool limits 2025-11-20 11:56:31 +01:00
CI
ed6d25f3ea chore: [skip ci] 2025-11-20 10:35:09 +00:00
CI
ab07d1321d chore: release version v1.85.1 2025-11-20 10:35:09 +00:00
Dmitry Popov
a81e61bd70 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-20 11:31:39 +01:00
Dmitry Popov
d2d33619c2 fix(core): increased API pool limits 2025-11-20 11:31:36 +01:00
CI
fa464110c6 chore: [skip ci] 2025-11-19 23:13:02 +00:00
CI
a5fa60e699 chore: release version v1.85.0 2025-11-19 23:13:02 +00:00
Dmitry Popov
6db994852f feat(core): added support for new ship types 2025-11-20 00:12:30 +01:00
CI
0a68676957 chore: [skip ci] 2025-11-19 21:06:28 +00:00
CI
9b82dd8f43 chore: release version v1.84.37 2025-11-19 21:06:28 +00:00
Dmitry Popov
aac2c33fd2 fix(auth): fixed character auth issues 2025-11-19 22:05:49 +01:00
CI
1665b65619 chore: [skip ci] 2025-11-19 10:33:10 +00:00
CI
e1a946bb1d chore: release version v1.84.36 2025-11-19 10:33:10 +00:00
Dmitry Popov
543ec7f071 fix: fixed duplicated map slugs 2025-11-19 11:32:35 +01:00
CI
bf40d2cb8d chore: [skip ci] 2025-11-19 09:44:24 +00:00
CI
48ac40ea55 chore: release version v1.84.35 2025-11-19 09:44:24 +00:00
Dmitry Popov
5a3f3c40fe Merge pull request #552 from guarzo/guarzo/structurefix
fix: structure search / paste issues
2025-11-19 13:43:52 +04:00
guarzo
d5bac311ff Merge branch 'main' into guarzo/structurefix 2025-11-18 22:24:30 -05:00
Guarzo
34a7c854ed fix: structure search / paste issues 2025-11-18 22:19:04 -05:00
CI
ebb6090be9 chore: [skip ci] 2025-11-18 11:47:15 +00:00
CI
7a4d31db60 chore: release version v1.84.34 2025-11-18 11:47:15 +00:00
Dmitry Popov
2acf9ed5dc Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-18 12:46:45 +01:00
Dmitry Popov
46df025200 fix(core): fixed character tracking issues 2025-11-18 12:46:42 +01:00
CI
43a363b5ab chore: [skip ci] 2025-11-18 11:00:34 +00:00
CI
03688387d8 chore: release version v1.84.33 2025-11-18 11:00:34 +00:00
Dmitry Popov
5060852918 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-18 12:00:04 +01:00
Dmitry Popov
57381b9782 fix(core): fixed character tracking issues 2025-11-18 12:00:01 +01:00
CI
6014c60e13 chore: [skip ci] 2025-11-18 10:08:04 +00:00
CI
1b711d7b4b chore: release version v1.84.32 2025-11-18 10:08:04 +00:00
Dmitry Popov
f761ba9746 fix(core): fixed character tracking issues 2025-11-18 11:04:32 +01:00
CI
20a795c5b5 chore: [skip ci] 2025-11-17 13:41:22 +00:00
CI
0c80894c65 chore: release version v1.84.31 2025-11-17 13:41:22 +00:00
Dmitry Popov
21844f0550 fix(core): fixed connactions validation logic 2025-11-17 14:40:46 +01:00
CI
f7716ca45a chore: [skip ci] 2025-11-17 12:38:04 +00:00
CI
de74714c77 chore: release version v1.84.30 2025-11-17 12:38:04 +00:00
Dmitry Popov
4dfa83bd30 chore: fixed character updates issue 2025-11-17 13:37:30 +01:00
CI
cb4dba8dc2 chore: [skip ci] 2025-11-17 12:09:39 +00:00
CI
1d75b8f063 chore: release version v1.84.29 2025-11-17 12:09:39 +00:00
Dmitry Popov
2a42c4e6df Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-17 13:09:08 +01:00
Dmitry Popov
0ee6160bcd chore: fixed MapEventRelay logs 2025-11-17 13:09:05 +01:00
CI
5826d2492b chore: [skip ci] 2025-11-17 11:53:30 +00:00
CI
a643e20247 chore: release version v1.84.28 2025-11-17 11:53:30 +00:00
Dmitry Popov
66dc680281 fix(core): fixed ACL updates 2025-11-17 12:52:59 +01:00
CI
46f46c745e chore: [skip ci] 2025-11-17 09:16:32 +00:00
CI
00bf620e35 chore: release version v1.84.27 2025-11-17 09:16:32 +00:00
Dmitry Popov
46eef60d86 chore: fixed warnings 2025-11-17 10:15:57 +01:00
Dmitry Popov
fe836442ab fix(core): supported characters_updates for external events 2025-11-17 00:08:08 +01:00
Dmitry Popov
9514806dbb fix(core): improved character tracking 2025-11-16 23:45:39 +01:00
Dmitry Popov
4e6423ebc8 fix(core): improved character tracking 2025-11-16 18:28:58 +01:00
Dmitry Popov
a97e598299 fix(core): improved character location tracking 2025-11-16 16:39:39 +01:00
CI
9c26b50aac chore: [skip ci] 2025-11-16 01:14:41 +00:00
CI
3f2ddf5cc4 chore: release version v1.84.26 2025-11-16 01:14:41 +00:00
Dmitry Popov
233b2bd7a4 fix(core): disable character tracker pausing 2025-11-16 02:14:05 +01:00
CI
0d35268efc chore: [skip ci] 2025-11-16 01:01:35 +00:00
CI
d169220eb2 chore: release version v1.84.25 2025-11-16 01:01:35 +00:00
Dmitry Popov
182d5ec9fb fix(core): used upsert for adding map systems 2025-11-16 02:00:59 +01:00
CI
32958253b7 chore: [skip ci] 2025-11-15 22:50:08 +00:00
CI
c011d56ce7 chore: release version v1.84.24 2025-11-15 22:50:08 +00:00
Dmitry Popov
73d1921d42 Merge pull request #549 from wanderer-industries/redesign-and-fixes
fix(Map): New design and prepared main pages for new patch
2025-11-16 02:49:36 +04:00
Dmitry Popov
7bb810e1e6 chore: update bg image url 2025-11-15 23:35:59 +01:00
CI
c90ac7b1e3 chore: [skip ci] 2025-11-15 22:17:28 +00:00
CI
005e0c2bc6 chore: release version v1.84.23 2025-11-15 22:17:28 +00:00
Dmitry Popov
808acb540e fix(core): fixed map pings cancel errors 2025-11-15 23:16:58 +01:00
DanSylvest
06626f910b fix(Map): Fixed problem related with error if settings was removed and mapper crashed. Fixed settings reset. 2025-11-15 21:30:45 +03:00
CI
812582d955 chore: [skip ci] 2025-11-15 11:38:00 +00:00
CI
f3077c0bf1 chore: release version v1.84.22 2025-11-15 11:38:00 +00:00
Dmitry Popov
32c70cbbad Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-15 12:37:31 +01:00
Dmitry Popov
8934935e10 fix(core): fixed map initialization 2025-11-15 12:37:27 +01:00
CI
20c8a53712 chore: [skip ci] 2025-11-15 08:48:30 +00:00
CI
b22970fef3 chore: release version v1.84.21 2025-11-15 08:48:30 +00:00
Dmitry Popov
cf72394ef9 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-15 09:47:53 +01:00
Dmitry Popov
e6dbba7283 fix(core): fixed map characters adding 2025-11-15 09:47:48 +01:00
CI
843b3b86b2 chore: [skip ci] 2025-11-15 07:29:25 +00:00
CI
bd865b9f64 chore: release version v1.84.20 2025-11-15 07:29:25 +00:00
Dmitry Popov
ae91cd2f92 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-15 08:25:59 +01:00
Dmitry Popov
0be7a5f9d0 fix(core): fixed map start issues 2025-11-15 08:25:55 +01:00
CI
e15bfa426a chore: [skip ci] 2025-11-14 19:28:51 +00:00
CI
4198e4b07a chore: release version v1.84.19 2025-11-14 19:28:51 +00:00
Dmitry Popov
03ee08ff67 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-14 20:28:16 +01:00
Dmitry Popov
ac4dd4c28b fix(core): fixed map start issues 2025-11-14 20:28:12 +01:00
CI
308e81a464 chore: [skip ci] 2025-11-14 18:36:20 +00:00
CI
6f4240d931 chore: release version v1.84.18 2025-11-14 18:36:20 +00:00
Dmitry Popov
847b45a431 fix(core): added gracefull map poll recovery from saved state. added map slug unique checks 2025-11-14 19:35:45 +01:00
CI
5ec97d74ca chore: [skip ci] 2025-11-14 13:43:40 +00:00
CI
74359a5542 chore: release version v1.84.17 2025-11-14 13:43:40 +00:00
Dmitry Popov
0020f46dd8 fix(core): fixed activity tracking issues 2025-11-14 14:42:44 +01:00
CI
a6751b45c6 chore: [skip ci] 2025-11-13 16:20:24 +00:00
CI
f48aeb5cec chore: release version v1.84.16 2025-11-13 16:20:24 +00:00
Dmitry Popov
a5f25646c9 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-13 17:19:47 +01:00
Dmitry Popov
23cf1fd96f fix(core): removed maps auto-start logic 2025-11-13 17:19:44 +01:00
CI
6f15521069 chore: [skip ci] 2025-11-13 14:49:32 +00:00
CI
9d41e57c06 chore: release version v1.84.15 2025-11-13 14:49:32 +00:00
Dmitry Popov
ea9a22df09 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-13 15:49:01 +01:00
Dmitry Popov
0d4fd6f214 fix(core): fixed maps start/stop logic, added server downtime period support 2025-11-13 15:48:56 +01:00
CI
87a6c20545 chore: [skip ci] 2025-11-13 14:46:26 +00:00
CI
c375f4e4ce chore: release version v1.84.14 2025-11-13 14:46:26 +00:00
Dmitry Popov
843a6d7320 Merge pull request #543 from wanderer-industries/fix-error-on-remove-settings
fix(Map): Fixed problem related with error if settings was removed an…
2025-11-13 18:43:13 +04:00
DanSylvest
98c54a3413 fix(Map): Fixed problem related with error if settings was removed and mapper crashed. Fixed settings reset. 2025-11-13 12:53:40 +03:00
CI
0439110938 chore: [skip ci] 2025-11-13 07:52:33 +00:00
CI
8ce1e5fa3e chore: release version v1.84.13 2025-11-13 07:52:33 +00:00
Dmitry Popov
ebaf6bcdc6 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-13 08:52:00 +01:00
Dmitry Popov
40d947bebc chore: updated RELEASE_NODE for server defaults 2025-11-13 08:51:56 +01:00
CI
61d1c3848f chore: [skip ci] 2025-11-13 07:39:29 +00:00
CI
e152ce179f chore: release version v1.84.12 2025-11-13 07:39:29 +00:00
Dmitry Popov
7bbe387183 chore: reduce garbage collection interval 2025-11-13 08:38:52 +01:00
CI
b1555ff03c chore: [skip ci] 2025-11-12 18:53:48 +00:00
CI
e624499244 chore: release version v1.84.11 2025-11-12 18:53:48 +00:00
Dmitry Popov
6a1976dec6 Merge pull request #541 from guarzo/guarzo/apifun2
fix: api and doc updates
2025-11-12 22:53:17 +04:00
Guarzo
3db24c4344 fix: api and doc updates 2025-11-12 18:39:21 +00:00
CI
883c09f255 chore: [skip ci] 2025-11-12 17:28:54 +00:00
CI
ff24d80038 chore: release version v1.84.10 2025-11-12 17:28:54 +00:00
Dmitry Popov
63cbc9c0b9 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-12 18:28:20 +01:00
Dmitry Popov
8056972a27 fix(core): Fixed adding system on character dock 2025-11-12 18:28:16 +01:00
CI
1759d46740 chore: [skip ci] 2025-11-12 13:28:14 +00:00
CI
e4b7d2e45b chore: release version v1.84.9 2025-11-12 13:28:14 +00:00
Dmitry Popov
41573cbee3 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-12 14:27:43 +01:00
Dmitry Popov
24ffc20bb8 chore: added ccp attribution to footer 2025-11-12 14:27:40 +01:00
CI
e077849b66 chore: [skip ci] 2025-11-12 12:42:09 +00:00
CI
375a9ef65b chore: release version v1.84.8 2025-11-12 12:42:08 +00:00
Dmitry Popov
9bf90ab752 fix(core): added cleanup jobs for old system signatures & chain passages 2025-11-12 13:41:33 +01:00
CI
90c3481151 chore: [skip ci] 2025-11-12 10:57:58 +00:00
CI
e36b08a7e5 chore: release version v1.84.7 2025-11-12 10:57:58 +00:00
Dmitry Popov
e1f79170c3 Merge pull request #540 from guarzo/guarzo/apifun
fix: api and search fixes
2025-11-12 14:54:33 +04:00
Guarzo
68b5455e91 bug fix 2025-11-12 07:25:49 +00:00
Guarzo
f28e75c7f4 pr updates 2025-11-12 07:16:21 +00:00
Guarzo
6091adb28e fix: api and structure search fixes 2025-11-12 07:07:39 +00:00
CI
d4657b335f chore: [skip ci] 2025-11-12 00:13:07 +00:00
CI
7fee850902 chore: release version v1.84.6 2025-11-12 00:13:07 +00:00
Dmitry Popov
648c168a66 fix(core): Added map slug uniqness checking while using API 2025-11-12 01:12:13 +01:00
CI
f5c4b2c407 chore: [skip ci] 2025-11-11 12:52:39 +00:00
CI
b592223d52 chore: release version v1.84.5 2025-11-11 12:52:39 +00:00
Dmitry Popov
5cf118c6ee Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-11 13:52:11 +01:00
Dmitry Popov
b25013c652 fix(core): Added tracking for map & character event handling errors 2025-11-11 13:52:07 +01:00
CI
cf43861b11 chore: [skip ci] 2025-11-11 12:27:54 +00:00
CI
b5fe8f8878 chore: release version v1.84.4 2025-11-11 12:27:54 +00:00
Dmitry Popov
5e5068c7de fix(core): fixed issue with updating system signatures 2025-11-11 13:27:17 +01:00
CI
624b51edfb chore: [skip ci] 2025-11-11 09:52:29 +00:00
CI
a72f8e60c4 chore: release version v1.84.3 2025-11-11 09:52:29 +00:00
Dmitry Popov
dec8ae50c9 Merge branch 'develop' 2025-11-11 10:51:55 +01:00
Dmitry Popov
0332d36a8e fix(core): fixed linked signature time status update
2025-11-11 10:51:43 +01:00
CI
8444c7f82d chore: [skip ci] 2025-11-10 16:57:53 +00:00
CI
ec3fc7447e chore: release version v1.84.2 2025-11-10 16:57:53 +00:00
Dmitry Popov
20ec2800c9 Merge pull request #538 from wanderer-industries/develop
Develop
2025-11-10 20:56:53 +04:00
Dmitry Popov
6fbf43e860 fix(api): fixed api for get/update map systems
2025-11-10 17:23:44 +01:00
Dmitry Popov
697da38020 Merge pull request #537 from guarzo/guarzo/apisystemperf
fix: add indexes for map/system
2025-11-09 01:48:01 +04:00
Guarzo
4bc65b43d2 fix: add index for map/systems api 2025-11-08 14:30:19 +00:00
Dmitry Popov
910ec97fd1 chore: refactored map server processes
2025-11-06 09:23:19 +01:00
Dmitry Popov
40ed58ee8c Merge pull request #536 from wanderer-industries/refactor-map-servers
Refactor map servers
2025-11-06 03:03:57 +04:00
169 changed files with 10152 additions and 2743 deletions

View File

@@ -1,5 +1,7 @@
export WEB_APP_URL="http://localhost:8000"
export RELEASE_COOKIE="PDpbnyo6mEI_0T4ZsHH_ESmi1vT1toQ8PTc0vbfg5FIT4Ih-Lh98mw=="
# Erlang node name for distributed Erlang (optional - defaults to wanderer@hostname)
# export RELEASE_NODE="wanderer@localhost"
export EVE_CLIENT_ID="<EVE_CLIENT_ID>"
export EVE_CLIENT_SECRET="<EVE_CLIENT_SECRET>"
export EVE_CLIENT_WITH_WALLET_ID="<EVE_CLIENT_WITH_WALLET_ID>"

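The added `RELEASE_NODE` lines document an optional Erlang node name for distributed Erlang, defaulting to `wanderer@hostname` when unset. A minimal shell sketch of that defaulting behavior (the `:-` expansion is an illustrative assumption about how a release script might apply it; only the variable name and the default value come from the file):

```shell
#!/usr/bin/env sh
# Sketch: fall back to "wanderer@<hostname>" when RELEASE_NODE is not set.
# The ":-" default expansion is an assumption, not the repo's actual script.
RELEASE_NODE="${RELEASE_NODE:-wanderer@$(hostname)}"
echo "node name: $RELEASE_NODE"
```

An explicitly exported `RELEASE_NODE` (e.g. `wanderer@localhost`, as in the commented example) takes precedence over the computed default.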
View File

@@ -1,9 +1,9 @@
name: Build Docker Image
name: Build Develop
on:
push:
tags:
- '**'
branches:
- develop
env:
MIX_ENV: prod
@@ -18,12 +18,85 @@ permissions:
contents: write
jobs:
build:
name: 🛠 Build
runs-on: ubuntu-22.04
if: ${{ github.ref == 'refs/heads/develop' && github.event_name == 'push' }}
permissions:
checks: write
contents: write
packages: write
attestations: write
id-token: write
pull-requests: write
repository-projects: write
strategy:
matrix:
otp: ["27"]
elixir: ["1.17"]
node-version: ["18.x"]
outputs:
commit_hash: ${{ steps.set-commit-develop.outputs.commit_hash }}
steps:
- name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
- name: Setup Elixir
uses: erlef/setup-beam@v1
with:
otp-version: ${{matrix.otp}}
elixir-version: ${{matrix.elixir}}
# nix build would also work here because `todos` is the default package
- name: ⬇️ Checkout repo
uses: actions/checkout@v3
with:
ssh-key: "${{ secrets.COMMIT_KEY }}"
fetch-depth: 0
- name: 😅 Cache deps
id: cache-deps
uses: actions/cache@v4
env:
cache-name: cache-elixir-deps
with:
path: |
deps
key: ${{ runner.os }}-mix-${{ matrix.elixir }}-${{ matrix.otp }}-${{ hashFiles('**/mix.lock') }}
restore-keys: |
${{ runner.os }}-mix-${{ matrix.elixir }}-${{ matrix.otp }}-
- name: 😅 Cache compiled build
id: cache-build
uses: actions/cache@v4
env:
cache-name: cache-compiled-build
with:
path: |
_build
key: ${{ runner.os }}-build-${{ hashFiles('**/mix.lock') }}-${{ hashFiles( '**/lib/**/*.{ex,eex}', '**/config/*.exs', '**/mix.exs' ) }}
restore-keys: |
${{ runner.os }}-build-${{ hashFiles('**/mix.lock') }}-
${{ runner.os }}-build-
# Step: Download project dependencies. If unchanged, uses
# the cached version.
- name: 🌐 Install dependencies
run: mix deps.get --only "prod"
# Step: Compile the project treating any warnings as errors.
# Customize this step if a different behavior is desired.
- name: 🛠 Compiles without warnings
if: steps.cache-build.outputs.cache-hit != 'true'
run: mix compile
- name: Set commit hash for develop
id: set-commit-develop
run: |
echo "commit_hash=$(git rev-parse HEAD)" >> $GITHUB_OUTPUT
docker:
name: 🛠 Build Docker Images
needs: build
runs-on: ubuntu-22.04
outputs:
release-tag: ${{ steps.get-latest-tag.outputs.tag }}
release-notes: ${{ steps.get-content.outputs.string }}
permissions:
checks: write
contents: write
@@ -37,6 +110,7 @@ jobs:
matrix:
platform:
- linux/amd64
- linux/arm64
steps:
- name: Prepare
run: |
@@ -46,25 +120,9 @@ jobs:
- name: ⬇️ Checkout repo
uses: actions/checkout@v3
with:
ref: ${{ needs.build.outputs.commit_hash }}
fetch-depth: 0
- name: Get Release Tag
id: get-latest-tag
uses: "WyriHaximus/github-action-get-previous-tag@v1"
with:
fallback: 1.0.0
- name: ⬇️ Checkout repo
uses: actions/checkout@v3
with:
ref: ${{ steps.get-latest-tag.outputs.tag }}
fetch-depth: 0
- name: Prepare Changelog
run: |
yes | cp -rf CHANGELOG.md priv/changelog/CHANGELOG.md
sed -i '1i%{title: "Change Log"}\n\n---\n' priv/changelog/CHANGELOG.md
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
@@ -113,24 +171,6 @@ jobs:
if-no-files-found: error
retention-days: 1
- uses: markpatterson27/markdown-to-output@v1
id: extract-changelog
with:
filepath: CHANGELOG.md
- name: Get content
uses: 2428392/gh-truncate-string-action@v1.3.0
id: get-content
with:
stringToTruncate: |
📣 Wanderer new release available 🎉
**Version**: ${{ steps.get-latest-tag.outputs.tag }}
${{ steps.extract-changelog.outputs.body }}
maxLength: 500
truncationSymbol: "…"
merge:
runs-on: ubuntu-latest
needs:
@@ -161,9 +201,8 @@ jobs:
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{version}},value=${{ needs.docker.outputs.release-tag }}
type=raw,value=develop,enable=${{ github.ref == 'refs/heads/develop' }}
type=raw,value=develop-{{sha}},enable=${{ github.ref == 'refs/heads/develop' }}
- name: Create manifest list and push
working-directory: /tmp/digests
@@ -176,12 +215,20 @@ jobs:
docker buildx imagetools inspect ${{ env.REGISTRY_IMAGE }}:${{ steps.meta.outputs.version }}
notify:
name: 🏷 Notify about release
name: 🏷 Notify about develop release
runs-on: ubuntu-22.04
needs: [docker, merge]
steps:
- name: Discord Webhook Action
uses: tsickert/discord-webhook@v5.3.0
with:
webhook-url: ${{ secrets.DISCORD_WEBHOOK_URL }}
content: ${{ needs.docker.outputs.release-notes }}
webhook-url: ${{ secrets.DISCORD_WEBHOOK_URL_DEV }}
content: |
📣 New develop release available 🚀
**Commit**: `${{ github.sha }}`
**Status**: Development/Testing Release
Docker image: `wandererltd/community-edition:develop`
⚠️ This is an unstable development release for testing purposes.

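The Prepare step in the workflow above derives `PLATFORM_PAIR` from the matrix platform with bash pattern substitution, replacing every `/` with `-` so the value can be embedded in names like `digests-linux-arm64`. A standalone sketch:

```shell
#!/usr/bin/env bash
# Replicates the workflow's Prepare step: "linux/arm64" -> "linux-arm64".
# ${var//pat/rep} is bash's global substitution (every "/" becomes "-").
platform="linux/arm64"
PLATFORM_PAIR="${platform//\//-}"
echo "$PLATFORM_PAIR"   # prints "linux-arm64"
```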
View File

@@ -4,7 +4,6 @@ on:
push:
branches:
- main
- develop
env:
MIX_ENV: prod
@@ -22,7 +21,7 @@ jobs:
build:
name: 🛠 Build
runs-on: ubuntu-22.04
if: ${{ (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/develop') && github.event_name == 'push' }}
if: ${{ github.ref == 'refs/heads/main' && github.event_name == 'push' }}
permissions:
checks: write
contents: write
@@ -37,7 +36,7 @@ jobs:
elixir: ["1.17"]
node-version: ["18.x"]
outputs:
commit_hash: ${{ steps.generate-changelog.outputs.commit_hash || steps.set-commit-develop.outputs.commit_hash }}
commit_hash: ${{ steps.generate-changelog.outputs.commit_hash }}
steps:
- name: Prepare
run: |
@@ -91,7 +90,6 @@ jobs:
- name: Generate Changelog & Update Tag Version
id: generate-changelog
if: github.ref == 'refs/heads/main'
run: |
git config --global user.name 'CI'
git config --global user.email 'ci@users.noreply.github.com'
@@ -102,15 +100,16 @@ jobs:
- name: Set commit hash for develop
id: set-commit-develop
if: github.ref == 'refs/heads/develop'
run: |
echo "commit_hash=$(git rev-parse HEAD)" >> $GITHUB_OUTPUT
docker:
name: 🛠 Build Docker Images
if: github.ref == 'refs/heads/develop'
needs: build
runs-on: ubuntu-22.04
outputs:
release-tag: ${{ steps.get-latest-tag.outputs.tag }}
release-notes: ${{ steps.get-content.outputs.string }}
permissions:
checks: write
contents: write
@@ -137,6 +136,17 @@ jobs:
ref: ${{ needs.build.outputs.commit_hash }}
fetch-depth: 0
- name: Get Release Tag
id: get-latest-tag
uses: "WyriHaximus/github-action-get-previous-tag@v1"
with:
fallback: 1.0.0
- name: Prepare Changelog
run: |
yes | cp -rf CHANGELOG.md priv/changelog/CHANGELOG.md
sed -i '1i%{title: "Change Log"}\n\n---\n' priv/changelog/CHANGELOG.md
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
@@ -185,6 +195,24 @@ jobs:
if-no-files-found: error
retention-days: 1
- uses: markpatterson27/markdown-to-output@v1
id: extract-changelog
with:
filepath: CHANGELOG.md
- name: Get content
uses: 2428392/gh-truncate-string-action@v1.3.0
id: get-content
with:
stringToTruncate: |
📣 Wanderer new release available 🎉
**Version**: ${{ steps.get-latest-tag.outputs.tag }}
${{ steps.extract-changelog.outputs.body }}
maxLength: 500
truncationSymbol: "…"
merge:
runs-on: ubuntu-latest
needs:
@@ -215,8 +243,9 @@ jobs:
tags: |
type=ref,event=branch
type=ref,event=pr
type=raw,value=develop,enable=${{ github.ref == 'refs/heads/develop' }}
type=raw,value=develop-{{sha}},enable=${{ github.ref == 'refs/heads/develop' }}
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{version}},value=${{ needs.docker.outputs.release-tag }}
- name: Create manifest list and push
working-directory: /tmp/digests
@@ -259,3 +288,14 @@ jobs:
## How to Promote?
In order to promote this to prod, edit the draft and press **"Publish release"**.
draft: true
notify:
name: 🏷 Notify about release
runs-on: ubuntu-22.04
needs: [docker, merge]
steps:
- name: Discord Webhook Action
uses: tsickert/discord-webhook@v5.3.0
with:
webhook-url: ${{ secrets.DISCORD_WEBHOOK_URL }}
content: ${{ needs.docker.outputs.release-notes }}

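The Prepare Changelog step moved into this workflow copies `CHANGELOG.md` into `priv/changelog/` and prepends a title header with GNU sed's `1i` command. A self-contained sketch against a temporary file (the file path here is illustrative, not the repo's):

```shell
#!/usr/bin/env sh
# Replicates the workflow's changelog preparation on a temp file.
# GNU sed's "1i" inserts text before line 1; "\n" escapes become newlines.
f="$(mktemp)"
printf '## v1.0.0\n' > "$f"
sed -i '1i%{title: "Change Log"}\n\n---\n' "$f"
head -n 3 "$f"   # title line, blank line, "---"
```

Note that the `\n` handling in `1i` text is a GNU sed behavior; the workflow can rely on it because the jobs run on `ubuntu-22.04`.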
View File

@@ -1,187 +0,0 @@
name: Build Docker ARM Image
on:
push:
tags:
- '**'
env:
MIX_ENV: prod
GH_TOKEN: ${{ github.token }}
REGISTRY_IMAGE: wandererltd/community-edition-arm
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: write
jobs:
docker:
name: 🛠 Build Docker Images
runs-on: ubuntu-22.04
outputs:
release-tag: ${{ steps.get-latest-tag.outputs.tag }}
release-notes: ${{ steps.get-content.outputs.string }}
permissions:
checks: write
contents: write
packages: write
attestations: write
id-token: write
pull-requests: write
repository-projects: write
strategy:
fail-fast: false
matrix:
platform:
- linux/arm64
steps:
- name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
- name: ⬇️ Checkout repo
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Get Release Tag
id: get-latest-tag
uses: "WyriHaximus/github-action-get-previous-tag@v1"
with:
fallback: 1.0.0
- name: ⬇️ Checkout repo
uses: actions/checkout@v3
with:
ref: ${{ steps.get-latest-tag.outputs.tag }}
fetch-depth: 0
- name: Prepare Changelog
run: |
yes | cp -rf CHANGELOG.md priv/changelog/CHANGELOG.md
sed -i '1i%{title: "Change Log"}\n\n---\n' priv/changelog/CHANGELOG.md
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY_IMAGE }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.WANDERER_DOCKER_USER }}
password: ${{ secrets.WANDERER_DOCKER_PASSWORD }}
- name: Build and push
id: build
uses: docker/build-push-action@v6
with:
push: true
context: .
file: ./Dockerfile
cache-from: type=gha
cache-to: type=gha,mode=max
labels: ${{ steps.meta.outputs.labels }}
platforms: ${{ matrix.platform }}
outputs: type=image,"name=${{ env.REGISTRY_IMAGE }}",push-by-digest=true,name-canonical=true,push=true
build-args: |
MIX_ENV=prod
BUILD_METADATA=${{ steps.meta.outputs.json }}
- name: Export digest
run: |
mkdir -p /tmp/digests
digest="${{ steps.build.outputs.digest }}"
touch "/tmp/digests/${digest#sha256:}"
- name: Upload digest
uses: actions/upload-artifact@v4
with:
name: digests-${{ env.PLATFORM_PAIR }}
path: /tmp/digests/*
if-no-files-found: error
retention-days: 1
- uses: markpatterson27/markdown-to-output@v1
id: extract-changelog
with:
filepath: CHANGELOG.md
- name: Get content
uses: 2428392/gh-truncate-string-action@v1.3.0
id: get-content
with:
stringToTruncate: |
📣 Wanderer **ARM** release available 🎉
**Version**: :${{ steps.get-latest-tag.outputs.tag }}
${{ steps.extract-changelog.outputs.body }}
maxLength: 500
truncationSymbol: "…"
merge:
runs-on: ubuntu-latest
needs:
- docker
steps:
- name: Download digests
uses: actions/download-artifact@v4
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.WANDERER_DOCKER_USER }}
password: ${{ secrets.WANDERER_DOCKER_PASSWORD }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.REGISTRY_IMAGE }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{version}},value=${{ needs.docker.outputs.release-tag }}
- name: Create manifest list and push
working-directory: /tmp/digests
run: |
docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf '${{ env.REGISTRY_IMAGE }}@sha256:%s ' *)
- name: Inspect image
run: |
docker buildx imagetools inspect ${{ env.REGISTRY_IMAGE }}:${{ steps.meta.outputs.version }}
notify:
name: 🏷 Notify about release
runs-on: ubuntu-22.04
needs: [docker, merge]
steps:
- name: Discord Webhook Action
uses: tsickert/discord-webhook@v5.3.0
with:
webhook-url: ${{ secrets.DISCORD_WEBHOOK_URL }}
content: ${{ needs.docker.outputs.release-notes }}

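The Export digest step in this removed ARM workflow (and in the consolidated workflows that replace it) strips the `sha256:` prefix with POSIX prefix removal before using the bare digest as a file name under `/tmp/digests`. Sketch, with a made-up digest value:

```shell
#!/usr/bin/env sh
# Replicates the "Export digest" step: drop the "sha256:" prefix so the
# digest can serve as a plain file name. "#" removes the shortest
# matching prefix from the variable's value.
digest="sha256:0123abcd"
bare="${digest#sha256:}"
echo "$bare"   # prints "0123abcd"
```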
View File

@@ -2,6 +2,399 @@
<!-- changelog -->
## [v1.87.0](https://github.com/wanderer-industries/wanderer/compare/v1.86.1...v1.87.0) (2025-11-25)
### Features:
* Add support markdown for system description
## [v1.86.1](https://github.com/wanderer-industries/wanderer/compare/v1.86.0...v1.86.1) (2025-11-25)
### Bug Fixes:
* Map: Add ability to see character passage direction in list of passages
## [v1.86.0](https://github.com/wanderer-industries/wanderer/compare/v1.85.5...v1.86.0) (2025-11-25)
### Features:
* add date filter for character activity
## [v1.85.5](https://github.com/wanderer-industries/wanderer/compare/v1.85.4...v1.85.5) (2025-11-24)
### Bug Fixes:
* core: fixed connections cleanup and rally points delete issues
## [v1.85.4](https://github.com/wanderer-industries/wanderer/compare/v1.85.3...v1.85.4) (2025-11-22)
### Bug Fixes:
* core: invalidate map characters every 1 hour for any missing/revoked permissions
## [v1.85.3](https://github.com/wanderer-industries/wanderer/compare/v1.85.2...v1.85.3) (2025-11-22)
### Bug Fixes:
* core: fixed connection time status issues. fixed character alliance update issues
## [v1.85.2](https://github.com/wanderer-industries/wanderer/compare/v1.85.1...v1.85.2) (2025-11-20)
### Bug Fixes:
* core: increased API pool limits
## [v1.85.1](https://github.com/wanderer-industries/wanderer/compare/v1.85.0...v1.85.1) (2025-11-20)
### Bug Fixes:
* core: increased API pool limits
## [v1.85.0](https://github.com/wanderer-industries/wanderer/compare/v1.84.37...v1.85.0) (2025-11-19)
### Features:
* core: added support for new ship types
## [v1.84.37](https://github.com/wanderer-industries/wanderer/compare/v1.84.36...v1.84.37) (2025-11-19)
### Bug Fixes:
* auth: fixed character auth issues
## [v1.84.36](https://github.com/wanderer-industries/wanderer/compare/v1.84.35...v1.84.36) (2025-11-19)
### Bug Fixes:
* fixed duplicated map slugs
## [v1.84.35](https://github.com/wanderer-industries/wanderer/compare/v1.84.34...v1.84.35) (2025-11-19)
### Bug Fixes:
* structure search / paste issues
## [v1.84.34](https://github.com/wanderer-industries/wanderer/compare/v1.84.33...v1.84.34) (2025-11-18)
### Bug Fixes:
* core: fixed character tracking issues
## [v1.84.33](https://github.com/wanderer-industries/wanderer/compare/v1.84.32...v1.84.33) (2025-11-18)
### Bug Fixes:
* core: fixed character tracking issues
## [v1.84.32](https://github.com/wanderer-industries/wanderer/compare/v1.84.31...v1.84.32) (2025-11-18)
### Bug Fixes:
* core: fixed character tracking issues
## [v1.84.31](https://github.com/wanderer-industries/wanderer/compare/v1.84.30...v1.84.31) (2025-11-17)
### Bug Fixes:
* core: fixed connections validation logic
## [v1.84.30](https://github.com/wanderer-industries/wanderer/compare/v1.84.29...v1.84.30) (2025-11-17)
## [v1.84.29](https://github.com/wanderer-industries/wanderer/compare/v1.84.28...v1.84.29) (2025-11-17)
## [v1.84.28](https://github.com/wanderer-industries/wanderer/compare/v1.84.27...v1.84.28) (2025-11-17)
### Bug Fixes:
* core: fixed ACL updates
## [v1.84.27](https://github.com/wanderer-industries/wanderer/compare/v1.84.26...v1.84.27) (2025-11-17)
### Bug Fixes:
* core: supported characters_updates for external events
* core: improved character tracking
* core: improved character tracking
* core: improved character location tracking
## [v1.84.26](https://github.com/wanderer-industries/wanderer/compare/v1.84.25...v1.84.26) (2025-11-16)
### Bug Fixes:
* core: disable character tracker pausing
## [v1.84.25](https://github.com/wanderer-industries/wanderer/compare/v1.84.24...v1.84.25) (2025-11-16)
### Bug Fixes:
* core: used upsert for adding map systems
## [v1.84.24](https://github.com/wanderer-industries/wanderer/compare/v1.84.23...v1.84.24) (2025-11-15)
### Bug Fixes:
* Map: Fixed a crash that occurred when settings were removed; fixed settings reset.
## [v1.84.23](https://github.com/wanderer-industries/wanderer/compare/v1.84.22...v1.84.23) (2025-11-15)
### Bug Fixes:
* core: fixed map pings cancel errors
## [v1.84.22](https://github.com/wanderer-industries/wanderer/compare/v1.84.21...v1.84.22) (2025-11-15)
### Bug Fixes:
* core: fixed map initialization
## [v1.84.21](https://github.com/wanderer-industries/wanderer/compare/v1.84.20...v1.84.21) (2025-11-15)
### Bug Fixes:
* core: fixed map characters adding
## [v1.84.20](https://github.com/wanderer-industries/wanderer/compare/v1.84.19...v1.84.20) (2025-11-15)
### Bug Fixes:
* core: fixed map start issues
## [v1.84.19](https://github.com/wanderer-industries/wanderer/compare/v1.84.18...v1.84.19) (2025-11-14)
### Bug Fixes:
* core: fixed map start issues
## [v1.84.18](https://github.com/wanderer-industries/wanderer/compare/v1.84.17...v1.84.18) (2025-11-14)
### Bug Fixes:
* core: added graceful map poll recovery from saved state; added map slug uniqueness checks
## [v1.84.17](https://github.com/wanderer-industries/wanderer/compare/v1.84.16...v1.84.17) (2025-11-14)
### Bug Fixes:
* core: fixed activity tracking issues
## [v1.84.16](https://github.com/wanderer-industries/wanderer/compare/v1.84.15...v1.84.16) (2025-11-13)
### Bug Fixes:
* core: removed maps auto-start logic
## [v1.84.15](https://github.com/wanderer-industries/wanderer/compare/v1.84.14...v1.84.15) (2025-11-13)
### Bug Fixes:
* core: fixed maps start/stop logic, added server downtime period support
## [v1.84.14](https://github.com/wanderer-industries/wanderer/compare/v1.84.13...v1.84.14) (2025-11-13)
### Bug Fixes:
* Map: Fixed a crash that occurred when settings were removed; fixed settings reset.
## [v1.84.13](https://github.com/wanderer-industries/wanderer/compare/v1.84.12...v1.84.13) (2025-11-13)
## [v1.84.12](https://github.com/wanderer-industries/wanderer/compare/v1.84.11...v1.84.12) (2025-11-13)
## [v1.84.11](https://github.com/wanderer-industries/wanderer/compare/v1.84.10...v1.84.11) (2025-11-12)
### Bug Fixes:
* api and doc updates
## [v1.84.10](https://github.com/wanderer-industries/wanderer/compare/v1.84.9...v1.84.10) (2025-11-12)
### Bug Fixes:
* core: Fixed adding system on character dock
## [v1.84.9](https://github.com/wanderer-industries/wanderer/compare/v1.84.8...v1.84.9) (2025-11-12)
## [v1.84.8](https://github.com/wanderer-industries/wanderer/compare/v1.84.7...v1.84.8) (2025-11-12)
### Bug Fixes:
* core: added cleanup jobs for old system signatures & chain passages
## [v1.84.7](https://github.com/wanderer-industries/wanderer/compare/v1.84.6...v1.84.7) (2025-11-12)
### Bug Fixes:
* api and structure search fixes
## [v1.84.6](https://github.com/wanderer-industries/wanderer/compare/v1.84.5...v1.84.6) (2025-11-12)
### Bug Fixes:
* core: Added map slug uniqueness checking when using the API
## [v1.84.5](https://github.com/wanderer-industries/wanderer/compare/v1.84.4...v1.84.5) (2025-11-11)
### Bug Fixes:
* core: Added tracking for map & character event handling errors
## [v1.84.4](https://github.com/wanderer-industries/wanderer/compare/v1.84.3...v1.84.4) (2025-11-11)
### Bug Fixes:
* core: fixed issue with updating system signatures
## [v1.84.3](https://github.com/wanderer-industries/wanderer/compare/v1.84.2...v1.84.3) (2025-11-11)
### Bug Fixes:
* core: fixed linked signature time status update
## [v1.84.2](https://github.com/wanderer-industries/wanderer/compare/v1.84.1...v1.84.2) (2025-11-10)
### Bug Fixes:
* api: fixed api for get/update map systems
* add index for map/systems api
## [v1.84.1](https://github.com/wanderer-industries/wanderer/compare/v1.84.0...v1.84.1) (2025-11-01)

View File

@@ -30,7 +30,7 @@ format f:
mix format
test t:
mix test
MIX_ENV=test mix test
coverage cover co:
mix test --cover
@@ -45,4 +45,3 @@ versions v:
@cat .tool-versions
@cat Aptfile
@echo

View File

@@ -73,7 +73,9 @@ body > div:first-of-type {
}
.maps_bg {
background-image: url('../images/maps_bg.webp');
/* OLD image */
/* background-image: url('../images/maps_bg.webp'); */
background-image: url('https://wanderer-industries.github.io/wanderer-assets/images/eve-screen-catalyst-expansion-bg.jpg');
background-size: cover;
background-position: center;
width: 100%;

View File

@@ -51,20 +51,8 @@ export const Characters = ({ data }: CharactersProps) => {
['border-lime-600/70']: character.online,
},
)}
title={character.tracking_paused ? `${character.name} - Tracking Paused (click to resume)` : character.name}
title={character.name}
>
{character.tracking_paused && (
<>
<span
className={clsx(
'absolute flex flex-col p-[2px] top-[0px] left-[0px] w-[35px] h-[35px]',
'text-yellow-500 text-[9px] z-10 bg-gray-800/40',
'pi',
PrimeIcons.PAUSE,
)}
/>
</>
)}
{mainCharacterEveId === character.eve_id && (
<span
className={clsx(

View File

@@ -1,6 +1,6 @@
@use "sass:color";
@use '@/hooks/Mapper/components/map/styles/eve-common-variables';
@import '@/hooks/Mapper/components/map/styles/solar-system-node';
@use '@/hooks/Mapper/components/map/styles/solar-system-node' as v;
@keyframes move-stripes {
from {
@@ -26,8 +26,8 @@
background-color: var(--rf-node-bg-color, #202020) !important;
color: var(--rf-text-color, #ffffff);
box-shadow: 0 0 5px rgba($dark-bg, 0.5);
border: 1px solid color.adjust($pastel-blue, $lightness: -10%);
box-shadow: 0 0 5px rgba(v.$dark-bg, 0.5);
border: 1px solid color.adjust(v.$pastel-blue, $lightness: -10%);
border-radius: 5px;
position: relative;
z-index: 3;
@@ -99,7 +99,7 @@
}
&.selected {
border-color: $pastel-pink;
border-color: v.$pastel-pink;
box-shadow: 0 0 10px #9a1af1c2;
}
@@ -113,11 +113,11 @@
bottom: 0;
z-index: -1;
border-color: $neon-color-1;
border-color: v.$neon-color-1;
background: repeating-linear-gradient(
45deg,
$neon-color-3 0px,
$neon-color-3 8px,
v.$neon-color-3 0px,
v.$neon-color-3 8px,
transparent 8px,
transparent 21px
);
@@ -146,7 +146,7 @@
border: 1px solid var(--eve-solar-system-status-color-lookingFor-dark15);
background-image: linear-gradient(275deg, #45ff8f2f, #457fff2f);
&.selected {
border-color: $pastel-pink;
border-color: v.$pastel-pink;
}
}
@@ -347,13 +347,13 @@
.Handle {
min-width: initial;
min-height: initial;
border: 1px solid $pastel-blue;
border: 1px solid v.$pastel-blue;
width: 5px;
height: 5px;
pointer-events: auto;
&.selected {
border-color: $pastel-pink;
border-color: v.$pastel-pink;
}
&.HandleTop {

View File

@@ -14,8 +14,27 @@ export const useCommandsCharacters = () => {
const ref = useRef({ update });
ref.current = { update };
const charactersUpdated = useCallback((characters: CommandCharactersUpdated) => {
ref.current.update(() => ({ characters: characters.slice() }));
const charactersUpdated = useCallback((updatedCharacters: CommandCharactersUpdated) => {
ref.current.update(state => {
const existing = state.characters ?? [];
// Put updatedCharacters into a map keyed by ID
const updatedMap = new Map(updatedCharacters.map(c => [c.eve_id, c]));
// 1. Update existing characters when possible
const merged = existing.map(character => {
const updated = updatedMap.get(character.eve_id);
if (updated) {
updatedMap.delete(character.eve_id); // Mark as processed
return { ...character, ...updated };
}
return character;
});
// 2. Any remaining items in updatedMap are NEW characters → add them
const newCharacters = Array.from(updatedMap.values());
return { characters: [...merged, ...newCharacters] };
});
}, []);
const characterAdded = useCallback((value: CommandCharacterAdded) => {
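The merge strategy in `charactersUpdated` above can be exercised in isolation. Below is a minimal standalone sketch of the same merge-by-`eve_id` behavior, using a simplified hypothetical `Char` type (the real `CommandCharactersUpdated` payload carries more fields):

```typescript
type Char = { eve_id: string; name: string; online?: boolean };

// Merge an update batch into the existing list: entries already present are
// patched in place (preserving fields the update omits), and any update not
// matching an existing eve_id is appended as a new character.
function mergeCharacters(existing: Char[], updates: Char[]): Char[] {
  const updatedMap = new Map(updates.map(c => [c.eve_id, c]));
  const merged = existing.map(character => {
    const updated = updatedMap.get(character.eve_id);
    if (updated) {
      updatedMap.delete(character.eve_id); // mark as processed
      return { ...character, ...updated };
    }
    return character;
  });
  // whatever remains in the map was not in the existing list → new characters
  return [...merged, ...Array.from(updatedMap.values())];
}
```

Note the design choice this replaces: the old handler overwrote the whole list (`characters.slice()`), so a partial update batch dropped characters the server didn't mention; the merge keeps them.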

View File

@@ -4,10 +4,13 @@ import { DEFAULT_WIDGETS } from '@/hooks/Mapper/components/mapInterface/constant
import { useMapRootState } from '@/hooks/Mapper/mapRootProvider';
export const MapInterface = () => {
// const [items, setItems] = useState<WindowProps[]>(restoreWindowsFromLS);
const { windowsSettings, updateWidgetSettings } = useMapRootState();
const items = useMemo(() => {
if (Object.keys(windowsSettings).length === 0) {
return [];
}
return windowsSettings.windows
.map(x => {
const content = DEFAULT_WIDGETS.find(y => y.id === x.id)?.content;

View File

@@ -1,4 +1,3 @@
import classes from './MarkdownComment.module.scss';
import clsx from 'clsx';
import {
InfoDrawer,
@@ -49,7 +48,11 @@ export const MarkdownComment = ({ text, time, characterEveId, id }: MarkdownComm
<>
<InfoDrawer
labelClassName="mb-[3px]"
className={clsx(classes.MarkdownCommentRoot, 'p-1 bg-stone-700/20 ')}
className={clsx(
'p-1 bg-stone-700/20',
'text-[12px] leading-[1.2] text-stone-300 break-words',
'bg-gradient-to-r from-stone-600/40 via-stone-600/10 to-stone-600/0',
)}
onMouseEnter={handleMouseEnter}
onMouseLeave={handleMouseLeave}
title={

View File

@@ -0,0 +1,9 @@
.CERoot {
@apply border border-stone-400/30 rounded-[2px];
:global {
.cm-content {
@apply bg-stone-600/40;
}
}
}

View File

@@ -6,6 +6,7 @@ import { useHotkey } from '@/hooks/Mapper/hooks';
import { useCallback, useRef, useState } from 'react';
import { OutCommand } from '@/hooks/Mapper/types';
import { useMapRootState } from '@/hooks/Mapper/mapRootProvider';
import classes from './CommentsEditor.module.scss';
export interface CommentsEditorProps {}
@@ -48,6 +49,7 @@ export const CommentsEditor = ({}: CommentsEditorProps) => {
return (
<MarkdownEditor
className={classes.CERoot}
value={textVal}
onChange={setTextVal}
overlayContent={

View File

@@ -1,9 +1,9 @@
.CERoot {
@apply border border-stone-400/30 rounded-[2px];
@apply border border-stone-500/30 rounded-[2px];
:global {
.cm-content {
@apply bg-stone-600/40;
@apply bg-stone-950/70;
}
.cm-scroller {

View File

@@ -44,9 +44,17 @@ export interface MarkdownEditorProps {
overlayContent?: ReactNode;
value: string;
onChange: (value: string) => void;
height?: string;
className?: string;
}
export const MarkdownEditor = ({ value, onChange, overlayContent }: MarkdownEditorProps) => {
export const MarkdownEditor = ({
value,
onChange,
overlayContent,
height = '70px',
className,
}: MarkdownEditorProps) => {
const [hasShift, setHasShift] = useState(false);
const refData = useRef({ onChange });
@@ -66,9 +74,9 @@ export const MarkdownEditor = ({ value, onChange, overlayContent }: MarkdownEdit
<div className={clsx(classes.MarkdownEditor, 'relative')}>
<CodeMirror
value={value}
height="70px"
height={height}
extensions={CODE_MIRROR_EXTENSIONS}
className={classes.CERoot}
className={clsx(classes.CERoot, className)}
theme={oneDark}
onChange={handleOnChange}
placeholder="Start typing..."

View File

@@ -8,8 +8,8 @@ import { LabelsManager } from '@/hooks/Mapper/utils/labelsManager.ts';
import { Dialog } from 'primereact/dialog';
import { IconField } from 'primereact/iconfield';
import { InputText } from 'primereact/inputtext';
import { InputTextarea } from 'primereact/inputtextarea';
import { useCallback, useEffect, useRef, useState } from 'react';
import { MarkdownEditor } from '@/hooks/Mapper/components/mapInterface/components/MarkdownEditor';
interface SystemSettingsDialog {
systemId: string;
@@ -214,13 +214,9 @@ export const SystemSettingsDialog = ({ systemId, visible, setVisible }: SystemSe
<div className="flex flex-col gap-1">
<label htmlFor="username">Description</label>
<InputTextarea
autoResize
rows={5}
cols={30}
value={description}
onChange={e => setDescription(e.target.value)}
/>
<div className="h-[200px]">
<MarkdownEditor value={description} onChange={e => setDescription(e)} height="180px" />
</div>
</div>
</div>

View File

@@ -2,7 +2,7 @@ import { useMapRootState } from '@/hooks/Mapper/mapRootProvider';
import { isWormholeSpace } from '@/hooks/Mapper/components/map/helpers/isWormholeSpace.ts';
import { useMemo } from 'react';
import { getSystemById, sortWHClasses } from '@/hooks/Mapper/helpers';
import { InfoDrawer, WHClassView, WHEffectView } from '@/hooks/Mapper/components/ui-kit';
import { InfoDrawer, MarkdownTextViewer, WHClassView, WHEffectView } from '@/hooks/Mapper/components/ui-kit';
import { getSystemStaticInfo } from '@/hooks/Mapper/mapRootProvider/hooks/useLoadSystemStatic';
interface SystemInfoContentProps {
@@ -51,7 +51,7 @@ export const SystemInfoContent = ({ systemId }: SystemInfoContentProps) => {
</div>
}
>
<div className="break-words">{description}</div>
<MarkdownTextViewer>{description}</MarkdownTextViewer>
</InfoDrawer>
)}
</div>

View File

@@ -30,10 +30,14 @@ export const SystemStructures: React.FC = () => {
const processClipboard = useCallback(
(text: string) => {
if (!systemId) {
console.warn('Cannot update structures: no system selected');
return;
}
const updated = processSnippetText(text, structures);
handleUpdateStructures(updated);
},
[structures, handleUpdateStructures],
[systemId, structures, handleUpdateStructures],
);
const handlePaste = useCallback(

View File

@@ -30,9 +30,6 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
const { outCommand } = useMapRootState();
const [prevQuery, setPrevQuery] = useState('');
const [prevResults, setPrevResults] = useState<{ label: string; value: string }[]>([]);
useEffect(() => {
if (structure) {
setEditData(structure);
@@ -46,34 +43,24 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
// Searching corporation owners via auto-complete
const searchOwners = useCallback(
async (e: { query: string }) => {
const newQuery = e.query.trim();
if (!newQuery) {
const query = e.query.trim();
if (!query) {
setOwnerSuggestions([]);
return;
}
// If user typed more text but we have partial match in prevResults
if (newQuery.startsWith(prevQuery) && prevResults.length > 0) {
const filtered = prevResults.filter(item => item.label.toLowerCase().includes(newQuery.toLowerCase()));
setOwnerSuggestions(filtered);
return;
}
try {
// TODO fix it
const { results = [] } = await outCommand({
type: OutCommand.getCorporationNames,
data: { search: newQuery },
data: { search: query },
});
setOwnerSuggestions(results);
setPrevQuery(newQuery);
setPrevResults(results);
} catch (err) {
console.error('Failed to fetch owners:', err);
setOwnerSuggestions([]);
}
},
[prevQuery, prevResults, outCommand],
[outCommand],
);
const handleChange = (field: keyof StructureItem, val: string | Date) => {
@@ -122,7 +109,6 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
// fetch corporation ticker if we have an ownerId
if (editData.ownerId) {
try {
// TODO fix it
const { ticker } = await outCommand({
type: OutCommand.getCorporationTicker,
data: { corp_id: editData.ownerId },

View File

@@ -56,6 +56,11 @@ export function useSystemStructures({ systemId, outCommand }: UseSystemStructure
const handleUpdateStructures = useCallback(
async (newList: StructureItem[]) => {
if (!systemId) {
console.warn('Cannot update structures: systemId is undefined');
return;
}
const { added, updated, removed } = getActualStructures(structures, newList);
const sanitizedAdded = added.map(sanitizeIds);

View File

@@ -1,4 +1,7 @@
import { Dialog } from 'primereact/dialog';
import { Menu } from 'primereact/menu';
import { MenuItem } from 'primereact/menuitem';
import { useState, useCallback, useRef, useMemo } from 'react';
import { CharacterActivityContent } from '@/hooks/Mapper/components/mapRootContent/components/CharacterActivity/CharacterActivityContent.tsx';
interface CharacterActivityProps {
@@ -6,17 +9,69 @@ interface CharacterActivityProps {
onHide: () => void;
}
const periodOptions = [
{ value: 30, label: '30 Days' },
{ value: 365, label: '1 Year' },
{ value: null, label: 'All Time' },
];
export const CharacterActivity = ({ visible, onHide }: CharacterActivityProps) => {
const [selectedPeriod, setSelectedPeriod] = useState<number | null>(30);
const menuRef = useRef<Menu>(null);
const handlePeriodChange = useCallback((days: number | null) => {
setSelectedPeriod(days);
}, []);
const menuItems: MenuItem[] = useMemo(
() => [
{
label: 'Period',
items: periodOptions.map(option => ({
label: option.label,
icon: selectedPeriod === option.value ? 'pi pi-check' : undefined,
command: () => handlePeriodChange(option.value),
})),
},
],
[selectedPeriod, handlePeriodChange],
);
const selectedPeriodLabel = useMemo(
() => periodOptions.find(opt => opt.value === selectedPeriod)?.label || 'All Time',
[selectedPeriod],
);
const headerIcons = (
<>
<button
type="button"
className="p-dialog-header-icon p-link"
onClick={e => menuRef.current?.toggle(e)}
aria-label="Filter options"
>
<span className="pi pi-bars" />
</button>
<Menu model={menuItems} popup ref={menuRef} />
</>
);
return (
<Dialog
header="Character Activity"
header={
<div className="flex items-center gap-2">
<span>Character Activity</span>
<span className="text-xs text-stone-400">({selectedPeriodLabel})</span>
</div>
}
visible={visible}
className="w-[550px] max-h-[90vh]"
onHide={onHide}
dismissableMask
contentClassName="p-0 h-full flex flex-col"
icons={headerIcons}
>
<CharacterActivityContent />
<CharacterActivityContent selectedPeriod={selectedPeriod} />
</Dialog>
);
};

View File

@@ -7,16 +7,28 @@ import {
} from '@/hooks/Mapper/components/mapRootContent/components/CharacterActivity/helpers.tsx';
import { Column } from 'primereact/column';
import { useMapRootState } from '@/hooks/Mapper/mapRootProvider';
import { useMemo } from 'react';
import { useMemo, useEffect } from 'react';
import { useCharacterActivityHandlers } from '@/hooks/Mapper/components/mapRootContent/hooks/useCharacterActivityHandlers';
export const CharacterActivityContent = () => {
interface CharacterActivityContentProps {
selectedPeriod: number | null;
}
export const CharacterActivityContent = ({ selectedPeriod }: CharacterActivityContentProps) => {
const {
data: { characterActivityData },
} = useMapRootState();
const { handleShowActivity } = useCharacterActivityHandlers();
const activity = useMemo(() => characterActivityData?.activity || [], [characterActivityData]);
const loading = useMemo(() => characterActivityData?.loading !== false, [characterActivityData]);
// Reload activity data when period changes
useEffect(() => {
handleShowActivity(selectedPeriod);
}, [selectedPeriod, handleShowActivity]);
if (loading) {
return (
<div className="flex flex-col items-center justify-center h-full w-full">

View File

@@ -3,7 +3,7 @@
}
.SidebarOnTheMap {
width: 400px;
width: 460px;
padding: 0 !important;
:global {

View File

@@ -5,6 +5,7 @@ import {
ConnectionType,
OutCommand,
Passage,
PassageWithSourceTarget,
SolarSystemConnection,
} from '@/hooks/Mapper/types';
import clsx from 'clsx';
@@ -19,7 +20,7 @@ import { PassageCard } from './PassageCard';
const sortByDate = (a: string, b: string) => new Date(a).getTime() - new Date(b).getTime();
const itemTemplate = (item: Passage, options: VirtualScrollerTemplateOptions) => {
const itemTemplate = (item: PassageWithSourceTarget, options: VirtualScrollerTemplateOptions) => {
return (
<div
className={clsx(classes.CharacterRow, 'w-full box-border', {
@@ -35,7 +36,7 @@ const itemTemplate = (item: Passage, options: VirtualScrollerTemplateOptions) =>
};
export interface ConnectionPassagesContentProps {
passages: Passage[];
passages: PassageWithSourceTarget[];
}
export const ConnectionPassages = ({ passages = [] }: ConnectionPassagesContentProps) => {
@@ -113,6 +114,20 @@ export const Connections = ({ selectedConnection, onHide }: OnTheMapProps) => {
[outCommand],
);
const preparedPassages = useMemo(() => {
if (!cnInfo) {
return [];
}
return passages
.sort((a, b) => sortByDate(b.inserted_at, a.inserted_at))
.map<PassageWithSourceTarget>(x => ({
...x,
source: x.from ? cnInfo.target : cnInfo.source,
target: x.from ? cnInfo.source : cnInfo.target,
}));
}, [cnInfo, passages]);
useEffect(() => {
if (!selectedConnection) {
return;
@@ -145,12 +160,14 @@ export const Connections = ({ selectedConnection, onHide }: OnTheMapProps) => {
<InfoDrawer title="Connection" rightSide>
<div className="flex justify-end gap-2 items-center">
<SystemView
showCustomName
systemId={cnInfo.source}
className={clsx(classes.InfoTextSize, 'select-none text-center')}
hideRegion
/>
<span className="pi pi-angle-double-right text-stone-500 text-[15px]"></span>
<SystemView
showCustomName
systemId={cnInfo.target}
className={clsx(classes.InfoTextSize, 'select-none text-center')}
hideRegion
@@ -184,7 +201,7 @@ export const Connections = ({ selectedConnection, onHide }: OnTheMapProps) => {
{/* separator */}
<div className="w-full h-px bg-neutral-800 px-0.5"></div>
<ConnectionPassages passages={passages} />
<ConnectionPassages passages={preparedPassages} />
</div>
</Sidebar>
);
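The `preparedPassages` mapping above (a passage's displayed source/target swap depending on its `from` flag) reduces to a small pure function. A sketch under that assumption, with a hypothetical helper name and a minimal connection-info shape:

```typescript
type ConnInfo = { source: string; target: string };

// A passage with from === true travelled against the connection's stored
// direction, so its displayed endpoints are the connection's, swapped.
function passageEndpoints(cn: ConnInfo, from: boolean): ConnInfo {
  return from
    ? { source: cn.target, target: cn.source }
    : { source: cn.source, target: cn.target };
}
```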

View File

@@ -35,6 +35,10 @@
&.ThreeColumns {
grid-template-columns: auto 1fr auto;
}
&.FourColumns {
grid-template-columns: auto auto 1fr auto;
}
}
.CardBorderLeftIsOwn {

View File

@@ -1,7 +1,7 @@
import clsx from 'clsx';
import classes from './PassageCard.module.scss';
import { Passage } from '@/hooks/Mapper/types';
import { TimeAgo } from '@/hooks/Mapper/components/ui-kit';
import { PassageWithSourceTarget } from '@/hooks/Mapper/types';
import { SystemView, TimeAgo, TooltipPosition } from '@/hooks/Mapper/components/ui-kit';
import { WdTooltipWrapper } from '@/hooks/Mapper/components/ui-kit/WdTooltipWrapper';
import { kgToTons } from '@/hooks/Mapper/utils/kgToTons.ts';
import { useMemo } from 'react';
@@ -11,7 +11,7 @@ type PassageCardType = {
showShipName?: boolean;
// showSystem?: boolean;
// useSystemsCache?: boolean;
} & Passage;
} & PassageWithSourceTarget;
const SHIP_NAME_RX = /u'|'/g;
export const getShipName = (name: string) => {
@@ -25,7 +25,7 @@ export const getShipName = (name: string) => {
});
};
export const PassageCard = ({ inserted_at, character: char, ship }: PassageCardType) => {
export const PassageCard = ({ inserted_at, character: char, ship, source, target, from }: PassageCardType) => {
const isOwn = false;
const insertedAt = useMemo(() => {
@@ -37,7 +37,39 @@ export const PassageCard = ({ inserted_at, character: char, ship }: PassageCardT
<div className={clsx(classes.CharacterCard, 'w-full text-xs', 'flex flex-col box-border')}>
<div className="flex flex-col justify-between px-2 py-1 gap-1">
{/*here icon and other*/}
<div className={clsx(classes.CharRow, classes.ThreeColumns)}>
<div className={clsx(classes.CharRow, classes.FourColumns)}>
<WdTooltipWrapper
position={TooltipPosition.top}
content={
<div className="flex justify-between gap-2 items-center">
<SystemView
showCustomName
systemId={source}
className="select-none text-center !text-[12px]"
hideRegion
/>
<span className="pi pi-angle-double-right text-stone-500 text-[15px]"></span>
<SystemView
showCustomName
systemId={target}
className="select-none text-center !text-[12px]"
hideRegion
/>
</div>
}
>
<div
className={clsx(
'transition-all transform ease-in duration-200',
'pi text-stone-500 text-[15px] w-[35px] h-[33px] !flex items-center justify-center border rounded-[6px]',
{
['pi-angle-double-right !text-orange-400 border-orange-400 hover:bg-orange-400/30']: from,
['pi-angle-double-left !text-stone-500/70 border-stone-500/70 hover:bg-stone-500/30']: !from,
},
)}
/>
</WdTooltipWrapper>
{/*portrait*/}
<span
className={clsx(classes.EveIcon, classes.CharIcon, 'wd-bg-default')}

View File

@@ -10,9 +10,14 @@ import { useCallback } from 'react';
import { TooltipPosition, WdButton, WdTooltipWrapper } from '@/hooks/Mapper/components/ui-kit';
import { ConfirmPopup } from 'primereact/confirmpopup';
import { useConfirmPopup } from '@/hooks/Mapper/hooks';
import { useMapRootState } from '@/hooks/Mapper/mapRootProvider';
export const CommonSettings = () => {
const { renderSettingItem } = useMapSettings();
const {
storedSettings: { resetSettings },
} = useMapRootState();
const { cfShow, cfHide, cfVisible, cfRef } = useConfirmPopup();
const renderSettingsList = useCallback(
@@ -22,7 +27,7 @@ export const CommonSettings = () => {
[renderSettingItem],
);
const handleResetSettings = () => {};
const handleResetSettings = useCallback(() => resetSettings(), [resetSettings]);
return (
<div className="flex flex-col h-full gap-1">

View File

@@ -1,6 +1,6 @@
@use "sass:color";
@use '@/hooks/Mapper/components/map/styles/eve-common-variables';
@import '@/hooks/Mapper/components/map/styles/solar-system-node';
@use '@/hooks/Mapper/components/map/styles/solar-system-node' as v;
:root {
--rf-has-user-characters: #ffc75d;
@@ -108,7 +108,7 @@
}
&.selected {
border-color: $pastel-pink;
border-color: v.$pastel-pink;
box-shadow: 0 0 10px #9a1af1c2;
}
@@ -122,11 +122,11 @@
bottom: 0;
z-index: -1;
border-color: $neon-color-1;
border-color: v.$neon-color-1;
background: repeating-linear-gradient(
45deg,
$neon-color-3 0px,
$neon-color-3 8px,
v.$neon-color-3 0px,
v.$neon-color-3 8px,
transparent 8px,
transparent 21px
);
@@ -152,7 +152,7 @@
&.eve-system-status-lookingFor {
background-image: linear-gradient(275deg, #45ff8f2f, #457fff2f);
&.selected {
border-color: $pastel-pink;
border-color: v.$pastel-pink;
}
}

View File

@@ -23,17 +23,17 @@ export const useCharacterActivityHandlers = () => {
/**
* Handle showing the character activity dialog
*/
const handleShowActivity = useCallback(() => {
const handleShowActivity = useCallback((days?: number | null) => {
// Update local state to show the dialog
update(state => ({
...state,
showCharacterActivity: true,
}));
// Send the command to the server
// Send the command to the server with optional days parameter
outCommand({
type: OutCommand.showActivity,
data: {},
data: days !== undefined ? { days } : {},
});
}, [outCommand, update]);
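The payload construction above is three-state: `undefined` means the caller set no filter (omit `days` entirely), `null` explicitly requests "All Time", and a number limits the window. A standalone sketch of just that branch (hypothetical `buildActivityPayload` name):

```typescript
// undefined → {} (no filter sent); null or a number → { days } is included.
function buildActivityPayload(days?: number | null): { days?: number | null } {
  return days !== undefined ? { days } : {};
}
```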

View File

@@ -1,8 +1,5 @@
.MarkdownCommentRoot {
border-left-width: 3px;
.MarkdownTextViewer {
@apply text-[12px] leading-[1.2] text-stone-300 break-words;
@apply bg-gradient-to-r from-stone-600/40 via-stone-600/10 to-stone-600/0;
.h1 {
@apply text-[12px] font-normal m-0 p-0 border-none break-words whitespace-normal;
@@ -56,6 +53,10 @@
@apply font-bold text-green-400 break-words whitespace-normal;
}
strong {
font-weight: bold;
}
i, em {
@apply italic text-pink-400 break-words whitespace-normal;
}

View File

@@ -2,10 +2,16 @@ import Markdown from 'react-markdown';
import remarkGfm from 'remark-gfm';
import remarkBreaks from 'remark-breaks';
import classes from './MarkdownTextViewer.module.scss';
const REMARK_PLUGINS = [remarkGfm, remarkBreaks];
type MarkdownTextViewerProps = { children: string };
export const MarkdownTextViewer = ({ children }: MarkdownTextViewerProps) => {
return <Markdown remarkPlugins={REMARK_PLUGINS}>{children}</Markdown>;
return (
<div className={classes.MarkdownTextViewer}>
<Markdown remarkPlugins={REMARK_PLUGINS}>{children}</Markdown>
</div>
);
};

View File

@@ -6,9 +6,11 @@ import {
MapUnionTypes,
OutCommandHandler,
SolarSystemConnection,
StringBoolean,
TrackingCharacter,
UseCharactersCacheData,
UseCommentsData,
UserPermission,
} from '@/hooks/Mapper/types';
import { useCharactersCache, useComments, useMapRootHandlers } from '@/hooks/Mapper/mapRootProvider/hooks';
import { WithChildren } from '@/hooks/Mapper/types/common.ts';
@@ -80,7 +82,16 @@ const INITIAL_DATA: MapRootData = {
selectedSystems: [],
selectedConnections: [],
userPermissions: {},
options: {},
options: {
allowed_copy_for: UserPermission.VIEW_SYSTEM,
allowed_paste_for: UserPermission.VIEW_SYSTEM,
layout: '',
restrict_offline_showing: 'false',
show_linked_signature_id: 'false',
show_linked_signature_id_temp_name: 'false',
show_temp_system_name: 'false',
store_custom_labels: 'false',
},
isSubscriptionActive: false,
linkSignatureToSystem: null,
mainCharacterEveId: null,
@@ -135,7 +146,7 @@ export interface MapRootContextProps {
hasOldSettings: boolean;
getSettingsForExport(): string | undefined;
applySettings(settings: MapUserSettings): boolean;
resetSettings(settings: MapUserSettings): void;
resetSettings(): void;
checkOldSettings(): void;
};
}

View File

@@ -1,5 +1,4 @@
import { useMapRootState } from '@/hooks/Mapper/mapRootProvider';
import { useCallback, useRef } from 'react';
import {
CommandCharacterAdded,
CommandCharacterRemoved,
@@ -7,6 +6,7 @@ import {
CommandCharacterUpdated,
CommandPresentCharacters,
} from '@/hooks/Mapper/types';
import { useCallback, useRef } from 'react';
export const useCommandsCharacters = () => {
const { update } = useMapRootState();
@@ -14,8 +14,27 @@ export const useCommandsCharacters = () => {
const ref = useRef({ update });
ref.current = { update };
const charactersUpdated = useCallback((characters: CommandCharactersUpdated) => {
ref.current.update(() => ({ characters: characters.slice() }));
const charactersUpdated = useCallback((updatedCharacters: CommandCharactersUpdated) => {
ref.current.update(state => {
const existing = state.characters ?? [];
// Put updatedCharacters into a map keyed by ID
const updatedMap = new Map(updatedCharacters.map(c => [c.eve_id, c]));
// 1. Update existing characters when possible
const merged = existing.map(character => {
const updated = updatedMap.get(character.eve_id);
if (updated) {
updatedMap.delete(character.eve_id); // Mark as processed
return { ...character, ...updated };
}
return character;
});
// 2. Any remaining items in updatedMap are NEW characters → add them
const newCharacters = Array.from(updatedMap.values());
return { characters: [...merged, ...newCharacters] };
});
}, []);
const characterAdded = useCallback((value: CommandCharacterAdded) => {

View File

@@ -148,10 +148,6 @@ export const useMapUserSettings = ({ map_slug }: MapRootData, outCommand: OutCom
setHasOldSettings(!!(widgetsOld || interfaceSettings || widgetRoutes || widgetLocal || widgetKills || onTheMapOld));
}, []);
useEffect(() => {
checkOldSettings();
}, [checkOldSettings]);
const getSettingsForExport = useCallback(() => {
const { map_slug } = ref.current;
@@ -166,6 +162,24 @@ export const useMapUserSettings = ({ map_slug }: MapRootData, outCommand: OutCom
applySettings(createDefaultStoredSettings());
}, [applySettings]);
useEffect(() => {
checkOldSettings();
}, [checkOldSettings]);
// In case someone clears the settings at runtime
useEffect(() => {
if (Object.keys(windowsSettings).length !== 0) {
return;
}
if (!isReady) {
return;
}
resetSettings();
location.reload();
}, [isReady, resetSettings, windowsSettings]);
return {
isReady,
hasOldSettings,

View File

@@ -33,7 +33,6 @@ export type CharacterTypeRaw = {
corporation_id: number;
corporation_name: string;
corporation_ticker: string;
tracking_paused: boolean;
};
export interface TrackingCharacter {
@@ -69,4 +68,5 @@ export interface ActivitySummary {
passages: number;
connections: number;
signatures: number;
timestamp?: string;
}

View File

@@ -6,11 +6,17 @@ export type PassageLimitedCharacterType = Pick<
>;
export type Passage = {
from: boolean;
inserted_at: string; // Date
ship: ShipTypeRaw;
character: PassageLimitedCharacterType;
};
export type PassageWithSourceTarget = {
source: string;
target: string;
} & Passage;
export type ConnectionInfoOutput = {
marl_eol_time: string;
};

View File

@@ -12,11 +12,11 @@ const animateBg = function (bgCanvas) {
*/
const randomInRange = (max, min) => Math.floor(Math.random() * (max - min + 1)) + min;
const BASE_SIZE = 1;
const VELOCITY_INC = 1.01;
const VELOCITY_INC = 1.002;
const VELOCITY_INIT_INC = 0.525;
const JUMP_VELOCITY_INC = 0.55;
const JUMP_SIZE_INC = 1.15;
const SIZE_INC = 1.01;
const SIZE_INC = 1.002;
const RAD = Math.PI / 180;
const WARP_COLORS = [
[197, 239, 247],

Binary file not shown.


View File

@@ -27,11 +27,7 @@ config :wanderer_app,
generators: [timestamp_type: :utc_datetime],
ddrt: WandererApp.Map.CacheRTree,
logger: Logger,
pubsub_client: Phoenix.PubSub,
wanderer_kills_base_url:
System.get_env("WANDERER_KILLS_BASE_URL", "ws://host.docker.internal:4004"),
wanderer_kills_service_enabled:
System.get_env("WANDERER_KILLS_SERVICE_ENABLED", "false") == "true"
pubsub_client: Phoenix.PubSub
config :wanderer_app, WandererAppWeb.Endpoint,
adapter: Bandit.PhoenixAdapter,

View File

@@ -4,7 +4,7 @@ import Config
config :wanderer_app, WandererApp.Repo,
username: "postgres",
password: "postgres",
hostname: System.get_env("DB_HOST", "localhost"),
hostname: "localhost",
database: "wanderer_dev",
stacktrace: true,
show_sensitive_data_on_connection_error: true,

View File

@@ -177,7 +177,34 @@ config :wanderer_app,
],
extra_characters_50: map_subscription_extra_characters_50_price,
extra_hubs_10: map_subscription_extra_hubs_10_price
}
},
# Finch pool configuration - separate pools for different services
# ESI Character Tracking pool - high capacity for bulk character operations
# With 30+ TrackerPools × ~100 concurrent tasks, need large pool
finch_esi_character_pool_size:
System.get_env("WANDERER_FINCH_ESI_CHARACTER_POOL_SIZE", "200") |> String.to_integer(),
finch_esi_character_pool_count:
System.get_env("WANDERER_FINCH_ESI_CHARACTER_POOL_COUNT", "4") |> String.to_integer(),
# ESI General pool - standard capacity for general ESI operations
finch_esi_general_pool_size:
System.get_env("WANDERER_FINCH_ESI_GENERAL_POOL_SIZE", "50") |> String.to_integer(),
finch_esi_general_pool_count:
System.get_env("WANDERER_FINCH_ESI_GENERAL_POOL_COUNT", "4") |> String.to_integer(),
# Webhooks pool - isolated from ESI rate limits
finch_webhooks_pool_size:
System.get_env("WANDERER_FINCH_WEBHOOKS_POOL_SIZE", "25") |> String.to_integer(),
finch_webhooks_pool_count:
System.get_env("WANDERER_FINCH_WEBHOOKS_POOL_COUNT", "2") |> String.to_integer(),
# Default pool - everything else (email, license manager, etc.)
finch_default_pool_size:
System.get_env("WANDERER_FINCH_DEFAULT_POOL_SIZE", "25") |> String.to_integer(),
finch_default_pool_count:
System.get_env("WANDERER_FINCH_DEFAULT_POOL_COUNT", "2") |> String.to_integer(),
# Character tracker concurrency settings
# Location updates need high concurrency for <2s response with 3000+ characters
location_concurrency:
System.get_env("WANDERER_LOCATION_CONCURRENCY", "#{System.schedulers_online() * 12}")
|> String.to_integer()
config :ueberauth, Ueberauth,
providers: [
@@ -258,7 +285,9 @@ config :wanderer_app, WandererApp.Scheduler,
timezone: :utc,
jobs:
[
{"@daily", {WandererApp.Map.Audit, :archive, []}}
{"@daily", {WandererApp.Map.Audit, :archive, []}},
{"@daily", {WandererApp.Map.GarbageCollector, :cleanup_chain_passages, []}},
{"@daily", {WandererApp.Map.GarbageCollector, :cleanup_system_signatures, []}}
] ++ sheduler_jobs,
timeout: :infinity

View File
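The new Finch pool settings above all follow the same `System.get_env(...) |> String.to_integer()` pattern: read an env var, fall back to a default, and parse it as an integer. A minimal sketch of that pattern (names and error text are illustrative; Elixir's `String.to_integer/1` raises on any non-integer input, which this approximates by rejecting `NaN`):

```typescript
// Read an integer setting from an env map, falling back to a default
// when the variable is unset, and failing loudly on a non-integer value.
function envInt(
  env: Record<string, string | undefined>,
  name: string,
  fallback: number
): number {
  const raw = env[name];
  if (raw === undefined) return fallback;
  const parsed = parseInt(raw, 10);
  if (Number.isNaN(parsed)) {
    throw new Error(`${name} must be an integer, got "${raw}"`);
  }
  return parsed;
}
```

Failing at boot on a malformed value matches the Elixir behavior, which is usually preferable to silently running with a wrong pool size.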

@@ -1,7 +1,25 @@
defmodule WandererApp.Api.Changes.SlugifyName do
@moduledoc """
Ensures map slugs are unique by:
1. Slugifying the provided slug/name
2. Checking for existing slugs (optimization)
3. Finding next available slug with numeric suffix if needed
4. Relying on database unique constraint as final arbiter
Race Condition Mitigation:
- Optimistic check reduces DB roundtrips for most cases
- Database unique index ensures no duplicates slip through
- Proper error messages for constraint violations
- Telemetry events for monitoring conflicts
"""
use Ash.Resource.Change
alias Ash.Changeset
require Ash.Query
require Logger
# Maximum number of attempts to find a unique slug
@max_attempts 100
@impl true
@spec change(Changeset.t(), keyword, Change.context()) :: Changeset.t()
@@ -12,10 +30,95 @@ defmodule WandererApp.Api.Changes.SlugifyName do
defp maybe_slugify_name(changeset) do
case Changeset.get_attribute(changeset, :slug) do
slug when is_binary(slug) ->
Changeset.force_change_attribute(changeset, :slug, Slug.slugify(slug))
base_slug = Slug.slugify(slug)
unique_slug = ensure_unique_slug(changeset, base_slug)
Changeset.force_change_attribute(changeset, :slug, unique_slug)
_ ->
changeset
end
end
defp ensure_unique_slug(changeset, base_slug) do
# Get the current record ID if this is an update operation
current_id = Changeset.get_attribute(changeset, :id)
# Check if the base slug is available (optimization to avoid numeric suffixes when possible)
if slug_available?(base_slug, current_id) do
base_slug
else
# Find the next available slug with a numeric suffix
find_available_slug(base_slug, current_id, 2)
end
end
defp find_available_slug(base_slug, current_id, n) when n <= @max_attempts do
candidate_slug = "#{base_slug}-#{n}"
if slug_available?(candidate_slug, current_id) do
# Emit telemetry when we had to use a suffix (indicates potential conflict)
:telemetry.execute(
[:wanderer_app, :map, :slug_suffix_used],
%{suffix_number: n},
%{base_slug: base_slug, final_slug: candidate_slug}
)
candidate_slug
else
find_available_slug(base_slug, current_id, n + 1)
end
end
defp find_available_slug(base_slug, _current_id, n) when n > @max_attempts do
# Fallback: use timestamp suffix if we've tried too many numeric suffixes
# This handles edge cases where many maps have similar names
timestamp = System.system_time(:millisecond)
fallback_slug = "#{base_slug}-#{timestamp}"
Logger.warning(
"Slug generation exceeded #{@max_attempts} attempts for '#{base_slug}', using timestamp fallback",
base_slug: base_slug,
fallback_slug: fallback_slug
)
:telemetry.execute(
[:wanderer_app, :map, :slug_fallback_used],
%{attempts: n},
%{base_slug: base_slug, fallback_slug: fallback_slug}
)
fallback_slug
end
defp slug_available?(slug, current_id) do
query =
WandererApp.Api.Map
|> Ash.Query.filter(slug == ^slug)
|> then(fn query ->
# Exclude the current record if this is an update
if current_id do
Ash.Query.filter(query, id != ^current_id)
else
query
end
end)
|> Ash.Query.limit(1)
case Ash.read(query) do
{:ok, []} ->
true
{:ok, _existing} ->
false
{:error, error} ->
# Log error but be conservative - assume slug is not available
Logger.warning("Error checking slug availability",
slug: slug,
error: inspect(error)
)
false
end
end
end

View File
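The suffixing strategy in `SlugifyName` above can be sketched with an in-memory set standing in for the database availability check (the slugify step, telemetry, and the update-exclusion logic are omitted; names are illustrative):

```typescript
const MAX_ATTEMPTS = 100;

// Find a unique slug: try the base first, then "-2", "-3", ... up to the
// attempt cap, then fall back to a timestamp suffix as a last resort.
function ensureUniqueSlug(base: string, taken: Set<string>): string {
  if (!taken.has(base)) return base;
  for (let n = 2; n <= MAX_ATTEMPTS; n++) {
    const candidate = `${base}-${n}`;
    if (!taken.has(candidate)) return candidate;
  }
  // Fallback: effectively-unique timestamp suffix after too many collisions
  return `${base}-${Date.now()}`;
}
```

In the real module the set lookup is a DB query, so this check is only an optimistic pre-check; the unique index remains the final arbiter under concurrent inserts.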

@@ -31,13 +31,13 @@ defmodule WandererApp.Api.Map do
routes do
base("/maps")
get(:by_slug, route: "/:slug")
index :read
# index :read
post(:new)
patch(:update)
delete(:destroy)
# Custom action for map duplication
post(:duplicate, route: "/:id/duplicate")
# post(:duplicate, route: "/:id/duplicate")
end
end

View File

@@ -9,6 +9,11 @@ defmodule WandererApp.Api.MapConnection do
postgres do
repo(WandererApp.Repo)
table("map_chain_v1")
custom_indexes do
# Critical index for list_connections query performance
index [:map_id], name: "map_chain_v1_map_id_index"
end
end
json_api do

View File

@@ -65,7 +65,7 @@ defmodule WandererApp.Api.MapSubscription do
defaults [:create, :read, :update, :destroy]
read :all_active do
prepare build(sort: [updated_at: :asc])
prepare build(sort: [updated_at: :asc], load: [:map])
filter(expr(status == :active))
end

View File

@@ -1,6 +1,26 @@
defmodule WandererApp.Api.MapSystem do
@moduledoc false
@derive {Jason.Encoder,
only: [
:id,
:map_id,
:name,
:solar_system_id,
:position_x,
:position_y,
:status,
:visible,
:locked,
:custom_name,
:description,
:tag,
:temporary_name,
:labels,
:added_at,
:linked_sig_eve_id
]}
use Ash.Resource,
domain: WandererApp.Api,
data_layer: AshPostgres.DataLayer,
@@ -9,6 +29,11 @@ defmodule WandererApp.Api.MapSystem do
postgres do
repo(WandererApp.Repo)
table("map_system_v1")
custom_indexes do
# Partial index for efficient visible systems query
index [:map_id], where: "visible = true", name: "map_system_v1_map_id_visible_index"
end
end
json_api do
@@ -42,6 +67,7 @@ defmodule WandererApp.Api.MapSystem do
code_interface do
define(:create, action: :create)
define(:upsert, action: :upsert)
define(:destroy, action: :destroy)
define(:by_id,
@@ -104,6 +130,31 @@ defmodule WandererApp.Api.MapSystem do
defaults [:create, :update, :destroy]
create :upsert do
primary? false
upsert? true
upsert_identity :map_solar_system_id
# Update these fields on conflict
upsert_fields [
:position_x,
:position_y,
:visible,
:name
]
accept [
:map_id,
:solar_system_id,
:name,
:position_x,
:position_y,
:visible,
:locked,
:status
]
end
read :read do
primary?(true)

View File
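The `upsert` action above only overwrites the columns listed in `upsert_fields` on conflict, leaving fields like `locked` and `status` untouched. That partial-update-on-conflict semantics can be sketched with a `Map` standing in for the table (types and names here are illustrative):

```typescript
type SystemRow = {
  map_id: string; solar_system_id: number;
  position_x: number; position_y: number; visible: boolean;
  locked: boolean; status: string; name: string;
};

// Fields replaced on conflict, mirroring upsert_fields in the resource
const UPSERT_FIELDS = ["position_x", "position_y", "visible", "name"] as const;

// Upsert keyed by (map_id, solar_system_id): insert when absent; on
// conflict merge only UPSERT_FIELDS over the existing row.
function upsertSystem(table: Map<string, SystemRow>, row: SystemRow): SystemRow {
  const key = `${row.map_id}:${row.solar_system_id}`;
  const existing = table.get(key);
  if (!existing) {
    table.set(key, row);
    return row;
  }
  const patch: Partial<SystemRow> = {};
  for (const f of UPSERT_FIELDS) (patch as Record<string, unknown>)[f] = row[f];
  const merged = { ...existing, ...patch };
  table.set(key, merged);
  return merged;
}
```

Keeping `locked` and `status` out of the conflict update means a re-discovered system snaps to its new position without clobbering operator-set state.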

@@ -16,15 +16,48 @@ defmodule WandererApp.Application do
WandererApp.Vault,
WandererApp.Repo,
{Phoenix.PubSub, name: WandererApp.PubSub, adapter_name: Phoenix.PubSub.PG2},
# Multiple Finch pools for different services to prevent connection pool exhaustion
# ESI Character Tracking pool - high capacity for bulk character operations
{
Finch,
name: WandererApp.Finch.ESI.CharacterTracking,
pools: %{
default: [
size: Application.get_env(:wanderer_app, :finch_esi_character_pool_size, 100),
count: Application.get_env(:wanderer_app, :finch_esi_character_pool_count, 4)
]
}
},
# ESI General pool - standard capacity for general ESI operations
{
Finch,
name: WandererApp.Finch.ESI.General,
pools: %{
default: [
size: Application.get_env(:wanderer_app, :finch_esi_general_pool_size, 50),
count: Application.get_env(:wanderer_app, :finch_esi_general_pool_count, 4)
]
}
},
# Webhooks pool - isolated from ESI rate limits
{
Finch,
name: WandererApp.Finch.Webhooks,
pools: %{
default: [
size: Application.get_env(:wanderer_app, :finch_webhooks_pool_size, 25),
count: Application.get_env(:wanderer_app, :finch_webhooks_pool_count, 2)
]
}
},
# Default pool - everything else (email, license manager, etc.)
{
Finch,
name: WandererApp.Finch,
pools: %{
default: [
# number of connections per pool
size: 50,
# number of pools (so total 50 connections)
count: 4
size: Application.get_env(:wanderer_app, :finch_default_pool_size, 25),
count: Application.get_env(:wanderer_app, :finch_default_pool_count, 2)
]
}
},

View File

@@ -73,6 +73,54 @@ defmodule WandererApp.Cache do
def filter_by_attr_in(type, attr, includes), do: type |> get() |> filter_in(attr, includes)
@doc """
Batch lookup multiple keys from cache.
Returns a map of key => value pairs, with `default` used for missing keys.
"""
def lookup_all(keys, default \\ nil) when is_list(keys) do
# Get all values from cache
values = get_all(keys)
# Build result map with defaults for missing keys
result =
keys
|> Enum.map(fn key ->
value = Map.get(values, key, default)
{key, value}
end)
|> Map.new()
{:ok, result}
end
@doc """
Batch insert multiple key-value pairs into cache.
Accepts a map of key => value pairs or a list of {key, value} tuples.
Skips nil values (deletes the key instead).
"""
def insert_all(entries, opts \\ [])
def insert_all(entries, opts) when is_map(entries) do
# Filter out nil values and delete those keys
{to_delete, to_insert} =
entries
|> Enum.split_with(fn {_key, value} -> is_nil(value) end)
# Delete keys with nil values
Enum.each(to_delete, fn {key, _} -> delete(key) end)
# Insert non-nil values
unless Enum.empty?(to_insert) do
put_all(to_insert, opts)
end
:ok
end
def insert_all(entries, opts) when is_list(entries) do
insert_all(Map.new(entries), opts)
end
defp find(list, %{} = attrs, match: match) do
list
|> Enum.find(fn item ->

View File
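The batch cache helpers above have two semantics worth noting: `lookup_all` fills missing keys with a default, and `insert_all` treats a nil value as a delete rather than storing it. A standalone sketch of both (a `Map` stands in for the cache; TTL options are omitted):

```typescript
// Batch lookup: every requested key appears in the result, with a
// fallback value for cache misses.
function lookupAll<V>(cache: Map<string, V>, keys: string[], fallback: V): Map<string, V> {
  return new Map(keys.map(k => [k, cache.has(k) ? cache.get(k)! : fallback]));
}

// Batch insert: null values delete the key instead of being stored,
// so a stale entry can never shadow an intentional clear.
function insertAll<V>(cache: Map<string, V>, entries: Record<string, V | null>): void {
  for (const [key, value] of Object.entries(entries)) {
    if (value === null) cache.delete(key);
    else cache.set(key, value);
  }
}
```

The delete-on-nil rule keeps a later `lookupAll` returning the default for that key, matching a cache that never held it.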

@@ -1,6 +1,8 @@
defmodule WandererApp.CachedInfo do
require Logger
alias WandererAppWeb.Helpers.APIUtils
def run(_arg) do
:ok = cache_trig_systems()
end
@@ -29,14 +31,71 @@ defmodule WandererApp.CachedInfo do
)
end)
Cachex.get(:ship_types_cache, type_id)
get_ship_type_from_cache_or_api(type_id)
{:ok, ship_type} ->
{:ok, ship_type}
end
end
defp get_ship_type_from_cache_or_api(type_id) do
case Cachex.get(:ship_types_cache, type_id) do
{:ok, ship_type} when not is_nil(ship_type) ->
{:ok, ship_type}
{:ok, nil} ->
case WandererApp.Esi.get_type_info(type_id) do
{:ok, info} when not is_nil(info) ->
ship_type = parse_type(type_id, info)
{:ok, group_info} = get_group_info(ship_type.group_id)
{:ok, ship_type_info} =
WandererApp.Api.ShipTypeInfo |> Ash.create(ship_type |> Map.merge(group_info))
{:ok,
ship_type_info
|> Map.take([
:type_id,
:group_id,
:group_name,
:name,
:description,
:mass,
:capacity,
:volume
])}
{:error, reason} ->
Logger.error("Failed to get ship_type #{type_id} from ESI: #{inspect(reason)}")
{:ok, nil}
error ->
Logger.error("Failed to get ship_type #{type_id} from ESI: #{inspect(error)}")
{:ok, nil}
end
end
end
def get_group_info(nil), do: {:ok, nil}
def get_group_info(group_id) do
case WandererApp.Esi.get_group_info(group_id) do
{:ok, info} when not is_nil(info) ->
{:ok, parse_group(group_id, info)}
{:error, reason} ->
Logger.error("Failed to get group_info #{group_id} from ESI: #{inspect(reason)}")
{:ok, %{group_name: ""}}
error ->
Logger.error("Failed to get group_info #{group_id} from ESI: #{inspect(error)}")
{:ok, %{group_name: ""}}
end
end
def get_system_static_info(solar_system_id) do
{:ok, solar_system_id} = APIUtils.parse_int(solar_system_id)
case Cachex.get(:system_static_info_cache, solar_system_id) do
{:ok, nil} ->
case WandererApp.Api.MapSolarSystem.read() do
@@ -149,6 +208,25 @@ defmodule WandererApp.CachedInfo do
end
end
defp parse_group(group_id, group) do
%{
group_id: group_id,
group_name: Map.get(group, "name")
}
end
defp parse_type(type_id, type) do
%{
type_id: type_id,
name: Map.get(type, "name"),
description: Map.get(type, "description"),
group_id: Map.get(type, "group_id"),
mass: "#{Map.get(type, "mass")}",
capacity: "#{Map.get(type, "capacity")}",
volume: "#{Map.get(type, "volume")}"
}
end
defp build_jump_index() do
case get_solar_system_jumps() do
{:ok, jumps} ->

View File

@@ -4,6 +4,8 @@ defmodule WandererApp.Character do
require Logger
alias WandererApp.Cache
@read_character_wallet_scope "esi-wallet.read_character_wallet.v1"
@read_corp_wallet_scope "esi-wallet.read_corporation_wallets.v1"
@@ -16,6 +18,9 @@ defmodule WandererApp.Character do
ship_item_id: nil
}
@present_on_map_ttl :timer.seconds(10)
@not_present_on_map_ttl :timer.minutes(2)
def get_by_eve_id(character_eve_id) when is_binary(character_eve_id) do
WandererApp.Api.Character.by_eve_id(character_eve_id)
end
@@ -41,7 +46,7 @@ defmodule WandererApp.Character do
def get_character!(character_id) do
case get_character(character_id) do
{:ok, character} ->
{:ok, character} when not is_nil(character) ->
character
_ ->
@@ -50,16 +55,10 @@ defmodule WandererApp.Character do
end
end
def get_map_character(map_id, character_id, opts \\ []) do
def get_map_character(map_id, character_id) do
case get_character(character_id) do
{:ok, character} ->
# If we are forcing the character to not be present, we merge the character state with map settings
character_is_present =
if opts |> Keyword.get(:not_present, false) do
false
else
WandererApp.Character.TrackerManager.Impl.character_is_present(map_id, character_id)
end
{:ok, character} when not is_nil(character) ->
character_is_present = character_is_present?(map_id, character_id)
{:ok,
character
@@ -187,6 +186,10 @@ defmodule WandererApp.Character do
{:ok, result} ->
{:ok, result |> prepare_search_results()}
{:error, error} ->
Logger.warning("#{__MODULE__} failed search: #{inspect(error)}")
{:ok, []}
error ->
Logger.warning("#{__MODULE__} failed search: #{inspect(error)}")
{:ok, []}
@@ -263,22 +266,26 @@ defmodule WandererApp.Character do
end
end
defp maybe_merge_map_character_settings(%{id: character_id} = character, _map_id, true) do
{:ok, tracking_paused} =
WandererApp.Cache.lookup("character:#{character_id}:tracking_paused", false)
@decorate cacheable(
cache: Cache,
key: "character-present-#{map_id}-#{character_id}",
opts: [ttl: @present_on_map_ttl]
)
defp character_is_present?(map_id, character_id),
do: WandererApp.Character.TrackerManager.Impl.character_is_present(map_id, character_id)
character
|> Map.merge(%{tracking_paused: tracking_paused})
end
defp maybe_merge_map_character_settings(character, _map_id, true), do: character
@decorate cacheable(
cache: Cache,
key: "not-present-map-character-#{map_id}-#{character_id}",
opts: [ttl: @not_present_on_map_ttl]
)
defp maybe_merge_map_character_settings(
%{id: character_id} = character,
map_id,
_character_is_present
false
) do
{:ok, tracking_paused} =
WandererApp.Cache.lookup("character:#{character_id}:tracking_paused", false)
WandererApp.MapCharacterSettingsRepo.get(map_id, character_id)
|> case do
{:ok, settings} when not is_nil(settings) ->
@@ -296,7 +303,7 @@ defmodule WandererApp.Character do
character
|> Map.merge(@default_character_tracking_data)
end
|> Map.merge(%{online: false, tracking_paused: tracking_paused})
|> Map.merge(%{online: false})
end
defp prepare_search_results(result) do
@@ -324,7 +331,7 @@ defmodule WandererApp.Character do
do:
{:ok,
Enum.map(eve_ids, fn eve_id ->
Task.async(fn -> apply(WandererApp.Esi.ApiClient, method, [eve_id]) end)
Task.async(fn -> apply(WandererApp.Esi, method, [eve_id]) end)
end)
# Await each task with a 145_000 ms timeout
|> Enum.map(fn task -> Task.await(task, 145_000) end)

View File

@@ -43,13 +43,14 @@ defmodule WandererApp.Character.Activity do
## Parameters
- `map_id`: ID of the map
- `current_user`: Current user struct (used only to get user settings)
- `days`: Optional number of days to filter activity (nil for all time)
## Returns
- List of processed activity data
"""
def process_character_activity(map_id, current_user) do
def process_character_activity(map_id, current_user, days \\ nil) do
with {:ok, map_user_settings} <- get_map_user_settings(map_id, current_user.id),
{:ok, raw_activity} <- WandererApp.Map.get_character_activity(map_id),
{:ok, raw_activity} <- WandererApp.Map.get_character_activity(map_id, days),
{:ok, user_characters} <-
WandererApp.Api.Character.active_by_user(%{user_id: current_user.id}) do
process_activity_data(raw_activity, map_user_settings, user_characters)

View File

@@ -14,8 +14,8 @@ defmodule WandererApp.Character.Tracker do
active_maps: [],
is_online: false,
track_online: true,
track_location: true,
track_ship: true,
track_location: false,
track_ship: false,
track_wallet: false,
status: "new"
]
@@ -36,14 +36,11 @@ defmodule WandererApp.Character.Tracker do
status: binary()
}
@pause_tracking_timeout :timer.minutes(60 * 10)
@offline_timeout :timer.minutes(5)
@online_error_timeout :timer.minutes(10)
@ship_error_timeout :timer.minutes(10)
@location_error_timeout :timer.minutes(10)
@location_error_timeout :timer.seconds(30)
@location_error_threshold 3
@online_forbidden_ttl :timer.seconds(7)
@offline_check_delay_ttl :timer.seconds(15)
@online_limit_ttl :timer.seconds(7)
@forbidden_ttl :timer.seconds(10)
@limit_ttl :timer.seconds(5)
@location_limit_ttl :timer.seconds(1)
@@ -93,81 +90,16 @@ defmodule WandererApp.Character.Tracker do
end
end
def check_online_errors(character_id),
do: check_tracking_errors(character_id, "online", @online_error_timeout)
def check_ship_errors(character_id),
do: check_tracking_errors(character_id, "ship", @ship_error_timeout)
def check_location_errors(character_id),
do: check_tracking_errors(character_id, "location", @location_error_timeout)
defp check_tracking_errors(character_id, type, timeout) do
WandererApp.Cache.lookup!("character:#{character_id}:#{type}_error_time")
|> case do
nil ->
:skip
error_time ->
duration = DateTime.diff(DateTime.utc_now(), error_time, :millisecond)
if duration >= timeout do
pause_tracking(character_id)
WandererApp.Cache.delete("character:#{character_id}:#{type}_error_time")
:ok
else
:skip
end
end
defp increment_location_error_count(character_id) do
cache_key = "character:#{character_id}:location_error_count"
current_count = WandererApp.Cache.lookup!(cache_key) || 0
new_count = current_count + 1
WandererApp.Cache.put(cache_key, new_count)
new_count
end
defp pause_tracking(character_id) do
if WandererApp.Character.can_pause_tracking?(character_id) &&
not WandererApp.Cache.has_key?("character:#{character_id}:tracking_paused") do
# Log character tracking statistics before pausing
Logger.debug(fn ->
{:ok, character_state} = WandererApp.Character.get_character_state(character_id)
"CHARACTER_TRACKING_PAUSED: Character tracking paused due to sustained errors: #{inspect(character_id: character_id,
active_maps: length(character_state.active_maps),
is_online: character_state.is_online,
tracking_duration_minutes: get_tracking_duration_minutes(character_id))}"
end)
WandererApp.Cache.delete("character:#{character_id}:online_forbidden")
WandererApp.Cache.delete("character:#{character_id}:online_error_time")
WandererApp.Cache.delete("character:#{character_id}:ship_error_time")
WandererApp.Cache.delete("character:#{character_id}:location_error_time")
WandererApp.Character.update_character(character_id, %{online: false})
WandererApp.Character.update_character_state(character_id, %{
is_online: false
})
# Original log kept for backward compatibility
Logger.warning("[CharacterTracker] paused for #{character_id}")
WandererApp.Cache.put(
"character:#{character_id}:tracking_paused",
true,
ttl: @pause_tracking_timeout
)
{:ok, %{solar_system_id: solar_system_id}} =
WandererApp.Character.get_character(character_id)
{:ok, %{active_maps: active_maps}} =
WandererApp.Character.get_character_state(character_id)
active_maps
|> Enum.each(fn map_id ->
WandererApp.Cache.put(
"map:#{map_id}:character:#{character_id}:start_solar_system_id",
solar_system_id
)
end)
end
defp reset_location_error_count(character_id) do
WandererApp.Cache.delete("character:#{character_id}:location_error_count")
end
def update_settings(character_id, track_settings) do
@@ -194,8 +126,7 @@ defmodule WandererApp.Character.Tracker do
case WandererApp.Character.get_character(character_id) do
{:ok, %{eve_id: eve_id, access_token: access_token, tracking_pool: tracking_pool}}
when not is_nil(access_token) ->
(WandererApp.Cache.has_key?("character:#{character_id}:online_forbidden") ||
WandererApp.Cache.has_key?("character:#{character_id}:tracking_paused"))
WandererApp.Cache.has_key?("character:#{character_id}:online_forbidden")
|> case do
true ->
{:error, :skipped}
@@ -224,9 +155,10 @@ defmodule WandererApp.Character.Tracker do
)
end
if online.online == true && online.online != is_online do
if online.online == true && not is_online do
WandererApp.Cache.delete("character:#{character_id}:ship_error_time")
WandererApp.Cache.delete("character:#{character_id}:location_error_time")
WandererApp.Cache.delete("character:#{character_id}:location_error_count")
WandererApp.Cache.delete("character:#{character_id}:info_forbidden")
WandererApp.Cache.delete("character:#{character_id}:ship_forbidden")
WandererApp.Cache.delete("character:#{character_id}:location_forbidden")
@@ -294,12 +226,6 @@ defmodule WandererApp.Character.Tracker do
{:error, :error_limited, headers} ->
reset_timeout = get_reset_timeout(headers)
reset_seconds =
Map.get(headers, "x-esi-error-limit-reset", ["unknown"]) |> List.first()
remaining =
Map.get(headers, "x-esi-error-limit-remain", ["unknown"]) |> List.first()
WandererApp.Cache.put(
"character:#{character_id}:online_forbidden",
true,
@@ -357,8 +283,7 @@ defmodule WandererApp.Character.Tracker do
defp get_reset_timeout(_headers, default_timeout), do: default_timeout
def update_info(character_id) do
(WandererApp.Cache.has_key?("character:#{character_id}:info_forbidden") ||
WandererApp.Cache.has_key?("character:#{character_id}:tracking_paused"))
WandererApp.Cache.has_key?("character:#{character_id}:info_forbidden")
|> case do
true ->
{:error, :skipped}
@@ -442,8 +367,7 @@ defmodule WandererApp.Character.Tracker do
{:ok, %{eve_id: eve_id, access_token: access_token, tracking_pool: tracking_pool}}
when not is_nil(access_token) ->
(WandererApp.Cache.has_key?("character:#{character_id}:online_forbidden") ||
WandererApp.Cache.has_key?("character:#{character_id}:ship_forbidden") ||
WandererApp.Cache.has_key?("character:#{character_id}:tracking_paused"))
WandererApp.Cache.has_key?("character:#{character_id}:ship_forbidden"))
|> case do
true ->
{:error, :skipped}
@@ -552,7 +476,7 @@ defmodule WandererApp.Character.Tracker do
case WandererApp.Character.get_character(character_id) do
{:ok, %{eve_id: eve_id, access_token: access_token, tracking_pool: tracking_pool}}
when not is_nil(access_token) ->
WandererApp.Cache.has_key?("character:#{character_id}:tracking_paused")
WandererApp.Cache.has_key?("character:#{character_id}:location_forbidden")
|> case do
true ->
{:error, :skipped}
@@ -565,19 +489,33 @@ defmodule WandererApp.Character.Tracker do
character_id: character_id
) do
{:ok, location} when is_map(location) and not is_struct(location) ->
reset_location_error_count(character_id)
WandererApp.Cache.delete("character:#{character_id}:location_error_time")
character_state
|> maybe_update_location(location)
:ok
{:error, error} when error in [:forbidden, :not_found, :timeout] ->
error_count = increment_location_error_count(character_id)
Logger.warning("ESI_ERROR: Character location tracking failed",
character_id: character_id,
tracking_pool: tracking_pool,
error_type: error,
error_count: error_count,
endpoint: "character_location"
)
if error_count >= @location_error_threshold do
WandererApp.Cache.put(
"character:#{character_id}:location_forbidden",
true,
ttl: @location_error_timeout
)
end
if is_nil(
WandererApp.Cache.lookup!("character:#{character_id}:location_error_time")
) do
@@ -601,13 +539,24 @@ defmodule WandererApp.Character.Tracker do
{:error, :error_limited}
{:error, error} ->
error_count = increment_location_error_count(character_id)
Logger.error("ESI_ERROR: Character location tracking failed: #{inspect(error)}",
character_id: character_id,
tracking_pool: tracking_pool,
error_type: error,
error_count: error_count,
endpoint: "character_location"
)
if error_count >= @location_error_threshold do
WandererApp.Cache.put(
"character:#{character_id}:location_forbidden",
true,
ttl: @location_error_timeout
)
end
if is_nil(
WandererApp.Cache.lookup!("character:#{character_id}:location_error_time")
) do
@@ -620,13 +569,24 @@ defmodule WandererApp.Character.Tracker do
{:error, :skipped}
_ ->
error_count = increment_location_error_count(character_id)
Logger.error("ESI_ERROR: Character location tracking failed - wrong response",
character_id: character_id,
tracking_pool: tracking_pool,
error_type: "wrong_response",
error_count: error_count,
endpoint: "character_location"
)
if error_count >= @location_error_threshold do
WandererApp.Cache.put(
"character:#{character_id}:location_forbidden",
true,
ttl: @location_error_timeout
)
end
if is_nil(
WandererApp.Cache.lookup!("character:#{character_id}:location_error_time")
) do
@@ -662,8 +622,7 @@ defmodule WandererApp.Character.Tracker do
|> case do
true ->
(WandererApp.Cache.has_key?("character:#{character_id}:online_forbidden") ||
WandererApp.Cache.has_key?("character:#{character_id}:wallet_forbidden") ||
WandererApp.Cache.has_key?("character:#{character_id}:tracking_paused"))
WandererApp.Cache.has_key?("character:#{character_id}:wallet_forbidden"))
|> case do
true ->
{:error, :skipped}
@@ -750,6 +709,7 @@ defmodule WandererApp.Character.Tracker do
end
end
# when old_alliance_id != alliance_id and is_nil(alliance_id)
defp maybe_update_alliance(
%{character_id: character_id, alliance_id: old_alliance_id} = state,
alliance_id
@@ -775,6 +735,7 @@ defmodule WandererApp.Character.Tracker do
)
state
|> Map.merge(%{alliance_id: nil})
end
defp maybe_update_alliance(
@@ -782,8 +743,7 @@ defmodule WandererApp.Character.Tracker do
alliance_id
)
when old_alliance_id != alliance_id do
(WandererApp.Cache.has_key?("character:#{character_id}:online_forbidden") ||
WandererApp.Cache.has_key?("character:#{character_id}:tracking_paused"))
WandererApp.Cache.has_key?("character:#{character_id}:online_forbidden")
|> case do
true ->
state
@@ -813,6 +773,7 @@ defmodule WandererApp.Character.Tracker do
)
state
|> Map.merge(%{alliance_id: alliance_id})
_error ->
Logger.error("Failed to get alliance info for #{alliance_id}")
@@ -829,8 +790,7 @@ defmodule WandererApp.Character.Tracker do
)
when old_corporation_id != corporation_id do
(WandererApp.Cache.has_key?("character:#{character_id}:online_forbidden") ||
WandererApp.Cache.has_key?("character:#{character_id}:corporation_info_forbidden") ||
WandererApp.Cache.has_key?("character:#{character_id}:tracking_paused"))
WandererApp.Cache.has_key?("character:#{character_id}:corporation_info_forbidden"))
|> case do
true ->
state
@@ -1006,9 +966,7 @@ defmodule WandererApp.Character.Tracker do
),
do: %{
state
| track_online: true,
track_location: true,
track_ship: true
| track_online: true
}
defp maybe_start_online_tracking(
@@ -1052,11 +1010,6 @@ defmodule WandererApp.Character.Tracker do
DateTime.utc_now()
)
WandererApp.Cache.put(
"map:#{map_id}:character:#{character_id}:start_solar_system_id",
track_settings |> Map.get(:solar_system_id)
)
WandererApp.Cache.delete("map:#{map_id}:character:#{character_id}:solar_system_id")
WandererApp.Cache.delete("map:#{map_id}:character:#{character_id}:station_id")
WandererApp.Cache.delete("map:#{map_id}:character:#{character_id}:structure_id")
@@ -1107,7 +1060,7 @@ defmodule WandererApp.Character.Tracker do
)
end
state
%{state | track_location: false, track_ship: false}
end
defp maybe_stop_tracking(
@@ -1137,19 +1090,6 @@ defmodule WandererApp.Character.Tracker do
defp get_online(_), do: %{online: false}
defp get_tracking_duration_minutes(character_id) do
case WandererApp.Cache.lookup!("character:#{character_id}:map:*:tracking_start_time") do
nil ->
0
start_time when is_struct(start_time, DateTime) ->
DateTime.diff(DateTime.utc_now(), start_time, :minute)
_ ->
0
end
end
# Telemetry handler for database pool monitoring
def handle_pool_query(_event_name, measurements, metadata, _config) do
queue_time = measurements[:queue_time]

View File
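The location-error handling above replaces the old pause-tracking flow with a simpler counter-and-threshold scheme: each failure increments a per-character count, and once it reaches `@location_error_threshold` the character is flagged `location_forbidden` (cached with a short TTL in the app). A minimal sketch of the counter, with an in-memory map standing in for the cache and the TTL omitted:

```typescript
const LOCATION_ERROR_THRESHOLD = 3;

// Consecutive location-error counts per character
const errorCounts = new Map<string, number>();

// Record one failure; report whether the threshold has been crossed
// (the caller would then set the short-lived location_forbidden flag).
function recordLocationError(characterId: string): { count: number; forbidden: boolean } {
  const count = (errorCounts.get(characterId) ?? 0) + 1;
  errorCounts.set(characterId, count);
  return { count, forbidden: count >= LOCATION_ERROR_THRESHOLD };
}

// A successful location read clears the streak
function resetLocationErrors(characterId: string): void {
  errorCounts.delete(characterId);
}
```

Because the forbidden flag expires on its own, tracking resumes automatically after the backoff window instead of staying paused for hours as before.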

@@ -14,8 +14,8 @@ defmodule WandererApp.Character.TrackerManager do
GenServer.start_link(__MODULE__, args, name: __MODULE__)
end
def start_tracking(character_id, opts \\ []),
do: GenServer.cast(__MODULE__, {&Impl.start_tracking/3, [character_id, opts]})
def start_tracking(character_id),
do: GenServer.cast(__MODULE__, {&Impl.start_tracking/2, [character_id]})
def stop_tracking(character_id),
do: GenServer.cast(__MODULE__, {&Impl.stop_tracking/2, [character_id]})

View File

@@ -40,13 +40,13 @@ defmodule WandererApp.Character.TrackerManager.Impl do
tracked_characters
|> Enum.each(fn character_id ->
start_tracking(state, character_id, %{})
start_tracking(state, character_id)
end)
state
end
def start_tracking(state, character_id, opts) do
def start_tracking(state, character_id) do
if not WandererApp.Cache.has_key?("#{character_id}:track_requested") do
WandererApp.Cache.insert(
"#{character_id}:track_requested",

View File

@@ -8,7 +8,8 @@ defmodule WandererApp.Character.TrackerPool do
:tracked_ids,
:uuid,
:characters,
server_online: true
server_online: false,
last_location_duration: 0
]
@name __MODULE__
@@ -19,13 +20,20 @@ defmodule WandererApp.Character.TrackerPool do
@update_location_interval :timer.seconds(1)
@update_online_interval :timer.seconds(30)
@check_offline_characters_interval :timer.minutes(5)
@check_online_errors_interval :timer.minutes(1)
@check_ship_errors_interval :timer.minutes(1)
@check_location_errors_interval :timer.minutes(1)
@update_ship_interval :timer.seconds(2)
@update_info_interval :timer.minutes(2)
@update_wallet_interval :timer.minutes(10)
# Per-operation concurrency limits
# Location updates are critical and need high concurrency (100 chars in ~200ms)
# Note: This is fetched at runtime since it's configured via runtime.exs
defp location_concurrency do
Application.get_env(:wanderer_app, :location_concurrency, System.schedulers_online() * 12)
end
# Other operations can use lower concurrency
@standard_concurrency System.schedulers_online() * 2
@logger Application.compile_env(:wanderer_app, :logger)
def new(), do: __struct__()
@@ -109,17 +117,23 @@ defmodule WandererApp.Character.TrackerPool do
"server_status"
)
Process.send_after(self(), :update_online, 100)
Process.send_after(self(), :check_online_errors, :timer.seconds(60))
Process.send_after(self(), :check_ship_errors, :timer.seconds(90))
Process.send_after(self(), :check_location_errors, :timer.seconds(120))
# Stagger pool startups to distribute load across multiple pools
# Critical location updates get minimal stagger (0-500ms)
# Other operations get wider stagger (0-10s) to reduce thundering herd
location_stagger = :rand.uniform(500)
online_stagger = :rand.uniform(10_000)
ship_stagger = :rand.uniform(10_000)
info_stagger = :rand.uniform(60_000)
Process.send_after(self(), :update_online, 100 + online_stagger)
Process.send_after(self(), :update_location, 300 + location_stagger)
Process.send_after(self(), :update_ship, 500 + ship_stagger)
Process.send_after(self(), :update_info, 1500 + info_stagger)
Process.send_after(self(), :check_offline_characters, @check_offline_characters_interval)
Process.send_after(self(), :update_location, 300)
Process.send_after(self(), :update_ship, 500)
Process.send_after(self(), :update_info, 1500)
if WandererApp.Env.wallet_tracking_enabled?() do
Process.send_after(self(), :update_wallet, 1000)
wallet_stagger = :rand.uniform(120_000)
Process.send_after(self(), :update_wallet, 1000 + wallet_stagger)
end
{:noreply, state}
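
The staggered-startup change above adds a random per-pool offset to each base delay so that many pools started together do not fire their first ticks in the same instant. The pattern in isolation looks roughly like this (module and message names are illustrative, not taken verbatim from the diff):

```elixir
defmodule StaggerSketch do
  # Hypothetical sketch of the stagger pattern: critical work gets a tight
  # random window, background work a wide one, reducing the thundering herd
  # when several pools boot at once.
  def schedule_initial_ticks(pid) do
    Process.send_after(pid, :update_location, 300 + :rand.uniform(500))
    Process.send_after(pid, :update_online, 100 + :rand.uniform(10_000))
    Process.send_after(pid, :update_info, 1_500 + :rand.uniform(60_000))
  end
end
```

Because each handler reschedules itself with a fixed interval, the random phase chosen at startup persists for the pool's lifetime, keeping the pools de-synchronized without further coordination.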
@@ -169,7 +183,7 @@ defmodule WandererApp.Character.TrackerPool do
fn character_id ->
WandererApp.Character.Tracker.update_online(character_id)
end,
max_concurrency: System.schedulers_online() * 4,
max_concurrency: @standard_concurrency,
on_timeout: :kill_task,
timeout: :timer.seconds(5)
)
@@ -180,6 +194,8 @@ defmodule WandererApp.Character.TrackerPool do
[Tracker Pool] update_online => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
ErrorTracker.report(e, __STACKTRACE__)
end
{:noreply, state}
@@ -230,7 +246,7 @@ defmodule WandererApp.Character.TrackerPool do
WandererApp.Character.Tracker.check_offline(character_id)
end,
timeout: :timer.seconds(15),
max_concurrency: System.schedulers_online() * 4,
max_concurrency: @standard_concurrency,
on_timeout: :kill_task
)
|> Enum.each(fn
@@ -248,126 +264,6 @@ defmodule WandererApp.Character.TrackerPool do
{:noreply, state}
end
def handle_info(
:check_online_errors,
%{
characters: characters
} =
state
) do
Process.send_after(self(), :check_online_errors, @check_online_errors_interval)
try do
characters
|> Task.async_stream(
fn character_id ->
WandererApp.TaskWrapper.start_link(
WandererApp.Character.Tracker,
:check_online_errors,
[
character_id
]
)
end,
timeout: :timer.seconds(15),
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task
)
|> Enum.each(fn
{:ok, _result} -> :ok
error -> @logger.error("Error in check_online_errors: #{inspect(error)}")
end)
rescue
e ->
Logger.error("""
[Tracker Pool] check_online_errors => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
def handle_info(
:check_ship_errors,
%{
characters: characters
} =
state
) do
Process.send_after(self(), :check_ship_errors, @check_ship_errors_interval)
try do
characters
|> Task.async_stream(
fn character_id ->
WandererApp.TaskWrapper.start_link(
WandererApp.Character.Tracker,
:check_ship_errors,
[
character_id
]
)
end,
timeout: :timer.seconds(15),
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task
)
|> Enum.each(fn
{:ok, _result} -> :ok
error -> @logger.error("Error in check_ship_errors: #{inspect(error)}")
end)
rescue
e ->
Logger.error("""
[Tracker Pool] check_ship_errors => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
def handle_info(
:check_location_errors,
%{
characters: characters
} =
state
) do
Process.send_after(self(), :check_location_errors, @check_location_errors_interval)
try do
characters
|> Task.async_stream(
fn character_id ->
WandererApp.TaskWrapper.start_link(
WandererApp.Character.Tracker,
:check_location_errors,
[
character_id
]
)
end,
timeout: :timer.seconds(15),
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task
)
|> Enum.each(fn
{:ok, _result} -> :ok
error -> @logger.error("Error in check_location_errors: #{inspect(error)}")
end)
rescue
e ->
Logger.error("""
[Tracker Pool] check_location_errors => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
def handle_info(
:update_location,
%{
@@ -378,26 +274,52 @@ defmodule WandererApp.Character.TrackerPool do
) do
Process.send_after(self(), :update_location, @update_location_interval)
start_time = System.monotonic_time(:millisecond)
try do
characters
|> Task.async_stream(
fn character_id ->
WandererApp.Character.Tracker.update_location(character_id)
end,
max_concurrency: System.schedulers_online() * 4,
max_concurrency: location_concurrency(),
on_timeout: :kill_task,
timeout: :timer.seconds(5)
)
|> Enum.each(fn _result -> :ok end)
# Emit telemetry for location update performance
duration = System.monotonic_time(:millisecond) - start_time
:telemetry.execute(
[:wanderer_app, :tracker_pool, :location_update],
%{duration: duration, character_count: length(characters)},
%{pool_uuid: state.uuid}
)
# Warn if location updates are falling behind (taking > 2000ms for the pool's characters)
if duration > 2000 do
Logger.warning(
"[Tracker Pool] Location updates falling behind: #{duration}ms for #{length(characters)} chars (pool: #{state.uuid})"
)
:telemetry.execute(
[:wanderer_app, :tracker_pool, :location_lag],
%{duration: duration, character_count: length(characters)},
%{pool_uuid: state.uuid}
)
end
{:noreply, %{state | last_location_duration: duration}}
rescue
e ->
Logger.error("""
[Tracker Pool] update_location => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
{:noreply, state}
end
end
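
The measure-and-emit pattern introduced for `update_location` above can be reduced to a small sketch (event names match the diff; `do_updates`, `count`, and `uuid` are placeholders):

```elixir
# Sketch: time the batch, always emit a measurement event, and emit a
# separate lag event only past the threshold so alerting can subscribe to
# lag without filtering every routine measurement.
start = System.monotonic_time(:millisecond)
do_updates.()
duration = System.monotonic_time(:millisecond) - start

:telemetry.execute(
  [:wanderer_app, :tracker_pool, :location_update],
  %{duration: duration, character_count: count},
  %{pool_uuid: uuid}
)

if duration > 2000 do
  :telemetry.execute(
    [:wanderer_app, :tracker_pool, :location_lag],
    %{duration: duration, character_count: count},
    %{pool_uuid: uuid}
  )
end
```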
def handle_info(
@@ -413,32 +335,48 @@ defmodule WandererApp.Character.TrackerPool do
:update_ship,
%{
characters: characters,
server_online: true
server_online: true,
last_location_duration: location_duration
} =
state
) do
Process.send_after(self(), :update_ship, @update_ship_interval)
try do
characters
|> Task.async_stream(
fn character_id ->
WandererApp.Character.Tracker.update_ship(character_id)
end,
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task,
timeout: :timer.seconds(5)
# Backpressure: Skip ship updates if location updates are falling behind
if location_duration > 1000 do
Logger.debug(
"[Tracker Pool] Skipping ship update due to location lag (#{location_duration}ms)"
)
|> Enum.each(fn _result -> :ok end)
rescue
e ->
Logger.error("""
[Tracker Pool] update_ship => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
:telemetry.execute(
[:wanderer_app, :tracker_pool, :ship_skipped],
%{count: 1},
%{pool_uuid: state.uuid, reason: :location_lag}
)
{:noreply, state}
else
try do
characters
|> Task.async_stream(
fn character_id ->
WandererApp.Character.Tracker.update_ship(character_id)
end,
max_concurrency: @standard_concurrency,
on_timeout: :kill_task,
timeout: :timer.seconds(5)
)
|> Enum.each(fn _result -> :ok end)
rescue
e ->
Logger.error("""
[Tracker Pool] update_ship => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
end
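
The backpressure rule added above (and repeated for `:update_info` below) skips lower-priority work while the last critical cycle exceeded its budget. A condensed sketch, with thresholds mirroring the diff (1000ms for ship updates, 1500ms for info updates) and a hypothetical helper name:

```elixir
defmodule BackpressureSketch do
  # Lower-priority operations are shed when location updates lag; location
  # updates themselves are never skipped.
  def maybe_run(:update_ship, %{last_location_duration: d} = state) when d > 1000,
    do: {:skipped, state}

  def maybe_run(:update_info, %{last_location_duration: d} = state) when d > 1500,
    do: {:skipped, state}

  def maybe_run(_op, state), do: {:run, state}
end
```

Storing only the most recent duration keeps the mechanism stateless beyond one field, at the cost of reacting to a single slow cycle rather than a smoothed average.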
def handle_info(
@@ -454,35 +392,51 @@ defmodule WandererApp.Character.TrackerPool do
:update_info,
%{
characters: characters,
server_online: true
server_online: true,
last_location_duration: location_duration
} =
state
) do
Process.send_after(self(), :update_info, @update_info_interval)
try do
characters
|> Task.async_stream(
fn character_id ->
WandererApp.Character.Tracker.update_info(character_id)
end,
timeout: :timer.seconds(15),
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task
# Backpressure: Skip info updates if location updates are severely falling behind
if location_duration > 1500 do
Logger.debug(
"[Tracker Pool] Skipping info update due to location lag (#{location_duration}ms)"
)
|> Enum.each(fn
{:ok, _result} -> :ok
error -> Logger.error("Error in update_info: #{inspect(error)}")
end)
rescue
e ->
Logger.error("""
[Tracker Pool] update_info => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
:telemetry.execute(
[:wanderer_app, :tracker_pool, :info_skipped],
%{count: 1},
%{pool_uuid: state.uuid, reason: :location_lag}
)
{:noreply, state}
else
try do
characters
|> Task.async_stream(
fn character_id ->
WandererApp.Character.Tracker.update_info(character_id)
end,
timeout: :timer.seconds(15),
max_concurrency: @standard_concurrency,
on_timeout: :kill_task
)
|> Enum.each(fn
{:ok, _result} -> :ok
error -> Logger.error("Error in update_info: #{inspect(error)}")
end)
rescue
e ->
Logger.error("""
[Tracker Pool] update_info => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
end
def handle_info(
@@ -511,7 +465,7 @@ defmodule WandererApp.Character.TrackerPool do
WandererApp.Character.Tracker.update_wallet(character_id)
end,
timeout: :timer.minutes(5),
max_concurrency: System.schedulers_online() * 4,
max_concurrency: @standard_concurrency,
on_timeout: :kill_task
)
|> Enum.each(fn

View File

@@ -52,7 +52,7 @@ defmodule WandererApp.Character.TrackerPoolDynamicSupervisor do
defp get_available_pool([]), do: nil
defp get_available_pool([{pid, uuid} | pools]) do
defp get_available_pool([{_pid, uuid} | pools]) do
case Registry.lookup(@unique_registry, Module.concat(WandererApp.Character.TrackerPool, uuid)) do
[] ->
nil
@@ -62,8 +62,8 @@ defmodule WandererApp.Character.TrackerPoolDynamicSupervisor do
nil ->
get_available_pool(pools)
pid ->
pid
pool_pid ->
pool_pid
end
end
end

View File

@@ -12,7 +12,7 @@ defmodule WandererApp.Character.TransactionsTracker.Impl do
total_balance: 0,
transactions: [],
retries: 5,
server_online: true,
server_online: false,
status: :started
]
@@ -75,7 +75,7 @@ defmodule WandererApp.Character.TransactionsTracker.Impl do
def handle_event(
:update_corp_wallets,
%{character: character} = state
%{character: character, server_online: true} = state
) do
Process.send_after(self(), :update_corp_wallets, @update_interval)
@@ -88,26 +88,26 @@ defmodule WandererApp.Character.TransactionsTracker.Impl do
:update_corp_wallets,
state
) do
Process.send_after(self(), :update_corp_wallets, :timer.seconds(15))
Process.send_after(self(), :update_corp_wallets, @update_interval)
state
end
def handle_event(
:check_wallets,
%{wallets: []} = state
%{character: character, wallets: wallets, server_online: true} = state
) do
Process.send_after(self(), :check_wallets, :timer.seconds(5))
Process.send_after(self(), :check_wallets, @update_interval)
state
end
def handle_event(
:check_wallets,
%{character: character, wallets: wallets} = state
) do
check_wallets(wallets, character)
state
end
def handle_event(
:check_wallets,
state
) do
Process.send_after(self(), :check_wallets, @update_interval)
state

View File

@@ -2,6 +2,8 @@ defmodule WandererApp.Esi do
@moduledoc group: :esi
defdelegate get_server_status, to: WandererApp.Esi.ApiClient
defdelegate get_group_info(group_id, opts \\ []), to: WandererApp.Esi.ApiClient
defdelegate get_type_info(type_id, opts \\ []), to: WandererApp.Esi.ApiClient
defdelegate get_alliance_info(eve_id, opts \\ []), to: WandererApp.Esi.ApiClient
defdelegate get_corporation_info(eve_id, opts \\ []), to: WandererApp.Esi.ApiClient
defdelegate get_character_info(eve_id, opts \\ []), to: WandererApp.Esi.ApiClient

View File

@@ -17,6 +17,17 @@ defmodule WandererApp.Esi.ApiClient do
@logger Application.compile_env(:wanderer_app, :logger)
# Pool selection for different operation types
# Character tracking operations use dedicated high-capacity pool
@character_tracking_pool WandererApp.Finch.ESI.CharacterTracking
# General ESI operations use standard pool
@general_pool WandererApp.Finch.ESI.General
# Helper function to get Req options with appropriate Finch pool
defp req_options_for_pool(pool) do
[base_url: "https://esi.evetech.net", finch: pool]
end
def get_server_status, do: do_get("/status", [], @cache_opts)
def set_autopilot_waypoint(add_to_beginning, clear_other_waypoints, destination_id, opts \\ []),
@@ -38,10 +49,13 @@ defmodule WandererApp.Esi.ApiClient do
do:
do_post_esi(
"/characters/affiliation/",
json: character_eve_ids,
params: %{
datasource: "tranquility"
}
[
json: character_eve_ids,
params: %{
datasource: "tranquility"
}
],
@character_tracking_pool
)
def get_routes_custom(hubs, origin, params),
@@ -116,7 +130,33 @@ defmodule WandererApp.Esi.ApiClient do
@decorate cacheable(
cache: Cache,
key: "info-#{eve_id}",
key: "group-info-#{group_id}",
opts: [ttl: @ttl]
)
def get_group_info(group_id, opts),
do:
do_get(
"/universe/groups/#{group_id}/",
opts,
@cache_opts
)
@decorate cacheable(
cache: Cache,
key: "type-info-#{type_id}",
opts: [ttl: @ttl]
)
def get_type_info(type_id, opts),
do:
do_get(
"/universe/types/#{type_id}/",
opts,
@cache_opts
)
@decorate cacheable(
cache: Cache,
key: "alliance-info-#{eve_id}",
opts: [ttl: @ttl]
)
def get_alliance_info(eve_id, opts \\ []) do
@@ -137,7 +177,7 @@ defmodule WandererApp.Esi.ApiClient do
@decorate cacheable(
cache: Cache,
key: "info-#{eve_id}",
key: "corporation-info-#{eve_id}",
opts: [ttl: @ttl]
)
def get_corporation_info(eve_id, opts \\ []) do
@@ -150,7 +190,7 @@ defmodule WandererApp.Esi.ApiClient do
@decorate cacheable(
cache: Cache,
key: "info-#{eve_id}",
key: "character-info-#{eve_id}",
opts: [ttl: @ttl]
)
def get_character_info(eve_id, opts \\ []) do
@@ -203,8 +243,17 @@ defmodule WandererApp.Esi.ApiClient do
do: get_character_auth_data(character_eve_id, "ship", opts ++ @cache_opts)
def search(character_eve_id, opts \\ []) do
search_val = to_string(opts[:params][:search] || "")
categories_val = to_string(opts[:params][:categories] || "character,alliance,corporation")
params = Keyword.get(opts, :params, %{}) |> Map.new()
search_val =
to_string(Map.get(params, :search) || Map.get(params, "search") || "")
categories_val =
to_string(
Map.get(params, :categories) ||
Map.get(params, "categories") ||
"character,alliance,corporation"
)
query_params = [
{"search", search_val},
@@ -220,7 +269,7 @@ defmodule WandererApp.Esi.ApiClient do
@decorate cacheable(
cache: Cache,
key: "search-#{character_eve_id}-#{categories_val}-#{search_val |> Slug.slugify()}",
key: "search-#{character_eve_id}-#{categories_val}-#{Base.encode64(search_val)}",
opts: [ttl: @ttl]
)
defp get_search(character_eve_id, search_val, categories_val, merged_opts) do
@@ -254,14 +303,18 @@ defmodule WandererApp.Esi.ApiClient do
character_id = opts |> Keyword.get(:character_id, nil)
# Use character tracking pool for character operations
pool = @character_tracking_pool
if not is_access_token_expired?(character_id) do
do_get(
path,
auth_opts,
opts |> with_refresh_token()
opts |> with_refresh_token(),
pool
)
else
do_get_retry(path, auth_opts, opts |> with_refresh_token())
do_get_retry(path, auth_opts, opts |> with_refresh_token(), :forbidden, pool)
end
end
@@ -295,19 +348,19 @@ defmodule WandererApp.Esi.ApiClient do
defp with_cache_opts(opts),
do: opts |> Keyword.merge(@cache_opts) |> Keyword.merge(cache_dir: System.tmp_dir!())
defp do_get(path, api_opts \\ [], opts \\ []) do
defp do_get(path, api_opts \\ [], opts \\ [], pool \\ @general_pool) do
case Cachex.get(:api_cache, path) do
{:ok, cached_data} when not is_nil(cached_data) ->
{:ok, cached_data}
_ ->
do_get_request(path, api_opts, opts)
do_get_request(path, api_opts, opts, pool)
end
end
defp do_get_request(path, api_opts \\ [], opts \\ []) do
defp do_get_request(path, api_opts \\ [], opts \\ [], pool \\ @general_pool) do
try do
@req_esi_options
req_options_for_pool(pool)
|> Req.new()
|> Req.get(
api_opts
@@ -398,12 +451,49 @@ defmodule WandererApp.Esi.ApiClient do
{:ok, %{status: status, headers: headers}} ->
{:error, "Unexpected status: #{status}"}
{:error, _reason} ->
{:error, %Mint.TransportError{reason: :timeout}} ->
# Emit telemetry for pool timeout
:telemetry.execute(
[:wanderer_app, :finch, :pool_timeout],
%{count: 1},
%{method: "GET", path: path, pool: pool}
)
{:error, :pool_timeout}
{:error, reason} ->
# Check if this is a Finch pool error
if is_exception(reason) and
Exception.message(reason) =~ "unable to provide a connection" do
:telemetry.execute(
[:wanderer_app, :finch, :pool_exhausted],
%{count: 1},
%{method: "GET", path: path, pool: pool}
)
end
{:error, "Request failed"}
end
rescue
e ->
Logger.error(Exception.message(e))
error_msg = Exception.message(e)
# Emit telemetry for pool exhaustion errors
if error_msg =~ "unable to provide a connection" do
:telemetry.execute(
[:wanderer_app, :finch, :pool_exhausted],
%{count: 1},
%{method: "GET", path: path, pool: pool}
)
Logger.error("FINCH_POOL_EXHAUSTED: #{error_msg}",
method: "GET",
path: path,
pool: inspect(pool)
)
else
Logger.error(error_msg)
end
{:error, "Request failed"}
end
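
The error handling added above distinguishes two failure modes so pool sizing problems stay visible separately from slow upstreams. A hedged sketch of that classification (the helper is illustrative; the `Mint.TransportError` match and the "unable to provide a connection" message check are taken from the diff):

```elixir
defmodule FinchErrorSketch do
  # Sketch: a transport timeout maps to :pool_timeout; a Finch checkout
  # failure ("unable to provide a connection") maps to :pool_exhausted.
  def classify(%Mint.TransportError{reason: :timeout}), do: :pool_timeout

  def classify(reason) do
    if is_exception(reason) and
         Exception.message(reason) =~ "unable to provide a connection" do
      :pool_exhausted
    else
      :other
    end
  end
end
```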
@@ -492,13 +582,13 @@ defmodule WandererApp.Esi.ApiClient do
end
end
defp do_post_esi(url, opts) do
defp do_post_esi(url, opts, pool \\ @general_pool) do
try do
req_opts =
(opts |> with_user_agent_opts() |> Keyword.merge(@retry_opts)) ++
[params: opts[:params] || []]
Req.new(@req_esi_options ++ req_opts)
Req.new(req_options_for_pool(pool) ++ req_opts)
|> Req.post(url: url)
|> case do
{:ok, %{status: status, body: body}} when status in [200, 201] ->
@@ -576,18 +666,55 @@ defmodule WandererApp.Esi.ApiClient do
{:ok, %{status: status}} ->
{:error, "Unexpected status: #{status}"}
{:error, %Mint.TransportError{reason: :timeout}} ->
# Emit telemetry for pool timeout
:telemetry.execute(
[:wanderer_app, :finch, :pool_timeout],
%{count: 1},
%{method: "POST_ESI", path: url, pool: pool}
)
{:error, :pool_timeout}
{:error, reason} ->
# Check if this is a Finch pool error
if is_exception(reason) and
Exception.message(reason) =~ "unable to provide a connection" do
:telemetry.execute(
[:wanderer_app, :finch, :pool_exhausted],
%{count: 1},
%{method: "POST_ESI", path: url, pool: pool}
)
end
{:error, reason}
end
rescue
e ->
@logger.error(Exception.message(e))
error_msg = Exception.message(e)
# Emit telemetry for pool exhaustion errors
if error_msg =~ "unable to provide a connection" do
:telemetry.execute(
[:wanderer_app, :finch, :pool_exhausted],
%{count: 1},
%{method: "POST_ESI", path: url, pool: pool}
)
@logger.error("FINCH_POOL_EXHAUSTED: #{error_msg}",
method: "POST_ESI",
path: url,
pool: inspect(pool)
)
else
@logger.error(error_msg)
end
{:error, "Request failed"}
end
end
defp do_get_retry(path, api_opts, opts, status \\ :forbidden) do
defp do_get_retry(path, api_opts, opts, status \\ :forbidden, pool \\ @general_pool) do
refresh_token? = opts |> Keyword.get(:refresh_token?, false)
retry_count = opts |> Keyword.get(:retry_count, 0)
character_id = opts |> Keyword.get(:character_id, nil)
@@ -602,7 +729,8 @@ defmodule WandererApp.Esi.ApiClient do
do_get(
path,
api_opts |> Keyword.merge(auth_opts),
opts |> Keyword.merge(retry_count: retry_count + 1)
opts |> Keyword.merge(retry_count: retry_count + 1),
pool
)
{:error, _error} ->

View File

@@ -2,7 +2,7 @@ defmodule WandererApp.ExternalEvents do
@moduledoc """
External event system for SSE and webhook delivery.
This system is completely separate from the internal Phoenix PubSub
This system is completely separate from the internal Phoenix PubSub
event system and does NOT modify any existing event flows.
External events are delivered to:
@@ -72,20 +72,12 @@ defmodule WandererApp.ExternalEvents do
# Check if MapEventRelay is alive before sending
if Process.whereis(MapEventRelay) do
try do
# Use call with timeout instead of cast for better error handling
GenServer.call(MapEventRelay, {:deliver_event, event}, 5000)
:ok
catch
:exit, {:timeout, _} ->
Logger.error("Timeout delivering event to MapEventRelay for map #{map_id}")
{:error, :timeout}
:exit, reason ->
Logger.error("Failed to deliver event to MapEventRelay: #{inspect(reason)}")
{:error, reason}
end
# Use cast for async delivery to avoid blocking the caller
# This is critical for performance in hot paths (character updates)
GenServer.cast(MapEventRelay, {:deliver_event, event})
:ok
else
Logger.debug(fn -> "MapEventRelay not available for event delivery (map: #{map_id})" end)
{:error, :relay_not_available}
end
else

View File

@@ -20,6 +20,7 @@ defmodule WandererApp.ExternalEvents.Event do
| :character_added
| :character_removed
| :character_updated
| :characters_updated
| :map_kill
| :acl_member_added
| :acl_member_removed
@@ -42,50 +43,6 @@ defmodule WandererApp.ExternalEvents.Event do
defstruct [:id, :map_id, :type, :payload, :timestamp]
@doc """
Creates a new external event with ULID for ordering.
Validates that the event_type is supported before creating the event.
"""
@spec new(String.t(), event_type(), map()) :: t() | {:error, :invalid_event_type}
def new(map_id, event_type, payload) when is_binary(map_id) and is_map(payload) do
if valid_event_type?(event_type) do
%__MODULE__{
id: Ecto.ULID.generate(System.system_time(:millisecond)),
map_id: map_id,
type: event_type,
payload: payload,
timestamp: DateTime.utc_now()
}
else
raise ArgumentError,
"Invalid event type: #{inspect(event_type)}. Must be one of: #{supported_event_types() |> Enum.map(&to_string/1) |> Enum.join(", ")}"
end
end
@doc """
Converts an event to JSON format for delivery.
"""
@spec to_json(t()) :: map()
def to_json(%__MODULE__{} = event) do
%{
"id" => event.id,
"type" => to_string(event.type),
"map_id" => event.map_id,
"timestamp" => DateTime.to_iso8601(event.timestamp),
"payload" => serialize_payload(event.payload)
}
end
# Convert Ash structs and other complex types to plain maps
defp serialize_payload(payload) when is_struct(payload) do
serialize_payload(payload, MapSet.new())
end
defp serialize_payload(payload) when is_map(payload) do
serialize_payload(payload, MapSet.new())
end
# Define allowlisted fields for different struct types
@system_fields [
:id,
@@ -133,6 +90,73 @@ defmodule WandererApp.ExternalEvents.Event do
]
@signature_fields [:id, :signature_id, :name, :type, :group]
@supported_event_types [
:add_system,
:deleted_system,
:system_renamed,
:system_metadata_changed,
:signatures_updated,
:signature_added,
:signature_removed,
:connection_added,
:connection_removed,
:connection_updated,
:character_added,
:character_removed,
:character_updated,
:characters_updated,
:map_kill,
:acl_member_added,
:acl_member_removed,
:acl_member_updated,
:rally_point_added,
:rally_point_removed
]
@doc """
Creates a new external event with ULID for ordering.
Validates that the event_type is supported before creating the event.
"""
@spec new(String.t(), event_type(), map()) :: t() | {:error, :invalid_event_type}
def new(map_id, event_type, payload) when is_binary(map_id) and is_map(payload) do
if valid_event_type?(event_type) do
%__MODULE__{
id: Ecto.ULID.generate(System.system_time(:millisecond)),
map_id: map_id,
type: event_type,
payload: payload,
timestamp: DateTime.utc_now()
}
else
raise ArgumentError,
"Invalid event type: #{inspect(event_type)}. Must be one of: #{supported_event_types() |> Enum.map(&to_string/1) |> Enum.join(", ")}"
end
end
@doc """
Converts an event to JSON format for delivery.
"""
@spec to_json(t()) :: map()
def to_json(%__MODULE__{} = event) do
%{
"id" => event.id,
"type" => to_string(event.type),
"map_id" => event.map_id,
"timestamp" => DateTime.to_iso8601(event.timestamp),
"payload" => serialize_payload(event.payload)
}
end
# Convert Ash structs and other complex types to plain maps
defp serialize_payload(payload) when is_struct(payload) do
serialize_payload(payload, MapSet.new())
end
defp serialize_payload(payload) when is_map(payload) do
serialize_payload(payload, MapSet.new())
end
# Overloaded versions with visited tracking
defp serialize_payload(payload, visited) when is_struct(payload) do
# Check for circular reference
@@ -193,29 +217,7 @@ defmodule WandererApp.ExternalEvents.Event do
Returns all supported event types.
"""
@spec supported_event_types() :: [event_type()]
def supported_event_types do
[
:add_system,
:deleted_system,
:system_renamed,
:system_metadata_changed,
:signatures_updated,
:signature_added,
:signature_removed,
:connection_added,
:connection_removed,
:connection_updated,
:character_added,
:character_removed,
:character_updated,
:map_kill,
:acl_member_added,
:acl_member_removed,
:acl_member_updated,
:rally_point_added,
:rally_point_removed
]
end
def supported_event_types, do: @supported_event_types
@doc """
Validates an event type.

View File

@@ -212,6 +212,7 @@ defmodule WandererApp.ExternalEvents.JsonApiFormatter do
"time_status" => payload["time_status"] || payload[:time_status],
"mass_status" => payload["mass_status"] || payload[:mass_status],
"ship_size_type" => payload["ship_size_type"] || payload[:ship_size_type],
"locked" => payload["locked"] || payload[:locked],
"updated_at" => event.timestamp
},
"relationships" => %{

View File

@@ -82,16 +82,9 @@ defmodule WandererApp.ExternalEvents.MapEventRelay do
@impl true
def handle_call({:deliver_event, %Event{} = event}, _from, state) do
# Log ACL events at info level for debugging
if event.type in [:acl_member_added, :acl_member_removed, :acl_member_updated] do
Logger.debug(fn ->
"MapEventRelay received :deliver_event (call) for map #{event.map_id}, type: #{event.type}"
end)
else
Logger.debug(fn ->
"MapEventRelay received :deliver_event (call) for map #{event.map_id}, type: #{event.type}"
end)
end
Logger.debug(fn ->
"MapEventRelay received :deliver_event (call) for map #{event.map_id}, type: #{event.type}"
end)
new_state = deliver_single_event(event, state)
{:reply, :ok, new_state}

View File

@@ -90,7 +90,9 @@ defmodule WandererApp.ExternalEvents.WebhookDispatcher do
@impl true
def handle_cast({:dispatch_events, map_id, events}, state) do
Logger.debug(fn -> "WebhookDispatcher received #{length(events)} events for map #{map_id}" end)
Logger.debug(fn ->
"WebhookDispatcher received #{length(events)} events for map #{map_id}"
end)
# Emit telemetry for batch events
:telemetry.execute(
@@ -290,7 +292,7 @@ defmodule WandererApp.ExternalEvents.WebhookDispatcher do
request = Finch.build(:post, url, headers, payload)
case Finch.request(request, WandererApp.Finch, timeout: 30_000) do
case Finch.request(request, WandererApp.Finch.Webhooks, timeout: 30_000) do
{:ok, %Finch.Response{status: status}} ->
{:ok, status}

View File

@@ -31,7 +31,7 @@ defmodule WandererApp.StartCorpWalletTrackerTask do
if not is_nil(admin_character) do
:ok =
WandererApp.Character.TrackerManager.start_tracking(admin_character.id, keep_alive: true)
WandererApp.Character.TrackerManager.start_tracking(admin_character.id)
{:ok, _pid} =
WandererApp.Character.TrackerManager.start_transaction_tracker(admin_character.id)

View File

@@ -546,7 +546,7 @@ defmodule WandererApp.Kills.Client do
end
end
defp check_health(%{socket_pid: pid, last_message_time: last_msg_time} = state)
defp check_health(%{socket_pid: pid, last_message_time: last_msg_time} = _state)
when not is_nil(pid) and not is_nil(last_msg_time) do
cond do
not socket_alive?(pid) ->

View File

@@ -229,7 +229,7 @@ defmodule WandererApp.Kills.MapEventListener do
{:error, :not_running} ->
{:error, :not_running}
{:ok, status} ->
{:ok, _status} ->
{:error, :not_connected}
error ->

View File

@@ -136,9 +136,6 @@ defmodule WandererApp.License.LicenseManager do
end
end
@doc """
Updates a license's expiration date based on the map's subscription.
"""
def update_license_expiration_from_subscription(map_id) do
with {:ok, license} <- get_license_by_map_id(map_id),
{:ok, subscription} <- SubscriptionManager.get_active_map_subscription(map_id) do
@@ -146,24 +143,15 @@ defmodule WandererApp.License.LicenseManager do
end
end
@doc """
Formats a datetime as YYYY-MM-DD.
"""
defp format_date(datetime) do
Calendar.strftime(datetime, "%Y-%m-%d")
end
@doc """
Generates a link to the map.
"""
defp generate_map_link(map_slug) do
base_url = Application.get_env(:wanderer_app, :web_app_url)
"#{base_url}/#{map_slug}"
end
@doc """
Gets the map owner's data.
"""
defp get_map_owner_email(map) do
{:ok, %{owner: owner}} = map |> Ash.load([:owner])
"#{owner.name}(#{owner.eve_id})"

View File

@@ -135,7 +135,7 @@ defmodule WandererApp.License.LicenseManagerClient do
Application.get_env(:wanderer_app, :license_manager)[:auth_key]
end
defp parse_error_response(status, %{"error" => error_message}) do
defp parse_error_response(_status, %{"error" => error_message}) do
{:error, error_message}
end

View File

@@ -53,8 +53,8 @@ defmodule WandererApp.Map do
{:ok, map} ->
map
_ ->
Logger.error(fn -> "Failed to get map #{map_id}" end)
error ->
Logger.error("Failed to get map #{map_id}: #{inspect(error)}")
%{}
end
end
@@ -134,6 +134,22 @@ defmodule WandererApp.Map do
def get_options(map_id),
do: {:ok, map_id |> get_map!() |> Map.get(:options, Map.new())}
def get_tracked_character_ids(map_id) do
{:ok,
map_id
|> get_map!()
|> Map.get(:characters, [])
|> Enum.filter(fn character_id ->
{:ok, tracking_start_time} =
WandererApp.Cache.lookup(
"character:#{character_id}:map:#{map_id}:tracking_start_time",
nil
)
not is_nil(tracking_start_time)
end)}
end
@doc """
Returns a full list of characters in the map
"""
@@ -183,9 +199,31 @@ defmodule WandererApp.Map do
def add_characters!(map, []), do: map
def add_characters!(%{map_id: map_id} = map, [character | rest]) do
add_character(map_id, character)
add_characters!(map, rest)
def add_characters!(%{map_id: map_id} = map, characters) when is_list(characters) do
# Get current characters list once
current_characters = Map.get(map, :characters, [])
characters_ids =
characters
|> Enum.map(fn %{id: char_id} -> char_id end)
# Filter out characters that already exist
new_character_ids =
characters_ids
|> Enum.reject(fn char_id -> char_id in current_characters end)
# If all characters already exist, return early
if new_character_ids == [] do
map
else
case update_map(map_id, %{characters: new_character_ids ++ current_characters}) do
{:commit, map} ->
map
_ ->
map
end
end
end
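
The rewritten `add_characters!/2` above replaces per-character recursion (one map update each) with a single batched write. The core of the change is just set subtraction before one update, roughly:

```elixir
# Sketch: compute only the genuinely new ids, then write the map once.
current = Map.get(map, :characters, [])

new_ids =
  characters
  |> Enum.map(& &1.id)
  |> Enum.reject(&(&1 in current))

if new_ids != [] do
  update_map(map_id, %{characters: new_ids ++ current})
end
```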
def add_character(
@@ -198,64 +236,13 @@ defmodule WandererApp.Map do
case not (characters |> Enum.member?(character_id)) do
true ->
WandererApp.Character.get_map_character(map_id, character_id)
|> case do
{:ok,
%{
alliance_id: alliance_id,
corporation_id: corporation_id,
solar_system_id: solar_system_id,
structure_id: structure_id,
station_id: station_id,
ship: ship_type_id,
ship_name: ship_name
}} ->
map_id
|> update_map(%{characters: [character_id | characters]})
map_id
|> update_map(%{characters: [character_id | characters]})
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:alliance_id",
# alliance_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:corporation_id",
# corporation_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:solar_system_id",
# solar_system_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:structure_id",
# structure_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:station_id",
# station_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:ship_type_id",
# ship_type_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:ship_name",
# ship_name
# )
:ok
error ->
error
end
:ok
_ ->
{:error, :already_exists}
:ok
end
end
@@ -532,15 +519,16 @@ defmodule WandererApp.Map do
solar_system_source,
solar_system_target
) do
case map_id
|> get_map!()
|> Map.get(:connections, Map.new())
connections =
map_id
|> get_map!()
|> Map.get(:connections, Map.new())
case connections
|> Map.get("#{solar_system_source}_#{solar_system_target}") do
nil ->
{:ok,
map_id
|> get_map!()
|> Map.get(:connections, Map.new())
connections
|> Map.get("#{solar_system_target}_#{solar_system_source}")}
connection ->


@@ -23,7 +23,8 @@ defmodule WandererApp.Map.CacheRTree do
alias WandererApp.Cache
@grid_size 150 # Grid cell size in pixels
# Grid cell size in pixels
@grid_size 150
# Type definitions matching DDRT behavior
@type id :: number() | String.t()
@@ -59,19 +60,26 @@ defmodule WandererApp.Map.CacheRTree do
# Update leaves storage
current_leaves = get_leaves(name)
new_leaves = Enum.reduce(leaves, current_leaves, fn {id, box}, acc ->
Map.put(acc, id, {id, box})
end)
new_leaves =
Enum.reduce(leaves, current_leaves, fn {id, box}, acc ->
Map.put(acc, id, {id, box})
end)
put_leaves(name, new_leaves)
# Update spatial grid
current_grid = get_grid(name)
new_grid = Enum.reduce(leaves, current_grid, fn leaf, grid ->
add_to_grid(grid, leaf)
end)
new_grid =
Enum.reduce(leaves, current_grid, fn leaf, grid ->
add_to_grid(grid, leaf)
end)
put_grid(name, new_grid)
{:ok, %{}} # Match DRTree return format
# Match DRTree return format
{:ok, %{}}
end
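The insert path above buckets each leaf into fixed 150px grid cells. A standalone sketch of that bucketing (the `{min_x, min_y, max_x, max_y}` box shape and the cell math are assumptions for illustration; the real module stores DDRT-style leaves and defines its own `div_floor` further down):

```elixir
# Sketch: map a bounding box to the set of 150px grid cells it overlaps.
defmodule GridSketch do
  @grid_size 150

  # Floor division that stays correct for negative coordinates.
  defp div_floor(a, b) when a >= 0, do: div(a, b)
  defp div_floor(a, b), do: if(rem(a, b) == 0, do: div(a, b), else: div(a, b) - 1)

  def cells_for({min_x, min_y, max_x, max_y}) do
    for cx <- div_floor(min_x, @grid_size)..div_floor(max_x, @grid_size),
        cy <- div_floor(min_y, @grid_size)..div_floor(max_y, @grid_size),
        do: {cx, cy}
  end
end

GridSketch.cells_for({-10, 0, 160, 140})
# => [{-1, 0}, {0, 0}, {1, 0}]
```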
@doc """
@@ -97,17 +105,19 @@ defmodule WandererApp.Map.CacheRTree do
current_grid = get_grid(name)
# Remove from leaves and track bounding boxes for grid cleanup
{new_leaves, removed} = Enum.reduce(ids, {current_leaves, []}, fn id, {leaves, removed} ->
case Map.pop(leaves, id) do
{nil, leaves} -> {leaves, removed}
{{^id, box}, leaves} -> {leaves, [{id, box} | removed]}
end
end)
{new_leaves, removed} =
Enum.reduce(ids, {current_leaves, []}, fn id, {leaves, removed} ->
case Map.pop(leaves, id) do
{nil, leaves} -> {leaves, removed}
{{^id, box}, leaves} -> {leaves, [{id, box} | removed]}
end
end)
# Update grid
new_grid = Enum.reduce(removed, current_grid, fn {id, box}, grid ->
remove_from_grid(grid, id, box)
end)
new_grid =
Enum.reduce(removed, current_grid, fn {id, box}, grid ->
remove_from_grid(grid, id, box)
end)
put_leaves(name, new_leaves)
put_grid(name, new_grid)
@@ -133,17 +143,21 @@ defmodule WandererApp.Map.CacheRTree do
"""
@impl true
def update(id, box_or_tuple, name) do
{old_box, new_box} = case box_or_tuple do
{old, new} ->
{old, new}
box ->
# Need to look up old box
leaves = get_leaves(name)
case Map.get(leaves, id) do
{^id, old} -> {old, box}
nil -> {nil, box} # Will be handled as new insert
end
end
{old_box, new_box} =
case box_or_tuple do
{old, new} ->
{old, new}
box ->
# Need to look up old box
leaves = get_leaves(name)
case Map.get(leaves, id) do
{^id, old} -> {old, box}
# Will be handled as new insert
nil -> {nil, box}
end
end
# Delete old, insert new
if old_box, do: delete([id], name)
@@ -184,6 +198,7 @@ defmodule WandererApp.Map.CacheRTree do
# Precise intersection test
leaves = get_leaves(name)
matching_ids =
Enum.filter(candidate_ids, fn id ->
case Map.get(leaves, id) do
@@ -216,6 +231,7 @@ defmodule WandererApp.Map.CacheRTree do
iex> CacheRTree.init_tree("rtree_map_456", %{width: 150, verbose: false})
:ok
"""
@impl true
def init_tree(name, config \\ %{}) do
Cache.put(cache_key(name, :leaves), %{})
Cache.put(cache_key(name, :grid), %{})
@@ -319,6 +335,7 @@ defmodule WandererApp.Map.CacheRTree do
# Floor division that works correctly with negative numbers
defp div_floor(a, b) when a >= 0, do: div(a, b)
defp div_floor(a, b) when a < 0 do
case rem(a, b) do
0 -> div(a, b)


@@ -0,0 +1,38 @@
defmodule WandererApp.Map.GarbageCollector do
@moduledoc """
Periodically removes stale map data: old chain passages and system signatures.
"""
require Logger
require Ash.Query
@logger Application.compile_env(:wanderer_app, :logger)
@one_week_seconds 7 * 24 * 60 * 60
@two_weeks_seconds 14 * 24 * 60 * 60
def cleanup_chain_passages() do
Logger.info("Starting cleanup of old map chain passages...")
WandererApp.Api.MapChainPassages
|> Ash.Query.filter(updated_at: [less_than: get_cutoff_time(@one_week_seconds)])
|> Ash.bulk_destroy!(:destroy, %{}, batch_size: 100)
@logger.info(fn -> "All map chain passages processed" end)
:ok
end
def cleanup_system_signatures() do
Logger.info("Starting cleanup of old map system signatures...")
WandererApp.Api.MapSystemSignature
|> Ash.Query.filter(updated_at: [less_than: get_cutoff_time(@two_weeks_seconds)])
|> Ash.bulk_destroy!(:destroy, %{}, batch_size: 100)
@logger.info(fn -> "All map system signatures processed" end)
:ok
end
defp get_cutoff_time(seconds), do: DateTime.utc_now() |> DateTime.add(-seconds, :second)
end
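The cutoff used by both cleanup functions is a plain UTC timestamp shifted into the past; a minimal sketch using the same constants as above:

```elixir
one_week_seconds = 7 * 24 * 60 * 60
cutoff = DateTime.add(DateTime.utc_now(), -one_week_seconds, :second)

# Anything whose updated_at is older than `cutoff` is eligible for
# bulk destroy (in batches of 100, per the calls above).
DateTime.compare(cutoff, DateTime.utc_now())
# => :lt
```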


@@ -9,8 +9,8 @@ defmodule WandererApp.Map.Manager do
alias WandererApp.Map.Server
@maps_start_per_second 10
@maps_start_interval 1000
@maps_start_chunk_size 20
@maps_start_interval 500
@maps_queue :maps_queue
@check_maps_queue_interval :timer.seconds(1)
@@ -58,10 +58,6 @@ defmodule WandererApp.Map.Manager do
{:ok, pings_cleanup_timer} =
:timer.send_interval(@pings_cleanup_interval, :cleanup_pings)
safe_async_task(fn ->
start_last_active_maps()
end)
{:ok,
%{
check_maps_queue_timer: check_maps_queue_timer,
@@ -134,26 +130,12 @@ defmodule WandererApp.Map.Manager do
end
end
defp start_last_active_maps() do
{:ok, last_map_states} =
WandererApp.Api.MapState.get_last_active(
DateTime.utc_now()
|> DateTime.add(-30, :minute)
)
last_map_states
|> Enum.map(fn %{map_id: map_id} -> map_id end)
|> Enum.each(fn map_id -> start_map(map_id) end)
:ok
end
defp start_maps() do
chunks =
@maps_queue
|> WandererApp.Queue.to_list!()
|> Enum.uniq()
|> Enum.chunk_every(@maps_start_per_second)
|> Enum.chunk_every(@maps_start_chunk_size)
WandererApp.Queue.clear(@maps_queue)


@@ -4,7 +4,7 @@ defmodule WandererApp.Map.MapPool do
require Logger
alias WandererApp.Map.Server
alias WandererApp.Map.{MapPoolState, Server}
defstruct [
:map_ids,
@@ -15,8 +15,9 @@ defmodule WandererApp.Map.MapPool do
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@map_pool_limit 10
@garbage_collection_interval :timer.hours(12)
@garbage_collection_interval :timer.hours(4)
@systems_cleanup_timeout :timer.minutes(30)
@characters_cleanup_timeout :timer.minutes(5)
@connections_cleanup_timeout :timer.minutes(5)
@@ -25,7 +26,17 @@ defmodule WandererApp.Map.MapPool do
def new(), do: __struct__()
def new(args), do: __struct__(args)
def start_link(map_ids) do
# Accept both {uuid, map_ids} tuple (from supervisor restart) and just map_ids (legacy)
def start_link({uuid, map_ids}) when is_binary(uuid) and is_list(map_ids) do
GenServer.start_link(
@name,
{uuid, map_ids},
name: Module.concat(__MODULE__, uuid)
)
end
# For backward compatibility - generate UUID if only map_ids provided
def start_link(map_ids) when is_list(map_ids) do
uuid = UUID.uuid1()
GenServer.start_link(
@@ -37,13 +48,42 @@ defmodule WandererApp.Map.MapPool do
@impl true
def init({uuid, map_ids}) do
{:ok, _} = Registry.register(@unique_registry, Module.concat(__MODULE__, uuid), map_ids)
# Check for crash recovery - if we have previous state in ETS, merge it with new map_ids
{final_map_ids, recovery_info} =
case MapPoolState.get_pool_state(uuid) do
{:ok, recovered_map_ids} ->
# Merge and deduplicate map IDs
merged = Enum.uniq(recovered_map_ids ++ map_ids)
recovery_count = length(recovered_map_ids)
Logger.info(
"[Map Pool #{uuid}] Crash recovery detected: recovering #{recovery_count} maps",
pool_uuid: uuid,
recovered_maps: recovered_map_ids,
new_maps: map_ids,
total_maps: length(merged)
)
# Emit telemetry for crash recovery
:telemetry.execute(
[:wanderer_app, :map_pool, :recovery, :start],
%{recovered_map_count: recovery_count, total_map_count: length(merged)},
%{pool_uuid: uuid}
)
{merged, %{recovered: true, count: recovery_count}}
{:error, :not_found} ->
# Normal startup, no previous state to recover
{map_ids, %{recovered: false}}
end
# Register with empty list - maps will be added as they're started in handle_continue
{:ok, _} = Registry.register(@unique_registry, Module.concat(__MODULE__, uuid), [])
{:ok, _} = Registry.register(@registry, __MODULE__, uuid)
map_ids
|> Enum.each(fn id ->
Cachex.put(@cache, id, uuid)
end)
# Don't pre-populate cache - will be populated as maps start in handle_continue
# This prevents duplicates when recovering
state =
%{
@@ -52,23 +92,100 @@ defmodule WandererApp.Map.MapPool do
}
|> new()
{:ok, state, {:continue, {:start, map_ids}}}
{:ok, state, {:continue, {:start, {final_map_ids, recovery_info}}}}
end
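The crash-recovery merge in `init/1` reduces to list concatenation plus dedup, with recovered IDs keeping their position ahead of the newly requested ones:

```elixir
# Illustrative IDs only.
recovered_map_ids = ["map-a", "map-b"]
map_ids = ["map-b", "map-c"]

Enum.uniq(recovered_map_ids ++ map_ids)
# => ["map-a", "map-b", "map-c"]
```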
@impl true
def terminate(_reason, _state) do
def terminate(reason, %{uuid: uuid} = _state) do
# On graceful shutdown, clean up ETS state
# On crash, keep ETS state for recovery
case reason do
:normal ->
Logger.debug("[Map Pool #{uuid}] Graceful shutdown, cleaning up ETS state")
MapPoolState.delete_pool_state(uuid)
:shutdown ->
Logger.debug("[Map Pool #{uuid}] Graceful shutdown, cleaning up ETS state")
MapPoolState.delete_pool_state(uuid)
{:shutdown, _} ->
Logger.debug("[Map Pool #{uuid}] Graceful shutdown, cleaning up ETS state")
MapPoolState.delete_pool_state(uuid)
_ ->
Logger.warning(
"[Map Pool #{uuid}] Abnormal termination (#{inspect(reason)}), keeping ETS state for recovery"
)
# Keep ETS state for crash recovery
:ok
end
:ok
end
@impl true
def handle_continue({:start, map_ids}, state) do
def handle_continue({:start, {map_ids, recovery_info}}, state) do
Logger.info("#{@name} started")
map_ids
|> Enum.each(fn map_id ->
GenServer.cast(self(), {:start_map, map_id})
end)
# Track recovery statistics
start_time = System.monotonic_time(:millisecond)
initial_count = length(map_ids)
# Start maps synchronously and accumulate state changes
{new_state, failed_maps} =
map_ids
|> Enum.reduce({state, []}, fn map_id, {current_state, failed} ->
case do_start_map(map_id, current_state) do
{:ok, updated_state} ->
{updated_state, failed}
{:error, reason} ->
Logger.error("[Map Pool] Failed to start map #{map_id}: #{reason}")
# Emit telemetry for individual map recovery failure
if recovery_info.recovered do
:telemetry.execute(
[:wanderer_app, :map_pool, :recovery, :map_failed],
%{map_id: map_id},
%{pool_uuid: state.uuid, reason: reason}
)
end
{current_state, [map_id | failed]}
end
end)
# Calculate final statistics
end_time = System.monotonic_time(:millisecond)
duration_ms = end_time - start_time
successful_count = length(new_state.map_ids)
failed_count = length(failed_maps)
# Log and emit telemetry for recovery completion
if recovery_info.recovered do
Logger.info(
"[Map Pool #{state.uuid}] Crash recovery completed: #{successful_count}/#{initial_count} maps recovered in #{duration_ms}ms",
pool_uuid: state.uuid,
recovered_count: successful_count,
failed_count: failed_count,
total_count: initial_count,
duration_ms: duration_ms,
failed_maps: failed_maps
)
:telemetry.execute(
[:wanderer_app, :map_pool, :recovery, :complete],
%{
recovered_count: successful_count,
failed_count: failed_count,
duration_ms: duration_ms
},
%{pool_uuid: state.uuid}
)
end
# Schedule periodic tasks
Process.send_after(self(), :backup_state, @backup_state_timeout)
Process.send_after(self(), :cleanup_systems, 15_000)
Process.send_after(self(), :cleanup_characters, @characters_cleanup_timeout)
@@ -77,56 +194,372 @@ defmodule WandererApp.Map.MapPool do
# Start message queue monitoring
Process.send_after(self(), :monitor_message_queue, :timer.seconds(30))
{:noreply, state}
{:noreply, new_state}
end
@impl true
def handle_continue({:init_map, map_id}, %{uuid: uuid} = state) do
# Perform the actual map initialization asynchronously
# This runs after the GenServer.call has already returned
start_time = System.monotonic_time(:millisecond)
try do
# Initialize the map state and start the map server using extracted helper
do_initialize_map_server(map_id)
duration = System.monotonic_time(:millisecond) - start_time
Logger.info("[Map Pool #{uuid}] Map #{map_id} initialized successfully in #{duration}ms")
# Emit telemetry for slow initializations
if duration > 5_000 do
Logger.warning("[Map Pool #{uuid}] Slow map initialization: #{map_id} took #{duration}ms")
:telemetry.execute(
[:wanderer_app, :map_pool, :slow_init],
%{duration_ms: duration},
%{map_id: map_id, pool_uuid: uuid}
)
end
{:noreply, state}
rescue
e ->
duration = System.monotonic_time(:millisecond) - start_time
Logger.error("""
[Map Pool #{uuid}] Failed to initialize map #{map_id} after #{duration}ms: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
# Rollback: Remove from state, registry, cache, and ETS using extracted helper
new_state = do_unregister_map(map_id, uuid, state)
# Emit telemetry for failed initialization
:telemetry.execute(
[:wanderer_app, :map_pool, :init_failed],
%{duration_ms: duration},
%{map_id: map_id, pool_uuid: uuid, reason: Exception.message(e)}
)
{:noreply, new_state}
end
end
@impl true
def handle_cast(:stop, state), do: {:stop, :normal, state}
@impl true
def handle_cast({:start_map, map_id}, %{map_ids: map_ids, uuid: uuid} = state) do
if map_id not in map_ids do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
[map_id | r_map_ids]
def handle_call({:start_map, map_id}, _from, %{map_ids: map_ids, uuid: uuid} = state) do
# Enforce capacity limit to prevent pool overload due to race conditions
if length(map_ids) >= @map_pool_limit do
Logger.warning(
"[Map Pool #{uuid}] Pool at capacity (#{length(map_ids)}/#{@map_pool_limit}), " <>
"rejecting map #{map_id} and triggering new pool creation"
)
# Trigger a new pool creation attempt asynchronously
# This allows the system to create a new pool for this map
spawn(fn ->
WandererApp.Map.MapPoolDynamicSupervisor.start_map(map_id)
end)
Cachex.put(@cache, map_id, uuid)
map_id
|> WandererApp.Map.get_map_state!()
|> Server.Impl.start_map()
{:noreply, %{state | map_ids: [map_id | map_ids]}}
{:reply, :ok, state}
else
{:noreply, state}
# Check if map is already started or being initialized
if map_id in map_ids do
Logger.debug("[Map Pool #{uuid}] Map #{map_id} already in pool")
{:reply, {:ok, :already_started}, state}
else
# Pre-register the map in registry and cache to claim ownership
# This prevents race conditions where multiple pools try to start the same map
registry_result =
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
[map_id | r_map_ids]
end)
case registry_result do
{_new_value, _old_value} ->
# Add to cache
Cachex.put(@cache, map_id, uuid)
# Add to state
new_state = %{state | map_ids: [map_id | map_ids]}
# Persist state to ETS
MapPoolState.save_pool_state(uuid, new_state.map_ids)
Logger.debug("[Map Pool #{uuid}] Map #{map_id} queued for async initialization")
# Return immediately and initialize asynchronously
{:reply, {:ok, :initializing}, new_state, {:continue, {:init_map, map_id}}}
:error ->
Logger.error("[Map Pool #{uuid}] Failed to register map #{map_id} in registry")
{:reply, {:error, :registration_failed}, state}
end
end
end
end
@impl true
def handle_cast(
def handle_call(
{:stop_map, map_id},
%{map_ids: map_ids, uuid: uuid} = state
_from,
state
) do
case do_stop_map(map_id, state) do
{:ok, new_state} ->
{:reply, :ok, new_state}
{:error, reason} ->
{:reply, {:error, reason}, state}
end
end
defp do_start_map(map_id, %{map_ids: map_ids, uuid: uuid} = state) do
if map_id in map_ids do
# Map already started
{:ok, state}
else
# Track what operations succeeded for potential rollback
completed_operations = []
try do
# Step 1: Update Registry (most critical, do first)
registry_result =
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
[map_id | r_map_ids]
end)
completed_operations = [:registry | completed_operations]
case registry_result do
{new_value, _old_value} when is_list(new_value) ->
:ok
:error ->
raise "Failed to update registry for pool #{uuid}"
end
# Step 2: Add to cache
case Cachex.put(@cache, map_id, uuid) do
{:ok, _} ->
:ok
{:error, reason} ->
raise "Failed to add to cache: #{inspect(reason)}"
end
completed_operations = [:cache | completed_operations]
# Step 3: Start the map server using extracted helper
do_initialize_map_server(map_id)
completed_operations = [:map_server | completed_operations]
# Step 4: Update GenServer state (last, as this is in-memory and fast)
new_state = %{state | map_ids: [map_id | map_ids]}
# Step 5: Persist state to ETS for crash recovery
MapPoolState.save_pool_state(uuid, new_state.map_ids)
Logger.debug("[Map Pool] Successfully started map #{map_id} in pool #{uuid}")
{:ok, new_state}
rescue
e ->
Logger.error("""
[Map Pool] Failed to start map #{map_id} (completed: #{inspect(completed_operations)}): #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
# Attempt rollback of completed operations
rollback_start_map_operations(map_id, uuid, completed_operations)
{:error, Exception.message(e)}
end
end
end
defp rollback_start_map_operations(map_id, uuid, completed_operations) do
Logger.warning("[Map Pool] Attempting to rollback start_map operations for #{map_id}")
# Rollback in reverse order
if :map_server in completed_operations do
Logger.debug("[Map Pool] Rollback: Stopping map server for #{map_id}")
try do
Server.Impl.stop_map(map_id)
rescue
e ->
Logger.error("[Map Pool] Rollback failed to stop map server: #{Exception.message(e)}")
end
end
if :cache in completed_operations do
Logger.debug("[Map Pool] Rollback: Removing #{map_id} from cache")
case Cachex.del(@cache, map_id) do
{:ok, _} ->
:ok
{:error, reason} ->
Logger.error("[Map Pool] Rollback failed for cache: #{inspect(reason)}")
end
end
if :registry in completed_operations do
Logger.debug("[Map Pool] Rollback: Removing #{map_id} from registry")
try do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
r_map_ids |> Enum.reject(fn id -> id == map_id end)
end)
rescue
e ->
Logger.error("[Map Pool] Rollback failed for registry: #{Exception.message(e)}")
end
end
end
defp do_stop_map(map_id, %{map_ids: map_ids, uuid: uuid} = state) do
# Track what operations succeeded for potential rollback
completed_operations = []
try do
# Step 1: Update Registry (most critical, do first)
registry_result =
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
r_map_ids |> Enum.reject(fn id -> id == map_id end)
end)
completed_operations = [:registry | completed_operations]
case registry_result do
{new_value, _old_value} when is_list(new_value) ->
:ok
:error ->
raise "Failed to update registry for pool #{uuid}"
end
# Step 2: Delete from cache
case Cachex.del(@cache, map_id) do
{:ok, _} ->
:ok
{:error, reason} ->
raise "Failed to delete from cache: #{inspect(reason)}"
end
completed_operations = [:cache | completed_operations]
# Step 3: Stop the map server (clean up all map resources)
map_id
|> Server.Impl.stop_map()
completed_operations = [:map_server | completed_operations]
# Step 4: Update GenServer state (last, as this is in-memory and fast)
new_state = %{state | map_ids: map_ids |> Enum.reject(fn id -> id == map_id end)}
# Step 5: Persist state to ETS for crash recovery
MapPoolState.save_pool_state(uuid, new_state.map_ids)
Logger.debug("[Map Pool] Successfully stopped map #{map_id} from pool #{uuid}")
{:ok, new_state}
rescue
e ->
Logger.error("""
[Map Pool] Failed to stop map #{map_id} (completed: #{inspect(completed_operations)}): #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
# Attempt rollback of completed operations
rollback_stop_map_operations(map_id, uuid, completed_operations)
{:error, Exception.message(e)}
end
end
# Helper function to initialize the map server (no state management)
# This extracts the common map initialization logic used in both
# synchronous (do_start_map) and asynchronous ({:init_map, map_id}) paths
defp do_initialize_map_server(map_id) do
map_id
|> WandererApp.Map.get_map_state!()
|> Server.Impl.start_map()
end
# Helper function to unregister a map from all tracking
# Used for rollback when map initialization fails in the async path
defp do_unregister_map(map_id, uuid, state) do
# Remove from registry
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
r_map_ids |> Enum.reject(fn id -> id == map_id end)
Enum.reject(r_map_ids, &(&1 == map_id))
end)
# Remove from cache
Cachex.del(@cache, map_id)
map_id
|> Server.Impl.stop_map()
# Update state
new_state = %{state | map_ids: Enum.reject(state.map_ids, &(&1 == map_id))}
{:noreply, %{state | map_ids: map_ids |> Enum.reject(fn id -> id == map_id end)}}
# Update ETS
MapPoolState.save_pool_state(uuid, new_state.map_ids)
new_state
end
defp rollback_stop_map_operations(map_id, uuid, completed_operations) do
Logger.warning("[Map Pool] Attempting to rollback stop_map operations for #{map_id}")
# Rollback in reverse order
if :cache in completed_operations do
Logger.debug("[Map Pool] Rollback: Re-adding #{map_id} to cache")
case Cachex.put(@cache, map_id, uuid) do
{:ok, _} ->
:ok
{:error, reason} ->
Logger.error("[Map Pool] Rollback failed for cache: #{inspect(reason)}")
end
end
if :registry in completed_operations do
Logger.debug("[Map Pool] Rollback: Re-adding #{map_id} to registry")
try do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
if map_id in r_map_ids do
r_map_ids
else
[map_id | r_map_ids]
end
end)
rescue
e ->
Logger.error("[Map Pool] Rollback failed for registry: #{Exception.message(e)}")
end
end
# Note: We don't rollback map_server stop as Server.Impl.stop_map() is idempotent
# and the cleanup operations are safe to leave in a "stopped" state
end
@impl true
def handle_call(:error, _, state), do: {:stop, :error, :ok, state}
@impl true
def handle_info(:backup_state, %{map_ids: map_ids} = state) do
def handle_info(:backup_state, %{map_ids: map_ids, uuid: uuid} = state) do
Process.send_after(self(), :backup_state, @backup_state_timeout)
try do
# Persist pool state to ETS
MapPoolState.save_pool_state(uuid, map_ids)
# Backup individual map states to database
map_ids
|> Task.async_stream(
fn map_id ->
@@ -231,25 +664,38 @@ defmodule WandererApp.Map.MapPool do
Process.send_after(self(), :garbage_collect, @garbage_collection_interval)
try do
map_ids
|> Enum.each(fn map_id ->
# presence_character_ids =
# WandererApp.Cache.lookup!("map_#{map_id}:presence_character_ids", [])
# Process each map and accumulate state changes
new_state =
map_ids
|> Enum.reduce(state, fn map_id, current_state ->
presence_character_ids =
WandererApp.Cache.lookup!("map_#{map_id}:presence_character_ids", [])
# if presence_character_ids |> Enum.empty?() do
Logger.info(
"#{uuid}: No more characters present on: #{map_id}, shutting down map server..."
)
if presence_character_ids |> Enum.empty?() do
Logger.info(
"#{uuid}: No more characters present on: #{map_id}, shutting down map server..."
)
GenServer.cast(self(), {:stop_map, map_id})
# end
end)
case do_stop_map(map_id, current_state) do
{:ok, updated_state} ->
Logger.debug("#{uuid}: Successfully stopped map #{map_id}")
updated_state
{:error, reason} ->
Logger.error("#{uuid}: Failed to stop map #{map_id}: #{reason}")
current_state
end
else
current_state
end
end)
{:noreply, new_state}
rescue
e ->
Logger.error(Exception.message(e))
Logger.error("#{uuid}: Garbage collection error: #{Exception.message(e)}")
{:noreply, state}
end
{:noreply, state}
end
@impl true
@@ -277,40 +723,69 @@ defmodule WandererApp.Map.MapPool do
{:noreply, state}
end
def handle_info(
:update_online,
%{
characters: characters,
server_online: true
} =
state
) do
Process.send_after(self(), :update_online, @update_online_interval)
def handle_info(:map_deleted, %{map_ids: map_ids} = state) do
# When a map is deleted, stop all maps in this pool that are deleted
# This is a graceful shutdown triggered by user action
Logger.info("[Map Pool #{state.uuid}] Received map_deleted event, stopping affected maps")
try do
characters
|> Task.async_stream(
fn character_id ->
WandererApp.Character.Tracker.update_online(character_id)
end,
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task,
timeout: :timer.seconds(5)
)
|> Enum.each(fn _result -> :ok end)
rescue
e ->
Logger.error("""
[Tracker Pool] update_online => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
# Check which of our maps were deleted and stop them
new_state =
map_ids
|> Enum.reduce(state, fn map_id, current_state ->
# Check if the map still exists in the database
case WandererApp.MapRepo.get(map_id) do
{:ok, %{deleted: true}} ->
Logger.info("[Map Pool #{state.uuid}] Map #{map_id} was deleted, stopping it")
{:noreply, state}
case do_stop_map(map_id, current_state) do
{:ok, updated_state} ->
updated_state
{:error, reason} ->
Logger.error(
"[Map Pool #{state.uuid}] Failed to stop deleted map #{map_id}: #{reason}"
)
current_state
end
{:ok, _map} ->
# Map still exists and is not deleted
current_state
{:error, _} ->
# Map doesn't exist, should stop it
Logger.info("[Map Pool #{state.uuid}] Map #{map_id} not found, stopping it")
case do_stop_map(map_id, current_state) do
{:ok, updated_state} ->
updated_state
{:error, reason} ->
Logger.error(
"[Map Pool #{state.uuid}] Failed to stop missing map #{map_id}: #{reason}"
)
current_state
end
end
end)
{:noreply, new_state}
end
def handle_info(event, state) do
Server.Impl.handle_event(event)
try do
Server.Impl.handle_event(event)
rescue
e ->
Logger.error("""
[Map Pool] handle_info => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
ErrorTracker.report(e, __STACKTRACE__)
end
{:noreply, state}
end


@@ -8,6 +8,7 @@ defmodule WandererApp.Map.MapPoolDynamicSupervisor do
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@map_pool_limit 10
@genserver_call_timeout :timer.minutes(2)
@name __MODULE__
@@ -30,29 +31,115 @@ defmodule WandererApp.Map.MapPoolDynamicSupervisor do
start_child([map_id], pools |> Enum.count())
pid ->
GenServer.cast(pid, {:start_map, map_id})
result = GenServer.call(pid, {:start_map, map_id}, @genserver_call_timeout)
case result do
{:ok, :initializing} ->
Logger.debug(
"[Map Pool Supervisor] Map #{map_id} queued for async initialization"
)
result
{:ok, :already_started} ->
Logger.debug("[Map Pool Supervisor] Map #{map_id} already started")
result
:ok ->
# Legacy synchronous response (from crash recovery path)
Logger.debug("[Map Pool Supervisor] Map #{map_id} started synchronously")
result
other ->
Logger.warning(
"[Map Pool Supervisor] Unexpected response for map #{map_id}: #{inspect(other)}"
)
other
end
end
end
end
def stop_map(map_id) do
{:ok, pool_uuid} = Cachex.get(@cache, map_id)
case Cachex.get(@cache, map_id) do
{:ok, nil} ->
# Cache miss - try to find the pool by scanning the registry
Logger.warning(
"Cache miss for map #{map_id}, scanning registry for pool containing this map"
)
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, pool_uuid)
) do
find_pool_by_scanning_registry(map_id)
{:ok, pool_uuid} ->
# Cache hit - use the pool_uuid to lookup the pool
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, pool_uuid)
) do
[] ->
Logger.warning(
"Pool with UUID #{pool_uuid} not found in registry for map #{map_id}, scanning registry"
)
find_pool_by_scanning_registry(map_id)
[{pool_pid, _}] ->
GenServer.call(pool_pid, {:stop_map, map_id}, @genserver_call_timeout)
end
{:error, reason} ->
Logger.error("Failed to lookup map #{map_id} in cache: #{inspect(reason)}")
:ok
end
end
defp find_pool_by_scanning_registry(map_id) do
case Registry.lookup(@registry, WandererApp.Map.MapPool) do
[] ->
Logger.debug("No map pools found in registry for map #{map_id}")
:ok
[{pool_pid, _}] ->
GenServer.cast(pool_pid, {:stop_map, map_id})
pools ->
# Scan all pools to find the one containing this map_id
found_pool =
Enum.find_value(pools, fn {_pid, uuid} ->
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, uuid)
) do
[{pool_pid, map_ids}] ->
if map_id in map_ids do
{pool_pid, uuid}
else
nil
end
_ ->
nil
end
end)
case found_pool do
{pool_pid, pool_uuid} ->
Logger.info(
"Found map #{map_id} in pool #{pool_uuid} via registry scan, updating cache"
)
# Update the cache to fix the inconsistency
Cachex.put(@cache, map_id, pool_uuid)
GenServer.call(pool_pid, {:stop_map, map_id}, @genserver_call_timeout)
nil ->
Logger.debug("Map #{map_id} not found in any pool registry")
:ok
end
end
end
defp get_available_pool([]), do: nil
defp get_available_pool([{pid, uuid} | pools]) do
defp get_available_pool([{_pid, uuid} | pools]) do
case Registry.lookup(@unique_registry, Module.concat(WandererApp.Map.MapPool, uuid)) do
[] ->
nil
@@ -79,9 +166,13 @@ defmodule WandererApp.Map.MapPoolDynamicSupervisor do
end
defp start_child(map_ids, pools_count) do
case DynamicSupervisor.start_child(@name, {WandererApp.Map.MapPool, map_ids}) do
# Generate UUID for the new pool - this will be used for crash recovery
uuid = UUID.uuid1()
# Pass both UUID and map_ids to the pool for crash recovery support
case DynamicSupervisor.start_child(@name, {WandererApp.Map.MapPool, {uuid, map_ids}}) do
{:ok, pid} ->
Logger.info("Starting map pool, total map_pools: #{pools_count + 1}")
Logger.info("Starting map pool #{uuid}, total map_pools: #{pools_count + 1}")
{:ok, pid}
{:error, {:already_started, pid}} ->


@@ -0,0 +1,190 @@
defmodule WandererApp.Map.MapPoolState do
@moduledoc """
Helper module for persisting MapPool state to ETS for crash recovery.
This module provides functions to save and retrieve MapPool state from an ETS table.
The state survives GenServer crashes but is lost on node restart, which ensures
automatic recovery from crashes while avoiding stale state on system restart.
## ETS Table Ownership
The ETS table `:map_pool_state_table` is owned by the MapPoolSupervisor,
ensuring it survives individual MapPool process crashes.
## State Format
State is stored as tuples: `{pool_uuid, map_ids, last_updated_timestamp}`
where:
- `pool_uuid` is the unique identifier for the pool (key)
- `map_ids` is a list of map IDs managed by this pool
- `last_updated_timestamp` is the Unix timestamp of the last update
"""
require Logger
@table_name :map_pool_state_table
@stale_threshold_hours 24
@doc """
Initializes the ETS table for storing MapPool state.
This should be called by the MapPoolSupervisor during initialization.
The table is created as:
- `:set` - Each pool UUID has exactly one entry
- `:public` - Any process can read/write
- `:named_table` - Can be accessed by name
Returns the table reference or raises if table already exists.
"""
@spec init_table() :: :ets.table()
def init_table do
:ets.new(@table_name, [:set, :public, :named_table])
end
@doc """
Saves the current state of a MapPool to ETS.
## Parameters
- `uuid` - The unique identifier for the pool
- `map_ids` - List of map IDs currently managed by this pool
## Examples
iex> MapPoolState.save_pool_state("pool-123", [1, 2, 3])
:ok
"""
@spec save_pool_state(String.t(), [integer()]) :: :ok
def save_pool_state(uuid, map_ids) when is_binary(uuid) and is_list(map_ids) do
timestamp = System.system_time(:second)
true = :ets.insert(@table_name, {uuid, map_ids, timestamp})
Logger.debug("Saved MapPool state for #{uuid}: #{length(map_ids)} maps",
pool_uuid: uuid,
map_count: length(map_ids)
)
:ok
end
@doc """
Retrieves the saved state for a MapPool from ETS.
## Parameters
- `uuid` - The unique identifier for the pool
## Returns
- `{:ok, map_ids}` if state exists
- `{:error, :not_found}` if no state exists for this UUID
## Examples
iex> MapPoolState.get_pool_state("pool-123")
{:ok, [1, 2, 3]}
iex> MapPoolState.get_pool_state("non-existent")
{:error, :not_found}
"""
@spec get_pool_state(String.t()) :: {:ok, [integer()]} | {:error, :not_found}
def get_pool_state(uuid) when is_binary(uuid) do
case :ets.lookup(@table_name, uuid) do
[{^uuid, map_ids, _timestamp}] ->
{:ok, map_ids}
[] ->
{:error, :not_found}
end
end
@doc """
Deletes the state for a MapPool from ETS.
This should be called when a pool is gracefully shut down.
## Parameters
- `uuid` - The unique identifier for the pool
## Examples
iex> MapPoolState.delete_pool_state("pool-123")
:ok
"""
@spec delete_pool_state(String.t()) :: :ok
def delete_pool_state(uuid) when is_binary(uuid) do
true = :ets.delete(@table_name, uuid)
Logger.debug("Deleted MapPool state for #{uuid}", pool_uuid: uuid)
:ok
end
@doc """
Removes stale entries from the ETS table.
Entries are considered stale if they haven't been updated in the last
#{@stale_threshold_hours} hours. This keeps the table from growing without
bound as pool UUIDs fall out of use.
Returns the number of entries deleted.
## Examples
iex> MapPoolState.cleanup_stale_entries()
{:ok, 3}
"""
@spec cleanup_stale_entries() :: {:ok, non_neg_integer()}
def cleanup_stale_entries do
stale_threshold = System.system_time(:second) - @stale_threshold_hours * 3600
# Select the UUID (:"$1") of every entry whose timestamp (:"$3") is below the threshold
match_spec = [
{
{:"$1", :"$2", :"$3"},
[{:<, :"$3", stale_threshold}],
[:"$1"]
}
]
stale_uuids = :ets.select(@table_name, match_spec)
Enum.each(stale_uuids, fn uuid ->
:ets.delete(@table_name, uuid)
Logger.info("Cleaned up stale MapPool state for #{uuid}",
pool_uuid: uuid,
reason: :stale
)
end)
{:ok, length(stale_uuids)}
end
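
The match spec above can be sanity-checked without a table via `:ets.test_ms/2`, which runs a match specification against a single tuple (the threshold value here is illustrative):

```elixir
match_spec = [
  {{:"$1", :"$2", :"$3"}, [{:<, :"$3", 1_000}], [:"$1"]}
]

# A tuple older than the threshold matches and yields its UUID ($1)...
{:ok, "pool-123"} = :ets.test_ms({"pool-123", [1, 2], 500}, match_spec)
# ...while a fresh tuple does not match.
{:ok, false} = :ets.test_ms({"pool-456", [3], 2_000}, match_spec)
```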
@doc """
Returns all pool states currently stored in ETS.
Useful for debugging and monitoring.
## Examples
iex> MapPoolState.list_all_states()
[
{"pool-123", [1, 2, 3], 1699564800},
{"pool-456", [4, 5], 1699564900}
]
"""
@spec list_all_states() :: [{String.t(), [integer()], integer()}]
def list_all_states do
:ets.tab2list(@table_name)
end
@doc """
Returns the count of pool states currently stored in ETS.
## Examples
iex> MapPoolState.count_states()
5
"""
@spec count_states() :: non_neg_integer()
def count_states do
:ets.info(@table_name, :size)
end
end


@@ -2,6 +2,8 @@ defmodule WandererApp.Map.MapPoolSupervisor do
@moduledoc false
use Supervisor
alias WandererApp.Map.MapPoolState
@name __MODULE__
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@@ -11,10 +13,15 @@ defmodule WandererApp.Map.MapPoolSupervisor do
end
def init(_args) do
# Initialize ETS table for MapPool state persistence
# This table survives individual MapPool crashes but is lost on node restart
MapPoolState.init_table()
children = [
{Registry, [keys: :unique, name: @unique_registry]},
{Registry, [keys: :duplicate, name: @registry]},
{WandererApp.Map.MapPoolDynamicSupervisor, []}
{WandererApp.Map.MapPoolDynamicSupervisor, []},
{WandererApp.Map.Reconciler, []}
]
Supervisor.init(children, strategy: :rest_for_one, max_restarts: 10)


@@ -0,0 +1,287 @@
defmodule WandererApp.Map.Reconciler do
@moduledoc """
Periodically reconciles map state across different stores (Cache, Registry, GenServer state)
to detect and fix inconsistencies that may prevent map servers from restarting.
"""
use GenServer
require Logger
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@reconciliation_interval :timer.minutes(5)
def start_link(_opts) do
GenServer.start_link(__MODULE__, [], name: __MODULE__)
end
@impl true
def init(_opts) do
Logger.info("Starting Map Reconciler")
schedule_reconciliation()
{:ok, %{}}
end
@impl true
def handle_info(:reconcile, state) do
schedule_reconciliation()
try do
reconcile_state()
rescue
e ->
Logger.error("""
[Map Reconciler] reconciliation error: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
@doc """
Manually trigger a reconciliation (useful for testing or manual cleanup)
"""
def trigger_reconciliation do
GenServer.cast(__MODULE__, :reconcile_now)
end
@impl true
def handle_cast(:reconcile_now, state) do
try do
reconcile_state()
rescue
e ->
Logger.error("""
[Map Reconciler] manual reconciliation error: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
defp schedule_reconciliation do
Process.send_after(self(), :reconcile, @reconciliation_interval)
end
defp reconcile_state do
Logger.debug("[Map Reconciler] Starting state reconciliation")
# Get started_maps from cache
{:ok, started_maps} = WandererApp.Cache.lookup("started_maps", [])
# Get all maps from registries
registry_maps = get_all_registry_maps()
# Detect zombie maps (in started_maps but not in any registry)
zombie_maps = started_maps -- registry_maps
# Detect orphan maps (in registry but not in started_maps)
orphan_maps = registry_maps -- started_maps
# Detect cache inconsistencies (map_pool_cache pointing to wrong or non-existent pools)
cache_inconsistencies = find_cache_inconsistencies(registry_maps)
stats = %{
total_started_maps: length(started_maps),
total_registry_maps: length(registry_maps),
zombie_maps: length(zombie_maps),
orphan_maps: length(orphan_maps),
cache_inconsistencies: length(cache_inconsistencies)
}
Logger.info("[Map Reconciler] Reconciliation stats: #{inspect(stats)}")
# Emit telemetry
:telemetry.execute(
[:wanderer_app, :map, :reconciliation],
stats,
%{}
)
# Clean up zombie maps
cleanup_zombie_maps(zombie_maps)
# Fix orphan maps
fix_orphan_maps(orphan_maps)
# Fix cache inconsistencies
fix_cache_inconsistencies(cache_inconsistencies)
Logger.debug("[Map Reconciler] State reconciliation completed")
end
defp get_all_registry_maps do
case Registry.lookup(@registry, WandererApp.Map.MapPool) do
[] ->
[]
pools ->
pools
|> Enum.flat_map(fn {_pid, uuid} ->
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, uuid)
) do
[{_pool_pid, map_ids}] -> map_ids
_ -> []
end
end)
|> Enum.uniq()
end
end
defp find_cache_inconsistencies(registry_maps) do
registry_maps
|> Enum.filter(fn map_id ->
case Cachex.get(@cache, map_id) do
{:ok, nil} ->
# Map in registry but not in cache
true
{:ok, pool_uuid} ->
# Check if the pool_uuid actually exists in registry
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, pool_uuid)
) do
[] ->
# Cache points to non-existent pool
true
[{_pool_pid, map_ids}] ->
# Check if this map is actually in the pool's map_ids
map_id not in map_ids
_ ->
false
end
{:error, _} ->
true
end
end)
end
defp cleanup_zombie_maps([]), do: :ok
defp cleanup_zombie_maps(zombie_maps) do
Logger.warning(
"[Map Reconciler] Found #{length(zombie_maps)} zombie maps: #{inspect(zombie_maps)}"
)
Enum.each(zombie_maps, fn map_id ->
Logger.info("[Map Reconciler] Cleaning up zombie map: #{map_id}")
# Remove from started_maps cache
WandererApp.Cache.insert_or_update(
"started_maps",
[],
fn started_maps ->
started_maps |> Enum.reject(fn started_map_id -> started_map_id == map_id end)
end
)
# Clean up any stale map_pool_cache entries
Cachex.del(@cache, map_id)
# Clean up map-specific caches
WandererApp.Cache.delete("map_#{map_id}:started")
WandererApp.Cache.delete("map_characters-#{map_id}")
WandererApp.Map.CacheRTree.clear_tree("rtree_#{map_id}")
WandererApp.Map.delete_map_state(map_id)
:telemetry.execute(
[:wanderer_app, :map, :reconciliation, :zombie_cleanup],
%{count: 1},
%{map_id: map_id}
)
end)
end
defp fix_orphan_maps([]), do: :ok
defp fix_orphan_maps(orphan_maps) do
Logger.warning(
"[Map Reconciler] Found #{length(orphan_maps)} orphan maps: #{inspect(orphan_maps)}"
)
Enum.each(orphan_maps, fn map_id ->
Logger.info("[Map Reconciler] Fixing orphan map: #{map_id}")
# Add to started_maps cache
WandererApp.Cache.insert_or_update(
"started_maps",
[map_id],
fn existing ->
[map_id | existing] |> Enum.uniq()
end
)
:telemetry.execute(
[:wanderer_app, :map, :reconciliation, :orphan_fixed],
%{count: 1},
%{map_id: map_id}
)
end)
end
defp fix_cache_inconsistencies([]), do: :ok
defp fix_cache_inconsistencies(inconsistent_maps) do
Logger.warning(
"[Map Reconciler] Found #{length(inconsistent_maps)} cache inconsistencies: #{inspect(inconsistent_maps)}"
)
Enum.each(inconsistent_maps, fn map_id ->
Logger.info("[Map Reconciler] Fixing cache inconsistency for map: #{map_id}")
# Find the correct pool for this map
case find_pool_for_map(map_id) do
{:ok, pool_uuid} ->
Logger.info("[Map Reconciler] Updating cache: #{map_id} -> #{pool_uuid}")
Cachex.put(@cache, map_id, pool_uuid)
:telemetry.execute(
[:wanderer_app, :map, :reconciliation, :cache_fixed],
%{count: 1},
%{map_id: map_id, pool_uuid: pool_uuid}
)
:error ->
Logger.warning(
"[Map Reconciler] Could not find pool for map #{map_id}, removing from cache"
)
Cachex.del(@cache, map_id)
end
end)
end
defp find_pool_for_map(map_id) do
case Registry.lookup(@registry, WandererApp.Map.MapPool) do
[] ->
:error
pools ->
pools
|> Enum.find_value(:error, fn {_pid, uuid} ->
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, uuid)
) do
[{_pool_pid, map_ids}] ->
if map_id in map_ids do
{:ok, uuid}
else
nil
end
_ ->
nil
end
end)
end
end
end
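
The zombie/orphan detection above relies on plain `Kernel.--/2` list difference; as a sketch with made-up map IDs:

```elixir
started_maps = ["map-a", "map-b", "map-c"]
registry_maps = ["map-b", "map-c", "map-d"]

# Zombies: marked as started but present in no registry pool.
["map-a"] = started_maps -- registry_maps
# Orphans: running in a registry pool but missing from started_maps.
["map-d"] = registry_maps -- started_maps
```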


@@ -72,12 +72,12 @@ defmodule WandererApp.Map.Routes do
{:ok, %{routes: routes, systems_static_data: systems_static_data}}
error ->
_error ->
{:ok, %{routes: [], systems_static_data: []}}
end
end
def find(map_id, hubs, origin, routes_settings, true) do
def find(_map_id, hubs, origin, routes_settings, true) do
origin = origin |> String.to_integer()
hubs = hubs |> Enum.map(&(&1 |> String.to_integer()))
@@ -240,8 +240,10 @@ defmodule WandererApp.Map.Routes do
{:ok, result}
{:error, _error} ->
error_file_path = save_error_params(origin, hubs, params)
@logger.error(
"Error getting custom routes for #{inspect(origin)}: #{inspect(params)}"
"Error getting custom routes for #{inspect(origin)}: #{inspect(params)}. Params saved to: #{error_file_path}"
)
WandererApp.Esi.get_routes_eve(hubs, origin, params, opts)
@@ -249,6 +251,35 @@ defmodule WandererApp.Map.Routes do
end
end
defp save_error_params(origin, hubs, params) do
timestamp = DateTime.utc_now() |> DateTime.to_unix(:millisecond)
filename = "#{timestamp}_route_error_params.json"
filepath = Path.join([System.tmp_dir!(), filename])
error_data = %{
timestamp: DateTime.utc_now() |> DateTime.to_iso8601(),
origin: origin,
hubs: hubs,
params: params
}
case Jason.encode(error_data, pretty: true) do
{:ok, json_string} ->
File.write!(filepath, json_string)
filepath
{:error, _reason} ->
# Fallback: save as Elixir term if JSON encoding fails
filepath_term = Path.join([System.tmp_dir!(), "#{timestamp}_route_error_params.term"])
File.write!(filepath_term, inspect(error_data, pretty: true))
filepath_term
end
rescue
e ->
@logger.error("Failed to save error params: #{inspect(e)}")
"error_saving_params"
end
defp remove_intersection(pairs_arr) do
tuples = pairs_arr |> Enum.map(fn x -> {x.first, x.second} end)


@@ -300,10 +300,9 @@ defmodule WandererApp.Map.SubscriptionManager do
defp is_expired(subscription) when is_map(subscription),
do: DateTime.compare(DateTime.utc_now(), subscription.active_till) == :gt
defp renew_subscription(%{auto_renew?: true} = subscription) when is_map(subscription) do
with {:ok, %{map: map}} <-
subscription |> WandererApp.MapSubscriptionRepo.load_relationships([:map]),
{:ok, estimated_price, discount} <- estimate_price(subscription, true),
defp renew_subscription(%{auto_renew?: true, map: map} = subscription)
when is_map(subscription) do
with {:ok, estimated_price, discount} <- estimate_price(subscription, true),
{:ok, map_balance} <- get_balance(map) do
case map_balance >= estimated_price do
true ->


@@ -35,16 +35,14 @@ defmodule WandererApp.Map.ZkbDataFetcher do
|> Task.async_stream(
fn map_id ->
try do
if WandererApp.Map.Server.map_pid(map_id) do
# Always update kill counts
update_map_kills(map_id)
# Always update kill counts
update_map_kills(map_id)
# Update detailed kills for maps with active subscriptions
{:ok, is_subscription_active} = map_id |> WandererApp.Map.is_subscription_active?()
# Update detailed kills for maps with active subscriptions
{:ok, is_subscription_active} = map_id |> WandererApp.Map.is_subscription_active?()
if is_subscription_active do
update_detailed_map_kills(map_id)
end
if is_subscription_active do
update_detailed_map_kills(map_id)
end
rescue
e ->


@@ -12,7 +12,6 @@ defmodule WandererApp.Map.Operations.Connections do
# Connection type constants
@connection_type_wormhole 0
@connection_type_stargate 1
# Ship size constants
@small_ship_size 0
@@ -231,31 +230,15 @@ defmodule WandererApp.Map.Operations.Connections do
attrs
) do
with {:ok, conn_struct} <- MapConnectionRepo.get_by_id(map_id, conn_id),
result <-
:ok <-
(try do
_allowed_keys = [
:mass_status,
:ship_size_type,
:time_status,
:type
]
_update_map =
attrs
|> Enum.filter(fn {k, _v} ->
k in ["mass_status", "ship_size_type", "time_status", "type"]
end)
|> Enum.map(fn {k, v} -> {String.to_atom(k), v} end)
|> Enum.into(%{})
res = apply_connection_updates(map_id, conn_struct, attrs, char_id)
res
rescue
error ->
Logger.error("[update_connection] Exception: #{inspect(error)}")
{:error, :exception}
end),
:ok <- result do
end) do
# Since GenServer updates are asynchronous, manually apply updates to the current struct
# to return the correct data immediately instead of refetching from potentially stale cache
updated_attrs =
@@ -374,6 +357,7 @@ defmodule WandererApp.Map.Operations.Connections do
"ship_size_type" -> maybe_update_ship_size_type(map_id, conn, val)
"time_status" -> maybe_update_time_status(map_id, conn, val)
"type" -> maybe_update_type(map_id, conn, val)
"locked" -> maybe_update_locked(map_id, conn, val)
_ -> :ok
end
@@ -429,6 +413,16 @@ defmodule WandererApp.Map.Operations.Connections do
})
end
defp maybe_update_locked(_map_id, _conn, nil), do: :ok
defp maybe_update_locked(map_id, conn, value) do
Server.update_connection_locked(map_id, %{
solar_system_source_id: conn.solar_system_source,
solar_system_target_id: conn.solar_system_target,
locked: value
})
end
@doc "Creates a connection between two systems"
@spec create_connection(String.t(), map(), String.t()) ::
{:ok, :created} | {:skip, :exists} | {:error, atom()}


@@ -5,9 +5,42 @@ defmodule WandererApp.Map.Operations.Signatures do
require Logger
alias WandererApp.Map.Operations
alias WandererApp.Api.{MapSystem, MapSystemSignature}
alias WandererApp.Api.{Character, MapSystem, MapSystemSignature}
alias WandererApp.Map.Server
# Private helper to validate character_eve_id from params and return internal character ID
# If character_eve_id is provided in params, validates it exists and returns the internal UUID
# If not provided, falls back to the owner's character ID (which is already the internal UUID)
@spec validate_character_eve_id(map() | nil, String.t()) ::
{:ok, String.t()} | {:error, :invalid_character}
defp validate_character_eve_id(params, fallback_char_id) when is_map(params) do
case Map.get(params, "character_eve_id") do
nil ->
# No character_eve_id provided, use fallback (owner's internal character UUID)
{:ok, fallback_char_id}
provided_char_eve_id when is_binary(provided_char_eve_id) ->
# Validate the provided character_eve_id exists and get internal UUID
case Character.by_eve_id(provided_char_eve_id) do
{:ok, character} ->
# Return the internal character UUID, not the eve_id
{:ok, character.id}
_ ->
{:error, :invalid_character}
end
_ ->
# Invalid format
{:error, :invalid_character}
end
end
# Handle nil or non-map params by falling back to owner's character
defp validate_character_eve_id(_params, fallback_char_id) do
{:ok, fallback_char_id}
end
@spec list_signatures(String.t()) :: [map()]
def list_signatures(map_id) do
systems = Operations.list_systems(map_id)
@@ -41,11 +74,14 @@ defmodule WandererApp.Map.Operations.Signatures do
%{"solar_system_id" => solar_system_id} = params
)
when is_integer(solar_system_id) do
# Convert solar_system_id to system_id for internal use
with {:ok, system} <- MapSystem.by_map_id_and_solar_system_id(map_id, solar_system_id) do
# Validate character first, then convert solar_system_id to system_id
# validated_char_uuid is the internal character UUID for Server.update_signatures
with {:ok, validated_char_uuid} <- validate_character_eve_id(params, char_id),
{:ok, system} <- MapSystem.by_map_id_and_solar_system_id(map_id, solar_system_id) do
# Keep character_eve_id in attrs if provided by user (parse_signatures will use it)
# If not provided, parse_signatures will use the character_eve_id from validated_char_uuid lookup
attrs =
params
|> Map.put("character_eve_id", char_id)
|> Map.put("system_id", system.id)
|> Map.delete("solar_system_id")
@@ -54,7 +90,8 @@ defmodule WandererApp.Map.Operations.Signatures do
updated_signatures: [],
removed_signatures: [],
solar_system_id: solar_system_id,
character_id: char_id,
# Pass internal UUID here
character_id: validated_char_uuid,
user_id: user_id,
delete_connection_with_sigs: false
}) do
@@ -86,6 +123,10 @@ defmodule WandererApp.Map.Operations.Signatures do
{:error, :unexpected_error}
end
else
{:error, :invalid_character} ->
Logger.error("[create_signature] Invalid character_eve_id provided")
{:error, :invalid_character}
_ ->
Logger.error(
"[create_signature] System not found for solar_system_id: #{solar_system_id}"
@@ -111,7 +152,10 @@ defmodule WandererApp.Map.Operations.Signatures do
sig_id,
params
) do
with {:ok, sig} <- MapSystemSignature.by_id(sig_id),
# Validate character first, then look up signature and system
# validated_char_uuid is the internal character UUID
with {:ok, validated_char_uuid} <- validate_character_eve_id(params, char_id),
{:ok, sig} <- MapSystemSignature.by_id(sig_id),
{:ok, system} <- MapSystem.by_id(sig.system_id) do
base = %{
"eve_id" => sig.eve_id,
@@ -120,11 +164,11 @@ defmodule WandererApp.Map.Operations.Signatures do
"group" => sig.group,
"type" => sig.type,
"custom_info" => sig.custom_info,
"character_eve_id" => char_id,
"description" => sig.description,
"linked_system_id" => sig.linked_system_id
}
# Merge user params (which may include character_eve_id) with base
attrs = Map.merge(base, params)
:ok =
@@ -133,7 +177,8 @@ defmodule WandererApp.Map.Operations.Signatures do
updated_signatures: [attrs],
removed_signatures: [],
solar_system_id: system.solar_system_id,
character_id: char_id,
# Pass internal UUID here
character_id: validated_char_uuid,
user_id: user_id,
delete_connection_with_sigs: false
})
@@ -151,6 +196,10 @@ defmodule WandererApp.Map.Operations.Signatures do
_ -> {:ok, attrs}
end
else
{:error, :invalid_character} ->
Logger.error("[update_signature] Invalid character_eve_id provided")
{:error, :invalid_character}
err ->
Logger.error("[update_signature] Unexpected error: #{inspect(err)}")
{:error, :unexpected_error}


@@ -34,28 +34,14 @@ defmodule WandererApp.Map.Server.CharactersImpl do
track_characters(map_id, rest)
end
def update_tracked_characters(map_id) do
def invalidate_characters(map_id) do
Task.start_link(fn ->
{:ok, all_map_tracked_character_ids} =
character_ids =
map_id
|> WandererApp.MapCharacterSettingsRepo.get_tracked_by_map_all()
|> case do
{:ok, settings} -> {:ok, settings |> Enum.map(&Map.get(&1, :character_id))}
_ -> {:ok, []}
end
|> WandererApp.Map.get_map!()
|> Map.get(:characters, [])
{:ok, actual_map_tracked_characters} =
WandererApp.Cache.lookup("maps:#{map_id}:tracked_characters", [])
characters_to_remove = actual_map_tracked_characters -- all_map_tracked_character_ids
WandererApp.Cache.insert_or_update(
"map_#{map_id}:invalidate_character_ids",
characters_to_remove,
fn ids ->
(ids ++ characters_to_remove) |> Enum.uniq()
end
)
WandererApp.Cache.insert("map_#{map_id}:invalidate_character_ids", character_ids)
:ok
end)
@@ -78,9 +64,7 @@ defmodule WandererApp.Map.Server.CharactersImpl do
})
end
defp untrack_character(_is_character_map_active, _map_id, character_id) do
:ok
end
defp untrack_character(_is_character_map_active, _map_id, _character_id), do: :ok
defp is_character_map_active?(map_id, character_id) do
case WandererApp.Character.get_character_state(character_id) do
@@ -155,6 +139,12 @@ defmodule WandererApp.Map.Server.CharactersImpl do
Task.start_link(fn ->
with :ok <- WandererApp.Map.remove_character(map_id, character_id),
{:ok, character} <- WandererApp.Character.get_map_character(map_id, character_id) do
# Clean up character-specific cache entries
WandererApp.Cache.delete("map:#{map_id}:character:#{character_id}:solar_system_id")
WandererApp.Cache.delete("map:#{map_id}:character:#{character_id}:station_id")
WandererApp.Cache.delete("map:#{map_id}:character:#{character_id}:structure_id")
WandererApp.Cache.delete("map:#{map_id}:character:#{character_id}:location_updated_at")
Impl.broadcast!(map_id, :character_removed, character)
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
@@ -193,98 +183,103 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end
end
# Calculate optimal concurrency based on character count
# Scales from the base concurrency (32 on an 8-core machine) up to 4x (128) for 300+ characters
defp calculate_max_concurrency(character_count) do
base_concurrency = System.schedulers_online() * 4
cond do
character_count < 100 -> base_concurrency
character_count < 200 -> base_concurrency * 2
character_count < 300 -> base_concurrency * 3
true -> base_concurrency * 4
end
end
def update_characters(map_id) do
start_time = System.monotonic_time(:microsecond)
try do
{:ok, presence_character_ids} =
WandererApp.Cache.lookup("map_#{map_id}:presence_character_ids", [])
{:ok, tracked_character_ids} = WandererApp.Map.get_tracked_character_ids(map_id)
presence_character_ids
|> Task.async_stream(
fn character_id ->
character_updates =
maybe_update_online(map_id, character_id) ++
maybe_update_tracking_status(map_id, character_id) ++
maybe_update_location(map_id, character_id) ++
maybe_update_ship(map_id, character_id) ++
maybe_update_alliance(map_id, character_id) ++
maybe_update_corporation(map_id, character_id)
character_count = length(tracked_character_ids)
character_updates
|> Enum.filter(fn update -> update != :skip end)
|> Enum.map(fn update ->
update
|> case do
{:character_location, location_info, old_location_info} ->
{:ok, map_state} = WandererApp.Map.get_map_state(map_id)
update_location(
map_state,
character_id,
location_info,
old_location_info
)
:broadcast
{:character_ship, _info} ->
:broadcast
{:character_online, _info} ->
:broadcast
{:character_tracking, _info} ->
:broadcast
{:character_alliance, _info} ->
WandererApp.Cache.insert_or_update(
"map_#{map_id}:invalidate_character_ids",
[character_id],
fn ids ->
[character_id | ids] |> Enum.uniq()
end
)
:broadcast
{:character_corporation, _info} ->
WandererApp.Cache.insert_or_update(
"map_#{map_id}:invalidate_character_ids",
[character_id],
fn ids ->
[character_id | ids] |> Enum.uniq()
end
)
:broadcast
_ ->
:skip
end
end)
|> Enum.filter(fn update -> update != :skip end)
|> Enum.uniq()
|> Enum.each(fn update ->
case update do
:broadcast ->
update_character(map_id, character_id)
_ ->
:ok
end
end)
:ok
end,
timeout: :timer.seconds(15),
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task
# Emit telemetry for tracking update cycle start
:telemetry.execute(
[:wanderer_app, :map, :update_characters, :start],
%{character_count: character_count, system_time: System.system_time()},
%{map_id: map_id}
)
|> Enum.each(fn
{:ok, _result} -> :ok
{:error, reason} -> Logger.error("Error in update_characters: #{inspect(reason)}")
end)
# Calculate dynamic concurrency based on character count
max_concurrency = calculate_max_concurrency(character_count)
updated_characters =
tracked_character_ids
|> Task.async_stream(
fn character_id ->
# Use batch cache operations for all character tracking data
process_character_updates_batched(map_id, character_id)
end,
timeout: :timer.seconds(15),
max_concurrency: max_concurrency,
on_timeout: :kill_task
)
|> Enum.reduce([], fn
{:ok, {:updated, character}}, acc ->
[character | acc]
{:ok, _result}, acc ->
acc
{:error, reason}, acc ->
Logger.error("Error in update_characters: #{inspect(reason)}")
acc
end)
unless Enum.empty?(updated_characters) do
# Broadcast to internal channels
Impl.broadcast!(map_id, :characters_updated, %{
characters: updated_characters,
timestamp: DateTime.utc_now()
})
# Broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :characters_updated, %{
characters: updated_characters,
timestamp: DateTime.utc_now()
})
end
# Emit telemetry for successful completion
duration = System.monotonic_time(:microsecond) - start_time
:telemetry.execute(
[:wanderer_app, :map, :update_characters, :complete],
%{
duration: duration,
character_count: character_count,
updated_count: length(updated_characters),
system_time: System.system_time()
},
%{map_id: map_id}
)
:ok
rescue
e ->
# Emit telemetry for error case
duration = System.monotonic_time(:microsecond) - start_time
:telemetry.execute(
[:wanderer_app, :map, :update_characters, :error],
%{
duration: duration,
system_time: System.system_time()
},
%{map_id: map_id, error: Exception.message(e)}
)
Logger.error("""
[Map Server] update_characters => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
@@ -292,14 +287,382 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end
end
defp update_character(map_id, character_id) do
{:ok, character} = WandererApp.Character.get_map_character(map_id, character_id)
Impl.broadcast!(map_id, :character_updated, character)
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :character_updated, character)
defp calculate_character_state_hash(character) do
# Hash all trackable fields for quick comparison
:erlang.phash2(%{
online: character.online,
ship: character.ship,
ship_name: character.ship_name,
ship_item_id: character.ship_item_id,
solar_system_id: character.solar_system_id,
station_id: character.station_id,
structure_id: character.structure_id,
alliance_id: character.alliance_id,
corporation_id: character.corporation_id
})
end
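
A hedged sketch of the fingerprinting idea (field values are made up): `:erlang.phash2/1` hashes the term's value, so equal maps hash equally regardless of key order, while a changed field almost certainly changes the hash; collisions are possible in principle, so this is a fast pre-check, not a proof of equality.

```elixir
old = :erlang.phash2(%{online: true, solar_system_id: 30_000_142})

# Same values written in a different key order: the maps compare equal,
# so their hashes are identical.
same = :erlang.phash2(%{solar_system_id: 30_000_142, online: true})
true = old == same

# A changed field produces a different hash in practice (phash2 collisions
# are rare but possible, hence "hash equal => skip" is only a heuristic).
_changed = :erlang.phash2(%{online: false, solar_system_id: 30_000_142})
```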
defp process_character_updates_batched(map_id, character_id) do
# Step 1: Get current character data for hash comparison
case WandererApp.Character.get_character(character_id) do
{:ok, character} ->
new_hash = calculate_character_state_hash(character)
state_hash_key = "map:#{map_id}:character:#{character_id}:state_hash"
{:ok, old_hash} = WandererApp.Cache.lookup(state_hash_key, nil)
if new_hash == old_hash do
# No changes detected - skip expensive processing (70-90% of cases)
:no_change
else
# Changes detected - proceed with full processing
process_character_changes(map_id, character_id, character, state_hash_key, new_hash)
end
{:error, _error} ->
:ok
end
end
# Process character changes when hash indicates updates
defp process_character_changes(map_id, character_id, character, state_hash_key, new_hash) do
# Step 1: Batch read all cached values for this character
cache_keys = [
"map:#{map_id}:character:#{character_id}:online",
"map:#{map_id}:character:#{character_id}:ship_type_id",
"map:#{map_id}:character:#{character_id}:ship_name",
"map:#{map_id}:character:#{character_id}:solar_system_id",
"map:#{map_id}:character:#{character_id}:station_id",
"map:#{map_id}:character:#{character_id}:structure_id",
"map:#{map_id}:character:#{character_id}:location_updated_at",
"map:#{map_id}:character:#{character_id}:alliance_id",
"map:#{map_id}:character:#{character_id}:corporation_id"
]
{:ok, cached_values} = WandererApp.Cache.lookup_all(cache_keys)
# Step 2: Calculate all updates
{character_updates, cache_updates} =
calculate_character_updates(map_id, character_id, character, cached_values)
# Step 3: Update the state hash in cache
cache_updates = Map.put(cache_updates, state_hash_key, new_hash)
# Step 4: Batch write all cache updates
unless Enum.empty?(cache_updates) do
WandererApp.Cache.insert_all(cache_updates)
end
# Step 5: Process update events
has_updates =
character_updates
|> Enum.filter(fn update -> update != :skip end)
|> Enum.map(fn update ->
case update do
{:character_location, location_info, old_location_info} ->
start_time = System.monotonic_time(:microsecond)
:telemetry.execute(
[:wanderer_app, :character, :location_update, :start],
%{system_time: System.system_time()},
%{
character_id: character_id,
map_id: map_id,
from_system: old_location_info.solar_system_id,
to_system: location_info.solar_system_id
}
)
{:ok, map_state} = WandererApp.Map.get_map_state(map_id)
update_location(
map_state,
character_id,
location_info,
old_location_info
)
duration = System.monotonic_time(:microsecond) - start_time
:telemetry.execute(
[:wanderer_app, :character, :location_update, :complete],
%{duration: duration, system_time: System.system_time()},
%{
character_id: character_id,
map_id: map_id,
from_system: old_location_info.solar_system_id,
to_system: location_info.solar_system_id
}
)
:has_update
{:character_ship, _info} ->
:has_update
{:character_online, %{online: online}} ->
if not online do
WandererApp.Cache.delete("map:#{map_id}:character:#{character_id}:solar_system_id")
end
:has_update
{:character_tracking, _info} ->
:has_update
{:character_alliance, _info} ->
WandererApp.Cache.insert_or_update(
"map_#{map_id}:invalidate_character_ids",
[character_id],
fn ids ->
[character_id | ids] |> Enum.uniq()
end
)
:has_update
{:character_corporation, _info} ->
WandererApp.Cache.insert_or_update(
"map_#{map_id}:invalidate_character_ids",
[character_id],
fn ids ->
[character_id | ids] |> Enum.uniq()
end
)
:has_update
_ ->
:skip
end
end)
|> Enum.any?(fn result -> result == :has_update end)
if has_updates do
case WandererApp.Character.get_map_character(map_id, character_id) do
{:ok, character} ->
{:updated, character}
{:error, _} ->
:ok
end
else
:ok
end
end
# Calculate all character updates in a single pass
defp calculate_character_updates(map_id, character_id, character, cached_values) do
updates = []
cache_updates = %{}
# Check each type of update using specialized functions
{updates, cache_updates} =
check_online_update(map_id, character_id, character, cached_values, updates, cache_updates)
{updates, cache_updates} =
check_ship_update(map_id, character_id, character, cached_values, updates, cache_updates)
{updates, cache_updates} =
check_location_update(
map_id,
character_id,
character,
cached_values,
updates,
cache_updates
)
{updates, cache_updates} =
check_alliance_update(
map_id,
character_id,
character,
cached_values,
updates,
cache_updates
)
{updates, cache_updates} =
check_corporation_update(
map_id,
character_id,
character,
cached_values,
updates,
cache_updates
)
{updates, cache_updates}
end
# Check for online status changes
defp check_online_update(map_id, character_id, character, cached_values, updates, cache_updates) do
online_key = "map:#{map_id}:character:#{character_id}:online"
old_online = Map.get(cached_values, online_key)
if character.online != old_online do
{
[{:character_online, %{online: character.online}} | updates],
Map.put(cache_updates, online_key, character.online)
}
else
{updates, cache_updates}
end
end
# Check for ship changes
defp check_ship_update(map_id, character_id, character, cached_values, updates, cache_updates) do
ship_type_key = "map:#{map_id}:character:#{character_id}:ship_type_id"
ship_name_key = "map:#{map_id}:character:#{character_id}:ship_name"
old_ship_type_id = Map.get(cached_values, ship_type_key)
old_ship_name = Map.get(cached_values, ship_name_key)
if character.ship != old_ship_type_id or character.ship_name != old_ship_name do
{
[
{:character_ship,
%{
ship: character.ship,
ship_name: character.ship_name,
ship_item_id: character.ship_item_id
}}
| updates
],
cache_updates
|> Map.put(ship_type_key, character.ship)
|> Map.put(ship_name_key, character.ship_name)
}
else
{updates, cache_updates}
end
end
# Check for location changes with race condition detection
defp check_location_update(
map_id,
character_id,
character,
cached_values,
updates,
cache_updates
) do
solar_system_key = "map:#{map_id}:character:#{character_id}:solar_system_id"
station_key = "map:#{map_id}:character:#{character_id}:station_id"
structure_key = "map:#{map_id}:character:#{character_id}:structure_id"
location_timestamp_key = "map:#{map_id}:character:#{character_id}:location_updated_at"
old_solar_system_id = Map.get(cached_values, solar_system_key)
old_station_id = Map.get(cached_values, station_key)
old_structure_id = Map.get(cached_values, structure_key)
old_timestamp = Map.get(cached_values, location_timestamp_key)
if character.solar_system_id != old_solar_system_id ||
character.structure_id != old_structure_id ||
character.station_id != old_station_id do
# Race condition detection
{:ok, current_cached_timestamp} =
WandererApp.Cache.lookup(location_timestamp_key)
race_detected =
!is_nil(old_timestamp) && !is_nil(current_cached_timestamp) &&
old_timestamp != current_cached_timestamp
if race_detected do
Logger.warning(
"[CharacterTracking] Race condition detected for character #{character_id} on map #{map_id}: " <>
"cache was modified between read (#{inspect(old_timestamp)}) and write (#{inspect(current_cached_timestamp)})"
)
:telemetry.execute(
[:wanderer_app, :character, :location_update, :race_condition],
%{system_time: System.system_time()},
%{
character_id: character_id,
map_id: map_id,
old_system: old_solar_system_id,
new_system: character.solar_system_id,
old_timestamp: old_timestamp,
current_timestamp: current_cached_timestamp
}
)
end
now = DateTime.utc_now()
{
[
{:character_location,
%{
solar_system_id: character.solar_system_id,
structure_id: character.structure_id,
station_id: character.station_id
}, %{solar_system_id: old_solar_system_id}}
| updates
],
cache_updates
|> Map.put(solar_system_key, character.solar_system_id)
|> Map.put(station_key, character.station_id)
|> Map.put(structure_key, character.structure_id)
|> Map.put(location_timestamp_key, now)
}
else
{updates, cache_updates}
end
end
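The race check above reduces to a small pure predicate: a race is flagged only when both the timestamp read at the start of the pass and the timestamp currently in the cache exist and disagree, meaning another process wrote the cache between our read and our write. A minimal sketch of that predicate (module and function names here are illustrative, not part of the codebase):

```elixir
defmodule RaceCheck do
  # A race is only flagged when both timestamps exist and differ:
  # nil on either side means there is nothing to compare against.
  def race_detected?(old_ts, current_ts) do
    !is_nil(old_ts) and !is_nil(current_ts) and old_ts != current_ts
  end
end
```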
# Check for alliance changes
defp check_alliance_update(
map_id,
character_id,
character,
cached_values,
updates,
cache_updates
) do
alliance_key = "map:#{map_id}:character:#{character_id}:alliance_id"
old_alliance_id = Map.get(cached_values, alliance_key)
if character.alliance_id != old_alliance_id do
{
[{:character_alliance, %{alliance_id: character.alliance_id}} | updates],
Map.put(cache_updates, alliance_key, character.alliance_id)
}
else
{updates, cache_updates}
end
end
# Check for corporation changes
defp check_corporation_update(
map_id,
character_id,
character,
cached_values,
updates,
cache_updates
) do
corporation_key = "map:#{map_id}:character:#{character_id}:corporation_id"
old_corporation_id = Map.get(cached_values, corporation_key)
if character.corporation_id != old_corporation_id do
{
[{:character_corporation, %{corporation_id: character.corporation_id}} | updates],
Map.put(cache_updates, corporation_key, character.corporation_id)
}
else
{updates, cache_updates}
end
end
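Each `check_*_update/6` helper above threads the same `{updates, cache_updates}` accumulator, so the overall diff pass presumably pipes one checker into the next. A hedged sketch of that accumulator pattern in isolation (names and keys are illustrative):

```elixir
defmodule CheckerPipeline do
  # Each checker compares one cached field against the live character
  # struct and either prepends an update event and records the new
  # cache value, or passes the accumulator through untouched.
  def check_field({updates, cache}, key, old_value, new_value, event) do
    if new_value != old_value do
      {[event | updates], Map.put(cache, key, new_value)}
    else
      {updates, cache}
    end
  end

  def run(character, cached) do
    {[], %{}}
    |> check_field("online", Map.get(cached, "online"), character.online,
         {:character_online, %{online: character.online}})
    |> check_field("alliance_id", Map.get(cached, "alliance_id"), character.alliance_id,
         {:character_alliance, %{alliance_id: character.alliance_id}})
  end
end
```

Only the fields that actually changed produce events, and only those keys appear in the batch of cache writes.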
defp update_location(
_state,
_character_id,
_location,
%{solar_system_id: nil}
),
do: :ok
defp update_location(
%{map: %{scope: scope}, map_id: map_id, map_opts: map_opts} =
_state,
@@ -307,49 +670,59 @@ defmodule WandererApp.Map.Server.CharactersImpl do
location,
old_location
) do
start_solar_system_id =
WandererApp.Cache.take("map:#{map_id}:character:#{character_id}:start_solar_system_id")
case is_nil(old_location.solar_system_id) and
is_nil(start_solar_system_id) and
ConnectionsImpl.can_add_location(scope, location.solar_system_id) do
ConnectionsImpl.is_connection_valid(
scope,
old_location.solar_system_id,
location.solar_system_id
)
|> case do
true ->
:ok = SystemsImpl.maybe_add_system(map_id, location, nil, map_opts)
# Add new location system
case SystemsImpl.maybe_add_system(map_id, location, old_location, map_opts) do
:ok ->
:ok
_ ->
if is_nil(start_solar_system_id) || start_solar_system_id == old_location.solar_system_id do
ConnectionsImpl.is_connection_valid(
scope,
old_location.solar_system_id,
location.solar_system_id
)
|> case do
true ->
:ok =
SystemsImpl.maybe_add_system(map_id, location, old_location, map_opts)
{:error, error} ->
Logger.error(
"[CharacterTracking] Failed to add new location system #{location.solar_system_id} for character #{character_id} on map #{map_id}: #{inspect(error)}"
)
end
:ok =
SystemsImpl.maybe_add_system(map_id, old_location, location, map_opts)
# Add old location system (in case it wasn't on the map)
case SystemsImpl.maybe_add_system(map_id, old_location, location, map_opts) do
:ok ->
:ok
if is_character_in_space?(location) do
:ok =
ConnectionsImpl.maybe_add_connection(
map_id,
location,
old_location,
character_id,
false,
nil
)
end
{:error, error} ->
Logger.error(
"[CharacterTracking] Failed to add old location system #{old_location.solar_system_id} for character #{character_id} on map #{map_id}: #{inspect(error)}"
)
end
# Add connection if character is in space
if is_character_in_space?(location) do
case ConnectionsImpl.maybe_add_connection(
map_id,
location,
old_location,
character_id,
false,
nil
) do
:ok ->
:ok
{:error, error} ->
Logger.error(
"[CharacterTracking] Failed to add connection for character #{character_id} on map #{map_id}: #{inspect(error)}"
)
_ ->
:ok
end
else
# Skip adding a connection or system if the character just started tracking on this map
:ok
end
_ ->
:ok
end
end
@@ -390,197 +763,14 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end
defp track_character(map_id, character_id) do
{:ok, %{solar_system_id: solar_system_id} = map_character} =
WandererApp.Character.get_map_character(map_id, character_id, not_present: true)
{:ok, character} =
WandererApp.Character.get_character(character_id)
WandererApp.Cache.delete("character:#{character_id}:tracking_paused")
add_character(map_id, map_character, true)
add_character(map_id, character, true)
WandererApp.Character.TrackerManager.update_track_settings(character_id, %{
map_id: map_id,
track: true,
track_online: true,
track_location: true,
track_ship: true,
solar_system_id: solar_system_id
track: true
})
end
defp maybe_update_online(map_id, character_id) do
with {:ok, old_online} <-
WandererApp.Cache.lookup("map:#{map_id}:character:#{character_id}:online"),
{:ok, %{online: online}} <-
WandererApp.Character.get_character(character_id) do
case old_online != online do
true ->
WandererApp.Cache.insert(
"map:#{map_id}:character:#{character_id}:online",
online
)
[{:character_online, %{online: online}}]
_ ->
[:skip]
end
else
error ->
Logger.error("Failed to update online: #{inspect(error, pretty: true)}")
[:skip]
end
end
defp maybe_update_tracking_status(map_id, character_id) do
with {:ok, old_tracking_paused} <-
WandererApp.Cache.lookup(
"map:#{map_id}:character:#{character_id}:tracking_paused",
false
),
{:ok, tracking_paused} <-
WandererApp.Cache.lookup("character:#{character_id}:tracking_paused", false) do
case old_tracking_paused != tracking_paused do
true ->
WandererApp.Cache.insert(
"map:#{map_id}:character:#{character_id}:tracking_paused",
tracking_paused
)
[{:character_tracking, %{tracking_paused: tracking_paused}}]
_ ->
[:skip]
end
else
error ->
Logger.error("Failed to update character_tracking: #{inspect(error, pretty: true)}")
[:skip]
end
end
defp maybe_update_ship(map_id, character_id) do
with {:ok, old_ship_type_id} <-
WandererApp.Cache.lookup("map:#{map_id}:character:#{character_id}:ship_type_id"),
{:ok, old_ship_name} <-
WandererApp.Cache.lookup("map:#{map_id}:character:#{character_id}:ship_name"),
{:ok, %{ship: ship_type_id, ship_name: ship_name, ship_item_id: ship_item_id}} <-
WandererApp.Character.get_character(character_id) do
case old_ship_type_id != ship_type_id or
old_ship_name != ship_name do
true ->
WandererApp.Cache.insert(
"map:#{map_id}:character:#{character_id}:ship_type_id",
ship_type_id
)
WandererApp.Cache.insert(
"map:#{map_id}:character:#{character_id}:ship_name",
ship_name
)
[
{:character_ship,
%{ship: ship_type_id, ship_name: ship_name, ship_item_id: ship_item_id}}
]
_ ->
[:skip]
end
else
error ->
Logger.error("Failed to update ship: #{inspect(error, pretty: true)}")
[:skip]
end
end
defp maybe_update_location(map_id, character_id) do
{:ok, old_solar_system_id} =
WandererApp.Cache.lookup("map:#{map_id}:character:#{character_id}:solar_system_id")
{:ok, old_station_id} =
WandererApp.Cache.lookup("map:#{map_id}:character:#{character_id}:station_id")
{:ok, old_structure_id} =
WandererApp.Cache.lookup("map:#{map_id}:character:#{character_id}:structure_id")
{:ok, %{solar_system_id: solar_system_id, structure_id: structure_id, station_id: station_id}} =
WandererApp.Character.get_character(character_id)
WandererApp.Cache.insert(
"map:#{map_id}:character:#{character_id}:solar_system_id",
solar_system_id
)
WandererApp.Cache.insert(
"map:#{map_id}:character:#{character_id}:station_id",
station_id
)
WandererApp.Cache.insert(
"map:#{map_id}:character:#{character_id}:structure_id",
structure_id
)
if solar_system_id != old_solar_system_id || structure_id != old_structure_id ||
station_id != old_station_id do
[
{:character_location,
%{
solar_system_id: solar_system_id,
structure_id: structure_id,
station_id: station_id
}, %{solar_system_id: old_solar_system_id}}
]
else
[:skip]
end
end
defp maybe_update_alliance(map_id, character_id) do
with {:ok, old_alliance_id} <-
WandererApp.Cache.lookup("map:#{map_id}:character:#{character_id}:alliance_id"),
{:ok, %{alliance_id: alliance_id}} <-
WandererApp.Character.get_character(character_id) do
case old_alliance_id != alliance_id do
true ->
WandererApp.Cache.insert(
"map:#{map_id}:character:#{character_id}:alliance_id",
alliance_id
)
[{:character_alliance, %{alliance_id: alliance_id}}]
_ ->
[:skip]
end
else
error ->
Logger.error("Failed to update alliance: #{inspect(error, pretty: true)}")
[:skip]
end
end
defp maybe_update_corporation(map_id, character_id) do
with {:ok, old_corporation_id} <-
WandererApp.Cache.lookup("map:#{map_id}:character:#{character_id}:corporation_id"),
{:ok, %{corporation_id: corporation_id}} <-
WandererApp.Character.get_character(character_id) do
case old_corporation_id != corporation_id do
true ->
WandererApp.Cache.insert(
"map:#{map_id}:character:#{character_id}:corporation_id",
corporation_id
)
[{:character_corporation, %{corporation_id: corporation_id}}]
_ ->
[:skip]
end
else
error ->
Logger.error("Failed to update corporation: #{inspect(error, pretty: true)}")
[:skip]
end
end
end


@@ -223,6 +223,7 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
update_connection(map_id, :update_time_status, [:time_status], connection_update, fn
%{time_status: old_time_status},
%{id: connection_id, time_status: time_status} = updated_connection ->
# Handle EOL marking cache separately
case time_status == @connection_time_status_eol do
true ->
if old_time_status != @connection_time_status_eol do
@@ -230,18 +231,30 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
"map_#{map_id}:conn_#{connection_id}:mark_eol_time",
DateTime.utc_now()
)
set_start_time(map_id, connection_id, DateTime.utc_now())
end
_ ->
if old_time_status == @connection_time_status_eol do
WandererApp.Cache.delete("map_#{map_id}:conn_#{connection_id}:mark_eol_time")
set_start_time(map_id, connection_id, DateTime.utc_now())
end
end
# Always reset start_time when status changes (manual override)
# This ensures user manual changes aren't immediately overridden by cleanup
if time_status != old_time_status do
# Emit telemetry for manual time status change
:telemetry.execute(
[:wanderer_app, :connection, :manual_status_change],
%{system_time: System.system_time()},
%{
map_id: map_id,
connection_id: connection_id,
old_time_status: old_time_status,
new_time_status: time_status
}
)
set_start_time(map_id, connection_id, DateTime.utc_now())
maybe_update_linked_signature_time_status(map_id, updated_connection)
end
end)
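The branch above only touches the `mark_eol_time` cache on an actual EOL transition, while `start_time` is reset on any status change at all. That transition table can be expressed as a pure function (here `:eol` stands in for `@connection_time_status_eol`; names are illustrative):

```elixir
defmodule EolTransition do
  # Cache action for the mark_eol_time key, given old and new statuses;
  # :noop when there is no transition into or out of EOL.
  def cache_action(old, :eol) when old != :eol, do: :put_mark_eol_time
  def cache_action(:eol, new) when new != :eol, do: :delete_mark_eol_time
  def cache_action(_old, _new), do: :noop

  # start_time resets on any status change, so a user's manual change
  # is not immediately undone by the periodic cleanup.
  def reset_start_time?(old, new), do: old != new
end
```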
@@ -353,6 +366,25 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
solar_system_source_id,
solar_system_target_id
) do
# Emit telemetry for automatic time status downgrade
elapsed_minutes = DateTime.diff(DateTime.utc_now(), connection_start_time, :minute)
:telemetry.execute(
[:wanderer_app, :connection, :auto_downgrade],
%{
elapsed_minutes: elapsed_minutes,
system_time: System.system_time()
},
%{
map_id: map_id,
connection_id: connection_id,
old_time_status: time_status,
new_time_status: new_time_status,
solar_system_source: solar_system_source_id,
solar_system_target: solar_system_target_id
}
)
set_start_time(map_id, connection_id, DateTime.utc_now())
update_connection_time_status(map_id, %{
@@ -373,36 +405,36 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
solar_system_target: solar_system_target
} = updated_connection
) do
source_system =
WandererApp.Map.find_system_by_location(
with source_system when not is_nil(source_system) <-
WandererApp.Map.find_system_by_location(
map_id,
%{solar_system_id: solar_system_source}
),
target_system when not is_nil(target_system) <-
WandererApp.Map.find_system_by_location(
map_id,
%{solar_system_id: solar_system_target}
),
source_linked_signatures <-
find_linked_signatures(source_system, target_system),
target_linked_signatures <- find_linked_signatures(target_system, source_system) do
update_signatures_time_status(
map_id,
%{solar_system_id: solar_system_source}
source_system.solar_system_id,
source_linked_signatures,
time_status
)
target_system =
WandererApp.Map.find_system_by_location(
update_signatures_time_status(
map_id,
%{solar_system_id: solar_system_target}
target_system.solar_system_id,
target_linked_signatures,
time_status
)
source_linked_signatures =
find_linked_signatures(source_system, target_system)
target_linked_signatures = find_linked_signatures(target_system, source_system)
update_signatures_time_status(
map_id,
source_system.solar_system_id,
source_linked_signatures,
time_status
)
update_signatures_time_status(
map_id,
target_system.solar_system_id,
target_linked_signatures,
time_status
)
else
error ->
Logger.warning("Failed to update_linked_signature_time_status: #{inspect(error)}")
end
end
defp find_linked_signatures(
@@ -438,7 +470,7 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
%{custom_info: updated_custom_info}
end
SignaturesImpl.apply_update_signature(%{map_id: map_id}, sig, update_params)
SignaturesImpl.apply_update_signature(map_id, sig, update_params)
end)
Impl.broadcast!(map_id, :signatures_updated, solar_system_id)
@@ -537,6 +569,12 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
Impl.broadcast!(map_id, :add_connection, connection)
Impl.broadcast!(map_id, :maybe_link_signature, %{
character_id: character_id,
solar_system_source: old_location.solar_system_id,
solar_system_target: location.solar_system_id
})
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :connection_added, %{
connection_id: connection.id,
@@ -548,19 +586,12 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
time_status: connection.time_status
})
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_connection_added, %{
character_id: character_id,
user_id: character.user_id,
map_id: map_id,
solar_system_source_id: old_location.solar_system_id,
solar_system_target_id: location.solar_system_id
})
Impl.broadcast!(map_id, :maybe_link_signature, %{
WandererApp.User.ActivityTracker.track_map_event(:map_connection_added, %{
character_id: character_id,
solar_system_source: old_location.solar_system_id,
solar_system_target: location.solar_system_id
user_id: character.user_id,
map_id: map_id,
solar_system_source_id: old_location.solar_system_id,
solar_system_target_id: location.solar_system_id
})
:ok
@@ -657,12 +688,17 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
)
)
def is_connection_valid(:all, _from_solar_system_id, _to_solar_system_id), do: true
def is_connection_valid(_scope, from_solar_system_id, to_solar_system_id)
when is_nil(from_solar_system_id) or is_nil(to_solar_system_id),
do: false
def is_connection_valid(:all, from_solar_system_id, to_solar_system_id),
do: from_solar_system_id != to_solar_system_id
def is_connection_valid(:none, _from_solar_system_id, _to_solar_system_id), do: false
def is_connection_valid(scope, from_solar_system_id, to_solar_system_id)
when not is_nil(from_solar_system_id) and not is_nil(to_solar_system_id) do
when from_solar_system_id != to_solar_system_id do
with {:ok, known_jumps} <- find_solar_system_jump(from_solar_system_id, to_solar_system_id),
{:ok, from_system_static_info} <- get_system_static_info(from_solar_system_id),
{:ok, to_system_static_info} <- get_system_static_info(to_solar_system_id) do
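The reworked clauses put the nil-endpoint rejection ahead of the scope dispatch and fold the self-loop rejection into guards. The clause-ordering behaviour can be sketched in isolation (a simplified stand-in that skips the jump and static-info lookups):

```elixir
defmodule ConnectionValidity do
  # nil endpoints are rejected before any scope-specific logic runs.
  def valid?(_scope, from, to) when is_nil(from) or is_nil(to), do: false
  # :all accepts everything except a self-loop.
  def valid?(:all, from, to), do: from != to
  # :none rejects everything.
  def valid?(:none, _from, _to), do: false
  # Other scopes would consult known jumps / system static info; this
  # placeholder only requires distinct endpoints.
  def valid?(_scope, from, to) when from != to, do: true
  def valid?(_scope, _from, _to), do: false
end
```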


@@ -25,12 +25,11 @@ defmodule WandererApp.Map.Server.Impl do
]
@pubsub_client Application.compile_env(:wanderer_app, :pubsub_client)
@connections_cleanup_timeout :timer.minutes(1)
@ddrt Application.compile_env(:wanderer_app, :ddrt)
@update_presence_timeout :timer.seconds(5)
@update_characters_timeout :timer.seconds(1)
@update_tracked_characters_timeout :timer.minutes(1)
@invalidate_characters_timeout :timer.hours(1)
def new(), do: __struct__()
def new(args), do: __struct__(args)
@@ -45,19 +44,77 @@ defmodule WandererApp.Map.Server.Impl do
}
|> new()
with {:ok, map} <-
WandererApp.MapRepo.get(map_id, [
:owner,
:characters,
acls: [
:owner_id,
members: [:role, :eve_character_id, :eve_corporation_id, :eve_alliance_id]
]
]),
{:ok, systems} <- WandererApp.MapSystemRepo.get_visible_by_map(map_id),
{:ok, connections} <- WandererApp.MapConnectionRepo.get_by_map(map_id),
{:ok, subscription_settings} <-
WandererApp.Map.SubscriptionManager.get_active_map_subscription(map_id) do
# Parallelize database queries for faster initialization
start_time = System.monotonic_time(:millisecond)
tasks = [
Task.async(fn ->
{:map,
WandererApp.MapRepo.get(map_id, [
:owner,
:characters,
acls: [
:owner_id,
members: [:role, :eve_character_id, :eve_corporation_id, :eve_alliance_id]
]
])}
end),
Task.async(fn ->
{:systems, WandererApp.MapSystemRepo.get_visible_by_map(map_id)}
end),
Task.async(fn ->
{:connections, WandererApp.MapConnectionRepo.get_by_map(map_id)}
end),
Task.async(fn ->
{:subscription, WandererApp.Map.SubscriptionManager.get_active_map_subscription(map_id)}
end)
]
results = Task.await_many(tasks, :timer.seconds(15))
duration = System.monotonic_time(:millisecond) - start_time
# Emit telemetry for slow initializations
if duration > 5_000 do
Logger.warning("[Map Server] Slow map state initialization: #{map_id} took #{duration}ms")
:telemetry.execute(
[:wanderer_app, :map, :slow_init],
%{duration_ms: duration},
%{map_id: map_id}
)
end
# Extract results
map_result =
Enum.find_value(results, fn
{:map, result} -> result
_ -> nil
end)
systems_result =
Enum.find_value(results, fn
{:systems, result} -> result
_ -> nil
end)
connections_result =
Enum.find_value(results, fn
{:connections, result} -> result
_ -> nil
end)
subscription_result =
Enum.find_value(results, fn
{:subscription, result} -> result
_ -> nil
end)
# Process results
with {:ok, map} <- map_result,
{:ok, systems} <- systems_result,
{:ok, connections} <- connections_result,
{:ok, subscription_settings} <- subscription_result do
initial_state
|> init_map(
map,
@@ -88,13 +145,12 @@ defmodule WandererApp.Map.Server.Impl do
"maps:#{map_id}"
)
WandererApp.Map.CacheRTree.init_tree("rtree_#{map_id}", %{width: 150, verbose: false})
Process.send_after(self(), {:update_characters, map_id}, @update_characters_timeout)
Process.send_after(
self(),
{:update_tracked_characters, map_id},
@update_tracked_characters_timeout
{:invalidate_characters, map_id},
@invalidate_characters_timeout
)
Process.send_after(self(), {:update_presence, map_id}, @update_presence_timeout)
@@ -143,17 +199,11 @@ defmodule WandererApp.Map.Server.Impl do
defdelegate cleanup_systems(map_id), to: SystemsImpl
defdelegate cleanup_connections(map_id), to: ConnectionsImpl
defdelegate cleanup_characters(map_id), to: CharactersImpl
defdelegate untrack_characters(map_id, characters_ids), to: CharactersImpl
defdelegate add_system(map_id, system_info, user_id, character_id, opts \\ []), to: SystemsImpl
defdelegate paste_connections(map_id, connections, user_id, character_id), to: ConnectionsImpl
defdelegate paste_systems(map_id, systems, user_id, character_id, opts), to: SystemsImpl
defdelegate add_system_comment(map_id, comment_info, user_id, character_id), to: SystemsImpl
defdelegate remove_system_comment(map_id, comment_id, user_id, character_id), to: SystemsImpl
defdelegate delete_systems(
@@ -165,49 +215,27 @@ defmodule WandererApp.Map.Server.Impl do
to: SystemsImpl
defdelegate update_system_name(map_id, update), to: SystemsImpl
defdelegate update_system_description(map_id, update), to: SystemsImpl
defdelegate update_system_status(map_id, update), to: SystemsImpl
defdelegate update_system_tag(map_id, update), to: SystemsImpl
defdelegate update_system_temporary_name(map_id, update), to: SystemsImpl
defdelegate update_system_locked(map_id, update), to: SystemsImpl
defdelegate update_system_labels(map_id, update), to: SystemsImpl
defdelegate update_system_linked_sig_eve_id(map_id, update), to: SystemsImpl
defdelegate update_system_position(map_id, update), to: SystemsImpl
defdelegate add_hub(map_id, hub_info), to: SystemsImpl
defdelegate remove_hub(map_id, hub_info), to: SystemsImpl
defdelegate add_ping(map_id, ping_info), to: PingsImpl
defdelegate cancel_ping(map_id, ping_info), to: PingsImpl
defdelegate add_connection(map_id, connection_info), to: ConnectionsImpl
defdelegate delete_connection(map_id, connection_info), to: ConnectionsImpl
defdelegate get_connection_info(map_id, connection_info), to: ConnectionsImpl
defdelegate update_connection_time_status(map_id, connection_update), to: ConnectionsImpl
defdelegate update_connection_type(map_id, connection_update), to: ConnectionsImpl
defdelegate update_connection_mass_status(map_id, connection_update), to: ConnectionsImpl
defdelegate update_connection_ship_size_type(map_id, connection_update), to: ConnectionsImpl
defdelegate update_connection_locked(map_id, connection_update), to: ConnectionsImpl
defdelegate update_connection_custom_info(map_id, connection_update), to: ConnectionsImpl
defdelegate update_signatures(map_id, signatures_update), to: SignaturesImpl
def import_settings(map_id, settings, user_id) do
@@ -274,14 +302,14 @@ defmodule WandererApp.Map.Server.Impl do
CharactersImpl.update_characters(map_id)
end
def handle_event({:update_tracked_characters, map_id} = event) do
def handle_event({:invalidate_characters, map_id} = event) do
Process.send_after(
self(),
event,
@update_tracked_characters_timeout
@invalidate_characters_timeout
)
CharactersImpl.update_tracked_characters(map_id)
CharactersImpl.invalidate_characters(map_id)
end
def handle_event({:update_presence, map_id} = event) do
@@ -358,6 +386,13 @@ defmodule WandererApp.Map.Server.Impl do
update_options(map_id, options)
end
def handle_event(:map_deleted) do
# Map has been deleted - this event is handled by MapPool to stop the server
# and by MapLive to redirect users. Nothing to do here.
Logger.debug("Map deletion event received, will be handled by MapPool")
:ok
end
def handle_event({ref, _result}) when is_reference(ref) do
Process.demonitor(ref, [:flush])
end
@@ -452,6 +487,8 @@ defmodule WandererApp.Map.Server.Impl do
) do
{:ok, options} = WandererApp.MapRepo.options_to_form_data(initial_map)
@ddrt.init_tree("rtree_#{map_id}", %{width: 150, verbose: false})
map =
initial_map
|> WandererApp.Map.new()


@@ -72,39 +72,53 @@ defmodule WandererApp.Map.Server.PingsImpl do
type: type
} = _ping_info
) do
with {:ok, character} <- WandererApp.Character.get_character(character_id),
{:ok,
%{system: %{id: system_id, name: system_name, solar_system_id: solar_system_id}} = ping} <-
WandererApp.MapPingsRepo.get_by_id(ping_id),
:ok <- WandererApp.MapPingsRepo.destroy(ping) do
Impl.broadcast!(map_id, :ping_cancelled, %{
id: ping_id,
solar_system_id: solar_system_id,
type: type
})
case WandererApp.MapPingsRepo.get_by_id(ping_id) do
{:ok,
%{system: %{id: system_id, name: system_name, solar_system_id: solar_system_id}} = ping} ->
with {:ok, character} <- WandererApp.Character.get_character(character_id),
:ok <- WandererApp.MapPingsRepo.destroy(ping) do
Impl.broadcast!(map_id, :ping_cancelled, %{
id: ping_id,
solar_system_id: solar_system_id,
type: type
})
# Broadcast rally point removal events to external clients (webhooks/SSE)
if type == 1 do
WandererApp.ExternalEvents.broadcast(map_id, :rally_point_removed, %{
id: ping_id,
solar_system_id: solar_system_id,
system_id: system_id,
character_id: character_id,
character_name: character.name,
character_eve_id: character.eve_id,
system_name: system_name
})
end
# Broadcast rally point removal events to external clients (webhooks/SSE)
if type == 1 do
WandererApp.ExternalEvents.broadcast(map_id, :rally_point_removed, %{
id: ping_id,
solar_system_id: solar_system_id,
system_id: system_id,
character_id: character_id,
character_name: character.name,
character_eve_id: character.eve_id,
system_name: system_name
})
end
WandererApp.User.ActivityTracker.track_map_event(:map_rally_cancelled, %{
character_id: character_id,
user_id: user_id,
map_id: map_id,
solar_system_id: solar_system_id
})
else
error ->
Logger.error("Failed to destroy ping: #{inspect(error, pretty: true)}")
end
{:error, %Ash.Error.Query.NotFound{}} ->
# Ping already deleted (possibly by cascade deletion from map/system/character removal,
# auto-expiry, or concurrent cancellation). This is not an error - the desired state
# (ping is gone) is already achieved. Just broadcast the cancellation event.
Logger.debug(
"Ping #{ping_id} not found during cancellation - already deleted, skipping broadcast"
)
:ok
WandererApp.User.ActivityTracker.track_map_event(:map_rally_cancelled, %{
character_id: character_id,
user_id: user_id,
map_id: map_id,
solar_system_id: solar_system_id
})
else
error ->
Logger.error("Failed to cancel_ping: #{inspect(error, pretty: true)}")
Logger.error("Failed to fetch ping for cancellation: #{inspect(error, pretty: true)}")
end
end
end


@@ -212,9 +212,9 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
defp maybe_update_connection_time_status(
map_id,
%{custom_info: old_custom_info} = old_sig,
%{custom_info: old_custom_info} = _old_sig,
%{custom_info: new_custom_info, system_id: system_id, linked_system_id: linked_system_id} =
updated_sig
_updated_sig
)
when not is_nil(linked_system_id) do
old_time_status = get_time_status(old_custom_info)
@@ -235,9 +235,9 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
defp maybe_update_connection_mass_status(
map_id,
%{type: old_type} = old_sig,
%{type: old_type} = _old_sig,
%{type: new_type, system_id: system_id, linked_system_id: linked_system_id} =
updated_sig
_updated_sig
)
when not is_nil(linked_system_id) do
if old_type != new_type do
@@ -279,7 +279,8 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
group: sig["group"],
type: Map.get(sig, "type"),
custom_info: Map.get(sig, "custom_info"),
character_eve_id: character_eve_id,
# Use character_eve_id from sig if provided, otherwise use the default
character_eve_id: Map.get(sig, "character_eve_id", character_eve_id),
deleted: false
}
end)


@@ -45,7 +45,7 @@ defmodule WandererApp.Map.Server.SystemsImpl do
} = system_info,
user_id,
character_id,
opts
_opts
) do
map_id
|> WandererApp.Map.check_location(%{solar_system_id: solar_system_id})
@@ -100,8 +100,8 @@ defmodule WandererApp.Map.Server.SystemsImpl do
%{
solar_system_id: solar_system_id,
text: text
} = comment_info,
user_id,
} = _comment_info,
_user_id,
character_id
) do
system =
@@ -431,6 +431,16 @@ defmodule WandererApp.Map.Server.SystemsImpl do
def maybe_add_system(map_id, location, old_location, map_opts)
when not is_nil(location) do
:telemetry.execute(
[:wanderer_app, :map, :system_addition, :start],
%{system_time: System.system_time()},
%{
map_id: map_id,
solar_system_id: location.solar_system_id,
from_system: old_location && old_location.solar_system_id
}
)
case WandererApp.Map.check_location(map_id, location) do
{:ok, location} ->
rtree_name = "rtree_#{map_id}"
@@ -481,49 +491,142 @@ defmodule WandererApp.Map.Server.SystemsImpl do
position_y: updated_system.position_y
})
:telemetry.execute(
[:wanderer_app, :map, :system_addition, :complete],
%{system_time: System.system_time()},
%{
map_id: map_id,
solar_system_id: updated_system.solar_system_id,
system_id: updated_system.id,
operation: :update_existing
}
)
:ok
_ ->
{:ok, solar_system_info} =
WandererApp.CachedInfo.get_system_static_info(location.solar_system_id)
WandererApp.MapSystemRepo.create(%{
map_id: map_id,
solar_system_id: location.solar_system_id,
name: solar_system_info.solar_system_name,
position_x: position.x,
position_y: position.y
})
WandererApp.CachedInfo.get_system_static_info(location.solar_system_id)
|> case do
{:ok, new_system} ->
@ddrt.insert(
{new_system.solar_system_id,
WandererApp.Map.PositionCalculator.get_system_bounding_rect(new_system)},
rtree_name
)
WandererApp.Cache.put(
"map_#{map_id}:system_#{new_system.id}:last_activity",
DateTime.utc_now(),
ttl: @system_inactive_timeout
)
WandererApp.Map.add_system(map_id, new_system)
Impl.broadcast!(map_id, :add_system, new_system)
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :add_system, %{
solar_system_id: new_system.solar_system_id,
name: new_system.name,
position_x: new_system.position_x,
position_y: new_system.position_y
{:ok, solar_system_info} ->
# Use upsert instead of create - handles race conditions gracefully
WandererApp.MapSystemRepo.upsert(%{
map_id: map_id,
solar_system_id: location.solar_system_id,
name: solar_system_info.solar_system_name,
position_x: position.x,
position_y: position.y
})
|> case do
{:ok, system} ->
# System was either created or updated - both cases are success
@ddrt.insert(
{system.solar_system_id,
WandererApp.Map.PositionCalculator.get_system_bounding_rect(system)},
rtree_name
)
:ok
WandererApp.Cache.put(
"map_#{map_id}:system_#{system.id}:last_activity",
DateTime.utc_now(),
ttl: @system_inactive_timeout
)
WandererApp.Map.add_system(map_id, system)
Impl.broadcast!(map_id, :add_system, system)
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :add_system, %{
solar_system_id: system.solar_system_id,
name: system.name,
position_x: system.position_x,
position_y: system.position_y
})
:telemetry.execute(
[:wanderer_app, :map, :system_addition, :complete],
%{system_time: System.system_time()},
%{
map_id: map_id,
solar_system_id: system.solar_system_id,
system_id: system.id,
operation: :upsert
}
)
:ok
{:error, error} = result ->
Logger.warning(
"[CharacterTracking] Failed to upsert system #{location.solar_system_id} on map #{map_id}: #{inspect(error, pretty: true)}"
)
:telemetry.execute(
[:wanderer_app, :map, :system_addition, :error],
%{system_time: System.system_time()},
%{
map_id: map_id,
solar_system_id: location.solar_system_id,
error: error,
reason: :db_upsert_failed
}
)
result
error ->
Logger.warning(
"[CharacterTracking] Failed to upsert system #{location.solar_system_id} on map #{map_id}: #{inspect(error, pretty: true)}"
)
:telemetry.execute(
[:wanderer_app, :map, :system_addition, :error],
%{system_time: System.system_time()},
%{
map_id: map_id,
solar_system_id: location.solar_system_id,
error: error,
reason: :db_upsert_failed_unexpected
}
)
{:error, error}
end
{:error, error} = result ->
Logger.warning(
"[CharacterTracking] Failed to add system #{inspect(location.solar_system_id)} on map #{map_id}: #{inspect(error, pretty: true)}"
)
:telemetry.execute(
[:wanderer_app, :map, :system_addition, :error],
%{system_time: System.system_time()},
%{
map_id: map_id,
solar_system_id: location.solar_system_id,
error: error,
reason: :db_upsert_failed
}
)
result
error ->
Logger.warning("Failed to create system: #{inspect(error, pretty: true)}")
:ok
Logger.warning(
"[CharacterTracking] Failed to add system #{inspect(location.solar_system_id)} on map #{map_id}: #{inspect(error, pretty: true)}"
)
:telemetry.execute(
[:wanderer_app, :map, :system_addition, :error],
%{system_time: System.system_time()},
%{
map_id: map_id,
solar_system_id: location.solar_system_id,
error: error,
reason: :db_upsert_failed_unexpected
}
)
{:error, error}
end
end
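Switching from `create` to `upsert` makes concurrent "add system" passes converge: whichever process wins the insert, the loser updates the same row instead of failing on the unique constraint, so both callers see `{:ok, system}`. The idea in miniature, using an in-memory map as a stand-in for the repo (not the actual Ash call):

```elixir
defmodule UpsertSketch do
  # Rows are keyed by {map_id, solar_system_id}; an existing row is
  # merged in place rather than raising a uniqueness error, so two
  # racing callers both succeed and end up with one row.
  def upsert(table, %{map_id: m, solar_system_id: s} = attrs) do
    key = {m, s}
    row = Map.merge(Map.get(table, key, %{}), attrs)
    {:ok, Map.put(table, key, row), row}
  end
end
```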
@@ -642,13 +745,12 @@ defmodule WandererApp.Map.Server.SystemsImpl do
position_y: system.position_y
})
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:system_added, %{
character_id: character_id,
user_id: user_id,
map_id: map_id,
solar_system_id: solar_system_id
})
WandererApp.User.ActivityTracker.track_map_event(:system_added, %{
character_id: character_id,
user_id: user_id,
map_id: map_id,
solar_system_id: solar_system_id
})
end
defp maybe_update_extra_info(system, nil), do: system
@@ -805,6 +907,10 @@ defmodule WandererApp.Map.Server.SystemsImpl do
update_map_system_last_activity(map_id, updated_system)
else
{:error, error} ->
Logger.error("Failed to update system: #{inspect(error, pretty: true)}")
:ok
error ->
Logger.error("Failed to update system: #{inspect(error, pretty: true)}")
:ok

View File

@@ -0,0 +1,429 @@
defmodule WandererApp.Map.SlugRecovery do
@moduledoc """
Handles automatic recovery from duplicate map slug scenarios.
This module provides functions to:
- Detect duplicate slugs in the database (including deleted maps)
- Automatically fix duplicates by renaming newer maps
- Verify and recreate unique indexes (enforced on all maps, including deleted)
- Safely handle race conditions during recovery
## Slug Uniqueness Policy
All map slugs must be unique across the entire maps_v1 table, including
deleted maps. This prevents confusion and ensures that a slug can always
unambiguously identify a specific map in the system's history.
The recovery process is designed to be:
- Idempotent (safe to run multiple times)
- Production-safe (minimal locking, fast execution)
- Observable (telemetry events for monitoring)
"""
require Logger
alias WandererApp.Repo
@doc """
Recovers from a duplicate slug scenario for a specific slug.
This function:
1. Finds all maps with the given slug (including deleted)
2. Keeps the oldest map with the original slug
3. Renames newer duplicates with numeric suffixes
4. Verifies the unique index exists
Returns:
- `{:ok, result}` - Recovery successful
- `{:error, reason}` - Recovery failed
## Examples
iex> recover_duplicate_slug("home-2")
{:ok, %{fixed_count: 1, kept_map_id: "...", renamed_maps: [...]}}
"""
def recover_duplicate_slug(slug) do
start_time = System.monotonic_time(:millisecond)
Logger.warning("Starting slug recovery for '#{slug}'",
slug: slug,
operation: :recover_duplicate_slug
)
:telemetry.execute(
[:wanderer_app, :map, :slug_recovery, :start],
%{system_time: System.system_time()},
%{slug: slug, operation: :recover_duplicate_slug}
)
result =
Repo.transaction(fn ->
# Find all maps with this slug (including deleted), ordered by insertion time
duplicates = find_duplicate_maps(slug)
case duplicates do
[] ->
Logger.info("No maps found with slug '#{slug}' during recovery")
%{fixed_count: 0, kept_map_id: nil, renamed_maps: []}
[_single_map] ->
Logger.info("Only one map found with slug '#{slug}', no recovery needed")
%{fixed_count: 0, kept_map_id: nil, renamed_maps: []}
[kept_map | maps_to_rename] ->
# Convert binary UUID to string for consistency
kept_map_id_str =
if is_binary(kept_map.id), do: Ecto.UUID.load!(kept_map.id), else: kept_map.id
Logger.warning(
"Found #{length(maps_to_rename)} duplicate maps for slug '#{slug}', fixing...",
slug: slug,
kept_map_id: kept_map_id_str,
duplicate_count: length(maps_to_rename)
)
# Rename the duplicate maps
renamed_maps =
maps_to_rename
|> Enum.with_index(2)
|> Enum.map(fn {map, index} ->
new_slug = generate_unique_slug(slug, index)
rename_map(map, new_slug)
end)
%{
fixed_count: length(renamed_maps),
kept_map_id: kept_map_id_str,
renamed_maps: renamed_maps
}
end
end)
case result do
{:ok, recovery_result} ->
duration = System.monotonic_time(:millisecond) - start_time
:telemetry.execute(
[:wanderer_app, :map, :slug_recovery, :complete],
%{
duration_ms: duration,
fixed_count: recovery_result.fixed_count,
system_time: System.system_time()
},
%{slug: slug, result: recovery_result}
)
Logger.info("Slug recovery completed successfully",
slug: slug,
fixed_count: recovery_result.fixed_count,
duration_ms: duration
)
{:ok, recovery_result}
{:error, reason} = error ->
duration = System.monotonic_time(:millisecond) - start_time
:telemetry.execute(
[:wanderer_app, :map, :slug_recovery, :error],
%{duration_ms: duration, system_time: System.system_time()},
%{slug: slug, error: inspect(reason)}
)
Logger.error("Slug recovery failed",
slug: slug,
error: inspect(reason),
duration_ms: duration
)
error
end
end
@doc """
Verifies that the unique index on map slugs exists.
If missing, attempts to create it (after fixing any duplicates).
Returns:
- `{:ok, :exists}` - Index already exists
- `{:ok, :created}` - Index was created
- `{:error, reason}` - Failed to create index
"""
def verify_unique_index do
Logger.debug("Verifying unique index on maps_v1.slug")
# Check if the index exists
index_query = """
SELECT 1
FROM pg_indexes
WHERE tablename = 'maps_v1'
AND indexname = 'maps_v1_unique_slug_index'
LIMIT 1
"""
case Repo.query(index_query, []) do
{:ok, %{rows: [[1]]}} ->
Logger.debug("Unique index exists")
{:ok, :exists}
{:ok, %{rows: []}} ->
Logger.warning("Unique index missing, attempting to create")
create_unique_index()
{:error, reason} ->
Logger.error("Failed to check for unique index", error: inspect(reason))
{:error, reason}
end
end
@doc """
Performs a full recovery scan of all maps, fixing any duplicates found.
Processes both deleted and non-deleted maps.
This function will:
1. Drop the unique index if it exists (to allow fixing duplicates)
2. Find and fix all duplicate slugs
3. Return statistics about the recovery
Note: This function does NOT recreate the index. Call `verify_unique_index/0`
after this function completes to ensure the index is recreated.
This is a more expensive operation and should be run:
- During maintenance windows
- After detecting multiple duplicate slug errors
- As part of deployment verification
Returns:
- `{:ok, stats}` - Recovery completed with statistics
- `{:error, reason}` - Recovery failed
"""
def recover_all_duplicates do
Logger.info("Starting full duplicate slug recovery (including deleted maps)")
start_time = System.monotonic_time(:millisecond)
:telemetry.execute(
[:wanderer_app, :map, :full_recovery, :start],
%{system_time: System.system_time()},
%{}
)
# Drop the unique index if it exists to allow fixing duplicates
drop_unique_index_if_exists()
# Find all slugs that have duplicates (including deleted maps)
duplicate_slugs_query = """
SELECT slug, COUNT(*) as count
FROM maps_v1
GROUP BY slug
HAVING COUNT(*) > 1
"""
case Repo.query(duplicate_slugs_query, []) do
{:ok, %{rows: []}} ->
Logger.info("No duplicate slugs found")
{:ok, %{total_slugs_fixed: 0, total_maps_renamed: 0}}
{:ok, %{rows: duplicate_rows}} ->
Logger.warning("Found #{length(duplicate_rows)} slugs with duplicates",
duplicate_count: length(duplicate_rows)
)
# Fix each duplicate slug
results =
Enum.map(duplicate_rows, fn [slug, _count] ->
case recover_duplicate_slug(slug) do
{:ok, result} -> result
{:error, _} -> %{fixed_count: 0, kept_map_id: nil, renamed_maps: []}
end
end)
stats = %{
total_slugs_fixed: length(results),
total_maps_renamed: Enum.sum(Enum.map(results, & &1.fixed_count))
}
duration = System.monotonic_time(:millisecond) - start_time
:telemetry.execute(
[:wanderer_app, :map, :full_recovery, :complete],
%{
duration_ms: duration,
slugs_fixed: stats.total_slugs_fixed,
maps_renamed: stats.total_maps_renamed,
system_time: System.system_time()
},
%{stats: stats}
)
Logger.info("Full recovery completed",
stats: stats,
duration_ms: duration
)
{:ok, stats}
{:error, reason} = error ->
Logger.error("Failed to query for duplicates", error: inspect(reason))
error
end
end
# Private functions
defp find_duplicate_maps(slug) do
# Find all maps (including deleted) with this slug
query = """
SELECT id, name, slug, deleted, inserted_at
FROM maps_v1
WHERE slug = $1
ORDER BY inserted_at ASC
"""
case Repo.query(query, [slug]) do
{:ok, %{rows: rows}} ->
Enum.map(rows, fn [id, name, slug, deleted, inserted_at] ->
%{id: id, name: name, slug: slug, deleted: deleted, inserted_at: inserted_at}
end)
{:error, reason} ->
Logger.error("Failed to query for duplicate maps",
slug: slug,
error: inspect(reason)
)
[]
end
end
defp rename_map(map, new_slug) do
# Convert binary UUID to string for logging
map_id_str = if is_binary(map.id), do: Ecto.UUID.load!(map.id), else: map.id
Logger.info("Renaming map #{map_id_str} from '#{map.slug}' to '#{new_slug}'",
map_id: map_id_str,
old_slug: map.slug,
new_slug: new_slug,
deleted: map.deleted
)
update_query = """
UPDATE maps_v1
SET slug = $1, updated_at = NOW()
WHERE id = $2
"""
case Repo.query(update_query, [new_slug, map.id]) do
{:ok, _} ->
Logger.info("Successfully renamed map #{map_id_str} to '#{new_slug}'")
%{
map_id: map_id_str,
old_slug: map.slug,
new_slug: new_slug,
map_name: map.name,
deleted: map.deleted
}
{:error, reason} ->
map_id_str = if is_binary(map.id), do: Ecto.UUID.load!(map.id), else: map.id
Logger.error("Failed to rename map #{map_id_str}",
map_id: map_id_str,
old_slug: map.slug,
new_slug: new_slug,
error: inspect(reason)
)
%{
map_id: map_id_str,
old_slug: map.slug,
new_slug: nil,
error: reason
}
end
end
defp generate_unique_slug(base_slug, index) do
candidate = "#{base_slug}-#{index}"
# Verify this slug is actually unique (check all maps, including deleted)
query = "SELECT 1 FROM maps_v1 WHERE slug = $1 LIMIT 1"
case Repo.query(query, [candidate]) do
{:ok, %{rows: []}} ->
candidate
{:ok, %{rows: [[1]]}} ->
# This slug is taken, try the next one
generate_unique_slug(base_slug, index + 1)
{:error, _} ->
# On error, be conservative and try next number
generate_unique_slug(base_slug, index + 1)
end
end
defp create_unique_index do
Logger.warning("Creating unique index on maps_v1.slug")
# Create index on all maps (including deleted ones)
# This enforces slug uniqueness across all maps regardless of deletion status
create_index_query = """
CREATE UNIQUE INDEX CONCURRENTLY IF NOT EXISTS maps_v1_unique_slug_index
ON maps_v1 (slug)
"""
case Repo.query(create_index_query, []) do
{:ok, _} ->
Logger.info("Successfully created unique index (includes deleted maps)")
:telemetry.execute(
[:wanderer_app, :map, :index_created],
%{system_time: System.system_time()},
%{index_name: "maps_v1_unique_slug_index"}
)
{:ok, :created}
{:error, reason} ->
Logger.error("Failed to create unique index", error: inspect(reason))
{:error, reason}
end
end
defp drop_unique_index_if_exists do
Logger.debug("Checking if unique index exists before recovery")
check_query = """
SELECT 1
FROM pg_indexes
WHERE tablename = 'maps_v1'
AND indexname = 'maps_v1_unique_slug_index'
LIMIT 1
"""
case Repo.query(check_query, []) do
{:ok, %{rows: [[1]]}} ->
Logger.info("Dropping unique index to allow duplicate recovery")
drop_query = "DROP INDEX IF EXISTS maps_v1_unique_slug_index"
case Repo.query(drop_query, []) do
{:ok, _} ->
Logger.info("Successfully dropped unique index")
:ok
{:error, reason} ->
Logger.warning("Failed to drop unique index", error: inspect(reason))
:ok
end
{:ok, %{rows: []}} ->
Logger.debug("Unique index does not exist, no need to drop")
:ok
{:error, reason} ->
Logger.warning("Failed to check for unique index", error: inspect(reason))
:ok
end
end
end
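The suffixing scheme in `generate_unique_slug/2` can be sketched in isolation as a pure function over a set of already-taken slugs. This is a simplified stand-in (the real module probes the `maps_v1` table, including deleted maps, rather than an in-memory set):

```elixir
defmodule SlugSuffix do
  # Simplified sketch of the recovery naming scheme: starting at "-2",
  # probe candidates until one is free. The real generate_unique_slug/2
  # queries maps_v1 (including deleted maps) instead of checking a set.
  def next_slug(base, taken, index \\ 2) do
    candidate = "#{base}-#{index}"

    if MapSet.member?(taken, candidate) do
      next_slug(base, taken, index + 1)
    else
      candidate
    end
  end
end

taken = MapSet.new(["home", "home-2", "home-3"])
IO.puts(SlugSuffix.next_slug("home", taken))
```

As in the module, the recursion simply advances past every taken suffix, so it terminates as soon as a gap appears in the sequence.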

View File

@@ -23,10 +23,12 @@ defmodule WandererApp.Release do
IO.puts("Run migrations..")
prepare()
for repo <- repos() do
for repo <- repos do
{:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
end
run_post_migration_tasks()
:init.stop()
end
@@ -76,6 +78,8 @@ defmodule WandererApp.Release do
Enum.each(streaks, fn {repo, up_to_version} ->
{:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, to: up_to_version))
end)
run_post_migration_tasks()
end
defp migration_streaks(pending_migrations) do
@@ -215,4 +219,40 @@ defmodule WandererApp.Release do
IO.puts("Starting repos..")
Enum.each(repos(), & &1.start_link(pool_size: 2))
end
defp run_post_migration_tasks do
IO.puts("Running post-migration tasks..")
# Recover any duplicate map slugs
IO.puts("Checking for duplicate map slugs..")
case WandererApp.Map.SlugRecovery.recover_all_duplicates() do
{:ok, %{total_slugs_fixed: 0}} ->
IO.puts("No duplicate slugs found.")
{:ok, %{total_slugs_fixed: count, total_maps_renamed: renamed}} ->
IO.puts("Successfully fixed #{count} duplicate slug(s), renamed #{renamed} map(s).")
{:error, reason} ->
IO.puts("Warning: Failed to recover duplicate slugs: #{inspect(reason)}")
IO.puts("Application will continue, but you may need to manually fix duplicate slugs.")
end
# Ensure the unique index exists after recovery
IO.puts("Verifying unique index on map slugs..")
case WandererApp.Map.SlugRecovery.verify_unique_index() do
{:ok, :exists} ->
IO.puts("Unique index already exists.")
{:ok, :created} ->
IO.puts("Successfully created unique index.")
{:error, reason} ->
IO.puts("Warning: Failed to verify/create unique index: #{inspect(reason)}")
IO.puts("You may need to manually create the index.")
end
IO.puts("Post-migration tasks completed.")
end
end

View File

@@ -3,11 +3,25 @@ defmodule WandererApp.MapPingsRepo do
require Logger
def get_by_id(ping_id),
  do: WandererApp.Api.MapPing.by_id!(ping_id) |> Ash.load([:system])
def get_by_map(map_id),
  do: WandererApp.Api.MapPing.by_map!(%{map_id: map_id}) |> Ash.load([:character, :system])
def get_by_id(ping_id) do
  case WandererApp.Api.MapPing.by_id(ping_id) do
    {:ok, ping} ->
      ping |> Ash.load([:system])
    error ->
      error
  end
end
def get_by_map(map_id) do
  case WandererApp.Api.MapPing.by_map(%{map_id: map_id}) do
    {:ok, ping} ->
      ping |> Ash.load([:character, :system])
    error ->
      error
  end
end
def get_by_map_and_system!(map_id, system_id),
do: WandererApp.Api.MapPing.by_map_and_system!(%{map_id: map_id, system_id: system_id})

View File

@@ -1,6 +1,8 @@
defmodule WandererApp.MapRepo do
use WandererApp, :repository
require Logger
@default_map_options %{
"layout" => "left_to_right",
"store_custom_labels" => "false",
@@ -30,6 +32,116 @@ defmodule WandererApp.MapRepo do
|> WandererApp.Api.Map.get_map_by_slug()
|> load_user_permissions(current_user)
@doc """
Safely retrieves a map by slug, handling the case where multiple maps
with the same slug exist (database integrity issue).
When duplicates are detected, automatically triggers recovery to fix them
and retries the query once.
Returns:
- `{:ok, map}` - Single map found
- `{:error, :multiple_results}` - Multiple maps found (after recovery attempt)
- `{:error, :not_found}` - No map found
- `{:error, reason}` - Other error
"""
def get_map_by_slug_safely(slug, retry_count \\ 0) do
try do
map = WandererApp.Api.Map.get_map_by_slug!(slug)
{:ok, map}
rescue
error in Ash.Error.Invalid.MultipleResults ->
handle_multiple_results(slug, error, retry_count)
error in Ash.Error.Invalid ->
# Check if this Invalid error contains a MultipleResults error
case find_multiple_results_error(error) do
{:ok, multiple_results_error} ->
handle_multiple_results(slug, multiple_results_error, retry_count)
:error ->
# Some other Invalid error
Logger.error("Error retrieving map by slug",
slug: slug,
error: inspect(error)
)
{:error, :unknown_error}
end
error in Ash.Error.Query.NotFound ->
Logger.debug("Map not found with slug: #{slug}")
{:error, :not_found}
error ->
Logger.error("Error retrieving map by slug",
slug: slug,
error: inspect(error)
)
{:error, :unknown_error}
end
end
# Helper function to handle multiple results errors with automatic recovery
defp handle_multiple_results(slug, error, retry_count) do
count = Map.get(error, :count, 2)
Logger.error("Multiple maps found with slug '#{slug}' - triggering automatic recovery",
slug: slug,
count: count,
retry_count: retry_count,
error: inspect(error)
)
# Emit telemetry for monitoring
:telemetry.execute(
[:wanderer_app, :map, :duplicate_slug_detected],
%{count: count, retry_count: retry_count},
%{slug: slug, operation: :get_by_slug}
)
# Attempt automatic recovery if this is the first try
if retry_count == 0 do
case WandererApp.Map.SlugRecovery.recover_duplicate_slug(slug) do
{:ok, recovery_result} ->
Logger.info("Successfully recovered duplicate slug '#{slug}', retrying query",
slug: slug,
fixed_count: recovery_result.fixed_count
)
# Retry the query once after recovery
get_map_by_slug_safely(slug, retry_count + 1)
{:error, reason} ->
Logger.error("Failed to recover duplicate slug '#{slug}'",
slug: slug,
error: inspect(reason)
)
{:error, :multiple_results}
end
else
# Already retried once, give up
Logger.error(
"Multiple maps still found with slug '#{slug}' after recovery attempt",
slug: slug,
count: count
)
{:error, :multiple_results}
end
end
# Helper function to check if an Ash.Error.Invalid contains a MultipleResults error
defp find_multiple_results_error(%Ash.Error.Invalid{errors: errors}) do
errors
|> Enum.find_value(:error, fn
%Ash.Error.Invalid.MultipleResults{} = mr_error -> {:ok, mr_error}
_ -> false
end)
end
def load_relationships(map, []), do: {:ok, map}
def load_relationships(map, relationships), do: map |> Ash.load(relationships)
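The detect-recover-retry flow in `get_map_by_slug_safely/2` reduces to a retry-once pattern. A minimal generic sketch, with `RetryOnce` and its `recover` callback as hypothetical names (not part of the codebase):

```elixir
defmodule RetryOnce do
  # Generic shape of get_map_by_slug_safely/2: run the query, and on the
  # first {:error, _} invoke the recovery callback, then retry exactly once.
  def run(fun, recover, retried? \\ false) do
    case fun.() do
      {:ok, _} = ok ->
        ok

      {:error, reason} when not retried? ->
        recover.(reason)
        run(fun, recover, true)

      {:error, _} = error ->
        error
    end
  end
end

# Fails once with :multiple_results, then succeeds after "recovery".
{:ok, counter} = Agent.start_link(fn -> 0 end)

query = fn ->
  case Agent.get_and_update(counter, &{&1, &1 + 1}) do
    0 -> {:error, :multiple_results}
    _ -> {:ok, :map}
  end
end

IO.inspect(RetryOnce.run(query, fn _reason -> :recovered end))
```

The `retried?` flag plays the role of `retry_count` in the real function: one recovery attempt, then the error is surfaced as-is.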

View File

@@ -5,6 +5,10 @@ defmodule WandererApp.MapSystemRepo do
system |> WandererApp.Api.MapSystem.create()
end
def upsert(system) do
system |> WandererApp.Api.MapSystem.upsert()
end
def get_by_map_and_solar_system_id(map_id, solar_system_id) do
WandererApp.Api.MapSystem.by_map_id_and_solar_system_id(map_id, solar_system_id)
|> case do

View File

@@ -487,7 +487,7 @@ defmodule WandererApp.SecurityAudit do
# Private functions
defp store_audit_entry(audit_entry) do
defp store_audit_entry(_audit_entry) do
# Handle async processing if enabled
# if async_enabled?() do
# WandererApp.SecurityAudit.AsyncProcessor.log_event(audit_entry)

View File

@@ -11,7 +11,9 @@ defmodule WandererApp.Server.ServerStatusTracker do
:server_version,
:start_time,
:vip,
:retries
:retries,
:in_forced_downtime,
:downtime_notified
]
@retries_count 3
@@ -21,9 +23,17 @@ defmodule WandererApp.Server.ServerStatusTracker do
retries: @retries_count,
server_version: "0",
start_time: "0",
vip: true
vip: true,
in_forced_downtime: false,
downtime_notified: false
}
# EVE Online daily downtime period (UTC/GMT)
@downtime_start_hour 10
@downtime_start_minute 58
@downtime_end_hour 11
@downtime_end_minute 2
@refresh_interval :timer.minutes(1)
@logger Application.compile_env(:wanderer_app, :logger)
@@ -57,13 +67,51 @@ defmodule WandererApp.Server.ServerStatusTracker do
def handle_info(
:refresh_status,
%{
retries: retries
retries: retries,
in_forced_downtime: was_in_downtime
} = state
) do
Process.send_after(self(), :refresh_status, @refresh_interval)
Task.async(fn -> get_server_status(retries) end)
{:noreply, state}
in_downtime = in_forced_downtime?()
cond do
# Entering downtime period - broadcast offline status immediately
in_downtime and not was_in_downtime ->
@logger.info("#{__MODULE__} entering forced downtime period (10:58-11:02 GMT)")
downtime_status = %{
players: 0,
server_version: "downtime",
start_time: DateTime.utc_now() |> DateTime.to_iso8601(),
vip: true
}
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"server_status",
{:server_status, downtime_status}
)
{:noreply,
%{state | in_forced_downtime: true, downtime_notified: true}
|> Map.merge(downtime_status)}
# Currently in downtime - skip API call
in_downtime ->
{:noreply, state}
# Exiting downtime period - resume normal operations
not in_downtime and was_in_downtime ->
@logger.info("#{__MODULE__} exiting forced downtime period, resuming normal operations")
Task.async(fn -> get_server_status(retries) end)
{:noreply, %{state | in_forced_downtime: false, downtime_notified: false}}
# Normal operation
true ->
Task.async(fn -> get_server_status(retries) end)
{:noreply, state}
end
end
@impl true
@@ -155,4 +203,19 @@ defmodule WandererApp.Server.ServerStatusTracker do
vip: false
}
end
# Checks if the current UTC time falls within the forced downtime period (10:58-11:02 GMT).
defp in_forced_downtime? do
now = DateTime.utc_now()
current_hour = now.hour
current_minute = now.minute
# Convert times to minutes since midnight for easier comparison
current_time_minutes = current_hour * 60 + current_minute
downtime_start_minutes = @downtime_start_hour * 60 + @downtime_start_minute
downtime_end_minutes = @downtime_end_hour * 60 + @downtime_end_minute
current_time_minutes >= downtime_start_minutes and
current_time_minutes < downtime_end_minutes
end
end
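The minutes-since-midnight comparison above can be exercised in isolation. A self-contained sketch of `in_forced_downtime?/0`, taking the clock as an argument instead of calling `DateTime.utc_now/0`:

```elixir
defmodule DowntimeWindow do
  # Same window as the ServerStatusTracker module attributes:
  # 10:58 inclusive to 11:02 exclusive, UTC.
  @start_minutes 10 * 60 + 58
  @end_minutes 11 * 60 + 2

  def in_window?(%DateTime{hour: hour, minute: minute}) do
    minutes = hour * 60 + minute
    minutes >= @start_minutes and minutes < @end_minutes
  end
end

IO.inspect(DowntimeWindow.in_window?(~U[2025-11-25 10:58:00Z]))
```

Note the half-open interval: 11:02:00 itself is already outside the window, matching the `<` comparison in the tracker.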

View File

@@ -4,7 +4,9 @@ defmodule WandererApp.Test.DDRT do
This allows mocking of DDRT calls in tests.
"""
@callback insert({integer(), any()} | list({integer(), any()}), String.t()) :: {:ok, map()} | {:error, term()}
@callback init_tree(String.t(), map()) :: :ok | {:error, term()}
@callback insert({integer(), any()} | list({integer(), any()}), String.t()) ::
{:ok, map()} | {:error, term()}
@callback update(integer(), any(), String.t()) :: {:ok, map()} | {:error, term()}
@callback delete(integer() | [integer()], String.t()) :: {:ok, map()} | {:error, term()}
@callback query(any(), String.t()) :: {:ok, [any()]} | {:error, term()}

View File

@@ -49,7 +49,7 @@ defmodule WandererApp.Ueberauth.Strategy.Eve do
WandererApp.Cache.put(
"eve_auth_#{params[:state]}",
[with_wallet: with_wallet, is_admin?: is_admin?],
ttl: :timer.minutes(15)
ttl: :timer.minutes(30)
)
opts = oauth_client_options_from_conn(conn, with_wallet, is_admin?)
@@ -66,17 +66,22 @@ defmodule WandererApp.Ueberauth.Strategy.Eve do
Handles the callback from Eve.
"""
def handle_callback!(%Plug.Conn{params: %{"code" => code, "state" => state}} = conn) do
opts =
  WandererApp.Cache.get("eve_auth_#{state}")
params = [code: code]
case WandererApp.Ueberauth.Strategy.Eve.OAuth.get_access_token(params, opts) do
  {:ok, token} ->
    fetch_user(conn, token)
  {:error, {error_code, error_description}} ->
    set_errors!(conn, [error(error_code, error_description)])
end
case WandererApp.Cache.get("eve_auth_#{state}") do
  nil ->
    # Cache expired or invalid state - redirect to welcome page
    conn
    |> redirect!("/welcome")
  opts ->
    params = [code: code]
    case WandererApp.Ueberauth.Strategy.Eve.OAuth.get_access_token(params, opts) do
      {:ok, token} ->
        fetch_user(conn, token)
      {:error, {error_code, error_description}} ->
        set_errors!(conn, [error(error_code, error_description)])
    end
end
end

View File

@@ -1,16 +1,57 @@
defmodule WandererApp.User.ActivityTracker do
@moduledoc false
@moduledoc """
Activity tracking wrapper that ensures audit logging never crashes application logic.
Activity tracking is best-effort and errors are logged but not propagated to callers.
This prevents race conditions (e.g., duplicate activity records) from affecting
critical business operations like character tracking or connection management.
"""
require Logger
def track_map_event(
      event_type,
      metadata
    ),
    do: WandererApp.Map.Audit.track_map_event(event_type, metadata)
def track_acl_event(
      event_type,
      metadata
    ),
    do: WandererApp.Map.Audit.track_acl_event(event_type, metadata)
@doc """
Track a map-related event. Always returns `{:ok, result}` even on error.
Errors (such as unique constraint violations from concurrent operations)
are logged but do not propagate to prevent crashing critical application logic.
"""
def track_map_event(event_type, metadata) do
case WandererApp.Map.Audit.track_map_event(event_type, metadata) do
{:ok, result} ->
{:ok, result}
{:error, error} ->
Logger.warning("Failed to track map event (non-critical)",
event_type: event_type,
map_id: metadata[:map_id],
error: inspect(error),
reason: :best_effort_tracking
)
# Return success to prevent crashes - activity tracking is best-effort
{:ok, nil}
end
end
@doc """
Track an ACL-related event. Always returns `{:ok, result}` even on error.
Errors are logged but do not propagate to prevent crashing critical application logic.
"""
def track_acl_event(event_type, metadata) do
case WandererApp.Map.Audit.track_acl_event(event_type, metadata) do
{:ok, result} ->
{:ok, result}
{:error, error} ->
Logger.warning("Failed to track ACL event (non-critical)",
event_type: event_type,
acl_id: metadata[:acl_id],
error: inspect(error),
reason: :best_effort_tracking
)
# Return success to prevent crashes - activity tracking is best-effort
{:ok, nil}
end
end
end
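The best-effort contract both tracker functions share can be factored into a single wrapper. A hypothetical sketch (`BestEffort` is not part of the diff):

```elixir
defmodule BestEffort do
  require Logger

  # The contract ActivityTracker keeps: run the audit call, log any
  # {:error, _} as non-critical, and always return an :ok tuple so the
  # caller's {:ok, _} = ... match never crashes.
  def track(label, fun) do
    case fun.() do
      {:ok, _} = ok ->
        ok

      {:error, error} ->
        Logger.warning("Failed to track #{label} (non-critical): #{inspect(error)}")
        {:ok, nil}
    end
  end
end

IO.inspect(BestEffort.track("map event", fn -> {:error, :unique_violation} end))
```

This is why callers such as `add_system/4` can keep pattern matching on `{:ok, _}` without risking a `MatchError` when audit writes race.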

Some files were not shown because too many files have changed in this diff.