Compare commits

...

79 Commits

Author SHA1 Message Date
CI
b22970fef3 chore: release version v1.84.21 2025-11-15 08:48:30 +00:00
Dmitry Popov
cf72394ef9 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-15 09:47:53 +01:00
Dmitry Popov
e6dbba7283 fix(core): fixed map characters adding 2025-11-15 09:47:48 +01:00
CI
843b3b86b2 chore: [skip ci] 2025-11-15 07:29:25 +00:00
CI
bd865b9f64 chore: release version v1.84.20 2025-11-15 07:29:25 +00:00
Dmitry Popov
ae91cd2f92 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-15 08:25:59 +01:00
Dmitry Popov
0be7a5f9d0 fix(core): fixed map start issues 2025-11-15 08:25:55 +01:00
CI
e15bfa426a chore: [skip ci] 2025-11-14 19:28:51 +00:00
CI
4198e4b07a chore: release version v1.84.19 2025-11-14 19:28:51 +00:00
Dmitry Popov
03ee08ff67 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-14 20:28:16 +01:00
Dmitry Popov
ac4dd4c28b fix(core): fixed map start issues 2025-11-14 20:28:12 +01:00
CI
308e81a464 chore: [skip ci] 2025-11-14 18:36:20 +00:00
CI
6f4240d931 chore: release version v1.84.18 2025-11-14 18:36:20 +00:00
Dmitry Popov
847b45a431 fix(core): added gracefull map poll recovery from saved state. added map slug unique checks 2025-11-14 19:35:45 +01:00
CI
5ec97d74ca chore: [skip ci] 2025-11-14 13:43:40 +00:00
CI
74359a5542 chore: release version v1.84.17 2025-11-14 13:43:40 +00:00
Dmitry Popov
0020f46dd8 fix(core): fixed activity tracking issues 2025-11-14 14:42:44 +01:00
CI
a6751b45c6 chore: [skip ci] 2025-11-13 16:20:24 +00:00
CI
f48aeb5cec chore: release version v1.84.16 2025-11-13 16:20:24 +00:00
Dmitry Popov
a5f25646c9 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-13 17:19:47 +01:00
Dmitry Popov
23cf1fd96f fix(core): removed maps auto-start logic 2025-11-13 17:19:44 +01:00
CI
6f15521069 chore: [skip ci] 2025-11-13 14:49:32 +00:00
CI
9d41e57c06 chore: release version v1.84.15 2025-11-13 14:49:32 +00:00
Dmitry Popov
ea9a22df09 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-13 15:49:01 +01:00
Dmitry Popov
0d4fd6f214 fix(core): fixed maps start/stop logic, added server downtime period support 2025-11-13 15:48:56 +01:00
CI
87a6c20545 chore: [skip ci] 2025-11-13 14:46:26 +00:00
CI
c375f4e4ce chore: release version v1.84.14 2025-11-13 14:46:26 +00:00
Dmitry Popov
843a6d7320 Merge pull request #543 from wanderer-industries/fix-error-on-remove-settings
fix(Map): Fixed problem related with error if settings was removed an…
2025-11-13 18:43:13 +04:00
DanSylvest
98c54a3413 fix(Map): Fixed problem related with error if settings was removed and mapper crashed. Fixed settings reset. 2025-11-13 12:53:40 +03:00
CI
0439110938 chore: [skip ci] 2025-11-13 07:52:33 +00:00
CI
8ce1e5fa3e chore: release version v1.84.13 2025-11-13 07:52:33 +00:00
Dmitry Popov
ebaf6bcdc6 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-13 08:52:00 +01:00
Dmitry Popov
40d947bebc chore: updated RELEASE_NODE for server defaults 2025-11-13 08:51:56 +01:00
CI
61d1c3848f chore: [skip ci] 2025-11-13 07:39:29 +00:00
CI
e152ce179f chore: release version v1.84.12 2025-11-13 07:39:29 +00:00
Dmitry Popov
7bbe387183 chore: reduce garbage collection interval 2025-11-13 08:38:52 +01:00
CI
b1555ff03c chore: [skip ci] 2025-11-12 18:53:48 +00:00
CI
e624499244 chore: release version v1.84.11 2025-11-12 18:53:48 +00:00
Dmitry Popov
6a1976dec6 Merge pull request #541 from guarzo/guarzo/apifun2
fix: api and doc updates
2025-11-12 22:53:17 +04:00
Guarzo
3db24c4344 fix: api and doc updates 2025-11-12 18:39:21 +00:00
CI
883c09f255 chore: [skip ci] 2025-11-12 17:28:54 +00:00
CI
ff24d80038 chore: release version v1.84.10 2025-11-12 17:28:54 +00:00
Dmitry Popov
63cbc9c0b9 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-12 18:28:20 +01:00
Dmitry Popov
8056972a27 fix(core): Fixed adding system on character dock 2025-11-12 18:28:16 +01:00
CI
1759d46740 chore: [skip ci] 2025-11-12 13:28:14 +00:00
CI
e4b7d2e45b chore: release version v1.84.9 2025-11-12 13:28:14 +00:00
Dmitry Popov
41573cbee3 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-12 14:27:43 +01:00
Dmitry Popov
24ffc20bb8 chore: added ccp attribution to footer 2025-11-12 14:27:40 +01:00
CI
e077849b66 chore: [skip ci] 2025-11-12 12:42:09 +00:00
CI
375a9ef65b chore: release version v1.84.8 2025-11-12 12:42:08 +00:00
Dmitry Popov
9bf90ab752 fix(core): added cleanup jobs for old system signatures & chain passages 2025-11-12 13:41:33 +01:00
CI
90c3481151 chore: [skip ci] 2025-11-12 10:57:58 +00:00
CI
e36b08a7e5 chore: release version v1.84.7 2025-11-12 10:57:58 +00:00
Dmitry Popov
e1f79170c3 Merge pull request #540 from guarzo/guarzo/apifun
fix: api and search fixes
2025-11-12 14:54:33 +04:00
Guarzo
68b5455e91 bug fix 2025-11-12 07:25:49 +00:00
Guarzo
f28e75c7f4 pr updates 2025-11-12 07:16:21 +00:00
Guarzo
6091adb28e fix: api and structure search fixes 2025-11-12 07:07:39 +00:00
CI
d4657b335f chore: [skip ci] 2025-11-12 00:13:07 +00:00
CI
7fee850902 chore: release version v1.84.6 2025-11-12 00:13:07 +00:00
Dmitry Popov
648c168a66 fix(core): Added map slug uniqness checking while using API 2025-11-12 01:12:13 +01:00
CI
f5c4b2c407 chore: [skip ci] 2025-11-11 12:52:39 +00:00
CI
b592223d52 chore: release version v1.84.5 2025-11-11 12:52:39 +00:00
Dmitry Popov
5cf118c6ee Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-11 13:52:11 +01:00
Dmitry Popov
b25013c652 fix(core): Added tracking for map & character event handling errors 2025-11-11 13:52:07 +01:00
CI
cf43861b11 chore: [skip ci] 2025-11-11 12:27:54 +00:00
CI
b5fe8f8878 chore: release version v1.84.4 2025-11-11 12:27:54 +00:00
Dmitry Popov
5e5068c7de fix(core): fixed issue with updating system signatures 2025-11-11 13:27:17 +01:00
CI
624b51edfb chore: [skip ci] 2025-11-11 09:52:29 +00:00
CI
a72f8e60c4 chore: release version v1.84.3 2025-11-11 09:52:29 +00:00
Dmitry Popov
dec8ae50c9 Merge branch 'develop' 2025-11-11 10:51:55 +01:00
Dmitry Popov
0332d36a8e fix(core): fixed linked signature time status update
2025-11-11 10:51:43 +01:00
CI
8444c7f82d chore: [skip ci] 2025-11-10 16:57:53 +00:00
CI
ec3fc7447e chore: release version v1.84.2 2025-11-10 16:57:53 +00:00
Dmitry Popov
20ec2800c9 Merge pull request #538 from wanderer-industries/develop
Develop
2025-11-10 20:56:53 +04:00
Dmitry Popov
6fbf43e860 fix(api): fixed api for get/update map systems
2025-11-10 17:23:44 +01:00
Dmitry Popov
697da38020 Merge pull request #537 from guarzo/guarzo/apisystemperf
fix: add indexes for map/system
2025-11-09 01:48:01 +04:00
Guarzo
4bc65b43d2 fix: add index for map/systems api 2025-11-08 14:30:19 +00:00
Dmitry Popov
910ec97fd1 chore: refactored map server processes
2025-11-06 09:23:19 +01:00
Dmitry Popov
40ed58ee8c Merge pull request #536 from wanderer-industries/refactor-map-servers
Refactor map servers
2025-11-06 03:03:57 +04:00
70 changed files with 5746 additions and 1057 deletions

View File

@@ -1,5 +1,7 @@
export WEB_APP_URL="http://localhost:8000"
export RELEASE_COOKIE="PDpbnyo6mEI_0T4ZsHH_ESmi1vT1toQ8PTc0vbfg5FIT4Ih-Lh98mw=="
# Erlang node name for distributed Erlang (optional - defaults to wanderer@hostname)
# export RELEASE_NODE="wanderer@localhost"
export EVE_CLIENT_ID="<EVE_CLIENT_ID>"
export EVE_CLIENT_SECRET="<EVE_CLIENT_SECRET>"
export EVE_CLIENT_WITH_WALLET_ID="<EVE_CLIENT_WITH_WALLET_ID>"

View File

@@ -2,6 +2,176 @@
<!-- changelog -->
## [v1.84.21](https://github.com/wanderer-industries/wanderer/compare/v1.84.20...v1.84.21) (2025-11-15)
### Bug Fixes:
* core: fixed map characters adding
## [v1.84.20](https://github.com/wanderer-industries/wanderer/compare/v1.84.19...v1.84.20) (2025-11-15)
### Bug Fixes:
* core: fixed map start issues
## [v1.84.19](https://github.com/wanderer-industries/wanderer/compare/v1.84.18...v1.84.19) (2025-11-14)
### Bug Fixes:
* core: fixed map start issues
## [v1.84.18](https://github.com/wanderer-industries/wanderer/compare/v1.84.17...v1.84.18) (2025-11-14)
### Bug Fixes:
* core: added graceful map pool recovery from saved state. added map slug unique checks
## [v1.84.17](https://github.com/wanderer-industries/wanderer/compare/v1.84.16...v1.84.17) (2025-11-14)
### Bug Fixes:
* core: fixed activity tracking issues
## [v1.84.16](https://github.com/wanderer-industries/wanderer/compare/v1.84.15...v1.84.16) (2025-11-13)
### Bug Fixes:
* core: removed maps auto-start logic
## [v1.84.15](https://github.com/wanderer-industries/wanderer/compare/v1.84.14...v1.84.15) (2025-11-13)
### Bug Fixes:
* core: fixed maps start/stop logic, added server downtime period support
## [v1.84.14](https://github.com/wanderer-industries/wanderer/compare/v1.84.13...v1.84.14) (2025-11-13)
### Bug Fixes:
* Map: Fixed an error when settings were removed and the mapper crashed. Fixed settings reset.
## [v1.84.13](https://github.com/wanderer-industries/wanderer/compare/v1.84.12...v1.84.13) (2025-11-13)
## [v1.84.12](https://github.com/wanderer-industries/wanderer/compare/v1.84.11...v1.84.12) (2025-11-13)
## [v1.84.11](https://github.com/wanderer-industries/wanderer/compare/v1.84.10...v1.84.11) (2025-11-12)
### Bug Fixes:
* api and doc updates
## [v1.84.10](https://github.com/wanderer-industries/wanderer/compare/v1.84.9...v1.84.10) (2025-11-12)
### Bug Fixes:
* core: Fixed adding system on character dock
## [v1.84.9](https://github.com/wanderer-industries/wanderer/compare/v1.84.8...v1.84.9) (2025-11-12)
## [v1.84.8](https://github.com/wanderer-industries/wanderer/compare/v1.84.7...v1.84.8) (2025-11-12)
### Bug Fixes:
* core: added cleanup jobs for old system signatures & chain passages
## [v1.84.7](https://github.com/wanderer-industries/wanderer/compare/v1.84.6...v1.84.7) (2025-11-12)
### Bug Fixes:
* api and structure search fixes
## [v1.84.6](https://github.com/wanderer-industries/wanderer/compare/v1.84.5...v1.84.6) (2025-11-12)
### Bug Fixes:
* core: Added map slug uniqueness checking while using API
## [v1.84.5](https://github.com/wanderer-industries/wanderer/compare/v1.84.4...v1.84.5) (2025-11-11)
### Bug Fixes:
* core: Added tracking for map & character event handling errors
## [v1.84.4](https://github.com/wanderer-industries/wanderer/compare/v1.84.3...v1.84.4) (2025-11-11)
### Bug Fixes:
* core: fixed issue with updating system signatures
## [v1.84.3](https://github.com/wanderer-industries/wanderer/compare/v1.84.2...v1.84.3) (2025-11-11)
### Bug Fixes:
* core: fixed linked signature time status update
## [v1.84.2](https://github.com/wanderer-industries/wanderer/compare/v1.84.1...v1.84.2) (2025-11-10)
### Bug Fixes:
* api: fixed api for get/update map systems
* add index for map/systems api
## [v1.84.1](https://github.com/wanderer-industries/wanderer/compare/v1.84.0...v1.84.1) (2025-11-01)

View File

@@ -4,10 +4,13 @@ import { DEFAULT_WIDGETS } from '@/hooks/Mapper/components/mapInterface/constant
import { useMapRootState } from '@/hooks/Mapper/mapRootProvider';
export const MapInterface = () => {
// const [items, setItems] = useState<WindowProps[]>(restoreWindowsFromLS);
const { windowsSettings, updateWidgetSettings } = useMapRootState();
const items = useMemo(() => {
if (Object.keys(windowsSettings).length === 0) {
return [];
}
return windowsSettings.windows
.map(x => {
const content = DEFAULT_WIDGETS.find(y => y.id === x.id)?.content;

View File

@@ -30,9 +30,6 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
const { outCommand } = useMapRootState();
const [prevQuery, setPrevQuery] = useState('');
const [prevResults, setPrevResults] = useState<{ label: string; value: string }[]>([]);
useEffect(() => {
if (structure) {
setEditData(structure);
@@ -46,34 +43,24 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
// Searching corporation owners via auto-complete
const searchOwners = useCallback(
async (e: { query: string }) => {
const newQuery = e.query.trim();
if (!newQuery) {
const query = e.query.trim();
if (!query) {
setOwnerSuggestions([]);
return;
}
// If user typed more text but we have partial match in prevResults
if (newQuery.startsWith(prevQuery) && prevResults.length > 0) {
const filtered = prevResults.filter(item => item.label.toLowerCase().includes(newQuery.toLowerCase()));
setOwnerSuggestions(filtered);
return;
}
try {
// TODO fix it
const { results = [] } = await outCommand({
type: OutCommand.getCorporationNames,
data: { search: newQuery },
data: { search: query },
});
setOwnerSuggestions(results);
setPrevQuery(newQuery);
setPrevResults(results);
} catch (err) {
console.error('Failed to fetch owners:', err);
setOwnerSuggestions([]);
}
},
[prevQuery, prevResults, outCommand],
[outCommand],
);
const handleChange = (field: keyof StructureItem, val: string | Date) => {
@@ -122,7 +109,6 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
// fetch corporation ticker if we have an ownerId
if (editData.ownerId) {
try {
// TODO fix it
const { ticker } = await outCommand({
type: OutCommand.getCorporationTicker,
data: { corp_id: editData.ownerId },
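The simplified `searchOwners` above drops the prefix cache and reduces to trim, short-circuit, fetch. A minimal synchronous sketch of that control flow — the injected `fetchNames` stands in for the app's `outCommand` call and is an assumption of this sketch:

```typescript
type Suggestion = { label: string; value: string };

// Trimmed-down owner search: no prefix cache, just trim -> short-circuit -> fetch.
function searchOwners(
  rawQuery: string,
  fetchNames: (search: string) => Suggestion[],
): Suggestion[] {
  const query = rawQuery.trim();
  if (!query) return []; // empty input clears suggestions
  try {
    return fetchNames(query);
  } catch {
    return []; // on fetch failure, clear suggestions rather than crash
  }
}
```

Removing the cache trades a few extra roundtrips for simpler invalidation: stale `prevResults` can no longer shadow fresh server data.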

View File

@@ -10,9 +10,14 @@ import { useCallback } from 'react';
import { TooltipPosition, WdButton, WdTooltipWrapper } from '@/hooks/Mapper/components/ui-kit';
import { ConfirmPopup } from 'primereact/confirmpopup';
import { useConfirmPopup } from '@/hooks/Mapper/hooks';
import { useMapRootState } from '@/hooks/Mapper/mapRootProvider';
export const CommonSettings = () => {
const { renderSettingItem } = useMapSettings();
const {
storedSettings: { resetSettings },
} = useMapRootState();
const { cfShow, cfHide, cfVisible, cfRef } = useConfirmPopup();
const renderSettingsList = useCallback(
@@ -22,7 +27,7 @@ export const CommonSettings = () => {
[renderSettingItem],
);
const handleResetSettings = () => {};
const handleResetSettings = useCallback(() => resetSettings(), [resetSettings]);
return (
<div className="flex flex-col h-full gap-1">

View File

@@ -6,9 +6,11 @@ import {
MapUnionTypes,
OutCommandHandler,
SolarSystemConnection,
StringBoolean,
TrackingCharacter,
UseCharactersCacheData,
UseCommentsData,
UserPermission,
} from '@/hooks/Mapper/types';
import { useCharactersCache, useComments, useMapRootHandlers } from '@/hooks/Mapper/mapRootProvider/hooks';
import { WithChildren } from '@/hooks/Mapper/types/common.ts';
@@ -80,7 +82,16 @@ const INITIAL_DATA: MapRootData = {
selectedSystems: [],
selectedConnections: [],
userPermissions: {},
options: {},
options: {
allowed_copy_for: UserPermission.VIEW_SYSTEM,
allowed_paste_for: UserPermission.VIEW_SYSTEM,
layout: '',
restrict_offline_showing: 'false',
show_linked_signature_id: 'false',
show_linked_signature_id_temp_name: 'false',
show_temp_system_name: 'false',
store_custom_labels: 'false',
},
isSubscriptionActive: false,
linkSignatureToSystem: null,
mainCharacterEveId: null,
@@ -135,7 +146,7 @@ export interface MapRootContextProps {
hasOldSettings: boolean;
getSettingsForExport(): string | undefined;
applySettings(settings: MapUserSettings): boolean;
resetSettings(settings: MapUserSettings): void;
resetSettings(): void;
checkOldSettings(): void;
};
}

View File

@@ -148,10 +148,6 @@ export const useMapUserSettings = ({ map_slug }: MapRootData, outCommand: OutCom
setHasOldSettings(!!(widgetsOld || interfaceSettings || widgetRoutes || widgetLocal || widgetKills || onTheMapOld));
}, []);
useEffect(() => {
checkOldSettings();
}, [checkOldSettings]);
const getSettingsForExport = useCallback(() => {
const { map_slug } = ref.current;
@@ -166,6 +162,24 @@ export const useMapUserSettings = ({ map_slug }: MapRootData, outCommand: OutCom
applySettings(createDefaultStoredSettings());
}, [applySettings]);
useEffect(() => {
checkOldSettings();
}, [checkOldSettings]);
// In case someone clears the settings at runtime
useEffect(() => {
if (Object.keys(windowsSettings).length !== 0) {
return;
}
if (!isReady) {
return;
}
resetSettings();
location.reload();
}, [isReady, resetSettings, windowsSettings]);
return {
isReady,
hasOldSettings,

Binary file not shown (image, 38 KiB).

View File

@@ -27,11 +27,7 @@ config :wanderer_app,
generators: [timestamp_type: :utc_datetime],
ddrt: WandererApp.Map.CacheRTree,
logger: Logger,
pubsub_client: Phoenix.PubSub,
wanderer_kills_base_url:
System.get_env("WANDERER_KILLS_BASE_URL", "ws://host.docker.internal:4004"),
wanderer_kills_service_enabled:
System.get_env("WANDERER_KILLS_SERVICE_ENABLED", "false") == "true"
pubsub_client: Phoenix.PubSub
config :wanderer_app, WandererAppWeb.Endpoint,
adapter: Bandit.PhoenixAdapter,

View File

@@ -4,7 +4,7 @@ import Config
config :wanderer_app, WandererApp.Repo,
username: "postgres",
password: "postgres",
hostname: System.get_env("DB_HOST", "localhost"),
hostname: "localhost",
database: "wanderer_dev",
stacktrace: true,
show_sensitive_data_on_connection_error: true,

View File

@@ -258,7 +258,9 @@ config :wanderer_app, WandererApp.Scheduler,
timezone: :utc,
jobs:
[
{"@daily", {WandererApp.Map.Audit, :archive, []}}
{"@daily", {WandererApp.Map.Audit, :archive, []}},
{"@daily", {WandererApp.Map.GarbageCollector, :cleanup_chain_passages, []}},
{"@daily", {WandererApp.Map.GarbageCollector, :cleanup_system_signatures, []}}
] ++ sheduler_jobs,
timeout: :infinity

View File

@@ -1,7 +1,25 @@
defmodule WandererApp.Api.Changes.SlugifyName do
@moduledoc """
Ensures map slugs are unique by:
1. Slugifying the provided slug/name
2. Checking for existing slugs (optimization)
3. Finding next available slug with numeric suffix if needed
4. Relying on database unique constraint as final arbiter
Race Condition Mitigation:
- Optimistic check reduces DB roundtrips for most cases
- Database unique index ensures no duplicates slip through
- Proper error messages for constraint violations
- Telemetry events for monitoring conflicts
"""
use Ash.Resource.Change
alias Ash.Changeset
require Ash.Query
require Logger
# Maximum number of attempts to find a unique slug
@max_attempts 100
@impl true
@spec change(Changeset.t(), keyword, Change.context()) :: Changeset.t()
@@ -12,10 +30,95 @@ defmodule WandererApp.Api.Changes.SlugifyName do
defp maybe_slugify_name(changeset) do
case Changeset.get_attribute(changeset, :slug) do
slug when is_binary(slug) ->
Changeset.force_change_attribute(changeset, :slug, Slug.slugify(slug))
base_slug = Slug.slugify(slug)
unique_slug = ensure_unique_slug(changeset, base_slug)
Changeset.force_change_attribute(changeset, :slug, unique_slug)
_ ->
changeset
end
end
defp ensure_unique_slug(changeset, base_slug) do
# Get the current record ID if this is an update operation
current_id = Changeset.get_attribute(changeset, :id)
# Check if the base slug is available (optimization to avoid numeric suffixes when possible)
if slug_available?(base_slug, current_id) do
base_slug
else
# Find the next available slug with a numeric suffix
find_available_slug(base_slug, current_id, 2)
end
end
defp find_available_slug(base_slug, current_id, n) when n <= @max_attempts do
candidate_slug = "#{base_slug}-#{n}"
if slug_available?(candidate_slug, current_id) do
# Emit telemetry when we had to use a suffix (indicates potential conflict)
:telemetry.execute(
[:wanderer_app, :map, :slug_suffix_used],
%{suffix_number: n},
%{base_slug: base_slug, final_slug: candidate_slug}
)
candidate_slug
else
find_available_slug(base_slug, current_id, n + 1)
end
end
defp find_available_slug(base_slug, _current_id, n) when n > @max_attempts do
# Fallback: use timestamp suffix if we've tried too many numeric suffixes
# This handles edge cases where many maps have similar names
timestamp = System.system_time(:millisecond)
fallback_slug = "#{base_slug}-#{timestamp}"
Logger.warning(
"Slug generation exceeded #{@max_attempts} attempts for '#{base_slug}', using timestamp fallback",
base_slug: base_slug,
fallback_slug: fallback_slug
)
:telemetry.execute(
[:wanderer_app, :map, :slug_fallback_used],
%{attempts: n},
%{base_slug: base_slug, fallback_slug: fallback_slug}
)
fallback_slug
end
defp slug_available?(slug, current_id) do
query =
WandererApp.Api.Map
|> Ash.Query.filter(slug == ^slug)
|> then(fn query ->
# Exclude the current record if this is an update
if current_id do
Ash.Query.filter(query, id != ^current_id)
else
query
end
end)
|> Ash.Query.limit(1)
case Ash.read(query) do
{:ok, []} ->
true
{:ok, _existing} ->
false
{:error, error} ->
# Log error but be conservative - assume slug is not available
Logger.warning("Error checking slug availability",
slug: slug,
error: inspect(error)
)
false
end
end
end
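The suffix-search strategy documented in `SlugifyName` can be sketched independently of Ash. In this TypeScript sketch, `isSlugTaken` stands in for the database availability query and is an assumption of the sketch, not part of the module's API:

```typescript
// Sketch of the unique-slug strategy: try the base slug, then numeric
// suffixes, then fall back to a timestamp after too many attempts.
const MAX_ATTEMPTS = 100;

function ensureUniqueSlug(
  baseSlug: string,
  isSlugTaken: (slug: string) => boolean,
): string {
  if (!isSlugTaken(baseSlug)) return baseSlug;
  // Numeric suffixes start at 2: "name-2", "name-3", ...
  for (let n = 2; n <= MAX_ATTEMPTS; n++) {
    const candidate = `${baseSlug}-${n}`;
    if (!isSlugTaken(candidate)) return candidate;
  }
  // Fallback mirrors the @max_attempts branch: a timestamp suffix
  // handles pathological cases with many similarly named maps.
  return `${baseSlug}-${Date.now()}`;
}
```

As in the Elixir version, this check is only an optimization; a database unique index remains the final arbiter against concurrent writers.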

View File

@@ -31,13 +31,13 @@ defmodule WandererApp.Api.Map do
routes do
base("/maps")
get(:by_slug, route: "/:slug")
index :read
# index :read
post(:new)
patch(:update)
delete(:destroy)
# Custom action for map duplication
post(:duplicate, route: "/:id/duplicate")
# post(:duplicate, route: "/:id/duplicate")
end
end

View File

@@ -9,6 +9,11 @@ defmodule WandererApp.Api.MapConnection do
postgres do
repo(WandererApp.Repo)
table("map_chain_v1")
custom_indexes do
# Critical index for list_connections query performance
index [:map_id], name: "map_chain_v1_map_id_index"
end
end
json_api do

View File

@@ -65,7 +65,7 @@ defmodule WandererApp.Api.MapSubscription do
defaults [:create, :read, :update, :destroy]
read :all_active do
prepare build(sort: [updated_at: :asc])
prepare build(sort: [updated_at: :asc], load: [:map])
filter(expr(status == :active))
end

View File

@@ -1,6 +1,26 @@
defmodule WandererApp.Api.MapSystem do
@moduledoc false
@derive {Jason.Encoder,
only: [
:id,
:map_id,
:name,
:solar_system_id,
:position_x,
:position_y,
:status,
:visible,
:locked,
:custom_name,
:description,
:tag,
:temporary_name,
:labels,
:added_at,
:linked_sig_eve_id
]}
use Ash.Resource,
domain: WandererApp.Api,
data_layer: AshPostgres.DataLayer,
@@ -9,6 +29,11 @@ defmodule WandererApp.Api.MapSystem do
postgres do
repo(WandererApp.Repo)
table("map_system_v1")
custom_indexes do
# Partial index for efficient visible systems query
index [:map_id], where: "visible = true", name: "map_system_v1_map_id_visible_index"
end
end
json_api do

View File

@@ -8,7 +8,7 @@ defmodule WandererApp.Character.TrackerPool do
:tracked_ids,
:uuid,
:characters,
server_online: true
server_online: false
]
@name __MODULE__
@@ -180,6 +180,8 @@ defmodule WandererApp.Character.TrackerPool do
[Tracker Pool] update_online => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
ErrorTracker.report(e, __STACKTRACE__)
end
{:noreply, state}

View File

@@ -12,7 +12,7 @@ defmodule WandererApp.Character.TransactionsTracker.Impl do
total_balance: 0,
transactions: [],
retries: 5,
server_online: true,
server_online: false,
status: :started
]
@@ -75,7 +75,7 @@ defmodule WandererApp.Character.TransactionsTracker.Impl do
def handle_event(
:update_corp_wallets,
%{character: character} = state
%{character: character, server_online: true} = state
) do
Process.send_after(self(), :update_corp_wallets, @update_interval)
@@ -88,26 +88,26 @@ defmodule WandererApp.Character.TransactionsTracker.Impl do
:update_corp_wallets,
state
) do
Process.send_after(self(), :update_corp_wallets, :timer.seconds(15))
Process.send_after(self(), :update_corp_wallets, @update_interval)
state
end
def handle_event(
:check_wallets,
%{wallets: []} = state
%{character: character, wallets: wallets, server_online: true} = state
) do
Process.send_after(self(), :check_wallets, :timer.seconds(5))
Process.send_after(self(), :check_wallets, @update_interval)
state
end
def handle_event(
:check_wallets,
%{character: character, wallets: wallets} = state
) do
check_wallets(wallets, character)
state
end
def handle_event(
:check_wallets,
state
) do
Process.send_after(self(), :check_wallets, @update_interval)
state

View File

@@ -212,6 +212,7 @@ defmodule WandererApp.ExternalEvents.JsonApiFormatter do
"time_status" => payload["time_status"] || payload[:time_status],
"mass_status" => payload["mass_status"] || payload[:mass_status],
"ship_size_type" => payload["ship_size_type"] || payload[:ship_size_type],
"locked" => payload["locked"] || payload[:locked],
"updated_at" => event.timestamp
},
"relationships" => %{

View File

@@ -53,8 +53,8 @@ defmodule WandererApp.Map do
{:ok, map} ->
map
_ ->
Logger.error(fn -> "Failed to get map #{map_id}" end)
error ->
Logger.error("Failed to get map #{map_id}: #{inspect(error)}")
%{}
end
end
@@ -183,9 +183,31 @@ defmodule WandererApp.Map do
def add_characters!(map, []), do: map
def add_characters!(%{map_id: map_id} = map, [character | rest]) do
add_character(map_id, character)
add_characters!(map, rest)
def add_characters!(%{map_id: map_id} = map, characters) when is_list(characters) do
# Get current characters list once
current_characters = Map.get(map, :characters, [])
characters_ids =
characters
|> Enum.map(fn %{id: char_id} -> char_id end)
# Filter out characters that already exist
new_character_ids =
characters_ids
|> Enum.reject(fn char_id -> char_id in current_characters end)
# If all characters already exist, return early
if new_character_ids == [] do
map
else
case update_map(map_id, %{characters: new_character_ids ++ current_characters}) do
{:commit, map} ->
map
_ ->
map
end
end
end
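The batched `add_characters!` rewrite replaces N per-character updates with one deduplicated update and an early return. A minimal sketch of that merge logic (names hypothetical):

```typescript
// Batch-add characters: filter out ids already present, then do a
// single merge instead of one update per character.
function addCharacters(current: string[], incoming: { id: string }[]): string[] {
  const newIds = incoming.map(c => c.id).filter(id => !current.includes(id));
  if (newIds.length === 0) return current; // early return: nothing to add
  return [...newIds, ...current]; // new ids prepended, as in the Elixir version
}
```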
def add_character(
@@ -198,61 +220,10 @@ defmodule WandererApp.Map do
case not (characters |> Enum.member?(character_id)) do
true ->
WandererApp.Character.get_map_character(map_id, character_id)
|> case do
{:ok,
%{
alliance_id: alliance_id,
corporation_id: corporation_id,
solar_system_id: solar_system_id,
structure_id: structure_id,
station_id: station_id,
ship: ship_type_id,
ship_name: ship_name
}} ->
map_id
|> update_map(%{characters: [character_id | characters]})
map_id
|> update_map(%{characters: [character_id | characters]})
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:alliance_id",
# alliance_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:corporation_id",
# corporation_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:solar_system_id",
# solar_system_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:structure_id",
# structure_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:station_id",
# station_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:ship_type_id",
# ship_type_id
# )
# WandererApp.Cache.insert(
# "map:#{map_id}:character:#{character_id}:ship_name",
# ship_name
# )
:ok
error ->
error
end
:ok
_ ->
{:error, :already_exists}
@@ -532,15 +503,16 @@ defmodule WandererApp.Map do
solar_system_source,
solar_system_target
) do
case map_id
|> get_map!()
|> Map.get(:connections, Map.new())
connections =
map_id
|> get_map!()
|> Map.get(:connections, Map.new())
case connections
|> Map.get("#{solar_system_source}_#{solar_system_target}") do
nil ->
{:ok,
map_id
|> get_map!()
|> Map.get(:connections, Map.new())
connections
|> Map.get("#{solar_system_target}_#{solar_system_source}")}
connection ->

View File

@@ -0,0 +1,38 @@
defmodule WandererApp.Map.GarbageCollector do
@moduledoc """
Garbage collection for stale map data (old chain passages and system signatures).
"""
require Logger
require Ash.Query
@logger Application.compile_env(:wanderer_app, :logger)
@one_week_seconds 7 * 24 * 60 * 60
@two_weeks_seconds 14 * 24 * 60 * 60
def cleanup_chain_passages() do
Logger.info("Start cleanup old map chain passages...")
WandererApp.Api.MapChainPassages
|> Ash.Query.filter(updated_at: [less_than: get_cutoff_time(@one_week_seconds)])
|> Ash.bulk_destroy!(:destroy, %{}, batch_size: 100)
@logger.info(fn -> "All map chain passages processed" end)
:ok
end
def cleanup_system_signatures() do
Logger.info("Start cleanup old map system signatures...")
WandererApp.Api.MapSystemSignature
|> Ash.Query.filter(updated_at: [less_than: get_cutoff_time(@two_weeks_seconds)])
|> Ash.bulk_destroy!(:destroy, %{}, batch_size: 100)
@logger.info(fn -> "All map system signatures processed" end)
:ok
end
defp get_cutoff_time(seconds), do: DateTime.utc_now() |> DateTime.add(-seconds, :second)
end
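The cutoff computation (`DateTime.utc_now() |> DateTime.add(-seconds, :second)`) translates directly. A sketch of the retention test, assuming the same one-week/two-week windows as the module above:

```typescript
// Retention windows mirroring @one_week_seconds / @two_weeks_seconds.
const ONE_WEEK_SECONDS = 7 * 24 * 60 * 60;
const TWO_WEEKS_SECONDS = 14 * 24 * 60 * 60;

// Cutoff is "now minus the retention window".
function cutoffTime(retentionSeconds: number, now: Date = new Date()): Date {
  return new Date(now.getTime() - retentionSeconds * 1000);
}

// A record is eligible for cleanup when last updated before the cutoff.
function isExpired(updatedAt: Date, retentionSeconds: number, now?: Date): boolean {
  return updatedAt < cutoffTime(retentionSeconds, now);
}
```

The `@daily` scheduler jobs added in `config` then only need to delete rows where `isExpired` holds, which the Elixir version does in batches of 100 via `Ash.bulk_destroy!`.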

View File

@@ -9,8 +9,8 @@ defmodule WandererApp.Map.Manager do
alias WandererApp.Map.Server
@maps_start_per_second 10
@maps_start_interval 1000
@maps_start_chunk_size 20
@maps_start_interval 500
@maps_queue :maps_queue
@check_maps_queue_interval :timer.seconds(1)
@@ -58,9 +58,9 @@ defmodule WandererApp.Map.Manager do
{:ok, pings_cleanup_timer} =
:timer.send_interval(@pings_cleanup_interval, :cleanup_pings)
safe_async_task(fn ->
start_last_active_maps()
end)
# safe_async_task(fn ->
# start_last_active_maps()
# end)
{:ok,
%{
@@ -153,7 +153,7 @@ defmodule WandererApp.Map.Manager do
@maps_queue
|> WandererApp.Queue.to_list!()
|> Enum.uniq()
|> Enum.chunk_every(@maps_start_per_second)
|> Enum.chunk_every(@maps_start_chunk_size)
WandererApp.Queue.clear(@maps_queue)

View File

@@ -4,7 +4,7 @@ defmodule WandererApp.Map.MapPool do
require Logger
alias WandererApp.Map.Server
alias WandererApp.Map.{MapPoolState, Server}
defstruct [
:map_ids,
@@ -15,8 +15,9 @@ defmodule WandererApp.Map.MapPool do
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@map_pool_limit 10
@garbage_collection_interval :timer.hours(12)
@garbage_collection_interval :timer.hours(4)
@systems_cleanup_timeout :timer.minutes(30)
@characters_cleanup_timeout :timer.minutes(5)
@connections_cleanup_timeout :timer.minutes(5)
@@ -25,7 +26,17 @@ defmodule WandererApp.Map.MapPool do
def new(), do: __struct__()
def new(args), do: __struct__(args)
def start_link(map_ids) do
# Accept both {uuid, map_ids} tuple (from supervisor restart) and just map_ids (legacy)
def start_link({uuid, map_ids}) when is_binary(uuid) and is_list(map_ids) do
GenServer.start_link(
@name,
{uuid, map_ids},
name: Module.concat(__MODULE__, uuid)
)
end
# For backward compatibility - generate UUID if only map_ids provided
def start_link(map_ids) when is_list(map_ids) do
uuid = UUID.uuid1()
GenServer.start_link(
@@ -37,13 +48,42 @@ defmodule WandererApp.Map.MapPool do
@impl true
def init({uuid, map_ids}) do
{:ok, _} = Registry.register(@unique_registry, Module.concat(__MODULE__, uuid), map_ids)
# Check for crash recovery - if we have previous state in ETS, merge it with new map_ids
{final_map_ids, recovery_info} =
case MapPoolState.get_pool_state(uuid) do
{:ok, recovered_map_ids} ->
# Merge and deduplicate map IDs
merged = Enum.uniq(recovered_map_ids ++ map_ids)
recovery_count = length(recovered_map_ids)
Logger.info(
"[Map Pool #{uuid}] Crash recovery detected: recovering #{recovery_count} maps",
pool_uuid: uuid,
recovered_maps: recovered_map_ids,
new_maps: map_ids,
total_maps: length(merged)
)
# Emit telemetry for crash recovery
:telemetry.execute(
[:wanderer_app, :map_pool, :recovery, :start],
%{recovered_map_count: recovery_count, total_map_count: length(merged)},
%{pool_uuid: uuid}
)
{merged, %{recovered: true, count: recovery_count}}
{:error, :not_found} ->
# Normal startup, no previous state to recover
{map_ids, %{recovered: false}}
end
# Register with empty list - maps will be added as they're started in handle_continue
{:ok, _} = Registry.register(@unique_registry, Module.concat(__MODULE__, uuid), [])
{:ok, _} = Registry.register(@registry, __MODULE__, uuid)
map_ids
|> Enum.each(fn id ->
Cachex.put(@cache, id, uuid)
end)
# Don't pre-populate cache - will be populated as maps start in handle_continue
# This prevents duplicates when recovering
state =
%{
@@ -52,23 +92,100 @@ defmodule WandererApp.Map.MapPool do
}
|> new()
{:ok, state, {:continue, {:start, map_ids}}}
{:ok, state, {:continue, {:start, {final_map_ids, recovery_info}}}}
end
@impl true
def terminate(_reason, _state) do
def terminate(reason, %{uuid: uuid} = _state) do
# On graceful shutdown, clean up ETS state
# On crash, keep ETS state for recovery
case reason do
:normal ->
Logger.debug("[Map Pool #{uuid}] Graceful shutdown, cleaning up ETS state")
MapPoolState.delete_pool_state(uuid)
:shutdown ->
Logger.debug("[Map Pool #{uuid}] Graceful shutdown, cleaning up ETS state")
MapPoolState.delete_pool_state(uuid)
{:shutdown, _} ->
Logger.debug("[Map Pool #{uuid}] Graceful shutdown, cleaning up ETS state")
MapPoolState.delete_pool_state(uuid)
_ ->
Logger.warning(
"[Map Pool #{uuid}] Abnormal termination (#{inspect(reason)}), keeping ETS state for recovery"
)
# Keep ETS state for crash recovery
:ok
end
:ok
end
@impl true
def handle_continue({:start, map_ids}, state) do
def handle_continue({:start, {map_ids, recovery_info}}, state) do
Logger.info("#{@name} started")
map_ids
|> Enum.each(fn map_id ->
GenServer.cast(self(), {:start_map, map_id})
end)
# Track recovery statistics
start_time = System.monotonic_time(:millisecond)
initial_count = length(map_ids)
# Start maps synchronously and accumulate state changes
{new_state, failed_maps} =
map_ids
|> Enum.reduce({state, []}, fn map_id, {current_state, failed} ->
case do_start_map(map_id, current_state) do
{:ok, updated_state} ->
{updated_state, failed}
{:error, reason} ->
Logger.error("[Map Pool] Failed to start map #{map_id}: #{reason}")
# Emit telemetry for individual map recovery failure
if recovery_info.recovered do
:telemetry.execute(
[:wanderer_app, :map_pool, :recovery, :map_failed],
%{map_id: map_id},
%{pool_uuid: state.uuid, reason: reason}
)
end
{current_state, [map_id | failed]}
end
end)
# Calculate final statistics
end_time = System.monotonic_time(:millisecond)
duration_ms = end_time - start_time
successful_count = length(new_state.map_ids)
failed_count = length(failed_maps)
# Log and emit telemetry for recovery completion
if recovery_info.recovered do
Logger.info(
"[Map Pool #{state.uuid}] Crash recovery completed: #{successful_count}/#{initial_count} maps recovered in #{duration_ms}ms",
pool_uuid: state.uuid,
recovered_count: successful_count,
failed_count: failed_count,
total_count: initial_count,
duration_ms: duration_ms,
failed_maps: failed_maps
)
:telemetry.execute(
[:wanderer_app, :map_pool, :recovery, :complete],
%{
recovered_count: successful_count,
failed_count: failed_count,
duration_ms: duration_ms
},
%{pool_uuid: state.uuid}
)
end
# Schedule periodic tasks
Process.send_after(self(), :backup_state, @backup_state_timeout)
Process.send_after(self(), :cleanup_systems, 15_000)
Process.send_after(self(), :cleanup_characters, @characters_cleanup_timeout)
@@ -77,56 +194,354 @@ defmodule WandererApp.Map.MapPool do
# Start message queue monitoring
Process.send_after(self(), :monitor_message_queue, :timer.seconds(30))
{:noreply, state}
{:noreply, new_state}
end
@impl true
def handle_continue({:init_map, map_id}, %{uuid: uuid} = state) do
# Perform the actual map initialization asynchronously
# This runs after the GenServer.call has already returned
start_time = System.monotonic_time(:millisecond)
try do
# Initialize the map state and start the map server
map_id
|> WandererApp.Map.get_map_state!()
|> Server.Impl.start_map()
duration = System.monotonic_time(:millisecond) - start_time
Logger.info("[Map Pool #{uuid}] Map #{map_id} initialized successfully in #{duration}ms")
# Emit telemetry for slow initializations
if duration > 5_000 do
Logger.warning("[Map Pool #{uuid}] Slow map initialization: #{map_id} took #{duration}ms")
:telemetry.execute(
[:wanderer_app, :map_pool, :slow_init],
%{duration_ms: duration},
%{map_id: map_id, pool_uuid: uuid}
)
end
{:noreply, state}
rescue
e ->
duration = System.monotonic_time(:millisecond) - start_time
Logger.error("""
[Map Pool #{uuid}] Failed to initialize map #{map_id} after #{duration}ms: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
# Rollback: Remove from state, registry, and cache
new_state = %{state | map_ids: state.map_ids |> Enum.reject(fn id -> id == map_id end)}
# Update registry
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
r_map_ids |> Enum.reject(fn id -> id == map_id end)
end)
# Remove from cache
Cachex.del(@cache, map_id)
# Update ETS state
MapPoolState.save_pool_state(uuid, new_state.map_ids)
# Emit telemetry for failed initialization
:telemetry.execute(
[:wanderer_app, :map_pool, :init_failed],
%{duration_ms: duration},
%{map_id: map_id, pool_uuid: uuid, reason: Exception.message(e)}
)
{:noreply, new_state}
end
end
@impl true
def handle_cast(:stop, state), do: {:stop, :normal, state}
@impl true
def handle_cast({:start_map, map_id}, %{map_ids: map_ids, uuid: uuid} = state) do
if map_id not in map_ids do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
[map_id | r_map_ids]
def handle_call({:start_map, map_id}, _from, %{map_ids: map_ids, uuid: uuid} = state) do
# Enforce capacity limit to prevent pool overload due to race conditions
if length(map_ids) >= @map_pool_limit do
Logger.warning(
"[Map Pool #{uuid}] Pool at capacity (#{length(map_ids)}/#{@map_pool_limit}), " <>
"rejecting map #{map_id} and triggering new pool creation"
)
# Trigger a new pool creation attempt asynchronously
# This allows the system to create a new pool for this map
spawn(fn ->
WandererApp.Map.MapPoolDynamicSupervisor.start_map(map_id)
end)
Cachex.put(@cache, map_id, uuid)
map_id
|> WandererApp.Map.get_map_state!()
|> Server.Impl.start_map()
{:noreply, %{state | map_ids: [map_id | map_ids]}}
{:reply, :ok, state}
else
{:noreply, state}
# Check if map is already started or being initialized
if map_id in map_ids do
Logger.debug("[Map Pool #{uuid}] Map #{map_id} already in pool")
{:reply, {:ok, :already_started}, state}
else
# Pre-register the map in registry and cache to claim ownership
# This prevents race conditions where multiple pools try to start the same map
registry_result =
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
[map_id | r_map_ids]
end)
case registry_result do
{_new_value, _old_value} ->
# Add to cache
Cachex.put(@cache, map_id, uuid)
# Add to state
new_state = %{state | map_ids: [map_id | map_ids]}
# Persist state to ETS
MapPoolState.save_pool_state(uuid, new_state.map_ids)
Logger.debug("[Map Pool #{uuid}] Map #{map_id} queued for async initialization")
# Return immediately and initialize asynchronously
{:reply, {:ok, :initializing}, new_state, {:continue, {:init_map, map_id}}}
:error ->
Logger.error("[Map Pool #{uuid}] Failed to register map #{map_id} in registry")
{:reply, {:error, :registration_failed}, state}
end
end
end
end
@impl true
def handle_cast(
def handle_call(
{:stop_map, map_id},
%{map_ids: map_ids, uuid: uuid} = state
_from,
state
) do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
r_map_ids |> Enum.reject(fn id -> id == map_id end)
end)
case do_stop_map(map_id, state) do
{:ok, new_state} ->
{:reply, :ok, new_state}
Cachex.del(@cache, map_id)
{:error, reason} ->
{:reply, {:error, reason}, state}
end
end
map_id
|> Server.Impl.stop_map()
defp do_start_map(map_id, %{map_ids: map_ids, uuid: uuid} = state) do
if map_id in map_ids do
# Map already started
{:ok, state}
else
# Track what operations succeeded for potential rollback
completed_operations = []
{:noreply, %{state | map_ids: map_ids |> Enum.reject(fn id -> id == map_id end)}}
try do
# Step 1: Update Registry (most critical, do first)
registry_result =
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
[map_id | r_map_ids]
end)
completed_operations = [:registry | completed_operations]
case registry_result do
{new_value, _old_value} when is_list(new_value) ->
:ok
:error ->
raise "Failed to update registry for pool #{uuid}"
end
# Step 2: Add to cache
case Cachex.put(@cache, map_id, uuid) do
{:ok, _} ->
completed_operations = [:cache | completed_operations]
{:error, reason} ->
raise "Failed to add to cache: #{inspect(reason)}"
end
# Step 3: Start the map server
map_id
|> WandererApp.Map.get_map_state!()
|> Server.Impl.start_map()
completed_operations = [:map_server | completed_operations]
# Step 4: Update GenServer state (last, as this is in-memory and fast)
new_state = %{state | map_ids: [map_id | map_ids]}
# Step 5: Persist state to ETS for crash recovery
MapPoolState.save_pool_state(uuid, new_state.map_ids)
Logger.debug("[Map Pool] Successfully started map #{map_id} in pool #{uuid}")
{:ok, new_state}
rescue
e ->
Logger.error("""
[Map Pool] Failed to start map #{map_id} (completed: #{inspect(completed_operations)}): #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
# Attempt rollback of completed operations
rollback_start_map_operations(map_id, uuid, completed_operations)
{:error, Exception.message(e)}
end
end
end
defp rollback_start_map_operations(map_id, uuid, completed_operations) do
Logger.warning("[Map Pool] Attempting to rollback start_map operations for #{map_id}")
# Rollback in reverse order
if :map_server in completed_operations do
Logger.debug("[Map Pool] Rollback: Stopping map server for #{map_id}")
try do
Server.Impl.stop_map(map_id)
rescue
e ->
Logger.error("[Map Pool] Rollback failed to stop map server: #{Exception.message(e)}")
end
end
if :cache in completed_operations do
Logger.debug("[Map Pool] Rollback: Removing #{map_id} from cache")
case Cachex.del(@cache, map_id) do
{:ok, _} ->
:ok
{:error, reason} ->
Logger.error("[Map Pool] Rollback failed for cache: #{inspect(reason)}")
end
end
if :registry in completed_operations do
Logger.debug("[Map Pool] Rollback: Removing #{map_id} from registry")
try do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
r_map_ids |> Enum.reject(fn id -> id == map_id end)
end)
rescue
e ->
Logger.error("[Map Pool] Rollback failed for registry: #{Exception.message(e)}")
end
end
end
defp do_stop_map(map_id, %{map_ids: map_ids, uuid: uuid} = state) do
# Track what operations succeeded for potential rollback
completed_operations = []
try do
# Step 1: Update Registry (most critical, do first)
registry_result =
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
r_map_ids |> Enum.reject(fn id -> id == map_id end)
end)
completed_operations = [:registry | completed_operations]
case registry_result do
{new_value, _old_value} when is_list(new_value) ->
:ok
:error ->
raise "Failed to update registry for pool #{uuid}"
end
# Step 2: Delete from cache
case Cachex.del(@cache, map_id) do
{:ok, _} ->
completed_operations = [:cache | completed_operations]
{:error, reason} ->
raise "Failed to delete from cache: #{inspect(reason)}"
end
# Step 3: Stop the map server (clean up all map resources)
map_id
|> Server.Impl.stop_map()
completed_operations = [:map_server | completed_operations]
# Step 4: Update GenServer state (last, as this is in-memory and fast)
new_state = %{state | map_ids: map_ids |> Enum.reject(fn id -> id == map_id end)}
# Step 5: Persist state to ETS for crash recovery
MapPoolState.save_pool_state(uuid, new_state.map_ids)
Logger.debug("[Map Pool] Successfully stopped map #{map_id} from pool #{uuid}")
{:ok, new_state}
rescue
e ->
Logger.error("""
[Map Pool] Failed to stop map #{map_id} (completed: #{inspect(completed_operations)}): #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
# Attempt rollback of completed operations
rollback_stop_map_operations(map_id, uuid, completed_operations)
{:error, Exception.message(e)}
end
end
defp rollback_stop_map_operations(map_id, uuid, completed_operations) do
Logger.warning("[Map Pool] Attempting to rollback stop_map operations for #{map_id}")
# Rollback in reverse order
if :cache in completed_operations do
Logger.debug("[Map Pool] Rollback: Re-adding #{map_id} to cache")
case Cachex.put(@cache, map_id, uuid) do
{:ok, _} ->
:ok
{:error, reason} ->
Logger.error("[Map Pool] Rollback failed for cache: #{inspect(reason)}")
end
end
if :registry in completed_operations do
Logger.debug("[Map Pool] Rollback: Re-adding #{map_id} to registry")
try do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
if map_id in r_map_ids do
r_map_ids
else
[map_id | r_map_ids]
end
end)
rescue
e ->
Logger.error("[Map Pool] Rollback failed for registry: #{Exception.message(e)}")
end
end
# Note: We don't rollback map_server stop as Server.Impl.stop_map() is idempotent
# and the cleanup operations are safe to leave in a "stopped" state
end
@impl true
def handle_call(:error, _, state), do: {:stop, :error, :ok, state}
@impl true
def handle_info(:backup_state, %{map_ids: map_ids} = state) do
def handle_info(:backup_state, %{map_ids: map_ids, uuid: uuid} = state) do
Process.send_after(self(), :backup_state, @backup_state_timeout)
try do
# Persist pool state to ETS
MapPoolState.save_pool_state(uuid, map_ids)
# Backup individual map states to database
map_ids
|> Task.async_stream(
fn map_id ->
@@ -231,25 +646,38 @@ defmodule WandererApp.Map.MapPool do
Process.send_after(self(), :garbage_collect, @garbage_collection_interval)
try do
map_ids
|> Enum.each(fn map_id ->
# presence_character_ids =
# WandererApp.Cache.lookup!("map_#{map_id}:presence_character_ids", [])
# Process each map and accumulate state changes
new_state =
map_ids
|> Enum.reduce(state, fn map_id, current_state ->
presence_character_ids =
WandererApp.Cache.lookup!("map_#{map_id}:presence_character_ids", [])
# if presence_character_ids |> Enum.empty?() do
Logger.info(
"#{uuid}: No more characters present on: #{map_id}, shutting down map server..."
)
if presence_character_ids |> Enum.empty?() do
Logger.info(
"#{uuid}: No more characters present on: #{map_id}, shutting down map server..."
)
GenServer.cast(self(), {:stop_map, map_id})
# end
end)
case do_stop_map(map_id, current_state) do
{:ok, updated_state} ->
Logger.debug("#{uuid}: Successfully stopped map #{map_id}")
updated_state
{:error, reason} ->
Logger.error("#{uuid}: Failed to stop map #{map_id}: #{reason}")
current_state
end
else
current_state
end
end)
{:noreply, new_state}
rescue
e ->
Logger.error(Exception.message(e))
Logger.error("#{uuid}: Garbage collection error: #{Exception.message(e)}")
{:noreply, state}
end
{:noreply, state}
end
@impl true
@@ -309,8 +737,69 @@ defmodule WandererApp.Map.MapPool do
{:noreply, state}
end
def handle_info(:map_deleted, %{map_ids: map_ids} = state) do
# When a map is deleted, stop all maps in this pool that are deleted
# This is a graceful shutdown triggered by user action
Logger.info("[Map Pool #{state.uuid}] Received map_deleted event, stopping affected maps")
# Check which of our maps were deleted and stop them
new_state =
map_ids
|> Enum.reduce(state, fn map_id, current_state ->
# Check if the map still exists in the database
case WandererApp.MapRepo.get(map_id) do
{:ok, %{deleted: true}} ->
Logger.info("[Map Pool #{state.uuid}] Map #{map_id} was deleted, stopping it")
case do_stop_map(map_id, current_state) do
{:ok, updated_state} ->
updated_state
{:error, reason} ->
Logger.error(
"[Map Pool #{state.uuid}] Failed to stop deleted map #{map_id}: #{reason}"
)
current_state
end
{:ok, _map} ->
# Map still exists and is not deleted
current_state
{:error, _} ->
# Map doesn't exist, should stop it
Logger.info("[Map Pool #{state.uuid}] Map #{map_id} not found, stopping it")
case do_stop_map(map_id, current_state) do
{:ok, updated_state} ->
updated_state
{:error, reason} ->
Logger.error(
"[Map Pool #{state.uuid}] Failed to stop missing map #{map_id}: #{reason}"
)
current_state
end
end
end)
{:noreply, new_state}
end
def handle_info(event, state) do
Server.Impl.handle_event(event)
try do
Server.Impl.handle_event(event)
rescue
e ->
Logger.error("""
[Map Pool] handle_info => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
ErrorTracker.report(e, __STACKTRACE__)
end
{:noreply, state}
end


@@ -8,6 +8,7 @@ defmodule WandererApp.Map.MapPoolDynamicSupervisor do
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@map_pool_limit 10
@genserver_call_timeout :timer.minutes(2)
@name __MODULE__
@@ -30,23 +31,109 @@ defmodule WandererApp.Map.MapPoolDynamicSupervisor do
start_child([map_id], pools |> Enum.count())
pid ->
GenServer.cast(pid, {:start_map, map_id})
result = GenServer.call(pid, {:start_map, map_id}, @genserver_call_timeout)
case result do
{:ok, :initializing} ->
Logger.debug(
"[Map Pool Supervisor] Map #{map_id} queued for async initialization"
)
result
{:ok, :already_started} ->
Logger.debug("[Map Pool Supervisor] Map #{map_id} already started")
result
:ok ->
# Legacy synchronous response (from crash recovery path)
Logger.debug("[Map Pool Supervisor] Map #{map_id} started synchronously")
result
other ->
Logger.warning(
"[Map Pool Supervisor] Unexpected response for map #{map_id}: #{inspect(other)}"
)
other
end
end
end
end
def stop_map(map_id) do
{:ok, pool_uuid} = Cachex.get(@cache, map_id)
case Cachex.get(@cache, map_id) do
{:ok, nil} ->
# Cache miss - try to find the pool by scanning the registry
Logger.warning(
"Cache miss for map #{map_id}, scanning registry for pool containing this map"
)
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, pool_uuid)
) do
find_pool_by_scanning_registry(map_id)
{:ok, pool_uuid} ->
# Cache hit - use the pool_uuid to lookup the pool
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, pool_uuid)
) do
[] ->
Logger.warning(
"Pool with UUID #{pool_uuid} not found in registry for map #{map_id}, scanning registry"
)
find_pool_by_scanning_registry(map_id)
[{pool_pid, _}] ->
GenServer.call(pool_pid, {:stop_map, map_id}, @genserver_call_timeout)
end
{:error, reason} ->
Logger.error("Failed to lookup map #{map_id} in cache: #{inspect(reason)}")
:ok
end
end
defp find_pool_by_scanning_registry(map_id) do
case Registry.lookup(@registry, WandererApp.Map.MapPool) do
[] ->
Logger.debug("No map pools found in registry for map #{map_id}")
:ok
[{pool_pid, _}] ->
GenServer.cast(pool_pid, {:stop_map, map_id})
pools ->
# Scan all pools to find the one containing this map_id
found_pool =
Enum.find_value(pools, fn {_pid, uuid} ->
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, uuid)
) do
[{pool_pid, map_ids}] ->
if map_id in map_ids do
{pool_pid, uuid}
else
nil
end
_ ->
nil
end
end)
case found_pool do
{pool_pid, pool_uuid} ->
Logger.info(
"Found map #{map_id} in pool #{pool_uuid} via registry scan, updating cache"
)
# Update the cache to fix the inconsistency
Cachex.put(@cache, map_id, pool_uuid)
GenServer.call(pool_pid, {:stop_map, map_id}, @genserver_call_timeout)
nil ->
Logger.debug("Map #{map_id} not found in any pool registry")
:ok
end
end
end
@@ -79,9 +166,13 @@ defmodule WandererApp.Map.MapPoolDynamicSupervisor do
end
defp start_child(map_ids, pools_count) do
case DynamicSupervisor.start_child(@name, {WandererApp.Map.MapPool, map_ids}) do
# Generate UUID for the new pool - this will be used for crash recovery
uuid = UUID.uuid1()
# Pass both UUID and map_ids to the pool for crash recovery support
case DynamicSupervisor.start_child(@name, {WandererApp.Map.MapPool, {uuid, map_ids}}) do
{:ok, pid} ->
Logger.info("Starting map pool, total map_pools: #{pools_count + 1}")
Logger.info("Starting map pool #{uuid}, total map_pools: #{pools_count + 1}")
{:ok, pid}
{:error, {:already_started, pid}} ->


@@ -0,0 +1,190 @@
defmodule WandererApp.Map.MapPoolState do
@moduledoc """
Helper module for persisting MapPool state to ETS for crash recovery.
This module provides functions to save and retrieve MapPool state from an ETS table.
The state survives GenServer crashes but is lost on node restart, which ensures
automatic recovery from crashes while avoiding stale state on system restart.
## ETS Table Ownership
The ETS table `:map_pool_state_table` is owned by the MapPoolSupervisor,
ensuring it survives individual MapPool process crashes.
## State Format
State is stored as tuples: `{pool_uuid, map_ids, last_updated_timestamp}`
where:
- `pool_uuid` is the unique identifier for the pool (key)
- `map_ids` is a list of map IDs managed by this pool
- `last_updated_timestamp` is the Unix timestamp of the last update
"""
require Logger
@table_name :map_pool_state_table
@stale_threshold_hours 24
@doc """
Initializes the ETS table for storing MapPool state.
This should be called by the MapPoolSupervisor during initialization.
The table is created as:
- `:set` - Each pool UUID has exactly one entry
- `:public` - Any process can read/write
- `:named_table` - Can be accessed by name
Returns the table reference or raises if table already exists.
"""
@spec init_table() :: :ets.table()
def init_table do
:ets.new(@table_name, [:set, :public, :named_table])
end
@doc """
Saves the current state of a MapPool to ETS.
## Parameters
- `uuid` - The unique identifier for the pool
- `map_ids` - List of map IDs currently managed by this pool
## Examples
iex> MapPoolState.save_pool_state("pool-123", [1, 2, 3])
:ok
"""
@spec save_pool_state(String.t(), [integer()]) :: :ok
def save_pool_state(uuid, map_ids) when is_binary(uuid) and is_list(map_ids) do
timestamp = System.system_time(:second)
true = :ets.insert(@table_name, {uuid, map_ids, timestamp})
Logger.debug("Saved MapPool state for #{uuid}: #{length(map_ids)} maps",
pool_uuid: uuid,
map_count: length(map_ids)
)
:ok
end
@doc """
Retrieves the saved state for a MapPool from ETS.
## Parameters
- `uuid` - The unique identifier for the pool
## Returns
- `{:ok, map_ids}` if state exists
- `{:error, :not_found}` if no state exists for this UUID
## Examples
iex> MapPoolState.get_pool_state("pool-123")
{:ok, [1, 2, 3]}
iex> MapPoolState.get_pool_state("non-existent")
{:error, :not_found}
"""
@spec get_pool_state(String.t()) :: {:ok, [integer()]} | {:error, :not_found}
def get_pool_state(uuid) when is_binary(uuid) do
case :ets.lookup(@table_name, uuid) do
[{^uuid, map_ids, _timestamp}] ->
{:ok, map_ids}
[] ->
{:error, :not_found}
end
end
@doc """
Deletes the state for a MapPool from ETS.
This should be called when a pool is gracefully shut down.
## Parameters
- `uuid` - The unique identifier for the pool
## Examples
iex> MapPoolState.delete_pool_state("pool-123")
:ok
"""
@spec delete_pool_state(String.t()) :: :ok
def delete_pool_state(uuid) when is_binary(uuid) do
true = :ets.delete(@table_name, uuid)
Logger.debug("Deleted MapPool state for #{uuid}", pool_uuid: uuid)
:ok
end
@doc """
Removes stale entries from the ETS table.
Entries are considered stale if they haven't been updated in the last
#{@stale_threshold_hours} hours. This helps prevent the table from growing
unbounded due to pool UUIDs that are no longer in use.
Returns the number of entries deleted.
## Examples
iex> MapPoolState.cleanup_stale_entries()
{:ok, 3}
"""
@spec cleanup_stale_entries() :: {:ok, non_neg_integer()}
def cleanup_stale_entries do
stale_threshold = System.system_time(:second) - @stale_threshold_hours * 3600
match_spec = [
{
{:"$1", :"$2", :"$3"},
[{:<, :"$3", stale_threshold}],
[:"$1"]
}
]
stale_uuids = :ets.select(@table_name, match_spec)
Enum.each(stale_uuids, fn uuid ->
:ets.delete(@table_name, uuid)
Logger.info("Cleaned up stale MapPool state for #{uuid}",
pool_uuid: uuid,
reason: :stale
)
end)
{:ok, length(stale_uuids)}
end
@doc """
Returns all pool states currently stored in ETS.
Useful for debugging and monitoring.
## Examples
iex> MapPoolState.list_all_states()
[
{"pool-123", [1, 2, 3], 1699564800},
{"pool-456", [4, 5], 1699564900}
]
"""
@spec list_all_states() :: [{String.t(), [integer()], integer()}]
def list_all_states do
:ets.tab2list(@table_name)
end
@doc """
Returns the count of pool states currently stored in ETS.
## Examples
iex> MapPoolState.count_states()
5
"""
@spec count_states() :: non_neg_integer()
def count_states do
:ets.info(@table_name, :size)
end
end
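The round-trip described in the moduledoc can be sketched as a short iex-style session, based on the doctests above (the table must be initialized first; the pool UUID is hypothetical):

```elixir
# Hypothetical usage of MapPoolState, following its own doctests.
alias WandererApp.Map.MapPoolState

MapPoolState.init_table()
:ok = MapPoolState.save_pool_state("pool-123", [1, 2, 3])
{:ok, [1, 2, 3]} = MapPoolState.get_pool_state("pool-123")
:ok = MapPoolState.delete_pool_state("pool-123")
{:error, :not_found} = MapPoolState.get_pool_state("pool-123")
```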


@@ -2,6 +2,8 @@ defmodule WandererApp.Map.MapPoolSupervisor do
@moduledoc false
use Supervisor
alias WandererApp.Map.MapPoolState
@name __MODULE__
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@@ -11,10 +13,15 @@ defmodule WandererApp.Map.MapPoolSupervisor do
end
def init(_args) do
# Initialize ETS table for MapPool state persistence
# This table survives individual MapPool crashes but is lost on node restart
MapPoolState.init_table()
children = [
{Registry, [keys: :unique, name: @unique_registry]},
{Registry, [keys: :duplicate, name: @registry]},
{WandererApp.Map.MapPoolDynamicSupervisor, []}
{WandererApp.Map.MapPoolDynamicSupervisor, []},
{WandererApp.Map.Reconciler, []}
]
Supervisor.init(children, strategy: :rest_for_one, max_restarts: 10)


@@ -0,0 +1,280 @@
defmodule WandererApp.Map.Reconciler do
@moduledoc """
Periodically reconciles map state across different stores (Cache, Registry, GenServer state)
to detect and fix inconsistencies that may prevent map servers from restarting.
"""
use GenServer
require Logger
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@reconciliation_interval :timer.minutes(5)
def start_link(_opts) do
GenServer.start_link(__MODULE__, [], name: __MODULE__)
end
@impl true
def init(_opts) do
Logger.info("Starting Map Reconciler")
schedule_reconciliation()
{:ok, %{}}
end
@impl true
def handle_info(:reconcile, state) do
schedule_reconciliation()
try do
reconcile_state()
rescue
e ->
Logger.error("""
[Map Reconciler] reconciliation error: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
@doc """
Manually trigger a reconciliation (useful for testing or manual cleanup)
"""
def trigger_reconciliation do
GenServer.cast(__MODULE__, :reconcile_now)
end
@impl true
def handle_cast(:reconcile_now, state) do
try do
reconcile_state()
rescue
e ->
Logger.error("""
[Map Reconciler] manual reconciliation error: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
defp schedule_reconciliation do
Process.send_after(self(), :reconcile, @reconciliation_interval)
end
defp reconcile_state do
Logger.debug("[Map Reconciler] Starting state reconciliation")
# Get started_maps from cache
{:ok, started_maps} = WandererApp.Cache.lookup("started_maps", [])
# Get all maps from registries
registry_maps = get_all_registry_maps()
# Detect zombie maps (in started_maps but not in any registry)
zombie_maps = started_maps -- registry_maps
# Detect orphan maps (in registry but not in started_maps)
orphan_maps = registry_maps -- started_maps
# Detect cache inconsistencies (map_pool_cache pointing to wrong or non-existent pools)
cache_inconsistencies = find_cache_inconsistencies(registry_maps)
stats = %{
total_started_maps: length(started_maps),
total_registry_maps: length(registry_maps),
zombie_maps: length(zombie_maps),
orphan_maps: length(orphan_maps),
cache_inconsistencies: length(cache_inconsistencies)
}
Logger.info("[Map Reconciler] Reconciliation stats: #{inspect(stats)}")
# Emit telemetry
:telemetry.execute(
[:wanderer_app, :map, :reconciliation],
stats,
%{}
)
# Clean up zombie maps
cleanup_zombie_maps(zombie_maps)
# Fix orphan maps
fix_orphan_maps(orphan_maps)
# Fix cache inconsistencies
fix_cache_inconsistencies(cache_inconsistencies)
Logger.debug("[Map Reconciler] State reconciliation completed")
end
defp get_all_registry_maps do
case Registry.lookup(@registry, WandererApp.Map.MapPool) do
[] ->
[]
pools ->
pools
|> Enum.flat_map(fn {_pid, uuid} ->
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, uuid)
) do
[{_pool_pid, map_ids}] -> map_ids
_ -> []
end
end)
|> Enum.uniq()
end
end
defp find_cache_inconsistencies(registry_maps) do
registry_maps
|> Enum.filter(fn map_id ->
case Cachex.get(@cache, map_id) do
{:ok, nil} ->
# Map in registry but not in cache
true
{:ok, pool_uuid} ->
# Check if the pool_uuid actually exists in registry
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, pool_uuid)
) do
[] ->
# Cache points to non-existent pool
true
[{_pool_pid, map_ids}] ->
# Check if this map is actually in the pool's map_ids
map_id not in map_ids
_ ->
false
end
{:error, _} ->
true
end
end)
end
defp cleanup_zombie_maps([]), do: :ok
defp cleanup_zombie_maps(zombie_maps) do
Logger.warning("[Map Reconciler] Found #{length(zombie_maps)} zombie maps: #{inspect(zombie_maps)}")
Enum.each(zombie_maps, fn map_id ->
Logger.info("[Map Reconciler] Cleaning up zombie map: #{map_id}")
# Remove from started_maps cache
WandererApp.Cache.insert_or_update(
"started_maps",
[],
fn started_maps ->
started_maps |> Enum.reject(fn started_map_id -> started_map_id == map_id end)
end
)
# Clean up any stale map_pool_cache entries
Cachex.del(@cache, map_id)
# Clean up map-specific caches
WandererApp.Cache.delete("map_#{map_id}:started")
WandererApp.Cache.delete("map_characters-#{map_id}")
WandererApp.Map.CacheRTree.clear_tree("rtree_#{map_id}")
WandererApp.Map.delete_map_state(map_id)
:telemetry.execute(
[:wanderer_app, :map, :reconciliation, :zombie_cleanup],
%{count: 1},
%{map_id: map_id}
)
end)
end
defp fix_orphan_maps([]), do: :ok
defp fix_orphan_maps(orphan_maps) do
Logger.warning("[Map Reconciler] Found #{length(orphan_maps)} orphan maps: #{inspect(orphan_maps)}")
Enum.each(orphan_maps, fn map_id ->
Logger.info("[Map Reconciler] Fixing orphan map: #{map_id}")
# Add to started_maps cache
WandererApp.Cache.insert_or_update(
"started_maps",
[map_id],
fn existing ->
[map_id | existing] |> Enum.uniq()
end
)
:telemetry.execute(
[:wanderer_app, :map, :reconciliation, :orphan_fixed],
%{count: 1},
%{map_id: map_id}
)
end)
end
defp fix_cache_inconsistencies([]), do: :ok
defp fix_cache_inconsistencies(inconsistent_maps) do
Logger.warning(
"[Map Reconciler] Found #{length(inconsistent_maps)} cache inconsistencies: #{inspect(inconsistent_maps)}"
)
Enum.each(inconsistent_maps, fn map_id ->
Logger.info("[Map Reconciler] Fixing cache inconsistency for map: #{map_id}")
# Find the correct pool for this map
case find_pool_for_map(map_id) do
{:ok, pool_uuid} ->
Logger.info("[Map Reconciler] Updating cache: #{map_id} -> #{pool_uuid}")
Cachex.put(@cache, map_id, pool_uuid)
:telemetry.execute(
[:wanderer_app, :map, :reconciliation, :cache_fixed],
%{count: 1},
%{map_id: map_id, pool_uuid: pool_uuid}
)
:error ->
Logger.warning("[Map Reconciler] Could not find pool for map #{map_id}, removing from cache")
Cachex.del(@cache, map_id)
end
end)
end
defp find_pool_for_map(map_id) do
case Registry.lookup(@registry, WandererApp.Map.MapPool) do
[] ->
:error
pools ->
pools
|> Enum.find_value(:error, fn {_pid, uuid} ->
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, uuid)
) do
[{_pool_pid, map_ids}] ->
if map_id in map_ids do
{:ok, uuid}
else
nil
end
_ ->
nil
end
end)
end
end
end
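Editor's note: `find_pool_for_map/1` above scans every registered pool and returns the first one whose member list contains the map. A minimal standalone sketch of the same scan, with plain data standing in for the two `Registry.lookup/2` calls (all names here are hypothetical stand-ins, not the app's API):

```elixir
# `pools` mimics Registry.lookup/2 results ({pid, uuid} pairs);
# `members` mimics the per-pool map_ids stored in the unique registry.
pools = [{:pid1, "pool-a"}, {:pid2, "pool-b"}]
members = %{"pool-a" => [1, 2], "pool-b" => [3]}

find_pool = fn pools, members, map_id ->
  # Enum.find_value/3 returns the default (:error) when no pool matches.
  Enum.find_value(pools, :error, fn {_pid, uuid} ->
    case Map.fetch(members, uuid) do
      {:ok, map_ids} -> if map_id in map_ids, do: {:ok, uuid}, else: nil
      :error -> nil
    end
  end)
end

{:ok, owner_pool} = find_pool.(pools, members, 3)
```

`Enum.find_value/3` stops at the first truthy result, so the scan short-circuits as soon as the owning pool is found.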


@@ -300,10 +300,9 @@ defmodule WandererApp.Map.SubscriptionManager do
defp is_expired(subscription) when is_map(subscription),
do: DateTime.compare(DateTime.utc_now(), subscription.active_till) == :gt
-  defp renew_subscription(%{auto_renew?: true} = subscription) when is_map(subscription) do
-    with {:ok, %{map: map}} <-
-           subscription |> WandererApp.MapSubscriptionRepo.load_relationships([:map]),
-         {:ok, estimated_price, discount} <- estimate_price(subscription, true),
+  defp renew_subscription(%{auto_renew?: true, map: map} = subscription)
+       when is_map(subscription) do
+    with {:ok, estimated_price, discount} <- estimate_price(subscription, true),
{:ok, map_balance} <- get_balance(map) do
case map_balance >= estimated_price do
true ->


@@ -35,16 +35,14 @@ defmodule WandererApp.Map.ZkbDataFetcher do
|> Task.async_stream(
fn map_id ->
try do
-          if WandererApp.Map.Server.map_pid(map_id) do
-            # Always update kill counts
-            update_map_kills(map_id)
+          # Always update kill counts
+          update_map_kills(map_id)

-            # Update detailed kills for maps with active subscriptions
-            {:ok, is_subscription_active} = map_id |> WandererApp.Map.is_subscription_active?()
+          # Update detailed kills for maps with active subscriptions
+          {:ok, is_subscription_active} = map_id |> WandererApp.Map.is_subscription_active?()

-            if is_subscription_active do
-              update_detailed_map_kills(map_id)
-            end
-          end
+          if is_subscription_active do
+            update_detailed_map_kills(map_id)
+          end
rescue
e ->


@@ -231,31 +231,15 @@ defmodule WandererApp.Map.Operations.Connections do
attrs
) do
with {:ok, conn_struct} <- MapConnectionRepo.get_by_id(map_id, conn_id),
-         result <-
+         :ok <-
           (try do
-              _allowed_keys = [
-                :mass_status,
-                :ship_size_type,
-                :time_status,
-                :type
-              ]
-
-              _update_map =
-                attrs
-                |> Enum.filter(fn {k, _v} ->
-                  k in ["mass_status", "ship_size_type", "time_status", "type"]
-                end)
-                |> Enum.map(fn {k, v} -> {String.to_atom(k), v} end)
-                |> Enum.into(%{})
-
-              res = apply_connection_updates(map_id, conn_struct, attrs, char_id)
-              res
+              apply_connection_updates(map_id, conn_struct, attrs, char_id)
            rescue
              error ->
                Logger.error("[update_connection] Exception: #{inspect(error)}")
                {:error, :exception}
-            end),
-         :ok <- result do
+            end) do
# Since GenServer updates are asynchronous, manually apply updates to the current struct
# to return the correct data immediately instead of refetching from potentially stale cache
updated_attrs =
@@ -374,6 +358,7 @@ defmodule WandererApp.Map.Operations.Connections do
"ship_size_type" -> maybe_update_ship_size_type(map_id, conn, val)
"time_status" -> maybe_update_time_status(map_id, conn, val)
"type" -> maybe_update_type(map_id, conn, val)
"locked" -> maybe_update_locked(map_id, conn, val)
_ -> :ok
end
@@ -429,6 +414,16 @@ defmodule WandererApp.Map.Operations.Connections do
})
end
defp maybe_update_locked(_map_id, _conn, nil), do: :ok
defp maybe_update_locked(map_id, conn, value) do
Server.update_connection_locked(map_id, %{
solar_system_source_id: conn.solar_system_source,
solar_system_target_id: conn.solar_system_target,
locked: value
})
end
@doc "Creates a connection between two systems"
@spec create_connection(String.t(), map(), String.t()) ::
{:ok, :created} | {:skip, :exists} | {:error, atom()}


@@ -5,9 +5,42 @@ defmodule WandererApp.Map.Operations.Signatures do
require Logger
alias WandererApp.Map.Operations
-  alias WandererApp.Api.{MapSystem, MapSystemSignature}
+  alias WandererApp.Api.{Character, MapSystem, MapSystemSignature}
alias WandererApp.Map.Server
# Private helper to validate character_eve_id from params and return internal character ID
# If character_eve_id is provided in params, validates it exists and returns the internal UUID
# If not provided, falls back to the owner's character ID (which is already the internal UUID)
@spec validate_character_eve_id(map() | nil, String.t()) ::
{:ok, String.t()} | {:error, :invalid_character}
defp validate_character_eve_id(params, fallback_char_id) when is_map(params) do
case Map.get(params, "character_eve_id") do
nil ->
# No character_eve_id provided, use fallback (owner's internal character UUID)
{:ok, fallback_char_id}
provided_char_eve_id when is_binary(provided_char_eve_id) ->
# Validate the provided character_eve_id exists and get internal UUID
case Character.by_eve_id(provided_char_eve_id) do
{:ok, character} ->
# Return the internal character UUID, not the eve_id
{:ok, character.id}
_ ->
{:error, :invalid_character}
end
_ ->
# Invalid format
{:error, :invalid_character}
end
end
# Handle nil or non-map params by falling back to owner's character
defp validate_character_eve_id(_params, fallback_char_id) do
{:ok, fallback_char_id}
end
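Editor's note: the validate-then-fallback shape above can be exercised in isolation by injecting the lookup as a function. A minimal sketch, where `validate/3` and the `lookup` callback are hypothetical stand-ins for `validate_character_eve_id/2` and `Character.by_eve_id/1`:

```elixir
defmodule CharParamSketch do
  # params: request params map (string keys); fallback: owner's internal UUID;
  # lookup: fn eve_id -> {:ok, %{id: uuid}} | {:error, _} (injected for testing).
  def validate(params, fallback, lookup) when is_map(params) do
    case Map.get(params, "character_eve_id") do
      nil ->
        # Not provided: fall back to the owner's internal character UUID
        {:ok, fallback}

      eve_id when is_binary(eve_id) ->
        case lookup.(eve_id) do
          # Return the internal UUID, never the eve_id itself
          {:ok, %{id: id}} -> {:ok, id}
          _ -> {:error, :invalid_character}
        end

      _ ->
        # Wrong type (e.g., integer) counts as invalid
        {:error, :invalid_character}
    end
  end

  # nil or non-map params: use the fallback
  def validate(_params, fallback, _lookup), do: {:ok, fallback}
end
```

Injecting the lookup keeps the validation logic testable without a database.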
@spec list_signatures(String.t()) :: [map()]
def list_signatures(map_id) do
systems = Operations.list_systems(map_id)
@@ -41,11 +74,14 @@ defmodule WandererApp.Map.Operations.Signatures do
%{"solar_system_id" => solar_system_id} = params
)
when is_integer(solar_system_id) do
-    # Convert solar_system_id to system_id for internal use
-    with {:ok, system} <- MapSystem.by_map_id_and_solar_system_id(map_id, solar_system_id) do
+    # Validate character first, then convert solar_system_id to system_id
+    # validated_char_uuid is the internal character UUID for Server.update_signatures
+    with {:ok, validated_char_uuid} <- validate_character_eve_id(params, char_id),
+         {:ok, system} <- MapSystem.by_map_id_and_solar_system_id(map_id, solar_system_id) do
+      # Keep character_eve_id in attrs if provided by user (parse_signatures will use it)
+      # If not provided, parse_signatures will use the character_eve_id from validated_char_uuid lookup
       attrs =
         params
-        |> Map.put("character_eve_id", char_id)
|> Map.put("system_id", system.id)
|> Map.delete("solar_system_id")
@@ -54,7 +90,7 @@ defmodule WandererApp.Map.Operations.Signatures do
updated_signatures: [],
removed_signatures: [],
solar_system_id: solar_system_id,
-          character_id: char_id,
+          character_id: validated_char_uuid, # Pass internal UUID here
user_id: user_id,
delete_connection_with_sigs: false
}) do
@@ -86,6 +122,10 @@ defmodule WandererApp.Map.Operations.Signatures do
{:error, :unexpected_error}
end
else
{:error, :invalid_character} ->
Logger.error("[create_signature] Invalid character_eve_id provided")
{:error, :invalid_character}
_ ->
Logger.error(
"[create_signature] System not found for solar_system_id: #{solar_system_id}"
@@ -111,7 +151,10 @@ defmodule WandererApp.Map.Operations.Signatures do
sig_id,
params
) do
-    with {:ok, sig} <- MapSystemSignature.by_id(sig_id),
+    # Validate character first, then look up signature and system
+    # validated_char_uuid is the internal character UUID
+    with {:ok, validated_char_uuid} <- validate_character_eve_id(params, char_id),
+         {:ok, sig} <- MapSystemSignature.by_id(sig_id),
{:ok, system} <- MapSystem.by_id(sig.system_id) do
base = %{
"eve_id" => sig.eve_id,
@@ -120,11 +163,11 @@ defmodule WandererApp.Map.Operations.Signatures do
"group" => sig.group,
"type" => sig.type,
"custom_info" => sig.custom_info,
-      "character_eve_id" => char_id,
"description" => sig.description,
"linked_system_id" => sig.linked_system_id
}
+    # Merge user params (which may include character_eve_id) with base
attrs = Map.merge(base, params)
:ok =
@@ -133,7 +176,7 @@ defmodule WandererApp.Map.Operations.Signatures do
updated_signatures: [attrs],
removed_signatures: [],
solar_system_id: system.solar_system_id,
-          character_id: char_id,
+          character_id: validated_char_uuid, # Pass internal UUID here
user_id: user_id,
delete_connection_with_sigs: false
})
@@ -151,6 +194,10 @@ defmodule WandererApp.Map.Operations.Signatures do
_ -> {:ok, attrs}
end
else
{:error, :invalid_character} ->
Logger.error("[update_signature] Invalid character_eve_id provided")
{:error, :invalid_character}
err ->
Logger.error("[update_signature] Unexpected error: #{inspect(err)}")
{:error, :unexpected_error}


@@ -310,8 +310,8 @@ defmodule WandererApp.Map.Server.CharactersImpl do
start_solar_system_id =
WandererApp.Cache.take("map:#{map_id}:character:#{character_id}:start_solar_system_id")
-    case is_nil(old_location.solar_system_id) and
-           is_nil(start_solar_system_id) and
+    case is_nil(old_location.solar_system_id) &&
+           is_nil(start_solar_system_id) &&
ConnectionsImpl.can_add_location(scope, location.solar_system_id) do
true ->
:ok = SystemsImpl.maybe_add_system(map_id, location, nil, map_opts)


@@ -373,36 +373,36 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
solar_system_target: solar_system_target
} = updated_connection
) do
-    source_system =
-      WandererApp.Map.find_system_by_location(
-        map_id,
-        %{solar_system_id: solar_system_source}
-      )
-
-    target_system =
-      WandererApp.Map.find_system_by_location(
-        map_id,
-        %{solar_system_id: solar_system_target}
-      )
-
-    source_linked_signatures =
-      find_linked_signatures(source_system, target_system)
-
-    target_linked_signatures = find_linked_signatures(target_system, source_system)
-
-    update_signatures_time_status(
-      map_id,
-      source_system.solar_system_id,
-      source_linked_signatures,
-      time_status
-    )
-
-    update_signatures_time_status(
-      map_id,
-      target_system.solar_system_id,
-      target_linked_signatures,
-      time_status
-    )
+    with source_system when not is_nil(source_system) <-
+           WandererApp.Map.find_system_by_location(
+             map_id,
+             %{solar_system_id: solar_system_source}
+           ),
+         target_system when not is_nil(source_system) <-
+           WandererApp.Map.find_system_by_location(
+             map_id,
+             %{solar_system_id: solar_system_target}
+           ),
+         source_linked_signatures <-
+           find_linked_signatures(source_system, target_system),
+         target_linked_signatures <- find_linked_signatures(target_system, source_system) do
+      update_signatures_time_status(
+        map_id,
+        source_system.solar_system_id,
+        source_linked_signatures,
+        time_status
+      )
+
+      update_signatures_time_status(
+        map_id,
+        target_system.solar_system_id,
+        target_linked_signatures,
+        time_status
+      )
+    else
+      error ->
+        Logger.error("Failed to update_linked_signature_time_status: #{inspect(error)}")
+    end
end
defp find_linked_signatures(
@@ -438,7 +438,7 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
%{custom_info: updated_custom_info}
end
-      SignaturesImpl.apply_update_signature(%{map_id: map_id}, sig, update_params)
+      SignaturesImpl.apply_update_signature(map_id, sig, update_params)
end)
Impl.broadcast!(map_id, :signatures_updated, solar_system_id)
@@ -537,6 +537,12 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
Impl.broadcast!(map_id, :add_connection, connection)
Impl.broadcast!(map_id, :maybe_link_signature, %{
character_id: character_id,
solar_system_source: old_location.solar_system_id,
solar_system_target: location.solar_system_id
})
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :connection_added, %{
connection_id: connection.id,
@@ -548,19 +554,12 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
time_status: connection.time_status
})
-    {:ok, _} =
-      WandererApp.User.ActivityTracker.track_map_event(:map_connection_added, %{
-        character_id: character_id,
-        user_id: character.user_id,
-        map_id: map_id,
-        solar_system_source_id: old_location.solar_system_id,
-        solar_system_target_id: location.solar_system_id
-      })
-
-    Impl.broadcast!(map_id, :maybe_link_signature, %{
-      character_id: character_id,
-      solar_system_source: old_location.solar_system_id,
-      solar_system_target: location.solar_system_id
-    })
+    WandererApp.User.ActivityTracker.track_map_event(:map_connection_added, %{
+      character_id: character_id,
+      user_id: character.user_id,
+      map_id: map_id,
+      solar_system_source_id: old_location.solar_system_id,
+      solar_system_target_id: location.solar_system_id
+    })
:ok
@@ -657,12 +656,14 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
)
)
-  def is_connection_valid(:all, _from_solar_system_id, _to_solar_system_id), do: true
+  def is_connection_valid(:all, from_solar_system_id, to_solar_system_id),
+    do: from_solar_system_id != to_solar_system_id
def is_connection_valid(:none, _from_solar_system_id, _to_solar_system_id), do: false
def is_connection_valid(scope, from_solar_system_id, to_solar_system_id)
-      when not is_nil(from_solar_system_id) and not is_nil(to_solar_system_id) do
+      when not is_nil(from_solar_system_id) and not is_nil(to_solar_system_id) and
+             from_solar_system_id != to_solar_system_id do
with {:ok, known_jumps} <- find_solar_system_jump(from_solar_system_id, to_solar_system_id),
{:ok, from_system_static_info} <- get_system_static_info(from_solar_system_id),
{:ok, to_system_static_info} <- get_system_static_info(to_solar_system_id) do


@@ -45,19 +45,72 @@ defmodule WandererApp.Map.Server.Impl do
}
|> new()
-    with {:ok, map} <-
-           WandererApp.MapRepo.get(map_id, [
-             :owner,
-             :characters,
-             acls: [
-               :owner_id,
-               members: [:role, :eve_character_id, :eve_corporation_id, :eve_alliance_id]
-             ]
-           ]),
-         {:ok, systems} <- WandererApp.MapSystemRepo.get_visible_by_map(map_id),
-         {:ok, connections} <- WandererApp.MapConnectionRepo.get_by_map(map_id),
-         {:ok, subscription_settings} <-
-           WandererApp.Map.SubscriptionManager.get_active_map_subscription(map_id) do
# Parallelize database queries for faster initialization
start_time = System.monotonic_time(:millisecond)
tasks = [
Task.async(fn ->
{:map, WandererApp.MapRepo.get(map_id, [
:owner,
:characters,
acls: [
:owner_id,
members: [:role, :eve_character_id, :eve_corporation_id, :eve_alliance_id]
]
])}
end),
Task.async(fn ->
{:systems, WandererApp.MapSystemRepo.get_visible_by_map(map_id)}
end),
Task.async(fn ->
{:connections, WandererApp.MapConnectionRepo.get_by_map(map_id)}
end),
Task.async(fn ->
{:subscription, WandererApp.Map.SubscriptionManager.get_active_map_subscription(map_id)}
end)
]
results = Task.await_many(tasks, :timer.seconds(15))
duration = System.monotonic_time(:millisecond) - start_time
# Emit telemetry for slow initializations
if duration > 5_000 do
Logger.warning("[Map Server] Slow map state initialization: #{map_id} took #{duration}ms")
:telemetry.execute(
[:wanderer_app, :map, :slow_init],
%{duration_ms: duration},
%{map_id: map_id}
)
end
# Extract results
map_result = Enum.find_value(results, fn
{:map, result} -> result
_ -> nil
end)
systems_result = Enum.find_value(results, fn
{:systems, result} -> result
_ -> nil
end)
connections_result = Enum.find_value(results, fn
{:connections, result} -> result
_ -> nil
end)
subscription_result = Enum.find_value(results, fn
{:subscription, result} -> result
_ -> nil
end)
# Process results
with {:ok, map} <- map_result,
{:ok, systems} <- systems_result,
{:ok, connections} <- connections_result,
{:ok, subscription_settings} <- subscription_result do
initial_state
|> init_map(
map,
@@ -358,6 +411,13 @@ defmodule WandererApp.Map.Server.Impl do
update_options(map_id, options)
end
def handle_event(:map_deleted) do
# Map has been deleted - this event is handled by MapPool to stop the server
# and by MapLive to redirect users. Nothing to do here.
Logger.debug("Map deletion event received, will be handled by MapPool")
:ok
end
def handle_event({ref, _result}) when is_reference(ref) do
Process.demonitor(ref, [:flush])
end
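Editor's note: the parallelized initialization above fans four repository calls out as tagged `Task.async/1` tasks, collects them with `Task.await_many/2`, and picks each result out by tag. A runnable sketch of the same tagged-task pattern, with literal demo data standing in for the repo calls:

```elixir
# Each task returns {tag, result} so results can be extracted by name
# rather than by position after Task.await_many/2.
tasks = [
  Task.async(fn -> {:map, {:ok, %{name: "demo-map"}}} end),
  Task.async(fn -> {:systems, {:ok, []}} end),
  Task.async(fn -> {:connections, {:ok, []}} end)
]

results = Task.await_many(tasks, :timer.seconds(15))

# Same Enum.find_value/2 extraction as the diff: the pin (^tag)
# matches only the tuple carrying the requested tag.
fetch = fn tag ->
  Enum.find_value(results, fn
    {^tag, result} -> result
    _ -> nil
  end)
end

{:ok, map} = fetch.(:map)
```

`Task.await_many/2` actually preserves task order, so positional destructuring would also work; the tags make the extraction self-documenting and robust to reordering the task list.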


@@ -279,7 +279,8 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
group: sig["group"],
type: Map.get(sig, "type"),
custom_info: Map.get(sig, "custom_info"),
-        character_eve_id: character_eve_id,
+        # Use character_eve_id from sig if provided, otherwise use the default
+        character_eve_id: Map.get(sig, "character_eve_id", character_eve_id),
deleted: false
}
end)


@@ -642,13 +642,12 @@ defmodule WandererApp.Map.Server.SystemsImpl do
position_y: system.position_y
})
-    {:ok, _} =
-      WandererApp.User.ActivityTracker.track_map_event(:system_added, %{
-        character_id: character_id,
-        user_id: user_id,
-        map_id: map_id,
-        solar_system_id: solar_system_id
-      })
+    WandererApp.User.ActivityTracker.track_map_event(:system_added, %{
+      character_id: character_id,
+      user_id: user_id,
+      map_id: map_id,
+      solar_system_id: solar_system_id
+    })
end
defp maybe_update_extra_info(system, nil), do: system


@@ -11,7 +11,9 @@ defmodule WandererApp.Server.ServerStatusTracker do
:server_version,
:start_time,
:vip,
-    :retries
+    :retries,
+    :in_forced_downtime,
+    :downtime_notified
]
@retries_count 3
@@ -21,9 +23,17 @@ defmodule WandererApp.Server.ServerStatusTracker do
retries: @retries_count,
server_version: "0",
start_time: "0",
-      vip: true
+      vip: true,
+      in_forced_downtime: false,
+      downtime_notified: false
}
# EVE Online daily downtime period (UTC/GMT)
@downtime_start_hour 10
@downtime_start_minute 58
@downtime_end_hour 11
@downtime_end_minute 2
@refresh_interval :timer.minutes(1)
@logger Application.compile_env(:wanderer_app, :logger)
@@ -57,13 +67,51 @@ defmodule WandererApp.Server.ServerStatusTracker do
def handle_info(
:refresh_status,
%{
-          retries: retries
+          retries: retries,
+          in_forced_downtime: was_in_downtime
} = state
) do
Process.send_after(self(), :refresh_status, @refresh_interval)
-    Task.async(fn -> get_server_status(retries) end)
-    {:noreply, state}
in_downtime = in_forced_downtime?()
cond do
# Entering downtime period - broadcast offline status immediately
in_downtime and not was_in_downtime ->
@logger.info("#{__MODULE__} entering forced downtime period (10:58-11:02 GMT)")
downtime_status = %{
players: 0,
server_version: "downtime",
start_time: DateTime.utc_now() |> DateTime.to_iso8601(),
vip: true
}
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"server_status",
{:server_status, downtime_status}
)
{:noreply,
%{state | in_forced_downtime: true, downtime_notified: true}
|> Map.merge(downtime_status)}
# Currently in downtime - skip API call
in_downtime ->
{:noreply, state}
# Exiting downtime period - resume normal operations
not in_downtime and was_in_downtime ->
@logger.info("#{__MODULE__} exiting forced downtime period, resuming normal operations")
Task.async(fn -> get_server_status(retries) end)
{:noreply, %{state | in_forced_downtime: false, downtime_notified: false}}
# Normal operation
true ->
Task.async(fn -> get_server_status(retries) end)
{:noreply, state}
end
end
@impl true
@@ -155,4 +203,19 @@ defmodule WandererApp.Server.ServerStatusTracker do
vip: false
}
end
# Checks if the current UTC time falls within the forced downtime period (10:58-11:02 GMT).
defp in_forced_downtime? do
now = DateTime.utc_now()
current_hour = now.hour
current_minute = now.minute
# Convert times to minutes since midnight for easier comparison
current_time_minutes = current_hour * 60 + current_minute
downtime_start_minutes = @downtime_start_hour * 60 + @downtime_start_minute
downtime_end_minutes = @downtime_end_hour * 60 + @downtime_end_minute
current_time_minutes >= downtime_start_minutes and
current_time_minutes < downtime_end_minutes
end
end
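Editor's note: the minutes-since-midnight comparison in `in_forced_downtime?/0` is easy to verify in isolation. A sketch using the same 10:58–11:02 UTC constants as the diff, with the current time passed in as an argument (the module and function names here are hypothetical):

```elixir
defmodule DowntimeWindowSketch do
  # EVE Online daily downtime window (UTC), matching the diff's constants
  @start_minutes 10 * 60 + 58
  @end_minutes 11 * 60 + 2

  # True when the given UTC datetime falls inside the half-open window
  # [10:58, 11:02) — the end minute itself is already outside downtime.
  def in_window?(%DateTime{hour: h, minute: m}) do
    t = h * 60 + m
    t >= @start_minutes and t < @end_minutes
  end
end
```

Taking the datetime as a parameter (instead of calling `DateTime.utc_now/0` inside) makes the boundary cases directly testable.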


@@ -1,16 +1,57 @@
defmodule WandererApp.User.ActivityTracker do
-  @moduledoc false
+  @moduledoc """
+  Activity tracking wrapper that ensures audit logging never crashes application logic.
+
+  Activity tracking is best-effort and errors are logged but not propagated to callers.
+  This prevents race conditions (e.g., duplicate activity records) from affecting
+  critical business operations like character tracking or connection management.
+  """
+
+  require Logger

-  def track_map_event(
-        event_type,
-        metadata
-      ),
-      do: WandererApp.Map.Audit.track_map_event(event_type, metadata)
-
-  def track_acl_event(
-        event_type,
-        metadata
-      ),
-      do: WandererApp.Map.Audit.track_acl_event(event_type, metadata)
+  @doc """
+  Track a map-related event. Always returns `{:ok, result}` even on error.
+
+  Errors (such as unique constraint violations from concurrent operations)
+  are logged but do not propagate to prevent crashing critical application logic.
+  """
def track_map_event(event_type, metadata) do
case WandererApp.Map.Audit.track_map_event(event_type, metadata) do
{:ok, result} ->
{:ok, result}
{:error, error} ->
Logger.warning("Failed to track map event (non-critical)",
event_type: event_type,
map_id: metadata[:map_id],
error: inspect(error),
reason: :best_effort_tracking
)
# Return success to prevent crashes - activity tracking is best-effort
{:ok, nil}
end
end
@doc """
Track an ACL-related event. Always returns `{:ok, result}` even on error.
Errors are logged but do not propagate to prevent crashing critical application logic.
"""
def track_acl_event(event_type, metadata) do
case WandererApp.Map.Audit.track_acl_event(event_type, metadata) do
{:ok, result} ->
{:ok, result}
{:error, error} ->
Logger.warning("Failed to track ACL event (non-critical)",
event_type: event_type,
acl_id: metadata[:acl_id],
error: inspect(error),
reason: :best_effort_tracking
)
# Return success to prevent crashes - activity tracking is best-effort
{:ok, nil}
end
end
end
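Editor's note: the wrapper above reduces to one pattern: run the audit call, log any `{:error, _}`, and always hand `{:ok, _}` back so callers that match on success never crash. A minimal sketch with the audit call injected as a function (`BestEffortSketch` is a hypothetical name, not the app's module):

```elixir
defmodule BestEffortSketch do
  require Logger

  # fun: zero-arity audit callback returning {:ok, _} | {:error, _}.
  # Errors are logged and swallowed; callers always see {:ok, _}.
  def track(fun) do
    case fun.() do
      {:ok, result} ->
        {:ok, result}

      {:error, error} ->
        Logger.warning("tracking failed (non-critical): #{inspect(error)}")
        {:ok, nil}
    end
  end
end
```

This is why the diff can also drop the `{:ok, _} =` matches at the call sites: the match can no longer fail, so it no longer buys anything.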


@@ -38,14 +38,18 @@
</div>
<div class="navbar-end"></div>
</navbar>
-    <div class="!z-10 min-h-[calc(100vh-7rem)]">
+    <div class="!z-10 min-h-[calc(100vh-11rem)]">
{@inner_content}
</div>
<!--Footer-->
-    <footer class="!z-10 w-full pb-4 text-sm text-center fade-in">
-      <a class="text-gray-500 no-underline hover:no-underline" href="#">
-        &copy; Wanderer 2024
-      </a>
+    <footer class="!z-10 w-full pt-8 pb-4 text-sm text-center fade-in flex justify-center items-center">
<div class="flex flex-col justify-center items-center">
<a target="_blank" rel="noopener noreferrer" href="https://www.eveonline.com/partners"><img src="/images/eo_pp.png" style="width: 300px;" alt="Eve Online Partnership Program"></a>
<div class="text-gray-500 no-underline hover:no-underline">
All <a href="/license">EVE related materials</a> are property of <a href="https://www.ccpgames.com">CCP Games</a>
&copy; {Date.utc_today().year} Wanderer Industries.
</div>
</div>
</footer>
<div class="fixed top-0 left-0 w-full h-full !-z-1 maps_bg" />
</main>


@@ -432,32 +432,42 @@ defmodule WandererAppWeb.MapSystemAPIController do
],
id: [
in: :path,
-      description: "System ID",
-      type: :string,
-      required: true
+      description: "Solar System ID (EVE Online system ID, e.g., 30000142 for Jita)",
+      type: :integer,
+      required: true,
+      example: 30_000_142
]
],
responses: ResponseSchemas.standard_responses(@detail_response_schema)
)
def show(%{assigns: %{map_id: map_id}} = conn, %{"id" => id}) do
-    with {:ok, system_uuid} <- APIUtils.validate_uuid(id),
-         {:ok, system} <- WandererApp.Api.MapSystem.by_id(system_uuid) do
-      # Verify the system belongs to the requested map
-      if system.map_id == map_id do
-        APIUtils.respond_data(conn, APIUtils.map_system_to_json(system))
-      else
-        {:error, :not_found}
-      end
-    else
-      {:error, %Ash.Error.Query.NotFound{}} -> {:error, :not_found}
-      {:error, _} -> {:error, :not_found}
-      error -> error
-    end
+    # Look up by solar_system_id (EVE Online integer ID)
+    case APIUtils.parse_int(id) do
+      {:ok, solar_system_id} ->
+        case Operations.get_system(map_id, solar_system_id) do
+          {:ok, system} ->
+            APIUtils.respond_data(conn, APIUtils.map_system_to_json(system))
+
+          {:error, :not_found} ->
+            {:error, :not_found}
+        end
+
+      {:error, _} ->
+        {:error, :not_found}
+    end
   end
operation(:create,
-    summary: "Upsert Systems and Connections (batch or single)",
+    summary: "Create or Update Systems and Connections",
description: """
Creates or updates systems and connections. Supports two formats:
1. **Single System Format**: Post a single system object directly (e.g., `{"solar_system_id": 30000142, "position_x": 100, ...}`)
2. **Batch Format**: Post multiple systems and connections (e.g., `{"systems": [...], "connections": [...]}`)
Systems are identified by solar_system_id and will be updated if they already exist on the map.
""",
parameters: [
map_identifier: [
in: :path,
@@ -472,8 +482,22 @@ defmodule WandererAppWeb.MapSystemAPIController do
)
def create(conn, params) do
-    systems = Map.get(params, "systems", [])
-    connections = Map.get(params, "connections", [])
# Support both batch format {"systems": [...], "connections": [...]}
# and single system format {"solar_system_id": ..., ...}
{systems, connections} =
cond do
Map.has_key?(params, "systems") ->
# Batch format
{Map.get(params, "systems", []), Map.get(params, "connections", [])}
Map.has_key?(params, "solar_system_id") or Map.has_key?(params, :solar_system_id) ->
# Single system format - wrap it in an array
{[params], []}
true ->
# Empty request
{[], []}
end
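Editor's note: the `cond` above normalizes both accepted request shapes into one `{systems, connections}` tuple. The same detection, extracted as a standalone function for clarity (string keys only, for brevity):

```elixir
# Batch format {"systems": [...], "connections": [...]} passes through;
# a bare single-system object {"solar_system_id": ...} is wrapped in a list;
# anything else is treated as an empty request.
split_payload = fn params ->
  cond do
    Map.has_key?(params, "systems") ->
      {Map.get(params, "systems", []), Map.get(params, "connections", [])}

    Map.has_key?(params, "solar_system_id") ->
      {[params], []}

    true ->
      {[], []}
  end
end

{systems, connections} = split_payload.(%{"solar_system_id" => 30_000_142})
```

Checking `"systems"` first means a payload containing both keys is read as batch format, matching the `cond` ordering in the diff.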
case Operations.upsert_systems_and_connections(conn, systems, connections) do
{:ok, result} ->
@@ -496,9 +520,10 @@ defmodule WandererAppWeb.MapSystemAPIController do
],
id: [
in: :path,
-      description: "System ID",
-      type: :string,
-      required: true
+      description: "Solar System ID (EVE Online system ID, e.g., 30000142 for Jita)",
+      type: :integer,
+      required: true,
+      example: 30_000_142
]
],
request_body: {"System update request", "application/json", @system_update_schema},
@@ -506,11 +531,15 @@ defmodule WandererAppWeb.MapSystemAPIController do
)
def update(conn, %{"id" => id} = params) do
-    with {:ok, system_uuid} <- APIUtils.validate_uuid(id),
-         {:ok, system} <- WandererApp.Api.MapSystem.by_id(system_uuid),
-         {:ok, attrs} <- APIUtils.extract_update_params(params),
-         {:ok, updated_system} <- Ash.update(system, attrs) do
-      APIUtils.respond_data(conn, APIUtils.map_system_to_json(updated_system))
+    with {:ok, solar_system_id} <- APIUtils.parse_int(id),
+         {:ok, attrs} <- APIUtils.extract_update_params(params) do
+      case Operations.update_system(conn, solar_system_id, attrs) do
+        {:ok, result} ->
+          APIUtils.respond_data(conn, result)
+
+        error ->
+          error
+      end
+    end
end
@@ -578,9 +607,10 @@ defmodule WandererAppWeb.MapSystemAPIController do
],
id: [
in: :path,
-      description: "System ID",
-      type: :string,
-      required: true
+      description: "Solar System ID (EVE Online system ID, e.g., 30000142 for Jita)",
+      type: :integer,
+      required: true,
+      example: 30_000_142
]
],
responses: ResponseSchemas.standard_responses(@delete_response_schema)


@@ -12,28 +12,32 @@ defmodule WandererAppWeb.MapSystemSignatureAPIController do
# Inlined OpenAPI schema for a map system signature
@signature_schema %OpenApiSpex.Schema{
title: "MapSystemSignature",
description: "A cosmic signature scanned in an EVE Online solar system",
type: :object,
properties: %{
-      id: %OpenApiSpex.Schema{type: :string, format: :uuid},
-      solar_system_id: %OpenApiSpex.Schema{type: :integer},
-      eve_id: %OpenApiSpex.Schema{type: :string},
-      character_eve_id: %OpenApiSpex.Schema{type: :string},
-      name: %OpenApiSpex.Schema{type: :string, nullable: true},
-      description: %OpenApiSpex.Schema{type: :string, nullable: true},
-      type: %OpenApiSpex.Schema{type: :string, nullable: true},
-      linked_system_id: %OpenApiSpex.Schema{type: :integer, nullable: true},
-      kind: %OpenApiSpex.Schema{type: :string, nullable: true},
-      group: %OpenApiSpex.Schema{type: :string, nullable: true},
-      custom_info: %OpenApiSpex.Schema{type: :string, nullable: true},
-      updated: %OpenApiSpex.Schema{type: :integer, nullable: true},
-      inserted_at: %OpenApiSpex.Schema{type: :string, format: :date_time},
-      updated_at: %OpenApiSpex.Schema{type: :string, format: :date_time}
id: %OpenApiSpex.Schema{type: :string, format: :uuid, description: "Unique signature identifier"},
solar_system_id: %OpenApiSpex.Schema{type: :integer, description: "EVE Online solar system ID"},
eve_id: %OpenApiSpex.Schema{type: :string, description: "In-game signature ID (e.g., ABC-123)"},
character_eve_id: %OpenApiSpex.Schema{
type: :string,
description: "EVE character ID who scanned/updated this signature. Must be a valid character in the database. If not provided, defaults to the map owner's character.",
nullable: true
},
name: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Signature name"},
description: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Additional notes"},
type: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Signature type"},
linked_system_id: %OpenApiSpex.Schema{type: :integer, nullable: true, description: "Connected solar system ID for wormholes"},
kind: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Signature kind (e.g., cosmic_signature)"},
group: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Signature group (e.g., wormhole, data, relic)"},
custom_info: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Custom metadata"},
updated: %OpenApiSpex.Schema{type: :integer, nullable: true, description: "Update counter"},
inserted_at: %OpenApiSpex.Schema{type: :string, format: :date_time, description: "Creation timestamp"},
updated_at: %OpenApiSpex.Schema{type: :string, format: :date_time, description: "Last update timestamp"}
},
required: [
:id,
:solar_system_id,
-      :eve_id,
-      :character_eve_id
+      :eve_id
],
example: %{
id: "sig-uuid-1",
@@ -143,6 +147,10 @@ defmodule WandererAppWeb.MapSystemSignatureAPIController do
@doc """
Create a new signature.
The `character_eve_id` field is optional. If provided, it must be a valid character
that exists in the database, otherwise a 422 error will be returned. If not provided,
the signature will be associated with the map owner's character.
"""
operation(:create,
summary: "Create a new signature",
@@ -162,6 +170,18 @@ defmodule WandererAppWeb.MapSystemSignatureAPIController do
type: :object,
properties: %{data: @signature_schema},
example: %{data: @signature_schema.example}
}},
unprocessable_entity:
{"Validation error", "application/json",
%OpenApiSpex.Schema{
type: :object,
properties: %{
error: %OpenApiSpex.Schema{
type: :string,
description: "Error type (e.g., 'invalid_character', 'system_not_found', 'missing_params')"
}
},
example: %{error: "invalid_character"}
}}
]
)
@@ -175,6 +195,9 @@ defmodule WandererAppWeb.MapSystemSignatureAPIController do
@doc """
Update a signature by ID.
The `character_eve_id` field is optional. If provided, it must be a valid character
that exists in the database, otherwise a 422 error will be returned.
"""
operation(:update,
summary: "Update a signature by ID",
@@ -195,6 +218,18 @@ defmodule WandererAppWeb.MapSystemSignatureAPIController do
type: :object,
properties: %{data: @signature_schema},
example: %{data: @signature_schema.example}
}},
unprocessable_entity:
{"Validation error", "application/json",
%OpenApiSpex.Schema{
type: :object,
properties: %{
error: %OpenApiSpex.Schema{
type: :string,
description: "Error type (e.g., 'invalid_character', 'unexpected_error')"
}
},
example: %{error: "invalid_character"}
}}
]
)


@@ -149,12 +149,12 @@ defmodule WandererAppWeb.Plugs.CheckJsonApiAuth do
end
defp validate_api_token(conn, token) do
-    # Check for map identifier in path params
-    # According to PR feedback, routes supply params["map_identifier"]
-    case conn.params["map_identifier"] do
+    # Try to get map identifier from multiple sources
+    map_identifier = get_map_identifier(conn)
+
+    case map_identifier do
       nil ->
-        # No map identifier in path - this might be a general API endpoint
-        # For now, we'll return an error since we need to validate against a specific map
+        # No map identifier found - this might be a general API endpoint
        {:error, "Authentication failed", :no_map_context}
identifier ->
@@ -182,6 +182,37 @@ defmodule WandererAppWeb.Plugs.CheckJsonApiAuth do
end
end
# Extract map identifier from multiple sources
defp get_map_identifier(conn) do
# 1. Check path params (e.g., /api/v1/maps/:map_identifier/systems)
case conn.params["map_identifier"] do
id when is_binary(id) and id != "" -> id
_ ->
# 2. Check request body for map_id (JSON:API format)
case conn.body_params do
%{"data" => %{"attributes" => %{"map_id" => map_id}}} when is_binary(map_id) and map_id != "" ->
map_id
%{"data" => %{"relationships" => %{"map" => %{"data" => %{"id" => map_id}}}}} when is_binary(map_id) and map_id != "" ->
map_id
# 3. Check flat body params (non-JSON:API format)
%{"map_id" => map_id} when is_binary(map_id) and map_id != "" ->
map_id
_ ->
# 4. Check query params (e.g., ?filter[map_id]=...)
case conn.params do
%{"filter" => %{"map_id" => map_id}} when is_binary(map_id) and map_id != "" ->
map_id
_ ->
nil
end
end
end
end
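The four-step lookup order above can be mirrored outside Elixir. A minimal Python sketch of the same precedence (illustrative only — the function name and input shapes are assumptions, not the plug's actual API):

```python
def extract_map_identifier(params, body_params):
    """Mirror the plug's lookup order: path param, JSON:API body, flat body, query filter."""
    def nonempty(v):
        # The Elixir guards require a non-empty binary; mimic that here.
        return v if isinstance(v, str) and v != "" else None

    # 1. Path params (e.g. /api/v1/maps/:map_identifier/systems)
    found = nonempty(params.get("map_identifier"))
    if found:
        return found

    data = body_params.get("data") or {}
    # 2. JSON:API attributes: data.attributes.map_id
    found = nonempty((data.get("attributes") or {}).get("map_id"))
    if found:
        return found
    # 2b. JSON:API relationships: data.relationships.map.data.id
    rel = ((data.get("relationships") or {}).get("map") or {}).get("data") or {}
    found = nonempty(rel.get("id"))
    if found:
        return found
    # 3. Flat body param (non-JSON:API clients)
    found = nonempty(body_params.get("map_id"))
    if found:
        return found
    # 4. Query params: ?filter[map_id]=...
    return nonempty((params.get("filter") or {}).get("map_id"))
```

Earlier sources win: a path identifier shadows anything supplied in the body or query string, which keeps nested routes authoritative.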
# Helper to resolve map by ID or slug
defp resolve_map_identifier(identifier) do
# Try as UUID first


@@ -353,7 +353,7 @@ defmodule WandererAppWeb.Helpers.APIUtils do
def connection_to_json(conn) do
Map.take(conn, ~w(
id map_id solar_system_source solar_system_target mass_status
time_status ship_size_type type wormhole_type inserted_at updated_at
time_status ship_size_type type wormhole_type locked inserted_at updated_at
)a)
end
end


@@ -272,6 +272,9 @@
<.icon name="hero-check-badge-solid" class="w-5 h-5" />
</div>
</:col>
<:col :let={subscription} label="Map">
{subscription.map.name}
</:col>
<:col :let={subscription} label="Active Till">
<.local_time
:if={subscription.active_till}
@@ -333,7 +336,7 @@
label="Valid"
options={Enum.map(@valid_types, fn valid_type -> {valid_type.label, valid_type.id} end)}
/>
<!-- API Key Section with grid layout -->
<div class="modal-action">
<.button class="mt-2" type="submit" phx-disable-with="Saving...">


@@ -59,14 +59,13 @@ defmodule WandererAppWeb.MapConnectionsEventHandler do
character_id: main_character_id
})
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_connection_added, %{
character_id: main_character_id,
user_id: current_user_id,
map_id: map_id,
solar_system_source_id: "#{solar_system_source_id}" |> String.to_integer(),
solar_system_target_id: "#{solar_system_target_id}" |> String.to_integer()
})
WandererApp.User.ActivityTracker.track_map_event(:map_connection_added, %{
character_id: main_character_id,
user_id: current_user_id,
map_id: map_id,
solar_system_source_id: "#{solar_system_source_id}" |> String.to_integer(),
solar_system_target_id: "#{solar_system_target_id}" |> String.to_integer()
})
{:noreply, socket}
end
@@ -149,7 +148,6 @@ defmodule WandererAppWeb.MapConnectionsEventHandler do
end
end
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_connection_removed, %{
character_id: main_character_id,
user_id: current_user_id,
@@ -202,7 +200,6 @@ defmodule WandererAppWeb.MapConnectionsEventHandler do
_ -> nil
end
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_connection_updated, %{
character_id: main_character_id,
user_id: current_user_id,


@@ -21,59 +21,85 @@ defmodule WandererAppWeb.MapCoreEventHandler do
:refresh_permissions,
%{assigns: %{current_user: current_user, map_slug: map_slug}} = socket
) do
{:ok, %{id: map_id, user_permissions: user_permissions, owner_id: owner_id}} =
map_slug
|> WandererApp.Api.Map.get_map_by_slug!()
|> Ash.load(:user_permissions, actor: current_user)
try do
{:ok, %{id: map_id, user_permissions: user_permissions, owner_id: owner_id}} =
map_slug
|> WandererApp.Api.Map.get_map_by_slug!()
|> Ash.load(:user_permissions, actor: current_user)
user_permissions =
WandererApp.Permissions.get_map_permissions(
user_permissions,
owner_id,
current_user.characters |> Enum.map(& &1.id)
)
user_permissions =
WandererApp.Permissions.get_map_permissions(
user_permissions,
owner_id,
current_user.characters |> Enum.map(& &1.id)
)
case user_permissions do
%{view_system: false} ->
socket
|> Phoenix.LiveView.put_flash(:error, "Your access to the map has been revoked.")
|> Phoenix.LiveView.push_navigate(to: ~p"/maps")
case user_permissions do
%{view_system: false} ->
socket
|> Phoenix.LiveView.put_flash(:error, "Your access to the map has been revoked.")
|> Phoenix.LiveView.push_navigate(to: ~p"/maps")
%{track_character: track_character} ->
{:ok, map_characters} =
case WandererApp.MapCharacterSettingsRepo.get_tracked_by_map_filtered(
map_id,
current_user.characters |> Enum.map(& &1.id)
) do
{:ok, settings} ->
{:ok,
settings
|> Enum.map(fn s -> s |> Ash.load!(:character) |> Map.get(:character) end)}
%{track_character: track_character} ->
{:ok, map_characters} =
case WandererApp.MapCharacterSettingsRepo.get_tracked_by_map_filtered(
map_id,
current_user.characters |> Enum.map(& &1.id)
) do
{:ok, settings} ->
{:ok,
settings
|> Enum.map(fn s -> s |> Ash.load!(:character) |> Map.get(:character) end)}
_ ->
{:ok, []}
end
case track_character do
false ->
:ok = WandererApp.Character.TrackingUtils.untrack(map_characters, map_id, self())
_ ->
{:ok, []}
:ok =
WandererApp.Character.TrackingUtils.track(
map_characters,
map_id,
true,
self()
)
end
case track_character do
false ->
:ok = WandererApp.Character.TrackingUtils.untrack(map_characters, map_id, self())
socket
|> assign(user_permissions: user_permissions)
|> MapEventHandler.push_map_event(
"user_permissions",
user_permissions
)
end
rescue
error in Ash.Error.Invalid.MultipleResults ->
Logger.error("Multiple maps found with slug '#{map_slug}' during refresh_permissions",
slug: map_slug,
error: inspect(error)
)
_ ->
:ok =
WandererApp.Character.TrackingUtils.track(
map_characters,
map_id,
true,
self()
)
end
# Emit telemetry for monitoring
:telemetry.execute(
[:wanderer_app, :map, :duplicate_slug_detected],
%{count: 1},
%{slug: map_slug, operation: :refresh_permissions}
)
# Return socket unchanged - permissions won't refresh but won't crash
socket
error ->
Logger.error("Error refreshing permissions for map slug '#{map_slug}'",
slug: map_slug,
error: inspect(error)
)
socket
|> assign(user_permissions: user_permissions)
|> MapEventHandler.push_map_event(
"user_permissions",
user_permissions
)
end
end


@@ -165,13 +165,12 @@ defmodule WandererAppWeb.MapRoutesEventHandler do
solar_system_id: solar_system_id
})
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:hub_added, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: solar_system_id
})
WandererApp.User.ActivityTracker.track_map_event(:hub_added, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: solar_system_id
})
{:noreply, socket}
else
@@ -204,13 +203,12 @@ defmodule WandererAppWeb.MapRoutesEventHandler do
solar_system_id: solar_system_id
})
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:hub_removed, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: solar_system_id
})
WandererApp.User.ActivityTracker.track_map_event(:hub_removed, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: solar_system_id
})
{:noreply, socket}
end


@@ -250,15 +250,14 @@ defmodule WandererAppWeb.MapSystemsEventHandler do
|> Map.put_new(key_atom, value)
])
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:system_updated, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: "#{solar_system_id}" |> String.to_integer(),
key: key_atom,
value: value
})
WandererApp.User.ActivityTracker.track_map_event(:system_updated, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: "#{solar_system_id}" |> String.to_integer(),
key: key_atom,
value: value
})
end
{:noreply, socket}


@@ -74,6 +74,13 @@ defmodule WandererAppWeb.MapLive do
"You don't have main character set, please update it in tracking settings (top right icon)."
)}
def handle_info(:map_deleted, socket),
do:
{:noreply,
socket
|> put_flash(:info, "This map has been deleted.")
|> push_navigate(to: ~p"/maps")}
def handle_info(:no_access, socket),
do:
{:noreply,


@@ -1,6 +1,8 @@
defmodule WandererAppWeb.MapsLive do
use WandererAppWeb, :live_view
alias Phoenix.LiveView.AsyncResult
require Logger
@pubsub_client Application.compile_env(:wanderer_app, :pubsub_client)
@@ -275,17 +277,57 @@ defmodule WandererAppWeb.MapsLive do
:telemetry.execute([:wanderer_app, :map, :created], %{count: 1})
maybe_create_default_acl(form, new_map)
# Reload maps synchronously to avoid timing issues with flash messages
{:ok, %{maps: maps}} = load_maps(current_user)
{:noreply,
socket
|> assign_async(:maps, fn ->
load_maps(current_user)
end)
|> put_flash(
:info,
"Map '#{new_map.name}' created successfully with slug '#{new_map.slug}'"
)
|> assign(:maps, AsyncResult.ok(maps))
|> push_patch(to: ~p"/maps")}
{:error, %Ash.Error.Invalid{errors: errors}} ->
# Check for slug uniqueness constraint violation
slug_error =
Enum.find(errors, fn error ->
case error do
%{field: :slug} -> true
%{message: message} when is_binary(message) -> String.contains?(message, "unique")
_ -> false
end
end)
error_message =
if slug_error do
"A map with this name already exists. The system will automatically adjust the name if needed. Please try again."
else
errors
|> Enum.map(fn error ->
field = Map.get(error, :field, "field")
message = Map.get(error, :message, "validation error")
"#{field}: #{message}"
end)
|> Enum.join(", ")
end
Logger.warning("Map creation failed",
form: form,
errors: inspect(errors),
slug_error: slug_error != nil
)
{:noreply,
socket
|> put_flash(:error, "Failed to create map: #{error_message}")
|> assign(error: error_message)}
{:error, %{errors: errors}} ->
error_message =
errors
|> Enum.map(fn %{field: _field} = error ->
|> Enum.map(fn error ->
"#{Map.get(error, :message, "Field validation error")}"
end)
|> Enum.join(", ")
@@ -296,9 +338,14 @@ defmodule WandererAppWeb.MapsLive do
|> assign(error: error_message)}
{:error, error} ->
Logger.error("Unexpected error creating map",
form: form,
error: inspect(error)
)
{:noreply,
socket
|> put_flash(:error, "Failed to create map")
|> put_flash(:error, "Failed to create map. Please try again.")
|> assign(error: error)}
end
end
@@ -342,99 +389,156 @@ defmodule WandererAppWeb.MapsLive do
%{"form" => form} = _params,
%{assigns: %{map_slug: map_slug, current_user: current_user}} = socket
) do
{:ok, map} =
map_slug
|> WandererApp.Api.Map.get_map_by_slug!()
|> Ash.load(:acls)
case get_map_by_slug_safely(map_slug) do
{:ok, map} ->
# Successfully found the map, proceed with loading and updating
{:ok, map_with_acls} = Ash.load(map, :acls)
scope =
form
|> Map.get("scope")
|> case do
"" -> "wormholes"
scope -> scope
end
scope =
form
|> Map.get("scope")
|> case do
"" -> "wormholes"
scope -> scope
end
form =
form
|> Map.put("acls", form["acls"] || [])
|> Map.put("scope", scope)
|> Map.put(
"only_tracked_characters",
(form["only_tracked_characters"] || "false") |> String.to_existing_atom()
)
form =
form
|> Map.put("acls", form["acls"] || [])
|> Map.put("scope", scope)
|> Map.put(
"only_tracked_characters",
(form["only_tracked_characters"] || "false") |> String.to_existing_atom()
)
map
|> WandererApp.Api.Map.update(form)
|> case do
{:ok, _updated_map} ->
{added_acls, removed_acls} = map.acls |> Enum.map(& &1.id) |> _get_acls_diff(form["acls"])
map_with_acls
|> WandererApp.Api.Map.update(form)
|> case do
{:ok, _updated_map} ->
{added_acls, removed_acls} =
map_with_acls.acls |> Enum.map(& &1.id) |> _get_acls_diff(form["acls"])
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"maps:#{map.id}",
{:map_acl_updated, map.id, added_acls, removed_acls}
)
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"maps:#{map_with_acls.id}",
{:map_acl_updated, map_with_acls.id, added_acls, removed_acls}
)
{:ok, tracked_characters} =
WandererApp.Maps.get_tracked_map_characters(map.id, current_user)
{:ok, tracked_characters} =
WandererApp.Maps.get_tracked_map_characters(map_with_acls.id, current_user)
first_tracked_character_id = Enum.map(tracked_characters, & &1.id) |> List.first()
first_tracked_character_id = Enum.map(tracked_characters, & &1.id) |> List.first()
added_acls
|> Enum.each(fn acl_id ->
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_acl_added, %{
character_id: first_tracked_character_id,
user_id: current_user.id,
map_id: map.id,
acl_id: acl_id
})
end)
added_acls
|> Enum.each(fn acl_id ->
WandererApp.User.ActivityTracker.track_map_event(:map_acl_added, %{
character_id: first_tracked_character_id,
user_id: current_user.id,
map_id: map_with_acls.id,
acl_id: acl_id
})
end)
removed_acls
|> Enum.each(fn acl_id ->
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_acl_removed, %{
character_id: first_tracked_character_id,
user_id: current_user.id,
map_id: map.id,
acl_id: acl_id
})
end)
removed_acls
|> Enum.each(fn acl_id ->
WandererApp.User.ActivityTracker.track_map_event(:map_acl_removed, %{
character_id: first_tracked_character_id,
user_id: current_user.id,
map_id: map_with_acls.id,
acl_id: acl_id
})
end)
{:noreply,
socket
|> push_navigate(to: ~p"/maps")}
{:error, error} ->
{:noreply,
socket
|> put_flash(:error, "Failed to update map")
|> assign(error: error)}
end
{:error, :multiple_results} ->
{:noreply,
socket
|> put_flash(
:error,
"Multiple maps found with this identifier. Please contact support to resolve this issue."
)
|> push_navigate(to: ~p"/maps")}
{:error, error} ->
{:error, :not_found} ->
{:noreply,
socket
|> put_flash(:error, "Failed to update map")
|> assign(error: error)}
|> put_flash(:error, "Map not found")
|> push_navigate(to: ~p"/maps")}
{:error, _reason} ->
{:noreply,
socket
|> put_flash(:error, "Failed to load map. Please try again.")
|> push_navigate(to: ~p"/maps")}
end
end
def handle_event("delete", %{"data" => map_slug} = _params, socket) do
map =
map_slug
|> WandererApp.Api.Map.get_map_by_slug!()
|> WandererApp.Api.Map.mark_as_deleted!()
case get_map_by_slug_safely(map_slug) do
{:ok, map} ->
# Successfully found the map, proceed with deletion
deleted_map = WandererApp.Api.Map.mark_as_deleted!(map)
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"maps:#{map.id}",
:map_deleted
)
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"maps:#{deleted_map.id}",
:map_deleted
)
current_user = socket.assigns.current_user
current_user = socket.assigns.current_user
{:noreply,
socket
|> assign_async(:maps, fn ->
load_maps(current_user)
end)
|> push_patch(to: ~p"/maps")}
# Reload maps synchronously to avoid timing issues with flash messages
{:ok, %{maps: maps}} = load_maps(current_user)
{:noreply,
socket
|> assign(:maps, AsyncResult.ok(maps))
|> push_patch(to: ~p"/maps")}
{:error, :multiple_results} ->
# Multiple maps found with this slug - data integrity issue
# Reload maps synchronously
{:ok, %{maps: maps}} = load_maps(socket.assigns.current_user)
{:noreply,
socket
|> put_flash(
:error,
"Multiple maps found with this identifier. Please contact support to resolve this issue."
)
|> assign(:maps, AsyncResult.ok(maps))}
{:error, :not_found} ->
# Map not found
# Reload maps synchronously
{:ok, %{maps: maps}} = load_maps(socket.assigns.current_user)
{:noreply,
socket
|> put_flash(:error, "Map not found or already deleted")
|> assign(:maps, AsyncResult.ok(maps))
|> push_patch(to: ~p"/maps")}
{:error, _reason} ->
# Other error
# Reload maps synchronously
{:ok, %{maps: maps}} = load_maps(socket.assigns.current_user)
{:noreply,
socket
|> put_flash(:error, "Failed to delete map. Please try again.")
|> assign(:maps, AsyncResult.ok(maps))}
end
end
def handle_event(
@@ -683,4 +787,49 @@ defmodule WandererAppWeb.MapsLive do
map
|> Map.put(:acls, acls |> Enum.map(&map_acl/1))
end
@doc """
Safely retrieves a map by slug, handling the case where multiple maps
with the same slug exist (database integrity issue).
Returns:
- `{:ok, map}` - Single map found
- `{:error, :multiple_results}` - Multiple maps found (logs error)
- `{:error, :not_found}` - No map found
- `{:error, reason}` - Other error
"""
defp get_map_by_slug_safely(slug) do
try do
map = WandererApp.Api.Map.get_map_by_slug!(slug)
{:ok, map}
rescue
error in Ash.Error.Invalid.MultipleResults ->
Logger.error("Multiple maps found with slug '#{slug}' - database integrity issue",
slug: slug,
error: inspect(error)
)
# Emit telemetry for monitoring
:telemetry.execute(
[:wanderer_app, :map, :duplicate_slug_detected],
%{count: 1},
%{slug: slug, operation: :get_by_slug}
)
# Return error - caller should handle this appropriately
{:error, :multiple_results}
error in Ash.Error.Query.NotFound ->
Logger.debug("Map not found with slug: #{slug}")
{:error, :not_found}
error ->
Logger.error("Error retrieving map by slug",
slug: slug,
error: inspect(error)
)
{:error, :unknown_error}
end
end
end
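The rescue-to-tagged-result pattern in `get_map_by_slug_safely/1` translates directly to other languages. A hedged Python sketch (the exception classes and `fetch` callable are hypothetical stand-ins for `Ash.Error.Invalid.MultipleResults`, `Ash.Error.Query.NotFound`, and `WandererApp.Api.Map.get_map_by_slug!/1`):

```python
class MultipleResults(Exception):
    """Stand-in for Ash.Error.Invalid.MultipleResults."""

class NotFound(Exception):
    """Stand-in for Ash.Error.Query.NotFound."""

def get_map_by_slug_safely(fetch, slug):
    """Wrap a raising lookup into ('ok', value) / ('error', reason) tuples."""
    try:
        return ("ok", fetch(slug))
    except MultipleResults:
        # Duplicate slugs are a data-integrity issue; tag it so the caller
        # can show a support message instead of crashing the LiveView.
        return ("error", "multiple_results")
    except NotFound:
        return ("error", "not_found")
    except Exception:
        return ("error", "unknown_error")
```

Callers then branch on the reason atom, exactly as the `handle_event` clauses above do for `:multiple_results` and `:not_found`.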


@@ -10,529 +10,8 @@ defmodule WandererAppWeb.OpenApiV1Spec do
@impl OpenApiSpex.OpenApi
def spec do
# This is called by the modify_open_api option in the router
# We should return the spec from WandererAppWeb.OpenApi module
# We delegate to WandererAppWeb.OpenApi module which generates
# the spec from AshJsonApi with custom endpoints merged in
WandererAppWeb.OpenApi.spec()
end
defp generate_spec_manually do
%OpenApi{
info: %Info{
title: "WandererApp v1 JSON:API",
version: "1.0.0",
description: """
JSON:API compliant endpoints for WandererApp.
## Features
- Filtering: Use `filter[attribute]=value` parameters
- Sorting: Use `sort=attribute` or `sort=-attribute` for descending
- Pagination: Use `page[limit]=n` and `page[offset]=n`
- Relationships: Include related resources with `include=relationship`
## Authentication
All endpoints require Bearer token authentication:
```
Authorization: Bearer YOUR_API_KEY
```
"""
},
servers: [
Server.from_endpoint(WandererAppWeb.Endpoint)
],
paths: get_v1_paths(),
components: %Components{
schemas: get_v1_schemas(),
securitySchemes: %{
"bearerAuth" => %{
"type" => "http",
"scheme" => "bearer",
"description" => "Map API key for authentication"
}
}
},
security: [%{"bearerAuth" => []}],
tags: get_v1_tags()
}
end
defp get_v1_tags do
[
%{"name" => "Access Lists", "description" => "Access control list management"},
%{"name" => "Access List Members", "description" => "ACL member management"},
%{"name" => "Characters", "description" => "Character management"},
%{"name" => "Maps", "description" => "Map management"},
%{"name" => "Map Systems", "description" => "Map system operations"},
%{"name" => "Map Connections", "description" => "System connection management"},
%{"name" => "Map Solar Systems", "description" => "Solar system data"},
%{"name" => "Map System Signatures", "description" => "Wormhole signature tracking"},
%{"name" => "Map System Structures", "description" => "Structure management"},
%{"name" => "Map System Comments", "description" => "System comments"},
%{"name" => "Map Character Settings", "description" => "Character map settings"},
%{"name" => "Map User Settings", "description" => "User map preferences"},
%{"name" => "Map Subscriptions", "description" => "Map subscription management"},
%{"name" => "Map Access Lists", "description" => "Map-specific ACLs"},
%{"name" => "Map States", "description" => "Map state information"},
%{"name" => "Users", "description" => "User management"},
%{"name" => "User Activities", "description" => "User activity tracking"},
%{"name" => "Ship Type Info", "description" => "Ship type information"}
]
end
defp get_v1_paths do
# Generate paths for all resources
resources = [
{"access_lists", "Access Lists"},
{"access_list_members", "Access List Members"},
{"characters", "Characters"},
{"maps", "Maps"},
{"map_systems", "Map Systems"},
{"map_connections", "Map Connections"},
{"map_solar_systems", "Map Solar Systems"},
{"map_system_signatures", "Map System Signatures"},
{"map_system_structures", "Map System Structures"},
{"map_system_comments", "Map System Comments"},
{"map_character_settings", "Map Character Settings"},
{"map_user_settings", "Map User Settings"},
{"map_subscriptions", "Map Subscriptions"},
{"map_access_lists", "Map Access Lists"},
{"map_states", "Map States"},
{"users", "Users"},
{"user_activities", "User Activities"},
{"ship_type_infos", "Ship Type Info"}
]
Enum.reduce(resources, %{}, fn {resource, tag}, acc ->
base_path = "/api/v1/#{resource}"
paths = %{
base_path => %{
"get" => %{
"summary" => "List #{resource}",
"tags" => [tag],
"parameters" => get_standard_list_parameters(resource),
"responses" => %{
"200" => %{
"description" => "List of #{resource}",
"content" => %{
"application/vnd.api+json" => %{
"schema" => %{
"$ref" => "#/components/schemas/#{String.capitalize(resource)}ListResponse"
}
}
}
}
}
},
"post" => %{
"summary" => "Create #{String.replace(resource, "_", " ")}",
"tags" => [tag],
"requestBody" => %{
"required" => true,
"content" => %{
"application/vnd.api+json" => %{
"schema" => %{
"$ref" => "#/components/schemas/#{String.capitalize(resource)}CreateRequest"
}
}
}
},
"responses" => %{
"201" => %{"description" => "Created"}
}
}
},
"#{base_path}/{id}" => %{
"get" => %{
"summary" => "Get #{String.replace(resource, "_", " ")}",
"tags" => [tag],
"parameters" => [
%{
"name" => "id",
"in" => "path",
"required" => true,
"schema" => %{"type" => "string"}
}
],
"responses" => %{
"200" => %{"description" => "Resource details"}
}
},
"patch" => %{
"summary" => "Update #{String.replace(resource, "_", " ")}",
"tags" => [tag],
"parameters" => [
%{
"name" => "id",
"in" => "path",
"required" => true,
"schema" => %{"type" => "string"}
}
],
"requestBody" => %{
"required" => true,
"content" => %{
"application/vnd.api+json" => %{
"schema" => %{
"$ref" => "#/components/schemas/#{String.capitalize(resource)}UpdateRequest"
}
}
}
},
"responses" => %{
"200" => %{"description" => "Updated"}
}
},
"delete" => %{
"summary" => "Delete #{String.replace(resource, "_", " ")}",
"tags" => [tag],
"parameters" => [
%{
"name" => "id",
"in" => "path",
"required" => true,
"schema" => %{"type" => "string"}
}
],
"responses" => %{
"204" => %{"description" => "Deleted"}
}
}
}
}
Map.merge(acc, paths)
end)
|> add_custom_paths()
end
defp add_custom_paths(paths) do
# Add custom action paths
custom_paths = %{
"/api/v1/maps/{id}/duplicate" => %{
"post" => %{
"summary" => "Duplicate map",
"tags" => ["Maps"],
"parameters" => [
%{
"name" => "id",
"in" => "path",
"required" => true,
"schema" => %{"type" => "string"}
}
],
"responses" => %{
"201" => %{"description" => "Map duplicated"}
}
}
},
"/api/v1/maps/{map_id}/systems_and_connections" => %{
"get" => %{
"summary" => "Get Map Systems and Connections",
"description" => "Retrieve both systems and connections for a map in a single response",
"tags" => ["Maps"],
"parameters" => [
%{
"name" => "map_id",
"in" => "path",
"required" => true,
"schema" => %{"type" => "string"},
"description" => "Map ID"
}
],
"responses" => %{
"200" => %{
"description" => "Combined systems and connections data",
"content" => %{
"application/json" => %{
"schema" => %{
"type" => "object",
"properties" => %{
"systems" => %{
"type" => "array",
"items" => %{
"type" => "object",
"properties" => %{
"id" => %{"type" => "string"},
"solar_system_id" => %{"type" => "integer"},
"name" => %{"type" => "string"},
"status" => %{"type" => "string"},
"visible" => %{"type" => "boolean"},
"locked" => %{"type" => "boolean"},
"position_x" => %{"type" => "integer"},
"position_y" => %{"type" => "integer"}
}
}
},
"connections" => %{
"type" => "array",
"items" => %{
"type" => "object",
"properties" => %{
"id" => %{"type" => "string"},
"solar_system_source" => %{"type" => "integer"},
"solar_system_target" => %{"type" => "integer"},
"type" => %{"type" => "string"},
"time_status" => %{"type" => "string"},
"mass_status" => %{"type" => "string"}
}
}
}
}
}
}
}
},
"404" => %{"description" => "Map not found"},
"401" => %{"description" => "Unauthorized"}
}
}
}
}
Map.merge(paths, custom_paths)
end
defp get_standard_list_parameters(resource) do
base_params = [
%{
"name" => "sort",
"in" => "query",
"description" => "Sort results (e.g., 'name', '-created_at')",
"schema" => %{"type" => "string"}
},
%{
"name" => "page[limit]",
"in" => "query",
"description" => "Number of results per page",
"schema" => %{"type" => "integer", "default" => 50}
},
%{
"name" => "page[offset]",
"in" => "query",
"description" => "Offset for pagination",
"schema" => %{"type" => "integer", "default" => 0}
},
%{
"name" => "include",
"in" => "query",
"description" => "Include related resources (comma-separated)",
"schema" => %{"type" => "string"}
}
]
# Add resource-specific filter parameters
filter_params =
case resource do
"characters" ->
[
%{
"name" => "filter[name]",
"in" => "query",
"description" => "Filter by character name",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[user_id]",
"in" => "query",
"description" => "Filter by user ID",
"schema" => %{"type" => "string"}
}
]
"maps" ->
[
%{
"name" => "filter[scope]",
"in" => "query",
"description" => "Filter by map scope",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[archived]",
"in" => "query",
"description" => "Filter by archived status",
"schema" => %{"type" => "boolean"}
}
]
"map_systems" ->
[
%{
"name" => "filter[map_id]",
"in" => "query",
"description" => "Filter by map ID",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[solar_system_id]",
"in" => "query",
"description" => "Filter by solar system ID",
"schema" => %{"type" => "integer"}
}
]
"map_connections" ->
[
%{
"name" => "filter[map_id]",
"in" => "query",
"description" => "Filter by map ID",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[source_id]",
"in" => "query",
"description" => "Filter by source system ID",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[target_id]",
"in" => "query",
"description" => "Filter by target system ID",
"schema" => %{"type" => "string"}
}
]
"map_system_signatures" ->
[
%{
"name" => "filter[system_id]",
"in" => "query",
"description" => "Filter by system ID",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[type]",
"in" => "query",
"description" => "Filter by signature type",
"schema" => %{"type" => "string"}
}
]
_ ->
[]
end
base_params ++ filter_params
end
defp get_v1_schemas do
%{
# Generic JSON:API response wrapper
"JsonApiWrapper" => %{
"type" => "object",
"properties" => %{
"data" => %{
"type" => "object",
"description" => "Primary data"
},
"included" => %{
"type" => "array",
"description" => "Included related resources"
},
"meta" => %{
"type" => "object",
"description" => "Metadata about the response"
},
"links" => %{
"type" => "object",
"description" => "Links for pagination and relationships"
}
}
},
# Character schemas
"CharacterResource" => %{
"type" => "object",
"properties" => %{
"type" => %{"type" => "string", "enum" => ["characters"]},
"id" => %{"type" => "string"},
"attributes" => %{
"type" => "object",
"properties" => %{
"name" => %{"type" => "string"},
"eve_id" => %{"type" => "integer"},
"corporation_id" => %{"type" => "integer"},
"alliance_id" => %{"type" => "integer"},
"online" => %{"type" => "boolean"},
"location" => %{"type" => "object"},
"inserted_at" => %{"type" => "string", "format" => "date-time"},
"updated_at" => %{"type" => "string", "format" => "date-time"}
}
},
"relationships" => %{
"type" => "object",
"properties" => %{
"user" => %{
"type" => "object",
"properties" => %{
"data" => %{
"type" => "object",
"properties" => %{
"type" => %{"type" => "string"},
"id" => %{"type" => "string"}
}
}
}
}
}
}
}
},
"CharactersListResponse" => %{
"type" => "object",
"properties" => %{
"data" => %{
"type" => "array",
"items" => %{"$ref" => "#/components/schemas/CharacterResource"}
},
"meta" => %{
"type" => "object",
"properties" => %{
"page" => %{
"type" => "object",
"properties" => %{
"offset" => %{"type" => "integer"},
"limit" => %{"type" => "integer"},
"total" => %{"type" => "integer"}
}
}
}
}
}
},
# Map schemas
"MapResource" => %{
"type" => "object",
"properties" => %{
"type" => %{"type" => "string", "enum" => ["maps"]},
"id" => %{"type" => "string"},
"attributes" => %{
"type" => "object",
"properties" => %{
"name" => %{"type" => "string"},
"slug" => %{"type" => "string"},
"scope" => %{"type" => "string"},
"public_key" => %{"type" => "string"},
"archived" => %{"type" => "boolean"},
"inserted_at" => %{"type" => "string", "format" => "date-time"},
"updated_at" => %{"type" => "string", "format" => "date-time"}
}
},
"relationships" => %{
"type" => "object",
"properties" => %{
"owner" => %{
"type" => "object"
},
"characters" => %{
"type" => "object"
},
"acls" => %{
"type" => "object"
}
}
}
}
}
}
end
end


@@ -597,7 +597,7 @@ defmodule WandererAppWeb.Router do
scope "/api/v1" do
pipe_through :api_v1
# Custom combined endpoints
# Custom combined endpoint with map_id in path
get "/maps/:map_id/systems_and_connections",
WandererAppWeb.Api.MapSystemsConnectionsController,
:show
@@ -605,6 +605,18 @@ defmodule WandererAppWeb.Router do
# Forward all v1 requests to AshJsonApi router
# This will automatically generate RESTful JSON:API endpoints
# for all Ash resources once they're configured with the AshJsonApi extension
#
# NOTE: AshJsonApi generates flat routes (e.g., /api/v1/map_systems)
# Phoenix's `forward` cannot be used with dynamic path segments, so proper
# nested routes like /api/v1/maps/{id}/systems would require custom controllers.
#
# Current approach: Use flat routes with map_id in request body or filters:
# - POST /api/v1/map_systems with {"data": {"attributes": {"map_id": "..."}}}
# - GET /api/v1/map_systems?filter[map_id]=...
# - PATCH /api/v1/map_systems/{id} with map_id in body
#
# Authentication is handled by CheckJsonApiAuth which validates the Bearer
# token against the map's API key.
forward "/", WandererAppWeb.ApiV1Router
end
end
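The flat-route approach described in the NOTE can be sketched from the client side. A minimal Python example of the two request shapes (`"MAP_UUID"` and the attribute values are placeholders, not real data):

```python
import json
from urllib.parse import urlencode

# Creating a system on the flat route: map_id travels in the JSON:API body.
create_body = json.dumps({
    "data": {
        "type": "map_systems",
        "attributes": {"map_id": "MAP_UUID", "solar_system_id": 30000142},
    }
})

# Listing systems for one map: map_id travels as a filter query parameter.
list_url = "/api/v1/map_systems?" + urlencode(
    {"filter[map_id]": "MAP_UUID", "page[limit]": 100}
)
```

Either way, CheckJsonApiAuth can recover the map context and validate the Bearer token against that map's API key.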


@@ -3,7 +3,7 @@ defmodule WandererApp.MixProject do
@source_url "https://github.com/wanderer-industries/wanderer"
@version "1.84.1"
@version "1.84.21"
def project do
[


@@ -144,33 +144,28 @@ The API v1 provides access to over 25 resources through the Ash Framework. Here
### Core Resources
- **Maps** (`/api/v1/maps`) - Map management with full CRUD operations
- **Characters** (`/api/v1/characters`) - Character tracking and management (GET, DELETE only)
- **Access Lists** (`/api/v1/access_lists`) - ACL management and permissions
- **Access List Members** (`/api/v1/access_list_members`) - ACL member management
- **Access Lists** (`/api/v1/access_lists`) - ACL management and permissions with full CRUD operations
- **Access List Members** (`/api/v1/access_list_members`) - ACL member management with full CRUD operations
- **Map Access Lists** (`/api/v1/map_access_lists`) - Map-ACL associations with full CRUD operations
### Map Resources
- **Map Systems** (`/api/v1/map_systems`) - Solar system data and metadata
- **Map Connections** (`/api/v1/map_connections`) - Wormhole connections
- **Map Signatures** (`/api/v1/map_system_signatures`) - Signature scanning data (GET, DELETE only)
- **Map Structures** (`/api/v1/map_system_structures`) - Structure information
- **Map Subscriptions** (`/api/v1/map_subscriptions`) - Subscription management (GET only)
- **Map Systems and Connections** (`/api/v1/maps/{map_id}/systems_and_connections`) - Combined endpoint (GET only)
- **Map Systems** (`/api/v1/map_systems`) - Solar system data and metadata with full CRUD operations (paginated: default 100, max 500)
- **Map Connections** (`/api/v1/map_connections`) - Wormhole connections with full CRUD operations
- **Map Signatures** (`/api/v1/map_system_signatures`) - Signature scanning data (read and delete only, paginated: default 50, max 200)
- **Map Structures** (`/api/v1/map_system_structures`) - Structure information with full CRUD operations
- **Map Subscriptions** (`/api/v1/map_subscriptions`) - Subscription management (read-only)
- **Map Default Settings** (`/api/v1/map_default_settings`) - Default map configurations with full CRUD operations
- **Map Systems and Connections** (`/api/v1/maps/{map_id}/systems_and_connections`) - Combined endpoint (read-only)
### System Resources
- **Map System Comments** (`/api/v1/map_system_comments`) - System annotations (GET only)
- **Map System Comments** (`/api/v1/map_system_comments`) - System annotations (read-only)
### User Resources
- **User Activities** (`/api/v1/user_activities`) - User activity tracking (GET only)
- **Map Character Settings** (`/api/v1/map_character_settings`) - Character preferences (GET only)
- **Map User Settings** (`/api/v1/map_user_settings`) - User map preferences (GET only)
- **User Activities** (`/api/v1/user_activities`) - User activity tracking (read-only, paginated: default 15)
- **Map Character Settings** (`/api/v1/map_character_settings`) - Character preferences (read-only)
- **Map User Settings** (`/api/v1/map_user_settings`) - User map preferences (read-only)
### Additional Resources
- **Map Webhook Subscriptions** (`/api/v1/map_webhook_subscriptions`) - Webhook management
- **Map Invites** (`/api/v1/map_invites`) - Map invitation system
- **Map Pings** (`/api/v1/map_pings`) - In-game ping tracking
- **Corp Wallet Transactions** (`/api/v1/corp_wallet_transactions`) - Corporation finances
*Note: Resources marked as "full CRUD operations" support create, read, update, and delete. Resources marked as "read-only" support only GET operations. Resources marked as "read and delete only" support GET and DELETE operations. Pagination limits are configurable via `page[limit]` and `page[offset]` parameters where supported.*
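Where pagination is supported, the `page[limit]` and `page[offset]` parameters are passed as ordinary query parameters. A minimal sketch of building such a request URL — the base URL and `API_TOKEN` are placeholder assumptions, not values from this repository:

```shell
# Hypothetical values; substitute your own deployment URL and token.
API_BASE_URL="http://localhost:8000"
LIMIT=100    # per-page size (default 100, max 500 for map_systems)
OFFSET=200   # skip the first two pages
URL="${API_BASE_URL}/api/v1/map_systems?page[limit]=${LIMIT}&page[offset]=${OFFSET}"
echo "$URL"
# A real request would add the auth header:
# curl -H "Authorization: Bearer $API_TOKEN" "$URL"
```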
## API v1 Feature Set

View File

@@ -0,0 +1,24 @@
defmodule WandererApp.Repo.Migrations.AddMapPerformanceIndexes do
  @moduledoc """
  Updates resources based on their most recent snapshots.

  This file was autogenerated with `mix ash_postgres.generate_migrations`
  """

  use Ecto.Migration

  def up do
    create index(:map_system_v1, [:map_id],
             name: "map_system_v1_map_id_visible_index",
             where: "visible = true"
           )

    create index(:map_chain_v1, [:map_id], name: "map_chain_v1_map_id_index")
  end

  def down do
    drop_if_exists index(:map_chain_v1, [:map_id], name: "map_chain_v1_map_id_index")

    drop_if_exists index(:map_system_v1, [:map_id], name: "map_system_v1_map_id_visible_index")
  end
end

View File

@@ -0,0 +1,144 @@
defmodule WandererApp.Repo.Migrations.FixDuplicateMapSlugs do
use Ecto.Migration
import Ecto.Query
def up do
# Check for duplicates first
has_duplicates = check_for_duplicates()
# If duplicates exist, drop the index first to allow fixing them
if has_duplicates do
IO.puts("Duplicates found, dropping index before cleanup...")
drop_index_if_exists()
end
# Fix duplicate slugs in maps_v1 table
fix_duplicate_slugs()
# Ensure unique index exists (recreate if needed)
ensure_unique_index()
end
def down do
# This migration is idempotent and safe to run multiple times
# No need to revert as it only fixes data integrity issues
:ok
end
defp check_for_duplicates do
duplicates_query = """
SELECT COUNT(*) as duplicate_count
FROM (
SELECT slug
FROM maps_v1
GROUP BY slug
HAVING count(*) > 1
) duplicates
"""
case repo().query(duplicates_query, []) do
{:ok, %{rows: [[count]]}} when count > 0 ->
IO.puts("Found #{count} duplicate slug(s)")
true
{:ok, %{rows: [[0]]}} ->
false
{:error, error} ->
IO.puts("Error checking for duplicates: #{inspect(error)}")
false
end
end
defp drop_index_if_exists do
index_exists_query = """
SELECT EXISTS (
SELECT 1
FROM pg_indexes
WHERE tablename = 'maps_v1'
AND indexname = 'maps_v1_unique_slug_index'
)
"""
case repo().query(index_exists_query, []) do
{:ok, %{rows: [[true]]}} ->
IO.puts("Dropping existing unique index...")
execute("DROP INDEX IF EXISTS maps_v1_unique_slug_index")
IO.puts("✓ Index dropped")
{:ok, %{rows: [[false]]}} ->
IO.puts("No existing index to drop")
{:error, error} ->
IO.puts("Error checking index: #{inspect(error)}")
end
end
defp fix_duplicate_slugs do
# Get all duplicate slugs with their IDs
duplicates_query = """
SELECT slug, array_agg(id::text ORDER BY updated_at) as ids
FROM maps_v1
GROUP BY slug
HAVING count(*) > 1
"""
case repo().query(duplicates_query, []) do
{:ok, %{rows: rows}} when length(rows) > 0 ->
IO.puts("Fixing #{length(rows)} duplicate slug(s)...")
Enum.each(rows, fn [slug, ids] ->
IO.puts("Processing duplicate slug: #{slug} (#{length(ids)} occurrences)")
# Keep the first one (earliest updated_at), rename the rest
[_keep_id | rename_ids] = ids
rename_ids
|> Enum.with_index(2)
|> Enum.each(fn {id_string, n} ->
new_slug = "#{slug}-#{n}"
# Use parameterized query for safety
update_query = "UPDATE maps_v1 SET slug = $1 WHERE id::text = $2"
repo().query!(update_query, [new_slug, id_string])
IO.puts(" ✓ Renamed #{id_string} to '#{new_slug}'")
end)
end)
IO.puts("✓ All duplicate slugs fixed!")
{:ok, %{rows: []}} ->
IO.puts("No duplicate slugs to fix")
{:error, error} ->
IO.puts("Error checking for duplicates: #{inspect(error)}")
raise "Failed to check for duplicate slugs: #{inspect(error)}"
end
end
defp ensure_unique_index do
# Check if index exists
index_exists_query = """
SELECT EXISTS (
SELECT 1
FROM pg_indexes
WHERE tablename = 'maps_v1'
AND indexname = 'maps_v1_unique_slug_index'
)
"""
case repo().query(index_exists_query, []) do
{:ok, %{rows: [[true]]}} ->
IO.puts("Unique index on slug already exists")
{:ok, %{rows: [[false]]}} ->
IO.puts("Creating unique index on slug...")
create_if_not_exists index(:maps_v1, [:slug], unique: true, name: :maps_v1_unique_slug_index)
IO.puts("✓ Index created successfully!")
{:error, error} ->
IO.puts("Error checking index: #{inspect(error)}")
raise "Failed to check index: #{inspect(error)}"
end
end
end
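The renaming scheme used by this migration — keep the oldest row's slug unchanged and append `-2`, `-3`, … to the newer duplicates — can be illustrated outside the migration. This is an illustrative shell sketch of the numbering only, not the migration code itself (the migration additionally checks each candidate against the database):

```shell
# Given a duplicated slug and the number of extra copies, print the
# replacement slugs the migration would assign, one per line.
next_slugs() {
  base="$1"; extras="$2"
  n=2
  while [ "$n" -le $((extras + 1)) ]; do
    echo "${base}-${n}"
    n=$((n + 1))
  done
}

next_slugs "home-map" 2
# -> home-map-2
# -> home-map-3
```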

View File

@@ -0,0 +1,212 @@
defmodule WandererApp.Repo.Migrations.EnsureNoDuplicateMapSlugs do
@moduledoc """
Final migration to ensure all duplicate map slugs are removed and unique index exists.
This migration:
1. Checks for any remaining duplicate slugs
2. Fixes duplicates by renaming them (keeps oldest, renames newer ones)
3. Ensures unique index exists on maps_v1.slug
4. Verifies no duplicates remain after migration
Safe to run multiple times (idempotent).
"""
use Ecto.Migration
import Ecto.Query
require Logger
def up do
IO.puts("\n=== Starting Map Slug Deduplication Migration ===\n")
# Step 1: Check for duplicates
duplicate_count = count_duplicates()
if duplicate_count > 0 do
IO.puts("Found #{duplicate_count} duplicate slug(s) - proceeding with cleanup...")
# Step 2: Drop index temporarily if it exists (to allow updates)
drop_index_if_exists()
# Step 3: Fix all duplicates
fix_duplicate_slugs()
# Step 4: Recreate unique index
ensure_unique_index()
else
IO.puts("No duplicate slugs found - ensuring unique index exists...")
ensure_unique_index()
end
# Step 5: Final verification
verify_no_duplicates()
IO.puts("\n=== Migration completed successfully! ===\n")
end
def down do
# This migration is idempotent and only fixes data integrity issues
# No need to revert as it doesn't change schema in a harmful way
IO.puts("This migration does not need to be reverted")
:ok
end
defp count_duplicates do
duplicates_query = """
SELECT COUNT(*) as duplicate_count
FROM (
SELECT slug
FROM maps_v1
WHERE deleted = false
GROUP BY slug
HAVING COUNT(*) > 1
) duplicates
"""
case repo().query(duplicates_query, []) do
{:ok, %{rows: [[count]]}} ->
count
{:error, error} ->
IO.puts("Error counting duplicates: #{inspect(error)}")
0
end
end
defp drop_index_if_exists do
index_exists_query = """
SELECT EXISTS (
SELECT 1
FROM pg_indexes
WHERE tablename = 'maps_v1'
AND indexname = 'maps_v1_unique_slug_index'
)
"""
case repo().query(index_exists_query, []) do
{:ok, %{rows: [[true]]}} ->
IO.puts("Temporarily dropping unique index to allow updates...")
execute("DROP INDEX IF EXISTS maps_v1_unique_slug_index")
IO.puts("✓ Index dropped")
{:ok, %{rows: [[false]]}} ->
IO.puts("No existing index to drop")
{:error, error} ->
IO.puts("Error checking index: #{inspect(error)}")
end
end
defp fix_duplicate_slugs do
# Get all duplicate slugs with their IDs and timestamps
# Order by inserted_at to keep the oldest one unchanged
duplicates_query = """
SELECT
slug,
array_agg(id::text ORDER BY inserted_at ASC, id ASC) as ids,
array_agg(name ORDER BY inserted_at ASC, id ASC) as names
FROM maps_v1
WHERE deleted = false
GROUP BY slug
HAVING COUNT(*) > 1
ORDER BY slug
"""
case repo().query(duplicates_query, []) do
{:ok, %{rows: rows}} when length(rows) > 0 ->
IO.puts("\nFixing #{length(rows)} duplicate slug(s)...")
Enum.each(rows, fn [slug, ids, names] ->
IO.puts("\n Processing: '#{slug}' (#{length(ids)} duplicates)")
# Keep the first one (oldest by inserted_at), rename the rest
[keep_id | rename_ids] = ids
[keep_name | rename_names] = names
IO.puts(" ✓ Keeping: #{keep_id} - '#{keep_name}'")
# Rename duplicates
rename_ids
|> Enum.zip(rename_names)
|> Enum.with_index(2)
|> Enum.each(fn {{id_string, name}, n} ->
new_slug = generate_unique_slug(slug, n)
# Use parameterized query for safety
update_query = "UPDATE maps_v1 SET slug = $1 WHERE id::text = $2"
repo().query!(update_query, [new_slug, id_string])
IO.puts(" → Renamed: #{id_string} - '#{name}' to slug '#{new_slug}'")
end)
end)
IO.puts("\n✓ All duplicate slugs fixed!")
{:ok, %{rows: []}} ->
IO.puts("No duplicate slugs to fix")
{:error, error} ->
IO.puts("Error finding duplicates: #{inspect(error)}")
raise "Failed to query duplicate slugs: #{inspect(error)}"
end
end
defp generate_unique_slug(base_slug, n) when n >= 2 do
candidate = "#{base_slug}-#{n}"
# Check if this slug already exists
check_query = "SELECT COUNT(*) FROM maps_v1 WHERE slug = $1 AND deleted = false"
case repo().query!(check_query, [candidate]) do
%{rows: [[0]]} ->
candidate
%{rows: [[_count]]} ->
# Try next number
generate_unique_slug(base_slug, n + 1)
end
end
defp ensure_unique_index do
# Check if index exists
index_exists_query = """
SELECT EXISTS (
SELECT 1
FROM pg_indexes
WHERE tablename = 'maps_v1'
AND indexname = 'maps_v1_unique_slug_index'
)
"""
case repo().query(index_exists_query, []) do
{:ok, %{rows: [[true]]}} ->
IO.puts("✓ Unique index on slug already exists")
{:ok, %{rows: [[false]]}} ->
IO.puts("Creating unique index on slug column...")
create_if_not_exists(
index(:maps_v1, [:slug],
unique: true,
name: :maps_v1_unique_slug_index,
where: "deleted = false"
)
)
IO.puts("✓ Unique index created successfully!")
{:error, error} ->
IO.puts("Error checking index: #{inspect(error)}")
raise "Failed to check index existence: #{inspect(error)}"
end
end
defp verify_no_duplicates do
IO.puts("\nVerifying no duplicates remain...")
remaining_duplicates = count_duplicates()
if remaining_duplicates > 0 do
IO.puts("❌ ERROR: #{remaining_duplicates} duplicate(s) still exist!")
raise "Migration failed: duplicates still exist after cleanup"
else
IO.puts("✓ Verification passed: No duplicates found")
end
end
end

View File

@@ -0,0 +1,201 @@
{
"attributes": [
{
"allow_nil?": false,
"default": "fragment(\"gen_random_uuid()\")",
"generated?": false,
"primary_key?": true,
"references": null,
"size": null,
"source": "id",
"type": "uuid"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "solar_system_source",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "solar_system_target",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "mass_status",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "time_status",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "2",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "ship_size_type",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "type",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "wormhole_type",
"type": "text"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "count_of_passage",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "locked",
"type": "boolean"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "custom_info",
"type": "text"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "inserted_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "updated_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": {
"deferrable": false,
"destination_attribute": "id",
"destination_attribute_default": null,
"destination_attribute_generated": null,
"index?": false,
"match_type": null,
"match_with": null,
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"name": "map_chain_v1_map_id_fkey",
"on_delete": null,
"on_update": null,
"primary_key?": true,
"schema": null,
"table": "maps_v1"
},
"size": null,
"source": "map_id",
"type": "uuid"
}
],
"base_filter": null,
"check_constraints": [],
"custom_indexes": [
{
"all_tenants?": false,
"concurrently": false,
"error_fields": [
"map_id"
],
"fields": [
{
"type": "atom",
"value": "map_id"
}
],
"include": null,
"message": null,
"name": "map_chain_v1_map_id_index",
"nulls_distinct": true,
"prefix": null,
"table": null,
"unique": false,
"using": null,
"where": null
}
],
"custom_statements": [],
"has_create_action": true,
"hash": "43AE341D09AA875BB0F0D2ACE7AC6301064697D656FD1729FC36E6A1F77E4CB7",
"identities": [],
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"repo": "Elixir.WandererApp.Repo",
"schema": null,
"table": "map_chain_v1"
}

View File

@@ -0,0 +1,260 @@
{
"attributes": [
{
"allow_nil?": false,
"default": "fragment(\"gen_random_uuid()\")",
"generated?": false,
"primary_key?": true,
"references": null,
"size": null,
"source": "id",
"type": "uuid"
},
{
"allow_nil?": false,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "solar_system_id",
"type": "bigint"
},
{
"allow_nil?": false,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "name",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "custom_name",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "description",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "tag",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "temporary_name",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "labels",
"type": "text"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "status",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "true",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "visible",
"type": "boolean"
},
{
"allow_nil?": true,
"default": "false",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "locked",
"type": "boolean"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "position_x",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "position_y",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "added_at",
"type": "utc_datetime"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "linked_sig_eve_id",
"type": "text"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "inserted_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "updated_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": {
"deferrable": false,
"destination_attribute": "id",
"destination_attribute_default": null,
"destination_attribute_generated": null,
"index?": false,
"match_type": null,
"match_with": null,
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"name": "map_system_v1_map_id_fkey",
"on_delete": null,
"on_update": null,
"primary_key?": true,
"schema": null,
"table": "maps_v1"
},
"size": null,
"source": "map_id",
"type": "uuid"
}
],
"base_filter": null,
"check_constraints": [],
"custom_indexes": [
{
"all_tenants?": false,
"concurrently": false,
"error_fields": [
"map_id"
],
"fields": [
{
"type": "atom",
"value": "map_id"
}
],
"include": null,
"message": null,
"name": "map_system_v1_map_id_visible_index",
"nulls_distinct": true,
"prefix": null,
"table": null,
"unique": false,
"using": null,
"where": "visible = true"
}
],
"custom_statements": [],
"has_create_action": true,
"hash": "AD7B82611EDA495AD35F114406C7F0C2D941C10E51105361002AA3144D7F7EA9",
"identities": [
{
"all_tenants?": false,
"base_filter": null,
"index_name": "map_system_v1_map_solar_system_id_index",
"keys": [
{
"type": "atom",
"value": "map_id"
},
{
"type": "atom",
"value": "solar_system_id"
}
],
"name": "map_solar_system_id",
"nils_distinct?": true,
"where": null
}
],
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"repo": "Elixir.WandererApp.Repo",
"schema": null,
"table": "map_system_v1"
}

View File

@@ -2,4 +2,15 @@
export ERL_AFLAGS="-proto_dist inet6_tcp"
export RELEASE_DISTRIBUTION="name"
# Use custom RELEASE_NODE if set, otherwise detect environment
if [ -n "$RELEASE_NODE" ]; then
  # RELEASE_NODE already set, use as-is
  export RELEASE_NODE
elif [ -n "$FLY_APP_NAME" ] && [ -n "$FLY_IMAGE_REF" ] && [ -n "$FLY_PRIVATE_IP" ]; then
  # Fly.io environment detected
  export RELEASE_NODE="${FLY_APP_NAME}-${FLY_IMAGE_REF##*-}@${FLY_PRIVATE_IP}"
else
  # Generic deployment - use hostname
  export RELEASE_NODE="wanderer@$(hostname)"
fi
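The branch order above (explicit override, then Fly.io autodetection, then hostname fallback) can be exercised in isolation. A sketch, where `detect_release_node` is a hypothetical wrapper around the script's logic and all variable values are made up:

```shell
# Hypothetical helper mirroring the release script's detection branches.
detect_release_node() {
  if [ -n "$RELEASE_NODE" ]; then
    echo "$RELEASE_NODE"
  elif [ -n "$FLY_APP_NAME" ] && [ -n "$FLY_IMAGE_REF" ] && [ -n "$FLY_PRIVATE_IP" ]; then
    echo "${FLY_APP_NAME}-${FLY_IMAGE_REF##*-}@${FLY_PRIVATE_IP}"
  else
    echo "wanderer@$(hostname)"
  fi
}

# Explicit override wins:
(RELEASE_NODE="custom@10.0.0.1" detect_release_node)   # -> custom@10.0.0.1
# Fly.io variables present, no override (note ##*- keeps the ref suffix):
(unset RELEASE_NODE; FLY_APP_NAME=app; FLY_IMAGE_REF=deployment-abc123; FLY_PRIVATE_IP=fdaa::1; detect_release_node)   # -> app-abc123@fdaa::1
```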

View File

@@ -0,0 +1,463 @@
defmodule WandererApp.Map.MapPoolCrashIntegrationTest do
@moduledoc """
Integration tests for MapPool crash recovery.
These tests verify end-to-end crash recovery behavior including:
- MapPool GenServer crashes and restarts
- State recovery from ETS
- Registry and cache consistency after recovery
- Telemetry events during recovery
- Multi-pool scenarios
Note: Many tests are skipped as they require full map infrastructure
(database, Server.Impl, map data, etc.) to be set up.
"""
use ExUnit.Case, async: false
alias WandererApp.Map.{MapPool, MapPoolDynamicSupervisor, MapPoolState}
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@ets_table :map_pool_state_table
setup do
# Clean up any existing test data
cleanup_test_data()
# Check if required infrastructure is running
supervisor_running? = Process.whereis(MapPoolDynamicSupervisor) != nil
ets_exists? =
try do
:ets.info(@ets_table) != :undefined
rescue
_ -> false
end
on_exit(fn ->
cleanup_test_data()
end)
{:ok, supervisor_running: supervisor_running?, ets_exists: ets_exists?}
end
defp cleanup_test_data do
# Clean up test caches
WandererApp.Cache.delete("started_maps")
Cachex.clear(@cache)
# Clean up ETS entries
if :ets.whereis(@ets_table) != :undefined do
:ets.match_delete(@ets_table, {:"$1", :"$2", :"$3"})
end
end
defp find_pool_pid(uuid) do
pool_name = Module.concat(MapPool, uuid)
case Registry.lookup(@unique_registry, pool_name) do
[{pid, _value}] -> {:ok, pid}
[] -> {:error, :not_found}
end
end
describe "End-to-end crash recovery" do
@tag :skip
@tag :integration
test "MapPool recovers all maps after abnormal crash" do
# This test would:
# 1. Start a MapPool with test maps via MapPoolDynamicSupervisor
# 2. Verify maps are running and state is in ETS
# 3. Simulate crash using GenServer.call(pool_pid, :error)
# 4. Wait for supervisor to restart the pool
# 5. Verify all maps are recovered
# 6. Verify Registry, Cache, and ETS are consistent
# Requires:
# - Test map data in database
# - Server.Impl.start_map to work with test data
# - Full supervision tree running
:ok
end
@tag :skip
@tag :integration
test "MapPool preserves ETS state on abnormal termination" do
# This test would:
# 1. Start a MapPool with maps
# 2. Force crash
# 3. Verify ETS state is preserved (not deleted)
# 4. Verify new pool instance recovers from ETS
:ok
end
@tag :skip
@tag :integration
test "MapPool cleans ETS state on graceful shutdown" do
# This test would:
# 1. Start a MapPool with maps
# 2. Gracefully stop the pool (GenServer.cast(pool_pid, :stop))
# 3. Verify ETS state is deleted
# 4. Verify new pool starts with empty state
:ok
end
end
describe "Multi-pool crash scenarios" do
@tag :skip
@tag :integration
test "multiple pools crash and recover independently" do
# This test would:
# 1. Start multiple MapPool instances with different maps
# 2. Crash one pool
# 3. Verify only that pool recovers, others unaffected
# 4. Verify no cross-pool state corruption
:ok
end
@tag :skip
@tag :integration
test "concurrent pool crashes don't corrupt recovery state" do
# This test would:
# 1. Start multiple pools
# 2. Crash multiple pools simultaneously
# 3. Verify all pools recover correctly
# 4. Verify no ETS corruption or race conditions
:ok
end
end
describe "State consistency after recovery" do
@tag :skip
@tag :integration
test "Registry state matches recovered state" do
# This test would verify that after recovery:
# - unique_registry has correct map_ids for pool UUID
# - map_pool_registry has correct pool UUID entry
# - All map_ids in Registry match ETS state
:ok
end
@tag :skip
@tag :integration
test "Cache state matches recovered state" do
# This test would verify that after recovery:
# - map_pool_cache has correct map_id -> uuid mappings
# - started_maps cache includes all recovered maps
# - No orphaned cache entries
:ok
end
@tag :skip
@tag :integration
test "Map servers are actually running after recovery" do
# This test would:
# 1. Recover maps from crash
# 2. Verify each map's GenServer is actually running
# 3. Verify maps respond to requests
# 4. Verify map state is correct
:ok
end
end
describe "Recovery failure handling" do
@tag :skip
@tag :integration
test "recovery continues when individual map fails to start" do
# This test would:
# 1. Save state with maps [1, 2, 3] to ETS
# 2. Delete map 2 from database
# 3. Trigger recovery
# 4. Verify maps 1 and 3 recover successfully
# 5. Verify map 2 failure is logged and telemetry emitted
# 6. Verify pool continues with maps [1, 3]
:ok
end
@tag :skip
@tag :integration
test "recovery handles maps already running in different pool" do
# This test would simulate a race condition where:
# 1. Pool A crashes with map X
# 2. Before recovery, map X is started in Pool B
# 3. Pool A tries to recover map X
# 4. Verify conflict is detected and handled gracefully
:ok
end
@tag :skip
@tag :integration
test "recovery handles corrupted ETS state" do
# This test would:
# 1. Manually corrupt ETS state (invalid map IDs, wrong types, etc.)
# 2. Trigger recovery
# 3. Verify pool handles corruption gracefully
# 4. Verify telemetry emitted for failures
# 5. Verify pool continues with valid maps only
:ok
end
end
describe "Telemetry during recovery" do
test "telemetry events emitted in correct order", %{ets_exists: ets_exists?} do
if ets_exists? do
test_pid = self()
events = []
# Attach handlers for all recovery events
:telemetry.attach_many(
"test-recovery-events",
[
[:wanderer_app, :map_pool, :recovery, :start],
[:wanderer_app, :map_pool, :recovery, :complete],
[:wanderer_app, :map_pool, :recovery, :map_failed]
],
fn event, measurements, metadata, _config ->
send(test_pid, {:telemetry_event, event, measurements, metadata})
end,
nil
)
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
# Simulate recovery sequence
# 1. Start event
:telemetry.execute(
[:wanderer_app, :map_pool, :recovery, :start],
%{recovered_map_count: 3, total_map_count: 3},
%{pool_uuid: uuid}
)
# 2. Complete event (in real recovery, this comes after all maps start)
:telemetry.execute(
[:wanderer_app, :map_pool, :recovery, :complete],
%{recovered_count: 3, failed_count: 0, duration_ms: 100},
%{pool_uuid: uuid}
)
# Verify we received both events
assert_receive {:telemetry_event, [:wanderer_app, :map_pool, :recovery, :start], _, _},
500
assert_receive {:telemetry_event, [:wanderer_app, :map_pool, :recovery, :complete], _, _},
500
:telemetry.detach("test-recovery-events")
else
:ok
end
end
@tag :skip
@tag :integration
test "telemetry includes accurate recovery statistics" do
# This test would verify that:
# - recovered_map_count matches actual recovered maps
# - failed_count matches actual failed maps
# - duration_ms is accurate
# - All metadata is correct
:ok
end
end
describe "Interaction with Reconciler" do
@tag :skip
@tag :integration
test "Reconciler doesn't interfere with crash recovery" do
# This test would:
# 1. Crash a pool with maps
# 2. Trigger both recovery and reconciliation
# 3. Verify they don't conflict
# 4. Verify final state is consistent
:ok
end
@tag :skip
@tag :integration
test "Reconciler detects failed recovery" do
# This test would:
# 1. Crash a pool with map X
# 2. Make recovery fail for map X
# 3. Run reconciler
# 4. Verify reconciler detects and potentially fixes the issue
:ok
end
end
describe "Edge cases" do
@tag :skip
@tag :integration
test "recovery during pool at capacity" do
# This test would:
# 1. Create pool with 19 maps
# 2. Crash pool while adding 20th map
# 3. Verify recovery handles capacity limit
# 4. Verify all maps start or overflow is handled
:ok
end
@tag :skip
@tag :integration
test "recovery with empty map list" do
# This test would:
# 1. Crash pool with empty map_ids
# 2. Verify recovery completes successfully
# 3. Verify pool starts with no maps
:ok
end
@tag :skip
@tag :integration
test "multiple crashes in quick succession" do
# This test would:
# 1. Crash pool
# 2. Immediately crash again during recovery
# 3. Verify supervisor's max_restarts is respected
# 4. Verify state remains consistent
:ok
end
end
describe "Performance under load" do
@tag :slow
@tag :skip
@tag :integration
test "recovery completes within 2 seconds for 20 maps" do
# This test would:
# 1. Create pool with 20 maps (pool limit)
# 2. Crash pool
# 3. Measure time to full recovery
# 4. Assert recovery < 2 seconds
:ok
end
@tag :slow
@tag :skip
@tag :integration
test "recovery doesn't block other pools" do
# This test would:
# 1. Start multiple pools
# 2. Crash one pool with many maps
# 3. Verify other pools continue to operate normally during recovery
# 4. Measure performance impact on healthy pools
:ok
end
end
describe "Supervisor interaction" do
test "ETS table survives individual pool crash", %{ets_exists: ets_exists?} do
if ets_exists? do
# Verify ETS table is owned by supervisor, not individual pools
table_info = :ets.info(@ets_table)
owner_pid = Keyword.get(table_info, :owner)
# Owner should be alive and be the supervisor or a system process
assert Process.alive?(owner_pid)
# Verify we can still access the table
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
MapPoolState.save_pool_state(uuid, [1, 2, 3])
assert {:ok, [1, 2, 3]} = MapPoolState.get_pool_state(uuid)
else
:ok
end
end
@tag :skip
@tag :integration
test "supervisor restarts pool after crash" do
# This test would:
# 1. Start a pool via DynamicSupervisor
# 2. Crash the pool
# 3. Verify supervisor restarts it
# 4. Verify new PID is different from old PID
# 5. Verify pool is functional after restart
:ok
end
end
describe "Database consistency" do
@tag :skip
@tag :integration
test "recovered maps load latest state from database" do
# This test would:
# 1. Start maps with initial state
# 2. Modify map state in database
# 3. Crash pool
# 4. Verify recovered maps have latest database state
:ok
end
@tag :skip
@tag :integration
test "recovery uses MapState for map configuration" do
# This test would:
# 1. Verify recovery calls WandererApp.Map.get_map_state!/1
# 2. Verify state comes from database MapState table
# 3. Verify maps start with correct configuration
:ok
end
end
describe "Real-world scenarios" do
@tag :skip
@tag :integration
test "recovery after OOM crash" do
# This test would simulate recovery after out-of-memory crash:
# 1. Start pool with maps
# 2. Simulate OOM condition
# 3. Verify recovery completes successfully
# 4. Verify no memory leaks after recovery
:ok
end
@tag :skip
@tag :integration
test "recovery after network partition" do
# This test would simulate recovery after network issues:
# 1. Start maps with external dependencies
# 2. Simulate network partition
# 3. Crash pool
# 4. Verify recovery handles network errors gracefully
:ok
end
@tag :skip
@tag :integration
test "recovery preserves user sessions" do
# This test would:
# 1. Start maps with active user sessions
# 2. Crash pool
# 3. Verify users can continue after recovery
# 4. Verify presence tracking works after recovery
:ok
end
end
end

View File

@@ -0,0 +1,18 @@
# Example environment file for manual API tests
# Copy this to .env and fill in your values
# Your Wanderer server URL
API_BASE_URL=http://localhost:8000
# Your map's slug (found in the map URL: /your-map-slug)
MAP_SLUG=your-map-slug
# Your map's public API token (found in map settings)
API_TOKEN=your_map_public_api_key_here
# For character_eve_id testing:
# Find a valid character EVE ID from your database
VALID_CHAR_ID=111111111
# Use any non-existent character ID for invalid tests
INVALID_CHAR_ID=999999999
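One common way to load a `KEY=value` file like this into the current shell before running the manual tests is allexport mode. A self-contained sketch using a throwaway file in place of `.env`:

```shell
# Throwaway stand-in for .env so the snippet runs anywhere; in practice
# you would source the ./.env you copied from this example.
cat > /tmp/wanderer-demo.env <<'EOF'
API_BASE_URL=http://localhost:8000
MAP_SLUG=your-map-slug
EOF

set -a                      # auto-export every assignment while sourcing
. /tmp/wanderer-demo.env
set +a

echo "$API_BASE_URL/$MAP_SLUG"
```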

View File

@@ -0,0 +1,249 @@
# Manual cURL Testing for Character EVE ID Fix (Issue #539)
This guide provides standalone curl commands to manually test the character_eve_id fix.
## Prerequisites
1. **Get your Map's Public API Token:**
- Log into Wanderer
- Go to your map settings
- Find the "Public API Key" section
- Copy your API token
2. **Find your Map Slug:**
- Look at your map URL: `https://your-instance.com/your-map-slug`
- The slug is the last part of the URL
3. **Get a valid Character EVE ID:**
```bash
# Option 1: Query your database
psql $DATABASE_URL -c "SELECT eve_id, name FROM character_v1 WHERE deleted = false LIMIT 5;"
# Option 2: Use the characters API
curl -H "Authorization: Bearer YOUR_API_TOKEN" \
http://localhost:8000/api/characters
```
4. **Get a Solar System ID from your map:**
```bash
curl -H "Authorization: Bearer YOUR_API_TOKEN" \
http://localhost:8000/api/maps/YOUR_SLUG/systems \
| jq '.data[0].solar_system_id'
```
## Set Environment Variables (for convenience)
```bash
export API_BASE_URL="http://localhost:8000"
export MAP_SLUG="your-map-slug"
export API_TOKEN="your_api_token_here"
export SOLAR_SYSTEM_ID="30000142" # Replace with actual system ID from your map
export VALID_CHAR_ID="111111111" # Replace with real character eve_id
export INVALID_CHAR_ID="999999999" # Non-existent character
```
---
## Test 1: Create Signature with Valid character_eve_id
**Expected Result:** HTTP 201, returned object has the submitted character_eve_id
```bash
curl -v -X POST \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"solar_system_id": '"$SOLAR_SYSTEM_ID"',
"eve_id": "TEST-001",
"character_eve_id": "'"$VALID_CHAR_ID"'",
"group": "wormhole",
"kind": "cosmic_signature",
"name": "Test Signature 1"
}' \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures" | jq '.'
```
**Verification:**
```bash
# The response should contain:
# "character_eve_id": "111111111" (your VALID_CHAR_ID)
```
---
## Test 2: Create Signature with Invalid character_eve_id
**Expected Result:** HTTP 422 with error "invalid_character"
```bash
curl -v -X POST \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"solar_system_id": '"$SOLAR_SYSTEM_ID"',
"eve_id": "TEST-002",
"character_eve_id": "'"$INVALID_CHAR_ID"'",
"group": "wormhole",
"kind": "cosmic_signature"
}' \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures" | jq '.'
```
**Expected Response:**
```json
{
"error": "invalid_character"
}
```
---
## Test 3: Create Signature WITHOUT character_eve_id (Backward Compatibility)
**Expected Result:** HTTP 201; falls back to the map owner's character_eve_id
```bash
curl -v -X POST \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"solar_system_id": '"$SOLAR_SYSTEM_ID"',
"eve_id": "TEST-003",
"group": "data",
"kind": "cosmic_signature",
"name": "Test Signature 3"
}' \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures" | jq '.'
```
**Verification:**
```bash
# The response should contain the map owner's character_eve_id
# This proves backward compatibility is maintained
```
---
## Test 4: Update Signature with Valid character_eve_id
**Expected Result:** HTTP 200; the returned object contains the submitted character_eve_id
```bash
# First, save a signature ID from Test 1 or 3
export SIG_ID="paste-signature-id-here"
curl -v -X PUT \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "Updated Signature Name",
"character_eve_id": "'"$VALID_CHAR_ID"'",
"description": "Updated via API"
}' \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures/$SIG_ID" | jq '.'
```
**Verification:**
```bash
# The response should contain:
# "character_eve_id": "111111111" (your VALID_CHAR_ID)
```
---
## Test 5: Update Signature with Invalid character_eve_id
**Expected Result:** HTTP 422 with error "invalid_character"
```bash
curl -v -X PUT \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "Should Fail",
"character_eve_id": "'"$INVALID_CHAR_ID"'"
}' \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures/$SIG_ID" | jq '.'
```
**Expected Response:**
```json
{
"error": "invalid_character"
}
```
---
## Cleanup
Delete test signatures:
```bash
# List all signatures to find IDs
curl -H "Authorization: Bearer $API_TOKEN" \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures" | jq '.data[] | {id, eve_id, name}'
# Delete specific signature
export SIG_ID="signature-uuid-here"
curl -v -X DELETE \
-H "Authorization: Bearer $API_TOKEN" \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures/$SIG_ID"
```
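All test signatures created above share the `TEST-` prefix, so the list response can be filtered to delete them in one pass. A sketch (the sample JSON stands in for the live list response; with a live server, pipe the list curl from the cleanup step into the same jq filter and uncomment the DELETE):

```bash
# Stand-in for the GET /signatures response
SAMPLE='{"data":[{"id":"aaa","eve_id":"TEST-001"},{"id":"bbb","eve_id":"REAL-SIG"}]}'

echo "$SAMPLE" \
  | jq -r '.data[] | select(.eve_id | startswith("TEST-")) | .id' \
  | while read -r sig_id; do
      echo "Would delete: $sig_id"
      # curl -X DELETE -H "Authorization: Bearer $API_TOKEN" \
      #   "$API_BASE_URL/api/maps/$MAP_SLUG/signatures/$sig_id"
    done
```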
---
## Quick Debugging Tips
### View All Signatures
```bash
curl -H "Authorization: Bearer $API_TOKEN" \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures" \
| jq '.data[] | {id, eve_id, character_eve_id, name}'
```
### View All Characters in Database
```bash
curl -H "Authorization: Bearer $API_TOKEN" \
"$API_BASE_URL/api/characters" \
| jq '.[] | {eve_id, name}'
```
### View All Systems in Map
```bash
curl -H "Authorization: Bearer $API_TOKEN" \
"$API_BASE_URL/api/maps/$MAP_SLUG/systems" \
| jq '.data[] | {id, solar_system_id, name}'
```
---
## Expected Behavior Summary
| Test Case | HTTP Status | character_eve_id in Response |
|-----------|-------------|------------------------------|
| Create with valid char ID | 201 | Matches submitted value |
| Create with invalid char ID | 422 | N/A (error returned) |
| Create without char ID | 201 | Map owner's char ID (fallback) |
| Update with valid char ID | 200 | Matches submitted value |
| Update with invalid char ID | 422 | N/A (error returned) |
---
## Troubleshooting
### "Unauthorized (invalid token for map)"
- Double-check your API_TOKEN matches the map's public API key
- Verify the token doesn't have extra spaces or newlines
### "Map not found"
- Verify your MAP_SLUG is correct
- Try using the map UUID instead of slug
### "System not found for solar_system_id"
- The system must already exist in your map
- Run the "View All Systems" command to find valid system IDs
### "invalid_character" returned for a character that should be valid
- Verify the character exists: `SELECT * FROM character_v1 WHERE eve_id = 'YOUR_ID';`
- Make sure `deleted = false` for the character

View File

@@ -0,0 +1,289 @@
#!/bin/bash
# test/manual/api/test_character_eve_id_fix.sh
# ─── Manual Test for Character EVE ID Fix (Issue #539) ────────────────────────
#
# This script tests the fix for GitHub issue #539 where character_eve_id
# was being ignored when creating/updating signatures via the REST API.
#
# Usage:
# 1. Create a .env file in this directory with:
# API_TOKEN=your_map_public_api_key
# API_BASE_URL=http://localhost:8000 # or your server URL
# MAP_SLUG=your_map_slug
# VALID_CHAR_ID=111111111 # A character that exists in your database
# INVALID_CHAR_ID=999999999 # A character that does NOT exist
#
# 2. Run: ./test_character_eve_id_fix.sh
#
# Prerequisites:
# - curl and jq must be installed
# - A map must exist with a valid API token
# - At least one system must be added to the map
set -eu
source "$(dirname "$0")/utils.sh"
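# utils.sh is not shown in this guide; the helpers below are a minimal sketch of
# what this script assumes (illustrative assumptions, not the project's actual
# implementations). They are defined only when utils.sh did not already provide them.
if ! declare -F make_request >/dev/null; then
  # make_request METHOD URL [JSON_BODY]
  # Prints the response body followed by the HTTP status code on the last line.
  make_request() {
    local method=$1 url=$2 body=${3:-}
    if [ -n "$body" ]; then
      curl -s -w '\n%{http_code}' -X "$method" \
        -H "Authorization: Bearer ${API_TOKEN:-}" \
        -H "Content-Type: application/json" \
        -d "$body" "$url"
    else
      curl -s -w '\n%{http_code}' -X "$method" \
        -H "Authorization: Bearer ${API_TOKEN:-}" "$url"
    fi
  }
  # parse_status RAW   -> last line (the HTTP status code)
  parse_status() { printf '%s\n' "$1" | tail -n 1; }
  # parse_response RAW -> everything except the last line (the body)
  parse_response() { printf '%s\n' "$1" | sed '$d'; }
fi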
echo "═══════════════════════════════════════════════════════════════════"
echo "Testing Character EVE ID Fix (GitHub Issue #539)"
echo "═══════════════════════════════════════════════════════════════════"
echo ""
# Check required environment variables
: "${API_BASE_URL:?Error: API_BASE_URL not set}"
: "${MAP_SLUG:?Error: MAP_SLUG not set}"
: "${VALID_CHAR_ID:?Error: VALID_CHAR_ID not set (provide a character eve_id that exists in DB)}"
: "${INVALID_CHAR_ID:?Error: INVALID_CHAR_ID not set (provide a non-existent character eve_id)}"
# Get a system to use for testing
echo "📋 Fetching available systems from map..."
SYSTEMS_RAW=$(make_request GET "$API_BASE_URL/api/maps/$MAP_SLUG/systems")
SYSTEMS_STATUS=$(parse_status "$SYSTEMS_RAW")
SYSTEMS_RESPONSE=$(parse_response "$SYSTEMS_RAW")
if [ "$SYSTEMS_STATUS" != "200" ]; then
echo "❌ Failed to fetch systems (HTTP $SYSTEMS_STATUS)"
echo "$SYSTEMS_RESPONSE"
exit 1
fi
# Extract first system's solar_system_id
SOLAR_SYSTEM_ID=$(echo "$SYSTEMS_RESPONSE" | jq -r '.data[0].solar_system_id // empty')
if [ -z "$SOLAR_SYSTEM_ID" ]; then
echo "❌ No systems found in map. Please add at least one system first."
exit 1
fi
echo "✅ Using solar_system_id: $SOLAR_SYSTEM_ID"
echo ""
# ═══════════════════════════════════════════════════════════════════════
# Test 1: Create signature with valid character_eve_id
# ═══════════════════════════════════════════════════════════════════════
echo "─────────────────────────────────────────────────────────────────"
echo "Test 1: Create signature with VALID character_eve_id"
echo "─────────────────────────────────────────────────────────────────"
PAYLOAD1=$(cat <<EOF
{
"solar_system_id": $SOLAR_SYSTEM_ID,
"eve_id": "TEST-001",
"character_eve_id": "$VALID_CHAR_ID",
"group": "wormhole",
"kind": "cosmic_signature",
"name": "Test Sig 1"
}
EOF
)
echo "Request:"
echo "$PAYLOAD1" | jq '.'
echo ""
RAW1=$(make_request POST "$API_BASE_URL/api/maps/$MAP_SLUG/signatures" "$PAYLOAD1")
STATUS1=$(parse_status "$RAW1")
RESPONSE1=$(parse_response "$RAW1")
echo "Response (HTTP $STATUS1):"
echo "$RESPONSE1" | jq '.'
echo ""
if [ "$STATUS1" = "201" ]; then
RETURNED_CHAR_ID=$(echo "$RESPONSE1" | jq -r '.data.character_eve_id')
if [ "$RETURNED_CHAR_ID" = "$VALID_CHAR_ID" ]; then
echo "✅ PASS: Signature created with correct character_eve_id: $RETURNED_CHAR_ID"
SIG_ID_1=$(echo "$RESPONSE1" | jq -r '.data.id')
else
echo "❌ FAIL: Expected character_eve_id=$VALID_CHAR_ID, got $RETURNED_CHAR_ID"
fi
else
echo "❌ FAIL: Expected HTTP 201, got $STATUS1"
fi
echo ""
# ═══════════════════════════════════════════════════════════════════════
# Test 2: Create signature with invalid character_eve_id
# ═══════════════════════════════════════════════════════════════════════
echo "─────────────────────────────────────────────────────────────────"
echo "Test 2: Create signature with INVALID character_eve_id"
echo "─────────────────────────────────────────────────────────────────"
PAYLOAD2=$(cat <<EOF
{
"solar_system_id": $SOLAR_SYSTEM_ID,
"eve_id": "TEST-002",
"character_eve_id": "$INVALID_CHAR_ID",
"group": "wormhole",
"kind": "cosmic_signature"
}
EOF
)
echo "Request:"
echo "$PAYLOAD2" | jq '.'
echo ""
RAW2=$(make_request POST "$API_BASE_URL/api/maps/$MAP_SLUG/signatures" "$PAYLOAD2")
STATUS2=$(parse_status "$RAW2")
RESPONSE2=$(parse_response "$RAW2")
echo "Response (HTTP $STATUS2):"
echo "$RESPONSE2" | jq '.'
echo ""
if [ "$STATUS2" = "422" ]; then
ERROR_MSG=$(echo "$RESPONSE2" | jq -r '.error // empty')
if [ "$ERROR_MSG" = "invalid_character" ]; then
echo "✅ PASS: Correctly rejected invalid character_eve_id with error: $ERROR_MSG"
else
echo "⚠️ PARTIAL: Got HTTP 422 but unexpected error message: $ERROR_MSG"
fi
else
echo "❌ FAIL: Expected HTTP 422, got $STATUS2"
fi
echo ""
# ═══════════════════════════════════════════════════════════════════════
# Test 3: Create signature WITHOUT character_eve_id (fallback test)
# ═══════════════════════════════════════════════════════════════════════
echo "─────────────────────────────────────────────────────────────────"
echo "Test 3: Create signature WITHOUT character_eve_id (fallback)"
echo "─────────────────────────────────────────────────────────────────"
PAYLOAD3=$(cat <<EOF
{
"solar_system_id": $SOLAR_SYSTEM_ID,
"eve_id": "TEST-003",
"group": "data",
"kind": "cosmic_signature",
"name": "Test Sig 3"
}
EOF
)
echo "Request:"
echo "$PAYLOAD3" | jq '.'
echo ""
RAW3=$(make_request POST "$API_BASE_URL/api/maps/$MAP_SLUG/signatures" "$PAYLOAD3")
STATUS3=$(parse_status "$RAW3")
RESPONSE3=$(parse_response "$RAW3")
echo "Response (HTTP $STATUS3):"
echo "$RESPONSE3" | jq '.'
echo ""
if [ "$STATUS3" = "201" ]; then
RETURNED_CHAR_ID=$(echo "$RESPONSE3" | jq -r '.data.character_eve_id')
echo "✅ PASS: Signature created with fallback character_eve_id: $RETURNED_CHAR_ID"
echo " (This should be the map owner's character)"
SIG_ID_3=$(echo "$RESPONSE3" | jq -r '.data.id')
else
echo "❌ FAIL: Expected HTTP 201, got $STATUS3"
fi
echo ""
# ═══════════════════════════════════════════════════════════════════════
# Test 4: Update signature with valid character_eve_id
# ═══════════════════════════════════════════════════════════════════════
if [ -n "${SIG_ID_1:-}" ]; then
echo "─────────────────────────────────────────────────────────────────"
echo "Test 4: Update signature with VALID character_eve_id"
echo "─────────────────────────────────────────────────────────────────"
PAYLOAD4=$(cat <<EOF
{
"name": "Updated Test Sig 1",
"character_eve_id": "$VALID_CHAR_ID",
"description": "Updated via API"
}
EOF
)
echo "Request:"
echo "$PAYLOAD4" | jq '.'
echo ""
RAW4=$(make_request PUT "$API_BASE_URL/api/maps/$MAP_SLUG/signatures/$SIG_ID_1" "$PAYLOAD4")
STATUS4=$(parse_status "$RAW4")
RESPONSE4=$(parse_response "$RAW4")
echo "Response (HTTP $STATUS4):"
echo "$RESPONSE4" | jq '.'
echo ""
if [ "$STATUS4" = "200" ]; then
RETURNED_CHAR_ID=$(echo "$RESPONSE4" | jq -r '.data.character_eve_id')
if [ "$RETURNED_CHAR_ID" = "$VALID_CHAR_ID" ]; then
echo "✅ PASS: Signature updated with correct character_eve_id: $RETURNED_CHAR_ID"
else
echo "❌ FAIL: Expected character_eve_id=$VALID_CHAR_ID, got $RETURNED_CHAR_ID"
fi
else
echo "❌ FAIL: Expected HTTP 200, got $STATUS4"
fi
echo ""
fi
# ═══════════════════════════════════════════════════════════════════════
# Test 5: Update signature with invalid character_eve_id
# ═══════════════════════════════════════════════════════════════════════
if [ -n "${SIG_ID_3:-}" ]; then
echo "─────────────────────────────────────────────────────────────────"
echo "Test 5: Update signature with INVALID character_eve_id"
echo "─────────────────────────────────────────────────────────────────"
PAYLOAD5=$(cat <<EOF
{
"name": "Should Fail",
"character_eve_id": "$INVALID_CHAR_ID"
}
EOF
)
echo "Request:"
echo "$PAYLOAD5" | jq '.'
echo ""
RAW5=$(make_request PUT "$API_BASE_URL/api/maps/$MAP_SLUG/signatures/$SIG_ID_3" "$PAYLOAD5")
STATUS5=$(parse_status "$RAW5")
RESPONSE5=$(parse_response "$RAW5")
echo "Response (HTTP $STATUS5):"
echo "$RESPONSE5" | jq '.'
echo ""
if [ "$STATUS5" = "422" ]; then
ERROR_MSG=$(echo "$RESPONSE5" | jq -r '.error // empty')
if [ "$ERROR_MSG" = "invalid_character" ]; then
echo "✅ PASS: Correctly rejected invalid character_eve_id with error: $ERROR_MSG"
else
echo "⚠️ PARTIAL: Got HTTP 422 but unexpected error message: $ERROR_MSG"
fi
else
echo "❌ FAIL: Expected HTTP 422, got $STATUS5"
fi
echo ""
fi
# ═══════════════════════════════════════════════════════════════════════
# Cleanup (optional)
# ═══════════════════════════════════════════════════════════════════════
echo "─────────────────────────────────────────────────────────────────"
echo "Cleanup"
echo "─────────────────────────────────────────────────────────────────"
echo "Created signature IDs: ${SIG_ID_1:-none} ${SIG_ID_3:-none}"
echo ""
echo "To clean up manually, delete these signatures via the UI or API:"
for sig_id in ${SIG_ID_1:-} ${SIG_ID_3:-}; do
if [ -n "$sig_id" ]; then
echo " curl -X DELETE -H 'Authorization: Bearer \$API_TOKEN' \\"
echo " $API_BASE_URL/api/maps/$MAP_SLUG/signatures/$sig_id"
fi
done
echo ""
echo "═══════════════════════════════════════════════════════════════════"
echo "Test Complete!"
echo "═══════════════════════════════════════════════════════════════════"

View File

@@ -410,7 +410,7 @@ defmodule WandererApp.Map.CacheRTreeTest do
# Check many positions for availability (simulating auto-positioning)
test_positions = for x <- 0..20, y <- 0..20, do: {x * 100, y * 50}
for {x, y} do
for {x, y} <- test_positions do
box = [{x, x + 130}, {y, y + 34}]
{:ok, _ids} = CacheRTree.query(box, name)
# Not asserting anything, just verifying queries work

View File

@@ -0,0 +1,561 @@
defmodule WandererApp.Map.MapPoolCrashRecoveryTest do
use ExUnit.Case, async: false
alias WandererApp.Map.{MapPool, MapPoolState}
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@ets_table :map_pool_state_table
setup do
# Clean up any existing test data
cleanup_test_data()
# Check if ETS table exists
ets_exists? =
try do
:ets.info(@ets_table) != :undefined
rescue
_ -> false
end
on_exit(fn ->
cleanup_test_data()
end)
{:ok, ets_exists: ets_exists?}
end
defp cleanup_test_data do
# Clean up test caches
WandererApp.Cache.delete("started_maps")
Cachex.clear(@cache)
# Clean up all ETS pool-state entries (the pattern matches every {uuid, map_ids, timestamp} record)
if :ets.whereis(@ets_table) != :undefined do
:ets.match_delete(@ets_table, {:"$1", :"$2", :"$3"})
end
end
defp create_test_pool_with_uuid(uuid, map_ids) do
# Manually register in unique_registry
{:ok, _} = Registry.register(@unique_registry, Module.concat(MapPool, uuid), map_ids)
{:ok, _} = Registry.register(@registry, MapPool, uuid)
# Add to cache
Enum.each(map_ids, fn map_id ->
Cachex.put(@cache, map_id, uuid)
end)
# Save to ETS
MapPoolState.save_pool_state(uuid, map_ids)
uuid
end
defp get_pool_map_ids(uuid) do
case Registry.lookup(@unique_registry, Module.concat(MapPool, uuid)) do
[{_pid, map_ids}] -> map_ids
[] -> []
end
end
describe "MapPoolState - ETS operations" do
test "save_pool_state stores state in ETS", %{ets_exists: ets_exists?} do
if ets_exists? do
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
map_ids = [1, 2, 3]
assert :ok = MapPoolState.save_pool_state(uuid, map_ids)
# Verify it's in ETS
assert {:ok, ^map_ids} = MapPoolState.get_pool_state(uuid)
else
:ok
end
end
test "get_pool_state returns not_found for non-existent pool", %{ets_exists: ets_exists?} do
if ets_exists? do
uuid = "non-existent-#{:rand.uniform(1_000_000)}"
assert {:error, :not_found} = MapPoolState.get_pool_state(uuid)
else
:ok
end
end
test "delete_pool_state removes state from ETS", %{ets_exists: ets_exists?} do
if ets_exists? do
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
map_ids = [1, 2, 3]
MapPoolState.save_pool_state(uuid, map_ids)
assert {:ok, ^map_ids} = MapPoolState.get_pool_state(uuid)
assert :ok = MapPoolState.delete_pool_state(uuid)
assert {:error, :not_found} = MapPoolState.get_pool_state(uuid)
else
:ok
end
end
test "save_pool_state updates existing state", %{ets_exists: ets_exists?} do
if ets_exists? do
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
# Save initial state
MapPoolState.save_pool_state(uuid, [1, 2])
assert {:ok, [1, 2]} = MapPoolState.get_pool_state(uuid)
# Update state
MapPoolState.save_pool_state(uuid, [1, 2, 3, 4])
assert {:ok, [1, 2, 3, 4]} = MapPoolState.get_pool_state(uuid)
else
:ok
end
end
test "list_all_states returns all pool states", %{ets_exists: ets_exists?} do
if ets_exists? do
# Clean first
:ets.delete_all_objects(@ets_table)
uuid1 = "test-pool-1-#{:rand.uniform(1_000_000)}"
uuid2 = "test-pool-2-#{:rand.uniform(1_000_000)}"
MapPoolState.save_pool_state(uuid1, [1, 2])
MapPoolState.save_pool_state(uuid2, [3, 4])
states = MapPoolState.list_all_states()
assert length(states) >= 2
# Verify our pools are in there
uuids = Enum.map(states, fn {uuid, _map_ids, _timestamp} -> uuid end)
assert uuid1 in uuids
assert uuid2 in uuids
else
:ok
end
end
test "count_states returns correct count", %{ets_exists: ets_exists?} do
if ets_exists? do
# Clean first
:ets.delete_all_objects(@ets_table)
uuid1 = "test-pool-1-#{:rand.uniform(1_000_000)}"
uuid2 = "test-pool-2-#{:rand.uniform(1_000_000)}"
MapPoolState.save_pool_state(uuid1, [1, 2])
MapPoolState.save_pool_state(uuid2, [3, 4])
count = MapPoolState.count_states()
assert count >= 2
else
:ok
end
end
end
describe "MapPoolState - stale entry cleanup" do
test "cleanup_stale_entries removes old entries", %{ets_exists: ets_exists?} do
if ets_exists? do
uuid = "stale-pool-#{:rand.uniform(1_000_000)}"
# Manually insert a stale entry (24+ hours old)
stale_timestamp = System.system_time(:second) - 25 * 3600
:ets.insert(@ets_table, {uuid, [1, 2], stale_timestamp})
assert {:ok, [1, 2]} = MapPoolState.get_pool_state(uuid)
# Clean up stale entries
{:ok, deleted_count} = MapPoolState.cleanup_stale_entries()
assert deleted_count >= 1
# Verify stale entry was removed
assert {:error, :not_found} = MapPoolState.get_pool_state(uuid)
else
:ok
end
end
test "cleanup_stale_entries preserves recent entries", %{ets_exists: ets_exists?} do
if ets_exists? do
uuid = "recent-pool-#{:rand.uniform(1_000_000)}"
map_ids = [1, 2, 3]
# Save recent entry
MapPoolState.save_pool_state(uuid, map_ids)
# Clean up
MapPoolState.cleanup_stale_entries()
# Recent entry should still exist
assert {:ok, ^map_ids} = MapPoolState.get_pool_state(uuid)
else
:ok
end
end
end
describe "Crash recovery - basic scenarios" do
@tag :skip
test "MapPool recovers single map after crash" do
# This test requires a full MapPool GenServer with actual map data
# Skipping as it needs integration with Server.Impl.start_map
:ok
end
@tag :skip
test "MapPool recovers multiple maps after crash" do
# Similar to above - requires full integration
:ok
end
@tag :skip
test "MapPool merges new and recovered map_ids" do
# Tests that if pool crashes while starting a new map,
# both the new map and recovered maps are started
:ok
end
end
describe "Crash recovery - telemetry" do
test "recovery emits start telemetry event", %{ets_exists: ets_exists?} do
if ets_exists? do
test_pid = self()
# Attach telemetry handler
:telemetry.attach(
"test-recovery-start",
[:wanderer_app, :map_pool, :recovery, :start],
fn _event, measurements, metadata, _config ->
send(test_pid, {:telemetry_start, measurements, metadata})
end,
nil
)
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
recovered_maps = [1, 2, 3]
# Save state to ETS (simulating previous run)
MapPoolState.save_pool_state(uuid, recovered_maps)
# Simulate init with recovery
# Note: Can't actually start a MapPool here without full integration,
# but we can verify the telemetry handler is set up correctly
# Manually emit the event to test handler
:telemetry.execute(
[:wanderer_app, :map_pool, :recovery, :start],
%{recovered_map_count: 3, total_map_count: 3},
%{pool_uuid: uuid}
)
assert_receive {:telemetry_start, measurements, metadata}, 500
assert measurements.recovered_map_count == 3
assert measurements.total_map_count == 3
assert metadata.pool_uuid == uuid
# Cleanup
:telemetry.detach("test-recovery-start")
else
:ok
end
end
test "recovery emits complete telemetry event", %{ets_exists: ets_exists?} do
if ets_exists? do
test_pid = self()
:telemetry.attach(
"test-recovery-complete",
[:wanderer_app, :map_pool, :recovery, :complete],
fn _event, measurements, metadata, _config ->
send(test_pid, {:telemetry_complete, measurements, metadata})
end,
nil
)
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
# Manually emit the event
:telemetry.execute(
[:wanderer_app, :map_pool, :recovery, :complete],
%{recovered_count: 3, failed_count: 0, duration_ms: 100},
%{pool_uuid: uuid}
)
assert_receive {:telemetry_complete, measurements, metadata}, 500
assert measurements.recovered_count == 3
assert measurements.failed_count == 0
assert measurements.duration_ms == 100
assert metadata.pool_uuid == uuid
:telemetry.detach("test-recovery-complete")
else
:ok
end
end
test "recovery emits map_failed telemetry event", %{ets_exists: ets_exists?} do
if ets_exists? do
test_pid = self()
:telemetry.attach(
"test-recovery-map-failed",
[:wanderer_app, :map_pool, :recovery, :map_failed],
fn _event, measurements, metadata, _config ->
send(test_pid, {:telemetry_map_failed, measurements, metadata})
end,
nil
)
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
failed_map_id = 123
# Manually emit the event
:telemetry.execute(
[:wanderer_app, :map_pool, :recovery, :map_failed],
%{map_id: failed_map_id},
%{pool_uuid: uuid, reason: "Map not found"}
)
assert_receive {:telemetry_map_failed, measurements, metadata}, 500
assert measurements.map_id == failed_map_id
assert metadata.pool_uuid == uuid
assert metadata.reason == "Map not found"
:telemetry.detach("test-recovery-map-failed")
else
:ok
end
end
end
describe "Crash recovery - state persistence" do
@tag :skip
test "state persisted after successful map start" do
# Would need to start actual MapPool and trigger start_map
:ok
end
@tag :skip
test "state persisted after successful map stop" do
# Would need to start actual MapPool and trigger stop_map
:ok
end
@tag :skip
test "state persisted during backup_state" do
# Would need to trigger backup_state handler
:ok
end
end
describe "Graceful shutdown cleanup" do
test "ETS state cleaned on normal termination", %{ets_exists: ets_exists?} do
if ets_exists? do
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
map_ids = [1, 2, 3]
# Save state
MapPoolState.save_pool_state(uuid, map_ids)
assert {:ok, ^map_ids} = MapPoolState.get_pool_state(uuid)
# Simulate graceful shutdown by calling delete
MapPoolState.delete_pool_state(uuid)
# State should be gone
assert {:error, :not_found} = MapPoolState.get_pool_state(uuid)
else
:ok
end
end
@tag :skip
test "ETS state preserved on abnormal termination" do
# Would need to actually crash a MapPool to test this
# The terminate callback would not call delete_pool_state
:ok
end
end
describe "Edge cases" do
test "recovery with empty map_ids list", %{ets_exists: ets_exists?} do
if ets_exists? do
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
# Save empty state
MapPoolState.save_pool_state(uuid, [])
assert {:ok, []} = MapPoolState.get_pool_state(uuid)
else
:ok
end
end
test "recovery with duplicate map_ids gets deduplicated", %{ets_exists: ets_exists?} do
if ets_exists? do
# This tests the deduplication logic in init
# If we have [1, 2] in ETS and [2, 3] in new map_ids,
# result should be [1, 2, 3] after Enum.uniq
recovered_maps = [1, 2]
new_maps = [2, 3]
expected = Enum.uniq(recovered_maps ++ new_maps)
# Enum.uniq preserves first-occurrence order, so this is [1, 2, 3]
assert 1 in expected
assert 2 in expected
assert 3 in expected
assert length(expected) == 3
else
:ok
end
end
test "large number of maps in recovery", %{ets_exists: ets_exists?} do
if ets_exists? do
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
# Test with 20 maps (the pool limit)
map_ids = Enum.to_list(1..20)
MapPoolState.save_pool_state(uuid, map_ids)
assert {:ok, recovered} = MapPoolState.get_pool_state(uuid)
assert length(recovered) == 20
assert recovered == map_ids
else
:ok
end
end
end
describe "Concurrent operations" do
test "multiple pools can save state concurrently", %{ets_exists: ets_exists?} do
if ets_exists? do
# Create 10 pools concurrently
tasks =
1..10
|> Enum.map(fn i ->
Task.async(fn ->
uuid = "concurrent-pool-#{i}-#{:rand.uniform(1_000_000)}"
map_ids = [i * 10, i * 10 + 1]
MapPoolState.save_pool_state(uuid, map_ids)
{uuid, map_ids}
end)
end)
results = Task.await_many(tasks, 5000)
# Verify all pools saved successfully
Enum.each(results, fn {uuid, expected_map_ids} ->
assert {:ok, ^expected_map_ids} = MapPoolState.get_pool_state(uuid)
end)
else
:ok
end
end
test "concurrent reads and writes don't corrupt state", %{ets_exists: ets_exists?} do
if ets_exists? do
uuid = "test-pool-#{:rand.uniform(1_000_000)}"
MapPoolState.save_pool_state(uuid, [1, 2, 3])
# Spawn multiple readers and writers
readers =
1..5
|> Enum.map(fn _ ->
Task.async(fn ->
MapPoolState.get_pool_state(uuid)
end)
end)
writers =
1..5
|> Enum.map(fn i ->
Task.async(fn ->
MapPoolState.save_pool_state(uuid, [i, i + 1])
end)
end)
# All operations should complete without error
reader_results = Task.await_many(readers, 5000)
writer_results = Task.await_many(writers, 5000)
assert Enum.all?(reader_results, fn
{:ok, _} -> true
_ -> false
end)
assert Enum.all?(writer_results, &(&1 == :ok))
# Final state should be valid (one of the writer's values)
assert {:ok, final_state} = MapPoolState.get_pool_state(uuid)
assert is_list(final_state)
assert length(final_state) == 2
else
:ok
end
end
end
describe "Performance" do
@tag :slow
test "recovery completes within acceptable time", %{ets_exists: ets_exists?} do
if ets_exists? do
uuid = "perf-pool-#{:rand.uniform(1_000_000)}"
# Test with pool at limit (20 maps)
map_ids = Enum.to_list(1..20)
# Measure save time
{save_time_us, :ok} = :timer.tc(fn ->
MapPoolState.save_pool_state(uuid, map_ids)
end)
# Measure retrieval time
{get_time_us, {:ok, _}} = :timer.tc(fn ->
MapPoolState.get_pool_state(uuid)
end)
# Both operations should be very fast (< 1ms)
assert save_time_us < 1000, "Save took #{save_time_us}µs, expected < 1000µs"
assert get_time_us < 1000, "Get took #{get_time_us}µs, expected < 1000µs"
else
:ok
end
end
@tag :slow
test "cleanup performance with many stale entries", %{ets_exists: ets_exists?} do
if ets_exists? do
# Insert 100 stale entries
stale_timestamp = System.system_time(:second) - 25 * 3600
1..100
|> Enum.each(fn i ->
uuid = "stale-pool-#{i}"
:ets.insert(@ets_table, {uuid, [i], stale_timestamp})
end)
# Measure cleanup time
{cleanup_time_us, {:ok, deleted_count}} = :timer.tc(fn ->
MapPoolState.cleanup_stale_entries()
end)
# Should have deleted at least 100 entries
assert deleted_count >= 100
# Cleanup should be reasonably fast (< 100ms for 100 entries)
assert cleanup_time_us < 100_000,
"Cleanup took #{cleanup_time_us}µs, expected < 100,000µs"
else
:ok
end
end
end
end

View File

@@ -0,0 +1,343 @@
defmodule WandererApp.Map.MapPoolTest do
use ExUnit.Case, async: false
alias WandererApp.Map.{MapPool, MapPoolDynamicSupervisor, Reconciler}
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
setup do
# Clean up any existing test data
cleanup_test_data()
# Check if required infrastructure is running
registries_running? =
try do
# Registry.keys/2 raises if the registry process is not running,
# so any list return means the registry is up
is_list(Registry.keys(@registry, self()))
rescue
_ -> false
end
reconciler_running? = Process.whereis(Reconciler) != nil
on_exit(fn ->
cleanup_test_data()
end)
{:ok, registries_running: registries_running?, reconciler_running: reconciler_running?}
end
defp cleanup_test_data do
# Clean up test caches
WandererApp.Cache.delete("started_maps")
Cachex.clear(@cache)
end
describe "garbage collection with synchronous stop" do
@tag :skip
test "garbage collector successfully stops map with synchronous call" do
# This test would require setting up a full map pool with a test map
# Skipping for now as it requires more complex setup with actual map data
:ok
end
@tag :skip
test "garbage collector handles stop failures gracefully" do
# This test would verify error handling when stop fails
:ok
end
end
describe "cache lookup with registry fallback" do
test "stop_map handles cache miss by scanning registry", %{registries_running: registries_running?} do
if registries_running? do
# Setup: Create a map_id that's not in cache but will be found in registry scan
map_id = "test_map_#{:rand.uniform(1_000_000)}"
# Verify cache is empty for this map
assert {:ok, nil} = Cachex.get(@cache, map_id)
# Call stop_map - should handle gracefully with fallback
assert :ok = MapPoolDynamicSupervisor.stop_map(map_id)
else
# Skip test if registries not running
:ok
end
end
test "stop_map handles non-existent pool_uuid in registry", %{registries_running: registries_running?} do
if registries_running? do
map_id = "test_map_#{:rand.uniform(1_000_000)}"
fake_uuid = "fake_uuid_#{:rand.uniform(1_000_000)}"
# Put fake uuid in cache that doesn't exist in registry
Cachex.put(@cache, map_id, fake_uuid)
# Call stop_map - should handle gracefully with fallback
assert :ok = MapPoolDynamicSupervisor.stop_map(map_id)
else
:ok
end
end
test "stop_map updates cache when found via registry scan", %{registries_running: registries_running?} do
if registries_running? do
# This test would require a running pool with registered maps
# For now, we verify the fallback logic doesn't crash
map_id = "test_map_#{:rand.uniform(1_000_000)}"
assert :ok = MapPoolDynamicSupervisor.stop_map(map_id)
else
:ok
end
end
end
describe "state cleanup atomicity" do
@tag :skip
test "rollback occurs when registry update fails" do
# This would require mocking Registry.update_value to fail
# Skipping for now as it requires more complex mocking setup
:ok
end
@tag :skip
test "rollback occurs when cache delete fails" do
# This would require mocking Cachex.del to fail
:ok
end
@tag :skip
test "successful cleanup updates all three state stores" do
# This would verify Registry, Cache, and GenServer state are all updated
:ok
end
end
describe "Reconciler - zombie map detection and cleanup" do
test "reconciler detects zombie maps in started_maps cache", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
# Setup: Add maps to started_maps that aren't in any registry
zombie_map_id = "zombie_map_#{:rand.uniform(1_000_000)}"
WandererApp.Cache.insert_or_update(
"started_maps",
[zombie_map_id],
fn existing -> [zombie_map_id | existing] |> Enum.uniq() end
)
# Get started_maps
{:ok, started_maps} = WandererApp.Cache.lookup("started_maps", [])
assert zombie_map_id in started_maps
# Trigger reconciliation
send(Reconciler, :reconcile)
# Give it time to process
Process.sleep(200)
# Verify zombie was cleaned up
{:ok, started_maps_after} = WandererApp.Cache.lookup("started_maps", [])
refute zombie_map_id in started_maps_after
else
:ok
end
end
test "reconciler cleans up zombie map caches", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
zombie_map_id = "zombie_map_#{:rand.uniform(1_000_000)}"
# Setup zombie state
WandererApp.Cache.insert_or_update(
"started_maps",
[zombie_map_id],
fn existing -> [zombie_map_id | existing] |> Enum.uniq() end
)
WandererApp.Cache.insert("map_#{zombie_map_id}:started", true)
Cachex.put(@cache, zombie_map_id, "fake_uuid")
# Trigger reconciliation
send(Reconciler, :reconcile)
Process.sleep(200)
# Verify all caches cleaned
{:ok, started_maps} = WandererApp.Cache.lookup("started_maps", [])
refute zombie_map_id in started_maps
{:ok, cache_entry} = Cachex.get(@cache, zombie_map_id)
assert cache_entry == nil
else
:ok
end
end
end
describe "Reconciler - orphan map detection and fix" do
@tag :skip
test "reconciler detects orphan maps in registry" do
# This would require setting up a pool with maps in registry
# but not in started_maps cache
:ok
end
@tag :skip
test "reconciler adds orphan maps to started_maps cache" do
# This would verify orphan maps get added to the cache
:ok
end
end
describe "Reconciler - cache inconsistency detection and fix" do
test "reconciler detects map with missing cache entry", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
# This test verifies the reconciler can detect when a map
# is in the registry but has no cache entry
# Since we can't easily set up a full pool, we test the detection logic
map_id = "test_map_#{:rand.uniform(1_000_000)}"
# Ensure no cache entry
Cachex.del(@cache, map_id)
# The reconciler would detect this if the map were in a registry.
# For now, we only verify the detection logic doesn't crash.
send(Reconciler, :reconcile)
Process.sleep(200)
# No assertions needed; we are only verifying the absence of crashes
else
:ok
end
end
test "reconciler detects cache pointing to non-existent pool", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
map_id = "test_map_#{:rand.uniform(1_000_000)}"
fake_uuid = "fake_uuid_#{:rand.uniform(1_000_000)}"
# Put fake uuid in cache
Cachex.put(@cache, map_id, fake_uuid)
# Trigger reconciliation
send(Reconciler, :reconcile)
Process.sleep(200)
# Cache entry should be removed since pool doesn't exist
{:ok, cache_entry} = Cachex.get(@cache, map_id)
assert cache_entry == nil
else
:ok
end
end
end
describe "Reconciler - stats and telemetry" do
test "reconciler emits telemetry events", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
# Setup telemetry handler
test_pid = self()
:telemetry.attach(
"test-reconciliation",
[:wanderer_app, :map, :reconciliation],
fn _event, measurements, _metadata, _config ->
send(test_pid, {:telemetry, measurements})
end,
nil
)
# Trigger reconciliation
send(Reconciler, :reconcile)
Process.sleep(200)
# Should receive telemetry event
assert_receive {:telemetry, measurements}, 500
assert is_integer(measurements.total_started_maps)
assert is_integer(measurements.total_registry_maps)
assert is_integer(measurements.zombie_maps)
assert is_integer(measurements.orphan_maps)
assert is_integer(measurements.cache_inconsistencies)
# Cleanup
:telemetry.detach("test-reconciliation")
else
:ok
end
end
end
describe "Reconciler - manual trigger" do
test "trigger_reconciliation runs reconciliation immediately", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
zombie_map_id = "zombie_map_#{:rand.uniform(1_000_000)}"
# Setup zombie state
WandererApp.Cache.insert_or_update(
"started_maps",
[zombie_map_id],
fn existing -> [zombie_map_id | existing] |> Enum.uniq() end
)
# Verify it exists
{:ok, started_maps_before} = WandererApp.Cache.lookup("started_maps", [])
assert zombie_map_id in started_maps_before
# Trigger manual reconciliation
Reconciler.trigger_reconciliation()
Process.sleep(200)
# Verify zombie was cleaned up
{:ok, started_maps_after} = WandererApp.Cache.lookup("started_maps", [])
refute zombie_map_id in started_maps_after
else
:ok
end
end
end
describe "edge cases and error handling" do
test "stop_map with cache error returns ok", %{registries_running: registries_running?} do
if registries_running? do
map_id = "test_map_#{:rand.uniform(1_000_000)}"
# Even if cache operations fail, stop_map/1 should still return :ok
assert :ok = MapPoolDynamicSupervisor.stop_map(map_id)
else
:ok
end
end
test "reconciler handles empty registries gracefully", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
# Clear everything
cleanup_test_data()
# Should not crash even with empty data
send(Reconciler, :reconcile)
Process.sleep(200)
# No assertions - just verifying no crash
assert true
else
:ok
end
end
test "reconciler handles nil values in caches", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
map_id = "test_map_#{:rand.uniform(1_000_000)}"
# Explicitly set nil
Cachex.put(@cache, map_id, nil)
# Should handle gracefully
send(Reconciler, :reconcile)
Process.sleep(200)
assert true
else
:ok
end
end
end
end
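The zombie/orphan classification these reconciler tests exercise can be summarized with a small pure-function sketch. This is illustrative only: `ReconcilerSketch` and its `classify/2` API are hypothetical names, and the real `Reconciler` works against live pool registries and caches rather than plain lists.

```elixir
# Hedged sketch of the reconciliation classification the tests above assume.
defmodule ReconcilerSketch do
  # started: map ids recorded in the "started_maps" cache
  # registered: map ids actually owned by a pool in the registry
  def classify(started, registered) do
    started_set = MapSet.new(started)
    registered_set = MapSet.new(registered)

    %{
      # in the cache but with no live pool -> stale entries to purge
      zombies: started_set |> MapSet.difference(registered_set) |> MapSet.to_list(),
      # alive in a registry but missing from the cache -> re-add to the cache
      orphans: registered_set |> MapSet.difference(started_set) |> MapSet.to_list()
    }
  end
end

%{zombies: zombies, orphans: orphans} = ReconcilerSketch.classify(["a", "b"], ["b", "c"])
Enum.sort(zombies) # ["a"]
Enum.sort(orphans) # ["c"]
```

Under this view, the first two tests assert the `zombies` branch (cache entry with no pool) and the skipped tests document the `orphans` branch.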
@@ -580,6 +580,155 @@ defmodule WandererApp.Map.Operations.SignaturesTest do
end
end
describe "character_eve_id validation" do
test "create_signature uses provided character_eve_id when valid" do
# Create a test character
{:ok, character} =
WandererApp.Api.Character.create(%{
eve_id: "111111111",
name: "Test Character"
})
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
params = %{
"solar_system_id" => 30_000_142,
"eve_id" => "ABC-123",
"character_eve_id" => character.eve_id
}
MapTestHelpers.expect_map_server_error(fn ->
result = Signatures.create_signature(conn, params)
case result do
{:ok, data} ->
# Should use the provided character_eve_id, not the owner's
assert Map.get(data, "character_eve_id") == character.eve_id
{:error, _} ->
# System not found error is acceptable
:ok
end
end)
end
test "create_signature rejects invalid character_eve_id" do
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
params = %{
"solar_system_id" => 30_000_142,
"eve_id" => "ABC-123",
"character_eve_id" => "invalid_char_id_999"
}
MapTestHelpers.expect_map_server_error(fn ->
result = Signatures.create_signature(conn, params)
# Should return invalid_character error
assert {:error, :invalid_character} = result
end)
end
test "create_signature falls back to owner when character_eve_id not provided" do
owner_char_id = "888888888"
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: owner_char_id,
owner_user_id: Ecto.UUID.generate()
}
}
params = %{
"solar_system_id" => 30_000_142,
"eve_id" => "ABC-123"
}
MapTestHelpers.expect_map_server_error(fn ->
result = Signatures.create_signature(conn, params)
case result do
{:ok, data} ->
# Should use the owner's character_eve_id
assert Map.get(data, "character_eve_id") == owner_char_id
{:error, _} ->
# System not found error is acceptable
:ok
end
end)
end
test "update_signature respects provided character_eve_id when valid" do
# Create a test character
{:ok, character} =
WandererApp.Api.Character.create(%{
eve_id: "222222222",
name: "Another Test Character"
})
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
sig_id = Ecto.UUID.generate()
params = %{
"name" => "Updated Name",
"character_eve_id" => character.eve_id
}
result = Signatures.update_signature(conn, sig_id, params)
case result do
{:ok, data} ->
# Should use the provided character_eve_id
assert Map.get(data, "character_eve_id") == character.eve_id
{:error, _} ->
# Signature not found error is acceptable
:ok
end
end
test "update_signature rejects invalid character_eve_id" do
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
sig_id = Ecto.UUID.generate()
params = %{
"name" => "Updated Name",
"character_eve_id" => "totally_invalid_char"
}
result = Signatures.update_signature(conn, sig_id, params)
# Should return invalid_character error
assert {:error, :invalid_character} = result
end
end
describe "parameter merging and character_eve_id injection" do
test "create_signature injects character_eve_id correctly" do
char_id = "987654321"
@@ -0,0 +1,320 @@
defmodule WandererApp.Map.SlugUniquenessTest do
@moduledoc """
Tests for map slug uniqueness constraints and handling.
These tests verify that:
1. Database unique constraint is enforced
2. Application-level slug generation handles uniqueness
3. Concurrent map creation doesn't create duplicates
4. Error handling works correctly for slug conflicts
"""
use WandererApp.DataCase, async: false
alias WandererApp.Api.Map
require Logger
describe "slug uniqueness constraint" do
setup do
# Create a test user
user = create_test_user()
%{user: user}
end
test "prevents duplicate slugs via database constraint", %{user: user} do
# Create first map with a specific slug
{:ok, _map1} =
Map.new(%{
name: "Test Map",
slug: "test-map",
owner_id: user.id,
description: "First map",
scope: "wormholes"
})
# Attempt to create second map with same slug by bypassing Ash slug generation
# This simulates a race condition where slug generation passes but DB insert fails
result =
Map.new(%{
name: "Different Name",
slug: "test-map",
owner_id: user.id,
description: "Second map",
scope: "wormholes"
})
# Should get a unique constraint error from database
assert {:error, _error} = result
end
test "automatically increments slug when duplicate detected", %{user: user} do
# Create first map
{:ok, map1} =
Map.new(%{
name: "Test Map",
slug: "test-map",
owner_id: user.id,
description: "First map",
scope: "wormholes"
})
assert map1.slug == "test-map"
# Create second map with same name (should auto-increment slug)
{:ok, map2} =
Map.new(%{
name: "Test Map",
slug: "test-map",
owner_id: user.id,
description: "Second map",
scope: "wormholes"
})
# Slug should be automatically incremented
assert map2.slug == "test-map-2"
# Create third map with same name
{:ok, map3} =
Map.new(%{
name: "Test Map",
slug: "test-map",
owner_id: user.id,
description: "Third map",
scope: "wormholes"
})
assert map3.slug == "test-map-3"
end
test "handles many maps with similar names", %{user: user} do
# Create 10 maps with the same base slug
maps =
for i <- 1..10 do
{:ok, map} =
Map.new(%{
name: "Popular Name",
slug: "popular-name",
owner_id: user.id,
description: "Map #{i}",
scope: "wormholes"
})
map
end
# Verify all slugs are unique
slugs = Enum.map(maps, & &1.slug)
assert length(Enum.uniq(slugs)) == 10
# First should keep the base slug
assert List.first(maps).slug == "popular-name"
# Others should be numbered
assert "popular-name-2" in slugs
assert "popular-name-10" in slugs
end
end
describe "concurrent slug creation (race condition)" do
setup do
user = create_test_user()
%{user: user}
end
@tag :slow
test "handles concurrent map creation with identical slugs", %{user: user} do
# Create 5 concurrent map creation requests with the same slug
tasks =
for i <- 1..5 do
Task.async(fn ->
Map.new(%{
name: "Concurrent Test",
slug: "concurrent-test",
owner_id: user.id,
description: "Concurrent map #{i}",
scope: "wormholes"
})
end)
end
# Wait for all tasks to complete
results = Task.await_many(tasks, 10_000)
# All should either succeed or fail gracefully (no crashes)
assert length(results) == 5
# Get successful results
successful = Enum.filter(results, &match?({:ok, _}, &1))
failed = Enum.filter(results, &match?({:error, _}, &1))
# At least some should succeed
assert length(successful) > 0
# Extract maps from successful results
maps = Enum.map(successful, fn {:ok, map} -> map end)
# Verify all successful maps have unique slugs
slugs = Enum.map(maps, & &1.slug)
assert length(Enum.uniq(slugs)) == length(slugs), "All successful maps should have unique slugs"
# Log results for visibility
Logger.info("Concurrent test: #{length(successful)} succeeded, #{length(failed)} failed")
Logger.info("Unique slugs created: #{inspect(slugs)}")
end
@tag :slow
test "concurrent creation with different names creates different base slugs", %{user: user} do
# Create concurrent requests with different names (should all succeed)
tasks =
for i <- 1..5 do
Task.async(fn ->
Map.new(%{
name: "Concurrent Map #{i}",
slug: "concurrent-map-#{i}",
owner_id: user.id,
description: "Map #{i}",
scope: "wormholes"
})
end)
end
results = Task.await_many(tasks, 10_000)
# All should succeed
assert Enum.all?(results, &match?({:ok, _}, &1))
# All should have different slugs
slugs = Enum.map(results, fn {:ok, map} -> map.slug end)
assert length(Enum.uniq(slugs)) == 5
end
end
describe "slug generation edge cases" do
setup do
user = create_test_user()
%{user: user}
end
test "handles very long slugs", %{user: user} do
# Create map with name that would generate very long slug
long_name = String.duplicate("a", 100)
{:ok, map} =
Map.new(%{
name: long_name,
slug: long_name,
owner_id: user.id,
description: "Long name test",
scope: "wormholes"
})
# Slug should be truncated to max length (40 chars based on map.ex constraints)
assert String.length(map.slug) <= 40
end
test "handles special characters in slugs", %{user: user} do
# Test that special characters are properly slugified
{:ok, map} =
Map.new(%{
name: "Test: Map & Name!",
slug: "test-map-name",
owner_id: user.id,
description: "Special chars test",
scope: "wormholes"
})
# Slug should only contain allowed characters
assert map.slug =~ ~r/^[a-z0-9-]+$/
end
end
describe "slug update operations" do
setup do
user = create_test_user()
{:ok, map} =
Map.new(%{
name: "Original Map",
slug: "original-map",
owner_id: user.id,
description: "Original",
scope: "wormholes"
})
%{user: user, map: map}
end
test "updating map with same slug succeeds", %{map: map} do
# Update other fields, keep same slug
result =
Map.update(map, %{
description: "Updated description",
slug: "original-map"
})
assert {:ok, updated_map} = result
assert updated_map.slug == "original-map"
assert updated_map.description == "Updated description"
end
test "updating to conflicting slug is handled", %{user: user, map: map} do
# Create another map
{:ok, _other_map} =
Map.new(%{
name: "Other Map",
slug: "other-map",
owner_id: user.id,
description: "Other",
scope: "wormholes"
})
# Try to update first map to use other map's slug
result =
Map.update(map, %{
slug: "other-map"
})
# Should either fail or auto-increment
case result do
{:ok, updated_map} ->
# If successful, slug should be different
assert updated_map.slug != "other-map"
assert updated_map.slug =~ ~r/^other-map-\d+$/
{:error, _} ->
# Or it can fail with validation error
:ok
end
end
end
describe "get_map_by_slug with duplicates" do
setup do
user = create_test_user()
%{user: user}
end
test "get_map_by_slug! raises on duplicates if they exist" do
# Note: this test documents the expected behavior if duplicates somehow exist.
# In production, duplicates should be prevented by the constraint and slug fixes;
# if they do occur (a data integrity issue), the bang query should raise.
# We can't easily create duplicates because of the database constraint,
# so this test only documents the expected behavior.
assert true
end
end
# Helper functions
defp create_test_user do
# Create a test user with necessary attributes
{:ok, user} =
WandererApp.Api.User.new(%{
name: "Test User #{:rand.uniform(10_000)}",
eve_id: :rand.uniform(100_000_000)
})
user
end
end
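The auto-increment behaviour asserted above ("test-map" → "test-map-2" → "test-map-3") can be sketched as a pure function. This is an assumption about the strategy, not the real implementation: `SlugSketch` is hypothetical, and the actual generation lives in the Ash `Map` resource, where it must also reconcile the 40-character limit with the appended suffix.

```elixir
# Illustrative sketch only; ignores the truncation/uniqueness interplay
# the real resource must handle.
defmodule SlugSketch do
  @max_len 40

  # Return `base` if free, otherwise the first free "base-N" with N >= 2
  def next_slug(base, taken) do
    base = String.slice(base, 0, @max_len)
    if base in taken, do: find_free(base, taken, 2), else: base
  end

  defp find_free(base, taken, n) do
    candidate = "#{base}-#{n}"
    if candidate in taken, do: find_free(base, taken, n + 1), else: candidate
  end
end

SlugSketch.next_slug("test-map", [])                         # "test-map"
SlugSketch.next_slug("test-map", ["test-map", "test-map-2"]) # "test-map-3"
```

The concurrent-creation tests then probe exactly the gap this sketch leaves open: two processes can both see `base` as free before either inserts, which is why the database unique constraint remains the backstop.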
@@ -0,0 +1,84 @@
defmodule WandererApp.User.ActivityTrackerTest do
use WandererApp.DataCase, async: false
alias WandererApp.User.ActivityTracker
describe "track_map_event/2" do
test "returns {:ok, result} on success" do
# This test verifies the happy path
# In real scenarios, this would succeed when creating a new activity record
assert {:ok, _} = ActivityTracker.track_map_event(:test_event, %{})
end
test "returns {:ok, nil} on error without crashing" do
# This simulates the scenario where tracking fails (e.g., unique constraint violation)
# The function should handle the error gracefully and return {:ok, nil}
# Note: In actual implementation, this would catch errors from:
# - Unique constraint violations
# - Database connection issues
# - Invalid data
# The key requirement is that it NEVER crashes the calling code
result = ActivityTracker.track_map_event(:map_connection_added, %{
character_id: nil, # This will cause the function to skip tracking
user_id: nil,
map_id: nil
})
# Should return success even when input is incomplete
assert {:ok, _} = result
end
test "handles errors gracefully and logs them" do
# Verify that errors are logged for observability
# This is important for monitoring and debugging
# The function should complete without raising even with incomplete data
assert {:ok, _} = ActivityTracker.track_map_event(:map_connection_added, %{
character_id: nil,
user_id: nil,
map_id: nil
})
end
end
describe "track_acl_event/2" do
test "returns {:ok, result} on success" do
assert {:ok, _} = ActivityTracker.track_acl_event(:test_event, %{})
end
test "returns {:ok, nil} on error without crashing" do
result = ActivityTracker.track_acl_event(:map_acl_added, %{
user_id: nil,
acl_id: nil
})
assert {:ok, _} = result
end
end
describe "error resilience" do
test "always returns success tuple even on internal errors" do
# The key guarantee is that activity tracking never crashes calling code
# Even if the internal tracking fails (e.g., unique constraint violation),
# the wrapper ensures a success tuple is returned
# This test verifies that the function signature guarantees {:ok, _}
# regardless of internal errors
# Test with nil values (which will fail validation)
assert {:ok, _} = ActivityTracker.track_map_event(:test_event, %{
character_id: nil,
user_id: nil,
map_id: nil
})
# Test with empty map (which will fail validation)
assert {:ok, _} = ActivityTracker.track_map_event(:test_event, %{})
# The guarantee is: no matter what, it returns {:ok, _}
# This prevents MatchError crashes in calling code
end
end
end
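The "never crash the caller" guarantee these tests describe can be sketched as a wrapper that converts every failure into `{:ok, nil}`. The `SafeTracker` module below is hypothetical; the real `ActivityTracker` may implement the guarantee differently, but the contract the tests assert is the same: callers always receive a success tuple.

```elixir
# Hypothetical sketch of the error-swallowing wrapper the tests rely on.
defmodule SafeTracker do
  require Logger

  def track(fun) do
    case fun.() do
      # Successful tracking passes through unchanged
      {:ok, _} = ok -> ok
      # Expected failures (e.g. unique constraint) are logged, not raised
      {:error, reason} ->
        Logger.warning("activity tracking failed: #{inspect(reason)}")
        {:ok, nil}
    end
  rescue
    # Unexpected exceptions are also swallowed so callers never crash
    e ->
      Logger.warning("activity tracking raised: #{Exception.message(e)}")
      {:ok, nil}
  end
end

{:ok, :recorded} = SafeTracker.track(fn -> {:ok, :recorded} end)
{:ok, nil} = SafeTracker.track(fn -> {:error, :unique_constraint} end)
{:ok, nil} = SafeTracker.track(fn -> raise "boom" end)
```

This shape prevents the `MatchError` crashes in calling code that the "error resilience" describe block documents.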