Compare commits

...

64 Commits

Author SHA1 Message Date
CI
74359a5542 chore: release version v1.84.17 2025-11-14 13:43:40 +00:00
Dmitry Popov
0020f46dd8 fix(core): fixed activity tracking issues 2025-11-14 14:42:44 +01:00
CI
a6751b45c6 chore: [skip ci] 2025-11-13 16:20:24 +00:00
CI
f48aeb5cec chore: release version v1.84.16 2025-11-13 16:20:24 +00:00
Dmitry Popov
a5f25646c9 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-13 17:19:47 +01:00
Dmitry Popov
23cf1fd96f fix(core): removed maps auto-start logic 2025-11-13 17:19:44 +01:00
CI
6f15521069 chore: [skip ci] 2025-11-13 14:49:32 +00:00
CI
9d41e57c06 chore: release version v1.84.15 2025-11-13 14:49:32 +00:00
Dmitry Popov
ea9a22df09 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-13 15:49:01 +01:00
Dmitry Popov
0d4fd6f214 fix(core): fixed maps start/stop logic, added server downtime period support 2025-11-13 15:48:56 +01:00
CI
87a6c20545 chore: [skip ci] 2025-11-13 14:46:26 +00:00
CI
c375f4e4ce chore: release version v1.84.14 2025-11-13 14:46:26 +00:00
Dmitry Popov
843a6d7320 Merge pull request #543 from wanderer-industries/fix-error-on-remove-settings
fix(Map): Fixed problem related with error if settings was removed an…
2025-11-13 18:43:13 +04:00
DanSylvest
98c54a3413 fix(Map): Fixed problem related with error if settings was removed and mapper crashed. Fixed settings reset. 2025-11-13 12:53:40 +03:00
CI
0439110938 chore: [skip ci] 2025-11-13 07:52:33 +00:00
CI
8ce1e5fa3e chore: release version v1.84.13 2025-11-13 07:52:33 +00:00
Dmitry Popov
ebaf6bcdc6 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-13 08:52:00 +01:00
Dmitry Popov
40d947bebc chore: updated RELEASE_NODE for server defaults 2025-11-13 08:51:56 +01:00
CI
61d1c3848f chore: [skip ci] 2025-11-13 07:39:29 +00:00
CI
e152ce179f chore: release version v1.84.12 2025-11-13 07:39:29 +00:00
Dmitry Popov
7bbe387183 chore: reduce garbage collection interval 2025-11-13 08:38:52 +01:00
CI
b1555ff03c chore: [skip ci] 2025-11-12 18:53:48 +00:00
CI
e624499244 chore: release version v1.84.11 2025-11-12 18:53:48 +00:00
Dmitry Popov
6a1976dec6 Merge pull request #541 from guarzo/guarzo/apifun2
fix: api and doc updates
2025-11-12 22:53:17 +04:00
Guarzo
3db24c4344 fix: api and doc updates 2025-11-12 18:39:21 +00:00
CI
883c09f255 chore: [skip ci] 2025-11-12 17:28:54 +00:00
CI
ff24d80038 chore: release version v1.84.10 2025-11-12 17:28:54 +00:00
Dmitry Popov
63cbc9c0b9 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-12 18:28:20 +01:00
Dmitry Popov
8056972a27 fix(core): Fixed adding system on character dock 2025-11-12 18:28:16 +01:00
CI
1759d46740 chore: [skip ci] 2025-11-12 13:28:14 +00:00
CI
e4b7d2e45b chore: release version v1.84.9 2025-11-12 13:28:14 +00:00
Dmitry Popov
41573cbee3 Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-12 14:27:43 +01:00
Dmitry Popov
24ffc20bb8 chore: added ccp attribution to footer 2025-11-12 14:27:40 +01:00
CI
e077849b66 chore: [skip ci] 2025-11-12 12:42:09 +00:00
CI
375a9ef65b chore: release version v1.84.8 2025-11-12 12:42:08 +00:00
Dmitry Popov
9bf90ab752 fix(core): added cleanup jobs for old system signatures & chain passages 2025-11-12 13:41:33 +01:00
CI
90c3481151 chore: [skip ci] 2025-11-12 10:57:58 +00:00
CI
e36b08a7e5 chore: release version v1.84.7 2025-11-12 10:57:58 +00:00
Dmitry Popov
e1f79170c3 Merge pull request #540 from guarzo/guarzo/apifun
fix: api and search fixes
2025-11-12 14:54:33 +04:00
Guarzo
68b5455e91 bug fix 2025-11-12 07:25:49 +00:00
Guarzo
f28e75c7f4 pr updates 2025-11-12 07:16:21 +00:00
Guarzo
6091adb28e fix: api and structure search fixes 2025-11-12 07:07:39 +00:00
CI
d4657b335f chore: [skip ci] 2025-11-12 00:13:07 +00:00
CI
7fee850902 chore: release version v1.84.6 2025-11-12 00:13:07 +00:00
Dmitry Popov
648c168a66 fix(core): Added map slug uniqness checking while using API 2025-11-12 01:12:13 +01:00
CI
f5c4b2c407 chore: [skip ci] 2025-11-11 12:52:39 +00:00
CI
b592223d52 chore: release version v1.84.5 2025-11-11 12:52:39 +00:00
Dmitry Popov
5cf118c6ee Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-11 13:52:11 +01:00
Dmitry Popov
b25013c652 fix(core): Added tracking for map & character event handling errors 2025-11-11 13:52:07 +01:00
CI
cf43861b11 chore: [skip ci] 2025-11-11 12:27:54 +00:00
CI
b5fe8f8878 chore: release version v1.84.4 2025-11-11 12:27:54 +00:00
Dmitry Popov
5e5068c7de fix(core): fixed issue with updating system signatures 2025-11-11 13:27:17 +01:00
CI
624b51edfb chore: [skip ci] 2025-11-11 09:52:29 +00:00
CI
a72f8e60c4 chore: release version v1.84.3 2025-11-11 09:52:29 +00:00
Dmitry Popov
dec8ae50c9 Merge branch 'develop' 2025-11-11 10:51:55 +01:00
Dmitry Popov
0332d36a8e fix(core): fixed linked signature time status update
2025-11-11 10:51:43 +01:00
CI
8444c7f82d chore: [skip ci] 2025-11-10 16:57:53 +00:00
CI
ec3fc7447e chore: release version v1.84.2 2025-11-10 16:57:53 +00:00
Dmitry Popov
20ec2800c9 Merge pull request #538 from wanderer-industries/develop
Develop
2025-11-10 20:56:53 +04:00
Dmitry Popov
6fbf43e860 fix(api): fixed api for get/update map systems
2025-11-10 17:23:44 +01:00
Dmitry Popov
697da38020 Merge pull request #537 from guarzo/guarzo/apisystemperf
fix: add indexes for map/system
2025-11-09 01:48:01 +04:00
Guarzo
4bc65b43d2 fix: add index for map/systems api 2025-11-08 14:30:19 +00:00
Dmitry Popov
910ec97fd1 chore: refactored map server processes
2025-11-06 09:23:19 +01:00
Dmitry Popov
40ed58ee8c Merge pull request #536 from wanderer-industries/refactor-map-servers
Refactor map servers
2025-11-06 03:03:57 +04:00
61 changed files with 3212 additions and 866 deletions

View File

@@ -1,5 +1,7 @@
export WEB_APP_URL="http://localhost:8000"
export RELEASE_COOKIE="PDpbnyo6mEI_0T4ZsHH_ESmi1vT1toQ8PTc0vbfg5FIT4Ih-Lh98mw=="
# Erlang node name for distributed Erlang (optional - defaults to wanderer@hostname)
# export RELEASE_NODE="wanderer@localhost"
export EVE_CLIENT_ID="<EVE_CLIENT_ID>"
export EVE_CLIENT_SECRET="<EVE_CLIENT_SECRET>"
export EVE_CLIENT_WITH_WALLET_ID="<EVE_CLIENT_WITH_WALLET_ID>"

View File

@@ -2,6 +2,140 @@
<!-- changelog -->
## [v1.84.17](https://github.com/wanderer-industries/wanderer/compare/v1.84.16...v1.84.17) (2025-11-14)
### Bug Fixes:
* core: fixed activity tracking issues
## [v1.84.16](https://github.com/wanderer-industries/wanderer/compare/v1.84.15...v1.84.16) (2025-11-13)
### Bug Fixes:
* core: removed maps auto-start logic
## [v1.84.15](https://github.com/wanderer-industries/wanderer/compare/v1.84.14...v1.84.15) (2025-11-13)
### Bug Fixes:
* core: fixed maps start/stop logic, added server downtime period support
## [v1.84.14](https://github.com/wanderer-industries/wanderer/compare/v1.84.13...v1.84.14) (2025-11-13)
### Bug Fixes:
* Map: Fixed problem related with error if settings was removed and mapper crashed. Fixed settings reset.
## [v1.84.13](https://github.com/wanderer-industries/wanderer/compare/v1.84.12...v1.84.13) (2025-11-13)
## [v1.84.12](https://github.com/wanderer-industries/wanderer/compare/v1.84.11...v1.84.12) (2025-11-13)
## [v1.84.11](https://github.com/wanderer-industries/wanderer/compare/v1.84.10...v1.84.11) (2025-11-12)
### Bug Fixes:
* api and doc updates
## [v1.84.10](https://github.com/wanderer-industries/wanderer/compare/v1.84.9...v1.84.10) (2025-11-12)
### Bug Fixes:
* core: Fixed adding system on character dock
## [v1.84.9](https://github.com/wanderer-industries/wanderer/compare/v1.84.8...v1.84.9) (2025-11-12)
## [v1.84.8](https://github.com/wanderer-industries/wanderer/compare/v1.84.7...v1.84.8) (2025-11-12)
### Bug Fixes:
* core: added cleanup jobs for old system signatures & chain passages
## [v1.84.7](https://github.com/wanderer-industries/wanderer/compare/v1.84.6...v1.84.7) (2025-11-12)
### Bug Fixes:
* api and structure search fixes
## [v1.84.6](https://github.com/wanderer-industries/wanderer/compare/v1.84.5...v1.84.6) (2025-11-12)
### Bug Fixes:
* core: Added map slug uniqness checking while using API
## [v1.84.5](https://github.com/wanderer-industries/wanderer/compare/v1.84.4...v1.84.5) (2025-11-11)
### Bug Fixes:
* core: Added tracking for map & character event handling errors
## [v1.84.4](https://github.com/wanderer-industries/wanderer/compare/v1.84.3...v1.84.4) (2025-11-11)
### Bug Fixes:
* core: fixed issue with updating system signatures
## [v1.84.3](https://github.com/wanderer-industries/wanderer/compare/v1.84.2...v1.84.3) (2025-11-11)
### Bug Fixes:
* core: fixed linked signature time status update
## [v1.84.2](https://github.com/wanderer-industries/wanderer/compare/v1.84.1...v1.84.2) (2025-11-10)
### Bug Fixes:
* api: fixed api for get/update map systems
* add index for map/systems api
## [v1.84.1](https://github.com/wanderer-industries/wanderer/compare/v1.84.0...v1.84.1) (2025-11-01)

View File

@@ -4,10 +4,13 @@ import { DEFAULT_WIDGETS } from '@/hooks/Mapper/components/mapInterface/constant
import { useMapRootState } from '@/hooks/Mapper/mapRootProvider';
export const MapInterface = () => {
// const [items, setItems] = useState<WindowProps[]>(restoreWindowsFromLS);
const { windowsSettings, updateWidgetSettings } = useMapRootState();
const items = useMemo(() => {
if (Object.keys(windowsSettings).length === 0) {
return [];
}
return windowsSettings.windows
.map(x => {
const content = DEFAULT_WIDGETS.find(y => y.id === x.id)?.content;

View File

@@ -30,9 +30,6 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
const { outCommand } = useMapRootState();
const [prevQuery, setPrevQuery] = useState('');
const [prevResults, setPrevResults] = useState<{ label: string; value: string }[]>([]);
useEffect(() => {
if (structure) {
setEditData(structure);
@@ -46,34 +43,24 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
// Searching corporation owners via auto-complete
const searchOwners = useCallback(
async (e: { query: string }) => {
const newQuery = e.query.trim();
if (!newQuery) {
const query = e.query.trim();
if (!query) {
setOwnerSuggestions([]);
return;
}
// If user typed more text but we have partial match in prevResults
if (newQuery.startsWith(prevQuery) && prevResults.length > 0) {
const filtered = prevResults.filter(item => item.label.toLowerCase().includes(newQuery.toLowerCase()));
setOwnerSuggestions(filtered);
return;
}
try {
// TODO fix it
const { results = [] } = await outCommand({
type: OutCommand.getCorporationNames,
data: { search: newQuery },
data: { search: query },
});
setOwnerSuggestions(results);
setPrevQuery(newQuery);
setPrevResults(results);
} catch (err) {
console.error('Failed to fetch owners:', err);
setOwnerSuggestions([]);
}
},
[prevQuery, prevResults, outCommand],
[outCommand],
);
const handleChange = (field: keyof StructureItem, val: string | Date) => {
@@ -122,7 +109,6 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
// fetch corporation ticker if we have an ownerId
if (editData.ownerId) {
try {
// TODO fix it
const { ticker } = await outCommand({
type: OutCommand.getCorporationTicker,
data: { corp_id: editData.ownerId },

View File

@@ -10,9 +10,14 @@ import { useCallback } from 'react';
import { TooltipPosition, WdButton, WdTooltipWrapper } from '@/hooks/Mapper/components/ui-kit';
import { ConfirmPopup } from 'primereact/confirmpopup';
import { useConfirmPopup } from '@/hooks/Mapper/hooks';
import { useMapRootState } from '@/hooks/Mapper/mapRootProvider';
export const CommonSettings = () => {
const { renderSettingItem } = useMapSettings();
const {
storedSettings: { resetSettings },
} = useMapRootState();
const { cfShow, cfHide, cfVisible, cfRef } = useConfirmPopup();
const renderSettingsList = useCallback(
@@ -22,7 +27,7 @@ export const CommonSettings = () => {
[renderSettingItem],
);
const handleResetSettings = () => {};
const handleResetSettings = useCallback(() => resetSettings(), [resetSettings]);
return (
<div className="flex flex-col h-full gap-1">

View File

@@ -6,9 +6,11 @@ import {
MapUnionTypes,
OutCommandHandler,
SolarSystemConnection,
StringBoolean,
TrackingCharacter,
UseCharactersCacheData,
UseCommentsData,
UserPermission,
} from '@/hooks/Mapper/types';
import { useCharactersCache, useComments, useMapRootHandlers } from '@/hooks/Mapper/mapRootProvider/hooks';
import { WithChildren } from '@/hooks/Mapper/types/common.ts';
@@ -80,7 +82,16 @@ const INITIAL_DATA: MapRootData = {
selectedSystems: [],
selectedConnections: [],
userPermissions: {},
options: {},
options: {
allowed_copy_for: UserPermission.VIEW_SYSTEM,
allowed_paste_for: UserPermission.VIEW_SYSTEM,
layout: '',
restrict_offline_showing: 'false',
show_linked_signature_id: 'false',
show_linked_signature_id_temp_name: 'false',
show_temp_system_name: 'false',
store_custom_labels: 'false',
},
isSubscriptionActive: false,
linkSignatureToSystem: null,
mainCharacterEveId: null,
@@ -135,7 +146,7 @@ export interface MapRootContextProps {
hasOldSettings: boolean;
getSettingsForExport(): string | undefined;
applySettings(settings: MapUserSettings): boolean;
resetSettings(settings: MapUserSettings): void;
resetSettings(): void;
checkOldSettings(): void;
};
}

View File

@@ -148,10 +148,6 @@ export const useMapUserSettings = ({ map_slug }: MapRootData, outCommand: OutCom
setHasOldSettings(!!(widgetsOld || interfaceSettings || widgetRoutes || widgetLocal || widgetKills || onTheMapOld));
}, []);
useEffect(() => {
checkOldSettings();
}, [checkOldSettings]);
const getSettingsForExport = useCallback(() => {
const { map_slug } = ref.current;
@@ -166,6 +162,24 @@ export const useMapUserSettings = ({ map_slug }: MapRootData, outCommand: OutCom
applySettings(createDefaultStoredSettings());
}, [applySettings]);
useEffect(() => {
checkOldSettings();
}, [checkOldSettings]);
// IN Case if in runtime someone clear settings
useEffect(() => {
if (Object.keys(windowsSettings).length !== 0) {
return;
}
if (!isReady) {
return;
}
resetSettings();
location.reload();
}, [isReady, resetSettings, windowsSettings]);
return {
isReady,
hasOldSettings,

Binary file not shown (binary image; size after change: 38 KiB).

View File

@@ -27,11 +27,7 @@ config :wanderer_app,
generators: [timestamp_type: :utc_datetime],
ddrt: WandererApp.Map.CacheRTree,
logger: Logger,
pubsub_client: Phoenix.PubSub,
wanderer_kills_base_url:
System.get_env("WANDERER_KILLS_BASE_URL", "ws://host.docker.internal:4004"),
wanderer_kills_service_enabled:
System.get_env("WANDERER_KILLS_SERVICE_ENABLED", "false") == "true"
pubsub_client: Phoenix.PubSub
config :wanderer_app, WandererAppWeb.Endpoint,
adapter: Bandit.PhoenixAdapter,

View File

@@ -4,7 +4,7 @@ import Config
config :wanderer_app, WandererApp.Repo,
username: "postgres",
password: "postgres",
hostname: System.get_env("DB_HOST", "localhost"),
hostname: "localhost",
database: "wanderer_dev",
stacktrace: true,
show_sensitive_data_on_connection_error: true,

View File

@@ -258,7 +258,9 @@ config :wanderer_app, WandererApp.Scheduler,
timezone: :utc,
jobs:
[
{"@daily", {WandererApp.Map.Audit, :archive, []}}
{"@daily", {WandererApp.Map.Audit, :archive, []}},
{"@daily", {WandererApp.Map.GarbageCollector, :cleanup_chain_passages, []}},
{"@daily", {WandererApp.Map.GarbageCollector, :cleanup_system_signatures, []}}
] ++ sheduler_jobs,
timeout: :infinity

View File

@@ -2,6 +2,7 @@ defmodule WandererApp.Api.Changes.SlugifyName do
use Ash.Resource.Change
alias Ash.Changeset
require Ash.Query
@impl true
@spec change(Changeset.t(), keyword, Change.context()) :: Changeset.t()
@@ -12,10 +13,56 @@ defmodule WandererApp.Api.Changes.SlugifyName do
defp maybe_slugify_name(changeset) do
case Changeset.get_attribute(changeset, :slug) do
slug when is_binary(slug) ->
Changeset.force_change_attribute(changeset, :slug, Slug.slugify(slug))
base_slug = Slug.slugify(slug)
unique_slug = ensure_unique_slug(changeset, base_slug)
Changeset.force_change_attribute(changeset, :slug, unique_slug)
_ ->
changeset
end
end
defp ensure_unique_slug(changeset, base_slug) do
# Get the current record ID if this is an update operation
current_id = Changeset.get_attribute(changeset, :id)
# Check if the base slug is available
if slug_available?(base_slug, current_id) do
base_slug
else
# Find the next available slug with a numeric suffix
find_available_slug(base_slug, current_id, 2)
end
end
defp find_available_slug(base_slug, current_id, n) do
candidate_slug = "#{base_slug}-#{n}"
if slug_available?(candidate_slug, current_id) do
candidate_slug
else
find_available_slug(base_slug, current_id, n + 1)
end
end
defp slug_available?(slug, current_id) do
query =
WandererApp.Api.Map
|> Ash.Query.filter(slug == ^slug)
|> then(fn query ->
# Exclude the current record if this is an update
if current_id do
Ash.Query.filter(query, id != ^current_id)
else
query
end
end)
|> Ash.Query.limit(1)
case Ash.read(query) do
{:ok, []} -> true
{:ok, _} -> false
{:error, _} -> false
end
end
end
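The hunk above makes API-supplied slugs unique by probing numeric suffixes against existing Map records. A minimal standalone sketch of the same probing strategy, with an in-memory MapSet standing in for the Ash query (the module and function names below are illustrative, not part of the codebase):

defmodule SlugProbe do
  @moduledoc false

  # Returns `base` when it is free, otherwise the first "base-N" (N >= 2) not in `taken`.
  def ensure_unique(base, %MapSet{} = taken) when is_binary(base) do
    if MapSet.member?(taken, base), do: probe(base, taken, 2), else: base
  end

  defp probe(base, taken, n) do
    candidate = "#{base}-#{n}"
    if MapSet.member?(taken, candidate), do: probe(base, taken, n + 1), else: candidate
  end
end

# Example:
#   taken = MapSet.new(["home-chain", "home-chain-2"])
#   SlugProbe.ensure_unique("home-chain", taken)  #=> "home-chain-3"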

View File

@@ -37,7 +37,7 @@ defmodule WandererApp.Api.Map do
delete(:destroy)
# Custom action for map duplication
post(:duplicate, route: "/:id/duplicate")
# post(:duplicate, route: "/:id/duplicate")
end
end

View File

@@ -9,6 +9,11 @@ defmodule WandererApp.Api.MapConnection do
postgres do
repo(WandererApp.Repo)
table("map_chain_v1")
custom_indexes do
# Critical index for list_connections query performance
index [:map_id], name: "map_chain_v1_map_id_index"
end
end
json_api do

View File

@@ -65,7 +65,7 @@ defmodule WandererApp.Api.MapSubscription do
defaults [:create, :read, :update, :destroy]
read :all_active do
prepare build(sort: [updated_at: :asc])
prepare build(sort: [updated_at: :asc], load: [:map])
filter(expr(status == :active))
end

View File

@@ -1,6 +1,26 @@
defmodule WandererApp.Api.MapSystem do
@moduledoc false
@derive {Jason.Encoder,
only: [
:id,
:map_id,
:name,
:solar_system_id,
:position_x,
:position_y,
:status,
:visible,
:locked,
:custom_name,
:description,
:tag,
:temporary_name,
:labels,
:added_at,
:linked_sig_eve_id
]}
use Ash.Resource,
domain: WandererApp.Api,
data_layer: AshPostgres.DataLayer,
@@ -9,6 +29,11 @@ defmodule WandererApp.Api.MapSystem do
postgres do
repo(WandererApp.Repo)
table("map_system_v1")
custom_indexes do
# Partial index for efficient visible systems query
index [:map_id], where: "visible = true", name: "map_system_v1_map_id_visible_index"
end
end
json_api do
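The @derive above tells Jason to serialize only the whitelisted MapSystem fields. A minimal sketch of the same pattern on a plain struct, assuming the jason dependency (the struct and its fields here are illustrative):

defmodule DemoSystem do
  # Only the listed keys appear in the JSON output; :notes is never serialized.
  @derive {Jason.Encoder, only: [:id, :name, :visible]}
  defstruct [:id, :name, :visible, :notes]
end

# Jason.encode!(%DemoSystem{id: 1, name: "J123456", visible: true, notes: "kept out"})
# #=> ~s({"id":1,"name":"J123456","visible":true})   (key order may differ)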

View File

@@ -8,7 +8,7 @@ defmodule WandererApp.Character.TrackerPool do
:tracked_ids,
:uuid,
:characters,
server_online: true
server_online: false
]
@name __MODULE__
@@ -180,6 +180,8 @@ defmodule WandererApp.Character.TrackerPool do
[Tracker Pool] update_online => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
ErrorTracker.report(e, __STACKTRACE__)
end
{:noreply, state}

View File

@@ -12,7 +12,7 @@ defmodule WandererApp.Character.TransactionsTracker.Impl do
total_balance: 0,
transactions: [],
retries: 5,
server_online: true,
server_online: false,
status: :started
]
@@ -75,7 +75,7 @@ defmodule WandererApp.Character.TransactionsTracker.Impl do
def handle_event(
:update_corp_wallets,
%{character: character} = state
%{character: character, server_online: true} = state
) do
Process.send_after(self(), :update_corp_wallets, @update_interval)
@@ -88,26 +88,26 @@ defmodule WandererApp.Character.TransactionsTracker.Impl do
:update_corp_wallets,
state
) do
Process.send_after(self(), :update_corp_wallets, :timer.seconds(15))
Process.send_after(self(), :update_corp_wallets, @update_interval)
state
end
def handle_event(
:check_wallets,
%{wallets: []} = state
%{character: character, wallets: wallets, server_online: true} = state
) do
Process.send_after(self(), :check_wallets, :timer.seconds(5))
Process.send_after(self(), :check_wallets, @update_interval)
state
end
def handle_event(
:check_wallets,
%{character: character, wallets: wallets} = state
) do
check_wallets(wallets, character)
state
end
def handle_event(
:check_wallets,
state
) do
Process.send_after(self(), :check_wallets, @update_interval)
state

View File

@@ -212,6 +212,7 @@ defmodule WandererApp.ExternalEvents.JsonApiFormatter do
"time_status" => payload["time_status"] || payload[:time_status],
"mass_status" => payload["mass_status"] || payload[:mass_status],
"ship_size_type" => payload["ship_size_type"] || payload[:ship_size_type],
"locked" => payload["locked"] || payload[:locked],
"updated_at" => event.timestamp
},
"relationships" => %{

View File

@@ -532,15 +532,16 @@ defmodule WandererApp.Map do
solar_system_source,
solar_system_target
) do
case map_id
|> get_map!()
|> Map.get(:connections, Map.new())
connections =
map_id
|> get_map!()
|> Map.get(:connections, Map.new())
case connections
|> Map.get("#{solar_system_source}_#{solar_system_target}") do
nil ->
{:ok,
map_id
|> get_map!()
|> Map.get(:connections, Map.new())
connections
|> Map.get("#{solar_system_target}_#{solar_system_source}")}
connection ->

View File

@@ -0,0 +1,38 @@
defmodule WandererApp.Map.GarbageCollector do
@moduledoc """
Manager map subscription plans
"""
require Logger
require Ash.Query
@logger Application.compile_env(:wanderer_app, :logger)
@one_week_seconds 7 * 24 * 60 * 60
@two_weeks_seconds 14 * 24 * 60 * 60
def cleanup_chain_passages() do
Logger.info("Start cleanup old map chain passages...")
WandererApp.Api.MapChainPassages
|> Ash.Query.filter(updated_at: [less_than: get_cutoff_time(@one_week_seconds)])
|> Ash.bulk_destroy!(:destroy, %{}, batch_size: 100)
@logger.info(fn -> "All map chain passages processed" end)
:ok
end
def cleanup_system_signatures() do
Logger.info("Start cleanup old map system signatures...")
WandererApp.Api.MapSystemSignature
|> Ash.Query.filter(updated_at: [less_than: get_cutoff_time(@two_weeks_seconds)])
|> Ash.bulk_destroy!(:destroy, %{}, batch_size: 100)
@logger.info(fn -> "All map system signatures processed" end)
:ok
end
defp get_cutoff_time(seconds), do: DateTime.utc_now() |> DateTime.add(-seconds, :second)
end
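These two functions are what the "@daily" jobs added to the scheduler config earlier in this diff invoke; they can also be run by hand. A small sketch of the cutoff arithmetic plus a manual call (assuming an attached iex session):

# Cutoff used by cleanup_chain_passages/0: passages not updated in the last 7 days.
one_week_seconds = 7 * 24 * 60 * 60
cutoff = DateTime.add(DateTime.utc_now(), -one_week_seconds, :second)
IO.inspect(cutoff, label: "chain passage cutoff")

# Manual run (same functions the "@daily" scheduler entries call):
# WandererApp.Map.GarbageCollector.cleanup_chain_passages()
# WandererApp.Map.GarbageCollector.cleanup_system_signatures()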

View File

@@ -9,8 +9,8 @@ defmodule WandererApp.Map.Manager do
alias WandererApp.Map.Server
@maps_start_per_second 10
@maps_start_interval 1000
@maps_start_chunk_size 20
@maps_start_interval 500
@maps_queue :maps_queue
@check_maps_queue_interval :timer.seconds(1)
@@ -58,9 +58,9 @@ defmodule WandererApp.Map.Manager do
{:ok, pings_cleanup_timer} =
:timer.send_interval(@pings_cleanup_interval, :cleanup_pings)
safe_async_task(fn ->
start_last_active_maps()
end)
# safe_async_task(fn ->
# start_last_active_maps()
# end)
{:ok,
%{
@@ -153,7 +153,7 @@ defmodule WandererApp.Map.Manager do
@maps_queue
|> WandererApp.Queue.to_list!()
|> Enum.uniq()
|> Enum.chunk_every(@maps_start_per_second)
|> Enum.chunk_every(@maps_start_chunk_size)
WandererApp.Queue.clear(@maps_queue)

View File

@@ -15,8 +15,9 @@ defmodule WandererApp.Map.MapPool do
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@map_pool_limit 20
@garbage_collection_interval :timer.hours(12)
@garbage_collection_interval :timer.hours(4)
@systems_cleanup_timeout :timer.minutes(30)
@characters_cleanup_timeout :timer.minutes(5)
@connections_cleanup_timeout :timer.minutes(5)
@@ -64,11 +65,21 @@ defmodule WandererApp.Map.MapPool do
def handle_continue({:start, map_ids}, state) do
Logger.info("#{@name} started")
map_ids
|> Enum.each(fn map_id ->
GenServer.cast(self(), {:start_map, map_id})
end)
# Start maps synchronously and accumulate state changes
new_state =
map_ids
|> Enum.reduce(state, fn map_id, current_state ->
case do_start_map(map_id, current_state) do
{:ok, updated_state} ->
updated_state
{:error, reason} ->
Logger.error("[Map Pool] Failed to start map #{map_id}: #{reason}")
current_state
end
end)
# Schedule periodic tasks
Process.send_after(self(), :backup_state, @backup_state_timeout)
Process.send_after(self(), :cleanup_systems, 15_000)
Process.send_after(self(), :cleanup_characters, @characters_cleanup_timeout)
@@ -77,46 +88,247 @@ defmodule WandererApp.Map.MapPool do
# Start message queue monitoring
Process.send_after(self(), :monitor_message_queue, :timer.seconds(30))
{:noreply, state}
{:noreply, new_state}
end
@impl true
def handle_cast(:stop, state), do: {:stop, :normal, state}
@impl true
def handle_cast({:start_map, map_id}, %{map_ids: map_ids, uuid: uuid} = state) do
if map_id not in map_ids do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
[map_id | r_map_ids]
def handle_call({:start_map, map_id}, _from, %{map_ids: map_ids, uuid: uuid} = state) do
# Enforce capacity limit to prevent pool overload due to race conditions
if length(map_ids) >= @map_pool_limit do
Logger.warning(
"[Map Pool #{uuid}] Pool at capacity (#{length(map_ids)}/#{@map_pool_limit}), " <>
"rejecting map #{map_id} and triggering new pool creation"
)
# Trigger a new pool creation attempt asynchronously
# This allows the system to create a new pool for this map
spawn(fn ->
WandererApp.Map.MapPoolDynamicSupervisor.start_map(map_id)
end)
Cachex.put(@cache, map_id, uuid)
map_id
|> WandererApp.Map.get_map_state!()
|> Server.Impl.start_map()
{:noreply, %{state | map_ids: [map_id | map_ids]}}
{:reply, :ok, state}
else
{:noreply, state}
case do_start_map(map_id, state) do
{:ok, new_state} ->
{:reply, :ok, new_state}
{:error, _reason} ->
# Error already logged in do_start_map
{:reply, :ok, state}
end
end
end
@impl true
def handle_cast(
def handle_call(
{:stop_map, map_id},
%{map_ids: map_ids, uuid: uuid} = state
_from,
state
) do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
r_map_ids |> Enum.reject(fn id -> id == map_id end)
end)
case do_stop_map(map_id, state) do
{:ok, new_state} ->
{:reply, :ok, new_state}
Cachex.del(@cache, map_id)
{:error, reason} ->
{:reply, {:error, reason}, state}
end
end
map_id
|> Server.Impl.stop_map()
defp do_start_map(map_id, %{map_ids: map_ids, uuid: uuid} = state) do
if map_id in map_ids do
# Map already started
{:ok, state}
else
# Track what operations succeeded for potential rollback
completed_operations = []
{:noreply, %{state | map_ids: map_ids |> Enum.reject(fn id -> id == map_id end)}}
try do
# Step 1: Update Registry (most critical, do first)
registry_result =
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
[map_id | r_map_ids]
end)
completed_operations = [:registry | completed_operations]
case registry_result do
{new_value, _old_value} when is_list(new_value) ->
:ok
:error ->
raise "Failed to update registry for pool #{uuid}"
end
# Step 2: Add to cache
case Cachex.put(@cache, map_id, uuid) do
{:ok, _} ->
completed_operations = [:cache | completed_operations]
{:error, reason} ->
raise "Failed to add to cache: #{inspect(reason)}"
end
# Step 3: Start the map server
map_id
|> WandererApp.Map.get_map_state!()
|> Server.Impl.start_map()
completed_operations = [:map_server | completed_operations]
# Step 4: Update GenServer state (last, as this is in-memory and fast)
new_state = %{state | map_ids: [map_id | map_ids]}
Logger.debug("[Map Pool] Successfully started map #{map_id} in pool #{uuid}")
{:ok, new_state}
rescue
e ->
Logger.error("""
[Map Pool] Failed to start map #{map_id} (completed: #{inspect(completed_operations)}): #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
# Attempt rollback of completed operations
rollback_start_map_operations(map_id, uuid, completed_operations)
{:error, Exception.message(e)}
end
end
end
defp rollback_start_map_operations(map_id, uuid, completed_operations) do
Logger.warning("[Map Pool] Attempting to rollback start_map operations for #{map_id}")
# Rollback in reverse order
if :map_server in completed_operations do
Logger.debug("[Map Pool] Rollback: Stopping map server for #{map_id}")
try do
Server.Impl.stop_map(map_id)
rescue
e ->
Logger.error("[Map Pool] Rollback failed to stop map server: #{Exception.message(e)}")
end
end
if :cache in completed_operations do
Logger.debug("[Map Pool] Rollback: Removing #{map_id} from cache")
case Cachex.del(@cache, map_id) do
{:ok, _} ->
:ok
{:error, reason} ->
Logger.error("[Map Pool] Rollback failed for cache: #{inspect(reason)}")
end
end
if :registry in completed_operations do
Logger.debug("[Map Pool] Rollback: Removing #{map_id} from registry")
try do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
r_map_ids |> Enum.reject(fn id -> id == map_id end)
end)
rescue
e ->
Logger.error("[Map Pool] Rollback failed for registry: #{Exception.message(e)}")
end
end
end
defp do_stop_map(map_id, %{map_ids: map_ids, uuid: uuid} = state) do
# Track what operations succeeded for potential rollback
completed_operations = []
try do
# Step 1: Update Registry (most critical, do first)
registry_result =
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
r_map_ids |> Enum.reject(fn id -> id == map_id end)
end)
completed_operations = [:registry | completed_operations]
case registry_result do
{new_value, _old_value} when is_list(new_value) ->
:ok
:error ->
raise "Failed to update registry for pool #{uuid}"
end
# Step 2: Delete from cache
case Cachex.del(@cache, map_id) do
{:ok, _} ->
completed_operations = [:cache | completed_operations]
{:error, reason} ->
raise "Failed to delete from cache: #{inspect(reason)}"
end
# Step 3: Stop the map server (clean up all map resources)
map_id
|> Server.Impl.stop_map()
completed_operations = [:map_server | completed_operations]
# Step 4: Update GenServer state (last, as this is in-memory and fast)
new_state = %{state | map_ids: map_ids |> Enum.reject(fn id -> id == map_id end)}
Logger.debug("[Map Pool] Successfully stopped map #{map_id} from pool #{uuid}")
{:ok, new_state}
rescue
e ->
Logger.error("""
[Map Pool] Failed to stop map #{map_id} (completed: #{inspect(completed_operations)}): #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
# Attempt rollback of completed operations
rollback_stop_map_operations(map_id, uuid, completed_operations)
{:error, Exception.message(e)}
end
end
defp rollback_stop_map_operations(map_id, uuid, completed_operations) do
Logger.warning("[Map Pool] Attempting to rollback stop_map operations for #{map_id}")
# Rollback in reverse order
if :cache in completed_operations do
Logger.debug("[Map Pool] Rollback: Re-adding #{map_id} to cache")
case Cachex.put(@cache, map_id, uuid) do
{:ok, _} ->
:ok
{:error, reason} ->
Logger.error("[Map Pool] Rollback failed for cache: #{inspect(reason)}")
end
end
if :registry in completed_operations do
Logger.debug("[Map Pool] Rollback: Re-adding #{map_id} to registry")
try do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
if map_id in r_map_ids do
r_map_ids
else
[map_id | r_map_ids]
end
end)
rescue
e ->
Logger.error("[Map Pool] Rollback failed for registry: #{Exception.message(e)}")
end
end
# Note: We don't rollback map_server stop as Server.Impl.stop_map() is idempotent
# and the cleanup operations are safe to leave in a "stopped" state
end
@impl true
@@ -231,25 +443,38 @@ defmodule WandererApp.Map.MapPool do
Process.send_after(self(), :garbage_collect, @garbage_collection_interval)
try do
map_ids
|> Enum.each(fn map_id ->
# presence_character_ids =
# WandererApp.Cache.lookup!("map_#{map_id}:presence_character_ids", [])
# Process each map and accumulate state changes
new_state =
map_ids
|> Enum.reduce(state, fn map_id, current_state ->
presence_character_ids =
WandererApp.Cache.lookup!("map_#{map_id}:presence_character_ids", [])
# if presence_character_ids |> Enum.empty?() do
Logger.info(
"#{uuid}: No more characters present on: #{map_id}, shutting down map server..."
)
if presence_character_ids |> Enum.empty?() do
Logger.info(
"#{uuid}: No more characters present on: #{map_id}, shutting down map server..."
)
GenServer.cast(self(), {:stop_map, map_id})
# end
end)
case do_stop_map(map_id, current_state) do
{:ok, updated_state} ->
Logger.debug("#{uuid}: Successfully stopped map #{map_id}")
updated_state
{:error, reason} ->
Logger.error("#{uuid}: Failed to stop map #{map_id}: #{reason}")
current_state
end
else
current_state
end
end)
{:noreply, new_state}
rescue
e ->
Logger.error(Exception.message(e))
Logger.error("#{uuid}: Garbage collection error: #{Exception.message(e)}")
{:noreply, state}
end
{:noreply, state}
end
@impl true
@@ -310,7 +535,17 @@ defmodule WandererApp.Map.MapPool do
end
def handle_info(event, state) do
Server.Impl.handle_event(event)
try do
Server.Impl.handle_event(event)
rescue
e ->
Logger.error("""
[Map Pool] handle_info => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
ErrorTracker.report(e, __STACKTRACE__)
end
{:noreply, state}
end
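do_start_map/2 and do_stop_map/2 above follow a "record completed steps, roll back in reverse on failure" pattern across Registry, Cachex and the map server. A stripped-down, self-contained sketch of that pattern (the step and undo functions here are stand-ins, not the real Registry/Cachex calls):

defmodule StepRunner do
  @moduledoc false
  require Logger

  # Runs {name, do_fun, undo_fun} steps in order; on the first failure,
  # undoes the already-completed steps in reverse order.
  def run(steps) do
    Enum.reduce_while(steps, {:ok, []}, fn {name, do_fun, undo_fun}, {:ok, done} ->
      case safe(do_fun) do
        :ok ->
          {:cont, {:ok, [{name, undo_fun} | done]}}

        {:error, reason} ->
          Logger.error("step #{name} failed: #{inspect(reason)}; rolling back #{inspect(Enum.map(done, &elem(&1, 0)))}")
          Enum.each(done, fn {_name, undo} -> safe(undo) end)
          {:halt, {:error, reason}}
      end
    end)
  end

  defp safe(fun) do
    fun.()
  rescue
    e -> {:error, Exception.message(e)}
  end
end

# Example:
#   StepRunner.run([
#     {:registry, fn -> :ok end, fn -> :ok end},
#     {:cache, fn -> raise "cachex down" end, fn -> :ok end}
#   ])
#   #=> {:error, "cachex down"}   (and :registry's undo ran)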

View File

@@ -7,7 +7,7 @@ defmodule WandererApp.Map.MapPoolDynamicSupervisor do
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@map_pool_limit 10
@map_pool_limit 20
@name __MODULE__
@@ -30,23 +30,84 @@ defmodule WandererApp.Map.MapPoolDynamicSupervisor do
start_child([map_id], pools |> Enum.count())
pid ->
GenServer.cast(pid, {:start_map, map_id})
GenServer.call(pid, {:start_map, map_id})
end
end
end
def stop_map(map_id) do
{:ok, pool_uuid} = Cachex.get(@cache, map_id)
case Cachex.get(@cache, map_id) do
{:ok, nil} ->
# Cache miss - try to find the pool by scanning the registry
Logger.warning(
"Cache miss for map #{map_id}, scanning registry for pool containing this map"
)
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, pool_uuid)
) do
find_pool_by_scanning_registry(map_id)
{:ok, pool_uuid} ->
# Cache hit - use the pool_uuid to lookup the pool
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, pool_uuid)
) do
[] ->
Logger.warning(
"Pool with UUID #{pool_uuid} not found in registry for map #{map_id}, scanning registry"
)
find_pool_by_scanning_registry(map_id)
[{pool_pid, _}] ->
GenServer.call(pool_pid, {:stop_map, map_id})
end
{:error, reason} ->
Logger.error("Failed to lookup map #{map_id} in cache: #{inspect(reason)}")
:ok
end
end
defp find_pool_by_scanning_registry(map_id) do
case Registry.lookup(@registry, WandererApp.Map.MapPool) do
[] ->
Logger.debug("No map pools found in registry for map #{map_id}")
:ok
[{pool_pid, _}] ->
GenServer.cast(pool_pid, {:stop_map, map_id})
pools ->
# Scan all pools to find the one containing this map_id
found_pool =
Enum.find_value(pools, fn {_pid, uuid} ->
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, uuid)
) do
[{pool_pid, map_ids}] ->
if map_id in map_ids do
{pool_pid, uuid}
else
nil
end
_ ->
nil
end
end)
case found_pool do
{pool_pid, pool_uuid} ->
Logger.info(
"Found map #{map_id} in pool #{pool_uuid} via registry scan, updating cache"
)
# Update the cache to fix the inconsistency
Cachex.put(@cache, map_id, pool_uuid)
GenServer.call(pool_pid, {:stop_map, map_id})
nil ->
Logger.debug("Map #{map_id} not found in any pool registry")
:ok
end
end
end

View File

@@ -14,7 +14,8 @@ defmodule WandererApp.Map.MapPoolSupervisor do
children = [
{Registry, [keys: :unique, name: @unique_registry]},
{Registry, [keys: :duplicate, name: @registry]},
{WandererApp.Map.MapPoolDynamicSupervisor, []}
{WandererApp.Map.MapPoolDynamicSupervisor, []},
{WandererApp.Map.Reconciler, []}
]
Supervisor.init(children, strategy: :rest_for_one, max_restarts: 10)

View File

@@ -0,0 +1,280 @@
defmodule WandererApp.Map.Reconciler do
@moduledoc """
Periodically reconciles map state across different stores (Cache, Registry, GenServer state)
to detect and fix inconsistencies that may prevent map servers from restarting.
"""
use GenServer
require Logger
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@reconciliation_interval :timer.minutes(5)
def start_link(_opts) do
GenServer.start_link(__MODULE__, [], name: __MODULE__)
end
@impl true
def init(_opts) do
Logger.info("Starting Map Reconciler")
schedule_reconciliation()
{:ok, %{}}
end
@impl true
def handle_info(:reconcile, state) do
schedule_reconciliation()
try do
reconcile_state()
rescue
e ->
Logger.error("""
[Map Reconciler] reconciliation error: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
@doc """
Manually trigger a reconciliation (useful for testing or manual cleanup)
"""
def trigger_reconciliation do
GenServer.cast(__MODULE__, :reconcile_now)
end
@impl true
def handle_cast(:reconcile_now, state) do
try do
reconcile_state()
rescue
e ->
Logger.error("""
[Map Reconciler] manual reconciliation error: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
defp schedule_reconciliation do
Process.send_after(self(), :reconcile, @reconciliation_interval)
end
defp reconcile_state do
Logger.debug("[Map Reconciler] Starting state reconciliation")
# Get started_maps from cache
{:ok, started_maps} = WandererApp.Cache.lookup("started_maps", [])
# Get all maps from registries
registry_maps = get_all_registry_maps()
# Detect zombie maps (in started_maps but not in any registry)
zombie_maps = started_maps -- registry_maps
# Detect orphan maps (in registry but not in started_maps)
orphan_maps = registry_maps -- started_maps
# Detect cache inconsistencies (map_pool_cache pointing to wrong or non-existent pools)
cache_inconsistencies = find_cache_inconsistencies(registry_maps)
stats = %{
total_started_maps: length(started_maps),
total_registry_maps: length(registry_maps),
zombie_maps: length(zombie_maps),
orphan_maps: length(orphan_maps),
cache_inconsistencies: length(cache_inconsistencies)
}
Logger.info("[Map Reconciler] Reconciliation stats: #{inspect(stats)}")
# Emit telemetry
:telemetry.execute(
[:wanderer_app, :map, :reconciliation],
stats,
%{}
)
# Clean up zombie maps
cleanup_zombie_maps(zombie_maps)
# Fix orphan maps
fix_orphan_maps(orphan_maps)
# Fix cache inconsistencies
fix_cache_inconsistencies(cache_inconsistencies)
Logger.debug("[Map Reconciler] State reconciliation completed")
end
defp get_all_registry_maps do
case Registry.lookup(@registry, WandererApp.Map.MapPool) do
[] ->
[]
pools ->
pools
|> Enum.flat_map(fn {_pid, uuid} ->
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, uuid)
) do
[{_pool_pid, map_ids}] -> map_ids
_ -> []
end
end)
|> Enum.uniq()
end
end
defp find_cache_inconsistencies(registry_maps) do
registry_maps
|> Enum.filter(fn map_id ->
case Cachex.get(@cache, map_id) do
{:ok, nil} ->
# Map in registry but not in cache
true
{:ok, pool_uuid} ->
# Check if the pool_uuid actually exists in registry
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, pool_uuid)
) do
[] ->
# Cache points to non-existent pool
true
[{_pool_pid, map_ids}] ->
# Check if this map is actually in the pool's map_ids
map_id not in map_ids
_ ->
false
end
{:error, _} ->
true
end
end)
end
defp cleanup_zombie_maps([]), do: :ok
defp cleanup_zombie_maps(zombie_maps) do
Logger.warning("[Map Reconciler] Found #{length(zombie_maps)} zombie maps: #{inspect(zombie_maps)}")
Enum.each(zombie_maps, fn map_id ->
Logger.info("[Map Reconciler] Cleaning up zombie map: #{map_id}")
# Remove from started_maps cache
WandererApp.Cache.insert_or_update(
"started_maps",
[],
fn started_maps ->
started_maps |> Enum.reject(fn started_map_id -> started_map_id == map_id end)
end
)
# Clean up any stale map_pool_cache entries
Cachex.del(@cache, map_id)
# Clean up map-specific caches
WandererApp.Cache.delete("map_#{map_id}:started")
WandererApp.Cache.delete("map_characters-#{map_id}")
WandererApp.Map.CacheRTree.clear_tree("rtree_#{map_id}")
WandererApp.Map.delete_map_state(map_id)
:telemetry.execute(
[:wanderer_app, :map, :reconciliation, :zombie_cleanup],
%{count: 1},
%{map_id: map_id}
)
end)
end
defp fix_orphan_maps([]), do: :ok
defp fix_orphan_maps(orphan_maps) do
Logger.warning("[Map Reconciler] Found #{length(orphan_maps)} orphan maps: #{inspect(orphan_maps)}")
Enum.each(orphan_maps, fn map_id ->
Logger.info("[Map Reconciler] Fixing orphan map: #{map_id}")
# Add to started_maps cache
WandererApp.Cache.insert_or_update(
"started_maps",
[map_id],
fn existing ->
[map_id | existing] |> Enum.uniq()
end
)
:telemetry.execute(
[:wanderer_app, :map, :reconciliation, :orphan_fixed],
%{count: 1},
%{map_id: map_id}
)
end)
end
defp fix_cache_inconsistencies([]), do: :ok
defp fix_cache_inconsistencies(inconsistent_maps) do
Logger.warning(
"[Map Reconciler] Found #{length(inconsistent_maps)} cache inconsistencies: #{inspect(inconsistent_maps)}"
)
Enum.each(inconsistent_maps, fn map_id ->
Logger.info("[Map Reconciler] Fixing cache inconsistency for map: #{map_id}")
# Find the correct pool for this map
case find_pool_for_map(map_id) do
{:ok, pool_uuid} ->
Logger.info("[Map Reconciler] Updating cache: #{map_id} -> #{pool_uuid}")
Cachex.put(@cache, map_id, pool_uuid)
:telemetry.execute(
[:wanderer_app, :map, :reconciliation, :cache_fixed],
%{count: 1},
%{map_id: map_id, pool_uuid: pool_uuid}
)
:error ->
Logger.warning("[Map Reconciler] Could not find pool for map #{map_id}, removing from cache")
Cachex.del(@cache, map_id)
end
end)
end
defp find_pool_for_map(map_id) do
case Registry.lookup(@registry, WandererApp.Map.MapPool) do
[] ->
:error
pools ->
pools
|> Enum.find_value(:error, fn {_pid, uuid} ->
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, uuid)
) do
[{_pool_pid, map_ids}] ->
if map_id in map_ids do
{:ok, uuid}
else
nil
end
_ ->
nil
end
end)
end
end
end
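The reconciler's zombie/orphan detection is plain list difference between the started_maps cache and the registry view; a tiny sketch of that classification, plus the manual trigger exposed above (the map ids are illustrative):

started_maps  = ["map_a", "map_b", "map_c"]
registry_maps = ["map_b", "map_c", "map_d"]

zombie_maps = started_maps -- registry_maps   #=> ["map_a"]  (cached as started, but no pool owns it)
orphan_maps = registry_maps -- started_maps   #=> ["map_d"]  (a pool owns it, but the cache forgot it)
IO.inspect({zombie_maps, orphan_maps})

# Force a pass without waiting for the 5-minute timer:
# WandererApp.Map.Reconciler.trigger_reconciliation()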

View File

@@ -300,10 +300,9 @@ defmodule WandererApp.Map.SubscriptionManager do
defp is_expired(subscription) when is_map(subscription),
do: DateTime.compare(DateTime.utc_now(), subscription.active_till) == :gt
defp renew_subscription(%{auto_renew?: true} = subscription) when is_map(subscription) do
with {:ok, %{map: map}} <-
subscription |> WandererApp.MapSubscriptionRepo.load_relationships([:map]),
{:ok, estimated_price, discount} <- estimate_price(subscription, true),
defp renew_subscription(%{auto_renew?: true, map: map} = subscription)
when is_map(subscription) do
with {:ok, estimated_price, discount} <- estimate_price(subscription, true),
{:ok, map_balance} <- get_balance(map) do
case map_balance >= estimated_price do
true ->

View File

@@ -35,16 +35,14 @@ defmodule WandererApp.Map.ZkbDataFetcher do
|> Task.async_stream(
fn map_id ->
try do
if WandererApp.Map.Server.map_pid(map_id) do
# Always update kill counts
update_map_kills(map_id)
# Always update kill counts
update_map_kills(map_id)
# Update detailed kills for maps with active subscriptions
{:ok, is_subscription_active} = map_id |> WandererApp.Map.is_subscription_active?()
# Update detailed kills for maps with active subscriptions
{:ok, is_subscription_active} = map_id |> WandererApp.Map.is_subscription_active?()
if is_subscription_active do
update_detailed_map_kills(map_id)
end
if is_subscription_active do
update_detailed_map_kills(map_id)
end
rescue
e ->

View File

@@ -231,31 +231,15 @@ defmodule WandererApp.Map.Operations.Connections do
attrs
) do
with {:ok, conn_struct} <- MapConnectionRepo.get_by_id(map_id, conn_id),
result <-
:ok <-
(try do
_allowed_keys = [
:mass_status,
:ship_size_type,
:time_status,
:type
]
_update_map =
attrs
|> Enum.filter(fn {k, _v} ->
k in ["mass_status", "ship_size_type", "time_status", "type"]
end)
|> Enum.map(fn {k, v} -> {String.to_atom(k), v} end)
|> Enum.into(%{})
res = apply_connection_updates(map_id, conn_struct, attrs, char_id)
res
rescue
error ->
Logger.error("[update_connection] Exception: #{inspect(error)}")
{:error, :exception}
end),
:ok <- result do
end) do
# Since GenServer updates are asynchronous, manually apply updates to the current struct
# to return the correct data immediately instead of refetching from potentially stale cache
updated_attrs =
@@ -374,6 +358,7 @@ defmodule WandererApp.Map.Operations.Connections do
"ship_size_type" -> maybe_update_ship_size_type(map_id, conn, val)
"time_status" -> maybe_update_time_status(map_id, conn, val)
"type" -> maybe_update_type(map_id, conn, val)
"locked" -> maybe_update_locked(map_id, conn, val)
_ -> :ok
end
@@ -429,6 +414,16 @@ defmodule WandererApp.Map.Operations.Connections do
})
end
defp maybe_update_locked(_map_id, _conn, nil), do: :ok
defp maybe_update_locked(map_id, conn, value) do
Server.update_connection_locked(map_id, %{
solar_system_source_id: conn.solar_system_source,
solar_system_target_id: conn.solar_system_target,
locked: value
})
end
@doc "Creates a connection between two systems"
@spec create_connection(String.t(), map(), String.t()) ::
{:ok, :created} | {:skip, :exists} | {:error, atom()}

View File

@@ -5,9 +5,42 @@ defmodule WandererApp.Map.Operations.Signatures do
require Logger
alias WandererApp.Map.Operations
alias WandererApp.Api.{MapSystem, MapSystemSignature}
alias WandererApp.Api.{Character, MapSystem, MapSystemSignature}
alias WandererApp.Map.Server
# Private helper to validate character_eve_id from params and return internal character ID
# If character_eve_id is provided in params, validates it exists and returns the internal UUID
# If not provided, falls back to the owner's character ID (which is already the internal UUID)
@spec validate_character_eve_id(map() | nil, String.t()) ::
{:ok, String.t()} | {:error, :invalid_character}
defp validate_character_eve_id(params, fallback_char_id) when is_map(params) do
case Map.get(params, "character_eve_id") do
nil ->
# No character_eve_id provided, use fallback (owner's internal character UUID)
{:ok, fallback_char_id}
provided_char_eve_id when is_binary(provided_char_eve_id) ->
# Validate the provided character_eve_id exists and get internal UUID
case Character.by_eve_id(provided_char_eve_id) do
{:ok, character} ->
# Return the internal character UUID, not the eve_id
{:ok, character.id}
_ ->
{:error, :invalid_character}
end
_ ->
# Invalid format
{:error, :invalid_character}
end
end
# Handle nil or non-map params by falling back to owner's character
defp validate_character_eve_id(_params, fallback_char_id) do
{:ok, fallback_char_id}
end
@spec list_signatures(String.t()) :: [map()]
def list_signatures(map_id) do
systems = Operations.list_systems(map_id)
@@ -41,11 +74,14 @@ defmodule WandererApp.Map.Operations.Signatures do
%{"solar_system_id" => solar_system_id} = params
)
when is_integer(solar_system_id) do
# Convert solar_system_id to system_id for internal use
with {:ok, system} <- MapSystem.by_map_id_and_solar_system_id(map_id, solar_system_id) do
# Validate character first, then convert solar_system_id to system_id
# validated_char_uuid is the internal character UUID for Server.update_signatures
with {:ok, validated_char_uuid} <- validate_character_eve_id(params, char_id),
{:ok, system} <- MapSystem.by_map_id_and_solar_system_id(map_id, solar_system_id) do
# Keep character_eve_id in attrs if provided by user (parse_signatures will use it)
# If not provided, parse_signatures will use the character_eve_id from validated_char_uuid lookup
attrs =
params
|> Map.put("character_eve_id", char_id)
|> Map.put("system_id", system.id)
|> Map.delete("solar_system_id")
@@ -54,7 +90,7 @@ defmodule WandererApp.Map.Operations.Signatures do
updated_signatures: [],
removed_signatures: [],
solar_system_id: solar_system_id,
character_id: char_id,
character_id: validated_char_uuid, # Pass internal UUID here
user_id: user_id,
delete_connection_with_sigs: false
}) do
@@ -86,6 +122,10 @@ defmodule WandererApp.Map.Operations.Signatures do
{:error, :unexpected_error}
end
else
{:error, :invalid_character} ->
Logger.error("[create_signature] Invalid character_eve_id provided")
{:error, :invalid_character}
_ ->
Logger.error(
"[create_signature] System not found for solar_system_id: #{solar_system_id}"
@@ -111,7 +151,10 @@ defmodule WandererApp.Map.Operations.Signatures do
sig_id,
params
) do
with {:ok, sig} <- MapSystemSignature.by_id(sig_id),
# Validate character first, then look up signature and system
# validated_char_uuid is the internal character UUID
with {:ok, validated_char_uuid} <- validate_character_eve_id(params, char_id),
{:ok, sig} <- MapSystemSignature.by_id(sig_id),
{:ok, system} <- MapSystem.by_id(sig.system_id) do
base = %{
"eve_id" => sig.eve_id,
@@ -120,11 +163,11 @@ defmodule WandererApp.Map.Operations.Signatures do
"group" => sig.group,
"type" => sig.type,
"custom_info" => sig.custom_info,
"character_eve_id" => char_id,
"description" => sig.description,
"linked_system_id" => sig.linked_system_id
}
# Merge user params (which may include character_eve_id) with base
attrs = Map.merge(base, params)
:ok =
@@ -133,7 +176,7 @@ defmodule WandererApp.Map.Operations.Signatures do
updated_signatures: [attrs],
removed_signatures: [],
solar_system_id: system.solar_system_id,
character_id: char_id,
character_id: validated_char_uuid, # Pass internal UUID here
user_id: user_id,
delete_connection_with_sigs: false
})
@@ -151,6 +194,10 @@ defmodule WandererApp.Map.Operations.Signatures do
_ -> {:ok, attrs}
end
else
{:error, :invalid_character} ->
Logger.error("[update_signature] Invalid character_eve_id provided")
{:error, :invalid_character}
err ->
Logger.error("[update_signature] Unexpected error: #{inspect(err)}")
{:error, :unexpected_error}
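validate_character_eve_id/2 above is a "validate if provided, otherwise fall back to the owner" guard in front of the signature writes. A self-contained sketch of that shape with the character lookup stubbed out (the lookup function is a stand-in for Character.by_eve_id/1; names are illustrative):

defmodule CharacterParamGuard do
  @moduledoc false

  # `lookup` maps an EVE id to {:ok, internal_uuid} | :error (stand-in for the Ash read).
  def resolve(params, fallback_uuid, lookup) when is_map(params) do
    case Map.get(params, "character_eve_id") do
      nil ->
        {:ok, fallback_uuid}

      eve_id when is_binary(eve_id) ->
        case lookup.(eve_id) do
          {:ok, uuid} -> {:ok, uuid}
          _ -> {:error, :invalid_character}
        end

      _ ->
        {:error, :invalid_character}
    end
  end

  def resolve(_params, fallback_uuid, _lookup), do: {:ok, fallback_uuid}
end

# Example:
#   lookup = fn eve_id -> Map.fetch(%{"2112000001" => "uuid-1"}, eve_id) end
#   CharacterParamGuard.resolve(%{"character_eve_id" => "2112000001"}, "owner-uuid", lookup)
#   #=> {:ok, "uuid-1"}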

View File

@@ -310,8 +310,8 @@ defmodule WandererApp.Map.Server.CharactersImpl do
start_solar_system_id =
WandererApp.Cache.take("map:#{map_id}:character:#{character_id}:start_solar_system_id")
case is_nil(old_location.solar_system_id) and
is_nil(start_solar_system_id) and
case is_nil(old_location.solar_system_id) &&
is_nil(start_solar_system_id) &&
ConnectionsImpl.can_add_location(scope, location.solar_system_id) do
true ->
:ok = SystemsImpl.maybe_add_system(map_id, location, nil, map_opts)

View File

@@ -373,36 +373,36 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
solar_system_target: solar_system_target
} = updated_connection
) do
source_system =
WandererApp.Map.find_system_by_location(
with source_system when not is_nil(source_system) <-
WandererApp.Map.find_system_by_location(
map_id,
%{solar_system_id: solar_system_source}
),
target_system when not is_nil(source_system) <-
WandererApp.Map.find_system_by_location(
map_id,
%{solar_system_id: solar_system_target}
),
source_linked_signatures <-
find_linked_signatures(source_system, target_system),
target_linked_signatures <- find_linked_signatures(target_system, source_system) do
update_signatures_time_status(
map_id,
%{solar_system_id: solar_system_source}
source_system.solar_system_id,
source_linked_signatures,
time_status
)
target_system =
WandererApp.Map.find_system_by_location(
update_signatures_time_status(
map_id,
%{solar_system_id: solar_system_target}
target_system.solar_system_id,
target_linked_signatures,
time_status
)
source_linked_signatures =
find_linked_signatures(source_system, target_system)
target_linked_signatures = find_linked_signatures(target_system, source_system)
update_signatures_time_status(
map_id,
source_system.solar_system_id,
source_linked_signatures,
time_status
)
update_signatures_time_status(
map_id,
target_system.solar_system_id,
target_linked_signatures,
time_status
)
else
error ->
Logger.error("Failed to update_linked_signature_time_status: #{inspect(error)}")
end
end
defp find_linked_signatures(
@@ -438,7 +438,7 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
%{custom_info: updated_custom_info}
end
SignaturesImpl.apply_update_signature(%{map_id: map_id}, sig, update_params)
SignaturesImpl.apply_update_signature(map_id, sig, update_params)
end)
Impl.broadcast!(map_id, :signatures_updated, solar_system_id)
@@ -537,6 +537,12 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
Impl.broadcast!(map_id, :add_connection, connection)
Impl.broadcast!(map_id, :maybe_link_signature, %{
character_id: character_id,
solar_system_source: old_location.solar_system_id,
solar_system_target: location.solar_system_id
})
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :connection_added, %{
connection_id: connection.id,
@@ -548,19 +554,12 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
time_status: connection.time_status
})
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_connection_added, %{
character_id: character_id,
user_id: character.user_id,
map_id: map_id,
solar_system_source_id: old_location.solar_system_id,
solar_system_target_id: location.solar_system_id
})
Impl.broadcast!(map_id, :maybe_link_signature, %{
WandererApp.User.ActivityTracker.track_map_event(:map_connection_added, %{
character_id: character_id,
solar_system_source: old_location.solar_system_id,
solar_system_target: location.solar_system_id
user_id: character.user_id,
map_id: map_id,
solar_system_source_id: old_location.solar_system_id,
solar_system_target_id: location.solar_system_id
})
:ok
@@ -657,12 +656,14 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
)
)
def is_connection_valid(:all, _from_solar_system_id, _to_solar_system_id), do: true
def is_connection_valid(:all, from_solar_system_id, to_solar_system_id),
do: from_solar_system_id != to_solar_system_id
def is_connection_valid(:none, _from_solar_system_id, _to_solar_system_id), do: false
def is_connection_valid(scope, from_solar_system_id, to_solar_system_id)
when not is_nil(from_solar_system_id) and not is_nil(to_solar_system_id) do
when not is_nil(from_solar_system_id) and not is_nil(to_solar_system_id) and
from_solar_system_id != to_solar_system_id do
with {:ok, known_jumps} <- find_solar_system_jump(from_solar_system_id, to_solar_system_id),
{:ok, from_system_static_info} <- get_system_static_info(from_solar_system_id),
{:ok, to_system_static_info} <- get_system_static_info(to_solar_system_id) do

View File

@@ -279,7 +279,8 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
group: sig["group"],
type: Map.get(sig, "type"),
custom_info: Map.get(sig, "custom_info"),
character_eve_id: character_eve_id,
# Use character_eve_id from sig if provided, otherwise use the default
character_eve_id: Map.get(sig, "character_eve_id", character_eve_id),
deleted: false
}
end)

View File

@@ -642,13 +642,12 @@ defmodule WandererApp.Map.Server.SystemsImpl do
position_y: system.position_y
})
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:system_added, %{
character_id: character_id,
user_id: user_id,
map_id: map_id,
solar_system_id: solar_system_id
})
WandererApp.User.ActivityTracker.track_map_event(:system_added, %{
character_id: character_id,
user_id: user_id,
map_id: map_id,
solar_system_id: solar_system_id
})
end
defp maybe_update_extra_info(system, nil), do: system


@@ -11,7 +11,9 @@ defmodule WandererApp.Server.ServerStatusTracker do
:server_version,
:start_time,
:vip,
:retries
:retries,
:in_forced_downtime,
:downtime_notified
]
@retries_count 3
@@ -21,9 +23,17 @@ defmodule WandererApp.Server.ServerStatusTracker do
retries: @retries_count,
server_version: "0",
start_time: "0",
vip: true
vip: true,
in_forced_downtime: false,
downtime_notified: false
}
# EVE Online daily downtime period (UTC/GMT)
@downtime_start_hour 10
@downtime_start_minute 58
@downtime_end_hour 11
@downtime_end_minute 2
@refresh_interval :timer.minutes(1)
@logger Application.compile_env(:wanderer_app, :logger)
@@ -57,13 +67,51 @@ defmodule WandererApp.Server.ServerStatusTracker do
def handle_info(
:refresh_status,
%{
retries: retries
retries: retries,
in_forced_downtime: was_in_downtime
} = state
) do
Process.send_after(self(), :refresh_status, @refresh_interval)
Task.async(fn -> get_server_status(retries) end)
{:noreply, state}
in_downtime = in_forced_downtime?()
cond do
# Entering downtime period - broadcast offline status immediately
in_downtime and not was_in_downtime ->
@logger.info("#{__MODULE__} entering forced downtime period (10:58-11:02 GMT)")
downtime_status = %{
players: 0,
server_version: "downtime",
start_time: DateTime.utc_now() |> DateTime.to_iso8601(),
vip: true
}
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"server_status",
{:server_status, downtime_status}
)
{:noreply,
%{state | in_forced_downtime: true, downtime_notified: true}
|> Map.merge(downtime_status)}
# Currently in downtime - skip API call
in_downtime ->
{:noreply, state}
# Exiting downtime period - resume normal operations
not in_downtime and was_in_downtime ->
@logger.info("#{__MODULE__} exiting forced downtime period, resuming normal operations")
Task.async(fn -> get_server_status(retries) end)
{:noreply, %{state | in_forced_downtime: false, downtime_notified: false}}
# Normal operation
true ->
Task.async(fn -> get_server_status(retries) end)
{:noreply, state}
end
end
@impl true
@@ -155,4 +203,19 @@ defmodule WandererApp.Server.ServerStatusTracker do
vip: false
}
end
# Checks if the current UTC time falls within the forced downtime period (10:58-11:02 GMT).
defp in_forced_downtime? do
now = DateTime.utc_now()
current_hour = now.hour
current_minute = now.minute
# Convert times to minutes since midnight for easier comparison
current_time_minutes = current_hour * 60 + current_minute
downtime_start_minutes = @downtime_start_hour * 60 + @downtime_start_minute
downtime_end_minutes = @downtime_end_hour * 60 + @downtime_end_minute
current_time_minutes >= downtime_start_minutes and
current_time_minutes < downtime_end_minutes
end
end


@@ -1,16 +1,57 @@
defmodule WandererApp.User.ActivityTracker do
@moduledoc false
@moduledoc """
Activity tracking wrapper that ensures audit logging never crashes application logic.
Activity tracking is best-effort and errors are logged but not propagated to callers.
This prevents race conditions (e.g., duplicate activity records) from affecting
critical business operations like character tracking or connection management.
"""
require Logger
def track_map_event(
event_type,
metadata
),
do: WandererApp.Map.Audit.track_map_event(event_type, metadata)
@doc """
Track a map-related event. Always returns an `{:ok, _}` tuple, even when tracking fails.
def track_acl_event(
event_type,
metadata
),
do: WandererApp.Map.Audit.track_acl_event(event_type, metadata)
Errors (such as unique constraint violations from concurrent operations)
are logged but do not propagate to prevent crashing critical application logic.
"""
def track_map_event(event_type, metadata) do
case WandererApp.Map.Audit.track_map_event(event_type, metadata) do
{:ok, result} ->
{:ok, result}
{:error, error} ->
Logger.warning("Failed to track map event (non-critical)",
event_type: event_type,
map_id: metadata[:map_id],
error: inspect(error),
reason: :best_effort_tracking
)
# Return success to prevent crashes - activity tracking is best-effort
{:ok, nil}
end
end
@doc """
Track an ACL-related event. Always returns an `{:ok, _}` tuple, even when tracking fails.
Errors are logged but do not propagate to prevent crashing critical application logic.
"""
def track_acl_event(event_type, metadata) do
case WandererApp.Map.Audit.track_acl_event(event_type, metadata) do
{:ok, result} ->
{:ok, result}
{:error, error} ->
Logger.warning("Failed to track ACL event (non-critical)",
event_type: event_type,
acl_id: metadata[:acl_id],
error: inspect(error),
reason: :best_effort_tracking
)
# Return success to prevent crashes - activity tracking is best-effort
{:ok, nil}
end
end
end


@@ -38,14 +38,18 @@
</div>
<div class="navbar-end"></div>
</navbar>
<div class="!z-10 min-h-[calc(100vh-7rem)]">
<div class="!z-10 min-h-[calc(100vh-11rem)]">
{@inner_content}
</div>
<!--Footer-->
<footer class="!z-10 w-full pb-4 text-sm text-center fade-in">
<a class="text-gray-500 no-underline hover:no-underline" href="#">
&copy; Wanderer 2024
</a>
<footer class="!z-10 w-full pt-8 pb-4 text-sm text-center fade-in flex justify-center items-center">
<div class="flex flex-col justify-center items-center">
<a target="_blank" rel="noopener noreferrer" href="https://www.eveonline.com/partners"><img src="/images/eo_pp.png" style="width: 300px;" alt="Eve Online Partnership Program"></a>
<div class="text-gray-500 no-underline hover:no-underline">
All <a href="/license">EVE related materials</a> are property of <a href="https://www.ccpgames.com">CCP Games</a>
&copy; {Date.utc_today().year} Wanderer Industries.
</div>
</div>
</footer>
<div class="fixed top-0 left-0 w-full h-full !-z-1 maps_bg" />
</main>


@@ -432,32 +432,42 @@ defmodule WandererAppWeb.MapSystemAPIController do
],
id: [
in: :path,
description: "System ID",
type: :string,
required: true
description: "Solar System ID (EVE Online system ID, e.g., 30000142 for Jita)",
type: :integer,
required: true,
example: 30_000_142
]
],
responses: ResponseSchemas.standard_responses(@detail_response_schema)
)
def show(%{assigns: %{map_id: map_id}} = conn, %{"id" => id}) do
with {:ok, system_uuid} <- APIUtils.validate_uuid(id),
{:ok, system} <- WandererApp.Api.MapSystem.by_id(system_uuid) do
# Verify the system belongs to the requested map
if system.map_id == map_id do
APIUtils.respond_data(conn, APIUtils.map_system_to_json(system))
else
# Look up by solar_system_id (EVE Online integer ID)
case APIUtils.parse_int(id) do
{:ok, solar_system_id} ->
case Operations.get_system(map_id, solar_system_id) do
{:ok, system} ->
APIUtils.respond_data(conn, APIUtils.map_system_to_json(system))
{:error, :not_found} ->
{:error, :not_found}
end
{:error, _} ->
{:error, :not_found}
end
else
{:error, %Ash.Error.Query.NotFound{}} -> {:error, :not_found}
{:error, _} -> {:error, :not_found}
error -> error
end
end
operation(:create,
summary: "Upsert Systems and Connections (batch or single)",
summary: "Create or Update Systems and Connections",
description: """
Creates or updates systems and connections. Supports two formats:
1. **Single System Format**: Post a single system object directly (e.g., `{"solar_system_id": 30000142, "position_x": 100, ...}`)
2. **Batch Format**: Post multiple systems and connections (e.g., `{"systems": [...], "connections": [...]}`)
Systems are identified by solar_system_id and will be updated if they already exist on the map.
""",
parameters: [
map_identifier: [
in: :path,
@@ -472,8 +482,22 @@ defmodule WandererAppWeb.MapSystemAPIController do
)
def create(conn, params) do
systems = Map.get(params, "systems", [])
connections = Map.get(params, "connections", [])
# Support both batch format {"systems": [...], "connections": [...]}
# and single system format {"solar_system_id": ..., ...}
{systems, connections} =
cond do
Map.has_key?(params, "systems") ->
# Batch format
{Map.get(params, "systems", []), Map.get(params, "connections", [])}
Map.has_key?(params, "solar_system_id") or Map.has_key?(params, :solar_system_id) ->
# Single system format - wrap it in an array
{[params], []}
true ->
# Empty request
{[], []}
end
case Operations.upsert_systems_and_connections(conn, systems, connections) do
{:ok, result} ->
@@ -496,9 +520,10 @@ defmodule WandererAppWeb.MapSystemAPIController do
],
id: [
in: :path,
description: "System ID",
type: :string,
required: true
description: "Solar System ID (EVE Online system ID, e.g., 30000142 for Jita)",
type: :integer,
required: true,
example: 30_000_142
]
],
request_body: {"System update request", "application/json", @system_update_schema},
@@ -506,11 +531,15 @@ defmodule WandererAppWeb.MapSystemAPIController do
)
def update(conn, %{"id" => id} = params) do
with {:ok, system_uuid} <- APIUtils.validate_uuid(id),
{:ok, system} <- WandererApp.Api.MapSystem.by_id(system_uuid),
{:ok, attrs} <- APIUtils.extract_update_params(params),
{:ok, updated_system} <- Ash.update(system, attrs) do
APIUtils.respond_data(conn, APIUtils.map_system_to_json(updated_system))
with {:ok, solar_system_id} <- APIUtils.parse_int(id),
{:ok, attrs} <- APIUtils.extract_update_params(params) do
case Operations.update_system(conn, solar_system_id, attrs) do
{:ok, result} ->
APIUtils.respond_data(conn, result)
error ->
error
end
end
end
@@ -578,9 +607,10 @@ defmodule WandererAppWeb.MapSystemAPIController do
],
id: [
in: :path,
description: "System ID",
type: :string,
required: true
description: "Solar System ID (EVE Online system ID, e.g., 30000142 for Jita)",
type: :integer,
required: true,
example: 30_000_142
]
],
responses: ResponseSchemas.standard_responses(@delete_response_schema)
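The switch from internal UUIDs to EVE solar system IDs in the `id` path parameter, and the new single-system request format, can be illustrated with a short curl sketch. The `/api/maps/{slug}/systems` path shape and the `$API_BASE_URL`, `$API_TOKEN`, and `$MAP_SLUG` placeholders are assumptions borrowed from the manual test guide later in this compare view, not verified route definitions:
```bash
# Assumed legacy route shape: /api/maps/{slug}/systems/{solar_system_id}
# Fetch one system by its EVE solar system ID (integer); no UUID lookup needed
curl -H "Authorization: Bearer $API_TOKEN" \
  "$API_BASE_URL/api/maps/$MAP_SLUG/systems/30000142"

# Upsert using the single-system format: post the system object directly,
# without wrapping it in a {"systems": [...]} batch envelope
curl -X POST \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"solar_system_id": 30000142, "position_x": 100, "position_y": 100}' \
  "$API_BASE_URL/api/maps/$MAP_SLUG/systems"
```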


@@ -12,28 +12,32 @@ defmodule WandererAppWeb.MapSystemSignatureAPIController do
# Inlined OpenAPI schema for a map system signature
@signature_schema %OpenApiSpex.Schema{
title: "MapSystemSignature",
description: "A cosmic signature scanned in an EVE Online solar system",
type: :object,
properties: %{
id: %OpenApiSpex.Schema{type: :string, format: :uuid},
solar_system_id: %OpenApiSpex.Schema{type: :integer},
eve_id: %OpenApiSpex.Schema{type: :string},
character_eve_id: %OpenApiSpex.Schema{type: :string},
name: %OpenApiSpex.Schema{type: :string, nullable: true},
description: %OpenApiSpex.Schema{type: :string, nullable: true},
type: %OpenApiSpex.Schema{type: :string, nullable: true},
linked_system_id: %OpenApiSpex.Schema{type: :integer, nullable: true},
kind: %OpenApiSpex.Schema{type: :string, nullable: true},
group: %OpenApiSpex.Schema{type: :string, nullable: true},
custom_info: %OpenApiSpex.Schema{type: :string, nullable: true},
updated: %OpenApiSpex.Schema{type: :integer, nullable: true},
inserted_at: %OpenApiSpex.Schema{type: :string, format: :date_time},
updated_at: %OpenApiSpex.Schema{type: :string, format: :date_time}
id: %OpenApiSpex.Schema{type: :string, format: :uuid, description: "Unique signature identifier"},
solar_system_id: %OpenApiSpex.Schema{type: :integer, description: "EVE Online solar system ID"},
eve_id: %OpenApiSpex.Schema{type: :string, description: "In-game signature ID (e.g., ABC-123)"},
character_eve_id: %OpenApiSpex.Schema{
type: :string,
description: "EVE character ID who scanned/updated this signature. Must be a valid character in the database. If not provided, defaults to the map owner's character.",
nullable: true
},
name: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Signature name"},
description: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Additional notes"},
type: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Signature type"},
linked_system_id: %OpenApiSpex.Schema{type: :integer, nullable: true, description: "Connected solar system ID for wormholes"},
kind: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Signature kind (e.g., cosmic_signature)"},
group: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Signature group (e.g., wormhole, data, relic)"},
custom_info: %OpenApiSpex.Schema{type: :string, nullable: true, description: "Custom metadata"},
updated: %OpenApiSpex.Schema{type: :integer, nullable: true, description: "Update counter"},
inserted_at: %OpenApiSpex.Schema{type: :string, format: :date_time, description: "Creation timestamp"},
updated_at: %OpenApiSpex.Schema{type: :string, format: :date_time, description: "Last update timestamp"}
},
required: [
:id,
:solar_system_id,
:eve_id,
:character_eve_id
:eve_id
],
example: %{
id: "sig-uuid-1",
@@ -143,6 +147,10 @@ defmodule WandererAppWeb.MapSystemSignatureAPIController do
@doc """
Create a new signature.
The `character_eve_id` field is optional. If provided, it must be a valid character
that exists in the database, otherwise a 422 error will be returned. If not provided,
the signature will be associated with the map owner's character.
"""
operation(:create,
summary: "Create a new signature",
@@ -162,6 +170,18 @@ defmodule WandererAppWeb.MapSystemSignatureAPIController do
type: :object,
properties: %{data: @signature_schema},
example: %{data: @signature_schema.example}
}},
unprocessable_entity:
{"Validation error", "application/json",
%OpenApiSpex.Schema{
type: :object,
properties: %{
error: %OpenApiSpex.Schema{
type: :string,
description: "Error type (e.g., 'invalid_character', 'system_not_found', 'missing_params')"
}
},
example: %{error: "invalid_character"}
}}
]
)
@@ -175,6 +195,9 @@ defmodule WandererAppWeb.MapSystemSignatureAPIController do
@doc """
Update a signature by ID.
The `character_eve_id` field is optional. If provided, it must be a valid character
that exists in the database, otherwise a 422 error will be returned.
"""
operation(:update,
summary: "Update a signature by ID",
@@ -195,6 +218,18 @@ defmodule WandererAppWeb.MapSystemSignatureAPIController do
type: :object,
properties: %{data: @signature_schema},
example: %{data: @signature_schema.example}
}},
unprocessable_entity:
{"Validation error", "application/json",
%OpenApiSpex.Schema{
type: :object,
properties: %{
error: %OpenApiSpex.Schema{
type: :string,
description: "Error type (e.g., 'invalid_character', 'unexpected_error')"
}
},
example: %{error: "invalid_character"}
}}
]
)


@@ -149,12 +149,12 @@ defmodule WandererAppWeb.Plugs.CheckJsonApiAuth do
end
defp validate_api_token(conn, token) do
# Check for map identifier in path params
# According to PR feedback, routes supply params["map_identifier"]
case conn.params["map_identifier"] do
# Try to get map identifier from multiple sources
map_identifier = get_map_identifier(conn)
case map_identifier do
nil ->
# No map identifier in path - this might be a general API endpoint
# For now, we'll return an error since we need to validate against a specific map
# No map identifier found - this might be a general API endpoint
{:error, "Authentication failed", :no_map_context}
identifier ->
@@ -182,6 +182,37 @@ defmodule WandererAppWeb.Plugs.CheckJsonApiAuth do
end
end
# Extract map identifier from multiple sources
defp get_map_identifier(conn) do
# 1. Check path params (e.g., /api/v1/maps/:map_identifier/systems)
case conn.params["map_identifier"] do
id when is_binary(id) and id != "" -> id
_ ->
# 2. Check request body for map_id (JSON:API format)
case conn.body_params do
%{"data" => %{"attributes" => %{"map_id" => map_id}}} when is_binary(map_id) and map_id != "" ->
map_id
%{"data" => %{"relationships" => %{"map" => %{"data" => %{"id" => map_id}}}}} when is_binary(map_id) and map_id != "" ->
map_id
# 3. Check flat body params (non-JSON:API format)
%{"map_id" => map_id} when is_binary(map_id) and map_id != "" ->
map_id
_ ->
# 4. Check query params (e.g., ?filter[map_id]=...)
case conn.params do
%{"filter" => %{"map_id" => map_id}} when is_binary(map_id) and map_id != "" ->
map_id
_ ->
nil
end
end
end
end
# Helper to resolve map by ID or slug
defp resolve_map_identifier(identifier) do
# Try as UUID first


@@ -353,7 +353,7 @@ defmodule WandererAppWeb.Helpers.APIUtils do
def connection_to_json(conn) do
Map.take(conn, ~w(
id map_id solar_system_source solar_system_target mass_status
time_status ship_size_type type wormhole_type inserted_at updated_at
time_status ship_size_type type wormhole_type locked inserted_at updated_at
)a)
end
end


@@ -272,6 +272,9 @@
<.icon name="hero-check-badge-solid" class="w-5 h-5" />
</div>
</:col>
<:col :let={subscription} label="Map">
{subscription.map.name}
</:col>
<:col :let={subscription} label="Active Till">
<.local_time
:if={subscription.active_till}
@@ -333,7 +336,7 @@
label="Valid"
options={Enum.map(@valid_types, fn valid_type -> {valid_type.label, valid_type.id} end)}
/>
<!-- API Key Section with grid layout -->
<div class="modal-action">
<.button class="mt-2" type="submit" phx-disable-with="Saving...">


@@ -59,14 +59,13 @@ defmodule WandererAppWeb.MapConnectionsEventHandler do
character_id: main_character_id
})
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_connection_added, %{
character_id: main_character_id,
user_id: current_user_id,
map_id: map_id,
solar_system_source_id: "#{solar_system_source_id}" |> String.to_integer(),
solar_system_target_id: "#{solar_system_target_id}" |> String.to_integer()
})
WandererApp.User.ActivityTracker.track_map_event(:map_connection_added, %{
character_id: main_character_id,
user_id: current_user_id,
map_id: map_id,
solar_system_source_id: "#{solar_system_source_id}" |> String.to_integer(),
solar_system_target_id: "#{solar_system_target_id}" |> String.to_integer()
})
{:noreply, socket}
end
@@ -149,7 +148,6 @@ defmodule WandererAppWeb.MapConnectionsEventHandler do
end
end
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_connection_removed, %{
character_id: main_character_id,
user_id: current_user_id,
@@ -202,7 +200,6 @@ defmodule WandererAppWeb.MapConnectionsEventHandler do
_ -> nil
end
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_connection_updated, %{
character_id: main_character_id,
user_id: current_user_id,


@@ -165,13 +165,12 @@ defmodule WandererAppWeb.MapRoutesEventHandler do
solar_system_id: solar_system_id
})
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:hub_added, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: solar_system_id
})
WandererApp.User.ActivityTracker.track_map_event(:hub_added, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: solar_system_id
})
{:noreply, socket}
else
@@ -204,13 +203,12 @@ defmodule WandererAppWeb.MapRoutesEventHandler do
solar_system_id: solar_system_id
})
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:hub_removed, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: solar_system_id
})
WandererApp.User.ActivityTracker.track_map_event(:hub_removed, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: solar_system_id
})
{:noreply, socket}
end


@@ -250,15 +250,14 @@ defmodule WandererAppWeb.MapSystemsEventHandler do
|> Map.put_new(key_atom, value)
])
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:system_updated, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: "#{solar_system_id}" |> String.to_integer(),
key: key_atom,
value: value
})
WandererApp.User.ActivityTracker.track_map_event(:system_updated, %{
character_id: main_character_id,
user_id: current_user.id,
map_id: map_id,
solar_system_id: "#{solar_system_id}" |> String.to_integer(),
key: key_atom,
value: value
})
end
{:noreply, socket}


@@ -383,24 +383,22 @@ defmodule WandererAppWeb.MapsLive do
added_acls
|> Enum.each(fn acl_id ->
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_acl_added, %{
character_id: first_tracked_character_id,
user_id: current_user.id,
map_id: map.id,
acl_id: acl_id
})
WandererApp.User.ActivityTracker.track_map_event(:map_acl_added, %{
character_id: first_tracked_character_id,
user_id: current_user.id,
map_id: map.id,
acl_id: acl_id
})
end)
removed_acls
|> Enum.each(fn acl_id ->
{:ok, _} =
WandererApp.User.ActivityTracker.track_map_event(:map_acl_removed, %{
character_id: first_tracked_character_id,
user_id: current_user.id,
map_id: map.id,
acl_id: acl_id
})
WandererApp.User.ActivityTracker.track_map_event(:map_acl_removed, %{
character_id: first_tracked_character_id,
user_id: current_user.id,
map_id: map.id,
acl_id: acl_id
})
end)
{:noreply,


@@ -10,529 +10,8 @@ defmodule WandererAppWeb.OpenApiV1Spec do
@impl OpenApiSpex.OpenApi
def spec do
# This is called by the modify_open_api option in the router
# We should return the spec from WandererAppWeb.OpenApi module
# We delegate to WandererAppWeb.OpenApi module which generates
# the spec from AshJsonApi with custom endpoints merged in
WandererAppWeb.OpenApi.spec()
end
defp generate_spec_manually do
%OpenApi{
info: %Info{
title: "WandererApp v1 JSON:API",
version: "1.0.0",
description: """
JSON:API compliant endpoints for WandererApp.
## Features
- Filtering: Use `filter[attribute]=value` parameters
- Sorting: Use `sort=attribute` or `sort=-attribute` for descending
- Pagination: Use `page[limit]=n` and `page[offset]=n`
- Relationships: Include related resources with `include=relationship`
## Authentication
All endpoints require Bearer token authentication:
```
Authorization: Bearer YOUR_API_KEY
```
"""
},
servers: [
Server.from_endpoint(WandererAppWeb.Endpoint)
],
paths: get_v1_paths(),
components: %Components{
schemas: get_v1_schemas(),
securitySchemes: %{
"bearerAuth" => %{
"type" => "http",
"scheme" => "bearer",
"description" => "Map API key for authentication"
}
}
},
security: [%{"bearerAuth" => []}],
tags: get_v1_tags()
}
end
defp get_v1_tags do
[
%{"name" => "Access Lists", "description" => "Access control list management"},
%{"name" => "Access List Members", "description" => "ACL member management"},
%{"name" => "Characters", "description" => "Character management"},
%{"name" => "Maps", "description" => "Map management"},
%{"name" => "Map Systems", "description" => "Map system operations"},
%{"name" => "Map Connections", "description" => "System connection management"},
%{"name" => "Map Solar Systems", "description" => "Solar system data"},
%{"name" => "Map System Signatures", "description" => "Wormhole signature tracking"},
%{"name" => "Map System Structures", "description" => "Structure management"},
%{"name" => "Map System Comments", "description" => "System comments"},
%{"name" => "Map Character Settings", "description" => "Character map settings"},
%{"name" => "Map User Settings", "description" => "User map preferences"},
%{"name" => "Map Subscriptions", "description" => "Map subscription management"},
%{"name" => "Map Access Lists", "description" => "Map-specific ACLs"},
%{"name" => "Map States", "description" => "Map state information"},
%{"name" => "Users", "description" => "User management"},
%{"name" => "User Activities", "description" => "User activity tracking"},
%{"name" => "Ship Type Info", "description" => "Ship type information"}
]
end
defp get_v1_paths do
# Generate paths for all resources
resources = [
{"access_lists", "Access Lists"},
{"access_list_members", "Access List Members"},
{"characters", "Characters"},
{"maps", "Maps"},
{"map_systems", "Map Systems"},
{"map_connections", "Map Connections"},
{"map_solar_systems", "Map Solar Systems"},
{"map_system_signatures", "Map System Signatures"},
{"map_system_structures", "Map System Structures"},
{"map_system_comments", "Map System Comments"},
{"map_character_settings", "Map Character Settings"},
{"map_user_settings", "Map User Settings"},
{"map_subscriptions", "Map Subscriptions"},
{"map_access_lists", "Map Access Lists"},
{"map_states", "Map States"},
{"users", "Users"},
{"user_activities", "User Activities"},
{"ship_type_infos", "Ship Type Info"}
]
Enum.reduce(resources, %{}, fn {resource, tag}, acc ->
base_path = "/api/v1/#{resource}"
paths = %{
base_path => %{
"get" => %{
"summary" => "List #{resource}",
"tags" => [tag],
"parameters" => get_standard_list_parameters(resource),
"responses" => %{
"200" => %{
"description" => "List of #{resource}",
"content" => %{
"application/vnd.api+json" => %{
"schema" => %{
"$ref" => "#/components/schemas/#{String.capitalize(resource)}ListResponse"
}
}
}
}
}
},
"post" => %{
"summary" => "Create #{String.replace(resource, "_", " ")}",
"tags" => [tag],
"requestBody" => %{
"required" => true,
"content" => %{
"application/vnd.api+json" => %{
"schema" => %{
"$ref" => "#/components/schemas/#{String.capitalize(resource)}CreateRequest"
}
}
}
},
"responses" => %{
"201" => %{"description" => "Created"}
}
}
},
"#{base_path}/{id}" => %{
"get" => %{
"summary" => "Get #{String.replace(resource, "_", " ")}",
"tags" => [tag],
"parameters" => [
%{
"name" => "id",
"in" => "path",
"required" => true,
"schema" => %{"type" => "string"}
}
],
"responses" => %{
"200" => %{"description" => "Resource details"}
}
},
"patch" => %{
"summary" => "Update #{String.replace(resource, "_", " ")}",
"tags" => [tag],
"parameters" => [
%{
"name" => "id",
"in" => "path",
"required" => true,
"schema" => %{"type" => "string"}
}
],
"requestBody" => %{
"required" => true,
"content" => %{
"application/vnd.api+json" => %{
"schema" => %{
"$ref" => "#/components/schemas/#{String.capitalize(resource)}UpdateRequest"
}
}
}
},
"responses" => %{
"200" => %{"description" => "Updated"}
}
},
"delete" => %{
"summary" => "Delete #{String.replace(resource, "_", " ")}",
"tags" => [tag],
"parameters" => [
%{
"name" => "id",
"in" => "path",
"required" => true,
"schema" => %{"type" => "string"}
}
],
"responses" => %{
"204" => %{"description" => "Deleted"}
}
}
}
}
Map.merge(acc, paths)
end)
|> add_custom_paths()
end
defp add_custom_paths(paths) do
# Add custom action paths
custom_paths = %{
"/api/v1/maps/{id}/duplicate" => %{
"post" => %{
"summary" => "Duplicate map",
"tags" => ["Maps"],
"parameters" => [
%{
"name" => "id",
"in" => "path",
"required" => true,
"schema" => %{"type" => "string"}
}
],
"responses" => %{
"201" => %{"description" => "Map duplicated"}
}
}
},
"/api/v1/maps/{map_id}/systems_and_connections" => %{
"get" => %{
"summary" => "Get Map Systems and Connections",
"description" => "Retrieve both systems and connections for a map in a single response",
"tags" => ["Maps"],
"parameters" => [
%{
"name" => "map_id",
"in" => "path",
"required" => true,
"schema" => %{"type" => "string"},
"description" => "Map ID"
}
],
"responses" => %{
"200" => %{
"description" => "Combined systems and connections data",
"content" => %{
"application/json" => %{
"schema" => %{
"type" => "object",
"properties" => %{
"systems" => %{
"type" => "array",
"items" => %{
"type" => "object",
"properties" => %{
"id" => %{"type" => "string"},
"solar_system_id" => %{"type" => "integer"},
"name" => %{"type" => "string"},
"status" => %{"type" => "string"},
"visible" => %{"type" => "boolean"},
"locked" => %{"type" => "boolean"},
"position_x" => %{"type" => "integer"},
"position_y" => %{"type" => "integer"}
}
}
},
"connections" => %{
"type" => "array",
"items" => %{
"type" => "object",
"properties" => %{
"id" => %{"type" => "string"},
"solar_system_source" => %{"type" => "integer"},
"solar_system_target" => %{"type" => "integer"},
"type" => %{"type" => "string"},
"time_status" => %{"type" => "string"},
"mass_status" => %{"type" => "string"}
}
}
}
}
}
}
}
},
"404" => %{"description" => "Map not found"},
"401" => %{"description" => "Unauthorized"}
}
}
}
}
Map.merge(paths, custom_paths)
end
defp get_standard_list_parameters(resource) do
base_params = [
%{
"name" => "sort",
"in" => "query",
"description" => "Sort results (e.g., 'name', '-created_at')",
"schema" => %{"type" => "string"}
},
%{
"name" => "page[limit]",
"in" => "query",
"description" => "Number of results per page",
"schema" => %{"type" => "integer", "default" => 50}
},
%{
"name" => "page[offset]",
"in" => "query",
"description" => "Offset for pagination",
"schema" => %{"type" => "integer", "default" => 0}
},
%{
"name" => "include",
"in" => "query",
"description" => "Include related resources (comma-separated)",
"schema" => %{"type" => "string"}
}
]
# Add resource-specific filter parameters
filter_params =
case resource do
"characters" ->
[
%{
"name" => "filter[name]",
"in" => "query",
"description" => "Filter by character name",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[user_id]",
"in" => "query",
"description" => "Filter by user ID",
"schema" => %{"type" => "string"}
}
]
"maps" ->
[
%{
"name" => "filter[scope]",
"in" => "query",
"description" => "Filter by map scope",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[archived]",
"in" => "query",
"description" => "Filter by archived status",
"schema" => %{"type" => "boolean"}
}
]
"map_systems" ->
[
%{
"name" => "filter[map_id]",
"in" => "query",
"description" => "Filter by map ID",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[solar_system_id]",
"in" => "query",
"description" => "Filter by solar system ID",
"schema" => %{"type" => "integer"}
}
]
"map_connections" ->
[
%{
"name" => "filter[map_id]",
"in" => "query",
"description" => "Filter by map ID",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[source_id]",
"in" => "query",
"description" => "Filter by source system ID",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[target_id]",
"in" => "query",
"description" => "Filter by target system ID",
"schema" => %{"type" => "string"}
}
]
"map_system_signatures" ->
[
%{
"name" => "filter[system_id]",
"in" => "query",
"description" => "Filter by system ID",
"schema" => %{"type" => "string"}
},
%{
"name" => "filter[type]",
"in" => "query",
"description" => "Filter by signature type",
"schema" => %{"type" => "string"}
}
]
_ ->
[]
end
base_params ++ filter_params
end
defp get_v1_schemas do
%{
# Generic JSON:API response wrapper
"JsonApiWrapper" => %{
"type" => "object",
"properties" => %{
"data" => %{
"type" => "object",
"description" => "Primary data"
},
"included" => %{
"type" => "array",
"description" => "Included related resources"
},
"meta" => %{
"type" => "object",
"description" => "Metadata about the response"
},
"links" => %{
"type" => "object",
"description" => "Links for pagination and relationships"
}
}
},
# Character schemas
"CharacterResource" => %{
"type" => "object",
"properties" => %{
"type" => %{"type" => "string", "enum" => ["characters"]},
"id" => %{"type" => "string"},
"attributes" => %{
"type" => "object",
"properties" => %{
"name" => %{"type" => "string"},
"eve_id" => %{"type" => "integer"},
"corporation_id" => %{"type" => "integer"},
"alliance_id" => %{"type" => "integer"},
"online" => %{"type" => "boolean"},
"location" => %{"type" => "object"},
"inserted_at" => %{"type" => "string", "format" => "date-time"},
"updated_at" => %{"type" => "string", "format" => "date-time"}
}
},
"relationships" => %{
"type" => "object",
"properties" => %{
"user" => %{
"type" => "object",
"properties" => %{
"data" => %{
"type" => "object",
"properties" => %{
"type" => %{"type" => "string"},
"id" => %{"type" => "string"}
}
}
}
}
}
}
}
},
"CharactersListResponse" => %{
"type" => "object",
"properties" => %{
"data" => %{
"type" => "array",
"items" => %{"$ref" => "#/components/schemas/CharacterResource"}
},
"meta" => %{
"type" => "object",
"properties" => %{
"page" => %{
"type" => "object",
"properties" => %{
"offset" => %{"type" => "integer"},
"limit" => %{"type" => "integer"},
"total" => %{"type" => "integer"}
}
}
}
}
}
},
# Map schemas
"MapResource" => %{
"type" => "object",
"properties" => %{
"type" => %{"type" => "string", "enum" => ["maps"]},
"id" => %{"type" => "string"},
"attributes" => %{
"type" => "object",
"properties" => %{
"name" => %{"type" => "string"},
"slug" => %{"type" => "string"},
"scope" => %{"type" => "string"},
"public_key" => %{"type" => "string"},
"archived" => %{"type" => "boolean"},
"inserted_at" => %{"type" => "string", "format" => "date-time"},
"updated_at" => %{"type" => "string", "format" => "date-time"}
}
},
"relationships" => %{
"type" => "object",
"properties" => %{
"owner" => %{
"type" => "object"
},
"characters" => %{
"type" => "object"
},
"acls" => %{
"type" => "object"
}
}
}
}
}
}
end
end


@@ -597,7 +597,7 @@ defmodule WandererAppWeb.Router do
scope "/api/v1" do
pipe_through :api_v1
# Custom combined endpoints
# Custom combined endpoint with map_id in path
get "/maps/:map_id/systems_and_connections",
WandererAppWeb.Api.MapSystemsConnectionsController,
:show
@@ -605,6 +605,18 @@ defmodule WandererAppWeb.Router do
# Forward all v1 requests to AshJsonApi router
# This will automatically generate RESTful JSON:API endpoints
# for all Ash resources once they're configured with the AshJsonApi extension
#
# NOTE: AshJsonApi generates flat routes (e.g., /api/v1/map_systems)
# Phoenix's `forward` cannot be used with dynamic path segments, so proper
# nested routes like /api/v1/maps/{id}/systems would require custom controllers.
#
# Current approach: Use flat routes with map_id in request body or filters:
# - POST /api/v1/map_systems with {"data": {"attributes": {"map_id": "..."}}}
# - GET /api/v1/map_systems?filter[map_id]=...
# - PATCH /api/v1/map_systems/{id} with map_id in body
#
# Authentication is handled by CheckJsonApiAuth which validates the Bearer
# token against the map's API key.
forward "/", WandererAppWeb.ApiV1Router
end
end


@@ -3,7 +3,7 @@ defmodule WandererApp.MixProject do
@source_url "https://github.com/wanderer-industries/wanderer"
@version "1.84.1"
@version "1.84.17"
def project do
[


@@ -144,33 +144,28 @@ The API v1 provides access to over 25 resources through the Ash Framework. Here
### Core Resources
- **Maps** (`/api/v1/maps`) - Map management with full CRUD operations
- **Characters** (`/api/v1/characters`) - Character tracking and management (GET, DELETE only)
- **Access Lists** (`/api/v1/access_lists`) - ACL management and permissions
- **Access List Members** (`/api/v1/access_list_members`) - ACL member management
- **Access Lists** (`/api/v1/access_lists`) - ACL management and permissions with full CRUD operations
- **Access List Members** (`/api/v1/access_list_members`) - ACL member management with full CRUD operations
- **Map Access Lists** (`/api/v1/map_access_lists`) - Map-ACL associations with full CRUD operations
### Map Resources
- **Map Systems** (`/api/v1/map_systems`) - Solar system data and metadata
- **Map Connections** (`/api/v1/map_connections`) - Wormhole connections
- **Map Signatures** (`/api/v1/map_system_signatures`) - Signature scanning data (GET, DELETE only)
- **Map Structures** (`/api/v1/map_system_structures`) - Structure information
- **Map Subscriptions** (`/api/v1/map_subscriptions`) - Subscription management (GET only)
- **Map Systems and Connections** (`/api/v1/maps/{map_id}/systems_and_connections`) - Combined endpoint (GET only)
- **Map Systems** (`/api/v1/map_systems`) - Solar system data and metadata with full CRUD operations (paginated: default 100, max 500)
- **Map Connections** (`/api/v1/map_connections`) - Wormhole connections with full CRUD operations
- **Map Signatures** (`/api/v1/map_system_signatures`) - Signature scanning data (read and delete only, paginated: default 50, max 200)
- **Map Structures** (`/api/v1/map_system_structures`) - Structure information with full CRUD operations
- **Map Subscriptions** (`/api/v1/map_subscriptions`) - Subscription management (read-only)
- **Map Default Settings** (`/api/v1/map_default_settings`) - Default map configurations with full CRUD operations
- **Map Systems and Connections** (`/api/v1/maps/{map_id}/systems_and_connections`) - Combined endpoint (read-only)
### System Resources
- **Map System Comments** (`/api/v1/map_system_comments`) - System annotations (GET only)
- **Map System Comments** (`/api/v1/map_system_comments`) - System annotations (read-only)
### User Resources
- **User Activities** (`/api/v1/user_activities`) - User activity tracking (GET only)
- **Map Character Settings** (`/api/v1/map_character_settings`) - Character preferences (GET only)
- **Map User Settings** (`/api/v1/map_user_settings`) - User map preferences (GET only)
- **User Activities** (`/api/v1/user_activities`) - User activity tracking (read-only, paginated: default 15)
- **Map Character Settings** (`/api/v1/map_character_settings`) - Character preferences (read-only)
- **Map User Settings** (`/api/v1/map_user_settings`) - User map preferences (read-only)
### Additional Resources
- **Map Webhook Subscriptions** (`/api/v1/map_webhook_subscriptions`) - Webhook management
- **Map Invites** (`/api/v1/map_invites`) - Map invitation system
- **Map Pings** (`/api/v1/map_pings`) - In-game ping tracking
- **Corp Wallet Transactions** (`/api/v1/corp_wallet_transactions`) - Corporation finances
*Note: Some resources have been restricted to read-only access for security and consistency. Resources marked as "(GET only)" support only read operations, while "(GET, DELETE only)" support read and delete operations.*
*Note: Resources marked as "full CRUD operations" support create, read, update, and delete. Resources marked as "read-only" support only GET operations. Resources marked as "read and delete only" support GET and DELETE operations. Pagination limits are configurable via `page[limit]` and `page[offset]` parameters where supported.*
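As a minimal sketch of how the pagination and filter parameters combine on the flat JSON:API routes (the placeholders `$API_BASE_URL`, `$API_TOKEN`, and `$MAP_ID` stand in for your server URL, map API key, and map UUID; `-g` disables curl's URL globbing so the bracketed parameter names pass through untouched):
```bash
# List map systems for one map, 50 per page, via the flat JSON:API route
curl -g -H "Authorization: Bearer $API_TOKEN" \
  "$API_BASE_URL/api/v1/map_systems?filter[map_id]=$MAP_ID&page[limit]=50&page[offset]=0"

# Combined read-only endpoint: systems and connections in a single response
curl -H "Authorization: Bearer $API_TOKEN" \
  "$API_BASE_URL/api/v1/maps/$MAP_ID/systems_and_connections"
```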
## API v1 Feature Set


@@ -0,0 +1,24 @@
defmodule WandererApp.Repo.Migrations.AddMapPerformanceIndexes do
@moduledoc """
Updates resources based on their most recent snapshots.
This file was autogenerated with `mix ash_postgres.generate_migrations`
"""
use Ecto.Migration
def up do
create index(:map_system_v1, [:map_id],
name: "map_system_v1_map_id_visible_index",
where: "visible = true"
)
create index(:map_chain_v1, [:map_id], name: "map_chain_v1_map_id_index")
end
def down do
drop_if_exists index(:map_chain_v1, [:map_id], name: "map_chain_v1_map_id_index")
drop_if_exists index(:map_system_v1, [:map_id], name: "map_system_v1_map_id_visible_index")
end
end


@@ -0,0 +1,144 @@
defmodule WandererApp.Repo.Migrations.FixDuplicateMapSlugs do
use Ecto.Migration
import Ecto.Query
def up do
# Check for duplicates first
has_duplicates = check_for_duplicates()
# If duplicates exist, drop the index first to allow fixing them
if has_duplicates do
IO.puts("Duplicates found, dropping index before cleanup...")
drop_index_if_exists()
end
# Fix duplicate slugs in maps_v1 table
fix_duplicate_slugs()
# Ensure unique index exists (recreate if needed)
ensure_unique_index()
end
def down do
# This migration is idempotent and safe to run multiple times
# No need to revert as it only fixes data integrity issues
:ok
end
defp check_for_duplicates do
duplicates_query = """
SELECT COUNT(*) as duplicate_count
FROM (
SELECT slug
FROM maps_v1
GROUP BY slug
HAVING count(*) > 1
) duplicates
"""
case repo().query(duplicates_query, []) do
{:ok, %{rows: [[count]]}} when count > 0 ->
IO.puts("Found #{count} duplicate slug(s)")
true
{:ok, %{rows: [[0]]}} ->
false
{:error, error} ->
IO.puts("Error checking for duplicates: #{inspect(error)}")
false
end
end
defp drop_index_if_exists do
index_exists_query = """
SELECT EXISTS (
SELECT 1
FROM pg_indexes
WHERE tablename = 'maps_v1'
AND indexname = 'maps_v1_unique_slug_index'
)
"""
case repo().query(index_exists_query, []) do
{:ok, %{rows: [[true]]}} ->
IO.puts("Dropping existing unique index...")
execute("DROP INDEX IF EXISTS maps_v1_unique_slug_index")
IO.puts("✓ Index dropped")
{:ok, %{rows: [[false]]}} ->
IO.puts("No existing index to drop")
{:error, error} ->
IO.puts("Error checking index: #{inspect(error)}")
end
end
defp fix_duplicate_slugs do
# Get all duplicate slugs with their IDs
duplicates_query = """
SELECT slug, array_agg(id::text ORDER BY updated_at) as ids
FROM maps_v1
GROUP BY slug
HAVING count(*) > 1
"""
case repo().query(duplicates_query, []) do
{:ok, %{rows: rows}} when length(rows) > 0 ->
IO.puts("Fixing #{length(rows)} duplicate slug(s)...")
Enum.each(rows, fn [slug, ids] ->
IO.puts("Processing duplicate slug: #{slug} (#{length(ids)} occurrences)")
# Keep the first one (oldest), rename the rest
[_keep_id | rename_ids] = ids
rename_ids
|> Enum.with_index(2)
|> Enum.each(fn {id_string, n} ->
new_slug = "#{slug}-#{n}"
# Use parameterized query for safety
update_query = "UPDATE maps_v1 SET slug = $1 WHERE id::text = $2"
repo().query!(update_query, [new_slug, id_string])
IO.puts(" ✓ Renamed #{id_string} to '#{new_slug}'")
end)
end)
IO.puts("✓ All duplicate slugs fixed!")
{:ok, %{rows: []}} ->
IO.puts("No duplicate slugs to fix")
{:error, error} ->
IO.puts("Error checking for duplicates: #{inspect(error)}")
raise "Failed to check for duplicate slugs: #{inspect(error)}"
end
end
defp ensure_unique_index do
# Check if index exists
index_exists_query = """
SELECT EXISTS (
SELECT 1
FROM pg_indexes
WHERE tablename = 'maps_v1'
AND indexname = 'maps_v1_unique_slug_index'
)
"""
case repo().query(index_exists_query, []) do
{:ok, %{rows: [[true]]}} ->
IO.puts("Unique index on slug already exists")
{:ok, %{rows: [[false]]}} ->
IO.puts("Creating unique index on slug...")
create_if_not_exists index(:maps_v1, [:slug], unique: true, name: :maps_v1_unique_slug_index)
IO.puts("✓ Index created successfully!")
{:error, error} ->
IO.puts("Error checking index: #{inspect(error)}")
raise "Failed to check index: #{inspect(error)}"
end
end
end


@@ -0,0 +1,201 @@
{
"attributes": [
{
"allow_nil?": false,
"default": "fragment(\"gen_random_uuid()\")",
"generated?": false,
"primary_key?": true,
"references": null,
"size": null,
"source": "id",
"type": "uuid"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "solar_system_source",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "solar_system_target",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "mass_status",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "time_status",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "2",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "ship_size_type",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "type",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "wormhole_type",
"type": "text"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "count_of_passage",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "locked",
"type": "boolean"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "custom_info",
"type": "text"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "inserted_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "updated_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": {
"deferrable": false,
"destination_attribute": "id",
"destination_attribute_default": null,
"destination_attribute_generated": null,
"index?": false,
"match_type": null,
"match_with": null,
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"name": "map_chain_v1_map_id_fkey",
"on_delete": null,
"on_update": null,
"primary_key?": true,
"schema": null,
"table": "maps_v1"
},
"size": null,
"source": "map_id",
"type": "uuid"
}
],
"base_filter": null,
"check_constraints": [],
"custom_indexes": [
{
"all_tenants?": false,
"concurrently": false,
"error_fields": [
"map_id"
],
"fields": [
{
"type": "atom",
"value": "map_id"
}
],
"include": null,
"message": null,
"name": "map_chain_v1_map_id_index",
"nulls_distinct": true,
"prefix": null,
"table": null,
"unique": false,
"using": null,
"where": null
}
],
"custom_statements": [],
"has_create_action": true,
"hash": "43AE341D09AA875BB0F0D2ACE7AC6301064697D656FD1729FC36E6A1F77E4CB7",
"identities": [],
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"repo": "Elixir.WandererApp.Repo",
"schema": null,
"table": "map_chain_v1"
}


@@ -0,0 +1,260 @@
{
"attributes": [
{
"allow_nil?": false,
"default": "fragment(\"gen_random_uuid()\")",
"generated?": false,
"primary_key?": true,
"references": null,
"size": null,
"source": "id",
"type": "uuid"
},
{
"allow_nil?": false,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "solar_system_id",
"type": "bigint"
},
{
"allow_nil?": false,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "name",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "custom_name",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "description",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "tag",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "temporary_name",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "labels",
"type": "text"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "status",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "true",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "visible",
"type": "boolean"
},
{
"allow_nil?": true,
"default": "false",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "locked",
"type": "boolean"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "position_x",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "position_y",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "added_at",
"type": "utc_datetime"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "linked_sig_eve_id",
"type": "text"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "inserted_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "updated_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": {
"deferrable": false,
"destination_attribute": "id",
"destination_attribute_default": null,
"destination_attribute_generated": null,
"index?": false,
"match_type": null,
"match_with": null,
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"name": "map_system_v1_map_id_fkey",
"on_delete": null,
"on_update": null,
"primary_key?": true,
"schema": null,
"table": "maps_v1"
},
"size": null,
"source": "map_id",
"type": "uuid"
}
],
"base_filter": null,
"check_constraints": [],
"custom_indexes": [
{
"all_tenants?": false,
"concurrently": false,
"error_fields": [
"map_id"
],
"fields": [
{
"type": "atom",
"value": "map_id"
}
],
"include": null,
"message": null,
"name": "map_system_v1_map_id_visible_index",
"nulls_distinct": true,
"prefix": null,
"table": null,
"unique": false,
"using": null,
"where": "visible = true"
}
],
"custom_statements": [],
"has_create_action": true,
"hash": "AD7B82611EDA495AD35F114406C7F0C2D941C10E51105361002AA3144D7F7EA9",
"identities": [
{
"all_tenants?": false,
"base_filter": null,
"index_name": "map_system_v1_map_solar_system_id_index",
"keys": [
{
"type": "atom",
"value": "map_id"
},
{
"type": "atom",
"value": "solar_system_id"
}
],
"name": "map_solar_system_id",
"nils_distinct?": true,
"where": null
}
],
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"repo": "Elixir.WandererApp.Repo",
"schema": null,
"table": "map_system_v1"
}


@@ -2,4 +2,15 @@
export ERL_AFLAGS="-proto_dist inet6_tcp"
export RELEASE_DISTRIBUTION="name"
export RELEASE_NODE="${FLY_APP_NAME}-${FLY_IMAGE_REF##*-}@${FLY_PRIVATE_IP}"
# Use custom RELEASE_NODE if set, otherwise detect environment
if [ -n "$RELEASE_NODE" ]; then
# RELEASE_NODE already set, use as-is
export RELEASE_NODE
elif [ -n "$FLY_APP_NAME" ] && [ -n "$FLY_IMAGE_REF" ] && [ -n "$FLY_PRIVATE_IP" ]; then
# Fly.io environment detected
export RELEASE_NODE="${FLY_APP_NAME}-${FLY_IMAGE_REF##*-}@${FLY_PRIVATE_IP}"
else
# Generic deployment - use hostname
export RELEASE_NODE="wanderer@$(hostname)"
fi
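A quick illustration of the new precedence, as an assumption-laden sketch rather than project documentation: exporting `RELEASE_NODE` before the release boots makes the script keep the value as-is, while leaving it unset falls back to Fly.io detection or the hostname default.
```bash
# Hypothetical override for a generic (non-Fly) deployment:
export RELEASE_DISTRIBUTION="name"
export RELEASE_NODE="wanderer@10.0.0.5"   # example node name; adjust to your host/IP
# With RELEASE_NODE unset, a Fly.io machine gets "${FLY_APP_NAME}-...@${FLY_PRIVATE_IP}"
# and any other host falls back to "wanderer@$(hostname)".
```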


@@ -0,0 +1,18 @@
# Example environment file for manual API tests
# Copy this to .env and fill in your values
# Your Wanderer server URL
API_BASE_URL=http://localhost:8000
# Your map's slug (found in the map URL: /your-map-slug)
MAP_SLUG=your-map-slug
# Your map's public API token (found in map settings)
API_TOKEN=your_map_public_api_key_here
# For character_eve_id testing:
# Find a valid character EVE ID from your database
VALID_CHAR_ID=111111111
# Use any non-existent character ID for invalid tests
INVALID_CHAR_ID=999999999


@@ -0,0 +1,249 @@
# Manual cURL Testing for Character EVE ID Fix (Issue #539)
This guide provides standalone curl commands to manually test the character_eve_id fix.
## Prerequisites
1. **Get your Map's Public API Token:**
- Log into Wanderer
- Go to your map settings
- Find the "Public API Key" section
- Copy your API token
2. **Find your Map Slug:**
- Look at your map URL: `https://your-instance.com/your-map-slug`
- The slug is the last part of the URL
3. **Get a valid Character EVE ID:**
```bash
# Option 1: Query your database
psql $DATABASE_URL -c "SELECT eve_id, name FROM character_v1 WHERE deleted = false LIMIT 5;"
# Option 2: Use the characters API
curl -H "Authorization: Bearer YOUR_API_TOKEN" \
http://localhost:8000/api/characters
```
4. **Get a Solar System ID from your map:**
```bash
curl -H "Authorization: Bearer YOUR_API_TOKEN" \
http://localhost:8000/api/maps/YOUR_SLUG/systems \
| jq '.data[0].solar_system_id'
```
## Set Environment Variables (for convenience)
```bash
export API_BASE_URL="http://localhost:8000"
export MAP_SLUG="your-map-slug"
export API_TOKEN="your_api_token_here"
export SOLAR_SYSTEM_ID="30000142" # Replace with actual system ID from your map
export VALID_CHAR_ID="111111111" # Replace with real character eve_id
export INVALID_CHAR_ID="999999999" # Non-existent character
```
---
## Test 1: Create Signature with Valid character_eve_id
**Expected Result:** HTTP 201, returned object has the submitted character_eve_id
```bash
curl -v -X POST \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"solar_system_id": '"$SOLAR_SYSTEM_ID"',
"eve_id": "TEST-001",
"character_eve_id": "'"$VALID_CHAR_ID"'",
"group": "wormhole",
"kind": "cosmic_signature",
"name": "Test Signature 1"
}' \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures" | jq '.'
```
**Verification:**
```bash
# The response should contain:
# "character_eve_id": "111111111" (your VALID_CHAR_ID)
```
---
## Test 2: Create Signature with Invalid character_eve_id
**Expected Result:** HTTP 422 with error "invalid_character"
```bash
curl -v -X POST \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"solar_system_id": '"$SOLAR_SYSTEM_ID"',
"eve_id": "TEST-002",
"character_eve_id": "'"$INVALID_CHAR_ID"'",
"group": "wormhole",
"kind": "cosmic_signature"
}' \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures" | jq '.'
```
**Expected Response:**
```json
{
"error": "invalid_character"
}
```
---
## Test 3: Create Signature WITHOUT character_eve_id (Backward Compatibility)
**Expected Result:** HTTP 201, uses map owner's character_eve_id as fallback
```bash
curl -v -X POST \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"solar_system_id": '"$SOLAR_SYSTEM_ID"',
"eve_id": "TEST-003",
"group": "data",
"kind": "cosmic_signature",
"name": "Test Signature 3"
}' \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures" | jq '.'
```
**Verification:**
```bash
# The response should contain the map owner's character_eve_id
# This proves backward compatibility is maintained
```
---
## Test 4: Update Signature with Valid character_eve_id
**Expected Result:** HTTP 200, returned object has the submitted character_eve_id
```bash
# First, save a signature ID from Test 1 or 3
export SIG_ID="paste-signature-id-here"
curl -v -X PUT \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "Updated Signature Name",
"character_eve_id": "'"$VALID_CHAR_ID"'",
"description": "Updated via API"
}' \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures/$SIG_ID" | jq '.'
```
**Verification:**
```bash
# The response should contain:
# "character_eve_id": "111111111" (your VALID_CHAR_ID)
```
---
## Test 5: Update Signature with Invalid character_eve_id
**Expected Result:** HTTP 422 with error "invalid_character"
```bash
curl -v -X PUT \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "Should Fail",
"character_eve_id": "'"$INVALID_CHAR_ID"'"
}' \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures/$SIG_ID" | jq '.'
```
**Expected Response:**
```json
{
"error": "invalid_character"
}
```
---
## Cleanup
Delete test signatures:
```bash
# List all signatures to find IDs
curl -H "Authorization: Bearer $API_TOKEN" \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures" | jq '.data[] | {id, eve_id, name}'
# Delete specific signature
export SIG_ID="signature-uuid-here"
curl -v -X DELETE \
-H "Authorization: Bearer $API_TOKEN" \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures/$SIG_ID"
```
---
## Quick Debugging Tips
### View All Signatures
```bash
curl -H "Authorization: Bearer $API_TOKEN" \
"$API_BASE_URL/api/maps/$MAP_SLUG/signatures" \
| jq '.data[] | {id, eve_id, character_eve_id, name}'
```
### View All Characters in Database
```bash
curl -H "Authorization: Bearer $API_TOKEN" \
"$API_BASE_URL/api/characters" \
| jq '.[] | {eve_id, name}'
```
### View All Systems in Map
```bash
curl -H "Authorization: Bearer $API_TOKEN" \
"$API_BASE_URL/api/maps/$MAP_SLUG/systems" \
| jq '.data[] | {id, solar_system_id, name}'
```
---
## Expected Behavior Summary
| Test Case | HTTP Status | character_eve_id in Response |
|-----------|-------------|------------------------------|
| Create with valid char ID | 201 | Matches submitted value |
| Create with invalid char ID | 422 | N/A (error returned) |
| Create without char ID | 201 | Map owner's char ID (fallback) |
| Update with valid char ID | 200 | Matches submitted value |
| Update with invalid char ID | 422 | N/A (error returned) |
---
## Troubleshooting
### "Unauthorized (invalid token for map)"
- Double-check your API_TOKEN matches the map's public API key
- Verify the token doesn't have extra spaces or newlines
### "Map not found"
- Verify your MAP_SLUG is correct
- Try using the map UUID instead of slug
### "System not found for solar_system_id"
- The system must already exist in your map
- Run the "View All Systems" command to find valid system IDs
### "invalid_character" when using what should be valid
- Verify the character exists: `SELECT * FROM character_v1 WHERE eve_id = 'YOUR_ID';`
- Make sure `deleted = false` for the character


@@ -0,0 +1,289 @@
#!/bin/bash
# test/manual/api/test_character_eve_id_fix.sh
# ─── Manual Test for Character EVE ID Fix (Issue #539) ────────────────────────
#
# This script tests the fix for GitHub issue #539 where character_eve_id
# was being ignored when creating/updating signatures via the REST API.
#
# Usage:
# 1. Create a .env file in this directory with:
# API_TOKEN=your_map_public_api_key
# API_BASE_URL=http://localhost:8000 # or your server URL
# MAP_SLUG=your_map_slug
# VALID_CHAR_ID=111111111 # A character that exists in your database
# INVALID_CHAR_ID=999999999 # A character that does NOT exist
#
# 2. Run: ./test_character_eve_id_fix.sh
#
# Prerequisites:
# - curl and jq must be installed
# - A map must exist with a valid API token
# - At least one system must be added to the map
set -eu
source "$(dirname "$0")/utils.sh"
echo "═══════════════════════════════════════════════════════════════════"
echo "Testing Character EVE ID Fix (GitHub Issue #539)"
echo "═══════════════════════════════════════════════════════════════════"
echo ""
# Check required environment variables
: "${API_BASE_URL:?Error: API_BASE_URL not set}"
: "${MAP_SLUG:?Error: MAP_SLUG not set}"
: "${VALID_CHAR_ID:?Error: VALID_CHAR_ID not set (provide a character eve_id that exists in DB)}"
: "${INVALID_CHAR_ID:?Error: INVALID_CHAR_ID not set (provide a non-existent character eve_id)}"
# Get a system to use for testing
echo "📋 Fetching available systems from map..."
SYSTEMS_RAW=$(make_request GET "$API_BASE_URL/api/maps/$MAP_SLUG/systems")
SYSTEMS_STATUS=$(parse_status "$SYSTEMS_RAW")
SYSTEMS_RESPONSE=$(parse_response "$SYSTEMS_RAW")
if [ "$SYSTEMS_STATUS" != "200" ]; then
echo "❌ Failed to fetch systems (HTTP $SYSTEMS_STATUS)"
echo "$SYSTEMS_RESPONSE"
exit 1
fi
# Extract first system's solar_system_id
SOLAR_SYSTEM_ID=$(echo "$SYSTEMS_RESPONSE" | jq -r '.data[0].solar_system_id // empty')
if [ -z "$SOLAR_SYSTEM_ID" ]; then
echo "❌ No systems found in map. Please add at least one system first."
exit 1
fi
echo "✅ Using solar_system_id: $SOLAR_SYSTEM_ID"
echo ""
# ═══════════════════════════════════════════════════════════════════════
# Test 1: Create signature with valid character_eve_id
# ═══════════════════════════════════════════════════════════════════════
echo "─────────────────────────────────────────────────────────────────"
echo "Test 1: Create signature with VALID character_eve_id"
echo "─────────────────────────────────────────────────────────────────"
PAYLOAD1=$(cat <<EOF
{
"solar_system_id": $SOLAR_SYSTEM_ID,
"eve_id": "TEST-001",
"character_eve_id": "$VALID_CHAR_ID",
"group": "wormhole",
"kind": "cosmic_signature",
"name": "Test Sig 1"
}
EOF
)
echo "Request:"
echo "$PAYLOAD1" | jq '.'
echo ""
RAW1=$(make_request POST "$API_BASE_URL/api/maps/$MAP_SLUG/signatures" "$PAYLOAD1")
STATUS1=$(parse_status "$RAW1")
RESPONSE1=$(parse_response "$RAW1")
echo "Response (HTTP $STATUS1):"
echo "$RESPONSE1" | jq '.'
echo ""
if [ "$STATUS1" = "201" ]; then
RETURNED_CHAR_ID=$(echo "$RESPONSE1" | jq -r '.data.character_eve_id')
if [ "$RETURNED_CHAR_ID" = "$VALID_CHAR_ID" ]; then
echo "✅ PASS: Signature created with correct character_eve_id: $RETURNED_CHAR_ID"
SIG_ID_1=$(echo "$RESPONSE1" | jq -r '.data.id')
else
echo "❌ FAIL: Expected character_eve_id=$VALID_CHAR_ID, got $RETURNED_CHAR_ID"
fi
else
echo "❌ FAIL: Expected HTTP 201, got $STATUS1"
fi
echo ""
# ═══════════════════════════════════════════════════════════════════════
# Test 2: Create signature with invalid character_eve_id
# ═══════════════════════════════════════════════════════════════════════
echo "─────────────────────────────────────────────────────────────────"
echo "Test 2: Create signature with INVALID character_eve_id"
echo "─────────────────────────────────────────────────────────────────"
PAYLOAD2=$(cat <<EOF
{
"solar_system_id": $SOLAR_SYSTEM_ID,
"eve_id": "TEST-002",
"character_eve_id": "$INVALID_CHAR_ID",
"group": "wormhole",
"kind": "cosmic_signature"
}
EOF
)
echo "Request:"
echo "$PAYLOAD2" | jq '.'
echo ""
RAW2=$(make_request POST "$API_BASE_URL/api/maps/$MAP_SLUG/signatures" "$PAYLOAD2")
STATUS2=$(parse_status "$RAW2")
RESPONSE2=$(parse_response "$RAW2")
echo "Response (HTTP $STATUS2):"
echo "$RESPONSE2" | jq '.'
echo ""
if [ "$STATUS2" = "422" ]; then
ERROR_MSG=$(echo "$RESPONSE2" | jq -r '.error // empty')
if [ "$ERROR_MSG" = "invalid_character" ]; then
echo "✅ PASS: Correctly rejected invalid character_eve_id with error: $ERROR_MSG"
else
echo "⚠️ PARTIAL: Got HTTP 422 but unexpected error message: $ERROR_MSG"
fi
else
echo "❌ FAIL: Expected HTTP 422, got $STATUS2"
fi
echo ""
# ═══════════════════════════════════════════════════════════════════════
# Test 3: Create signature WITHOUT character_eve_id (fallback test)
# ═══════════════════════════════════════════════════════════════════════
echo "─────────────────────────────────────────────────────────────────"
echo "Test 3: Create signature WITHOUT character_eve_id (fallback)"
echo "─────────────────────────────────────────────────────────────────"
PAYLOAD3=$(cat <<EOF
{
"solar_system_id": $SOLAR_SYSTEM_ID,
"eve_id": "TEST-003",
"group": "data",
"kind": "cosmic_signature",
"name": "Test Sig 3"
}
EOF
)
echo "Request:"
echo "$PAYLOAD3" | jq '.'
echo ""
RAW3=$(make_request POST "$API_BASE_URL/api/maps/$MAP_SLUG/signatures" "$PAYLOAD3")
STATUS3=$(parse_status "$RAW3")
RESPONSE3=$(parse_response "$RAW3")
echo "Response (HTTP $STATUS3):"
echo "$RESPONSE3" | jq '.'
echo ""
if [ "$STATUS3" = "201" ]; then
RETURNED_CHAR_ID=$(echo "$RESPONSE3" | jq -r '.data.character_eve_id')
echo "✅ PASS: Signature created with fallback character_eve_id: $RETURNED_CHAR_ID"
echo " (This should be the map owner's character)"
SIG_ID_3=$(echo "$RESPONSE3" | jq -r '.data.id')
else
echo "❌ FAIL: Expected HTTP 201, got $STATUS3"
fi
echo ""
# ═══════════════════════════════════════════════════════════════════════
# Test 4: Update signature with valid character_eve_id
# ═══════════════════════════════════════════════════════════════════════
if [ -n "${SIG_ID_1:-}" ]; then
echo "─────────────────────────────────────────────────────────────────"
echo "Test 4: Update signature with VALID character_eve_id"
echo "─────────────────────────────────────────────────────────────────"
PAYLOAD4=$(cat <<EOF
{
"name": "Updated Test Sig 1",
"character_eve_id": "$VALID_CHAR_ID",
"description": "Updated via API"
}
EOF
)
echo "Request:"
echo "$PAYLOAD4" | jq '.'
echo ""
RAW4=$(make_request PUT "$API_BASE_URL/api/maps/$MAP_SLUG/signatures/$SIG_ID_1" "$PAYLOAD4")
STATUS4=$(parse_status "$RAW4")
RESPONSE4=$(parse_response "$RAW4")
echo "Response (HTTP $STATUS4):"
echo "$RESPONSE4" | jq '.'
echo ""
if [ "$STATUS4" = "200" ]; then
RETURNED_CHAR_ID=$(echo "$RESPONSE4" | jq -r '.data.character_eve_id')
if [ "$RETURNED_CHAR_ID" = "$VALID_CHAR_ID" ]; then
echo "✅ PASS: Signature updated with correct character_eve_id: $RETURNED_CHAR_ID"
else
echo "❌ FAIL: Expected character_eve_id=$VALID_CHAR_ID, got $RETURNED_CHAR_ID"
fi
else
echo "❌ FAIL: Expected HTTP 200, got $STATUS4"
fi
echo ""
fi
# ═══════════════════════════════════════════════════════════════════════
# Test 5: Update signature with invalid character_eve_id
# ═══════════════════════════════════════════════════════════════════════
if [ -n "${SIG_ID_3:-}" ]; then
echo "─────────────────────────────────────────────────────────────────"
echo "Test 5: Update signature with INVALID character_eve_id"
echo "─────────────────────────────────────────────────────────────────"
PAYLOAD5=$(cat <<EOF
{
"name": "Should Fail",
"character_eve_id": "$INVALID_CHAR_ID"
}
EOF
)
echo "Request:"
echo "$PAYLOAD5" | jq '.'
echo ""
RAW5=$(make_request PUT "$API_BASE_URL/api/maps/$MAP_SLUG/signatures/$SIG_ID_3" "$PAYLOAD5")
STATUS5=$(parse_status "$RAW5")
RESPONSE5=$(parse_response "$RAW5")
echo "Response (HTTP $STATUS5):"
echo "$RESPONSE5" | jq '.'
echo ""
if [ "$STATUS5" = "422" ]; then
ERROR_MSG=$(echo "$RESPONSE5" | jq -r '.error // empty')
if [ "$ERROR_MSG" = "invalid_character" ]; then
echo "✅ PASS: Correctly rejected invalid character_eve_id with error: $ERROR_MSG"
else
echo "⚠️ PARTIAL: Got HTTP 422 but unexpected error message: $ERROR_MSG"
fi
else
echo "❌ FAIL: Expected HTTP 422, got $STATUS5"
fi
echo ""
fi
# ═══════════════════════════════════════════════════════════════════════
# Cleanup (optional)
# ═══════════════════════════════════════════════════════════════════════
echo "─────────────────────────────────────────────────────────────────"
echo "Cleanup"
echo "─────────────────────────────────────────────────────────────────"
echo "Created signature IDs: ${SIG_ID_1:-none} ${SIG_ID_3:-none}"
echo ""
echo "To clean up manually, delete these signatures via the UI or API:"
for sig_id in ${SIG_ID_1:-} ${SIG_ID_3:-}; do
if [ -n "$sig_id" ]; then
echo " curl -X DELETE -H 'Authorization: Bearer \$API_TOKEN' \\"
echo " $API_BASE_URL/api/maps/$MAP_SLUG/signatures/$sig_id"
fi
done
echo ""
echo "═══════════════════════════════════════════════════════════════════"
echo "Test Complete!"
echo "═══════════════════════════════════════════════════════════════════"

View File

@@ -0,0 +1,343 @@
defmodule WandererApp.Map.MapPoolTest do
use ExUnit.Case, async: false
alias WandererApp.Map.{MapPool, MapPoolDynamicSupervisor, Reconciler}
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
setup do
# Clean up any existing test data
cleanup_test_data()
# Check if required infrastructure is running
registries_running? =
try do
Registry.keys(@registry, self()) != :error
rescue
_ -> false
end
reconciler_running? = Process.whereis(Reconciler) != nil
on_exit(fn ->
cleanup_test_data()
end)
{:ok, registries_running: registries_running?, reconciler_running: reconciler_running?}
end
defp cleanup_test_data do
# Clean up test caches
WandererApp.Cache.delete("started_maps")
Cachex.clear(@cache)
end
describe "garbage collection with synchronous stop" do
@tag :skip
test "garbage collector successfully stops map with synchronous call" do
# This test would require setting up a full map pool with a test map
# Skipping for now as it requires more complex setup with actual map data
:ok
end
@tag :skip
test "garbage collector handles stop failures gracefully" do
# This test would verify error handling when stop fails
:ok
end
end
describe "cache lookup with registry fallback" do
test "stop_map handles cache miss by scanning registry", %{registries_running: registries_running?} do
if registries_running? do
# Setup: Create a map_id that's not in cache but will be found in registry scan
map_id = "test_map_#{:rand.uniform(1_000_000)}"
# Verify cache is empty for this map
assert {:ok, nil} = Cachex.get(@cache, map_id)
# Call stop_map - should handle gracefully with fallback
assert :ok = MapPoolDynamicSupervisor.stop_map(map_id)
else
# Skip test if registries not running
:ok
end
end
test "stop_map handles non-existent pool_uuid in registry", %{registries_running: registries_running?} do
if registries_running? do
map_id = "test_map_#{:rand.uniform(1_000_000)}"
fake_uuid = "fake_uuid_#{:rand.uniform(1_000_000)}"
# Put fake uuid in cache that doesn't exist in registry
Cachex.put(@cache, map_id, fake_uuid)
# Call stop_map - should handle gracefully with fallback
assert :ok = MapPoolDynamicSupervisor.stop_map(map_id)
else
:ok
end
end
test "stop_map updates cache when found via registry scan", %{registries_running: registries_running?} do
if registries_running? do
# This test would require a running pool with registered maps
# For now, we verify the fallback logic doesn't crash
map_id = "test_map_#{:rand.uniform(1_000_000)}"
assert :ok = MapPoolDynamicSupervisor.stop_map(map_id)
else
:ok
end
end
end
describe "state cleanup atomicity" do
@tag :skip
test "rollback occurs when registry update fails" do
# This would require mocking Registry.update_value to fail
# Skipping for now as it requires more complex mocking setup
:ok
end
@tag :skip
test "rollback occurs when cache delete fails" do
# This would require mocking Cachex.del to fail
:ok
end
@tag :skip
test "successful cleanup updates all three state stores" do
# This would verify Registry, Cache, and GenServer state are all updated
:ok
end
end
describe "Reconciler - zombie map detection and cleanup" do
test "reconciler detects zombie maps in started_maps cache", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
# Setup: Add maps to started_maps that aren't in any registry
zombie_map_id = "zombie_map_#{:rand.uniform(1_000_000)}"
WandererApp.Cache.insert_or_update(
"started_maps",
[zombie_map_id],
fn existing -> [zombie_map_id | existing] |> Enum.uniq() end
)
# Get started_maps
{:ok, started_maps} = WandererApp.Cache.lookup("started_maps", [])
assert zombie_map_id in started_maps
# Trigger reconciliation
send(Reconciler, :reconcile)
# Give it time to process
Process.sleep(200)
# Verify zombie was cleaned up
{:ok, started_maps_after} = WandererApp.Cache.lookup("started_maps", [])
refute zombie_map_id in started_maps_after
else
:ok
end
end
test "reconciler cleans up zombie map caches", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
zombie_map_id = "zombie_map_#{:rand.uniform(1_000_000)}"
# Setup zombie state
WandererApp.Cache.insert_or_update(
"started_maps",
[zombie_map_id],
fn existing -> [zombie_map_id | existing] |> Enum.uniq() end
)
WandererApp.Cache.insert("map_#{zombie_map_id}:started", true)
Cachex.put(@cache, zombie_map_id, "fake_uuid")
# Trigger reconciliation
send(Reconciler, :reconcile)
Process.sleep(200)
# Verify all caches cleaned
{:ok, started_maps} = WandererApp.Cache.lookup("started_maps", [])
refute zombie_map_id in started_maps
{:ok, cache_entry} = Cachex.get(@cache, zombie_map_id)
assert cache_entry == nil
else
:ok
end
end
end
describe "Reconciler - orphan map detection and fix" do
@tag :skip
test "reconciler detects orphan maps in registry" do
# This would require setting up a pool with maps in registry
# but not in started_maps cache
:ok
end
@tag :skip
test "reconciler adds orphan maps to started_maps cache" do
# This would verify orphan maps get added to the cache
:ok
end
end
describe "Reconciler - cache inconsistency detection and fix" do
test "reconciler detects map with missing cache entry", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
# This test verifies the reconciler can detect when a map
# is in the registry but has no cache entry
# Since we can't easily set up a full pool, we test the detection logic
map_id = "test_map_#{:rand.uniform(1_000_000)}"
# Ensure no cache entry
Cachex.del(@cache, map_id)
# The reconciler would detect this if the map was in a registry
# For now, we just verify the logic doesn't crash
send(Reconciler, :reconcile)
Process.sleep(200)
# No assertions needed - just verifying no crashes
end
end
test "reconciler detects cache pointing to non-existent pool", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
map_id = "test_map_#{:rand.uniform(1_000_000)}"
fake_uuid = "fake_uuid_#{:rand.uniform(1_000_000)}"
# Put fake uuid in cache
Cachex.put(@cache, map_id, fake_uuid)
# Trigger reconciliation
send(Reconciler, :reconcile)
Process.sleep(200)
# Cache entry should be removed since pool doesn't exist
{:ok, cache_entry} = Cachex.get(@cache, map_id)
assert cache_entry == nil
else
:ok
end
end
end
describe "Reconciler - stats and telemetry" do
test "reconciler emits telemetry events", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
# Setup telemetry handler
test_pid = self()
:telemetry.attach(
"test-reconciliation",
[:wanderer_app, :map, :reconciliation],
fn _event, measurements, _metadata, _config ->
send(test_pid, {:telemetry, measurements})
end,
nil
)
# Trigger reconciliation
send(Reconciler, :reconcile)
Process.sleep(200)
# Should receive telemetry event
assert_receive {:telemetry, measurements}, 500
assert is_integer(measurements.total_started_maps)
assert is_integer(measurements.total_registry_maps)
assert is_integer(measurements.zombie_maps)
assert is_integer(measurements.orphan_maps)
assert is_integer(measurements.cache_inconsistencies)
# Cleanup
:telemetry.detach("test-reconciliation")
else
:ok
end
end
end
describe "Reconciler - manual trigger" do
test "trigger_reconciliation runs reconciliation immediately", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
zombie_map_id = "zombie_map_#{:rand.uniform(1_000_000)}"
# Setup zombie state
WandererApp.Cache.insert_or_update(
"started_maps",
[zombie_map_id],
fn existing -> [zombie_map_id | existing] |> Enum.uniq() end
)
# Verify it exists
{:ok, started_maps_before} = WandererApp.Cache.lookup("started_maps", [])
assert zombie_map_id in started_maps_before
# Trigger manual reconciliation
Reconciler.trigger_reconciliation()
Process.sleep(200)
# Verify zombie was cleaned up
{:ok, started_maps_after} = WandererApp.Cache.lookup("started_maps", [])
refute zombie_map_id in started_maps_after
else
:ok
end
end
end
describe "edge cases and error handling" do
test "stop_map with cache error returns ok", %{registries_running: registries_running?} do
if registries_running? do
map_id = "test_map_#{:rand.uniform(1_000_000)}"
# Even if cache operations fail, should return :ok
assert :ok = MapPoolDynamicSupervisor.stop_map(map_id)
else
:ok
end
end
test "reconciler handles empty registries gracefully", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
# Clear everything
cleanup_test_data()
# Should not crash even with empty data
send(Reconciler, :reconcile)
Process.sleep(200)
# No assertions - just verifying no crash
assert true
else
:ok
end
end
test "reconciler handles nil values in caches", %{reconciler_running: reconciler_running?} do
if reconciler_running? do
map_id = "test_map_#{:rand.uniform(1_000_000)}"
# Explicitly set nil
Cachex.put(@cache, map_id, nil)
# Should handle gracefully
send(Reconciler, :reconcile)
Process.sleep(200)
assert true
else
:ok
end
end
end
end

View File

@@ -580,6 +580,155 @@ defmodule WandererApp.Map.Operations.SignaturesTest do
end
end
describe "character_eve_id validation" do
test "create_signature uses provided character_eve_id when valid" do
# Create a test character
{:ok, character} =
WandererApp.Api.Character.create(%{
eve_id: "111111111",
name: "Test Character"
})
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
params = %{
"solar_system_id" => 30_000_142,
"eve_id" => "ABC-123",
"character_eve_id" => character.eve_id
}
MapTestHelpers.expect_map_server_error(fn ->
result = Signatures.create_signature(conn, params)
case result do
{:ok, data} ->
# Should use the provided character_eve_id, not the owner's
assert Map.get(data, "character_eve_id") == character.eve_id
{:error, _} ->
# System not found error is acceptable
:ok
end
end)
end
test "create_signature rejects invalid character_eve_id" do
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
params = %{
"solar_system_id" => 30_000_142,
"eve_id" => "ABC-123",
"character_eve_id" => "invalid_char_id_999"
}
MapTestHelpers.expect_map_server_error(fn ->
result = Signatures.create_signature(conn, params)
# Should return invalid_character error
assert {:error, :invalid_character} = result
end)
end
test "create_signature falls back to owner when character_eve_id not provided" do
owner_char_id = "888888888"
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: owner_char_id,
owner_user_id: Ecto.UUID.generate()
}
}
params = %{
"solar_system_id" => 30_000_142,
"eve_id" => "ABC-123"
}
MapTestHelpers.expect_map_server_error(fn ->
result = Signatures.create_signature(conn, params)
case result do
{:ok, data} ->
# Should use the owner's character_eve_id
assert Map.get(data, "character_eve_id") == owner_char_id
{:error, _} ->
# System not found error is acceptable
:ok
end
end)
end
test "update_signature respects provided character_eve_id when valid" do
# Create a test character
{:ok, character} =
WandererApp.Api.Character.create(%{
eve_id: "222222222",
name: "Another Test Character"
})
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
sig_id = Ecto.UUID.generate()
params = %{
"name" => "Updated Name",
"character_eve_id" => character.eve_id
}
result = Signatures.update_signature(conn, sig_id, params)
case result do
{:ok, data} ->
# Should use the provided character_eve_id
assert Map.get(data, "character_eve_id") == character.eve_id
{:error, _} ->
# Signature not found error is acceptable
:ok
end
end
test "update_signature rejects invalid character_eve_id" do
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
sig_id = Ecto.UUID.generate()
params = %{
"name" => "Updated Name",
"character_eve_id" => "totally_invalid_char"
}
result = Signatures.update_signature(conn, sig_id, params)
# Should return invalid_character error
assert {:error, :invalid_character} = result
end
end
describe "parameter merging and character_eve_id injection" do
test "create_signature injects character_eve_id correctly" do
char_id = "987654321"

View File

@@ -0,0 +1,84 @@
defmodule WandererApp.User.ActivityTrackerTest do
use WandererApp.DataCase, async: false
alias WandererApp.User.ActivityTracker
describe "track_map_event/2" do
test "returns {:ok, result} on success" do
# This test verifies the happy path
# In real scenarios, this would succeed when creating a new activity record
assert {:ok, _} = ActivityTracker.track_map_event(:test_event, %{})
end
test "returns {:ok, nil} on error without crashing" do
# This simulates the scenario where tracking fails (e.g., unique constraint violation)
# The function should handle the error gracefully and return {:ok, nil}
# Note: In actual implementation, this would catch errors from:
# - Unique constraint violations
# - Database connection issues
# - Invalid data
# The key requirement is that it NEVER crashes the calling code
result = ActivityTracker.track_map_event(:map_connection_added, %{
character_id: nil, # This will cause the function to skip tracking
user_id: nil,
map_id: nil
})
# Should return success even when input is incomplete
assert {:ok, _} = result
end
test "handles errors gracefully and logs them" do
# Verify that errors are logged for observability
# This is important for monitoring and debugging
# The function should complete without raising even with incomplete data
assert {:ok, _} = ActivityTracker.track_map_event(:map_connection_added, %{
character_id: nil,
user_id: nil,
map_id: nil
})
end
end
describe "track_acl_event/2" do
test "returns {:ok, result} on success" do
assert {:ok, _} = ActivityTracker.track_acl_event(:test_event, %{})
end
test "returns {:ok, nil} on error without crashing" do
result = ActivityTracker.track_acl_event(:map_acl_added, %{
user_id: nil,
acl_id: nil
})
assert {:ok, _} = result
end
end
describe "error resilience" do
test "always returns success tuple even on internal errors" do
# The key guarantee is that activity tracking never crashes calling code
# Even if the internal tracking fails (e.g., unique constraint violation),
# the wrapper ensures a success tuple is returned
# This test verifies that the function signature guarantees {:ok, _}
# regardless of internal errors
# Test with nil values (which will fail validation)
assert {:ok, _} = ActivityTracker.track_map_event(:test_event, %{
character_id: nil,
user_id: nil,
map_id: nil
})
# Test with empty map (which will fail validation)
assert {:ok, _} = ActivityTracker.track_map_event(:test_event, %{})
# The guarantee is: no matter what, it returns {:ok, _}
# This prevents MatchError crashes in calling code
end
end
end