Compare commits

..

35 Commits

Author SHA1 Message Date
CI
375a9ef65b chore: release version v1.84.8 2025-11-12 12:42:08 +00:00
Dmitry Popov
9bf90ab752 fix(core): added cleanup jobs for old system signatures & chain passages 2025-11-12 13:41:33 +01:00
CI
90c3481151 chore: [skip ci] 2025-11-12 10:57:58 +00:00
CI
e36b08a7e5 chore: release version v1.84.7 2025-11-12 10:57:58 +00:00
Dmitry Popov
e1f79170c3 Merge pull request #540 from guarzo/guarzo/apifun
fix: api and search fixes
2025-11-12 14:54:33 +04:00
Guarzo
68b5455e91 bug fix 2025-11-12 07:25:49 +00:00
Guarzo
f28e75c7f4 pr updates 2025-11-12 07:16:21 +00:00
Guarzo
6091adb28e fix: api and structure search fixes 2025-11-12 07:07:39 +00:00
CI
d4657b335f chore: [skip ci] 2025-11-12 00:13:07 +00:00
CI
7fee850902 chore: release version v1.84.6 2025-11-12 00:13:07 +00:00
Dmitry Popov
648c168a66 fix(core): Added map slug uniqness checking while using API 2025-11-12 01:12:13 +01:00
CI
f5c4b2c407 chore: [skip ci] 2025-11-11 12:52:39 +00:00
CI
b592223d52 chore: release version v1.84.5 2025-11-11 12:52:39 +00:00
Dmitry Popov
5cf118c6ee Merge branch 'main' of github.com:wanderer-industries/wanderer 2025-11-11 13:52:11 +01:00
Dmitry Popov
b25013c652 fix(core): Added tracking for map & character event handling errors 2025-11-11 13:52:07 +01:00
CI
cf43861b11 chore: [skip ci] 2025-11-11 12:27:54 +00:00
CI
b5fe8f8878 chore: release version v1.84.4 2025-11-11 12:27:54 +00:00
Dmitry Popov
5e5068c7de fix(core): fixed issue with updating system signatures 2025-11-11 13:27:17 +01:00
CI
624b51edfb chore: [skip ci] 2025-11-11 09:52:29 +00:00
CI
a72f8e60c4 chore: release version v1.84.3 2025-11-11 09:52:29 +00:00
Dmitry Popov
dec8ae50c9 Merge branch 'develop' 2025-11-11 10:51:55 +01:00
Dmitry Popov
0332d36a8e fix(core): fixed linked signature time status update
2025-11-11 10:51:43 +01:00
CI
8444c7f82d chore: [skip ci] 2025-11-10 16:57:53 +00:00
CI
ec3fc7447e chore: release version v1.84.2 2025-11-10 16:57:53 +00:00
Dmitry Popov
20ec2800c9 Merge pull request #538 from wanderer-industries/develop
Develop
2025-11-10 20:56:53 +04:00
Dmitry Popov
6fbf43e860 fix(api): fixed api for get/update map systems
2025-11-10 17:23:44 +01:00
Dmitry Popov
697da38020 Merge pull request #537 from guarzo/guarzo/apisystemperf
fix: add indexes for map/system
2025-11-09 01:48:01 +04:00
Guarzo
4bc65b43d2 fix: add index for map/systems api 2025-11-08 14:30:19 +00:00
Dmitry Popov
910ec97fd1 chore: refactored map server processes
2025-11-06 09:23:19 +01:00
Dmitry Popov
40ed58ee8c Merge pull request #536 from wanderer-industries/refactor-map-servers
Refactor map servers
2025-11-06 03:03:57 +04:00
Dmitry Popov
c18d241c77 Merge branch 'develop' into refactor-map-servers 2025-11-06 00:01:32 +01:00
Dmitry Popov
8b42908a5c chore: refactored map server processes 2025-11-06 00:01:04 +01:00
Dmitry Popov
6d32505a59 chore: added map cached rtree implementation 2025-11-04 23:40:37 +01:00
Dmitry Popov
fe8a34c77d chore: refactored map state usage 2025-11-04 22:40:04 +01:00
CI
d12cafcca8 chore: [skip ci] 2025-11-01 20:01:52 +00:00
59 changed files with 3249 additions and 1276 deletions

View File

@@ -2,6 +2,71 @@
<!-- changelog -->
## [v1.84.8](https://github.com/wanderer-industries/wanderer/compare/v1.84.7...v1.84.8) (2025-11-12)
### Bug Fixes:
* core: added cleanup jobs for old system signatures & chain passages
## [v1.84.7](https://github.com/wanderer-industries/wanderer/compare/v1.84.6...v1.84.7) (2025-11-12)
### Bug Fixes:
* api and structure search fixes
## [v1.84.6](https://github.com/wanderer-industries/wanderer/compare/v1.84.5...v1.84.6) (2025-11-12)
### Bug Fixes:
* core: Added map slug uniqness checking while using API
## [v1.84.5](https://github.com/wanderer-industries/wanderer/compare/v1.84.4...v1.84.5) (2025-11-11)
### Bug Fixes:
* core: Added tracking for map & character event handling errors
## [v1.84.4](https://github.com/wanderer-industries/wanderer/compare/v1.84.3...v1.84.4) (2025-11-11)
### Bug Fixes:
* core: fixed issue with updating system signatures
## [v1.84.3](https://github.com/wanderer-industries/wanderer/compare/v1.84.2...v1.84.3) (2025-11-11)
### Bug Fixes:
* core: fixed linked signature time status update
## [v1.84.2](https://github.com/wanderer-industries/wanderer/compare/v1.84.1...v1.84.2) (2025-11-10)
### Bug Fixes:
* api: fixed api for get/update map systems
* add index for map/systems api
## [v1.84.1](https://github.com/wanderer-industries/wanderer/compare/v1.84.0...v1.84.1) (2025-11-01)

View File

@@ -30,9 +30,6 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
const { outCommand } = useMapRootState();
const [prevQuery, setPrevQuery] = useState('');
const [prevResults, setPrevResults] = useState<{ label: string; value: string }[]>([]);
useEffect(() => {
if (structure) {
setEditData(structure);
@@ -46,34 +43,24 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
// Searching corporation owners via auto-complete
const searchOwners = useCallback(
async (e: { query: string }) => {
const newQuery = e.query.trim();
if (!newQuery) {
const query = e.query.trim();
if (!query) {
setOwnerSuggestions([]);
return;
}
// If user typed more text but we have partial match in prevResults
if (newQuery.startsWith(prevQuery) && prevResults.length > 0) {
const filtered = prevResults.filter(item => item.label.toLowerCase().includes(newQuery.toLowerCase()));
setOwnerSuggestions(filtered);
return;
}
try {
// TODO fix it
const { results = [] } = await outCommand({
type: OutCommand.getCorporationNames,
data: { search: newQuery },
data: { search: query },
});
setOwnerSuggestions(results);
setPrevQuery(newQuery);
setPrevResults(results);
} catch (err) {
console.error('Failed to fetch owners:', err);
setOwnerSuggestions([]);
}
},
[prevQuery, prevResults, outCommand],
[outCommand],
);
const handleChange = (field: keyof StructureItem, val: string | Date) => {
@@ -122,7 +109,6 @@ export const SystemStructuresDialog: React.FC<StructuresEditDialogProps> = ({
// fetch corporation ticker if we have an ownerId
if (editData.ownerId) {
try {
// TODO fix it
const { ticker } = await outCommand({
type: OutCommand.getCorporationTicker,
data: { corp_id: editData.ownerId },

View File

@@ -25,7 +25,7 @@ config :wanderer_app,
ecto_repos: [WandererApp.Repo],
ash_domains: [WandererApp.Api],
generators: [timestamp_type: :utc_datetime],
ddrt: DDRT,
ddrt: WandererApp.Map.CacheRTree,
logger: Logger,
pubsub_client: Phoenix.PubSub,
wanderer_kills_base_url:

View File

@@ -258,7 +258,9 @@ config :wanderer_app, WandererApp.Scheduler,
timezone: :utc,
jobs:
[
{"@daily", {WandererApp.Map.Audit, :archive, []}}
{"@daily", {WandererApp.Map.Audit, :archive, []}},
{"@daily", {WandererApp.Map.GarbageCollector, :cleanup_chain_passages, []}},
{"@daily", {WandererApp.Map.GarbageCollector, :cleanup_system_signatures, []}}
] ++ sheduler_jobs,
timeout: :infinity

View File

@@ -2,6 +2,7 @@ defmodule WandererApp.Api.Changes.SlugifyName do
use Ash.Resource.Change
alias Ash.Changeset
require Ash.Query
@impl true
@spec change(Changeset.t(), keyword, Change.context()) :: Changeset.t()
@@ -12,10 +13,56 @@ defmodule WandererApp.Api.Changes.SlugifyName do
defp maybe_slugify_name(changeset) do
case Changeset.get_attribute(changeset, :slug) do
slug when is_binary(slug) ->
Changeset.force_change_attribute(changeset, :slug, Slug.slugify(slug))
base_slug = Slug.slugify(slug)
unique_slug = ensure_unique_slug(changeset, base_slug)
Changeset.force_change_attribute(changeset, :slug, unique_slug)
_ ->
changeset
end
end
defp ensure_unique_slug(changeset, base_slug) do
# Get the current record ID if this is an update operation
current_id = Changeset.get_attribute(changeset, :id)
# Check if the base slug is available
if slug_available?(base_slug, current_id) do
base_slug
else
# Find the next available slug with a numeric suffix
find_available_slug(base_slug, current_id, 2)
end
end
defp find_available_slug(base_slug, current_id, n) do
candidate_slug = "#{base_slug}-#{n}"
if slug_available?(candidate_slug, current_id) do
candidate_slug
else
find_available_slug(base_slug, current_id, n + 1)
end
end
defp slug_available?(slug, current_id) do
query =
WandererApp.Api.Map
|> Ash.Query.filter(slug == ^slug)
|> then(fn query ->
# Exclude the current record if this is an update
if current_id do
Ash.Query.filter(query, id != ^current_id)
else
query
end
end)
|> Ash.Query.limit(1)
case Ash.read(query) do
{:ok, []} -> true
{:ok, _} -> false
{:error, _} -> false
end
end
end
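
A minimal standalone sketch of the suffixing strategy added above, with the Ash lookup replaced by a hypothetical taken?/1 predicate (the real change checks availability against WandererApp.Api.Map via slug_available?/2):

defmodule SlugSketch do
  # Returns base if free, otherwise base-2, base-3, ... (mirrors find_available_slug/3).
  def unique_slug(base, taken?) do
    if taken?.(base), do: next_slug(base, taken?, 2), else: base
  end

  defp next_slug(base, taken?, n) do
    candidate = "#{base}-#{n}"
    if taken?.(candidate), do: next_slug(base, taken?, n + 1), else: candidate
  end
end

# Example: with "home" and "home-2" already taken, the next map gets "home-3".
# SlugSketch.unique_slug("home", &(&1 in ["home", "home-2"])) #=> "home-3"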

View File

@@ -37,7 +37,7 @@ defmodule WandererApp.Api.Map do
delete(:destroy)
# Custom action for map duplication
post(:duplicate, route: "/:id/duplicate")
# post(:duplicate, route: "/:id/duplicate")
end
end

View File

@@ -9,6 +9,11 @@ defmodule WandererApp.Api.MapConnection do
postgres do
repo(WandererApp.Repo)
table("map_chain_v1")
custom_indexes do
# Critical index for list_connections query performance
index [:map_id], name: "map_chain_v1_map_id_index"
end
end
json_api do

View File

@@ -65,7 +65,7 @@ defmodule WandererApp.Api.MapSubscription do
defaults [:create, :read, :update, :destroy]
read :all_active do
prepare build(sort: [updated_at: :asc])
prepare build(sort: [updated_at: :asc], load: [:map])
filter(expr(status == :active))
end

View File

@@ -1,6 +1,26 @@
defmodule WandererApp.Api.MapSystem do
@moduledoc false
@derive {Jason.Encoder,
only: [
:id,
:map_id,
:name,
:solar_system_id,
:position_x,
:position_y,
:status,
:visible,
:locked,
:custom_name,
:description,
:tag,
:temporary_name,
:labels,
:added_at,
:linked_sig_eve_id
]}
use Ash.Resource,
domain: WandererApp.Api,
data_layer: AshPostgres.DataLayer,
@@ -9,6 +29,11 @@ defmodule WandererApp.Api.MapSystem do
postgres do
repo(WandererApp.Repo)
table("map_system_v1")
custom_indexes do
# Partial index for efficient visible systems query
index [:map_id], where: "visible = true", name: "map_system_v1_map_id_visible_index"
end
end
json_api do

View File

@@ -38,7 +38,12 @@ defmodule WandererApp.Application do
),
Supervisor.child_spec({Cachex, name: :ship_types_cache}, id: :ship_types_cache_worker),
Supervisor.child_spec({Cachex, name: :character_cache}, id: :character_cache_worker),
Supervisor.child_spec({Cachex, name: :acl_cache}, id: :acl_cache_worker),
Supervisor.child_spec({Cachex, name: :map_cache}, id: :map_cache_worker),
Supervisor.child_spec({Cachex, name: :map_pool_cache},
id: :map_pool_cache_worker
),
Supervisor.child_spec({Cachex, name: :map_state_cache}, id: :map_state_cache_worker),
Supervisor.child_spec({Cachex, name: :character_state_cache},
id: :character_state_cache_worker
),
@@ -48,10 +53,7 @@ defmodule WandererApp.Application do
Supervisor.child_spec({Cachex, name: :wanderer_app_cache},
id: :wanderer_app_cache_worker
),
{Registry, keys: :unique, name: WandererApp.MapRegistry},
{Registry, keys: :unique, name: WandererApp.Character.TrackerRegistry},
{PartitionSupervisor,
child_spec: DynamicSupervisor, name: WandererApp.Map.DynamicSupervisors},
{PartitionSupervisor,
child_spec: DynamicSupervisor, name: WandererApp.Character.DynamicSupervisors},
WandererAppWeb.PresenceGracePeriodManager,
@@ -78,6 +80,7 @@ defmodule WandererApp.Application do
WandererApp.Server.ServerStatusTracker,
WandererApp.Server.TheraDataFetcher,
{WandererApp.Character.TrackerPoolSupervisor, []},
{WandererApp.Map.MapPoolSupervisor, []},
WandererApp.Character.TrackerManager,
WandererApp.Map.Manager
] ++ security_audit_children

View File

@@ -46,10 +46,6 @@ defmodule WandererApp.Character.TrackerPool do
{:ok, _} = Registry.register(@unique_registry, Module.concat(__MODULE__, uuid), tracked_ids)
{:ok, _} = Registry.register(@registry, __MODULE__, uuid)
# Cachex.get_and_update(@cache, :tracked_characters, fn ids ->
# {:commit, ids ++ tracked_ids}
# end)
tracked_ids
|> Enum.each(fn id ->
Cachex.put(@cache, id, uuid)
@@ -79,9 +75,6 @@ defmodule WandererApp.Character.TrackerPool do
[tracked_id | r_tracked_ids]
end)
# Cachex.get_and_update(@cache, :tracked_characters, fn ids ->
# {:commit, ids ++ [tracked_id]}
# end)
Cachex.put(@cache, tracked_id, uuid)
{:noreply, %{state | characters: [tracked_id | characters]}}
@@ -96,10 +89,6 @@ defmodule WandererApp.Character.TrackerPool do
r_tracked_ids |> Enum.reject(fn id -> id == tracked_id end)
end)
# Cachex.get_and_update(@cache, :tracked_characters, fn ids ->
# {:commit, ids |> Enum.reject(fn id -> id == tracked_id end)}
# end)
#
Cachex.del(@cache, tracked_id)
{:noreply, %{state | characters: characters |> Enum.reject(fn id -> id == tracked_id end)}}
@@ -191,6 +180,8 @@ defmodule WandererApp.Character.TrackerPool do
[Tracker Pool] update_online => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
ErrorTracker.report(e, __STACKTRACE__)
end
{:noreply, state}

View File

@@ -50,11 +50,6 @@ defmodule WandererApp.Character.TrackerPoolDynamicSupervisor do
end
end
def is_not_tracked?(tracked_id) do
{:ok, tracked_ids} = Cachex.get(@cache, :tracked_characters)
tracked_ids |> Enum.member?(tracked_id) |> Kernel.not()
end
defp get_available_pool([]), do: nil
defp get_available_pool([{pid, uuid} | pools]) do

View File

@@ -240,8 +240,6 @@ defmodule WandererApp.Character.TrackingUtils do
})
end)
# WandererApp.Map.Server.untrack_characters(map_id, character_ids)
:ok
else
true ->
@@ -250,20 +248,6 @@ defmodule WandererApp.Character.TrackingUtils do
end
end
# def add_characters([], _map_id, _track_character), do: :ok
# def add_characters([character | characters], map_id, track_character) do
# :ok = WandererApp.Map.Server.add_character(map_id, character, track_character)
# add_characters(characters, map_id, track_character)
# end
# def remove_characters([], _map_id), do: :ok
# def remove_characters([character | characters], map_id) do
# :ok = WandererApp.Map.Server.remove_character(map_id, character.id)
# remove_characters(characters, map_id)
# end
def get_main_character(
nil,
current_user_characters,

View File

@@ -212,6 +212,7 @@ defmodule WandererApp.ExternalEvents.JsonApiFormatter do
"time_status" => payload["time_status"] || payload[:time_status],
"mass_status" => payload["mass_status"] || payload[:mass_status],
"ship_size_type" => payload["ship_size_type"] || payload[:ship_size_type],
"locked" => payload["locked"] || payload[:locked],
"updated_at" => event.timestamp
},
"relationships" => %{

View File

@@ -109,8 +109,8 @@ defmodule WandererApp.Kills.MapEventListener do
# Handle re-subscription attempt
def handle_info(:resubscribe_to_maps, state) do
running_maps = WandererApp.Map.RegistryHelper.list_all_maps()
current_running_map_ids = MapSet.new(Enum.map(running_maps, & &1.id))
{:ok, started_maps} = WandererApp.Cache.lookup("started_maps", [])
current_running_map_ids = MapSet.new(started_maps)
Logger.debug(fn ->
"[MapEventListener] Resubscribing to maps. Running maps: #{MapSet.size(current_running_map_ids)}"

View File

@@ -88,13 +88,13 @@ defmodule WandererApp.Kills.Subscription.MapIntegration do
def get_tracked_system_ids do
try do
# Get systems from currently running maps
active_maps = WandererApp.Map.RegistryHelper.list_all_maps()
{:ok, started_maps_ids} = WandererApp.Cache.lookup("started_maps", [])
Logger.debug("[MapIntegration] Found #{length(active_maps)} active maps")
Logger.debug("[MapIntegration] Found #{length(started_maps_ids)} active maps")
map_systems =
active_maps
|> Enum.map(fn %{id: map_id} ->
started_maps_ids
|> Enum.map(fn map_id ->
case WandererApp.MapSystemRepo.get_visible_by_map(map_id) do
{:ok, systems} ->
system_ids = Enum.map(systems, & &1.solar_system_id)
@@ -114,7 +114,7 @@ defmodule WandererApp.Kills.Subscription.MapIntegration do
|> Enum.uniq()
Logger.debug(fn ->
"[MapIntegration] Total tracked systems: #{length(system_ids)} across #{length(active_maps)} maps"
"[MapIntegration] Total tracked systems: #{length(system_ids)} across #{length(started_maps_ids)} maps"
end)
{:ok, system_ids}

View File

@@ -7,6 +7,8 @@ defmodule WandererApp.Map do
require Logger
@map_state_cache :map_state_cache
defstruct map_id: nil,
name: nil,
scope: :none,
@@ -69,6 +71,50 @@ defmodule WandererApp.Map do
end)
end
def get_map_state(map_id, init_if_empty? \\ true) do
case Cachex.get(@map_state_cache, map_id) do
{:ok, nil} ->
case init_if_empty? do
true ->
map_state = WandererApp.Map.Server.Impl.do_init_state(map_id: map_id)
Cachex.put(@map_state_cache, map_id, map_state)
{:ok, map_state}
_ ->
{:ok, nil}
end
{:ok, map_state} ->
{:ok, map_state}
end
end
def get_map_state!(map_id) do
case get_map_state(map_id) do
{:ok, map_state} ->
map_state
_ ->
Logger.error("Failed to get map_state #{map_id}")
throw("Failed to get map_state #{map_id}")
end
end
def update_map_state(map_id, state_update),
do:
Cachex.get_and_update(@map_state_cache, map_id, fn map_state ->
case map_state do
nil ->
new_state = WandererApp.Map.Server.Impl.do_init_state(map_id: map_id)
{:commit, Map.merge(new_state, state_update)}
_ ->
{:commit, Map.merge(map_state, state_update)}
end
end)
def delete_map_state(map_id), do: Cachex.del(@map_state_cache, map_id)
def get_characters_limit(map_id),
do: {:ok, map_id |> get_map!() |> Map.get(:characters_limit, 50)}
@@ -486,15 +532,16 @@ defmodule WandererApp.Map do
solar_system_source,
solar_system_target
) do
case map_id
|> get_map!()
|> Map.get(:connections, Map.new())
|> Map.get("#{solar_system_source}_#{solar_system_target}") do
nil ->
{:ok,
connections =
map_id
|> get_map!()
|> Map.get(:connections, Map.new())
case connections
|> Map.get("#{solar_system_source}_#{solar_system_target}") do
nil ->
{:ok,
connections
|> Map.get("#{solar_system_target}_#{solar_system_source}")}
connection ->

View File

@@ -0,0 +1,347 @@
defmodule WandererApp.Map.CacheRTree do
@moduledoc """
Cache-based spatial index implementing DDRT behavior.
Provides R-tree-like spatial indexing using grid-based storage in Nebulex cache.
No GenServer processes required - all operations are functional and cache-based.
## Storage Structure
Data is stored in the cache with the following keys:
- `"rtree:<name>:leaves"` - Map of solar_system_id => {id, bounding_box}
- `"rtree:<name>:grid"` - Map of {grid_x, grid_y} => [solar_system_id, ...]
- `"rtree:<name>:config"` - Tree configuration
## Spatial Grid
Uses 150x150 pixel grid cells for O(1) spatial queries. Each system node
(130x34 pixels) typically overlaps 1-2 grid cells, providing fast collision
detection without the overhead of GenServer-based tree traversal.
"""
@behaviour WandererApp.Test.DDRT
alias WandererApp.Cache
@grid_size 150 # Grid cell size in pixels
# Type definitions matching DDRT behavior
@type id :: number() | String.t()
@type coord_range :: {number(), number()}
@type bounding_box :: list(coord_range())
@type leaf :: {id(), bounding_box()}
# ============================================================================
# Public API - DDRT Behavior Implementation
# ============================================================================
@doc """
Insert one or more leaves into the spatial index.
## Parameters
- `leaf_or_leaves` - Single `{id, bounding_box}` tuple or list of tuples
- `name` - Name of the R-tree instance
## Examples
iex> CacheRTree.insert({30000142, [{100, 230}, {50, 84}]}, "rtree_map_123")
{:ok, %{}}
iex> CacheRTree.insert([
...> {30000142, [{100, 230}, {50, 84}]},
...> {30000143, [{250, 380}, {100, 134}]}
...> ], "rtree_map_123")
{:ok, %{}}
"""
@impl true
def insert(leaf_or_leaves, name) do
leaves = normalize_leaves(leaf_or_leaves)
# Update leaves storage
current_leaves = get_leaves(name)
new_leaves = Enum.reduce(leaves, current_leaves, fn {id, box}, acc ->
Map.put(acc, id, {id, box})
end)
put_leaves(name, new_leaves)
# Update spatial grid
current_grid = get_grid(name)
new_grid = Enum.reduce(leaves, current_grid, fn leaf, grid ->
add_to_grid(grid, leaf)
end)
put_grid(name, new_grid)
{:ok, %{}} # Match the DDRT return format
end
@doc """
Delete one or more leaves from the spatial index.
## Parameters
- `id_or_ids` - Single ID or list of IDs to remove
- `name` - Name of the R-tree instance
## Examples
iex> CacheRTree.delete([30000142], "rtree_map_123")
{:ok, %{}}
iex> CacheRTree.delete([30000142, 30000143], "rtree_map_123")
{:ok, %{}}
"""
@impl true
def delete(id_or_ids, name) do
ids = normalize_ids(id_or_ids)
current_leaves = get_leaves(name)
current_grid = get_grid(name)
# Remove from leaves and track bounding boxes for grid cleanup
{new_leaves, removed} = Enum.reduce(ids, {current_leaves, []}, fn id, {leaves, removed} ->
case Map.pop(leaves, id) do
{nil, leaves} -> {leaves, removed}
{{^id, box}, leaves} -> {leaves, [{id, box} | removed]}
end
end)
# Update grid
new_grid = Enum.reduce(removed, current_grid, fn {id, box}, grid ->
remove_from_grid(grid, id, box)
end)
put_leaves(name, new_leaves)
put_grid(name, new_grid)
{:ok, %{}}
end
@doc """
Update a leaf's bounding box.
## Parameters
- `id` - ID of the leaf to update
- `box_or_tuple` - Either a new `bounding_box` or `{old_box, new_box}` tuple
- `name` - Name of the R-tree instance
## Examples
iex> CacheRTree.update(30000142, [{150, 280}, {200, 234}], "rtree_map_123")
{:ok, %{}}
iex> CacheRTree.update(30000142, {[{100, 230}, {50, 84}], [{150, 280}, {200, 234}]}, "rtree_map_123")
{:ok, %{}}
"""
@impl true
def update(id, box_or_tuple, name) do
{old_box, new_box} = case box_or_tuple do
{old, new} ->
{old, new}
box ->
# Need to look up old box
leaves = get_leaves(name)
case Map.get(leaves, id) do
{^id, old} -> {old, box}
nil -> {nil, box} # Will be handled as new insert
end
end
# Delete old, insert new
if old_box, do: delete([id], name)
insert({id, new_box}, name)
end
@doc """
Query for all leaves intersecting a bounding box.
Uses grid-based spatial indexing for O(1) average case performance.
## Parameters
- `bounding_box` - Query bounding box `[{x_min, x_max}, {y_min, y_max}]`
- `name` - Name of the R-tree instance
## Returns
- `{:ok, [id()]}` - List of IDs intersecting the query box
- `{:error, term()}` - Error if query fails
## Examples
iex> CacheRTree.query([{200, 330}, {90, 124}], "rtree_map_123")
{:ok, [30000143]}
iex> CacheRTree.query([{0, 50}, {0, 50}], "rtree_map_123")
{:ok, []}
"""
@impl true
def query(bounding_box, name) do
# Get candidate IDs from grid cells
grid = get_grid(name)
grid_cells = get_grid_cells(bounding_box)
candidate_ids =
grid_cells
|> Enum.flat_map(fn cell -> Map.get(grid, cell, []) end)
|> Enum.uniq()
# Precise intersection test
leaves = get_leaves(name)
matching_ids =
Enum.filter(candidate_ids, fn id ->
case Map.get(leaves, id) do
{^id, leaf_box} -> boxes_intersect?(bounding_box, leaf_box)
nil -> false
end
end)
{:ok, matching_ids}
rescue
error -> {:error, error}
end
# ============================================================================
# Initialization and Management
# ============================================================================
@doc """
Initialize an empty R-tree in the cache.
## Parameters
- `name` - Name for this R-tree instance
- `config` - Optional configuration map (width, verbose, etc.)
## Examples
iex> CacheRTree.init_tree("rtree_map_123")
:ok
iex> CacheRTree.init_tree("rtree_map_456", %{width: 150, verbose: false})
:ok
"""
def init_tree(name, config \\ %{}) do
Cache.put(cache_key(name, :leaves), %{})
Cache.put(cache_key(name, :grid), %{})
Cache.put(cache_key(name, :config), Map.merge(default_config(), config))
:ok
end
@doc """
Clear all data for an R-tree from the cache.
Should be called when a map is shut down to free memory.
## Parameters
- `name` - Name of the R-tree instance to clear
## Examples
iex> CacheRTree.clear_tree("rtree_map_123")
:ok
"""
def clear_tree(name) do
Cache.delete(cache_key(name, :leaves))
Cache.delete(cache_key(name, :grid))
Cache.delete(cache_key(name, :config))
:ok
end
# ============================================================================
# Private Helper Functions
# ============================================================================
# Cache access helpers
defp cache_key(name, suffix), do: "rtree:#{name}:#{suffix}"
defp get_leaves(name) do
Cache.get(cache_key(name, :leaves)) || %{}
end
defp put_leaves(name, leaves) do
Cache.put(cache_key(name, :leaves), leaves)
end
defp get_grid(name) do
Cache.get(cache_key(name, :grid)) || %{}
end
defp put_grid(name, grid) do
Cache.put(cache_key(name, :grid), grid)
end
defp default_config do
%{
width: 150,
grid_size: @grid_size,
verbose: false
}
end
# Grid operations
defp add_to_grid(grid, {id, bounding_box}) do
grid_cells = get_grid_cells(bounding_box)
Enum.reduce(grid_cells, grid, fn cell, acc ->
Map.update(acc, cell, [id], fn existing_ids ->
if id in existing_ids do
existing_ids
else
[id | existing_ids]
end
end)
end)
end
defp remove_from_grid(grid, id, bounding_box) do
grid_cells = get_grid_cells(bounding_box)
Enum.reduce(grid_cells, grid, fn cell, acc ->
Map.update(acc, cell, [], fn existing_ids ->
List.delete(existing_ids, id)
end)
end)
end
# Calculate which grid cells a bounding box overlaps
defp get_grid_cells(bounding_box) do
[{x_min, x_max}, {y_min, y_max}] = bounding_box
# Calculate cell coordinates using integer division
# Handles negative coordinates correctly
cell_x_min = div_floor(x_min, @grid_size)
cell_x_max = div_floor(x_max, @grid_size)
cell_y_min = div_floor(y_min, @grid_size)
cell_y_max = div_floor(y_max, @grid_size)
# Generate all overlapping cells
for x <- cell_x_min..cell_x_max,
y <- cell_y_min..cell_y_max do
{x, y}
end
end
# Floor division that works correctly with negative numbers
defp div_floor(a, b) when a >= 0, do: div(a, b)
defp div_floor(a, b) when a < 0 do
case rem(a, b) do
0 -> div(a, b)
_ -> div(a, b) - 1
end
end
# Check if two bounding boxes intersect
defp boxes_intersect?(box1, box2) do
[{x1_min, x1_max}, {y1_min, y1_max}] = box1
[{x2_min, x2_max}, {y2_min, y2_max}] = box2
# Boxes intersect if they overlap on both axes
x_overlap = x1_min <= x2_max and x2_min <= x1_max
y_overlap = y1_min <= y2_max and y2_min <= y1_max
x_overlap and y_overlap
end
# Input normalization
defp normalize_leaves(leaf) when is_tuple(leaf), do: [leaf]
defp normalize_leaves(leaves) when is_list(leaves), do: leaves
defp normalize_ids(id) when is_number(id) or is_binary(id), do: [id]
defp normalize_ids(ids) when is_list(ids), do: ids
end
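
To illustrate the grid bucketing the moduledoc describes, here is a self-contained sketch using the same 130x34 node and 150px cells as the insert example above (plain Elixir, no cache involved):

grid_size = 150

div_floor = fn a, b ->
  if a >= 0 or rem(a, b) == 0, do: div(a, b), else: div(a, b) - 1
end

# Bounding box of a node spanning x: 100..230, y: 50..84 (as in the insert doc example)
[{x_min, x_max}, {y_min, y_max}] = [{100, 230}, {50, 84}]

cells =
  for x <- div_floor.(x_min, grid_size)..div_floor.(x_max, grid_size),
      y <- div_floor.(y_min, grid_size)..div_floor.(y_max, grid_size),
      do: {x, y}

# cells == [{0, 0}, {1, 0}] — the node overlaps two grid cells, so any query box
# touching either cell picks it up as a candidate for the precise intersection test.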

View File

@@ -1,42 +0,0 @@
defmodule WandererApp.Map.DynamicSupervisor do
@moduledoc """
Dynamically starts a map server
"""
use DynamicSupervisor
require Logger
alias WandererApp.Map.Server
def start_link(_arg) do
DynamicSupervisor.start_link(__MODULE__, nil, name: __MODULE__)
end
def init(nil) do
DynamicSupervisor.init(strategy: :one_for_one)
end
def _start_child(map_id) do
child_spec = %{
id: Server,
start: {Server, :start_link, [map_id]},
restart: :transient
}
case DynamicSupervisor.start_child(__MODULE__, child_spec) do
{:ok, _} ->
:ok
{:error, {:already_started, _}} ->
:ok
{:error, reason} ->
{:error, reason}
end
end
def which_children do
Supervisor.which_children(__MODULE__)
end
end

View File

@@ -0,0 +1,38 @@
defmodule WandererApp.Map.GarbageCollector do
@moduledoc """
Removes stale map data: old chain passages and system signatures
"""
require Logger
require Ash.Query
@logger Application.compile_env(:wanderer_app, :logger)
@one_week_seconds 7 * 24 * 60 * 60
@two_weeks_seconds 14 * 24 * 60 * 60
def cleanup_chain_passages() do
Logger.info("Start cleanup old map chain passages...")
WandererApp.Api.MapChainPassages
|> Ash.Query.filter(updated_at: [less_than: get_cutoff_time(@one_week_seconds)])
|> Ash.bulk_destroy!(:destroy, %{}, batch_size: 100)
@logger.info(fn -> "All map chain passages processed" end)
:ok
end
def cleanup_system_signatures() do
Logger.info("Start cleanup old map system signatures...")
WandererApp.Api.MapSystemSignature
|> Ash.Query.filter(updated_at: [less_than: get_cutoff_time(@two_weeks_seconds)])
|> Ash.bulk_destroy!(:destroy, %{}, batch_size: 100)
@logger.info(fn -> "All map system signatures processed" end)
:ok
end
defp get_cutoff_time(seconds), do: DateTime.utc_now() |> DateTime.add(-seconds, :second)
end
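
A small sketch of the cutoff arithmetic used by both cleanup jobs (constants copied from the module above):

one_week_seconds = 7 * 24 * 60 * 60
two_weeks_seconds = 14 * 24 * 60 * 60

# Anything with updated_at older than the cutoff is bulk-destroyed in batches of 100.
passages_cutoff = DateTime.add(DateTime.utc_now(), -one_week_seconds, :second)
signatures_cutoff = DateTime.add(DateTime.utc_now(), -two_weeks_seconds, :second)

# e.g. running at 2025-11-12T12:00:00Z gives cutoffs of
# 2025-11-05T12:00:00Z (passages) and 2025-10-29T12:00:00Z (signatures).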

View File

@@ -8,12 +8,10 @@ defmodule WandererApp.Map.Manager do
require Logger
alias WandererApp.Map.Server
alias WandererApp.Map.ServerSupervisor
@maps_start_per_second 10
@maps_start_interval 1000
@maps_queue :maps_queue
@garbage_collection_interval :timer.hours(1)
@check_maps_queue_interval :timer.seconds(1)
@pings_cleanup_interval :timer.minutes(10)
@@ -39,15 +37,11 @@ defmodule WandererApp.Map.Manager do
do: WandererApp.Queue.push_uniq(@maps_queue, map_id)
def stop_map(map_id) when is_binary(map_id) do
case Server.map_pid(map_id) do
pid when is_pid(pid) ->
GenServer.cast(
pid,
:stop
)
with {:ok, started_maps} <- WandererApp.Cache.lookup("started_maps", []),
true <- Enum.member?(started_maps, map_id) do
Logger.warning(fn -> "Shutting down map server: #{inspect(map_id)}" end)
nil ->
:ok
WandererApp.Map.MapPoolDynamicSupervisor.stop_map(map_id)
end
end
@@ -56,13 +50,11 @@ defmodule WandererApp.Map.Manager do
@impl true
def init([]) do
WandererApp.Queue.new(@maps_queue, [])
WandererApp.Cache.insert("started_maps", [])
{:ok, check_maps_queue_timer} =
:timer.send_interval(@check_maps_queue_interval, :check_maps_queue)
{:ok, garbage_collector_timer} =
:timer.send_interval(@garbage_collection_interval, :garbage_collect)
{:ok, pings_cleanup_timer} =
:timer.send_interval(@pings_cleanup_interval, :cleanup_pings)
@@ -72,7 +64,6 @@ defmodule WandererApp.Map.Manager do
{:ok,
%{
garbage_collector_timer: garbage_collector_timer,
check_maps_queue_timer: check_maps_queue_timer,
pings_cleanup_timer: pings_cleanup_timer
}}
@@ -106,36 +97,6 @@ defmodule WandererApp.Map.Manager do
end
end
@impl true
def handle_info(:garbage_collect, state) do
try do
WandererApp.Map.RegistryHelper.list_all_maps()
|> Enum.each(fn %{id: map_id, pid: server_pid} ->
case Process.alive?(server_pid) do
true ->
presence_character_ids =
WandererApp.Cache.lookup!("map_#{map_id}:presence_character_ids", [])
if presence_character_ids |> Enum.empty?() do
Logger.info("No more characters present on: #{map_id}, shutting down map server...")
stop_map(map_id)
end
false ->
Logger.warning("Server not alive: #{inspect(server_pid)}")
:ok
end
end)
{:noreply, state}
rescue
e ->
Logger.error(Exception.message(e))
{:noreply, state}
end
end
@impl true
def handle_info(:cleanup_pings, state) do
try do
@@ -156,7 +117,7 @@ defmodule WandererApp.Map.Manager do
Enum.each(pings, fn %{id: ping_id, map_id: map_id, type: type} = ping ->
{:ok, %{system: system}} = ping |> Ash.load([:system])
WandererApp.Map.Server.Impl.broadcast!(map_id, :ping_cancelled, %{
Server.Impl.broadcast!(map_id, :ping_cancelled, %{
id: ping_id,
solar_system_id: system.solar_system_id,
type: type
@@ -237,21 +198,21 @@ defmodule WandererApp.Map.Manager do
end
defp start_map_server(map_id) do
case DynamicSupervisor.start_child(
{:via, PartitionSupervisor, {WandererApp.Map.DynamicSupervisors, self()}},
{ServerSupervisor, map_id: map_id}
) do
{:ok, pid} ->
{:ok, pid}
with {:ok, started_maps} <- WandererApp.Cache.lookup("started_maps", []),
false <- Enum.member?(started_maps, map_id) do
WandererApp.Cache.insert_or_update(
"started_maps",
[map_id],
fn existing ->
[map_id | existing] |> Enum.uniq()
end
)
{:error, {:already_started, pid}} ->
{:ok, pid}
{:error, {:shutdown, {:failed_to_start_child, Server, {:already_started, pid}}}} ->
{:ok, pid}
{:error, reason} ->
{:error, reason}
WandererApp.Map.MapPoolDynamicSupervisor.start_map(map_id)
else
_error ->
Logger.warning("Map already started: #{map_id}")
:ok
end
end
end

View File

@@ -0,0 +1,360 @@
defmodule WandererApp.Map.MapPool do
@moduledoc false
use GenServer, restart: :transient
require Logger
alias WandererApp.Map.Server
defstruct [
:map_ids,
:uuid
]
@name __MODULE__
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@garbage_collection_interval :timer.hours(12)
@systems_cleanup_timeout :timer.minutes(30)
@characters_cleanup_timeout :timer.minutes(5)
@connections_cleanup_timeout :timer.minutes(5)
@backup_state_timeout :timer.minutes(1)
def new(), do: __struct__()
def new(args), do: __struct__(args)
def start_link(map_ids) do
uuid = UUID.uuid1()
GenServer.start_link(
@name,
{uuid, map_ids},
name: Module.concat(__MODULE__, uuid)
)
end
@impl true
def init({uuid, map_ids}) do
{:ok, _} = Registry.register(@unique_registry, Module.concat(__MODULE__, uuid), map_ids)
{:ok, _} = Registry.register(@registry, __MODULE__, uuid)
map_ids
|> Enum.each(fn id ->
Cachex.put(@cache, id, uuid)
end)
state =
%{
uuid: uuid,
map_ids: []
}
|> new()
{:ok, state, {:continue, {:start, map_ids}}}
end
@impl true
def terminate(_reason, _state) do
:ok
end
@impl true
def handle_continue({:start, map_ids}, state) do
Logger.info("#{@name} started")
map_ids
|> Enum.each(fn map_id ->
GenServer.cast(self(), {:start_map, map_id})
end)
Process.send_after(self(), :backup_state, @backup_state_timeout)
Process.send_after(self(), :cleanup_systems, 15_000)
Process.send_after(self(), :cleanup_characters, @characters_cleanup_timeout)
Process.send_after(self(), :cleanup_connections, @connections_cleanup_timeout)
Process.send_after(self(), :garbage_collect, @garbage_collection_interval)
# Start message queue monitoring
Process.send_after(self(), :monitor_message_queue, :timer.seconds(30))
{:noreply, state}
end
@impl true
def handle_cast(:stop, state), do: {:stop, :normal, state}
@impl true
def handle_cast({:start_map, map_id}, %{map_ids: map_ids, uuid: uuid} = state) do
if map_id not in map_ids do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
[map_id | r_map_ids]
end)
Cachex.put(@cache, map_id, uuid)
map_id
|> WandererApp.Map.get_map_state!()
|> Server.Impl.start_map()
{:noreply, %{state | map_ids: [map_id | map_ids]}}
else
{:noreply, state}
end
end
@impl true
def handle_cast(
{:stop_map, map_id},
%{map_ids: map_ids, uuid: uuid} = state
) do
Registry.update_value(@unique_registry, Module.concat(__MODULE__, uuid), fn r_map_ids ->
r_map_ids |> Enum.reject(fn id -> id == map_id end)
end)
Cachex.del(@cache, map_id)
map_id
|> Server.Impl.stop_map()
{:noreply, %{state | map_ids: map_ids |> Enum.reject(fn id -> id == map_id end)}}
end
@impl true
def handle_call(:error, _, state), do: {:stop, :error, :ok, state}
@impl true
def handle_info(:backup_state, %{map_ids: map_ids} = state) do
Process.send_after(self(), :backup_state, @backup_state_timeout)
try do
map_ids
|> Task.async_stream(
fn map_id ->
{:ok, _map_state} = Server.Impl.save_map_state(map_id)
end,
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task,
timeout: :timer.minutes(1)
)
|> Enum.each(fn _result -> :ok end)
rescue
e ->
Logger.error("""
[Map Pool] backup_state => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
@impl true
def handle_info(:cleanup_systems, %{map_ids: map_ids} = state) do
Process.send_after(self(), :cleanup_systems, @systems_cleanup_timeout)
try do
map_ids
|> Task.async_stream(
fn map_id ->
Server.Impl.cleanup_systems(map_id)
end,
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task,
timeout: :timer.minutes(1)
)
|> Enum.each(fn _result -> :ok end)
rescue
e ->
Logger.error("""
[Map Pool] cleanup_systems => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
@impl true
def handle_info(:cleanup_connections, %{map_ids: map_ids} = state) do
Process.send_after(self(), :cleanup_connections, @connections_cleanup_timeout)
try do
map_ids
|> Task.async_stream(
fn map_id ->
Server.Impl.cleanup_connections(map_id)
end,
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task,
timeout: :timer.minutes(1)
)
|> Enum.each(fn _result -> :ok end)
rescue
e ->
Logger.error("""
[Map Pool] cleanup_connections => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
@impl true
def handle_info(:cleanup_characters, %{map_ids: map_ids} = state) do
Process.send_after(self(), :cleanup_characters, @characters_cleanup_timeout)
try do
map_ids
|> Task.async_stream(
fn map_id ->
Server.Impl.cleanup_characters(map_id)
end,
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task,
timeout: :timer.minutes(1)
)
|> Enum.each(fn _result -> :ok end)
rescue
e ->
Logger.error("""
[Map Pool] cleanup_characters => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
@impl true
def handle_info(:garbage_collect, %{map_ids: map_ids, uuid: uuid} = state) do
Process.send_after(self(), :garbage_collect, @garbage_collection_interval)
try do
map_ids
|> Enum.each(fn map_id ->
# presence_character_ids =
# WandererApp.Cache.lookup!("map_#{map_id}:presence_character_ids", [])
# if presence_character_ids |> Enum.empty?() do
Logger.info(
"#{uuid}: No more characters present on: #{map_id}, shutting down map server..."
)
GenServer.cast(self(), {:stop_map, map_id})
# end
end)
rescue
e ->
Logger.error(Exception.message(e))
end
{:noreply, state}
end
@impl true
def handle_info(:monitor_message_queue, state) do
monitor_message_queue(state)
# Schedule next monitoring check
Process.send_after(self(), :monitor_message_queue, :timer.seconds(30))
{:noreply, state}
end
def handle_info({ref, result}, state) when is_reference(ref) do
Process.demonitor(ref, [:flush])
case result do
{:error, error} ->
Logger.error("#{__MODULE__} failed to process: #{inspect(error)}")
:ok
_ ->
:ok
end
{:noreply, state}
end
def handle_info(
:update_online,
%{
characters: characters,
server_online: true
} =
state
) do
Process.send_after(self(), :update_online, @update_online_interval)
try do
characters
|> Task.async_stream(
fn character_id ->
WandererApp.Character.Tracker.update_online(character_id)
end,
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task,
timeout: :timer.seconds(5)
)
|> Enum.each(fn _result -> :ok end)
rescue
e ->
Logger.error("""
[Tracker Pool] update_online => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
end
{:noreply, state}
end
def handle_info(event, state) do
try do
Server.Impl.handle_event(event)
rescue
e ->
Logger.error("""
[Map Pool] handle_info => exception: #{Exception.message(e)}
#{Exception.format_stacktrace(__STACKTRACE__)}
""")
ErrorTracker.report(e, __STACKTRACE__)
end
{:noreply, state}
end
defp monitor_message_queue(state) do
try do
{_, message_queue_len} = Process.info(self(), :message_queue_len)
{_, memory} = Process.info(self(), :memory)
# Alert on high message queue
if message_queue_len > 50 do
Logger.warning("GENSERVER_QUEUE_HIGH: Map pool message queue buildup",
pool_id: state.uuid,
message_queue_length: message_queue_len,
memory_bytes: memory,
pool_length: length(state.map_ids)
)
# Emit telemetry
:telemetry.execute(
[:wanderer_app, :map, :map_pool, :queue_buildup],
%{
message_queue_length: message_queue_len,
memory_bytes: memory
},
%{
pool_id: state.uuid,
pool_length: length(state.map_ids)
}
)
end
rescue
error ->
Logger.debug("Failed to monitor message queue: #{inspect(error)}")
end
end
end
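
The pool emits a telemetry event when its message queue backs up; a hedged sketch (not part of this diff) of attaching a consumer to that event with :telemetry.attach/4, with the event name and measurement/metadata keys taken from monitor_message_queue/1 above:

# "log-map-pool-queue-buildup" is a hypothetical handler id.
:telemetry.attach(
  "log-map-pool-queue-buildup",
  [:wanderer_app, :map, :map_pool, :queue_buildup],
  fn _event, %{message_queue_length: len}, %{pool_id: pool_id}, _config ->
    IO.puts("map pool #{pool_id} message queue length: #{len}")
  end,
  nil
)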

View File

@@ -0,0 +1,102 @@
defmodule WandererApp.Map.MapPoolDynamicSupervisor do
@moduledoc false
use DynamicSupervisor
require Logger
@cache :map_pool_cache
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
@map_pool_limit 10
@name __MODULE__
def start_link(_arg) do
DynamicSupervisor.start_link(@name, [], name: @name, max_restarts: 10)
end
def init(_arg) do
DynamicSupervisor.init(strategy: :one_for_one)
end
def start_map(map_id) do
case Registry.lookup(@registry, WandererApp.Map.MapPool) do
[] ->
start_child([map_id], 0)
pools ->
case get_available_pool(pools) do
nil ->
start_child([map_id], pools |> Enum.count())
pid ->
GenServer.cast(pid, {:start_map, map_id})
end
end
end
def stop_map(map_id) do
{:ok, pool_uuid} = Cachex.get(@cache, map_id)
case Registry.lookup(
@unique_registry,
Module.concat(WandererApp.Map.MapPool, pool_uuid)
) do
[] ->
:ok
[{pool_pid, _}] ->
GenServer.cast(pool_pid, {:stop_map, map_id})
end
end
defp get_available_pool([]), do: nil
defp get_available_pool([{pid, uuid} | pools]) do
case Registry.lookup(@unique_registry, Module.concat(WandererApp.Map.MapPool, uuid)) do
[] ->
nil
uuid_pools ->
case get_available_pool_pid(uuid_pools) do
nil ->
get_available_pool(pools)
pid ->
pid
end
end
end
defp get_available_pool_pid([]), do: nil
defp get_available_pool_pid([{pid, map_ids} | pools]) do
if Enum.count(map_ids) < @map_pool_limit do
pid
else
get_available_pool_pid(pools)
end
end
defp start_child(map_ids, pools_count) do
case DynamicSupervisor.start_child(@name, {WandererApp.Map.MapPool, map_ids}) do
{:ok, pid} ->
Logger.info("Starting map pool, total map_pools: #{pools_count + 1}")
{:ok, pid}
{:error, {:already_started, pid}} ->
{:ok, pid}
end
end
defp stop_child(uuid) do
case Registry.lookup(@registry, uuid) do
[{pid, _}] ->
GenServer.cast(pid, :stop)
_ ->
Logger.warn("Unable to locate pool assigned to #{inspect(uuid)}")
:ok
end
end
end

View File

@@ -0,0 +1,22 @@
defmodule WandererApp.Map.MapPoolSupervisor do
@moduledoc false
use Supervisor
@name __MODULE__
@registry :map_pool_registry
@unique_registry :unique_map_pool_registry
def start_link(_args) do
Supervisor.start_link(@name, [], name: @name)
end
def init(_args) do
children = [
{Registry, [keys: :unique, name: @unique_registry]},
{Registry, [keys: :duplicate, name: @registry]},
{WandererApp.Map.MapPoolDynamicSupervisor, []}
]
Supervisor.init(children, strategy: :rest_for_one, max_restarts: 10)
end
end

View File

@@ -2,6 +2,8 @@ defmodule WandererApp.Map.PositionCalculator do
@moduledoc false
require Logger
@ddrt Application.compile_env(:wanderer_app, :ddrt)
# Node height
@h 34
# Node weight
@@ -60,7 +62,7 @@ defmodule WandererApp.Map.PositionCalculator do
end
defp is_available_position({x, y} = _position, rtree_name) do
case DDRT.query(get_system_bounding_rect(%{position_x: x, position_y: y}), rtree_name) do
case @ddrt.query(get_system_bounding_rect(%{position_x: x, position_y: y}), rtree_name) do
{:ok, []} ->
true
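
The @ddrt module attribute resolves at compile time from application config, which is what the config.exs hunk earlier in this comparison switches from the DDRT GenServer to the cache-based implementation:

# From config/config.exs in this changeset:
import Config

config :wanderer_app,
  ddrt: WandererApp.Map.CacheRTree

# Tests (or a rollback) could point it back at DDRT without touching the calculator:
# config :wanderer_app, ddrt: DDRT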

View File

@@ -1,15 +0,0 @@
defmodule WandererApp.Map.RegistryHelper do
@moduledoc false
alias WandererApp.MapRegistry
def list_all_maps(),
do: Registry.select(MapRegistry, [{{:"$1", :"$2", :_}, [], [%{id: :"$1", pid: :"$2"}]}])
def list_all_maps_by_map_id(map_id) do
match_all = {:"$1", :"$2", :"$3"}
guards = [{:==, :"$1", map_id}]
map_result = [%{id: :"$1", pid: :"$2"}]
Registry.select(MapRegistry, [{match_all, guards, map_result}])
end
end

View File

@@ -1,41 +0,0 @@
defmodule WandererApp.Map.RtreeDynamicSupervisor do
@moduledoc """
Dynamically starts a map server
"""
use DynamicSupervisor
def start_link(_arg) do
DynamicSupervisor.start_link(__MODULE__, nil, name: __MODULE__)
end
def init(nil) do
DynamicSupervisor.init(strategy: :one_for_one)
end
def start(map_id) do
case DynamicSupervisor.start_child(
__MODULE__,
{DDRT.DynamicRtree,
[
conf: [name: "rtree_#{map_id}", width: 150, verbose: false, seed: 0],
name: Module.concat([map_id, DDRT.DynamicRtree])
]}
) do
{:ok, pid} -> {:ok, pid}
{:error, {:already_started, pid}} -> {:ok, pid}
{:error, reason} -> {:error, reason}
end
end
def stop(map_id) do
case Process.whereis(Module.concat([map_id, DDRT.DynamicRtree])) do
nil -> :ok
pid when is_pid(pid) -> DynamicSupervisor.terminate_child(__MODULE__, pid)
end
end
def which_children do
Supervisor.which_children(__MODULE__)
end
end

View File

@@ -2,52 +2,12 @@ defmodule WandererApp.Map.Server do
@moduledoc """
Holds state for a map and exposes an interface to managing the map instance
"""
use GenServer, restart: :transient, significant: true
require Logger
alias WandererApp.Map.Server.Impl
@logger Application.compile_env(:wanderer_app, :logger)
@spec start_link(keyword()) :: GenServer.on_start()
def start_link(args) when is_list(args) do
GenServer.start_link(__MODULE__, args, name: _via(args[:map_id]))
end
@impl true
def init(args), do: {:ok, Impl.init(args), {:continue, :load_state}}
def map_pid(map_id),
do:
map_id
|> _via()
|> GenServer.whereis()
def map_pid!(map_id) do
map_id
|> map_pid()
|> case do
map_id when is_pid(map_id) ->
map_id
nil ->
WandererApp.Cache.insert("map_#{map_id}:started", false)
throw("Map server not started")
end
end
def get_map(pid) when is_pid(pid),
do:
pid
|> GenServer.call({&Impl.get_map/1, []}, :timer.minutes(5))
def get_map(map_id) when is_binary(map_id),
do:
map_id
|> map_pid!
|> get_map()
def get_export_settings(%{id: map_id, hubs: hubs} = _map) do
with {:ok, all_systems} <- WandererApp.MapSystemRepo.get_all_by_map(map_id),
{:ok, connections} <- WandererApp.MapConnectionRepo.get_by_map(map_id) do
@@ -70,250 +30,67 @@ defmodule WandererApp.Map.Server do
end
end
def add_character(map_id, character, track_character \\ false) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.add_character/3, [character, track_character]})
defdelegate untrack_characters(map_id, character_ids), to: Impl
def remove_character(map_id, character_id) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.remove_character/2, [character_id]})
defdelegate add_system(map_id, system_info, user_id, character_id, opts \\ []), to: Impl
def untrack_characters(map_id, character_ids) when is_binary(map_id) do
map_id
|> map_pid()
|> case do
pid when is_pid(pid) ->
GenServer.cast(pid, {&Impl.untrack_characters/2, [character_ids]})
defdelegate paste_connections(map_id, connections, user_id, character_id), to: Impl
_ ->
WandererApp.Cache.insert("map_#{map_id}:started", false)
:ok
end
end
def add_system(map_id, system_info, user_id, character_id, opts \\ []) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.add_system/5, [system_info, user_id, character_id, opts]})
def paste_connections(map_id, connections, user_id, character_id) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.paste_connections/4, [connections, user_id, character_id]})
def paste_systems(map_id, systems, user_id, character_id, opts \\ []) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.paste_systems/5, [systems, user_id, character_id, opts]})
def add_system_comment(map_id, comment_info, user_id, character_id) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.add_system_comment/4, [comment_info, user_id, character_id]})
def remove_system_comment(map_id, comment_id, user_id, character_id) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.remove_system_comment/4, [comment_id, user_id, character_id]})
def update_system_position(map_id, update) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_system_position/2, [update]})
def update_system_linked_sig_eve_id(map_id, update) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_system_linked_sig_eve_id/2, [update]})
def update_system_name(map_id, update) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_system_name/2, [update]})
def update_system_description(map_id, update) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_system_description/2, [update]})
def update_system_status(map_id, update) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_system_status/2, [update]})
def update_system_tag(map_id, update) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_system_tag/2, [update]})
def update_system_temporary_name(map_id, update) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_system_temporary_name/2, [update]})
def update_system_locked(map_id, update) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_system_locked/2, [update]})
def update_system_labels(map_id, update) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_system_labels/2, [update]})
def add_hub(map_id, hub_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.add_hub/2, [hub_info]})
def remove_hub(map_id, hub_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.remove_hub/2, [hub_info]})
def add_ping(map_id, ping_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.add_ping/2, [ping_info]})
def cancel_ping(map_id, ping_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.cancel_ping/2, [ping_info]})
def delete_systems(map_id, solar_system_ids, user_id, character_id) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.delete_systems/4, [solar_system_ids, user_id, character_id]})
def add_connection(map_id, connection_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.add_connection/2, [connection_info]})
def import_settings(map_id, settings, user_id) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.call({&Impl.import_settings/3, [settings, user_id]}, :timer.minutes(30))
def update_subscription_settings(map_id, settings) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_subscription_settings/2, [settings]})
def delete_connection(map_id, connection_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.delete_connection/2, [connection_info]})
def get_connection_info(map_id, connection_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.call({&Impl.get_connection_info/2, [connection_info]}, :timer.minutes(1))
def update_connection_time_status(map_id, connection_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_connection_time_status/2, [connection_info]})
def update_connection_type(map_id, connection_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_connection_type/2, [connection_info]})
def update_connection_mass_status(map_id, connection_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_connection_mass_status/2, [connection_info]})
def update_connection_ship_size_type(map_id, connection_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_connection_ship_size_type/2, [connection_info]})
def update_connection_locked(map_id, connection_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_connection_locked/2, [connection_info]})
def update_connection_custom_info(map_id, connection_info) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_connection_custom_info/2, [connection_info]})
def update_signatures(map_id, signatures_update) when is_binary(map_id),
do:
map_id
|> map_pid!
|> GenServer.cast({&Impl.update_signatures/2, [signatures_update]})
@impl true
def handle_continue(:load_state, state),
do: {:noreply, state |> Impl.load_state(), {:continue, :start_map}}
@impl true
def handle_continue(:start_map, state), do: {:noreply, state |> Impl.start_map()}
@impl true
def handle_call(
{impl_function, args},
_from,
state
)
when is_function(impl_function),
do: WandererApp.GenImpl.apply_call(impl_function, state, args)
@impl true
def handle_cast(:stop, state), do: {:stop, :normal, state |> Impl.stop_map()}
@impl true
def handle_cast({impl_function, args}, state)
when is_function(impl_function) do
case WandererApp.GenImpl.apply_call(impl_function, state, args) do
{:reply, _return, updated_state} ->
{:noreply, updated_state}
_ ->
{:noreply, state}
end
end
@impl true
def handle_info(event, state), do: {:noreply, Impl.handle_event(event, state)}
defp _via(map_id), do: {:via, Registry, {WandererApp.MapRegistry, map_id}}
defdelegate paste_systems(map_id, systems, user_id, character_id, opts \\ []), to: Impl
defdelegate add_system_comment(map_id, comment_info, user_id, character_id), to: Impl
defdelegate remove_system_comment(map_id, comment_id, user_id, character_id), to: Impl
defdelegate update_system_position(map_id, update), to: Impl
defdelegate update_system_linked_sig_eve_id(map_id, update), to: Impl
defdelegate update_system_name(map_id, update), to: Impl
defdelegate update_system_description(map_id, update), to: Impl
defdelegate update_system_status(map_id, update), to: Impl
defdelegate update_system_tag(map_id, update), to: Impl
defdelegate update_system_temporary_name(map_id, update), to: Impl
defdelegate update_system_locked(map_id, update), to: Impl
defdelegate update_system_labels(map_id, update), to: Impl
defdelegate add_hub(map_id, hub_info), to: Impl
defdelegate remove_hub(map_id, hub_info), to: Impl
defdelegate add_ping(map_id, ping_info), to: Impl
defdelegate cancel_ping(map_id, ping_info), to: Impl
defdelegate delete_systems(map_id, solar_system_ids, user_id, character_id), to: Impl
defdelegate add_connection(map_id, connection_info), to: Impl
defdelegate delete_connection(map_id, connection_info), to: Impl
defdelegate import_settings(map_id, settings, user_id), to: Impl
defdelegate update_subscription_settings(map_id, settings), to: Impl
defdelegate get_connection_info(map_id, connection_info), to: Impl
defdelegate update_connection_time_status(map_id, connection_info), to: Impl
defdelegate update_connection_type(map_id, connection_info), to: Impl
defdelegate update_connection_mass_status(map_id, connection_info), to: Impl
defdelegate update_connection_ship_size_type(map_id, connection_info), to: Impl
defdelegate update_connection_locked(map_id, connection_info), to: Impl
defdelegate update_connection_custom_info(map_id, connection_info), to: Impl
defdelegate update_signatures(map_id, signatures_update), to: Impl
end

View File

@@ -300,10 +300,9 @@ defmodule WandererApp.Map.SubscriptionManager do
defp is_expired(subscription) when is_map(subscription),
do: DateTime.compare(DateTime.utc_now(), subscription.active_till) == :gt
defp renew_subscription(%{auto_renew?: true} = subscription) when is_map(subscription) do
with {:ok, %{map: map}} <-
subscription |> WandererApp.MapSubscriptionRepo.load_relationships([:map]),
{:ok, estimated_price, discount} <- estimate_price(subscription, true),
defp renew_subscription(%{auto_renew?: true, map: map} = subscription)
when is_map(subscription) do
with {:ok, estimated_price, discount} <- estimate_price(subscription, true),
{:ok, map_balance} <- get_balance(map) do
case map_balance >= estimated_price do
true ->
@@ -328,7 +327,7 @@ defmodule WandererApp.Map.SubscriptionManager do
@pubsub_client.broadcast(
WandererApp.PubSub,
"maps:#{map.id}",
:subscription_settings_updated
{:subscription_settings_updated, map.id}
)
:telemetry.execute([:wanderer_app, :map, :subscription, :renew], %{count: 1}, %{
@@ -388,7 +387,7 @@ defmodule WandererApp.Map.SubscriptionManager do
@pubsub_client.broadcast(
WandererApp.PubSub,
"maps:#{map.id}",
:subscription_settings_updated
{:subscription_settings_updated, map.id}
)
case WandererApp.License.LicenseManager.get_license_by_map_id(map.id) do
@@ -423,7 +422,7 @@ defmodule WandererApp.Map.SubscriptionManager do
@pubsub_client.broadcast(
WandererApp.PubSub,
"maps:#{subscription.map_id}",
:subscription_settings_updated
{:subscription_settings_updated, subscription.map_id}
)
case WandererApp.License.LicenseManager.get_license_by_map_id(subscription.map_id) do
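
The broadcast payload above changes from a bare :subscription_settings_updated atom to a tuple that carries the map id, so subscribers have to match the new shape. A minimal, hedged sketch of such a clause; the state shape and function placement are assumptions, only the tuple and get_active_map_subscription/1 come from this diff:

def handle_info({:subscription_settings_updated, map_id}, state) do
  # re-read the now-active subscription for this map
  {:ok, settings} = WandererApp.Map.SubscriptionManager.get_active_map_subscription(map_id)
  {:noreply, %{state | subscription_settings: settings}}
end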

View File

@@ -29,11 +29,12 @@ defmodule WandererApp.Map.ZkbDataFetcher do
kills_enabled = Application.get_env(:wanderer_app, :wanderer_kills_service_enabled, true)
if kills_enabled do
WandererApp.Map.RegistryHelper.list_all_maps()
{:ok, started_maps_ids} = WandererApp.Cache.lookup("started_maps", [])
started_maps_ids
|> Task.async_stream(
fn %{id: map_id, pid: _server_pid} ->
fn map_id ->
try do
if WandererApp.Map.Server.map_pid(map_id) do
# Always update kill counts
update_map_kills(map_id)
@@ -43,7 +44,6 @@ defmodule WandererApp.Map.ZkbDataFetcher do
if is_subscription_active do
update_detailed_map_kills(map_id)
end
end
rescue
e ->
@logger.error(Exception.message(e))

View File

@@ -231,31 +231,15 @@ defmodule WandererApp.Map.Operations.Connections do
attrs
) do
with {:ok, conn_struct} <- MapConnectionRepo.get_by_id(map_id, conn_id),
result <-
:ok <-
(try do
_allowed_keys = [
:mass_status,
:ship_size_type,
:time_status,
:type
]
_update_map =
attrs
|> Enum.filter(fn {k, _v} ->
k in ["mass_status", "ship_size_type", "time_status", "type"]
end)
|> Enum.map(fn {k, v} -> {String.to_atom(k), v} end)
|> Enum.into(%{})
res = apply_connection_updates(map_id, conn_struct, attrs, char_id)
res
rescue
error ->
Logger.error("[update_connection] Exception: #{inspect(error)}")
{:error, :exception}
end),
:ok <- result do
end) do
# Since GenServer updates are asynchronous, manually apply updates to the current struct
# to return the correct data immediately instead of refetching from potentially stale cache
updated_attrs =
@@ -374,6 +358,7 @@ defmodule WandererApp.Map.Operations.Connections do
"ship_size_type" -> maybe_update_ship_size_type(map_id, conn, val)
"time_status" -> maybe_update_time_status(map_id, conn, val)
"type" -> maybe_update_type(map_id, conn, val)
"locked" -> maybe_update_locked(map_id, conn, val)
_ -> :ok
end
@@ -429,6 +414,16 @@ defmodule WandererApp.Map.Operations.Connections do
})
end
defp maybe_update_locked(_map_id, _conn, nil), do: :ok
defp maybe_update_locked(map_id, conn, value) do
Server.update_connection_locked(map_id, %{
solar_system_source_id: conn.solar_system_source,
solar_system_target_id: conn.solar_system_target,
locked: value
})
end
@doc "Creates a connection between two systems"
@spec create_connection(String.t(), map(), String.t()) ::
{:ok, :created} | {:skip, :exists} | {:error, atom()}
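
Further up in this file's diff, the update dispatch gains a "locked" clause and a maybe_update_locked/3 helper, so an update payload carrying "locked" is now forwarded to Server.update_connection_locked/2. A hedged example of such a call; only the routing comes from the diff, the attribute values are made up:

# apply_connection_updates/4 is the private helper referenced above; values here are hypothetical.
attrs = %{"mass_status" => 1, "time_status" => 0, "locked" => true}
apply_connection_updates(map_id, conn_struct, attrs, char_id)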

View File

@@ -5,9 +5,41 @@ defmodule WandererApp.Map.Operations.Signatures do
require Logger
alias WandererApp.Map.Operations
alias WandererApp.Api.{MapSystem, MapSystemSignature}
alias WandererApp.Api.{Character, MapSystem, MapSystemSignature}
alias WandererApp.Map.Server
# Private helper to validate character_eve_id from params
# If character_eve_id is provided in params, validates it exists in the system
# If not provided, falls back to the owner's character ID
@spec validate_character_eve_id(map() | nil, String.t()) ::
{:ok, String.t()} | {:error, :invalid_character}
defp validate_character_eve_id(params, fallback_char_id) when is_map(params) do
case Map.get(params, "character_eve_id") do
nil ->
# No character_eve_id provided, use fallback (owner's character)
{:ok, fallback_char_id}
provided_char_id when is_binary(provided_char_id) ->
# Validate the provided character_eve_id exists
case Character.by_eve_id(provided_char_id) do
{:ok, _character} ->
{:ok, provided_char_id}
_ ->
{:error, :invalid_character}
end
_ ->
# Invalid format
{:error, :invalid_character}
end
end
# Handle nil or non-map params by falling back to owner's character
defp validate_character_eve_id(_params, fallback_char_id) do
{:ok, fallback_char_id}
end
@spec list_signatures(String.t()) :: [map()]
def list_signatures(map_id) do
systems = Operations.list_systems(map_id)
@@ -41,11 +73,12 @@ defmodule WandererApp.Map.Operations.Signatures do
%{"solar_system_id" => solar_system_id} = params
)
when is_integer(solar_system_id) do
# Convert solar_system_id to system_id for internal use
with {:ok, system} <- MapSystem.by_map_id_and_solar_system_id(map_id, solar_system_id) do
# Validate character first, then convert solar_system_id to system_id
with {:ok, validated_char_id} <- validate_character_eve_id(params, char_id),
{:ok, system} <- MapSystem.by_map_id_and_solar_system_id(map_id, solar_system_id) do
attrs =
params
|> Map.put("character_eve_id", char_id)
|> Map.put("character_eve_id", validated_char_id)
|> Map.put("system_id", system.id)
|> Map.delete("solar_system_id")
@@ -54,7 +87,7 @@ defmodule WandererApp.Map.Operations.Signatures do
updated_signatures: [],
removed_signatures: [],
solar_system_id: solar_system_id,
character_id: char_id,
character_id: validated_char_id,
user_id: user_id,
delete_connection_with_sigs: false
}) do
@@ -86,6 +119,10 @@ defmodule WandererApp.Map.Operations.Signatures do
{:error, :unexpected_error}
end
else
{:error, :invalid_character} ->
Logger.error("[create_signature] Invalid character_eve_id provided")
{:error, :invalid_character}
_ ->
Logger.error(
"[create_signature] System not found for solar_system_id: #{solar_system_id}"
@@ -111,7 +148,9 @@ defmodule WandererApp.Map.Operations.Signatures do
sig_id,
params
) do
with {:ok, sig} <- MapSystemSignature.by_id(sig_id),
# Validate character first, then look up signature and system
with {:ok, validated_char_id} <- validate_character_eve_id(params, char_id),
{:ok, sig} <- MapSystemSignature.by_id(sig_id),
{:ok, system} <- MapSystem.by_id(sig.system_id) do
base = %{
"eve_id" => sig.eve_id,
@@ -120,7 +159,7 @@ defmodule WandererApp.Map.Operations.Signatures do
"group" => sig.group,
"type" => sig.type,
"custom_info" => sig.custom_info,
"character_eve_id" => char_id,
"character_eve_id" => validated_char_id,
"description" => sig.description,
"linked_system_id" => sig.linked_system_id
}
@@ -133,7 +172,7 @@ defmodule WandererApp.Map.Operations.Signatures do
updated_signatures: [attrs],
removed_signatures: [],
solar_system_id: system.solar_system_id,
character_id: char_id,
character_id: validated_char_id,
user_id: user_id,
delete_connection_with_sigs: false
})
@@ -151,6 +190,10 @@ defmodule WandererApp.Map.Operations.Signatures do
_ -> {:ok, attrs}
end
else
{:error, :invalid_character} ->
Logger.error("[update_signature] Invalid character_eve_id provided")
{:error, :invalid_character}
err ->
Logger.error("[update_signature] Unexpected error: #{inspect(err)}")
{:error, :unexpected_error}
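
Restating the validate_character_eve_id/2 contract defined above as hedged examples; the eve id strings and the owner fallback are hypothetical values:

# Provided and known to the Character resource: accepted as-is.
validate_character_eve_id(%{"character_eve_id" => "2112345678"}, owner_eve_id)
#=> {:ok, "2112345678"}

# Absent (or nil / non-map params): fall back to the owner's character.
validate_character_eve_id(%{}, owner_eve_id)
#=> {:ok, owner_eve_id}

# Provided but unknown to Character.by_eve_id/1, or not a string: rejected.
validate_character_eve_id(%{"character_eve_id" => 123}, owner_eve_id)
#=> {:error, :invalid_character}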

View File

@@ -5,7 +5,7 @@ defmodule WandererApp.Map.Server.AclsImpl do
@pubsub_client Application.compile_env(:wanderer_app, :pubsub_client)
def handle_map_acl_updated(%{map_id: map_id, map: old_map} = state, added_acls, removed_acls) do
def handle_map_acl_updated(map_id, added_acls, removed_acls) do
{:ok, map} =
WandererApp.MapRepo.get(map_id,
acls: [
@@ -63,7 +63,11 @@ defmodule WandererApp.Map.Server.AclsImpl do
broadcast_acl_updates({:ok, result}, map_id)
%{state | map: Map.merge(old_map, map_update)}
{:ok, %{map: old_map}} = WandererApp.Map.get_map_state(map_id)
WandererApp.Map.update_map_state(map_id, %{
map: Map.merge(old_map, map_update)
})
end
def handle_acl_updated(map_id, acl_id) do
@@ -113,8 +117,18 @@ defmodule WandererApp.Map.Server.AclsImpl do
track_acls(rest)
end
defp track_acl(acl_id),
do: @pubsub_client.subscribe(WandererApp.PubSub, "acls:#{acl_id}")
defp track_acl(acl_id) do
Cachex.get_and_update(:acl_cache, acl_id, fn acl ->
case acl do
nil ->
@pubsub_client.subscribe(WandererApp.PubSub, "acls:#{acl_id}")
{:commit, acl_id}
_ ->
{:ignore, nil}
end
end)
end
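# Illustration only, assuming :acl_cache starts empty: the first call for an acl id misses the
# cache, subscribes to "acls:#{acl_id}" and commits the id; a repeated call hits the cached id
# and returns {:ignore, nil}, so the process never subscribes to the same topic twice.
#
#   track_acl("acl-1")   # subscribes and caches
#   track_acl("acl-1")   # no-op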
defp broadcast_acl_updates(
{:ok,

View File

@@ -5,56 +5,33 @@ defmodule WandererApp.Map.Server.CharactersImpl do
alias WandererApp.Map.Server.{Impl, ConnectionsImpl, SystemsImpl}
def add_character(%{map_id: map_id} = state, %{id: character_id} = character, track_character) do
Task.start_link(fn ->
with :ok <- map_id |> WandererApp.Map.add_character(character),
{:ok, _settings} <-
WandererApp.MapCharacterSettingsRepo.create(%{
character_id: character_id,
map_id: map_id,
tracked: track_character
}),
{:ok, character} <- WandererApp.Character.get_character(character_id) do
Impl.broadcast!(map_id, :character_added, character)
def cleanup_characters(map_id) do
{:ok, invalidate_character_ids} =
WandererApp.Cache.get_and_remove(
"map_#{map_id}:invalidate_character_ids",
[]
)
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :character_added, character)
:telemetry.execute([:wanderer_app, :map, :character, :added], %{count: 1})
if Enum.empty?(invalidate_character_ids) do
:ok
else
{:error, :not_found} ->
:ok
{:ok, %{acls: acls}} =
WandererApp.MapRepo.get(map_id,
acls: [
:owner_id,
members: [:role, :eve_character_id, :eve_corporation_id, :eve_alliance_id]
]
)
_error ->
{:ok, character} = WandererApp.Character.get_character(character_id)
Impl.broadcast!(map_id, :character_added, character)
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :character_added, character)
:ok
process_invalidate_characters(invalidate_character_ids, map_id, acls)
end
end)
state
end
def remove_character(map_id, character_id) do
Task.start_link(fn ->
with :ok <- WandererApp.Map.remove_character(map_id, character_id),
{:ok, character} <- WandererApp.Character.get_map_character(map_id, character_id) do
Impl.broadcast!(map_id, :character_removed, character)
def track_characters(_map_id, []), do: :ok
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :character_removed, character)
:telemetry.execute([:wanderer_app, :map, :character, :removed], %{count: 1})
:ok
else
{:error, _error} ->
:ok
end
end)
def track_characters(map_id, [character_id | rest]) do
track_character(map_id, character_id)
track_characters(map_id, rest)
end
def update_tracked_characters(map_id) do
@@ -94,18 +71,18 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end)
end
def untrack_character(true, map_id, character_id) do
defp untrack_character(true, map_id, character_id) do
WandererApp.Character.TrackerManager.update_track_settings(character_id, %{
map_id: map_id,
track: false
})
end
def untrack_character(_is_character_map_active, _map_id, character_id) do
defp untrack_character(_is_character_map_active, _map_id, character_id) do
:ok
end
def is_character_map_active?(map_id, character_id) do
defp is_character_map_active?(map_id, character_id) do
case WandererApp.Character.get_character_state(character_id) do
{:ok, %{active_maps: active_maps}} ->
map_id in active_maps
@@ -115,29 +92,9 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end
end
def cleanup_characters(map_id, owner_id) do
{:ok, invalidate_character_ids} =
WandererApp.Cache.get_and_remove(
"map_#{map_id}:invalidate_character_ids",
[]
)
defp process_invalidate_characters(invalidate_character_ids, map_id, acls) do
{:ok, %{map: %{owner_id: owner_id}}} = WandererApp.Map.get_map_state(map_id)
if Enum.empty?(invalidate_character_ids) do
:ok
else
{:ok, %{acls: acls}} =
WandererApp.MapRepo.get(map_id,
acls: [
:owner_id,
members: [:role, :eve_character_id, :eve_corporation_id, :eve_alliance_id]
]
)
process_invalidate_characters(invalidate_character_ids, map_id, owner_id, acls)
end
end
defp process_invalidate_characters(invalidate_character_ids, map_id, owner_id, acls) do
invalidate_character_ids
|> Task.async_stream(
fn character_id ->
@@ -194,6 +151,25 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end
end
defp remove_character(map_id, character_id) do
Task.start_link(fn ->
with :ok <- WandererApp.Map.remove_character(map_id, character_id),
{:ok, character} <- WandererApp.Character.get_map_character(map_id, character_id) do
Impl.broadcast!(map_id, :character_removed, character)
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :character_removed, character)
:telemetry.execute([:wanderer_app, :map, :character, :removed], %{count: 1})
:ok
else
{:error, _error} ->
:ok
end
end)
end
defp remove_and_untrack_characters(map_id, character_ids) do
Logger.debug(fn ->
"Map #{map_id} - remove and untrack characters #{inspect(character_ids)}"
@@ -217,14 +193,7 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end
end
def track_characters(_map_id, []), do: :ok
def track_characters(map_id, [character_id | rest]) do
track_character(map_id, character_id)
track_characters(map_id, rest)
end
def update_characters(%{map_id: map_id} = state) do
def update_characters(map_id) do
try do
{:ok, presence_character_ids} =
WandererApp.Cache.lookup("map_#{map_id}:presence_character_ids", [])
@@ -246,11 +215,13 @@ defmodule WandererApp.Map.Server.CharactersImpl do
update
|> case do
{:character_location, location_info, old_location_info} ->
{:ok, map_state} = WandererApp.Map.get_map_state(map_id)
update_location(
map_state,
character_id,
location_info,
old_location_info,
state
old_location_info
)
:broadcast
@@ -330,34 +301,35 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end
defp update_location(
%{map: %{scope: scope}, map_id: map_id, map_opts: map_opts} =
_state,
character_id,
location,
old_location,
%{map: map, map_id: map_id, rtree_name: rtree_name, map_opts: map_opts} = _state
old_location
) do
start_solar_system_id =
WandererApp.Cache.take("map:#{map_id}:character:#{character_id}:start_solar_system_id")
case is_nil(old_location.solar_system_id) and
is_nil(start_solar_system_id) and
ConnectionsImpl.can_add_location(map.scope, location.solar_system_id) do
ConnectionsImpl.can_add_location(scope, location.solar_system_id) do
true ->
:ok = SystemsImpl.maybe_add_system(map_id, location, nil, rtree_name, map_opts)
:ok = SystemsImpl.maybe_add_system(map_id, location, nil, map_opts)
_ ->
if is_nil(start_solar_system_id) || start_solar_system_id == old_location.solar_system_id do
ConnectionsImpl.is_connection_valid(
map.scope,
scope,
old_location.solar_system_id,
location.solar_system_id
)
|> case do
true ->
:ok =
SystemsImpl.maybe_add_system(map_id, location, old_location, rtree_name, map_opts)
SystemsImpl.maybe_add_system(map_id, location, old_location, map_opts)
:ok =
SystemsImpl.maybe_add_system(map_id, old_location, location, rtree_name, map_opts)
SystemsImpl.maybe_add_system(map_id, old_location, location, map_opts)
if is_character_in_space?(location) do
:ok =
@@ -381,17 +353,49 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end
end
defp is_character_in_space?(%{station_id: station_id, structure_id: structure_id} = location) do
is_nil(structure_id) and is_nil(station_id)
defp is_character_in_space?(%{station_id: station_id, structure_id: structure_id} = _location),
do: is_nil(structure_id) && is_nil(station_id)
defp add_character(
map_id,
%{id: character_id} = map_character,
track_character
) do
Task.start_link(fn ->
with :ok <- map_id |> WandererApp.Map.add_character(map_character),
{:ok, _settings} <-
WandererApp.MapCharacterSettingsRepo.create(%{
character_id: character_id,
map_id: map_id,
tracked: track_character
}) do
Impl.broadcast!(map_id, :character_added, map_character)
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :character_added, map_character)
:telemetry.execute([:wanderer_app, :map, :character, :added], %{count: 1})
:ok
else
{:error, :not_found} ->
:ok
_error ->
Impl.broadcast!(map_id, :character_added, map_character)
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
WandererApp.ExternalEvents.broadcast(map_id, :character_added, map_character)
:ok
end
end)
end
defp track_character(map_id, character_id) do
{:ok, character} =
{:ok, %{solar_system_id: solar_system_id} = map_character} =
WandererApp.Character.get_map_character(map_id, character_id, not_present: true)
WandererApp.Cache.delete("character:#{character.id}:tracking_paused")
WandererApp.Cache.delete("character:#{character_id}:tracking_paused")
add_character(%{map_id: map_id}, character, true)
add_character(map_id, map_character, true)
WandererApp.Character.TrackerManager.update_track_settings(character_id, %{
map_id: map_id,
@@ -399,7 +403,7 @@ defmodule WandererApp.Map.Server.CharactersImpl do
track_online: true,
track_location: true,
track_ship: true,
solar_system_id: character.solar_system_id
solar_system_id: solar_system_id
})
end
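
For the reworked is_character_in_space?/1 predicate above, two hedged examples of its behaviour; the station id is an arbitrary sample value:

is_character_in_space?(%{station_id: nil, structure_id: nil})
#=> true  (undocked, neither in a station nor a structure)

is_character_in_space?(%{station_id: 60_003_760, structure_id: nil})
#=> false (docked)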

View File

@@ -139,14 +139,14 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
def init_start_cache(_map_id, _connections_start_time), do: :ok
def add_connection(
%{map_id: map_id} = state,
map_id,
%{
solar_system_source_id: solar_system_source_id,
solar_system_target_id: solar_system_target_id,
character_id: character_id
} = connection_info
) do
:ok =
),
do:
maybe_add_connection(
map_id,
%{solar_system_id: solar_system_target_id},
@@ -158,11 +158,8 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
connection_info |> Map.get(:extra_info)
)
state
end
def paste_connections(
%{map_id: map_id} = state,
map_id,
connections,
_user_id,
character_id
@@ -175,47 +172,29 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
solar_system_source_id = source |> String.to_integer()
solar_system_target_id = target |> String.to_integer()
state
|> add_connection(%{
add_connection(map_id, %{
solar_system_source_id: solar_system_source_id,
solar_system_target_id: solar_system_target_id,
character_id: character_id,
extra_info: connection
})
end)
state
end
def delete_connection(
%{map_id: map_id} = state,
map_id,
%{
solar_system_source_id: solar_system_source_id,
solar_system_target_id: solar_system_target_id
} = _connection_info
) do
:ok =
),
do:
maybe_remove_connection(map_id, %{solar_system_id: solar_system_target_id}, %{
solar_system_id: solar_system_source_id
})
state
end
def update_connection_type(
%{map_id: map_id} = state,
%{
solar_system_source_id: solar_system_source_id,
solar_system_target_id: solar_system_target_id,
character_id: character_id
} = _connection_info,
type
) do
state
end
def get_connection_info(
%{map_id: map_id} = _state,
map_id,
%{
solar_system_source_id: solar_system_source_id,
solar_system_target_id: solar_system_target_id
@@ -237,11 +216,11 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
end
def update_connection_time_status(
%{map_id: map_id} = state,
map_id,
connection_update
),
do:
update_connection(state, :update_time_status, [:time_status], connection_update, fn
update_connection(map_id, :update_time_status, [:time_status], connection_update, fn
%{time_status: old_time_status},
%{id: connection_id, time_status: time_status} = updated_connection ->
case time_status == @connection_time_status_eol do
@@ -268,78 +247,46 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
end)
def update_connection_type(
state,
map_id,
connection_update
),
do: update_connection(state, :update_type, [:type], connection_update)
do: update_connection(map_id, :update_type, [:type], connection_update)
def update_connection_mass_status(
state,
map_id,
connection_update
),
do: update_connection(state, :update_mass_status, [:mass_status], connection_update)
do: update_connection(map_id, :update_mass_status, [:mass_status], connection_update)
def update_connection_ship_size_type(
state,
map_id,
connection_update
),
do: update_connection(state, :update_ship_size_type, [:ship_size_type], connection_update)
do: update_connection(map_id, :update_ship_size_type, [:ship_size_type], connection_update)
def update_connection_locked(
state,
map_id,
connection_update
),
do: update_connection(state, :update_locked, [:locked], connection_update)
do: update_connection(map_id, :update_locked, [:locked], connection_update)
def update_connection_custom_info(
state,
map_id,
connection_update
),
do: update_connection(state, :update_custom_info, [:custom_info], connection_update)
do: update_connection(map_id, :update_custom_info, [:custom_info], connection_update)
def cleanup_connections(%{map_id: map_id} = state) do
def cleanup_connections(map_id) do
connection_auto_expire_hours = get_connection_auto_expire_hours()
connection_auto_eol_hours = get_connection_auto_eol_hours()
connection_eol_expire_timeout_hours = get_eol_expire_timeout_mins() / 60
state =
map_id
|> WandererApp.Map.list_connections!()
|> Enum.reduce(state, fn %{
id: connection_id,
solar_system_source: solar_system_source_id,
solar_system_target: solar_system_target_id,
time_status: time_status,
type: type
},
state ->
if type == @connection_type_wormhole do
connection_start_time = get_start_time(map_id, connection_id)
new_time_status = get_new_time_status(connection_start_time, time_status)
if new_time_status != time_status &&
is_connection_valid(
:wormholes,
solar_system_source_id,
solar_system_target_id
) do
set_start_time(map_id, connection_id, DateTime.utc_now())
state
|> update_connection_time_status(%{
solar_system_source_id: solar_system_source_id,
solar_system_target_id: solar_system_target_id,
time_status: new_time_status
})
else
state
end
else
state
end
|> Enum.each(fn connection ->
maybe_update_connection_time_status(map_id, connection)
end)
state =
map_id
|> WandererApp.Map.list_connections!()
|> Enum.filter(fn %{
@@ -379,20 +326,45 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
connection_auto_expire_hours - connection_auto_eol_hours +
connection_eol_expire_timeout_hours)
end)
|> Enum.reduce(state, fn %{
|> Enum.each(fn %{
solar_system_source: solar_system_source_id,
solar_system_target: solar_system_target_id
},
state ->
delete_connection(state, %{
} ->
delete_connection(map_id, %{
solar_system_source_id: solar_system_source_id,
solar_system_target_id: solar_system_target_id
})
end)
state
end
defp maybe_update_connection_time_status(map_id, %{
id: connection_id,
solar_system_source: solar_system_source_id,
solar_system_target: solar_system_target_id,
time_status: time_status,
type: @connection_type_wormhole
}) do
connection_start_time = get_start_time(map_id, connection_id)
new_time_status = get_new_time_status(connection_start_time, time_status)
if new_time_status != time_status &&
is_connection_valid(
:wormholes,
solar_system_source_id,
solar_system_target_id
) do
set_start_time(map_id, connection_id, DateTime.utc_now())
update_connection_time_status(map_id, %{
solar_system_source_id: solar_system_source_id,
solar_system_target_id: solar_system_target_id,
time_status: new_time_status
})
end
end
defp maybe_update_connection_time_status(_map_id, _connection), do: :ok
defp maybe_update_linked_signature_time_status(
map_id,
%{
@@ -401,23 +373,19 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
solar_system_target: solar_system_target
} = updated_connection
) do
source_system =
with source_system when not is_nil(source_system) <-
WandererApp.Map.find_system_by_location(
map_id,
%{solar_system_id: solar_system_source}
)
target_system =
),
target_system when not is_nil(target_system) <-
WandererApp.Map.find_system_by_location(
map_id,
%{solar_system_id: solar_system_target}
)
source_linked_signatures =
find_linked_signatures(source_system, target_system)
target_linked_signatures = find_linked_signatures(target_system, source_system)
),
source_linked_signatures <-
find_linked_signatures(source_system, target_system),
target_linked_signatures <- find_linked_signatures(target_system, source_system) do
update_signatures_time_status(
map_id,
source_system.solar_system_id,
@@ -431,6 +399,10 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
target_linked_signatures,
time_status
)
else
error ->
Logger.error("Failed to update_linked_signature_time_status: #{inspect(error)}")
end
end
defp find_linked_signatures(
@@ -466,7 +438,7 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
%{custom_info: updated_custom_info}
end
SignaturesImpl.apply_update_signature(%{map_id: map_id}, sig, update_params)
SignaturesImpl.apply_update_signature(map_id, sig, update_params)
end)
Impl.broadcast!(map_id, :signatures_updated, solar_system_id)
@@ -779,7 +751,7 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
defp maybe_remove_connection(_map_id, _location, _old_location), do: :ok
defp update_connection(
%{map_id: map_id} = state,
map_id,
update_method,
attributes,
%{
@@ -824,12 +796,12 @@ defmodule WandererApp.Map.Server.ConnectionsImpl do
custom_info: updated_connection.custom_info
})
state
:ok
else
{:error, error} ->
Logger.error("Failed to update connection: #{inspect(error, pretty: true)}")
state
:ok
end
end
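
After this refactor the connection helpers in ConnectionsImpl are keyed by map_id instead of threading the server state struct. A hedged usage sketch; the argument shape is taken from the clauses above, the ids and status value are made up:

ConnectionsImpl.update_connection_time_status(map_id, %{
  solar_system_source_id: 31_000_001,
  solar_system_target_id: 31_000_002,
  time_status: 1
})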

View File

@@ -24,12 +24,10 @@ defmodule WandererApp.Map.Server.Impl do
map_opts: []
]
@systems_cleanup_timeout :timer.minutes(30)
@characters_cleanup_timeout :timer.minutes(5)
@pubsub_client Application.compile_env(:wanderer_app, :pubsub_client)
@connections_cleanup_timeout :timer.minutes(1)
@pubsub_client Application.compile_env(:wanderer_app, :pubsub_client)
@backup_state_timeout :timer.minutes(1)
@update_presence_timeout :timer.seconds(5)
@update_characters_timeout :timer.seconds(1)
@update_tracked_characters_timeout :timer.minutes(1)
@@ -37,21 +35,16 @@ defmodule WandererApp.Map.Server.Impl do
def new(), do: __struct__()
def new(args), do: __struct__(args)
def init(args) do
map_id = args[:map_id]
Logger.info("Starting map server for #{map_id}")
ErrorTracker.set_context(%{map_id: map_id})
WandererApp.Cache.insert("map_#{map_id}:started", false)
def do_init_state(opts) do
map_id = opts[:map_id]
initial_state =
%{
map_id: map_id,
rtree_name: Module.concat([map_id, DDRT.DynamicRtree])
rtree_name: "rtree_#{map_id}"
}
|> new()
end
def load_state(%__MODULE__{map_id: map_id} = state) do
with {:ok, map} <-
WandererApp.MapRepo.get(map_id, [
:owner,
@@ -65,23 +58,23 @@ defmodule WandererApp.Map.Server.Impl do
{:ok, connections} <- WandererApp.MapConnectionRepo.get_by_map(map_id),
{:ok, subscription_settings} <-
WandererApp.Map.SubscriptionManager.get_active_map_subscription(map_id) do
state
initial_state
|> init_map(
map,
subscription_settings,
systems,
connections
)
|> SystemsImpl.init_map_systems(systems)
|> init_map_cache()
else
error ->
Logger.error("Failed to load map state: #{inspect(error, pretty: true)}")
state
initial_state
end
end
def start_map(%__MODULE__{map: map, map_id: map_id} = state) do
def start_map(%__MODULE__{map: map, map_id: map_id} = _state) do
WandererApp.Cache.insert("map_#{map_id}:started", false)
# Check if map was loaded successfully
case map do
nil ->
@@ -95,253 +88,299 @@ defmodule WandererApp.Map.Server.Impl do
"maps:#{map_id}"
)
Process.send_after(self(), :update_characters, @update_characters_timeout)
WandererApp.Map.CacheRTree.init_tree("rtree_#{map_id}", %{width: 150, verbose: false})
Process.send_after(self(), {:update_characters, map_id}, @update_characters_timeout)
Process.send_after(
self(),
:update_tracked_characters,
{:update_tracked_characters, map_id},
@update_tracked_characters_timeout
)
Process.send_after(self(), :update_presence, @update_presence_timeout)
Process.send_after(self(), :cleanup_connections, @connections_cleanup_timeout)
Process.send_after(self(), :cleanup_systems, 10_000)
Process.send_after(self(), :cleanup_characters, @characters_cleanup_timeout)
Process.send_after(self(), :backup_state, @backup_state_timeout)
Process.send_after(self(), {:update_presence, map_id}, @update_presence_timeout)
WandererApp.Cache.insert("map_#{map_id}:started", true)
# Initialize zkb cache structure to prevent timing issues
cache_key = "map:#{map_id}:zkb:detailed_kills"
WandererApp.Cache.insert(cache_key, %{}, ttl: :timer.hours(24))
WandererApp.Cache.insert("map:#{map_id}:zkb:detailed_kills", %{}, ttl: :timer.hours(24))
broadcast!(map_id, :map_server_started)
@pubsub_client.broadcast!(WandererApp.PubSub, "maps", :map_server_started)
:telemetry.execute([:wanderer_app, :map, :started], %{count: 1})
state
else
error ->
Logger.error("Failed to start map: #{inspect(error, pretty: true)}")
state
end
end
end
def stop_map(%{map_id: map_id} = state) do
def stop_map(map_id) do
Logger.debug(fn -> "Stopping map server for #{map_id}" end)
@pubsub_client.unsubscribe(
WandererApp.PubSub,
"maps:#{map_id}"
)
WandererApp.Cache.delete("map_#{map_id}:started")
WandererApp.Cache.delete("map_characters-#{map_id}")
WandererApp.Map.CacheRTree.clear_tree("rtree_#{map_id}")
WandererApp.Map.delete_map_state(map_id)
WandererApp.Cache.insert_or_update(
"started_maps",
[],
fn started_maps ->
started_maps
|> Enum.reject(fn started_map_id -> started_map_id == map_id end)
end
)
:telemetry.execute([:wanderer_app, :map, :stopped], %{count: 1})
state
|> maybe_stop_rtree()
end
def get_map(%{map: map} = _state), do: {:ok, map}
defdelegate cleanup_systems(map_id), to: SystemsImpl
defdelegate cleanup_connections(map_id), to: ConnectionsImpl
defdelegate cleanup_characters(map_id), to: CharactersImpl
defdelegate add_character(state, character, track_character), to: CharactersImpl
defdelegate untrack_characters(map_id, characters_ids), to: CharactersImpl
def remove_character(%{map_id: map_id} = state, character_id) do
CharactersImpl.remove_character(map_id, character_id)
defdelegate add_system(map_id, system_info, user_id, character_id, opts \\ []), to: SystemsImpl
state
end
defdelegate paste_connections(map_id, connections, user_id, character_id), to: ConnectionsImpl
def untrack_characters(%{map_id: map_id} = state, characters_ids) do
CharactersImpl.untrack_characters(map_id, characters_ids)
defdelegate paste_systems(map_id, systems, user_id, character_id, opts), to: SystemsImpl
state
end
defdelegate add_system_comment(map_id, comment_info, user_id, character_id), to: SystemsImpl
defdelegate add_system(state, system_info, user_id, character_id, opts \\ []), to: SystemsImpl
defdelegate paste_systems(state, systems, user_id, character_id, opts), to: SystemsImpl
defdelegate add_system_comment(state, comment_info, user_id, character_id), to: SystemsImpl
defdelegate remove_system_comment(state, comment_id, user_id, character_id), to: SystemsImpl
defdelegate remove_system_comment(map_id, comment_id, user_id, character_id), to: SystemsImpl
defdelegate delete_systems(
state,
map_id,
removed_ids,
user_id,
character_id
),
to: SystemsImpl
defdelegate update_system_name(state, update), to: SystemsImpl
defdelegate update_system_name(map_id, update), to: SystemsImpl
defdelegate update_system_description(state, update), to: SystemsImpl
defdelegate update_system_description(map_id, update), to: SystemsImpl
defdelegate update_system_status(state, update), to: SystemsImpl
defdelegate update_system_status(map_id, update), to: SystemsImpl
defdelegate update_system_tag(state, update), to: SystemsImpl
defdelegate update_system_tag(map_id, update), to: SystemsImpl
defdelegate update_system_temporary_name(state, update), to: SystemsImpl
defdelegate update_system_temporary_name(map_id, update), to: SystemsImpl
defdelegate update_system_locked(state, update), to: SystemsImpl
defdelegate update_system_locked(map_id, update), to: SystemsImpl
defdelegate update_system_labels(state, update), to: SystemsImpl
defdelegate update_system_labels(map_id, update), to: SystemsImpl
defdelegate update_system_linked_sig_eve_id(state, update), to: SystemsImpl
defdelegate update_system_linked_sig_eve_id(map_id, update), to: SystemsImpl
defdelegate update_system_position(state, update), to: SystemsImpl
defdelegate update_system_position(map_id, update), to: SystemsImpl
defdelegate add_hub(state, hub_info), to: SystemsImpl
defdelegate add_hub(map_id, hub_info), to: SystemsImpl
defdelegate remove_hub(state, hub_info), to: SystemsImpl
defdelegate remove_hub(map_id, hub_info), to: SystemsImpl
defdelegate add_ping(state, ping_info), to: PingsImpl
defdelegate add_ping(map_id, ping_info), to: PingsImpl
defdelegate cancel_ping(state, ping_info), to: PingsImpl
defdelegate cancel_ping(map_id, ping_info), to: PingsImpl
defdelegate add_connection(state, connection_info), to: ConnectionsImpl
defdelegate add_connection(map_id, connection_info), to: ConnectionsImpl
defdelegate delete_connection(state, connection_info), to: ConnectionsImpl
defdelegate delete_connection(map_id, connection_info), to: ConnectionsImpl
defdelegate get_connection_info(state, connection_info), to: ConnectionsImpl
defdelegate get_connection_info(map_id, connection_info), to: ConnectionsImpl
defdelegate paste_connections(state, connections, user_id, character_id), to: ConnectionsImpl
defdelegate update_connection_time_status(map_id, connection_update), to: ConnectionsImpl
defdelegate update_connection_time_status(state, connection_update), to: ConnectionsImpl
defdelegate update_connection_type(map_id, connection_update), to: ConnectionsImpl
defdelegate update_connection_type(state, connection_update), to: ConnectionsImpl
defdelegate update_connection_mass_status(map_id, connection_update), to: ConnectionsImpl
defdelegate update_connection_mass_status(state, connection_update), to: ConnectionsImpl
defdelegate update_connection_ship_size_type(map_id, connection_update), to: ConnectionsImpl
defdelegate update_connection_ship_size_type(state, connection_update), to: ConnectionsImpl
defdelegate update_connection_locked(map_id, connection_update), to: ConnectionsImpl
defdelegate update_connection_locked(state, connection_update), to: ConnectionsImpl
defdelegate update_connection_custom_info(map_id, connection_update), to: ConnectionsImpl
defdelegate update_connection_custom_info(state, signatures_update), to: ConnectionsImpl
defdelegate update_signatures(map_id, signatures_update), to: SignaturesImpl
defdelegate update_signatures(state, signatures_update), to: SignaturesImpl
def import_settings(%{map_id: map_id} = state, settings, user_id) do
def import_settings(map_id, settings, user_id) do
WandererApp.Cache.put(
"map_#{map_id}:importing",
true
)
state =
state
|> maybe_import_systems(settings, user_id, nil)
|> maybe_import_connections(settings, user_id)
|> maybe_import_hubs(settings, user_id)
maybe_import_systems(map_id, settings, user_id, nil)
maybe_import_connections(map_id, settings, user_id)
maybe_import_hubs(map_id, settings, user_id)
WandererApp.Cache.take("map_#{map_id}:importing")
state
end
def update_subscription_settings(%{map: map} = state, subscription_settings),
do: %{
state
| map: map |> WandererApp.Map.update_subscription_settings!(subscription_settings)
}
def save_map_state(map_id) do
systems_last_activity =
map_id
|> WandererApp.Map.list_systems!()
|> Enum.reduce(%{}, fn %{id: system_id} = _system, acc ->
case WandererApp.Cache.get("map_#{map_id}:system_#{system_id}:last_activity") do
nil ->
acc
def handle_event(:update_characters, state) do
Process.send_after(self(), :update_characters, @update_characters_timeout)
value ->
acc |> Map.put_new(system_id, value)
end
end)
CharactersImpl.update_characters(state)
connections =
map_id
|> WandererApp.Map.list_connections!()
state
connections_eol_time =
connections
|> Enum.reduce(%{}, fn %{id: connection_id} = _connection, acc ->
case WandererApp.Cache.get("map_#{map_id}:conn_#{connection_id}:mark_eol_time") do
nil ->
acc
value ->
acc |> Map.put_new(connection_id, value)
end
end)
connections_start_time =
connections
|> Enum.reduce(%{}, fn %{id: connection_id} = _connection, acc ->
connection_start_time = ConnectionsImpl.get_start_time(map_id, connection_id)
acc |> Map.put_new(connection_id, connection_start_time)
end)
WandererApp.Api.MapState.create(%{
map_id: map_id,
systems_last_activity: systems_last_activity,
connections_eol_time: connections_eol_time,
connections_start_time: connections_start_time
})
end
def handle_event(:update_tracked_characters, %{map_id: map_id} = state) do
Process.send_after(self(), :update_tracked_characters, @update_tracked_characters_timeout)
def handle_event({:update_characters, map_id} = event) do
Process.send_after(self(), event, @update_characters_timeout)
CharactersImpl.update_characters(map_id)
end
def handle_event({:update_tracked_characters, map_id} = event) do
Process.send_after(
self(),
event,
@update_tracked_characters_timeout
)
CharactersImpl.update_tracked_characters(map_id)
state
end
def handle_event(:update_presence, %{map_id: map_id} = state) do
Process.send_after(self(), :update_presence, @update_presence_timeout)
def handle_event({:update_presence, map_id} = event) do
Process.send_after(self(), event, @update_presence_timeout)
update_presence(map_id)
state
end
def handle_event(:backup_state, state) do
Process.send_after(self(), :backup_state, @backup_state_timeout)
{:ok, _map_state} = state |> save_map_state()
state
def handle_event({:map_acl_updated, map_id, added_acls, removed_acls}) do
AclsImpl.handle_map_acl_updated(map_id, added_acls, removed_acls)
end
def handle_event(
{:map_acl_updated, added_acls, removed_acls},
state
def handle_event({:acl_updated, %{acl_id: acl_id}}) do
# Find all maps that use this ACL
case Ash.read(
WandererApp.Api.MapAccessList
|> Ash.Query.for_read(:read_by_acl, %{acl_id: acl_id})
) do
state |> AclsImpl.handle_map_acl_updated(added_acls, removed_acls)
end
{:ok, map_acls} ->
Logger.debug(fn ->
"Found #{length(map_acls)} maps using ACL #{acl_id}: #{inspect(Enum.map(map_acls, & &1.map_id))}"
end)
def handle_event({:acl_updated, %{acl_id: acl_id}}, %{map_id: map_id} = state) do
# Broadcast to each map
Enum.each(map_acls, fn %{map_id: map_id} ->
Logger.debug(fn -> "Broadcasting acl_updated to map #{map_id}" end)
AclsImpl.handle_acl_updated(map_id, acl_id)
end)
state
Logger.debug(fn ->
"Successfully broadcast acl_updated event to #{length(map_acls)} maps"
end)
{:error, error} ->
Logger.error("Failed to find maps for ACL #{acl_id}: #{inspect(error)}")
:ok
end
end
def handle_event({:acl_deleted, %{acl_id: acl_id}}, %{map_id: map_id} = state) do
def handle_event({:acl_deleted, %{acl_id: acl_id}}) do
case Ash.read(
WandererApp.Api.MapAccessList
|> Ash.Query.for_read(:read_by_acl, %{acl_id: acl_id})
) do
{:ok, map_acls} ->
Logger.debug(fn ->
"Found #{length(map_acls)} maps using ACL #{acl_id}: #{inspect(Enum.map(map_acls, & &1.map_id))}"
end)
# Broadcast to each map
Enum.each(map_acls, fn %{map_id: map_id} ->
Logger.debug(fn -> "Broadcasting acl_deleted to map #{map_id}" end)
AclsImpl.handle_acl_deleted(map_id, acl_id)
end)
state
Logger.debug(fn ->
"Successfully broadcast acl_deleted event to #{length(map_acls)} maps"
end)
{:error, error} ->
Logger.error("Failed to find maps for ACL #{acl_id}: #{inspect(error)}")
:ok
end
end
def handle_event(:cleanup_connections, state) do
Process.send_after(self(), :cleanup_connections, @connections_cleanup_timeout)
state |> ConnectionsImpl.cleanup_connections()
end
def handle_event(:cleanup_characters, %{map_id: map_id, map: %{owner_id: owner_id}} = state) do
Process.send_after(self(), :cleanup_characters, @characters_cleanup_timeout)
CharactersImpl.cleanup_characters(map_id, owner_id)
state
end
def handle_event(:cleanup_systems, state) do
Process.send_after(self(), :cleanup_systems, @systems_cleanup_timeout)
state |> SystemsImpl.cleanup_systems()
end
def handle_event(:subscription_settings_updated, %{map: map, map_id: map_id} = state) do
def handle_event({:subscription_settings_updated, map_id}) do
{:ok, subscription_settings} =
WandererApp.Map.SubscriptionManager.get_active_map_subscription(map_id)
%{
state
| map:
map
|> WandererApp.Map.update_subscription_settings!(subscription_settings)
}
update_subscription_settings(map_id, subscription_settings)
end
def handle_event({:options_updated, options}, %{map: map} = state) do
map |> WandererApp.Map.update_options!(options)
%{state | map_opts: map_options(options)}
def handle_event({:options_updated, map_id, options}) do
update_options(map_id, options)
end
def handle_event({ref, _result}, %{map_id: _map_id} = state) when is_reference(ref) do
def handle_event({ref, _result}) when is_reference(ref) do
Process.demonitor(ref, [:flush])
state
end
def handle_event(msg, state) do
def handle_event(msg) do
Logger.warning("Unhandled event: #{inspect(msg)}")
end
state
def update_subscription_settings(map_id, subscription_settings) do
{:ok, %{map: map}} = WandererApp.Map.get_map_state(map_id)
WandererApp.Map.update_map_state(map_id, %{
map: map |> WandererApp.Map.update_subscription_settings!(subscription_settings)
})
end
def update_options(map_id, options) do
{:ok, %{map: map}} = WandererApp.Map.get_map_state(map_id)
WandererApp.Map.update_map_state(map_id, %{
map: map |> WandererApp.Map.update_options!(options),
map_opts: map_options(options)
})
end
def broadcast!(map_id, event, payload \\ nil) do
@@ -387,64 +426,7 @@ defmodule WandererApp.Map.Server.Impl do
]
end
defp save_map_state(%{map_id: map_id} = _state) do
systems_last_activity =
map_id
|> WandererApp.Map.list_systems!()
|> Enum.reduce(%{}, fn %{id: system_id} = _system, acc ->
case WandererApp.Cache.get("map_#{map_id}:system_#{system_id}:last_activity") do
nil ->
acc
value ->
acc |> Map.put_new(system_id, value)
end
end)
connections =
map_id
|> WandererApp.Map.list_connections!()
connections_eol_time =
connections
|> Enum.reduce(%{}, fn %{id: connection_id} = _connection, acc ->
case WandererApp.Cache.get("map_#{map_id}:conn_#{connection_id}:mark_eol_time") do
nil ->
acc
value ->
acc |> Map.put_new(connection_id, value)
end
end)
connections_start_time =
connections
|> Enum.reduce(%{}, fn %{id: connection_id} = _connection, acc ->
connection_start_time = ConnectionsImpl.get_start_time(map_id, connection_id)
acc |> Map.put_new(connection_id, connection_start_time)
end)
WandererApp.Api.MapState.create(%{
map_id: map_id,
systems_last_activity: systems_last_activity,
connections_eol_time: connections_eol_time,
connections_start_time: connections_start_time
})
end
defp maybe_stop_rtree(%{rtree_name: rtree_name} = state) do
case Process.whereis(rtree_name) do
nil ->
:ok
pid when is_pid(pid) ->
GenServer.stop(pid, :normal)
end
state
end
defp init_map_cache(%__MODULE__{map_id: map_id} = state) do
defp init_map_cache(map_id) do
case WandererApp.Api.MapState.by_map_id(map_id) do
{:ok,
%{
@@ -456,10 +438,8 @@ defmodule WandererApp.Map.Server.Impl do
ConnectionsImpl.init_eol_cache(map_id, connections_eol_time)
ConnectionsImpl.init_start_cache(map_id, connections_start_time)
state
_ ->
state
:ok
end
end
@@ -481,20 +461,28 @@ defmodule WandererApp.Map.Server.Impl do
|> WandererApp.Map.add_connections!(connections)
|> WandererApp.Map.add_characters!(characters)
SystemsImpl.init_map_systems(map_id, systems)
character_ids =
map_id
|> WandererApp.Map.get_map!()
|> Map.get(:characters, [])
init_map_cache(map_id)
WandererApp.Cache.insert("map_#{map_id}:invalidate_character_ids", character_ids)
%{state | map: map, map_opts: map_options(options)}
end
def maybe_import_systems(state, %{"systems" => systems} = _settings, user_id, character_id) do
state =
def maybe_import_systems(
map_id,
%{"systems" => systems} = _settings,
user_id,
character_id
) do
systems
|> Enum.reduce(state, fn %{
|> Enum.each(fn %{
"description" => description,
"id" => id,
"labels" => labels,
@@ -504,30 +492,38 @@ defmodule WandererApp.Map.Server.Impl do
"status" => status,
"tag" => tag,
"temporary_name" => temporary_name
} = _system,
acc ->
acc
|> add_system(
} ->
solar_system_id = id |> String.to_integer()
add_system(
map_id,
%{
solar_system_id: id |> String.to_integer(),
solar_system_id: solar_system_id,
coordinates: %{"x" => round(x), "y" => round(y)}
},
user_id,
character_id
)
|> update_system_name(%{solar_system_id: id |> String.to_integer(), name: name})
|> update_system_description(%{
solar_system_id: id |> String.to_integer(),
update_system_name(map_id, %{solar_system_id: solar_system_id, name: name})
update_system_description(map_id, %{
solar_system_id: solar_system_id,
description: description
})
|> update_system_status(%{solar_system_id: id |> String.to_integer(), status: status})
|> update_system_tag(%{solar_system_id: id |> String.to_integer(), tag: tag})
|> update_system_temporary_name(%{
solar_system_id: id |> String.to_integer(),
update_system_status(map_id, %{solar_system_id: solar_system_id, status: status})
update_system_tag(map_id, %{solar_system_id: solar_system_id, tag: tag})
update_system_temporary_name(map_id, %{
solar_system_id: solar_system_id,
temporary_name: temporary_name
})
|> update_system_locked(%{solar_system_id: id |> String.to_integer(), locked: locked})
|> update_system_labels(%{solar_system_id: id |> String.to_integer(), labels: labels})
update_system_locked(map_id, %{solar_system_id: solar_system_id, locked: locked})
update_system_labels(map_id, %{solar_system_id: solar_system_id, labels: labels})
end)
removed_system_ids =
@@ -536,39 +532,39 @@ defmodule WandererApp.Map.Server.Impl do
|> Enum.map(fn system -> system["id"] end)
|> Enum.map(&String.to_integer/1)
state
|> delete_systems(removed_system_ids, user_id, character_id)
delete_systems(map_id, removed_system_ids, user_id, character_id)
end
def maybe_import_connections(state, %{"connections" => connections} = _settings, _user_id) do
def maybe_import_connections(map_id, %{"connections" => connections} = _settings, _user_id) do
connections
|> Enum.reduce(state, fn %{
|> Enum.each(fn %{
"source" => source,
"target" => target,
"mass_status" => mass_status,
"time_status" => time_status,
"ship_size_type" => ship_size_type
} = _system,
acc ->
} ->
source_id = source |> String.to_integer()
target_id = target |> String.to_integer()
acc
|> add_connection(%{
add_connection(map_id, %{
solar_system_source_id: source_id,
solar_system_target_id: target_id
})
|> update_connection_time_status(%{
update_connection_time_status(map_id, %{
solar_system_source_id: source_id,
solar_system_target_id: target_id,
time_status: time_status
})
|> update_connection_mass_status(%{
update_connection_mass_status(map_id, %{
solar_system_source_id: source_id,
solar_system_target_id: target_id,
mass_status: mass_status
})
|> update_connection_ship_size_type(%{
update_connection_ship_size_type(map_id, %{
solar_system_source_id: source_id,
solar_system_target_id: target_id,
ship_size_type: ship_size_type
@@ -576,13 +572,12 @@ defmodule WandererApp.Map.Server.Impl do
end)
end
def maybe_import_hubs(state, %{"hubs" => hubs} = _settings, _user_id) do
def maybe_import_hubs(map_id, %{"hubs" => hubs} = _settings, _user_id) do
hubs
|> Enum.reduce(state, fn hub, acc ->
|> Enum.each(fn hub ->
solar_system_id = hub |> String.to_integer()
acc
|> add_hub(%{solar_system_id: solar_system_id})
add_hub(map_id, %{solar_system_id: solar_system_id})
end)
end
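
The handle_event/1 clauses above now receive the map id inside the message itself, so the producer side has to include it. A hedged sketch for the options_updated event; only the 3-tuple shape comes from this diff, the broadcast call mirrors the ones used elsewhere in it:

# Assumed producer; @pubsub_client and the "maps:#{map_id}" topic follow the existing pattern.
@pubsub_client.broadcast(
  WandererApp.PubSub,
  "maps:#{map_id}",
  {:options_updated, map_id, options}
)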

View File

@@ -8,14 +8,14 @@ defmodule WandererApp.Map.Server.PingsImpl do
@ping_auto_expire_timeout :timer.minutes(15)
def add_ping(
%{map_id: map_id} = state,
map_id,
%{
solar_system_id: solar_system_id,
type: type,
message: message,
character_id: character_id,
user_id: user_id
} = ping_info
} = _ping_info
) do
with {:ok, character} <- WandererApp.Character.get_character(character_id),
system <-
@@ -57,23 +57,20 @@ defmodule WandererApp.Map.Server.PingsImpl do
map_id: map_id,
solar_system_id: "#{solar_system_id}"
})
state
else
error ->
Logger.error("Failed to add_ping: #{inspect(error, pretty: true)}")
state
end
end
def cancel_ping(
%{map_id: map_id} = state,
map_id,
%{
id: ping_id,
character_id: character_id,
user_id: user_id,
type: type
} = ping_info
} = _ping_info
) do
with {:ok, character} <- WandererApp.Character.get_character(character_id),
{:ok,
@@ -105,12 +102,9 @@ defmodule WandererApp.Map.Server.PingsImpl do
map_id: map_id,
solar_system_id: solar_system_id
})
state
else
error ->
Logger.error("Failed to cancel_ping: #{inspect(error, pretty: true)}")
state
end
end
end

View File

@@ -13,7 +13,7 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
Public entrypoint for updating signatures on a map system.
"""
def update_signatures(
%{map_id: map_id} = state,
map_id,
%{
solar_system_id: system_solar_id,
character_id: char_id,
@@ -31,7 +31,7 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
solar_system_id: system_solar_id
}) do
do_update_signatures(
state,
map_id,
system,
char_id,
user_id,
@@ -43,14 +43,13 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
else
error ->
Logger.warning("Skipping signature update: #{inspect(error)}")
state
end
end
def update_signatures(state, _), do: state
def update_signatures(_map_id, _), do: :ok
defp do_update_signatures(
state,
map_id,
system,
character_id,
user_id,
@@ -86,14 +85,14 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
# 1. Removals
existing_current
|> Enum.filter(&(&1.eve_id in removed_ids))
|> Enum.each(&remove_signature(&1, state, system, delete_conn?))
|> Enum.each(&remove_signature(map_id, &1, system, delete_conn?))
# 2. Updates
existing_current
|> Enum.filter(&(&1.eve_id in updated_ids))
|> Enum.each(fn existing ->
update = Enum.find(updated_sigs, &(&1.eve_id == existing.eve_id))
apply_update_signature(state, existing, update)
apply_update_signature(map_id, existing, update)
end)
# 3. Additions & restorations
@@ -119,7 +118,7 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
if added_ids != [] do
track_activity(
:signatures_added,
state.map_id,
map_id,
system.solar_system_id,
user_id,
character_id,
@@ -130,7 +129,7 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
if removed_ids != [] do
track_activity(
:signatures_removed,
state.map_id,
map_id,
system.solar_system_id,
user_id,
character_id,
@@ -139,12 +138,12 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
end
# 5. Broadcast to any live subscribers
Impl.broadcast!(state.map_id, :signatures_updated, system.solar_system_id)
Impl.broadcast!(map_id, :signatures_updated, system.solar_system_id)
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
# Send individual signature events
Enum.each(added_sigs, fn sig ->
WandererApp.ExternalEvents.broadcast(state.map_id, :signature_added, %{
WandererApp.ExternalEvents.broadcast(map_id, :signature_added, %{
solar_system_id: system.solar_system_id,
signature_id: sig.eve_id,
name: sig.name,
@@ -155,27 +154,25 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
end)
Enum.each(removed_ids, fn sig_eve_id ->
WandererApp.ExternalEvents.broadcast(state.map_id, :signature_removed, %{
WandererApp.ExternalEvents.broadcast(map_id, :signature_removed, %{
solar_system_id: system.solar_system_id,
signature_id: sig_eve_id
})
end)
# Also send the summary event for backwards compatibility
WandererApp.ExternalEvents.broadcast(state.map_id, :signatures_updated, %{
WandererApp.ExternalEvents.broadcast(map_id, :signatures_updated, %{
solar_system_id: system.solar_system_id,
added_count: length(added_ids),
updated_count: length(updated_ids),
removed_count: length(removed_ids)
})
state
end
defp remove_signature(sig, state, system, delete_conn?) do
defp remove_signature(map_id, sig, system, delete_conn?) do
# optionally remove the linked connection
if delete_conn? && sig.linked_system_id do
ConnectionsImpl.delete_connection(state, %{
ConnectionsImpl.delete_connection(map_id, %{
solar_system_source_id: system.solar_system_id,
solar_system_target_id: sig.linked_system_id
})
@@ -183,7 +180,7 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
# clear any linked_sig_eve_id on the target system
if sig.linked_system_id do
SystemsImpl.update_system_linked_sig_eve_id(state, %{
SystemsImpl.update_system_linked_sig_eve_id(map_id, %{
solar_system_id: sig.linked_system_id,
linked_sig_eve_id: nil
})
@@ -194,7 +191,7 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
end
def apply_update_signature(
state,
map_id,
%MapSystemSignature{} = existing,
update_params
)
@@ -204,8 +201,8 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
update_params |> Map.put(:update_forced_at, DateTime.utc_now())
) do
{:ok, updated} ->
maybe_update_connection_time_status(state, existing, updated)
maybe_update_connection_mass_status(state, existing, updated)
maybe_update_connection_time_status(map_id, existing, updated)
maybe_update_connection_mass_status(map_id, existing, updated)
:ok
{:error, reason} ->
@@ -214,7 +211,7 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
end
defp maybe_update_connection_time_status(
state,
map_id,
%{custom_info: old_custom_info} = old_sig,
%{custom_info: new_custom_info, system_id: system_id, linked_system_id: linked_system_id} =
updated_sig
@@ -226,7 +223,7 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
if old_time_status != new_time_status do
{:ok, source_system} = MapSystem.by_id(system_id)
ConnectionsImpl.update_connection_time_status(state, %{
ConnectionsImpl.update_connection_time_status(map_id, %{
solar_system_source_id: source_system.solar_system_id,
solar_system_target_id: linked_system_id,
time_status: new_time_status
@@ -234,10 +231,10 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
end
end
defp maybe_update_connection_time_status(_state, _old_sig, _updated_sig), do: :ok
defp maybe_update_connection_time_status(_map_id, _old_sig, _updated_sig), do: :ok
defp maybe_update_connection_mass_status(
state,
map_id,
%{type: old_type} = old_sig,
%{type: new_type, system_id: system_id, linked_system_id: linked_system_id} =
updated_sig
@@ -248,7 +245,7 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
signature_ship_size_type = EVEUtil.get_wh_size(new_type)
if not is_nil(signature_ship_size_type) do
ConnectionsImpl.update_connection_ship_size_type(state, %{
ConnectionsImpl.update_connection_ship_size_type(map_id, %{
solar_system_source_id: source_system.solar_system_id,
solar_system_target_id: linked_system_id,
ship_size_type: signature_ship_size_type
@@ -257,7 +254,7 @@ defmodule WandererApp.Map.Server.SignaturesImpl do
end
end
defp maybe_update_connection_mass_status(_state, _old_sig, _updated_sig), do: :ok
defp maybe_update_connection_mass_status(_map_id, _old_sig, _updated_sig), do: :ok
defp track_activity(event, map_id, solar_system_id, user_id, character_id, signatures) do
ActivityTracker.track_map_event(event, %{

View File

@@ -1,22 +0,0 @@
defmodule WandererApp.Map.ServerSupervisor do
@moduledoc false
use Supervisor, restart: :transient
alias WandererApp.Map.Server
def start_link(args), do: Supervisor.start_link(__MODULE__, args)
@impl true
def init(args) do
children = [
{Server, args},
{DDRT.DynamicRtree,
[
conf: [name: "rtree_#{args[:map_id]}", width: 150, verbose: false, seed: 0],
name: Module.concat([args[:map_id], DDRT.DynamicRtree])
]}
]
Supervisor.init(children, strategy: :one_for_one, auto_shutdown: :any_significant)
end
end
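
With this supervisor file deleted, the per-map rtree is no longer a DDRT.DynamicRtree child process. Reading the Impl hunks earlier in this diff, start_map/1 now initialises it by name instead:

# Copied from the start_map/1 hunk above; "rtree_#{map_id}" is the naming convention the refactor settles on.
WandererApp.Map.CacheRTree.init_tree("rtree_#{map_id}", %{width: 150, verbose: false})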

View File

@@ -20,14 +20,14 @@ defmodule WandererApp.Map.Server.SystemsImpl do
end)
end
def init_map_systems(state, [] = _systems), do: state
def init_map_systems(_map_id, [] = _systems), do: :ok
def init_map_systems(%{map_id: map_id, rtree_name: rtree_name} = state, systems) do
def init_map_systems(map_id, systems) do
systems
|> Enum.each(fn %{id: system_id, solar_system_id: solar_system_id} = system ->
@ddrt.insert(
{solar_system_id, WandererApp.Map.PositionCalculator.get_system_bounding_rect(system)},
rtree_name
"rtree_#{map_id}"
)
WandererApp.Cache.put(
@@ -36,12 +36,10 @@ defmodule WandererApp.Map.Server.SystemsImpl do
ttl: @system_inactive_timeout
)
end)
state
end
def add_system(
%{map_id: map_id} = state,
map_id,
%{
solar_system_id: solar_system_id
} = system_info,
@@ -49,17 +47,19 @@ defmodule WandererApp.Map.Server.SystemsImpl do
character_id,
opts
) do
case map_id |> WandererApp.Map.check_location(%{solar_system_id: solar_system_id}) do
map_id
|> WandererApp.Map.check_location(%{solar_system_id: solar_system_id})
|> case do
{:ok, _location} ->
state |> do_add_system(system_info, user_id, character_id)
do_add_system(map_id, system_info, user_id, character_id)
{:error, :already_exists} ->
state
:ok
end
end
def paste_systems(
%{map_id: map_id} = state,
map_id,
systems,
user_id,
character_id,
@@ -75,8 +75,8 @@ defmodule WandererApp.Map.Server.SystemsImpl do
case map_id |> WandererApp.Map.check_location(%{solar_system_id: solar_system_id}) do
{:ok, _location} ->
if opts |> Keyword.get(:add_not_existing, true) do
state
|> do_add_system(
do_add_system(
map_id,
%{solar_system_id: solar_system_id, coordinates: coordinates, extra_info: system},
user_id,
character_id
@@ -93,12 +93,10 @@ defmodule WandererApp.Map.Server.SystemsImpl do
end
end
end)
state
end
def add_system_comment(
%{map_id: map_id} = state,
map_id,
%{
solar_system_id: solar_system_id,
text: text
@@ -126,12 +124,10 @@ defmodule WandererApp.Map.Server.SystemsImpl do
solar_system_id: solar_system_id,
comment: comment
})
state
end
def remove_system_comment(
%{map_id: map_id} = state,
map_id,
comment_id,
user_id,
character_id
@@ -145,11 +141,9 @@ defmodule WandererApp.Map.Server.SystemsImpl do
solar_system_id: system.solar_system_id,
comment_id: comment_id
})
state
end
def cleanup_systems(%{map_id: map_id} = state) do
def cleanup_systems(map_id) do
expired_systems =
map_id
|> WandererApp.Map.list_systems!()
@@ -184,71 +178,66 @@ defmodule WandererApp.Map.Server.SystemsImpl do
end)
|> Enum.map(& &1.solar_system_id)
case expired_systems |> Enum.empty?() do
false ->
state |> delete_systems(expired_systems, nil, nil)
_ ->
state
if expired_systems |> Enum.empty?() |> Kernel.not() do
delete_systems(map_id, expired_systems, nil, nil)
end
end
def update_system_name(
state,
map_id,
update
),
do: state |> update_system(:update_name, [:name], update)
do: update_system(map_id, :update_name, [:name], update)
def update_system_description(
state,
map_id,
update
),
do: state |> update_system(:update_description, [:description], update)
do: update_system(map_id, :update_description, [:description], update)
def update_system_status(
state,
map_id,
update
),
do: state |> update_system(:update_status, [:status], update)
do: update_system(map_id, :update_status, [:status], update)
def update_system_tag(
state,
map_id,
update
),
do: state |> update_system(:update_tag, [:tag], update)
do: update_system(map_id, :update_tag, [:tag], update)
def update_system_temporary_name(
state,
map_id,
update
) do
state |> update_system(:update_temporary_name, [:temporary_name], update)
end
),
do: update_system(map_id, :update_temporary_name, [:temporary_name], update)
def update_system_locked(
state,
map_id,
update
),
do: state |> update_system(:update_locked, [:locked], update)
do: update_system(map_id, :update_locked, [:locked], update)
def update_system_labels(
state,
map_id,
update
),
do: state |> update_system(:update_labels, [:labels], update)
do: update_system(map_id, :update_labels, [:labels], update)
def update_system_linked_sig_eve_id(
state,
map_id,
update
),
do: state |> update_system(:update_linked_sig_eve_id, [:linked_sig_eve_id], update)
do: update_system(map_id, :update_linked_sig_eve_id, [:linked_sig_eve_id], update)
def update_system_position(
%{rtree_name: rtree_name} = state,
map_id,
update
),
do:
state
|> update_system(
update_system(
map_id,
:update_position,
[:position_x, :position_y],
update,
@@ -256,13 +245,13 @@ defmodule WandererApp.Map.Server.SystemsImpl do
@ddrt.update(
updated_system.solar_system_id,
WandererApp.Map.PositionCalculator.get_system_bounding_rect(updated_system),
rtree_name
"rtree_#{map_id}"
)
end
)
def add_hub(
%{map_id: map_id} = state,
map_id,
hub_info
) do
with :ok <- WandererApp.Map.add_hub(map_id, hub_info),
@@ -270,16 +259,15 @@ defmodule WandererApp.Map.Server.SystemsImpl do
{:ok, _} <-
WandererApp.MapRepo.update_hubs(map_id, hubs) do
Impl.broadcast!(map_id, :update_map, %{hubs: hubs})
state
else
error ->
Logger.error("Failed to add hub: #{inspect(error, pretty: true)}")
state
:ok
end
end
def remove_hub(
%{map_id: map_id} = state,
map_id,
hub_info
) do
with :ok <- WandererApp.Map.remove_hub(map_id, hub_info),
@@ -287,16 +275,15 @@ defmodule WandererApp.Map.Server.SystemsImpl do
{:ok, _} <-
WandererApp.MapRepo.update_hubs(map_id, hubs) do
Impl.broadcast!(map_id, :update_map, %{hubs: hubs})
state
else
error ->
Logger.error("Failed to remove hub: #{inspect(error, pretty: true)}")
state
:ok
end
end
def delete_systems(
%{map_id: map_id, rtree_name: rtree_name} = state,
map_id,
removed_ids,
user_id,
character_id
@@ -316,7 +303,7 @@ defmodule WandererApp.Map.Server.SystemsImpl do
|> case do
{:ok, result} ->
:ok = WandererApp.Map.remove_system(map_id, solar_system_id)
@ddrt.delete([solar_system_id], rtree_name)
@ddrt.delete([solar_system_id], "rtree_#{map_id}")
Impl.broadcast!(map_id, :systems_removed, [solar_system_id])
# ADDITIVE: Also broadcast to external event system (webhooks/WebSocket)
@@ -344,7 +331,7 @@ defmodule WandererApp.Map.Server.SystemsImpl do
end
try do
cleanup_linked_system_sig_eve_ids(state, [system_id])
cleanup_linked_system_sig_eve_ids(map_id, [system_id])
rescue
e ->
Logger.error("Failed to cleanup system linked sig eve ids: #{inspect(e)}")
@@ -357,8 +344,6 @@ defmodule WandererApp.Map.Server.SystemsImpl do
:ok
end
end)
state
end
defp track_systems_removed(map_id, user_id, character_id, removed_solar_system_ids)
@@ -424,7 +409,7 @@ defmodule WandererApp.Map.Server.SystemsImpl do
end)
end
defp cleanup_linked_system_sig_eve_ids(state, system_ids_to_remove) do
defp cleanup_linked_system_sig_eve_ids(map_id, system_ids_to_remove) do
linked_system_ids =
system_ids_to_remove
|> Enum.map(fn system_id ->
@@ -437,17 +422,19 @@ defmodule WandererApp.Map.Server.SystemsImpl do
linked_system_ids
|> Enum.each(fn linked_system_id ->
update_system_linked_sig_eve_id(state, %{
update_system(map_id, :update_linked_sig_eve_id, [:linked_sig_eve_id], %{
solar_system_id: linked_system_id,
linked_sig_eve_id: nil
})
end)
end
def maybe_add_system(map_id, location, old_location, rtree_name, map_opts)
def maybe_add_system(map_id, location, old_location, map_opts)
when not is_nil(location) do
case WandererApp.Map.check_location(map_id, location) do
{:ok, location} ->
rtree_name = "rtree_#{map_id}"
{:ok, position} = calc_new_system_position(map_id, old_location, rtree_name, map_opts)
case WandererApp.MapSystemRepo.get_by_map_and_solar_system_id(
@@ -546,10 +533,10 @@ defmodule WandererApp.Map.Server.SystemsImpl do
end
end
def maybe_add_system(_map_id, _location, _old_location, _rtree_name, _map_opts), do: :ok
def maybe_add_system(_map_id, _location, _old_location, _map_opts), do: :ok
defp do_add_system(
%{map_id: map_id, map_opts: map_opts, rtree_name: rtree_name} = state,
map_id,
%{
solar_system_id: solar_system_id,
coordinates: coordinates
@@ -558,6 +545,8 @@ defmodule WandererApp.Map.Server.SystemsImpl do
character_id
) do
extra_info = system_info |> Map.get(:extra_info)
rtree_name = "rtree_#{map_id}"
{:ok, %{map_opts: map_opts}} = WandererApp.Map.get_map_state(map_id)
%{"x" => x, "y" => y} =
coordinates
@@ -631,7 +620,7 @@ defmodule WandererApp.Map.Server.SystemsImpl do
})
end
:ok = map_id |> WandererApp.Map.add_system(system)
:ok = WandererApp.Map.add_system(map_id, system)
WandererApp.Cache.put(
"map_#{map_id}:system_#{system.id}:last_activity",
@@ -660,8 +649,6 @@ defmodule WandererApp.Map.Server.SystemsImpl do
map_id: map_id,
solar_system_id: solar_system_id
})
state
end
defp maybe_update_extra_info(system, nil), do: system
@@ -793,7 +780,7 @@ defmodule WandererApp.Map.Server.SystemsImpl do
|> WandererApp.Map.PositionCalculator.get_new_system_position(rtree_name, opts)}
defp update_system(
%{map_id: map_id} = state,
map_id,
update_method,
attributes,
update,
@@ -817,12 +804,10 @@ defmodule WandererApp.Map.Server.SystemsImpl do
end
update_map_system_last_activity(map_id, updated_system)
state
else
error ->
Logger.error("Failed to update system: #{inspect(error, pretty: true)}")
state
:ok
end
end
@@ -842,13 +827,9 @@ defmodule WandererApp.Map.Server.SystemsImpl do
WandererApp.ExternalEvents.broadcast(map_id, :system_metadata_changed, %{
solar_system_id: updated_system.solar_system_id,
name: updated_system.name,
# ADD
temporary_name: updated_system.temporary_name,
# ADD
labels: updated_system.labels,
# ADD
description: updated_system.description,
# ADD
status: updated_system.status
})
end

View File

@@ -4,8 +4,8 @@ defmodule WandererApp.Test.DDRT do
This allows mocking of DDRT calls in tests.
"""
@callback insert({integer(), any()}, String.t()) :: :ok | {:error, term()}
@callback update(integer(), any(), String.t()) :: :ok | {:error, term()}
@callback delete([integer()], String.t()) :: :ok | {:error, term()}
@callback search(any(), String.t()) :: [any()]
@callback insert({integer(), any()} | list({integer(), any()}), String.t()) :: {:ok, map()} | {:error, term()}
@callback update(integer(), any(), String.t()) :: {:ok, map()} | {:error, term()}
@callback delete(integer() | [integer()], String.t()) :: {:ok, map()} | {:error, term()}
@callback query(any(), String.t()) :: {:ok, [any()]} | {:error, term()}
end
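
The revised WandererApp.Test.DDRT behaviour above renames search/2 to query/2 and makes every callback return an {:ok, ...} tuple. A minimal sketch of stubbing the new shapes, assuming Mox is the mocking library and that a DDRTMock module is defined for this behaviour (both assumptions, not taken from this diff):

# Hypothetical mock wired to the behaviour above; names are illustrative.
Mox.defmock(DDRTMock, for: WandererApp.Test.DDRT)

Mox.stub(DDRTMock, :insert, fn _leaf, _tree_name -> {:ok, %{}} end)
Mox.stub(DDRTMock, :update, fn _id, _box, _tree_name -> {:ok, %{}} end)
Mox.stub(DDRTMock, :delete, fn _ids, _tree_name -> {:ok, %{}} end)
Mox.stub(DDRTMock, :query, fn _box, _tree_name -> {:ok, []} end)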

View File

@@ -610,7 +610,7 @@ defmodule WandererAppWeb.MapAccessListAPIController do
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"maps:#{loaded_map.id}",
{:map_acl_updated, [new_acl_id], []}
{:map_acl_updated, loaded_map.id, [new_acl_id], []}
)
end

View File

@@ -432,32 +432,42 @@ defmodule WandererAppWeb.MapSystemAPIController do
],
id: [
in: :path,
description: "System ID",
type: :string,
required: true
description: "Solar System ID (EVE Online system ID, e.g., 30000142 for Jita)",
type: :integer,
required: true,
example: 30_000_142
]
],
responses: ResponseSchemas.standard_responses(@detail_response_schema)
)
def show(%{assigns: %{map_id: map_id}} = conn, %{"id" => id}) do
with {:ok, system_uuid} <- APIUtils.validate_uuid(id),
{:ok, system} <- WandererApp.Api.MapSystem.by_id(system_uuid) do
# Verify the system belongs to the requested map
if system.map_id == map_id do
# Look up by solar_system_id (EVE Online integer ID)
case APIUtils.parse_int(id) do
{:ok, solar_system_id} ->
case Operations.get_system(map_id, solar_system_id) do
{:ok, system} ->
APIUtils.respond_data(conn, APIUtils.map_system_to_json(system))
else
{:error, :not_found} ->
{:error, :not_found}
end
else
{:error, %Ash.Error.Query.NotFound{}} -> {:error, :not_found}
{:error, _} -> {:error, :not_found}
error -> error
{:error, _} ->
{:error, :not_found}
end
end
operation(:create,
summary: "Upsert Systems and Connections (batch or single)",
summary: "Create or Update Systems and Connections",
description: """
Creates or updates systems and connections. Supports two formats:
1. **Single System Format**: Post a single system object directly (e.g., `{"solar_system_id": 30000142, "position_x": 100, ...}`)
2. **Batch Format**: Post multiple systems and connections (e.g., `{"systems": [...], "connections": [...]}`)
Systems are identified by solar_system_id and will be updated if they already exist on the map.
""",
parameters: [
map_identifier: [
in: :path,
@@ -472,8 +482,22 @@ defmodule WandererAppWeb.MapSystemAPIController do
)
def create(conn, params) do
systems = Map.get(params, "systems", [])
connections = Map.get(params, "connections", [])
# Support both batch format {"systems": [...], "connections": [...]}
# and single system format {"solar_system_id": ..., ...}
{systems, connections} =
cond do
Map.has_key?(params, "systems") ->
# Batch format
{Map.get(params, "systems", []), Map.get(params, "connections", [])}
Map.has_key?(params, "solar_system_id") or Map.has_key?(params, :solar_system_id) ->
# Single system format - wrap it in an array
{[params], []}
true ->
# Empty request
{[], []}
end
case Operations.upsert_systems_and_connections(conn, systems, connections) do
{:ok, result} ->
@@ -496,9 +520,10 @@ defmodule WandererAppWeb.MapSystemAPIController do
],
id: [
in: :path,
description: "System ID",
type: :string,
required: true
description: "Solar System ID (EVE Online system ID, e.g., 30000142 for Jita)",
type: :integer,
required: true,
example: 30_000_142
]
],
request_body: {"System update request", "application/json", @system_update_schema},
@@ -506,11 +531,15 @@ defmodule WandererAppWeb.MapSystemAPIController do
)
def update(conn, %{"id" => id} = params) do
with {:ok, system_uuid} <- APIUtils.validate_uuid(id),
{:ok, system} <- WandererApp.Api.MapSystem.by_id(system_uuid),
{:ok, attrs} <- APIUtils.extract_update_params(params),
{:ok, updated_system} <- Ash.update(system, attrs) do
APIUtils.respond_data(conn, APIUtils.map_system_to_json(updated_system))
with {:ok, solar_system_id} <- APIUtils.parse_int(id),
{:ok, attrs} <- APIUtils.extract_update_params(params) do
case Operations.update_system(conn, solar_system_id, attrs) do
{:ok, result} ->
APIUtils.respond_data(conn, result)
error ->
error
end
end
end
@@ -578,9 +607,10 @@ defmodule WandererAppWeb.MapSystemAPIController do
],
id: [
in: :path,
description: "System ID",
type: :string,
required: true
description: "Solar System ID (EVE Online system ID, e.g., 30000142 for Jita)",
type: :integer,
required: true,
example: 30_000_142
]
],
responses: ResponseSchemas.standard_responses(@delete_response_schema)
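
The controller changes above switch system lookups to the integer EVE solar_system_id and let the create action accept either a single system object or a batch of systems and connections. A hedged sketch of the corresponding requests using Req; the base URL, map slug, token, and route prefix are placeholders rather than values shown in this diff, and the connection keys mirror connection_to_json but are not confirmed here:

# Placeholders: adjust base URL, map identifier, and token for a real deployment.
base = "https://wanderer.example.com/api/maps/my-map"
token = "MAP_API_TOKEN"

# Single-system format: post one system object directly.
Req.post!(base <> "/systems",
  auth: {:bearer, token},
  json: %{solar_system_id: 30_000_142, position_x: 100, position_y: 50}
)

# Batch format: systems plus connections in one payload.
Req.post!(base <> "/systems",
  auth: {:bearer, token},
  json: %{
    "systems" => [%{"solar_system_id" => 30_000_142}, %{"solar_system_id" => 31_000_005}],
    "connections" => [
      %{"solar_system_source" => 30_000_142, "solar_system_target" => 31_000_005}
    ]
  }
)

# Show, update, and delete now take the integer solar_system_id in the path.
Req.get!(base <> "/systems/30000142", auth: {:bearer, token})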

View File

@@ -0,0 +1,100 @@
defmodule WandererAppWeb.Plugs.ConditionalAssignMapOwner do
@moduledoc """
Conditionally assigns map owner information to conn.assigns for V1 API routes.
This plug enables PubSub broadcasting for map operations by ensuring owner_character_id
and owner_user_id are available when map context exists.
Unlike the standard :api_map pipeline plugs (CheckMapApiKey, CheckMapSubscription),
this plug does NOT halt the request if map context is missing, making it safe to use
for both map-specific and user-level resources.
Map context detection (in order of priority):
1. conn.assigns[:map_id] - Set by CheckJsonApiAuth for Bearer token requests with map_identifier
2. filter[map_id] - JSON:API filter parameter for map-specific queries
3. Request body map_id - For create/update operations on map resources
If no map context is found, the plug simply continues without setting owner fields.
This allows user-level resources (AccessList, UserActivity, etc.) to work normally.
"""
import Plug.Conn
alias WandererApp.Map.Operations
def init(opts), do: opts
def call(conn, _opts) do
case get_map_id(conn) do
{:ok, map_id} ->
# Map context found - fetch and assign owner information
case Operations.get_owner_character_id(map_id) do
{:ok, %{id: char_id, user_id: user_id}} ->
conn
|> assign(:map_id, map_id)
|> assign(:owner_character_id, char_id)
|> assign(:owner_user_id, user_id)
_ ->
# Map exists but owner not found - set nil values
conn
|> assign(:map_id, map_id)
|> assign(:owner_character_id, nil)
|> assign(:owner_user_id, nil)
end
:no_map_context ->
# No map context - this is okay for user-level resources
# Don't halt, just continue without setting map fields
conn
end
end
# Try to extract map_id from various sources
defp get_map_id(conn) do
# 1. Check if already set by CheckJsonApiAuth (Bearer token with map_identifier)
case conn.assigns[:map_id] do
map_id when is_binary(map_id) and map_id != "" ->
{:ok, map_id}
_ ->
# 2. Check JSON:API filter parameters (e.g., filter[map_id]=uuid)
case get_filter_map_id(conn) do
{:ok, map_id} -> {:ok, map_id}
:not_found -> check_body_map_id(conn)
end
end
end
# Extract map_id from JSON:API filter parameters
defp get_filter_map_id(conn) do
# JSON:API filters come as filter[map_id]=value
case conn.params do
%{"filter" => %{"map_id" => map_id}} when is_binary(map_id) and map_id != "" ->
{:ok, map_id}
_ ->
:not_found
end
end
# Extract map_id from request body (for create/update operations)
defp check_body_map_id(conn) do
case conn.body_params do
%{"data" => %{"attributes" => %{"map_id" => map_id}}}
when is_binary(map_id) and map_id != "" ->
{:ok, map_id}
%{"data" => %{"relationships" => %{"map" => %{"data" => %{"id" => map_id}}}}}
when is_binary(map_id) and map_id != "" ->
{:ok, map_id}
# Also check flat params for non-JSON:API formatted requests
%{"map_id" => map_id} when is_binary(map_id) and map_id != "" ->
{:ok, map_id}
_ ->
:no_map_context
end
end
end
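
A small illustration of the three map-context sources the new plug recognizes, built with Plug.Test; the routes are placeholders, and the conns are only constructed here, not run through the plug (which also expects the auth and parser plugs upstream):

import Plug.Test

# 1. map_id already assigned upstream (Bearer token with map_identifier)
conn(:get, "/api/v1/map_systems")
|> Plug.Conn.assign(:map_id, "MAP_UUID")

# 2. JSON:API filter parameter
conn(:get, "/api/v1/map_systems", %{"filter" => %{"map_id" => "MAP_UUID"}})

# 3. map_id carried in a create/update body
conn(:post, "/api/v1/map_systems", %{
  "data" => %{"attributes" => %{"map_id" => "MAP_UUID"}}
})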

View File

@@ -353,7 +353,7 @@ defmodule WandererAppWeb.Helpers.APIUtils do
def connection_to_json(conn) do
Map.take(conn, ~w(
id map_id solar_system_source solar_system_target mass_status
time_status ship_size_type type wormhole_type inserted_at updated_at
time_status ship_size_type type wormhole_type locked inserted_at updated_at
)a)
end
end

View File

@@ -272,6 +272,9 @@
<.icon name="hero-check-badge-solid" class="w-5 h-5" />
</div>
</:col>
<:col :let={subscription} label="Map">
{subscription.map.name}
</:col>
<:col :let={subscription} label="Active Till">
<.local_time
:if={subscription.active_till}

View File

@@ -108,7 +108,7 @@ defmodule WandererAppWeb.Maps.MapSubscriptionsComponent do
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"maps:#{map_id}",
:subscription_settings_updated
{:subscription_settings_updated, map_id}
)
:telemetry.execute([:wanderer_app, :map, :subscription, :cancel], %{count: 1}, %{
@@ -213,7 +213,7 @@ defmodule WandererAppWeb.Maps.MapSubscriptionsComponent do
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"maps:#{map_id}",
:subscription_settings_updated
{:subscription_settings_updated, map_id}
)
:telemetry.execute([:wanderer_app, :map, :subscription, :new], %{count: 1}, %{
@@ -299,7 +299,7 @@ defmodule WandererAppWeb.Maps.MapSubscriptionsComponent do
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"maps:#{map_id}",
:subscription_settings_updated
{:subscription_settings_updated, map_id}
)
:telemetry.execute([:wanderer_app, :map, :subscription, :update], %{count: 1}, %{

View File

@@ -373,7 +373,7 @@ defmodule WandererAppWeb.MapsLive do
Phoenix.PubSub.broadcast(
WandererApp.PubSub,
"maps:#{map.id}",
{:map_acl_updated, added_acls, removed_acls}
{:map_acl_updated, map.id, added_acls, removed_acls}
)
{:ok, tracked_characters} =
@@ -460,7 +460,7 @@ defmodule WandererAppWeb.MapsLive do
@pubsub_client.broadcast(
WandererApp.PubSub,
"maps:#{map_id}",
{:options_updated, options}
{:options_updated, map_id, options}
)
{:noreply, socket |> assign(map: updated_map, options_form: options_form)}
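
The broadcasts in the three diffs above now carry the map_id in every payload: {:map_acl_updated, map_id, added_acls, removed_acls}, {:subscription_settings_updated, map_id}, and {:options_updated, map_id, options}. A hypothetical subscriber matching the new shapes; this module is illustrative only and is not part of the changeset:

defmodule MyApp.MapEventsListener do
  @moduledoc false
  # Illustrative subscriber for the revised PubSub payloads.
  use GenServer
  require Logger

  def start_link(map_id), do: GenServer.start_link(__MODULE__, map_id)

  @impl true
  def init(map_id) do
    Phoenix.PubSub.subscribe(WandererApp.PubSub, "maps:#{map_id}")
    {:ok, map_id}
  end

  @impl true
  def handle_info({:map_acl_updated, map_id, added_acls, removed_acls}, state) do
    Logger.debug("ACLs changed for #{map_id}: +#{length(added_acls)} -#{length(removed_acls)}")
    {:noreply, state}
  end

  def handle_info({:subscription_settings_updated, map_id}, state) do
    Logger.debug("subscription settings changed for #{map_id}")
    {:noreply, state}
  end

  def handle_info({:options_updated, map_id, _options}, state) do
    Logger.debug("options updated for #{map_id}")
    {:noreply, state}
  end

  def handle_info(_other, state), do: {:noreply, state}
end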

View File

@@ -234,6 +234,7 @@ defmodule WandererAppWeb.Router do
plug WandererAppWeb.Plugs.CheckApiDisabled
plug WandererAppWeb.Plugs.JsonApiPerformanceMonitor
plug WandererAppWeb.Plugs.CheckJsonApiAuth
plug WandererAppWeb.Plugs.ConditionalAssignMapOwner
# Future: Add rate limiting, advanced permissions, etc.
end

View File

@@ -3,7 +3,7 @@ defmodule WandererApp.MixProject do
@source_url "https://github.com/wanderer-industries/wanderer"
@version "1.84.1"
@version "1.84.8"
def project do
[
@@ -120,7 +120,6 @@ defmodule WandererApp.MixProject do
{:makeup_elixir, ">= 0.0.0"},
{:makeup_erlang, ">= 0.0.0"},
{:better_number, "~> 1.0.0"},
{:delta_crdt, "~> 0.6.5", override: true},
{:qex, "~> 0.5"},
{:site_encrypt, "~> 0.6.0"},
{:bandit, "~> 1.0"},
@@ -132,7 +131,6 @@ defmodule WandererApp.MixProject do
{:git_ops, "~> 2.6.1"},
{:version_tasks, "~> 0.12.0"},
{:error_tracker, "~> 0.2"},
{:ddrt, "~> 0.2.1"},
{:live_view_events, "~> 0.1.0"},
{:ash_pagify, "~> 1.4.1"},
{:timex, "~> 3.0"},

View File

@@ -14,7 +14,6 @@
"certifi": {:hex, :certifi, "2.14.0", "ed3bef654e69cde5e6c022df8070a579a79e8ba2368a00acf3d75b82d9aceeed", [:rebar3], [], "hexpm", "ea59d87ef89da429b8e905264fdec3419f84f2215bb3d81e07a18aac919026c3"},
"cloak": {:hex, :cloak, "1.1.4", "aba387b22ea4d80d92d38ab1890cc528b06e0e7ef2a4581d71c3fdad59e997e7", [:mix], [{:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}], "hexpm", "92b20527b9aba3d939fab0dd32ce592ff86361547cfdc87d74edce6f980eb3d7"},
"combine": {:hex, :combine, "0.10.0", "eff8224eeb56498a2af13011d142c5e7997a80c8f5b97c499f84c841032e429f", [:mix], [], "hexpm", "1b1dbc1790073076580d0d1d64e42eae2366583e7aecd455d1215b0d16f2451b"},
"comparable": {:hex, :comparable, "1.0.0", "bb669e91cedd14ae9937053e5bcbc3c52bb2f22422611f43b6e38367d94a495f", [:mix], [{:typable, "~> 0.1", [hex: :typable, repo: "hexpm", optional: false]}], "hexpm", "277c11eeb1cd726e7cd41c6c199e7e52fa16ee6830b45ad4cdc62e51f62eb60c"},
"conv_case": {:hex, :conv_case, "0.2.3", "c1455c27d3c1ffcdd5f17f1e91f40b8a0bc0a337805a6e8302f441af17118ed8", [:mix], [], "hexpm", "88f29a3d97d1742f9865f7e394ed3da011abb7c5e8cc104e676fdef6270d4b4a"},
"cowboy": {:hex, :cowboy, "2.13.0", "09d770dd5f6a22cc60c071f432cd7cb87776164527f205c5a6b0f24ff6b38990", [:make, :rebar3], [{:cowlib, ">= 2.14.0 and < 3.0.0", [hex: :cowlib, repo: "hexpm", optional: false]}, {:ranch, ">= 1.8.0 and < 3.0.0", [hex: :ranch, repo: "hexpm", optional: false]}], "hexpm", "e724d3a70995025d654c1992c7b11dbfea95205c047d86ff9bf1cda92ddc5614"},
"cowboy_telemetry": {:hex, :cowboy_telemetry, "0.4.0", "f239f68b588efa7707abce16a84d0d2acf3a0f50571f8bb7f56a15865aae820c", [:rebar3], [{:cowboy, "~> 2.7", [hex: :cowboy, repo: "hexpm", optional: false]}, {:telemetry, "~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "7d98bac1ee4565d31b62d59f8823dfd8356a169e7fcbb83831b8a5397404c9de"},
@@ -23,11 +22,9 @@
"crontab": {:hex, :crontab, "1.1.13", "3bad04f050b9f7f1c237809e42223999c150656a6b2afbbfef597d56df2144c5", [:mix], [{:ecto, "~> 1.0 or ~> 2.0 or ~> 3.0", [hex: :ecto, repo: "hexpm", optional: true]}], "hexpm", "d67441bec989640e3afb94e123f45a2bc42d76e02988c9613885dc3d01cf7085"},
"dart_sass": {:hex, :dart_sass, "0.5.1", "d45f20a8e324313689fb83287d4702352793ce8c9644bc254155d12656ade8b6", [:mix], [{:castore, ">= 0.0.0", [hex: :castore, repo: "hexpm", optional: false]}], "hexpm", "24f8a1c67e8b5267c51a33cbe6c0b5ebf12c2c83ace88b5ac04947d676b4ec81"},
"db_connection": {:hex, :db_connection, "2.7.0", "b99faa9291bb09892c7da373bb82cba59aefa9b36300f6145c5f201c7adf48ec", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "dcf08f31b2701f857dfc787fbad78223d61a32204f217f15e881dd93e4bdd3ff"},
"ddrt": {:hex, :ddrt, "0.2.1", "c4e4bddcef36add5de6599ec72ec822699932413ece0ad310e4be4ab2b3ab6d3", [:mix], [{:delta_crdt, "~> 0.5.0", [hex: :delta_crdt, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}, {:merkle_map, "~> 0.2.0", [hex: :merkle_map, repo: "hexpm", optional: false]}, {:uuid, "~> 1.1", [hex: :uuid, repo: "hexpm", optional: false]}], "hexpm", "1efcd60cf4ca4a4352e752d7f41ed9d696560e5860ee07d5bf31c16950100365"},
"debounce_and_throttle": {:hex, :debounce_and_throttle, "0.9.0", "fa86c982963e00365cc9808afa496e82ca2b48f8905c6c79f8edd304800d0892", [:mix], [], "hexpm", "573a7cff4032754023d8e6874f3eff5354864c90b39b692f1fc4a44b3eb7517b"},
"decimal": {:hex, :decimal, "2.3.0", "3ad6255aa77b4a3c4f818171b12d237500e63525c2fd056699967a3e7ea20f62", [:mix], [], "hexpm", "a4d66355cb29cb47c3cf30e71329e58361cfcb37c34235ef3bf1d7bf3773aeac"},
"decorator": {:hex, :decorator, "1.4.0", "a57ac32c823ea7e4e67f5af56412d12b33274661bb7640ec7fc882f8d23ac419", [:mix], [], "hexpm", "0a07cedd9083da875c7418dea95b78361197cf2bf3211d743f6f7ce39656597f"},
"delta_crdt": {:hex, :delta_crdt, "0.6.5", "c7bb8c2c7e60f59e46557ab4e0224f67ba22f04c02826e273738f3dcc4767adc", [:mix], [{:merkle_map, "~> 0.2.0", [hex: :merkle_map, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "c6ae23a525d30f96494186dd11bf19ed9ae21d9fe2c1f1b217d492a7cc7294ae"},
"dialyxir": {:hex, :dialyxir, "1.4.3", "edd0124f358f0b9e95bfe53a9fcf806d615d8f838e2202a9f430d59566b6b53b", [:mix], [{:erlex, ">= 0.2.6", [hex: :erlex, repo: "hexpm", optional: false]}], "hexpm", "bf2cfb75cd5c5006bec30141b131663299c661a864ec7fbbc72dfa557487a986"},
"dns_cluster": {:hex, :dns_cluster, "0.1.3", "0bc20a2c88ed6cc494f2964075c359f8c2d00e1bf25518a6a6c7fd277c9b0c66", [:mix], [], "hexpm", "46cb7c4a1b3e52c7ad4cbe33ca5079fbde4840dedeafca2baf77996c2da1bc33"},
"doctor": {:hex, :doctor, "0.21.0", "20ef89355c67778e206225fe74913e96141c4d001cb04efdeba1a2a9704f1ab5", [:mix], [{:decimal, "~> 2.0", [hex: :decimal, repo: "hexpm", optional: false]}], "hexpm", "a227831daa79784eb24cdeedfa403c46a4cb7d0eab0e31232ec654314447e4e0"},
@@ -44,7 +41,6 @@
"ex_check": {:hex, :ex_check, "0.14.0", "d6fbe0bcc51cf38fea276f5bc2af0c9ae0a2bb059f602f8de88709421dae4f0e", [:mix], [], "hexpm", "8a602e98c66e6a4be3a639321f1f545292042f290f91fa942a285888c6868af0"},
"ex_doc": {:hex, :ex_doc, "0.37.3", "f7816881a443cd77872b7d6118e8a55f547f49903aef8747dbcb345a75b462f9", [:mix], [{:earmark_parser, "~> 1.4.42", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c, ">= 0.1.0", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14 or ~> 1.0", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1 or ~> 1.0", [hex: :makeup_erlang, repo: "hexpm", optional: false]}, {:makeup_html, ">= 0.1.0", [hex: :makeup_html, repo: "hexpm", optional: true]}], "hexpm", "e6aebca7156e7c29b5da4daa17f6361205b2ae5f26e5c7d8ca0d3f7e18972233"},
"ex_rated": {:hex, :ex_rated, "2.1.0", "d40e6fe35097b10222df2db7bb5dd801d57211bac65f29063de5f201c2a6aebc", [:mix], [{:ex2ms, "~> 1.5", [hex: :ex2ms, repo: "hexpm", optional: false]}], "hexpm", "936c155337253ed6474f06d941999dd3a9cf0fe767ec99a59f2d2989dc2cc13f"},
"ex_ulid": {:hex, :ex_ulid, "0.1.0", "e6e717c57344f6e500d0190ccb4edc862b985a3680f15834af992ec065d4dcff", [:mix], [], "hexpm", "a2befd477aebc4639563de7e233e175cacf8a8f42c8f6778c88d60c13bf20860"},
"excoveralls": {:hex, :excoveralls, "0.18.5", "e229d0a65982613332ec30f07940038fe451a2e5b29bce2a5022165f0c9b157e", [:mix], [{:castore, "~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm", "523fe8a15603f86d64852aab2abe8ddbd78e68579c8525ae765facc5eae01562"},
"expo": {:hex, :expo, "0.5.2", "beba786aab8e3c5431813d7a44b828e7b922bfa431d6bfbada0904535342efe2", [:mix], [], "hexpm", "8c9bfa06ca017c9cb4020fabe980bc7fdb1aaec059fd004c2ab3bff03b1c599c"},
"exsync": {:hex, :exsync, "0.4.1", "0a14fe4bfcb80a509d8a0856be3dd070fffe619b9ba90fec13c58b316c176594", [:mix], [{:file_system, "~> 0.2 or ~> 1.0", [hex: :file_system, repo: "hexpm", optional: false]}], "hexpm", "cefb22aa805ec97ffc5b75a4e1dc54bcaf781e8b32564bf74abbe5803d1b5178"},
@@ -105,7 +101,6 @@
"phoenix_live_dashboard": {:hex, :phoenix_live_dashboard, "0.8.4", "4508e481f791ce62ec6a096e13b061387158cbeefacca68c6c1928e1305e23ed", [:mix], [{:ecto, "~> 3.6.2 or ~> 3.7", [hex: :ecto, repo: "hexpm", optional: true]}, {:ecto_mysql_extras, "~> 0.5", [hex: :ecto_mysql_extras, repo: "hexpm", optional: true]}, {:ecto_psql_extras, "~> 0.7", [hex: :ecto_psql_extras, repo: "hexpm", optional: true]}, {:ecto_sqlite3_extras, "~> 1.1.7 or ~> 1.2.0", [hex: :ecto_sqlite3_extras, repo: "hexpm", optional: true]}, {:mime, "~> 1.6 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:phoenix_live_view, "~> 0.19 or ~> 1.0", [hex: :phoenix_live_view, repo: "hexpm", optional: false]}, {:telemetry_metrics, "~> 0.6 or ~> 1.0", [hex: :telemetry_metrics, repo: "hexpm", optional: false]}], "hexpm", "2984aae96994fbc5c61795a73b8fb58153b41ff934019cfb522343d2d3817d59"},
"phoenix_live_reload": {:hex, :phoenix_live_reload, "1.5.3", "f2161c207fda0e4fb55165f650f7f8db23f02b29e3bff00ff7ef161d6ac1f09d", [:mix], [{:file_system, "~> 0.3 or ~> 1.0", [hex: :file_system, repo: "hexpm", optional: false]}, {:phoenix, "~> 1.4", [hex: :phoenix, repo: "hexpm", optional: false]}], "hexpm", "b4ec9cd73cb01ff1bd1cac92e045d13e7030330b74164297d1aee3907b54803c"},
"phoenix_live_view": {:hex, :phoenix_live_view, "1.0.5", "f072166f87c44ffaf2b47b65c5ced8c375797830e517bfcf0a006fe7eb113911", [:mix], [{:floki, "~> 0.36", [hex: :floki, repo: "hexpm", optional: true]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:phoenix, "~> 1.6.15 or ~> 1.7.0", [hex: :phoenix, repo: "hexpm", optional: false]}, {:phoenix_html, "~> 3.3 or ~> 4.0", [hex: :phoenix_html, repo: "hexpm", optional: false]}, {:phoenix_template, "~> 1.0", [hex: :phoenix_template, repo: "hexpm", optional: false]}, {:phoenix_view, "~> 2.0", [hex: :phoenix_view, repo: "hexpm", optional: true]}, {:plug, "~> 1.15", [hex: :plug, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.2 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "94abbc84df8a93a64514fc41528695d7326b6f3095e906b32f264ec4280811f3"},
"phoenix_multi_select": {:hex, :phoenix_multi_select, "0.1.2", "ffea2dfeebf518aaa9553871e786ea60d274a01774c033b80bad96d60beee86f", [:make, :mix], [{:phoenix, "~> 1.7", [hex: :phoenix, repo: "hexpm", optional: false]}, {:phoenix_html, "~> 4.1", [hex: :phoenix_html, repo: "hexpm", optional: false]}, {:phoenix_live_view, "~> 0.20", [hex: :phoenix_live_view, repo: "hexpm", optional: false]}], "hexpm", "f26b21565b499ef7a7e52b37efbf795d8f2315ab59e8d3badc865297344634db"},
"phoenix_pubsub": {:hex, :phoenix_pubsub, "2.1.3", "3168d78ba41835aecad272d5e8cd51aa87a7ac9eb836eabc42f6e57538e3731d", [:mix], [], "hexpm", "bba06bc1dcfd8cb086759f0edc94a8ba2bc8896d5331a1e2c2902bf8e36ee502"},
"phoenix_template": {:hex, :phoenix_template, "1.0.4", "e2092c132f3b5e5b2d49c96695342eb36d0ed514c5b252a77048d5969330d639", [:mix], [{:phoenix_html, "~> 2.14.2 or ~> 3.0 or ~> 4.0", [hex: :phoenix_html, repo: "hexpm", optional: true]}], "hexpm", "2c0c81f0e5c6753faf5cca2f229c9709919aba34fab866d3bc05060c9c444206"},
"plug": {:hex, :plug, "1.16.1", "40c74619c12f82736d2214557dedec2e9762029b2438d6d175c5074c933edc9d", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:plug_crypto, "~> 1.1.1 or ~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "a13ff6b9006b03d7e33874945b2755253841b238c34071ed85b0e86057f8cddc"},
@@ -119,7 +114,6 @@
"quantum": {:hex, :quantum, "3.5.3", "ee38838a07761663468145f489ad93e16a79440bebd7c0f90dc1ec9850776d99", [:mix], [{:crontab, "~> 1.1", [hex: :crontab, repo: "hexpm", optional: false]}, {:gen_stage, "~> 0.14 or ~> 1.0", [hex: :gen_stage, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}, {:telemetry_registry, "~> 0.2", [hex: :telemetry_registry, repo: "hexpm", optional: false]}], "hexpm", "500fd3fa77dcd723ed9f766d4a175b684919ff7b6b8cfd9d7d0564d58eba8734"},
"ranch": {:hex, :ranch, "2.2.0", "25528f82bc8d7c6152c57666ca99ec716510fe0925cb188172f41ce93117b1b0", [:make, :rebar3], [], "hexpm", "fa0b99a1780c80218a4197a59ea8d3bdae32fbff7e88527d7d8a4787eff4f8e7"},
"reactor": {:hex, :reactor, "0.10.0", "1206113c21ba69b889e072b2c189c05a7aced523b9c3cb8dbe2dab7062cb699a", [:mix], [{:igniter, "~> 0.2", [hex: :igniter, repo: "hexpm", optional: false]}, {:iterex, "~> 0.1", [hex: :iterex, repo: "hexpm", optional: false]}, {:libgraph, "~> 0.16", [hex: :libgraph, repo: "hexpm", optional: false]}, {:spark, "~> 2.0", [hex: :spark, repo: "hexpm", optional: false]}, {:splode, "~> 0.2", [hex: :splode, repo: "hexpm", optional: false]}, {:telemetry, "~> 1.2", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "4003c33e4c8b10b38897badea395e404d74d59a31beb30469a220f2b1ffe6457"},
"redoc_ui_plug": {:hex, :redoc_ui_plug, "0.2.1", "5e9760c17ed450fc9df671d5fbc70a6f06179c41d9d04ae3c33f16baca3a5b19", [:mix], [{:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}, {:plug, "~> 1.0", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm", "7be01db31f210887e9fc18f8fbccc7788de32c482b204623556e415ed1fe714b"},
"req": {:hex, :req, "0.4.14", "103de133a076a31044e5458e0f850d5681eef23dfabf3ea34af63212e3b902e2", [:mix], [{:aws_signature, "~> 0.3.2", [hex: :aws_signature, repo: "hexpm", optional: true]}, {:brotli, "~> 0.3.1", [hex: :brotli, repo: "hexpm", optional: true]}, {:ezstd, "~> 1.0", [hex: :ezstd, repo: "hexpm", optional: true]}, {:finch, "~> 0.17", [hex: :finch, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}, {:mime, "~> 1.6 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:nimble_csv, "~> 1.0", [hex: :nimble_csv, repo: "hexpm", optional: true]}, {:nimble_ownership, "~> 0.2.0 or ~> 0.3.0", [hex: :nimble_ownership, repo: "hexpm", optional: false]}, {:plug, "~> 1.0", [hex: :plug, repo: "hexpm", optional: true]}], "hexpm", "2ddd3d33f9ab714ced8d3c15fd03db40c14dbf129003c4a3eb80fac2cc0b1b08"},
"retry": {:hex, :retry, "0.18.0", "dc58ebe22c95aa00bc2459f9e0c5400e6005541cf8539925af0aa027dc860543", [:mix], [], "hexpm", "9483959cc7bf69c9e576d9dfb2b678b71c045d3e6f39ab7c9aa1489df4492d73"},
"rewrite": {:hex, :rewrite, "0.10.5", "6afadeae0b9d843b27ac6225e88e165884875e0aed333ef4ad3bf36f9c101bed", [:mix], [{:glob_ex, "~> 0.1", [hex: :glob_ex, repo: "hexpm", optional: false]}, {:sourceror, "~> 1.0", [hex: :sourceror, repo: "hexpm", optional: false]}], "hexpm", "51cc347a4269ad3a1e7a2c4122dbac9198302b082f5615964358b4635ebf3d4f"},
@@ -143,10 +137,8 @@
"tesla": {:hex, :tesla, "1.11.0", "81b2b10213dddb27105ec6102d9eb0cc93d7097a918a0b1594f2dfd1a4601190", [:mix], [{:castore, "~> 0.1 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:exjsx, ">= 3.0.0", [hex: :exjsx, repo: "hexpm", optional: true]}, {:finch, "~> 0.13", [hex: :finch, repo: "hexpm", optional: true]}, {:fuse, "~> 2.4", [hex: :fuse, repo: "hexpm", optional: true]}, {:gun, ">= 1.0.0", [hex: :gun, repo: "hexpm", optional: true]}, {:hackney, "~> 1.6", [hex: :hackney, repo: "hexpm", optional: true]}, {:ibrowse, "4.4.2", [hex: :ibrowse, repo: "hexpm", optional: true]}, {:jason, ">= 1.0.0", [hex: :jason, repo: "hexpm", optional: true]}, {:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:mint, "~> 1.0", [hex: :mint, repo: "hexpm", optional: true]}, {:msgpax, "~> 2.3", [hex: :msgpax, repo: "hexpm", optional: true]}, {:poison, ">= 1.0.0", [hex: :poison, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: true]}], "hexpm", "b83ab5d4c2d202e1ea2b7e17a49f788d49a699513d7c4f08f2aef2c281be69db"},
"thousand_island": {:hex, :thousand_island, "1.3.11", "b68f3e91f74d564ae20b70d981bbf7097dde084343c14ae8a33e5b5fbb3d6f37", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "555c18c62027f45d9c80df389c3d01d86ba11014652c00be26e33b1b64e98d29"},
"timex": {:hex, :timex, "3.7.11", "bb95cb4eb1d06e27346325de506bcc6c30f9c6dea40d1ebe390b262fad1862d1", [:mix], [{:combine, "~> 0.10", [hex: :combine, repo: "hexpm", optional: false]}, {:gettext, "~> 0.20", [hex: :gettext, repo: "hexpm", optional: false]}, {:tzdata, "~> 1.1", [hex: :tzdata, repo: "hexpm", optional: false]}], "hexpm", "8b9024f7efbabaf9bd7aa04f65cf8dcd7c9818ca5737677c7b76acbc6a94d1aa"},
"typable": {:hex, :typable, "0.3.0", "0431e121d124cd26f312123e313d2689b9a5322b15add65d424c07779eaa3ca1", [:mix], [], "hexpm", "880a0797752da1a4c508ac48f94711e04c86156f498065a83d160eef945858f8"},
"tzdata": {:hex, :tzdata, "1.1.3", "b1cef7bb6de1de90d4ddc25d33892b32830f907e7fc2fccd1e7e22778ab7dfbc", [:mix], [{:hackney, "~> 1.17", [hex: :hackney, repo: "hexpm", optional: false]}], "hexpm", "d4ca85575a064d29d4e94253ee95912edfb165938743dbf002acdf0dcecb0c28"},
"ueberauth": {:hex, :ueberauth, "0.10.8", "ba78fbcbb27d811a6cd06ad851793aaf7d27c3b30c9e95349c2c362b344cd8f0", [:mix], [{:plug, "~> 1.5", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm", "f2d3172e52821375bccb8460e5fa5cb91cfd60b19b636b6e57e9759b6f8c10c1"},
"ulid": {:hex, :ulid, "0.2.0", "1ef02026b7c8fa78a6ae6cb5e0d8f4ba92ed726b369849da328f93b7c0dab9cd", [:mix], [], "hexpm", "fadcc1d4cfa49028172f54bab9e464a69fb14f48f7652dad706d2bbb1ef76a6c"},
"unicode_util_compat": {:hex, :unicode_util_compat, "0.7.0", "bc84380c9ab48177092f43ac89e4dfa2c6d62b40b8bd132b1059ecc7232f9a78", [:rebar3], [], "hexpm", "25eee6d67df61960cf6a794239566599b09e17e668d3700247bc498638152521"},
"unsafe": {:hex, :unsafe, "1.0.2", "23c6be12f6c1605364801f4b47007c0c159497d0446ad378b5cf05f1855c0581", [:mix], [], "hexpm", "b485231683c3ab01a9cd44cb4a79f152c6f3bb87358439c6f68791b85c2df675"},
"uuid": {:hex, :uuid, "1.1.8", "e22fc04499de0de3ed1116b770c7737779f226ceefa0badb3592e64d5cfb4eb9", [:mix], [], "hexpm", "c790593b4c3b601f5dc2378baae7efaf5b3d73c4c6456ba85759905be792f2ac"},

View File

@@ -0,0 +1,24 @@
defmodule WandererApp.Repo.Migrations.AddMapPerformanceIndexes do
@moduledoc """
Updates resources based on their most recent snapshots.
This file was autogenerated with `mix ash_postgres.generate_migrations`
"""
use Ecto.Migration
def up do
create index(:map_system_v1, [:map_id],
name: "map_system_v1_map_id_visible_index",
where: "visible = true"
)
create index(:map_chain_v1, [:map_id], name: "map_chain_v1_map_id_index")
end
def down do
drop_if_exists index(:map_chain_v1, [:map_id], name: "map_chain_v1_map_id_index")
drop_if_exists index(:map_system_v1, [:map_id], name: "map_system_v1_map_id_visible_index")
end
end
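
The partial index on visible systems and the plain index on chains target the per-map read paths. A hedged example of the kind of query the first index serves, written as a schemaless Ecto query against the table and column names from the migration (the application's actual schemas are not shown in this diff):

import Ecto.Query

map_id = "00000000-0000-0000-0000-000000000000"  # placeholder UUID

visible_systems =
  from(s in "map_system_v1",
    where: s.map_id == type(^map_id, Ecto.UUID) and s.visible == true,
    select: %{solar_system_id: s.solar_system_id, name: s.name}
  )
  |> WandererApp.Repo.all()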

View File

@@ -0,0 +1,144 @@
defmodule WandererApp.Repo.Migrations.FixDuplicateMapSlugs do
use Ecto.Migration
import Ecto.Query
def up do
# Check for duplicates first
has_duplicates = check_for_duplicates()
# If duplicates exist, drop the index first to allow fixing them
if has_duplicates do
IO.puts("Duplicates found, dropping index before cleanup...")
drop_index_if_exists()
end
# Fix duplicate slugs in maps_v1 table
fix_duplicate_slugs()
# Ensure unique index exists (recreate if needed)
ensure_unique_index()
end
def down do
# This migration is idempotent and safe to run multiple times
# No need to revert as it only fixes data integrity issues
:ok
end
defp check_for_duplicates do
duplicates_query = """
SELECT COUNT(*) as duplicate_count
FROM (
SELECT slug
FROM maps_v1
GROUP BY slug
HAVING count(*) > 1
) duplicates
"""
case repo().query(duplicates_query, []) do
{:ok, %{rows: [[count]]}} when count > 0 ->
IO.puts("Found #{count} duplicate slug(s)")
true
{:ok, %{rows: [[0]]}} ->
false
{:error, error} ->
IO.puts("Error checking for duplicates: #{inspect(error)}")
false
end
end
defp drop_index_if_exists do
index_exists_query = """
SELECT EXISTS (
SELECT 1
FROM pg_indexes
WHERE tablename = 'maps_v1'
AND indexname = 'maps_v1_unique_slug_index'
)
"""
case repo().query(index_exists_query, []) do
{:ok, %{rows: [[true]]}} ->
IO.puts("Dropping existing unique index...")
execute("DROP INDEX IF EXISTS maps_v1_unique_slug_index")
IO.puts("✓ Index dropped")
{:ok, %{rows: [[false]]}} ->
IO.puts("No existing index to drop")
{:error, error} ->
IO.puts("Error checking index: #{inspect(error)}")
end
end
defp fix_duplicate_slugs do
# Get all duplicate slugs with their IDs
duplicates_query = """
SELECT slug, array_agg(id::text ORDER BY updated_at) as ids
FROM maps_v1
GROUP BY slug
HAVING count(*) > 1
"""
case repo().query(duplicates_query, []) do
{:ok, %{rows: rows}} when length(rows) > 0 ->
IO.puts("Fixing #{length(rows)} duplicate slug(s)...")
Enum.each(rows, fn [slug, ids] ->
IO.puts("Processing duplicate slug: #{slug} (#{length(ids)} occurrences)")
# Keep the first one (oldest), rename the rest
[_keep_id | rename_ids] = ids
rename_ids
|> Enum.with_index(2)
|> Enum.each(fn {id_string, n} ->
new_slug = "#{slug}-#{n}"
# Use parameterized query for safety
update_query = "UPDATE maps_v1 SET slug = $1 WHERE id::text = $2"
repo().query!(update_query, [new_slug, id_string])
IO.puts(" ✓ Renamed #{id_string} to '#{new_slug}'")
end)
end)
IO.puts("✓ All duplicate slugs fixed!")
{:ok, %{rows: []}} ->
IO.puts("No duplicate slugs to fix")
{:error, error} ->
IO.puts("Error checking for duplicates: #{inspect(error)}")
raise "Failed to check for duplicate slugs: #{inspect(error)}"
end
end
defp ensure_unique_index do
# Check if index exists
index_exists_query = """
SELECT EXISTS (
SELECT 1
FROM pg_indexes
WHERE tablename = 'maps_v1'
AND indexname = 'maps_v1_unique_slug_index'
)
"""
case repo().query(index_exists_query, []) do
{:ok, %{rows: [[true]]}} ->
IO.puts("Unique index on slug already exists")
{:ok, %{rows: [[false]]}} ->
IO.puts("Creating unique index on slug...")
create_if_not_exists index(:maps_v1, [:slug], unique: true, name: :maps_v1_unique_slug_index)
IO.puts("✓ Index created successfully!")
{:error, error} ->
IO.puts("Error checking index: #{inspect(error)}")
raise "Failed to check index: #{inspect(error)}"
end
end
end
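
The renaming rule in fix_duplicate_slugs/0, shown in isolation: the first row (ordered by updated_at) keeps its slug and the rest receive a numeric suffix. The IDs and slug below are made up:

ids = ["id-a", "id-b", "id-c"]
[_keep | rename] = ids

rename
|> Enum.with_index(2)
|> Enum.map(fn {_id, n} -> "home-chain-#{n}" end)
#=> ["home-chain-2", "home-chain-3"]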

View File

@@ -0,0 +1,201 @@
{
"attributes": [
{
"allow_nil?": false,
"default": "fragment(\"gen_random_uuid()\")",
"generated?": false,
"primary_key?": true,
"references": null,
"size": null,
"source": "id",
"type": "uuid"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "solar_system_source",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "solar_system_target",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "mass_status",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "time_status",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "2",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "ship_size_type",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "type",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "wormhole_type",
"type": "text"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "count_of_passage",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "locked",
"type": "boolean"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "custom_info",
"type": "text"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "inserted_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "updated_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": {
"deferrable": false,
"destination_attribute": "id",
"destination_attribute_default": null,
"destination_attribute_generated": null,
"index?": false,
"match_type": null,
"match_with": null,
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"name": "map_chain_v1_map_id_fkey",
"on_delete": null,
"on_update": null,
"primary_key?": true,
"schema": null,
"table": "maps_v1"
},
"size": null,
"source": "map_id",
"type": "uuid"
}
],
"base_filter": null,
"check_constraints": [],
"custom_indexes": [
{
"all_tenants?": false,
"concurrently": false,
"error_fields": [
"map_id"
],
"fields": [
{
"type": "atom",
"value": "map_id"
}
],
"include": null,
"message": null,
"name": "map_chain_v1_map_id_index",
"nulls_distinct": true,
"prefix": null,
"table": null,
"unique": false,
"using": null,
"where": null
}
],
"custom_statements": [],
"has_create_action": true,
"hash": "43AE341D09AA875BB0F0D2ACE7AC6301064697D656FD1729FC36E6A1F77E4CB7",
"identities": [],
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"repo": "Elixir.WandererApp.Repo",
"schema": null,
"table": "map_chain_v1"
}

View File

@@ -0,0 +1,260 @@
{
"attributes": [
{
"allow_nil?": false,
"default": "fragment(\"gen_random_uuid()\")",
"generated?": false,
"primary_key?": true,
"references": null,
"size": null,
"source": "id",
"type": "uuid"
},
{
"allow_nil?": false,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "solar_system_id",
"type": "bigint"
},
{
"allow_nil?": false,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "name",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "custom_name",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "description",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "tag",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "temporary_name",
"type": "text"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "labels",
"type": "text"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "status",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "true",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "visible",
"type": "boolean"
},
{
"allow_nil?": true,
"default": "false",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "locked",
"type": "boolean"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "position_x",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "0",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "position_y",
"type": "bigint"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "added_at",
"type": "utc_datetime"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "linked_sig_eve_id",
"type": "text"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "inserted_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": false,
"default": "fragment(\"(now() AT TIME ZONE 'utc')\")",
"generated?": false,
"primary_key?": false,
"references": null,
"size": null,
"source": "updated_at",
"type": "utc_datetime_usec"
},
{
"allow_nil?": true,
"default": "nil",
"generated?": false,
"primary_key?": false,
"references": {
"deferrable": false,
"destination_attribute": "id",
"destination_attribute_default": null,
"destination_attribute_generated": null,
"index?": false,
"match_type": null,
"match_with": null,
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"name": "map_system_v1_map_id_fkey",
"on_delete": null,
"on_update": null,
"primary_key?": true,
"schema": null,
"table": "maps_v1"
},
"size": null,
"source": "map_id",
"type": "uuid"
}
],
"base_filter": null,
"check_constraints": [],
"custom_indexes": [
{
"all_tenants?": false,
"concurrently": false,
"error_fields": [
"map_id"
],
"fields": [
{
"type": "atom",
"value": "map_id"
}
],
"include": null,
"message": null,
"name": "map_system_v1_map_id_visible_index",
"nulls_distinct": true,
"prefix": null,
"table": null,
"unique": false,
"using": null,
"where": "visible = true"
}
],
"custom_statements": [],
"has_create_action": true,
"hash": "AD7B82611EDA495AD35F114406C7F0C2D941C10E51105361002AA3144D7F7EA9",
"identities": [
{
"all_tenants?": false,
"base_filter": null,
"index_name": "map_system_v1_map_solar_system_id_index",
"keys": [
{
"type": "atom",
"value": "map_id"
},
{
"type": "atom",
"value": "solar_system_id"
}
],
"name": "map_solar_system_id",
"nils_distinct?": true,
"where": null
}
],
"multitenancy": {
"attribute": null,
"global": null,
"strategy": null
},
"repo": "Elixir.WandererApp.Repo",
"schema": null,
"table": "map_system_v1"
}

View File

@@ -13,11 +13,11 @@ defmodule WandererAppWeb.MapConnectionAPIControllerSuccessTest do
map = insert(:map, %{owner_id: character.id})
# Start the map server for this test map
{:ok, _pid} =
DynamicSupervisor.start_child(
{:via, PartitionSupervisor, {WandererApp.Map.DynamicSupervisors, self()}},
{WandererApp.Map.ServerSupervisor, map_id: map.id}
)
# {:ok, _pid} =
# DynamicSupervisor.start_child(
# {:via, PartitionSupervisor, {WandererApp.Map.DynamicSupervisors, self()}},
# {WandererApp.Map.ServerSupervisor, map_id: map.id}
# )
# Create systems that connections can reference
system1 =
@@ -181,11 +181,11 @@ defmodule WandererAppWeb.MapConnectionAPIControllerSuccessTest do
map = insert(:map, %{owner_id: character.id})
# Start the map server for this test map
{:ok, _pid} =
DynamicSupervisor.start_child(
{:via, PartitionSupervisor, {WandererApp.Map.DynamicSupervisors, self()}},
{WandererApp.Map.ServerSupervisor, map_id: map.id}
)
# {:ok, _pid} =
# DynamicSupervisor.start_child(
# {:via, PartitionSupervisor, {WandererApp.Map.DynamicSupervisors, self()}},
# {WandererApp.Map.ServerSupervisor, map_id: map.id}
# )
conn =
build_conn()

View File

@@ -24,11 +24,11 @@ defmodule WandererAppWeb.MapSystemAPIControllerSuccessTest do
|> assign(:owner_user_id, user.id)
# Start the map server for the test map using the proper PartitionSupervisor
{:ok, _pid} =
DynamicSupervisor.start_child(
{:via, PartitionSupervisor, {WandererApp.Map.DynamicSupervisors, self()}},
{WandererApp.Map.ServerSupervisor, map_id: map.id}
)
# {:ok, _pid} =
# DynamicSupervisor.start_child(
# {:via, PartitionSupervisor, {WandererApp.Map.DynamicSupervisors, self()}},
# {WandererApp.Map.ServerSupervisor, map_id: map.id}
# )
%{conn: conn, user: user, character: character, map: map}
end

View File

@@ -0,0 +1,437 @@
defmodule WandererApp.Map.CacheRTreeTest do
use ExUnit.Case, async: true
alias WandererApp.Map.CacheRTree
setup do
# Unique tree name per test to ensure isolation
tree_name = "test_rtree_#{:rand.uniform(1_000_000)}"
CacheRTree.init_tree(tree_name)
on_exit(fn ->
CacheRTree.clear_tree(tree_name)
end)
{:ok, tree_name: tree_name}
end
describe "init_tree/2" do
test "initializes empty tree with default config" do
tree_name = "test_init_#{:rand.uniform(1_000_000)}"
assert :ok = CacheRTree.init_tree(tree_name)
# Verify empty tree
assert {:ok, []} = CacheRTree.query([{0, 100}, {0, 100}], tree_name)
# Cleanup
CacheRTree.clear_tree(tree_name)
end
test "initializes tree with custom config" do
tree_name = "test_init_config_#{:rand.uniform(1_000_000)}"
assert :ok = CacheRTree.init_tree(tree_name, %{width: 200, verbose: true})
# Cleanup
CacheRTree.clear_tree(tree_name)
end
end
describe "insert/2" do
test "inserts single leaf", %{tree_name: name} do
leaf = {30000142, [{100, 230}, {50, 84}]}
assert {:ok, %{}} = CacheRTree.insert(leaf, name)
# Verify insertion
{:ok, ids} = CacheRTree.query([{100, 230}, {50, 84}], name)
assert 30000142 in ids
end
test "inserts multiple leaves", %{tree_name: name} do
leaves = [
{30000142, [{100, 230}, {50, 84}]},
{30000143, [{250, 380}, {100, 134}]},
{30000144, [{400, 530}, {50, 84}]}
]
assert {:ok, %{}} = CacheRTree.insert(leaves, name)
# Verify all insertions
{:ok, ids1} = CacheRTree.query([{100, 230}, {50, 84}], name)
assert 30000142 in ids1
{:ok, ids2} = CacheRTree.query([{250, 380}, {100, 134}], name)
assert 30000143 in ids2
{:ok, ids3} = CacheRTree.query([{400, 530}, {50, 84}], name)
assert 30000144 in ids3
end
test "handles duplicate ID by overwriting", %{tree_name: name} do
# Insert first time
CacheRTree.insert({30000142, [{100, 230}, {50, 84}]}, name)
# Insert same ID with different bounding box
CacheRTree.insert({30000142, [{200, 330}, {100, 134}]}, name)
# Should find in new location
{:ok, ids_new} = CacheRTree.query([{200, 330}, {100, 134}], name)
assert 30000142 in ids_new
# Should NOT find in old location
{:ok, ids_old} = CacheRTree.query([{100, 230}, {50, 84}], name)
assert 30000142 not in ids_old
end
test "handles integer IDs", %{tree_name: name} do
leaf = {123456, [{0, 130}, {0, 34}]}
assert {:ok, %{}} = CacheRTree.insert(leaf, name)
end
test "handles string IDs", %{tree_name: name} do
leaf = {"system_abc", [{0, 130}, {0, 34}]}
assert {:ok, %{}} = CacheRTree.insert(leaf, name)
{:ok, ids} = CacheRTree.query([{0, 130}, {0, 34}], name)
assert "system_abc" in ids
end
end
describe "delete/2" do
test "deletes single leaf", %{tree_name: name} do
CacheRTree.insert({30000142, [{100, 230}, {50, 84}]}, name)
assert {:ok, %{}} = CacheRTree.delete([30000142], name)
# Verify deletion
{:ok, ids} = CacheRTree.query([{100, 230}, {50, 84}], name)
assert ids == []
end
test "deletes multiple leaves", %{tree_name: name} do
leaves = [
{30000142, [{100, 230}, {50, 84}]},
{30000143, [{250, 380}, {100, 134}]},
{30000144, [{400, 530}, {50, 84}]}
]
CacheRTree.insert(leaves, name)
# Delete two of them
assert {:ok, %{}} = CacheRTree.delete([30000142, 30000143], name)
# Verify deletions
{:ok, ids1} = CacheRTree.query([{100, 230}, {50, 84}], name)
assert ids1 == []
{:ok, ids2} = CacheRTree.query([{250, 380}, {100, 134}], name)
assert ids2 == []
# Third should still exist
{:ok, ids3} = CacheRTree.query([{400, 530}, {50, 84}], name)
assert 30000144 in ids3
end
test "handles non-existent ID gracefully", %{tree_name: name} do
assert {:ok, %{}} = CacheRTree.delete([99999], name)
assert {:ok, %{}} = CacheRTree.delete([99998, 99999], name)
end
test "handles deleting from empty tree", %{tree_name: name} do
assert {:ok, %{}} = CacheRTree.delete([30000142], name)
end
end
describe "update/3" do
test "updates leaf with new bounding box", %{tree_name: name} do
CacheRTree.insert({30000142, [{100, 230}, {50, 84}]}, name)
# Update to new position
new_box = [{200, 330}, {100, 134}]
assert {:ok, %{}} = CacheRTree.update(30000142, new_box, name)
# Should find in new location
{:ok, ids_new} = CacheRTree.query(new_box, name)
assert 30000142 in ids_new
# Should NOT find in old location
{:ok, ids_old} = CacheRTree.query([{100, 230}, {50, 84}], name)
assert 30000142 not in ids_old
end
test "updates leaf with old/new tuple", %{tree_name: name} do
old_box = [{100, 230}, {50, 84}]
new_box = [{200, 330}, {100, 134}]
CacheRTree.insert({30000142, old_box}, name)
# Update with tuple
assert {:ok, %{}} = CacheRTree.update(30000142, {old_box, new_box}, name)
# Should find in new location
{:ok, ids_new} = CacheRTree.query(new_box, name)
assert 30000142 in ids_new
end
test "handles updating non-existent leaf", %{tree_name: name} do
# Should work like insert
new_box = [{200, 330}, {100, 134}]
assert {:ok, %{}} = CacheRTree.update(99999, new_box, name)
{:ok, ids} = CacheRTree.query(new_box, name)
assert 99999 in ids
end
test "updates preserve ID type", %{tree_name: name} do
CacheRTree.insert({"system_abc", [{100, 230}, {50, 84}]}, name)
new_box = [{200, 330}, {100, 134}]
CacheRTree.update("system_abc", new_box, name)
{:ok, ids} = CacheRTree.query(new_box, name)
assert "system_abc" in ids
end
end
describe "query/2" do
test "returns empty list for empty tree", %{tree_name: name} do
assert {:ok, []} = CacheRTree.query([{0, 100}, {0, 100}], name)
end
test "finds intersecting leaves", %{tree_name: name} do
leaves = [
{30000142, [{100, 230}, {50, 84}]},
{30000143, [{250, 380}, {100, 134}]},
{30000144, [{400, 530}, {50, 84}]}
]
CacheRTree.insert(leaves, name)
# Query overlapping with first system
{:ok, ids} = CacheRTree.query([{150, 280}, {60, 94}], name)
assert 30000142 in ids
assert length(ids) == 1
end
test "excludes non-intersecting leaves", %{tree_name: name} do
leaves = [
{30000142, [{100, 230}, {50, 84}]},
{30000143, [{250, 380}, {100, 134}]}
]
CacheRTree.insert(leaves, name)
# Query that doesn't intersect any leaf
{:ok, ids} = CacheRTree.query([{500, 600}, {200, 250}], name)
assert ids == []
end
test "handles overlapping bounding boxes", %{tree_name: name} do
# Insert overlapping systems
leaves = [
{30000142, [{100, 230}, {50, 84}]},
{30000143, [{150, 280}, {60, 94}]} # Overlaps with first
]
CacheRTree.insert(leaves, name)
# Query that overlaps both
{:ok, ids} = CacheRTree.query([{175, 200}, {65, 80}], name)
assert 30000142 in ids
assert 30000143 in ids
assert length(ids) == 2
end
test "edge case: exact match", %{tree_name: name} do
box = [{100, 230}, {50, 84}]
CacheRTree.insert({30000142, box}, name)
{:ok, ids} = CacheRTree.query(box, name)
assert 30000142 in ids
end
test "edge case: contained box", %{tree_name: name} do
# Insert larger box
CacheRTree.insert({30000142, [{100, 300}, {50, 150}]}, name)
# Query with smaller box inside
{:ok, ids} = CacheRTree.query([{150, 250}, {75, 100}], name)
assert 30000142 in ids
end
test "edge case: containing box", %{tree_name: name} do
# Insert smaller box
CacheRTree.insert({30000142, [{150, 250}, {75, 100}]}, name)
# Query with larger box that contains it
{:ok, ids} = CacheRTree.query([{100, 300}, {50, 150}], name)
assert 30000142 in ids
end
test "edge case: adjacent boxes don't intersect", %{tree_name: name} do
CacheRTree.insert({30000142, [{100, 230}, {50, 84}]}, name)
# Adjacent box (touching but not overlapping)
{:ok, ids} = CacheRTree.query([{230, 360}, {50, 84}], name)
assert ids == []
end
test "handles negative coordinates", %{tree_name: name} do
leaves = [
{30000142, [{-200, -70}, {-100, -66}]},
{30000143, [{-50, 80}, {-25, 9}]}
]
CacheRTree.insert(leaves, name)
{:ok, ids} = CacheRTree.query([{-150, -100}, {-90, -70}], name)
assert 30000142 in ids
end
end
describe "spatial grid" do
test "correctly maps leaves to grid cells", %{tree_name: name} do
# System node is 130x34, grid is 150x150
# This should fit in one cell
leaf = {30000142, [{10, 140}, {10, 44}]}
CacheRTree.insert(leaf, name)
# Query should find it
{:ok, ids} = CacheRTree.query([{10, 140}, {10, 44}], name)
assert 30000142 in ids
end
test "handles leaves spanning multiple cells", %{tree_name: name} do
# Large box spanning 4 grid cells (150x150 each)
large_box = [{0, 300}, {0, 300}]
CacheRTree.insert({30000142, large_box}, name)
# Should be queryable from any quadrant
{:ok, ids1} = CacheRTree.query([{50, 100}, {50, 100}], name)
assert 30000142 in ids1
{:ok, ids2} = CacheRTree.query([{200, 250}, {50, 100}], name)
assert 30000142 in ids2
{:ok, ids3} = CacheRTree.query([{50, 100}, {200, 250}], name)
assert 30000142 in ids3
{:ok, ids4} = CacheRTree.query([{200, 250}, {200, 250}], name)
assert 30000142 in ids4
end
test "maintains grid consistency on delete", %{tree_name: name} do
# Insert leaf spanning multiple cells
large_box = [{0, 300}, {0, 300}]
CacheRTree.insert({30000142, large_box}, name)
# Delete it
CacheRTree.delete([30000142], name)
# Should not be found in any cell
{:ok, ids1} = CacheRTree.query([{50, 100}, {50, 100}], name)
assert ids1 == []
{:ok, ids2} = CacheRTree.query([{200, 250}, {200, 250}], name)
assert ids2 == []
end
test "grid handles boundary conditions", %{tree_name: name} do
# Boxes exactly on grid boundaries
leaves = [
{30000142, [{0, 130}, {0, 34}]}, # Cell (0,0)
{30000143, [{150, 280}, {0, 34}]}, # Cell (1,0)
{30000144, [{0, 130}, {150, 184}]} # Cell (0,1)
]
CacheRTree.insert(leaves, name)
# Each should be queryable
{:ok, ids1} = CacheRTree.query([{0, 130}, {0, 34}], name)
assert 30000142 in ids1
{:ok, ids2} = CacheRTree.query([{150, 280}, {0, 34}], name)
assert 30000143 in ids2
{:ok, ids3} = CacheRTree.query([{0, 130}, {150, 184}], name)
assert 30000144 in ids3
end
end
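# Editor's note: an illustrative sketch, not part of the original test file,
# of how a bounding box could map onto the 150x150 grid the comments above
# refer to. The cell size, the floor division, and treating the max coordinate
# as exclusive are assumptions consistent with these tests (e.g. {0, 300}
# spanning cells 0 and 1 on each axis, i.e. 4 cells), not the actual
# implementation. `grid_cells_for/1` and `cell_index/1` are hypothetical helpers.
@sketch_cell_size 150
defp grid_cells_for([{x_min, x_max}, {y_min, y_max}]) do
  for cx <- cell_index(x_min)..cell_index(x_max - 1),
      cy <- cell_index(y_min)..cell_index(y_max - 1),
      do: {cx, cy}
end
defp cell_index(coord), do: Integer.floor_div(coord, @sketch_cell_size)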
describe "integration" do
test "realistic map scenario with many systems", %{tree_name: name} do
# Simulate 100 systems in a typical map layout
systems = for i <- 1..100 do
x = rem(i, 10) * 200
y = div(i, 10) * 100
{30000000 + i, [{x, x + 130}, {y, y + 34}]}
end
# Insert all systems
assert {:ok, %{}} = CacheRTree.insert(systems, name)
# Query for a specific position (i = 11 yields x = 200, y = 100)
{:ok, ids} = CacheRTree.query([{200, 330}, {100, 134}], name)
assert 30000011 in ids
# Delete some systems
to_delete = Enum.map(1..10, & &1 + 30000000)
assert {:ok, %{}} = CacheRTree.delete(to_delete, name)
# Update some systems
assert {:ok, %{}} = CacheRTree.update(30000050, [{1000, 1130}, {500, 534}], name)
# Verify the update
{:ok, ids_updated} = CacheRTree.query([{1000, 1130}, {500, 534}], name)
assert 30000050 in ids_updated
end
test "handles rapid insert/delete cycles", %{tree_name: name} do
# Simulate dynamic map updates
for i <- 1..50 do
system_id = 30000000 + i
box = [{i * 10, i * 10 + 130}, {i * 5, i * 5 + 34}]
# Insert
CacheRTree.insert({system_id, box}, name)
# Immediately query
{:ok, ids} = CacheRTree.query(box, name)
assert system_id in ids
# Delete every other one
if rem(i, 2) == 0 do
CacheRTree.delete([system_id], name)
{:ok, ids_after} = CacheRTree.query(box, name)
assert system_id not in ids_after
end
end
end
test "stress test: position availability checking", %{tree_name: name} do
# Insert systems in a grid pattern
for x <- 0..9, y <- 0..9 do
system_id = x * 10 + y + 30000000
box = [{x * 200, x * 200 + 130}, {y * 100, y * 100 + 34}]
CacheRTree.insert({system_id, box}, name)
end
# Check many positions for availability (simulating auto-positioning)
test_positions = for x <- 0..20, y <- 0..20, do: {x * 100, y * 50}
for {x, y} <- test_positions do
box = [{x, x + 130}, {y, y + 34}]
{:ok, _ids} = CacheRTree.query(box, name)
# No assertion on the contents; the {:ok, _} match verifies each query succeeds
end
end
end
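# Editor's note: an illustrative sketch, not part of the original test file,
# of the availability check the stress test above simulates for
# auto-positioning: a candidate position is free when nothing intersects the
# 130x34 system box placed there. `position_free?/3` is a hypothetical helper
# built only on CacheRTree.query/2 as used in these tests.
defp position_free?(x, y, tree_name) do
  {:ok, ids} = CacheRTree.query([{x, x + 130}, {y, y + 34}], tree_name)
  ids == []
end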
describe "clear_tree/1" do
test "removes all tree data from cache", %{tree_name: name} do
# Insert some data
CacheRTree.insert({30000142, [{100, 230}, {50, 84}]}, name)
# Clear the tree
assert :ok = CacheRTree.clear_tree(name)
# Re-initialize
CacheRTree.init_tree(name)
# Should be empty
{:ok, ids} = CacheRTree.query([{100, 230}, {50, 84}], name)
assert ids == []
end
end
end


@@ -580,6 +580,155 @@ defmodule WandererApp.Map.Operations.SignaturesTest do
end
end
describe "character_eve_id validation" do
test "create_signature uses provided character_eve_id when valid" do
# Create a test character
{:ok, character} =
WandererApp.Api.Character.create(%{
eve_id: "111111111",
name: "Test Character"
})
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
params = %{
"solar_system_id" => 30_000_142,
"eve_id" => "ABC-123",
"character_eve_id" => character.eve_id
}
MapTestHelpers.expect_map_server_error(fn ->
result = Signatures.create_signature(conn, params)
case result do
{:ok, data} ->
# Should use the provided character_eve_id, not the owner's
assert Map.get(data, "character_eve_id") == character.eve_id
{:error, _} ->
# System not found error is acceptable
:ok
end
end)
end
test "create_signature rejects invalid character_eve_id" do
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
params = %{
"solar_system_id" => 30_000_142,
"eve_id" => "ABC-123",
"character_eve_id" => "invalid_char_id_999"
}
MapTestHelpers.expect_map_server_error(fn ->
result = Signatures.create_signature(conn, params)
# Should return invalid_character error
assert {:error, :invalid_character} = result
end)
end
test "create_signature falls back to owner when character_eve_id not provided" do
owner_char_id = "888888888"
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: owner_char_id,
owner_user_id: Ecto.UUID.generate()
}
}
params = %{
"solar_system_id" => 30_000_142,
"eve_id" => "ABC-123"
}
MapTestHelpers.expect_map_server_error(fn ->
result = Signatures.create_signature(conn, params)
case result do
{:ok, data} ->
# Should use the owner's character_eve_id
assert Map.get(data, "character_eve_id") == owner_char_id
{:error, _} ->
# System not found error is acceptable
:ok
end
end)
end
test "update_signature respects provided character_eve_id when valid" do
# Create a test character
{:ok, character} =
WandererApp.Api.Character.create(%{
eve_id: "222222222",
name: "Another Test Character"
})
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
sig_id = Ecto.UUID.generate()
params = %{
"name" => "Updated Name",
"character_eve_id" => character.eve_id
}
result = Signatures.update_signature(conn, sig_id, params)
case result do
{:ok, data} ->
# Should use the provided character_eve_id
assert Map.get(data, "character_eve_id") == character.eve_id
{:error, _} ->
# Signature not found error is acceptable
:ok
end
end
test "update_signature rejects invalid character_eve_id" do
conn = %{
assigns: %{
map_id: Ecto.UUID.generate(),
owner_character_id: "999999999",
owner_user_id: Ecto.UUID.generate()
}
}
sig_id = Ecto.UUID.generate()
params = %{
"name" => "Updated Name",
"character_eve_id" => "totally_invalid_char"
}
result = Signatures.update_signature(conn, sig_id, params)
# Should return invalid_character error
assert {:error, :invalid_character} = result
end
end
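# Editor's note: an illustrative sketch, not part of the original test file,
# of the resolution rule the tests above pin down: a supplied
# "character_eve_id" must resolve to a known character or the call fails with
# {:error, :invalid_character}; when the param is absent, the owner's
# character id from conn.assigns is used instead. `resolve_character_eve_id/3`
# is hypothetical; `lookup_fn` stands in for whatever character lookup the
# real implementation performs.
defp resolve_character_eve_id(params, owner_eve_id, lookup_fn) do
  case Map.fetch(params, "character_eve_id") do
    {:ok, eve_id} ->
      if lookup_fn.(eve_id), do: {:ok, eve_id}, else: {:error, :invalid_character}
    :error ->
      {:ok, owner_eve_id}
  end
end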
describe "parameter merging and character_eve_id injection" do
test "create_signature injects character_eve_id correctly" do
char_id = "987654321"