Compare commits

...

28 Commits

Author SHA1 Message Date
CI  d939e32500  chore: release version v1.88.9  2025-11-29 00:11:43 +00:00
Dmitry Popov  97ebe66db5  Merge branch 'main' of github.com:wanderer-industries/wanderer  2025-11-29 01:11:04 +01:00
Dmitry Popov  f437fc4541  fix(core): fixed linked signatures cleanup  2025-11-29 01:11:01 +01:00
CI  6c65538450  chore: [skip ci]  2025-11-28 23:54:56 +00:00
CI  d566a74df4  chore: release version v1.88.8  2025-11-28 23:54:56 +00:00
Dmitry Popov  03e030a7d3  Merge branch 'main' of github.com:wanderer-industries/wanderer  2025-11-29 00:54:10 +01:00
Dmitry Popov  e738e1da9c  fix(core): fixed pings issue  2025-11-29 00:54:07 +01:00
CI  972b3a6cbe  chore: [skip ci]  2025-11-28 23:43:54 +00:00
CI  96b4a3077e  chore: release version v1.88.7  2025-11-28 23:43:53 +00:00
Dmitry Popov  6b308e8a1e  Merge branch 'main' of github.com:wanderer-industries/wanderer  2025-11-29 00:43:16 +01:00
Dmitry Popov  d0874cbc6f  fix(core): fixed tracking issues  2025-11-29 00:43:13 +01:00
CI  f106a51bf5  chore: [skip ci]  2025-11-28 22:50:24 +00:00
CI  dc47dc5f81  chore: release version v1.88.6  2025-11-28 22:50:24 +00:00
Dmitry Popov  dc81cffeea  Merge branch 'main' of github.com:wanderer-industries/wanderer  2025-11-28 23:49:53 +01:00
Dmitry Popov  5766fcf4d8  fix(core): fixed tracking issues  2025-11-28 23:49:48 +01:00
CI  c57a3b2cea  chore: [skip ci]  2025-11-28 00:28:34 +00:00
CI  0c1fa8e79b  chore: release version v1.88.5  2025-11-28 00:28:34 +00:00
Dmitry Popov  36cc91915c  Merge branch 'main' of github.com:wanderer-industries/wanderer  2025-11-28 01:27:30 +01:00
Dmitry Popov  bb644fde31  fix(core): fixed env errors  2025-11-28 01:27:26 +01:00
CI  269b54d382  chore: [skip ci]  2025-11-27 11:17:21 +00:00
CI  a9115cc653  chore: release version v1.88.4  2025-11-27 11:17:21 +00:00
Dmitry Popov  eeea7aee8b  Merge pull request #563 from guarzo/guarzo/killsdefense (fix: defensive check for undefined excluded systems)  2025-11-27 15:16:52 +04:00
Guarzo  700089e381  fix: defensive check for undefined excluded systems  2025-11-27 04:12:59 +00:00
CI  932935557c  chore: [skip ci]  2025-11-26 22:42:01 +00:00
CI  2890a76cf2  chore: release version v1.88.3  2025-11-26 22:42:01 +00:00
Dmitry Popov  4ac9b2e2b7  chore: Updated mix version  2025-11-26 23:41:24 +01:00
Dmitry Popov  f92436f3f0  Merge branch 'develop'  2025-11-26 22:37:38 +01:00
Dmitry Popov  22d97cc99d  fix(core): fixed env issues
2025-11-26 22:18:02 +01:00
34 changed files with 1728 additions and 325 deletions

View File

@@ -19,15 +19,19 @@ env:
 jobs:
   test:
-    name: Test Suite
+    name: Test Suite (Partition ${{ matrix.partition }})
     runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        partition: [1, 2, 3, 4]
     services:
       postgres:
         image: postgres:15
         env:
           POSTGRES_PASSWORD: postgres
-          POSTGRES_DB: wanderer_test
+          POSTGRES_DB: wanderer_test${{ matrix.partition }}
         options: >-
           --health-cmd pg_isready
           --health-interval 10s
@@ -35,7 +39,7 @@ jobs:
--health-retries 5
ports:
- 5432:5432
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -102,11 +106,13 @@ jobs:
       - name: Run tests with coverage
         id: tests
         env:
+          MIX_TEST_PARTITION: ${{ matrix.partition }}
         run: |
-          # Run tests with coverage
-          output=$(mix test --cover 2>&1 || true)
+          # Run tests with coverage using partitioning
+          output=$(mix test --cover --partitions 4 2>&1 || true)
           echo "$output" > test_output.txt
           # Parse test results
           if echo "$output" | grep -q "0 failures"; then
             echo "status=✅ All Passed" >> $GITHUB_OUTPUT
@@ -115,16 +121,16 @@ jobs:
echo "status=❌ Some Failed" >> $GITHUB_OUTPUT
test_status="failed"
fi
# Extract test counts
test_line=$(echo "$output" | grep -E "[0-9]+ tests?, [0-9]+ failures?" | head -1 || echo "0 tests, 0 failures")
total_tests=$(echo "$test_line" | grep -o '[0-9]\+ tests\?' | grep -o '[0-9]\+' | head -1 || echo "0")
failures=$(echo "$test_line" | grep -o '[0-9]\+ failures\?' | grep -o '[0-9]\+' | head -1 || echo "0")
echo "total=$total_tests" >> $GITHUB_OUTPUT
echo "failures=$failures" >> $GITHUB_OUTPUT
echo "passed=$((total_tests - failures))" >> $GITHUB_OUTPUT
# Calculate success rate
if [ "$total_tests" -gt 0 ]; then
success_rate=$(echo "scale=1; ($total_tests - $failures) * 100 / $total_tests" | bc)
@@ -132,7 +138,7 @@ jobs:
success_rate="0"
fi
echo "success_rate=$success_rate" >> $GITHUB_OUTPUT
exit_code=$?
echo "exit_code=$exit_code" >> $GITHUB_OUTPUT
continue-on-error: true
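The partitioned CI jobs above rely on Mix's built-in test partitioning: `mix test --partitions 4` reads the `MIX_TEST_PARTITION` variable to decide which slice of the suite to run, and each partition needs its own database (hence `POSTGRES_DB: wanderer_test${{ matrix.partition }}`). The repository's `config/test.exs` database block is not shown in this diff; a typical partition-aware Ecto configuration, with illustrative credentials, looks like:

```elixir
# Hypothetical config/test.exs fragment; repo module and credentials are assumptions.
import Config

# MIX_TEST_PARTITION is set per matrix job (1..4), so each CI partition
# connects to its own database: wanderer_test1, wanderer_test2, ...
config :wanderer_app, WandererApp.Repo,
  username: "postgres",
  password: "postgres",
  hostname: "localhost",
  database: "wanderer_test#{System.get_env("MIX_TEST_PARTITION")}",
  pool: Ecto.Adapters.SQL.Sandbox
```

Locally, `MIX_TEST_PARTITION=2 mix test --partitions 4` would reproduce exactly what CI partition 2 runs.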

View File

@@ -2,6 +2,69 @@
 <!-- changelog -->
+## [v1.88.9](https://github.com/wanderer-industries/wanderer/compare/v1.88.8...v1.88.9) (2025-11-29)
+### Bug Fixes:
+* core: fixed linked signatures cleanup
+## [v1.88.8](https://github.com/wanderer-industries/wanderer/compare/v1.88.7...v1.88.8) (2025-11-28)
+### Bug Fixes:
+* core: fixed pings issue
+## [v1.88.7](https://github.com/wanderer-industries/wanderer/compare/v1.88.6...v1.88.7) (2025-11-28)
+### Bug Fixes:
+* core: fixed tracking issues
+## [v1.88.6](https://github.com/wanderer-industries/wanderer/compare/v1.88.5...v1.88.6) (2025-11-28)
+### Bug Fixes:
+* core: fixed tracking issues
+## [v1.88.5](https://github.com/wanderer-industries/wanderer/compare/v1.88.4...v1.88.5) (2025-11-28)
+### Bug Fixes:
+* core: fixed env errors
+## [v1.88.4](https://github.com/wanderer-industries/wanderer/compare/v1.88.3...v1.88.4) (2025-11-27)
+### Bug Fixes:
+* defensive check for undefined excluded systems
+## [v1.88.3](https://github.com/wanderer-industries/wanderer/compare/v1.88.2...v1.88.3) (2025-11-26)
+### Bug Fixes:
+* core: fixed env issues
 ## [v1.88.1](https://github.com/wanderer-industries/wanderer/compare/v1.88.0...v1.88.1) (2025-11-26)

View File

@@ -31,7 +31,7 @@ export function useSystemKills({ systemId, outCommand, showAllVisible = false, s
     storedSettings: { settingsKills },
   } = useMapRootState();
-  const excludedSystems = useStableValue(settingsKills.excludedSystems);
+  const excludedSystems = useStableValue(settingsKills.excludedSystems ?? []);
   const effectiveSystemIds = useMemo(() => {
     if (showAllVisible) {
View File

@@ -63,6 +63,7 @@ config :wanderer_app, WandererAppWeb.Endpoint,
   ]
 config :wanderer_app,
+  environment: :dev,
   dev_routes: true
 # Do not include metadata nor timestamps in development logs

View File

@@ -1,5 +1,8 @@
 import Config
+# Set environment at compile time for modules using Application.compile_env
+config :wanderer_app, environment: :prod
 # Note we also include the path to a cache manifest
 # containing the digested version of static files. This
 # manifest is generated by the `mix assets.deploy` task,

View File

@@ -1,5 +1,9 @@
 import Config
+# Disable Ash async operations in tests to ensure transactional safety
+# This prevents Ash from spawning tasks that could bypass the Ecto sandbox
+config :ash, :disable_async?, true
 # Configure your database
 #
 # The MIX_TEST_PARTITION environment variable can be used
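The `config :ash, :disable_async?, true` line above exists because of how the Ecto SQL sandbox works: each test process checks out and "owns" a database connection, and a task spawned outside that ownership tree cannot see the test's transaction. A minimal sketch of the sandbox setup this interacts with (module names are illustrative, not taken from this diff):

```elixir
# Hypothetical test_helper.exs fragment, assuming a WandererApp.Repo Ecto repo.
ExUnit.start()

# Each test checks out its own sandboxed connection. A task spawned by Ash
# outside this ownership tree would raise a DBConnection.OwnershipError,
# which is why the diff disables Ash's async operations in tests.
Ecto.Adapters.SQL.Sandbox.mode(WandererApp.Repo, :manual)
```

With async Ash operations disabled, all database work stays in the test process, so it is covered by the sandbox transaction and rolled back cleanly after each test.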

View File

@@ -128,6 +128,8 @@ defmodule WandererApp.Api.MapCharacterSettings do
require_atomic? false
accept([
:tracked,
:followed,
:ship,
:ship_name,
:ship_item_id,

View File

@@ -1,5 +1,18 @@
 defmodule WandererApp.Character.TrackerManager.Impl do
-  @moduledoc false
+  @moduledoc """
+  Implementation of the character tracker manager.
+
+  This module manages the lifecycle of character trackers and handles:
+  - Starting/stopping character tracking
+  - Garbage collection of inactive trackers (5-minute timeout)
+  - Processing the untrack queue (5-minute interval)
+
+  ## Logging
+
+  This module emits detailed logs for debugging character tracking issues:
+  - WARNING: Unexpected states or potential issues
+  - DEBUG: Start/stop tracking events, garbage collection, queue processing
+  """
 
   require Logger
defstruct [
@@ -27,6 +40,11 @@ defmodule WandererApp.Character.TrackerManager.Impl do
Process.send_after(self(), :garbage_collect, @garbage_collection_interval)
Process.send_after(self(), :untrack_characters, @untrack_characters_interval)
Logger.debug("[TrackerManager] Initialized with intervals: " <>
"garbage_collection=#{div(@garbage_collection_interval, 60_000)}min, " <>
"untrack=#{div(@untrack_characters_interval, 60_000)}min, " <>
"inactive_timeout=#{div(@inactive_character_timeout, 60_000)}min")
%{
characters: [],
opts: args
@@ -38,6 +56,10 @@ defmodule WandererApp.Character.TrackerManager.Impl do
{:ok, tracked_characters} = WandererApp.Cache.lookup("tracked_characters", [])
WandererApp.Cache.insert("tracked_characters", [])
if length(tracked_characters) > 0 do
Logger.debug("[TrackerManager] Restoring #{length(tracked_characters)} tracked characters from cache")
end
tracked_characters
|> Enum.each(fn character_id ->
start_tracking(state, character_id)
@@ -53,7 +75,9 @@ defmodule WandererApp.Character.TrackerManager.Impl do
true
)
Logger.debug(fn -> "Add character to track_characters_queue: #{inspect(character_id)}" end)
Logger.debug(fn ->
"[TrackerManager] Queuing character #{character_id} for tracking start"
end)
WandererApp.Cache.insert_or_update(
"track_characters_queue",
@@ -71,13 +95,33 @@ defmodule WandererApp.Character.TrackerManager.Impl do
with {:ok, characters} <- WandererApp.Cache.lookup("tracked_characters", []),
true <- Enum.member?(characters, character_id),
false <- WandererApp.Cache.has_key?("#{character_id}:track_requested") do
Logger.debug(fn -> "Shutting down character tracker: #{inspect(character_id)}" end)
Logger.debug(fn ->
"[TrackerManager] Stopping tracker for character #{character_id} - " <>
"reason: no active maps (garbage collected after #{div(@inactive_character_timeout, 60_000)} minutes)"
end)
WandererApp.Cache.delete("character:#{character_id}:last_active_time")
WandererApp.Character.delete_character_state(character_id)
WandererApp.Character.TrackerPoolDynamicSupervisor.stop_tracking(character_id)
:telemetry.execute([:wanderer_app, :character, :tracker, :stopped], %{count: 1})
:telemetry.execute(
[:wanderer_app, :character, :tracker, :stopped],
%{count: 1, system_time: System.system_time()},
%{character_id: character_id, reason: :garbage_collection}
)
else
{:ok, characters} when is_list(characters) ->
Logger.debug(fn ->
"[TrackerManager] Character #{character_id} not in tracked list, skipping stop"
end)
false ->
Logger.debug(fn ->
"[TrackerManager] Character #{character_id} has pending track request, skipping stop"
end)
_ ->
:ok
end
WandererApp.Cache.insert_or_update(
@@ -101,6 +145,10 @@ defmodule WandererApp.Character.TrackerManager.Impl do
} = track_settings
) do
if track do
Logger.debug(fn ->
"[TrackerManager] Enabling tracking for character #{character_id} on map #{map_id}"
end)
remove_from_untrack_queue(map_id, character_id)
{:ok, character_state} =
@@ -108,6 +156,11 @@ defmodule WandererApp.Character.TrackerManager.Impl do
WandererApp.Character.update_character_state(character_id, character_state)
else
Logger.debug(fn ->
"[TrackerManager] Queuing character #{character_id} for untracking from map #{map_id} - " <>
"will be processed within #{div(@untrack_characters_interval, 60_000)} minutes"
end)
add_to_untrack_queue(map_id, character_id)
end
@@ -130,8 +183,19 @@ defmodule WandererApp.Character.TrackerManager.Impl do
"character_untrack_queue",
[],
fn untrack_queue ->
untrack_queue
|> Enum.reject(fn {m_id, c_id} -> m_id == map_id and c_id == character_id end)
original_length = length(untrack_queue)
filtered =
untrack_queue
|> Enum.reject(fn {m_id, c_id} -> m_id == map_id and c_id == character_id end)
if length(filtered) < original_length do
Logger.debug(fn ->
"[TrackerManager] Removed character #{character_id} from untrack queue for map #{map_id} - " <>
"character re-enabled tracking"
end)
end
filtered
end
)
end
@@ -170,6 +234,12 @@ defmodule WandererApp.Character.TrackerManager.Impl do
Process.send_after(self(), :check_start_queue, @check_start_queue_interval)
{:ok, track_characters_queue} = WandererApp.Cache.lookup("track_characters_queue", [])
if length(track_characters_queue) > 0 do
Logger.debug(fn ->
"[TrackerManager] Processing start queue: #{length(track_characters_queue)} characters"
end)
end
track_characters_queue
|> Enum.each(fn character_id ->
track_character(character_id, %{})
@@ -186,35 +256,66 @@ defmodule WandererApp.Character.TrackerManager.Impl do
{:ok, characters} = WandererApp.Cache.lookup("tracked_characters", [])
characters
|> Task.async_stream(
fn character_id ->
case WandererApp.Cache.lookup("character:#{character_id}:last_active_time") do
{:ok, nil} ->
:skip
Logger.debug(fn ->
"[TrackerManager] Running garbage collection on #{length(characters)} tracked characters"
end)
{:ok, last_active_time} ->
duration = DateTime.diff(DateTime.utc_now(), last_active_time, :second)
if duration * 1000 > @inactive_character_timeout do
{:stop, character_id}
else
inactive_characters =
characters
|> Task.async_stream(
fn character_id ->
case WandererApp.Cache.lookup("character:#{character_id}:last_active_time") do
{:ok, nil} ->
# Character is still active (no last_active_time set)
:skip
end
end
end,
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task,
timeout: :timer.seconds(60)
)
|> Enum.each(fn result ->
case result do
{:ok, {:stop, character_id}} ->
Process.send_after(self(), {:stop_track, character_id}, 100)
_ ->
:ok
end
{:ok, last_active_time} ->
duration_seconds = DateTime.diff(DateTime.utc_now(), last_active_time, :second)
duration_ms = duration_seconds * 1000
if duration_ms > @inactive_character_timeout do
Logger.debug(fn ->
"[TrackerManager] Character #{character_id} marked for garbage collection - " <>
"inactive for #{div(duration_seconds, 60)} minutes " <>
"(threshold: #{div(@inactive_character_timeout, 60_000)} minutes)"
end)
{:stop, character_id, duration_seconds}
else
:skip
end
end
end,
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task,
timeout: :timer.seconds(60)
)
|> Enum.reduce([], fn result, acc ->
case result do
{:ok, {:stop, character_id, duration}} ->
[{character_id, duration} | acc]
_ ->
acc
end
end)
if length(inactive_characters) > 0 do
Logger.debug(fn ->
"[TrackerManager] Garbage collection found #{length(inactive_characters)} inactive characters to stop"
end)
# Emit telemetry for garbage collection
:telemetry.execute(
[:wanderer_app, :character, :tracker, :garbage_collection],
%{inactive_count: length(inactive_characters), total_tracked: length(characters)},
%{character_ids: Enum.map(inactive_characters, fn {id, _} -> id end)}
)
end
inactive_characters
|> Enum.each(fn {character_id, _duration} ->
Process.send_after(self(), {:stop_track, character_id}, 100)
end)
state
@@ -226,9 +327,22 @@ defmodule WandererApp.Character.TrackerManager.Impl do
) do
Process.send_after(self(), :untrack_characters, @untrack_characters_interval)
WandererApp.Cache.lookup!("character_untrack_queue", [])
untrack_queue = WandererApp.Cache.lookup!("character_untrack_queue", [])
if length(untrack_queue) > 0 do
Logger.debug(fn ->
"[TrackerManager] Processing untrack queue: #{length(untrack_queue)} character-map pairs"
end)
end
untrack_queue
|> Task.async_stream(
fn {map_id, character_id} ->
Logger.debug(fn ->
"[TrackerManager] Untracking character #{character_id} from map #{map_id} - " <>
"reason: character no longer present on map"
end)
remove_from_untrack_queue(map_id, character_id)
WandererApp.Cache.delete("map:#{map_id}:character:#{character_id}:solar_system_id")
@@ -255,12 +369,36 @@ defmodule WandererApp.Character.TrackerManager.Impl do
WandererApp.Character.update_character_state(character_id, character_state)
WandererApp.Map.Server.Impl.broadcast!(map_id, :untrack_character, character_id)
# Emit telemetry for untrack event
:telemetry.execute(
[:wanderer_app, :character, :tracker, :untracked_from_map],
%{system_time: System.system_time()},
%{character_id: character_id, map_id: map_id, reason: :presence_left}
)
{:ok, character_id, map_id}
end,
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task,
timeout: :timer.seconds(30)
)
|> Enum.each(fn _result -> :ok end)
|> Enum.each(fn result ->
case result do
{:ok, {:ok, character_id, map_id}} ->
Logger.debug(fn ->
"[TrackerManager] Successfully untracked character #{character_id} from map #{map_id}"
end)
{:exit, reason} ->
Logger.warning(fn ->
"[TrackerManager] Untrack task exited with reason: #{inspect(reason)}"
end)
_ ->
:ok
end
end)
state
end
@@ -268,9 +406,17 @@ defmodule WandererApp.Character.TrackerManager.Impl do
def handle_info({:stop_track, character_id}, state) do
if not WandererApp.Cache.has_key?("character:#{character_id}:is_stop_tracking") do
WandererApp.Cache.insert("character:#{character_id}:is_stop_tracking", true)
Logger.debug(fn -> "Stopping character tracker: #{inspect(character_id)}" end)
Logger.debug(fn ->
"[TrackerManager] Executing stop_track for character #{character_id}"
end)
stop_tracking(state, character_id)
WandererApp.Cache.delete("character:#{character_id}:is_stop_tracking")
else
Logger.debug(fn ->
"[TrackerManager] Character #{character_id} already being stopped, skipping duplicate request"
end)
end
state
@@ -279,7 +425,9 @@ defmodule WandererApp.Character.TrackerManager.Impl do
def track_character(character_id, opts) do
with {:ok, characters} <- WandererApp.Cache.lookup("tracked_characters", []),
false <- Enum.member?(characters, character_id) do
Logger.debug(fn -> "Start character tracker: #{inspect(character_id)}" end)
Logger.debug(fn ->
"[TrackerManager] Starting tracker for character #{character_id}"
end)
WandererApp.Cache.insert_or_update(
"tracked_characters",
@@ -312,7 +460,30 @@ defmodule WandererApp.Character.TrackerManager.Impl do
character_id,
%{opts: opts}
])
# Emit telemetry for tracker start
:telemetry.execute(
[:wanderer_app, :character, :tracker, :started],
%{count: 1, system_time: System.system_time()},
%{character_id: character_id}
)
else
true ->
Logger.debug(fn ->
"[TrackerManager] Character #{character_id} already being tracked"
end)
WandererApp.Cache.insert_or_update(
"track_characters_queue",
[],
fn existing ->
existing
|> Enum.reject(fn c_id -> c_id == character_id end)
end
)
WandererApp.Cache.delete("#{character_id}:track_requested")
_ ->
WandererApp.Cache.insert_or_update(
"track_characters_queue",
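The TrackerManager changes above add `:telemetry.execute/3` calls for tracker lifecycle events (`:started`, `:stopped`, `:garbage_collection`). These events are only useful if something attaches a handler to them; a sketch of such a consumer, with a hypothetical module and handler name not present in this diff, could look like:

```elixir
# Hypothetical telemetry consumer; module name and handler id are illustrative.
defmodule WandererApp.TrackerTelemetryLogger do
  require Logger

  @events [
    [:wanderer_app, :character, :tracker, :started],
    [:wanderer_app, :character, :tracker, :stopped],
    [:wanderer_app, :character, :tracker, :garbage_collection]
  ]

  # Attach one handler function to all tracker events at once.
  def attach do
    :telemetry.attach_many("tracker-logger", @events, &__MODULE__.handle_event/4, nil)
  end

  def handle_event(event, measurements, metadata, _config) do
    Logger.info("#{inspect(event)} #{inspect(measurements)} #{inspect(metadata)}")
  end
end
```

Note that the diff enriches the `:stopped` event with `system_time` and a `reason` in metadata, so a handler like this can distinguish garbage collection from explicit stops.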

View File

@@ -114,8 +114,88 @@ defmodule WandererApp.Character.TrackingUtils do
# Private implementation of update character tracking
defp do_update_character_tracking(character, map_id, track, caller_pid) do
WandererApp.MapCharacterSettingsRepo.get(map_id, character.id)
|> case do
# First check current tracking state to avoid unnecessary permission checks
current_settings = WandererApp.MapCharacterSettingsRepo.get(map_id, character.id)
case {track, current_settings} do
# Already tracked and wants to stay tracked - no permission check needed
{true, {:ok, %{tracked: true} = settings}} ->
do_update_character_tracking_impl(character, map_id, track, caller_pid, {:ok, settings})
# Wants to enable tracking - check permissions first
{true, settings_result} ->
case check_character_tracking_permission(character, map_id) do
{:ok, :allowed} ->
do_update_character_tracking_impl(character, map_id, track, caller_pid, settings_result)
{:error, reason} ->
Logger.warning(
"[CharacterTracking] Character #{character.id} cannot be tracked on map #{map_id}: #{reason}"
)
{:error, reason}
end
# Untracking is always allowed
{false, settings_result} ->
do_update_character_tracking_impl(character, map_id, track, caller_pid, settings_result)
end
end
# Check if a character has permission to be tracked on a map
defp check_character_tracking_permission(character, map_id) do
with {:ok, %{acls: acls, owner_id: owner_id}} <-
WandererApp.MapRepo.get(map_id,
acls: [
:owner_id,
members: [:role, :eve_character_id, :eve_corporation_id, :eve_alliance_id]
]
) do
# Check if character is the map owner
if character.id == owner_id do
{:ok, :allowed}
else
# Check if character belongs to same user as owner (Option 3 check)
case check_same_user_as_owner(character, owner_id) do
true ->
{:ok, :allowed}
false ->
# Check ACL-based permissions
[character_permissions] =
WandererApp.Permissions.check_characters_access([character], acls)
map_permissions = WandererApp.Permissions.get_permissions(character_permissions)
if map_permissions.track_character and map_permissions.view_system do
{:ok, :allowed}
else
{:error,
"Character does not have tracking permission on this map. Please add the character to a map access list or ensure you are the map owner."}
end
end
end
else
{:error, _} ->
{:error, "Failed to verify map permissions"}
end
end
# Check if character belongs to the same user as the map owner
defp check_same_user_as_owner(_character, nil), do: false
defp check_same_user_as_owner(character, owner_id) do
case WandererApp.Character.get_character(owner_id) do
{:ok, owner_character} ->
character.user_id != nil and character.user_id == owner_character.user_id
_ ->
false
end
end
defp do_update_character_tracking_impl(character, map_id, track, caller_pid, settings_result) do
case settings_result do
# Untracking flow
{:ok, %{tracked: true} = existing_settings} ->
if not track do

View File

@@ -9,6 +9,8 @@ defmodule WandererApp.Map.Manager do
   alias WandererApp.Map.Server
 
+  @environment Application.compile_env(:wanderer_app, :environment)
+
   @maps_start_chunk_size 20
   @maps_start_interval 500
   @maps_queue :maps_queue
@@ -19,7 +21,7 @@ defmodule WandererApp.Map.Manager do
   # Test-aware async task runner
   defp safe_async_task(fun) do
-    if Mix.env() == :test do
+    if @environment == :test do
       # In tests, run synchronously to avoid database ownership issues
       try do
         fun.()
@@ -139,7 +141,7 @@ defmodule WandererApp.Map.Manager do
     WandererApp.Queue.clear(@maps_queue)
 
-    if Mix.env() == :test do
+    if @environment == :test do
       # In tests, run synchronously to avoid database ownership issues
       Logger.debug(fn -> "Starting maps synchronously in test mode" end)
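The `Mix.env()` → `@environment` change in this file (and in MapPool below) matters for releases: Mix is a build tool and is not shipped inside a compiled release, so `Mix.env/0` raises at runtime there. `Application.compile_env/3` instead bakes the configured value into the module at compile time. A minimal self-contained sketch of the pattern, with an illustrative module name:

```elixir
# Illustrative module; assumes `config :wanderer_app, environment: ...` is set
# in config/*.exs, as the config diffs in this compare do.
defmodule MyApp.EnvCheck do
  # Resolved once, when this module compiles; safe in a release where
  # Mix (and Mix.env/0) is unavailable at runtime. Falls back to :dev here.
  @environment Application.compile_env(:wanderer_app, :environment, :dev)

  def test?, do: @environment == :test
end
```

The trade-off is that the value is frozen at compile time: changing the config requires recompiling the module, which is why Elixir tracks `compile_env` usages and warns on mismatches at boot.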

View File

@@ -20,16 +20,18 @@ defmodule WandererApp.Map.MapPool do
   @garbage_collection_interval :timer.hours(4)
 
   # Use very long timeouts in test environment to prevent background tasks from running during tests
   # This avoids database connection ownership errors when tests finish before async tasks complete
-  @systems_cleanup_timeout if Mix.env() == :test,
+  @environment Application.compile_env(:wanderer_app, :environment)
+
+  @systems_cleanup_timeout if @environment == :test,
                            do: :timer.hours(24),
                            else: :timer.minutes(30)
-  @characters_cleanup_timeout if Mix.env() == :test,
+  @characters_cleanup_timeout if @environment == :test,
                               do: :timer.hours(24),
                               else: :timer.minutes(5)
-  @connections_cleanup_timeout if Mix.env() == :test,
+  @connections_cleanup_timeout if @environment == :test,
                                do: :timer.hours(24),
                                else: :timer.minutes(5)
-  @backup_state_timeout if Mix.env() == :test,
+  @backup_state_timeout if @environment == :test,
                         do: :timer.hours(24),
                         else: :timer.minutes(1)

View File

@@ -1,5 +1,19 @@
 defmodule WandererApp.Map.Server.CharactersImpl do
-  @moduledoc false
+  @moduledoc """
+  Handles character-related operations for map servers.
+
+  This module manages:
+  - Character tracking on maps
+  - Permission-based character cleanup
+  - Character presence updates
+
+  ## Logging
+
+  This module emits detailed logs for debugging character tracking issues:
+  - INFO: Character track/untrack events, permission cleanup results
+  - WARNING: Permission failures, unexpected states
+  - DEBUG: Detailed permission check results
+  """
 
   require Logger
@@ -15,6 +29,11 @@ defmodule WandererApp.Map.Server.CharactersImpl do
if Enum.empty?(invalidate_character_ids) do
:ok
else
Logger.debug(fn ->
"[CharactersImpl] Running permission cleanup for map #{map_id} - " <>
"checking #{length(invalidate_character_ids)} characters"
end)
{:ok, %{acls: acls}} =
WandererApp.MapRepo.get(map_id,
acls: [
@@ -30,6 +49,11 @@ defmodule WandererApp.Map.Server.CharactersImpl do
def track_characters(_map_id, []), do: :ok
def track_characters(map_id, [character_id | rest]) do
Logger.debug(fn ->
"[CharactersImpl] Starting tracking for character #{character_id} on map #{map_id} - " <>
"reason: character joined presence"
end)
track_character(map_id, character_id)
track_characters(map_id, rest)
end
@@ -41,6 +65,12 @@ defmodule WandererApp.Map.Server.CharactersImpl do
|> WandererApp.Map.get_map!()
|> Map.get(:characters, [])
if length(character_ids) > 0 do
Logger.debug(fn ->
"[CharactersImpl] Scheduling permission check for #{length(character_ids)} characters on map #{map_id}"
end)
end
WandererApp.Cache.insert("map_#{map_id}:invalidate_character_ids", character_ids)
:ok
@@ -48,6 +78,13 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end
def untrack_characters(map_id, character_ids) do
if length(character_ids) > 0 do
Logger.debug(fn ->
"[CharactersImpl] Untracking #{length(character_ids)} characters from map #{map_id} - " <>
"reason: characters no longer in presence_character_ids (grace period expired or user disconnected)"
end)
end
character_ids
|> Enum.each(fn character_id ->
character_map_active = is_character_map_active?(map_id, character_id)
@@ -58,13 +95,32 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end
defp untrack_character(true, map_id, character_id) do
Logger.info(fn ->
"[CharactersImpl] Untracking character #{character_id} from map #{map_id} - " <>
"character was actively tracking this map"
end)
# Emit telemetry for tracking
:telemetry.execute(
[:wanderer_app, :character, :tracking, :stopped],
%{system_time: System.system_time()},
%{character_id: character_id, map_id: map_id, reason: :presence_expired}
)
WandererApp.Character.TrackerManager.update_track_settings(character_id, %{
map_id: map_id,
track: false
})
end
defp untrack_character(_is_character_map_active, _map_id, _character_id), do: :ok
defp untrack_character(false, map_id, character_id) do
Logger.debug(fn ->
"[CharactersImpl] Skipping untrack for character #{character_id} on map #{map_id} - " <>
"character was not actively tracking this map"
end)
:ok
end
defp is_character_map_active?(map_id, character_id) do
case WandererApp.Character.get_character_state(character_id) do
@@ -79,59 +135,134 @@ defmodule WandererApp.Map.Server.CharactersImpl do
defp process_invalidate_characters(invalidate_character_ids, map_id, acls) do
{:ok, %{map: %{owner_id: owner_id}}} = WandererApp.Map.get_map_state(map_id)
invalidate_character_ids
|> Task.async_stream(
fn character_id ->
character_id
|> WandererApp.Character.get_character()
|> case do
{:ok, %{user_id: nil}} ->
{:remove_character, character_id}
# Option 3: Get owner's user_id to allow all characters from the same user
owner_user_id = get_owner_user_id(owner_id)
{:ok, character} ->
[character_permissions] =
WandererApp.Permissions.check_characters_access([character], acls)
map_permissions =
WandererApp.Permissions.get_map_permissions(
character_permissions,
owner_id,
[character_id]
)
case map_permissions do
%{view_system: false} ->
{:remove_character, character_id}
%{track_character: false} ->
{:remove_character, character_id}
_ ->
:ok
end
_ ->
:ok
end
end,
timeout: :timer.seconds(60),
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task
)
|> Enum.reduce([], fn
{:ok, {:remove_character, character_id}}, acc ->
[character_id | acc]
{:ok, _result}, acc ->
acc
{:error, reason}, acc ->
Logger.error("Error in cleanup_characters: #{inspect(reason)}")
acc
Logger.debug(fn ->
"[CharacterCleanup] Map #{map_id} - validating permissions for #{length(invalidate_character_ids)} characters"
end)
|> case do
[] -> :ok
character_ids_to_remove -> remove_and_untrack_characters(map_id, character_ids_to_remove)
results =
invalidate_character_ids
|> Task.async_stream(
fn character_id ->
character_id
|> WandererApp.Character.get_character()
|> case do
{:ok, %{user_id: nil}} ->
{:remove_character, character_id, :no_user_id}
{:ok, character} ->
# Option 3: Check if character belongs to the same user as owner
is_same_user_as_owner =
owner_user_id != nil and character.user_id == owner_user_id
if is_same_user_as_owner do
# All characters from the map owner's account have full access
:ok
else
[character_permissions] =
WandererApp.Permissions.check_characters_access([character], acls)
map_permissions =
WandererApp.Permissions.get_map_permissions(
character_permissions,
owner_id,
[character_id]
)
case map_permissions do
%{view_system: false} ->
{:remove_character, character_id, :no_view_permission}
%{track_character: false} ->
{:remove_character, character_id, :no_track_permission}
_ ->
:ok
end
end
_ ->
:ok
end
end,
timeout: :timer.seconds(60),
max_concurrency: System.schedulers_online() * 4,
on_timeout: :kill_task
)
|> Enum.reduce([], fn
{:ok, {:remove_character, character_id, reason}}, acc ->
[{character_id, reason} | acc]
{:ok, _result}, acc ->
acc
{:error, reason}, acc ->
Logger.error(
"[CharacterCleanup] Error checking character permissions: #{inspect(reason)}"
)
acc
end)
case results do
[] ->
Logger.debug(fn ->
"[CharacterCleanup] Map #{map_id} - all #{length(invalidate_character_ids)} characters passed permission check"
end)
:ok
characters_to_remove ->
# Group by reason for better logging
by_reason = Enum.group_by(characters_to_remove, fn {_id, reason} -> reason end)
Enum.each(by_reason, fn {reason, chars} ->
char_ids = Enum.map(chars, fn {id, _} -> id end)
reason_str = permission_removal_reason_to_string(reason)
Logger.debug(fn ->
"[CharacterCleanup] Map #{map_id} - removing #{length(char_ids)} characters: #{reason_str} - " <>
"character_ids: #{inspect(char_ids)}"
end)
# Emit telemetry for each removal reason
:telemetry.execute(
[:wanderer_app, :character, :tracking, :permission_revoked],
%{count: length(char_ids), system_time: System.system_time()},
%{map_id: map_id, character_ids: char_ids, reason: reason}
)
end)
character_ids_to_remove = Enum.map(characters_to_remove, fn {id, _} -> id end)
Logger.debug(fn ->
"[CharacterCleanup] Map #{map_id} - total #{length(character_ids_to_remove)} characters " <>
"will be removed due to permission issues (NO GRACE PERIOD)"
end)
remove_and_untrack_characters(map_id, character_ids_to_remove)
end
end
defp permission_removal_reason_to_string(:no_user_id),
do: "no user_id associated with character"
defp permission_removal_reason_to_string(:no_view_permission), do: "lost view_system permission"
defp permission_removal_reason_to_string(:no_track_permission),
do: "lost track_character permission"
defp permission_removal_reason_to_string(reason), do: "#{inspect(reason)}"
# Helper to get the owner's user_id for Option 3
defp get_owner_user_id(nil), do: nil
defp get_owner_user_id(owner_id) do
case WandererApp.Character.get_character(owner_id) do
{:ok, %{user_id: user_id}} -> user_id
_ -> nil
end
end
@@ -161,10 +292,18 @@ defmodule WandererApp.Map.Server.CharactersImpl do
end
defp remove_and_untrack_characters(map_id, character_ids) do
Logger.debug(fn ->
"Map #{map_id} - remove and untrack characters #{inspect(character_ids)}"
# Option 4: Enhanced logging for character removal
Logger.info(fn ->
"[CharacterCleanup] Map #{map_id} - starting removal of #{length(character_ids)} characters: #{inspect(character_ids)}"
end)
# Emit telemetry for monitoring
:telemetry.execute(
[:wanderer_app, :map, :characters_cleanup, :removal_started],
%{character_count: length(character_ids), system_time: System.system_time()},
%{map_id: map_id, character_ids: character_ids}
)
map_id
|> untrack_characters(character_ids)
@@ -174,10 +313,21 @@ defmodule WandererApp.Map.Server.CharactersImpl do
{:ok, settings} ->
settings
|> Enum.each(fn s ->
Logger.info(fn ->
"[CharacterCleanup] Map #{map_id} - destroying settings and removing character #{s.character_id}"
end)
WandererApp.MapCharacterSettingsRepo.destroy!(s)
remove_character(map_id, s.character_id)
end)
# Emit telemetry for successful removal
:telemetry.execute(
[:wanderer_app, :map, :characters_cleanup, :removal_complete],
%{removed_count: length(settings), system_time: System.system_time()},
%{map_id: map_id}
)
_ ->
:ok
end
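The cleanup path above emits `:telemetry` events at removal start and completion. A minimal consumer sketch, assuming the standard `:telemetry` library (which the `:telemetry.execute/3` calls imply); the handler module and handler id are hypothetical:

```elixir
defmodule WandererApp.CleanupTelemetryLogger do
  @moduledoc false
  require Logger

  # Attach one handler to both cleanup events emitted above.
  def attach do
    :telemetry.attach_many(
      "character-cleanup-logger",
      [
        [:wanderer_app, :map, :characters_cleanup, :removal_started],
        [:wanderer_app, :map, :characters_cleanup, :removal_complete]
      ],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event(event, measurements, metadata, _config) do
    # measurements carry the counts and system_time; metadata carries map_id.
    Logger.info(
      "telemetry #{inspect(event)}: #{inspect(measurements)} map=#{metadata.map_id}"
    )
  end
end
```

`attach/0` would typically be called once during application start.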


@@ -45,12 +45,6 @@ defmodule WandererApp.Map.Server.Impl do
}
|> new()
# In test mode, give the test setup time to grant database access
# This is necessary for async tests where the sandbox needs to allow this process
if Mix.env() == :test do
Process.sleep(150)
end
# Parallelize database queries for faster initialization
start_time = System.monotonic_time(:millisecond)
@@ -314,56 +308,12 @@ defmodule WandererApp.Map.Server.Impl do
end)
WandererApp.Api.MapState.create(%{
map_id: map_id,
systems_last_activity: systems_last_activity,
connections_eol_time: connections_eol_time,
connections_start_time: connections_start_time
})
end
def handle_event({:update_characters, map_id} = event) do
@@ -712,12 +662,45 @@ defmodule WandererApp.Map.Server.Impl do
not Enum.member?(presence_character_ids, character_id)
end)
# Log presence changes for debugging
if length(new_present_character_ids) > 0 or length(not_present_character_ids) > 0 do
Logger.debug(fn ->
"[MapServer] Map #{map_id} presence update - " <>
"newly_present: #{inspect(new_present_character_ids)}, " <>
"no_longer_present: #{inspect(not_present_character_ids)}, " <>
"total_present: #{length(presence_character_ids)}"
end)
end
WandererApp.Cache.insert(
"map_#{map_id}:old_presence_character_ids",
presence_character_ids
)
# Track new characters
if length(new_present_character_ids) > 0 do
Logger.debug(fn ->
"[MapServer] Map #{map_id} - starting tracking for #{length(new_present_character_ids)} newly present characters"
end)
end
CharactersImpl.track_characters(map_id, new_present_character_ids)
# Untrack characters no longer present (grace period has expired)
if length(not_present_character_ids) > 0 do
Logger.debug(fn ->
"[MapServer] Map #{map_id} - #{length(not_present_character_ids)} characters no longer in presence " <>
"(grace period expired or never had one) - will be untracked"
end)
# Emit telemetry for presence-based untracking
:telemetry.execute(
[:wanderer_app, :map, :presence, :characters_left],
%{count: length(not_present_character_ids), system_time: System.system_time()},
%{map_id: map_id, character_ids: not_present_character_ids}
)
end
CharactersImpl.untrack_characters(map_id, not_present_character_ids)
broadcast!(


@@ -405,11 +405,20 @@ defmodule WandererApp.Map.Server.SystemsImpl do
{:ok, %{eve_id: eve_id, system: system}} = s |> Ash.load([:system])
:ok = Ash.destroy!(s)
# Handle case where parent system was already deleted
case system do
nil ->
Logger.warning(
"[cleanup_linked_signatures] signature #{eve_id} destroyed (parent system already deleted)"
)
%{solar_system_id: solar_system_id} ->
Logger.warning(
"[cleanup_linked_signatures] for system #{solar_system_id}: #{inspect(eve_id)}"
)
Impl.broadcast!(map_id, :signatures_updated, solar_system_id)
end
rescue
e ->
Logger.error("Failed to cleanup linked signature: #{inspect(e)}")


@@ -1,6 +1,8 @@
defmodule WandererApp.MapCharacterSettingsRepo do
use WandererApp, :repository
require Logger
def get(map_id, character_id) do
case WandererApp.Api.MapCharacterSettings.read_by_map_and_character(%{
map_id: map_id,
@@ -54,13 +56,12 @@ defmodule WandererApp.MapCharacterSettingsRepo do
do: WandererApp.Api.MapCharacterSettings.tracked_by_map_all(%{map_id: map_id})
def track(%{map_id: map_id, character_id: character_id}) do
# Only update the tracked field, preserving other fields
# First ensure the record exists (get creates if not exists)
case get(map_id, character_id) do
{:ok, settings} when not is_nil(settings) ->
# Now update the tracked field
settings
|> WandererApp.Api.MapCharacterSettings.update(%{tracked: true})
error ->
Logger.error(
@@ -72,13 +73,12 @@ defmodule WandererApp.MapCharacterSettingsRepo do
end
def untrack(%{map_id: map_id, character_id: character_id}) do
# Only update the tracked field, preserving other fields
# First ensure the record exists (get creates if not exists)
case get(map_id, character_id) do
{:ok, settings} when not is_nil(settings) ->
# Now update the tracked field
settings
|> WandererApp.Api.MapCharacterSettings.update(%{tracked: false})
error ->
Logger.error(
@@ -103,18 +103,36 @@ defmodule WandererApp.MapCharacterSettingsRepo do
end
end
def follow(%{map_id: map_id, character_id: character_id} = _settings) do
# First ensure the record exists (get creates if not exists)
case get(map_id, character_id) do
{:ok, settings} when not is_nil(settings) ->
settings
|> WandererApp.Api.MapCharacterSettings.update(%{followed: true})
error ->
Logger.error(
"Failed to follow character: #{character_id} on map: #{map_id}, #{inspect(error)}"
)
{:error, error}
end
end
def unfollow(%{map_id: map_id, character_id: character_id} = _settings) do
# First ensure the record exists (get creates if not exists)
case get(map_id, character_id) do
{:ok, settings} when not is_nil(settings) ->
settings
|> WandererApp.Api.MapCharacterSettings.update(%{followed: false})
error ->
Logger.error(
"Failed to unfollow character: #{character_id} on map: #{map_id}, #{inspect(error)}"
)
{:error, error}
end
end
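The `track`, `untrack`, `follow`, and `unfollow` functions above all share the same get-then-update shape. A hypothetical consolidation (not part of the diff) could factor it into one private helper:

```elixir
# Hypothetical shared helper for the repeated get-then-update pattern.
# `action_name` is only used for the error log message.
defp update_setting(map_id, character_id, attrs, action_name) do
  case get(map_id, character_id) do
    {:ok, settings} when not is_nil(settings) ->
      WandererApp.Api.MapCharacterSettings.update(settings, attrs)

    error ->
      Logger.error(
        "Failed to #{action_name} character: #{character_id} on map: #{map_id}, #{inspect(error)}"
      )

      {:error, error}
  end
end

# e.g. def follow(%{map_id: m, character_id: c}),
#   do: update_setting(m, c, %{followed: true}, "follow")
```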
def follow!(settings) do


@@ -1,6 +1,8 @@
defmodule WandererApp.TaskWrapper do
@environment Application.compile_env(:wanderer_app, :environment)
def start_link(module, func, args) do
if @environment == :test do
apply(module, func, args)
else
Task.start_link(module, func, args)
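`Mix` is a build-time tool and is not available in production releases, which is why the diff replaces the runtime `Mix.env()` call with `Application.compile_env/2`, reading the environment that was baked into config at compile time. This assumes a config entry along these lines (the `:environment` key comes from the code above; the exact file and placement are assumptions):

```elixir
# config/config.exs (hypothetical placement)
import Config

# Bakes the build environment into :wanderer_app's config so that
# Application.compile_env(:wanderer_app, :environment) can read it.
config :wanderer_app, environment: config_env()
```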


@@ -17,6 +17,10 @@ defmodule WandererAppWeb.MapPingsEventHandler do
{:ok, pings} = WandererApp.MapPingsRepo.get_by_map(map_id)
pings
|> Enum.filter(fn ping ->
# Skip pings where system or character associations are nil (deleted)
not is_nil(ping.system) and not is_nil(ping.character)
end)
|> Enum.reduce(socket, fn %{
id: id,
type: type,


@@ -60,7 +60,10 @@ defmodule WandererAppWeb.MapRoutesEventHandler do
ping_system_ids =
pings
|> Enum.flat_map(fn
%{system: %{solar_system_id: solar_system_id}} -> ["#{solar_system_id}"]
_ -> []
end)
route_hubs = (ping_system_ids ++ hubs) |> Enum.uniq()


@@ -165,12 +165,12 @@
field={f[:only_tracked_characters]}
label="Allow only tracked characters"
/>
<.input type="checkbox" field={f[:sse_enabled]} label="Enable Server-Sent Events (SSE)" />
<.input
:if={@live_action == :create}
type="checkbox"
field={f[:create_default_acl]}
label="Create default access list"
checked={Phoenix.HTML.Form.normalize_value("checkbox", f[:create_default_acl].value) == true or is_nil(f[:create_default_acl].value)}
/>
<.live_select
field={f[:acls]}


@@ -3,18 +3,32 @@ defmodule WandererAppWeb.PresenceGracePeriodManager do
Manages grace period for character presence tracking.
This module prevents rapid start/stop cycles of character tracking
by introducing a 30-minute grace period before stopping tracking
for characters that leave presence.
## Architecture
When a character's presence leaves (e.g., browser close, network disconnect):
1. Character is scheduled for removal after grace period (30 min)
2. Character remains in `presence_character_ids` during grace period
3. If character rejoins during grace period, removal is cancelled
4. After grace period expires, character is atomically removed from cache
## Logging
This module emits detailed logs for debugging character tracking issues:
- INFO: Grace period expire events (actual character removal)
- WARNING: Unexpected states or potential issues
- DEBUG: Grace period start/cancel, presence changes, state changes
"""
use GenServer
require Logger
# 30 minutes
@grace_period_ms :timer.minutes(30)
defstruct pending_removals: %{}, timers: %{}
def start_link(opts \\ []) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
@@ -30,16 +44,105 @@ defmodule WandererAppWeb.PresenceGracePeriodManager do
GenServer.cast(__MODULE__, {:process_presence_change, map_id, presence_data})
end
@doc """
Get current grace period state for debugging purposes.
"""
def get_state do
GenServer.call(__MODULE__, :get_state)
end
@doc """
Reset state for testing purposes.
Cancels all pending timers and clears all state.
"""
def reset_state do
GenServer.call(__MODULE__, :reset_state)
end
@doc """
Clear state for a specific map. Used for cleanup.
Cancels any pending timers for characters on this map.
"""
def clear_map_state(map_id) do
GenServer.call(__MODULE__, {:clear_map_state, map_id})
end
@doc """
Synchronous version of process_presence_change for testing.
Returns :ok when processing is complete.
"""
def process_presence_change_sync(map_id, presence_data) do
GenServer.call(__MODULE__, {:process_presence_change_sync, map_id, presence_data})
end
@impl true
def init(_opts) do
Logger.debug("[PresenceGracePeriod] Manager started")
{:ok, %__MODULE__{}}
end
@impl true
def handle_call(:get_state, _from, state) do
{:reply, state, state}
end
@impl true
def handle_call(:reset_state, _from, state) do
# Cancel all pending timers
Enum.each(state.timers, fn {_key, timer_ref} ->
Process.cancel_timer(timer_ref)
end)
Logger.debug("[PresenceGracePeriod] State reset - cancelled #{map_size(state.timers)} timers")
{:reply, :ok, %__MODULE__{}}
end
@impl true
def handle_call({:clear_map_state, map_id}, _from, state) do
# Find and cancel all timers for this map
{timers_to_cancel, remaining_timers} =
Enum.split_with(state.timers, fn {{m_id, _char_id}, _ref} -> m_id == map_id end)
# Cancel the timers
Enum.each(timers_to_cancel, fn {_key, timer_ref} ->
Process.cancel_timer(timer_ref)
end)
# Filter pending_removals for this map
remaining_pending =
Enum.reject(state.pending_removals, fn {{m_id, _char_id}, _} -> m_id == map_id end)
|> Map.new()
if length(timers_to_cancel) > 0 do
Logger.debug("[PresenceGracePeriod] Cleared state for map #{map_id} - cancelled #{length(timers_to_cancel)} timers")
end
new_state = %{
state
| timers: Map.new(remaining_timers),
pending_removals: remaining_pending
}
{:reply, :ok, new_state}
end
@impl true
def handle_call({:process_presence_change_sync, map_id, presence_data}, _from, state) do
# Same logic as the cast version, but synchronous
new_state = do_process_presence_change(state, map_id, presence_data)
{:reply, :ok, new_state}
end
@impl true
def handle_cast({:process_presence_change, map_id, presence_data}, state) do
new_state = do_process_presence_change(state, map_id, presence_data)
{:noreply, new_state}
end
# Shared logic for presence change processing
defp do_process_presence_change(state, map_id, presence_data) do
# Extract currently tracked character IDs from presence data
current_tracked_character_ids =
presence_data
@@ -58,48 +161,83 @@ defmodule WandererAppWeb.PresenceGracePeriodManager do
# Characters that just left (in previous, but not in current)
newly_left = MapSet.difference(previous_set, current_set)
# Log presence changes for debugging
if MapSet.size(newly_joined) > 0 or MapSet.size(newly_left) > 0 do
Logger.debug(fn ->
"[PresenceGracePeriod] Map #{map_id} presence change - " <>
"joined: #{inspect(MapSet.to_list(newly_joined))}, " <>
"left: #{inspect(MapSet.to_list(newly_left))}"
end)
end
# Cancel any pending removals for ALL currently present tracked characters
# This handles the case where a character rejoins during grace period
# (they're still in cache, so they won't be in "newly_joined")
state =
state
|> cancel_pending_removals(map_id, current_set)
|> schedule_removals(map_id, newly_left)
# Calculate the final character IDs (current + characters in grace period)
# This includes both pending_removals (timer not yet fired)
characters_in_grace_period = get_characters_in_grace_period(state, map_id)
final_character_ids =
MapSet.union(current_set, characters_in_grace_period) |> MapSet.to_list()
# Update cache with final character IDs (includes grace period logic)
WandererApp.Cache.insert("map_#{map_id}:presence_character_ids", final_character_ids)
WandererApp.Cache.insert("map_#{map_id}:presence_data", presence_data)
WandererApp.Cache.insert("map_#{map_id}:presence_updated", true)
Logger.debug(fn ->
"[PresenceGracePeriod] Map #{map_id} cache updated - " <>
"current: #{length(current_tracked_character_ids)}, " <>
"in_grace_period: #{MapSet.size(characters_in_grace_period)}, " <>
"final: #{length(final_character_ids)}"
end)
state
end
@impl true
def handle_info({:grace_period_expired, map_id, character_id}, state) do
# Check if this removal is still valid (wasn't cancelled)
case get_timer_ref(state, map_id, character_id) do
nil ->
# Timer was cancelled (character rejoined), ignore
Logger.debug(fn ->
"[PresenceGracePeriod] Grace period expired for character #{character_id} on map #{map_id} " <>
"but timer was already cancelled (character likely rejoined)"
end)
{:noreply, state}
_timer_ref ->
# Grace period expired and is still valid - perform atomic removal
Logger.info(fn ->
"[PresenceGracePeriod] Grace period expired for character #{character_id} on map #{map_id} - " <>
"removing from tracking after #{div(@grace_period_ms, 60_000)} minutes of inactivity"
end)
# Remove from pending removals state
state = remove_pending_removal(state, map_id, character_id)
# Atomically remove from cache (Fix #2 - no batching)
remove_character_from_cache(map_id, character_id)
# Emit telemetry for monitoring
:telemetry.execute(
[:wanderer_app, :presence, :grace_period_expired],
%{duration_ms: @grace_period_ms, system_time: System.system_time()},
%{map_id: map_id, character_id: character_id, reason: :grace_period_timeout}
)
{:noreply, state}
end
end
# Cancel pending removals for characters that have rejoined
defp cancel_pending_removals(state, map_id, character_ids) do
Enum.reduce(character_ids, state, fn character_id, acc_state ->
case get_timer_ref(acc_state, map_id, character_id) do
@@ -107,23 +245,42 @@ defmodule WandererAppWeb.PresenceGracePeriodManager do
acc_state
timer_ref ->
# Character rejoined during grace period - cancel removal
time_remaining = Process.cancel_timer(timer_ref)
Logger.debug(fn ->
time_remaining_str =
if is_integer(time_remaining) do
"#{div(time_remaining, 60_000)} minutes remaining"
else
"timer already fired"
end
"[PresenceGracePeriod] Cancelled grace period for character #{character_id} on map #{map_id} - " <>
"character rejoined (#{time_remaining_str})"
end)
# Emit telemetry for cancelled grace period
:telemetry.execute(
[:wanderer_app, :presence, :grace_period_cancelled],
%{system_time: System.system_time()},
%{map_id: map_id, character_id: character_id, reason: :character_rejoined}
)
remove_pending_removal(acc_state, map_id, character_id)
end
end)
end
# Schedule removals for characters that have left presence
defp schedule_removals(state, map_id, character_ids) do
Enum.reduce(character_ids, state, fn character_id, acc_state ->
# Only schedule if not already pending
case get_timer_ref(acc_state, map_id, character_id) do
nil ->
Logger.debug(fn ->
"[PresenceGracePeriod] Starting #{div(@grace_period_ms, 60_000)}-minute grace period " <>
"for character #{character_id} on map #{map_id} - character left presence"
end)
timer_ref =
@@ -133,9 +290,21 @@ defmodule WandererAppWeb.PresenceGracePeriodManager do
@grace_period_ms
)
# Emit telemetry for grace period start
:telemetry.execute(
[:wanderer_app, :presence, :grace_period_started],
%{grace_period_ms: @grace_period_ms, system_time: System.system_time()},
%{map_id: map_id, character_id: character_id, reason: :presence_left}
)
add_pending_removal(acc_state, map_id, character_id, timer_ref)
_existing_timer ->
# Already has a pending removal scheduled
Logger.debug(fn ->
"[PresenceGracePeriod] Character #{character_id} on map #{map_id} already has pending removal"
end)
acc_state
end
end)
@@ -172,58 +341,52 @@ defmodule WandererAppWeb.PresenceGracePeriodManager do
end
end
# Fix #1: Include all characters in grace period (both pending and awaiting removal)
# This prevents race conditions where a character could be removed early
defp get_characters_in_grace_period(state, map_id) do
state.pending_removals
|> Enum.filter(fn {{pending_map_id, _character_id}, _} -> pending_map_id == map_id end)
|> Enum.map(fn {{_map_id, character_id}, _} -> character_id end)
|> MapSet.new()
end
# Fix #2: Atomic removal from cache when grace period expires
# This removes the character immediately instead of batching
defp remove_character_from_cache(map_id, character_id_to_remove) do
# Get current presence_character_ids and remove the character
current_character_ids =
case WandererApp.Cache.get("map_#{map_id}:presence_character_ids") do
nil -> []
ids -> ids
end
updated_character_ids =
Enum.reject(current_character_ids, fn id -> id == character_id_to_remove end)
# Also update presence_data if it exists
case WandererApp.Cache.get("map_#{map_id}:presence_data") do
nil ->
# No presence data, just update character IDs
:ok
presence_data ->
updated_presence_data =
presence_data
|> Enum.filter(fn %{character_id: character_id} ->
character_id != character_id_to_remove
end)
WandererApp.Cache.insert("map_#{map_id}:presence_data", updated_presence_data)
end
WandererApp.Cache.insert("map_#{map_id}:presence_character_ids", updated_character_ids)
WandererApp.Cache.insert("map_#{map_id}:presence_updated", true)
Logger.debug(fn ->
"[PresenceGracePeriod] Removed character #{character_id_to_remove} from map #{map_id} cache - " <>
"remaining tracked characters: #{length(updated_character_ids)}"
end)
:ok
end
end
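The cancel/expire logic above hinges on `Process.send_after/3` and `Process.cancel_timer/1` semantics: `cancel_timer/1` returns the remaining time in milliseconds, or `false` if the timer already fired (in which case its message may already be in the mailbox). A standalone illustration of the pattern used by `schedule_removals` and `cancel_pending_removals`:

```elixir
# Schedule a removal message the way schedule_removals does, then
# cancel it the way cancel_pending_removals does.
ref =
  Process.send_after(self(), {:grace_period_expired, "map-1", "char-1"}, :timer.minutes(30))

case Process.cancel_timer(ref) do
  ms when is_integer(ms) ->
    IO.puts("cancelled with ~#{div(ms, 60_000)} minutes remaining")

  false ->
    # Timer already fired; its message may be sitting in the mailbox,
    # which is why handle_info re-checks the timer ref before removing.
    IO.puts("timer already fired")
end
```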


@@ -3,7 +3,7 @@ defmodule WandererApp.MixProject do
@source_url "https://github.com/wanderer-industries/wanderer"
@version "1.88.9"
def project do
[


@@ -81,8 +81,9 @@ defmodule WandererApp.Test.IntegrationConfig do
:ok
end
# Wait for MapPoolDynamicSupervisor to be ready using efficient polling
# instead of a fixed 100ms sleep
wait_for_process(WandererApp.Map.MapPoolDynamicSupervisor, 2000)
# Start Map.Manager AFTER MapPoolSupervisor
case GenServer.whereis(WandererApp.Map.Manager) do
@@ -96,6 +97,27 @@ defmodule WandererApp.Test.IntegrationConfig do
:ok
end
# Efficiently wait for a process to be registered
defp wait_for_process(name, timeout) do
deadline = System.monotonic_time(:millisecond) + timeout
do_wait_for_process(name, deadline)
end
defp do_wait_for_process(name, deadline) do
case Process.whereis(name) do
pid when is_pid(pid) ->
:ok
nil ->
if System.monotonic_time(:millisecond) < deadline do
Process.sleep(5)
do_wait_for_process(name, deadline)
else
:ok
end
end
end
@doc """
Cleans up integration test environment.


@@ -76,7 +76,7 @@ defmodule WandererApp.MapTestHelpers do
raise "Map #{map_id} failed to stop within #{timeout}ms"
end
Process.sleep(10)
:continue
end
end)
@@ -86,23 +86,16 @@ defmodule WandererApp.MapTestHelpers do
@doc """
Continuously grants database access to all MapPool processes and their children.
This is necessary when maps are started dynamically during tests.
Uses efficient polling with minimal delays.
"""
defp grant_database_access_continuously do
owner_pid = Process.get(:sandbox_owner_pid) || self()
# Grant access with minimal delays - 5 quick passes to catch spawned processes
# Total time: ~25ms instead of 170ms
Enum.each(1..5, fn _ ->
grant_database_access_to_map_pools(owner_pid)
Process.sleep(5)
end)
end
@@ -164,19 +157,10 @@ defmodule WandererApp.MapTestHelpers do
map_started_flag and in_started_maps_list ->
{:ok, :started}
# Map is partially started or not started yet - keep waiting
true ->
if System.monotonic_time(:millisecond) < deadline do
Process.sleep(20)
:continue
else
{:error, :timeout}
@@ -186,8 +170,8 @@ defmodule WandererApp.MapTestHelpers do
|> Enum.find(fn result -> result != :continue end)
|> case do
{:ok, :started} ->
# Brief pause for subsystem initialization (reduced from 200ms)
Process.sleep(50)
:ok
{:error, :timeout} ->
@@ -462,7 +446,7 @@ defmodule WandererApp.MapTestHelpers do
{:ok, true}
else
if System.monotonic_time(:millisecond) < deadline do
Process.sleep(10)
:continue
else
{:error, :timeout}


@@ -174,6 +174,39 @@ defmodule WandererApp.TestHelpers do
"Expected log to contain '#{expected_message}', but got: #{log_output}"
end
@doc """
Waits for a condition to become true, with configurable timeout and interval.
More efficient than fixed sleeps - uses small polling intervals.
## Options
* `:timeout` - Maximum time to wait in milliseconds (default: 5000)
* `:interval` - Polling interval in milliseconds (default: 10)
## Examples
wait_until(fn -> Process.whereis(:my_server) != nil end)
wait_until(fn -> cache_has_value?() end, timeout: 2000, interval: 5)
"""
def wait_until(condition_fn, opts \\ []) do
timeout = Keyword.get(opts, :timeout, 5000)
interval = Keyword.get(opts, :interval, 10)
deadline = System.monotonic_time(:millisecond) + timeout
do_wait_until(condition_fn, deadline, interval)
end
defp do_wait_until(condition_fn, deadline, interval) do
if condition_fn.() do
:ok
else
if System.monotonic_time(:millisecond) < deadline do
Process.sleep(interval)
do_wait_until(condition_fn, deadline, interval)
else
{:error, :timeout}
end
end
end
@doc """
Ensures a map server is started for testing.
This function has been simplified to use the standard map startup flow.
@@ -183,8 +216,13 @@ defmodule WandererApp.TestHelpers do
# Use the standard map startup flow through Map.Manager
:ok = WandererApp.Map.Manager.start_map(map_id)
# Wait for the map to be in started_maps cache with efficient polling
wait_until(fn ->
case WandererApp.Cache.lookup("map_#{map_id}:started") do
{:ok, true} -> true
_ -> false
end
end, timeout: 5000, interval: 20)
:ok
end


@@ -1,5 +1,6 @@
defmodule WandererApp.Api.ActorHelpersTest do
# Pure unit tests - no database or external dependencies
use ExUnit.Case, async: true
alias WandererApp.Api.ActorHelpers
alias WandererApp.Api.ActorWithMap


@@ -1,5 +1,6 @@
defmodule WandererApp.Api.ActorWithMapTest do
# Pure unit tests - no database or external dependencies
use ExUnit.Case, async: true
alias WandererApp.Api.ActorWithMap


@@ -1,5 +1,6 @@
defmodule WandererApp.Api.Changes.InjectMapFromActorTest do
# Tests Ash changeset logic but doesn't need database
use ExUnit.Case, async: true
alias WandererApp.Api.ActorWithMap


@@ -144,8 +144,8 @@ defmodule WandererApp.Map.MapPoolTest do
# Trigger reconciliation
send(Reconciler, :reconcile)
# Give it time to process (reduced from 200ms)
Process.sleep(50)
# Verify zombie was cleaned up
{:ok, started_maps_after} = WandererApp.Cache.lookup("started_maps", [])
@@ -171,7 +171,7 @@ defmodule WandererApp.Map.MapPoolTest do
# Trigger reconciliation
send(Reconciler, :reconcile)
Process.sleep(50)
# Verify all caches cleaned
{:ok, started_maps} = WandererApp.Cache.lookup("started_maps", [])
@@ -217,7 +217,7 @@ defmodule WandererApp.Map.MapPoolTest do
# The reconciler would detect this if the map was in a registry
# For now, we just verify the logic doesn't crash
send(Reconciler, :reconcile)
Process.sleep(50)
# No assertions needed - just verifying no crashes
end
@@ -236,7 +236,7 @@ defmodule WandererApp.Map.MapPoolTest do
# Trigger reconciliation
send(Reconciler, :reconcile)
Process.sleep(50)
# Cache entry should be removed since pool doesn't exist
{:ok, cache_entry} = Cachex.get(@cache, map_id)
@@ -264,7 +264,7 @@ defmodule WandererApp.Map.MapPoolTest do
# Trigger reconciliation
send(Reconciler, :reconcile)
Process.sleep(50)
# Should receive telemetry event
assert_receive {:telemetry, measurements}, 500
@@ -303,7 +303,7 @@ defmodule WandererApp.Map.MapPoolTest do
# Trigger manual reconciliation
Reconciler.trigger_reconciliation()
Process.sleep(50)
# Verify zombie was cleaned up
{:ok, started_maps_after} = WandererApp.Cache.lookup("started_maps", [])
@@ -335,7 +335,7 @@ defmodule WandererApp.Map.MapPoolTest do
# Should not crash even with empty data
send(Reconciler, :reconcile)
Process.sleep(50)
# No assertions - just verifying no crash
assert true
@@ -353,7 +353,7 @@ defmodule WandererApp.Map.MapPoolTest do
# Should handle gracefully
send(Reconciler, :reconcile)
Process.sleep(50)
assert true
else


@@ -0,0 +1,685 @@
defmodule WandererAppWeb.PresenceGracePeriodManagerTest do
@moduledoc """
Comprehensive tests for PresenceGracePeriodManager.
Tests cover:
- Grace period scheduling when characters leave presence
- Grace period cancellation when characters rejoin
- Atomic cache removal after grace period expires
- Multiple characters and maps scenarios
- Edge cases and error handling
"""
use ExUnit.Case, async: false
alias WandererAppWeb.PresenceGracePeriodManager
setup do
# Generate unique map and character IDs for each test
map_id = "test_map_#{:rand.uniform(1_000_000)}"
character_id = "test_char_#{:rand.uniform(1_000_000)}"
character_id_2 = "test_char_2_#{:rand.uniform(1_000_000)}"
# Clean up GenServer state for this specific map
PresenceGracePeriodManager.clear_map_state(map_id)
# Clean up any existing cache data for this test
cleanup_cache(map_id)
on_exit(fn ->
PresenceGracePeriodManager.clear_map_state(map_id)
cleanup_cache(map_id)
end)
{:ok,
map_id: map_id,
character_id: character_id,
character_id_2: character_id_2}
end
defp cleanup_cache(map_id) do
WandererApp.Cache.delete("map_#{map_id}:presence_character_ids")
WandererApp.Cache.delete("map_#{map_id}:presence_data")
WandererApp.Cache.delete("map_#{map_id}:presence_updated")
end
defp build_presence_data(characters) do
Enum.map(characters, fn {character_id, tracked} ->
%{
character_id: character_id,
tracked: tracked,
from: DateTime.utc_now()
}
end)
end
defp get_presence_character_ids(map_id) do
case WandererApp.Cache.get("map_#{map_id}:presence_character_ids") do
nil -> []
ids -> ids
end
end
defp get_presence_data(map_id) do
WandererApp.Cache.get("map_#{map_id}:presence_data")
end
defp get_presence_updated(map_id) do
WandererApp.Cache.get("map_#{map_id}:presence_updated") || false
end
describe "initialization" do
test "manager starts successfully" do
# The manager should already be running as part of the application
assert Process.whereis(PresenceGracePeriodManager) != nil
end
test "get_state returns valid state structure" do
state = PresenceGracePeriodManager.get_state()
assert %PresenceGracePeriodManager{} = state
assert is_map(state.pending_removals)
assert is_map(state.timers)
end
test "reset_state clears all state" do
# Reset the state, then verify everything is cleared
PresenceGracePeriodManager.reset_state()
state = PresenceGracePeriodManager.get_state()
assert state.pending_removals == %{}
assert state.timers == %{}
end
end
describe "process_presence_change - character joins" do
test "first character joins - updates cache with character ID", %{
map_id: map_id,
character_id: character_id
} do
presence_data = build_presence_data([{character_id, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
assert get_presence_character_ids(map_id) == [character_id]
assert get_presence_data(map_id) == presence_data
assert get_presence_updated(map_id) == true
end
test "multiple characters join - all are in cache", %{
map_id: map_id,
character_id: character_id,
character_id_2: character_id_2
} do
presence_data = build_presence_data([{character_id, true}, {character_id_2, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
cached_ids = get_presence_character_ids(map_id)
assert Enum.sort(cached_ids) == Enum.sort([character_id, character_id_2])
end
test "untracked character is not included in presence_character_ids", %{
map_id: map_id,
character_id: character_id,
character_id_2: character_id_2
} do
presence_data = build_presence_data([{character_id, true}, {character_id_2, false}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# Only tracked character should be in presence_character_ids
assert get_presence_character_ids(map_id) == [character_id]
# But both should be in presence_data
assert length(get_presence_data(map_id)) == 2
end
end
describe "process_presence_change - character leaves (grace period)" do
test "character leaving starts grace period - still in cache", %{
map_id: map_id,
character_id: character_id
} do
# First, character joins
presence_data = build_presence_data([{character_id, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
assert get_presence_character_ids(map_id) == [character_id]
# Character leaves (empty presence)
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
# Character should still be in cache (grace period active)
assert get_presence_character_ids(map_id) == [character_id]
# State should have pending removal
state = PresenceGracePeriodManager.get_state()
assert Map.has_key?(state.pending_removals, {map_id, character_id})
assert Map.has_key?(state.timers, {map_id, character_id})
end
test "multiple characters leave - all have grace periods", %{
map_id: map_id,
character_id: character_id,
character_id_2: character_id_2
} do
# Both characters join
presence_data = build_presence_data([{character_id, true}, {character_id_2, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# Both leave
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
# Both should still be in cache
cached_ids = get_presence_character_ids(map_id)
assert Enum.sort(cached_ids) == Enum.sort([character_id, character_id_2])
# Both should have pending removals
state = PresenceGracePeriodManager.get_state()
assert Map.has_key?(state.pending_removals, {map_id, character_id})
assert Map.has_key?(state.pending_removals, {map_id, character_id_2})
end
test "one character leaves, one stays - only leaving character has grace period", %{
map_id: map_id,
character_id: character_id,
character_id_2: character_id_2
} do
# Both characters join
presence_data = build_presence_data([{character_id, true}, {character_id_2, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# Only character_id leaves
presence_data_after = build_presence_data([{character_id_2, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data_after)
# Both should be in cache (one current, one in grace period)
cached_ids = get_presence_character_ids(map_id)
assert Enum.sort(cached_ids) == Enum.sort([character_id, character_id_2])
# Only character_id should have pending removal
state = PresenceGracePeriodManager.get_state()
assert Map.has_key?(state.pending_removals, {map_id, character_id})
refute Map.has_key?(state.pending_removals, {map_id, character_id_2})
end
end
describe "process_presence_change - character rejoins (cancels grace period)" do
test "character rejoins during grace period - removal cancelled", %{
map_id: map_id,
character_id: character_id
} do
# Character joins
presence_data = build_presence_data([{character_id, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# Character leaves (starts grace period)
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
# Verify grace period started
state_before = PresenceGracePeriodManager.get_state()
assert Map.has_key?(state_before.pending_removals, {map_id, character_id})
# Character rejoins
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# Grace period should be cancelled
state_after = PresenceGracePeriodManager.get_state()
refute Map.has_key?(state_after.pending_removals, {map_id, character_id})
refute Map.has_key?(state_after.timers, {map_id, character_id})
# Character should still be in cache
assert get_presence_character_ids(map_id) == [character_id]
end
test "character leaves and rejoins multiple times - only one grace period at a time", %{
map_id: map_id,
character_id: character_id
} do
presence_data = build_presence_data([{character_id, true}])
# Cycle 1: join -> leave -> rejoin
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# Cycle 2: leave -> rejoin
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# Should have no pending removals
state = PresenceGracePeriodManager.get_state()
refute Map.has_key?(state.pending_removals, {map_id, character_id})
# Character should be in cache
assert get_presence_character_ids(map_id) == [character_id]
end
end
describe "grace_period_expired - atomic removal" do
test "directly sending grace_period_expired removes character from cache", %{
map_id: map_id,
character_id: character_id
} do
# Setup: character joins then leaves
presence_data = build_presence_data([{character_id, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
# Verify grace period started
state = PresenceGracePeriodManager.get_state()
assert Map.has_key?(state.timers, {map_id, character_id})
# Simulate grace period expiration by sending the message directly
send(Process.whereis(PresenceGracePeriodManager), {:grace_period_expired, map_id, character_id})
# Small wait for the message to be processed
:timer.sleep(20)
# Character should be removed from cache
assert get_presence_character_ids(map_id) == []
# Pending removal should be cleared
state_after = PresenceGracePeriodManager.get_state()
refute Map.has_key?(state_after.pending_removals, {map_id, character_id})
refute Map.has_key?(state_after.timers, {map_id, character_id})
end
test "grace_period_expired for already cancelled timer is ignored", %{
map_id: map_id,
character_id: character_id
} do
# Setup: character joins, leaves, then rejoins
presence_data = build_presence_data([{character_id, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# Timer was cancelled, but let's simulate the message arriving anyway
send(Process.whereis(PresenceGracePeriodManager), {:grace_period_expired, map_id, character_id})
:timer.sleep(20)
# Character should still be in cache (message was ignored)
assert get_presence_character_ids(map_id) == [character_id]
end
test "grace_period_expired with no presence_data in cache handles gracefully", %{
map_id: map_id,
character_id: character_id
} do
# Simulate a race where the map was stopped after a grace period started:
# go through the normal join/leave flow, then clear the cache before the
# timer fires.
presence_data = build_presence_data([{character_id, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
# Clear the cache to simulate map being stopped
cleanup_cache(map_id)
# Send expired message
send(Process.whereis(PresenceGracePeriodManager), {:grace_period_expired, map_id, character_id})
:timer.sleep(20)
# Should handle gracefully without crashing
state = PresenceGracePeriodManager.get_state()
refute Map.has_key?(state.pending_removals, {map_id, character_id})
end
test "removes only the specified character, keeps others", %{
map_id: map_id,
character_id: character_id,
character_id_2: character_id_2
} do
# Both characters join then leave
presence_data = build_presence_data([{character_id, true}, {character_id_2, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
# Both in grace period
cached_before = get_presence_character_ids(map_id)
assert length(cached_before) == 2
# Only expire character_id
send(Process.whereis(PresenceGracePeriodManager), {:grace_period_expired, map_id, character_id})
:timer.sleep(20)
# Only character_id_2 should remain
assert get_presence_character_ids(map_id) == [character_id_2]
# character_id_2 should still have pending removal
state = PresenceGracePeriodManager.get_state()
refute Map.has_key?(state.pending_removals, {map_id, character_id})
assert Map.has_key?(state.pending_removals, {map_id, character_id_2})
end
end
describe "multiple maps scenarios" do
test "same character on different maps - independent grace periods", %{
character_id: character_id
} do
map_id_1 = "test_map_multi_1_#{:rand.uniform(1_000_000)}"
map_id_2 = "test_map_multi_2_#{:rand.uniform(1_000_000)}"
on_exit(fn ->
PresenceGracePeriodManager.clear_map_state(map_id_1)
PresenceGracePeriodManager.clear_map_state(map_id_2)
cleanup_cache(map_id_1)
cleanup_cache(map_id_2)
end)
presence_data = build_presence_data([{character_id, true}])
# Character joins both maps
PresenceGracePeriodManager.process_presence_change_sync(map_id_1, presence_data)
PresenceGracePeriodManager.process_presence_change_sync(map_id_2, presence_data)
# Character leaves map_id_1 only
PresenceGracePeriodManager.process_presence_change_sync(map_id_1, [])
# map_id_1 should have grace period, map_id_2 should not
state = PresenceGracePeriodManager.get_state()
assert Map.has_key?(state.pending_removals, {map_id_1, character_id})
refute Map.has_key?(state.pending_removals, {map_id_2, character_id})
# Character should be in cache for both maps
assert get_presence_character_ids(map_id_1) == [character_id]
assert get_presence_character_ids(map_id_2) == [character_id]
# Expire grace period for map_id_1
send(Process.whereis(PresenceGracePeriodManager), {:grace_period_expired, map_id_1, character_id})
:timer.sleep(20)
# map_id_1 should be empty, map_id_2 should still have character
assert get_presence_character_ids(map_id_1) == []
assert get_presence_character_ids(map_id_2) == [character_id]
end
test "grace period on one map doesn't affect other maps", %{
character_id: character_id,
character_id_2: character_id_2
} do
map_id_1 = "test_map_iso_1_#{:rand.uniform(1_000_000)}"
map_id_2 = "test_map_iso_2_#{:rand.uniform(1_000_000)}"
on_exit(fn ->
PresenceGracePeriodManager.clear_map_state(map_id_1)
PresenceGracePeriodManager.clear_map_state(map_id_2)
cleanup_cache(map_id_1)
cleanup_cache(map_id_2)
end)
# Different characters on different maps
presence_data_1 = build_presence_data([{character_id, true}])
presence_data_2 = build_presence_data([{character_id_2, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id_1, presence_data_1)
PresenceGracePeriodManager.process_presence_change_sync(map_id_2, presence_data_2)
# Character leaves map_id_1
PresenceGracePeriodManager.process_presence_change_sync(map_id_1, [])
# map_id_2 should be completely unaffected
assert get_presence_character_ids(map_id_2) == [character_id_2]
state = PresenceGracePeriodManager.get_state()
assert Map.has_key?(state.pending_removals, {map_id_1, character_id})
refute Map.has_key?(state.pending_removals, {map_id_2, character_id_2})
end
end
describe "edge cases" do
test "empty presence data on fresh map", %{map_id: map_id} do
# Process empty presence for a map that never had data
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
# Should not crash, cache should be empty
assert get_presence_character_ids(map_id) == []
end
test "presence data with all untracked characters", %{
map_id: map_id,
character_id: character_id,
character_id_2: character_id_2
} do
presence_data = build_presence_data([{character_id, false}, {character_id_2, false}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# No tracked characters, so presence_character_ids should be empty
assert get_presence_character_ids(map_id) == []
# But presence_data should have both characters
assert length(get_presence_data(map_id)) == 2
end
test "rapid presence changes don't cause issues", %{
map_id: map_id,
character_id: character_id
} do
presence_data = build_presence_data([{character_id, true}])
# Rapid-fire presence changes (synchronous)
for _ <- 1..20 do
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
end
# Final state: character present
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# The final rejoin should have cancelled the grace period, leaving no pending removals
state = PresenceGracePeriodManager.get_state()
refute Map.has_key?(state.pending_removals, {map_id, character_id})
assert get_presence_character_ids(map_id) == [character_id]
end
test "character switching from tracked to untracked", %{
map_id: map_id,
character_id: character_id
} do
# Character joins as tracked
presence_data_tracked = build_presence_data([{character_id, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data_tracked)
assert get_presence_character_ids(map_id) == [character_id]
# Character becomes untracked (still present, but not tracking)
presence_data_untracked = build_presence_data([{character_id, false}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data_untracked)
# Character was tracked before, now untracked - should start grace period
state = PresenceGracePeriodManager.get_state()
assert Map.has_key?(state.pending_removals, {map_id, character_id})
# Character should still be in cache (grace period)
assert get_presence_character_ids(map_id) == [character_id]
end
test "character switching from untracked to tracked", %{
map_id: map_id,
character_id: character_id
} do
# Character joins as untracked
presence_data_untracked = build_presence_data([{character_id, false}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data_untracked)
assert get_presence_character_ids(map_id) == []
# Character becomes tracked
presence_data_tracked = build_presence_data([{character_id, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data_tracked)
# Character should now be in tracked list
assert get_presence_character_ids(map_id) == [character_id]
end
test "duplicate character IDs in presence data are handled", %{
map_id: map_id,
character_id: character_id
} do
# Presence data with duplicate entries (shouldn't happen but let's be safe)
presence_data = [
%{character_id: character_id, tracked: true, from: DateTime.utc_now()},
%{character_id: character_id, tracked: true, from: DateTime.utc_now()}
]
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# The current implementation does not guarantee deduplication (a known
# limitation), so assert only membership: processing must not crash and
# the character must appear in the tracked IDs.
cached_ids = get_presence_character_ids(map_id)
assert character_id in cached_ids
end
end
describe "telemetry events" do
test "grace_period_started telemetry is emitted when character leaves", %{
map_id: map_id,
character_id: character_id
} do
test_pid = self()
handler_id = "test-grace-period-started-#{map_id}"
:telemetry.attach(
handler_id,
[:wanderer_app, :presence, :grace_period_started],
fn _name, measurements, metadata, _config ->
send(test_pid, {:telemetry, :started, measurements, metadata})
end,
nil
)
on_exit(fn ->
:telemetry.detach(handler_id)
end)
# Character joins then leaves
presence_data = build_presence_data([{character_id, true}])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
assert_receive {:telemetry, :started, measurements, metadata}, 500
assert measurements.grace_period_ms > 0
assert metadata.map_id == map_id
assert metadata.character_id == character_id
assert metadata.reason == :presence_left
end
test "grace_period_cancelled telemetry is emitted when character rejoins", %{
map_id: map_id,
character_id: character_id
} do
test_pid = self()
handler_id = "test-grace-period-cancelled-#{map_id}"
:telemetry.attach(
handler_id,
[:wanderer_app, :presence, :grace_period_cancelled],
fn _name, measurements, metadata, _config ->
send(test_pid, {:telemetry, :cancelled, measurements, metadata})
end,
nil
)
on_exit(fn ->
:telemetry.detach(handler_id)
end)
presence_data = build_presence_data([{character_id, true}])
# Join -> leave -> rejoin
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
assert_receive {:telemetry, :cancelled, _measurements, metadata}, 500
assert metadata.map_id == map_id
assert metadata.character_id == character_id
assert metadata.reason == :character_rejoined
end
test "grace_period_expired telemetry is emitted when timer fires", %{
map_id: map_id,
character_id: character_id
} do
test_pid = self()
handler_id = "test-grace-period-expired-#{map_id}"
:telemetry.attach(
handler_id,
[:wanderer_app, :presence, :grace_period_expired],
fn _name, measurements, metadata, _config ->
send(test_pid, {:telemetry, :expired, measurements, metadata})
end,
nil
)
on_exit(fn ->
:telemetry.detach(handler_id)
end)
presence_data = build_presence_data([{character_id, true}])
# Join -> leave -> simulate expiration
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
# Simulate grace period expiration
send(Process.whereis(PresenceGracePeriodManager), {:grace_period_expired, map_id, character_id})
:timer.sleep(20)
assert_receive {:telemetry, :expired, measurements, metadata}, 500
assert measurements.duration_ms > 0
assert metadata.map_id == map_id
assert metadata.character_id == character_id
assert metadata.reason == :grace_period_timeout
end
end
describe "cache consistency" do
test "presence_updated flag is set on every change", %{
map_id: map_id,
character_id: character_id
} do
presence_data = build_presence_data([{character_id, true}])
# Clear the flag
WandererApp.Cache.delete("map_#{map_id}:presence_updated")
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
assert get_presence_updated(map_id) == true
# Clear and change again
WandererApp.Cache.delete("map_#{map_id}:presence_updated")
PresenceGracePeriodManager.process_presence_change_sync(map_id, [])
assert get_presence_updated(map_id) == true
end
test "presence_data and presence_character_ids are always in sync", %{
map_id: map_id,
character_id: character_id,
character_id_2: character_id_2
} do
# Complex scenario: multiple characters, some tracked, some not
presence_data = build_presence_data([
{character_id, true},
{character_id_2, false}
])
PresenceGracePeriodManager.process_presence_change_sync(map_id, presence_data)
# presence_character_ids should only have tracked characters
cached_ids = get_presence_character_ids(map_id)
assert cached_ids == [character_id]
# presence_data should have all characters
cached_data = get_presence_data(map_id)
assert length(cached_data) == 2
data_ids = Enum.map(cached_data, & &1.character_id)
assert Enum.sort(data_ids) == Enum.sort([character_id, character_id_2])
end
end
end

@@ -1,5 +1,6 @@
 defmodule WandererApp.Repositories.MapContextHelperTest do
-  use ExUnit.Case, async: false
+  # Pure unit tests - no database or external dependencies
+  use ExUnit.Case, async: true
   alias WandererApp.Repositories.MapContextHelper

@@ -1,5 +1,6 @@
 defmodule WandererApp.TestHelpersTest do
-  use ExUnit.Case
+  # Pure unit tests - no database or external dependencies
+  use ExUnit.Case, async: true
   alias WandererApp.TestHelpers

@@ -1,5 +1,6 @@
 defmodule WandererAppWeb.ApiRouter.RouteSpecTest do
-  use ExUnit.Case, async: false
+  # Pure unit tests - no database or external dependencies
+  use ExUnit.Case, async: true
   alias WandererAppWeb.ApiRouter.RouteSpec

@@ -1,5 +1,6 @@
 defmodule WandererAppWeb.ErrorHTMLTest do
-  use WandererAppWeb.ConnCase, async: false
+  # Pure function tests - no database or external dependencies needed
+  use ExUnit.Case, async: true
   # Bring render_to_string/4 for testing custom views
   import Phoenix.Template

@@ -1,5 +1,6 @@
 defmodule WandererAppWeb.ErrorJSONTest do
-  use WandererAppWeb.ConnCase, async: false
+  # Pure function tests - no database or external dependencies needed
+  use ExUnit.Case, async: true
   test "renders 404" do
     assert WandererAppWeb.ErrorJSON.render("404.json", %{}) == %{errors: %{detail: "Not Found"}}