mirror of https://github.com/HeyPuter/puter.git (synced 2026-05-03 08:00:32 +00:00)
d4d78ac7db
fix: dynamodb health checks and client recreation (#2789)
228 lines
8.0 KiB
TypeScript
import {
    DeleteObjectCommand,
    GetObjectCommand,
    PutObjectCommand,
    S3Client,
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { extension } from '@heyputer/backend/src/extensions';
import crypto from 'node:crypto';
import sharp from 'sharp';

const clients = extension.import('client');

const MAX_THUMBNAIL_BYTES = 2 * 1024 * 1024;
const MAX_THUMBNAIL_PIXELS = 64e6;

// S3 client + bucket config — lazily resolved after boot from config.
let s3Client: S3Client | null = null;
let thumbnailBucketName = 'puter-local';
let extensionBucketEndpoint = 'http://127.0.0.1:4566/puter-local/';

function getClient(): S3Client {
    if (s3Client) return s3Client;

    // Top-level `thumbnailStore` config when the extension should use a
    // dedicated S3 bucket instead of the main one.
    const thumbStore = extension.config.thumbnailStore;

    if (thumbStore?.endpoint && thumbStore.credentials) {
        s3Client = new S3Client({
            region: 'auto',
            endpoint: thumbStore.endpoint,
            credentials: thumbStore.credentials,
        });
        thumbnailBucketName = thumbStore.name ?? 'puter-local';
        extensionBucketEndpoint = thumbStore.endpoint;
    } else {
        // Fall back to the project's S3 wrapper. `clients.s3` is the Puter
        // `S3Client` wrapper (region-cache + lifecycle), not an AWS
        // `S3Client`. Call `.get()` to obtain the underlying AWS client that
        // `getSignedUrl` / `.send(command)` both expect.
        const wrapper = clients.s3;
        s3Client = wrapper.get();
    }
    return s3Client;
}

function base64ParseDataUrl(dataURL: string) {
    dataURL = dataURL.slice(5);
    const mimeType = dataURL.split(';')[0];
    const data = Buffer.from(dataURL.split(',')[1], 'base64');
    return { mimeType, data };
}

// Strictly decode a data: URL and validate the decoded image. Encoded-string
// length lies about decoded byte count (whitespace, padding) and says nothing
// about pixel count — a 2MB PNG can decompress to hundreds of MB of raster.
async function decodeAndValidateThumbnail(
    dataURL: string,
): Promise<{ mimeType: string; data: Buffer } | null> {
    const commaIdx = dataURL.indexOf(',');
    if (commaIdx === -1) return null;
    const mimeType = dataURL.slice(5, commaIdx).split(';')[0];

    const data = Buffer.from(dataURL.slice(commaIdx + 1), 'base64');
    if (data.length === 0 || data.length > MAX_THUMBNAIL_BYTES) return null;

    try {
        await sharp(data, {
            limitInputPixels: MAX_THUMBNAIL_PIXELS,
            density: 72,
            failOn: 'error',
        }).metadata();
    } catch {
        return null;
    }

    return { mimeType, data };
}
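The strict parse above can be exercised in isolation. A minimal sketch (the helper name `parseDataUrl` is illustrative, not part of the extension) of why the first comma, rather than a plain `split`, anchors the parse: base64 payloads never contain a comma, while the mime segment may carry a `;base64` suffix.

```typescript
// Illustrative stand-alone version of the strict parse used by
// decodeAndValidateThumbnail (names here are hypothetical).
function parseDataUrl(dataURL: string): { mimeType: string; data: Buffer } | null {
    if (!dataURL.startsWith('data:')) return null;
    const commaIdx = dataURL.indexOf(',');
    if (commaIdx === -1) return null; // no payload separator: not a data URL
    // Everything between "data:" and the first ";" (or the comma) is the mime.
    const mimeType = dataURL.slice(5, commaIdx).split(';')[0];
    const data = Buffer.from(dataURL.slice(commaIdx + 1), 'base64');
    return { mimeType, data };
}

const ok = parseDataUrl('data:image/png;base64,' + Buffer.from('hi').toString('base64'));
// ok: mimeType 'image/png', data decodes back to 'hi'
```

Unlike `base64ParseDataUrl`, this shape rejects malformed input instead of producing an `undefined` payload.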

// ── thumbnail.created ───────────────────────────────────────────────
// Intercept data-URL thumbnails before they hit the DB: upload to S3
// and replace the URL with an s3:// pointer.

extension.on('thumbnail.created', async (event: Record<string, unknown>) => {
    const url = event.url;
    if (typeof url !== 'string' || !url.startsWith('data:')) return;

    const decoded = await decodeAndValidateThumbnail(url);
    if (!decoded) {
        event.url = null;
        return;
    }

    const key = crypto.randomUUID();
    event.url = `s3://${thumbnailBucketName}/${key}`;

    await getClient().send(
        new PutObjectCommand({
            Bucket: thumbnailBucketName,
            Key: key,
            Body: decoded.data,
            ContentType: decoded.mimeType,
        }),
    );
});

// ── thumbnail.upload.prepare ────────────────────────────────────────
// Generate pre-signed upload URLs so the client can PUT directly to S3.

extension.on(
    'thumbnail.upload.prepare',
    async (event: Record<string, unknown>) => {
        if (!event || !Array.isArray(event.items)) return;
        const client = getClient();

        for (const item of event.items as Array<Record<string, unknown>>) {
            if (!item || typeof item !== 'object') {
                throw new Error('thumbnail.upload.prepare item is invalid');
            }

            const contentType =
                typeof item.contentType === 'string'
                    ? item.contentType.trim()
                    : '';
            if (!contentType) continue;

            if (item.size !== undefined) {
                const size = Number(item.size);
                if (
                    !Number.isFinite(size) ||
                    size < 0 ||
                    size > MAX_THUMBNAIL_BYTES
                )
                    continue;
            }

            const key = crypto.randomUUID();
            const command = new PutObjectCommand({
                Bucket: thumbnailBucketName,
                Key: key,
                ContentType: contentType,
            });
            item.uploadUrl = await getSignedUrl(client, command, {
                expiresIn: 900,
            });
            item.thumbnailUrl = `s3://${thumbnailBucketName}/${key}`;
        }
    },
);
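The size gate in the loop above is easy to restate as a predicate. A sketch (the function name is illustrative) of the exact accept/reject behavior, including the "undefined means unchecked" case:

```typescript
// Hypothetical predicate mirroring the size check in
// thumbnail.upload.prepare: absent sizes pass; declared sizes must be
// finite, non-negative, and within the 2 MiB cap.
const MAX_THUMBNAIL_BYTES = 2 * 1024 * 1024;

function sizeAllowed(size: unknown): boolean {
    if (size === undefined) return true;
    const n = Number(size);
    return Number.isFinite(n) && n >= 0 && n <= MAX_THUMBNAIL_BYTES;
}
```

Since `Number('abc')` is `NaN` and fails `Number.isFinite`, non-numeric strings are rejected rather than silently coerced.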

// ── thumbnail.read ──────────────────────────────────────────────────
// Convert s3:// or legacy https:// thumbnails to signed URLs.

extension.on('thumbnail.read', async (entry: Record<string, unknown>) => {
    const thumb = entry.thumbnail;
    if (typeof thumb !== 'string' || !thumb) return;
    const client = getClient();

    if (thumb.startsWith('s3://')) {
        const [bucket, key] = thumb.slice(5).split('/');
        entry.thumbnail = await getSignedUrl(
            client,
            new GetObjectCommand({ Bucket: bucket, Key: key }),
            { expiresIn: 604800 },
        );
    } else if (
        thumb.startsWith('https') &&
        thumb.includes(new URL(extensionBucketEndpoint).hostname)
    ) {
        // Legacy format — remove after full migration
        const [bucket, key] = new URL(thumb).pathname.slice(1).split('/');
        entry.thumbnail = await getSignedUrl(
            client,
            new GetObjectCommand({ Bucket: bucket, Key: key }),
            { expiresIn: 604800 },
        );
    } else if (thumb.startsWith('data')) {
        // Inline data-URL migration: upload to S3 and update the DB entry.
        const key = crypto.randomUUID();
        const { mimeType, data } = base64ParseDataUrl(thumb);
        const newUrl = `s3://${thumbnailBucketName}/${key}`;

        await client.send(
            new PutObjectCommand({
                Bucket: thumbnailBucketName,
                Key: key,
                Body: data,
                ContentType: mimeType,
            }),
        );

        // Best-effort async DB update
        const uuid = entry.uuid ?? entry.uid;
        if (uuid) {
            clients.db
                .write(
                    'UPDATE `fsentries` SET `thumbnail` = ? WHERE `uuid` = ?',
                    [newUrl, uuid],
                )
                .catch((err: unknown) =>
                    console.warn('[thumbnails] inline migration failed', err),
                );
        }

        entry.thumbnail = await getSignedUrl(
            client,
            new GetObjectCommand({ Bucket: thumbnailBucketName, Key: key }),
            { expiresIn: 604800 },
        );
    }
});

// ── fs.remove.node ──────────────────────────────────────────────────
// Delete S3 thumbnail when the file is removed.

extension.on(
    'fs.remove.node',
    async ({ target }: { target: Record<string, unknown> }) => {
        const thumbnailUrl = target.thumbnail as string | undefined;
        if (!thumbnailUrl || !thumbnailUrl.startsWith('s3://')) return;

        const [bucket, key] = thumbnailUrl.slice(5).split('/');
        await getClient().send(
            new DeleteObjectCommand({ Bucket: bucket, Key: key }),
        );
    },
);
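The `s3://` pointer scheme used throughout the file (written in `thumbnail.created` and `thumbnail.upload.prepare`, read in `thumbnail.read`, deleted in `fs.remove.node`) amounts to one parse rule. A sketch of that rule as a helper (the extension inlines the split instead; this name is hypothetical). A single `split('/')` is safe only because keys are `crypto.randomUUID()` values, which never contain slashes:

```typescript
// Hypothetical helper for the s3://<bucket>/<key> pointers stored in
// the thumbnail column. Keys are UUIDs, so one split is sufficient.
function parseS3Pointer(url: string): { bucket: string; key: string } | null {
    if (!url.startsWith('s3://')) return null;
    const [bucket, key] = url.slice(5).split('/');
    if (!bucket || !key) return null;
    return { bucket, key };
}

const p = parseS3Pointer('s3://puter-local/123e4567');
// p: bucket 'puter-local', key '123e4567'
```

If keys ever stopped being UUIDs, the split would need a limit (or `indexOf('/')`) to keep slashes inside the key.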