# How to Back Up Data
In the Object Graph (OG), each resource in your project behaves like a table: it has a schema and many records (rows). The `@innoflex-technology/og-schema` package exposes `GetProject()` to read your resource definitions and `ListRecords()` to fetch all records for a resource, following pagination automatically.
This guide shows a small Node script that:
- Loads your project and reads every resource name from `project.schemas`.
- Calls `ListRecords` for each resource and writes the combined result to one JSON file per resource.
## Prerequisites

- Node.js 22+
- Your project id and API key (`X-Api-Key`) for the Object Graph external API (`https://og.svc.innoflex.cloud/ext/...` by default).
- Install the client:

```sh
npm install @innoflex-technology/og-schema
```

Set credentials via environment variables (do not commit secrets):

- `OG_PROJECT_ID`: UUID of the project
- `OG_API_KEY`: API key header value

Optional:

- `OUT_DIR`: output directory (default `./og-backup`)
- Pass `{ domain: 'your-og-host' }` to the `OG` constructor if you use a non-default Object Graph host.
## How resource names map to “tables”

`GetProject()` returns a `Project` whose `schemas` array lists each resource field definition. The `short_name` of each top-level schema is the resource identifier used in URLs such as `/ext/projects/{projectId}/resources/{short_name}/records`. Pass that string as the `type` argument to `ListRecords(type)`.
If a resource was registered under a different identifier, use the same name your app uses in the OG API paths.
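For illustration, the `short_name`-to-URL mapping can be sketched as a tiny helper. `recordsPath` is a hypothetical function written for this guide; the real client builds these paths internally:

```ts
// Hypothetical helper (not part of og-schema): builds the records URL path
// for a resource, mirroring /ext/projects/{projectId}/resources/{short_name}/records.
const recordsPath = (projectId: string, shortName: string): string =>
  `/ext/projects/${projectId}/resources/${encodeURIComponent(shortName)}/records`;

console.log(recordsPath("123e4567-e89b-12d3-a456-426614174000", "customers"));
// → "/ext/projects/123e4567-e89b-12d3-a456-426614174000/resources/customers/records"
```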
## Example backup script

Save as `backup-og-resources.mjs`:
```js
import fs from "node:fs/promises";
import path from "node:path";
import process from "node:process";
import { OG } from "@innoflex-technology/og-schema";

/** Maps a resource name to a safe filename. */
const toSafeFilename = (name) => name.replace(/[^\w.-]/g, "_");

const main = async () => {
  const projectId = process.env.OG_PROJECT_ID;
  const apiKey = process.env.OG_API_KEY;
  if (!projectId || !apiKey) {
    console.error("Set OG_PROJECT_ID and OG_API_KEY");
    process.exit(1);
  }

  const outDir = path.resolve(process.env.OUT_DIR ?? "./og-backup");
  const og = new OG(projectId, apiKey);

  const project = await og.GetProject();
  if (project?.error) {
    console.error("GetProject failed:", project.error);
    process.exit(1);
  }

  const schemas = project.schemas ?? [];
  const resourceNames = schemas.map((s) => s.short_name).filter(Boolean);

  if (resourceNames.length === 0) {
    console.warn("No schemas found; nothing to export.");
    return;
  }

  await fs.mkdir(outDir, { recursive: true });
  console.log(`Exporting ${resourceNames.length} resource(s) to ${outDir}`);

  for (const resourceName of resourceNames) {
    const data = await og.ListRecords(resourceName);
    if (data?.error) {
      console.error(`ListRecords(${resourceName}) failed:`, data.error);
      continue;
    }

    const payload = {
      resource: resourceName,
      items: data.Items ?? [],
      ...(data.Include?.length ? { include: data.Include } : {}),
    };

    const file = path.join(outDir, `${toSafeFilename(resourceName)}.json`);
    await fs.writeFile(file, JSON.stringify(payload, null, 2), "utf8");
    console.log(`Wrote ${file} (${payload.items.length} row(s))`);
  }
};

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Run:

```sh
export OG_PROJECT_ID="your-project-uuid"
export OG_API_KEY="your-api-key"
export OUT_DIR=./my-og-backup
node backup-og-resources.mjs
```

Each file contains:

- `resource`: the resource `short_name`
- `items`: all records returned by `ListRecords` (the package follows `Next` pagination until everything is loaded)
- `include`: only present when the API returned reference includes (optional)
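If you later consume these files programmatically, a small guard can verify the shape before you act on it. `isBackupPayload` below is a hypothetical helper written against the file layout described above, not part of the package:

```ts
// Hypothetical guard for the per-resource backup files this guide writes.
type BackupPayload = {
  resource: string;
  items: unknown[];
  include?: unknown[];
};

const isBackupPayload = (value: unknown): value is BackupPayload => {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.resource === "string" &&
    Array.isArray(v.items) &&
    (v.include === undefined || Array.isArray(v.include))
  );
};

// Example: parse a backup file's JSON text and check it before using it.
const parsed = JSON.parse('{"resource":"customers","items":[]}');
console.log(isBackupPayload(parsed)); // true
```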
- Blocks (`project.block`) are layout metadata, not row collections; this script exports resource data only. Extend the script if you also need blocks or project-level JSON.
- Rate limits: the `OG` client uses a small concurrency limiter and retries on some server errors; very large datasets may take a while.
- Restore: use `CreateRecord` (and related APIs) for writes; restoring from backup is project-specific and not covered here.
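The client's internal limiter is not something you configure, but if you add your own post-processing across many resources, a generic bounded-concurrency helper can be sketched like this (an illustrative sketch, not the OG client's implementation):

```ts
// Generic concurrency limiter sketch: runs at most `limit` tasks at a time.
// Workers pull the next index synchronously, so no two workers run the same task.
const runLimited = async <T>(
  tasks: Array<() => Promise<T>>,
  limit: number,
): Promise<T[]> => {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  const worker = async () => {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker),
  );
  return results;
};
```

Results come back in the original task order regardless of completion order, which keeps downstream processing deterministic.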
## Alternative: push records into another database with Drizzle ORM

Instead of (or after) writing JSON files, you can stream OG records into a relational database using Drizzle ORM. That requires three pieces:
- **Target schema**: Drizzle table definitions (`pgTable`, `mysqlTable`, etc.) that describe where each resource’s fields land (column types, keys, indexes).
- **OG shape**: from `GetProject()`, each resource’s `schemas` entry defines field `short_name` values; each record from `ListRecords` is typically shaped like `{ uuid, attribute: { … } }` (see `DataSetDatabaseObject` in the `og-schema` types), with `attribute` keyed by those short names.
- **Transformation layer**: pure functions that turn one OG item into one row object your Drizzle schema accepts (rename columns, coerce types, stringify nested values, resolve references from `Include`, split `attrlist` structures, and so on).
There is no automatic one-to-one mapping from arbitrary OG DataSet trees to SQL: you encode your business rules in the transformers and keep them next to your Drizzle definitions.
Install (PostgreSQL example; adapt drivers and imports for SQLite or MySQL):
```sh
npm install drizzle-orm pg @innoflex-technology/og-schema
npm install -D drizzle-kit @types/pg
```

Below is an illustrative TypeScript sketch: one resource (`customers`) with an explicit mapper. Add a `case` (or a registry of `{ map, table }`) for each resource you replicate. Use transactions or batched inserts for production volumes.
```ts
import { drizzle } from "drizzle-orm/node-postgres";
import { pgTable, text, uuid } from "drizzle-orm/pg-core";
import pg from "pg";
import { OG } from "@innoflex-technology/og-schema";

/** Drizzle target tables — mirror the columns you need from OG. */
const customers = pgTable("customers", {
  id: uuid("id").primaryKey(),
  email: text("email").notNull(),
  displayName: text("display_name"),
});

/** OG list item shape (records expose uuid + attribute bag). */
type OgItem = { uuid: string; attribute: Record<string, unknown> };

/** Transformation layer: one function per resource / target table. */
const customerRowFromOg = (item: OgItem) => ({
  id: item.uuid,
  email: String(item.attribute.email ?? ""),
  displayName:
    item.attribute.display_name != null
      ? String(item.attribute.display_name)
      : null,
});

const syncOgToPostgres = async () => {
  const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
  const db = drizzle(pool);
  const og = new OG(process.env.OG_PROJECT_ID!, process.env.OG_API_KEY!);

  const project = await og.GetProject();
  if (project?.error) throw new Error(JSON.stringify(project.error));

  for (const schema of project.schemas ?? []) {
    const shortName = schema.short_name;
    const { Items = [] } = await og.ListRecords(shortName);
    const items = Items as OgItem[];

    switch (shortName) {
      case "customers": {
        const rows = items.map(customerRowFromOg);
        if (rows.length > 0) {
          await db.insert(customers).values(rows).onConflictDoNothing();
        }
        break;
      }
      default:
        break;
    }
  }

  await pool.end();
};

void syncOgToPostgres();
```

Practical notes:
- **Discover field names**: log one `ListRecords` payload, or use `GetProject().schemas` and walk each `DataSet`’s `short_name` (and nested `attrlist` for structured fields) to build your mappers.
- **References**: if you use `REFERENCE` fields, use the `Include` payload from `ListRecords` when present, fetch related records in a second pass, or insert in dependency order and map foreign keys in the transformer.
- **Idempotency**: use `onConflictDoUpdate` / `onConflictDoNothing` on primary keys (often the OG `uuid`) so re-runs are safe.
- **Other DBs**: Drizzle supports SQLite, MySQL, and others; swap `pgTable` / `drizzle-orm/node-postgres` for the matching module and driver.
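To make the “discover field names” step concrete, here is a sketch that walks a schema tree and collects `short_name` values, recursing into nested `attrlist` entries. The `SchemaNode` shape is an assumption based on the notes above, not the package’s exact `DataSet` type:

```ts
// Assumed node shape: a short_name plus optional nested attrlist children.
type SchemaNode = { short_name?: string; attrlist?: SchemaNode[] };

// Depth-first walk collecting every short_name in the tree.
const collectShortNames = (node: SchemaNode, out: string[] = []): string[] => {
  if (node.short_name) out.push(node.short_name);
  for (const child of node.attrlist ?? []) collectShortNames(child, out);
  return out;
};

const example: SchemaNode = {
  short_name: "customers",
  attrlist: [
    { short_name: "email" },
    { short_name: "address", attrlist: [{ short_name: "city" }] },
  ],
};
console.log(collectShortNames(example));
// → ["customers", "email", "address", "city"]
```

The resulting list gives you the `attribute` keys to expect, which is exactly what the mappers in the sketch above need.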
## Related

- Object Graph schema reference: OpenAPI and service overview
- Package: `@innoflex-technology/og-schema` on npm