
How To Back Up Data

In the Object Graph (OG), each resource in your project behaves like a table: it has a schema and many records (rows). The @innoflex-technology/og-schema package exposes GetProject() to read your resource definitions and ListRecords() to fetch all records for a resource, following pagination automatically.

This guide shows a small Node script that:

  1. Loads your project and reads every resource name from project.schemas.
  2. Calls ListRecords for each resource and writes the combined result to one JSON file per resource.
Prerequisites:

  • Node.js 22+
  • Your project id and API key (X-Api-Key) for the Object Graph external API (https://og.svc.innoflex.cloud/ext/... by default).

Install the client:
npm install @innoflex-technology/og-schema

Set credentials via environment variables (do not commit secrets):

  • OG_PROJECT_ID — UUID of the project
  • OG_API_KEY — API key header value

Optional:

  • OUT_DIR — output directory (default ./og-backup)
  • Pass { domain: 'your-og-host' } to the OG constructor if you use a non-default Object Graph host.

GetProject() returns a Project whose schemas array lists each resource's field definitions. The short_name of each top-level schema is the resource identifier used in URLs such as /ext/projects/{projectId}/resources/{short_name}/records. Pass that string as the type argument to ListRecords(type).

If a resource was registered under a different identifier, use the same name your app uses in the OG API paths.
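As a minimal sketch of this mapping (the Project shape here is an assumption, reduced to just the fields the backup script below reads), you can derive each resource's records path from project.schemas:

```typescript
// Hypothetical minimal shape of the Project returned by GetProject().
type ProjectLike = { schemas?: Array<{ short_name?: string }> };

// Build the /ext records path for every resource in the project,
// skipping schemas without a short_name.
const recordPaths = (projectId: string, project: ProjectLike): string[] =>
  (project.schemas ?? [])
    .map((s) => s.short_name)
    .filter((n): n is string => Boolean(n))
    .map((n) => `/ext/projects/${projectId}/resources/${n}/records`);

const paths = recordPaths("11111111-2222-3333-4444-555555555555", {
  schemas: [{ short_name: "customers" }, { short_name: "orders" }, {}],
});
console.log(paths);
// Two paths: one for "customers", one for "orders"; the schema without
// a short_name is skipped.
```

The same short_name strings are what you pass to ListRecords(type) in the script below.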

Save as backup-og-resources.mjs:

import fs from "node:fs/promises";
import path from "node:path";
import process from "node:process";
import { OG } from "@innoflex-technology/og-schema";

/** Maps a resource name to a safe filename. */
const toSafeFilename = (name) => name.replace(/[^\w.-]/g, "_");

const main = async () => {
  const projectId = process.env.OG_PROJECT_ID;
  const apiKey = process.env.OG_API_KEY;
  if (!projectId || !apiKey) {
    console.error("Set OG_PROJECT_ID and OG_API_KEY");
    process.exit(1);
  }

  const outDir = path.resolve(process.env.OUT_DIR ?? "./og-backup");
  const og = new OG(projectId, apiKey);

  const project = await og.GetProject();
  if (project?.error) {
    console.error("GetProject failed:", project.error);
    process.exit(1);
  }

  const schemas = project.schemas ?? [];
  const resourceNames = schemas.map((s) => s.short_name).filter(Boolean);
  if (resourceNames.length === 0) {
    console.warn("No schemas found; nothing to export.");
    return;
  }

  await fs.mkdir(outDir, { recursive: true });
  console.log(`Exporting ${resourceNames.length} resource(s) to ${outDir}`);

  for (const resourceName of resourceNames) {
    const data = await og.ListRecords(resourceName);
    if (data?.error) {
      console.error(`ListRecords(${resourceName}) failed:`, data.error);
      continue;
    }

    const payload = {
      resource: resourceName,
      items: data.Items ?? [],
      ...(data.Include?.length ? { include: data.Include } : {}),
    };

    const file = path.join(outDir, `${toSafeFilename(resourceName)}.json`);
    await fs.writeFile(file, JSON.stringify(payload, null, 2), "utf8");
    console.log(`Wrote ${file} (${payload.items.length} row(s))`);
  }
};

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

Run:

export OG_PROJECT_ID="your-project-uuid"
export OG_API_KEY="your-api-key"
export OUT_DIR=./my-og-backup
node backup-og-resources.mjs

Each file contains:

  • resource — the resource short_name
  • items — all records returned by ListRecords (the package follows Next pagination until everything is loaded)
  • include — only present when the API returned reference includes (optional)
Notes:

  • Blocks (project.block) are layout metadata, not row collections; this script exports resource data only. Extend the script if you also need blocks or project-level JSON.
  • Rate limits: the OG client uses a small concurrency limiter and retries on some server errors; very large datasets may take a while.
  • Restore: use CreateRecord (and related APIs) for writes; restoring from backup is project-specific and not covered here.
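As a quick sanity check on an exported file, a small summarizer can be sketched against the payload shape the script writes (resource, items, optional include; this assumes nothing beyond that shape):

```typescript
// Shape of each JSON file written by backup-og-resources.mjs.
type BackupPayload = {
  resource: string;
  items: unknown[];
  include?: unknown[];
};

// One summary line per file, e.g. after JSON.parse of a backup file.
const summarize = (payload: BackupPayload): string => {
  const includes = payload.include?.length ?? 0;
  return `${payload.resource}: ${payload.items.length} row(s), ${includes} include(s)`;
};

console.log(summarize({ resource: "customers", items: [{}, {}], include: [{}] }));
// → customers: 2 row(s), 1 include(s)
```

Run it over each file in OUT_DIR to confirm row counts match what the backup script logged.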

Alternative: push records into another database with Drizzle ORM

Instead of (or after) writing JSON files, you can stream OG records into a relational database using Drizzle ORM. That requires three pieces:

  1. Target schema — Drizzle table definitions (pgTable, mysqlTable, etc.) that describe where each resource’s fields land (column types, keys, indexes).
  2. OG shape — From GetProject(), each resource’s schemas entry defines field short_name values; each record from ListRecords is typically shaped like { uuid, attribute: { … } } (see DataSetDatabaseObject in og-schema types), with attribute keyed by those short names.
  3. Transformation layer — Pure functions that turn one OG item into one row object your Drizzle schema accepts (rename columns, coerce types, stringify nested values, resolve references from Include, split attrlist structures, and so on).

There is no automatic one-to-one mapping from arbitrary OG DataSet trees to SQL: you encode your business rules in the transformers and keep them next to your Drizzle definitions.
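As an illustration of such a transformer (the field names amount and line_items are hypothetical; the item shape follows the { uuid, attribute } convention described above):

```typescript
// OG list item shape: uuid plus an attribute bag keyed by field short names.
type OgItem = { uuid: string; attribute: Record<string, unknown> };

// Pure transformer: rename, coerce, and flatten one OG item into one row.
const orderRowFromOg = (item: OgItem) => ({
  id: item.uuid,
  // Coerce a possibly-string amount into a number, defaulting to 0.
  amount: Number(item.attribute.amount ?? 0),
  // Stringify a nested attrlist-style value so it fits a text column.
  lineItems: JSON.stringify(item.attribute.line_items ?? []),
});

console.log(
  orderRowFromOg({
    uuid: "a1",
    attribute: { amount: "19.99", line_items: [{ sku: "X", qty: 2 }] },
  }),
);
// amount is coerced to the number 19.99; line_items becomes a JSON string.
```

Because transformers are pure functions, they are trivial to unit-test against sample ListRecords payloads before any database is involved.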

Install (PostgreSQL example; adapt drivers and imports for SQLite or MySQL):

npm install drizzle-orm pg @innoflex-technology/og-schema
npm install -D drizzle-kit @types/pg

Below is an illustrative TypeScript sketch: one resource (customers) with an explicit mapper. Add a case (or a registry of { map, table }) for each resource you replicate. Use transactions or batched inserts for production volumes.

import { drizzle } from "drizzle-orm/node-postgres";
import { pgTable, text, uuid } from "drizzle-orm/pg-core";
import pg from "pg";
import { OG } from "@innoflex-technology/og-schema";

/** Drizzle target tables — mirror the columns you need from OG. */
const customers = pgTable("customers", {
  id: uuid("id").primaryKey(),
  email: text("email").notNull(),
  displayName: text("display_name"),
});

/** OG list item shape (records expose uuid + attribute bag). */
type OgItem = { uuid: string; attribute: Record<string, unknown> };

/** Transformation layer: one function per resource / target table. */
const customerRowFromOg = (item: OgItem) => ({
  id: item.uuid,
  email: String(item.attribute.email ?? ""),
  displayName:
    item.attribute.display_name != null ? String(item.attribute.display_name) : null,
});

const syncOgToPostgres = async () => {
  const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
  const db = drizzle(pool);
  const og = new OG(process.env.OG_PROJECT_ID!, process.env.OG_API_KEY!);

  const project = await og.GetProject();
  if (project?.error) throw new Error(JSON.stringify(project.error));

  for (const schema of project.schemas ?? []) {
    const shortName = schema.short_name;
    const { Items = [] } = await og.ListRecords(shortName);
    const items = Items as OgItem[];

    switch (shortName) {
      case "customers": {
        const rows = items.map(customerRowFromOg);
        if (rows.length > 0) {
          await db.insert(customers).values(rows).onConflictDoNothing();
        }
        break;
      }
      default:
        break;
    }
  }

  await pool.end();
};

void syncOgToPostgres();

Practical notes:

  • Discover field names — Log one ListRecords payload or use GetProject().schemas and walk each DataSet’s short_name (and nested attrlist for structured fields) to build your mappers.
  • References — If you use REFERENCE fields, use the Include payload from ListRecords when present, fetch related records in a second pass, or insert in dependency order and map foreign keys in the transformer.
  • Idempotency — Use onConflictDoUpdate / onConflictDoNothing on primary keys (often OG uuid) so re-runs are safe.
  • Other DBs — Drizzle supports SQLite, MySQL, and others: swap pgTable/drizzle-orm/node-postgres for the matching module and driver.
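The Include-based approach to references can be sketched as a pure join step. This assumes included records are also { uuid, attribute } items and that a REFERENCE field stores the target record's uuid; verify both against your actual payloads:

```typescript
// OG list item shape: uuid plus an attribute bag keyed by field short names.
type OgItem = { uuid: string; attribute: Record<string, unknown> };

// Index included records by uuid, then attach the referenced record
// (when found) to each item under a `resolved` key.
const resolveReferences = (
  items: OgItem[],
  include: OgItem[],
  refField: string,
): Array<OgItem & { resolved?: OgItem }> => {
  const byUuid = new Map(include.map((rec) => [rec.uuid, rec]));
  return items.map((item) => {
    const ref = item.attribute[refField];
    const resolved = typeof ref === "string" ? byUuid.get(ref) : undefined;
    return resolved ? { ...item, resolved } : { ...item };
  });
};

const resolved = resolveReferences(
  [{ uuid: "o1", attribute: { customer: "c1" } }],
  [{ uuid: "c1", attribute: { email: "a@example.com" } }],
  "customer",
);
console.log(resolved[0].resolved?.attribute.email);
// → a@example.com
```

A transformer can then read resolved.uuid (or any resolved attribute) to populate foreign-key columns before inserting.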