# Static Site Generation (SSG)
Transform your JopiJS application into a static website using the built-in crawler.
JopiJS includes a powerful Static Site Generator (SSG) that lets you export your entire application as a set of static HTML, CSS, and JS files. This is perfect for static hosting services like GitHub Pages, Vercel, or Netlify.
## How it works
Unlike frameworks like Next.js that pre-render pages at build time, JopiJS takes a different approach: Crawling.
When you trigger the SSG process:
- JopiJS starts your application on a temporary local server.
- An internal crawler begins visiting your website, starting from the home page.
- It discovers all links (`<a>`, `<img>`, `<script>`, etc.) and follows them recursively.
- It saves each visited page and resource (images, CSS, JS) to an output directory.
- If required, it rewrites links to make them relative (e.g., `../../style.css` instead of `/style.css`), ensuring the site works in any folder (relocatable) or on any domain.
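The relocatable-link rewriting in the last step boils down to a small path computation. The function below is an illustrative sketch, not JopiJS's actual implementation; it assumes absolute URL paths (like `/blog/post/index.html`) as input:

```javascript
// Compute a relative link from the page being saved to a target path,
// so the exported site works from any folder or domain.
function makeRelative(pagePath, targetPath) {
    // Split into segments, dropping the leading empty segment
    // and, for the page, its file name.
    const pageDirs = pagePath.split("/").slice(1, -1);
    const targetParts = targetPath.split("/").slice(1);

    // Skip the directory prefix shared by both paths.
    let common = 0;
    while (
        common < pageDirs.length &&
        common < targetParts.length - 1 &&
        pageDirs[common] === targetParts[common]
    ) {
        common++;
    }

    // Go up once per remaining page directory, then down into the target.
    const ups = "../".repeat(pageDirs.length - common);
    const rel = ups + targetParts.slice(common).join("/");
    return rel === "" ? "./" : rel;
}

// A page saved at /blog/post/index.html linking to /style.css:
makeRelative("/blog/post/index.html", "/style.css"); // → "../../style.css"
```

A page two levels deep thus gets `../../style.css`, while a page at the site root would simply get `style.css`.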
## Usage
You can trigger the SSG process without changing your code, simply by using a command-line flag or an environment variable when running your app.
### Using a Command-Line Argument
The easiest way is to pass the `--jopi-ssg` argument when starting your application (usually with `jopin` or `bun`).
```shell
# Output to default directory ("static")
npm run start -- --jopi-ssg

# Output to a specific directory
npm run start -- --jopi-ssg ./dist-static
```

### Using an Environment Variable
Alternatively, you can set the `JOPI_SSG` environment variable.
```shell
# Output to "static"
JOPI_SSG=1 npm run start

# Output to "./dist-static"
JOPI_SSG=./dist-static npm run start
```

## Configuration
You can customize the crawler's behavior (ignoring URLs, rewriting paths, etc.) in your application startup code using `app.configure_crawler()`.
```javascript
import { jopiApp } from "jopijs";

jopiApp.startApp(import.meta, (app) => {
    app.configure_crawler()
        // Change the default output directory
        .set_outputDir("./my-static-site")

        // Include specific URLs that might not be discoverable via links
        // (e.g. typical for pages solely accessed via JS navigation)
        .add_scanUrl("/special-hidden-page")

        // Transform URLs if needed
        .on_transformUrl((url, context) => {
            if (url.includes("secret")) return "/rewritten-secret";
            return url;
        })

        // Filter what to download
        .on_canDownload((url, isResource) => {
            // Don't download admin pages
            if (url.startsWith("/admin")) return false;
            return true;
        })

        .END_configure_crawler();
});
```

## Available Options
| Method | Description |
|---|---|
| `set_outputDir(path)` | Sets the directory where the static files will be saved. Default is `static`. |
| `add_scanUrl(url)` | Adds a URL to the list of entry points to crawl. Useful for orphan pages. |
| `enable_relocatableUrl(bool)` | If `true` (default), converts absolute paths to relative paths (e.g. `../style.css`). |
| `set_pauseDuration(ms)` | Adds a pause between requests to avoid overloading the server/CPU. |
| `on_canDownload(fn)` | Callback to filter URLs. Return `false` to skip a URL. |
| `on_rewriteHtmlBeforeProcessing(fn)` | Modifies the HTML before the crawler parses it for links. |
| `on_rewriteHtmlBeforeStoring(fn)` | Modifies the HTML before it is saved to disk. |
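As an illustration of what an `on_rewriteHtmlBeforeStoring` hook might do, the helper below strips development-only script tags and stamps the export time. This is a hypothetical sketch: the exact callback signature is not documented here, so only the HTML transformation itself is shown.

```javascript
// Remove elements marked data-dev-only (e.g. a live-reload script) before
// the page is written to disk, and prepend a generation-time comment.
// This is a sketch of logic you might plug into on_rewriteHtmlBeforeStoring.
function cleanHtmlBeforeStoring(html, generatedAt) {
    const withoutDevScripts = html.replace(
        /<script[^>]*\bdata-dev-only\b[^>]*>[\s\S]*?<\/script>/g,
        ""
    );
    return `<!-- Statically generated on ${generatedAt} -->\n` + withoutDevScripts;
}

const page =
    '<html><head><script data-dev-only src="/reload.js"></script></head>' +
    "<body>Hi</body></html>";
cleanHtmlBeforeStoring(page, "2024-01-01");
// The result starts with the generation comment and no longer references /reload.js.
```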
## Important Considerations
- Dynamic Content: Since the output is static HTML, any server-side logic (per-request database calls, specialized headers) won't run when the generated static files are served. The content is "frozen" at the time of the crawl.
- Client-Side Features: React components, client-side routing, and API calls (fetching data from external APIs) continue to work normally.