# Embedding GPS photo markers at build time with Hugo
How to replace browser-side EXIF GPS reading with a pre-build Node script that embeds coordinates directly in the HTML — faster maps, no async loading, no browser EXIF parsing.
## The problem with reading GPS in the browser
An earlier version of the Velostevie map read GPS coordinates from image EXIF metadata in the browser using exifr. The flow was:
- A Hugo shortcode emits a list of photo URLs as a data-photos attribute
- JavaScript fetches each image from the server
- exifr extracts the GPS coordinates from the EXIF data
- Leaflet markers are placed once all reads complete
This works, but it has a significant cost: the browser has to download every image just to read its metadata. On a page with thirty gallery photos that might mean thirty HTTP requests firing before a single marker appears. The map loads blank and fills in gradually as the GPS reads complete.
There’s also an architectural smell: the browser is doing work that could be done once, at build time. Coordinates don’t change. The same GPS data is computed fresh on every page load.
## A better approach: extract GPS before Hugo runs
The site already runs a Node script before every build to prepare data. The pattern for moving GPS extraction to build time is:
- Pre-build: a Node script reads GPS EXIF from all images and writes a JSON data file
- Build: the Hugo shortcode reads that JSON and embeds coordinates directly in the HTML
- Runtime: the browser reads coordinates synchronously from the DOM — no fetches, no async, instant markers
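Concretely, the pre-build step produces a flat map from asset-relative paths to coordinates. A minimal illustration of data/photo-gps.json (paths and coordinates invented):

```json
{
  "images/articles/2025/foo/gallery/bar.png": { "lat": 44.8378, "lng": -0.5667 },
  "images/articles/2025/foo/gallery/baz.jpg": { "lat": 44.8412, "lng": -0.5711 }
}
```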
## Step 1: the pre-build script
scripts/extract-gps.mjs walks the image directory and writes data/photo-gps.json:
```javascript
import { readdir, writeFile } from 'fs/promises';
import { join, relative } from 'path';
import exifr from 'exifr';

const ASSETS_DIR = new URL('../assets', import.meta.url).pathname;
const OUT_FILE = new URL('../data/photo-gps.json', import.meta.url).pathname;

async function walk(dir) {
  const entries = await readdir(dir, { withFileTypes: true });
  const files = [];
  for (const entry of entries) {
    const full = join(dir, entry.name);
    if (entry.isDirectory()) files.push(...await walk(full));
    else if (/\.(jpg|jpeg|png)$/i.test(entry.name)) files.push(full);
  }
  return files;
}

const files = await walk(join(ASSETS_DIR, 'images'));
const result = {};
for (const file of files) {
  try {
    const gps = await exifr.gps(file);
    if (gps?.latitude && gps?.longitude) {
      const key = relative(ASSETS_DIR, file).replace(/\\/g, '/');
      result[key] = { lat: gps.latitude, lng: gps.longitude };
    }
  } catch { /* unreadable metadata: skip */ }
}

await writeFile(OUT_FILE, JSON.stringify(result, null, 2));
console.log(`Wrote ${Object.keys(result).length} GPS entries to data/photo-gps.json`);
```

The keys are paths relative to assets/ with no leading slash — matching how Hugo’s resources.Match reports resource names (after stripping the leading / with strings.TrimLeft "/" .Name).
Wire it into the build in package.json:
```json
"scripts": {
  "extract-gps": "node scripts/extract-gps.mjs",
  "prestart": "npm run -s mod:vendor && npm run -s extract-gps",
  "prebuild": "npm run clean:public && npm run -s mod:vendor && npm run -s extract-gps"
}
```

prestart and prebuild run automatically before npm run start and npm run build, so data/photo-gps.json is always fresh when Hugo runs. The Cloudflare Pages build command also needs to include the step explicitly:
```shell
npm ci && hugo mod vendor && node scripts/extract-gps.mjs && hugo --gc --minify
```

## Step 2: the Hugo shortcode
layouts/shortcodes/gpxmap.html now reads from site.Data["photo-gps"] and embeds all the data it needs at build time:
```go-html-template
{{- $dir := .Get "gallery" -}}
{{- $photoMarkers := slice -}}
{{- if and (not $isSection) $dir -}}
  {{- $gpsData := index $.Site.Data "photo-gps" -}}
  {{- $images := resources.Match (printf "%s/*" $dir) -}}
  {{- range $images -}}
    {{- $filename := path.Base .Name -}}
    {{- if not (hasPrefix $filename ".") -}}
      {{- $key := strings.TrimLeft "/" .Name -}}
      {{- $gps := index $gpsData $key -}}
      {{- if $gps -}}
        {{- $full := .Resize "1920x webp" -}}
        {{- $base := strings.TrimSuffix (path.Ext $filename) $filename -}}
        {{- $caption := replace (strings.Trim (replaceRE "^[0-9]+" "" $base) "_") "_" " " -}}
        {{- $marker := dict "url" $full.Permalink "lat" $gps.lat "lng" $gps.lng "caption" $caption -}}
        {{- $photoMarkers = $photoMarkers | append $marker -}}
      {{- end -}}
    {{- end -}}
  {{- end -}}
{{- end -}}
{{- with $photoMarkers }}
<div class="gpx-map" data-photo-markers="{{ jsonify . }}"></div>
{{- end -}}
```

For each image that has a GPS entry in the data file, we build a {url, lat, lng, caption} object and serialise the whole array to JSON in the data-photo-markers attribute. Hugo does this work once at build time and caches it.
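With coordinates found, the shortcode's output reduces to a single div. Roughly (values invented; in real output Hugo entity-escapes the quotes inside the attribute):

```html
<div class="gpx-map"
     data-photo-markers='[{"url":"https://example.org/images/bar_1920x.webp","lat":44.8378,"lng":-0.5667,"caption":"bar"}]'>
</div>
```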
Key gotcha with resources.Match: .Name on a matched resource is the full path relative to assets/ with a leading / — e.g. /images/articles/2025/foo/gallery/bar.png. path.Base .Name gives you just the filename; strings.TrimLeft "/" .Name gives you the key for the JSON lookup (e.g. images/articles/2025/foo/gallery/bar.png). Note the argument order: strings.TrimLeft takes the cutset first, then the string. The swapped strings.TrimLeft .Name "/" is wrong and silently returns an empty string, because it treats the entire path as the cutset and strips every character it contains from "/".
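The argument order is easy to check in an isolated template expression (path taken from the example above):

```go-html-template
{{/* Correct: cutset first, then the string to trim. */}}
{{ strings.TrimLeft "/" "/images/articles/2025/foo/gallery/bar.png" }}
{{/* → images/articles/2025/foo/gallery/bar.png */}}

{{/* Swapped: the whole path becomes the cutset to strip from "/",
     and every character of "/" is in that cutset, so nothing remains. */}}
{{ strings.TrimLeft "/images/articles/2025/foo/gallery/bar.png" "/" }}
{{/* → "" */}}
```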
## Step 3: JavaScript reads synchronously
assets/js/gpxmap.js no longer imports exifr or fires any async GPS reads:
```javascript
function addPhotoMarkers(map, el, onBoundsReady) {
  var markers = [];
  try {
    markers = JSON.parse(el.dataset.photoMarkers || '[]');
  } catch (e) { /* malformed attribute: fall back to no markers */ }

  var photoBounds = L.latLngBounds();
  markers.forEach(function (m, i) {
    var marker = L.marker([m.lat, m.lng], {
      icon: L.divIcon({
        className: 'photo-marker',
        html: '<span class="photo-marker-label">' + (i + 1) + '</span>',
        iconSize: [22, 22], iconAnchor: [11, 11]
      })
    }).addTo(map);
    photoBounds.extend([m.lat, m.lng]);
    marker.bindTooltip(m.caption, { direction: 'top', offset: [0, -14] });
    marker.on('click', function () {
      // Clicking a marker opens the matching lightbox image, if present.
      var triggers = document.querySelectorAll('.lb-trigger[data-src]');
      for (var j = 0; j < triggers.length; j++) {
        try {
          if (decodeURIComponent(triggers[j].dataset.src) === decodeURIComponent(m.url)) {
            triggers[j].click(); break;
          }
        } catch (e) {}
      }
    });
  });
  if (onBoundsReady) onBoundsReady(photoBounds.isValid() ? photoBounds : L.latLngBounds());
}
```

JSON.parse on a data- attribute is synchronous. All markers are placed in a single synchronous loop. onBoundsReady is called immediately at the end — no async waiting.
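The synchronous path can be seen in isolation: parsing the serialised attribute is a plain JSON.parse with no I/O involved. A small node-runnable sketch (marker data invented):

```javascript
// Stand-in for el.dataset.photoMarkers, as Hugo's jsonify would
// have serialised the marker array (values invented).
var attr = '[{"url":"/images/bar_1920x.webp","lat":44.8378,"lng":-0.5667,"caption":"bar"}]';

var markers = [];
try {
  markers = JSON.parse(attr || '[]');
} catch (e) { /* malformed attribute: fall back to no markers */ }

console.log(markers.length);     // 1
console.log(markers[0].caption); // bar
```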
## Before and after
| | Before | After |
|---|---|---|
| GPS data source | Read from EXIF in browser | Embedded in HTML at build time |
| Browser requests | One per gallery image (to read EXIF) | Zero |
| Marker appearance | Gradual, async | Instant, synchronous |
| exifr dependency | Required in browser | Only in pre-build Node script |
| Build time | Baseline | Slightly longer (one EXIF read per image) |
The trade-off is explicitly in favour of the reader: build time goes up marginally, page load speed improves significantly.
## What doesn’t get a marker
Images without GPS metadata simply don’t appear in data/photo-gps.json and are silently skipped. This is correct behaviour for indoor photos (château interiors, restaurants) where the camera didn’t record location, and for photos exported without location metadata.
To audit which gallery images are missing GPS, scripts/check-gps.sh uses exiftool to check each file directly:
```shell
#!/usr/bin/env bash
ASSETS_DIR="$(cd "$(dirname "$0")/.." && pwd)/assets"
missing=0
while IFS= read -r -d '' img; do
  gps=$(exiftool -GPSLatitude "$img" 2>/dev/null)
  if [[ -z "$gps" ]]; then
    echo "NO GPS: ${img#"$ASSETS_DIR/"}"
    ((missing++))
  fi
done < <(find "$ASSETS_DIR/images" -path "*/gallery/*" -type f \( -iname "*.jpg" -o -iname "*.jpeg" -o -iname "*.png" \) -print0 | sort -z)
echo "$missing image(s) missing GPS metadata."
```

## Summary
Moving GPS extraction to build time eliminated all browser-side EXIF reads. The map now renders its markers synchronously from data already embedded in the HTML — no waiting, no progressive loading. The pre-build Node script runs automatically before every npm run start and npm run build, so data/photo-gps.json is always up to date.
This is a specific application of a general principle: if computation can happen at build time rather than in the browser, do it there. The build runs once; the page loads for every reader.