[{"url":"/2026/05/the-css-specificity-trap-that-killed-my-paragraph-spacing/","title":"The CSS specificity trap that killed my paragraph spacing","summary":"How a routine margin reset overrode the owl selector and made all my prose paragraphs run together — and the one-line fix.","date":"2026-05-04","tags":["css"],"cover":"pink","body":"I was looking at a freshly styled blog post and something felt wrong. The text was readable, the line height was fine, but the paragraphs looked wrong — like there was no gap between them. There was a gap, technically, but it was the same as the gap between lines in the same paragraph. The page felt like one continuous block of text.\nThe layout had looked fine in the mockup. Something had broken it when I wired it up to real content.\nThe setup The prose styles were built on the owl selector — a pattern for adding spacing between sibling elements without touching individual components:\ncss Copy 123 .prose \u0026gt; * \u0026#43; * { margin-top: var(--s-5); /* 24px */ } This adds a top margin to every direct child of .prose that follows another child: headings after paragraphs, paragraphs after headings, blockquotes, code blocks, all of it. One rule, no element-specific exceptions.\nThere was also a reset to kill the browser\u0026rsquo;s default paragraph margin:\ncss Copy 123 .prose p { margin: 0; } Browsers add margin-block-start and margin-block-end to \u0026lt;p\u0026gt; elements by default. If you don\u0026rsquo;t zero them out, they stack with whatever spacing your design adds, and you get gaps that are slightly too large and inconsistent across browsers.\nSo: owl selector adds spacing, margin reset kills the browser default. Except it also killed the owl selector\u0026rsquo;s spacing. Every \u0026lt;p\u0026gt; inside .prose had margin-top: 0, full stop.\nWhy it happened CSS specificity is calculated as three columns: ID selectors, class/attribute/pseudo-class selectors, and element/pseudo-element selectors. 
Columns are compared left to right: a higher number in an earlier column wins outright, no matter what the later columns hold.\nRule IDs Classes Elements Specificity .prose \u0026gt; * + * 0 1 0 (0, 1, 0) .prose p 0 1 1 (0, 1, 1) .prose p has one more element selector than the owl selector, so it wins — regardless of which rule appears later in the source. Both rules target the same \u0026lt;p\u0026gt; element inside .prose. The reset wins, and the owl selector\u0026rsquo;s margin-top is overridden.\nThe common misconception is that source order is what matters. It does, but only as a tiebreaker when specificity is equal. Here they\u0026rsquo;re not equal, so source order is irrelevant.\nThe fix Add a third rule that is more specific than the reset and fires only between adjacent paragraphs:\ncss .prose p \u0026#43; p { margin-top: var(--s-6); /* 32px */ } Rule IDs Classes Elements Specificity .prose \u0026gt; * + * 0 1 0 (0, 1, 0) .prose p 0 1 1 (0, 1, 1) .prose p + p 0 1 2 (0, 1, 2) .prose p + p wins over both. The reset still kills the browser default margin on every \u0026lt;p\u0026gt; (which is what it\u0026rsquo;s there for), and the p + p rule re-adds spacing only between consecutive paragraphs — which is exactly the case the owl selector was supposed to handle.\nI used --s-6 (32px) rather than the owl selector\u0026rsquo;s --s-5 (24px) to give paragraph breaks a bit more weight than other element transitions. Paragraphs after paragraphs need a clearer visual break than, say, a paragraph after a heading. That distinction was there in the original design and was worth preserving.\nThe general lesson The \u0026ldquo;reset to zero, then re-add where needed\u0026rdquo; pattern is common in CSS. It\u0026rsquo;s a sensible approach — clear out browser defaults, then apply your own spacing intentionally. The trap is when the reset selector is more specific than the rule that re-adds spacing.\nBefore writing element { margin: 0 }, check what selectors are responsible for adding that margin back. 
If the re-add rule has lower specificity than the reset, the re-add will silently lose every time, and you\u0026rsquo;ll spend a while wondering why the spacing you thought you defined isn\u0026rsquo;t showing up.\nThe owl selector in particular is vulnerable to this: it\u0026rsquo;s deliberately low-specificity (one class selector, two universal selectors) so it doesn\u0026rsquo;t get in the way. Any element-level reset inside the same scoping class will outrank it.\n"},{"url":"/2026/05/building-an-about-page-in-hugo-without-touching-single.html/","title":"Building an about page in Hugo without touching single.html","summary":"How to use Hugo's layout key to give a standalone page its own template, rather than bending a shared layout with conditionals.","date":"2026-05-02","tags":["hugo","devops"],"cover":"yellow","body":"The temptation You have a working layouts/_default/single.html for article pages. It renders a hero image, an eyebrow label, a date, and a comments section. Now you need an About page — same fonts, same nav, same footer, but none of that article-specific structure.\nThe tempting path: add a conditional.\ngo-html-template Copy 1234 {{ if ne .Type \u0026#34;about\u0026#34; }} \u0026lt;div class=\u0026#34;article-hero-image\u0026#34;\u0026gt;...\u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;eyebrow\u0026#34;\u0026gt;Route report · {{ .Date.Format \u0026#34;January 2006\u0026#34; }}\u0026lt;/div\u0026gt; {{ end }} Don\u0026rsquo;t. Every conditional you add to a shared layout is a claim that two fundamentally different things are the same thing. single.html accumulates special cases over time, and eventually you\u0026rsquo;re reading a template full of if branches trying to reconstruct which of five page types you\u0026rsquo;re on.\nThe layout front matter key Hugo has a cleaner answer. Any content file can declare the template it wants:\nyaml Copy 1234 --- title: Hello. 
layout: about --- Hugo looks up layouts/_default/about.html and uses it for this page. single.html is never involved. The about page gets its own template, does exactly what it needs to do, and nothing else changes.\nThe layout file go-html-template Copy 12345678910111213141516171819202122232425 {{ define \u0026#34;main\u0026#34; }} \u0026lt;div class=\u0026#34;prose\u0026#34;\u0026gt; \u0026lt;header class=\u0026#34;about-header\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;eyebrow\u0026#34;\u0026gt;About\u0026lt;/div\u0026gt; \u0026lt;h1 class=\u0026#34;about-title\u0026#34;\u0026gt;{{ .Title }}\u0026lt;/h1\u0026gt; \u0026lt;/header\u0026gt; {{- with .Params.portrait -}} {{- $img := resources.Get (strings.TrimLeft \u0026#34;/\u0026#34; .src) -}} {{- if $img -}} {{- $portrait := $img.Resize \u0026#34;680x webp\u0026#34; -}} \u0026lt;figure class=\u0026#34;about-portrait\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;{{ $portrait.RelPermalink }}\u0026#34; alt=\u0026#34;{{ $.Params.portrait.caption | default $.Title }}\u0026#34; loading=\u0026#34;lazy\u0026#34;\u0026gt; {{- with $.Params.portrait.caption -}} \u0026lt;figcaption\u0026gt;{{ . }}\u0026lt;/figcaption\u0026gt; {{- end -}} \u0026lt;/figure\u0026gt; {{- end -}} {{- end }} {{ .Content }} \u0026lt;/div\u0026gt; {{ end }} {{ define \u0026quot;main\u0026quot; }} plugs into the baseof.html base template — the about page still gets the nav, footer, and any globally-loaded scripts. It just has a different main block.\nThe portrait image is optional. resources.Get returns nil if the file doesn\u0026rsquo;t exist; the {{ if $img }} guard means the figure is simply omitted rather than causing a build error. The page renders correctly before you have a photo to put on it.\nThe content file yaml Copy 12345678910 --- title: Hello. description: One sentence for the HTML meta description. layout: about portrait: src: /images/about/portrait.jpg caption: Somewhere alongside the Loire. 
--- Body text here, written in Markdown as normal. description is kept for \u0026lt;meta name=\u0026quot;description\u0026quot;\u0026gt; but the layout doesn\u0026rsquo;t render it on the page — the design goes straight from h1 to photo to prose. One field, two uses, no duplication.\nWhen to use a dedicated layout This pattern is worth reaching for whenever a page differs structurally from the norm rather than just in content. Good candidates:\nAn About page (no hero, no date, no comments) A Contact page (a form, no article body) An index page that needs a custom header or grid If a page needs a single field suppressed, a conditional in the shared template is probably fine. If it needs multiple sections replaced or a different overall structure, give it its own layout. The distinction is: am I configuring the template, or am I fighting it?\n"},{"url":"/2026/05/adding-a-hover-preview-tooltip-to-leaflet-markers/","title":"Adding a hover-preview tooltip to Leaflet markers","summary":"How to build a floating thumbnail tooltip for Leaflet photo markers — shared DOM element, edge-flip positioning, hover delays, and keyboard accessibility.","date":"2026-05-01","tags":["javascript","leaflet"],"cover":"cobalt","body":"Leaflet\u0026rsquo;s bindTooltip is fine for text labels but limited for richer previews. This is how to build a floating thumbnail tooltip that appears when hovering a photo marker, stays within the map bounds, and works with the keyboard.\nOne element, not many The natural instinct is to create a tooltip element per marker. Don\u0026rsquo;t. 
With many markers on the map, that\u0026rsquo;s many hidden elements in the DOM, each needing positioning logic run on every hover.\nA better approach: one shared element, repositioned and repopulated on demand.\njavascript Copy 123456789101112 var tip = document.createElement(\u0026#39;div\u0026#39;); tip.className = \u0026#39;velo-preview\u0026#39;; tip.setAttribute(\u0026#39;aria-hidden\u0026#39;, \u0026#39;true\u0026#39;); tip.innerHTML = \u0026#39;\u0026lt;img class=\u0026#34;velo-preview__img\u0026#34; alt=\u0026#34;\u0026#34; /\u0026gt;\u0026#39; \u0026#43; \u0026#39;\u0026lt;div class=\u0026#34;velo-preview__body\u0026#34;\u0026gt;\u0026#39; \u0026#43; \u0026#39;\u0026lt;div class=\u0026#34;velo-preview__caption\u0026#34;\u0026gt;\u0026lt;/div\u0026gt;\u0026#39; \u0026#43; \u0026#39;\u0026lt;/div\u0026gt;\u0026#39;; map.getContainer().appendChild(tip); var tipImg = tip.querySelector(\u0026#39;.velo-preview__img\u0026#39;); var tipCaption = tip.querySelector(\u0026#39;.velo-preview__caption\u0026#39;); Append it to the map container, not the document body, so position coordinates are relative to the map.\nHover delays Firing immediately on mouseenter feels jittery — graze across a cluster of markers and tooltips flash in and out. 
A short delay smooths this out:\njavascript Copy 12345678910111213141516171819202122 var HOVER_IN_DELAY = 80; // ms before showing var HOVER_OUT_DELAY = 200; // ms before hiding var hoverInTimer = null; var hoverOutTimer = null; var activeUrl = null; function scheduleShow(m, btnEl) { clearTimeout(hoverOutTimer); clearTimeout(hoverInTimer); if (activeUrl !== null) { showFor(m, btnEl); // already showing something — swap immediately } else { hoverInTimer = setTimeout(function () { showFor(m, btnEl); }, HOVER_IN_DELAY); } } function scheduleHide() { clearTimeout(hoverInTimer); clearTimeout(hoverOutTimer); hoverOutTimer = setTimeout(hide, HOVER_OUT_DELAY); } When moving between adjacent markers, activeUrl !== null causes an immediate swap rather than waiting for the in-delay again. The out-delay gives the user a moment to move from the marker to the tooltip without it disappearing.\nEdge-flip positioning Anchoring the tooltip at a fixed offset from the marker breaks near the edges of the map. 
Measure the tooltip dimensions and flip when it would overflow:\njavascript Copy 123456789101112131415161718192021222324252627282930313233343536373839 function showFor(m, btnEl) { // Populate content tipImg.src = m.thumb; tipImg.alt = m.caption; tipCaption.textContent = m.caption; // Measure marker position relative to map container var containerRect = map.getContainer().getBoundingClientRect(); var btnRect = btnEl.getBoundingClientRect(); var mx = btnRect.left - containerRect.left \u0026#43; btnRect.width / 2; var my = btnRect.top - containerRect.top \u0026#43; btnRect.height / 2; // Measure tooltip height while invisible tip.style.visibility = \u0026#39;hidden\u0026#39;; tip.classList.add(\u0026#39;is-visible\u0026#39;); var th = tip.offsetHeight || 168; tip.classList.remove(\u0026#39;is-visible\u0026#39;); tip.style.visibility = \u0026#39;\u0026#39;; var W = containerRect.width; var H = containerRect.height; var TW = 200; // fixed tooltip width from CSS var tx = mx \u0026#43; 14; var ty = my - th - 12; if (tx \u0026#43; TW \u0026gt; W - 8) { tx = mx - TW - 14; } // flip left if (ty \u0026lt; 8) { ty = my \u0026#43; 14; } // flip below // Clamp within container tx = Math.max(8, Math.min(W - TW - 8, tx)); ty = Math.max(8, Math.min(H - th - 8, ty)); tip.style.left = tx \u0026#43; \u0026#39;px\u0026#39;; tip.style.top = ty \u0026#43; \u0026#39;px\u0026#39;; tip.classList.add(\u0026#39;is-visible\u0026#39;); tip.setAttribute(\u0026#39;aria-hidden\u0026#39;, \u0026#39;false\u0026#39;); activeUrl = m.url; } The key step is measuring the tooltip\u0026rsquo;s height while it\u0026rsquo;s invisible. Apply the is-visible class (which gives it display: block or equivalent), read offsetHeight, then remove it before setting the final position and showing it for real. 
Without this, the height measurement returns 0 and vertical positioning is wrong.\nButton markers for keyboard access Change the marker inner element from a \u0026lt;div\u0026gt; to a \u0026lt;button\u0026gt;:\njavascript Copy 12345678 icon: L.divIcon({ className: \u0026#39;photo-marker\u0026#39;, html: \u0026#39;\u0026lt;button class=\u0026#34;photo-marker-label\u0026#34; type=\u0026#34;button\u0026#34; \u0026#39; \u0026#43; \u0026#39;aria-label=\u0026#34;Photo \u0026#39; \u0026#43; (i \u0026#43; 1) \u0026#43; \u0026#39;: \u0026#39; \u0026#43; escapeHtml(m.caption) \u0026#43; \u0026#39;\u0026#34;\u0026gt;\u0026#39; \u0026#43; (i \u0026#43; 1) \u0026#43; \u0026#39;\u0026lt;/button\u0026gt;\u0026#39;, iconSize: [22, 22], iconAnchor: [11, 11] }) A \u0026lt;button\u0026gt; is focusable by default, responds to Enter and Space, and exposes a role of button to screen readers. Wire focus/blur to the same show/hide functions as mouseenter/mouseleave and the tooltip works with keyboard navigation for free.\nPrefetch thumbnails Hover-in delay is 80ms, but image loading might take longer on a slow connection, producing a blank flash in the tooltip. Prefetch all thumbnail URLs on map load:\njavascript Copy 123 markers.forEach(function (m) { if (m.thumb) { var img = new Image(); img.src = m.thumb; } }); The browser caches the images. By the time the hover fires and tipImg.src is set, the image is already available — the tooltip appears populated.\nDismiss on pan and zoom The tooltip\u0026rsquo;s position is calculated relative to a static marker position. When the map moves, the marker moves but the tooltip doesn\u0026rsquo;t — it hangs in the wrong place. 
Dismiss it:\njavascript Copy 12 map.on(\u0026#39;movestart zoomstart\u0026#39;, hide); map.on(\u0026#39;click\u0026#39;, hide); "},{"url":"/2026/05/touch-events-and-focus-on-mobile-the-two-tap-trap/","title":"Touch events and focus on mobile — the two-tap trap","summary":"Why the 'first tap previews, second tap acts' pattern is broken on touch devices, and what to do instead.","date":"2026-05-01","tags":["javascript","mobile"],"cover":"tangerine","body":"The pattern that seems reasonable You have a UI element — a map marker, a card, a thumbnail — where hovering reveals a preview and clicking performs an action. On desktop this works cleanly: mouseenter shows the preview, click performs the action.\nOn touch devices there\u0026rsquo;s no hover, so you adapt: first tap shows the preview, second tap performs the action. The implementation usually looks something like this:\njavascript Copy 1234567891011 var stickyUrl = null; btnEl.addEventListener(\u0026#39;focus\u0026#39;, function () { showPreview(); }); btnEl.addEventListener(\u0026#39;blur\u0026#39;, function () { hidePreview(); stickyUrl = null; }); btnEl.addEventListener(\u0026#39;click\u0026#39;, function () { if (stickyUrl !== null) { openLightbox(stickyUrl); // second tap } else { stickyUrl = m.url; // first tap — show preview, remember URL } }); Reasonable enough. First tap sets stickyUrl and shows the preview. Second tap finds stickyUrl set and opens the lightbox.\nIt doesn\u0026rsquo;t work.\nWhy it breaks On mobile, the browser fires a blur event after every tap. The moment the user lifts their finger, the element loses focus. Your blur handler runs, clears stickyUrl, and resets everything — before the second tap can register.\nThe sequence of events for two taps on mobile is actually:\nFirst tap: focus → click (stickyUrl set ✓) Finger lifts: blur (stickyUrl cleared ✗) Second tap: focus → click (stickyUrl is null, shows preview again) The lightbox never opens. 
The user taps forever.\nThis is not a bug you can easily reproduce on a desktop browser\u0026rsquo;s mobile emulator — device emulation doesn\u0026rsquo;t faithfully reproduce mobile focus behaviour. You need a real device or browser stack to catch it.\nThe fix The two-tap pattern assumes focus can persist between taps on touch. It can\u0026rsquo;t. The fix is to stop trying.\nThe hover preview is inherently a pointer feature: on touch there is no hover, so the preview adds friction rather than value. Showing a preview on first tap forces the user to tap twice to do what they came to do.\nRemove the two-tap logic entirely. One tap, one action:\njavascript Copy 12345 btnEl.addEventListener(\u0026#39;mouseenter\u0026#39;, function () { scheduleShow(m, btnEl); }); btnEl.addEventListener(\u0026#39;mouseleave\u0026#39;, function () { scheduleHide(); }); btnEl.addEventListener(\u0026#39;focus\u0026#39;, function () { scheduleShow(m, btnEl); }); btnEl.addEventListener(\u0026#39;blur\u0026#39;, function () { scheduleHide(); }); btnEl.addEventListener(\u0026#39;click\u0026#39;, function () { openLightbox(m.url); }); mouseenter and mouseleave handle the hover preview on pointer devices — they never fire on touch. click opens the lightbox on all devices. The preview still works for desktop users; mobile users get a direct tap-to-action.\nIf you want to call hide() before opening the lightbox — to cleanly dismiss any visible preview — do it at the start of the action function:\njavascript Copy 1234 function openLightbox(url) { hide(); // dismiss preview before lightbox opens // … open the lightbox … } The broader rule Don\u0026rsquo;t rely on focus persisting between separate user interactions on touch devices. Desktop users have a cursor that maintains hover/focus state continuously; touch users interact in discrete, stateless taps. 
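If you do keep hover-only listeners, they can also be gated explicitly. This is a minimal sketch, not code from the original implementation: it uses the standard (hover: hover) media feature, with the matcher function injected so the logic also runs outside a browser.

```javascript
// Illustrative: attach hover listeners only where the device reports real
// hover support. In a browser, pass window.matchMedia; injecting the matcher
// keeps this testable in Node.
function supportsHover(matchMediaFn) {
  if (typeof matchMediaFn !== 'function') {
    return false; // no matchMedia available: assume a touch-style input
  }
  return matchMediaFn('(hover: hover)').matches;
}

// Hypothetical browser wiring:
//   if (supportsHover(window.matchMedia.bind(window))) {
//     btnEl.addEventListener('mouseenter', onEnter);
//     btnEl.addEventListener('mouseleave', onLeave);
//   }
//   btnEl.addEventListener('click', onActivate); // the one-tap path, always on
```

Touch-only devices match (hover: none) rather than (hover: hover), so the enhancement simply never attaches there.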
Design for the touch model — one tap, one outcome — and layer hover enhancements on top for pointer devices.\nThe test for whether a pattern works on touch: if removing the hover/focus event listeners entirely would break the intended flow, the flow is designed for desktop and needs a touch alternative (or to be simplified).\n"},{"url":"/2026/05/validating-hugo-front-matter-with-nodetest/","title":"Validating Hugo front matter with node:test","summary":"A lightweight, zero-dependency test that walks your Hugo content tree and catches broken image paths before they reach production.","date":"2026-05-01","tags":["hugo","testing","devops"],"cover":"mint","body":"The silent failure problem Hugo doesn\u0026rsquo;t error on a missing image in front matter. If image: /images/articles/2025/foo/hero.jpg refers to a file that doesn\u0026rsquo;t exist, the build succeeds, the template gets nil back from resources.Get, and the page renders without a hero image. No warning. No clue.\nOn a site with dozens of articles and hundreds of image references, a single mistyped path is easy to miss. It might go live, or it might sit there broken until someone notices the blank space in a browser.\nThe fix: a one-file test Node 24 includes a built-in test runner — node:test — that needs no framework, no config, and no additional dependencies. 
A single file can walk the entire content tree and fail fast on any broken reference.\njavascript Copy 1234567891011121314151617181920212223242526272829303132333435363738394041 // tests/content-images.test.mjs import { test } from \u0026#39;node:test\u0026#39;; import assert from \u0026#39;node:assert/strict\u0026#39;; import { readFile } from \u0026#39;node:fs/promises\u0026#39;; import { existsSync } from \u0026#39;node:fs\u0026#39;; import { glob } from \u0026#39;node:fs/promises\u0026#39;; import { join, resolve } from \u0026#39;node:path\u0026#39;; const ROOT = resolve(import.meta.dirname, \u0026#39;..\u0026#39;); const ASSETS = join(ROOT, \u0026#39;assets\u0026#39;); const CONTENT = join(ROOT, \u0026#39;content\u0026#39;); function extractPaths(yaml) { const paths = []; // image: /images/articles/... const image = yaml.match(/^image:\\s*(.\u0026#43;)$/m); if (image) paths.push(image[1].trim()); // thumbnail: // url: /images/articles/... const thumb = yaml.match(/^\\s\u0026#43;url:\\s*(.\u0026#43;)$/m); if (thumb) paths.push(thumb[1].trim()); return paths; } test(\u0026#39;all front matter image references resolve to existing files in assets/\u0026#39;, async () =\u0026gt; { const files = await Array.fromAsync(glob(\u0026#39;articles/**/*.md\u0026#39;, { cwd: CONTENT })); const broken = []; for (const rel of files) { const src = await readFile(join(CONTENT, rel), \u0026#39;utf8\u0026#39;); const match = src.match(/^---\\n([\\s\\S]*?)\\n---/); if (!match) continue; for (const ref of extractPaths(match[1])) { const abs = join(ASSETS, ref.replace(/^\\//, \u0026#39;\u0026#39;)); if (!existsSync(abs)) broken.push(`${rel}: ${ref}`); } } assert.deepEqual(broken, [], `Broken image references:\\n${broken.join(\u0026#39;\\n\u0026#39;)}`); }); Run it:\nbash Copy 1 node --test tests/content-images.test.mjs Output when everything passes:\ntext Copy 1 ✔ all front matter image references resolve to existing files in assets/ (10ms) Output when something\u0026rsquo;s 
broken:\ntext Copy 123 ✗ all front matter image references resolve to existing files in assets/ AssertionError: Broken image references: articles/2025/canal-des-deux-mers/2025-09-05_cdm_day_05/index.md: /images/articles/2025/cdm/cdm_day_05/hero.jpg Integrating with the rest of your tests Add it to package.json:\njson Copy 123 \u0026#34;scripts\u0026#34;: { \u0026#34;test:content-images\u0026#34;: \u0026#34;node --test tests/content-images.test.mjs\u0026#34; } If you\u0026rsquo;re using Playwright, exclude it from Playwright\u0026rsquo;s discovery — it\u0026rsquo;s a node:test file, not a Playwright spec, and Playwright will try to run it as one if it matches the filename pattern:\ntypescript Copy 12345 // playwright.config.ts export default defineConfig({ testIgnore: [\u0026#39;**/content-images.test.mjs\u0026#39;], // … }); Run order: this test needs no server and no build, so it fits alongside ESLint and Stylelint in the fast, server-free check stage — run it before the Playwright tests that require a running dev server.\nExtending it The same pattern extends to any front-matter field that references a file. GPX tracks, thumbnail images, og:image overrides — add a regex for each field and a file-existence check. The test stays fast regardless of how many fields you add, because it\u0026rsquo;s just filesystem lookups, not HTTP requests or Hugo builds.\nFor a site with structured YAML front matter, you could replace the regex extraction with a proper YAML parser (js-yaml or yaml), but the regex approach covers the common simple cases without any extra dependency.\n"},{"url":"/2026/04/rules-engines-on-the-jvm-in-2026/","title":"Rules engines on the JVM in 2026","summary":"Drools is no longer the only game in town. A look at Easy Rules, RuleBook, and when you should reach for a rules engine at all.","date":"2026-04-28","tags":["rules","java","architecture"],"cover":"cobalt","body":"Rules engines occupy a strange corner of the Java ecosystem. 
They solve a real problem — externalising business logic that changes faster than your release cycle — but the dominant choice for years, Drools, has always carried significant weight: a steep learning curve, a KIE workbench nobody asked for, and a community that seems perpetually one Red Hat acquisition away from abandonment.\nIn 2026 the picture is a bit more interesting. Here is what I have been using and thinking about.\nThe contenders Easy Rules is the lightweight option. It is an annotation-driven framework that feels like writing plain Java, not a DSL. You define a rule as a POJO, annotate the condition and action methods, and register it with an engine. Five minutes to productive. The trade-off is expressiveness: it has no conflict resolution beyond priority ordering, no forward-chaining inference, and no fact pattern matching. If you need those things, Easy Rules is not your tool.\nRuleBook takes a fluent, functional approach. Rules are defined as lambdas in a chain. It is readable and testable. Like Easy Rules, it trades power for simplicity.\nDrools still wins on raw capability. The Rete algorithm, backward chaining, complex event processing, a full rule language (DRL). If you are genuinely doing expert-system-style inference over a large fact base, nothing else comes close on the JVM. The cost is complexity, and the 8.x stream is navigating a messy transition to the cloud-native Kogito platform.\nWhen to reach for one The honest answer is: not as often as you think.\nA database query or a feature flag will handle most conditional logic that looks like it needs a rules engine. The pattern that actually benefits is a large, frequently-changing set of business rules that non-developers need to own — underwriting rules, pricing bands, compliance checks. 
The rules engine earns its keep when the alternative is a release cycle per business change.\nIf your \u0026ldquo;rules\u0026rdquo; are ten conditionals that a developer will touch twice a year, you do not need a framework. Write the conditions, test them, ship them.\nMy current default For most projects I reach for Easy Rules first. The annotation model maps well to how business analysts describe rules, it is straightforward to test, and its limitations become apparent quickly enough that you will know if you need to escalate to Drools before you are too deep.\nDrools gets the call when the problem is genuinely rule-heavy — trading limit validation, insurance underwriting, the sort of thing that arrives as a 40-page specification and changes monthly.\nThe JVM rules engine landscape is not exciting in 2026, but it is functional. Pick the simplest tool that solves the problem.\n"},{"url":"/2026/04/adding-copyright-watermarks-to-images-with-hugos-asset-pipeline/","title":"Adding copyright watermarks to images with Hugo's asset pipeline","summary":"How to stamp a copyright notice onto every image at Hugo build time using images.Text — including the font trap, the shadow technique for readability, and how to keep multiple shortcodes in sync.","date":"2026-04-27","tags":["hugo","devops"],"cover":"cobalt","body":"Hugo\u0026rsquo;s extended image processing pipeline includes an images.Text filter that can stamp text onto images at build time. This post shows how to use it to add a copyright watermark — covering the font requirement, a shadow technique for legibility on varied backgrounds, and a non-obvious consistency requirement when the same image is processed in more than one template.\nPrerequisites: images must be in assets/ Hugo\u0026rsquo;s image processing only works on resources in the assets/ directory. Files in static/ are served as-is and cannot be processed.\nIf your images are in static/images/, you\u0026rsquo;ll need to move them to assets/images/ first. 
Once there, use resources.Get and resources.Match instead of path string construction, and use .RelPermalink or .Permalink on the resulting resource instead of building URLs manually.\nThe basic pattern For a gallery lightbox image at 1920px:\ngo-html-template Copy 123456789 {{- $base := .Resize \u0026#34;1920x webp\u0026#34; -}} {{- $wm := images.Text \u0026#34;© 2025 Stephen Masters\u0026#34; (dict \u0026#34;color\u0026#34; \u0026#34;#ffffff\u0026#34; \u0026#34;size\u0026#34; 16 \u0026#34;font\u0026#34; $font \u0026#34;x\u0026#34; (sub $base.Width 260) \u0026#34;y\u0026#34; (sub $base.Height 26) ) -}} {{- $full := $base | images.Filter $wm -}} images.Text returns a filter. images.Filter applies it to the image and returns a new image resource. The original is unchanged.\nThe x and y parameters are the pixel coordinates of the top-left corner of the text, measured from the top-left of the image. To position in the bottom-right corner, subtract from the image\u0026rsquo;s .Width and .Height after resizing — you need to resize first to know the dimensions.\nThe font trap: Hugo\u0026rsquo;s default font is ASCII-only Here\u0026rsquo;s the problem that trips almost everyone:\nHugo\u0026rsquo;s default font for images.Text is Go\u0026rsquo;s basicfont.Face7x13 — a small bitmap font that covers printable ASCII (characters 0x20–0x7E). The copyright symbol © is Unicode U+00A9. It is not ASCII. 
If you use the default font, the © character will not render — you\u0026rsquo;ll get a blank or the character will be silently dropped.\nTo use ©, you must provide a TrueType font via the font parameter:\ngo-html-template Copy 12345678 {{- $font := resources.Get \u0026#34;fonts/watermark.ttf\u0026#34; -}} {{- $wm := images.Text \u0026#34;© 2025 Stephen Masters\u0026#34; (dict \u0026#34;color\u0026#34; \u0026#34;#ffffff\u0026#34; \u0026#34;size\u0026#34; 16 \u0026#34;font\u0026#34; $font \u0026#34;x\u0026#34; (sub $base.Width 260) \u0026#34;y\u0026#34; (sub $base.Height 26) ) -}} A good choice is DejaVu Sans — open-source (Bitstream Vera / SIL licence, freely redistributable), wide Unicode coverage, and a reasonable visual weight for a watermark. Place the .ttf file at assets/fonts/watermark.ttf.\nMaking it legible: the shadow technique A plain white watermark on a white or light background is invisible. A dark watermark on a dark background is equally invisible. Since photos vary widely in tone and colour, any single-colour text will disappear somewhere.\nThe solution is a drop shadow: apply two text filters in sequence — a dark semi-transparent layer offset by one pixel, then the main white text on top.\ngo-html-template Copy 123456789 {{- $font := resources.Get \u0026#34;fonts/watermark.ttf\u0026#34; -}} {{- $year := .Page.Date.Format \u0026#34;2006\u0026#34; -}} {{- $copyright := printf \u0026#34;© %s Stephen Masters\u0026#34; $year -}} {{- $base := .Resize \u0026#34;1920x webp\u0026#34; -}} {{- $wmX := sub $base.Width 260 -}} {{- $wmY := sub $base.Height 26 -}} {{- $shadow := images.Text $copyright (dict \u0026#34;color\u0026#34; \u0026#34;#000000cc\u0026#34; \u0026#34;size\u0026#34; 16 \u0026#34;font\u0026#34; $font \u0026#34;x\u0026#34; (add $wmX 1) \u0026#34;y\u0026#34; (add $wmY 1)) -}} {{- $text := images.Text $copyright (dict \u0026#34;color\u0026#34; \u0026#34;#ffffff\u0026#34; \u0026#34;size\u0026#34; 16 \u0026#34;font\u0026#34; $font 
\u0026#34;x\u0026#34; $wmX \u0026#34;y\u0026#34; $wmY ) -}} {{- $full := $base | images.Filter $shadow $text -}} images.Filter accepts multiple filters and applies them in order. The dark shadow (80% opacity, 1px down-right) provides contrast against light areas; the full-white text sits on top and reads against dark areas.\nUsing the article date for the year Rather than hardcoding the year, use the article\u0026rsquo;s date front matter field. This means 2024 articles automatically get \u0026ldquo;© 2024\u0026rdquo; and 2025 articles get \u0026ldquo;© 2025\u0026rdquo;:\ngo-html-template Copy 12 {{- $year := .Page.Date.Format \u0026#34;2006\u0026#34; -}} {{- $copyright := printf \u0026#34;© %s Stephen Masters\u0026#34; $year -}} In shortcode context .Page.Date is available directly. In a layout template (e.g. _default/single.html) use $.Date.\nWhich images to watermark Not every processed image needs a watermark. The priority is the full-size images that are actually worth copying:\nImage type Size Watermarked Gallery lightbox 1920px Yes — primary sharing target Inline article images 1200px Yes Article hero 1400px Yes Gallery thumbnails 800px No — too small to be useful Route card thumbnails 640px No — too small The multi-shortcode consistency requirement This is the non-obvious part.\nOn Velostevie, the same gallery images are processed in two places:\ngallery.html shortcode — produces thumbnail + lightbox versions; the lightbox data-src URL points to the processed image gpxmap.html shortcode — produces a full-size version for each GPS-tagged photo; the marker URL in data-photo-markers JSON points to the processed image When a user clicks a map marker, JavaScript matches the marker URL against the gallery\u0026rsquo;s data-src to open the lightbox. This match must succeed.\nHugo\u0026rsquo;s image pipeline caches processed images by their source file plus their processing operations. 
If gallery.html applies a watermark and gpxmap.html does not (or applies different parameters), they produce different processed images with different URLs — and the click-through silently fails.\nThe fix: both shortcodes must apply identical filter parameters. Same font, same colour, same size, same offsets. Then Hugo produces the same cached image resource in both places, and the URLs match.\ngo-html-template Copy 123456 {{- /* In both gallery.html AND gpxmap.html — identical */ -}} {{- $wmX := sub $base.Width 260 -}} {{- $wmY := sub $base.Height 26 -}} {{- $shadow := images.Text $copyright (dict \u0026#34;color\u0026#34; \u0026#34;#000000cc\u0026#34; \u0026#34;size\u0026#34; 16 \u0026#34;font\u0026#34; $font \u0026#34;x\u0026#34; (add $wmX 1) \u0026#34;y\u0026#34; (add $wmY 1)) -}} {{- $text := images.Text $copyright (dict \u0026#34;color\u0026#34; \u0026#34;#ffffff\u0026#34; \u0026#34;size\u0026#34; 16 \u0026#34;font\u0026#34; $font \u0026#34;x\u0026#34; $wmX \u0026#34;y\u0026#34; $wmY ) -}} {{- $full := $base | images.Filter $shadow $text -}} Since both shortcodes are on the same page, .Page.Date.Format \u0026quot;2006\u0026quot; produces the same year in both. The image path is the same. The operations are identical. Hugo returns the same cached file.\nCache invalidation When you change watermark parameters (colour, size, position), Hugo does not automatically reprocess the cached images. The cache at resources/_gen/images/ stores the result of each unique combination of source image + processing operations. Changing a filter parameter changes the cache key, so in theory a fresh build would produce the new version.\nIn practice, the dev server can serve stale content. 
To force a clean rebuild:\nbash Copy 12 rm -rf resources/_gen/images/ npm run start This is especially important during watermark tuning — if the text looks wrong and you\u0026rsquo;ve already made template changes, clearing the cache is the first thing to try.\nSizing the x offset The x parameter positions the left edge of the text. To avoid the text wrapping at the image edge, you need to leave enough room for the full text width.\nWith DejaVu Sans at 16px, \u0026ldquo;© 2025 Stephen Masters\u0026rdquo; is approximately 230px wide. Using sub $base.Width 260 places the left edge at 1660px on a 1920px image, leaving 260px of space to the right edge — comfortably more than 230px. If you change the font, size, or name, you may need to adjust this offset.\nThere is no programmatic way to get the rendered text width from images.Text — you can only work empirically. Add 20–30px of buffer beyond your estimate and inspect the result.\nSummary Images must be in assets/ to use Hugo\u0026rsquo;s processing pipeline Hugo\u0026rsquo;s default basicfont is ASCII-only — use a TrueType font for © A drop shadow (two sequential filters) gives readability on any background Use .Page.Date.Format \u0026quot;2006\u0026quot; for an automatically correct copyright year When the same image is processed in multiple templates, all must apply identical filter parameters or processed-image URLs will diverge Clear resources/_gen/images/ when changing filter parameters to avoid serving stale cached images "},{"url":"/2026/04/embedding-gps-photo-markers-at-build-time-with-hugo/","title":"Embedding GPS photo markers at build time with Hugo","summary":"How to replace browser-side EXIF GPS reading with a pre-build Node script that embeds coordinates directly in the HTML — faster maps, no async loading, no browser EXIF parsing.","date":"2026-04-27","tags":["hugo","javascript","devops"],"cover":"cobalt","body":"The problem with reading GPS in the browser An earlier version of the Velostevie map read 
GPS coordinates from image EXIF metadata in the browser using exifr. The flow was:\nHugo shortcode emits a list of photo URLs as a data-photos attribute JavaScript fetches each image from the server exifr extracts the GPS coordinates from the EXIF data Leaflet markers are placed once all reads complete This works, but it has a significant cost: the browser has to download every image just to read its metadata. On a page with thirty gallery photos that might mean thirty HTTP requests firing before a single marker appears. The map loads blank and fills in gradually as the GPS reads complete.\nThere\u0026rsquo;s also an architectural smell: the browser is doing work that could be done once, at build time. Coordinates don\u0026rsquo;t change. The same GPS data is computed fresh on every page load.\nA better approach: extract GPS before Hugo runs The site already runs a Node script before every build to prepare data. The pattern for moving GPS extraction to build time is:\nPre-build: a Node script reads GPS EXIF from all images and writes a JSON data file Build: the Hugo shortcode reads that JSON and embeds coordinates directly in the HTML Runtime: the browser reads coordinates synchronously from the DOM — no fetches, no async, instant markers Step 1: the pre-build script scripts/extract-gps.mjs walks the image directory and writes data/photo-gps.json:\njavascript Copy 123456789101112131415161718192021222324252627282930313233 import { readdir, stat, writeFile } from \u0026#39;fs/promises\u0026#39;; import { join, relative } from \u0026#39;path\u0026#39;; import exifr from \u0026#39;exifr\u0026#39;; const ASSETS_DIR = new URL(\u0026#39;../assets\u0026#39;, import.meta.url).pathname; const OUT_FILE = new URL(\u0026#39;../data/photo-gps.json\u0026#39;, import.meta.url).pathname; async function walk(dir) { const entries = await readdir(dir, { withFileTypes: true }); const files = []; for (const entry of entries) { const full = join(dir, entry.name); if (entry.isDirectory()) 
files.push(...await walk(full)); else if (/\\.(jpg|jpeg|png)$/i.test(entry.name)) files.push(full); } return files; } const files = await walk(join(ASSETS_DIR, \u0026#39;images\u0026#39;)); const result = {}; for (const file of files) { try { const gps = await exifr.gps(file); if (gps?.latitude \u0026amp;\u0026amp; gps?.longitude) { const key = relative(ASSETS_DIR, file).replace(/\\\\/g, \u0026#39;/\u0026#39;); result[key] = { lat: gps.latitude, lng: gps.longitude }; } } catch { /* no GPS — skip */ } } await writeFile(OUT_FILE, JSON.stringify(result, null, 2)); console.log(`Wrote ${Object.keys(result).length} GPS entries to data/photo-gps.json`); The keys are paths relative to assets/ with no leading slash — matching how Hugo\u0026rsquo;s resources.Match reports resource names (after stripping the leading / with strings.TrimLeft \u0026quot;/\u0026quot; .Name).\nWire it into the build in package.json:\njson Copy 12345 \u0026#34;scripts\u0026#34;: { \u0026#34;extract-gps\u0026#34;: \u0026#34;node scripts/extract-gps.mjs\u0026#34;, \u0026#34;prestart\u0026#34;: \u0026#34;npm run -s mod:vendor \u0026amp;\u0026amp; npm run -s extract-gps\u0026#34;, \u0026#34;prebuild\u0026#34;: \u0026#34;npm run clean:public \u0026amp;\u0026amp; npm run -s mod:vendor \u0026amp;\u0026amp; npm run -s extract-gps\u0026#34; } prestart and prebuild run automatically before npm run start and npm run build, so data/photo-gps.json is always fresh when Hugo runs. 
The Cloudflare Pages build command also needs to include the step explicitly:\nbash Copy 1 npm ci \u0026amp;\u0026amp; hugo mod vendor \u0026amp;\u0026amp; node scripts/extract-gps.mjs \u0026amp;\u0026amp; hugo --gc --minify Step 2: the Hugo shortcode layouts/shortcodes/gpxmap.html now reads from site.Data[\u0026quot;photo-gps\u0026quot;] and embeds all the data it needs at build time:\ngo-html-template Copy 123456789101112131415161718192021222324 {{- $dir := .Get \u0026#34;gallery\u0026#34; -}} {{- $photoMarkers := slice -}} {{- if and (not $isSection) $dir -}} {{- $gpsData := index $.Site.Data \u0026#34;photo-gps\u0026#34; -}} {{- $images := resources.Match (printf \u0026#34;%s/*\u0026#34; $dir) -}} {{- range $images -}} {{- $filename := path.Base .Name -}} {{- if not (hasPrefix $filename \u0026#34;.\u0026#34;) -}} {{- $key := strings.TrimLeft \u0026#34;/\u0026#34; .Name -}} {{- $gps := index $gpsData $key -}} {{- if $gps -}} {{- $full := .Resize \u0026#34;1920x webp\u0026#34; -}} {{- $base := strings.TrimSuffix (path.Ext $filename) $filename -}} {{- $caption := replace (strings.Trim (replaceRE \u0026#34;^[0-9]\u0026#43;\u0026#34; \u0026#34;\u0026#34; $base) \u0026#34;_\u0026#34;) \u0026#34;_\u0026#34; \u0026#34; \u0026#34; -}} {{- $marker := dict \u0026#34;url\u0026#34; $full.Permalink \u0026#34;lat\u0026#34; $gps.lat \u0026#34;lng\u0026#34; $gps.lng \u0026#34;caption\u0026#34; $caption -}} {{- $photoMarkers = $photoMarkers | append $marker -}} {{- end -}} {{- end -}} {{- end -}} {{- end -}} {{- with $photoMarkers }} \u0026lt;div class=\u0026#34;gpx-map\u0026#34; data-photo-markers=\u0026#34;{{ jsonify . }}\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; {{- end -}} For each image that has a GPS entry in the data file, we build a {url, lat, lng, caption} object and serialise the whole array to JSON in the data-photo-markers attribute. 
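The caption derivation buried in that template chain (trim the extension, strip the numeric prefix, trim underscores, replace underscores with spaces) is easier to read as plain JavaScript. This is an illustrative mirror, not code the site ships, and the example filename is invented:

```javascript
// Mirrors the shortcode's caption logic: drop the extension, strip a
// leading numeric prefix, trim stray underscores, and turn the
// remaining underscores into spaces.
function captionFromFilename(filename) {
  const base = filename.replace(/\.[^.]+$/, '');    // "03_Pont_du_Gard.png" → "03_Pont_du_Gard"
  const noPrefix = base.replace(/^[0-9]+/, '');     // → "_Pont_du_Gard"
  const trimmed = noPrefix.replace(/^_+|_+$/g, ''); // → "Pont_du_Gard"
  return trimmed.replace(/_/g, ' ');                // → "Pont du Gard"
}

console.log(captionFromFilename('03_Pont_du_Gard.png')); // → "Pont du Gard"
```
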
Hugo does this work once at build time and caches it.\nKey gotcha with resources.Match: .Name on a matched resource is the full path relative to assets/ with a leading / — e.g. /images/articles/2025/foo/gallery/bar.png. path.Base .Name gives you just the filename. strings.TrimLeft \u0026quot;/\u0026quot; .Name gives you the key for the JSON lookup (e.g. images/articles/2025/foo/gallery/bar.png). Use the strings.TrimLeft cutset string argument order — cutset first. strings.TrimLeft .Name \u0026quot;/\u0026quot; is wrong and silently returns an empty string (it treats the entire path as the cutset to strip from /).\nStep 3: JavaScript reads synchronously assets/js/gpxmap.js no longer imports exifr or fires any async GPS reads:\njavascript Copy 12345678910111213141516171819202122232425262728293031323334 function addPhotoMarkers(map, el, onBoundsReady) { var markers = []; try { markers = JSON.parse(el.dataset.photoMarkers || \u0026#39;[]\u0026#39;); } catch (e) {} var photoBounds = L.latLngBounds(); markers.forEach(function (m, i) { var marker = L.marker([m.lat, m.lng], { icon: L.divIcon({ className: \u0026#39;photo-marker\u0026#39;, html: \u0026#39;\u0026lt;span class=\u0026#34;photo-marker-label\u0026#34;\u0026gt;\u0026#39; \u0026#43; (i \u0026#43; 1) \u0026#43; \u0026#39;\u0026lt;/span\u0026gt;\u0026#39;, iconSize: [22, 22], iconAnchor: [11, 11] }) }).addTo(map); photoBounds.extend([m.lat, m.lng]); marker.bindTooltip(m.caption, { direction: \u0026#39;top\u0026#39;, offset: [0, -14] }); marker.on(\u0026#39;click\u0026#39;, function () { var triggers = document.querySelectorAll(\u0026#39;.lb-trigger[data-src]\u0026#39;); for (var j = 0; j \u0026lt; triggers.length; j\u0026#43;\u0026#43;) { try { if (decodeURIComponent(triggers[j].dataset.src) === decodeURIComponent(m.url)) { triggers[j].click(); break; } } catch (e) {} } }); }); if (onBoundsReady) onBoundsReady(photoBounds.isValid() ? photoBounds : L.latLngBounds()); } JSON.parse on a data- attribute is synchronous. 
All markers are placed in a single synchronous loop. onBoundsReady is called immediately at the end — no async waiting.\nBefore and after Before After GPS data source Read from EXIF in browser Embedded in HTML at build time Browser requests One per gallery image (to read EXIF) Zero Marker appearance Gradual, async Instant, synchronous exifr dependency Required in browser Only in pre-build Node script Build time No change Slightly longer (one EXIF read per image) The trade-off is explicitly in favour of the reader: build time goes up marginally, page load speed improves significantly.\nWhat doesn\u0026rsquo;t get a marker Images without GPS metadata simply don\u0026rsquo;t appear in data/photo-gps.json and are silently skipped. This is correct behaviour for indoor photos (château interiors, restaurants) where the camera didn\u0026rsquo;t record location, and for photos exported without location metadata.\nTo audit which gallery images are missing GPS, scripts/check-gps.sh uses exiftool to check each file directly:\nbash Copy 1234567891011 #!/usr/bin/env bash ASSETS_DIR=\u0026#34;$(cd \u0026#34;$(dirname \u0026#34;$0\u0026#34;)/..\u0026#34; \u0026amp;\u0026amp; pwd)/assets\u0026#34; missing=0 while IFS= read -r -d \u0026#39;\u0026#39; img; do gps=$(exiftool -GPSLatitude \u0026#34;$img\u0026#34; 2\u0026gt;/dev/null) if [[ -z \u0026#34;$gps\u0026#34; ]]; then echo \u0026#34;NO GPS: ${img#\u0026#34;$ASSETS_DIR/\u0026#34;}\u0026#34; ((missing\u0026#43;\u0026#43;)) fi done \u0026lt; \u0026lt;(find \u0026#34;$ASSETS_DIR/images\u0026#34; -path \u0026#34;*/gallery/*\u0026#34; -type f \\( -iname \u0026#34;*.jpg\u0026#34; -o -iname \u0026#34;*.jpeg\u0026#34; -o -iname \u0026#34;*.png\u0026#34; \\) -print0 | sort -z) echo \u0026#34;$missing image(s) missing GPS metadata.\u0026#34; Summary Moving GPS extraction to build time eliminated all browser-side EXIF reads. 
The map now renders its markers synchronously from data already embedded in the HTML — no waiting, no progressive loading. The pre-build Node script runs automatically before every npm run start and npm run build, so data/photo-gps.json is always up to date.\nThis is a specific application of a general principle: if computation can happen at build time rather than in the browser, do it there. The build runs once; the page loads for every reader.\n"},{"url":"/2026/04/hugo-image-processing-gotchas-what-the-docs-dont-warn-you-about/","title":"Hugo image processing gotchas: what the docs don't warn you about","summary":"A collection of non-obvious traps in Hugo's image processing pipeline: the ASCII-only default font, the strings.TrimLeft argument order, stale image caches, and why two shortcodes processing the same image can produce different URLs.","date":"2026-04-27","tags":["hugo","devops"],"cover":"tangerine","body":"Hugo\u0026rsquo;s image processing pipeline is powerful, but it has some sharp edges that are easy to hit and hard to diagnose because they all fail silently. This is a collection of the ones I\u0026rsquo;ve run into while building Velostevie.\n1. The default font for images.Text is ASCII-only Hugo\u0026rsquo;s images.Text filter uses Go\u0026rsquo;s basicfont.Face7x13 by default — a small bitmap font covering printable ASCII (0x20–0x7E). If you include any non-ASCII character in your text, it will not render. There is no error. 
The character is silently dropped or produces a blank glyph.\nThe most common casualty: the copyright symbol ©, which is U+00A9.\ngo-html-template Copy 12 {{- /* This produces \u0026#34;2025 Stephen Masters\u0026#34; with a gap where © should be */ -}} {{- $wm := images.Text \u0026#34;© 2025 Stephen Masters\u0026#34; (dict \u0026#34;size\u0026#34; 14) -}} Fix: provide a TrueType font via the font parameter.\ngo-html-template Copy 12 {{- $font := resources.Get \u0026#34;fonts/watermark.ttf\u0026#34; -}} {{- $wm := images.Text \u0026#34;© 2025 Stephen Masters\u0026#34; (dict \u0026#34;size\u0026#34; 14 \u0026#34;font\u0026#34; $font) -}} The font must be a resource in assets/. DejaVu Sans is a good choice for watermarks: open-source, comprehensive Unicode support, freely redistributable.\n2. strings.TrimLeft takes the cutset first This one is a classic Go template trap. Hugo\u0026rsquo;s strings.TrimLeft signature is:\ntext Copy 1 strings.TrimLeft CUTSET STRING The cutset (the set of characters to strip) comes first. The string to operate on comes second.\ngo-html-template Copy 123456 {{- /* Correct — strips leading \u0026#34;/\u0026#34; from .Name */ -}} {{- $key := strings.TrimLeft \u0026#34;/\u0026#34; .Name -}} {{- /* Wrong — treats .Name as the cutset, strips those characters from \u0026#34;/\u0026#34; */ -}} {{- /* Returns \u0026#34;\u0026#34; because every character in \u0026#34;/\u0026#34; is in the cutset. */ -}} {{- $key := strings.TrimLeft .Name \u0026#34;/\u0026#34; -}} The wrong version returns an empty string and produces no error. I hit this when building the GPS data lookup: the key came back empty, every GPS lookup returned nil, and no photo markers appeared. The fix was trivial once found, but finding it took a while.\nThis affects strings.TrimLeft and strings.TrimRight, which both take the cutset first. Confusingly, strings.Trim takes the string first and the cutset second, so check each function\u0026rsquo;s signature rather than assuming they match.\n3. 
Changing filter parameters doesn\u0026rsquo;t automatically invalidate the dev server cache Hugo caches processed images in resources/_gen/images/. The cache key is derived from the source image and the processing operations applied. When you change filter parameters (font, size, colour, position), the cache key changes — so a new build will produce a new image.\nHowever, the dev server (hugo server) does not always detect that filter parameters have changed and re-run the template. In practice, if you change your images.Text parameters and the watermark looks wrong (or unchanged), the server may still be serving the old processed file from cache.\nFix: clear the image cache and restart.\nbash Copy 12 rm -rf resources/_gen/images/ npm run start This forces Hugo to reprocess every image from scratch. The first build after clearing will be slow; subsequent builds only reprocess changed files.\n4. Two templates processing the same image can produce different URLs Hugo\u0026rsquo;s image pipeline is deterministic: the same source file + the same operations = the same output file at the same URL. This is how the cache works, and it\u0026rsquo;s usually what you want.\nThe trap: if the same image is processed in two different templates with different operations, you get two different output files at two different URLs — and any code that expects them to match will fail silently.\nOn Velostevie, gallery images are processed in two places:\ngallery.html shortcode: image.Resize \u0026quot;1920x webp\u0026quot; + watermark filter → URL goes into data-src on lightbox trigger buttons gpxmap.html shortcode: image.Resize \u0026quot;1920x webp\u0026quot; + watermark filter → URL goes into data-photo-markers JSON, used by the map to open the lightbox when a marker is clicked The JavaScript match is: decodeURIComponent(marker.url) === decodeURIComponent(trigger.dataset.src). 
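That normalised comparison can be sketched as a standalone helper (the sameImageUrl name is mine; the guard matters because decodeURIComponent throws on malformed percent-encoding, which is why the site's code wraps it in try/catch):

```javascript
// Compare two image URLs after decoding, so "%20" and a literal space
// compare equal. Malformed percent-encoding is treated as "no match"
// rather than allowed to throw.
function sameImageUrl(a, b) {
  try {
    return decodeURIComponent(a) === decodeURIComponent(b);
  } catch (e) {
    return false; // malformed URI component — skip this candidate
  }
}

console.log(sameImageUrl('/img/le%20pont.webp', '/img/le pont.webp')); // → true
console.log(sameImageUrl('/img/a.webp', '/img/b.webp'));               // → false
```
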
If the two shortcodes produce different URLs for the same image, this comparison silently fails and clicking a map marker does nothing.\nFix: ensure both templates apply identical processing steps in the same order with the same parameters.\ngo-html-template Copy 1234567891011 {{- /* gallery.html */ -}} {{- $base := .Resize \u0026#34;1920x webp\u0026#34; -}} {{- $shadow := images.Text $copyright (dict \u0026#34;color\u0026#34; \u0026#34;#000000cc\u0026#34; \u0026#34;size\u0026#34; 16 \u0026#34;font\u0026#34; $font \u0026#34;x\u0026#34; (add $wmX 1) \u0026#34;y\u0026#34; (add $wmY 1)) -}} {{- $text := images.Text $copyright (dict \u0026#34;color\u0026#34; \u0026#34;#ffffff\u0026#34; \u0026#34;size\u0026#34; 16 \u0026#34;font\u0026#34; $font \u0026#34;x\u0026#34; $wmX \u0026#34;y\u0026#34; $wmY) -}} {{- $full := $base | images.Filter $shadow $text -}} {{- /* gpxmap.html — identical */ -}} {{- $base := .Resize \u0026#34;1920x webp\u0026#34; -}} {{- $shadow := images.Text $copyright (dict \u0026#34;color\u0026#34; \u0026#34;#000000cc\u0026#34; \u0026#34;size\u0026#34; 16 \u0026#34;font\u0026#34; $font \u0026#34;x\u0026#34; (add $wmX 1) \u0026#34;y\u0026#34; (add $wmY 1)) -}} {{- $text := images.Text $copyright (dict \u0026#34;color\u0026#34; \u0026#34;#ffffff\u0026#34; \u0026#34;size\u0026#34; 16 \u0026#34;font\u0026#34; $font \u0026#34;x\u0026#34; $wmX \u0026#34;y\u0026#34; $wmY) -}} {{- $full := $base | images.Filter $shadow $text -}} Since both templates are on the same page, variables like $copyright (derived from .Page.Date) and $wmX/$wmY (derived from $base.Width/$base.Height) will have the same values in both. Hugo returns the same cached image resource and the URLs match.\n5. resources.Match returns full paths with a leading slash When you call resources.Match \u0026quot;images/gallery/*\u0026quot;, the .Name property on each result is the full path relative to assets/, with a leading / — e.g. 
/images/gallery/foo.png, not foo.png.\nThis matters when you need to use the path as a lookup key in a data file (where the key was written without a leading slash) or when extracting just the filename.\ngo-html-template Copy 123456789 {{- range $images -}} {{- /* .Name is \u0026#34;/images/gallery/foo.png\u0026#34; */ -}} {{- /* Filename only */ -}} {{- $filename := path.Base .Name -}} {{- /* \u0026#34;foo.png\u0026#34; */ -}} {{- /* Key for data lookup (no leading slash) */ -}} {{- $key := strings.TrimLeft \u0026#34;/\u0026#34; .Name -}} {{- /* \u0026#34;images/gallery/foo.png\u0026#34; */ -}} {{- end -}} Remember: strings.TrimLeft \u0026quot;/\u0026quot; .Name — cutset first (see gotcha 2).\nSummary Gotcha Symptom Fix Default font is ASCII-only © and other non-ASCII chars silently absent Provide a TrueType font via font parameter strings.TrimLeft argument order Empty string returned, lookups fail silently Cutset first: strings.TrimLeft \u0026quot;/\u0026quot; .Name Dev server caches stale images Watermark changes don\u0026rsquo;t appear rm -rf resources/_gen/images/ then restart Different operations = different URLs Marker click-through silently fails Keep all templates that process the same image in sync resources.Match returns full paths GPS/data lookups fail, captions wrong Use path.Base .Name for filename, strings.TrimLeft \u0026quot;/\u0026quot; .Name for keys All five of these fail silently. None produce a Hugo build error. 
The only diagnostic is to add logging or inspect the generated HTML to check what\u0026rsquo;s actually in the processed attributes.\n"},{"url":"/2026/04/building-a-gps-photo-map-with-hugo-leaflet-and-exifr/","title":"Building a GPS photo map with Hugo, Leaflet, and exifr","summary":"How to build an interactive map for a Hugo static site that reads GPS coordinates directly from image EXIF data and plots photo markers alongside a GPX route — with all the gotchas.","date":"2026-04-26","tags":["hugo","javascript","leaflet","devops"],"cover":"cobalt","body":"On my cycling blog Velostevie each trip article includes an interactive map showing the GPX route and numbered markers for every photo taken along the way. Clicking a marker opens the photo in a lightbox. The whole thing is a Hugo static site — no server, no database — so the map has to be built from static files.\nThis post walks through the architecture: a Hugo shortcode that wires up the data, a vanilla JavaScript IIFE that uses Leaflet for the map and exifr to extract GPS coordinates from image EXIF metadata, and the non-obvious gotchas I ran into along the way.\nThe finished map — GPX polyline with numbered photo markers on OpenStreetMap tiles What we\u0026rsquo;re building The end result looks like this:\nA Leaflet map is embedded in each article page. If the article directory contains a .gpx file, the route is drawn as a polyline. If the article has a gallery/ folder, each photo that has GPS metadata embedded gets a numbered circular marker at its location on the map. Clicking a marker opens the photo in the site\u0026rsquo;s lightbox. If there\u0026rsquo;s no GPX file, the map still renders and fits itself to the bounds of the photo markers. 
The shortcode is called like this in the article\u0026rsquo;s index.md:\nhugo Copy 1 {{\u0026lt; gpxmap gallery=\u0026#34;images/articles/2025/canal-des-deux-mers/2025-09-01_cdm_day_01/gallery\u0026#34; \u0026gt;}} Architecture overview The design is split cleanly across two phases:\nPhase Where What happens Build time Hugo shortcode (gpxmap.html) Finds GPX files and photo paths, encodes them as data-* attributes on a \u0026lt;div\u0026gt; Runtime JavaScript (gpxmap.js) Reads those attributes, initialises Leaflet, fetches GPX, reads EXIF GPS from photos Hugo templates run at build time with no access to the browser. JavaScript runs in the browser with no access to Hugo\u0026rsquo;s template context. The data-* attributes on the map \u0026lt;div\u0026gt; are the handoff point between the two.\nThe Hugo shortcode The full shortcode lives at layouts/shortcodes/gpxmap.html:\ngo-html-template Copy 1234567891011121314151617181920212223242526272829303132333435363738394041424344454647484950 {{- $isSection := eq .Page.Kind \u0026#34;section\u0026#34; -}} {{- $gpxFiles := .Page.Resources.Match \u0026#34;*.gpx\u0026#34; -}} {{- /* On section pages, aggregate GPX files from all child articles */ -}} {{- if $isSection -}} {{- range .Page.RegularPages.ByDate -}} {{- range .Resources.Match \u0026#34;*.gpx\u0026#34; -}} {{- $gpxFiles = $gpxFiles | append . 
-}} {{- end -}} {{- end -}} {{- end -}} {{- /* On single pages, build photo URL list from gallery param */ -}} {{- $dir := .Get \u0026#34;gallery\u0026#34; -}} {{- $photoUrls := slice -}} {{- if and (not $isSection) $dir -}} {{- $files := readDir (printf \u0026#34;static/%s\u0026#34; $dir) -}} {{- range $files -}} {{- if not (hasPrefix .Name \u0026#34;.\u0026#34;) -}} {{- $photoUrls = $photoUrls | append (printf \u0026#34;%s/%s\u0026#34; $dir .Name) -}} {{- end -}} {{- end -}} {{- end -}} {{- if or $gpxFiles $photoUrls -}} {{- $urls := slice -}} {{- range $gpxFiles -}} {{- $urls = $urls | append .Permalink -}} {{- end -}} {{- $absPhotoUrls := slice -}} {{- range $photoUrls -}} {{- $absPhotoUrls = $absPhotoUrls | append (absURL .) -}} {{- end -}} \u0026lt;div class=\u0026#34;gpx-block\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;gpx-map\u0026#34; {{- with $urls }} data-gpx-files=\u0026#34;{{ delimit . \u0026#34;|\u0026#34; }}\u0026#34;{{- end }} {{- with $absPhotoUrls }} data-photos=\u0026#34;{{ delimit . \u0026#34;|\u0026#34; }}\u0026#34;{{- end }}\u0026gt;\u0026lt;/div\u0026gt; {{- if and (not $isSection) $gpxFiles -}} \u0026lt;div class=\u0026#34;gpx-download\u0026#34;\u0026gt; {{- range $i, $f := $gpxFiles -}} {{- $label := \u0026#34;Download GPX\u0026#34; -}} {{- if gt (len $gpxFiles) 1 -}} {{- $label = printf \u0026#34;Download GPX (%d of %d)\u0026#34; (add $i 1) (len $gpxFiles) -}} {{- end -}} \u0026lt;a href=\u0026#34;{{ $f.Permalink }}\u0026#34; download=\u0026#34;{{ $f.Name }}\u0026#34; class=\u0026#34;gpx-download-link\u0026#34;\u0026gt;↓ {{ $label }}\u0026lt;/a\u0026gt; {{- end -}} \u0026lt;/div\u0026gt; {{- end -}} \u0026lt;/div\u0026gt; {{- end -}} A few things worth noting:\nGPX files are page bundle resources. They live in the same directory as index.md and are accessed via .Page.Resources.Match \u0026quot;*.gpx\u0026quot;. Their .Permalink gives an absolute URL that the browser can fetch().\nPhoto paths are read from the filesystem. 
readDir lists the contents of static/\u0026lt;gallery\u0026gt;/ at build time. Each path is then converted to an absolute URL using absURL.\nSection pages aggregate GPX from all children. The shortcode can be dropped on a series _index.md to show the whole route across all days.\nThe shortcode renders nothing if there\u0026rsquo;s no data. If there are no GPX files and no gallery, the \u0026lt;div\u0026gt; is not emitted at all.\nGotcha 1: absURL and leading slashes This one cost me most of the debugging time.\nabsURL takes a path and prepends the site\u0026rsquo;s baseURL. The trap is that if you pass a path with a leading /, Hugo treats it as absolute from the domain root and strips the base URL subpath. This matters when the site lives at a subpath (e.g. GitHub Pages at https://username.github.io/repo-name/).\ngo-html-template Copy 1234567 {{- /* Wrong — leading slash strips the subpath */ -}} {{- absURL \u0026#34;/images/articles/foo/bar.png\u0026#34; -}} {{- /* → https://username.github.io/images/articles/foo/bar.png */ -}} {{- /* Correct — path-relative, subpath is preserved */ -}} {{- absURL \u0026#34;images/articles/foo/bar.png\u0026#34; -}} {{- /* → https://username.github.io/repo-name/images/articles/foo/bar.png */ -}} Since the gallery paths come from readDir they don\u0026rsquo;t start with /, so the fix was simply not to prepend one.\nGotcha 2: canonifyURLs doesn\u0026rsquo;t touch data-* attributes Hugo\u0026rsquo;s canonifyURLs = true setting rewrites root-relative URLs in standard HTML attributes (href, src, etc.) to absolute URLs. It does not touch data-* attributes. 
Any URL passed to JavaScript via a data- attribute must be made absolute in the template itself — as we do above with absURL.\nGotcha 3: Go template URL-encodes attributes whose name contains \u0026quot;url\u0026quot; Go\u0026rsquo;s html/template package has a security rule: any HTML attribute whose name contains the substring url is treated as a URL context and its value is URL-encoded. This will silently mangle a pipe-delimited list of paths.\ngo-html-template Copy 12345 {{- /* Dangerous — Go will URL-encode the value because the name contains \u0026#34;url\u0026#34; */ -}} \u0026lt;div data-photo-urls=\u0026#34;{{ delimit $absPhotoUrls \u0026#34;|\u0026#34; }}\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; {{- /* Safe — no \u0026#34;url\u0026#34; substring in the attribute name */ -}} \u0026lt;div data-photos=\u0026#34;{{ delimit $absPhotoUrls \u0026#34;|\u0026#34; }}\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; The fix is to choose attribute names that don\u0026rsquo;t contain url. I use data-gpx-files and data-photos.\nLoading the scripts The Leaflet CSS, Leaflet JS, exifr, and gpxmap.js should only load on pages that actually use the shortcode — there\u0026rsquo;s no point adding that weight to every page.\nHugo\u0026rsquo;s .HasShortcode method makes this easy. In layouts/_default/baseof.html:\ngo-html-template Copy 12345678910 \u0026lt;head\u0026gt;{{ partial \u0026#34;head.html\u0026#34; . }}\u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; ... 
{{- if .HasShortcode \u0026#34;gpxmap\u0026#34; }} \u0026lt;script src=\u0026#34;/leaflet/leaflet.js\u0026#34; defer\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;/exifr/exifr-lite.umd.js\u0026#34; defer\u0026gt;\u0026lt;/script\u0026gt; {{ $gpxMapJs := resources.Get \u0026#34;js/gpxmap.js\u0026#34; | minify | fingerprint }} \u0026lt;script src=\u0026#34;{{ $gpxMapJs.RelPermalink }}\u0026#34; integrity=\u0026#34;{{ $gpxMapJs.Data.Integrity }}\u0026#34; defer\u0026gt;\u0026lt;/script\u0026gt; {{- end }} \u0026lt;/body\u0026gt; And the Leaflet CSS in layouts/partials/head.html:\ngo-html-template Copy 123 {{- if .HasShortcode \u0026#34;gpxmap\u0026#34; }} \u0026lt;link rel=\u0026#34;stylesheet\u0026#34; href=\u0026#34;/leaflet/leaflet.css\u0026#34;\u0026gt; {{- end }} Leaflet and exifr are served locally from static/leaflet/ and static/exifr/ — not from a CDN. This keeps the site self-contained and avoids third-party dependencies.\nImportant: do not add crossorigin=\u0026quot;\u0026quot; to locally-served script tags. It switches the request into CORS mode, which gains nothing for same-origin files and can break script loading depending on how they are served. 
The attribute is only needed for cross-origin resources.\nThe JavaScript The full script is an IIFE (Immediately Invoked Function Expression) — no ES modules, no bundler required.\njavascript Copy 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115 (function () { function parseGPX(xmlText) { var parser = new DOMParser(); var doc = parser.parseFromString(xmlText, \u0026#39;application/xml\u0026#39;); var pts = doc.getElementsByTagName(\u0026#39;trkpt\u0026#39;); return Array.from(pts).map(function (pt) { return [parseFloat(pt.getAttribute(\u0026#39;lat\u0026#39;)), parseFloat(pt.getAttribute(\u0026#39;lon\u0026#39;))]; }); } function addPhotoMarkers(map, el, onBoundsReady) { var raw = el.dataset.photos; if (!raw || typeof exifr === \u0026#39;undefined\u0026#39;) { if (onBoundsReady) onBoundsReady(L.latLngBounds()); return; } var urls = raw.split(\u0026#39;|\u0026#39;).filter(Boolean); var photoBounds = L.latLngBounds(); var remaining = urls.length; function done() { remaining--; if (remaining === 0 \u0026amp;\u0026amp; onBoundsReady) onBoundsReady(photoBounds); } urls.forEach(function (url, i) { exifr.gps(url).then(function (gps) { if (!gps || !gps.latitude || !gps.longitude) { done(); return; } var filename = decodeURIComponent(url.split(\u0026#39;/\u0026#39;).pop()); var caption = filename.replace(/\\.[^.]\u0026#43;$/, \u0026#39;\u0026#39;).replace(/[_-]\u0026#43;/g, \u0026#39; \u0026#39;); var num = i \u0026#43; 1; var marker = L.marker([gps.latitude, gps.longitude], { icon: L.divIcon({ className: \u0026#39;photo-marker\u0026#39;, html: \u0026#39;\u0026lt;span class=\u0026#34;photo-marker-label\u0026#34;\u0026gt;\u0026#39; \u0026#43; num \u0026#43; \u0026#39;\u0026lt;/span\u0026gt;\u0026#39;, iconSize: [22, 22], iconAnchor: [11, 11] }) }).addTo(map); 
photoBounds.extend([gps.latitude, gps.longitude]); marker.bindTooltip(caption, { direction: \u0026#39;top\u0026#39;, offset: [0, -14] }); marker.on(\u0026#39;click\u0026#39;, function () { var triggers = document.querySelectorAll(\u0026#39;.lb-trigger\u0026#39;); for (var j = 0; j \u0026lt; triggers.length; j\u0026#43;\u0026#43;) { try { if (decodeURIComponent(triggers[j].dataset.src) === decodeURIComponent(url)) { triggers[j].click(); break; } } catch (e) { /* malformed URI — skip */ } } }); done(); }).catch(function () { done(); }); }); } function initMap(el) { var map = L.map(el); L.tileLayer(\u0026#39;https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png\u0026#39;, { attribution: \u0026#39;\u0026amp;copy; \u0026lt;a href=\u0026#34;https://www.openstreetmap.org/copyright\u0026#34;\u0026gt;OpenStreetMap\u0026lt;/a\u0026gt; contributors\u0026#39;, maxZoom: 19 }).addTo(map); var gpxRaw = el.dataset.gpxFiles; var gpxUrls = gpxRaw ? gpxRaw.split(\u0026#39;|\u0026#39;).filter(Boolean) : []; if (gpxUrls.length === 0) { addPhotoMarkers(map, el, function (photoBounds) { if (photoBounds \u0026amp;\u0026amp; photoBounds.isValid()) { map.fitBounds(photoBounds, { padding: [40, 40] }); } }); return; } var colors = [\u0026#39;#2563eb\u0026#39;, \u0026#39;#dc2626\u0026#39;]; var bounds = L.latLngBounds(); var pending = gpxUrls.length; addPhotoMarkers(map, el, null); gpxUrls.forEach(function (url, i) { fetch(url) .then(function (res) { return res.text(); }) .then(function (text) { var coords = parseGPX(text); if (coords.length) { var poly = L.polyline(coords, { color: colors[i % colors.length], weight: 3, opacity: 0.85 }).addTo(map); bounds.extend(poly.getBounds()); } }) .finally(function () { pending--; if (pending === 0 \u0026amp;\u0026amp; bounds.isValid()) { map.fitBounds(bounds, { padding: [20, 20] }); } }); }); } if (typeof L !== \u0026#39;undefined\u0026#39;) { document.querySelectorAll(\u0026#39;.gpx-map\u0026#39;).forEach(initMap); } })(); GPX parsing: 
getElementsByTagName, not querySelectorAll GPX files declare a default XML namespace (e.g. xmlns=\u0026quot;http://www.topografix.com/GPX/1/1\u0026quot;). In a namespaced document, querySelectorAll('trkpt') finds nothing because CSS selectors don\u0026rsquo;t match namespaced elements without a namespace prefix. getElementsByTagName('trkpt') ignores the namespace and works correctly.\nURL normalisation when matching photos to lightbox triggers The marker click handler needs to find the matching lightbox trigger element for the photo. Both the data-photos attribute on the map \u0026lt;div\u0026gt; and the data-src attribute on lightbox triggers carry URLs — but one may have literal spaces in filenames and the other may have %20. The comparison silently fails unless both sides are normalised with decodeURIComponent.\nAsync GPS reads and fitBounds exifr.gps(url) is asynchronous. In photo-only mode (no GPX file), fitBounds must not be called until all GPS reads have completed — otherwise you\u0026rsquo;re fitting to an empty or incomplete bounds object. The onBoundsReady callback pattern ensures fitBounds only runs once all the promises have settled.\nexifr: use the full build, not the lite build The exifr library comes in two builds: a lite build and a full build. The lite build supports JPEG EXIF data but not PNG GPS. If your gallery images are PNGs (as mine are, exported from an iPhone), you must use the full build.\nThe file in this project is named exifr-lite.umd.js but is actually the full build — I replaced the lite build with node_modules/exifr/dist/full.umd.js and kept the original filename. Worth checking if you\u0026rsquo;re copying this pattern.\nEmbedding GPS metadata in photos For photo markers to appear, images need GPS coordinates embedded in their EXIF data. Modern smartphone photos include this automatically if location services are enabled during capture. 
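The namespace and URL-normalisation gotchas above are easy to reproduce outside the browser. As a minimal sketch (Python is used here purely for illustration; the GPX fragment and filenames are invented), the same two behaviours show up in the standard library: an unqualified lookup misses elements in a default namespace, a namespace-agnostic getElementsByTagName finds them, and percent-encoded URLs only compare equal after decoding both sides.

```python
# Illustrative sketch only: the GPX fragment and filenames are invented.
import urllib.parse
import xml.etree.ElementTree as ET
from xml.dom import minidom

gpx = ('<gpx xmlns="http://www.topografix.com/GPX/1/1">'
       '<trk><trkseg>'
       '<trkpt lat="51.5" lon="-0.1"/>'
       '<trkpt lat="51.6" lon="-0.2"/>'
       '</trkseg></trk></gpx>')

# Like querySelectorAll('trkpt'): an unqualified name does not match
# elements living in the default namespace, so nothing is found.
root = ET.fromstring(gpx)
print(root.findall('.//trkpt'))                # []

# Like getElementsByTagName('trkpt'): matches the tag name as written,
# regardless of the default namespace declaration.
dom = minidom.parseString(gpx)
print(len(dom.getElementsByTagName('trkpt')))  # 2

# URL normalisation: a literal-space filename and a %20-encoded one
# only compare equal once both sides are decoded.
a = 'photos/my photo.jpg'
b = 'photos/my%20photo.jpg'
print(a == b)                                              # False
print(urllib.parse.unquote(a) == urllib.parse.unquote(b))  # True
```

This mirrors why the JavaScript above parses GPX with getElementsByTagName and runs decodeURIComponent on both sides of the photo-URL comparison.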
If you\u0026rsquo;re exporting from a photo management app, make sure the export includes location metadata.\nTo check which images are missing GPS data, I wrote a small shell script:\nbash #!/usr/bin/env bash # scripts/check-gps.sh # Lists gallery images that are missing GPS metadata. find static/images -type f \\( -iname \u0026#34;*.jpg\u0026#34; -o -iname \u0026#34;*.jpeg\u0026#34; -o -iname \u0026#34;*.png\u0026#34; \\) | while read -r f; do lat=$(exiftool -s3 -GPSLatitude \u0026#34;$f\u0026#34; 2\u0026gt;/dev/null) if [ -z \u0026#34;$lat\u0026#34; ]; then echo \u0026#34;NO GPS: $f\u0026#34; fi done Run it from the project root:\nbash ./scripts/check-gps.sh Deployment note: Cloudflare Pages vs GitHub Pages This site originally deployed to GitHub Pages, which serves Hugo sites at a subpath (https://username.github.io/repo-name/). That subpath causes all the URL generation headaches described above.\nI switched to Cloudflare Pages, which serves the site at a clean root domain — https://velostevie.com/ — with no subpath. This eliminates an entire class of URL problems. Cloudflare also deploys automatically on every push to main with no extra workflow configuration needed.\nIf you are deploying a Hugo site with data-* attributes carrying URLs and you have flexibility over your hosting, Cloudflare Pages is the simpler choice.\nSummary Here\u0026rsquo;s the full approach in brief:\nHugo shortcode runs at build time: finds .gpx page bundle resources and reads the gallery directory listing, converts both to absolute URLs, emits them as data-gpx-files and data-photos on a \u0026lt;div\u0026gt;. baseof.html uses .HasShortcode \u0026quot;gpxmap\u0026quot; to conditionally load Leaflet, exifr, and the map script — only on pages that need it. gpxmap.js reads the data attributes, initialises a Leaflet map, fetches and parses each GPX file, then calls exifr.gps() on each photo to get its coordinates and place a marker. 
Marker clicks trigger the lightbox via decodeURIComponent-normalised URL matching. The trickiest parts were all URL-related: absURL with path-relative inputs, canonifyURLs not touching data-* attributes, Go template URL-encoding attribute names, and URL normalisation in JavaScript. Once those were understood the architecture itself is fairly straightforward.\nThe source is available on GitHub at stephen-masters/velostevie.\nClicking a photo marker opens the photo in the lightbox "},{"url":"/2021/10/moving-to-hugo/","title":"Moving to Hugo","summary":"Switching the blog to Hugo on GitHub Pages after years on various platforms.","date":"2021-10-02","tags":["devops"],"cover":"yellow","body":" Over quite a few years this blog has spent time on a number of sites such as gratiartis.org and scattercode.co.uk. I\u0026rsquo;m now trying to simplify things, so I\u0026rsquo;m switching to Hugo on GitHub Pages.\nThis latest iteration involves a move to stephen-masters.github.io.\n"},{"url":"/2021/03/computing-in-armenia/","title":"Computing in Armenia","summary":"Armenia's pivotal role in developing computers in the Soviet Union — written in preparation for an Armenian Institute event on Women in Science.","date":"2021-03-16","tags":["armenia","history"],"cover":"pink","body":" As a Westerner, all of the history of computing I learned growing up was focused on developments in the UK and USA. 
Recently, however, in preparation for an Armenian Institute event on Armenian Women in Science and Innovation, I have been reading up on the parallel developments within the Soviet Union and Armenia\u0026rsquo;s pivotal role.\nAs a companion to the event, I put together a short article about the History of computing in Armenia.\nComputing In Armenia - From Soviet Military Mainframes To Incubators And Startups A few years ago, I visited Bletchley Park, and I went to have a look at one of the bombes - the electro-mechanical devices that were used to decipher the German Enigma machine messages. An old lady in a wheelchair rolled up next to me and I struck up a conversation in which she started telling me about how she used to program it by setting up various configurations of patch cables. All of her fellow bombe operators were women from the Wrens (WRNS - Women\u0026rsquo;s Royal Naval Service).\nAs the war progressed, the Germans developed new ciphers that were harder to decipher than those produced by the original Enigma machine. Colossus, the first programmable, electronic, digital computer, was designed to crack them. The operating team for Colossus was made up of 272 Wrens and only 27 men. In the USA, ENIAC, the first general-purpose digital computer, was developed to calculate artillery firing tables. The team of six who programmed ENIAC were all women.\nNowadays, if you look at the world\u0026rsquo;s five largest tech companies, only 14% of the software engineers are women. 
Silicon Valley has developed a reputation for a \u0026ldquo;bro culture\u0026rdquo; in the tech industry, and many large tech companies have reputations as highly toxic environments for women.\nSomething seems to have gone wrong.\nEven if we broaden our view beyond software engineers, the percentage of women employed by tech companies seems to be around 20% in the USA and UK.\nHowever, as I discovered more recently, things seem to be quite different in Armenia. Armenia\u0026rsquo;s technology sector has been growing rapidly in recent years - by 33% in 2018. Reportedly, around 30% of the people working in this sector are women, and at many newer companies, this percentage is around 50% or more.\nArmenia has a long history in computing, and a much larger role in the history of Soviet computing than many would imagine for such a small country. For example, somewhere between 30% and 40% of Soviet military computers were built in Armenia.\nThis history seems to begin with Andronik Iosifyan. Born in 1905 in the Kalbajar district of Artsakh, he became director of the All-Union Scientific Research Institute of Electromechanics (AUSRIE) in Moscow. Iosifyan specialised in designing electronics, and used his skills to design electrical systems for missiles, nuclear submarines, satellites and spacecraft, such as the first Soviet Meteor weather satellites. The electronics for the Soyuz spacecraft and Mir space station were developed under his leadership.\nVictor Hambartsumyan, known as the founder of theoretical astrophysics in the Soviet Union, was looking for designs for a computer that might be assembled at the Yerevan Scientific Research Institute of Mathematical Machines (YerSRIMM). He travelled to Moscow to meet Iosifyan, in the hope of securing such a design. Sergey Korolev, the lead designer of the first Soviet spaceships and satellites, was also part of this meeting. 
Iosifyan knew Isaak Bruk, who had designed a minicomputer called the M-3 for scientific calculations, and arranged to build three at AUSRIE between 1957 and 1958. One of these stayed at AUSRIE, one went to Korolev and the other to Sergey Mergelyan at YerSRIMM.\nYerSRIMM had been established in 1956, with the mathematician Mergelyan as its founding director. Receiving the M-3 computer in Yerevan enabled Mergelyan and his team to accelerate their work in computing, and they designed a new computer called Aragats between 1958 and 1960, based on the M-3.\nThe Hrazdan/Razdan family of computers was designed at YerSRIMM between 1958 and 1965, and included the first semiconductor computer in the Soviet Union. Manufactured from 1961, the Razdan-2 could perform 5000 operations per second, and the Razdan-3, released in 1966, could perform in the order of 30,000 operations per second. The Razdan computers were large - designed to occupy a 50 square metre room - and were mostly used for military purposes. A Razdan-3 can still be seen in the Computer Science Museum in Szeged, Hungary.\nLater, the Nairi minicomputer was developed to solve scientific, engineering and economic problems. This was a smaller machine, designed to be operated by a single person, and some were in use in Moscow railway stations. A number of iterations of Nairi were developed, with those in the 1980s designed to be compatible with DEC PDP-11 computers.\nSadly, the breakup of the Soviet Union seems to have led to a lack of support and funding for research. In 1996, disappointed by the situation, Mergelyan left Armenia to join his son in Sacramento, California. Through the 90s, it seems that much was lost, but by the late 90s and early 2000s, efforts were being made to revive the industry.\nFortunately, in recent years, the outlook for the technology industry in Armenia has been very positive. 
In 2015, the technology industry was responsible for 5% of GDP, and it was realised that this industry is relatively unaffected by Armenia’s geopolitical situation, with the country landlocked and two of its borders closed to trade. New laws were introduced, making it much easier to found, operate, and grow a tech startup in Armenia. In 2014, it was reported that the IT sector was growing at a rate of 20% per year; in 2018, it grew by 33%. Technology incubators have been set up, funded by Silicon Valley venture capital funds with the express aim of supporting Armenian startup businesses, and there are already success stories. The Armenian technology industry seems to have a bright future ahead.\nThe Armenian Institute will be hosting an event on Thursday 18th March 2021, to explore the current situation with a panel of speakers who are all involved in this exciting growth industry in Armenia. Please join us to discover more about what is happening and what the future looks like for innovation in Armenia.\n"},{"url":"/2020/08/digitizing-surmelian/","title":"Digitizing Surmelian","summary":"In 2020, the Armenian Institute republished I Ask You, Ladies and Gentlemen by Leon Surmelian. This is the story of how I got involved in digitising it.","date":"2020-08-06","tags":["book","armenia"],"cover":"lilac","body":" I Ask You, Ladies and Gentlemen, by Leon Surmelian was a bestseller when it was published in 1945, but for some reason it went out of print and was never republished. As it is such a beautiful book, we at the Armenian Institute recently republished it. In order to do so, I had my first serious experience of digitizing a book via scanning and OCR.\nI wrote a short article about that experience: Digitizing Surmelian.\nIf you would like to buy a copy of the book, there are a few options. 
Some are better than others, depending on where you live.\nArmenian Institute store Amazon Kindle Abril Books (Los Angeles) National Association for Armenian Studies and Research (NAASR) (Massachusetts) "},{"url":"/2017/03/using-the-kie-workbench-api-to-create-a-project/","title":"Using the KIE Workbench API to create a project","summary":"The KIE Workbench REST API lets you automate project setup in a fresh container — useful when VirtualBox keeps changing your IP address.","date":"2017-03-20","tags":["java","rules"],"cover":"lilac","body":"I was playing around with the KIE Workbench Docker image and came across an issue whereby the container would become unusable if the IP address of the host changed. My sandbox is VirtualBox, running Ubuntu 16.04, so this would happen all the time. I needed some way to be able to blow away an existing container and start up a new one with the project I had been working on.\nThis turned out to be a bit fiddly. For example, I couldn’t clone the Git repository for my project and push it into a new container. The new container didn’t have a repository to push to. Similarly, it wasn’t enough to copy the myproject.git file out of the original image and into the new one. It clearly takes more than that.\nA process that did work was to start up a new container, go into the Workbench web application and create an organisation, repository and project, with the same names as in the previous container. I could then pull that repository into the project I had already cloned. After a little merging, I could then push the repository back to the remote Workbench repository.\nAs you might imagine, creating the organisation, repository and project with the exact same names was tedious and error-prone when done manually. I wasn’t happy. However, I spotted that Workbench provides a REST API for a small set of actions, such as creating an organisation, creating a repository and creating a project. 
There’s some documentation here:\nhttps://docs.jboss.org/drools/release/6.5.0.Final/drools-docs/html/ch20.html\nSo I dived in and tried it out. Unfortunately, things went pear-shaped rather quickly. It would appear that the API might have changed a little bit without the documentation being updated. But a little dig through the Workbench API code showed me that it wasn’t massively out of sync. It’s actually quite easy. You just need to understand the order in which things need to be done, plus a couple of undocumented properties.\nFirst, create an organisation:\ncurl -X POST -H \u0026quot;Content-Type: application/json\u0026quot; --user admin:admin \\ 127.0.0.1:8080/drools-wb/rest/organizationalunits \\ -d '{ \u0026quot;name\u0026quot;: \u0026quot;com.sctrcd.kiewb\u0026quot;, \\ \u0026quot;description\u0026quot;: \u0026quot;Example Workbench Organisation\u0026quot;, \\ \u0026quot;owner\u0026quot;: \u0026quot;Scattercode\u0026quot;, \\ \u0026quot;defaultGroupId\u0026quot;: \u0026quot;com.sctrcd.kiewb\u0026quot; }' Note that the defaultGroupId is not mentioned in the documentation. Without it, you will find that an organisation seems to be created and can be seen in the Workbench web interface. However, there will be a couple of problems with it. For one, if you try making a GET request for it, the API will not be able to find it. You will receive a 404 response. Similarly, if you try to create a repository associated with the organisation, the creation will fail with a 404, when the API tries to find the organisation. 
But if you include the defaultGroupId, all will be well.\nSecond, create a repository associated with that organisation:\ncurl -X POST -H \u0026quot;Content-Type: application/json\u0026quot; --user admin:admin \\ 127.0.0.1:8080/drools-wb/rest/repositories \\ -d '{ \u0026quot;name\u0026quot;: \u0026quot;rulesrepo\u0026quot;, \\ \u0026quot;description\u0026quot;: \u0026quot;Example rules repo\u0026quot;, \\ \u0026quot;userName\u0026quot;: null, \u0026quot;password\u0026quot;: null, \u0026quot;gitURL\u0026quot;: null, \\ \u0026quot;requestType\u0026quot;: \u0026quot;new\u0026quot;, \\ \u0026quot;organizationalUnitName\u0026quot;: \u0026quot;com.sctrcd.kiewb\u0026quot; }' Note here, that the documentation implies that you can create a repository without associating it with an organisation. This is something you can do in the Workbench web interface, but through the API, it fails. So you should include the organizationalUnitName in the call to create the repository. There’s not much point in a repository without an organisation, anyway.\nFinally, create a project in the repository:\ncurl -X POST -H \u0026quot;Content-Type: application/json\u0026quot; --user admin:admin \\ 127.0.0.1:8080/drools-wb/rest/repositories/rulesrepo/projects/ \\ -d '{ \u0026quot;name\u0026quot;: \u0026quot;rulesproject\u0026quot;, \\ \u0026quot;description\u0026quot;: \u0026quot;Example rules project\u0026quot; }' Now you’re done. If you have a local Git project, cloned from the original Workbench container, and you have started the new container on the same ports and host, then you can just run a Git pull.\nAt this point, the Git pull will leave merge issues on a handful of files. I’m currently trying to think of my preferred process to correct this. I may just script up taking a copy of the files (5 of them), copying them back after the pull, committing and pushing.\nNow that you know the steps, I should mention that I created a script to perform all three steps. 
If you want to base a script of your own on it, feel free. Here it is:\nkie-workbench-rest-api-create-org.sh gist · stephen-masters/d7dea1aa7318ad5f20119727daa8afb2 sh #!/bin/bash # -------------------------------------------------------------------------------- # # Script to demonstrate using the KIE (Drools) Workbench REST API to: # # create an organisation. # create a repository associated with the organisation. # create a project in the repository. # # Based on the documentation here: # https://docs.jboss.org/drools/release/6.5.0.Final/drools-docs/html/ch20.html # # At time of writing, the official documentation seems to be a little bit behind # the current state of the API. Therefore, if you use the example entities provided # in the documentation, the API calls will not work. # # Some of the values hardcoded below (URL, username, password) are based # on those defined in the Drools Workbench Showcase Docker image: # https://hub.docker.com/r/jboss/drools-workbench-showcase/ # I would recommend turning those into arguments or environment variables if you # intend to make use of this script. I have not done that here, just to keep everything # for the example in one place. Please, don\u0026#39;t keep the hardcoded password! 
# # -------------------------------------------------------------------------------- # -------------------------------------------------------------------------------- # First, we create an organisation # -------------------------------------------------------------------------------- API_RESPONSE=`curl -X POST -H \u0026#34;Content-Type: application/json\u0026#34; --user admin:admin \\ 127.0.0.1:8080/drools-wb/rest/organizationalunits \\ -d \u0026#39;{ \u0026#34;name\u0026#34;: \u0026#34;com.sctrcd.kiewb\u0026#34;, \\ \u0026#34;description\u0026#34;: \u0026#34;Example Workbench Organisation\u0026#34;, \\ \u0026#34;owner\u0026#34;: \u0026#34;Scattercode\u0026#34;, \\ \u0026#34;defaultGroupId\u0026#34;: \u0026#34;com.sctrcd.kiewb\u0026#34; }\u0026#39;` echo \u0026#34;API_RESPONSE: \u0026#34; echo \u0026#34;$API_RESPONSE\u0026#34; echo \u0026#34;\u0026#34; JOB_STATE=`echo $API_RESPONSE | jq -c \u0026#39;. | {status}\u0026#39;` JOB_ID=`echo $API_RESPONSE | jq -c \u0026#39;. | {jobId}\u0026#39;` JOB_ID=${JOB_ID#\u0026#39;{\u0026#34;jobId\u0026#34;:\u0026#34;\u0026#39;} JOB_ID=${JOB_ID%\u0026#39;\u0026#34;}\u0026#39;} echo \u0026#34;JOB_STATE: $JOB_STATE\u0026#34; echo \u0026#34;JOB_ID: $JOB_ID\u0026#34; if [ \u0026#34;$JOB_STATE\u0026#34; != \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;APPROVED\u0026#34;}\u0026#39; ] then echo \u0026#34;Request rejected. Request state: $JOB_STATE\u0026#34; exit 1 fi # All jobs are async. We need to keep checking the state of the job until it is flagged as SUCCESS or fails. while [[ $JOB_STATE == \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;APPROVED\u0026#34;}\u0026#39; || $JOB_STATE == \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;ACCEPTED\u0026#34;}\u0026#39; ]]; do JOB_STATE=`curl 127.0.0.1:8080/drools-wb/rest/jobs/$JOB_ID --user admin:admin | jq -c \u0026#39;. 
| { status }\u0026#39;` echo \u0026#34;JOB_STATE: $JOB_STATE\u0026#34; sleep 1s done if [ \u0026#34;$JOB_STATE\u0026#34; != \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;SUCCESS\u0026#34;}\u0026#39; ] then echo \u0026#34;Request accepted, but failed. Job state: $JOB_STATE\u0026#34; if [ \u0026#34;$JOB_STATE\u0026#34; != \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;BAD_REQUEST\u0026#34;}\u0026#39; ] then # A BAD_REQUEST state indicates that the resource is already there. exit 1 fi fi echo \u0026#34;Request succeeded. Job state: $JOB_STATE\u0026#34; # -------------------------------------------------------------------------------- # Now that we have an organisation, we can create a repository. # -------------------------------------------------------------------------------- API_RESPONSE=`curl -X POST -H \u0026#34;Content-Type: application/json\u0026#34; --user admin:admin \\ 127.0.0.1:8080/drools-wb/rest/repositories \\ -d \u0026#39;{ \u0026#34;name\u0026#34;: \u0026#34;rulesrepo\u0026#34;, \\ \u0026#34;description\u0026#34;: \u0026#34;Example rules repo\u0026#34;, \\ \u0026#34;userName\u0026#34;: null, \u0026#34;password\u0026#34;: null, \u0026#34;gitURL\u0026#34;: null, \\ \u0026#34;requestType\u0026#34;: \u0026#34;new\u0026#34;, \\ \u0026#34;organizationalUnitName\u0026#34;: \u0026#34;com.sctrcd.kiewb\u0026#34; }\u0026#39;` echo \u0026#34;\u0026#34; echo \u0026#34;API_RESPONSE: \u0026#34; echo \u0026#34;$API_RESPONSE\u0026#34; echo \u0026#34;\u0026#34; JOB_STATE=`echo $API_RESPONSE | jq -c \u0026#39;. | {status}\u0026#39;` JOB_STATE=`echo $API_RESPONSE | jq -c \u0026#39;. | {status}\u0026#39;` JOB_ID=`echo $API_RESPONSE | jq -c \u0026#39;. 
| {jobId}\u0026#39;` JOB_ID=${JOB_ID#\u0026#39;{\u0026#34;jobId\u0026#34;:\u0026#34;\u0026#39;} JOB_ID=${JOB_ID%\u0026#39;\u0026#34;}\u0026#39;} echo \u0026#34;JOB_STATE: $JOB_STATE\u0026#34; echo \u0026#34;JOB_ID: $JOB_ID\u0026#34; if [ \u0026#34;$JOB_STATE\u0026#34; != \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;APPROVED\u0026#34;}\u0026#39; ] then echo \u0026#34;Request rejected. Request state: $JOB_STATE\u0026#34; exit 1 fi # All jobs are async. We need to keep checking the state of the job until it is flagged as SUCCESS or fails. while [[ $JOB_STATE == \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;APPROVED\u0026#34;}\u0026#39; || $JOB_STATE == \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;ACCEPTED\u0026#34;}\u0026#39; ]]; do JOB_STATE=`curl 127.0.0.1:8080/drools-wb/rest/jobs/$JOB_ID --user admin:admin | jq -c \u0026#39;. | { status }\u0026#39;` echo \u0026#34;JOB_STATE: $JOB_STATE\u0026#34; sleep 1s done if [ \u0026#34;$JOB_STATE\u0026#34; != \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;SUCCESS\u0026#34;}\u0026#39; ] then echo \u0026#34;Request accepted, but failed. Job state: $JOB_STATE\u0026#34; if [ \u0026#34;$JOB_STATE\u0026#34; != \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;BAD_REQUEST\u0026#34;}\u0026#39; ] then # A BAD_REQUEST state indicates that the resource is already there. exit 1 fi fi echo \u0026#34;Request succeeded. Job state: $JOB_STATE\u0026#34; # -------------------------------------------------------------------------------- # Now that we have a repository, let\u0026#39;s create a project in it. 
# -------------------------------------------------------------------------------- API_RESPONSE=`curl -X POST -H \u0026#34;Content-Type: application/json\u0026#34; --user admin:admin \\ 127.0.0.1:8080/drools-wb/rest/repositories/rulesrepo/projects/ \\ -d \u0026#39;{ \u0026#34;name\u0026#34;: \u0026#34;rulesproject\u0026#34;, \\ \u0026#34;description\u0026#34;: \u0026#34;Example rules project\u0026#34; }\u0026#39;` echo \u0026#34;\u0026#34; echo \u0026#34;API_RESPONSE: \u0026#34; echo \u0026#34;$API_RESPONSE\u0026#34; echo \u0026#34;\u0026#34; JOB_STATE=`echo $API_RESPONSE | jq -c \u0026#39;. | {status}\u0026#39;` JOB_ID=`echo $API_RESPONSE | jq -c \u0026#39;. | {jobId}\u0026#39;` JOB_ID=${JOB_ID#\u0026#39;{\u0026#34;jobId\u0026#34;:\u0026#34;\u0026#39;} JOB_ID=${JOB_ID%\u0026#39;\u0026#34;}\u0026#39;} echo \u0026#34;JOB_STATE: $JOB_STATE\u0026#34; echo \u0026#34;JOB_ID: $JOB_ID\u0026#34; if [ \u0026#34;$JOB_STATE\u0026#34; != \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;APPROVED\u0026#34;}\u0026#39; ] then echo \u0026#34;Request rejected. Request state: $JOB_STATE\u0026#34; exit 1 fi # All jobs are async. We need to keep checking the state of the job until it is flagged as SUCCESS or fails. while [[ $JOB_STATE == \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;APPROVED\u0026#34;}\u0026#39; || $JOB_STATE == \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;ACCEPTED\u0026#34;}\u0026#39; ]]; do JOB_STATE=`curl 127.0.0.1:8080/drools-wb/rest/jobs/$JOB_ID --user admin:admin | jq -c \u0026#39;. | { status }\u0026#39;` echo \u0026#34;JOB_STATE: $JOB_STATE\u0026#34; sleep 1s done if [ \u0026#34;$JOB_STATE\u0026#34; != \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;SUCCESS\u0026#34;}\u0026#39; ] then echo \u0026#34;Request accepted, but failed. 
Job state: $JOB_STATE\u0026#34; if [ \u0026#34;$JOB_STATE\u0026#34; != \u0026#39;{\u0026#34;status\u0026#34;:\u0026#34;BAD_REQUEST\u0026#34;}\u0026#39; ] then # A BAD_REQUEST state indicates that the resource is already there. exit 1 fi fi echo \u0026#34;Request succeeded. Job state: $JOB_STATE\u0026#34; "},{"url":"/2016/01/multiple-databases-with-spring-boot-and-spring-data-jpa/","title":"Multiple databases with Spring Boot and Spring Data JPA","summary":"Connecting a Spring Boot application to two separate databases with Spring Data JPA, working around Boot's default autowiring behaviour.","date":"2016-01-05","tags":["java","spring"],"cover":"mint","body":"A little while back I knocked up a post describing how to enable a Spring application to connect to multiple data sources. At the time, I had only just heard about Spring Boot at the SpringOne 2GX conference in Santa Clara, so the examples didn’t take advantage of that and also didn’t work around some of the autowiring that it does.\nRecently, I was working on a little ETL project to migrate data from one database to another with a different structure, so I returned to this problem and the following is the result.\nFirst, if you want to get hold of a working (including some simple tests) example project, here it is:\nhttps://github.com/gratiartis/multids-demo/tree/now-with-spring-boot\nAs previously, when you define an entity manager, you can define where it should scan for entities and repository classes. The classes can be named individually, but it is easiest if you put your domain entities and repository classes into their own packages and point the entity manager factory at the package. 
In this example, I used:\ncom.sctrcd.multids.foo.domain com.sctrcd.multids.foo.repo com.sctrcd.multids.bar.domain com.sctrcd.multids.bar.repo I suspect it’s possible to get around this, but I found that due to Spring Boot trying to inject beans based on default names, it was easiest to set up one of the data sources to use the defaults and the other to use bean names that I defined. As you can see in the application.yml below:\napplication.yml gist · stephen-masters/ce9990e6a4d04a53e799 yml spring: datasource: url: jdbc:mysql://localhost/foo_schema username: root password: d4t4b4s3sForLif3 driverClassName: com.mysql.jdbc.Driver test-on-borrow: true test-while-idle: true validation-query: select 1; maxActive: 1 jpa: show-sql: false generate-ddl: false properties: hibernate: dialect: org.hibernate.dialect.MySQL5InnoDBDialect ddl-auto: validate hbm2ddl: import_files: bar: datasource: url: jdbc:mysql://localhost/bar_schema username: root password: d4t4b4s3sForLif3 driverClassName: com.mysql.jdbc.Driver test-on-borrow: true test-while-idle: true validation-query: select 1; maxActive: 1 … the spring.datasource.url, spring.datasource.username and spring.datasource.password properties are all defined for the ‘default’ datasource. I define some additional non-conventional properties for the additional schema. We will see how those are picked up shortly.\nBeyond the application.yml configuration, all we need to do is define @Configuration beans which will pick up the properties. First, a @Configuration to wire up the ‘default’ data source. 
This defines each bean as @Primary, to ensure that they are the beans picked up by anything which does not specify a @Qualifier:\nFooDbConfig.java gist · stephen-masters/2c202c741c30cab102e6 java Copy 12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152 package com.sctrcd.multidsdemo; import javax.persistence.EntityManagerFactory; import javax.sql.DataSource; import org.springframework.beans.factory.annotation.Qualifier; import org.springframework.boot.autoconfigure.jdbc.DataSourceBuilder; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.Primary; import org.springframework.data.jpa.repository.config.EnableJpaRepositories; import org.springframework.orm.jpa.JpaTransactionManager; import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean; import org.springframework.transaction.PlatformTransactionManager; import org.springframework.transaction.annotation.EnableTransactionManagement; @Configuration @EnableTransactionManagement @EnableJpaRepositories( entityManagerFactoryRef = \u0026#34;entityManagerFactory\u0026#34;, basePackages = { \u0026#34;com.sctrcd.multidsdemo.foo.repo\u0026#34; }) public class FooConfig { @Primary @Bean(name = \u0026#34;dataSource\u0026#34;) @ConfigurationProperties(prefix=\u0026#34;spring.datasource\u0026#34;) public DataSource dataSource() { return DataSourceBuilder.create().build(); } @Primary @Bean(name = \u0026#34;entityManagerFactory\u0026#34;) public LocalContainerEntityManagerFactoryBean entityManagerFactory( EntityManagerFactoryBuilder builder, @Qualifier(\u0026#34;dataSource\u0026#34;) DataSource dataSource) { return builder .dataSource(dataSource) .packages(\u0026#34;com.sctrcd.multidsdemo.foo.domain\u0026#34;) 
.persistenceUnit(\u0026#34;foo\u0026#34;) .build(); } @Primary @Bean(name = \u0026#34;transactionManager\u0026#34;) public PlatformTransactionManager transactionManager( @Qualifier(\u0026#34;entityManagerFactory\u0026#34;) EntityManagerFactory entityManagerFactory) { return new JpaTransactionManager(entityManagerFactory); } } Second, a @Configuration to wire up the additional datasource. It is essentially identical to the ‘default’ configuration, except that it defines non-conventional names for the data source, entity manager factory and transaction manager and scans different packages for the entities and repositories. It also defines the named transaction manager in the @EnableJpaRepositories annotation and does not define the beans as @Primary.\nMultiDsBarDbConfig.java gist · stephen-masters/db8643cdd89714de494b java package com.sctrcd.multidsdemo; import javax.persistence.EntityManagerFactory; import javax.sql.DataSource; import org.springframework.beans.factory.annotation.Qualifier; import org.springframework.boot.autoconfigure.jdbc.DataSourceBuilder; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.data.jpa.repository.config.EnableJpaRepositories; import org.springframework.orm.jpa.JpaTransactionManager; import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean; import org.springframework.transaction.PlatformTransactionManager; import org.springframework.transaction.annotation.EnableTransactionManagement; @Configuration @EnableTransactionManagement @EnableJpaRepositories( entityManagerFactoryRef = \u0026#34;barEntityManagerFactory\u0026#34;, transactionManagerRef = \u0026#34;barTransactionManager\u0026#34;,
basePackages = { \u0026#34;com.sctrcd.multidsdemo.bar.repo\u0026#34; }) public class BarConfig { @Bean(name = \u0026#34;barDataSource\u0026#34;) @ConfigurationProperties(prefix=\u0026#34;bar.datasource\u0026#34;) public DataSource barDataSource() { return DataSourceBuilder.create().build(); } @Bean(name = \u0026#34;barEntityManagerFactory\u0026#34;) public LocalContainerEntityManagerFactoryBean barEntityManagerFactory( EntityManagerFactoryBuilder builder, @Qualifier(\u0026#34;barDataSource\u0026#34;) DataSource barDataSource) { return builder .dataSource(barDataSource) .packages(\u0026#34;com.sctrcd.multidsdemo.bar.domain\u0026#34;) .persistenceUnit(\u0026#34;bar\u0026#34;) .build(); } @Bean(name = \u0026#34;barTransactionManager\u0026#34;) public PlatformTransactionManager barTransactionManager( @Qualifier(\u0026#34;barEntityManagerFactory\u0026#34;) EntityManagerFactory barEntityManagerFactory) { return new JpaTransactionManager(barEntityManagerFactory); } } Beyond those configuration classes, everything is just the standard setup for a Spring Boot / Spring Data JPA application, so if you have an application connecting to a single database already, there isn’t a lot of modification to support connecting to additional databases.\n"},{"url":"/2015/02/a-minimal-spring-boot-drools-web-service/","title":"A minimal Spring Boot Drools web service","summary":"Just the essentials: a Spring Boot application exposing a Drools rules engine as an HTTP API, nothing more.","date":"2015-02-06","tags":["java","spring","rules"],"cover":"cobalt","body":"A little while back, I knocked up Qzr to demonstrate using Spring Boot with the Drools rules engine. 
However, I also wanted to play around with a few more technologies (AngularJS and Spring HATEOAS), so it’s a bit large for just demonstrating exposing Drools rules as an HTTP web service.\nA few folks found it difficult to pick out the essentials of running Drools in a Spring Boot application, so I thought I’d have a go at creating a simpler application, which does nothing more than that.\nHence, the Bus Pass Web Service\nAs might be guessed from the project name, for the rules, I took my cues from the Drools Bus Pass example in the Drools project. I cut the rules down a little bit and reduced the code by replacing some of the Java fact classes with DRL declared types. I prefer this for facts which are only referenced from within the DRL.\nAssuming that you have a reasonably recent install of Maven and the JDK (I have tested with 8), you should be able to do the following from the command line.\nBuild the application:\nmvn clean package Run the application:\njava -jar target/buspass-ws-1.0.0-SNAPSHOT.jar Then send a request to the API using curl or your favourite web browser. The rules state that if you request a bus pass for a person with age less than 16, you should see a ChildBusPass. For someone 16 or over, you should see an AdultBusPass.\nFor example, opening http://127.0.0.1:8080/buspass?name=Steve\u0026amp;age=15 gives me:\n{\u0026quot;person\u0026quot;:{\u0026quot;name\u0026quot;:\u0026quot;Steve\u0026quot;,\u0026quot;age\u0026quot;:15},\u0026quot;busPassType\u0026quot;:\u0026quot;ChildBusPass\u0026quot;} … and opening http://127.0.0.1:8080/buspass?name=Steve\u0026amp;age=16 gives me:\n{\u0026quot;person\u0026quot;:{\u0026quot;name\u0026quot;:\u0026quot;Steve\u0026quot;,\u0026quot;age\u0026quot;:16},\u0026quot;busPassType\u0026quot;:\u0026quot;AdultBusPass\u0026quot;} The full source code is on GitHub, so that you can browse through it. I don’t intend to change it much now, other than to add a few comments. 
The following are some of the key features that you should know about.\nFirst of all, it’s a Maven project, so I hope you’re familiar with that. The following XML is extracted from the pom.xml. Note that to enable Spring Boot, I have imported the Spring platform Bill of Materials and defined spring-boot-starter-web as a dependency. By including the spring-boot-maven-plugin, the Maven build will generate an executable jar, which will run up an embedded Tomcat instance to host the web application. You don’t need to have a web server installed on your machine to run this application.\nThe Drools functionality is enabled by defining kie-ci as a dependency. This brings in the Drools API, and sets up classpath scanning so that it can find the rules in your application.\nspring-platform-bom.xml gist · stephen-masters/b60c730070304e6c4163 xml \u0026lt;!-- Transitively bring in the Spring IO Platform Bill-of-Materials `pom.xml` --\u0026gt; \u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.spring.platform\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;platform-bom\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.1.1.RELEASE\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-web\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;!-- ...
--\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.kie\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;kie-ci\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${kie.version}\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;repackage\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; Having kie-ci in the project means that Drools will scan for rules based on certain conventions. It will look for a file called kmodule.xml in src/main/resources/META-INF/.\nkmodule.xml gist · stephen-masters/781abd092397bf3d3a44 xml \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;kmodule xmlns=\u0026#34;http://jboss.org/kie/6.0.0/kmodule\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34;\u0026gt; \u0026lt;kbase name=\u0026#34;BusPassKbase\u0026#34; packages=\u0026#34;com.sctrcd.buspassws.rules\u0026#34;\u0026gt; \u0026lt;ksession name=\u0026#34;BusPassSession\u0026#34; /\u0026gt; \u0026lt;/kbase\u0026gt; \u0026lt;/kmodule\u0026gt; The kmodule.xml defines the package where the rules for your knowledge base can be found. Based on the definition above, it will scan for rules (.drl files and others) in src/main/resources/com/sctrcd/buspassws/rules. I won’t explain the rules. Feel free to go take a look at them yourself. As can be seen in the XML, this also defines a knowledge session called BusPassSession.
This means that you can now start a knowledge session like so:\ngistfile1.java gist · stephen-masters/e246df0650984490e7be java KieContainer kieContainer = KieServices.Factory.get().getKieClasspathContainer(); KieSession kieSession = kieContainer.newKieSession(\u0026#34;BusPassSession\u0026#34;); The heart of a Spring Boot application is its main class, which causes your application to be bootstrapped.\nBusPassApp.java gist · stephen-masters/35d5321e6c2373b579b9 java @SpringBootApplication public class BusPassApp { public static void main(String[] args) { ApplicationContext ctx = SpringApplication.run(BusPassApp.class, args); } @Bean public KieContainer kieContainer() { return KieServices.Factory.get().getKieClasspathContainer(); } } This is standard Spring Boot stuff, but the addition we have here is to define a bean which references the Drools KieClasspathContainer. In doing this, we have a reference to the container, which we can inject into our application beans. This is exactly what we do with the BusPassService.\nBusPassService.java gist · stephen-masters/cff5e162df0f465d3a6c java @Service public class BusPassService { private final KieContainer kieContainer; @Autowired public BusPassService(KieContainer kieContainer) { log.info(\u0026#34;Initialising a new bus pass session.\u0026#34;); this.kieContainer = kieContainer; } /** * Create a new session, insert a person\u0026#39;s details and fire rules to * determine what kind of bus pass is to be issued. */ public BusPass getBusPass(Person person) { KieSession kieSession = kieContainer.newKieSession(\u0026#34;BusPassSession\u0026#34;); kieSession.insert(person); kieSession.fireAllRules(); BusPass busPass = findBusPass(kieSession); kieSession.dispose(); return busPass; } // ... } As you can see, we are now exposing Drools functionality in our Spring Boot application.
A service bean is injected with a reference to the Drools KieContainer. Subsequently, whenever a call is made to the getBusPass method, we instantiate a new KieSession (note the session name, which matches that defined in kmodule.xml), insert details about a person, fire rules, and see what kind of bus pass they should be given.\nFinally, we need a controller.\nBusPassController.java gist · stephen-masters/cf6db31dd6f643da4c19 java @RestController public class BusPassController { private static Logger log = LoggerFactory.getLogger(BusPassController.class); private final BusPassService busPassService; @Autowired public BusPassController(BusPassService busPassService) { this.busPassService = busPassService; } @RequestMapping(value = \u0026#34;/buspass\u0026#34;, method = RequestMethod.GET, produces = \u0026#34;application/json\u0026#34;) public BusPass getBusPass( @RequestParam(required = true) String name, @RequestParam(required = true) int age) { Person person = new Person(name, age); log.debug(\u0026#34;Bus pass request received for: \u0026#34; \u0026#43; person); BusPass busPass = busPassService.getBusPass(person); return busPass; } } By annotating the controller class as @RestController, Spring will set it up as a bean and ensure that anything returned from a method is marshalled. As the getBusPass method has been defined as producing application/json, Spring will automatically use Jackson to marshal the response to JSON.\nThe @RequestMapping annotation indicates that the service can be reached at /buspass. For instance, if you run up the application as it is, this means that you can send GET requests to http://127.0.0.1:8080/buspass. The @RequestParam annotations indicate that you need to send query string arguments, providing values for “name” and “age”.\nAll that remains is to try it out.
Please do let me know if you spot anything that you think could be improved.\n"},{"url":"/2013/10/bundling-project-dependencies-with-the-shade-plugin/","title":"Bundling project dependencies with the Shade plugin","summary":"The Maven Shade plugin bundles all dependencies into a single fat jar — handy for deploying libraries into Drools Guvnor or running self-contained FitNesse fixtures.","date":"2013-10-16","tags":["java"],"cover":"yellow","body":"Have you ever struggled with the mass of .jar files that you can find in a directory of Java libraries? You have no idea what version each of them is and what its dependencies are. You want to put your own application jar in there, but you know that will mean needing to get hold of 20 other jar files to deal with its dependencies.\nI certainly have trouble with this. I use the Drools Guvnor web application to manage business rules, but this sometimes requires that I place my own libraries in that web application’s lib directory. Some of these libraries are actually minimal Spring applications which need to do data access and invoke web services. This means that each of them requires multiple additional jar files. It becomes difficult to keep track of what libraries I have added to that directory and what I need to add.\nHowever, I have come across a decent way of dealing with this problem. The Shade plugin enables me to build a single .jar file containing all dependencies for an application. This way, I’m able to ensure that all the dependencies I tested against in my build are definitely the ones that have been deployed.\nThe following is a basic example of configuring the Shade plugin in your pom.xml:\ngist · stephen-masters/3609269Open on GitHub → I also find it particularly handy for FitNesse where I’m able to run a quick script to download the latest version of an artifact from a repository, and I know that what I’m getting includes all the dependencies I need.
If a dependency version changes, or is added, I don’t need to alter my deployment script.\n"},{"url":"/2013/01/a-web-service-powered-by-spring-and-drools/","title":"A web service powered by Spring and Drools","summary":"A reference project wiring Spring and Drools without the heavyweight KIE integration — hand-cranked and straightforward.","date":"2013-01-23","tags":["java","spring","rules"],"cover":"cobalt","body":"For the past few years I have been designing and building web services which make use of decision management technology such as Drools and FICO Blaze Advisor. The past year or so has all been about using Drools Guvnor to enable business users (legal and operations teams) to manage rules, and using the Drools rules engine to evaluate trade requests against those rules.\nMy preference in setting up web services is to use the Spring Framework to configure my application and manage its various components. However, I struggled to find much information online about how best to wire up a Spring web application to make use of Drools for rules evaluation. The Drools documentation does include a chapter on Spring integration, but I found that it didn’t seem to make the integration any simpler, and forced dependencies on older versions of Spring that I didn’t want to use. In the end, I decided to hand-crank the integration in my application, and it turned out to be quite easy to do.\nSo in the hope that it might be useful to someone else, I have knocked up an example project, which configures web services backed by services that each make use of a Drools knowledge base to make decisions. That project can be found at GitHub:\nhttps://github.com/stephen-masters/sctrcd-fx-web\nFeel free to grab a copy and play around with it. It’s built with Maven, and generates a Java web application, which makes use of Spring and Drools to provide a number of web services that could be part of a foreign exchange payments system.
I won’t talk about it in depth now though, as it makes use of a variety of technologies. I will soon be posting more, talking about different specific aspects of that project.\n"},{"url":"/2013/01/a-bigdecimal-accumulator-for-drools/","title":"A BigDecimal accumulator for Drools","summary":"Drools's built-in sum accumulator silently converts BigDecimals to doubles. A custom accumulator to fix that before it corrupts your financial calculations.","date":"2013-01-17","tags":["java","rules"],"cover":"cobalt","body":"Working in the financial industry, I have become rather strict about avoiding doubles in Java. The trouble is that they are a floating point representation of a number, which is just an approximation of the real value. This can lead to some unusual results.\nFor instance, as the following snippet shows, 0.34 + 0.01 is not equal to 0.35.\ndouble x = 0.35; double y = 0.34 + 0.01; System.out.println(x + \u0026quot; : \u0026quot; + y + \u0026quot; : \u0026quot; + (x == y)); 0.35 : 0.35000000000000003 : false Those inaccuracies might seem very small, but it’s surprisingly easy for them to start impacting a real world application. Imagine you wanted to sell dollars and buy Iranian Rial. You would be getting almost 20,000 Rial for every dollar. At that rate, imprecise floating point values could easily impact the final amount being sent. Although with the current US trade sanctions against Iran, that could be the least of your problems.\nIf you are running reports on a history of transactions, then a large number of smaller transactions can add up to large enough values that the imprecise doubles start affecting your totals. Even if you’re not dealing in huge numbers, things can go wrong easily enough. If your process takes a number through a sequence of multiplications or divisions, errors can be magnified.
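For contrast, the same sum performed with BigDecimal stays exact. This is a minimal sketch (not from the original post) — note the values are constructed from strings, because new BigDecimal(0.34) would just capture the double's imprecision:

```java
import java.math.BigDecimal;

public class ExactSum {
    public static void main(String[] args) {
        // "0.34" and "0.01" are represented exactly as decimal values.
        BigDecimal x = new BigDecimal("0.35");
        BigDecimal y = new BigDecimal("0.34").add(new BigDecimal("0.01"));
        System.out.println(x + " : " + y + " : " + x.equals(y));
        // 0.35 : 0.35 : true
    }
}
```

One gotcha worth knowing: BigDecimal.equals also compares scale, so new BigDecimal("0.350") would not be equal to x above; use compareTo when you want a purely numeric comparison.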
If you have a business rule to always round up, then according to our calculation above, 0.34 + 0.01 = 0.36.\nHowever, that’s enough about why doubles are bad for financial calculations. What caught me by surprise was running accumulate functions in Drools. I was writing code to react to currency exposures being at particular limits, which were all being added up from nice healthy BigDecimal values. However, the numbers I was getting from the ‘sum’ accumulate function were not equal to the numbers my unit tests were expecting. A little investigation showed that the sum accumulator was converting all my nice fixed precision BigDecimal numbers into doubles. Oh dear…\nSo after a little further investigation, I established that there are no BigDecimal accumulators for Drools, and a bug has been open since 2008. The only workaround mentioned was to write your own sumBigDecimal accumulator function.\nThis didn’t seem like great progress, but it was a good opportunity to learn a new corner of Drools, so I knocked together this BigDecimalAccumulator implementation:\ngist · stephen-masters/4088798Open on GitHub → I am a bit puzzled that I’m not finding examples of this all over the place, as Drools has seen a lot of uptake in the financial industry, and it seems like an obvious thing that anybody using Drools for financial rules and calculations would need.
Maybe there are loads of private repositories out there, each with their own implementations?\nAnyway, in the absence of BigDecimal accumulator functionality in core Drools, feel free to grab this for your own applications.\n"},{"url":"/2013/01/getting-the-latest-snapshot-from-sonatype-nexus/","title":"Getting the latest snapshot from Sonatype Nexus","summary":"A Ruby script to parse the Nexus REST API and reliably fetch the latest snapshot artifact, sidestepping the timestamped filename problem.","date":"2013-01-15","tags":["java","devops"],"cover":"tangerine","body":"Sonatype Nexus is a repository for build artifacts, which is particularly handy if you have a Maven project. Once you have your Maven project configured, every time you run mvn deploy Maven will do a bit of building and then upload the resulting artifacts (.jar, .war, …) to the repository. If you browse Nexus you will then be able to find those artifacts with a unique name and download them.\nThis is all great, but if your project is running on a snapshot version, then every time you deploy to Nexus, the artifact file name will be appended with a date and an ever-incrementing number. For my purposes, I wanted to be able to go on to a Linux test server where I have Apache Tomcat installed and grab the latest .war file. Maybe I need to relax more, but I was getting a bit irritated with having to manually find the snapshot in Nexus, copy the link and then fire off a curl -O -L http://\u0026hellip; command every time I updated the project.\nFortunately it turns out that Nexus provides a REST API for searching. Unfortunately, it only returns the name of an artifact without the time-stamp. I think that this is intended to be a ‘good thing’, with Nexus automatically resolving the latest snapshot based on requesting the snapshot with no time-stamp.
Unfortunately, requesting that artifact from the location indicated by the API results in a ‘not found’ response.\nTherefore I knocked up a little Ruby script, which goes to the URI where the full artifact list can be found, including resources such as poms, jars, and sha1 and md5 hashes. It then parses the response XML to narrow down the results and selects the most recent artifact that matches the search criteria. Finally, it downloads the artifact.\nThe latest version of the script can be found as a gist on GitHub:\ngist · stephen-masters/1852106Open on GitHub →\n"},{"url":"/2013/01/multiple-databases-with-spring-data-repositories/","title":"Multiple databases with Spring Data repositories","summary":"Configuring a Spring application with two separate data sources using Spring Data JPA — separate @Configuration classes and EntityManagerFactory beans for each schema.","date":"2013-01-15","tags":["java","spring"],"cover":"mint","body":"The Spring Data project keeps making it easier to do database access in Spring applications, and one of the neatest improvements of recent times is that by defining an interface which extends JpaRepository and referencing a JPA entity, an implementation will automatically be injected with all the usual CRUD methods: findAll(), findOne(id), save(entity), delete(id), etc.\nRecently I was working on a project where I had taken full advantage of this, and for which I needed to add domain objects from an additional database. Unfortunately, as soon as I added references to entities in a different database I started experiencing troubles. For instance:\nNot an managed type: class com.sctrcd.multidsdemo.domain.bar.Bar … which was being caused by my repository being injected with the entityManager and transactionManager for the other database.
Here I walk through how I resolved the problems and got things working.\nAfter naming my beans to ensure that I would not be referencing the wrong one, I started seeing:\nNo bean named 'entityManagerFactory' is defined. … because the repository implementation defaults to a by-name search for a bean called “entityManagerFactory”.\nI was struggling to find any documentation of how to do it right, so to help me work through the steps of such a configuration, I created a minimal demo project at GitHub, containing two entities with those traditional names: Foo and Bar.\nhttps://github.com/gratiartis/multids-demo\nHere I shall explain the configuration that I ended up with in the hope that readers might understand that it can be done, and that it’s actually quite easy and requires very little code, if you know how!\nFirst of all, we set up two JPA entities, Foo and Bar:\n@Entity public class Foo { /* Constructors, fields and accessors/mutators */ } @Entity public class Bar { /* Constructors, fields and accessors/mutators */ } Associated with these we create two repositories: FooRepository and BarRepository. Thanks to the awesomeness of Spring Data, we can get ourselves some pretty full-featured repositories purely by defining interfaces which extend JpaRepository:\npublic interface FooRepository extends JpaRepository\u0026lt;Foo, Long\u0026gt; {} public interface BarRepository extends JpaRepository\u0026lt;Bar, Long\u0026gt; {} We need to ensure that each of these maps to a table in its own database. To achieve this, we will need two separate entity managers, each of which has a different datasource. However, in a Spring Java config @Configuration class, we can only have one @EnableJpaRepositories annotation and each such annotation can only reference one EntityManagerFactory. So we create two separate @Configuration classes: FooConfig and BarConfig.\nEach of these @Configuration classes defines a DataSource based on an embedded HSQL database.
The following is the BarConfig. FooConfig is identical except for some different names and package paths.\ngist · stephen-masters/7530207Open on GitHub → Each configuration should define a DataSource, EntityManager, EntityManagerFactory and PlatformTransactionManager. You need to make sure that @Entity beans for different data sources are in different packages. We then need to put the correct references in the @EnableJpaRepositories annotation for each @Configuration class.\n@Configuration @EnableTransactionManagement @EnableJpaRepositories( entityManagerFactoryRef = \u0026quot;fooEntityManagerFactory\u0026quot;, transactionManagerRef = \u0026quot;fooTransactionManager\u0026quot;, basePackages = { \u0026quot;com.sctrcd.multidsdemo.integration.repositories.foo\u0026quot; }) public class FooConfig { // ... } @Configuration @EnableTransactionManagement @EnableJpaRepositories( entityManagerFactoryRef = \u0026quot;barEntityManagerFactory\u0026quot;, transactionManagerRef = \u0026quot;barTransactionManager\u0026quot;, basePackages = { \u0026quot;com.sctrcd.multidsdemo.integration.repositories.bar\u0026quot; }) public class BarConfig { // ... } As you can see, each of these @EnableJpaRepositories annotations defines a specific named EntityManagerFactory and PlatformTransactionManager. They also specify which repositories should be wired up with those beans. In the example, I have put the repositories in database-specific packages. It is also possible to define each individual repository by name, by adding includeFilters to the annotation, but by segregating the repositories by database, I believe that things should end up more readable.\nAt this point you should have a working application using Spring Data repositories to manage entities in two separate databases. Feel free to grab the project from the link above and run the tests to see this happening.
And please do let me know if you can spot any good opportunities for improvement.\nUpdate Since writing the post above, I have had the opportunity to implement a multiple datasource solution in a Spring Boot application. As a few people asked about it, here’s a follow-up post describing what needs to be done to implement multiple data sources in a Spring Boot application.\n"},{"url":"/2011/05/playing-around-with-apache-camel/","title":"Playing around with Apache Camel","summary":"First impressions of Apache Camel for implementing Enterprise Integration Patterns — a content-based router and a CSV-to-XML transform with almost no code.","date":"2011-05-12","tags":["java"],"cover":"mint","body":"I have been playing around with Apache Camel for a couple of projects recently, and so far I’m very impressed. Camel is one of a number of frameworks that seem to have sprung up over the past few years in response to the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. It attempts to provide mechanisms to support all the patterns described in the book. And it does so very well, from what I have experienced so far. So I thought I would mention a couple of things I have done with it.\nA simple content-based router The problem I was trying to solve was that a legacy application was designed to listen to a WebSphere MQ queue, which would contain requests for a variety of operations. A new application had been developed to handle a subset of these operations. I couldn’t have both applications listening to the same queue, so I needed to divert particular operation request messages to a separate new queue.\nI needed to put together a simple content-based router that would inspect the header of each incoming message and route the message to a different destination depending on the operation name. I was able to implement this by defining a route which used XPath to select an endpoint based on XML attributes. 
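A route like that can be expressed very compactly in the Camel Spring XML DSL. This is a hedged sketch, not the original project's configuration — the queue names, XPath expression and JMS endpoint prefix are all hypothetical stand-ins:

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <!-- Listen on the queue both applications used to share (name assumed) -->
    <from uri="jms:queue:REQUEST.IN"/>
    <choice>
      <!-- Route the operations the new application handles to its own queue -->
      <when>
        <xpath>/request/@operation = 'newOperation'</xpath>
        <to uri="jms:queue:NEW.APP.IN"/>
      </when>
      <!-- Everything else continues to the legacy application's queue -->
      <otherwise>
        <to uri="jms:queue:LEGACY.APP.IN"/>
      </otherwise>
    </choice>
  </route>
</camelContext>
```

The choice/when/otherwise structure is Camel's Content Based Router; the xpath element evaluates against the message body to pick the destination.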
This could be done in the Camel context XML file and the project contained no code outside this file.\ngist · stephen-masters/4546092Open on GitHub → A CSV to XML transform Here, I was dealing with integrating two off-the-shelf applications, the aim being to facilitate exporting a document and its metadata from one system and importing it into the other. When exporting a document from the first application, a CSV would be generated in a directory on the filesystem. The other application provided an import adapter, which required an XML trigger file. I needed a small application to perform the following steps:\nListen for CSV files being dropped in a directory on the filesystem. Split the CSV up into separate requests for each document being exported. Generate an XML trigger file for each request. Drop the XML trigger file into a directory ready for import by the downstream system. As well as providing middleware messaging adapters, Camel also supports defining endpoints that are directories on the filesystem, so it can automatically create a listener for a directory. To deal with the first two steps, I made use of opencsv to parse the CSV, but as I soon discovered, Camel also provided CSV unmarshallers.\nI extended the Camel RouteBuilder and, using the fluent DSL for Java, defined my routes and created a Splitter class that would take the unmarshalled CSV and output a list of messages to an internal queue. This looked a bit like the following. I then defined a route to pick the individual metadata messages off the internal queue and use a Processor to generate XML in the required format.\ngist · stephen-masters/4546103Open on GitHub → Further reading Camel is very comprehensive and is also one of the best documented projects out there.
I keep trying to implement something myself and then finding that there’s already something that will do the job for me.\nThese are probably the best starting points for info on any particular integration pattern:\nCamel enterprise integration patterns Camel architecture And this was one of the better tutorial introductions to it:\nCamel integration tutorial If Camel sounds good, you should also take a look at Spring Integration. It’s another framework based on the patterns in the Enterprise Integration Patterns book, but has that Spring tendency towards implementation through bean annotations. But you don’t have to pick one or the other; the Camel project has developed the camel-spring-integration library to provide a bridge from Camel components to Spring Integration endpoints.\n"},{"url":"/2008/04/weblogic-scripting-tool-scripts/","title":"WebLogic Scripting Tool scripts","summary":"Useful WLST resources and Gist examples for scripting WebLogic domain creation and server administration.","date":"2008-04-05","tags":["java"],"cover":"cobalt","body":"I mentioned my use of the WebLogic Scripting Tool a little while back. I have noticed since then that a number of folks visiting this site are looking for example scripts. I have obviously written a number myself and I promise I’ll try to get around to posting them here. However, until I get myself in gear, I thought I would point you at some useful examples that are already out there. I’ll expand this post as I find more…\nFirst, make sure you sign up on http://dev2dev.bea.com/. That’s the BEA site supporting developers, which provides news, tutorials and samples to help you get going with WebLogic.\nThey have a number of projects and code samples that you will be able to get at.
My recommendation for getting started is to go to the CodeShare section and have a look around at all the good stuff that has been provided by generous developers around the world.\nThere’s a WLST project maintained by the guys who developed it in the first place. This contains a whole load of scripts.\nAnd to give you more of a head start, you can go to the Code Samples area of dev2dev and search for WLST, which will give you a list of samples worth looking at. Currently these include a Server Health Monitor (artifact S198), which will periodically check your runtime heap and execute queues and log the state to file.\nAlso, if you are looking at WLST for its server monitoring capabilities, you should know that BEA Guardian is now free!\nUpdate A few folks have asked about getting hold of some example scripts. Unfortunately most of what I have written is for work, so it is tied to work environments and not mine to share. However, I have created some simple scripts for sharing that cover scripting the creation of a domain. These are available as a GitHub gist for creating WebLogic domains:\ngist · stephen-masters/589660Open on GitHub →\n"},{"url":"/2007/02/weblogic-scripting-tool/","title":"WebLogic Scripting Tool","summary":"BEA's Jython-based scripting tool for automating WebLogic server administration — create domains, manage deployments, and handle disaster recovery without restarting.","date":"2007-02-05","tags":["java"],"cover":"cobalt","body":"According to the BEA documentation, the WebLogic Scripting Tool is a command-line scripting interface that system administrators and operators use to monitor and manage WebLogic Server instances and domains. It allows you to write scripts in Jython that can connect to a running WebLogic domain and make modifications to the configuration with no need to restart anything. 
It can also be used for creating and modifying a domain in its offline mode. It comes as standard with WebLogic 9.2, and a version is available for 8.1. It is recommended and supported by BEA for automating WebLogic server administration. I am currently developing WLST scripts to improve the development and deployment process.\nI see it as having the following potential benefits:\nStreamlining development – it can be executed from an Ant build to undeploy an application version and replace it with a new one on a running server, all without manual intervention. Improving deployments – manual steps in a deployment are slow and unreliable; at some stage they are guaranteed to go wrong. The scripted nature of this means that a deployment can be tested against multiple environments and proven before going live. You know that the deployment method for production is the one that produced your test environments. Faster, more reliable disaster recovery – scripts can be developed to handle a number of failures. For example, if a database fails and needs to be run from a DR server, scripts can be written in advance to re-create all connection pools pointed at the DR location. This way, the disaster recovery process is fast and reliable. The person initiating the fail-over only needs to know where to find the appropriate scripts; they do not need to know the steps themselves. Monitoring – scripts can be written (many already exist) to connect to the running server and monitor it. This can include things such as checking whether message queues are live, testing connection pools, monitoring the JVM heap and various other tasks. Useful links for getting started This page has only existed for a very short time, so I haven’t had much opportunity to develop my own content. However, there is already a lot of good documentation out there that would help someone get started with WLST. 
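To make the disaster-recovery benefit above concrete, a WLST (Jython) script along these lines could re-point a JDBC connection pool at the DR database. This is a sketch under assumptions: the data source name, credentials, hosts and MBean path are all placeholders, and connect/edit/startEdit/cd/cmo/save/activate are WLST shell built-ins that only exist when the script is run via java weblogic.WLST.

```python
# Hypothetical WLST (Jython) fail-over sketch -- run with: java weblogic.WLST failover.py
# Everything environment-specific below (hosts, names, credentials) is a placeholder.

def dr_jdbc_url(host, port, sid):
    """Build the JDBC URL for the DR database (plain string formatting)."""
    return 'jdbc:oracle:thin:@%s:%d:%s' % (host, port, sid)

def repoint_datasource(ds_name, url):
    """Re-point an existing JDBC data source at the DR database, online."""
    connect('weblogic', 'password', 't3://adminhost:7001')  # WLST built-in
    edit()
    startEdit()
    # Typical MBean path for a data source's driver parameters
    cd('/JDBCSystemResources/%s/JDBCResource/%s/JDBCDriverParams/%s'
       % (ds_name, ds_name, ds_name))
    cmo.setUrl(url)
    save()
    activate(block='true')  # push the change out to the running servers
    disconnect()

if __name__ == 'main':  # WLST runs scripts with __name__ set to 'main'
    repoint_datasource('appDataSource', dr_jdbc_url('dr-db-host', 1521, 'APPDB'))
```

Because the whole fail-over lives in a script, it can be rehearsed against test environments exactly as it would run in production, which is the reliability argument made above.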
Here I present my bucket of links that I have found useful.\nWebLogic Scripting Tool WLST project home at Dev2Dev Environment proving with WLST WLST Online and Offline command summary Using WLST offline Automating WebLogic platform application provisioning WLNav – Interview with developers at Dev2Dev Jython Source code I have already written a number of WLST objects and scripts to make my life easier. I need to work on pulling them out into the web site in a manner that I’m happy with, but in the meantime, if you are interested, please get in touch and I can send you what I have so far.\nUpdate A few folks have asked about getting hold of some example scripts. Unfortunately most of what I have written is for work, so it is tied to work environments and not mine to share. However, I have created some simple scripts for sharing that cover scripting the creation of a domain. These are available as a GitHub gist for creating WebLogic domains:\ngist · stephen-masters/589660Open on GitHub →\n"}]