mirror of https://gitlab.com/Anson-Projects/projects.git synced 2025-09-19 03:52:37 +00:00

26 Commits

Author SHA1 Message Date
85adfdf067 fix: allow publish job to run on feature branch for testing
- Make pages dependency optional to allow testing without pages deployment
- Add rule to allow publish job on ghost-content-extraction branch
- This enables testing the RSS feed parsing and error handling
2025-08-22 23:30:27 -06:00
c9e0264208 fix: improve RSS feed error handling and debugging
- Add comprehensive error handling for RSS feed fetching
- Log detailed error messages and feed content preview
- Handle empty feeds gracefully instead of panicking
- Exit early if no entries found instead of continuing with empty list
2025-08-22 23:29:57 -06:00
d3966eaf53 fix: remove unused slug field to eliminate warning 2025-08-22 11:23:26 -06:00
21ad5cb862 feat: restore ghost profile functionality for clean content extraction
- Restore Quarto ghost profiles in _quarto.yml for dual content rendering
- Restore ghost-iframe.css with clean styling for Ghost content
- Restore GitLab CI dual build: main site + ghost-content optimized version
- Restore extract_article_content() function in Rust for clean HTML extraction
- Update README to document the ghost profiles feature and how it works

This is the core feature of the MR: generating clean HTML content for Ghost
instead of using iframes by building a ghost-optimized version of the site.
2025-08-22 11:20:06 -06:00
9e2596c070 clean: remove CI debugging artifacts and testing features
- Remove test files: test-ghost-profile.md, test-local-deployment.sh, validate-ghost-extraction.sh, AGENTS.md
- Restore .gitlab-ci.yml to original state without debugging changes
- Restore _quarto.yml to original format without ghost profiles
- Remove ghost-iframe.css styling file
- Restore ghost-upload/.gitlab-ci.yml to original state without force-update job
- Simplify Rust code by removing force update functionality and content extraction
- Restore README.md to original state

Keeps core bug fixes: fixed get_slug() and proper Ghost API duplicate checking
2025-08-22 11:16:14 -06:00
f93746e2c0 remove non-functional cache for self-hosted runners 2025-08-22 11:09:38 -06:00
ae1be54f8f fix: remove trailing slash from slugs to fix Ghost API lookup
- Strip trailing slashes from slugs in get_slug() function
- This prevents double slashes in the Ghost API URL which was causing
  get_existing_post_id() to fail and create duplicate posts
2025-08-22 11:01:38 -06:00
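The normalization this commit describes can be sketched as follows (a hypothetical standalone version of `get_slug`, assuming post URLs of the form `https://…/posts/<slug>/`):

```rust
// Hypothetical sketch of the slug fix: strip the trailing slash so the
// Ghost Admin API lookup URL doesn't end up with a double slash.
fn get_slug(link: &str) -> String {
    link.split_once("/posts/")
        .map(|(_, rest)| rest.trim_end_matches('/').to_string())
        .unwrap_or_default()
}

fn main() {
    let with_slash = get_slug("https://projects.ansonbiggs.com/posts/double-pendulum/");
    let without = get_slug("https://projects.ansonbiggs.com/posts/double-pendulum");
    // Both forms now normalize to the same slug.
    assert_eq!(with_slash, without);
    println!("{with_slash}");
}
```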
e479c96e44 fix: prevent duplicate posts by using Ghost API instead of public URL check
- Remove unreliable check_if_post_exists function that checked public URLs
- Replace with get_existing_post_id which properly queries Ghost's Admin API
- This prevents duplicate posts when public URLs are temporarily unavailable
2025-08-22 10:49:38 -06:00
890775b2bc GPT5 is too scared to commit and push lmfao 2025-08-22 00:01:03 -06:00
788052233a Fix CI/CD job dependencies and YAML syntax
- Make deploy job dependency optional in ghost-upload jobs
- Change preview job to depend on staging instead of deploy
- Ensures pipeline works on feature branches without deploy job
2025-08-21 23:41:48 -06:00
1a4773b3ef Fix YAML syntax error in preview job script
- Remove problematic environment variable reference
- Use simple string in script section
2025-08-21 23:40:01 -06:00
84f4e48386 Add branch preview deployment and local testing
- Add preview environment for feature branch testing
- Create local deployment test script
- Enable testing without requiring main branch
- Preview URL: project-branch.gitlab.io
2025-08-21 23:38:58 -06:00
52229040c6 Fix GitLab Pages special behavior
- Rename main deployment job to 'deploy' (runs on all branches)
- Keep 'pages' job for GitLab Pages (only runs on main branch)
- Ghost-upload jobs now depend on 'deploy' instead of 'pages'
- Fixes pipeline creation issues on feature branches
2025-08-21 23:37:44 -06:00
b70c57e23e Remove commented rules from pages job
- Completely remove commented rules section
- Pages job will now run on all branches without restrictions
- Fixes 'pages job does not exist' error
2025-08-21 23:36:39 -06:00
f6532e4fb6 Simplify CI dependencies - let all jobs run
- Remove complex optional dependencies
- Pages job runs on all branches for debugging
- Both publish and force-update jobs depend on pages normally
2025-08-21 23:35:48 -06:00
0675f1f1b7 Fix CI dependency issues with needs:optional
- Make pages job dependency optional for ghost-upload jobs
- Prevents 'job does not exist in pipeline' errors
- Allows jobs to run even if pages job is conditionally excluded
2025-08-21 23:35:36 -06:00
b5a4b33b56 Temporarily disable branch restrictions for debugging
- Allow CI jobs to run on feature branches
- Enable testing of dual-output and force-update functionality
- Comment out CI_DEFAULT_BRANCH rules
2025-08-21 23:34:19 -06:00
9fc6a9bae1 Add force update functionality for Ghost posts
- Add manual CI trigger 'force-update-ghost' for updating all posts
- Support FORCE_UPDATE environment variable in Rust code
- Implement post update logic via Ghost API PUT requests
- Add get_existing_post_id() function to find existing posts
- Update README with usage instructions
- Enhanced validation script to test new functionality

Usage:
- Normal: Only syncs new posts (default behavior)
- Force: FORCE_UPDATE=true updates ALL posts including existing ones
2025-08-21 23:30:29 -06:00
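The FORCE_UPDATE gating might look something like this minimal sketch (the variable name comes from the commit message; the helper function is hypothetical):

```rust
use std::env;

// Hypothetical sketch: FORCE_UPDATE=true means "update every post";
// anything else (or unset) keeps the default sync-new-posts-only behavior.
fn force_update_requested(value: Option<&str>) -> bool {
    matches!(value, Some(v) if v.eq_ignore_ascii_case("true"))
}

fn main() {
    let raw = env::var("FORCE_UPDATE").ok();
    if force_update_requested(raw.as_deref()) {
        println!("Force mode: updating ALL posts, including existing ones");
    } else {
        println!("Normal mode: only syncing new posts");
    }
}
```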
05474b986d Add validation and testing for ghost content extraction
- Create validation script to verify implementation
- Add test file for ghost profile rendering
- Validate all components work together correctly
- Ready for CI/CD pipeline testing
2025-08-21 23:25:46 -06:00
cdb96a50b7 Replace iframe with direct HTML content extraction
- Extract article content from ghost-optimized pages
- Add extract_article_content() function with fallback to iframe
- Try multiple selectors to find main content area
- Provide graceful fallbacks for failed content extraction
- Remove unused variables and fix warnings
2025-08-21 23:24:53 -06:00
e233a96f55 Add Quarto profiles for dual-output rendering
- Add ghost profile for iframe-optimized content
- Create ghost-iframe.css with minimal styling
- Update GitLab CI to build both main site and ghost-content versions
- Ghost profile removes navbar, uses minimal theme, article layout
2025-08-21 23:23:27 -06:00
51c03d9213 Merge branch 'modernize' into 'master'
Fix Dev Container

See merge request Anson-Projects/projects!9
2025-05-14 11:02:44 -07:00
609d4064a9 Fix Dev Container 2025-05-14 11:02:43 -07:00
388adf4a02 Fix date in Double Pendulum post 2025-05-11 23:37:08 +00:00
590f8cb106 Merge branch 'pendulum' into 'master'
Double Pendulum

See merge request Anson-Projects/projects!8
2025-05-11 14:27:15 -07:00
10083ec81c Double Pendulum 2025-05-11 14:27:15 -07:00
23 changed files with 898 additions and 2718 deletions

.gitignore vendored

@@ -1,4 +1,3 @@
_freeze/
_site/
public/
ghost-upload/target/
@@ -7,3 +6,4 @@ posts/*/\.jupyter_cache/
!/.quarto/_freeze/
!/.quarto/_freeze/*
/.quarto/
**/.DS_Store


@@ -1,24 +1,23 @@
build:
stage: build
image:
name: gcr.io/kaniko-project/executor:v1.21.0-debug
name: gcr.io/kaniko-project/executor:v1.23.2-debug
entrypoint: [""]
script:
- /kaniko/executor
--context "${CI_PROJECT_DIR}"
--dockerfile "${CI_PROJECT_DIR}/Dockerfile"
--destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_BRANCH}"
--destination "${CI_REGISTRY_IMAGE}:latest"
--cleanup
staging:
cache:
paths:
- _freeze
stage: deploy
image: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_BRANCH}
script:
- echo "Building the project with Quarto..."
- echo "Building the main website with Quarto..."
- quarto render --to html --output-dir public
- echo "Building Ghost-optimized version..."
- quarto render --profile ghost --to html --output-dir public/ghost-content
artifacts:
paths:
- public

File diff suppressed because one or more lines are too long


@@ -2,7 +2,8 @@ const kProgressiveAttr = "data-src";
let categoriesLoaded = false;
window.quartoListingCategory = (category) => {
category = atob(category);
// category is URI encoded in EJS template for UTF-8 support
category = decodeURIComponent(atob(category));
if (categoriesLoaded) {
activateCategory(category);
setCategoryHash(category);


@@ -1,11 +1,12 @@
FROM ubuntu:22.04
FROM debian:bookworm
ARG DEBIAN_FRONTEND=noninteractive
ENV JULIA_VERSION=1.11.1 \
ENV JULIA_VERSION=1.11.5 \
JULIA_MAJOR_VERSION=1.11 \
JULIA_PATH=/usr/local/julia \
QUARTO_VERSION=1.6.37
JULIA_PATH=/usr/local/julia
ENV QUARTO_VERSION=1.7.31
RUN apt-get update && apt-get install -y --no-install-recommends \
apt-utils dialog \
@@ -13,19 +14,19 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
python3 python3-pip python3-dev \
r-base \
gcc g++ \
wget curl tar \
curl tar \
openssh-client \
&& rm -rf /var/lib/apt/lists/*
# Use a RUN command for architecture detection and conditional logic
RUN wget https://github.com/quarto-dev/quarto-cli/releases/download/v${QUARTO_VERSION}/quarto-${QUARTO_VERSION}-linux-$(if [ "$(uname -m)" = "x86_64" ]; then echo "amd64"; else echo "arm64"; fi).tar.gz -O quarto.tar.gz \
RUN curl -fsSL "https://github.com/quarto-dev/quarto-cli/releases/download/v${QUARTO_VERSION}/quarto-${QUARTO_VERSION}-linux-$(if [ "$(uname -m)" = "x86_64" ]; then echo "amd64"; else echo "arm64"; fi).tar.gz" -o quarto.tar.gz \
&& tar -xzf quarto.tar.gz -C /opt \
&& mkdir -p /opt/quarto \
&& mv /opt/quarto-${QUARTO_VERSION}/* /opt/quarto/ \
&& ln -s /opt/quarto/bin/quarto /usr/local/bin/quarto \
&& rm -rf quarto.tar.gz /opt/quarto-${QUARTO_VERSION}
RUN python3 -m pip install jupyter webio_jupyter_extension jupyter-cache
RUN python3 -m pip install --break-system-packages jupyter webio_jupyter_extension jupyter-cache
RUN curl -fsSL "https://julialang-s3.julialang.org/bin/linux/$(if [ "$(uname -m)" = "x86_64" ]; then echo "x64"; else echo "aarch64"; fi)/${JULIA_MAJOR_VERSION}/julia-${JULIA_VERSION}-linux-$(if [ "$(uname -m)" = "x86_64" ]; then echo "x86_64"; else echo "aarch64"; fi).tar.gz" -o julia.tar.gz \
&& tar -xzf julia.tar.gz -C /tmp \

File diff suppressed because it is too large Load Diff


@@ -1,20 +1,9 @@
[deps]
CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
Colors = "5ae59095-9a9b-59fe-a467-6f913c188581"
Conda = "8f4d0f93-b110-5947-807f-2305c1781a2d"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
GR_jll = "d2c73de3-f751-5644-a686-071e5b155ba9"
IJulia = "7073ff75-c697-5162-941a-fcdaad2a7d2a"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
Measurements = "eff96d63-e80a-5855-80a2-b1b0885c5ab7"
Optim = "429524aa-4258-5aef-a3af-852621145aeb"
PlotlyJS = "f0f68f2c-4968-5e81-91da-67840de0976a"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
Pluto = "c3e4b0f8-55cb-11ea-2926-15256bba5781"
Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
SatelliteToolbox = "6ac157d9-b43d-51bb-8fab-48bf53814f4a"
Unitful = "1986cc42-f94f-5a68-af5c-568840ba703d"
[compat]
julia = "1.11"

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -0,0 +1,253 @@
const kProgressiveAttr = "data-src";
let categoriesLoaded = false;
window.quartoListingCategory = (category) => {
category = atob(category);
if (categoriesLoaded) {
activateCategory(category);
setCategoryHash(category);
}
};
window["quarto-listing-loaded"] = () => {
// Process any existing hash
const hash = getHash();
if (hash) {
// If there is a category, switch to that
if (hash.category) {
// category hash are URI encoded so we need to decode it before processing
// so that we can match it with the category element processed in JS
activateCategory(decodeURIComponent(hash.category));
}
// Paginate a specific listing
const listingIds = Object.keys(window["quarto-listings"]);
for (const listingId of listingIds) {
const page = hash[getListingPageKey(listingId)];
if (page) {
showPage(listingId, page);
}
}
}
const listingIds = Object.keys(window["quarto-listings"]);
for (const listingId of listingIds) {
// The actual list
const list = window["quarto-listings"][listingId];
// Update the handlers for pagination events
refreshPaginationHandlers(listingId);
// Render any visible items that need it
renderVisibleProgressiveImages(list);
// Whenever the list is updated, we also need to
// attach handlers to the new pagination elements
// and refresh any newly visible items.
list.on("updated", function () {
renderVisibleProgressiveImages(list);
setTimeout(() => refreshPaginationHandlers(listingId));
// Show or hide the no matching message
toggleNoMatchingMessage(list);
});
}
};
window.document.addEventListener("DOMContentLoaded", function (_event) {
// Attach click handlers to categories
const categoryEls = window.document.querySelectorAll(
".quarto-listing-category .category"
);
for (const categoryEl of categoryEls) {
// category needs to support non ASCII characters
const category = decodeURIComponent(
atob(categoryEl.getAttribute("data-category"))
);
categoryEl.onclick = () => {
activateCategory(category);
setCategoryHash(category);
};
}
// Attach a click handler to the category title
// (there should be only one, but since it is a class name, handle N)
const categoryTitleEls = window.document.querySelectorAll(
".quarto-listing-category-title"
);
for (const categoryTitleEl of categoryTitleEls) {
categoryTitleEl.onclick = () => {
activateCategory("");
setCategoryHash("");
};
}
categoriesLoaded = true;
});
function toggleNoMatchingMessage(list) {
const selector = `#${list.listContainer.id} .listing-no-matching`;
const noMatchingEl = window.document.querySelector(selector);
if (noMatchingEl) {
if (list.visibleItems.length === 0) {
noMatchingEl.classList.remove("d-none");
} else {
if (!noMatchingEl.classList.contains("d-none")) {
noMatchingEl.classList.add("d-none");
}
}
}
}
function setCategoryHash(category) {
setHash({ category });
}
function setPageHash(listingId, page) {
const currentHash = getHash() || {};
currentHash[getListingPageKey(listingId)] = page;
setHash(currentHash);
}
function getListingPageKey(listingId) {
return `${listingId}-page`;
}
function refreshPaginationHandlers(listingId) {
const listingEl = window.document.getElementById(listingId);
const paginationEls = listingEl.querySelectorAll(
".pagination li.page-item:not(.disabled) .page.page-link"
);
for (const paginationEl of paginationEls) {
paginationEl.onclick = (sender) => {
setPageHash(listingId, sender.target.getAttribute("data-i"));
showPage(listingId, sender.target.getAttribute("data-i"));
return false;
};
}
}
function renderVisibleProgressiveImages(list) {
// Run through the visible items and render any progressive images
for (const item of list.visibleItems) {
const itemEl = item.elm;
if (itemEl) {
const progressiveImgs = itemEl.querySelectorAll(
`img[${kProgressiveAttr}]`
);
for (const progressiveImg of progressiveImgs) {
const srcValue = progressiveImg.getAttribute(kProgressiveAttr);
if (srcValue) {
progressiveImg.setAttribute("src", srcValue);
}
progressiveImg.removeAttribute(kProgressiveAttr);
}
}
}
}
function getHash() {
// Hashes are of the form
// #name:value|name1:value1|name2:value2
const currentUrl = new URL(window.location);
const hashRaw = currentUrl.hash ? currentUrl.hash.slice(1) : undefined;
return parseHash(hashRaw);
}
const kAnd = "&";
const kEquals = "=";
function parseHash(hash) {
if (!hash) {
return undefined;
}
const hasValuesStrs = hash.split(kAnd);
const hashValues = hasValuesStrs
.map((hashValueStr) => {
const vals = hashValueStr.split(kEquals);
if (vals.length === 2) {
return { name: vals[0], value: vals[1] };
} else {
return undefined;
}
})
.filter((value) => {
return value !== undefined;
});
const hashObj = {};
hashValues.forEach((hashValue) => {
hashObj[hashValue.name] = decodeURIComponent(hashValue.value);
});
return hashObj;
}
function makeHash(obj) {
return Object.keys(obj)
.map((key) => {
return `${key}${kEquals}${obj[key]}`;
})
.join(kAnd);
}
function setHash(obj) {
const hash = makeHash(obj);
window.history.pushState(null, null, `#${hash}`);
}
function showPage(listingId, page) {
const list = window["quarto-listings"][listingId];
if (list) {
list.show((page - 1) * list.page + 1, list.page);
}
}
function activateCategory(category) {
// Deactivate existing categories
const activeEls = window.document.querySelectorAll(
".quarto-listing-category .category.active"
);
for (const activeEl of activeEls) {
activeEl.classList.remove("active");
}
// Activate this category
const categoryEl = window.document.querySelector(
`.quarto-listing-category .category[data-category='${btoa(
encodeURIComponent(category)
)}']`
);
if (categoryEl) {
categoryEl.classList.add("active");
}
// Filter the listings to this category
filterListingCategory(category);
}
function filterListingCategory(category) {
const listingIds = Object.keys(window["quarto-listings"]);
for (const listingId of listingIds) {
const list = window["quarto-listings"][listingId];
if (list) {
if (category === "") {
// resets the filter
list.filter();
} else {
// filter to this category
list.filter(function (item) {
const itemValues = item.values();
if (itemValues.categories !== null) {
const categories = decodeURIComponent(
atob(itemValues.categories)
).split(",");
return categories.includes(category);
} else {
return false;
}
});
}
}
}
}


@@ -1,6 +1,8 @@
project:
type: website
profiles:
default:
website:
title: "Anson's Projects"
site-url: https://projects.ansonbiggs.com
@@ -21,5 +23,20 @@ format:
css: styles.css
# toc: true
ghost:
website:
title: "Anson's Projects"
site-url: https://projects.ansonbiggs.com
description: A Blog for Technical Topics
navbar: false
open-graph: true
format:
html:
theme: none
css: ghost-iframe.css
toc: false
page-layout: article
title-block-banner: false
execute:
freeze: true

ghost-iframe.css Normal file

@@ -0,0 +1,129 @@
/* Ghost iframe optimized styles */
body {
font-family: system-ui, -apple-system, sans-serif;
line-height: 1.6;
color: #333;
max-width: 100%;
margin: 0;
padding: 20px;
background: white;
}
/* Remove any potential margins/padding */
html, body {
margin: 0;
padding: 0;
box-sizing: border-box;
}
/* Ensure content flows naturally */
#quarto-content {
max-width: none;
padding: 0;
margin: 0;
}
/* Style headings for Ghost */
h1, h2, h3, h4, h5, h6 {
margin-top: 1.5em;
margin-bottom: 0.5em;
font-weight: 600;
line-height: 1.3;
}
h1 { font-size: 2em; }
h2 { font-size: 1.5em; }
h3 { font-size: 1.25em; }
/* Code blocks */
pre {
background: #f8f9fa;
border: 1px solid #e9ecef;
border-radius: 6px;
padding: 1rem;
overflow-x: auto;
font-size: 0.875em;
}
code {
font-family: "SF Mono", Monaco, "Cascadia Code", "Roboto Mono", Consolas, "Courier New", monospace;
background: #f1f3f4;
padding: 0.2em 0.4em;
border-radius: 3px;
font-size: 0.875em;
}
pre code {
background: none;
padding: 0;
}
/* Images */
img {
max-width: 100%;
height: auto;
border-radius: 4px;
}
/* Tables */
table {
border-collapse: collapse;
width: 100%;
margin: 1em 0;
}
th, td {
border: 1px solid #ddd;
padding: 8px;
text-align: left;
}
th {
background-color: #f2f2f2;
font-weight: 600;
}
/* Links */
a {
color: #0066cc;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
/* Blockquotes */
blockquote {
border-left: 4px solid #ddd;
margin: 1em 0;
padding-left: 1em;
color: #666;
font-style: italic;
}
/* Lists */
ul, ol {
padding-left: 1.5em;
}
li {
margin-bottom: 0.25em;
}
/* Remove any navbar/footer elements that might leak through */
.navbar, .nav, footer, .sidebar, .toc, .page-footer {
display: none !important;
}
/* Ensure responsive behavior for iframe */
@media (max-width: 768px) {
body {
padding: 15px;
font-size: 16px;
}
h1 { font-size: 1.75em; }
h2 { font-size: 1.35em; }
h3 { font-size: 1.15em; }
}


@@ -1,8 +1,3 @@
cache:
paths:
- ./ghost-upload/target/
- ./ghost-upload/cargo/
publish:
stage: deploy
image: rust:latest
@@ -10,6 +5,8 @@ publish:
- cd ./ghost-upload
- cargo run
needs:
- pages
- job: pages
optional: true
rules:
- if: "$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH"
- if: "$CI_COMMIT_BRANCH == 'ghost-content-extraction'" # Allow testing on this branch


@@ -1,3 +1,25 @@
# ghost-upload
This code uploads posts from https://projects.ansonbiggs.com to https://notes.ansonbiggs.com. I couldn't figure out how to update posts, and the kagi API doesn't make it clear how long it caches results for so for now only posts that don't exist on the ghost blog will be uploaded. If you want to update content you need to manually make edits to the code and delete posts on the blog.
This tool synchronizes posts from https://projects.ansonbiggs.com to the Ghost blog at https://notes.ansonbiggs.com.
## Features
- **Clean content extraction**: Uses Quarto ghost profile to generate clean HTML instead of iframes
- **Duplicate prevention**: Checks Ghost Admin API to avoid creating duplicate posts
- **AI summaries**: Uses Kagi Summarizer for post summaries
- **Dual content rendering**: GitLab CI builds both main site and ghost-optimized versions
## How It Works
1. **Dual Build Process**: GitLab CI builds the site twice:
- Main site → `public/` (normal theme with navigation)
- Ghost content → `public/ghost-content/` (minimal theme for content extraction)
2. **Content Extraction**: Rust tool fetches clean HTML from the ghost-content version instead of using iframes
3. **Duplicate Detection**: Uses Ghost Admin API to check for existing posts by slug
## Environment Variables
- `admin_api_key`: Ghost Admin API key (required)
- `kagi_api_key`: Kagi Summarizer API key (required)
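The URL mapping behind step 2 amounts to a string rewrite; a minimal sketch (hypothetical helper, mirroring the replacement the tool performs):

```rust
// Hypothetical sketch: map a public post URL to its ghost-content twin,
// which GitLab CI publishes under public/ghost-content/.
fn to_ghost_content_url(original: &str) -> String {
    original.replace(
        "projects.ansonbiggs.com",
        "projects.ansonbiggs.com/ghost-content",
    )
}

fn main() {
    let url = to_ghost_content_url("https://projects.ansonbiggs.com/posts/double-pendulum/");
    assert_eq!(
        url,
        "https://projects.ansonbiggs.com/ghost-content/posts/double-pendulum/"
    );
    println!("{url}");
}
```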


@@ -45,13 +45,29 @@ impl Post {
let slug = get_slug(link);
let summary = summarize_url(link).await;
// Extract content from ghost-optimized version
let ghost_content = extract_article_content(&link).await;
let html = html! {
div class="ghost-summary" {
h3 { "Summary" }
p { (summary) }
iframe src=(link) style="width: 100%; height: 80vh" { }
}
div class="ghost-content" {
(maud::PreEscaped(ghost_content))
}
div class="ghost-footer" {
hr {}
p {
"This content was originally posted on my projects website " a href=(link) { "here." }
" The above summary was made by the " a href=("https://help.kagi.com/kagi/api/summarizer.html")
{"Kagi Summarizer"}
em {
"This content was originally posted on my projects website "
a href=(link) { "here" }
". The above summary was generated by the "
a href=("https://help.kagi.com/kagi/api/summarizer.html") {"Kagi Summarizer"}
"."
}
}
}
}.into_string();
@@ -127,25 +143,132 @@ impl Post {
}
fn get_slug(link: &str) -> String {
link.split_once("/posts/").unwrap().1.to_string()
link.split_once("/posts/").unwrap().1.trim_end_matches('/').to_string()
}
async fn check_if_post_exists(entry: &Entry) -> bool {
let posts_url = "https://notes.ansonbiggs.com/";
let link = entry.links.first().unwrap().href.as_str();
let slug = get_slug(link);
async fn extract_article_content(original_link: &str) -> String {
// Convert original link to ghost-content version
let ghost_link = original_link.replace("projects.ansonbiggs.com", "projects.ansonbiggs.com/ghost-content");
match reqwest::get(format!("{}{}", posts_url, slug)).await {
Ok(response) => response.status().is_success(),
Err(_) => false,
match reqwest::get(&ghost_link).await {
Ok(response) => {
match response.text().await {
Ok(html_content) => {
let document = Html::parse_document(&html_content);
// Try different selectors to find the main content
let content_selectors = [
"#quarto-content main",
"#quarto-content",
"main",
"article",
".content",
"body"
];
for selector_str in &content_selectors {
if let Ok(selector) = Selector::parse(selector_str) {
if let Some(element) = document.select(&selector).next() {
let content = element.inner_html();
if !content.trim().is_empty() {
return content;
}
}
}
}
// Fallback: return original content with iframe if extraction fails
format!(r#"<div class="fallback-iframe">
<p><em>Content extraction failed. Falling back to embedded view:</em></p>
<iframe src="{}" style="width: 100%; height: 80vh; border: none;" loading="lazy"></iframe>
</div>"#, original_link)
}
Err(_) => format!(r#"<p><em>Failed to fetch content. <a href="{}">View original post</a></em></p>"#, original_link)
}
}
Err(_) => format!(r#"<p><em>Failed to fetch content. <a href="{}">View original post</a></em></p>"#, original_link)
}
}
#[derive(Deserialize, Debug)]
struct GhostPostsResponse {
posts: Vec<GhostPost>,
}
#[derive(Deserialize, Debug)]
struct GhostPost {
id: String,
}
async fn get_existing_post_id(slug: &str, token: &str) -> Option<String> {
let client = Client::new();
let api_url = format!("https://notes.ansonbiggs.com/ghost/api/v3/admin/posts/slug/{}/", slug);
match client
.get(&api_url)
.header("Authorization", format!("Ghost {}", token))
.send()
.await
{
Ok(response) => {
if response.status().is_success() {
if let Ok(ghost_response) = response.json::<GhostPostsResponse>().await {
ghost_response.posts.first().map(|post| post.id.clone())
} else {
None
}
} else {
None
}
}
Err(_) => None,
}
}
async fn fetch_feed(url: &str) -> Vec<Entry> {
let content = reqwest::get(url).await.unwrap().text().await.unwrap();
println!("Fetching RSS feed from: {}", url);
let feed = parser::parse(content.as_bytes()).unwrap();
let response = reqwest::get(url).await;
let response = match response {
Ok(resp) => resp,
Err(e) => {
println!("Failed to fetch RSS feed: {}", e);
return vec![];
}
};
if !response.status().is_success() {
println!("RSS feed request failed with status: {}", response.status());
return vec![];
}
let content = match response.text().await {
Ok(text) => text,
Err(e) => {
println!("Failed to read RSS feed content: {}", e);
return vec![];
}
};
if content.trim().is_empty() {
println!("RSS feed content is empty");
return vec![];
}
println!("RSS feed content preview: {}", &content[..content.len().min(200)]);
let feed = match parser::parse(content.as_bytes()) {
Ok(f) => f,
Err(e) => {
println!("Failed to parse RSS feed: {:?}", e);
println!("Feed content starts with: {}", &content[..content.len().min(500)]);
return vec![];
}
};
println!("Successfully parsed RSS feed with {} entries", feed.entries.len());
feed.entries
}
@@ -208,6 +331,8 @@ async fn main() {
let ghost_api_url = "https://notes.ansonbiggs.com/ghost/api/v3/admin/posts/?source=html";
let ghost_admin_api_key = env::var("admin_api_key").unwrap();
let feed = "https://projects.ansonbiggs.com/index.xml";
// Split the key into ID and SECRET
@@ -241,9 +366,21 @@ async fn main() {
// Prepare the post data
let entries = fetch_feed(feed).await;
if entries.is_empty() {
println!("No entries found in RSS feed or feed parsing failed. Exiting.");
return;
}
println!("Processing {} entries from RSS feed", entries.len());
let post_exists_futures = entries.into_iter().map(|entry| {
let entry_clone = entry.clone();
async move { (entry_clone, check_if_post_exists(&entry).await) }
let token_clone = token.clone();
async move {
let link = entry.links.first().unwrap().href.as_str();
let slug = get_slug(link);
(entry_clone, get_existing_post_id(&slug, &token_clone).await.is_some())
}
});
let post_exists_results = join_all(post_exists_futures).await;

BIN
posts/.DS_Store vendored

Binary file not shown.


@@ -0,0 +1,155 @@
---
title: "Double Pendulum"
description: |
Let's create a double pendulum in Observable JS!
date: 2025-05-09
categories:
- Observable JS
- Code
- Math
draft: false
freeze: true
image: FeistyCompetentGarpike-mobile.mp4
image-alt: "My original Double Pendulum done in Python and Processing.js"
---
Quarto (which this blog is built on) recently added support for [Observable JS](https://observablehq.com/@observablehq/observable-javascript), which lets you make really cool interactive and animated visualizations. I have an odd fixation with finding new tools to visualize data, and while JS is far from the first tool I'd reach for, I figured I should give OJS a shot. Web browsers have been the best way to distribute and share applications for a long time now, so I think it's time I invested in learning something better than a Plotly diagram or a Jupyter notebook saved as a PDF for sharing data.
![My original Double Pendulum done in Python and Processing.js](FeistyCompetentGarpike-mobile.mp4){fig-alt="My original Double Pendulum done in Python and Processing.js"}
Many years ago I hit the front page of [/r/Python](https://www.reddit.com/r/Python/comments/ci1cg4/double_pendulum_made_with_processingpy/) with a double pendulum I made after watching the wonderful [Daniel Shiffman](https://thecodingtrain.com/showcase/author/anson-biggs) of the Coding Train. The video was posted on gfycat, which is now defunct, but the Internet Archive has saved it: [https://web.archive.org/web/20201108021323/https://gfycat.com/feistycompetentgarpike-daniel-shiffman-double-pendulum-coding-train](https://web.archive.org/web/20201108021323/https://gfycat.com/feistycompetentgarpike-daniel-shiffman-double-pendulum-coding-train)
I originally used Processing's Python bindings to make the animation, so a lot of the hard work was already done (mostly by Daniel), and since this animation seems to be a crowd-pleaser I went ahead and ported it over. I'm keeping the code folded since it's not the focus here, but feel free to expand it and peruse.
```{ojs}
//| echo: false
// Interactive controls
viewof length1 = Inputs.range([50, 300], {step: 10, value: 200, label: "Length of pendulum 1"})
viewof length2 = Inputs.range([50, 300], {step: 10, value: 200, label: "Length of pendulum 2"})
viewof mass1 = Inputs.range([10, 100], {step: 5, value: 40, label: "Mass of pendulum 1"})
viewof mass2 = Inputs.range([10, 100], {step: 5, value: 40, label: "Mass of pendulum 2"})
```
```{ojs}
//| code-fold: true
//| column: page
pendulum = {
const width = 900;
const height = 600;
const canvas = DOM.canvas(width, height);
const ctx = canvas.getContext("2d");
const gravity = .1;
const traceCanvas = DOM.canvas(width, height);
const traceCtx = traceCanvas.getContext("2d");
traceCtx.fillStyle = "white";
traceCtx.fillRect(0, 0, width, height);
const centerX = width / 2;
const centerY = 200;
// State variables
let angle1 = Math.PI / 2;
let angle2 = Math.PI / 2;
let angularVelocity1 = 0;
let angularVelocity2 = 0;
let previousPosition2X = -1;
let previousPosition2Y = -1;
function animate() {
// Physics calculations (same equations as Python)
let numerator1Part1 = -gravity * (2 * mass1 + mass2) * Math.sin(angle1);
let numerator1Part2 = -mass2 * gravity * Math.sin(angle1 - 2 * angle2);
let numerator1Part3 = -2 * Math.sin(angle1 - angle2) * mass2;
let numerator1Part4 = angularVelocity2 * angularVelocity2 * length2 +
angularVelocity1 * angularVelocity1 * length1 * Math.cos(angle1 - angle2);
let denominator1 = length1 * (2 * mass1 + mass2 - mass2 * Math.cos(2 * angle1 - 2 * angle2));
let angularAcceleration1 = (numerator1Part1 + numerator1Part2 + numerator1Part3 * numerator1Part4) / denominator1;
let numerator2Part1 = 2 * Math.sin(angle1 - angle2);
let numerator2Part2 = angularVelocity1 * angularVelocity1 * length1 * (mass1 + mass2);
let numerator2Part3 = gravity * (mass1 + mass2) * Math.cos(angle1);
let numerator2Part4 = angularVelocity2 * angularVelocity2 * length2 * mass2 * Math.cos(angle1 - angle2);
let denominator2 = length2 * (2 * mass1 + mass2 - mass2 * Math.cos(2 * angle1 - 2 * angle2));
let angularAcceleration2 = (numerator2Part1 * (numerator2Part2 + numerator2Part3 + numerator2Part4)) / denominator2;
// Update velocities and angles
angularVelocity1 += angularAcceleration1;
angularVelocity2 += angularAcceleration2;
angle1 += angularVelocity1;
angle2 += angularVelocity2;
// Calculate positions
let position1X = length1 * Math.sin(angle1);
let position1Y = length1 * Math.cos(angle1);
let position2X = position1X + length2 * Math.sin(angle2);
let position2Y = position1Y + length2 * Math.cos(angle2);
// Clear and draw to canvas
ctx.fillStyle = "white";
ctx.fillRect(0, 0, width, height);
ctx.drawImage(traceCanvas, 0, 0);
// Draw pendulum
ctx.save();
ctx.translate(centerX, centerY);
// First arm and mass
ctx.beginPath();
ctx.moveTo(0, 0);
ctx.lineTo(position1X, position1Y);
ctx.strokeStyle = "black";
ctx.lineWidth = 2;
ctx.stroke();
ctx.beginPath();
ctx.arc(position1X, position1Y, mass1/2, 0, 2 * Math.PI);
ctx.fillStyle = "black";
ctx.fill();
// Second arm and mass
ctx.beginPath();
ctx.moveTo(position1X, position1Y);
ctx.lineTo(position2X, position2Y);
ctx.stroke();
ctx.beginPath();
ctx.arc(position2X, position2Y, mass2/2, 0, 2 * Math.PI);
ctx.fill();
ctx.restore();
// Draw trace line
if (previousPosition2X !== -1 && previousPosition2Y !== -1) {
traceCtx.save();
traceCtx.translate(centerX, centerY);
traceCtx.beginPath();
traceCtx.moveTo(previousPosition2X, previousPosition2Y);
traceCtx.lineTo(position2X, position2Y);
traceCtx.strokeStyle = "black";
traceCtx.stroke();
traceCtx.restore();
}
previousPosition2X = position2X;
previousPosition2Y = position2Y;
requestAnimationFrame(animate);
}
animate();
return canvas;
}
```
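For reference, the `angularAcceleration` expressions in the fold above are the standard double-pendulum equations of motion, with \(\theta_i\), \(\dot{\theta}_i\), \(m_i\), and \(L_i\) corresponding to `angle`, `angularVelocity`, `mass`, and `length` in the code (angles measured from vertical, and the per-frame update standing in for the timestep):

```latex
\ddot{\theta}_1 = \frac{-g(2m_1+m_2)\sin\theta_1 - m_2 g \sin(\theta_1-2\theta_2)
  - 2\sin(\theta_1-\theta_2)\, m_2\left(\dot{\theta}_2^2 L_2 + \dot{\theta}_1^2 L_1 \cos(\theta_1-\theta_2)\right)}
  {L_1\left(2m_1+m_2-m_2\cos(2\theta_1-2\theta_2)\right)}

\ddot{\theta}_2 = \frac{2\sin(\theta_1-\theta_2)\left(\dot{\theta}_1^2 L_1 (m_1+m_2)
  + g(m_1+m_2)\cos\theta_1 + \dot{\theta}_2^2 L_2 m_2 \cos(\theta_1-\theta_2)\right)}
  {L_2\left(2m_1+m_2-m_2\cos(2\theta_1-2\theta_2)\right)}
```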
## Conclusion
I think this is far from an idiomatic implementation, so I'll keep this brief. I don't think I used JS or Observable as well as I could have, so treat this as a beginner's stab in the dark, because that's essentially what the code is.
This was quite a bit more work than the [original Python implementation](https://gitlab.com/MisterBiggs/double_pendulum/blob/master/double_pendulum.pyde), but running in real time, having beautiful defaults, and being interactive without a backend makes this leagues better than anything offered by any other language. There is definitely a loss of energy in the system over time, which I attribute to JavaScript being a mess, but I doubt I would ever move all of my analysis to JS anyway, so I don't think it matters. It's also very likely I'm doing something bad with my timesteps.
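The timestep suspicion is plausible: energy drift is a known artifact of simple fixed-step integrators. As a hypothetical illustration (Rust rather than OJS, and unrelated to this post's actual code), here is how explicit Euler drifts on a unit harmonic oscillator while semi-implicit (symplectic) Euler stays close to the initial energy:

```rust
// Energy drift comparison: explicit vs semi-implicit Euler for x'' = -x.
// Hypothetical illustration, not code from this repository.
fn energy(x: f64, v: f64) -> f64 {
    0.5 * (x * x + v * v)
}

fn simulate(symplectic: bool, dt: f64, steps: usize) -> f64 {
    let (mut x, mut v) = (1.0_f64, 0.0_f64);
    for _ in 0..steps {
        if symplectic {
            v += -x * dt; // update velocity first...
            x += v * dt; // ...then position with the NEW velocity
        } else {
            let a = -x; // explicit Euler: both updates from the old state
            x += v * dt;
            v += a * dt;
        }
    }
    energy(x, v)
}

fn main() {
    let e0 = energy(1.0, 0.0);
    let explicit = simulate(false, 0.1, 1000);
    let symplectic = simulate(true, 0.1, 1000);
    println!("E0 = {e0:.3}, explicit = {explicit:.3}, symplectic = {symplectic:.3}");
    // The symplectic run ends much closer to the initial energy.
    assert!((symplectic - e0).abs() < (explicit - e0).abs());
}
```

The update order (velocity first, then position using the new velocity) is the whole trick; for the coupled pendulum the drift wouldn't vanish entirely, but a smaller, explicit timestep or a higher-order integrator would likely help.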