

The mention of caverns piques my interest; I’ve long daydreamed of dwarf fortress-style verticality in factorio.


Played a bit more of the Lizardmen campaign in Total War: Warhammer 2 (easy campaign difficulty and normal battle difficulty). It feels really good when a carefully planned deployment and battle plan turns a predicted “valiant defeat” into a “close victory”. The constant tension between expansion and territorial defense is surprisingly hard to balance, especially with the “main quest” events that spawn Chaos armies a few turns’ march from your capital. The most frustrating part so far is how the option to confederate with other Lizardmen factions only seems to be available if you have no preexisting diplomatic ties: as soon as you sign even a pact of non-aggression, the option simply disappears from the diplomacy menu despite good relations/standing.
I’ve also been playing a bit of Old School RuneScape. The quest line(s) involving the Humans Against Monsters association hit a bit deeper given current events IRL…
I’m thinking of giving Project Zomboid another try. I wish I had someone to play it with; zombie apocalypse games are much more fun when you can roleplay as a group of survivors (and diversify your skills).


Another approach could be to run an image convolution kernel in GIMP (or some other image manipulation program). Something like
[
1, 1, 1
1, 0, 1
1, 1, 1
]
and then keep or drop pixels based on whether the result is ≥ 4 or < 4.
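Sketching that same kernel idea in plain JS rather than GIMP (the helper name below is mine, written to mirror the part 1 rule; an illustration, not something I ran against the real input):
function countAccessibleByKernel(lines, gridWidth, gridHeight) {
  // 8-neighbor kernel: 1s around a 0 center.
  const kernel = [
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
  ];
  let accessible = 0;
  for (let y = 0; y < gridHeight; y++) {
    for (let x = 0; x < gridWidth; x++) {
      if (lines[y][x] !== '@') continue;
      // "Convolve": sum the kernel weights over the occupied neighbors.
      let sum = 0;
      for (let ky = 0; ky < 3; ky++) {
        for (let kx = 0; kx < 3; kx++) {
          const ny = y + ky - 1;
          const nx = x + kx - 1;
          if (ny < 0 || ny >= gridHeight || nx < 0 || nx >= gridWidth) continue;
          if (lines[ny][nx] === '@') sum += kernel[ky][kx];
        }
      }
      // Threshold the convolved value, like a levels/threshold filter would.
      if (sum < 4) accessible++;
    }
  }
  return accessible;
}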
This was a good opportunity to refresh my grasp on the math involved in losslessly stuffing a tuple into a single number. JS-in-the-browser has Sets and Maps but no Tuples, and Arrays are compared by reference rather than by their contents, so if you want to put coordinates into a set or map and have the collection behave as expected you need to serialize the coordinates into a primitive type. Stuff them into a string if you don’t want to think too hard. For this specific problem we don’t even need to be able to compute the original coordinates (we just count the unique removed points), but implementing that computation was a handy way to verify the “serializer” was working correctly.
Seeing as the Record & Tuple proposal was withdrawn in February of this year, this is still a technique worth knowing when working with coords in JS.
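For reference, the “stuff it into a string” variant is a one-liner each way (the key format here is arbitrary):
{
  // Serialize coords as an 'x,y' string key; strings compare by value.
  const seen = new Set();
  seen.add(`${3},${4}`);
  console.debug(seen.has('3,4')); // true
  // Deserializing is just splitting the key back into numbers.
  const [x, y] = '3,4'.split(',').map(Number);
  console.debug({ x, y });
}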
function part1(inputText) {
  // The first newline ends the first row, so its index is the grid width.
  const gridWidth = inputText.indexOf('\n');
  const lines = inputText.trim().split('\n');
  const gridHeight = lines.length;
  let accessibleRolls = 0;
  for (let y = 0; y < gridHeight; y++) {
    for (let x = 0; x < gridWidth; x++) {
      if (lines[y][x] === '@') {
        // Count occupied cells among the 8 surrounding neighbors.
        let occupiedNeighbors = 0;
        for (const [neighborX, neighborY] of [
          [x - 1, y],
          [x + 1, y],
          [x, y - 1],
          [x, y + 1],
          [x - 1, y - 1],
          [x - 1, y + 1],
          [x + 1, y - 1],
          [x + 1, y + 1],
        ]) {
          if (neighborX < 0 || neighborX >= gridWidth || neighborY < 0 || neighborY >= gridHeight) {
            continue;
          }
          if (lines[neighborY][neighborX] === '@') {
            occupiedNeighbors++;
          }
        }
        if (occupiedNeighbors < 4) {
          accessibleRolls++;
        }
      }
    }
  }
  return accessibleRolls;
}
{
  const start = performance.now();
  const result = part1(document.body.textContent);
  const end = performance.now();
  console.info({ day: 4, part: 1, time: end - start, result });
}
function serializeCoords(x, y, gridWidth) {
  // Reserve enough decimal digits to hold any coordinate below gridWidth,
  // then pack x into the digits above them (assumes a roughly square grid,
  // so y fits in the same number of digits).
  const leftShiftAmount = Math.ceil(Math.log10(gridWidth));
  return x * (10 ** leftShiftAmount) + y;
}
/*
{
  const x = 3;
  const y = 4;
  const gridWidth = 13;
  const serialized = serializeCoords(x, y, gridWidth);
  console.debug({ x, y, gridWidth, serialized });
}
*/
function deserializeCoords(serialized, gridWidth) {
  const leftShiftAmount = Math.ceil(Math.log10(gridWidth));
  const x = Math.floor(serialized / (10 ** leftShiftAmount));
  const y = serialized - x * 10 ** leftShiftAmount;
  return [x, y];
}
/*
{
  const serialized = 304;
  const gridWidth = 13;
  const [x, y] = deserializeCoords(serialized, gridWidth);
  console.debug({ serialized, gridWidth, x, y });
}
*/
function part2(inputText) {
  const gridWidth = inputText.indexOf('\n');
  const lines = inputText.trim().split('\n');
  const gridHeight = lines.length;
  let removed = new Set();
  // Repeatedly sweep the grid, removing every currently accessible roll,
  // until a sweep removes nothing (a fixed point).
  while (true) {
    const toRemove = new Set();
    for (let y = 0; y < gridHeight; y++) {
      for (let x = 0; x < gridWidth; x++) {
        const serialized = serializeCoords(x, y, gridWidth);
        if (lines[y][x] === '@' && !removed.has(serialized)) {
          let occupiedNeighbors = 0;
          for (const [neighborX, neighborY] of [
            [x - 1, y],
            [x + 1, y],
            [x, y - 1],
            [x, y + 1],
            [x - 1, y - 1],
            [x - 1, y + 1],
            [x + 1, y - 1],
            [x + 1, y + 1],
          ]) {
            if (neighborX < 0 || neighborX >= gridWidth || neighborY < 0 || neighborY >= gridHeight) {
              continue;
            }
            const serializedNeighbor = serializeCoords(neighborX, neighborY, gridWidth);
            if (lines[neighborY][neighborX] === '@' && !removed.has(serializedNeighbor)) {
              occupiedNeighbors++;
            }
          }
          if (occupiedNeighbors < 4) {
            toRemove.add(serialized);
          }
        }
      }
    }
    if (toRemove.size === 0) {
      break;
    }
    removed = removed.union(toRemove);
  }
  return removed.size;
}
/*
{
  const exampleText = `..@@.@@@@.
@@@.@.@.@@
@@@@@.@.@@
@.@@@@..@.
@@.@@@@.@@
.@@@@@@@.@
.@.@.@.@@@
@.@@@.@@@@
.@@@@@@@@.
@.@.@@@.@.
`;
  const start = performance.now();
  const result = part2(exampleText);
  const end = performance.now();
  console.info({ day: 4, part: 2, time: end - start, result });
}
*/
{
  const start = performance.now();
  const result = part2(document.body.textContent);
  const end = performance.now();
  console.info({ day: 4, part: 2, time: end - start, result });
}
For part 2, I eagerly wrote a nice, clean, generic, functional depth-first search, only to get an out-of-memory error 😭. Note the top-level code blocks: they scope the variables declared inside them, allowing me to run the whole script repeatedly in the console without getting “redeclared variable name” errors.
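A minimal illustration of that scoping trick, separate from the puzzle code: pasting this twice into the same console session is fine, whereas two top-level const result declarations would throw.
{
  // Block scope: `result` only exists inside the braces, so re-running
  // the snippet never triggers a const redeclaration error.
  const result = 1 + 1;
  console.debug(result);
}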
function part1(inputText) {
  let totalOutputJoltage = 0;
  for (const batteryBankDef of inputText.split('\n')) {
    // Best two-digit number formed by an earlier digit followed by a later one.
    let bestBankJoltage = 0;
    const previousDigits = [];
    for (const character of batteryBankDef) {
      const currentDigit = Number.parseInt(character, 10);
      for (const previousDigit of previousDigits) {
        const possibleJoltage = 10 * previousDigit + currentDigit;
        if (possibleJoltage > bestBankJoltage) {
          bestBankJoltage = possibleJoltage;
        }
      }
      previousDigits.push(currentDigit);
    }
    totalOutputJoltage += bestBankJoltage;
  }
  return totalOutputJoltage;
}
{
  const start = performance.now();
  const result = part1(document.body.textContent);
  const end = performance.now();
  console.info({ day: 3, part: 1, result, time: end - start });
}
function findNthDigitForSequence(bankDef, n, startIndex) {
  // Look for the largest digit whose position still leaves enough characters
  // after it to fill the remaining 11 - n slots of the 12-digit number.
  // (Trying digits all the way down to 0 matters: 0 can be the best remaining pick.)
  let digit = 9;
  while (digit >= 0) {
    for (let i = startIndex; i < bankDef.length - 11 + n; i++) {
      if (bankDef[i] === digit.toString()) {
        return [digit, i];
      }
    }
    digit--;
  }
  return undefined;
}
function findBestJoltageForBank(bankDef) {
  // Greedily pick the 12 digits of the largest possible number, left to right.
  const digits = [];
  let previousFoundDigitIndex = -1;
  for (let i = 0; i < 12; i++) {
    const digitFound = findNthDigitForSequence(bankDef, i, previousFoundDigitIndex + 1);
    if (digitFound === undefined) {
      // Bank too short to yield 12 digits.
      return undefined;
    }
    const [digit, index] = digitFound;
    digits.push(digit);
    previousFoundDigitIndex = index;
  }
  return Number.parseInt(digits.join(''), 10);
}
function part2(inputText) {
  let totalOutputJoltage = 0;
  for (const batteryBankDef of inputText.trim().split('\n')) {
    totalOutputJoltage += findBestJoltageForBank(batteryBankDef) ?? 0;
  }
  return totalOutputJoltage;
}
{
  const start = performance.now();
  const result = part2(document.body.textContent);
  const end = performance.now();
  console.info({ day: 3, part: 2, time: end - start, result });
}
This year I’m tired of dealing with reading from files and setting up IDEs, so I’m attempting to solve each day directly in my web browser’s console: after opening my problem input in a new tab I can run mySolutionFunction(document.body.textContent) in that tab’s console. Thankfully the browser I use (Firefox) has a multi-line editor mode that lets me write several lines and then run them; otherwise this would not be simpler. Unfortunately, this also means I lost my code for day 1 when I closed the tab a bit too quickly.
I didn’t want to use regex for today; you need backreferences, and those are impossible to optimize when they blow up (computationally speaking). I’m not so sure my solution for part 2 actually runs closer to linear time than a regex with a single backreference would…
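For comparison, this is roughly the backreference approach I avoided (my guess at the obvious regex, not something I benchmarked):
{
  // `^(.+?)\1+$` matches any string that is some pattern repeated 2+ times;
  // the \1 backreference is what can force the engine into heavy backtracking.
  const isRepeatedPattern = (s) => /^(.+?)\1+$/.test(s);
  console.debug(isRepeatedPattern('222222'));     // true ('2' repeated)
  console.debug(isRepeatedPattern('824824'));     // true ('824' repeated)
  console.debug(isRepeatedPattern('2121212124')); // false
}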
function part1(input) {
  let sumOfValidIds = 0;
  for (const rangeDef of input.split(',')) {
    const [start, stop] = rangeDef.split('-').map(s => Number.parseInt(s, 10));
    for (let id = start; id <= stop; id++) {
      // Valid ids have an even number of digits and two identical halves.
      const idLength = id.toString().length;
      if (idLength % 2 === 1) {
        continue;
      }
      const halfLength = idLength / 2;
      const topHalf = Math.floor(id / Math.pow(10, halfLength));
      const bottomHalf = id - topHalf * Math.pow(10, halfLength);
      if (topHalf === bottomHalf) {
        sumOfValidIds += id;
      }
    }
  }
  return sumOfValidIds;
}
part1(document.body.textContent);
function extendsPattern(containerString, pattern) {
  // True when containerString is whole copies of pattern, possibly followed
  // by a partial copy (a prefix of pattern).
  let container = containerString;
  while (container.length > pattern.length) {
    if (!container.startsWith(pattern)) {
      return false;
    }
    container = container.slice(pattern.length);
  }
  return pattern.startsWith(container);
}
function findContainedPatterns(containerString) {
  // Every prefix is a candidate pattern; drop a candidate as soon as the
  // string stops being a repetition of it.
  const patterns = [];
  for (let i = 0; i < containerString.length; i++) {
    const upTillNow = containerString.substring(0, i + 1);
    for (let j = 0; j < patterns.length; j++) {
      if (!extendsPattern(upTillNow, patterns[j])) {
        patterns.splice(j, 1);
        j--;
      }
    }
    patterns.push(upTillNow);
  }
  return patterns;
}
function part2(input) {
  let sumOfValidIds = 0;
  for (const rangeDef of input.split(',')) {
    const [start, stop] = rangeDef.split('-').map(s => Number.parseInt(s, 10));
    for (let id = start; id <= stop; id++) {
      const idString = id.toString();
      const patterns = findContainedPatterns(idString);
      // patterns always ends with the full string itself; a shorter surviving
      // candidate whose length divides the id's length means the id is that
      // pattern repeated exactly.
      if (patterns.length > 1) {
        const shortestPatternCandidate = patterns[0];
        if (idString.length % shortestPatternCandidate.length === 0) {
          sumOfValidIds += id;
        }
      }
    }
  }
  return sumOfValidIds;
}
//part2(`11-22,95-115,998-1012,1188511880-1188511890,222220-222224,1698522-1698528,446443-446449,38593856-38593862,565653-565659,824824821-824824827,2121212118-2121212124`)
part2(document.body.textContent);


[Disco Elysium] takes a lot of energy and a specific mood to play
Totally! In my experience, for the game to best resonate with you, you need to be depressed, in no small part because of People, and waiting on the final thing that will push you over the edge and make you give up on them entirely. You need to love Humanity and yet be weary of her, to have hope and yet be terminally cynical about anything good ever happening.
It’s almost like the game was designed as therapeutic deprogramming for bitter activists. Then again, I might just be projecting my own experience and perspective.


As well as 200 miles from every international airport inside the US.


Good thing I never deleted my LinkedIn; that should be much cleaner than my fediverse accounts.


Can someone explain to me how this is not my president saying “buy our stuff please, we were irresponsible and made too much and now it will bankrupt us”?
Nobody deserves customers. Of course, it’s not the technocrats nor the wealthy who will be paying for this bankruptcy but us lowly citizens.


Given the stochastic nature of LLMs and the pseudo-darwinian nature of their training process, I sometimes wonder if geneticists wouldn’t be more suited to interpreting LLM output than programmers.
For what it’s worth, fedia.io does not federate with lemmy.today: https://fedia.io/federation
The only way to approach “talking with everyone” on the fediverse is to host your own instance, and even then you’ll probably need to defederate ASAP from any instance that sends you illegal material (as in child sexual abuse material).


A little jab at the left, which doesn’t mobilize for the working-class neighborhoods; it’s nice to hear that outside anti-imperialist circles (though I wasn’t expecting it!).
It is, but maybe they mean they want no limit whatsoever on post length.
Which, well: if your instance starts sending out megabyte-sized text posts, I don’t expect it to stay federated with many others for very long.


I see, thanks for the correction.


There used to be this website, but the URL just loads up a scam site now (I’ve created this issue on the project’s tracker if anyone has additional info to contribute).
I don’t know how technical you are, @VieuxQueb@lemmy.ca , but you could try running the “defed-investigator” project locally.


lemmy.ml, no, but I’m fairly certain that lemmygrad.ml has been defederated from lemmy.world at least, if not others.


I’ll be honest, that “Iceberg Index” study doesn’t convince me just yet. It’s entirely built off of using LLMs to simulate human beings, and the studies they cite to back up the effectiveness of such an approach are in paid journals that I can’t access. I also can’t figure out how exactly they mapped which jobs could be taken over by LLMs, other than looking at 13k available “tools” (from MCPs to Zapier to OpenTools) and deciding which of the Bureau of Labor’s 923 listed skills they were capable of covering. Technically, they asked an LLM to look at each tool and decide the skills it covers, but they claim they manually reviewed this LLM’s output, so I guess that counts.
Project Iceberg addresses this gap using Large Population Models to simulate the human–AI labor market, representing 151 million workers as autonomous agents executing over 32,000 skills across 3,000 counties and interacting with thousands of AI tools
from https://iceberg.mit.edu/report.pdf
Large Population Models is https://arxiv.org/abs/2507.09901 which mostly references https://github.com/AgentTorch/AgentTorch, which gives as an example of use the following:
user_prompt_template = "Your age is {age} {gender},{unemployment_rate} the number of COVID cases is {covid_cases}."
# Using Langchain to build LLM Agents
agent_profile = "You are a person living in NYC. Given some info about you and your surroundings, decide your willingness to work. Give answer as a single number between 0 and 1, only."
The whole thing perfectly straddles the line between bleeding-edge research and junk science for someone like me who hasn’t been near academia in 7 years. Most of the procedure looks like they know what they’re doing, but if the entire thing is built on a faulty premise then there’s no guaranteeing any of their results.
In any case, none of the authors of the recent study are listed in that article on the previous study, so this isn’t necessarily a case of MIT as a whole changing its tune.
(The recent article also feels like a DOGE-style ploy to curry favor with the current administration and/or AI corporate circuit, but that is a purely vibes-based assessment I have of the tone and language, not a meaningful critique)


Chiming in to say: same, though it took one step further for me before I quit in disgust. I was ready to accept the API-costs argument in good faith until I learned that a dev could not make a Reddit client that would use my own API token. That meant they didn’t (only) care about the API load; they cared about ensuring that I see as many ads, instead of posts, as they can get away with.
Sadly, Google worsening their search results to juice their own (ad) numbers not long afterwards led to the general public learning about searching Reddit as a way to land on actual human-vetted info. Just as the core user base splintered and left in greater numbers than ever before, a tidal wave of new users joined and enthusiastically picked up the torch – without even realizing what they were contributing to.
I see a new post.
I click, I read, I scroll on.
I am the lurker.
#haiku (<- test to see how far this propagates in the mastodon / microblogging part of the fediverse)