ManMachine
@max@manmachine.me
We have reached a new era of civil engineering; now we can build bridges by simply dumping truckloads of shit into the river until the shit mountains are tall enough that some people and maybe cars can cross the river. Truly, it is a revolutionary technology that democratizes access to bridges; now everyone can dump a truckload of shit over small rivers here and there and cross the rivers instead of asking an engineer to build the bridge for them. This approach completely removes all the bottlenecks in engineering, too: no need to navigate difficult legal or ethical frameworks. The biggest players on the market are starting to replace their bridges with shit mountains; you'd better be catching up and learning how to use this new groundbreaking technology. Some of you have ethical concerns, but those are beyond the scope of my post. I also recognise that some might notice fish in the rivers dying, or simply slip on the shit; just you wait, I bet it'll be fixed in ~6 months
Look, there are lots of skeptics out there, but the shit mountains are becoming really useful these days. With just a shit mountain or two you could reach places that previously required a ladder or a bridge or a vehicle. The vehicle part is still out of reach, but in the future we can place shit mountains in such a way that, once we pour some shit between them, they will let us reach the destination almost as fast as cars and boats. And it runs on shit, and as you know, shit is virtually free; you can literally go to a number of websites and get the shit for free. You can even get open-sourced shit these days, and pour it locally. Open source shit mountains are not as good as the commercial ones yet, but we're getting there.
Anyway, the bottom line: shit mountains are here to stay. Learn how to live with them.
@nina_kali_nina it's inevitable! we can't put the genie back in the bottle so I will keep wishmaxxing! We have no agency in this matter at all, so I will continue to boost it
@nina_kali_nina me and my local neighborhood agree with you. I just walked down the street past an ad for what's probably AI shit (I haven't looked at what it is, and the ad doesn't explain it either, but it sure has that distinct visual fragrance of arrogance), and it was defaced to hell with countless tags saying "IA = caca", meaning "AI = shit", but in beautiful French it rhymes
@nina_kali_nina but but but... You're doing the shit dirty with your posts comparing it to genai 😢
Shit is actually useful when I tend the fields in my farm...
@hkz shit is pretty great; many AI applications are great too! Computer vision was done in ethical and safe ways for decades. :3 shit should be where it belongs.
Incidentally, shit can be and is a construction material. My parents' place used a mixture of clay and horse manure for its building blocks, iirc. But it isn't the same as dumping a mountain of shit in a house-shaped way, or asking an agent of chaos to keep dumping mountains of shit on top of each other until they start to shape into a house
@nina_kali_nina We got here because software engineering has never required licenses or guardrails like other engineering professions.
Look, OK, it doesn't work now, but if we just add enough shit, it will surely become sentient, and that's why we're scraping everyone's toilets 24/7.
@nina_kali_nina great analogy! The closing is so spot on 😂
@nina_kali_nina mixed with the attention economy people clamor to say proudly 'i made this pile of shit' for some updooks
(language pun: dookie)
@nina_kali_nina I mean, this but literally for natural gas (and even coal) plants being built to power data centers.
@nina_kali_nina my fav are the blog posts that are like "in only one day i was able to build a shit bridge that 63% of users were able to cross successfully. i take no joy in saying this but civil engineering is cooked"
@aparrish "deploying a bridge isn't a bottleneck anymore" checks: we are in the business of building public transit systems end to end, from building vehicles to running them
@nina_kali_nina as an engineering firm, what would you rather deal with:
- *years* of meetings with crusty old engineers with their "math" and "simulations" and "safety protocols", and endless staffing meetings?
~ OR ~
- *days* to see a fully-functional mountain of shit?
just in terms of time-to-market, the mountain of shit is the clear winner, and the way of the future.
@JamesWidman be a "market disruptor", obviously, investors love this
@nina_kali_nina "Shit mountains are unstable and smell bad, but we know how to solve the problem, just give us 563 quadrillion dollars more and sure this time we will get it right, not like the other times we promised it in the previous 3 years."
@nina_kali_nina Reading it gave me mild PTSD, since I had been working on some dams the last few weeks, and I don't know what's worse, a shit dam or the radioactive dam I was working on.
@nina_kali_nina Also, all the shit will have washed away in three to seven years’ time, so we will need everyone on the planet to shit themselves into oblivion to generate enough material to rebuild. There will be plenty of shit for this cycle to continue indefinitely, we are sure.
Since @majenko is out gallivanting this evening, a late-running replacement stream will be departing from https://twitch.tv/baljemmett at 2100ish GMT (about half an hour from now). We'll be looking at the PSU design I started on Sunday, but chucking that out and starting over in a way that isn't boring and sucky. Hooray!
In what seems to have become somewhat of a theme this week, I'm going to steal @TechTangents's usual Sunday slot since he's unable to stream today - so I'll be moving mine forward from the planned 2100 GMT to take advantage of the gap. Hoping to finish off the LTO sled PSU board design so the extra time might come in handy!
That'll be at the usual https://twitch.tv/baljemmett from, eh, probably 1730 GMT (half an hour from now). Need to find myself a big coffee first...
@baljemmett @TechTangents One pint of coffee...
@TechTangents @chloeraccoon Absolutely! Proper British pint, too, none of these American short measures ;)
from an article about the bitcoin crash: “Bitcoin is crashing hard, reaching historic lows of well below the $70,000 mark. At the time of writing, the token is hovering just above $63,000, levels we haven’t seen since October 2024.”
Given that I personally remember people being excited that bitcoin had reached the mark of one (1) dollar, the term “historic lows” to describe returning to the state of things slightly over one year ago is rather telling about the tech industry’s lack of perspective and cultural memory…
(it’s still dropping though 😌)
@0xabad1dea isn't the big landmark that it now costs more to mine a bitcoin (in dollars) than 1BTC is worth?
@0xabad1dea $10 was when I looked at it, realised it was a ‘greater fool’ thing, and thought it was probably too late to buy because the supply of greater fools must be close to exhaustion.
Capitalism has convinced people that new always equals “better” so if you want to rebel against Capitalism start to recognize that “new” is often just a way to extract more money from you.
Existing (or “old”) things can be functional and beautiful, and often easier to repair.
The word “old” should not be an insult. Not to things or to people.
@rasterweb I'm also trying to stop using "amateur" to mean sloppy and "professional" to mean high quality. That has always been a shitty dichotomy, and deskilling is only going to make it worse.
@attoparsec "Amateur: a person who engages in a study, sport, or other activity for pleasure rather than for financial benefit or professional reasons" seems about right!
It's about doing it because you love doing it.
@attoparsec I once emailed a community member letting them know we had to hire a "professional" to do something and he replied saying "Have I done a sub-standard job in the past as a volunteer?" and I had to explain that we were basically being extorted by a venue who was going to charge us *more* to have our own volunteer do something rather than pay them to do it.
(I put "professional" in quotes to mock the term, but he didn't get it until I explained it.)
#OtD 6 Feb 1916 the Cabaret Voltaire nightclub opened in Zürich, Switzerland. Described as "history's wildest nightclub" it was the spiritual home of the often radical Dada art movement, formed by artists revolted by the capitalist carnage of WWI https://stories.workingclasshistory.com/article/10606/opening-of-the-cabaret-voltaire-nightclub
_Why_, in sed and perl, does the s/foo/bar/ syntax default to substituting just one occurrence, and not all of them?
I can't immediately remember any situation where that was specifically what I wanted. And I can remember lots of situations where I was caught out by forgetting to add the 'g' flag on the end. (One of them three minutes ago, oddly enough.)
Why isn't 'g' the default, and 'only substitute once' a special option you have to select?
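For readers following along, a minimal demonstration of the two behaviours, using POSIX sed (the input strings are invented):

```shell
# Without the trailing 'g' flag, sed substitutes only the first
# match on each input line.
printf 'foo foo\nfoo foo\n' | sed 's/foo/bar/'
# bar foo
# bar foo

# With 'g', every match on each line is replaced.
printf 'foo foo\nfoo foo\n' | sed 's/foo/bar/g'
# bar bar
# bar bar
```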
@aleteoryx I wonder if the answer in that case is _also_ "because it's what sed did" – that language, whatever it is, went with what seemed to be the existing convention. Maybe all the blame for this can be laid on a single decision decades ago that everyone's been following since.
@aleteoryx @simontatham I feel like it's a case of "the cheap one-time operation as the default, expensive unbounded operation as the opt-in".
Lua is the only language off the top of my head that does not do that. str:gsub("a", "b") will replace all instances, but str:gsub("a", "b", n) is how you do only n substitutions.
@andnull @aleteoryx Python too:
>>> "tomorrow, and tomorrow, and tomorrow".replace("tomorrow", "stoats")
'stoats, and stoats, and stoats'
@i ah, and in an _interactive_ editing session, you might well want to replace a thing just once, or at least one at a time, because a single instance of that operation in ed is the analogue of moving the cursor over to it in a full-screen editor and deleting/retyping. Could be!
@simontatham Computationally cheaper in 1970?
@simontatham Perl does it, I'm 99% sure, because sed did it, and at the time Larry was writing perl it was meant as a replacement for shell scripting, so it inherited all the quirks of the tools being replaced to make it easy for folks (including, I suspect, Larry) to switch between the two.
@simontatham The typical use is “find a particular thing and replace it”. One wouldn't want it to do a second replacement just because there _happened_ to be a second instance of the thing later on. It would be a footgun, your program would work until it encountered the one-in-ten-thousand case where something unrelated looked like the thing your program happened to be searching for.
What I don't understand is what kind of atypical usage patterns you have for s/// that you don't understand this.
@mjd but 'sed s/foo/bar/' doesn't replace just one instance. It replaces (at most) one instance _per line_. I don't see why _that's_ likely to be what you want!
Plus, even if I really did want to replace one instance of foo per line, I might well need to choose which instance it was in a way that's more subtle than 'first occurrence on the line'. So s/foo/bar/ is underpowered for that purpose.
There are lots of obvious use cases for replacing all instances. Renaming an identifier in a program, for example. (Yes I know you're supposed to use a fabulously sophisticated language-sensitive editor for that these days; you haven't always set one up in a given situation.) If I only rename one instance of the identifier per line, the ones I missed cause compile errors.
@simontatham The basic use case for sed is that you have structured data, one record per line, all of the same form, and you want to perform a single operation on each record.
I think the disconnect here is that you're thinking of freeform text, rather than structured data.
@mjd yes, in which case it might very well be that the 'foo' I want to replace is the one in the third field of each line, if any. And there might or might not happen to be a foo in the preceding fields, or the following ones, and if so, I want to leave those ones alone. So then even sed s/foo/bar/ won't do what I want.
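To make that concrete, a sketch of the field-scoped replacement that s/foo/bar/ can't express, using awk instead (the comma separator and the field number are illustrative):

```shell
# Replace 'foo' only when it is the third field; occurrences of
# 'foo' in other fields are left alone. The trailing '1' is awk's
# idiom for "print the (possibly modified) record".
printf 'foo,x,foo\nfoo,foo,y\n' | awk -F, -v OFS=, '$3 == "foo" { $3 = "bar" } 1'
# foo,x,bar
# foo,foo,y
```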
@simontatham Counterpoint: I went and looked in Perl code I had around to find examples for you and: I found none.
I didn't use s/// much—it gets more use in ephemeral command-line stuff. All the uses I found of s/// either
1. used anchors (so wouldn't be broken by a default /g) or
2. used /g or
3. default /g wouldn't matter for some other reason or,
4. in one case, had a subtle latent bug.
I found no uses at all of `sed` because when I use sed, it's not in a file. If I were writing some awful shell script, and I needed to do a substitution, I would use perl.
@mjd in Perl, you now remind me, I do have at least one use case for 'replace once', which is to run it in a loop, allowing each substitution to see the string output by the previous one:
1 while s/foo/bar/;
But I'm not sure that's quite in the spirit of "replace once" :-)
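A concrete illustration of how that loop differs from /g, sketched with sed from a shell rather than Perl (the run-collapsing pattern is made up): each loop pass re-scans the output of the previous substitution, while /g resumes scanning after its own replacement text.

```shell
# Replace-once in a loop: keep collapsing 'aa' until nothing matches.
s='aaaa'; prev=''
while [ "$s" != "$prev" ]; do
  prev=$s
  s=$(printf '%s' "$s" | sed 's/aa/a/')
done
echo "$s"                          # prints: a

# /g in a single pass never re-examines its own output.
printf 'aaaa\n' | sed 's/aa/a/g'   # prints: aa
```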
@simontatham Then, that would probably be an awk kind of problem. sed has traditionally shied away from numbered fields.
OTOH, if one extended sed, possibly its regular expression notation, possibly its command language, to be able to, say, recognise the beginning, ending, and interior of the nth delimited field of every line, it would instantly become a significantly more powerful tool than it currently is.
@simontatham Btw, ObRant: if sed is so extended, the extension should be flexible enough to be easily applied to RFC 4180 CSV streams.
cut, too, should be able to handle those. I've got a homebrew cut that allows fields to be specified via -q (for 'quoted delimited fields', because -c and -s were already taken and -v has ... connotations) besides -b, -c, and -f, and can optionally get the field names from the first data tuple. One day, I'll clean it up and submit a patch to GNU coreutils, but I haven't gotten around to it just yet.
@riley by coincidence I ran across https://github.com/wireservice/csvkit a couple of weeks ago, which includes a tool called 'csvcut' which is like cut(1) but specialised to CSV.
@simontatham @mjd I default to thinking of it as starting to munge each line from the left, applying more transformations until I'm done with the whole line.
Obviously, I built this mental model around how s/// already works, so trying to imagine the reverse, where I have to limit every operation, feels very awkward.
Maybe the cost of this model is having to write more complex/specific "foo"s than you do/want.
@simontatham I'm afraid it's because that's how ed (I) does it, because that's how QED did it: https://bitsavers.trailing-edge.com/pdf/sds/9xx/940/ucbProjectGenie/mcjones/R-15_QED.pdf#page=33 (page 6-1): IS/newtext/oldtext/ does one, IS:G/newtext/oldtext/ does all, and IS:123/newtext/oldtext/ does 123
QED doesn't claim an ancestor editor, so "because the QED authors chose to do it like this"?
and that will be because this is the behaviour you want when you're using (s)ed commands interactively to create a document; if this command language was devised ex nihilo today in a stream processing context, it would probably be the reverse, I agree
@simontatham "First, do no harm"? When I do M-x query-replace-regexp in Emacs, I always do a few test cases before hitting "!" to speedrun the rest of the file... because even after 35 years I can fatfinger a regex. And there have been any number of times where I've been like, ooh, doing a regex for this really complicated case is gonna be way more trouble than just going down the file and eyeballing it, yes, yes, yes, no, not that one, yes, no, not these two, yes, yes, yes, okay NOW speedrun it...
But then I suspect that your use cases and mine are rather radically different... but I think my human factors prof would say that doing it just once by default would satisfy the rule of least surprise...
That's my take on it, others may think differently...
@stonebear2 but, as mentioned elsewhere in the thread, "sed s/foo/bar/" doesn't replace it just once _total_, it replaces it just once _per line_, and that's the part that seems _more_ surprising than replacing it everywhere.
Yes, in interactive editing you often want to do things one at a time and check each one. The general consensus in this thread seems to be that 'sed' inherits its default from 'ed'. In an interactive ed session, s/// is your normal tool for replacing any part of a line – the equivalent of cursoring over to a piece of text and retyping it. So in that context it certainly makes sense that global replace is not the default.
But as soon as you're running the same s/// over every line in sed, you're doing a high-speed all-at-once batch processing operation whether you like it or not. If you're not confident your sed command will do the right thing, you have no way to confirm it one operation at a time. All you can do is put the output in a fresh file and check it before you replace the original (if you were even going to do that at all).
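The "fresh file" check described above can be sketched like this (file names are hypothetical):

```shell
# Hypothetical setup: a file with occurrences of 'foo' in it.
printf 'foo everywhere\n' > input.txt

# Run the batch substitution into a fresh file first...
sed 's/foo/bar/g' input.txt > input.txt.new

# ...eyeball every change (diff exits nonzero when they differ)...
diff -u input.txt input.txt.new

# ...and only overwrite the original once satisfied.
mv input.txt.new input.txt
```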
@simontatham oh, _right_...
No, I've done the trick where I wanted to replace the _first_ doublequote in a line with a single quote (and then follow up with s/"$/'/....) really useful where I'm maybe doing a here document and trying to get the quoting right... though I'm most often running said sed (ahem) _inside_ vi rather than in a batch file...
And then Perl is just being sed-consistent.
https://faultlore.com/blah/c-isnt-a-language/ deserves a fucking record for managing to trigger people into being extremely upset while also demonstrating that they don't understand the actual point being made
@mjg59 I had to stop reading halfway through as it gradually dawned on me that this was giving me a slow-burn panic attack. 176 triples. aaaaaaaaaaaaaa
jwz » 💀 🌐
@jwz@mastodon.social
@glyph @mjg59 This is bringing back some *really* old memories for me, too... I had entirely forgotten that one of my projects at Lucid was automating our FFI. I wrote a Lisp package that parsed C header files and auto-generated the FFI interface, without running a C compiler, so it had *just enough* of an implementation of cpp to barely limp by on the common cases... What a horror that was. this must have been like 1991?
It doesn't matter whether C is good or not. It matters that if I write code in two languages that aren't C, and I want it to all be part of the same process, I need to care about C. C pervades all. You cannot escape it. C will outlive all of us. The language will die and the ABI will persist. The far future will involve students learning about C just to explain their present day. Our robot overlords will use null terminated strings. C will outlive fungi.
jwz » 💀 🌐
@jwz@mastodon.social
@mjg59 Dooming us all to inhuman toil for The One whose name cannot be expressed in signed char.
@LapTop006 @mjg59 @jwz that sounds scary, can you summarise?
The only modification I had in any of my code was a two-byte sequence that encodes a null, which is technically not valid UTF-8 but works as a way to encode a null in a C-style UTF-8 string when needed.
"But C++ libraries" motherfucker I did not live through the C++ ABI wars of the 2000s to have people tell me with a straight face that C++ will be interoperable 1500 years from now, but I would wager a lot of money that whatever software they're building then will be able to call into libglib.so
@mjg59 /me fondly recalls that paper from Drepper from 20 years ago.
Aaaanyway the point that C is a protocol is very true.
But the flip side to that is: there are no pervasive cross-platform, multi-language protocols other than it.
If you e.g. see WASI as an attempt to rectify this, for one it's basically XKCD 927 (https://xkcd.com/927/).
But even if it catches on, an IDL based approach basically concedes that some underlying protocol is *forever*.
Might still be the practical way "out".
@mjg59 You could argue that with a sufficient supply of graduate students you could build a complete software system with ABIs divorced from C. You’d kind of have to build a new processor too though because all the modern instruction sets have been co-evolved along with C/C++ compilers for the last 30+ years.
@mjg59 It's kind of amusing thinking back to 16-bit Windows where pretty much all of the ABI used Pascal calling conventions.
I guess it wasn't always destined to end up like this.
@jamesh Chunks of AmigaOS used BCPL calling convention because AmigaDOS was based on TRIPOS which was a Martin Richards production so obviously he wrote it in his language (TRIPOS being a Cambridge University reference because obviously)
@mjg59 Looking at this Raymond Chen blog post, it seems the choice was to reduce code size rather than because they saw Pascal as being more interoperable: https://devblogs.microsoft.com/oldnewthing/20040102-00/?p=41213
It also seems they didn't use Pascal calling conventions for all of the win16 ABI: just most of it.
@mjg59 We should have more preemptive embarrassment for the decisions we make (and carry) today that will force the hands of people in the future.
That being said, I still hope that one day (within the coming century) we will be able to unify operating systems and PL research and not be bound by Unix to an almost-lowest common denominator.
@mjg59
Maybe C is already dead, as in, I actively avoid writing C, choosing Rust instead.
Most of us already live in a world where there is the ABI, and there is your tooling (languages).
@mjg59 Only until there is an alternative widely-accepted ABI. Rust is working on a stable ABI for a subset of the language, and I doubt it will be a 1-to-1 correspondence with C.
@mjg59 We could trigger even more people by stating that using whatever the fuck the rust people cook up will probably be worse than what we have now.
@mjg59 Reading this and thinking about LuaJIT-FFI's approach, which is that instead of parsing C header files it defines an easily-parseable subset of C and parses that. You wind up editing your header files into long strings and passing them into Lua.
Maybe this "parseable header C" should be a cross-language standard.
@mjg59 yes, I really liked that post. A long time ago I had to optimize code for a Fortran-written program. I wrote my portion in C, because in some cases (use of intrinsics without having to write asm) it made things easier (and it was impossible in Fortran directly). Everything used the C ABI anyway.
This article is a little advanced for me, is the point that not enough languages have written their own methods to interface with assembly?
I know very little about what goes on outside of the IDE. Just some vague notion that "the compiler makes this into assembly" But is it more like the compiler makes it into C and then C makes it into assembly?
I just want to remind you that in 1992, the Internet cost $2.50 an hour to access ($4.00 after the first four hours a night), was three million times slower than WiFi, and nobody in your house could take phone calls while you used it. Doomscroll on THAT.
What’s your laptop/desktop backup recommendation for general public, not-highly-technical people who don’t have extreme security needs and just want not to lose their family photos etc?
Maybe it’s just “use the cloud drive,” but…OneDrive seems to cause a lot of problems? or does it?
jwz » 💀 🌐
@jwz@mastodon.social
@inthehands Time Machine on two removable USB drives. One lives in your house and gets connected regularly; one lives somewhere else and you back up to that once or twice a year. All other answers are incorrect.
@jwz
Yeah, that’s my exact setup too (except the remote one is more like monthly). It feels like a lot for the tech-phobic who find even a password manager overwhelming, but maybe that’s just a hurdle worth finagling people over.
jwz » 💀 🌐
@jwz@mastodon.social
@inthehands In my humble but correct opinion, they can either get over using external drives, or they can get over losing all of their photos. There's no third choice.
@jwz @inthehands I have bad experiences with this setup, particularly the Time Machine part. TM doesn't like being interrupted at all, and for nontechies, the cable being unplugged during a backup is a certainty. Fixing that was hard for *me* and imho impossible for normies.
If it’s just Photos, maybe having them drag&drop? At least for the cold storage disk?
jwz » 💀 🌐
@jwz@mastodon.social
@almad @inthehands Time Machine isn't perfect, but literally everything else is worse. There's a lot of "don't go like that then" in the whole space.
@inthehands @jwz May I ask where the remote one is? Parent's/friend's house? Safe deposit box?
jwz » 💀 🌐
@jwz@mastodon.social
@theorangetheme @inthehands Anywhere that is not likely to burn down at the same time is fine.
@jwz @inthehands I use TM on *three* removable USB drives—two SSDs (one to carry outside the house in case of fires) and one spinning rust (for reliability). Also Dropbox for file sync to the spare machine, a hot spare which *also* has two SSDs for Time Machine, but isn't always freshly backed up (or touched) from one week to the next.
jwz » 💀 🌐
@jwz@mastodon.social
@cstross @inthehands I know *so* many people whose backup strategy is: I have never taken a photo in my life with something other than an iPhone, so if I ever lose access to my iCloud, everything I've taken since I was a teenager is gone forever.
@jwz @inthehands As I was last plausibly a teenager in late 1984, more than an entire teenage lifespan before the iPhone first appeared, I now feel ancient …
jwz » 💀 🌐
@jwz@mastodon.social
@cstross @inthehands Not only were almost all of my employees born after @dnalounge opened, but probably most of them were born after I took it over...
@cstross @inthehands @jwz I have family photos on tin-types. No, I have not digitized them, there is no point. I am the last in my family to know who those people were. I just grab all the boxes and stuff them in my car.
@cstross @jwz @inthehands but aren't SSDs unreliable for long-term archiving? I see the SSD as more of a mobile solution, with HDs as the more long-term one.
I mean, I have 15-year-old HDs still working as archives of old media.
@blogdiva @cstross @jwz I was part of the team (though not a very important part, tbh) that advised Minnesota Public Radio on a storage format when they were digitizing their audio archives in the late 90s / early 00s. The conclusion our group reached was that •no• workaday digital format is suitable for long-long-term archiving, and by far the best approach is to have a process for copying and recopying it all forward onto new physical media into perpetuity.
jwz » 💀 🌐
@jwz@mastodon.social
@inthehands @blogdiva @cstross Exactly this. Don't worry about how long your media will last, just assume that it won't, and have a system that tolerates that. When my backup drive fails, I notice immediately and it's a complete non-issue.
support for DNS-over-TCP has been explicitly necessary since 2010
it's irritating that we still have to keep explaining this https://lobste.rs/c/hatmxu
@fanf so, RFC 5966 said SHOULD, RFC 7766 turned it into MUST, and they still had to publish an even _more_ emphatic RFC 9210?
"The key words 'SHOULD', 'MUST', and 'GOOD GRIEF WHAT IS WRONG WITH YOU HOW CAN WE MAKE THIS ANY CLEARER' are to be interpreted …"
@simontatham there's a slight subtlety that rfc 7766 is about implementations and rfc 9210 is about deployments, so rfc 9210 is more like HEY OPERATORS, THIS MEANS YOU TOO, FIX YOUR FIREWALLS
it has a very banging-my-head-against-the-wall review of the painful and somewhat erratic history, pointing out that there are lots and lots of other rfcs that depend on dns-over-tcp, and it was in practice required before 2010, just not clearly written down as such
To whom it may concern: I am currently experimenting with Snac, @max is also me, if you get a follow from it, be not afraid.