• 0 Posts
  • 26 Comments
Joined 2 years ago
Cake day: June 21st, 2023


  • Thanks so much for the other stuff you use! I’ve been using bm for years

    If you mean from my dotfiles, that’s wild. A friend of mine wrote his own implementation in Rust, but I’ve not really used that version, and I’m not sure it’s on GitHub.

    that honestly became kind of cumbersome when I have different configs on different servers, or machines for work vs personal, etc.

    While I’m not currently using it, it’s on my todo list to take a real look at chezmoi for these per-machine differences, especially as I’m always bouncing between Linux, Windows & WSL. While chezmoi is outside the scope of this topic, it seems like a pretty solid configuration-management option…and probably safer than what I’m doing (ln -s).
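
    For context, my current approach amounts to symlinking files out of a dotfiles checkout; a minimal sketch, with made-up paths:

    # symlink configs out of the dotfiles repo; paths are illustrative
    ln -s ~/dotfiles/bashrc ~/.bashrc
    ln -s ~/dotfiles/nvim ~/.config/nvim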

    And sometimes the exports would differ making functions work differently and I didn’t want to just have to copy that section of my ~/.bashrc as well every time something updated

    My “solution” is a collection of templates I’ll load into my editor (nvim, with my lackluster plugin), which contain the basics for most scripts of a certain type. The only time I’ll write something that relies on a non-builtin, e.g. a customization, is if:

    • It’s a personal/primary machine that I’m working from
    • I require() the item & add testing for it
      • [[ -z "${var}" ]], or command -v usually
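
    In practice, that testing looks something like the sketch below; require() is a stand-in name, not my exact implementation:

    # bail early if a dependency is missing; require() is hypothetical
    require() {
        local dep
        for dep in "$@"; do
            command -v "${dep}" > /dev/null 2>&1 || {
                printf 'missing dependency: %s\n' "${dep}" >&2
                return 1
            }
        done
    }

    require jq curl || exit 1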

    For my work, every script is usually as “batteries included” as reasonable, in whatever language I’m required to work with (bash, sh, pwsh or groovy). That said, the only items that appear in nearly every script at work are:

    • Base functions for normal ops: main(), show_help(), etc.
    • Some kind of logging facility with log()
      • colors & “levels” are a pretty recent change
    • Email notifications on failure (just a curl wrapper for Mailgun)
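
    The notification piece really is just a thin wrapper; a minimal sketch, assuming Mailgun’s v3 messages endpoint (MAILGUN_KEY, MAILGUN_DOMAIN & ALERT_ADDR are illustrative names):

    notify_failure() {
        # POST the failure message through Mailgun’s HTTP API
        curl --silent --user "api:${MAILGUN_KEY}" \
            "https://api.mailgun.net/v3/${MAILGUN_DOMAIN}/messages" \
            -F from="cron@${MAILGUN_DOMAIN}" \
            -F to="${ALERT_ADDR}" \
            -F subject="[FAIL] ${0##*/} on $(hostname)" \
            -F text="${1:-script failed with no message}"
    }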

    bashly framework

    Transpiling bash into bash is probably the weirdest workflow I’ve ever heard of. While I can see some benefit to a “framework” mentality, if the ‘compiled’ result is a 30K-line script, I’m not sure how useful it really is.

    For me at least, I view most shell scripts as being simple automation tools, and an exercise in limitation.

    If you look through my code in particular, you’ll see I use many of these bash-isms you’ve mentioned!

    I did see some of that, even in the transpiled dtools monolith

    $(<file)

    Just be aware that this reads the full contents into a variable, not an array. I would generally use mapfile/readarray for multiline files. As for the jq example, you should be able to get away with jq '.[]' < file.json, which is also POSIX when that’s a concern.
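
    A quick illustration of the difference (filenames are placeholders):

    contents="$(< notes.txt)"        # whole file, newlines and all, in one string

    mapfile -t lines < notes.txt     # one array element per line instead
    printf '%s\n' "${lines[0]}"      # e.g. just the first line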

    maybe we should work together to update the framework to have better conventions like you’ve mentioned?

    I don’t think I’m the right person for that job – I’m both unfamiliar with Ruby and have no desire to interact with it. I’m also pretty opinionated about shell generally, and likely not the right person to come up with a general spec for most people.

    Additionally, my initial reaction (that bashly seems like a solution in search of a problem) probably isn’t healthy for the project.


  • I’ve gotten to the point that anything “useful” enough goes in a repo – unless it’s for work, since I’d otherwise be polluting our “great” subversion server…

    Functions

    I’ve stopped using as many functions, though a few are just too handy:

    • bm(): super basic bookmark manager, cd or pushd to some static path
      • I never got into everything zoxide has to offer
    • mkcd(): essentially mkdir -p && cd, but I use it enough that I forgot it isn’t standard
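
    For reference, it’s barely a function; a minimal sketch:

    mkcd() {
        # create the directory (and any parents), then hop into it
        mkdir -p -- "${1}" && cd -- "${1}" || return
    }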

    I’m also primarily a WSL user these days (one day, I’ll move permanently) – to deal with ssh-agent shenanigans there, I also rely on ssh.sh in my config. I should remove kc() at some point, as I don’t think I’ll ever go back.

    Scripts

    Despite having a big collection of scripts, I don’t use these too often, but still wanted to mention:

    • md2d.sh: pandoc wrapper, mostly using it to convert markdown into docx (rough sketch after this list)
      • my boss has a weird requirement that all documentation shared with the team must be editable in Word…
    • gitclone.sh: git clone wrapper, but I use it as gcl -g quite often
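
    The core of md2d.sh is roughly the call below; the flags are a guess at a sane pandoc invocation, not the actual script:

    # convert GitHub-flavored markdown into a docx next to the source
    pandoc --from gfm --to docx --output "${1%.md}.docx" "${1}"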

    A lot of my more useful scripts are, unfortunately, work related – and probably pretty niche.

    “Library”

    I also keep a library of sorts for reusable snippets, which I’ll source as needed. The math & array libs in particular are very rarely used – AoC (Advent of Code), for the most part.

    Config

    Otherwise, my bash config is my lifeblood – without it, I’m pretty unproductive.

    dtools comments

    Had a look through your repo, and have some thoughts if you don’t mind. You may already know about several of these items, but I’m not going to be able to sift through 30K lines to see what is/isn’t known.

    printf vs echo

    There’s a great writeup on why echo should be used with caution. It’s probably fine, but wanted to mention it – personally, I’ll use echo when I need static text and printf doesn’t otherwise make sense.
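
    The classic gotcha, for illustration:

    var='-n'
    echo "$var"           # bash eats -n as a flag and prints nothing
    printf '%s\n' "$var"  # printf treats it as data and prints -n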

    Multiline-printf vs HEREDOC

    In the script, you’ve got like 6K lines of printf statements to show various usage text. Instead, I’d recommend using HEREDOCs (<<).

    As an example:

    dtools_usage() {
        # NB: cat prints the \e sequences below literally; a heredoc doesn’t
        # expand escapes. The ANSI_FMT approach further down handles that.
        cat << EOF
    dtools - A CLI tool to manage all personal dev tools

    \e[1mUsage:\e[0m
        dtools COMMAND
        dtools [COMMAND] --help | -h
        dtools --version | -v

    \e[1mCommands:\e[0m
        \e[0;32mupdate\e[0m     Update the dtools CLI to the latest version
        ...
    EOF
    }
    

    HEREDOCs can also be used for basically any stdin stream; for example:

    ssh user@host << EOF
    hostname
    mkdir -p ~/.config/
    EOF
    

    bold() vs $'\e[1m'

    On a related note, rather than using functions (and by extension subshells, $(...)) to color text, you could do something like:

    declare -A ANSI_FMT=(   # -A is required, or the string keys all collapse to index 0
        ['norm']=$'\e[0m'
        
        ['red']=$'\e[31m'
        ['green']=$'\e[32m'
        ['yellow']=$'\e[33m'
        ['blue']=$'\e[34m'
        ['magenta']=$'\e[35m'
        ['cyan']=$'\e[36m'
        ['black']=$'\e[30m'
        ['white']=$'\e[37m'
    
        ['bold']=$'\e[1m'
        ['red_bold']=$'\e[1;31m'
        ['green_bold']=$'\e[1;32m'
        ['yellow_bold']=$'\e[1;33m'
        ['blue_bold']=$'\e[1;34m'
        ['magenta_bold']=$'\e[1;35m'
        ['cyan_bold']=$'\e[1;36m'
        ['black_bold']=$'\e[1;30m'
        ['white_bold']=$'\e[1;37m'
    
        ['underlined']=$'\e[4m'
        ['red_underline']=$'\e[4;31m'
        ['green_underline']=$'\e[4;32m'
        ['yellow_underline']=$'\e[4;33m'
        ['blue_underline']=$'\e[4;34m'
        ['magenta_underline']=$'\e[4;35m'
        ['cyan_underline']=$'\e[4;36m'
        ['black_underline']=$'\e[4;30m'
        ['white_underline']=$'\e[4;37m'
    )
    

    This stores each option in an associative array (a hash table, more or less), accessible as ${ANSI_FMT["key"]}, which expands like any other variable. As such, the text is inserted directly without needing to spawn a subshell.

    Additionally, the $'...' syntax (ANSI-C quoting) is a bashism that expands escape sequences directly, so $'\t' expands to a literal tab character. Note that variables are not expanded inside $'...'; to mix escapes with a variable, concatenate the pieces, e.g. $'\e[31m'"$HOME"$'\e[0m'. (The similar-looking $"..." form is unrelated: it handles locale translation and doesn’t process escapes.)
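
    For instance:

    printf '%s\n' $'\ttab-indented'           # \t becomes a real tab
    printf '%s\n' $'\e[31m'"$HOME"$'\e[0m'    # red $HOME via concatenation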

    Do also note that a trailing $'\e[0m' (or equivalent) is required with this method, since there’s no longer a function resetting the formatting for you. I personally find this tradeoff worthwhile, though I also don’t use it very often.

    The heredoc example before would then look like:

    dtools_usage() {
        cat << EOF
    dtools - A CLI tool to manage all personal dev tools
    
    ${ANSI_FMT['bold']}Usage:${ANSI_FMT['norm']}
        dtools COMMAND
        dtools [COMMAND] --help | -h
        dtools --version | -v
    
    ${ANSI_FMT['bold']}Commands:${ANSI_FMT['norm']}
        ${ANSI_FMT['green']}update${ANSI_FMT['norm']}     Update the dtools CLI to the latest version
        ...
    EOF
    }
    

    As a real-world example from a recent work project:

    log() {
        # with only a level argument, read the message lines from stdin
        if (( $# == 1 )); then
            mapfile -t largs
            set -- "${1}" "${largs[@]}"
            unset largs
        fi

        local rgb lvl
        case "${1,,}" in
            emerg )     rgb='\e[1;31m'; lvl='EMERGENCY';;
            alert )     rgb='\e[1;36m'; lvl='ALERT';;
            crit )      rgb='\e[1;33m'; lvl='CRITICAL';;
            err )       rgb='\e[0;31m'; lvl='ERROR';;
            warn )      rgb='\e[0;33m'; lvl='WARNING';;
            notice )    rgb='\e[0;32m'; lvl='NOTICE';;
            info )      rgb='\e[1;37m'; lvl='INFO';;
            debug )     rgb='\e[1;35m'; lvl='DEBUG';;
        esac
        # failure-level messages also land in the global err array
        case "${1,,}" in
            emerg | alert | crit | err ) err+=( "${@:2}" );;
        esac
        shift

        [[ -n "${nocolor}" ]] && unset rgb

        while (( $# > 0 )); do
            printf '[%(%FT%T)T] [%b%-9s\e[0m] %s: %s\n' -1 \
                "${rgb}" "${lvl}" "${FUNCNAME[1]}" "${1}"
            shift
        done | tee >(
            # mirror to the logfile with the color codes stripped
            sed --unbuffered $'s/\e[[][^a-zA-Z]*m//g' >> "${log:-/dev/null}"
        )
    }
    

    Here, I’m using printf’s %b to expand the color code, then later using $'...' with sed to strip those out for writing to a logfile. While I’m not using an associative array in this case, I do something similar in my log.sh library.

    One vs Many

    Seeing that there’s nearly 30K lines in this script, I would argue it should be split up. You can easily split scripts up to keep everything organized, or to make code reusable, by sourcing the other files. For example, to use the log.sh library, I do something like:

    #!/usr/bin/env bash
    
    # $BASH_LIB == ~/.config/bash/lib
    #NO_COLOR="1"
    source "${BASH_LIB}/log.sh"
    
    # wrap the library function under a shorter name
    log() {
        log.pretty "${@}"
    }
    
    log info "foo"
    
    # or use them directly
    log.die.pretty "oopsie!"
    

    Given the insane length of this monolith, splitting it up is probably worth it. The run() and related functions could stay within dtools, but each subcommand could be split out into its own file that handles its own argument parsing.
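
    A rough sketch of what that dispatch could look like; DTOOLS_LIB and the dtools_<cmd> naming are made up for illustration:

    #!/usr/bin/env bash

    # hypothetical layout: one file per subcommand under $DTOOLS_LIB
    DTOOLS_LIB="${DTOOLS_LIB:-${HOME}/.local/lib/dtools}"

    main() {
        local cmd="${1:-help}"
        shift || true
        if [[ -f "${DTOOLS_LIB}/${cmd}.sh" ]]; then
            source "${DTOOLS_LIB}/${cmd}.sh"
            # each file defines dtools_<cmd>() and does its own argparse
            "dtools_${cmd}" "${@}"
        else
            printf 'unknown command: %s\n' "${cmd}" >&2
            return 1
        fi
    }

    main "${@}"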

    Bashisms

    The Wooledge page on Bashisms is a great writeup explaining the quirks between POSIX and bash – more specifically, what kinds of tools are available out of the box when writing for bash specifically.

    Some that I use on a regular basis:

    • &> or &>>: redirect both stdout & stderr to some file/descriptor (quick demos after this list)
    • |&: shorthand for 2>&1 |
    • var="$(< file)": read file contents into a variable
      • Though, I prefer mapfile or readarray for most of these cases
      • Exceptions would be in containers where those are unavailable (alpine + bash)
    • (( ... )): arithmetic expressions, including C-style for-loops
      • Makes checking numeric values much nicer: (( myvar >= 1 )) or (( count++ ))
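
    Throwaway examples of the redirection & arithmetic forms:

    ls /nope &> both.log      # stdout + stderr into one file
    ls /nope |& grep -i nope  # stderr flows through the pipe too

    count=0
    for (( i = 0; i < 5; i++ )); do
        (( count += 1 ))
    done
    (( count >= 1 )) && printf 'count=%d\n' "${count}"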

    grep | awk | sed

    Just wanted to note that awk can do basically everything grep and sed can, and then some – though these days I tend to avoid it. Using ln. 6361 as an example:

    zellij_session_id="$(zellij ls | awk '
        tolower($0) ~ /current/ {
            # gensub() is gawk-specific; this strips the ANSI color codes
            print gensub(/\x1B[[][^a-zA-Z]*m/, "", "G", $1)
        }
    ')"
    

    The downside of awk is that it can be a little slow compared to grep, sed or cut. More power in a single tool, but maybe not as performant.

    Shellcheck

    I’m almost certain I’m preaching to the choir, but I’ll add a recommendation for shellcheck, or bash-language-server more broadly.

    While there’s not much it spits out for dtools, there are some items of concern, notably unclosed strings.
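
    For reference, something like this is how I’d run it against the monolith (the excluded code is just an example; SC1090 is the “can’t follow non-constant source” warning you’d hit after splitting things up):

    # report everything down to style-level findings
    shellcheck --severity=style --exclude=SC1090 dtools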

    A Loose Offer

    If interested, I could look at rewriting dtools, taking into account the items I’ve listed above, amongst others. Given the scope of the project, that’s quite the undertaking for a new set of eyes, but figured I’d throw it out there. Gives me something to do over the upcoming long weekend.





  • As I added in another comment, I misunderstood the DHH element of the discourse as I, admittedly, don’t know much of anything about him – I’ve heard some references here and there, but that’s about it.

    Taking a stand against things like this causes change for the better in the long run.

    That’s also fine, and I generally agree. My concern basically boils down to killing momentum by sinking a company with (probably?) sane views on right-to-repair & libre software.

    If the goal of a boycott is to starve the company until it goes under because they made a move we don’t like, then I don’t really support that in this context. If the goal is to force their hand towards at least transparency, or maybe force NP to step down, then I’d support that.



  • I’ll admit I’m not up to date on the hyprland/vaxry lore – but I don’t understand the level of outrage based on this article…

    I’m also not sure why the sponsorship of a software project is necessarily being treated as a 100% endorsement of both the maintainers and their alleged views.

    I’m also not sure if infighting and purity testing will help the movement(s) right now. Once it’s the norm, sure, but it’s still a relatively fringe movement within the industry.


    Edit (2025-10-15@20:14): At the time of writing my comment, I was both unaware of (and uninformed on) the DHH side of this topic. While I still think the level of outrage is maybe a little melodramatic, the pushback seems more warranted than it initially did. I still don’t know much about DHH beyond Rails (and even then, not much); but from what I’ve seen since my comment, the response is more understandable.