the durability of a piece of code can be expressed as how likely the program is to produce its expected output, given every possible input
but, outside of narrow conditions, actually checking this is a real challenge. not only is it difficult in itself, but building on top of such guarantees can be restrictive
i have a great deal of appreciation for functional languages like haskell and ocaml (despite being awful at things like .filter())
i like to know what is in a function's scope, and i like functions to be self-contained
but i'd argue these languages are too mired in concepts and hard to read for my c-style brain (shoutout to ocaml's default formatting settings)
now, an idea one might pick up from conference talks is "let it crash"
rather than trying to define every failure mode within the type system, sometimes we should just abort
// runs in bytecode
func complexTask(...) complex_t { ... }
// gets compiled
func (@compile) complexTask(...) complex_t { ... }
// unsafe by default, but can enforce strict typing
func (@safe) complexTask(...) io complex_t { ... }

anyway, i think starting from a fairly loose point, using bytecode, could be interesting
IO can be painful in functional languages, but knowing where it happens is genuinely useful
likewise, a lack of strong typing means things can be missed or overlooked unless you have total knowledge of what a function does