Tab Completion, by Tab Atkins Jr. (jackalmage@gmail.com, http://www.xanthir.com). All content is published in the public domain, or optionally may be licensed under the CC0 license.

<h1>Octonion Alchemy</h1><p><i>Published 2019-08-11 (http://www.xanthir.com/b51z0)</i><p>A few months ago, I stumbled on <a href="https://www.quantamagazine.org/the-octonion-math-that-could-underpin-physics-20180720/" title="">an article about a physicist who believes octonion math can underlie physics</a>. It was a pretty interesting article in its own right, but what really grabbed me at the time was the diagram near the end, giving a brief explainer of complex, quaternion, and octonion math:<p><img src="/pictures/octonions.jpg" style="max-width: 100%"><p>Tho I'd seen quaternion and octonion math before, I'd never quite seen the quaternion units depicted in that rock-paper-scissors diagram. And then when I looked down at the octonion diagram of the unit interactions, I was struck: that looks like a (simple) goetic diagram! The sort of "mystic circles and triangles with some Latin bullshit" thing you see in anime alchemy and other magic.<p>I couldn't get this out of my head, and after some <a href="https://twitter.com/tabatkins/status/1093978672380338176" title="">noodling on Twitter</a> (the link goes to the bottom of the thread to ensure you see the whole thing; scroll to the top to start reading) and chatting with my brother, I'd come up with a whole little overcomplicated elemental transmutation alchemy system which I really liked.<p><a href="https://docs.google.com/drawings/d/16b-F4H-h1yC-XTuGNh9sfE1wAMdnMSGgbpYrRkYXY-g/" title="">Here's the diagram in Google Diagrams.</a> (I haven't spent the time to make it an SVG yet, sorry.) 
<h2>lrn 2 alchemy nub</h2><p>Reading the diagram requires a little explanation.<p>First, the circle, and each of the six straight lines (with the end looping back to the beginning to make a circle as well), describe seven transmutation cycles; the elements in each cycle can be transmuted into each other.<p>To do so, you take an element from one "node", the <b>source</b>, and an element from another node, the <b>catalyst</b>. This will produce one of the two elements in the third node of the cycle. Which one depends on the chirality of your alchemy: if your source and catalyst are both the "top" elements of their nodes, and tracing the diagram from source to catalyst <i>follows</i> the arrow connecting them, then the result is the top element of the final node. For example, fire catalyzed with water produces lightning; fire catalyzed with life loops around and produces aether; etc.<p>However, choosing the "bottom" element in a node inverts the result: fire catalyzed with air (inverted water) produces earth. Running the transmutation in reverse also inverts the result: water catalyzed with fire (inverted order) produces earth. Two inversions go back to normal: ice catalyzed with air (both inverted, normal order) produces lightning; so does water catalyzed with ice (one inverted, inverted order). All three inversions (both elements from the "bottom", with the reaction run against the arrows) take you back to inverting: air catalyzed with ice produces earth.<p>There's a hidden node in here, too, representing non-elemental substance. (In octonions, it's the value 1; the real unit, as opposed to the 7 imaginary units e₁-e₇.) By default it's mundane; a lack of magic. Inverted, it represents wild magic, destructive and reactive without the guiding template of an element constraining it. 
You can produce these by choosing the source and catalyst from the <i>same</i> node: if they're opposing elements, like fire and ice, they produce mundanity as the elements cancel out (regardless of source/catalyst order); if they're the <i>same</i> element, like fire and fire, they produce wild magic as the elements burn themselves out from the clash but leave behind their energy.<p>Mundanity combined with anything produces the other thing: mundane catalyzed with fire produces fire, wild magic catalyzed with mundane produces wild magic, etc. Wild magic combined with anything inverts the other: fire + wild magic produces ice, wild magic + wild magic produces mundanity, etc.<p>(This, if it's not obvious, perfectly replicates octonion math, if you're only allowed to use the eight units, their negatives, and multiplication.)<h2>And the point?</h2><p>I imagine that in a video game using these, a character aligned with an element would do more damage with and take less damage from attacks of that element, but take more damage from attacks with the opposing element. Aligning with mundane gives you weaker attacks, but a generalized minor defense against all elements; aligning with wild magic gives you stronger attacks, but a generalized minor weakness against all elements. There'd probably be some environmental interactions, too, like using fire abilities on a fire-aligned hex giving some bonus, etc.<p>I could also imagine a game starting with just the central circle, representing the "classical" elements, and perhaps the druidic wood/metal center. You don't know about the triangle points, so you can't use the druidic elements in alchemy, except for using wild magic to invert them. 
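<p>That parenthetical about replicating octonion math is easy to check in code. Here's a minimal TypeScript sketch of octonion unit multiplication over the Fano plane; the orientation of the triples, and the example element-to-unit assignment in the comments, are my own arbitrary choices, not anything taken from the diagram:

```typescript
// Octonion unit multiplication via the Fano plane.
// Unit 0 is the real unit 1 (mundane/wild); units 1-7 are e1-e7.
type U = { sign: 1 | -1; unit: number };

// One standard orientation of the Fano plane's seven lines:
// for each (a, b, c), ea·eb = ec (and cyclic rotations of it);
// reversing the order flips the sign.
const TRIPLES: [number, number, number][] = [
  [1, 2, 4], [2, 3, 5], [3, 4, 6], [4, 5, 7],
  [5, 6, 1], [6, 7, 2], [7, 1, 3],
];

function mul(x: U, y: U): U {
  const sign = (x.sign * y.sign) as 1 | -1;
  const neg = (-sign) as 1 | -1;
  const [a, b] = [x.unit, y.unit];
  if (a === 0) return { sign, unit: b };      // mundane passes the other thru
  if (b === 0) return { sign, unit: a };
  if (a === b) return { sign: neg, unit: 0 }; // ei·ei = -1: same element cancels
  for (const [p, q, r] of TRIPLES) {
    if (a === p && b === q) return { sign, unit: r };
    if (a === q && b === r) return { sign, unit: p };
    if (a === r && b === p) return { sign, unit: q };
    if (a === q && b === p) return { sign: neg, unit: r };
    if (a === r && b === q) return { sign: neg, unit: p };
    if (a === p && b === r) return { sign: neg, unit: q };
  }
  throw new Error("unreachable: every pair of distinct units shares a line");
}

// With, say, fire = e1, water = e2, lightning = e4 (a made-up assignment):
// mul(fire, water) = +e4 (fire catalyzed with water produces lightning);
// mul(water, fire) = -e4 (reversed order inverts the result).
```

Swapping the arguments or negating a unit flips the sign of the result, which is exactly the "inversion" behavior of the transmutation rules.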
Then, later in the game, you meet some people from a different magical tradition who introduce you to the "astral" elements (the three on the triangle points), and the rest of alchemy unfolds - you can finally use the druidic elements in alchemy!<p>Other than that, eh, this is just a fun thing to think about. Thank you, complicated mathematical structure, for giving me just enough rules to wrap some interesting stories around.

<h1>I Understand Contravariant Functors Better Now!</h1><p><i>Published 2019-08-08 (http://www.xanthir.com/b51w0)</i><p>Regular readers probably know that I enjoy some functional programming from time to time, and occasionally like to dive into Haskell and related stuff to teach myself something new. One thing that I'd seen several times, but not understood until now, is the idea of a "contravariant functor" or "cofunctor", and more generally the idea of something being covariant or contravariant over a type argument.<p>These concepts come from math, and I only vaguely understand what Wikipedia is talking about when they describe them. Translating over to category theory just obscures things further; it's made more difficult by the fact that both Abstract Math Wikipedia and Category Theory Wikipedia have fetishes for symbol-heavy statements that they apparently think make things clearer.<p>But at least within practical programming, I have a grasp on it, and I'll attempt to explain it to you, the abstract Reader.<h2>Review of Covariance</h2><p>So a quick review of "normal" functors. Say you've got a function that takes an integer and returns it "spelled out"; <code>numSpell(1)</code> returns <code>"one"</code>, etc. You want to call this on some integers you've got lying around, but alas, they're stored in an array and you don't know how many there are. 
Not to worry, Array is a functor, so you can call its <code>map()</code> method, passing <code>numSpell</code> to it, and you'll get back an Array of strings, having had <code>numSpell</code> applied to each of the integers inside the array.<p>In the abstraction of types, what happened is that you started with an Array<Int> and a function Int->String, and the map() function let you combine these two to get an Array<String>. That's a normal, covariant functor.<h2>Broken Intuition</h2><p>Now, a contravariant functor, or cofunctor (I'm so sorry, category theorists are just the absolute <b>worst</b> at terminology) is kinda similar, but different in a way that's very confusing at first.<p>While map() takes an F<A> functor and an A->B function, producing an F<B> functor, contramap() takes an F<A> cofunctor and a B->A function, and produces an F<B> cofunctor. That's... backwards??? How can you take a B->A function, and a thing of type A, and get a B out of it?<p>Turns out the problem is in our intuition, or at least, in my intuition. I'd gotten very used to thinking in terms of the "functor is a container" abstraction. That serves you well for many functors, like Array or Option, but it can lead you astray for more abstract functors. More importantly, tho, it led me to develop an intuition that the type parameter of a generic, like Array<Int>, was telling me what sort of value was inside the object; Array<Int> is an Array containing Ints, after all.<p>But that's not it at all! The type param is just telling me that, in the methods for the object, some functions will have different type signatures depending on what I specify. For Array<Int>, it's telling me that array access, the <code>arr[i]</code> syntax, will produce an Int, whereas in Array<String> the exact same syntax will produce a String.<p>In this case, and in general for functors, the type parameter is always going to be dictating the <i>return value</i> of some methods on the object. 
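<p>Concretely, the Array example above looks like this (a sketch; numSpell is stubbed out with just a few values):

```typescript
// numSpell, stubbed for a handful of values, just for illustration.
function numSpell(n: number): string {
  const words = ["zero", "one", "two", "three", "four"];
  return words[n] ?? String(n);
}

// Array is a (covariant) functor: an Array<number> plus a
// number->string function gives you back an Array<string>.
const nums: Array<number> = [1, 2, 3];
const spelled: Array<string> = nums.map(numSpell);
// spelled is ["one", "two", "three"]
```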
In other words, the functor is a <i>producer</i> of some sort of value, and the type param tells me what type of value that will be.<p>With this minor insight, the fact that map() can use an A->B function to turn an F<A> into an F<B> makes sense; the F<A> will try to produce an A value normally, then you pass it thru the A->B function and get a B instead.<h2>Contravariance</h2><p>So then we can take that minor insight further: what happens when the type parameter is instead telling you an <i>argument type</i> for methods? That is, the object <i>consumes</i> values of that type?<p>For example, say you have a Serializer class, that converts things to bytes, for writing to files. It can take lots of different types of objects, so it's parameterized as well: Serializer<Int>, Serializer<Bool>, etc.<p>So first, this can still be a functor (or technically, functor-adjacent) - it <i>produces</i> bytes, so you can <code>.map()</code> over those bytes to produce something else, by passing a Bytes->Whatever function.<p>But the type term, the Int or Bool or whatever, doesn't show up in that function, as it just handles Bytes. If you have a Serializer<Int>, you can't pass it the numSpell function, which is Int->String, and suddenly get a Serializer<String>, aka something which knows how to <i>consume</i> strings and <i>output</i> bytes. numSpell just doesn't help with that; it turns Ints into Strings, while our Serializer<String> is <i>given</i> Strings, and the Serializer<Int> it was based on only knows how to serialize Ints.<p>In order to transform a Serializer<Int> into a Serializer<String>, we obviously need a String->Int function. Pay attention there: with normal functors, turning F<A> into F<B> requires an A->B function. But turning Serializer<A> into Serializer<B> requires a B->A function. 
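<p>Here's a sketch of that in TypeScript; the Serializer shape is my own toy illustration, with a string standing in for actual bytes:

```typescript
// A contravariant functor: Serializer<A> *consumes* As.
class Serializer<A> {
  constructor(public run: (value: A) => string) {}

  // contraMap takes a B->A function and returns a Serializer<B>:
  // given a B, first turn it into an A, then serialize as before.
  contraMap<B>(f: (b: B) => A): Serializer<B> {
    return new Serializer<B>((b) => this.run(f(b)));
  }
}

const intSerializer = new Serializer<number>((n) => `int:${n}`);

// A String->Int function (here, string length) turns our
// Serializer<number> into a Serializer<string>:
const stringSerializer = intSerializer.contraMap((s: string) => s.length);
// stringSerializer.run("hello") === "int:5"
```

Note the direction: to get a Serializer<string> out of a Serializer<number>, we passed a string->number function, exactly the reversed arrow described above.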
This makes perfect sense in concrete terms: your starting functor <i>produces</i> As, so an A->B function lets you transform that into a B so you can start producing Bs instead; but your Serializer <i>consumes</i> As, so you need a B->A function so you can start accepting Bs instead, transforming them into the As you already know how to deal with. This makes Serializer a <b>contravariant</b> functor, or <b>cofunctor</b>, and so it has a <b><code>.contraMap()</code></b> method instead, which takes the opposite-direction callback and uses it to transform itself.<h2>Profunctors, Etc.</h2><p>So that's the essence here. If a functor is covariant in one of its types, that means it is, in some sense, <i>producing</i> values of that type; they're return values for some method. If you want to transform it from F<Int> to F<String>, you need to provide an Int->String method to <code>.map()</code>. If a functor is contravariant in one of its types, it's the opposite: it <i>consumes</i> values of that type, taking them as arguments to some method. To transform it from F<Int> to F<String>, you need to provide a String->Int method to <code>.contraMap()</code>.<p>There's a two-typed structure called a Profunctor that does both at once; a Profunctor<Int, String> is contravariant in Int and covariant in String. The classic example of a Profunctor is... a function, like numSpell from earlier. In general, functions are characterized by two type parameters, A->B; for numSpell, that's Int->String.<p>If you want to change an A->B function to a C->D function, you need to provide two transformations: one connecting the type A to C, and one connecting the type B to D. 
Since the function <i>produces</i> a B, you want to give a B->D mapping function so it can start producing Ds instead; since it <i>consumes</i> an A, you want to give a C->A contramapping function so it can start consuming Cs instead.

<h1>Fast/Slow D&D Initiative System</h1><p><i>Published 2018-12-18, updated 2018-12-21 (http://www.xanthir.com/b4y31)</i><p>D&D 5e's initiative system is more-or-less unchanged from much earlier editions. Every character has an "Initiative Bonus"; at the start of combat everyone (including all the DM-controlled enemies) rolls a d20 and adds their initiative bonus; then everyone takes their actions in descending order of their rolls. When everyone's gone once, it "goes back to the top" and repeats in the same order.<p>This... works. It gives you an ordering and lets you represent a faster character by giving them a higher initiative bonus... sorta. But it has several problems.<p>First, your initiative bonus <i>just doesn't matter that much</i>. A d20 has a lot of variation. Over the course of many rolls, you can distinguish between, say, a +2 and a +5 bonus in how many times you succeed. Initiative simply isn't rolled that often, tho, so a character with +5 to initiative won't <i>feel</i> like they're actually much faster than a character with +2 to initiative.<p>Second, in practice it's a rather slow, clunky way to start a battle. A perhaps dramatic build-up to combat suddenly screeches to a halt as the DM demands initiative rolls from everybody, rolls a bunch of initiatives for their monsters, and then sorts everything out. This can easily take several minutes! (It doesn't <i>seem</i> like it should - it sounds easy and quick - but theory and practice don't align well here. In practice, it's pretty slow.) Only after all that's done can combat, and fun, actually begin.<p>Third, once the initial initiative roll has happened, and the first round has finished, initiative... doesn't matter anymore. 
The order just determines who gets to strike first; after that, every round is the same for everyone: you go, then <i>everyone else</i> gets a turn, then you go, etc.<p>Fourth, while players aren't <i>technically</i> locked into their initiative result (they can delay and take their turn later if they need to), in practice players don't (for various practical reasons). This restricts what sort of combos people can use; it might be more effective to let the Fighter rush forward and have the Cleric hold back to see if they need to drop some heals or just do cleanup, but if initiative puts the Cleric first, generally they'll just go first. This can get very frustrating!<p><a href="https://www.tribality.com/2014/12/19/dd-5e-combat-initiative/" title="">This article</a> presents a better version of initiative that both simplifies things <i>and</i> gives players more meaningful options. Their write-up didn't handle some corner cases well, tho, so I've reproduced and cleaned up the idea for my own purposes:<h2>Fast/Slow Rounds</h2><p>The core idea is that initiative is done away with. Instead, each round, players announce whether they'll be taking a "fast" or "slow" round.<p>First, all the players taking a fast round take their actions immediately, in whatever order they decide amongst themselves.<p>Second, the DM decides which monsters are taking the "fast" or "slow" round - fast monsters take their turn now, in whatever order the DM wants. (Typically, all the "mooks" will go in the fast round.)<p>Third, all the players who chose to take a "slow" round take their turns, in whatever order they wish. However, because they held back, examining the battlefield and waiting for an opportune moment, they can add advantage or disadvantage to a single roll anyone makes during their turn. 
(They can give themselves advantage on an attack roll, or give an enemy disadvantage on a saving throw, or provoke an Opportunity Attack and give the enemy disadvantage on their attack, etc.)<p>Fourth, the "slow" monsters take their turn, and also get to impose advantage or disadvantage to one roll during their turn. (Typically, the "significant" enemies will go here.)<p>Then the round is over, and the next round begins, with players once again choosing to go fast or slow.<p>That's it! (Except for some of the additional quirks, noted later in this post.)<h2>Benefits</h2><p>In practice this ends up having a <i>lot</i> of benefits over traditional initiative.<ol><li>Because there's no big "initiative list" setup at the beginning of the combat, you can jump straight into combat with no delay. Just ask the players who's going fast, and you're off to the races. This has a surprising psychological effect on players, maintaining the drama that was built up pre-combat very effectively!<li>Because the players can adjust when they take their action each round, they remain engaged thru more of the round, rather than just perking up on their turn and checking out a bit while they wait for everyone else to go. They plan out their actions along with the rest of the party, setting up combos and adjusting things for optimal safe ordering. You end up getting a lot more interesting teamwork out of people as a result!<li>It's so fast! Even on a round-by-round basis, this really does make combat move faster. Because the players are working together and going all at once, their plans don't collapse as much due to enemies taking actions between them (and players don't simply <i>forget</i> what they were going to do, which is a significant danger normally...). 
As such, players don't have to reassess the battlefield before each of their turns - they know exactly what's changed, since it <i>just happened and was part of the plan</i>.<li>No more (or at least, much less) forgetting about people! It's remarkably easy to occasionally skip people in the initiative when using it normally; if a non-active player asks a question, it's easy to slip back into the order as if they'd just gone. Since the players and enemies all go in just two large groups, tho, it's much simpler to track everyone - the players will remember themselves, and enemies become dramatically less fiddly to track.<li>Slow rounds are <i>amazing</i> for players who want to get off a big dramatic action with less chance of whiffing. Similarly, they're great for making your Big Bad actually threatening, rather than several rounds of "They swing, and... they miss. Again. Your turn."</ol><h2>Fiddly Details</h2><p>While the core rules above are trivial, there are a few additional details to cover.<p>First, several classes or feats give bonuses to initiative, which no longer do anything. (Alert feat gives +5, Revised Ranger gives advantage, etc.) While initiative bonuses aren't <i>actually</i> very significant, and thus it would probably be okay to just drop them, players don't like losing abilities even if they're minimal, and it's still a cool differentiator for a "fast" character.<p>As such, any ability that grants a "significant" initiative bonus (+2 or higher, more or less, but use your best judgement) is reinterpreted to let you get the slow-round bonus (adv or dis on one roll during your turn) during a fast round <i>once per long rest</i>. 
If you have multiple sources of bonuses, they stack to give you multiple uses of this ability.<p>The Bard's Jack of All Trades and the Champion's Remarkable Athlete don't count; their bonuses only range from +1 to +3 and aren't really "significant", plus most people don't realize they apply to Initiative in the first place (it's a Dex check!), so whatever.<p>There are some details to work out for spells that last X rounds (particularly those that are "one round") that I'm not sure about.

<h1>We Should Be Using Base 6 Instead</h1><p><i>Published 2018-12-18, updated 2019-06-19 (http://www.xanthir.com/b4y30)</i><p>Occasionally you might come across someone who believes that it would be better for us to count in a base other than 10. Usually people recommend base-12 ("dozenal"); compsci people sometimes recommend base-2 (binary) or base-16 (hexadecimal). My personal opinion is that all of these have significant downsides, not worth trading out base-10 for, but that there <i>is</i> a substantially better base we should be using: base 6.<p>Let's explore why.<p>(Warning, this post is long and definitely not edited well enough. Strap in.)<h2>Bases Are Arbitrary</h2><p>First of all, there's nothing special about base-10. Powers of 10 look nice and round to us <i>because</i> we use base-10, but we can use any other base and get just as round numbers. Base 6 has 10<sub>6</sub>, 100<sub>6</sub>, etc. (Those are 36<sub>10</sub>, 216<sub>10</sub>, etc; on the other hand, 10<sub>10</sub> and 100<sub>10</sub> are 14<sub>6</sub> and 244<sub>6</sub>. Converting between bases will usually produce awkward numbers no matter which base you start with.)<p>Why do we use base-10, then? The obvious answer is that we have 10 fingers. 
Counting off each finger gives us one "unit" of 10 things, and that unit-size carried over until we invented positional notation, where it froze into the base-10 we know today.<p>If we'd invented positional notation earlier, tho, then our hands could have supplied a better base - each hand can count off the values 0, 1, 2, 3, 4, and 5, which are exactly the digits of base-6. Two hands, then, let you track two base-6 digits, counting up to 55<sub>6</sub>, which is 35<sub>10</sub>!<h2>Bases Are Significant</h2><p>On the other hand, there <i>are</i> important qualities that <i>do</i> differ between bases.<p>The most obvious is the tradeoff of length vs mathematical complexity. Binary has <i>trivial</i> math - the addition and multiplication tables have only four entries each! - but it produces very long numbers - 100<sub>10</sub> is 1100100<sub>2</sub>, 7 digits long! On the other hand, using something like, say, base-60 would produce pretty short numbers - 1,000,000<sub>10</sub> is only four digits long in base-60 ([4, 37, 46, 40]) - but its multiplication table has <b>3600 entries</b> in it.<p>When evaluating the tradeoffs of long representations vs complex mental math, it's important to understand a little bit about how the brain actually works for math. In particular, we have a certain level of inherent ability in various domains - short-term memory, computation, etc. Overshooting that ability level is bad - it makes us slower to do mental math, and might require us to drop down to tool usage instead (writing the problem out on paper). 
But <i>undershooting</i> it is just as bad - our brain can't arbitrarily reallocate "processor cycles" like a computer can, so when we undershoot we're just wasting some of our brain's ability (and, due to the tradeoffs, forcing something else to get <i>harder</i>).<p>So, we know from experience that binary is bad on these tradeoffs - base-2 arithmetic is drastically undershooting our arithmetic abilities, while binary length quickly exceeds our short-term memory. Similarly, we know that base-60 (used by the Babylonians, way back when) is bad - it drastically overshoots our arithmetic abilities while not significantly reducing the length of numbers, at least in the domain of values we care about (in other words, less than a thousand or so). So there's a happy medium somewhere in the middle here, and conveniently the geometric mean of 2 and 60 is roughly base-11. Give it a healthy ±5 range, and we'll estimate that the "ideal" base is probably somewhere between base-6 and base-16.<p>But arithmetic complexity is actually more subtle than that. The addition tables, while technically scaling in size with the square of the base, scale in <i>difficulty</i> roughly linearly, since each row or column is just the digits in order, but starting from a different offset. It takes some memorization to recall how each offset works, but fundamentally the difficulty scales up slowly and simply, and you can do simple mental tricks to make addition easier anyway. (Such as adding 8+7 by moving 2 from the 7 to the 8, making it 10+5, a much simpler addition problem.)<p>Multiplication is more complex, tho. Some rows are easier to remember and use, others are more difficult: <ul><li>"easy" rows are either trivial (0 and 1) or are factors of the base (2 and 5 for base-10), so they only cycle thru a subset of the digits in ascending order - less to memorize! 
Easy rows end up being pretty trivial to do mental math with; you can really easily multiply or divide in your head by these numbers.<li>"medium" rows are either smallish numbers that share all their factors with the base but aren't divisors (like 4 in base-10), because they also use only a subset of the digits but cycle thru them in a more complicated manner; or are the last row (9 in base-10), because of the nice pattern that makes its complexity easier to handle; or are just small numbers in general (like 3 in base-10), because even tho they cycle thru all the digits they do so in ascending series that are easier to memorize. Medium rows tend to be harder in mental math; you often need to resort to paper-and-pencil, but they're at least easy to do at that point.<li>"hard" rows are the rest - larger numbers that have some (or all) of their factors different from the base (6, 7, and 8 in base-10), so they cycle thru all the digits in a complicated manner, or just thru a subset in a complex manner, plus you have to track the tens digit more carefully. Hard rows are just plain hard to compute with, even when you pull out paper-and-pencil. Rows that are coprime to the base, like 7 in base-10, are <i>maximally</i> difficult.</ul><p>So multiplication difficulty varies in a complicated manner between bases, and doesn't scale monotonically. Base-60, for example, while looking tremendously bad for arithmetic at a naive glance, has significant mitigating factors here - because 60 factors into 2×2×3×5, a <i>lot</i> of the rows in the multiplication table are "easy", particularly among the more "useful" small numbers. (It also has a lot of maximally-hard rows, of course - every prime row from 7 up except 59, which gets the last-row pattern, plus 49 - and even more merely "hard" rows.) 
We'll examine this in more detail in a bit.<p>Divisibility difficulty is very similar:<ul><li>"easy" divisibility are the trivial values (1 and 10 in base-10), and values whose factors are a subset of the base's (2 and 5 in base-10, as 10 factors into 2×5). You only have to look at the final digit of a number to tell if it's evenly divisible by one of these values, and memorize which final digits correspond to divisibility and which don't. (0-2-4-6-8 for 2, 0-5 for 5.)<li>"medium" divisibility are the values that factor into the same <i>primes</i> as the base, but which use at most one more of a given prime than the base does. That is, since 10 factors into 2×5, 4 (2×2), 25 (5×5), 20 (2×2×5), and 50 (2×5×5) all have either two 2s or two 5s. These only require you to look at the last <i>two</i> digits of a number. (While 100 is 2×2×5×5, it's also just a power of the base, which clicks it over into "trivial" territory again.) Also "medium" is, again, the last value less than the base (9 in base-10), because you can always tell divisibility by just adding up the digits and seeing if <i>that</i> value is divisible by your last-row value. (Yes, this works in any base - you can tell if a hexadecimal value is divisible by F (15) by adding together the digits and seeing if the result is still divisible by F.) If this final row value is composite, any numbers whose factors are a subset are also medium, because the same trick applies: 9 is 3×3, so in base-10, you can indeed test for 3-divisibility by adding the digits and seeing if the result is divisible by 3.<li>"hard" divisibility is all the rest. The rows either exceed the base's factor usage by two or more (like 8 in base-10), and thus require looking at the last three or more digits, or they use a factor that's not in the base at all (like 6 in base-10) and so require you to look at the whole number in a more complicated way. 
And again, rows which are fully coprime to the base (like 7 in base-10) are maximally hard, with no easy tricks or bounded recognition possible; you just have to do the division and see if there's any remainder.</ul><h2>So What's Actually Best?</h2><p>So, based purely on a naive length-vs-arithmetic-difficulty analysis, we've already concluded that the "ideal" base is likely between base-6 (heximal) and base-16 (hexadecimal). Now let's narrow that list down based on the more complex factors, above!<p>First off, we can cross off any odd base right off the bat. They lack easy mult/div by 2 (it becomes "medium" difficulty instead), which is a supremely important number to multiply and divide by. I don't think any other qualities could possibly make up for this loss even in theory, but as it turns out none of the odd numbers in that range are particularly useful anyway, so there's not even a question. They're gone.<p>So we're left with 6, 8, 10, 12, 14, and 16. Let's scratch off another easy one: 14 sucks. Its factors are 2 and 7, and 7 is the least useful small number. 14 has bad mult/div with all the other small numbers above 2. So it's gone too.<p>8 and 16 we can cover together, because they're both powers of 2. This makes them easy to use in computing, as you can just group binary digits together to form octal or hex digits, but it limits their usefulness in mental arithmetic - since 2 is their <i>only</i> prime factor, you don't get as many useful combinations of values to make mult/div easier. Plus, the "trick" that makes mult/div easier with the largest digit value, in these cases, applies to 7 and 15, which are again not particularly useful values. So, while these have some mitigating factors with computing, they're not really contenders. Gone.<p>So we're down to 6, 10, and 12. I'll break these down more specifically, because they're all starting to get useful and we need more details.<p>Base-10 has 10 rows in its multiplication table. 
0, 1, 2, and 5 are all "easy" - the patterns are trivial or at least very simple. 3, 4, and 9 are "medium" - the patterns are more complex, but not <i>too</i> hard to memorize and use intuitively. But 6, 7, and 8 are all "hard" - the patterns are hard to use, and the tens digit varies enough that it's an additional burden to memorization. (And 7 is "maximally hard".) So 40% easy, 30% medium, 30% hard.<p>Base-6 has 6 rows. 0, 1, 2, and 3 are all "easy", because 0 and 1 are trivial, and 2 and 3 divide 6 and thus are simple repeating patterns (2-4-0, 3-0). 4 and 5 are "medium"; 4 for the same reason as base-10, but moreso (pattern is just 4-2-0, or 4-12-20, a simple counting-down-by-evens pattern), and 5 is the last digit so has the same quality as 9 does in base-10 (5-4-3-2-1-0, or 5-14-23-32-41-50). It's got 66% easy, 33% medium, and no hard rows at all! On top of this, the whole times table is 1/3 the size, at only 36 entries vs 100; if you throw away the truly trivial x0 and x1 rows and columns, then it's a mere 1/4 the size, with 16 vs 64 entries! That's small enough to be simply memorizable regardless of patterns.<p>Now base-12, with 12 rows. 0, 1, 2, 3, 4, and 6 are all "easy", because 12 has lots of useful factors. 8, 9, and 11 are "medium", but 5, 7, and 10 are "hard" (and 5 and 7 are both "maximal"). This is a better distribution than base-10 (50% easy, 25% medium, 25% hard), but it's larger <i>in general</i> (12x12, so 144 entries vs 100) which makes it harder to memorize, so it's probably roughly equivalent to base-10 overall. That said, easy multiplication/division by 3, 4, and 6 is probably worth more in the real world than mult/div by 5, so I'm sympathetic to the claims of base-12 lovers.<p>And even tho I've already eliminated it, let's still examine base-16, which is real bad because its factors are less useful. 
0, 1, 2, 4, and 8 are easy, 3, C, and F are medium, but 5, 6, 7, 9, A, B, D, and E are all hard, for a 31% easy, 19% medium, 50% hard distribution. That's not only substantially worse than base-10, it's also <i>so much bigger</i> (256 entries) that its overall difficulty is also higher. (And to make it worse, <i>more than half</i> of the hard rows (5, 7, 9, B, and D) are maximally hard, as they're coprime to 16! That's so much worse!) Base-16 is useful as a more convenient way to read/write binary, but it's horrible as an actual base to do arithmetic in.<h2>Length of Numbers, and Digit "Breakpoints"</h2><p>As mentioned earlier, binary is a bad base for humans, because it produces very long representations. Humans have a "difficulty floor" for dealing with individual digits, so a long number full of very-simple digits doesn't actually trade off properly; each digit still costs at least that floor price, multiplied across a much longer representation, for a much higher total complexity.<p>In base-10, numbers up to a hundred use 2 digits, and numbers up to a thousand use 3 digits. Base-6 is fairly close to this: 2 digits gets you to 36, 3 to 216, and 4 to 1296. Since we don't generally work with numbers larger than 1000 in base-10 (after that we switch to grouping into thousands/millions/etc, so we're still working with 3 or less digits), you get the same range from base-6 by using, at most, 4 digits. That's only one extra digit; combine that with the vastly simpler mental math, and you're <i>at worst</i> hitting an equal complexity budget to base-10.<p>But there's more. You see, the 100/1000 breakpoints aren't chosen because they're particularly <i>useful</i>, they're just where base-10 happens to graduate up to the next digit. We group by thousands rather than by 10000 (in many languages, at least; traditional Chinese numbering does group by 10000) because 10000 is simply too large to usefully deal with.
That is, we <i>just can't think about 10000 things</i> very well.<p>But we can't really think about things up to 1000 well, either. Even 100 is a pretty big chunk of stuff, larger than we traditionally like working with. Left to our own devices, we seem to like things maxing out at approximately 30-ish - that's how many days are in a month, and how many students are traditionally in a large class (at least in America...). Guess what's approximately 30-ish? That's right, the 2-digit breakpoint for base-6, 36!<p>The 3-digit breakpoint for base-6 is 216, which is also a pretty reasonable number. It's about twice as large as 100, so any time 100 would be reasonable, 216 is probably also reasonable.<p>So, altho you need four base-6 digits to reach 1000<sub>10</sub>, I don't think that's a particularly useful goal to hit. 1000<sub>6</sub> being 216<sub>10</sub> is sufficiently useful that it's worth still batching our numbers into 3-digit groups, like today. <a href="https://en.wikipedia.org/wiki/Benford%27s_law" title="">Benford's Law</a> tells us that, even tho 216 is only about 20% of 1000, it will generally cover a <i>far higher</i> percentage of numbers in actual usage; in other words, most of the time we'll write our number with three or less major digits anyway, and won't even miss the lost range!<p>As an added bonus, dividing things into groups of 3 is actually <i>natural</i> in base-6, unlike in base-10!<h2>In Conclusion</h2><p>So, base 6 has more useful divisors, making it easy to divide by many small numbers. It's got a smaller (and thus easier to memorize/use) addition table, and a multiplication table that's not only substantially smaller than base-10's, but substantially <i>easier</i> in very significant ways, making mental arithmetic much simpler.
We can cover a similar range of numbers with just three digits, so it even looks similar to base-10 when the numbers get large enough to need scientific notation.<p>If you ever find a time machine, let me know so I can fix this. ^_^<h1><a href="http://www.xanthir.com/b4wJ1">Strings Shouldn't Be Iterable By Default</a></h1><p>2018-09-04T18:01:48+00:00<p>Most programming languages I use, particularly those that are more "dynamic", have made the same annoying mistake, which has a pretty high chance of causing bugs for very little benefit: they all make strings iterable by default.<p>By that I mean that you can use strings as the sequence value in a loop, like <code>for(let x of someString){...}</code>. This is a Mistake, for several reasons, and I don't think there's any excuse to perpetuate it in future languages, as <i>even in the cases where you intend to loop over a string</i>, this behavior is incorrect.<h2>Strings are Rarely Collections</h2><p>The first problem with strings being iterable by default is that, in your program's semantics, strings are rarely actually collections. Something being a collection means that the important part of it is that it's a sequence of individual things, each of which is important to your program. An array of user data, for example, is semantically a collection of user data.<p>Your average string, however, is <i>not</i> a "collection of single characters" in your program's semantics. It's very rare for a program to actually want to interact with the individual characters of a string as significant entities; instead, it's almost always a singular item, like an integer or a normal object.<p>The consequence of this is that it's very easy to accidentally write buggy code that nonetheless runs, just incorrectly.
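<p>Here's a minimal Python sketch of that kind of bug (the function and its name are invented for illustration):</p>

```python
def labels_to_upper(labels):
    """Meant to take a sequence of label strings and uppercase each one."""
    return [label.upper() for label in labels]

print(labels_to_upper(["foo", "bar"]))  # ['FOO', 'BAR'] - as intended
print(labels_to_upper("foo"))           # ['F', 'O', 'O'] - runs, but silently wrong
# labels_to_upper(1) at least fails loudly: TypeError: 'int' object is not iterable
```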
For example, you might have a function that's intended to take a sequence as one of its arguments, which it'll loop over; if the user accidentally passes a single integer, the function will throw an error since integers aren't iterable, but if the user accidentally passes a single string, the function will successfully loop over the characters of the string, likely not doing what was expected.<p>This commonly happens to me when initializing sets in Python. <code>set()</code> is supposed to take a sequence, which it'll consume, adding its elements to itself. If I need to initialize it with a single string, it's easy to accidentally type <code>set("foo")</code>, which then initializes the set to contain the strings "f" and "o", definitely not what I intended! Had I incorrectly initialized it with a number, like <code>set(1)</code>, it would immediately throw an informative error telling me that <code>1</code> isn't iterable, rather than just waiting for a later part of my program to work incorrectly because the set doesn't contain what I expect.<p>As a result, you often have to write code that defensively tests if an input is a string before looping over it. There's not even a useful <i>affirmative</i> test for looping appropriateness; testing <code>isinstance(arg, collections.abc.Sequence)</code> returns True for strings! This is, in almost all cases, the <i>only</i> sequence type that requires this sort of special handling; every single other object that implements Sequence is almost always <i>intended</i> to be treated as a sequence.<h2>There's No "Correct" Way to Iterate a String</h2><p>Another big issue is that there are <i>so many ways to divide up a string</i>, any of which might be correct in a given situation. You might want to divide it up by codepoints (like Python), grapheme clusters (like Swift), UTF-16 code units (like JS in some circumstances), UTF-8 bytes (Python bytestrings, if encoded in UTF-8), or more.
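<p>Even within Python you can get at several of these decompositions, and they genuinely disagree (the string below is just an example):</p>

```python
import unicodedata

s = "héllo"  # with a precomposed é (U+00E9)

print(list(s))                  # codepoints: ['h', 'é', 'l', 'l', 'o']
print(list(s.encode("utf-8")))  # UTF-8 bytes: [104, 195, 169, 108, 108, 111]

# Normalization changes the codepoint count without changing the text:
decomposed = unicodedata.normalize("NFD", s)  # é becomes e + combining accent
print(len(s), len(decomposed))  # 5 6
```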
For each of these, you might want to have the string normalized into one of the Unicode Normalization Forms first, too.<p>None of these choices are broadly "correct". (Well, UTF-16 code units is almost always <i>incorrect</i>, but that's legacy JS for you.) Each has its benefits depending on your situation. None of them are appropriate to select as a "default" iteration method; the author of the code should really select the correct method for their particular usage. (Strings are actually super complicated! People should think about them more!)<h2>Infinite Descent Shouldn't Be Thrown Around Casually</h2><p>A further problem is that strings are the only built-in sequence type that is, by default, <i>infinitely recursively iterable</i>. By that I mean, strings are iterable, yielding individual characters. But these individual characters are actually still strings, just length-1 strings, which are still iterable, yielding themselves again.<p>This means that if you try to write code that processes a generic nested data structure by iterating over the values and recursing when it finds more iterable items (not uncommon when dealing with JSON), if you don't specially handle strings you'll infinite-loop on them (or blow your stack). Again, this isn't something you need to worry about for <i>any</i> other builtin sequence type, nor for virtually any custom sequence you write; strings are pretty singular in this regard.<p>(And an obvious "fix" for this is worse than the original problem: Common Lisp says that strings are composed of <i>characters</i>, a totally different type, which doesn't implement the same methods and has to be handled specially. It's really annoying.)<h2>The Solution</h2><p>The fix for all this is easy: just make strings non-iterable by default. Instead, give them several methods that return iterators over them, like <code>.codepoints()</code> or what-have-you. 
(Similar to <code>.keys()/.values()/.items()</code> on dicts in Python.)<p>This avoids whole classes of bugs, as described in the first and third sections. It also forces authors, in the rare cases they actually do want to loop over a string, to affirmatively decide on how they want to iterate it.<p>So, uh, if you're planning on making a new programming language, maybe consider this?
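<p>As a rough Python sketch of what that API shape could look like (the class and method names here are invented, not from any real language):</p>

```python
class Text:
    """A string wrapper that is deliberately NOT iterable; callers must
    pick an explicit decomposition to loop over."""
    def __init__(self, s):
        self._s = s

    def codepoints(self):
        return iter(self._s)

    def utf8_bytes(self):
        return iter(self._s.encode("utf-8"))

    def graphemes(self):
        raise NotImplementedError  # would need a Unicode segmentation library

t = Text("café")
print(list(t.codepoints()))  # ['c', 'a', 'f', 'é'] - an explicit choice
# iter(t) raises TypeError: no accidental character-by-character loops,
# and no infinite descent when recursing through nested data.
```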