Main Page


Welcome to Packbat Wiki!

I started this site in 2012 as a place for me to write about things that interested me -- but if you're interested in contributing to or commenting on any of these pages, feel free to create an account. As a spam-restriction measure, you will initially be restricted to the Talk pages; post on mine to get fuller editing access. (Unfortunately, spam is growing to be a noticeable problem, and I've needed to be more liberal with the banhammer; let me know if I screw up via my gmail address -- my first name followed by the first four letters of my last name.)

-- Robin Zimmermann.

P.S. If you are having difficulties registering a username, this may be because you are browsing on .com instead of .net -- if so, either fixing the URL or (if you do not have write access to the URL field) clicking on the "Random page" link should correct this.


This is a bit of a new experiment: rather than adding new content directly to pages, I write bloglike posts that (besides being archived as blogging) get transferred to the appropriate Wiki pages. Here's hoping.

2015-11-21: Incompetence Doesn't Explain Competence

(content warning: discussion of tactics used by sexual predators.)

The classical formulation of Hanlon's razor is this:

Never attribute to malice that which is adequately explained by stupidity.

This is a fine aphorism, a classic aphorism. I think the word 'stupidity' is often more vague than helpful, but in this case it fits: it captures the idea of doing harm that one does not intend, without being specific as to how this unintended harm was allowed to occur. A lot of people seem to live by this aphorism, and that's usually a good thing.

I only say usually, though. Because there's a second word in the aphorism that is often more vague than helpful, and that word is 'adequately'. This word also fits ... but is too easily overlooked, or taken for a synonym of 'possibly'. And this problem is particularly dangerous when bigotry and privilege are involved - one of the defining characteristics of these phenomena is the extent to which the broader culture permits and excuses them.

The example that inspired this post was a post on the Real Social Skills blog about a pattern that predatory men tend to create in their social groups. Basically, the creep finds a group with naive men in charge, is friendly and charming to those men, and harasses the women ... but only when men aren't watching, or (when they are watching) only in ways that are deniable. The result of this is that, when a woman comes up to one of the male leaders of the group to report the problem, the man is confronted with a contradiction between his own (deluded) observations and the reports of the person in front of him that he needs to resolve.

And, too often, out comes Hanlon's razor, with two sharp slices:

  • "I'm surprised to hear that. Bob is a nice guy, even if he's a bit awkward sometimes, ..." [hypothesis: Bob's misbehavior is explained by his own poor social skills]
  • "... are you sure you're not misinterpreting what he did?" [hypothesis: the witness/victim's perception of misbehavior on Bob's part is explained away as a mistake - in other words, by the victim/witness's poor social skills]

It is to correct this that I propose my addendum: incompetence doesn't explain competence.

If your explanation for Bob's behavior is that he is not good at reading social cues, then that implies he is not good at reading social cues - and therefore he should be making all the other mistakes implied by not being good at reading social cues.

If your explanation for the witness/victim (call her Alice) seeing Bob as creepy is that she is bad at reading someone's intentions, then that implies she is bad at reading people's intentions - and therefore she should be mistaking intentions on other occasions, too.

There are occasions when either or both of these things are the case - Bobs who blurt things out without thinking all the time, Alices who assume some fact is widely known that is actually esoteric - but if you wish to use Hanlon's razor without cutting yourself (or worse, others), you need to recognize when you are assuming incompetency, and recognize when that assumption makes no sense.

(Which, I mention as a footnote, means paying attention to when Alice is perceptive. Contrary to what 99% of Western culture would have you believe, a lot of Alices are very perceptive. Listen to them when they tell you things.)

Note: the above post is transcluded from its own Wiki page. Please leave comments on its Talk page.

2015-10-19: Why One Isn't A Prime Number

(Answering simple questions is surprisingly difficult. This ... well, it's based on an answer that I gave that was satisfactory to a friend of mine. No guarantees of satisfaction to anyone else.)

Why isn't one a prime number?

The idea of prime numbers comes from the idea of factoring: that is, dividing numbers into smaller numbers. Twelve is two times six, so two and six are factors of twelve.

There are three observations that mathematicians made very quickly about factoring (I'm talking the-ancient-Greeks-knew quickly):

  1. Some numbers (e.g. two, three) can't be broken down into smaller pieces.
  2. If you are factoring a number and you get factors that can be broken down, you can break those factors down and repeat until you only have factors that can't be broken down. So, twelve is two times six, six can be broken down into two times three, so twelve is two times two times three (none of those can be broken down further).
  3. No matter what way you do it, if you break a number down all the way to factors that can't be broken down further, you always end up with the same list of factors. Twelve is also three times four, but after you break down four, you end up with three times two times two - same factors you got the other way.

The numbers in (1) are the prime numbers. The numbers you get via (2) are prime factorizations. (3), that these prime factorizations are always unique, is The Fundamental Theorem of Arithmetic - and you'd have to ask a mathematician why it deserves the capital letters, unfortunately, because I don't know.

But the reason why one isn't a prime number is that one is useless as a factor. If you divide by one, you don't get smaller pieces. There's no point in including it in the list of prime factors because it doesn't do anything. (Worse, counting it would break observation 3: twelve could be two times two times three, or one times two times two times three, or one times one times two times two times three - the same-list-of-factors rule only holds if one is left out.)
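Here, as an illustration, is roughly what the break-it-down-and-repeat process from observation 2 looks like in Python (a sketch of my own, nothing rigorous):

```python
def prime_factors(n):
    """Break n down into factors that can't be broken down further."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:  # d divides n: record it, then keep breaking down
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains can't be broken down further
        factors.append(n)
    return factors

print(prime_factors(12))  # → [2, 2, 3], whichever way you start
```

Notice that one never shows up in the output: dividing by one makes no progress, so the loop never records it - the "useless as a factor" point in code form.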

That's my best understanding why one isn't a prime number.

Note: the above post is transcluded from its own Wiki page. Please leave comments on its Talk page.

2015-09-15: Out sick

It may have been coming on already on the 4th, but it was definitely there by the 7th. It's been painful, tiring, and gross. Hence the no updates.

- Robin (talk) 20:48, 15 September 2015 (EDT)

2015-09-04: Learning Retrosheet

I have a project I want to do, and this project requires that I have detailed records of past baseball games in machine-readable format.

Fortunately, Retrosheet hosts exactly that.

Unfortunately, in order to make a program that can read data, I have to be able to read that data.

So, I have a plan. I will look up individual games in the Retrosheet archives and score them onto paper scorecards as I would if the events being described were occurring in a game I was watching. Ideally, I will do so for games that I or those I know have physically watched and scored for extra science-y deliciousness (although the April 9th 2015 NY Mets @ Washington Nationals game, for which I have the score within arm's reach, is inconveniently part of the 2015 season and not yet available), but the selection of games isn't as important as the process. By scoring these games, I intend to learn what the data actually looks like on a gut level, and thereby develop an understanding both of what is typically or atypically available in the Retrosheet database and how I can use these game records to my own ends.

Each game will be scored and the scoresheet scanned for posting here, with commentary. Suggestions welcome.

- Robin (talk) 14:21, 4 September 2015 (EDT)

Note: the above post is transcluded from its own Wiki page. Please leave comments on its Talk page.

2015-08-28: A tiny note about comments

I feel as if a bunch of things piled up very quickly at the end of this week, which is when I would normally have written a post for this blog. So, a tiny note.

Comments in code. For a while, I was working my way through Kernighan and Ritchie's The C Programming Language, but around the same time that I stopped updating here I stopped working on that. Anyway, a friend of mine wanted a coding buddy, so yesterday I opened up the file again.

And ... well, last year I festooned my code with comments. I had big massive block comments at the beginning explaining the problem, my understanding of the problem, and my general approach; I had comments breaking the code up into sections; I had comments on loops saying what happened in each loop, including what initialization happened before the loop and what happened during the loop; I had variable names chosen for clarity; I had #define constants with names chosen for clarity ... and within an hour I had not only figured out what was happening in the entire (tiny) program, but made real progress on the biggest uncompleted part of the code. Progress which, in no small part, consisted of writing a lot more comments (and revising a couple existing ones).

So that was pretty cool. Thank you, past-Robin, for being so diligent. Yay comments.

Note: the above post is transcluded from its own Wiki page. Please leave comments on its Talk page.

2015-08-21: Wheaton Regional Park

Note: I didn't feel like writing today, so I went out to take pictures instead.

Apparently, the pond in Wheaton Regional Park is officially called Pine Lake. Here are three pictures of it.

Wheaton Regional Park 2015-08-21 - Pine Lake 1.JPG

Wheaton Regional Park 2015-08-21 - Pine Lake 2.JPG

Wheaton Regional Park 2015-08-21 - Pine Lake 3.JPG

Note: the above is transcluded from its own Wiki page. Please leave comments on its Talk page.

2015-08-15: Fame re: transphobia

Crossposted from

If you're following this blog, you've probably heard about Ophelia Benson and the criticism directed at her for her transphobic remarks and behavior online. I first heard about the problem from Alex Gabriel's "Smoke, fire and recognising transphobia" blog post a month and a half ago (which I think is still a pretty good introduction to the whole situation), and if you're willing to dive into comments sections and links therefrom you can probably wiki-walk your way from there to understanding most of this conversation happening about and with her and about and with trans people. There's been a lot of intelligent things said, but as a whole it's pretty painful to read - Ophelia Benson and her supporters (including, depressingly, PZ Myers) have mostly been unwilling to listen to and learn from the people criticizing what she said, and what they’ve been doing instead is not pretty.

Which is kinda tied to what I'm thinking about in this essay: listening to people.

Anil Dash said on Twitter: "There's an odd thing about institutions designed as underdogs; they often have a deep need to never admit it when they’ve become powerful." Being powerful is a dangerous state to be enveloped in, particularly for public intellectuals, because two things happen:

  1. What you say publicly has a lot more impact, because more people hear you.
  2. What people say to you has a lot less impact, because more people are talking to (or rather, yelling at) you.

At a certain point (a point which Ophelia Benson has probably crossed, and PZ Myers definitely has), you no longer have enough time to give serious thought to everything strangers say to you. There are too many strangers. The only way you have to deal with it is to filter - to ignore most comments, to only superficially look at most of the others, and to restrict the vast majority of your intellectual efforts to only a few.

And this is almost always a good idea! I mean, I spent a fair while (an hour, at least) thinking about a comment someone made in reply to my quoting Mads Ananda Lodahl's TEDx talk about the straight world order (a great talk, worth listening to, cw: abuse, violence, gore) ... even though that comment literally ended with "i think it is time for women and feminists to look for things to change in their heads too." I mean, anyone who thinks feminists aren't looking for things to change in their own heads isn't paying attention. This comment was an almost complete waste of time to think about, and the only reason I could afford the luxury of thinking about it is that I'm a nobody on the Internet who deals with comments like these a few times a day on average, not a few times an hour or a few times a minute.

I only deal with trolls a few times a day, so I can afford a significant rate of false negatives in my troll detection in order to reduce my false positive rate.

And that's the thing: when it comes to social justice, a low false positive rate is super, super important. It's super important because when someone is angry, they might be angry because their preconceptions have been challenged, or they might be angry because what you said actually hurts them and/or people they care about. It's like how Mychal Denzel Smith said "feminism shouldn’t make men comfortable" - feminism isn't primarily an intellectual exercise, it is protecting women, even at one's own expense. If you care about not hurting people, someone who lets you know that you hurt them - however bluntly - has done you a favor: they have given you an opportunity to no longer hurt them and those like them.

Which brings me back to the first point: the same factor that makes it impossible for you to interact intelligently with everyone that yells at you - and, therefore, makes it harder for you to learn when you make a mistake - is amplifying the damage that you do when you do make a mistake.

Which brings me back to what Anil Dash said. You might have started this out as nothing more than a blogger, but you need to realize when you're not just that any more. You need to realize when the tools that helped you so much when you were small - wit, intelligence, perspicacity - and the tools that helped you get big - self-confidence, courage, and initiative - aren't enough any more. Someday, the jerk in the comments section calling you a bad name, the one whom you immediately ignore because they're the fiftieth person to call you a bad name this week, is going to be right.

And if you don't prepare for it somehow, when that happens, you're going to blow them off, blow off the sixteen other people who follow that jerk's reasoning and come to the same conclusion, and eventually end up yelling on your blog about a hate mob that seems determined to ruin your day, and you'll be wrong. Because they aren't there to ruin your day, and because your good intentions are meaningless when you've hurt someone.

Sometimes, jerks are right. Sometimes, nice people - you, your friends, your heroes - are wrong. Heck, sometimes those jerks aren’t actually jerks - you just misunderstood what they said. Sometimes you misunderstand things.

Famous Internet bloggers, you need to realize that and figure out how to do better. Because it bums me out knowing that you're not going to.

- Robin (talk), 29 July, 2015, xposted 11:40, 15 August, 2015 (EDT)

Note: the above post is transcluded from its own Wiki page. Please leave comments on its Talk page.

2015-08-07: Altruistic Matchmaking

(Oh dear - it has been a year, hasn't it? Anyway, have a blog post.)

In the context of a recent Tumblr post about altruism in rats, I found myself looking up information about the evolution of altruism. I quickly found an interesting introduction to this in the form of a public-good model of the evolution of altruism by Jeffrey A. Fletcher and Michael Doebeli, and in the midst of it:

We then have three genotypes in the population: ab; Ab; and aB. The first of these is a defector, while the other two are cooperators. Now assume that the costs and benefits in interactions among these phenotypes (i.e. cooperators and defectors) can be modelled as a two-player Prisoner's Dilemma game with pairwise interactions (which by definition implements strong altruism; Fletcher & Zwick 2007). As a check on whether genetic similarity is necessary for cooperation, an experimenter then imposes assortment between cooperators (and between defectors) in the following way: Ab always interacts pairwise with aB, and ab always interacts pairwise with other ab genotypes. In other words, the experimenter imposes the strongest possible assortment between cooperators, but in such a way that carriers of the altruistic allele A never interact with other carriers of A and carriers of the altruistic allele B never interact with other carriers of B. It is clear that in this situation, both alleles A and B will increase in frequency, and the genotype ab will go extinct, leaving a population consisting of cooperators only.

This patently artificial scenario, specifically tailored to produce the desired (altruistic) result, got me thinking immediately about a completely different class of artificial scenarios tailored to produce specific results: game design.

A while ago, someone made a video on YouTube - DayZ: An Anthropological Study - that talked about the frankly sociopathic norms that developed in that game. As someone who loves the idea of zombie apocalypse survival games but would rather not participate in a purely murderous culture, my reaction to that video was to wonder how one might design a game that didn't produce this effect - a game where the cooperation strategy was sufficiently rewarded that PvP killing would be the aberration and punished, rather than the norm.

Using the public-good model that Fletcher and Doebeli talk about to design matchmaking seems like a natural fit for this goal. Altruists will find themselves on servers with altruists. Predators will find themselves on servers with predators. Frankly speaking, this seems like a win-win - players such as myself would, sooner or later, find themselves in a place where they can call out to another survivor and work together, while players who thrive on the competition with each other would find worthier (and willing) opponents.

What remains, then, is the technical problem: how does the game distinguish cooperators from defectors?

Honestly, I'm not sure. But here's a sketch of my thoughts so far.

  • Every player has two ratings: a personal rating and an altruism rating.
  • As a player spends time in an area, that area's altruism rating acquires a larger and larger component of that player's altruism rating. This represents how that player's presence affects the people in the area.
  • How long any given player is expected to survive (or how well they are expected to thrive, if that's measurable) is predicted from a combination of their own personal rating and their environment's altruism rating.
  • A player surviving (or thriving) increases that player's personal rating and the altruism ratings of all players contributing to the area's altruism - quickly if the predicted survivability is low, slowly if it is high. Inversely, each player dying (or suffering) decreases that player's personal rating and the altruism ratings of all players contributing - greatly if these ratings are high, less so if they are low (or negative).
  • In addition to being matched based on altruism, players are assigned to deadlier or more benign worlds based on the combination of their personal and altruism ratings.

I am not tied to the area thing - it might be equally sensible to define strengths of interactions between players and reward or penalize based on that. The key is that the ratings do not care by what means a player affects survivability (or thriving), only that they do so.
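To make the rating updates concrete, here is a toy sketch (every name here, the logistic squashing function, and the update rate k are arbitrary choices of mine for illustration, not part of the design):

```python
import math

def predicted_survival(personal, area_altruism):
    """Squash the combined ratings into a survival probability.
    (A logistic link is one arbitrary choice among many.)"""
    return 1.0 / (1.0 + math.exp(-(personal + area_altruism)))

def update_on_outcome(player, area_contributors, survived, k=0.1):
    """Adjust ratings after a player survives (or dies).

    The update is proportional to how surprising the outcome was:
    surviving against low expectations raises ratings a lot; dying
    despite high expectations lowers them a lot.
    """
    area_altruism = sum(p['altruism'] for p in area_contributors) / len(area_contributors)
    expected = predicted_survival(player['personal'], area_altruism)
    surprise = (1.0 if survived else 0.0) - expected
    player['personal'] += k * surprise
    for p in area_contributors:  # everyone shaping the area shares credit/blame
        p['altruism'] += k * surprise
```

The key property survives translation: the mechanism never asks how a player affected someone's survival, only whether the outcomes around them beat or fell short of prediction.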

Some obvious notes:

  • A sacrifice play (e.g. risking death drawing away a mob of zombies) is only rewarded as altruistic if the sacrifice is worth it.
  • There's no real way for teams to game the altruism metric by putting one member in danger for the others to rescue. Cooperating to avoid getting in trouble in the first place will produce a better score.
  • The proper rating of mixed strategies (e.g. wolfpacks) might be tricky.
  • The proper rating of aggressive loners might be tricky. Their altruism ratings would tend to be fluid - rising while they were alone, and then abruptly tanking when someone attempted to join them. It might be better to include minimal or no aid to oneself in one's altruism rating.

- Robin (talk) 21:51, 7 August 2015 (EDT)

Note: the above post is transcluded from its own Wiki page. Please leave comments on its Talk page.

2014-08-15: ZhurnalyWiki Correl Oracle Redux?

First off, yes, I see the error messages. I will try to fix them soon.

The reason for this post, though, is that my dad, User:Zhurnaly, proprietor of ZhurnalyWiki, suggested a while ago that one possible C programming project would be a program to identify correlations between pages on his journal - something to take the place of his CorrelOracle 0.3 (which dates back to September 2001). And I realized more recently that this is a project which I might actually have enough C knowledge to get started on.

Some notes, therefore, before I get down to reading his code and making plans for my own:

  • First of all, the Perl source code of Zhurnaly's Correl Oracle 0.3 - and his commentary on that, and on Correl Oracle 2 and the original Correl Oracle - are on his Wiki for me to read and draw from. (Given that I am making this for him, using his intellectual property in my code causes exactly zero issues.)
  • Second, I can do the exact same thing he does and post my source code to Packbat Wiki. This will make it easy for other people (esp. Zhurnaly) to make edits and comments while preserving old versions. (Obviously I would back it up regularly to my own computer and to DropBox.)
  • Prior to looking at the Correl Oracles, my thought was to have a multiple-pass process along the following lines:
    1. Count words (and possibly phrases) on each page and on the entire Wiki. A lot of the fiddly-but-necessary stuff comes in at this point - filtering out extraneous information, recognizing formatting, &c. - but the key thing is to have, at the end of it, a list for each page and for the entire ZhurnalyWiki corpus of words appearing and how often they do.
    2. Use the corpus list to create new lists for each page with weighted values for each word. I was initially thinking of using something simple like weight = page_appearances/corpus_appearances, or possibly weight = (page_appearances/page_wordcount)/(corpus_appearances/corpus_wordcount), but see "likelihood-based", below.
    3. Compare the weighted list for each page to the weighted list for each other page. (This might be as simple as "do a dot product of the lists treated as vectors.") List the highest correlates.
  • I have been told (I have not looked at the Correl Oracles yet) that the current system works fast enough on the ~5000 ZhurnalyWiki pages just doing a linear scan. This is especially worth noting, given that...
  • Zhurnaly suggested that the correlation system might be used to generate relevance ratings for searches. This would be a major improvement over the current system, which returns all results sorted alphabetically by title.
    • A note, related to the above comment on linear scans: one of the reasons why it is tricky to do phrases in the correlation is that the number of possible phrases increases exponentially with the length of the phrase. In the case of a search term, all the phrase-generation is already done by the user.
    • Also, as Zhurnaly pointed out, weighing proximity of search terms would be helpful. There are many possible schemes for this - see "likelihood-based", below, for one thought I had.
    • I was also thinking that it would be valuable to examine how early in a page the terms appeared - again, see "likelihood-based", below. My first thought was some kind of paragraph-count weighting system, with the title having the highest weight and the importance dropping as the term appears later and later.
  • Another thought: a likelihood-based system has the potential to be very good. I am thinking of something like what DanielVarga did on to rate users by how good they were at picking Rationality Quotes to post. This same idea could be applied many different ways to the correlation task:
    Assigning weights to words
    Any word appearing on a page is assigned a weight based on the probability that it would appear at least that many times (or possibly exactly that many times? wait no that's dumb - Robin (talk) 17:06, 15 August 2014 (EDT)) in the number of words on the page if the page were generated by drawing randomly with replacement (or should it be without replacement?) from the entire corpus. How to turn the probability (which decreases as the word grows more significant) into a weight (which one would want to increase if one were using dot products) is a more difficult question. (Perhaps via the surprisal equation?) (Alternatively: perhaps look at the correlation of the probabilities or probability-weights, instead of using any kind of dot product?)
    Weighing the positions of words within the page
    Instead of recording only the number of appearances of a word in a page, one might also record how early in the page each word appears for the first time (e.g. by number of words preceding it). This can then be compared to the probability that the word would be drawn from the corpus word list as early or earlier (or exactly as early, if that works better why did I think that a measure that does not distinguish left tail from right would work better? - Robin (talk) 17:06, 15 August 2014 (EDT)).
    Weighing proximity of search terms
    Some metrics that can have probabilities calculated upon them include:
    • The position of first appearance of each word in the search query.
    • For each appearance of a given word in the query, the distance to the closest appearance of each other word. (This would be compared to the probability distribution.)
    • For each appearance of a given word in the query, the distance to every appearance of each other word.
    Measuring relatedness of pages
    Instead of simple correlations, a program could compare some length-adjusted measure of likelihood of the pages separately to the pages combined, and pick the pairs whose unlikeliness grows instead of shrinks.
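To illustrate the "at least that many times" probability and the surprisal idea from the notes above, a toy Python sketch (assuming drawing with replacement, so the count is binomial; all names are my own invention):

```python
import math

def tail_probability(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that a page of n
    words, drawn randomly with replacement from the corpus, contains
    the word at least k times, given corpus frequency p."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def surprisal_weight(k, n, p):
    """Turn a small probability into a large weight via -log P,
    so rarer-than-chance words dominate a dot product."""
    prob = tail_probability(k, n, p)
    return -math.log(max(prob, 1e-300))  # guard against underflow
```

So a word with corpus frequency 1% appearing five times on a 100-word page weighs far more than the same word appearing once, which is the behavior a dot-product comparison wants.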

In many ways, a lot of these thoughts are incredibly overambitious - I haven't even given consideration to how to create the associative array of page words (there must be some list of hash tables well suited to text corpus analysis, right?), or, for that matter, file I/O of any kind whatsoever - but I figure getting my thoughts down before diving in will probably help in this case.
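For concreteness, the three-pass plan fits in a few lines of a high-level language - here is a Python sketch of the simple page_appearances/corpus_appearances weighting and the dot-product comparison (illustrative only; the actual project would be in C):

```python
from collections import Counter

def word_counts(text):
    """Pass 1: an associative array from words to counts for one page."""
    return Counter(text.lower().split())

def weighted(page_counts, corpus_counts):
    """Pass 2: weight each word by page_appearances/corpus_appearances."""
    return {w: c / corpus_counts[w] for w, c in page_counts.items()}

def correlation(weights_a, weights_b):
    """Pass 3: dot product of the weight lists treated as vectors."""
    return sum(weights_a[w] * weights_b[w]
               for w in weights_a.keys() & weights_b.keys())
```

Real pages would of course need the fiddly-but-necessary filtering first (markup, formatting, &c.); here a Counter is the hash table doing the associative-array work.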

-- Robin (talk) 17:02, 15 August 2014 (EDT)


  • How easy would it be to generate candidate phrases to correlate over? It occurred to me that words like "the", "and", "this", "of", &c., while normally so common as to be meaningless, might be useful to identify word sequences like "The Art of Computer" that would appear as a block more often than the bag-of-words model would propose (even if the phrase engine doesn't notice that the next word - "Programming" - also belongs in the phrase). Is "phrase structure rules" the correct search term? Robin (talk) 22:00, 11 September 2014 (EDT)
  • Continuing on the subject of phrases: in a relevance-ranking-of-searches application, how do you assign weight to the appearance of the search term as a phrase relative to as isolated words? Perhaps use the bag-of-words as a prior? Robin (talk) 22:14, 11 September 2014 (EDT)
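A naive candidate-phrase generator, as an illustration of the bullets above (my own sketch; note that it deliberately keeps common words like "of" so that blocks like "The Art of Computer" can surface):

```python
def candidate_phrases(words, n=4):
    """Every contiguous word sequence of length 2..n is a candidate.

    The candidate count grows with the phrase-length cap, which is why
    this is cheap for a short search query but expensive for a corpus.
    """
    return [tuple(words[i:i + k])
            for k in range(2, n + 1)
            for i in range(len(words) - k + 1)]
```

A phrase would then be kept only if it appears as a block more often than the bag-of-words model predicts from its individual words.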

2014-07-18: Does "Top-Down Causation" Mean Anything?

So, m'dad texted me a link to an interview with a mathematician - George F. R. Ellis - selling the idea that we could solve a whole lot of philosophical and physical problems with a concept called "top-down causation", defined as "the process by which higher level organized systems, such as humans, interact with their own component parts".

My immediate reaction was - to quote my text message reply - "Ah, the old Idea [sic] of destructive reductionism strikes again. I think the only rational prescription is 172 pp of Daniel Dennett. :P" (172 pages is, naturally, the length of Elbow Room in the printing I have ready to hand.) I thought that was a pretty witty retort (I like the punnishness of substituting "pp" for "cc" in the medical-prescription template) but I said it because I thought he was saying that higher-level organized systems, like humans, have the power to act independently of lower-level systems. I thought he was proposing libertarian free will: will freed from the constraints of the laws of nature. But I've read the full interview, now, and I've read his essay describing "top-down causation" in detail, and I no longer think that's what he said.

I'm not sure, though. To be sure, I would have to be sure that I understood what he meant - but I can't find anything in what he says that supports the conclusions he draws.

Let me lay out the two halves of the picture.

What Ellis Gets From "Top-Down Causation"

From the interview:

What exactly is top-down causation?

A key question for science is whether all causation is from the bottom up only. If forces between particles are the only kind of physical causation, then chemistry, biology, and even our minds are emergent, bottom-up properties of physics. On the other hand, it might be that these emergent higher level structures, such as cells, neurons, and the brain, have causal powers in their own right.

In the first instance, all the higher levels are epiphenomena – they have no real existence – and so the idea that you are responsible for your actions is false. But in fact top-down causation takes place all the time, with the higher levels controlling the lower levels, not by any magic force, but by setting constraints on lower level interactions. This means that higher levels such as cells, neurons, and your brain have real causal powers, and this means you can indeed be held accountable for your actions.

In other words, Ellis is proposing that top-down causation is (a) not the forces-between-particles kind of causation that physics deals with, and (b) demonstrates that the forces-between-particles thing doesn't get in the way of believing that we are responsible for what we do.

Now, I've heard forces-between-particles arguments before. My dad sent me the link because I was talking about how I understand the basics of the free-will dialectic, which is full of arguments like this. So, when Ellis alludes to the idea of forces-between-particles destroying human responsibility for behavior, he alludes to arguments along the lines of the following:

  1. If I am responsible for an event, then I caused that event to occur when some other event might have occurred instead.
  2. If forces outside my control caused one event to occur rather than another, then I did not cause that event to occur rather than the other.
  3. The future of the universe relative to any point in time is fully determined by its state at that time and by the operation of its fundamental, mechanical laws of nature (e.g. forces between particles) on that state. In other words, a complete, particle-by-particle description of the entire universe and a complete mathematical description of its natural laws - laws that include no persons, minds, motives, or the like - describes all there is to know about what will happen: no event shall occur except as described by those laws.
  4. If no event can occur except as described by the operation of natural laws on physical universe-states, then every event must be caused by those laws and initial states - by physics.
  5. The natural laws and initial state of the universe - and therefore physics - are outside my control.
  6. Because physics - which is out of my control - causes every event, I cause none of them.
  7. Because I cause no events, I am responsible for nothing that occurs.

...hence my initial reaction that - contrary to his express statement - he must be proposing a magic force acting from the top on these physics below: he explicitly rejects the conclusion of step 4 of the syllogism, and to say that "higher level structures [...] have causal powers in their own right" sounds a whole lot like a denial of step 3 (which would do the job). This wouldn't even be a surprising thing: such theories are called "libertarian free will" by philosophers because lots of people propose them, and they share a lot of properties.


What Ellis Describes as Examples of "Top-Down Causation"

But what he describes in his paper doesn't look like free-will libertarianism at all.

More generally, top-down causation happens wherever boundary conditions and initial conditions determine the results. [...] In the context of astronomy/astrophysics, there is a growing literature on contextual effects such as suppression of star formation by powerful active galactic nuclei [7]. This is a top-down effect from galactic scale to stellar scale and thence to the scale of nuclear reactions. Such effects are often characterised as feedback effects.

(if you're curious: [7] M. J. Page et al. (2012) "The suppression of star formation by active galactic nuclei". Nature 485:213-216.)

What is Ellis saying, here? As far as I can tell, just this: sometimes, when we are creating a model of some low-level phenomenon (e.g. the physics of star formation, or the operations being executed in a CPU, or a signal transmitted along a fiber-optic cable), we describe some of the variables in the model in terms of high-level phenomena (e.g. the abundance and proximity of galactic nuclei in the region, the program running on the computer, the network connection being used). If we do not draw from the descriptions of the high-level phenomena, we cannot create an accurate model of the low-level phenomenon. Therefore, high-level phenomena exist.

I've gone through his paper time and time again, and I can't find any definition other than this.

The Disconnect

Nobody says that, because a chair is made of atoms, it is not a chair. The fallacy of division (the idea I meant by "destructive reductionism" in my text, even if the term I mangled - "greedy reductionism" - means something slightly different) is far too obvious for anyone to overlook when such statements are made.

Further, nobody says that, once a chair exists, it is not the proximal cause of events. If the chair is wedged under a doorknob to keep out intruders, it is hard to find anyone who would dispute that the chair is keeping the door shut.

Rather, what some people say is that the chair is not morally responsible - that is, worthy of blame or praise - for keeping the door shut. The chair is not the ultimate cause of the door being jammed shut because we can trace a direct causal path back to whoever put it there ... and the fear that people are trying to relieve with their free-will arguments is that the causal line keeps going - it extends backwards through time to pass out of ourselves into our life experiences, our educations, our upbringing, and our genetics - and carries all our moral responsibility with it, leaving us as pathetic robots, thralls of the forces that made us.

In fact, if Ellis is saying what he seems to be, his entire argument can be summed up in a single sentence from Robert Nozick's Philosophical Explanations:

No one has ever announced that because determinism is true thermostats do not control temperature.

...and its insufficiency to its stated purpose can be summed up in the single sentence I got in response when I posted the quote to a Less Wrong quotes thread:

AllanCrossman: But what thermostats don't control is... what the thermostat is set to.

Which leads me back to what I said before, for completely different reasons. I said that Ellis needed to read Daniel Dennett - specifically, Elbow Room - and that Nozick quote is in fact from Elbow Room: Dennett uses it to introduce Chapter 3, on page 50 of the 172 that I prescribed ... and goes on for many pages more, describing ways in which outside causes can thwart the top-down control of the agent and ways in which they couldn't. He does this because he knows that the existence of high-level agents is not only obvious, but also insufficient to justify believing in the autonomy of those agents.

Perhaps I am misreading Ellis's analysis. Perhaps he is saying something more subtle or complex, something which addresses more than just the superficial fact that the universe contains things larger than subatomic particles (or things made of abstractions rather than particles, like novels), and sometimes we can make sense of the universe by talking about those things. And I would be glad to discover that I am mistaken about Ellis's thesis, because if I'm not, he is shoveling great heaps of abstruse argumentation into the inboxes of his readers for no profit whatsoever.

- Robin (talk) 01:38, 19 July 2014 (EDT)

2014-03-12: Time travel game mechanic idea

(This started life as an Idea Pad bullet, but then I realized just how much I was writing and decided to do the right thing and give it its own space.)

A lot of games have played with time-travel mechanics, but I don't believe any have attempted to recreate the most classic of time-travel systems: time-travel that creates a single, self-consistent timeline. The tricky part of doing this in a game is that, if a character jumps back to a time when that character already exists, we have a span of time during which they exist (at least) twice simultaneously. Each copy should be able to act independently, but the player would struggle to control multiple characters at once.

Resolving this issue by making a 'tape' of what the character does is problematic, though, because one function of time travel is to change what the character could have done. Imagine I need to get through a locked door, and quickly; this seems like it should be very easy:

Time | Past-Me                     | Future-Me
0    | Waiting outside.            | Arriving from 3, inside.
1    | Watching the door open.     | Opening the door.
2    | Entering through open door. | Continuing on my merry way.
3    | Jumping back to 0, inside.  |

Unfortunately, from a game perspective, there's a paradox here: how can the game know when there will be some future version of me arriving, and what I will do when I get there? And if the game doesn't know there will be a future-me opening the door, then past-me will see a locked door ... and so, even if future-me opens the door, the game won't know what to do with past-me after that changes.

...unless there is some way for me to tell the game what past-me will do.

My idea, then, would rewrite the locked-door bypass to look more like this.

Time | Past-Me #1 (P1) | Future-Me #1 (F1) | Past-Me #2 (P2) | Future-Me #2 (F2)
0 | Priming the time-machine. | Arriving inside from 3. | Restarting in the F1 timeline from 2. | Arriving inside from 2.
1 | Hunting for a way into the house. | Opening the door. | Waiting for the door to open. | Opening the door.
2 | | Clearing the doorway and reverting to take over the past-self's actions in the new timeline. | Entering and jumping back in time to 0. | Continuing on my merry way.
3 | Getting inside and, from inside, jumping back to 0. | | |

This is theoretically something a game could implement. If the game records what I do as P1, F1, and P2, it can play those actions back to simulate what happens when I play as F1, P2, and F2, respectively - and if it records what I see (or, if I want to be especially cruel, hear), it can tell if the new timeline will create a paradox for the old one.

I'm not sure yet how to deal with paradoxes, to be honest - it'd have to be something that prevents the player from simply ignoring them - but I think there's a lot of potential in this as a puzzle mechanic. And there's an obvious way to incorporate the tutorial into the system: the first thing the player character does in the game is equip, power up, and activate the experimental time-travel device for its next round of testing - and something bad happens that the character wants to prevent during this testing.

- Robin (talk) 17:43, 12 March 2014 (EDT)

2014-01-16: An Ignoramus's Quest for GCC, Part Two

Continued from Part One

Having learned that I could not simply install GCC (one does not simply walk into Mordor), I proceeded to the aforementioned GCC binaries page with the optimistic belief that I would find what past experience told me to expect from binary pages:

  • An executable file approximately identical to what one would obtain by compiling the source code on my own computer, or
  • An executable file that acts as an installer, producing a folder full of files approximately identical to the folder full of files that I would produce by following the procedure to compile the source code on my own computer.

...as opposed to a pageful of links, each of which was -- with but a handful of exceptions:

  • An independent project which, for motives probably similar to my own upon setting forth on this quest, contains GCC somewhere inside its bundle of tools.

...and each of which was -- with but a handful of exceptions:

  • Built to run on one or several flavors of UNIX.

Nevertheless, I forged onwards, quickly narrowing my search down to the two "Microsoft Windows" options listed.

I opened each in a separate tab and started to read.

(There was one other option that I might have considered -- DJGPP, the DOS version -- but not only did I immediately eliminate this from consideration on the grounds that Windows is no longer DOS, but my immediate elimination of it from consideration was wise: running it on a Windows NT system can lead to ... issues.)

(Yes, Windows 7 is Windows NT.) (I think.) (It doesn't actually matter.)

(To be continued.)

2014-01-13: Short Status Update

...hi. Been a while. Let me tell you what's up.

  • Something I haven't mentioned on the Web (at least, not to my recollection) is that I'm in therapy for anxiety. As Boggle the Owl will tell you, anxiety is no joke -- but I'm dealing, and getting better.
  • Over the past month or so, I've been trying to maintain a regular schedule of self-improvement, with partial success. (It is quite fortunate that among my successes to date is realizing that I do not need to achieve perfect success, although I hope to improve the fraction a bit.) (Possibly by decreasing the denominator.) (But partial success is definitely a part of the aforementioned "getting better".)
  • The Bayesian Clocks thing is definitely on hiatus, but I intend to return to it. (Having a running copy of Matlab on my computer will probably be an aid to this -- graphs are fun and pretty, and numerical solution is often simpler than analytic.)
  • The blog ... well, I hope to post a few things here, but it is not a priority. I do hope to get a little bit of joy out of it by making a record of one of my self-improvement projects: learning C. Well, more specifically...

2014-01-13: An Ignoramus's Quest for GCC, Part One

(Yes, this is posted out of order. Relax, I know what I'm doing.)

Over the weekend, talking with a friend from the Less Wrong group in my area -- LessWrong DC -- I decided to begin studying the C programming language by studying The C Programming Language (by Kernighan and Ritchie -- a.k.a. K&R -- second edition). As an aid to doing so, I decided to install the GNU Compiler Collection (GCC) on my Windows 7 laptop, for two reasons.

  1. GCC is GNU free software -- so if I find myself engaged in a serious C-programming project, I will be able to do so without worrying about acquiring new software licenses.
  2. GCC is, according to what I've heard, the gold standard of C compilers -- so if I find myself wanting or needing to use some other, I can expect at the very minimum to find users of the new compiler who are already familiar with the differences.

Proceeding on this noble basis, I typed "GCC" into a search engine, opened the webpage...

...and ran into the first problem that faces the uninitiated when they set out to use free software: that free software people are living in their own private universe. Indeed, they seem especially prone to the high-percentile-rank variation of the Dunning-Kruger effect: their idea of a rank beginner -- of someone whose knowledge is as close to nonexistent as can be found among those with whom communication is possible -- is an individual who can already compile C source code to run on their computer.

Meaning, in this case, that the installation instructions on the GCC website include, as the very first prerequisite to installation, possession of an "ISO C++98 compiler" -- one capable of configuring and building the raw GCC source code that is the product which GNU provides. Although, to their credit, they are sufficiently self-aware as to remark that:

Important: because these are source releases, they will be of little use to you if you do not already have a C compiler on your machine. If you don't already have a compiler, you need pre-compiled binaries. Our binaries page has references to pre-compiled binaries for various platforms.

...leaving me with my choice of two yaks to shave:

  1. Choose a binary distribution and use that instead; or
  2. Find a C++ compiler somewhere and build GCC.

Given that the latter appeared likely to have significantly more fur than the former, I decided to check out the provided binaries page and see what I could find.

Continued in Part Two.

Blog Archives

In the interest of keeping the front page navigable, I am (manually) shifting entries to archive pages over time. Blog:Archive Index will connect you to that page.

At the moment, the most recent archive page is Blog:Archive 2013 (part 1).

Wiki Links

The following are pages on the Wiki which (should) connect you with most of the interesting non-blog content that I have or have promised.

Recent Additions

...the definition of "recent" is a slippery thing; expect content here when I figure out what it should mean to me. That said:

  • Henderson Duct Log - because, as Robert Pirsig said in Zen and the Art of Motorcycle Maintenance, when a problem turns out to be not easy, it's a good idea to do Science to it ... and science requires data.
  • Essays - because it seemed like a good category for things. (Categories ... now there's an idea.)

Starting Points

  • Packbat's Recreations
    • Wherein the site owner blathers about some of the pointless stuff he does.
  • Blog:Archive Index
    • An index to all the old blog entries that said site owner plans to have written in the future.
  • Packbat's Reading
  • Dispatches
    • Wherein reports are made on places and events.
  • Idea Pad
    • Wherein ideas are thrown at a wall to see what sticks.
  • Essays
    • Some ideas aren't short enough to fit on one line of one page.
  • Quotes
    • Wherein memorable phrasings are not-particularly-im-mortalized.
  • Pro Tips
    • Wherein useful suggestions of little to no import are provided.
  • Link:Art, Link:Webcomics, Link:Web Fiction
    • Wherein URLs that, at one point, directed browsers to interesting visual works, visual sequential works, and textual sequential works accumulate.
  • Scales
    • Wherein criteria are proposed for the measurement of arbitrary qualities.
  • Pictures
    • Sometimes the aforementioned site owner takes pictures, or draws pictures, or otherwise causes images to form on computer monitors. Some of those pictures end up here.
  • Sandbox
    • Wherein experiments with Wiki formatting are conducted and randomly deleted.

Serial Works

  • Roll to Dodge - Zombie Apocalypse
    • "Roll to Dodge" is a simplified kind of forum roleplaying game in which the outcome of each action is determined by rolling a single d6. theplague42 and Packbat were moderating this game starting in January 2011 (until I vanished off the face of the earth) - the above is the beginning of a copy of this archive, condensed to include just the player actions, die rolls, and resolutions.