Main Page

Welcome to Packbat Wiki!

I started this site in 2012 as a place for me to write about things that interested me -- but if you're interested in contributing to or commenting on any of these pages, feel free to create an account. As a spam-restriction measure, you will initially be restricted to the Talk pages; post on mine to get fuller editing access. (Unfortunately, spam is becoming a noticeable problem, and I've needed to be more liberal with the banhammer; let me know if I screw up via my gmail address -- my first name followed by the first four letters of my last name.)

-- Robin Zimmermann.

P.S. If you are having difficulties registering a username, this may be because you are browsing on .com instead of .net -- if so, either fixing the URL or (if you do not have write access to the URL field) clicking on the "Random page" link should correct this.

Blog

This is a bit of a new experiment: rather than adding new content directly to pages, I write bloglike posts that (besides being archived as blogging) get transferred to the appropriate Wiki pages. Here's hoping.

2014-08-15: ZhurnalyWiki Correl Oracle Redux?

First off, yes, I see the error messages. I will try to fix them soon.

The reason for this post, though, is that my dad, User:Zhurnaly, proprietor of ZhurnalyWiki, suggested a while ago that one possible C programming project would be a program to identify correlations between pages on his journal - something to take the place of his Correl Oracle 0.3 (which dates back to September 2001). And I realized more recently that this is a project I might actually have enough C knowledge to get started on.

Some notes, therefore, before I get down to reading his code and making plans for my own:

  • First of all, the Perl source code of Zhurnaly's Correl Oracle 0.3 - and his commentary on that, and on Correl Oracle 2 and the original Correl Oracle - are on his Wiki for me to read and draw from. (Given that I am making this for him, using his intellectual property in my code causes exactly zero issues.)
  • Second, I can do the exact same thing he does and post my source code to Packbat Wiki. This will make it easy for other people (esp. Zhurnaly) to make edits and comments while preserving old versions. (Obviously I would back it up regularly to my own computer and to DropBox.)
  • Prior to looking at the Correl Oracles, my thought was to have a multiple-pass process along the following lines:
    1. Count words (and possibly phrases) on each page and on the entire Wiki. A lot of the fiddly-but-necessary stuff comes in at this point - filtering out extraneous information, recognizing formatting, &c. - but the key thing is to have, at the end of it, a list for each page and for the entire ZhurnalyWiki corpus of words appearing and how often they do.
    2. Use the corpus list to create new lists for each page with weighted values for each word. I was initially thinking of using something simple like weight = page_appearances/corpus_appearances, or possibly weight = (page_appearances/page_wordcount)/(corpus_appearances/corpus_wordcount), but see "likelihood-based", below.
    3. Compare the weighted list for each page to the weighted list for each other page. (This might be as simple as "do a dot product of the lists treated as vectors.") List the highest correlates. (There are a couple of C sketches of steps 2 and 3 after this list.)
  • I have been told (I have not looked at the Correl Oracles yet) that the current system works fast enough on the ~5000 ZhurnalyWiki pages just doing a linear scan. This is worth noting, especially given that...
  • Zhurnaly suggested that the correlation system might be used to generate relevance ratings for searches. This would be a major improvement over the current system, which returns all results sorted alphabetically by title.
    • A note, related to the above comment on linear scans: one of the reasons why it is tricky to do phrases in the correlation is that the number of possible phrases increases exponentially with the length of the phrase. In the case of a search term, all the phrase-generation is already done by the user.
    • Also, as Zhurnaly pointed out, weighing proximity of search terms would be helpful. There are many possible schemes for this - see "likelihood-based", below, for one thought I had.
    • I was also thinking that it would be valuable to examine how early in a page the terms appeared - again, see "likelihood-based", below. My first thought was some kind of paragraph-count weighting system, with the title having the highest weight and the importance dropping as the term appears later and later.
  • Another thought: a likelihood-based system has the potential to be very good. I am thinking of something like what DanielVarga did on lesswrong.com to rate users by how good they were at picking Rationality Quotes to post. This same idea could be applied many different ways to the correlation task:
    Assigning weights to words
    Any word appearing on a page is assigned a weight based on the probability that it would appear at least that many times (or possibly exactly that many times? wait no that's dumb - Robin (talk) 17:06, 15 August 2014 (EDT)) in the number of words on the page if the page were generated by drawing randomly with replacement (or should it be without replacement?) from the entire corpus. How to turn the probability (which decreases as the word grows more significant) into a weight (which one would want to increase if one were using dot products) is a more difficult question. (Perhaps via the surprisal equation?) (Alternatively: perhaps look at the correlation of the probabilities or probability-weights, instead of using any kind of dot product?) (The second sketch after this list works through the at-least-that-many-times probability.)
    Weighing the positions of words within the page
    Instead of recording only the number of appearances of a word in a page, one might also record how early in the page each word appears for the first time (e.g. by number of words preceding it). This can then be compared to the probability that the word would be drawn from the corpus word list as early or earlier (or exactly as early, if that works better -- why did I think that a measure that does not distinguish left tail from right would work better? - Robin (talk) 17:06, 15 August 2014 (EDT)).
    Weighing proximity of search terms
    Some metrics that can have probabilities calculated upon them include:
    • The position of first appearance of each word in the search query.
    • For each appearance of a given word in the query, the distance to the closest appearance of each other word. (This would be compared to the probability distribution.)
    • For each appearance of a given word in the query, the distance to every appearance of each other word.
    Measuring relatedness of pages
    Instead of simple correlations, a program could compare some length-adjusted measure of the likelihood of each page separately against that of the pages combined, and pick the pairs whose unlikeliness grows instead of shrinks.
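
To make steps 2 and 3 concrete, here is a minimal C sketch -- not taken from any Correl Oracle code, with all counts, page lengths, and names made up for illustration -- that feeds the simple frequency-ratio weight from step 2 into a length-normalized dot product (i.e. cosine similarity):

 #include <math.h>
 #include <stdio.h>
 
 #define VOCAB 5  /* toy vocabulary, just for the example */
 
 /* The simple weighting from step 2:
  * (page frequency of the word) / (corpus frequency of the word).
  * Words a page uses more heavily than the wiki at large score above 1. */
 static double weight(long page_count, long page_total,
                      long corpus_count, long corpus_total)
 {
     if (page_count == 0 || corpus_count == 0)
         return 0.0;
     return ((double)page_count / page_total)
          / ((double)corpus_count / corpus_total);
 }
 
 /* Step 3: dot product of the weighted lists treated as vectors,
  * normalized by vector length (cosine similarity) so that long
  * pages don't swamp short ones. */
 static double similarity(const long a[], long a_total,
                          const long b[], long b_total,
                          const long corpus[], long corpus_total)
 {
     double dot = 0.0, na = 0.0, nb = 0.0;
     for (int w = 0; w < VOCAB; w++) {
         double wa = weight(a[w], a_total, corpus[w], corpus_total);
         double wb = weight(b[w], b_total, corpus[w], corpus_total);
         dot += wa * wb;
         na  += wa * wa;
         nb  += wb * wb;
     }
     return (na > 0.0 && nb > 0.0) ? dot / (sqrt(na) * sqrt(nb)) : 0.0;
 }
 
 int main(void)
 {
     /* made-up counts for: "the", "wiki", "correl", "oracle", "bicycle" */
     long corpus[VOCAB] = { 9000, 400, 12, 12, 30 };
     long corpus_total  = 100000;
     long page_a[VOCAB] = { 40, 3, 2, 2, 0 };  /* a Correl Oracle page */
     long page_b[VOCAB] = { 35, 2, 1, 1, 0 };  /* another one          */
     long page_c[VOCAB] = { 50, 0, 0, 0, 4 };  /* a cycling page       */
 
     printf("A~B: %.3f\n",
            similarity(page_a, 600, page_b, 500, corpus, corpus_total));
     printf("A~C: %.3f\n",
            similarity(page_a, 600, page_c, 700, corpus, corpus_total));
     return 0;
 }

(Compile with -lm. The two pages sharing the rare words "correl" and "oracle" should score near 1, while the cycling page, which shares only "the", should score near 0 -- which is the behavior we want the weighting to produce.)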
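
And to make the likelihood-based idea concrete: the appears-at-least-that-many-times probability from "Assigning weights to words" is a binomial tail, which is easy to compute in log space so the factorials don't overflow. Another made-up-numbers sketch, using only standard C99 math.h functions:

 #include <math.h>
 #include <stdio.h>
 
 /* P(X >= k) for X ~ Binomial(n, p): the probability that a word with
  * corpus frequency p appears at least k times on an n-word page built
  * by drawing randomly (with replacement) from the corpus. */
 static double binom_tail(int k, int n, double p)
 {
     double total = 0.0;
     for (int i = k; i <= n; i++) {
         double log_term = lgamma(n + 1.0) - lgamma(i + 1.0)
                         - lgamma(n - i + 1.0)
                         + i * log(p) + (n - i) * log1p(-p);
         total += exp(log_term);
     }
     return total;
 }
 
 int main(void)
 {
     /* a word that is 0.012% of the corpus shows up twice on a
      * 300-word page -- how surprising is that? */
     double prob = binom_tail(2, 300, 0.00012);
     printf("P(>= 2 appearances) = %g\n", prob);
     /* one candidate answer to the weight question: surprisal */
     printf("weight = %g bits\n", -log2(prob));
     return 0;
 }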

In many ways, a lot of these thoughts are incredibly overambitious - I haven't even given consideration to how to create the associative array of page words (there must be some list of hash tables well suited to text corpus analysis, right?), or, for that matter, file I/O of any kind whatsoever - but I figure getting my thoughts down before diving in will probably help in this case.
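
For what it's worth, the associative array itself needn't be fancy to start with: a bare-bones chaining hash table for word counts might look like the sketch below, where every choice -- the bucket count, the hash function, the absent error handling -- is a placeholder rather than a researched answer.

 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 
 #define NBUCKETS 4096
 
 struct wordcount {
     char *word;
     long count;
     struct wordcount *next;  /* chain of entries whose hashes collide */
 };
 
 static struct wordcount *buckets[NBUCKETS];
 
 /* djb2 string hash: simple, classic, good enough for a first pass */
 static unsigned long hash_word(const char *s)
 {
     unsigned long h = 5381;
     while (*s)
         h = h * 33 + (unsigned char)*s++;
     return h % NBUCKETS;
 }
 
 /* record one occurrence of a word, creating its entry if needed */
 static void count_word(const char *word)
 {
     unsigned long h = hash_word(word);
     for (struct wordcount *wc = buckets[h]; wc; wc = wc->next) {
         if (strcmp(wc->word, word) == 0) {
             wc->count++;
             return;
         }
     }
     struct wordcount *wc = malloc(sizeof *wc);
     wc->word = malloc(strlen(word) + 1);
     strcpy(wc->word, word);
     wc->count = 1;
     wc->next = buckets[h];
     buckets[h] = wc;
 }
 
 int main(void)
 {
     const char *words[] = { "correl", "oracle", "correl", "wiki" };
     for (int i = 0; i < 4; i++)
         count_word(words[i]);
     for (int b = 0; b < NBUCKETS; b++)
         for (struct wordcount *wc = buckets[b]; wc; wc = wc->next)
             printf("%s: %ld\n", wc->word, wc->count);
     return 0;
 }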

-- Robin (talk) 17:02, 15 August 2014 (EDT)

2014-07-18: Does "Top-Down Causation" Mean Anything?

So, m'dad texted me a link to an interview with a mathematician - George F. R. Ellis - selling the idea that we could solve a whole lot of philosophical and physical problems with a concept called "top-down causation", defined as "the process by which higher level organized systems, such as humans, interact with their own component parts".

My immediate reaction was - to quote my text message reply - "Ah, the old Idea [sic] of destructive reductionism strikes again. I think the only rational prescription is 172 pp of Daniel Dennett. :P" (172 pages is, naturally, the length of Elbow Room in the printing I have ready to hand.) I thought that was a pretty witty retort (I like the punnishness of substituting "pp" for "cc" in the medical-prescription template) but I said it because I thought he was saying that higher-level organized systems, like humans, have the power to act independently of lower-level systems. I thought he was proposing libertarian free will: will freed from the constraints of the laws of nature. But I've read the full interview, now, and I've read his essay describing "top-down causation" in detail, and I no longer think that's what he said.

I'm not sure, though. To be sure, I would have to be sure that I understood what he meant - but I can't find anything in what he says that supports the conclusions he draws.

Let me lay out the two halves of the picture.

What Ellis Gets From "Top-Down Causation"

From the interview:

What exactly is top-down causation?

A key question for science is whether all causation is from the bottom up only. If forces between particles are the only kind of physical causation, then chemistry, biology, and even our minds are emergent, bottom-up properties of physics. On the other hand, it might be that these emergent higher level structures, such as cells, neurons, and the brain, have causal powers in their own right.

In the first instance, all the higher levels are epiphenomena – they have no real existence – and so the idea that you are responsible for your actions is false. But in fact top-down causation takes place all the time, with the higher levels controlling the lower levels, not by any magic force, but by setting constraints on lower level interactions. This means that higher levels such as cells, neurons, and your brain have real causal powers, and this means you can indeed be held accountable for your actions.

In other words, Ellis is proposing that top-down causation is (a) not the forces-between-particles kind of causation that physics deals with, and (b) proof that the forces-between-particles kind doesn't get in the way of believing that we are responsible for what we do.

Now, I've heard forces-between-particles arguments before. My dad sent me the link because I was talking about how I understand the basics of the free-will dialectic, which is full of arguments like this. So, when Ellis invokes the idea of forces-between-particles destroying human responsibility for behavior, he is alluding to arguments along the lines of the following:

  1. If I am responsible for an event, then I caused that event to occur when some other event might have occurred instead.
  2. If forces outside my control caused one event to occur rather than another, then I did not cause that event to occur rather than the other.
  3. The future of the universe relative to any point in time is fully determined by its state at that time and by the operation of its fundamental, mechanical laws of nature (e.g. forces between particles) on that state. In other words, a complete, particle-by-particle description of the entire universe and a complete mathematical description of its natural laws - laws that include no persons, minds, motives, or the like - describes all there is to know about what will happen: no event shall occur except as described by those laws.
  4. If no event can occur except as described by the operation of natural laws on physical universe-states, then every event must be caused by those laws and initial states - by physics.
  5. The natural laws and initial state of the universe - and therefore physics - is outside my control.
  6. Because physics - which is out of my control - causes every event, I cause none of them.
  7. Because I cause no events, I am responsible for nothing that occurs.

...hence my initial reaction that - contrary to his express statement - he must be proposing a magic force acting from the top on the physics below: he explicitly rejects the conclusion of step 4 of the syllogism, and to say that "higher level structures [...] have causal powers in their own right" sounds a whole lot like a denial of step 3 (which would do the job). This wouldn't even be a surprising thing: philosophers call such theories "libertarian free will" precisely because lots of people propose them, and they share a lot of properties.

However...

What Ellis Describes as Examples of "Top-Down Causation"

...in his paper doesn't look like free-will libertarianism at all.

More generally, top-down causation happens wherever boundary conditions and initial conditions determine the results. [...] In the context of astronomy/astrophysics, there is a growing literature on contextual effects such as suppression of star formation by powerful active galactic nuclei [7]. This is a top-down effect from galactic scale to stellar scale and thence to the scale of nuclear reactions. Such effects are often characterised as feedback effects.

(if you're curious: [7] M. J. Page et al. (2012) "The suppression of star formation by active galactic nuclei". Nature 485:213-216.)

What is Ellis saying, here? As far as I can tell, just this: sometimes, when we are creating a model of some low-level phenomenon (e.g. the physics of star formation, or the operations being executed in a CPU, or a signal transmitted along a fiber-optic cable), we describe some of the variables in the model in terms of high-level phenomena (e.g. the abundance and proximity of galactic nuclei in the region, the program running on the computer, the network connection being used). If we do not draw from the descriptions of the high-level phenomena, we cannot create an accurate model of the low-level phenomenon. Therefore, high-level phenomena exist.

I've gone through his paper time and time again, and I can't find any definition other than this.

The Disconnect

Nobody says that, because a chair is made of atoms, it is not a chair. The fallacy of division (the idea I meant by "destructive reductionism" in my text, even if the term I mangled - "greedy reductionism" - means something slightly different) is far too obvious for anyone to overlook when such statements are made.

Further, nobody says that, once a chair exists, it is not the proximal cause of events. If the chair is wedged under a doorknob to keep out intruders, it is hard to find anyone who would dispute that the chair is keeping the door shut.

Rather, what some people say is that the chair is not morally responsible - that is, worthy of blame or praise - for keeping the door shut. The chair is not the ultimate cause of the door being jammed shut because we can trace a direct causal path back to whoever put it there ... and the fear that people are trying to relieve with their free-will arguments is that the causal line keeps going - it extends backwards through time to pass out of ourselves into our life experiences, our educations, our upbringing, and our genetics - and carries all our moral responsibility with it, leaving us as pathetic robots, thralls of the forces that made us.

In fact, if Ellis is saying what he seems to be, his entire argument can be summed up in a single sentence from Robert Nozick's Philosophical Explanations:

No one has ever announced that because determinism is true thermostats do not control temperature.

...and its insufficiency to its stated purpose can be summed up in the single sentence I got in response when I posted the quote to a Less Wrong quotes thread:

AllanCrossman: But what thermostats don't control is... what the thermostat is set to.

Which leads me back to what I said before, for completely different reasons. I said that Ellis needed to read Daniel Dennett - specifically, Elbow Room. That Nozick quote, in fact, comes from Elbow Room: Dennett uses it to introduce Chapter 3, on page 50 of the 172 that I prescribed ... and goes on for many pages more, describing ways in which outside causes can thwart the top-down control of the agent and ways in which they couldn't. He does this because he knows that the existence of high-level agents is not only obvious, but insufficient to justify believing in the autonomy of those agents.

Perhaps I am misreading Ellis's analysis. Perhaps he is saying something more subtle or complex, something which addresses more than just the superficial fact that the universe contains things larger than subatomic particles (or things made of abstractions rather than particles, like novels), and sometimes we can make sense of the universe by talking about those things. And I would be glad to discover that I am mistaken about Ellis's thesis, because if I'm not, he is shoveling great heaps of abstruse argumentation into the inboxes of his readers for no profit whatsoever.

- Robin (talk) 01:38, 19 July 2014 (EDT)

2014-03-12: Time travel game mechanic idea

(This started life as an Idea Pad bullet, but then I realized just how much I was writing and decided to do the right thing and give it its own space.)

A lot of games have played with time-travel mechanics, but I don't believe any have attempted to recreate the most classic of time-travel systems: time travel that creates a single, self-consistent timeline. The tricky part of doing this in a game is that, if a character jumps back to a time when that character already exists, we have a span of time during which they exist (at least) twice simultaneously. Each copy should be able to act independently, but the player would struggle to control multiple characters at once.

Resolving this issue by making a 'tape' of what the character does is problematic, though, because one function of time travel is to change what the character could have done. Imagine I need to get through a locked door, and quickly; this seems like it should be very easy:

Time | Past-Me                     | Future-Me
0    | Waiting outside.            | Arriving from 3 inside.
1    | Watching the door open.     | Opening the door.
2    | Entering through open door. | Continuing on my merry way.
3    | Jumping back to 0 inside.   |

Unfortunately, from a game perspective, there's a paradox here: how can the game know when there will be some future version of me arriving, and what I will do when I get there? And if the game doesn't know there will be a future-me opening the door, then past-me will see a locked door ... and so, even if future-me opens the door, the game won't know what to do with past-me after that changes.

...unless there is some way for me to tell the game what past-me will do.

My idea, then, would rewrite the locked-door bypass to look more like this:

Time | Past-Me #1 (P1) | Future-Me #1 (F1) | Past-Me #2 (P2) | Future-Me #2 (F2)
0 | Priming the time-machine. | Arriving inside from 3. | Restarting in the F1 timeline from 2. | Arriving inside from 2.
1 | Hunting for a way into the house. | Opening the door. | Waiting for the door to open. | Opening the door.
2 | | Clearing the doorway and reverting to take over the past-self's actions in the new timeline. | Entering and jumping back in time to 0. | Continuing on my merry way.
... | | | |
3 | Getting inside and, from inside, jumping back to 0. | | |

This is theoretically something a game could implement. If the game records what I do as P1, F1, and P2, it can play those actions back to simulate what happens when I play as F1, P2, and F2, respectively - and if it records what I see (or, if I want to be especially cruel, hear), it can tell if the new timeline will create a paradox for the old one.
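
To sketch how that record-and-replay check might work in code -- all of this hypothetical, with a toy stand-in for a real game engine -- each timeline gets a 'tape' of actions plus a hash of what was observed at each tick, and replaying a tape against the new timeline flags a paradox whenever the observations stop matching:

 #include <stdio.h>
 
 #define MAX_TICKS 600  /* one recorded slot per game tick */
 
 /* one timeline 'tape': what the character did, and a hash of
  * everything they saw (or heard), at each tick */
 struct tape {
     int action[MAX_TICKS];
     unsigned obs[MAX_TICKS];
     int len;
 };
 
 /* Replay an old tape inside the new timeline: re-apply each recorded
  * action, then check that what the replayed self would now observe
  * matches what was recorded. A mismatch means the new timeline has
  * contradicted the old one -- a paradox. */
 static int replay_is_consistent(const struct tape *t,
                                 void (*apply)(int tick, int action),
                                 unsigned (*observe)(int tick))
 {
     for (int tick = 0; tick < t->len; tick++) {
         apply(tick, t->action[tick]);
         if (observe(tick) != t->obs[tick])
             return 0;
     }
     return 1;
 }
 
 /* --- toy stubs standing in for the real game engine --- */
 
 enum { WAIT = 0, OPEN_DOOR = 1 };
 
 static int door_open[MAX_TICKS];  /* door state at each tick */
 
 static void apply_stub(int tick, int action)
 {
     if (action == OPEN_DOOR && tick + 1 < MAX_TICKS)
         door_open[tick + 1] = 1;  /* door is open from the next tick */
 }
 
 static unsigned observe_stub(int tick)
 {
     return (unsigned)door_open[tick];  /* all past-me sees: the door */
 }
 
 int main(void)
 {
     /* P1's tape: waited at tick 0, then saw the door open at tick 1 */
     struct tape p1 = { .action = { WAIT, WAIT },
                        .obs    = { 0, 1 },
                        .len    = 2 };
 
     door_open[1] = 1;  /* a timeline where future-me opened the door */
     printf("with future-me: %s\n",
            replay_is_consistent(&p1, apply_stub, observe_stub)
                ? "consistent" : "paradox");
 
     door_open[1] = 0;  /* ...and one where nobody did */
     printf("without:        %s\n",
            replay_is_consistent(&p1, apply_stub, observe_stub)
                ? "consistent" : "paradox");
     return 0;
 }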

I'm not sure yet how to deal with paradoxes, to be honest - it'd have to be something that prevents the player from simply ignoring them - but I think there's a lot of potential in this as a puzzle mechanic. And there's an obvious way to incorporate the tutorial into the system: the first thing the player character does in the game is equip, power up, and activate the experimental time-travel device for its next round of testing - and something bad happens that the character wants to prevent during this testing.

- Robin (talk) 17:43, 12 March 2014 (EDT)

2014-01-16: An Ignoramus's Quest for GCC, Part Two

Continued from Part One

Having learned that I could not simply <s>walk into Mordor</s> install GCC, I proceeded to the aforementioned GCC binaries page with the optimistic belief that I would find what past experience told me to expect from binary pages:

  • An executable file approximately identical to what one would obtain by compiling the source code on my own computer, or
  • An executable file that acts as an installer that produces a folder full of files approximately identical to the folder full of files that I would produce if I were following the procedure to compile the source code on my own computer.

...as opposed to a pageful of links, each of which was -- with but a handful of exceptions:

  • An independent project which, for motives probably similar to my own upon setting forth on this quest, contains GCC somewhere inside its bundle of tools.

...and each of which was -- with but a handful of exceptions:

  • Built to run on one or several flavors of UNIX.

Nevertheless, I forged onwards, quickly narrowing my search down to the two "Microsoft Windows" options listed:

  • Cygwin
  • MinGW

I opened each in a separate tab and started to read.

(There was one other option that I might have considered -- DJGPP, the DOS version -- but not only did I immediately eliminate this from consideration on the grounds that Windows is no longer DOS, but my immediate elimination of it from consideration was wise: running it on a Windows NT system can lead to ... issues.)

(Yes, Windows 7 is Windows NT.) (I think.) (It doesn't actually matter.)

(To be continued.)

2014-01-13: Short Status Update

...hi. Been a while. Let me tell you what's up.

  • Something I haven't mentioned on the Web (at least, not to my recollection) is that I'm in therapy for anxiety. As Boggle the Owl will tell you, anxiety is no joke -- but I'm dealing, and getting better.
  • Over the past month or so, I've been trying to maintain a regular schedule of self-improvement, with partial success. (It is quite fortunate that among my successes to date is realizing that I do not need to achieve perfect success, although I hope to improve the fraction a bit.) (Possibly by decreasing the denominator.) (But partial success is definitely a part of the aforementioned "getting better".)
  • The Bayesian Clocks thing is definitely on hiatus, but I intend to return to it. (Having a running copy of Matlab on my computer will probably be an aid to this -- graphs are fun and pretty, and numerical solution is often simpler than analytic.)
  • The blog ... well, I hope to post a few things here, but it is not a priority. I do hope to get a little bit of joy out of it by making a record of one of my self-improvement projects: learning C. Well, more specifically...

2014-01-13: An Ignoramus's Quest for GCC, Part One

(Yes, this is posted out of order. Relax, I know what I'm doing.)

Over the weekend, talking with a friend from the Less Wrong group in my area -- LessWrong DC -- I decided to begin studying the C programming language by studying The C Programming Language (by Kernighan and Ritchie -- a.k.a. K&R -- second edition). As an aid to doing so, I decided to install the GNU Compiler Collection (GCC) on my Windows 7 laptop, for two reasons.

  1. GCC is GNU free software -- so if I find myself engaged in a serious C-programming project, I will be able to do so without worrying about acquiring new software licenses.
  2. GCC is, according to what I've heard, the gold standard of C compilers -- so if I find myself wanting or needing to use some other compiler, I can expect at the very minimum to find users of the new compiler who are already familiar with the differences.

Proceeding on this noble basis, I typed "GCC" into a search engine, opened the webpage...

...and ran into the first problem that faces the uninitiated when they set out to use free software: that free software people are living in their own private universe. Indeed, they seem especially prone to the high-percentile-rank variation of the Dunning-Kruger effect: their idea of a rank beginner -- of someone whose knowledge is as close to nonexistent as can be found among those with whom communication is possible -- is an individual who can already compile C source code to run on their computer.

Meaning, in this case, that the installation instructions on the GCC website include, as the very first prerequisite to installation, possession of an "ISO C++98 compiler" -- one capable of configuring and building the raw GCC source code that is the product which GNU provides. Although, to their credit, they are sufficiently self-aware as to remark that:

Important: because these are source releases, they will be of little use to you if you do not already have a C compiler on your machine. If you don't already have a compiler, you need pre-compiled binaries. Our binaries page has references to pre-compiled binaries for various platforms.

...leaving me with my choice of two yaks to shave:

  1. Choose a binary distribution and use that instead; or
  2. Find a C++ compiler somewhere and build GCC.

Given that the latter appeared likely to have significantly more fur than the former, I decided to check out the provided binaries page and see what I could find.

Continued in Part Two.

Blog Archives

In the interest of keeping the front page navigable, I am (manually) shifting entries to archive pages over time. Blog:Archive Index will connect you to those pages.

At the moment, the most recent archive page is Blog:Archive 2013 (part 1).

Wiki Links

The following are pages on the Wiki which (should) connect you with most of the interesting non-blog content that I have or have promised.

Recent Additions

...the definition of "recent" is a slippery thing; expect content here when I figure out what it should mean to me. That said:

  • Henderson Duct Log - because, as Robert Pirsig said in Zen and the Art of Motorcycle Maintenance, when a problem turns out to be not easy, it's a good idea to do Science to it ... and science requires data.
  • Essays - because it seemed like a good category for things. (Categories ... now there's an idea.)

Starting Points

  • Packbat's Recreations
    • Wherein the site owner blathers about some of the pointless stuff he does.
  • Blog:Archive Index
    • An index to all the old blog entries that said site owner plans to have written in the future.
  • Packbat's Reading
  • Dispatches
    • Wherein reports are made on places and events.
  • Idea Pad
    • Wherein ideas are thrown at a wall to see what sticks.
  • Essays
    • Some ideas aren't short enough to fit on one line of one page.
  • Quotes
    • Wherein memorable phrasings are not-particularly-im-mortalized.
  • Pro Tips
    • Wherein useful suggestions of little to no import are provided.
  • Link:Art, Link:Webcomics, Link:Web Fiction
    • Wherein URLs that, at one point, directed browsers to interesting visual works, visual sequential works, and textual sequential works accumulate.
  • Scales
    • Wherein criteria are proposed for the measurement of arbitrary qualities.
  • Sandbox
    • Wherein experiments with Wiki formatting are conducted and randomly deleted.

Serial Works

  • Roll to Dodge - Zombie Apocalypse
    • "Roll to Dodge" is a simplified kind of forum roleplaying game in which the outcome of each action is determined by rolling a single d6. theplague42 and Packbat were moderating this game starting in January 2011 (until I vanished of the face of the earth) - the above is the beginnings of a copy of this archive, condensed to include just the player actions, die rolls, and resolutions.