## Probabilities
*December 5, 2011*

*Posted by PythonGuy in Beginning Programming, Python.*

I used to play a game called Hamurabi, a very ancient game.

The way it works is you have a population, an amount of land, and an amount of grain. Every year, you need to choose how much grain to plant, how much grain to feed the people, and how many acres of land to buy or sell.

Each year, a certain amount of grain would grow, people would starve if they didn’t get enough food, and rats would eat your surplus grain. Random events would cause disasters to make things interesting.

The game was remarkably simple but surprisingly addictive.

I figured it was time to write a new version for the modern age, when memory and CPU are no longer real constraints on most of the problems we faced programming 30 years ago. I chose the Hamurabi game as inspiration.

Of course, a real simulation of a primitive society could track each individual within the population. People would age, and then die from starvation, random events, or just old age.

Using the US Census data, I found some actuarial tables that gave a percentage chance someone would die at a particular age.

I stored the number of people in each age group, separated by sex, in a simple array. The index was the age.

Now all I needed to do was find a scalable way of calculating how many people die in each age group every year.

The simplest way to do this is to roll the dice for each person each year. Simply use the random module’s random() function to get a number between 0.0 and 1.0. If this is less than the probability they will die, then they die.

Of course, if you have a thousand people, you have to call random() a thousand times. A million people need it called a million times. This would slow the program down as your population grew.
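A minimal sketch of this per-person approach (the function name and signature are my own, not the post’s actual code):

```python
import random

def deaths_naive(n, p):
    """Roll the dice once per person: each of n people dies
    with probability p this year."""
    deaths = 0
    for _ in range(n):
        if random.random() < p:   # one call to random() per person
            deaths += 1
    return deaths
```

The cost is one random() call per person, so it grows linearly with the population.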

I vaguely recalled some math from my college years that should help me work these things out more quickly. Reading some of my old textbooks, I discovered that the Binomial Distribution gives you the exact probability of seeing x events in a population of n, where each event has a probability p.

However, the binomial distribution relies on binomial coefficients, which rely on factorials, which are notoriously difficult to calculate precisely for large numbers.

Luckily, this is not a new problem. Approximating the Binomial Distribution leads to the Normal or Gauss Distribution. This relies on simple exponentials and Python’s random module already has a gauss() and a normalvariate() built in.

What do you use for mu and sigma, the mean and standard deviation? The mean is simply n*p, and the standard deviation is the square root of n*p*(1-p).
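A sketch of the normal-approximation version using those formulas (the clamping to the range 0..n is my own addition, since a Gaussian tail can stray outside the legal range):

```python
import math
import random

def deaths_normal(n, p):
    """Approximate Binomial(n, p) with one draw from a normal
    distribution. Reasonable when n*p and n*(1-p) are both large."""
    mu = n * p                          # mean number of deaths
    sigma = math.sqrt(n * p * (1 - p))  # standard deviation
    deaths = round(random.gauss(mu, sigma))
    # Can't have negative deaths, or more deaths than people.
    return max(0, min(n, deaths))
```

One gauss() call replaces n random() calls, so the cost no longer depends on the population size.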

The normal distribution really doesn’t give you good approximations when n or n*p is small. If n is really small (there are only a handful of people), I can just roll random() that many times. But if n is large, and p is very small (generally, for younger ages), then my approximation breaks down.

For this, the Poisson Distribution is useful. The Poisson Distribution can give you a very good approximation of the Binomial Distribution for small numbers of deaths. What I do is roll the dice with random(), and walk from 0 deaths upward, subtracting the probability that that many deaths would occur given the Poisson Distribution. When I’ve exceeded the roll, then that’s the number of deaths that occur.
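The walk described above can be sketched as follows (my reconstruction, not the post’s code; the recurrence P(k) = P(k-1) * lam / k sidesteps factorials entirely):

```python
import math
import random

def deaths_poisson(n, p):
    """Approximate Binomial(n, p) with a Poisson(n*p) draw, good when
    p is tiny. A single uniform roll is consumed by walking upward
    from 0 deaths, subtracting each outcome's probability."""
    lam = n * p
    roll = random.random()
    k = 0
    prob = math.exp(-lam)         # P(0 deaths)
    while roll > prob and k < n:  # never report more deaths than people
        roll -= prob
        k += 1
        prob *= lam / k           # P(k) = P(k-1) * lam / k
    return k
```

When the remaining roll no longer exceeds P(k), k is the number of deaths for the year.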

Using these three methods of calculating deaths leads to a very scalable algorithm for computing the number of deaths in a population of one, ten, hundreds, or even millions of people. I ran my simulation for several hundred years, until I saw hundreds of billions of people, with tens of billions dying each year.

I think this shows why I like Python. Never once did I have to think very hard about how to express my ideas in Python. I was completely free to consider the algorithm. Most of my time was spent reading math books and translating complicated formulas into simple algorithms, and testing that it did what I expected.

## Learn to Program (With Python)
*April 29, 2011*

*Posted by PythonGuy in Beginning Programming.*

There are a finite number of broad skills that any successful programmer must master. The language used to program is only one of them.

Let me list them out as best as I can:

- Understanding what’s possible in programming, that is, the science bits.
- Understanding how to put things together to accomplish a specific goal, or the engineering bits.
- Understanding the programming process, where things go wrong, and how to manage that. This is the management bits.
- Understanding how to work as a team. This is more than making friends, and has to do with how to divide up tasks and bring things together in harmony.
- Understanding the platform and environment you are programming in, its limitations and capabilities.
- Understanding the code base you are working on, where things are and how things are organized.
- Understanding the programming language used, its strengths and weaknesses and how to work around them.

Of those skills, the only one people seem to pay attention to is the last one. However, it’s probably the least important of the lot. Sure, if you’re working on a project written in Java, you’re going to write Java. But compared to understanding the environment and platform, managing the programming process, working with your teammates, and figuring out the science and engineering of programming, it’s the least of your concerns.

If you’re a beginner programmer, I strongly recommend you choose a language that is as easy as possible to learn (Python), that expresses the vast majority of programming concepts (Python) and that has a strong community built around it (Python).

There are other really, really good choices you can make, and a couple very, very bad choices you can make for a first programming language. Java’s and C’s communities make up for those languages’ deficiencies, but you are going to fight an uphill battle mastering a difficult language unnecessarily. Perl and Ruby both have a strong community, and a more moderate learning curve than C or Java, but again, why complicate matters with a language that is more difficult than it needs to be?

One day, you’ll likely find yourself working on a C, Perl, or Java project. However, if you’ve been successful in learning all the *other* skills that have to do with programming, you’ll pick up these languages in no time flat.

## The Path to Python Mastery
*April 25, 2011*

*Posted by PythonGuy in Beginning Programming, Python.*

I’m taking some new apprentices under my wing. Two of them have very little experience programming, and one only has a little. All three are very bright.

I’m noticing that Python is sort of the ideal language for beginners. One of my apprentices was lost on “for” loops. That’s a very good place to get lost. Once you’ve mastered the “for” loop, you’re 80% of the way to becoming a Python Master. So we spent more time than I thought we’d need, but less time than it took me, learning the for loop.

In my mind, the critical steps a Python Master must take are roughly as follows.

- Understand the basic values: ints, floats, strings, unicode, lists, tuples, dicts. Know how to take slices and modify them.
- Understand variable names and how Python only stores a reference. Assign the same list to two variables and understand why changing the same list through one variable is visible from the other.
- Understand flow control: if, while, for, and try.
- Understand functions: how to define them; what lexical scope is and is not; how to write a function that takes a function. (Lambda calculus is a great topic here, if only to show that the fundamental building block of every program is the function.)
- Understand how to organize data and functions into classes. Understand all the weird things you can do with classes to make attributes appear mysteriously.
- Understand how to organize code into modules. Publish your own module on PyPI, or at least know how it is done. Become familiar with what modules are available in the standard distribution as well as on PyPI.
- Participate in a project that is beyond your understanding. Use other tools to find the right spot of code to modify. Modify the code, then test.
- Participate in design discussions, learning how to evaluate a program at a very high level. Understand the basic algorithms, Big-O notation, and how different components interact productively.
- Create a new project from scratch, and build it until you have five or six engineers participating in it. From this, learn how to manage code among a team.
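The reference-semantics step above is worth seeing live. Here is a made-up example you could walk an apprentice through:

```python
# Two names bound to one list object: a mutation through either
# name is visible through both.
a = [1, 2, 3]
b = a            # b is another name for the same list, not a copy
b.append(4)
print(a)         # [1, 2, 3, 4]

# A slice builds a new list, so the names stop aliasing.
c = a[:]
c.append(5)
print(a)         # still [1, 2, 3, 4]
print(c)         # [1, 2, 3, 4, 5]
```

Once a beginner predicts all three printed lists correctly, they have understood that variables hold references, not values.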

As you can see, I am a true believer in “doing” versus “knowing”. For myself, a lot of my knowledge is stored in my fingertips (I use ViM a lot). You don’t really understand a piece of something until it has fully become incorporated in how you think about things.

## Teaching Programming With Python
*January 7, 2011*

*Posted by PythonGuy in Advanced Python, Beginning Programming, Mako, Python.*

Python Guy taught Python Guy Jr. (9 years old) about Boolean Algebra. (It’s really easy because there are only two numbers, and so you have a 50% chance of getting any problem right by guessing. I guess that’s why they don’t teach it in Kindergarten, although if we did, I think a lot of kids would learn to like math a lot more.)

Python Guy popped open python on the konsole in KDE on Fedora 13. Although this box doesn’t have Flash installed (on purpose), it is, to Python Guy’s spawnlings, the magic box that does things that blow their mind every day before they even wake up late for school.

Python Guy watched Python Guy Jr. learn all about short-circuiting. When Python Guy was a young one, practicing on the Commodore 64 (or was it Borland C several years later?), the thing that constantly awed him was the short-circuit operators: “and” and “or” in Python, && and || in C.

Ah, the magic!

```
>>> 7 and 4
4
>>> 0 and 5
0
>>> 7 or 8
7
>>> 0 or 10
10
```

Of course, when you understand what is happening, you are tempted to do fancy things like this in Mako:

```
You have bought ${n} ${n == 1 and "item" or "items"}.
```

Until you mess up, that is, and foolishly choose something false for the positive case:

```
You have bought ${n} item${n == 1 and "" or "s"}.
```

(Can you spot it? Neither can I when I do it.)
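The trap is easy to demonstrate in plain Python (the same expressions, outside any template):

```python
n = 1

# The and/or idiom works only while the "true" branch is truthy:
print(n == 1 and "item" or "items")      # item

# With an empty string as the "true" branch, it silently fails:
print("item" + (n == 1 and "" or "s"))   # items  (wrong for n == 1!)
# (n == 1) and "" evaluates to "", which is falsy, so the `or`
# falls through to "s".
```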

So Python has the infamous and hotly debated “a if b else c” expression, which does the right thing:

```
You have bought ${n} ${"item" if n == 1 else "items"}.
```

But is that really the right thing? No, it’s not. If you want to do plurals, you need to be really, really smart about it. Even if you have a library that will pluralize your English nouns, you’ve got to think of the Germans, Slovenians, and Japanese who might not like ordering their stuff in English.

Back to the original topic, Boolean Algebra is a great way to teach Algebra and a great way to play with some of the most advanced features of Python. It’s almost magical how it all works, at least in the imagination of a 9-year-old.

## Teaching my Son Python
*March 18, 2008*

*Posted by PythonGuy in Beginning Programming, Education, Python.*

I home school my son. Why? Well, that’s a whole other ball of wax.

One of the opportunities that home schooling affords is I get to choose the curriculum. Part of that curriculum includes learning to program. Whether my son decides to be a lawyer, a plumber, or a high-energy particle physicist, I expect that at least the understanding of how to program will come in handy.

I started to learn to program when I was just a bit older than him, on an old Commodore 64 with BASIC. He just turned 7. Last year I tried some experiments to see if he could “get” programming, at least the most fundamental concepts, and I had prodded lightly over the year. But this time, he seems to be starting to “get” it.

At first, he used it like a calculator. You type in the numbers to the Idle prompt, and out comes the same number. You put in a +, -, *, or / and it does all the hard work for you. (He doesn’t know about decimals yet—although he knows how to add and subtract money including cents.)

What really fascinated him was this short program, written in Python 3.0:

```python
import random

def guess_my_number():
    number = random.randint(1, 1000)
    while True:
        guess = int(input("Guess my number: "))
        if guess > number:
            print("You're too high!")
        elif guess < number:
            print("You're too low!")
        else:
            print("You got it! You must be very smart.")
            break
```

This is a very simple program, but it plays a game with my son that requires him to think logically. The reverse game is a similar program, almost as easy to write:

```python
import random

def let_me_guess():
    high = 1000
    low = 1
    while True:
        guess = random.randint(low, high)
        answer = input("Is it %d? (yes/low/high) " % guess)
        if answer == 'yes':
            print("That was fun!")
            break
        elif answer == 'low':
            low = guess + 1
        elif answer == 'high':
            high = guess - 1
        else:
            print("I didn't understand you.")
```

Now, I didn’t write the programs perfectly the first time, but thanks to Idle it was a trivial matter to find and fix bugs. I am not entirely certain that the above two programs work—I typed them from memory. But if they don’t, it would be a good learning experience to make them work. So he got to see part of the development process: write, run, debug, repeat.

Having the computer be just as smart as dad at guessing a number was an interesting revelation to my son. I guess something clicked in his head that there was something magical happening inside that strange metal box, something that wasn’t completely beyond explanation and something he could harness. After seeing the computer guess his number, he began playing guess_my_number() again and again. In that process, he discovered a bug. If you type in a very large number, then the whole thing throws a ValueError. He actually made it a game of seeing what input caused the error and what didn’t.

I tried to play with the debugger, but I hardly ever use it and it didn’t seem to do what I wanted anyway. It would’ve been neat to show my son how the program actually ran, but truthfully, if you don’t understand the concepts, it will still be mysterious.

It also reminded me about how much “art” (art as in technical experience, know-how) goes into programming. You have to know how to use the keyboard, run programs, debug programs, use a text editor, and a whole host of other things. If all of these pieces don’t come together, if you’re deficient in one area, you can’t program. It would be nice if we could make the programming task even simpler by removing some of these dependencies (dependencies which really didn’t exist in the C64 era), but alas, I don’t see a way to do it and still write “real” programs.