
Python is Not a Stepping Stone to Lisp June 13, 2010

Posted by PythonGuy in Java, Lisp, perl, Python, Ruby.

I’ve dabbled a bit in Lisp-land. I left frustrated and annoyed. Not with the language, per se, but more with the community and its support, or rather, the lack thereof. I’ve also taken up full-time residence for a number of years in C and C++ land. I’ve tinkered in quite a few languages, and today I’m forced to write in Java. (Thanks, Java. My project is now a full two months late because your memory management sucks and I cannot do proper caching.)

Brian Carper is an ex-pat of the nation of Ruby now strangely finding residence in the nation of Clojure. He talks about why Ruby is a natural step towards Clojure, and unwittingly exposes Ruby’s fatal flaws, flaws which I find simply abhorrent. One day we’ll be reading about Brian Carper’s adventures in some other language, wherein he discovers, after all, that previous language X had it all wrong to begin with.

For reasons I cannot imagine, he hasn’t tried Python out, at least not enough to find that it addresses the weaknesses of all the languages he has tried before. Yes, super() is not super, and bad Python sucks as bad as bad anything. Python isn’t a trivial language to master, although with no foreknowledge you can get pretty deep into it without realizing it.

Anyway, perhaps this bit of arguing will help him see the error of his ways. I like to classify languages into two categories: great languages and terrible languages. There really is no middle ground.

Great languages are languages designed to solve a problem that subsequently solve that problem. C went about trying to provide some kind of structure to assembler without getting too far away from assembler, and succeeded brilliantly. Lisp set out to prove that a language built on pure mathematics can solve the world’s problems and do it quickly, and it succeeded wildly. Perl set out to show that “scripting” in a higher-level language can actually make some problems really easy to solve.

Terrible languages set out to solve a problem and fall short. These are languages like Ruby and Java and pretty much everything out there except a few languages.

When you finally realize that even a great language is a terrible language, you have reached a certain level of understanding. It’s like waking up one morning and seeing, for the first time, that Lincoln probably had body odor!

So all languages are terrible. Even Python. Python sucks, a lot! I mean, I had to fight with the dang thing for hours because I happened to name a script “mymodule.py” in my bin path and it wasn’t picking up the right package path! We have a name for these things in the Python community (warts), and we show them off like trophies, waving them proudly and emphatically in front of programmers new and old.
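That wart is easy to reproduce. Here is a minimal sketch of the shadowing behavior (the module name mymodule and the temp directories are invented for illustration): Python puts the script’s own directory first on sys.path, so a local file wins over an installed module of the same name.

```python
import os
import sys
import tempfile

# Two copies of "mymodule.py", standing in for a local script
# and an installed package (names and dirs invented for illustration).
dir_script = tempfile.mkdtemp()
dir_installed = tempfile.mkdtemp()
for d, tag in [(dir_script, "script dir"), (dir_installed, "site-packages")]:
    with open(os.path.join(d, "mymodule.py"), "w") as f:
        f.write("WHERE = %r\n" % tag)

# Python puts the script's directory at the front of sys.path,
# so the local file shadows the installed one.
sys.path[:0] = [dir_script, dir_installed]
import mymodule
print(mymodule.WHERE)  # script dir
```

Rename the local script, or keep it out of your bin path, and the installed module comes back.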

It’s really odd seeing a language community proudly and boldly declare what their language is terrible at. It’s even more odd to see them do it with a smile.

What makes one language better than the others is that it sucks less. When you compare the long list of Python’s failures with the long list of failures of every other language out there, you’ll quickly see that Python isn’t too bad. In fact, it’s kind of nice.

We don’t have to weigh our benefits against our costs. We know that every language has some huge benefits.

What we do assert, however, is that our costs are much less than other languages’ costs, and so you’ll end up with Python because we suck less.

In the end, all languages only provide one benefit: Helping you get your program correct.

Writing a Virtual Machine in Python April 17, 2008

Posted by PythonGuy in Advanced Python, Lisp, Python.

Python is the ultimate prototyping language, right?

Yes! I am glad I know it.

In a few hours last night, I was able to write a virtual machine that could power a Scheme interpreter.
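I won’t reproduce the whole thing here, but the core of such a machine fits in a screenful of Python. This sketch shows the dispatch-loop idea; the opcodes are invented for illustration, not the ones I actually used.

```python
def run(code):
    # code is a list of (opcode, argument) pairs; opcodes invented here
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "JUMP":
            pc = arg
            continue
        elif op == "JUMP_IF_FALSE":
            if not stack.pop():
                pc = arg
                continue
        pc += 1
    return stack.pop()

# (if 0 (+ 1 2) (+ 3 4)), compiled to the toy opcodes by hand:
program = [
    ("PUSH", 0),            # condition
    ("JUMP_IF_FALSE", 6),   # false: jump to the else branch
    ("PUSH", 1), ("PUSH", 2), ("ADD", None),
    ("JUMP", 9),            # true: skip over the else branch
    ("PUSH", 3), ("PUSH", 4), ("ADD", None),
]
print(run(program))  # 7
```

A Scheme compiler then just has to turn s-expressions into flat lists like `program`, which is the easy half of the job.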


Python Typing: Just the Right Amount March 22, 2008

Posted by PythonGuy in Java, Lisp, perl, Python.

So, I was reading some blog posts from who-knows-when. I’ve closed the pages and they’re lost to my memory, so sorry I can’t cite them.

A few points seemed to bleed through, though:

1. Dynamic typing (that is, not specifying the type of the objects at the time of coding) is a good thing, since getting static typing (the opposite) correct is very hard. In fact, Haskell and other languages make a really, really good effort at it, almost get it right, but still don’t solve the problem of making the code easier to read, write, and maintain than dynamically typed languages. So, in the boxing match of static typing vs. dynamic typing, dynamic typing wins by forfeit.

2. Strict typing (that is, specifying the type of the object upon inspection) is a good thing, since loose typing (not knowing what type a thing is, even after looking at it very hard) is terrible. In this boxing match, the opponent is so ugly and so hard to deal with, that the audience boos him out of the ring before the match can start. Strict typing wins.

So, the ideal language will be dynamic and strict (Python, Lisp), not static or loose. (Although, if someone figures out the static typing thing without making life harder, there could be an upset. This is unlikely, since writing static type declarations that work is as hard as writing the code itself.)
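A quick illustration of dynamic-but-strict: you never declare a type, yet Python refuses to silently coerce one type into another.

```python
# Dynamic: x can hold anything, no declarations needed.
x = 1
x = "one"

# Strict: whatever a thing is, its type is respected at runtime.
try:
    result = "1" + 1
except TypeError:
    result = "no silent coercion"
print(result)  # no silent coercion
```

A loosely typed language would have happily handed you "11" or 2 and let you find the bug much later.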

Some people nowadays are trying to bring some form of static typing to Python. Python 3000 will have it, to some degree. However, it is not really going to be implemented at compile time. It will be an additional parameter check at runtime that will throw an exception should the static type check fail.
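Nobody knows the final form yet, but a runtime parameter check along those lines could be sketched today with annotations plus a decorator. The check_types decorator below is hypothetical, my own invention, not anything actually planned for Python 3000.

```python
import functools
import inspect

def check_types(fn):
    # Hypothetical decorator: enforce annotations when the function is called.
    sig = inspect.signature(fn)

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = fn.__annotations__.get(name)
            if isinstance(ann, type) and not isinstance(value, ann):
                raise TypeError("%s must be %s, got %s"
                                % (name, ann.__name__, type(value).__name__))
        return fn(*args, **kwargs)
    return wrapper

@check_types
def add(x: int, y: int) -> int:
    return x + y

print(add(2, 3))  # 5
# add("2", "3") would raise TypeError: x must be int, got str
```

Note that all the work happens at call time, not compile time, which is exactly why I doubt anyone will bother.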

I have a bold prediction: In the end, no one is going to use the static typing features of Python. How do I know this? Because perl tried something similar, and it fell by the wayside, hard.

Granted, perl’s typing system was extremely naive and difficult to use. But it is telling that I don’t see it used in any new code. In fact, I bet you could ignore the whole thing, just pass over those constructs without even recognizing them, and the old code that uses them wouldn’t notice. This is partly because the code calling the statically typed subroutines is already debugged, but also because if there is an issue, it will come up. If there isn’t, it won’t.

Duck typing these days is taking a lot of hits. That’s unfortunate, because duck typing is exactly the amount of typing we need. Duck typing, in the end, says, “Who cares what type it is? I’m just going to use it, and if it wasn’t duck enough, we’ll be eating pâté of whatever you passed me.”
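The whole idea fits in a few lines. Nothing below is special; Duck, Person, and make_it_quack are invented names for illustration.

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    # Not a Duck in any type hierarchy, but duck enough.
    def quack(self):
        return "I'm quacking!"

def make_it_quack(thing):
    # No isinstance() check: anything with a quack() method works.
    return thing.quack()

print(make_it_quack(Duck()))    # Quack!
print(make_it_quack(Person()))  # I'm quacking!
```

Had make_it_quack demanded an actual Duck, the Person would have been turned away for no good reason.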

See, sometimes I do know better than the person who implemented the function what should be passed in. Sometimes I want to sneak in something that is not expected. Here’s an example: the ps1 variable in the sys module. At first glance, 99% of programmers would say, “That should always, 100% of the time, be a string.” But then the 1% who seriously use BASH every day, and know that $PS1 is most definitely not a string but a piece of code written in a special language, will pipe up and say, “That is most definitely *not* a string. It is something that can be turned into a string, but that’s all it is.”
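And Python agrees with the 1%: the interactive interpreter calls str() on sys.ps1 before each prompt, so any object with a __str__ works. A sketch (CountingPrompt is an invented example):

```python
class CountingPrompt:
    # The REPL calls str() on sys.ps1 each time it shows a prompt,
    # so the "prompt" can be any object, not just a string.
    def __init__(self):
        self.count = 0

    def __str__(self):
        self.count += 1
        return "[%d]>>> " % self.count

# In an interactive session you would do:
#   import sys
#   sys.ps1 = CountingPrompt()
prompt = CountingPrompt()
print(str(prompt))  # [1]>>>
print(str(prompt))  # [2]>>>
```

If sys.ps1 had been locked down to str, that trick would be impossible.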

Sometimes, the people who put the guts inside of the black box are too strict. Sometimes they overestimate what the requirements for input are. And sometimes people who use those black boxes know better.

So let’s leave our contracts where they belong—in the documentation. Let’s allow coders to put together their systems and see if they work, to learn how the black boxes work better than the people who made the black boxes. In other words, let’s keep our noses out of each other’s business.

Python v. Lisp August 21, 2007

Posted by PythonGuy in Lisp, Python.

In this corner, a young, up-and-coming language, designed by a mathematician with attention to simplicity and conciseness: Python!

In this corner, an old, experienced language with nearly 50 years of programming experience under its belt, hailed by many as the emperor of all languages, the one and only: Lisp!

I’m trying to learn Lisp, and I think I get it. I’ve tried before and I couldn’t get very far because I didn’t know what exactly I was looking for. But now… now it all seems to make sense to me. (I have yet to write any real code with Lisp: what is the thingy where you set global variables? I can’t recall.)

The core of Lisp is, really, the way it structures its code. Everything is a list, and those lists get processed. The way a list is processed is simple: the first item in the list is the function, and the rest are its parameters. Pretty simple, right? Except for one problem: you can’t get very far with just that. You need more, though not much more.
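You can sketch that core in Python itself, using Python lists as stand-ins for Lisp lists. The leval function and the tiny environment here are invented for illustration.

```python
import operator

def leval(expr, env):
    # A list: the first item is the function, the rest are parameters.
    if isinstance(expr, list):
        fn = leval(expr[0], env)
        args = [leval(arg, env) for arg in expr[1:]]
        return fn(*args)
    # A symbol: look it up in the environment.
    if isinstance(expr, str):
        return env[expr]
    # A literal: it evaluates to itself.
    return expr

env = {"+": operator.add, "*": operator.mul}
print(leval(["+", 1, ["*", 2, 3]], env))  # (+ 1 (* 2 3)) => 7
```

That dozen lines really is most of the evaluation model; everything else in Lisp is what got layered on top.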

So people added some more things to it, and one of those things was macros. Macros are a way of hijacking the function call, doing strange things to the arguments that we wouldn’t think of doing in Python. Macros can mess with the way code is compiled. They can rip statements up and put them back together in new and interesting ways. That’s fascinating, but it’s a bit scary. A bit scary and confusing, because macros can (and do) introduce their own little languages.

You can do something like macros in C (via the preprocessor) and you can do it in Python (provided you know the magic behind the scenes). It’s just really, really hard, and people are strongly encouraged not to. One of the reasons is that C and Python implementations vary. Lisp implementations, however, all share the same basic core: list processing.
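For the curious, here is a glimpse of that Python magic: the ast module lets you rip a statement apart and put it back together before it is compiled. The SwapAdd transformer is invented for illustration; it rewrites every + into a *.

```python
import ast

class SwapAdd(ast.NodeTransformer):
    # Rewrite every + into a * before the code is compiled.
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

tree = ast.parse("result = 2 + 3")
tree = SwapAdd().visit(tree)
ast.fix_missing_locations(tree)
namespace = {}
exec(compile(tree, "<ast>", "exec"), namespace)
print(namespace["result"])  # 6
```

It works, but notice how much ceremony it takes compared to a Lisp macro, which is exactly the point.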

One more massive difference is the traditions that have built up around Lisp. Rather than being verbose, Lispers chose tiny symbols to mean big things. I have yet to understand all these symbols and what they mean, and that’s a negative. I’ve been programming perl for too long, and I still get stumped by an oddly placed ‘#’ or ‘`’. I hope the set of special symbols in Lisp is small enough that they can all be kept in your head at once. That certainly isn’t true of perl.

Python, on the other hand, decided from the get-go to avoid this disease affecting Lisp and perl. You can see it in the design for Python 3: there are many bits that are going to disappear or be replaced with something simpler. That’s a good thing. Python will be less magical, less symbol-y, and easier to explain.

I cannot say that Lisp isn’t a good language; it certainly is. I plan to understand Lisp fully by embracing it and playing with it. Only then can you trust my opinion on this subject. Who knows? Maybe Python Guy will become Lisp Guy?

What I am saying is that Lisp isn’t a language where you can jump in the deep end and come out okay. You certainly need to spend a lot of time sorting things out and learning the basic language structures. That’s the opposite of Python, where newbies can start writing code from day 1.