# Content tagged Self-Learning

1960s: Lisp, Simula, Algol, our hallowed forefathers

1971: Unix released in Assembly, the advent of time

1972: Unix ported to C or, the advent of Portable/Commodity OSes

1973: ML interpreter in lisp appears, the advent of Typed FP?

1974: Deutsch+Greenblatt, the advent of LispM Dreams, fast implementations

1975: Sussman and Steele, the advent of Scheme

1976: Copyright Act of 76, the advent of Proprietary Software

1977: Apple II, Atari 2600, the advent of 'hobby computing'

1978: 8086, The first BBS, TCP split into TCP/IP, the advent of networking

1979: CADR LispM paper published, the advent of Single-User Workstations?

1980: Smalltalk-80, the advent of OOP? (76 wasn't released outside PARC)

1981: Symbolics LM-2 sells, the advent of the LispM

1982: Commodore 64; Sun-1 workstation or, the advent of the Unix Workstation

1983: MSFT announces Windows; Stallman founds GNU, the advent of free software

1984: Apple Macintosh or, the advent of consumer graphical PCs

1985: Symbolics registers first domain, NES, Intel sells 386, the advent of home computing? (spreadsheets, office suites, etc)

1986: IETF is formed, IMAP is developed, efforts to add OO to Common Lisp, more standards

1987: GNU Compiler Collection, the advent of free toolchains

1988: Soundblaster, MS-DOS 4.0, a boring year

1989: General Public License, the arrival of free software

1990: Haskell 1.0, Caml Light, the arrival of Typed FP? not quite yet...

1991: Linux 0.01, the advent of open source

1992: Alpha+OpenGenera, Linux GPL'd, Windows 3.1 is released, the advent of the Dark Ages

1993: Pentium I released, the advent of the "One True"(ly bad) Architecture

1994: Linux 1.0, XFree86, Red Hat, the advent of the distro

1995: IE 1, AOL 3 hits 2 million users, the advent of the net

1996: Linux 2.0 w/SMP, Apache leads web servers, the arrival of open source

1997: Wifi+HTTP 1.1, CD-RWs+Office 97, IE 4+google.com, nullsoft founded, the advent of the modern age

1998: US v MSFT, Netscape open sources Mozilla, the 6-year browser war purgatory

1999: Google Incorporates, the advent of Search

That ought to give me plenty of time to work on personal studies...but there's a problem with that. I haven't been good about keeping my personal CS studying structured since the great 2008 experiment/debacle. There are a lot of reasons for that experiment's failure. I lost steam studying only SICP in 2008, with no immediate idea of how to write software that was useful to me or anybody else. Perhaps more significantly, I wound up with not only a full-time job but also housewife duties in May 2008, which practically ended the free time and tight scheduling that had been crucial to my progress. I'm likely still going to have housewife duties but I think I can carve out enough time this summer to give things another go.

### Why Bother?

Long before *Outliers*, many studies suggested that 10,000 hours of practice are required to achieve expertise. Peter Norvig has a post called "Teach Yourself Programming in Ten Years" that is often linked in discussions of learning to program, and it cites a few of these studies. ...but that link is, perhaps, a bit over-cited and its impact softened as a consequence. I prefer the comment by nostrademons on that post, reproduced here for posterity:

I don't think it works quite like that.

When I got my first programming job, straight out of high school, I finished tasks in 2 days that took the other programmers there 4 months. I figured that if I was that much faster than them, I ought to be able to become a world-class programmer in just a couple years (or more accurately, I thought I could become a world-class physicist in my 4 years of college and then a world-class programmer in the 2 years afterwards).

So I resolved to take all the shortcuts I could. I'd read all the classic books in the field and learn from those with more experience than me. I'd take internships with experienced, accomplished programmers and pick their brains for everything I could. I'd take on volunteer coding tasks so I could get some experience building things on my own. I'd cherry-pick all the tough courses at college so I got the best part of a CS degree without having to sit through stuff I already knew.

I did all that. I've read all the classic CS books - GoF patterns, Mythical Man Month, Extreme Programming stuff, Pragmatic Programmer, Knuth, SICP, TAPL, Dragon Book, On Lisp, etc. I've worked with programmers that wrote large chunks of curses, Rogue, vi, Jini, JavaSpaces, HotSpot, Gallery, Stratus, Equalogic, DEC compilers, Python, Google, and a bunch of other projects they don't brag about. I wrote Amherst's course-evaluation system, and rewrote the software for a 100k-user site, and wrote one of the front-page hits for [haskell tutorial]. I have that CS degree, and aced the algorithms class of which you speak, and took compiler design and OS and computer graphics too.

It's been 9.5 years since that first programming job, and it still feels like I have a really long way to go before I'm actually a world-class programmer.

The part I didn't realize, in my youthful arrogance, was that I was comparing myself to the wrong people. When Norvig tells you how to become a programmer in 10 years, he's assuming you're already taking all the shortcuts you can. It still takes 10 years. Most of the people you'll meet straight out of high school, or in most colleges, or in random companies, will never become programmers in the sense that Norvig's talking about. Eventually they'll give up trying, and start grumbling on Reddit about how the software industry is mostly boring cubicle farms where they push around Enterprise JavaBeans and never use the algorithms that they learned in college.

Let me just say, I *love* this comment. I've thought about printing it out, hanging it over my bed and reading it before I go to sleep every night just for the kick in the ass it gives me. I'm probably too relaxed to ever achieve the level of expertise nostrademons is talking about (see the "experiment" link above if you don't believe me) but I want to keep learning. I think it is far too easy in this field to stop following both theory and practice, to either stop writing code or stop keeping up with the theory. I have a real passion for and interest in programming and I plan to try to avoid that trap.

### How To Do It

I'm not sure to what degree schoolwork, paid work or personal projects can contribute to the 10,000 hour figure. I feel like if it's not a concerted effort to get better in an area where you are weak or otherwise generate forward motion, if it's not dedicated practice but plodding, repeated movement, it doesn't count. Many of us are adults, though, and if you want 8 hours of sleep 7 nights a week, plus social time, transit time, time to eat and time to relax, then a week goes from 168 hours to 112 hours to 72 hours (after a 40-hour workweek) to less really quickly.

Assuming you set aside an hour every work day (M-F) for dedicated personal studies, 52 weeks a year, it would take you about 40 years to achieve 10,000 hours. I have to assume Norvig or others would include school or a job assuming that it continued to push your limits. Otherwise, we're talking 20 hours a week (or 4 hours after work+school+etc every workday) to get there in 10 years. At any rate, I've said I'm in no rush and that I'm probably too relaxed to achieve the level of expertise discussed above. As a consequence, I'll only do 5 hours a week from 4-5pm Monday through Friday.
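For what it's worth, the arithmetic above checks out; a throwaway sketch, with the numbers taken straight from the paragraph:

```scheme
;; One hour per weekday, 52 weeks a year.
(define hours-per-year (* 1 5 52))                ; 260 hours

;; Years to hit 10,000 hours at that pace.
(define years-slow (/ 10000.0 hours-per-year))    ; ~38.5, call it 40

;; At 20 hours a week instead.
(define years-fast (/ 10000.0 (* 20 52)))         ; ~9.6, call it 10

(display years-slow) (newline)
(display years-fast) (newline)
```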

I'm not going to have quite as structured a study plan as I did in 2008 though I will be doing the exercises, there will be specific books I study from and hopefully I'll find time to blog about it and not just push to github. Monday-Tuesday will be Computer Systems: A Programmer's Perspective by Bryant and O'Hallaron, Wednesday-Thursday will be Algorithm Design by Kleinberg and Tardos and Friday will be Peter Norvig's Paradigms of Artificial Intelligence Programming in Common Lisp. Let's see how this goes...

There's also been a good discussion on what math programmers need to know on reddit recently. The outstanding comments (IMO) are here, here, here and here. Similarly, there was a good thread a few weeks back titled "What do you wish you knew when you started programming?". A few of my favorite comments are here, here, here and here. More importantly, there was a very enjoyable article and followup about Office Politics as interpreted by Hugh MacLeod and The Office. As some folks in the hackernews thread mention, the model isn't universally applicable. Yep, that's right. It's a model. Go figure.

Well, it's been a very hard week. Mostly because I just hate my Algorithms class. I don't hate algorithms, just the way it's being presented and taught. I'm pretty sure I can overcome the obstacles involved, I'm just much less motivated to do so than I would like. The last two semesters I really had a fire under my butt about school for some reason. Maybe not, but when I had to rise to the challenge it was relatively easy to do so, and I was kind of proud of that since it was a divergence from my past. This semester the fight just isn't in me and I have next to no pride in what I'm doing in school. I'm sort of coasting and I'm finding it hard to break out of that. Of course, I'm learning the material and I'm doing extracurricular things to improve my knowledge, joy and understanding because **I care about programming**. Whether that's stupid or not is another question, but also kind of irrelevant: I didn't choose to be fascinated by this stuff. I just can't help it. So I'm not doing what I love, I'm doing what I can't help but do. It's gonna be a long road.

I've still been getting a few things done. I've written a few quick, hackish, sbcl-dependent scripts. Maybe I'll post some of the code for them soon. I started working on Redlinux again. The last release I made was back in May and a lot has changed since then, more about my approach than about Redlinux. I'm hoping to make a new release by the end of November. So far the big change is my build process. As in, now there actually is one. It should be trivial to rebuild from scratch in the future. See what a non-distribution it is? The upcoming release should have a nice proper script for creating a new user and doing a little initial setup. Above and beyond that, I'm hoping to work on the documentation some. If anything, the real problem is it may not fit on a single CD with all the programming software I've bundled in.

A while back I wrote a post on getting an undergraduate CS education for under $1,000. It was mostly focused on which books and resources were ideal for self-study. I reworked said list and posted it on Amazon over the weekend. A lot of my decisions about what's worthwhile for self-study have changed (since I've actually read more). My motivation stems largely from the fact that I prefer self-study to school. Finally, there are two slightly older articles of mine that linked to a bunch of really interesting articles that are still among my favorite blog posts I've stumbled upon while trolling the internet for programming stuff. I'm hoping to do a real writeup on a number of these articles and add in a few of my own ideas in the near future. And since I'm calling it "the near future" you know in advance I'll never get around to it. Well, hopefully not. :)

That's all for now. Back to homework guys.

Resources:

Read: Chapter 2 through Section 2.1

Watch: Lecture 2-b

Checked against: Eli Bendersky's Blog, SICP Wiki, Ken Dyck's Solutions, Theloserblog, Wfasim's Solutions and the Scheme Wiki and Personal Wiki solutions listed here.

SICP Notes and Exercises:

Notes

Quotes

Exercises

2.1:

(define (make-rat n d)
  (let ((g (gcd n d)))
    (if (positive? (/ n d))
        (cons (abs (/ n g)) (abs (/ d g)))
        (cons (- (abs (/ n g))) (abs (/ d g))))))

;Value: make-rat

2.2:

(define (make-point x y)
  (cons x y))

;Value: make-point

(define (x-point point)
  (car point))

;Value: x-point

(define (y-point point)
  (cdr point))

;Value: y-point

(define (start-segment segment)
  (car segment))

;Value: start-segment

(define (end-segment segment)
  (cdr segment))

;Value: end-segment

(define (make-segment p1 p2)
  (cons p1 p2))

;Value: make-segment

(define (print-point p)
  (newline)
  (display "(")
  (display (x-point p))
  (display ",")
  (display (y-point p))
  (display ")"))

;Value: print-point

(define (midpoint-segment s)
  (make-point (average (x-point (start-segment s))
                       (x-point (end-segment s)))
              (average (y-point (start-segment s))
                       (y-point (end-segment s)))))

;Value: midpoint-segment

This is really interesting to me. I feel like midpoint-segment should be expressible in 2 or 4 lines of code but I can't think of a way to elegantly do that with lets or function composition. The problem is each time you're composing different sets of functions. Without defining multiple lets or compositions it doesn't compress and if you do you lose your LOC gains anyway. I've decided this definition is sufficiently succinct.
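One thing worth noting: midpoint-segment leans on an average helper that never gets defined in these notes. The obvious version (my assumption, not code from the post) is:

```scheme
;; Arithmetic mean of two numbers; midpoint-segment assumes this exists.
(define (average a b)
  (/ (+ a b) 2))

;; e.g. for the segment from (0,0) to (4,6) the midpoint is (2,3):
(display (average 0 4)) (newline)   ; 2
(display (average 0 6)) (newline)   ; 3
```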

2.3:

;;representation 1 - procedure based, working by magic:

(define (rect-area r)
  (* (length r) (width r)))

;Value: rect-area

(define (rect-perimeter r)
  (* 2 (+ (length r) (width r))))

;Value: rect-perimeter

(define (make-rect top-left bottom-right)
  (cons top-left bottom-right))

;Value: make-rect

(define (length r)
  (- (y-point (car r)) (y-point (cdr r))))

;Value: length

(define (width r)
  (- (x-point (cdr r)) (x-point (car r))))

;Value: width

;;representation 2 - not procedure based, working by reality:

(define (make-rect top bottom)
  (cons top bottom))

;Value: make-rect

(define (rect-top r)
  (car r))

;Value: rect-top

(define (rect-bottom r)
  (cdr r))

;Value: rect-bottom

(define (rect-left r)
  (make-segment (start-segment (rect-top r))
                (start-segment (rect-bottom r))))

;Value: rect-left

(define (rect-right r)
  (make-segment (end-segment (rect-top r))
                (end-segment (rect-bottom r))))

;Value: rect-right

(define (length r)
  (- (y-point (start-segment (rect-top r)))
     (y-point (start-segment (rect-bottom r)))))

;Value: length

(define (width r)
  (- (x-point (end-segment (rect-top r)))
     (x-point (start-segment (rect-top r)))))

;Value: width

What working by magic really seems to do is make the cruft that's unnecessary to your implementation obvious.

2.4:

(define (cons x y)
  (lambda (m) (m x y)))

;Value: cons

(define (car z)
  (z (lambda (p q) p)))

;Value: car

(define (cdr z)
  (z (lambda (p q) q)))

;Value: cdr

Wow. Just wow.
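Tracing (car (cons 1 2)) through the substitution model shows why the magic works (my own trace; the definitions are repeated so the snippet stands alone):

```scheme
;; Procedural pairs from Exercise 2.4. Note these shadow the built-ins.
(define (cons x y) (lambda (m) (m x y)))
(define (car z) (z (lambda (p q) p)))
(define (cdr z) (z (lambda (p q) q)))

;; (car (cons 1 2))
;; => ((cons 1 2) (lambda (p q) p))            ; car hands the pair a selector
;; => ((lambda (m) (m 1 2)) (lambda (p q) p))
;; => ((lambda (p q) p) 1 2)                   ; the pair applies it to x and y
;; => 1
(display (car (cons 1 2)))   ; prints 1
(newline)
```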

2.5:

(define (cons a b)
  (* (expt 2 a) (expt 3 b)))

;Value: cons

(define (what-exponent x y)
  (define (exp-iter count)
    (if (= (modulo y (expt x count)) 0)
        (exp-iter (+ count 1))
        (- count 1)))
  (exp-iter 1))

;Value: what-exponent

(define (car x)
  (what-exponent 2 x))

;Value: car

(define (cdr x)
  (what-exponent 3 x))

;Value: cdr

This isn't quite as evil as the problem description makes it sound.
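A quick round-trip check (definitions repeated from above so the snippet runs on its own): (cons 3 4) is 2^3 * 3^4 = 648, and repeated division recovers each exponent.

```scheme
(define (cons a b) (* (expt 2 a) (expt 3 b)))   ; shadows the built-in

(define (what-exponent x y)
  (define (exp-iter count)
    (if (= (modulo y (expt x count)) 0)
        (exp-iter (+ count 1))
        (- count 1)))
  (exp-iter 1))

(define (car x) (what-exponent 2 x))
(define (cdr x) (what-exponent 3 x))

(display (cons 3 4)) (newline)   ; 648
(display (car 648)) (newline)    ; 3
(display (cdr 648)) (newline)    ; 4
```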

2.6:

Whew boy. Here goes...

(define zero (lambda (f) (lambda (x) x)))

;Value: zero

(define (add-1 n)
  (lambda (f) (lambda (x) (f ((n f) x)))))

;Value: add-1

(add-1 zero)
(lambda (f) (lambda (x) (f ((zero f) x))))
(lambda (f) (lambda (x) (f x)))
;; this was the difficult step for me. why? i couldn't understand how
;; ((zero f) x) got worked down to x. i knew that the identity function was
;; what eventually got returned but i figured it received f as its argument.
;; the trick was recalling that f gets passed into a function which does
;; NOTHING WITH F and returns the identity function anyway. (zero f) reduces
;; to the identity function because the first lambda in zero just throws its
;; argument away. hence you have (identity x), which is just x, and that
;; leaves this result as one. somewhat sadly, formatting my code so that the
;; substitution wasn't all on one line also could've made the difference and
;; saved me a week or so.

(define one (lambda (f) (lambda (x) (f x))))

;Value: one

(add-1 one)
(lambda (f) (lambda (x) (f ((one f) x)))) ;; ((one f) x) reduces to (f x), giving...
(lambda (f) (lambda (x) (f (f x))))

(define two (lambda (f) (lambda (x) (f (f x)))))

;Value: two

;;clearly we're adding an application of f each time we add one. for example...

((two square) 5)

;Value: 625

;; which is the square of the square of 5 (* 25 25)

;;now i'm supposed to define an addition function which should perform like so:

(add one two)
(add (lambda (f) (lambda (x) (f x)))
     (lambda (f) (lambda (x) (f (f x)))))
...
(lambda (f) (lambda (x) (f (f (f x)))))

;;and then allow us to do this

(((add one two) square) 5)
(square (square (square 5)))

;Value: 390625

;;maybe the hard part of this problem is holding multiple levels of evaluation in your head at the same time. anyway...

;;it seems like what we really want to do is feed the f chains into each other somehow...

(define (add a b)
  (lambda (f) (lambda (x) ((a f) (b f)) x)))

;Value: add

;;this is tempting but wrong. i realized you had to pass the f in to make sure
;;you got the correct repeated calls but missed that if you passed (b f) into
;;the resulting function you were passing a procedure instead of a value.

(define (add a b)
  (lambda (f) (lambda (x) ((a f) ((b f) x)))))

;Value: add

(add one two)
(lambda (f) (lambda (x) ((one f) ((two f) x))))
(lambda (f) (lambda (x) ((one f) (f (f x)))))  ;; (two f) applied to x
(lambda (f) (lambda (x) (f (f (f x)))))        ;; (one f) wraps one more f

;;you want to hear what's really gross? i found that this worked for odd
;;numbers but not even numbers and tried unsuccessfully to figure out what was
;;wrong for an hour before re-evaluating my definitions for one and two and
;;seeing it "just work".

(((add one two) square) 5)

(define (test churchnum)
  (define (inc x)
    (+ x 1))
  ((churchnum inc) 0))

;Value: test

(test (add one two))

;Value: 3

;;it's sort of insulting that after writing all that code you realize you just
;;implemented a fancy lambda version of repeated for functions/church numerals.

;;proving above point:

(define (compose f g)
  (lambda (x) (f (g x))))

;Value: compose

(define (repeated f n)
  (if (= n 1)
      f
      (compose f (repeated f (- n 1)))))

;Value: repeated

(define (add a b)
  (lambda (f) (repeated f (+ a b))))

;Value: add

;;of course, this pretends that church numerals are integers but...you get the idea.

This may have been the hardest problem I've encountered thus far and I definitely had to peek at how other people started solving it to get my own ideas flowing in the right direction.

2.7:

(define (lower-bound i)
  (car i))

;Value: lower-bound

(define (upper-bound i)
  (cdr i))

;Value: upper-bound

2.8:

(define (sub-interval x y)
  (let ((p1 (- (lower-bound x) (lower-bound y)))
        (p2 (- (lower-bound x) (upper-bound y)))
        (p3 (- (upper-bound x) (lower-bound y)))
        (p4 (- (upper-bound x) (upper-bound y))))
    (make-interval (min p1 p2 p3 p4)
                   (max p1 p2 p3 p4))))

;Value: sub-interval

Similar to the addition function: the maximum possible difference is that between the farthest upper and lower bounds, and the minimum difference is that between the closest upper and lower bounds. Taking the min and max of the four candidate differences seems to be the best way to capture that.
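Concretely, with the cons-based make-interval from the book (assumed here) and the selectors from Exercise 2.7: subtracting [1,3] from [2,4] gives a best case of 2-3 = -1 and a worst case of 4-1 = 3.

```scheme
(define (make-interval a b) (cons a b))
(define (lower-bound i) (car i))
(define (upper-bound i) (cdr i))

(define (sub-interval x y)
  (let ((p1 (- (lower-bound x) (lower-bound y)))
        (p2 (- (lower-bound x) (upper-bound y)))
        (p3 (- (upper-bound x) (lower-bound y)))
        (p4 (- (upper-bound x) (upper-bound y))))
    (make-interval (min p1 p2 p3 p4)
                   (max p1 p2 p3 p4))))

;; [2,4] - [1,3] => the interval [-1, 3]
(display (sub-interval (make-interval 2 4) (make-interval 1 3)))
(newline)   ; prints (-1 . 3)
```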

2.9:

(define (width-interval x)
  (/ (- (upper-bound x) (lower-bound x))
     2))

;Value: width-interval

(width-interval (mul-interval inter1 inter2))

;Value: 24

(width-interval (div-interval inter1 inter2))

;Value: .5333333333333334

(width-interval (add-interval inter1 inter2))

;Value: 4

(width-interval (sub-interval inter1 inter2))

;Value: 4

Observe that the width of the interval which is the difference of inter1 and inter2 is identical to the width of the interval which is their sum.

This indicates that the width of summed or subtracted intervals is a function of the widths of their source intervals alone. You can clearly see that the widths of the intervals produced by multiplying or dividing inter1 and inter2 do not share this trait. Thus, the width of multiplied or divided intervals is not a function of the source widths alone.
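Since inter1 and inter2 aren't shown in the post, here's the additive claim checked with made-up intervals (add-interval and the cons-based constructor are the book's definitions):

```scheme
(define (make-interval a b) (cons a b))
(define (lower-bound i) (car i))
(define (upper-bound i) (cdr i))

(define (width-interval x)
  (/ (- (upper-bound x) (lower-bound x)) 2))

;; add-interval as defined in the book's text.
(define (add-interval x y)
  (make-interval (+ (lower-bound x) (lower-bound y))
                 (+ (upper-bound x) (upper-bound y))))

(define a (make-interval 1 3))   ; width 1
(define b (make-interval 2 8))   ; width 3

;; width of the sum = sum of the widths, regardless of the centers:
(display (width-interval (add-interval a b)))   ; prints 4
(newline)
```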

2.10:

(define (div-interval x y)
  (if (and (<= (lower-bound y) 0)
           (>= (upper-bound y) 0))
      (error "Cannot divide by an interval that spans zero." y)
      (mul-interval x
                    (make-interval (/ 1.0 (upper-bound y))
                                   (/ 1.0 (lower-bound y))))))

;Value: div-interval

This fixes the problem by signaling an error whenever the divisor interval spans zero. Whether intervals that "span" zero should be allowed at all is up for debate.

2.11:

(define (mul-interval x y) ;;even with lets this is ugly. i object!
  (let ((a (lower-bound x))
        (b (upper-bound x))
        (c (lower-bound y))
        (d (upper-bound y)))
    (cond ((and (> a 0) (> b 0) (> c 0) (> d 0))   ; x pos, y pos
           (make-interval (* a c) (* b d)))
          ((and (> a 0) (> b 0) (< c 0) (> d 0))   ; x pos, y spans zero
           (make-interval (* b c) (* b d)))
          ((and (> a 0) (> b 0) (< c 0) (< d 0))   ; x pos, y neg
           (make-interval (* b c) (* a d)))
          ((and (< a 0) (> b 0) (> c 0) (> d 0))   ; x spans zero, y pos
           (make-interval (* a d) (* b d)))
          ((and (< a 0) (> b 0) (< c 0) (< d 0))   ; x spans zero, y neg
           (make-interval (* b c) (* a c)))
          ((and (< a 0) (< b 0) (> c 0) (> d 0))   ; x neg, y pos
           (make-interval (* a d) (* b c)))
          ((and (< a 0) (< b 0) (< c 0) (> d 0))   ; x neg, y spans zero
           (make-interval (* a d) (* a c)))
          ((and (< a 0) (< b 0) (< c 0) (< d 0))   ; x neg, y neg
           (make-interval (* b d) (* a c)))
          (else (make-interval (min (* a d) (* b c))  ; both span zero: the one
                               (max (* a c) (* b d)))))))  ; case needing 4 products

;Value: mul-interval

Eww. Gross.
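When a case analysis gets this hairy it's worth sanity-checking it against the book's original mul-interval, which just takes the min and max of all four products (that brute-force version is what's shown below, not the cond above):

```scheme
(define (make-interval a b) (cons a b))
(define (lower-bound i) (car i))
(define (upper-bound i) (cdr i))

;; The book's four-product mul-interval: never clever, always correct.
(define (mul-interval x y)
  (let ((p1 (* (lower-bound x) (lower-bound y)))
        (p2 (* (lower-bound x) (upper-bound y)))
        (p3 (* (upper-bound x) (lower-bound y)))
        (p4 (* (upper-bound x) (upper-bound y))))
    (make-interval (min p1 p2 p3 p4)
                   (max p1 p2 p3 p4))))

;; Two of the trickier sign patterns:
(display (mul-interval (make-interval -2 3) (make-interval -5 -1)))
(newline)   ; (-15 . 10)  x spans zero, y negative
(display (mul-interval (make-interval -4 -2) (make-interval -3 6)))
(newline)   ; (-24 . 12)  x negative, y spans zero
```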

2.12:

(define (make-center-percent center tolerance)
  (make-center-width center (* (/ tolerance 100) center)))

;Value: make-center-percent

(define (percent i)
  (* (/ (width i) (center i)) 100))

;Value: percent

(percent (make-center-percent 8 5))

;Value: 5

It's not much and I plan on updating and expanding on this but it's done for now.

"You got an F. What the hell's the matter with you? Ya big failure.

Final Grade: 20.786516853932586%

To be fair, you would've had to do 14.0 problems a week to finish the book in 26 weeks.

They are pretty hard problems. Just keep at it man. You may want to revise your strategy though."

We're 30 weeks into 2008 and I've only done 74 of the 356 problems in that legendary text, the Structure and Interpretation of Computer Programs, which was the central object of my study this semester. That's about two and a half problems a week. Not my brightest shining moment. This whole experience definitely gives me new appreciation for the people that tried to structure and/or educate me in the past. Clearly, I need one of two things:

1) A good kick in the ass to really get going.

2) A new gameplan.

Personally, I'm going to try a mix of the two. Where 1) is concerned I recently wrote a self-study program (the biggest program I've ever written, actually) to help me keep abreast of my own progress and help me chart my course a bit. Where 2) is concerned I'm going to have to start making concessions to maintain momentum and I'm not entirely comfortable with that.

What concessions do I mean? Well, some of the SICP problems are hard. Really hard. Unreasonably hard (see Exercise 4.79 at the bottom, for which a good answer is "probably worth a Ph.D."). The book has its reputation for a reason. It's a reputation of difficulty but also of enlightenment. A lot of very smart people say it's the best way to learn Computer Science and probably the best book on the subject yet written. I'm willing to take their word for it. Anyway, there are problems that I get hung up on and I haven't been letting myself move on to the next section of the book without solving all the problems in the current section. That just isn't scaling well. I'm already hung up on the last 4 problems in Section 2.1. God knows what would happen come 4.4. I'll surely never finish the thing if I don't let myself move forward.

With that in mind, a week or so ago I did let myself move forward a bit and work on Section 2.2. I've already got about a third of it done. Maybe even half. I'm worried about this because I want to stay honest. I don't want to shirk the hard stuff. I won't move past problems unless I'm *really* stumped and I will circle back at various points to try to work through them. Aside from SICP, I've worked on HTDP (How To Design Programs) and CA (Concrete Abstractions) as well this semester. I got an almost reasonable portion of HTDP done but next to nothing on CA. I'd really like to try plowing through as much of those three books and The C Programming Language (rocking the 1st ed.) as possible before Xmas.

Semester 3 (starting in January) I'm hoping to work on Algorithms (DPV, not CLRS), Essentials of Programming Languages (1st edition, baby!) and one of my Operating Systems texts. Of course, Discrete Math (5th ed) would be more prudent and judging by this semester this could all be revised by Xmas. Well, back to work. Happy Hacking!

That said, I'm trying to push forward. I've gotten in touch with some professors at Northeastern University where I'd very much like to study Computer Science in Fall of 2010, ideally. I also wrote code today for the first time in three weeks. It's hard to find the time, man!

More immediately I'm looking for a new job and have an interview tomorrow morning. For a variety of reasons I'm just not pleased with my current job and I think I can grow more and be happier elsewhere. Cross your fingers for me.

Finally, Jonathan Zittrain was on The Colbert Report tonight talking about his book, *The Future of the Internet and How to Stop It*. I heard about it in advance and was very excited to see him speak. Unfortunately, I feel that he really botched the interview and I got into a long discussion with Ben about it. I'm pretty disappointed because there are important political issues around technology but they're rarely communicated to the public coherently and concisely, and I'm no good at it myself.

I'll probably try to think of a good way to present it and give a fuller update in the next few days though. If I don't get bogged down with the promised education post or the emerging philosophy post or the big easy posts that is. Or hell, SICP 2.1. Yeah, right.

This list skews towards theory rather than practicality (or Computer Science as opposed to Programming, if you prefer) but tries to maintain a suitable balance. Consequently, outside study of particular languages (e.g. ruby, php, html, css, javascript, erlang or haskell, python or lua) is assumed. It is advised to work on practical projects (setting up source code repositories and servers, open source software contribution, web development, etc) as applicable since these subjects are covered at best loosely during the course of study.

This list of 24 texts costs about 900 dollars when bought used off of Amazon.com. Click on a book's title to get forwarded to its Amazon.com page. The ordering of the list loosely conforms to increasing difficulty but it is not a recommendation as to structure. I will probably delve into those details in the aforementioned future post.

How To Design Programs by Matthias Felleisen, Matthew Flatt, Robert Findler and Shriram Krishnamurthi

Structure and Interpretation of Computer Programs by Harold Abelson and Gerald Sussman

Paradigms of Artificial Intelligence Programming by Peter Norvig

Algorithms by Sanjoy Dasgupta, Christos Papadimitriou and Umesh Vazirani

Introduction to Algorithms by Thomas Cormen, Charles Leiserson, Ronald Rivest and Clifford Stein

Computer Networks by Andrew Tanenbaum

Programming Language Pragmatics by Michael Scott

Computer Organization and Design by David Patterson and John Hennessy

Introduction to Computing Systems by Yale Patt and Sanjay Patel

Operating System Concepts by Abraham Silberschatz, Greg Gagne and Peter Galvin

Operating Systems: Design and Implementation by Andrew Tanenbaum and Albert Woodhull

The Humane Interface by Jef Raskin

Code Complete by Steve McConnell

Real-Time Rendering by Tomas Möller

Artificial Intelligence by Stuart Russell and Peter Norvig

Introduction to the Theory of Computation by Michael Sipser

Modern Cryptography: Theory and Practice by Wenbo Mao

Hacker's Delight by Henry Warren

Purely Functional Data Structures by Chris Okasaki

Compilers: Principles, Techniques, and Tools by Alfred Aho, Monica Lam, Ravi Sethi and Jeffrey Ullman

Lisp In Small Pieces by Christian Queinnec

Programming Languages: Application and Interpretation by Shriram Krishnamurthi

Types and Programming Languages by Benjamin Pierce

Concepts, Techniques, and Models of Computer Programming by Peter Van Roy and Seif Haridi

There are a few caveats to this list that should be noted.

1) As mentioned it does not presume to teach language-specific or platform-specific knowledge. It is slanted towards theory and books on how to write C or HTML, do Linux Kernel Development or learn the Win32 API, or work with OpenGL or TCP/IP should be purchased separately.

2) Since it does not deal heavily with specific languages, it does not make any prescriptions about what languages one should know. Generally, I believe it's good to know a language which epitomizes each of the major programming paradigms whether these paradigms are valid or not (as Krishnamurthi posits in PLAI). This means at least one OO language, one stack-based language, one functional language, and one logic/constraint language. I'd recommend Smalltalk for Object Orientation, Forth for a stack language, Erlang or Haskell for a functional language, and Prolog for a constraint language. Beyond that it would probably be reasonable to learn C as a good representation of the von Neumann architecture and "for culture" and hypothetically python, perl, or ruby to learn a web/glue/scripting language.

3) The list could certainly be shorter. It leans towards functional programming, lisp, and programming languages due to my own personal interests and some material could be omitted from those areas correspondingly. Notably, Distributed Systems and Databases are conspicuous absences from my list. This is because I have yet to find any sort of consensus about quality books on either subject. I'm not interested in Databases but I'm certainly interested in Distributed Systems. Computer Networks is about as close as this list gets.

The beauty of this list, if there is one, is that the basis is strong enough that one could jump off in any direction after its completion. If you were interested in video games get more books on rendering and physics, if you're interested in cryptography and security get more books on that, etc, but the foundation should be strong enough for any direction you'd like to go in. This list may get updated in the future. It's far from perfect but it does seem to me like a pretty reasonable place to start.

*Towards a new HR* and *Secondhand Standards*. It will explain a bit more about the feasibility and philosophy behind this post.

All this post is meant to do is propose a list of texts that are rigorous and provide a well-rounded foundation in Undergraduate Mathematics. Be forewarned that it leans more towards Algebraists than Topologists. Lest anyone think I'm a total quack I plan to eventually work through all of these myself and have been researching which math texts are good on and off for a few months now. Autodidacticism (to my thinking) doesn't work without good books. I also have a list for an Undergraduate Computer Science education but I'm not quite done with it.

There are 20 texts and their total cost is about 700 dollars when they're bought used off Amazon. This also covers most all of what you would learn in High School Mathematics. Click on a book's title to get forwarded to its Amazon.com page. The ordering of the list loosely conforms to increasing difficulty but it is not a recommendation as to structure. I'll probably go into such details at a later date.

What is Mathematics? by Richard Courant and Herbert Robbins

How to Solve It by G. Polya

How to Prove It by Daniel Velleman

Algebra by I.M. Gelfand and Alexander Shen

Functions and Graphs by I.M. Gelfand, E.G. Glagoleva and E.E. Shnol

The Method of Coordinates by I.M. Gelfand, E.G. Glagoleva and A.A. Kirilov

Trigonometry by I.M. Gelfand and Mark Saul

Calculus by Michael Spivak

Discrete Mathematics and Its Applications by Kenneth Rosen

Introduction to Probability by Dimitri Bertsekas and John Tsitsiklis

Introduction to Linear Algebra by Serge Lang

Linear Algebra by Serge Lang

A Mathematical Introduction to Logic by Herbert Enderton

A Classical Introduction to Modern Number Theory by Kenneth Ireland and Michael Rosen

Topology by James Munkres

Topics In Algebra by I.N. Herstein

Algebra by Michael Artin

Ordinary Differential Equations by Morris Tenenbaum and Henry Pollard

Principles of Mathematical Analysis by Walter Rudin

Concrete Mathematics by Ronald Graham, Donald Knuth and Oren Patashnik

Resources:

Read: Section 04

Watch: Nothing. To my knowledge there are no online lectures based around HTDP. Correct me if I’m wrong.

Checked against: Nothing.

Exercises

4.1.1:

1. (and true true) -> true

2. (or true false) -> true

3. (not false) -> true

4.1.2:

1. (a) true, (b) false, (c) true

2. (a) false, (b) false, (c) true

3. (a) false, (b) false, (c) false

4.2.1:

;;1.

(define (is-between-3-and-10? n)
  (and (> n 3) (<= n 10)))

;;2.

(define (is-between-3-10? n)
  (and (> n 3) (< n 10)))

;;3.

(define (is-between-3-9? n)
  (and (>= n 3) (< n 9)))

;;4.

(define (is-1-3-or-9-11? n)
  (or (is-1-3? n) (is-9-11? n)))

(define (is-1-3? n)
  (and (> n 1) (< n 3)))

(define (is-9-11? n)
  (and (> n 9) (< n 11)))

;;alternate implementation in case the first smacks of premature optimization:

;;(both suffer from an ominous arbitrary function naming schema!)

(define (is-1-3-or-9-11? n)
  (or (and (> n 1) (< n 3))
      (and (> n 9) (< n 11))))

;;5.

(define (is-outside-1-3? n)
  (not (and (>= n 1) (<= n 3))))

4.2.2:

;; 1. Number-line sketch: the interval (-3,0), open at both ends.

;; Contract: in-interval-1? : number -> boolean

;; Purpose: To test if a number is between -3 and 0.

(in-interval-1? -2)

(and (< -3 -2) (< -2 0))

(and true true)

true

;;2. Number-line sketch: everything below 1 or above 2.

;; Contract: in-interval-2? : number -> boolean

;; Purpose: To test if a number is less than 1 or greater than 2.

(in-interval-2? -2)

(or (< -2 1) (> -2 2))

(or true false)

true

;;3. Number-line sketch: everything outside the closed interval [1,5].

;; Contract: in-interval-3? : number -> boolean

;; Purpose: To test if a number is less than 1 or greater than 5.

(in-interval-3? -2)

(not (and (<= 1 -2) (<= -2 5)))

(not (and false true))

(not false)

true

4.2.3:

;;1.

(define (is-solution-1? x)
  (= (+ (* 4 x) 2) 62))

;;2.

(define (is-solution-2? x)
  (= (* (sqr x) 2) 102))

;;3.

(define (is-solution-3? x)
  (= (+ 2 (* 4 (sqr x)) (* 6 x)) 462))

10 is a solution to 3. 12 and 14 are not solutions.

4.2.4:

;; I don't know what specific test cases the authors are referring to for problems 2.2.1 - 2.2.4 so I'll just make up a few.

(= (Fahrenheit->Celsius 32) 0)

(= (dollar->euro 20) 12.84) ;; as of 05/18/08

(= (triangle 5 2) 5)

(= (convert3 9 2 7) 729)

4.3.1:

The left cond is legal. The right cond is illegal because its second clause has no answer expression to evaluate. The last cond is illegal because it lacks a second clause.

4.3.2:

(a) .040

(b) .045

(c) .060

4.3.3:

(a) 40

(b) 121

(c) 595

4.4.1:

(define (interest x)
  (cond ((<= x 1000) (* .04 x))
        ((<= x 5000) (* .045 x))
        (else (* .05 x))))

4.4.2:

(define (tax x)
  (cond ((<= x 240) 0)
        ((<= x 480) (* .15 x))
        (else (* .28 x))))

(define (netpay hrs)
  (- (grosspay hrs) (tax (grosspay hrs))))

(define (grosspay hrs)
  (* 12 hrs))

4.4.3:

(define (pay-back charges)
  (cond ((<= charges 500) (* .0025 charges))
        ((<= charges 1500) (* .005 charges))
        ((<= charges 2500) (* .0075 charges))
        (else (* .01 charges))))

4.4.4:

(define (how-many a b c)
  (cond ((> (sqr b) (* 4 a c)) 2)
        ((= (sqr b) (* 4 a c)) 1)
        ((< (sqr b) (* 4 a c)) 0))) ;; or else 0

;; (how-many 1 0 1) = 0

If we didn't assume the equation was proper we'd need to check (with a cond) to see if a equaled 0 and return an error if it did.

That does it for Section 04. Hopefully, I'll get my act together and wrap up SICP Section 2.1 in the next week or so. :-) You've gotta work on some hard stuff too, right? Besides it's more interesting anyway.

Resources:

Read: Section 03

Watch: Nothing. To my knowledge there are no online lectures based around HTDP. Correct me if I’m wrong.

Checked against: Nothing. Again, to my knowledge there are no available sources to check your answers beyond the locked solutions on the official site and message boards. That’s one reason I’m excited about doing HTDP this way along with SICP. The plethora of SICP resources stand in contrast to an absolute dearth of resources for HTDP.

Exercises

3.1.1:

(define (attendees ticket-price)

(- 870 (* 150 ticket-price)))

This function will give incorrect answers for negative values of ticket-price.

3.1.2:

(define (revenue ticket-price)
  (* (attendees ticket-price) ticket-price))

(define (costs ticket-price)
  (+ 180 (* .04 (attendees ticket-price))))

(define (profit ticket-price)
  (- (revenue ticket-price) (costs ticket-price)))

Of the prices tested, 3 yields the highest profit: (profit 3) returns 1063.2.

3.1.3:

Both program definitions return the same results for inputs of 3, 4 and 5.

3.1.4:

(define (profit ticket-price)
  (- (revenue ticket-price)
     (cost ticket-price)))

(define (revenue ticket-price)
  (* (attendees ticket-price) ticket-price))

(define (cost ticket-price)
  (* 1.5 (attendees ticket-price)))

(define (attendees ticket-price)
  (+ 120
     (* (/ 15 .10) (- 5.00 ticket-price))))

(define (profit price)
  (- (* (+ 120
           (* (/ 15 .10)
              (- 5.00 price)))
        price)
     (* 1.5
        (+ 120
           (* (/ 15 .10)
              (- 5.00 price))))))

Both programs return the same results but profit margins have changed based on the new costs. (max (profit 3) (profit 4) (profit 5)) is now (profit 4).

3.2.1:

(define fixed-costs 180)

(define price-per-attendee .04)

(define start-attendees 120)

(define attendees-per-dime 15)

(define dime .10)

(define start-price 5.00)

3.3.1:

(define inches-in-cm 2.54)

(define inches-in-ft 12)

(define feet-in-yard 3)

(define yards-in-rod 5.5)

(define rods-in-furlong 40)

(define furlongs-in-mile 8)

(define (inches->cm inches)

(* inches-in-cm inches))

(define (feet->inches feet)

(* inches-in-ft feet))

(define (yards->feet yards)

(* feet-in-yard yards))

(define (rods->yards rods)

(* yards-in-rod rods))

(define (furlongs->rods furlongs)

(* rods-in-furlong furlongs))

(define (miles->furlongs miles)

(* furlongs-in-mile miles))

(define (feet->cm feet)

(inches->cm (feet->inches feet)))

(define (yards->cm yards)

(feet->cm (yards->feet yards)))

(define (rods->inches rods)

(feet->inches (yards->feet (rods->yards rods))))

(define (miles->feet miles)
  (yards->feet (rods->yards (furlongs->rods (miles->furlongs miles)))))

3.3.2:

(define pi 3.14159)

(define (volume-cylinder radius height)

(* pi (sqr radius) height))

3.3.3:

(define pi 3.14159)

(define (area-cylinder radius height)

(* 2 pi radius (+ radius height)))

3.3.4:

(define (area-pipe inner-radius length thickness)
  (+ (* 2 pi length (+ inner-radius thickness))
     (* 2 (- (* pi (sqr (+ inner-radius thickness)))
             (* pi (sqr inner-radius))))))

(define (area-pipe inner-radius length thickness)
  (+ (area-pipe-side inner-radius length thickness)
     (* 2 (area-pipe-ring inner-radius thickness))))

(define (area-pipe-side inner-radius length thickness)
  (* 2 pi length (+ inner-radius thickness)))

(define (area-pipe-ring inner-radius thickness)
  (- (* pi (sqr (+ inner-radius thickness)))
     (* pi (sqr inner-radius))))

This problem reminds me of several in SICP in that the real difficulty with it is a misunderstanding of the question. Once you understand what is desired it’s pretty easy to bang the code out. This seems analogous to the idea that once you have a well-understood, well-specified set of requirements producing the code is trivial and that the requirements are the difficult part. Of course, this leads to blather about how good enough specifications (and UML Diagrams) are equivalent to code (which is bullshit). People forget that requirements change and that unambiguous well-specified requirements are often impossible.

3.3.5:

(define (height time acceleration)
  (* .5 time (speed time acceleration)))

(define (speed time acceleration)
  (* time acceleration))

3.3.6:

(define (Celsius->Fahrenheit cels)

(+ 32 (/ (* cels 9) 5)))


(Celsius->Fahrenheit (Fahrenheit->Celsius 32))

(Celsius->Fahrenheit (* (- 32 32) (/ 5 9)))

(Celsius->Fahrenheit (* 0 (/ 5 9)))

(Celsius->Fahrenheit 0)

(+ 32 (/ (* 0 9) 5))

(+ 32 (/ 0 5))

(+ 32 0)

32

Plainly, these functions are inverses of each other though that should be self evident. Since they are inverses their composition returns the original input. The stepper returns the same results.
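The round trip can also be checked mechanically. A small sketch (the converters are redefined with exact rationals so the snippet stands alone; round-trips? is my own helper name):

```scheme
;; Exact rationals avoid floating-point noise in the round trip.
(define (Fahrenheit->Celsius f) (* (- f 32) 5/9))
(define (Celsius->Fahrenheit c) (+ 32 (* c 9/5)))

;; Composing the converters in either order should return the input.
(define (round-trips? f)
  (and (= f (Celsius->Fahrenheit (Fahrenheit->Celsius f)))
       (= f (Fahrenheit->Celsius (Celsius->Fahrenheit f)))))

(round-trips? 32)
(round-trips? 212)
(round-trips? -40)
```

All three calls return true; any input at which they disagreed would disprove the inverse claim.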

Well, that’s it for Section 03. It seems that the first 8 Sections at least deal with language primitives and fairly basic material. It certainly is easier to progress through HTDP relative to SICP but I have had the sense that I was learning more in SICP. We’ll see if this changes at all once I progress beyond the early sections though I haven’t decided whether I’ll keep going through HTDP or forge ahead on SICP.

So, I've finally gotten around to cleaning up SICP Section 1.3. It's not quite done but it's damn close. For now, I want to start posting some of the HTDP code I've been writing to get back in the hacking habit over the past few days. I also have some of Concrete Abstractions done and in my source code repository but it's nothing substantial. Without further ado, here's HTDP Section 02 (of 43!). Sections 03 and 04 will go up tomorrow. Note: I skipped HTDP Section 01 because there are no exercises or problems whatsoever.

Resources:

Read: Sections 01 and 02

Watch: Nothing. To my knowledge there are no online lectures based around HTDP. Correct me if I'm wrong.

Checked against: Nothing. Again, to my knowledge there are no available sources to check your answers beyond the locked solutions on the official site and message boards. That's one reason I'm excited about doing HTDP this way along with SICP. The plethora of SICP resources stand in contrast to an absolute dearth of resources for HTDP.

Exercises

2.1.1:

DrScheme does have operations for squaring (sqr x), computing sines (sin x), and finding maximums (max x y). If you are not running the HTDP Beginning Student Language, though, these functions may not all be available.

2.1.2:

(sqrt 4)

2

(sqrt 2)

#i1.4142135623730951

(sqrt -1)

0+1i

;;(tan x) determines the tangent of a given angle.

2.2.1:

(define (Fahrenheit->Celsius fahr)

(* (- fahr 32) (/ 5 9)))

The teachpack worked as intended. Just go to Language -> Add Teachpack. Feel free to test the different convert-*s on your own.

2.2.2:

(define (dollar->euro dollars)

(* .642 dollars)) ;; as of 05/18/08

2.2.3:

(define (triangle side height)

(/ (* side height) 2))

2.2.4:

(define (convert3 first second third)

(+ (* 100 third) (* 10 second) (* 1 first)))

This was sort of counter-intuitive. The idea that this is related to something in an Algebra book is true but misleadingly so. You could try to do something fancy with max but that's not the idea.

2.2.5:

(define (f n)
  (+ (/ n 3) 2))

;;The evaluations for 2, 5, and 9 are 2 2/3, 3 2/3 and 5, respectively.

(define (f n)
  (+ 10 (sqr n)))

;;The evaluations for 2 and 9 are 14 and 91, respectively.

(define (f n)
  (+ 20 (* (sqr n) .5)))

;;The evaluations for 2 and 9 are 22 and 60.5, respectively.

(define (f n)
  (- 2 (/ 1 n)))

;;The evaluations for 2 and 9 are 1.5 and 17/9 (about 1.89), respectively.

2.3.1:

(define (tax income)
  (* .15 income))

(define (netpay hrs)
  (- (wage hrs) (tax (wage hrs))))

;;supplementary functions:

(define (wage hrs)
  (* 12 hrs))

2.3.2:

(define (sum-coins pennies nickels dimes quarters)

(+ (* .01 pennies) (* .05 nickels) (* .1 dimes) (* .25 quarters)))

2.3.3:

(define (total-function attendees)

(- (* 5 attendees) (+ 20 (* .5 attendees))))

2.4.1:

(10) causes an error because the interpreter expects a function, procedure or expression after the open paren but is in fact given primitive data, i.e. a number.

(10 + 20) is incorrect because the expression uses infix rather than prefix notation, but the error from the interpreter is the same: it has been given a number rather than a procedure as its operator.

(+ +) fails because the operator + is given only one argument (it requires a minimum of two) and that argument is a function, which is the wrong type of input.

2.4.2:

(define (f x)
  (+ x 10))

;;The argument to f needed to be changed.

(define (g x)
  (+ x 10))

;;There was a missing open-paren before the + operator.

(define (h x)
  (+ x 10))

;;The open-paren was in front of x when it should have been in front of h.

2.4.3:

;;> (+ 5 (/ 1 0))

;;/: division by zero

;;> (sin 10 20)

;;sin: expects 1 argument, given 2: 10 20

;;> (somef 10)

;;reference to an identifier before its definition: somef

2.4.4:

(define (somef x)

(sin x x))

;;> (somef 10 20)

;;somef: this procedure expects 1 argument, here it is provided 2 arguments

;;> (somef 10)

;;sin: expects 1 argument, given 2: 10 10

The section ends with a bit on program design. It makes the important note of having human solved examples to test against. Sounds like an argument for unit tests to me.
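That idea takes almost no machinery to act on. A sketch (check-example is my own name, not an HTDP form; later HTDP teaching languages provide check-expect for exactly this, and triangle from exercise 2.2.3 is redefined so the snippet stands alone):

```scheme
;; Compare a function's actual output against a hand-computed example;
;; succeed quietly, signal an error on a mismatch.
(define (check-example name actual expected)
  (if (equal? actual expected)
      #t
      (error "example failed:" name actual expected)))

;; triangle from exercise 2.2.3.
(define (triangle side height)
  (/ (* side height) 2))

(check-example 'triangle-5-2 (triangle 5 2) 5)
(check-example 'triangle-6-4 (triangle 6 4) 12)
```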

Additionally, I'm way behind on programming. I know. I've had a lot going on but my progress the last month or two is still just shameful. I've started HTDP to get the juices flowing again and am already through Section 03. It's definitely more straightforward than SICP if less revelatory. I'm considering going ahead and trying to blow through HTDP completely over the next month or two. Then I could circle back to SICP and hopefully be better prepared. I haven't decided on anything yet other than tidying up the presently unadorned answers to SICP 1.3 and then posting what I've got from HTDP so far. I am more than half-way through SICP 2.1 but I'm wondering if it makes more sense to knock out HTDP considering the difference in pace between the books. I'll let you know as I move forward. I'm hoping to get a post up with some pictures of my new digs in the next week or so. Feel free to drop me a line if you'd like to swing by.

At long last, I'm through Chapter 1 of SICP. I'm a bit disappointed that Closures haven't been covered yet, but they're in the first few pages of Chapter 2 and I've already got a few problems solved. As a matter of fact, I finished Chapter 1 last Wednesday; it just takes time to get these posts up. I have a feeling I need to go back and study those explanations of Lexical Scope in Chapter 1 though. I'll try to write more about the experience thus far in a separate post. For now, here are my results for Section 1.3.

Resources:

Read: Chapter 1 through Section 1.3

Watch: Lectures 2-a

Checked against: Eli Bendersky's Blog, SICP Wiki, Ken Dyck's Solutions, Theloserblog, Wfasim's Solutions, Autodidact and Lispy for Inspiration.

SICP Notes and Exercises:

Notes

Pgs. 63-66: Discussion of Let and Local Variable Binding.

Pg. 76: Discussion of First-Class Status in programming languages.

Quotes

"I'm going to write the...procedure here explicitly without giving it a name. I'm doing it anonymously, I don't necessarily have to give a name to something if I just want to use it once." - Gerald Jay Sussman, approx. 17:00, Lecture 2-a

"Procedures can be named by variables. Procedures are not special...Therefore they can be passed from one to another as arguments." - Gerald Jay Sussman, approx. 20:00, SICP Lecture 1-B from Swiss Archive, Higher-Order Functions Explanation

"Talent is to a great extent knowledge that we haven't yet learned how to formalize." - Gerald Jay Sussman, approx. 55:00, The Legacy of Computer Science

Exercises

1.29:

This exercise definitely wasn't easy. I think most of the difficulty is in figuring out how the math works and how the functions are all feeding into each other.

`(define (cube x) (* x x x))`

;Value: cube

(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))

;Value: sum

(define (simpsons-rule f a b n)
  (define h (/ (- b a) n))
  (define (k-term x)
    (cond ((or (= x 0) (= x n)) 1)
          ((even? x) 2)
          (else 4)))
  (define (yk x)
    (* (k-term x)
       (f (+ a (* x h)))))
  (* (sum yk 0 (lambda (x) (+ x 1)) n)
     (/ h 3)))

;Value: simpsons-rule

(simpsons-rule cube 0 1 100)

;Value: 1/4

(simpsons-rule cube 0 1 1000)

;Value: 1/4

1.30:

Personally I think it's really nice that Abelson and Sussman have been throwing in these sort of review problems. They make me feel like I'm learning something. They give me hope. I solved this one in about 1 minute and a half and thought, "Hey, maybe I'm not a complete idiot. Maybe I'll actually know something about programming one day."

(define (sum term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (+ (term a) result))))
  (iter a 0))

;Value: sum

1.31:

a.

(define (product term a next b)
  (if (> a b)
      1
      (* (term a)
         (product term (next a) next b))))

;Value: product

(define (factorial n)
  (product (lambda (x) x) 1 (lambda (x) (+ x 1)) n))

;Value: factorial

(define (pi-approx approximations)
  (define (pi-term denom) (/ (- (square denom) 1) (square denom)))
  (define (next-term denom) (+ denom 2))
  (product pi-term 3 next-term approximations))

;Value: pi-approx

(pi-approx 40)

;Value: 4722366482869645213696/5938020471163465810125 (.795276)

I just changed the variable names and commented pi-approx. The comment is omitted here in favor of this explanation. I couldn't figure out what on earth I was doing in the original so I actually wrote a brand new pi-approx with a different approach before realizing my original version was both correct and, I suspect, faster. I was computing 2 terms at a time based on their shared denominator.

b.

(define (product term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (* (term a) result))))
  (iter a 1))

;Value: product

1.32:

a.

(define (accumulate combiner null-value term a next b)
  (if (> a b)
      null-value
      (combiner (term a)
                (accumulate combiner null-value term (next a) next b))))

;Value: accumulate

(define (sum term a next b)

(accumulate + 0 term a next b))

;Value: sum

(define (product term a next b)

(accumulate * 1 term a next b))

;Value: product

b.

(define (accumulate combiner null-value term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (combiner (term a) result))))
  (iter a null-value))

;Value: accumulate

1.33:

(define (filtered-accumulate combiner null-value term a next b filter)
  (cond ((> a b) null-value)
        ((filter a) (combiner (term a)
                              (filtered-accumulate combiner null-value term
                                                   (next a) next b filter)))
        (else (filtered-accumulate combiner null-value term
                                   (next a) next b filter))))

;Value: filtered-accumulate

a.

(define (sum-square-primes a b)
  (filtered-accumulate + 0 square a inc b prime?))

;Value: sum-square-primes

b.

(define (product-relative-primes n)
  (define (relatively-prime i)
    (= (gcd i n) 1))
  (filtered-accumulate * 1 identity 1 inc n relatively-prime))

;Value: product-relative-primes

1.34:

The procedure f only actually produces output when its argument is another procedure, specifically a procedure which takes one formal parameter. Given a procedure of a different arity it will produce an error regarding the wrong number of arguments, and given a non-procedural argument it will complain about the object not being applicable.

1.35:

`(define tolerance 0.00001)`

;Value: tolerance

(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))

;Value: fixed-point

(define (golden-ratio)

(fixed-point (lambda (x) (+ 1 (/ 1 x))) 1.0))

;Value: golden-ratio

(golden-ratio)

;Value: 1.6180327868852458

Things are pretty straightforward from 1.29 through 1.36. The main thing to remember on 1.35 and 1.36 is that a transformation is just a function and serves as the f in the fixed-point.

1.36:

(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (display guess)
      (newline)
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))

;Value: fixed-point

(define (solve-for-x)

(fixed-point (lambda (x) (/ (log 1000) (log x))) 2.0))

;Value: solve-for-x

(solve-for-x)

2.

9.965784284662087

3.004472209841214

6.279195757507157

3.759850702401539

5.215843784925895

4.182207192401397

4.8277650983445906

4.387593384662677

4.671250085763899

4.481403616895052

4.6053657460929

4.5230849678718865

4.577114682047341

4.541382480151454

4.564903245230833

4.549372679303342

4.559606491913287

4.552853875788271

4.557305529748263

4.554369064436181

4.556305311532999

4.555028263573554

4.555870396702851

4.555315001192079

4.5556812635433275

4.555439715736846

4.555599009998291

4.555493957531389

4.555563237292884

4.555517548417651

4.555547679306398

4.555527808516254

4.555540912917957

;Value: 4.555532270803653

(define (solve-for-x)

(fixed-point (lambda (x) (average x (/ (log 1000) (log x)))) 2.0))

;Value: solve-for-x

(solve-for-x)

2.

5.9828921423310435

4.922168721308343

4.628224318195455

4.568346513136242

4.5577305909237005

4.555909809045131

4.555599411610624

4.5555465521473675

;Value: 4.555537551999825

Pretty impressive. solve-for-x went from taking 34 steps to 9 steps thanks to average damping. I wonder what it does for golden ratio? And sqrt's for various inputs...
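The golden-ratio question is easy to test. A sketch (fixed-point and average-damp are restated so the snippet stands alone; fixed-point-steps is my own name, a variant that counts guesses instead of printing them):

```scheme
(define tolerance 0.00001)

;; Like fixed-point, but returns how many guesses were tried.
(define (fixed-point-steps f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess steps)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          steps
          (try next (+ steps 1)))))
  (try first-guess 1))

(define (average-damp f)
  (lambda (x) (/ (+ x (f x)) 2)))

(define golden (lambda (x) (+ 1 (/ 1 x))))

(fixed-point-steps golden 1.0)                ; undamped
(fixed-point-steps (average-damp golden) 1.0) ; damped
```

By my reading of the convergence rates, the damped version should need no more steps than the undamped one here, though the improvement is far less dramatic than it was for solve-for-x.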

1.37:

a.

(define (cont-frac n d k)
  (define (frac-iter i)
    (if (< i k)
        (/ (n i) (+ (d i) (frac-iter (+ i 1))))
        (/ (n i) (d i))))
  (frac-iter 1))

;Value: cont-frac

(cont-frac (lambda (i) 1.0) (lambda (i) 1.0) 11)

;Value: .6180555555555556

b.

(define (cont-frac n d k)
  (define (frac-iter count result)
    (if (= count 0)
        result
        (frac-iter (- count 1)
                   (/ (n count) (+ (d count) result)))))
  (frac-iter k 0))

;Value: cont-frac

(cont-frac (lambda (i) 1.0) (lambda (i) 1.0) 11)

;Value: .6180555555555556

The main thing that's tricky about 1.37 is figuring out the math of continued fractions and starting with the base case of the last term and working backwards.

1.38:

(define (euler-expand)
  (define (d-fun i)
    (cond ((= (modulo i 3) 2) (* (ceiling (/ i 3)) 2))
          (else 1)))
  (cont-frac (lambda (i) 1.0) d-fun 8))

;Value: euler-expand

(euler-expand)

;Value: .7182795698924731

So, my original iterative version of cont-frac didn't actually work for this problem: it treated division as though it were commutative, and it isn't. It took me a while to figure that out.

1.39:

(define (tan-cf x k)
  (define (d i)
    (- (* 2 i) 1))
  (define (n i)
    (if (= i 1)
        x
        (square x)))
  (cont-frac n d k))

;Value: tan-cf

(tan-cf 1.0 5)

;Value: 1.5574074074074076

This one is actually fairly tricky. If you fail to notice that this is a continued fraction that subtracts rather than adds you're completely hosed. I modified my cont-frac procedure to fix this once I noticed. There's probably an elegant way to extend cont-frac to accommodate these different uses (subtracting versus adding continued fractions, etc.) but I'm not going to chase it down myself. Anybody feel like improving on this?
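Since the question is begged, here is one possible generalization (a sketch; cont-frac-gen is my own name, not from the book): parameterize the loop on the combining operation so adding and subtracting continued fractions share one procedure.

```scheme
;; combine is applied between each denominator and the rest of the
;; fraction: + gives an ordinary continued fraction, - a subtracting one.
(define (cont-frac-gen combine n d k)
  (define (frac i)
    (if (< i k)
        (/ (n i) (combine (d i) (frac (+ i 1))))
        (/ (n i) (d i))))
  (frac 1))

;; Lambert's tangent expansion subtracts, so pass -.
(define (tan-cf x k)
  (cont-frac-gen -
                 (lambda (i) (if (= i 1) x (* x x)))
                 (lambda (i) (- (* 2 i) 1))
                 k))

(tan-cf 1.0 5) ; matches the value above, about 1.5574
```

Passing + with constant 1.0 terms reproduces the golden-ratio approximation from 1.37 unchanged.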

1.40:

(define (cubic a b c)
  (lambda (x) (+ (expt x 3) (* a (expt x 2)) (* b x) c)))

(define dx 0.00001)

;Value: dx

(define (fixed-point-of-transform g transform guess)

(fixed-point (transform g) guess))

;Value: fixed-point-of-transform


(define (deriv g)

(lambda (x) (/ (- (g (+ x dx)) (g x)) dx)))

;Value: deriv

(define (newton-transform g)

(lambda (x) (- x (/ (g x) ((deriv g) x)))))

;Value: newton-transform

(define (newtons-method g guess)

(fixed-point (newton-transform g) guess))

;Value: newtons-method

(newtons-method (cubic 4 3 2) 1)

;Value: -3.2695308420809894

I didn't realize I just needed to literally translate the function. After I knew that I was fine. Again, time to study more math.

1.41:

(define (double x)
  (lambda (i) (x (x i))))

;Value: double

(define (inc x) (+ x 1))

;Value: inc

((double inc) 0)

;Value: 2

(((double (double double)) inc) 5)

;Value: 21

;;This is because applying double to (double double) squares the number of applications: (double double) applies a function 4 times, and doubling that composition gives 16.

(double double)

;Value 16: #[compound-procedure 16]

(((double (double double)) inc) 0)

;Value: 16

((double (double (double inc))) 0)

;Value: 8

(((double (double (double double))) inc) 0)

;Value: 256

1.42:

(define (compose f g)
  (lambda (i) (f (g i))))

;Value: compose

((compose square inc) 6)

;Value: 49

1.43:

(define (repeated f n)
  (if (= n 1)
      f
      (compose f (repeated f (- n 1)))))

;Value: repeated

((repeated square 2) 5)

;Value: 625

Wow! That was a lot easier to think about using compose.

1.44:

(define (smooth f)
  (define dx 0.00001)
  (lambda (x) (/ (+ (f (- x dx)) (f x) (f (+ x dx))) 3)))

;Value: smooth

((smooth square) 2)

;Value: 4.000000000066667

(define (n-smoothed f n)
  ((repeated smooth n) f))

;Value: n-smoothed

((n-smoothed square 16) 2)

;Value: approximately 4.000000001 (each smoothing of square adds (* 2/3 (sqr dx)))

Check The Loser Blog for a potentially better answer.

1.45:

`(define tolerance 0.00001)`

;Value: tolerance

(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))

;Value: fixed-point

(define (average x y)

(/ (+ x y) 2))

;Value: average

(define (average-damp f)

(lambda (x) (average x (f x))))

;Value: average-damp

(define (nth-root x n)
  (fixed-point (repeated
                (average-damp (lambda (y) (/ x (expt y (- n 1)))))
                (ceiling (/ n 2))) 1.0))

;Value: nth-root

(define (compose f g)

(lambda (x) (f (g x))))

;Value: compose

(define (repeated f n)
  (if (= n 1)
      f
      (compose f (repeated f (- n 1)))))

;Value: repeated

(define (nth-root x n)
  (fixed-point-of-transform (lambda (y) (/ x (expt y (- n 1))))
                            (repeated average-damp (log2 n)) 1.0))

;Value: nth-root

(define (log2 n)
  (if (= 1 n)
      0
      (+ (log2 (floor (/ n 2))) 1)))

;Value: log2

After testing the first 15 powers with my version of nth-root I couldn't figure out the relationship between n and the times to average damp. Just about everyone had trouble with this but I found the correct answer in Eli's comment thread...

1.46:

(define (iterative-improve tester improver)
  (define (iter guess x)
    (if (tester guess)
        guess
        (iter (improver guess) x)))
  (lambda (x) (iter 1.0 x)))

;Value: iterative-improve

(define (sqrt x)
  ((iterative-improve
    (lambda (guess) (< (abs (- (square guess) x)) 0.00001))
    (lambda (guess) (average guess (/ x guess)))) x))

;Value: sqrt

(define (average x y)

(/ (+ x y) 2))

;Value: average

(sqrt 2)

;Value: 1.4142156862745097

(define (fixed-point f x)
  ((iterative-improve
    (lambda (guess) (< (abs (- guess (f guess))) 0.00001))
    (lambda (guess) (f guess))) x))

;Value: fixed-point

(fixed-point cos 1.0)

;Value: .7390893414033927

(Edit: 05/18/08) Well, that wraps it up for Section 1.3. I can't believe how long it took me to find the time to come back and clean these answers up a bit. I have had a lot going on though. There will be a few small changes in convention starting in SICP 2.1 to make things more manageable for me. As always, the most up to date code is in the repo.

I finally finished SICP Section 1.2 last night. I'm tremendously excited because this means that next week I can start tackling Higher Order Functions and (I hope) Closures. At any rate, here is the last month's work:

Resources:

Read: Chapter 1 through Section 1.2

Watch: Lectures 1-b

Checked against: Eli Bendersky's Blog, SICP Wiki, Ken Dyck's Solutions, Autodidact and Lispy for Inspiration.

SICP Notes and Exercises:

Notes

Pg. 35: Explanations of Iteration and Recursion in Processes and Procedures and Tail-Recursion in Compilers.

Maybe I was wrong about SICP. I mean the hardest thing about these exercises was letting the stuff sit in my head for a bit. And the motivation to get some of the more lengthy ones done. We'll see how this goes.

Quotes

"A recursive definition does not necessarily lead to a recursive process." - Gerald Jay Sussman, SICP Lecture 1-B's Time-Space Complexity explanation, approx. 25:30 - 30:30

"The key to understanding complicated things is knowing what not to look at." - Gerald Jay Sussman, SICP Lecture 1-B from Swiss Archive, approx. 10:00

"The reason why people think of programming as being hard is because you're writing down a general rule which is going to be used for lots of instances that a particular instance must process correctly." - Gerald Jay Sussman, SICP Lecture 1-B from Swiss Archive, approx. 46:45

Exercises

1.9:

The first procedure evaluates as follows:

(inc (+ 3 5))

(inc (inc (+ 2 5)))

(inc (inc (inc (+ 1 5))))

(inc (inc (inc (inc (+ 0 5)))))

(inc (inc (inc (inc 5))))

(inc (inc (inc 6)))

(inc (inc 7))

(inc 8)

9

This is a recursive procedure and a recursive process.

The second procedure evaluates as follows:

(+ 3 6)

(+ 2 7)

(+ 1 8)

(+ 0 9)

9

This is a recursive procedure but an iterative process.
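For reference, the two definitions being traced look like this (reproduced from memory of SICP exercise 1.9, renamed plus-r and plus-i here to avoid shadowing the built-in +):

```scheme
(define (inc n) (+ n 1))
(define (dec n) (- n 1))

;; Recursive process: a chain of deferred incs builds up.
(define (plus-r a b)
  (if (= a 0)
      b
      (inc (plus-r (dec a) b))))

;; Iterative process: constant space, all state in the arguments.
(define (plus-i a b)
  (if (= a 0)
      b
      (plus-i (dec a) (inc b))))

(plus-r 4 5) ; 9, via the first trace above
(plus-i 4 5) ; 9, via the second
```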

1.10:

(A 1 10) evaluates as follows:

(A 0 (A 1 9))

(A 0 (A 0 (A 1 8)))

(A 0 (A 0 (A 0 (A 1 7))))

(A 0 (A 0 (A 0 (A 0 (A 1 6)))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 1 5))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 4)))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 3))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 2)))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 1))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 2)))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 4))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 8)))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 16))))))

(A 0 (A 0 (A 0 (A 0 (A 0 32)))))

(A 0 (A 0 (A 0 (A 0 64))))

(A 0 (A 0 (A 0 128)))

(A 0 (A 0 256))

(A 0 512)

1024

(A 2 4) evaluates as follows:

```scheme
(A 1 (A 2 3))
(A 1 (A 1 (A 2 2)))
(A 1 (A 1 (A 1 (A 2 1))))
(A 1 (A 1 (A 1 2)))
(A 1 (A 1 (A 0 (A 1 1))))
(A 1 (A 1 (A 0 2)))
(A 1 (A 1 4))
(A 1 (A 0 (A 1 3)))
(A 1 (A 0 (A 0 (A 1 2))))
(A 1 (A 0 (A 0 (A 0 (A 1 1)))))
(A 1 (A 0 (A 0 (A 0 2))))
(A 1 (A 0 (A 0 4)))
(A 1 (A 0 8))
(A 1 16)
(A 0 (A 1 15))
(A 0 (A 0 (A 1 14)))
(A 0 (A 0 (A 0 (A 1 13))))
(A 0 (A 0 (A 0 (A 0 (A 1 12)))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 1 11))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 10)))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 9))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 8)))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 7))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 6)))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 5))))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 4)))))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 3))))))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 2)))))))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 1))))))))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 2)))))))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 4))))))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 8)))))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 16))))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 32)))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 64))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 128)))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 256))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 512)))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 1024))))))
(A 0 (A 0 (A 0 (A 0 (A 0 2048)))))
(A 0 (A 0 (A 0 (A 0 4096))))
(A 0 (A 0 (A 0 8192)))
(A 0 (A 0 16384))
(A 0 32768)
65536
```

(A 3 3) evaluates as follows:

```scheme
(A 2 (A 3 2))
(A 2 (A 2 (A 3 1)))
(A 2 (A 2 2))
(A 2 (A 1 (A 2 1)))
(A 2 (A 1 2))
(A 2 (A 0 (A 1 1)))
(A 2 (A 0 2))
(A 2 4)
65536
```

(see evaluation of (A 2 4) above)

The defined procedures f, g, and h intriguingly map as follows:

(f n) -> (* 2 n)

(g n) -> 2^n

(h n) -> 2 raised to itself n times (a tower of n 2s).

##NOTE:

The book gives examples of recursive and iterative ways to compute the Fibonacci sequence. However, the iterative example computes one term beyond what is needed to arrive at its final answer: the termination condition is count = 0, at which point b is returned. Here is a small change to that program to fix what I perceive as a flaw. Have I missed something?

My version returns a when count = 1.

```scheme
(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)
  (if (= count 1)
      a
      (fib-iter (+ a b) a (- count 1))))
```

1.11:

A tree recursive process that computes f is demonstrated in the following procedure:

```scheme
(define (f n)
  (cond ((< n 3) n)
        (else (+ (f (- n 1))
                 (* 2 (f (- n 2)))
                 (* 3 (f (- n 3)))))))
```

An iterative process that computes f is demonstrated in the following procedure:

```scheme
(define (f n)
  (if (< n 3)
      n
      (f-iter 2 1 0 n)))

(define (f-iter a b c count)
  (if (< count 3)
      a
      (f-iter (+ a (* 2 b) (* 3 c)) a b (- count 1))))
```

Figuring out the iterative process was scary. It was the first moment I thought I wouldn't be able to do this and might need real help. I was unsure whether I needed three or four state variables. It clicked after a few minutes of thinking, though, and was much smoother from there.

1.12:

This one actually gave me some trouble for a bit because I wanted to solve the problem in a non-standard way. After a while, I cracked and read the precursor text (but not the code) to Eli Bendersky's solution; noticing that he defined the function with two arguments (for columns and rows), I arrived fairly quickly at what seems to be the more or less standard solution. I had this much completed for a week or more but got stalled trying to figure out the problem of a Pascal function that takes one argument. That diversion contributed greatly to the delay in my progress. I did solve it, though, and posted the results separately. Here's the standard solution:

```scheme
(define (pas row col)
  (cond ((= row 1) 1)
        ((= col 1) 1)
        ((= row col) 1)
        (else (+ (pas (- row 1) (- col 1))
                 (pas (- row 1) col)))))
;Value: pas
```

1.13:

I need to define a few things for this one first.

rad = the square root of 5 or

`(sqrt 5)`

phi = (1 + rad) / 2 or

`(/ (+ 1 rad) 2)`

psi = (1 - rad) / 2 or

`(/ (- 1 rad) 2)`

fib = you remember our Fibonacci function from before, right? That's all this is.

Prove that Fib(n) is the closest integer to (/ (phi ^ n) rad). Hint: Use psi as defined above, induction, and the definition of the Fibonacci numbers to prove that Fib(n) = ((phi ^ n) - (psi ^ n)) / rad.

Okay, this one is intimidating for a number of reasons. One being that I've never done a formal proof before, at least that I can remember. I've seen proofs done and I've read one or two, but I've never done one. Either my math education was lax or I was lax about my math education; in fairness, it was probably a bit of both. That unfamiliarity, combined with the aforementioned single-argument Pascal problem, served to keep me unmotivated and distracted for a bit.

Prove: That Fib(n) = ((phi ^ n) - (psi ^ n)) / rad.

First, you have to prove your base cases.

Fib(0) = ((phi ^ 0) - (psi ^ 0)) / rad.

That reduces to 0 = (1 - 1) / rad, so the first base case holds.

Fib(1) = ((phi ^ 1) - (psi ^ 1)) / rad.

That reduces to 1 = ((1 / 2) + (rad / 2) - (1 / 2) + (rad / 2)) / rad.

That reduces to 1 = rad / rad, so the second base case holds.

The definition of the Fibonacci numbers is that fib(n) = fib(n-1) + fib(n-2), so fib(2) = 0 + 1 = 1. Having found that our lemma is true for n-1 and n-2, will it hold for n?

Fib (2) = ((phi ^ 2) - (psi ^ 2)) / rad.

Remembering that phi is the golden ratio, it satisfies the condition (phi ^ 2) = phi + 1 (and psi, being the other root of the same equation, satisfies (psi ^ 2) = psi + 1).

This gives fib (2) = (2.61803398 - 0.38196601) / rad.

This reduces to fib (2) = 2.23606797 / rad giving 1.

Thus, our lemma holds for fib(2), and the same algebra carries the inductive step to any n. This does not explain why Fib(n) is always the closest integer to phi ^ n / rad, though.

To explain that we must note that in the base case of 1 it holds as phi ^ n / rad evaluates to 0.723606798 and Fib(1) is 1. So, it holds here.

We may then observe that the absolute value of psi is less than 1, so psi ^ n always approaches zero as its exponent is increased.

Thus, the difference between fib(n) and (/ (phi ^ n) rad) is exactly (psi ^ n) / rad, which is always less than 1/2 in absolute value.

This is what we needed to show.

Whew. After checking this against other people's solutions it turns out I'm not crazy and am roughly correct in my proof which is a relief.

1.14:

Okay. I drew the tree on paper but trying to draw this tree in ASCII for you guys would about kill me. Thankfully on checking my solution I was lucky to find a correct tree image which I will steal with credit. Thanks, Bhrgunatha.

Here is the tree.

As for the order, we can observe that, at least in terms of the number of steps, the growth follows that of our tree-recursive Fibonacci function and is exponential. I think it's growing at *O(x^n)* but it could be growing at *O(n^x)*. Has anyone come to a definitive conclusion about this? Upon checking Ken Dyck's site again I see that Tim Eichner has an interesting solution. Can anyone confirm this?

1.15:

a. How many times is the procedure p applied when (sine 12.15) is evaluated?

```scheme
(sine 12.15)
(p (sine 4.05))
(p (p (sine 1.35)))
(p (p (p (sine 0.45))))
(p (p (p (p (sine 0.15)))))
(p (p (p (p (p (sine 0.05))))))
(p (p (p (p (p 0.05)))))
```

P is applied 5 times.

b. What is the order of growth in space and number of steps (as a function of a) used by the process generated by the sine procedure when (sine a) is evaluated?

This one I did need help with. I realized quite clearly that the growth was related to the number of divisions by 3 our angle took to get below the threshold (0.1). I did not realize that the abstraction I was looking for to describe this growth was that of a logarithm. That being said, I checked a few other solutions and went over the wiki page for logarithms once or twice. I really need to order Spivak's Calculus now. Anyway, the process is *O(log(a))* in both space and number of steps; specifically, it's *O(log₃(a))*.

1.16:

This was tricky until I modeled the state transformations while holding the quantity (* a (b ^ n)) invariant, as the book suggests. Once I did that it was pretty easy.

```scheme
(define (expt b n)
  (define (expt-iter b n a)
    (cond ((= n 0) a)
          ((even? n) (expt-iter (square b) (/ n 2) a))
          (else (expt-iter b (- n 1) (* a b)))))
  (expt-iter b n 1))
;Value: expt

(expt 2 1000)
;Value: 10715086071862673209484250490600018105614048117055336074437503883703510511249361224931983788156958581275946729175531468251871452856923140435984577574698574803934567774824230985421074605062371141877954182153046474983581941267398767559165543946077062914571196477686542167660429831652624386837205668069376
```

;;That's a 302-digit number. This algorithm is O(log n). This computed in 16 steps.

1.17:

```scheme
(define (* a b)
  (define (double x) (+ x x))
  (define (halve x) (/ x 2))
  (cond ((= b 0) 0)
        ((= a 0) 0)
        ((= b 1) a)
        ((even? b) (* (double a) (halve b)))
        (else (+ a (* a (- b 1))))))
;Value: *

(* 2 1000)
;Value: 2000
```

;;16 steps again, so logarithmic in time. Space too: the deferred additions pile up one per recursive call, so the space is O(log b) as well, not linear.

1.18:

```scheme
(define (* a b)
  (define (double x) (+ x x))
  (define (halve x) (/ x 2))
  (define (*-iter a b c)
    (cond ((= b 0) 0)
          ((= a 0) 0)
          ((= b 1) (+ a c))
          ((even? b) (*-iter (double a) (halve b) c))
          (else (*-iter a (- b 1) (+ c a)))))
  (*-iter a b 0))
;Value: *

(* 2 1000)
;Value: 2000
```

;;16 steps again. Logarithmic and iterative, so it does it in O(1) space. Boo-yah. Today was a good day to code.

1.19:

This was just a really difficult problem to understand. I wasn't even sure what they were really asking. Once I realized I just needed to use algebra to try and expand and then factor out a few things I felt a lot more comfortable.

```scheme
(define (fib n)
  (fib-iter 1 0 0 1 n))
;Value: fib

(define (fib-iter a b p q count)
  (cond ((= count 0) b)
        ((even? count)
         (fib-iter a
                   b
                   (+ (square p) (square q))   ;; compute p'
                   (+ (* 2 p q) (square q))    ;; compute q'
                   (/ count 2)))
        (else (fib-iter (+ (* b q) (* a q) (* a p))
                        (+ (* b p) (* a q))
                        p
                        q
                        (- count 1)))))
;Value: fib-iter
```

1.20:

This is one of those exercises that Abelson and Sussman are sort of bastards for including.

Normal-order evaluation is "fully expand to primitives and then reduce"; applicative order is, well, what we've been doing all along.

How many remainder operations are performed in the normal order version of

(gcd 206 40)?

How many in the applicative order version?

4: (remainder 206 40), (remainder 40 6), (remainder 6 4), (remainder 4 2)

Applicative order first:

```scheme
(gcd 206 40)
(gcd 40 (remainder 206 40))
(gcd 40 6)
(gcd 6 (remainder 40 6))
(gcd 6 4)
(gcd 4 (remainder 6 4))
(gcd 4 2)
(gcd 2 (remainder 4 2))
(gcd 2 0)
2
```

Normal order version:

```scheme
(gcd 206 40)
;; count the remainders in b, which is always substituted into the next call
(gcd 40 (remainder 206 40))                                     ;; rc = 1
;; (= (remainder 206 40) 0) => false
(gcd (remainder 206 40)
     (remainder 40 (remainder 206 40)))                         ;; rc = 1+2
;; (= (remainder 40 6) 0) => false
(gcd (remainder 40 (remainder 206 40))
     (remainder (remainder 206 40)
                (remainder 40 (remainder 206 40))))             ;; rc = 1+2+4
;; (= (remainder 6 4) 0) => false
(gcd (remainder (remainder 206 40)
                (remainder 40 (remainder 206 40)))
     (remainder (remainder 40 (remainder 206 40))
                (remainder (remainder 206 40)
                           (remainder 40 (remainder 206 40))))) ;; rc = 1+2+4+7
;; (= (remainder 4 2) 0) => true!
```

Now that the if is true, we evaluate a: (remainder (remainder 206 40) (remainder 40 (remainder 206 40))), which contributes 4 remainders on top of the 14 evaluated in the prior predicates. Ta-da! 18 evaluations in total for normal order, versus the 4 for applicative order.

GCD is effectively a loop here and the only way for the loop to exit is for the if predicate to evaluate to true, after which the consequent is evaluated. The alternate is only substituted for in this case, never evaluated outright as it never becomes primitive.

In this way, the problem seems to me more of a study into the if conditional than evaluation models. Once you understand that the alternate never gets evaluated, you can simply figure out how many remainders get fed to it before it's true and then how many are in the consequent.

That's the best I could come up with for this one, but Eli Bendersky has a solution you may find clearer or more detailed.

1.21:

```scheme
(smallest-divisor 199)
;Value: 199

(smallest-divisor 1999)
;Value: 1999

(smallest-divisor 19999)
;Value: 7
```

1.22:

The code for this exercise is not *particularly* difficult; it's not easy, but it's fairly straightforward. Because the exercise was written over 10 years ago, though, it's pretty difficult to use on modern hardware. You're supposed to observe algorithmic efficiency because the code stresses your hardware; unfortunately, in 2008 it makes my hardware yawn at numbers on the scale they were asking for. So I started things off at 12 digits and scaled up from there. I also decided to rework the code so that it only outputs when it finds a prime.

```scheme
(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime (- (runtime) start-time) n)))

(define (report-prime elapsed-time n)
  (newline)
  (display n)
  (display " *** ")
  (display elapsed-time))

(define (search-for-primes current end)
  (cond ((even? current) (search-for-primes (+ current 1) end))
        ((> current end) (display " done! "))
        (else (timed-prime-test current)
              (search-for-primes (+ current 2) end))))
```

So, there's the code. Now for my results:

```scheme
(search-for-primes 100000000000 100000000060)
100000000003 *** 1.1600000000000037
100000000019 *** 1.1899999999999977
100000000057 *** 1.240000000000009 done!
;Unspecified return value

(search-for-primes 1000000000000 1000000000070)
1000000000039 *** 3.91
1000000000061 *** 3.759999999999998
1000000000063 *** 3.9400000000000013 done!
;Unspecified return value

(search-for-primes 10000000000000 10000000000100)
10000000000037 *** 12.280000000000001
10000000000051 *** 12.510000000000005
10000000000099 *** 12.200000000000003 done!
;Unspecified return value

(search-for-primes 100000000000000 100000000000098)
100000000000031 *** 38.190000000000026
100000000000067 *** 38.16
100000000000097 *** 37.95000000000002 done!
;Unspecified return value
```

Checking all of these, it appears we are very close to the projected (sqrt 10) increase per additional digit.

1.23:

```scheme
(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((= (modulo n test-divisor) 0) test-divisor)
        (else (find-divisor n (next test-divisor)))))
;Value: find-divisor

(define (next n)
  (cond ((even? n) (+ n 1))
        (else (+ n 2))))
;Value: next
```

The results this time were:

```scheme
(search-for-primes 100000000000 100000000060)
100000000003 *** .7400000000000091
100000000019 *** .7200000000000273
100000000057 *** .7099999999999795 done!
;Unspecified return value

(search-for-primes 1000000000000 1000000000070)
1000000000039 *** 2.3600000000000136
1000000000061 *** 2.2900000000000205
1000000000063 *** 2.319999999999993 done!
;Unspecified return value

(search-for-primes 10000000000000 10000000000100)
10000000000037 *** 7.350000000000023
10000000000051 *** 7.340000000000032
10000000000099 *** 7.189999999999998 done!
;Unspecified return value

(search-for-primes 100000000000000 100000000000098)
100000000000031 *** 23.110000000000014
100000000000067 *** 22.879999999999995
100000000000097 *** 22.920000000000016 done!
;Unspecified return value
```

This time the times are also pretty close to half the previous ones, though slightly over half, presumably because of the overhead of the extra test inside next.

1.24:

```scheme
(define (start-prime-test n start-time)
  (if (fast-prime? n 500)
      (report-prime (- (runtime) start-time) n)))
;Value: start-prime-test
```

And these results were:

```scheme
(search-for-primes 100000000000 100000000060)
100000000003 *** 9.999999999990905e-3
100000000019 *** 0.
100000000057 *** 9.999999999990905e-3 done!
;Unspecified return value

(search-for-primes 1000000000000 1000000000070)
1000000000039 *** 0.
1000000000061 *** 9.999999999990905e-3
1000000000063 *** 0. done!
;Unspecified return value

(search-for-primes 10000000000000 10000000000100)
10000000000037 *** 0.
10000000000051 *** 0.
10000000000099 *** 0. done!
;Unspecified return value

(search-for-primes 100000000000000 100000000000098)
100000000000031 *** 9.999999999990905e-3
100000000000067 *** 0.
100000000000097 *** 0. done!
;Unspecified return value
```

We can see that this is definitely in *O(log(n))*. The times have gone below the precision of my instruments in most cases.

1.25:

I honestly had to look to Eli and Ken for help on this one. I was hand-evaluating the original code before trying Alyssa's and having some trouble. I had noticed that the fast-expt procedure had two arguments where expmod had three, so I figured part of the computation was being moved around. I even realized that Alyssa's way went ahead and computed the base to the exponent and then took the remainder against it once; that just seemed like it should have been better to me. I didn't have the sense to just add runtime in as an argument and see how much time they were taking. At any rate, the original expmod does lots of little remainder operations on small numbers, and because of bignum arithmetic that ends up being faster than a single remainder on a huge number.

1.26:

(expmod base (/ exp 2) m) has to be evaluated an extra time each time the (even? exp) condition evaluates to true. This moves the algorithm from log n to n because, as I somewhat foolishly missed, it shifts the process from a linear recursion to a tree recursion. See the SICP Wiki's solution for more detail; it seems to be the best resource for rigorous complexity analysis.

1.27:

```scheme
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m)) m))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))
;Value: expmod

(define (carmichael-test n)
  (define (try-it a)
    (= (expmod a n n) a))
  (define (carmichael-iter times)
    (cond ((= times 0) true)
          ((try-it times) (carmichael-iter (- times 1)))
          (else false)))
  (carmichael-iter (- n 1)))
;Value: carmichael-test
```

1.28:

```scheme
(define (expmod base exp m)
  (define (check x)
    ;; a nontrivial square root of 1 modulo m signals a composite: return 0
    (if (and (not (= x 1))
             (not (= x (- m 1)))
             (= (remainder (square x) m) 1))
        0
        (remainder (square x) m)))
  (cond ((= exp 0) 1)
        ((even? exp) (check (expmod base (/ exp 2) m)))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))
;Value: expmod

(define (miller-rabin-search n)
  (define (try-it a)
    (= (expmod a (- n 1) n) 1))
  (try-it (+ 1 (random (- n 1)))))
;Value: miller-rabin-search

(define (miller-rabin-test n)
  (define (mr-iter count)
    (cond ((= count 1) #t)
          ((miller-rabin-search n) (mr-iter (- count 1)))
          (else #f)))
  (mr-iter (floor (/ n 2))))
;Value: miller-rabin-test
```

I got everything written right on this one, but I had to check Ken's page again to notice that my try-it definition was testing the expmod result against a, not 1. Once I fixed that I was right as rain.

As a final note, I should point out that my solution here differs a bit from the norm. One, I'm pretty serious about not using primitives that haven't been introduced yet. Even Ken Dyck's solution uses let (though the SICP wiki avoids it). After all, this is my first serious work in programming ever. The closest thing besides this was my read through the first chapter of K&R in summer of 2006. Anyway, just keep in mind I'm taking this as my formal education.

An Emacs Tutorial

Git Tutorial Part 1

Git Tutorial Part 2

A Much more focused collection of *nix & associated utilities sheets

A Massive Index of Cheat Sheets

Also, I'm not sure I buy it but there was some pretty optimistic news about Concentrated Solar Power today. I'd love to see more detailed plans and a price/time-to-completion estimate.

Finally, if anyone has any insights about why I'm getting a bad EIP value and a kernel panic whenever I try to transfer large files (or dozens of songs) with my server, feel free to let me know. I will buy you a (coffee/beer/etc). It seems related to this issue from an openSuse user. It could also be related to me using the 8139cp module instead of 8139too for my ethernet card. Whatever, I doubt I'll get anywhere but I'll be looking into it.

Now to grab dinner and finish that essay...

A little over two weeks ago I came up against Exercise 1.12 in the venerable Structure and Interpretation of Computer Programs.

The exercise wants you to write a recursive program to compute elements of Pascal's Triangle.

This exercise has pretty much infuriated me, and it's all my own fault. Upon first hearing the problem statement, I got it in my head that the function should look something like "(define (pas n) ...)". I always think of number series as being described in terms of a single argument (i.e. the 12th element), so it seemed natural to me that the Pascal's Triangle function should be computed in this way, even though it is not, in some sense, a traditional series.

After a while, I cracked and read the precursor text (but not the code) to Eli Bendersky's solution; noticing that he defined the function with two arguments (for columns and rows), I arrived fairly quickly at what seems to be the more or less standard solution. I have had this much completed for a week but got stalled trying to figure out the problem of a Pascal function that takes one argument.

As of today I've solved the problem though and hoped to share my results here. First, the throwaway code that ended up being good for nothing!

```scheme
(define (is-one? element)
  (define (is-one-iter ones count flag)
    (cond ((< element 5) #t)
          ((= ones element) #t)
          ((> ones element) #f)
          ((= flag 1) (is-one-iter (+ ones 1) count (- flag 1)))
          (else (is-one-iter (+ ones count) (+ count 1) (+ flag 1)))))
  (is-one-iter 4 2 0))
;Value: is-one?
```

That code tests whether a given element equals one, and it does take a single argument, which is nice. I couldn't figure out a way to use it to compute the actual elements though.

After a little bit of experimenting I stumbled on this number sequence (OEIS #A080956) which when put in the following procedure would allow me to compute n from a given column and row.

EDIT: Corrected dyslexic mistake in my code (I'd replaced all instances of col with row and vice versa). See comments.

```scheme
(define (n-from-rowcol row col)
  (define (f x)
    (- (/ (* (+ x 1) (- 2 x)) 2)))
  (+ row col (f (- row 1))))
;Value: n-from-rowcol
```

Now all I had to do was find a way to reverse the function to give me the inputs if I gave it the output. I actually stumbled upon another number sequence (OEIS #A000124, also known as the Lazy Caterer's Sequence) which when put into the following procedure returns the correct column and row for a given element. At last, working code:

```scheme
(define (pascal n)
  (define (pas col row)
    (cond ((= col 1) 1)
          ((= row 1) 1)
          ((= col row) 1)
          (else (+ (pas (- col 1) row)
                   (pas (- col 1) (- row 1))))))
  (define (colrow-from-n)
    (define (col-iter count)
      (define (f x)
        (- (/ (+ (square x) x 2) 2) x))
      (cond ((> (f count) n) (pas (- count 1) (- n (- (f (- count 1)) 1))))
            ((= (f count) n) (pas (f count) 1))
            (else (col-iter (+ count 1)))))
    (col-iter 1))
  (colrow-from-n))
;Value: pascal
```

Any insights into cleaner code, better algorithms, or comparisons between the two number series are welcomed.

As I've mentioned, SICP isn't going as fast as I hoped but I won't skip a thing. If my schedule goes out the window so be it but this book is getting finished. Of course, hopefully I can conform somewhat to the schedule as well. There will be an update this weekend even if I'm not through section 1.2.

In the meantime, I thought I'd post up something I've been working on during my lunch hour. Namely, Project Euler code. Project Euler is a website that has about 180 programming problems of escalating difficulty. I've only devoted one lunch hour to it so far, but it's been fun and I'd love to get through a quarter to half the problems this year.

The challenge for me, I think, will come from the math side as well as the programming, and some of these I just won't be able to solve for a while. Better to challenge myself from both ends, right? The code's hidden behind a cut for those who don't want their eyes scarred by this programming nonsense. Also, I'll be improving these as I discover better programming formalisms. I'm solving each problem in both C and Scheme; I want to solve each problem from two paradigms (or more) if possible.

Problem 1 in C:

```c
//Project Euler Problem 1:
//Sum the numbers below 1000 divisible by 3 or 5.
#include <stdio.h>

int main (void)
{
    int count;
    int sum = 0;
    for (count = 1; count < 1000; count++) {
        if ((count % 3 == 0) || (count % 5 == 0))
            sum += count;
    }
    printf ("The sum of all multiples of 3 or 5 below 1000 is %d.\n", sum);
    return (0);
}
```

Problem 1 in Scheme:

```scheme
;;Project Euler Problem 1:
;;Sum the numbers below 1,000 divisible by 3 or 5.
(define (euler1 top)
  (define (iter count sum)
    (define (divides? n)
      (or (= (modulo n 3) 0) (= (modulo n 5) 0)))
    (cond ((= count top) sum)
          ((divides? count) (iter (+ count 1) (+ sum count)))
          (else (iter (+ count 1) sum))))
  (iter 1 0))
;Value: euler1
```

Problem 2 in C:

```c
//Project Euler Problem 2:
//Sum the even-valued terms in the Fibonacci sequence below 1,000,000.
#include <stdio.h>

int main (void)
{
    int a = 1;
    int b = 2;
    int temp, sum = 0;
    while (a <= 1000000) {
        if (a % 2 == 0)
            sum += a;
        temp = b;
        b += a;
        a = temp;
    }
    printf ("The sum of the even valued Fibonacci terms below 1,000,000 is %d.\n", sum);
    return (0);
}
```

Problem 2 in Scheme:

```scheme
;;Project Euler Problem 2:
;;Sum the even-valued terms in the Fibonacci sequence below 1,000,000.
(define (euler2 top)
  (define (iter current sum count)
    (define (fib n)
      (cond ((< n 3) n)
            (else (+ (fib (- n 1)) (fib (- n 2))))))
    (cond ((> current top) sum)
          ((even? current) (iter (fib (+ count 1))
                                 (+ sum current) (+ count 1)))
          (else (iter (fib (+ count 1)) sum (+ count 1)))))
  (iter 0 0 0))
;Value: euler2
```

That's all for now. Hope I get section 1.2 done by this weekend!

Anyway, I've dug up some resources while hitting snags here and there. It's what I do. So far, I've found a really great SICP Wiki (though it's half Russian) and a pack of people who have studied it via the Open Courseware over the past year.

That pack is as follows:

Ken Dyck's Solutions

Peter Sheats Solutions

Chuck Hoffman's Blog

Michael Harrison's Solutions and Commentary

Ozten's Solutions and Commentary

and finally, The Lispy Solutions and Commentary, which so wonderfully motivated and inspired me tonight, particularly with regard to a remark on Section 1.1 "just lulling you into a false sense of security".

Of course, there is also the aforementioned SICP Wiki and Eli Bendersky's Blog. Long story short, I really owe it to Lispy for encouraging me. Halfway through section 1.2 I was bogged down, roughly on exercise 1.13, which apparently gave a few other people trouble too. And I felt all alone.

Anyway, I'm going to try to push my schedule back a week and see if by next Friday I can be up to lambdas and through 80 pages of Discrete Math and then continue as planned. At the very least, I've known from day one that the one thing I want most to accomplish this year is wringing as much as I can out of SICP. So if it takes the whole year just to do that, schedules be damned, so be it.

Today was not the easiest day. It wasn't terrible either. The news was decidedly mixed. And it's not about Dad though if you're wondering he's doing well. He's undergone chemo and lost most of his hair but he's generally upbeat and energetic.

Two things have been wearing on me today and the first is work-related. Since January 11th I've been working full time at TVS. The news was that I finally got the paperwork for my benefits package today. It's nice having benefits. Benefits are good. All the same, this meant I could start doing budgeting and working out my finances.

Finances are some scary shit. If I didn't know better I'd swear I'd die without a sizable chunk of money a year. For now I'm still staying with my parents until summer (at their behest more than mine) and I'll find a place to live then.

I really am making enough to be okay. It's just that there's not a lot on the margins. I don't want a whole bunch of stuff. I just don't want to worry about suddenly needing money for any reason.

Anyway, the other struggle has been that of the triangle. I'm getting behind on my schoolwork and hoping to catch up by/over the weekend. And I was pretty distressed because I spent like 4 hours obsessing over exercise 1.12 in SICP.

The problem is to write a procedure that computes the elements of Pascal's Triangle.

That shouldn't be a big deal, you know? But I obsessed over it. And now I've got a silly over-engineered solution that I'm more fond of than I should be. It's an interesting problem though. Hopefully I've learned something from it.

Mine still isn't quite working and I know there is a simpler way to do it. I cracked after a while and read about how one might solve it but I didn't peek at any code. Still, I'm stuck on doing it *my way*. I'm such a bastard. Anyway, it's coming together and I expect it'll be done by the end of the hour. It'll be in the week 2 recap for sure.

Long story short I realized what I've gotten myself into today. And it's still where I want to be. It's just that I think it's going to take more work and time than I might have been able to understand.

Well, I said I would do this and I meant it. These entries will be a bit lengthy. I feel a little pain for any feeds I'm aggregated on. So, in the interest of not driving you all mad, these weekly semester updates will be behind a cut.

Like so.

With that out of the way, here are the details. Each week, I'll post the resources I used (i.e. what I read, what I watched, etc) and the solutions to the relevant problems along with any notes of interest I had. I may offer other general musings (or even specific ones) inspired by my studies in other posts but these will tend towards a cut and dry solution to the exercises. Finally, I'll post a link to any sources I might have found or used to check answers I wasn't sure of and if I got the answer wrong I'll disclose that in the post.

As for the math, I haven't decided what to do about that. I mean, it doesn't make sense to post up a ton of math solutions though I suppose by that logic it doesn't make sense to post code snippets either. If I come up with something I'll let you know. If you have suggestions by all means write them in.

Resources:

Read: Chapter 1 through Section 1.1

Watch: Lectures 1-a

Checked against: Eli Bendersky's Blog

SICP Notes and Exercises:

Notes

Pgs. 28-30: Definitions and explanations of scope and locality. It is becoming evident that subtle errors can easily emerge from differences in locality and scope. These errors are compounded by the fact that there is no distinction in Lisp between variables and procedures (i.e. you could have a procedure that used abs, another procedure, as a variable).

Quotes

"First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute." - Preface to the First Edition

"The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology - the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects." - Preface to the First Edition

Exercises

1.2:

```scheme
(/ (+ 5 4 (- 2 (- 3 (+ 6 (/ 4 3)))))
   (* 3 (* (- 6 2) (- 2 7))))
;Value: -23/90
```

1.3:

```scheme
(define (two-of-three x y z)
  (cond ((< x y z) (+ (* y y) (* z z)))
        ((< y x z) (+ (* x x) (* z z)))
        ((< z x y) (+ (* x x) (* y y)))
        ((or (= x y) (= x z)) (+ (* y y) (* z z)))
        ((or (= y x) (= y z)) (+ (* x x) (* z z)))
        ((or (= z x) (= z y)) (+ (* x x) (* y y)))))
;Value: two-of-three
```

Comments: The original version of the program lacked the last three lines, but I realized that if two of the inputs were equal, no condition matched, so I changed it. I'm sure there's a much more elegant way to do it, but the job is done. And it's mostly readable.

1.4:

This procedure checks to see if B is greater than 0. If it is, it adds A and B. Otherwise, it subtracts B from A.

1.5:

This procedure, when evaluated using applicative-order evaluation, will not resolve, as it infinitely recurses trying to evaluate (p). An interpreter using normal-order evaluation will not have this problem because the if condition evaluates to true, so (p) is never evaluated. (The Scheme interpreter uses applicative-order evaluation.)

1.6:

Again, this is a case of infinite recursion due to applicative-order evaluation: because new-if is an ordinary procedure, both of its argument expressions get evaluated, so sqrt-iter continues to call itself regardless of the value of (good-enough? guess x), if you must know.

1.7:

(define (good-enough? guess x)
  (< (abs (- (improve guess x) guess)) (* 0.000001 guess)))

;Value: good-enough?
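To see the improvement in context, here is a minimal sketch wiring this good-enough? into the book's sqrt procedure. average, improve, and sqrt-iter follow the book; my-sqrt is my own name, chosen to avoid shadowing the built-in sqrt.

```scheme
; Newton's method for square roots, using the relative-change test above.
(define (average a b) (/ (+ a b) 2))

(define (improve guess x) (average guess (/ x guess)))

(define (good-enough? guess x)
  (< (abs (- (improve guess x) guess)) (* 0.000001 guess)))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))

(define (my-sqrt x) (sqrt-iter 1.0 x))
```

With the book's original absolute test, (< (abs (- (square guess) x)) 0.001), something like (my-sqrt 0.0001) stops around 0.03 instead of the true root 0.01; the relative test keeps iterating until the guess barely changes, so small inputs come out accurate.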

1.8:

(define (curt-iter guess x)
  (if (good-enough? guess x)
      guess
      (curt-iter (improve guess x) x)))

;Value: curt-iter

(define (good-enough? guess x)
  (< (abs (- (cube guess) x)) 0.001))

;Value: good-enough?

(define (cube x)
  (* x (* x x)))

;Value: cube

(define (curt x)
  (curt-iter 1.0 x))

;Value: curt

(define (improve guess x)
  (/ (+ (/ x (* guess guess)) (* 2 guess)) 3))

;Value: improve

## NOTE:

Here is an example rewrite of the sqrt program using block structure and lexical scoping. It is inserted here because this was the point of discussion but no relevant exercise was assigned.

(define (sqrt x)
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (define (good-enough? guess)
    (< (abs (- (square guess) x)) 0.001))
  (define (average a b)
    (/ (+ a b) 2))
  (define (improve guess)
    (average guess (/ x guess)))
  (sqrt-iter 1.0))

So, I've finally gotten everything nailed down. I know what courses I'm taking, what resources I'm using, and what my schedule is.

I've decided to break my studies into two semesters, each comprising two courses and lasting 20 weeks. The first semester started this Monday (1/14) and ends Friday (5/30). I then take the month of June off. The second semester will start Monday (7/7) and end Friday (11/21). Having June and December off will both motivate me to push through and allow room for slight changes in schedule.

I'll study SICP and Discrete Mathematics and Its Applications by Rosen this semester. I've already started in on SICP. Obviously I'm a little behind on the math. At the end of each week I'm planning to post a summary of exercises and notes, at least for the programming courses. I don't know what I'll do for the math course. I also haven't quite settled on whose problem sets to do. Ah, well.

Next semester it's HTDP and CTM. I'm a little freaked out looking at all this, but if I try I'm bound to learn something. Wish me luck.

And here's the week by week breakdown of the first 4 weeks for each course:

discrete math: resources include lectures, problem sets, and course notes

dma - 787 pages / 20 weeks = 39.35(40) pgs/week

1 is lecture 11-01-00 and pgs.1-44 (through section 1.3)

2 is lecture 11-02-00 and pgs.44-80 (through definition 4)

3 is lecture 11-03-00 and pgs.80-119 (through chapter 1)

4 is lecture 11-06-00 and pgs.119-161 (through theorem 7)

sicp: resources include lectures, online book, course notes, problem sets, and eli bendersky's site

sicp - 610 pages / 20 weeks = 30.5(31) pgs/week

1 is lecture 1-a and pgs.1-31 (through section 1.1)

2 is lecture 1-b and pgs.31-62 (through section 1.3.1)

3 is lecture 2-a and pgs.63-97 (through section 2.1)

4 is lecture 2-b and pgs.97-126 (through section 2.2.3)

As far as my schedule goes, the plan is to work from 7am-3pm Monday through Friday, go to the gym after work on MWF, and veg on the weekends where possible. Beyond work and exercise, the workweek will be dedicated to my "education", of course.

I have a long-term plan for a course of study but no hard schedule yet. I have to divide up readings and problem sets and link them with lectures and such. I plan to have such a syllabus done and up for viewing by the end of the week. As for the long-term plan of study, there are 6 programming texts and 3 math texts that I'd really like to get through. If I get through the first 3 programming texts (or even the first 2) and 1 or 2 of the math texts, I'd consider it a successful year. They're all fairly rigorous and I'd like to cover them in depth. Of late, I've been debating the order in which to approach the programming texts: either SICP, CTM, HTDP or HTDP, CTM, SICP. Some of the stuff in SICP is a bit difficult and some of the stuff in HTDP is a bit easy so far. This is another thing I'm hoping to have worked out by the end of the week so that I can get going.

Once I do have a syllabus I'll post it, then post notes on readings and lectures and solutions to exercises as I go along, so feel free to follow along and ask questions. You can only help me learn more. So far, I've read the first 40 pages of SICP in Montana. That's Chapter 1 (of 5), Section 1.1. I've got notes typed up on the lecture and reading and most of the examples solved. I'll get those posted by Friday as the first entry, whatever my course of study turns out to be. Also, Friday I will be going to that Yeasayer concert. So far Ben Grad and Minor are talking about going too. Any more takers? Have you guys liked Yeasayer as much as I have? Isn't that Red Cave song from yesterday awesome?

Finally, here are some good Lessig quotes from my reading of The Future of Ideas in Montana (pgs. 1-99):

"The very idea that nonexclusive rights might be more efficient than exclusive rights rarely enters the debate. The assumption is control, and public policy is dedicated to maximizing control." - Lawrence Lessig, The Future of Ideas, Pg. 86

"Where we have little understanding about how a resource will be used, we have more reason to keep that resource in the commons. And where we have a clear vision of how a resource will be used, we have more reason to shift that resource to a system of control." - Lawrence Lessig, The Future of Ideas, Pg. 88-89

"The point is more than theoretical. In essence, the changes in the environment of the Internet that we are observing now alter the balance between control and freedom on the Net. The tilt of these changes is pronounced: control is increasing. And while one cannot say in the abstract that increased control is a mistake, it is clear that we are expanding this control with no sense of what is lost. The shift is not occurring with the idea of a balance in mind. Instead, the shift proceeds as if control were the only value." - Lawrence Lessig, The Future of Ideas, Pg. 99

I'll be in Montana from the 2nd to the 11th, so I've got that coming up. I'll be in Bozeman if you're wondering. It'll be nice to get away for a bit...even if it is for a family reunion with people I haven't seen in a good while: my biological Dad's side of the family. For those who don't know, Mom divorced, and the all-too-awesome feller suffering from lung cancer is (technically) my Step-Dad.

I need to draft up a schedule for the new year to figure out how I'm getting my studies done. And whose lectures and course materials I'll be following as I have a choice in some cases. More on that soon. There's also been a ton of great nerd discussion floating around the blogosphere of late, some of which I'll try to comment on in the next couple of days. In the meantime, here's a trivial nugget of thought.

I watched Lecture 1A of the classic MIT Structure and Interpretation of Computer Programs series tonight and something struck me, mostly because Sussman brought the idea to the forefront with clarity at one point. He said something fairly fundamental, bordering on self-evident when Computer Science is viewed introspectively, that I hadn't considered before: in essence, Computer Science is about imperative "how-to" knowledge and process rather than declarative knowledge or fact. Thus, a programming language's job is to serve as a description of process and to provide tools toward that end.

The part of this that I hadn't considered before is that this is why we bother with, or even focus on, learning new programming languages and methods of abstraction rather than focusing on writing specific programs. Sure, many schools recommend a course in compiler, operating system, or programming language design, and there are plenty of blog posts detailing such undertakings as a way to sharpen skill and knowledge in the field, but nothing is so popular or so emphasized as learning new languages: regularly, and across different paradigms and abstractions if possible. There's something worth thinking on in greater depth here, about why that is, that I haven't seen eloquently written about by Yegge, Graham, Braithwaite, Atwood, or anyone else. Perhaps if I can capture what it is, I'll write about it. In the meantime, it's just a thought.

**I would like to hear a reasoned argument as to what subjects cannot be learned faster outside of the educational system (note: this does not mean without structure or guidance) than in it and why this might be.**

*More importantly:* Any takers?
