# Content tagged SICP

Resources:

Read: Chapter 2 through Section 2.1

Watch: Lecture 2-b

Checked against: Eli Bendersky's Blog, SICP Wiki, Ken Dyck's Solutions, Theloserblog, Wfasim's Solutions, the Scheme Wiki, and the Personal Wiki solutions listed here.

SICP Notes and Exercises:

Notes

Quotes

Exercises

2.1:

(define (make-rat n d)
  (let ((g (gcd n d)))
    (if (positive? (/ n d))
        (cons (abs (/ n g)) (abs (/ d g)))
        (cons (- (abs (/ n g))) (abs (/ d g))))))

;Value: make-rat

2.2:

(define (make-point x y)

(cons x y))

;Value: make-point

(define (x-point point)

(car point))

;Value: x-point

(define (y-point point)

(cdr point))

;Value: y-point

(define (start-segment segment)

(car segment))

;Value: start-segment

(define (end-segment segment)

(cdr segment))

;Value: end-segment

(define (make-segment p1 p2)

(cons p1 p2))

;Value: make-segment

(define (print-point p)

(newline)

(display "(")

(display (x-point p))

(display ",")

(display (y-point p))

(display ")"))

;Value: print-point

(define (midpoint-segment s)
  (make-point (average (x-point (start-segment s))
                       (x-point (end-segment s)))
              (average (y-point (start-segment s))
                       (y-point (end-segment s)))))

;Value: midpoint-segment

This is really interesting to me. I feel like midpoint-segment should be expressible in 2 or 4 lines of code but I can't think of a way to do that elegantly with lets or function composition. The problem is that each time you're composing different sets of functions. Without defining multiple lets or compositions it doesn't compress, and if you do, you lose your LOC gains anyway. I've decided this definition is sufficiently succinct.
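One possible compression is to abstract over the coordinate selector rather than the whole composition; a sketch reusing the average, x-point, and y-point helpers from above:

```scheme
;; the only thing that varies between the two arguments to make-point
;; is which coordinate selector gets applied, so pass the selector in
(define (midpoint-segment s)
  (define (mid coord)
    (average (coord (start-segment s))
             (coord (end-segment s))))
  (make-point (mid x-point) (mid y-point)))
```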

2.3:

;;representation 1 - procedure based, working by magic:

(define (rect-area r)

(* (length r) (width r)))

;Value: rect-area

(define (rect-perimeter r)

(* 2 (+ (length r) (width r))))

;Value: rect-perimeter

(define (make-rect top-left bottom-right)

(cons top-left bottom-right))

;Value: make-rect

(define (length r)

(- (y-point (car r)) (y-point (cdr r))))

;Value: length

(define (width r)

(- (x-point (cdr r)) (x-point (car r))))

;Value: width

;;representation 2 - not procedure based, working by reality:

(define (make-rect top bottom)

(cons top bottom))

;Value: make-rect

(define (rect-top r)

(car r))

;Value: rect-top

(define (rect-bottom r)

(cdr r))

;Value: rect-bottom

(define (rect-left r)
  (make-segment (start-segment (rect-top r))
                (start-segment (rect-bottom r))))

;Value: rect-left

(define (rect-right r)
  (make-segment (end-segment (rect-top r))
                (end-segment (rect-bottom r))))

;Value: rect-right

(define (length r)

(- (y-point (start-segment (rect-top r)))

(y-point (start-segment (rect-bottom r)))))

;Value: length

(define (width r)

(- (x-point (end-segment (rect-top r)))

(x-point (start-segment (rect-top r)))))

;Value: width

What working by magic really seems to do is make the cruft that's unnecessary to your implementation obvious.

2.4:

(define (cons x y)

(lambda (m) (m x y)))

;Value: cons

(define (car z)

(z (lambda (p q) p)))

;Value: car

(define (cdr z)

(z (lambda (p q) q)))

;Value: cdr

Wow. Just wow.
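A quick substitution-model trace of why it works, for the car case (cdr is symmetric):

```scheme
(car (cons x y))
((cons x y) (lambda (p q) p))            ;; expand car
((lambda (m) (m x y)) (lambda (p q) p))  ;; expand cons
((lambda (p q) p) x y)                   ;; the pair closure applies the selector to x and y
x                                        ;; the selector keeps its first argument
```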

2.5:

(define (cons a b)

(* (expt 2 a) (expt 3 b)))

;Value: cons

(define (what-exponent x y)

(define (exp-iter count)

(if (= (modulo y (expt x count)) 0)

(exp-iter (+ count 1))

(- count 1)))

(exp-iter 1))

;Value: what-exponent

(define (car x)

(what-exponent 2 x))

;Value: car

(define (cdr x)

(what-exponent 3 x))

;Value: cdr

This isn't quite as evil as the problem description makes it sound.
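A sanity check; uniqueness of prime factorization (2^a * 3^b) is what guarantees car and cdr can recover the parts:

```scheme
(cons 3 4) ;; 2^3 * 3^4
;Value: 648

(car 648)
;Value: 3

(cdr 648)
;Value: 4
```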

2.6:

Whew boy. Here goes...

(define zero (lambda (f) (lambda (x) x)))

;Value: zero

(define (add-1 n)

(lambda (f) (lambda (x) (f ((n f) x)))))

;Value: add-1

(add-1 zero)

(lambda (f) (lambda (x) (f ((zero f) x))))

(lambda (f) (lambda (x) (f x))) ;; this was the difficult step for me. why? i couldn't understand how ((zero f) x) got worked down to x. I knew that the identity function was what eventually got returned but I figured it received f as its argument. The trick was recalling that f gets passed into a function which does NOTHING WITH F and returns the identity function anyway. (zero f) reduces to the identity function because the first lambda in zero just throws its argument away. Hence, you have (identity x), which is just x, and that leaves this result as one. Somewhat sadly, formatting my code so that the substitution wasn't all on one line also could've made the difference and saved me a week or so.

(define one (lambda (f) (lambda (x) (f x))))

;Value: one

(add-1 one)

(lambda (f) (lambda (x) (f ((one f) x))));; again the f arg is thrown away and x is put into the second lambda to give...

(lambda (f) (lambda (x) (f (f x))))

(define two (lambda (f) (lambda (x) (f (f x)))))

;Value: two

;;clearly we're adding an application of f each time we add one. for example...

((two square) 5)

;Value: 625

;; which is the square of the square of 5 (* 25 25)

;;now i'm supposed to define an addition function which should perform like so:

(add one two)

(add (lambda (f) (lambda (x) (f x)))

(lambda (f) (lambda (x) (f (f x)))))

...

(lambda (f) (lambda (x) (f (f (f x)))))

;;and then allow us to do this

(((add one two) square) 5)

(square (square (square 5)))

;Value: 390625

;;maybe the hard part of this problem is holding multiple levels of evaluation in your head at the same time. anyway...

;;it seems like what we really want to do is feed the f chains into each other somehow...

(define (add a b)

(lambda (f) (lambda (x) ((a f) (b f)) x)))

;Value: add

;;this is tempting but wrong. i realized you had to pass the f in to make sure you got the correct repeated calls but missed that if you passed (b f) into the resulting function you were passing a procedure instead of a value.

(define (add a b)

(lambda (f) (lambda (x) ((a f) ((b f) x)))))

;Value: add

(add one two)

(lambda (f) (lambda (x) ((one f) ((two f) x))))

(lambda (f) (lambda (x) ((one f) ((lambda (x) (f (f x))) x))))

(lambda (f) (lambda (x) ((one f) (f (f x)))))

(lambda (f) (lambda (x) ((lambda (x) (f x)) (f (f x)))))

(lambda (f) (lambda (x) (f (f (f x)))))

;;you want to hear what's really gross? i found that this worked for odd numbers but not even numbers and tried unsuccessfully to figure out what was wrong for an hour before re-evaluating my definitions for one and two and seeing it "just work".

(((add one two) square) 5)

(define (test churchnum)

(define (inc x)

(+ x 1))

((churchnum inc) 0))

;Value: test

(test (add one two))

;Value: 3

;;it's sort of insulting that after writing all that code you realize you just implemented a fancy lambda version of repeated for functions/church numerals.

;;proving above point:

(define (compose f g)

(lambda (x) (f (g x))))

;Value: compose

(define (repeated f n)

(if (= n 1)

f

(compose f (repeated f (- n 1)))))

;Value: repeated

(define (add a b)

(lambda (f) (repeated f (+ a b))))

;Value: add

;;of course, this pretends that church numerals are integers but...you get the idea.
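While we're at it, multiplication falls out even more cleanly than addition, since a Church numeral already is a "repeat n times" operator. This is the standard definition rather than anything the exercise asks for:

```scheme
(define (mul a b)
  (lambda (f) (a (b f))))  ;; repeat "f repeated b times" a times: a*b applications

(test (mul two two))
;Value: 4
```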

This may have been the hardest problem I encountered thus far and I definitely had to peek at the way other people started solving the problem to get my own ideas flowing in the right direction.

2.7:

(define (lower-bound i)

(car i))

;Value: lower-bound

(define (upper-bound i)

(cdr i))

;Value: upper-bound

2.8:

(define (sub-interval x y)
  (let ((p1 (- (lower-bound x) (lower-bound y)))
        (p2 (- (lower-bound x) (upper-bound y)))
        (p3 (- (upper-bound x) (lower-bound y)))
        (p4 (- (upper-bound x) (upper-bound y))))
    (make-interval (min p1 p2 p3 p4)
                   (max p1 p2 p3 p4))))

;Value: sub-interval

Similar to the addition function, the maximum difference is the one between the furthest-apart bounds and the minimum difference is the one between the closest bounds. Taking the min and max of all four differences seems to be the best way to test for that.
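An alternative that skips the four differences entirely: negate y (which swaps and negates its bounds) and reuse add-interval. A sketch, assuming the text's add-interval is loaded:

```scheme
(define (sub-interval x y)
  (add-interval x (make-interval (- (upper-bound y))  ;; negating flips the bounds
                                 (- (lower-bound y)))))
```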

2.9:

(define (width-interval x)

(/ (- (upper-bound x) (lower-bound x))

2))

;Value: width-interval

(width-interval (mul-interval inter1 inter2))

;Value: 24

(width-interval (div-interval inter1 inter2))

;Value: .5333333333333334

(width-interval (add-interval inter1 inter2))

;Value: 4

(width-interval (sub-interval inter1 inter2))

;Value: 4

Observe that the width of the interval which is the difference between inter1 and inter2 is identical to the width of the interval which is the sum of inter1 and inter2.

This fact indicates that the width of summed or subtracted intervals is a function of the width of their source intervals. You can clearly see that the width of the intervals produced by multiplying or dividing inter1 and inter2 do not share this trait. Thus, the width of multiplied or divided intervals is not a function of their source intervals alone.
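A concrete demonstration of that last claim: two pairs of source intervals with identical widths whose products have different widths.

```scheme
;; all four source intervals below have width 1
(width-interval (mul-interval (make-interval 0 2) (make-interval 4 6)))
;Value: 6   ;; the product is [0, 12]

(width-interval (mul-interval (make-interval 2 4) (make-interval 4 6)))
;Value: 8   ;; the product is [8, 24]
```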

2.10:

(define (div-interval x y)
  (if (and (<= (lower-bound y) 0) (>= (upper-bound y) 0))
      (error "Cannot divide by an interval that spans zero." y)
      (mul-interval x
                    (make-interval (/ 1.0 (upper-bound y))
                                   (/ 1.0 (lower-bound y))))))

;Value: div-interval

This fixes the division issue by simply signaling an error when the divisor interval spans zero. Whether intervals that "span" zero should be allowed at all is up for debate.

2.11:

(define (mul-interval x y) ;;even with lets this is ugly. i object!
  (let ((a (lower-bound x))
        (b (upper-bound x))
        (c (lower-bound y))
        (d (upper-bound y)))
    (cond ((and (> a 0) (> c 0))           ;; x and y both positive
           (make-interval (* a c) (* b d)))
          ((and (> a 0) (< c 0) (> d 0))   ;; x positive, y spans zero
           (make-interval (* b c) (* b d)))
          ((and (> a 0) (< d 0))           ;; x positive, y negative
           (make-interval (* b c) (* a d)))
          ((and (< a 0) (> b 0) (> c 0))   ;; x spans zero, y positive
           (make-interval (* a d) (* b d)))
          ((and (< a 0) (> b 0) (< d 0))   ;; x spans zero, y negative
           (make-interval (* b c) (* a c)))
          ((and (< b 0) (> c 0))           ;; x negative, y positive
           (make-interval (* a d) (* b c)))
          ((and (< b 0) (< c 0) (> d 0))   ;; x negative, y spans zero
           (make-interval (* a d) (* a c)))
          ((and (< b 0) (< d 0))           ;; x and y both negative
           (make-interval (* b d) (* a c)))
          (else                            ;; both span zero: the one case needing all four products
           (make-interval (min (* a d) (* b c))
                          (max (* a c) (* b d)))))))

;Value: mul-interval

Eww. Gross.
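With nine cases it's easy to slip up, so it seems prudent to cross-check against the text's original four-multiplication mul-interval on assorted sign combinations:

```scheme
(define (mul-interval-ref x y) ;; the book's original version, kept as a reference
  (let ((p1 (* (lower-bound x) (lower-bound y)))
        (p2 (* (lower-bound x) (upper-bound y)))
        (p3 (* (upper-bound x) (lower-bound y)))
        (p4 (* (upper-bound x) (upper-bound y))))
    (make-interval (min p1 p2 p3 p4)
                   (max p1 p2 p3 p4))))

(define (same-interval? a b)
  (and (= (lower-bound a) (lower-bound b))
       (= (upper-bound a) (upper-bound b))))

(same-interval? (mul-interval (make-interval -2 3) (make-interval 1 5))
                (mul-interval-ref (make-interval -2 3) (make-interval 1 5)))
;Value: #t
```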

2.12:

(define (make-center-percent center tolerance)

(make-center-width center (* (/ tolerance 100) center)))

;Value: make-center-percent

(define (percent i)

(* (/ (width i) (center i)) 100))

;Value: percent

(percent (make-center-percent 8 5))

;Value: 5
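Peeking ahead to 2.13: for small percentage tolerances, the tolerance of a product is approximately the sum of the tolerances of its factors, which is easy to eyeball with these selectors (assumes mul-interval, width, and center are loaded):

```scheme
(percent (mul-interval (make-center-percent 8 5)
                       (make-center-percent 10 3)))
;; roughly 8, i.e. 5 + 3, off only by a small cross term
```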

It's not much and I plan on updating and expanding on this but it's done for now.

"You got an F. What the hell's the matter with you? Ya big failure.

Final Grade: 20.786516853932586%

To be fair, you would've had to do 14.0 problems a week to finish the book in 26 weeks.

They are pretty hard problems. Just keep at it man. You may want to revise your strategy though."

We're 30 weeks into 2008 and I've only done 74 of the 356 problems in that legendary text, the Structure and Interpretation of Computer Programs, which was the central object of my study this semester. That's about two and a half problems a week. Not my brightest shining moment. This whole experience definitely gives me new appreciation for the people that tried to structure and/or educate me in the past. Clearly, I need one of two things:

1) A good kick in the ass to really get going.

2) A new gameplan.

Personally, I'm going to try a mix of the two. Where 1) is concerned I recently wrote a self-study program (the biggest program I've ever written, actually) to help me keep abreast of my own progress and help me chart my course a bit. Where 2) is concerned I'm going to have to start making concessions to maintain momentum and I'm not entirely comfortable with that.

What concessions do I mean? Well, some of the SICP problems are hard. Really hard. Unreasonably hard (see Exercise 4.79 at the bottom for which a good answer is "probably worth a Ph.D."). The book has its reputation for a reason. It's a reputation of difficulty but also of enlightenment. A lot of very smart people say it's the best way to learn Computer Science and probably the best book on the subject yet written. I'm willing to take their word for it. Anyway, there are problems that I get hung up on and I haven't been letting myself move on to the next section of the book without solving all the problems in the current section. That just isn't scaling well. I'm already hung up on the last 4 problems in Section 2.1. God knows what would happen come 4.4. I'll surely never finish the thing if I don't let myself move forward.

With that in mind, a week or so ago I did let myself move forward a bit and work on Section 2.2. I've already got about a third of it done. Maybe even half. I'm worried about this because I want to stay honest. I don't want to shirk the hard stuff. I won't move past problems unless I'm *really* stumped and I will circle back at various points to try to work through them. Aside from SICP, I've worked on HTDP (How To Design Programs) and CA (Concrete Abstractions) as well this semester. I got an almost reasonable portion of HTDP done but next to nothing on CA. I'd really like to try plowing through as much of those three books and The C Programming Language (rocking the 1st ed.) as possible before Xmas.

Semester 3 (starting in January) I'm hoping to work on Algorithms (DPV, not CLRS), Essentials of Programming Languages (1st edition, baby!) and one of my Operating Systems texts. Of course, Discrete Math (5th ed) would be more prudent and judging by this semester this could all be revised by Xmas. Well, back to work. Happy Hacking!

So, I've finally gotten around to cleaning up SICP Section 1.3. It's not quite done but it's damn close. For now, I want to start posting some of the HTDP code I've been writing to get back in the hacking habit over the past few days. I also have some of Concrete Abstractions done and in my source code repository but it's nothing substantial. Without further ado, here's HTDP Section 02 (of 43!). Sections 03 and 04 will go up tomorrow. Note: I skipped HTDP Section 01 because there are no exercises or problems whatsoever.

Resources:

Read: Sections 01 and 02

Watch: Nothing. To my knowledge there are no online lectures based around HTDP. Correct me if I'm wrong.

Checked against: Nothing. Again, to my knowledge there are no available sources to check your answers beyond the locked solutions on the official site and message boards. That's one reason I'm excited about doing HTDP this way along with SICP. The plethora of SICP resources stands in contrast to an absolute dearth of resources for HTDP.

Exercises

2.1.1:

DrScheme does have operations for squaring (sqr x), computing sines (sin x), and finding maximums (max x y). If you are not running the HTDP Beginning Student Language, though, these functions may not be available.

2.1.2:

(sqrt 4)

2

(sqrt 2)

#i1.4142135623730951

(sqrt -1)

0+1i

;;(tan x) determines the tangent of a given angle.

2.2.1:

(define (Fahrenheit->Celsius fahr)

(* (- fahr 32) (/ 5 9)))

The teachpack worked as intended. Just go to Language -> Add Teachpack. Feel free to test the different convert-*s on your own.

2.2.2:

(define (dollar->euro dollars)

(* .642 dollars)) ;; as of 05/18/08

2.2.3:

(define (triangle side height)

(/ (* side height) 2))

2.2.4:

(define (convert3 first second third)

(+ (* 100 third) (* 10 second) (* 1 first)))

This was sort of counter-intuitive. The idea that this is related to something in an Algebra book is true but misleadingly so. You could try to do something fancy with max but that's not the idea.
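A usage example makes the digit reversal clearer:

```scheme
(convert3 1 2 3) ;; digits arrive least-significant first
;Value: 321
```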

2.2.5:

(define (f n)

(+ (/ n 3) 2))

;;The evaluations for 2, 5, and 9 are 8/3, 11/3, and 5, respectively.

(define (f n)

(+ 10 (sqr n)))

;;The evaluations for 2 and 9 are 14 and 91, respectively.

(define (f n)

(+ 20 (* (sqr n) .5)))

;;The evaluations for 2 and 9 are 22 and 60.5, respectively.

(define (f n)

(- 2 (/ 1 n)))

;;The evaluations for 2 and 9 are 3/2 and 17/9, respectively.

2.3.1:

(define (tax income)

(* .15 income))

(define (netpay hrs)

(- (wage hrs) (tax (wage hrs))))

;;supplementary functions:

(define (wage hrs)

(* 12 hrs))

2.3.2:

(define (sum-coins pennies nickels dimes quarters)

(+ (* .01 pennies) (* .05 nickels) (* .1 dimes) (* .25 quarters)))

2.3.3:

(define (total-function attendees)

(- (* 5 attendees) (+ 20 (* .5 attendees))))

2.4.1:

(10) causes an error because the interpreter expects a function, procedure or expression after the open paren, but it has been given primitive data, i.e. a number.

(10 + 20) is incorrect because the expression uses infix rather than prefix notation, but the error from the interpreter is the same. This is because the interpreter has been given a number rather than a procedure as its operator.

(+ +) fails because the operator + is given only one argument (it requires a minimum of two) and that argument is a function, which is the wrong type of input.

2.4.2:

(define (f x)

(+ x 10))

;;The argument to f needed to be changed.

(define (g x)

(+ x 10))

;;There was a missing open-paren before the + operator.

(define (h x)

(+ x 10))

;;The open-paren was in front of x when it should have been in front of h.

2.4.3:

;;> (+ 5 (/ 1 0))

;;/: division by zero

;;> (sin 10 20)

;;sin: expects 1 argument, given 2: 10 20

;;> (somef 10)

;;reference to an identifier before its definition: somef

2.4.4:

(define (somef x)

(sin x x))

;;> (somef 10 20)

;;somef: this procedure expects 1 argument, here it is provided 2 arguments

;;> (somef 10)

;;sin: expects 1 argument, given 2: 10 10

The section ends with a bit on program design. It makes the important point of having human-solved examples to test against. Sounds like an argument for unit tests to me.
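In fact, the HTDP teaching languages in recent DrScheme versions ship a built-in unit-test form, check-expect, so the recipe's hand-worked examples can be written directly as tests. A sketch against Fahrenheit->Celsius from 2.2.1 (assuming your DrScheme version has check-expect):

```scheme
;; each check-expect records an expectation; DrScheme runs them all on Run
(check-expect (Fahrenheit->Celsius 212) 100)
(check-expect (Fahrenheit->Celsius 32) 0)
```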

Additionally, I'm way behind on programming. I know. I've had a lot going on but my progress the last month or two is still just shameful. I've started HTDP to get the juices flowing again and am already through Section 03. It's definitely more straightforward than SICP if less revelatory. I'm considering going ahead and trying to blow through HTDP completely over the next month or two. Then I could circle back to SICP and hopefully be better prepared. I haven't decided on anything yet other than tidying up the presently unadorned answers to SICP 1.3 and then posting what I've got from HTDP so far. I am more than half-way through SICP 2.1 but I'm wondering if it makes more sense to knock out HTDP considering the difference in pace between the books. I'll let you know as I move forward. I'm hoping to get a post up with some pictures of my new digs in the next week or so. Feel free to drop me a line if you'd like to swing by.

At long last, I'm through Chapter 1 of SICP. I'm a bit disappointed that Closures haven't been covered yet but they're in the first few pages of Chapter 2 and I've already got a few problems solved. As a matter of fact, I finished Chapter 1 last Wednesday; it just takes time to get these posts up. I have a feeling I need to go back and study those explanations of Lexical Scope in Chapter 1 though. I'll try to write more about the experience thus far in a separate post. For now, here are my results for Section 1.3.

Resources:

Read: Chapter 1 through Section 1.3

Watch: Lectures 2-a

Checked against: Eli Bendersky's Blog, SICP Wiki, Ken Dyck's Solutions, Theloserblog, Wfasim's Solutions, Autodidact and Lispy for Inspiration.

SICP Notes and Exercises:

Notes

Pgs. 63-66: Discussion of Let and Local Variable Binding.

Pg. 76: Discussion of First-Class Status in programming languages.

Quotes

"I'm going to write the...procedure here explicitly without giving it a name. I'm doing it anonymously, I don't necessarily have to give a name to something if I just want to use it once." - Gerald Jay Sussman, approx. 17:00, Lecture 2-a

"Procedures can be named by variables. Procedures are not special...Therefore they can be passed from one to another as arguments." - Gerald Jay Sussman, approx. 20:00, SICP Lecture 1-B from Swiss Archive, Higher-Order Functions Explanation

"Talent is to a great extent knowledge that we haven't yet learned how to formalize." - Gerald Jay Sussman, approx. 55:00, The Legacy of Computer Science

Exercises

1.29:

This exercise definitely wasn't easy. I think most of the difficulty is in figuring out how the math works and how the functions are all feeding into each other.

`(define (cube x) (* x x x))`

;Value: cube

(define (sum term a next b)

(if (> a b)

0

(+ (term a)

(sum term (next a) next b))))

;Value: sum

(define (simpsons-rule f a b n)
  (define h (/ (- b a) n))
  (define (k-term k)
    (cond ((or (= k 0) (= k n)) 1)
          ((even? k) 2)
          (else 4)))
  (define (yk k)
    (* (k-term k)
       (f (+ a (* k h)))))
  (* (sum yk 0 (lambda (k) (+ k 1)) n)
     (/ h 3)))

;Value: simpsons-rule

(simpsons-rule cube 0 1 100)

;Value: 1/4

(simpsons-rule cube 0 1 1000)

;Value: 1/4
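The 1/4 in both runs is exact, and that's no accident: the composite Simpson's rule carries the standard error bound

```latex
E = -\frac{(b-a)\,h^4}{180}\, f^{(4)}(\xi), \qquad \xi \in (a,b)
```

and for f(x) = x^3 the fourth derivative vanishes identically, so the rule integrates cubics exactly regardless of n (and the integral of x^3 from 0 to 1 really is 1/4). An integrand like 1/x, whose fourth derivative doesn't vanish, would actually exercise the n parameter.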

1.30:

Personally I think it's really nice that Abelson and Sussman have been throwing in these sort of review problems. They make me feel like I'm learning something. They give me hope. I solved this one in about 1 minute and a half and thought, "Hey, maybe I'm not a complete idiot. Maybe I'll actually know something about programming one day."

`(define (sum term a next b)`

(define (iter a result)

(if (> a b)

result

(iter (next a) (+ (term a) result))))

(iter a 0))

;Value: sum

1.31:

a.

`(define (product term a next b)`

(if (> a b)

1

(* (term a)

(product term (next a) next b))))

;Value: product

(define (factorial n)

(product (lambda (x) x) 1 (lambda (x) (+ x 1)) n))

;Value: factorial

(define (pi-approx approximations)

(define (pi-term denom) (/ (- (square denom) 1) (square denom)))

(define (next-term denom) (+ denom 2))

(product pi-term 3 next-term approximations))

;Value: pi-approx

(pi-approx 40)

;Value: 4722366482869645213696/5938020471163465810125 (.795276)

I just changed the variable names and commented pi-approx. The comment is omitted here in favor of this explanation. I couldn't figure out what on earth I was doing in the original so I actually wrote a brand new pi-approx with a different approach before realizing my original version was both correct and, I suspect, faster. I was computing 2 terms at a time based on their shared denominator.
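For the record, the product converges to pi/4 (it's a regrouped Wallis product), so scaling by 4 recovers the familiar constant, if slowly:

```scheme
(* 4 (exact->inexact (pi-approx 41)))
;; decreases toward pi = 3.14159... as the limit grows
```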

b.

(define (product term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (* (term a) result))))
  (iter a 1))

;Value: product

1.32:

a.

`(define (accumulate combiner null-value term a next b)`

(if (> a b)

null-value

(combiner (term a)

(accumulate combiner null-value term (next a) next b))))

;Value: accumulate

(define (sum term a next b)

(accumulate + 0 term a next b))

;Value: sum

(define (product term a next b)

(accumulate * 1 term a next b))

;Value: product

b.

(define (accumulate combiner null-value term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (combiner (term a) result))))
  (iter a null-value))

;Value: accumulate

1.33:

`(define (filtered-accumulate combiner null-value term a next b filter)`

(cond ((> a b) null-value)

((filter a) (combiner (term a)

(filtered-accumulate combiner null-value term

(next a) next b filter)))

(else (filtered-accumulate combiner null-value term

(next a) next b filter))))

;Value: filtered-accumulate

a.

`(define (sum-square-primes a b)`

(filtered-accumulate + 0 square a inc b prime?))

;Value: sum-square-primes

b.

`(define (product-relative-primes n)`

(define (relatively-prime i)

(= (gcd i n) 1))

(filtered-accumulate * 1 identity 1 inc n relatively-prime))

;Value: product-relative-primes

1.34:

The procedure f only actually produces output when its argument is another procedure, specifically a procedure which takes one formal parameter. Given a procedure of a different arity it will produce an error regarding the wrong number of arguments and given a non-procedural argument it will complain about the object not being applicable.

1.35:

`(define tolerance 0.00001)`

;Value: tolerance

(define (fixed-point f first-guess)

(define (close-enough? v1 v2)

(< (abs (- v1 v2)) tolerance))

(define (try guess)

(let ((next (f guess)))

(if (close-enough? guess next)

next

(try next))))

(try first-guess))

;Value: fixed-point

(define (golden-ratio)

(fixed-point (lambda (x) (+ 1 (/ 1 x))) 1.0))

;Value: golden-ratio

(golden-ratio)

;Value: 1.6180327868852458

Things are pretty straightforward from 1.29 through 1.36. The main thing to remember on 1.35 and 1.36 is that a transformation is just a function and serves as the f in the fixed-point.

1.36:

`(define (fixed-point f first-guess)`

(define (close-enough? v1 v2)

(< (abs (- v1 v2)) tolerance))

(define (try guess)

(let ((next (f guess)))

(display guess)

(newline)

(if (close-enough? guess next)

next

(try next))))

(try first-guess))

;Value: fixed-point

(define (solve-for-x)

(fixed-point (lambda (x) (/ (log 1000) (log x))) 2.0))

;Value: solve-for-x

(solve-for-x)

2.

9.965784284662087

3.004472209841214

6.279195757507157

3.759850702401539

5.215843784925895

4.182207192401397

4.8277650983445906

4.387593384662677

4.671250085763899

4.481403616895052

4.6053657460929

4.5230849678718865

4.577114682047341

4.541382480151454

4.564903245230833

4.549372679303342

4.559606491913287

4.552853875788271

4.557305529748263

4.554369064436181

4.556305311532999

4.555028263573554

4.555870396702851

4.555315001192079

4.5556812635433275

4.555439715736846

4.555599009998291

4.555493957531389

4.555563237292884

4.555517548417651

4.555547679306398

4.555527808516254

4.555540912917957

;Value: 4.555532270803653

(define (solve-for-x)

(fixed-point (lambda (x) (average x (/ (log 1000) (log x)))) 2.0))

;Value: solve-for-x

(solve-for-x)

2.

5.9828921423310435

4.922168721308343

4.628224318195455

4.568346513136242

4.5577305909237005

4.555909809045131

4.555599411610624

4.5555465521473675

;Value: 4.555537551999825

Pretty impressive. solve-for-x went from taking 34 steps to 9 steps thanks to average damping. I wonder what it does for golden ratio? And sqrt's for various inputs...
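To answer my own first question, damping the golden-ratio map is a one-line change (average is already in use above):

```scheme
(define (golden-ratio-damped)
  (fixed-point (lambda (x) (average x (+ 1 (/ 1 x)))) 1.0))
```

I'd expect the savings to be smaller here: the derivative of 1 + 1/x at the fixed point is only about -0.38, versus roughly -0.66 for the log map, which is why the undamped solve-for-x oscillated for so long.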

1.37:

a.

`(define (cont-frac n d k)`

(define (frac-iter i)

(if (< i k)

(/ (n i) (+ (d i) (frac-iter (+ i 1))))

(/ (n i) (d i))))

(frac-iter 1))

;Value: cont-frac

(cont-frac (lambda (i) 1.0) (lambda (i) 1.0) 11)

;Value: .6180555555555556

b.

(define (cont-frac n d k)
  (define (frac-iter count result)
    (if (= count 0)
        result
        (frac-iter (- count 1)
                   (/ (n count) (+ (d count) result)))))
  (frac-iter k 0))

;Value: cont-frac

(cont-frac (lambda (i) 1.0) (lambda (i) 1.0) 11)

;Value: .6180555555555556

The main thing that's tricky about 1.37 is figuring out the math of continued fractions and starting with the base case of the last term and working backwards.

1.38:

`(define (euler-expand)`

(define (d-fun i)

(cond ((= (modulo i 3) 2) (* (ceiling (/ i 3)) 2))

(else 1)))

(cont-frac (lambda (i) 1.0) d-fun 8))

;Value: euler-expand

(euler-expand)

;Value: .7182795698924731

So, my original iterative version of cont-frac didn't actually work for this problem: it treated division as though it were commutative, and it isn't. It took me a while to figure that out.

1.39:

(define (tan-cf x k)
  (define (d i)
    (- (* 2 i) 1))
  (define (n i)
    (if (= i 1)
        x
        (square x)))
  (cont-frac n d k))

;Value: tan-cf

(tan-cf 1.0 5)

;Value: 1.5574074074074076

This one is actually fairly tricky. If you fail to notice that this is a continued fraction that subtracts rather than adds you're completely hosed. I modified my cont-frac procedure to fix this once I noticed. There's probably an elegant way to extend cont-frac to accommodate these different uses (subtracting versus adding continued fractions, etc.) but I'm not going to chase it down myself. Anybody feel like improving on this?
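Taking my own bait: instead of modifying cont-frac to subtract, you can feed the stock (adding) cont-frac negated numerators, since x/(1 - x^2/(3 - ...)) equals x/(1 + (-x^2)/(3 + (-x^2)/(...))). A sketch using the recursive cont-frac from 1.37:

```scheme
(define (tan-cf x k)
  (cont-frac (lambda (i) (if (= i 1) x (- (square x))))
             (lambda (i) (- (* 2 i) 1))
             k))

(tan-cf 1.0 5)
;Value: 1.5574074074074076
```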

1.40:

`(define (cubic a b c)`

(lambda (x) (+ (expt x 3) (* a (expt x 2)) (* b x) c)))

(define dx 0.00001)

;Value: dx

(define (fixed-point-of-transform g transform guess)

(fixed-point (transform g) guess))

;Value: fixed-point-of-transform

(define (cubic a b c)

(lambda (x) (+ (expt x 3) (* a (expt x 2)) (* b x) c)))

;Value: cubic

(define (deriv g)

(lambda (x) (/ (- (g (+ x dx)) (g x)) dx)))

;Value: deriv

(define (newton-transform g)

(lambda (x) (- x (/ (g x) ((deriv g) x)))))

;Value: newton-transform

(define (newtons-method g guess)

(fixed-point (newton-transform g) guess))

;Value: newtons-method

(newtons-method (cubic 4 3 2) 1)

;Value: -3.2695308420809894

I didn't realize I just needed to literally translate the function. After I knew that I was fine. Again, time to study more math.

1.41:

`(define (double x)`

(lambda (i) (x (x i))))

;Value: double

(define (inc x) (+ x 1))

;Value: inc

((double inc) 0)

;Value: 2

(((double (double double)) inc) 5)

;Value: 21

;;This is because a double on a (double double) is effectively a square.

(double double)

;Value 16: #[compound-procedure 16]

(((double (double double)) inc) 0)

;Value: 16

((double (double (double inc))) 0)

;Value: 8

(((double (double (double double))) inc) 0)

;Value: 256

1.42:

`(define (compose f g)`

(lambda (i) (f (g i))))

;Value: compose

((compose square inc) 6)

;Value: 49

1.43:

`(define (repeated f n)`

(if (= n 1)

f

(compose f (repeated f (- n 1)))))

;Value: repeated

((repeated square 2) 5)

;Value: 625

Wow! That was a lot easier to think about using compose.

1.44:

`(define (smooth f)`

(define dx 0.00001)

(lambda (x) (/ (+ (f (- x dx)) (f x) (f (+ x dx))) 3)))

;Value: smooth

((smooth square) 2)

;Value: 4.000000000066667

(define (n-smoothed f n)
  ((repeated smooth n) f))

;Value: n-smoothed

((n-smoothed square 16) 2)

;Value: ~4.000000001

Check The Loser Blog for a potentially better answer.

1.45:

`(define tolerance 0.00001)`

;Value: tolerance

(define (fixed-point f first-guess)

(define (close-enough? v1 v2)

(< (abs (- v1 v2)) tolerance))

(define (try guess)

(let ((next (f guess)))

(if (close-enough? guess next)

next

(try next))))

(try first-guess))

;Value: fixed-point

(define (average x y)

(/ (+ x y) 2))

;Value: average

(define (average-damp f)

(lambda (x) (average x (f x))))

;Value: average-damp

(define (nth-root x n)

(fixed-point (repeated

(average-damp (lambda (y) (/ x (expt y (- n 1)))))

(ceiling (/ n 2))) 1.0))

;Value: nth-root

(define (compose f g)

(lambda (x) (f (g x))))

;Value: compose

(define (repeated f n)

(if (= n 1)

f

(compose f (repeated f (- n 1)))))

;Value: repeated

(define (nth-root x n)

(fixed-point-of-transform (lambda (y) (/ x (expt y (- n 1))))

(repeated average-damp (log2 n)) 1.0))

;Value: nth-root

(define (log2 n)

(if (= 1 n)

0

(+ (log2 (floor (/ n 2))) 1)))

;Value: log2

After testing the first 15 powers with my version of nth-root I couldn't figure out the relationship between n and the number of times to average damp (it turns out you need floor(log2 n) dampings). Just about everyone had trouble with this but I found the correct answer in Eli's comment thread...
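A quick spot check of the log2 version on exact powers (assumes fixed-point-of-transform and average-damp from earlier in the section are loaded):

```scheme
(nth-root 32 5)    ;; 2^5, should settle near 2.0
(nth-root 1024 10) ;; 2^10, likewise near 2.0
```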

1.46:

`(define (iterative-improve tester improver)`

(define (iter guess x)

(if (tester guess)

guess

(iter (improver guess) x)))

(lambda (x) (iter 1.0 x)))

;Value: iterative-improve

(define (sqrt x)

((iterative-improve

(lambda (guess) (< (abs (- (square guess) x)) 0.00001))

(lambda (guess) (average guess (/ x guess)))) x))

;Value: sqrt

(define (average x y)

(/ (+ x y) 2))

;Value: average

(sqrt 2)

;Value: 1.4142156862745097

(define (fixed-point f x)

((iterative-improve

(lambda (guess) (< (abs (- guess (f guess))) 0.00001))

(lambda (guess) (f guess))) x))

;Value: fixed-point

(fixed-point cos 1.0)

;Value: .7390893414033927

(Edit: 05/18/08) Well, that wraps it up for Section 1.3. I can't believe how long it took me to find the time to come back and clean these answers up a bit. I have had a lot going on though. There will be a few small changes in convention starting in SICP 2.1 to make things more manageable for me. As always, the most up to date code is in the repo.

I finally finished SICP Section 1.2 last night. I'm tremendously excited because this means that next week I can start tackling Higher Order Functions and (I hope) Closures. At any rate, here is the last month's work:

Resources:

Read: Chapter 1 through Section 1.2

Watch: Lectures 1-b

Checked against: Eli Bendersky's Blog, SICP Wiki, Ken Dyck's Solutions, Autodidact and Lispy for Inspiration.

SICP Notes and Exercises:

Notes

Pg. 35: Explanations of Iteration and Recursion in Processes and Procedures and Tail-Recursion in Compilers.

Maybe I was wrong about SICP. I mean the hardest thing about these exercises was letting the stuff sit in my head for a bit. And the motivation to get some of the more lengthy ones done. We'll see how this goes.

Quotes

"A recursive definition does not necessarily lead to a recursive process." - Gerald Jay Sussman, SICP Lecture 1-B's Time-Space Complexity explanation, approx. 25:30 - 30:30

"The key to understanding complicated things is knowing what not to look at." - Gerald Jay Sussman, SICP Lecture 1-B from Swiss Archive, approx. 10:00

"The reason why people think of programming as being hard is because you're writing down a general rule which is going to be used for lots of instances that a particular instance must process correctly." - Gerald Jay Sussman, SICP Lecture 1-B from Swiss Archive, approx. 46:45

Exercises

1.9:

The first procedure evaluates as follows:

(inc (+ 3 5))

(inc (inc (+ 2 5)))

(inc (inc (inc (+ 1 5))))

(inc (inc (inc (inc (+ 0 5)))))

(inc (inc (inc (inc 5))))

(inc (inc (inc 6)))

(inc (inc 7))

(inc 8)

9

This is a recursive procedure and a recursive process.

The second procedure evaluates as follows:

(+ 3 6)

(+ 2 7)

(+ 1 8)

(+ 0 9)

9

This is a recursive procedure but an iterative process.

1.10:

(A 1 10) evaluates as follows:

(A 0 (A 1 9))

(A 0 (A 0 (A 1 8)))

(A 0 (A 0 (A 0 (A 1 7))))

(A 0 (A 0 (A 0 (A 0 (A 1 6)))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 1 5))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 4)))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 3))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 2)))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 1))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 2)))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 4))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 8)))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 16))))))

(A 0 (A 0 (A 0 (A 0 (A 0 32)))))

(A 0 (A 0 (A 0 (A 0 64))))

(A 0 (A 0 (A 0 128)))

(A 0 (A 0 256))

(A 0 512)

1024

(A 2 4) evaluates as follows:

(A 1 (A 2 3))

(A 1 (A 1 (A 2 2)))

(A 1 (A 1 (A 1 (A 2 1))))

(A 1 (A 1 (A 1 2)))

(A 1 (A 1 (A 0 (A 1 1))))

(A 1 (A 1 (A 0 2)))

(A 1 (A 1 4))

(A 1 (A 0 (A 1 3)))

(A 1 (A 0 (A 0 (A 1 2))))

(A 1 (A 0 (A 0 (A 0 (A 1 1)))))

(A 1 (A 0 (A 0 (A 0 2))))

(A 1 (A 0 (A 0 4)))

(A 1 (A 0 8))

(A 1 16)

(A 0 (A 1 15))

(A 0 (A 0 (A 1 14)))

(A 0 (A 0 (A 0 (A 1 13))))

(A 0 (A 0 (A 0 (A 0 (A 1 12)))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 1 11))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 10)))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 9))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 8)))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 7))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 6)))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 5))))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 4)))))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 3))))))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 2)))))))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 1))))))))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 2)))))))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 4))))))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 8)))))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 16))))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 32)))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 64))))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 128)))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 256))))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 512)))))))

(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 1024))))))

(A 0 (A 0 (A 0 (A 0 (A 0 2048)))))

(A 0 (A 0 (A 0 (A 0 4096))))

(A 0 (A 0 (A 0 8192)))

(A 0 (A 0 16384))

(A 0 32768)

65536

(A 3 3) evaluates as follows:

(A 2 (A 3 2))

(A 2 (A 2 (A 3 1)))

(A 2 (A 2 2))

(A 2 (A 1 (A 2 1)))

(A 2 (A 1 2))

(A 2 (A 0 (A 1 1)))

(A 2 (A 0 2))

(A 2 4)

65536

(see evaluation of (A 2 4) above)

The defined procedures f, g, and h intriguingly map as follows:

(f n) -> (* 2 n)

(g n) -> 2^n

(h n) -> 2 raised to itself, n times.
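To sanity-check these mappings, here's the Ackermann-style function from the exercise in a direct Python translation (nothing new, just easier for me to probe interactively):

```python
def A(x, y):
    # The Ackermann-style function from exercise 1.10.
    if y == 0:
        return 0
    if x == 0:
        return 2 * y
    if y == 1:
        return 2
    return A(x - 1, A(x, y - 1))

f = lambda n: A(0, n)  # doubles: 2n
g = lambda n: A(1, n)  # exponentiates: 2^n
h = lambda n: A(2, n)  # iterated exponentiation: a tower of n 2s
```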

##NOTE:

In the book, examples are given of recursive and iterative ways to compute the Fibonacci sequence. However, the iterative example computes a term beyond what is needed to arrive at its final answer: the termination condition is count = 0, at which point b is returned. This is a small change to that program to fix what I perceive as a flaw. Have I missed something?

My version returns a when count = 1.

(define (fib n)

(fib-iter 1 0 n))

(define (fib-iter a b count)

(if (= count 1)

a

(fib-iter (+ a b) a (- count 1))))
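One caveat I noticed while checking this: stopping at count = 1 means the variant never terminates for n = 0, so it assumes n >= 1. A quick Python sketch comparing the two (my own translation, hypothetical names):

```python
def fib_book(n):
    # Book version: stop at count = 0 and return b.
    a, b, count = 1, 0, n
    while count != 0:
        a, b, count = a + b, a, count - 1
    return b

def fib_mine(n):
    # Variant: stop at count = 1 and return a (assumes n >= 1).
    a, b, count = 1, 0, n
    while count != 1:
        a, b, count = a + b, a, count - 1
    return a
```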

1.11:

A tree recursive process that computes f is demonstrated in the following procedure:

(define (f n)

(cond ((< n 3) n)

(else

(+ (f (- n 1)) (* 2 (f (- n 2))) (* 3 (f (- n 3)))))))

An iterative process that computes f is demonstrated in the following procedure:

(define (f n)

(if (< n 3) n (f-iter 2 1 0 n)))

(define (f-iter a b c count)

(cond ((= count 2) a)

(else (f-iter (+ a (* 2 b) (* 3 c)) a b (- count 1)))))

Figuring out the iterative process was scary. It was the first moment I thought I wouldn't be able to do this and might need real help. I was unsure of whether I needed three or four state variables. It clicked after a few minutes of thinking though and was much smoother from there.

1.12:

This one actually gave me some trouble for a bit because I wanted to solve the problem in a non-standard way. After a while, I cracked and read the precursor text (but not the code) to Eli Bendersky's solution; noticing that he defined the function with two arguments (for columns and rows), I arrived fairly quickly at what seems to be the more or less standard solution. I had this much completed for a week or more but got stalled trying to figure out the problem of a pascal function that takes one argument. That diversion contributed greatly to the delay in my progress. I did solve it though and posted the results separately. Here's the standard solution:

(define (pas row col)

(cond ((= row 1) 1)

((= col 1) 1)

((= row col) 1)

(else (+ (pas (- row 1) (- col 1))

(pas (- row 1) col)))))

;Value: pas

1.13:

I need to define a few things for this one first.

rad = the square root of 5 or

`(sqrt 5)`

phi = (1 + rad) / 2 or

`(/ (+ 1 rad) 2)`

psi = (1 - rad) / 2 or

`(/ (- 1 rad) 2)`

fib = you remember our fibonacci function from before right? That's all this is.

Prove that Fib(n) is the closest integer to (/ (phi ^ n) rad). Hint: Use psi as defined above, induction and the definition of the Fibonacci numbers to prove that Fib(n) = ((phi ^ n) - (psi ^ n)) / rad.

Okay, this one is intimidating for a number of reasons. One being that I've never done a formal proof before. At least that I can remember. I've seen proofs done and I've read one or two but I've never done one. Either my math education was lax or I was lax about my math education. In fairness, it was probably a bit of both. That unfamiliarity combined with the aforementioned pascal with a single argument problem served to keep me unmotivated and distracted for a bit.

Prove: That Fib(n) = ((phi ^ n) - (psi ^ n)) / rad.

First, you have to prove your base cases.

Fib (0) = ((phi ^ 0) - (psi ^ 0)) / rad.

That reduces to, 0 = (1 - 1) / rad, So the first base case holds.

Fib (1) = ((phi ^ 1) - (psi ^ 1)) / rad.

That reduces to, 1 = ((1 / 2) + (rad / 2) - (1 / 2) + (rad / 2)) / rad.

That reduces to, 1 = rad / rad, so the second base case holds.

The definition of the Fibonacci numbers is that fib(n) = fib(n-1) + fib(n-2). Having assumed our lemma is true for n-1 and n-2, will it hold for n?

fib(n) = ((phi ^ (n-1)) - (psi ^ (n-1))) / rad + ((phi ^ (n-2)) - (psi ^ (n-2))) / rad.

Factoring, that's (((phi ^ (n-2)) * (phi + 1)) - ((psi ^ (n-2)) * (psi + 1))) / rad.

Remembering that phi is the golden ratio, both phi and psi are roots of x^2 = x + 1, so we can substitute (phi ^ 2) for (phi + 1) and (psi ^ 2) for (psi + 1).

That gives fib(n) = ((phi ^ n) - (psi ^ n)) / rad.

Thus, our lemma holds for fib(n). This does not explain how Fib(n) is always the closest integer to (/ (phi ^ n) rad) though.

To explain that, note first that it holds for n = 1: (/ (phi ^ 1) rad) evaluates to 0.723606798, whose closest integer is 1, and Fib(1) is 1.

We may then observe that, since psi's absolute value is less than 1, psi ^ n always approaches zero as its exponent increases.

Thus, the difference between fib(n) and (/ (phi ^ n) rad), which is exactly (/ (psi ^ n) rad), is always less than 1/2 in absolute value.

This is what we needed to show.
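The claim is easy to spot-check numerically. A small Python sketch (my own, not from the book) verifying both that fib(n) rounds phi^n / rad and that the error term psi^n / rad stays under 1/2:

```python
import math

rad = math.sqrt(5)
phi = (1 + rad) / 2
psi = (1 - rad) / 2

def fib(n):
    # Plain iterative Fibonacci for comparison.
    a, b = 1, 0
    for _ in range(n):
        a, b = a + b, a
    return b
```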

Whew. After checking this against other people's solutions it turns out I'm not crazy and am roughly correct in my proof which is a relief.

1.14:

Okay. I drew the tree on paper but trying to draw this tree in ASCII for you guys would about kill me. Thankfully on checking my solution I was lucky to find a correct tree image which I will steal with credit. Thanks, Bhrgunatha.

Here is the tree.

As for the order, we can observe that, at least in terms of number of steps, the growth follows that of our tree-recursive fibonacci function and is exponential. I think it's growing at *O(x^n)* but it could be growing at *O(n^x)*. Has anyone come to a definitive conclusion about this? Upon checking Ken Dyck's site again I see that Tim Eichner has an interesting solution. Can anyone confirm this?

1.15:

a. How many times is the procedure p applied when (sine 12.15) is evaluated?

(p (sine 4.05))

(p (p (sine 1.35)))

(p (p (p (sine 0.45))))

(p (p (p (p (sine 0.15)))))

(p (p (p (p (p (sine 0.05))))))

(p (p (p (p (p 0.05)))))

P is applied 5 times.

b. What is the order of growth in space and number of steps (as a function of a) used by the process generated by the sine procedure when (sine a) is evaluated?

This one I did need help with. I realized quite clearly that the growth was related to the number of divisions by 3 our angle took to get below the threshold (0.1). I did not realize that the abstraction I was looking for to describe this growth was that of a logarithm. That said, I checked a few other solutions and went over the wiki page for logarithms once or twice. I really need to order Spivak's Calculus now. Anyway, the process is *O(log(a))* in both space and number of steps. Specifically, it's *O(log _{3}(a))*.
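Counting the divisions directly makes the logarithmic growth concrete. A Python sketch (hypothetical helper, assuming the book's 0.1 threshold):

```python
def p_applications(angle, threshold=0.1):
    # Count how many times the angle is divided by 3 before it drops
    # below the threshold; this is how many times p is applied.
    count = 0
    while abs(angle) > threshold:
        angle /= 3
        count += 1
    return count
```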

1.16:

This was tricky until I modeled the state transformations holding to the rule suggested for (* a (b^n)). Once I did that it was pretty easy.

(define (expt b n)

(define (expt-iter b n a)

(cond ((= n 0) a)

((even? n) (expt-iter (square b) (/ n 2) a))

(else (expt-iter b (- n 1) (* a b)))))

(expt-iter b n 1))

;Value: expt

(expt 2 1000)

1071508607186267320948425049060001810561404811705533607443750388370351051124936122493198378815695858127594672917553146825187

1452856923140435984577574698574803934567774824230985421074605062371141877954182153046474983581941267398767559165543946077062

914571196477686542167660429831652624386837205668069376

;;That's a 302-digit number. This algorithm is O(log n). This computed in 16 steps.
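The same invariant (a * b^n stays constant) reads naturally in Python; this sketch is just my translation, for checking the result against the built-in exponentiation:

```python
def fast_expt(b, n):
    # Iterative successive squaring; the invariant a * b**n never changes.
    a = 1
    while n > 0:
        if n % 2 == 0:
            b, n = b * b, n // 2
        else:
            a, n = a * b, n - 1
    return a
```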

1.17:

(define (* a b)

(define (double x)

(+ x x))

(define (halve x)

(/ x 2))

(cond ((= b 0) 0)

((= a 0) 0)

((= b 1) a)

((even? b) (* (double a) (halve b)))

(else (+ a (* a (- b 1))))))

;Value: *

(* 2 1000)

;Value: 2000

;;16 steps again. Logarithmic in time. Space too? Since this version is linearly recursive, the deferred additions grow with the recursion depth, so I believe space is logarithmic as well.

1.18:

(define (* a b)

(define (double x)

(+ x x))

(define (halve x)

(/ x 2))

(define (*-iter a b c)

(cond ((= b 0) 0)

((= a 0) 0)

((= b 1) (+ a c))

((even? b) (*-iter (double a) (halve b) c))

(else (*-iter a (- b 1) (+ c a)))))

(*-iter a b 0))

;Value: *

(* 2 1000)

;Value: 2000

;;16 steps again. logarithmic and iterative, so it does it in O(1) space. boo-yah. today was a good day to code.
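For completeness, the same doubling-and-halving trick in Python, keeping the invariant a * b + c constant (my translation, for checking only):

```python
def fast_mult(a, b):
    # Iterative Russian-peasant multiplication via doubling and halving;
    # the invariant a * b + c never changes, and c holds the answer at b = 0.
    c = 0
    while b > 0:
        if b % 2 == 0:
            a, b = a + a, b // 2
        else:
            b, c = b - 1, c + a
    return c
```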

1.19:

This was just a really difficult problem to understand. I wasn't even sure what they were really asking. Once I realized I just needed to use algebra to try and expand and then factor out a few things I felt a lot more comfortable.

(define (fib n)

(fib-iter 1 0 0 1 n))

;Value: fib

(define (fib-iter a b p q count)

(cond ((= count 0 ) b)

((even? count)

(fib-iter a

b

(+ (square p) (square q)) ;; compute p'

(+ (* 2 p q) (square q)) ;; compute q'

(/ count 2)))

(else (fib-iter (+ (* b q) (* a q) (* a p))

(+ (* b p) (* a q))

p

q

(- count 1)))))

;Value: fib-iter
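A Python translation of the same T_pq trick is handy for checking the p' and q' algebra against known Fibonacci numbers:

```python
def fib(n):
    # O(log n) Fibonacci using the T_pq transformation from exercise 1.19:
    # squaring the transform gives p' = p^2 + q^2 and q' = 2pq + q^2.
    a, b, p, q = 1, 0, 0, 1
    while n > 0:
        if n % 2 == 0:
            p, q = p * p + q * q, 2 * p * q + q * q
            n //= 2
        else:
            a, b = b * q + a * q + a * p, b * p + a * q
            n -= 1
    return b
```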

1.20:

This is one of those exercises that Abelson and Sussman are sort of bastards for including.

;;Normal order evaluation is fully expand to primitives and then reduce.

;;Applicative-order is...well, what we've been doing all along.

How many remainder operations are performed in the normal order version of

(gcd 206 40)?

How many in the applicative order version?

4: (remainder 206 40), (remainder 40 6), (remainder 6 4), (remainder 4 2)

Applicative order first:

(gcd 206 40)

(gcd 40 (remainder 206 40))

(gcd 40 6)

(gcd 6 (remainder 40 6))

(gcd 6 4)

(gcd 4 (remainder 6 4))

(gcd 4 2)

(gcd 2 (remainder 4 2))

(gcd 2 0)

2

Normal order version:

(gcd 206 40)

;;we can count the number of times remainders occur in b, which is always evaluated.

(gcd 40 (remainder 206 40)) ;;rc = 1

if (= (remainder 206 40) 0) = false

(gcd (remainder 206 40) (remainder 40 (remainder 206 40)))) ;;rc = 1+2

if (= (remainder 40 6) 0) = false

(gcd (remainder 40 (remainder 206 40)) (remainder (remainder 206 40) (remainder 40 (remainder 206 40)))) ;;rc = 1+2+4

if (= (remainder 6 4) 0) = false

(gcd (remainder (remainder 206 40) (remainder 40 (remainder 206 40))) (remainder (remainder 40 (remainder 206 40)) (remainder (remainder 206 40) (remainder 40 (remainder 206 40))))) ;;rc = 1+2+4+7

if (= (remainder 4 2) 0) = true!

Now that the if is true, we evaluate a: (remainder (remainder 206 40) (remainder 40 (remainder 206 40))), which has 4 remainders on top of the 14 that were evaluated in the prior predicates. Ta-da! 18 evaluations in total for normal order. 4, if you didn't notice, for applicative order.

GCD is effectively a loop here and the only way for the loop to exit is for the if predicate to evaluate to true, after which the consequent is evaluated. The alternate is only substituted for in this case, never evaluated outright as it never becomes primitive.

In this way, the problem seems to me more of a study into the if conditional than evaluation models. Once you understand that the alternate never gets evaluated, you can simply figure out how many remainders get fed to it before it's true and then how many are in the consequent.

That's the best I could come up with for this one, but Eli Bendersky has a solution you may find clearer or more detailed.

1.21:

(smallest-divisor 199)

;Value: 199

(smallest-divisor 1999)

;Value: 1999

(smallest-divisor 19999)

;Value: 7
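Just for the record, a Python version of smallest-divisor reproduces these values (19999 = 7 * 2857):

```python
def smallest_divisor(n):
    # Trial division; only divisors up to sqrt(n) need testing.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n
```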

1.22:

The code for this exercise is not *particularly* difficult. It's not easy, but it's fairly straightforward. Because this exercise was written over 10 years ago, though, it's pretty difficult to use as intended on modern hardware. You're supposed to observe algorithmic efficiency because this code is supposed to stress your hardware. Unfortunately, in 2008 this code makes my hardware yawn at numbers on the scale they were asking for. So I started things off at 12-digit numbers and scaled up from there. I also decided to rework the code so that it only outputs when it finds a prime.

(define (start-prime-test n start-time)

(if (prime? n)

(report-prime (- (runtime) start-time) n)))

(define (report-prime elapsed-time n)

(newline)

(display n)

(display " *** ")

(display elapsed-time))

(define (search-for-primes current end)

(cond ((even? current) (search-for-primes (+ current 1) end))

((> current end) (display " done! "))

(else (timed-prime-test current)

(search-for-primes (+ current 2) end))))

So, there's the code. Now for my results:

(search-for-primes 100000000000 100000000060)

100000000003 *** 1.1600000000000037

100000000019 *** 1.1899999999999977

100000000057 *** 1.240000000000009 done!

;Unspecified return value

(search-for-primes 1000000000000 1000000000070)

1000000000039 *** 3.91

1000000000061 *** 3.759999999999998

1000000000063 *** 3.9400000000000013 done!

;Unspecified return value

(search-for-primes 10000000000000 10000000000100)

10000000000037 *** 12.280000000000001

10000000000051 *** 12.510000000000005

10000000000099 *** 12.200000000000003 done!

;Unspecified return value

(search-for-primes 100000000000000 100000000000098)

100000000000031 *** 38.190000000000026

100000000000067 *** 38.16

100000000000097 *** 37.95000000000002 done!

;Unspecified return value

Checking all of these it appears we are very close to the projected (sqrt 10) increase per digit.

1.23:

(define (find-divisor n test-divisor)

(cond ((> (square test-divisor) n) n)

((= (modulo n test-divisor) 0) test-divisor)

(else (find-divisor n (next test-divisor)))))

;Value: find-divisor

(define (next n)

(cond ((even? n) (+ n 1))

(else (+ n 2))))

;Value: next

The results this time were:

(search-for-primes 100000000000 100000000060)

100000000003 *** .7400000000000091

100000000019 *** .7200000000000273

100000000057 *** .7099999999999795 done!

;Unspecified return value

(search-for-primes 1000000000000 1000000000070)

1000000000039 *** 2.3600000000000136

1000000000061 *** 2.2900000000000205

1000000000063 *** 2.319999999999993 done!

;Unspecified return value

(search-for-primes 10000000000000 10000000000100)

10000000000037 *** 7.350000000000023

10000000000051 *** 7.340000000000032

10000000000099 *** 7.189999999999998 done!

;Unspecified return value

(search-for-primes 100000000000000 100000000000098)

100000000000031 *** 23.110000000000014

100000000000067 *** 22.879999999999995

100000000000097 *** 22.920000000000016 done!

;Unspecified return value

This time we are again pretty close to the prediction, though slightly over half the previous times (the extra test inside next adds a little overhead per step).

1.24:

(define (start-prime-test n start-time)

(if (fast-prime? n 500)

(report-prime (- (runtime) start-time) n)))

;Value: start-prime-test

And these results were:

(search-for-primes 100000000000 100000000060)

100000000003 *** 9.999999999990905e-3

100000000019 *** 0.

100000000057 *** 9.999999999990905e-3 done!

;Unspecified return value

(search-for-primes 1000000000000 1000000000070)

1000000000039 *** 0.

1000000000061 *** 9.999999999990905e-3

1000000000063 *** 0. done!

;Unspecified return value

(search-for-primes 10000000000000 10000000000100)

10000000000037 *** 0.

10000000000051 *** 0.

10000000000099 *** 0. done!

;Unspecified return value

(search-for-primes 100000000000000 100000000000098)

100000000000031 *** 9.999999999990905e-3

100000000000067 *** 0.

100000000000097 *** 0. done!

;Unspecified return value

We can see that this is definitely in *O(log(n))*. The times have gone below the precision of my instruments in most cases.

1.25:

I honestly had to look to Eli and Ken for help on this one. I was hand evaluating the original code before trying Alyssa's and having some trouble. I had noticed that the fast-expt procedure had two arguments where expmod had three so I figured part of the computation was being moved around. I even realized that Alyssa's way went ahead and computed the base to the exponent and then tested the remainder against it once. That just seemed like it should've been better to me. I didn't have the sense to just add runtime in as an argument and see how much time they were taking. At any rate, the original expmod does lots of little remainder tests but because of Bignum arithmetic that ends up being faster than a single test on a huge number.

1.26:

(expmod base (/ exp 2) m) has to be evaluated an extra time each time the (even? exp) condition evaluates to true. This moves the algorithm from log n to n because, as I somewhat foolishly missed, it shifts the process from a linear recursion to a tree recursion. See the SICP Wiki's solution for more detail; it seems to be the best resource for rigorous complexity analysis.
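The blow-up is easy to count. This Python sketch (hypothetical name, my own) tallies expmod calls for the correct version versus Louis's version, which evaluates the recursive call twice:

```python
def expmod_calls(exp, double_call=False):
    # Count recursive expmod calls as a function of the exponent.
    # double_call=True models Louis's version, where the recursive
    # call for even exponents is evaluated twice.
    if exp == 0:
        return 1
    if exp % 2 == 0:
        sub = expmod_calls(exp // 2, double_call)
        return 1 + (2 * sub if double_call else sub)
    return 1 + expmod_calls(exp - 1, double_call)
```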

1.27:

(define (expmod base exp m)

(cond ((= exp 0) 1)

((even? exp)

(remainder (square (expmod base (/ exp 2) m)) m))

(else

(remainder (* base (expmod base (- exp 1) m)) m))))

;Value: expmod

(define (carmichael-test n)

(define (try-it a)

(= (expmod a n n) a))

(define (carmichael-iter times)

(cond ((= times 0) true)

((try-it times) (carmichael-iter (- times 1)))

(else false)))

(carmichael-iter (- n 1)))

;Value: carmichael-test
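Running the same test in Python shows the smallest Carmichael numbers (561, 1105, 1729) fooling the Fermat test for every base, while an ordinary composite like 15 does not (my translation, not the book's code):

```python
def expmod(base, exp, m):
    # Recursive modular exponentiation, as in exercise 1.27.
    if exp == 0:
        return 1
    if exp % 2 == 0:
        return expmod(base, exp // 2, m) ** 2 % m
    return base * expmod(base, exp - 1, m) % m

def fools_fermat(n):
    # True when a^n is congruent to a mod n for every a < n.
    return all(expmod(a, n, n) == a for a in range(1, n))
```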

1.28:

(define (expmod base exp m)

(define (miller-rabin x)

(and (not (= x 1)) (not (= x (- m 1))) (= (remainder (square x) m) 1)))

(cond ((= exp 0) 1)

((even? exp)

(if (miller-rabin (expmod base (/ exp 2) m))

0

(remainder (square (expmod base (/ exp 2) m))

m)))

(else

(remainder (* base (expmod base (- exp 1) m))

m))))

;Value: expmod

(define (miller-rabin-search n)

(define (try-it a)

(= (expmod a (- n 1) n) 1))

(try-it (+ 1 (random (- n 1)))))

;Value: miller-rabin-search

(define (miller-rabin-test n)

(define (mr-iter count)

(cond ((= count 1) #t)

((miller-rabin-search n) (mr-iter (- count 1)))

(else #f)))

(mr-iter (floor (/ n 2))))

;Value: miller-rabin-test

;;I got everything written right on this one but I had to check Ken's page again to notice that my try-it definition was

;;testing the expmod result against a, not 1. Once I fixed that I was right as rain.
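Here's how the whole thing looks in Python, with the nontrivial-square-root-of-1 check spelled out (my translation; I use 20 random trials rather than n/2, so the composite results are probabilistic but overwhelmingly reliable):

```python
import random

def expmod(base, exp, m):
    # Miller-Rabin expmod: return 0 if a nontrivial square root
    # of 1 modulo m shows up along the way.
    if exp == 0:
        return 1
    if exp % 2 == 0:
        r = expmod(base, exp // 2, m)
        if r != 1 and r != m - 1 and r * r % m == 1:
            return 0
        return r * r % m
    return base * expmod(base, exp - 1, m) % m

def miller_rabin(n, times=20):
    if n < 3:
        return n == 2
    return all(expmod(random.randrange(1, n), n - 1, n) == 1
               for _ in range(times))
```

Unlike the plain Fermat test, this catches Carmichael numbers such as 561.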

As a final note, I should point out that my solution here differs a bit from the norm. One, I'm pretty serious about not using primitives that haven't been introduced yet. Even Ken Dyck's solution uses let (though the SICP wiki avoids it). After all, this is my first serious work in programming ever. The closest thing besides this was my read through the first chapter of K&R in summer of 2006. Anyway, just keep in mind I'm taking this as my formal education.

A little over two weeks ago I came up against Exercise 1.12 in the venerable Structure and Interpretation of Computer Programs.

The exercise wants you to write a recursive program to compute elements of Pascal's Triangle.

This exercise has pretty much infuriated me and it's all my own fault. Upon first hearing the problem statement I got it in my head that the function should look something like "(define (pas n)...)". I always think of number series being described in terms of a single argument (i.e. the 12th element) so it seemed natural to me that the pascal's triangle function should be computed in this way even though it is not, in some sense, a traditional series.

After a while, I cracked and read the precursor text (but not the code) to Eli Bendersky's solution; noticing that he defined the function with two arguments (for columns and rows), I arrived fairly quickly at what seems to be the more or less standard solution. I had that much completed for a week but got stalled trying to figure out the problem of a pascal function that takes one argument.

As of today I've solved the problem though and hoped to share my results here. First, the throwaway code that ended up being good for nothing!

(define (is-one? element)

(define (is-one-iter ones count flag)

(cond ((< element 5) #t)

((= ones element) #t)

((> ones element) #f)

((= flag 1) (is-one-iter (+ ones 1) count (- flag 1)))

(else (is-one-iter (+ ones count) (+ count 1) (+ flag 1)))))

(is-one-iter 4 2 0))

;Value: is-one?

That code tests to see whether a given element equals one and it does take a single argument which is nice. I couldn't figure out a way to use it to compute the actual elements though.

After a little bit of experimenting I stumbled on this number sequence (OEIS #A080956) which when put in the following procedure would allow me to compute n from a given column and row.

EDIT: Corrected dyslexic mistake in my code (I'd replaced all instances of col with row and vice versa). See comments.

(define (n-from-rowcol row col)

(define (f x)

(- (/ (* (+ x 1) (- 2 x)) 2)))

(+ row col (f (- row 1))))

;Value: n-from-rowcol
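Since the formula is a bit opaque, here's a Python check (my own sketch) that it really does map a 1-based (row, col) position to the element's linear index in the triangle, reading rows left to right:

```python
def n_from_rowcol(row, col):
    # Map a 1-based (row, col) position in Pascal's triangle to the
    # element's linear index n; f is the A080956-derived offset.
    # (x+1)*(2-x) is always even, so the floor division is exact.
    f = lambda x: -(((x + 1) * (2 - x)) // 2)
    return row + col + f(row - 1)
```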

Now all I had to do was find a way to reverse the function to give me the inputs if I gave it the output. I actually stumbled upon another number sequence (OEIS #A000124, also known as the Lazy Caterer's Sequence) which when put into the following procedure returns the correct column and row for a given element. At last, working code:

(define (pascal n)

(define (pas col row)

(cond ((= col 1) 1)

((= row 1) 1)

((= col row) 1)

(else (+ (pas (- col 1) row)

(pas (- col 1) (- row 1))))))

(define (colrow-from-n)

(define (col-iter count)

(define (f x)

(- (/ (+ (square x) x 2) 2) x))

(cond ((> (f count) n) (pas (- count 1) (- n (- (f (- count 1)) 1))))

((= (f count) n) (pas (f count) 1))

(else (col-iter (+ count 1)))))

(col-iter 1))

(colrow-from-n))

;Value: pascal

Any insights into cleaner code, better algorithms, or comparisons between the two number series are welcomed.

Anyway, I've dug up some resources hitting snags here and there. It's what I do. So far, I've found a really great SICP Wiki (but it's half Russian), and a pack of people that have studied it from the Open Courseware over the past year.

That pack is as follows:

Ken Dyck's Solutions

Peter Sheats Solutions

Chuck Hoffman's Blog

Michael Harrison's Solutions and Commentary

Ozten's Solutions and Commentary

and finally, The Lispy Solutions and Commentary which so wonderfully motivated and inspired me tonight. Particularly with regards to a remark on Section 1.1 "just lulling you into a false sense of security".

Of course, there is also the aforementioned SICP Wiki and Eli Bendersky's Blog. Long story short, I really owe it to Lispy for encouraging me. Half way through section 1.2 I was bogged down. Roughly on exercise 1.13 which apparently gave a few other people trouble too. And I felt all alone.

Anyway, I'm going to try to push my schedule back a week and see if by next Friday I can be up to lambdas and through 80 pages of Discrete Math and then continue as planned. At the very least, I've known from day one that the one thing I want most to accomplish this year is wringing as much as I can out of SICP. So if it takes the whole year just to do that, schedules be damned, so be it.

Today was not the easiest day. It wasn't terrible either. The news was decidedly mixed. And it's not about Dad though if you're wondering he's doing well. He's undergone chemo and lost most of his hair but he's generally upbeat and energetic.

Two things have been wearing on me today and the first is work-related. Since January 11th I've been working full time at TVS. The news was that I finally got the paperwork for my benefits package today. It's nice having benefits. Benefits are good. All the same, this meant I could start doing budgeting and working out my finances.

Finances are some scary shit. If I didn't know better I'd swear I'd die without a sizable chunk of money a year. For now I'm still staying with my parents until summer (at their behest more than mine) and I'll find a place to live then.

I really am making enough to be okay. It's just that there's not a lot on the margins. I don't want a whole bunch of stuff. I just don't want to worry about suddenly needing money for any reason.

Anyway, the other struggle has been that of the triangle. I'm getting behind on my schoolwork and hoping to catch up by/over the weekend. And I was pretty distressed because I spent like 4 hours obsessing over exercise 1.12 in SICP.

The problem is to write a procedure that computes the elements of Pascal's Triangle.

That shouldn't be a big deal, you know? But I obsessed over it. And now I've got a silly over-engineered solution that I'm more fond of than I should be. It's an interesting problem though. Hopefully I've learned something from it.

Mine still isn't quite working and I know there is a simpler way to do it. I cracked after a while and read about how one might solve it but I didn't peek at any code. Still, I'm stuck on doing it *my way*. I'm such a bastard. Anyway, it's coming together and I expect it'll be done by the end of the hour. It'll be in the week 2 recap for sure.

Long story short I realized what I've gotten myself into today. And it's still where I want to be. It's just that I think it's going to take more work and time than I might have been able to understand.

Well, I said I would do this and I meant it. These entries will be a bit lengthy. I feel a little pain for any feeds I'm aggregated on. So, in the interest of not driving you all mad, these weekly semester updates will be behind a cut.

Like so.

With that out of the way, here are the details. Each week, I'll post the resources I used (i.e. what I read, what I watched, etc) and the solutions to the relevant problems along with any notes of interest I had. I may offer other general musings (or even specific ones) inspired by my studies in other posts but these will tend towards a cut and dry solution to the exercises. Finally, I'll post a link to any sources I might have found or used to check answers I wasn't sure of and if I got the answer wrong I'll disclose that in the post.

As for the math, I haven't decided what to do about that. I mean, it doesn't make sense to post up a ton of math solutions though I suppose by that logic it doesn't make sense to post code snippets either. If I come up with something I'll let you know. If you have suggestions by all means write them in.

Resources:

Read: Chapter 1 through Section 1.1

Watch: Lectures 1-a

Checked against: Eli Bendersky's Blog

SICP Notes and Exercises:

Notes

Pgs. 28-30: Definitions and explanations of scope and locality. It is becoming evident that subtle errors can easily emerge due to differences in locality and scope. These errors are compounded by the fact that there is no distinction in Lisp between variables and procedures. (i.e. you could have a procedure that used abs (another procedure) as a variable.)

Quotes

"First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute." - Preface to the First Edition

"The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology - the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects." - Preface to the First Edition

Exercises

1.2:

(/ (+ 5 4 (- 2 (- 3 (+ 6 (/ 4 3))))) (* 3 (* (- 6 2) (- 2 7))))

;Value: -23/90

1.3:

(define (two-of-three x y z)

(cond ((< x y z) (+ (* y y) (* z z)))

((< y x z) (+ (* x x) (* z z)))

((< z x y) (+ (* x x) (* y y)))

((or (= x y) (= x z)) (+ (* y y) (* z z)))

((or (= y x) (= y z)) (+ (* x x) (* z z)))

((or (= z x) (= z y)) (+ (* x x) (* y y)))))

;Value: two-of-three

Comments: The original version of the program lacked the last three lines but I realized if you got inputs that were equal to each other there wasn't a condition that matched that so I changed it. I'm sure there's a much more elegant way to do it but the job is done. And it's mostly readable.
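If sorting were available at this point in the book, the whole cond could collapse to "drop the minimum". A Python sketch of that cleaner shape (my own, not a translation of the Scheme above):

```python
def two_of_three(x, y, z):
    # Sum of squares of the two largest of three: sort, drop the minimum.
    lo, mid, hi = sorted((x, y, z))
    return mid ** 2 + hi ** 2
```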

1.4:

This procedure checks to see if B is greater than 0. If it is, it adds A and B. Otherwise, it subtracts B from A.

1.5:

This procedure, when evaluated using applicative-order evaluation, will not resolve: it infinitely recurses trying to evaluate (p). An interpreter using normal-order evaluation will not have this problem because in the example the if condition evaluates to true, so the p function is never evaluated. (The Scheme interpreter uses applicative-order evaluation.)

1.6:

Again, this is a case of infinite recursion due to applicative-order evaluation. Because new-if is an ordinary procedure rather than a special form, both of its arguments are evaluated before it is applied, so sqrt-iter calls itself regardless of the value of (good-enough? guess x). The special form if avoids this by evaluating only the branch it needs.
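For reference, Alyssa's new-if from the exercise:

```scheme
(define (new-if predicate then-clause else-clause)
  (cond (predicate then-clause)
        (else else-clause)))

;; Being an ordinary procedure, new-if evaluates all three arguments,
;; so this version of sqrt-iter recurses unconditionally:
;; (define (sqrt-iter guess x)
;;   (new-if (good-enough? guess x)
;;           guess
;;           (sqrt-iter (improve guess x) x)))
```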

1.7:

(define (good-enough? guess x)

;; stop when one improvement step changes the guess by less than a
;; millionth of the guess itself -- a relative test that behaves well
;; for very small and very large radicands, unlike a fixed 0.001 tolerance

(< (abs (- (improve guess x) guess)) (* 0.000001 guess)))

;Value: good-enough?

1.8:

(define (curt-iter guess x)

(if (good-enough? guess x)

guess

(curt-iter (improve guess x) x)))

;Value: curt-iter

(define (good-enough? guess x)

(< (abs (- (cube guess) x)) 0.001))

;Value: good-enough?

(define (cube x)

(* x (* x x)))

;Value: cube

(define (curt x)

(curt-iter 1.0 x))

;Value: curt

(define (improve guess x)

(/ (+ (/ x (* guess guess)) (* 2 guess)) 3))

;Value: improve
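Newton's method for cube roots improves a guess y toward the cube root of x via (x/y² + 2y)/3, which is exactly what improve computes. A quick sanity check once the definitions above are loaded:

```scheme
(curt 27) ; converges to a value very close to 3
          ; (within the 0.001 tolerance on the cube)
```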

NOTE:

Here is an example rewrite of the sqrt program using block structure and lexical scoping. It is inserted here because this was the point of discussion but no relevant exercise was assigned.

(define (sqrt x)

(define (sqrt-iter guess)

(if (good-enough? guess)

guess

(sqrt-iter (improve guess))))

(define (good-enough? guess)

(< (abs (- (square guess) x)) 0.001))

(define (average a b)

(/ (+ a b) 2))

(define (improve guess)

(average guess (/ x guess)))

(sqrt-iter 1.0))

So, I've finally gotten everything nailed down. I know what courses I'm taking, what resources I'm using, and what my schedule is.

I've decided to break my studies into 2 semesters each of which is comprised of two courses and lasts 20 weeks. The first semester started this Monday (1/14) and ends Friday (5/30). I then take the month of June off. The second semester will start Monday (7/7) and end (11/21). Having June and December off will both motivate me to push through and also allow room for slight changes in schedule.

I'll study SICP and Discrete Math with Applications by Rosen this semester. I've already started in on the SICP. Obviously I'm a little behind on the math. At the end of each week I'm planning to post a summary of exercises and notes at least for the programming courses. I don't know what I'll do on the Math course. I also haven't quite settled on whose problem sets to do. Ah, well.

Next semester it's HTDP and CTM. I'm a little freaked out looking at all this but if I try I'm bound to learn something. Wish me luck.

And here's the week by week breakdown of the first 4 weeks for each course:

discrete math: resources include lectures, problem sets, and course notes

dma - 787 pages / 20 weeks = 39.35(40) pgs/week

1 is lecture 11-01-00 and pgs.1-44 (through section 1.3)

2 is lecture 11-02-00 and pgs.44-80 (through definition 4)

3 is lecture 11-03-00 and pgs.80-119 (through chapter 1)

4 is lecture 11-06-00 and pgs.119-161 (through theorem 7)

sicp: resources include lectures, online book, course notes, problem sets, and eli bendersky's site

sicp - 610 pages / 20 weeks = 30.5(31) pgs/week

1 is lecture 1-a and pgs.1-31 (through section 1.1)

2 is lecture 1-b and pgs.31-62 (through section 1.3.1)

3 is lecture 2-a and pgs.63-97 (through section 2.1)

4 is lecture 2-b and pgs.97-126 (through section 2.2.3)

As far as my schedule goes, the plan is to work from 7am-3pm Monday through Friday, go to the gym after work on MWF, and veg on the weekends where possible. The workweek will be dedicated to my "education" of course, beyond work and exercise.

I have a long term plan for a course of study but no hard schedule yet. I have to divide up readings and problem sets and link them with lectures and such. I plan to have such a syllabus done and up for viewing by the end of the week. As for the long term plan of study there are 6 Programming Texts and 3 Math Texts that I'd really like to get through. If I get through the first 3 Programming Texts (or even the first 2) and 1 or 2 of the Math Texts I'd consider it a successful year. They're all fairly rigorous and I'd like to cover them in depth. Of late, I've been debating the order in which to approach the programming texts. Either SICP, CTM, HTDP or HTDP, CTM, SICP. Some of the stuff in SICP is a bit difficult and some of the stuff in HTDP is a bit easy so far. This is another thing I'm hoping to have worked out by the end of the week so that I can get going.

Once I do have a syllabus I'll post it and then post notes on readings and lectures and solutions to exercises as I go along so feel free to follow along and ask questions. You can only help me learn more. So far, I read the first 40 pages of SICP in Montana. That's Chapter 1 (of 5), Section 1.1. I've got notes typed up on the lecture and reading and most of the examples solved. I'll get those posted up by Friday as the first entry whatever my course of study turns out to be. Also, Friday I will be going to that Yeasayer concert. So far Ben Grad and Minor are talking about going too. Any more takers? Have you guys liked Yeasayer as much as I have? Isn't that Red Cave song from yesterday awesome?

Finally, here are some good Lessig quotes from my reading in Montana of The Future of Ideas (Pgs. 1 - 99):

"The very idea that nonexclusive rights might be more efficient than exclusive rights rarely enters the debate. The assumption is control, and public policy is dedicated to maximizing control." - Lawrence Lessig, The Future of Ideas, Pg. 86

"Where we have little understanding about how a resource will be used, we have more reason to keep that resource in the commons. And where we have a clear vision of how a resource will be used, we have more reason to shift that resource to a system of control." - Lawrence Lessig, The Future of Ideas, Pg. 88-89

"The point is more than theoretical. In essence, the changes in the environment of the Internet that we are observing now alter the balance between control and freedom on the Net. The tilt of these changes is pronounced: control is increasing. And while one cannot say in the abstract that increased control is a mistake, it is clear that we are expanding this control with no sense of what is lost. The shift is not occurring with the idea of a balance in mind. Instead, the shift proceeds as if control were the only value." - Lawrence Lessig, The Future of Ideas, Pg. 99

