Sunday, January 10, 2010

Creating nomograms in R

This is actually a post-in-progress, but I am publishing it before it's ready so that people on the Nomography Discussion forum can come take a sneak preview. The post will probably be updated a couple of times.

Below is a three-scale nomogram I generated in R. A bitmap image (like this png) doesn't suit the angled text of the i-scale or the angled tick-marks, but it's good enough to give the idea - it looks better as a vector image. (Click to obtain a slightly larger image.)

Here's a simple function to draw nomogram scales. It's a few lines longer than when I wrote about it before (I added scale labels in). I don't expect anyone to care about the details of the code, but even without taking advantage of the special aspects of R, it's worth seeing that the entire code is quite compact, and fairly easy to generalize further.

curvaxis <- function(x,y,v,axlab=" ",ltick=TRUE,axlt1=TRUE,thwid=.02){

yscale <- diff(par("yaxp"))[1] # extent of the y-axis
xscale <- diff(par("xaxp"))[1] # extent of the x-axis
yxscale <- yscale/xscale

# compute ticks
n <- length(x)
tdif <- function(x,n=length(x)) c(x[2]-x[1],x[3:n]-x[1:(n-2)],x[n]-x[n-1]) # central differences

s <- atan(-tdif(x)/tdif(y)*yxscale) # tick angle, perpendicular to the scale
xd <- cos(s)*thwid*xscale # could do this without trig functions
yd <- sin(s)*thwid*yscale
xd <- ifelse(tdif(y)!=0,xd,0) # where the scale runs horizontally,
yd <- ifelse(tdif(y)!=0,yd,thwid*yscale) # the ticks are vertical

dmul <- ifelse(ltick,-1,1) # ticks to the left or the right of the scale
xadj <- as.integer(ltick)
yadj <- 0.5

lines(rbind(x,x+dmul*xd,rep(NA,n)),rbind(y,y+dmul*yd,rep(NA,n))) # draw tick-marks
text(x+dmul*xd*1.5,y+dmul*yd*1.5,v,adj=c(xadj,yadj),cex=0.8,srt=180/pi*mean(s)) # tick labels (srt takes a single angle, so use the mean)
sg <- sign(y[2]-y[1])
if(axlt1) { # kludgy - this part places the scale label just beyond one
text(x[1],y[1]-sg*0.05*yscale,axlab,adj=c(0.5,0.5)) # end of the scale (the exact offsets are a guess)
} else {
text(x[n],y[n]+sg*0.05*yscale,axlab,adj=c(0.5,0.5))
}
}

And this is the code that calls it:

#set up data
#variable 1
i <- .01*c(4:8,10,12,15,20)
li <- log(1+i)
yi <- -log(i)/(1+li)
xi <- li/(1+li)
yi <- yi-4*xi # skew the plot a bit

#variable 2
n <- 4:8
yn <- n-4
xn <- rep(1,length(n))

#variable 3
d <- c(1:6,8,10,15,20)
xd <- rep(0,length(d))
yd <- log(d)-4*xd

#scale the plot
xmin <- min(xd,xi,xn)
xmax <- max(xd,xi,xn)
ymin <- min(yd,yi,yn)
ymax <- max(yd,yi,yn)

xl <- xmin-0.02*(xmax-xmin)
xu <- xmax+0.02*(xmax-xmin)
yl <- ymin-0.02*(ymax-ymin)
yu <- ymax+0.02*(ymax-ymin)

#draw the nomogram
par(ann=FALSE, xaxt="n",yaxt="n",bty="n")
plot(NULL, xlim=c(xl,xu),ylim=c(yl,yu)) #draws a blank plot
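From there, the three scales are drawn with curvaxis; a sketch of what those calls might look like (the label text, tick sides and scale-line drawing here are my guesses, not part of the original code):

```r
# draw the scale lines, then the ticks and labels on each
# (axlab text and ltick sides are guesses, not from the original)
lines(xi, yi); lines(xn, yn); lines(xd, yd)
curvaxis(xi, yi, 100*i, axlab="i (%)", ltick=TRUE)   # curved i-scale
curvaxis(xn, yn, n,     axlab="n",     ltick=FALSE)  # right-hand scale
curvaxis(xd, yd, d,     axlab="d",     ltick=TRUE)   # left-hand scale
```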

Friday, January 1, 2010

2010 Mathematical Calendar

Ron Doerfler at Dead Reckonings has a 2010 "Graphical Computing" Calendar. It looks very nice.

See this post at Ron's blog

Wednesday, September 30, 2009

User-contributed R packages on CRAN pass the 2000 mark

The number of user-contributed packages for the free (in both senses) statistical package R has passed 2000.

Quite a milestone. Back in May I predicted to someone that it would hit 2000 around the start of October. Well, not too far out, but it is still September.

If you don't have R but you do a lot of manipulation of numbers, consider getting R. It repays the investment in time very well indeed.

(I have made no posts for a good while due to major illness. Mostly okay now.)

Tuesday, June 23, 2009

The product of two sums of squares

I just happened upon this extra nifty little fact - the set of sums of two squares is closed under multiplication.

That is, facts like (10² + 9²) × (1² + 1²) = 19² + 1² occur for every
(a² + b²) × (c² + d²).

Take a look - simple result, I am surprised I hadn't seen it before...
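The identity doing the work is (a² + b²)(c² + d²) = (ac - bd)² + (ad + bc)², which is easy to verify in R:

```r
# Brahmagupta-Fibonacci: write the product of two sums of two
# squares as a sum of two squares
two_sq_prod <- function(a, b, c, d) c(a*c - b*d, a*d + b*c)
e <- two_sq_prod(10, 9, 1, 1)
e                                        # 1 19
sum(e^2) == (10^2 + 9^2) * (1^2 + 1^2)   # TRUE: 362 both ways
```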

See the Brahmagupta-Fibonacci identity at Wikipedia

Sunday, June 21, 2009

Trig identities and multiplication

We're used to the idea of using logarithms to convert multiplication problems into addition problems, because of the relation:

log(xy) = log(x) + log(y)

So if you have a table of logarithms, to multiply x and y you can look up the logarithms of each, add them, then find the number in your tables whose logarithm is that sum. Three table-lookups and an addition. [Well, in practice you do the last lookup in a table of antilogarithms, but that's only a minor convenience; it's not actually necessary to the calculation.]
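In R, the three-lookup procedure looks like this (with log and exp standing in for the forward and antilog table lookups):

```r
# multiply 7.3 by 19.4 via logarithms: two "lookups", an addition,
# then one inverse lookup (exp plays the role of the antilog table)
x <- 7.3; y <- 19.4
exp(log(x) + log(y))   # 141.62, the same as x * y
```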

However, there are other ways to make multiplication "simpler" than trying to directly multiply, such as this identity:

cos x cos y = [cos (x + y) + cos (x - y)]/2

This sort of relationship was actually used to do multiplication; it's a bit more involved than logarithms, but it works well enough if you haven't invented logs yet but do have cosine tables. It works like this.

You want to multiply two numbers, X and Y (and let's assume that they have already been scaled to lie in a range where this will work).

You find the numbers that they are the cosines of in your tables, X = cos x, Y = cos y. Add and difference those two numbers, giving (x+y) and (x-y). Look those numbers up in your cosine tables. Average them. That's the product. Four table lookups, an addition and a halving (and some rescaling back to the original problem, presumably, which probably just involves moving a decimal point), all much easier than general multiplication.
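The steps above can be sketched in R, with acos and cos standing in for the table lookups (and assuming, as above, that the scaling has already been done so both numbers lie in the cosine's range):

```r
# prosthaphaeresis: multiply X and Y with four "table lookups",
# an addition, a subtraction and a halving
prosth_mult <- function(X, Y) {
  x <- acos(X)                    # look up the angle whose cosine is X
  y <- acos(Y)                    # likewise for Y
  (cos(x + y) + cos(x - y)) / 2   # two more lookups, then average
}
prosth_mult(0.27, 0.83)           # 0.2241, the same as 0.27 * 0.83
```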

So far so good.

But here's what I am wondering. Humans came up with a nifty device to automate that multiplication via addition of logs, the slide rule. Is it possible to build a device that does multiplication using a trig identity like the one above, perhaps some sort of "swivel rod"?

If we do it in two parts, I think maybe we can do it with a slide rule; one side does the additions and subtraction (and halving; halving is inverse doubling, which is just addition, so it can also be done there). The other does the conversions to and from normal scale to cosine scale - but you'd be forever transferring numbers across the different scales.

So I wonder if it can all be combined in a single, simpler system, whether it's a kind of slide rule or something more complex involving actual rotations to do the cosines (accuracy may be an issue though).

I don't think a device was ever made that did this, but I think that it might be possible.

If it were, I'd love to have one.

Monday, June 15, 2009

Annoying ways to ask a question...

I recently answered some mathematics questions on Yahoo Answers.

This reminded me of some of my pet peeves when people ask for mathematical help - many of which are apparently attempts to minimize the scale of your contribution:

* "Can I ask a question? It'll only take a moment."
-- yes, questions rarely take long to ask - but the answer won't be nearly as fast.

* "This is a simple problem"
-- it often isn't. If it was so darn simple, you'd answer it yourself.

* "Why are you making this so complicated? It was a simple question!"
-- see the first point. Just because the question was brief doesn't mean it has a simple solution. Fermat's last theorem could be explained to a child and stated in a few lines!

Further annoyances more particular to YA include people just posting their entire homework with not the slightest indication that they've even attempted any of it, sneering responses when you don't just give them all the answers with fully worked solutions, and snarky reactions when you point out that merely typing their question as is into google would have got them the answer instantly.

Friday, May 22, 2009

Total incomprehension?

Sometimes, adding apples and oranges is not fruitful.

Monday, May 18, 2009

Number of uniforms to sum>1

[Sorry - been a long time without posting. Illness has made it hard to find the impetus.]

Anyway, a probability problem I found interesting:

What's the distribution of the number of uniform[0,1) random variables that you need to generate so that the sum is greater than 1?

There's a nifty little trick to it.


Well, the trick is to compute the survivor function, 1-F(n).

Let N be the number of uniform RVs needed to exceed 1.

P(N > n) = P(U1 + U2 + ... + Un <= 1)

That RHS probability is relatively easy to show, in a variety of ways, to be 1/n!

From there, the probability function for N is simple...
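Spelling it out: P(N = n) = P(N > n-1) - P(N > n) = 1/(n-1)! - 1/n! = (n-1)/n!. A quick check in R confirms this, along with the pleasant by-product that E[N] = e:

```r
# pmf of N from the survivor function P(N > n) = 1/n!
pN <- function(n) (n - 1) / factorial(n)   # = 1/(n-1)! - 1/n!
ns <- 2:30
sum(pN(ns))                                # 1: the pmf sums to one
sum(ns * pN(ns))                           # 2.718282...: E[N] = e
# quick Monte Carlo check
set.seed(1)
draws <- replicate(1e4, { s <- 0; k <- 0
  while (s <= 1) { s <- s + runif(1); k <- k + 1 }
  k })
mean(draws)                                # close to e
```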

Wednesday, February 25, 2009

Calculation Information

Brendan O'Connor at AI and Social Science has looked at a number of packages with which people might do statistical and related manipulations and computations, and compared them. Having used five of them (STATA and the items in the Python group being the exceptions, though I have had "try these out" on my agenda for some time), I agree broadly with what was said about the packages I have experience with - which suggests to me I can probably take seriously the assessment of the remainder.

A very handy and compact comparison. If you use statistics, take a look.

Wednesday, February 18, 2009

Incredibly simple ways to get rational approximations to square roots

Over at The Universe of Discourse, Mark Dominus talks about Archimedes and the square root of three, pointing out that Archimedes needn't have had some mystical way of finding rational approximations to surds in order to figure out that 265/153 is very close to the square root of three.

He points out that there's a simple enough pattern that Archimedes could have spotted and soon figured it out. That is true enough, but Archimedes could have got by, while being even less clever than that.

Let's consider a few simple facts:
(i) if a:b is close to the ratio √3:1 then 3b:a is also close to that ratio, with the error in the opposite direction and of similar magnitude;
(ii) hence (a+3b):(b+a) will be substantially closer* to √3:1 than a:b is.

*(in fact it improves the error in the approximation of 3 by a²/b² by a factor that rapidly goes to 2+√3, but Archimedes needn't have figured that fact out - he only needs to have noticed that 3b:a is also an approximation of √3:1, and (a+3b):(b+a) always seems to be a better approximation than he started with)

If we proceed in this fashion from the very ordinary starting point of 2:1 (and eliminate common factors as they pop up), we rapidly hit on Archimedes' ratio.

2 1 (a:b)
3 2 (3b:a)

5 3 (a+3b:b+a), which becomes our new a:b
9 5

7 4 (dividing out a common factor of 2 here)
12 7

19 11
33 19

26 15 (taking out another factor of 2)
45 26

71 41
123 71

97 56 (taking out a third factor of 2)
168 97

265 153

And you're there after maybe a couple of minutes of very simple computation, even if you're a much worse computer (in the original sense) than Archimedes doubtless was.
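The whole procedure fits in a few lines of R (with a small gcd helper to divide out the common factors as they pop up):

```r
# iterate a:b -> (a + 3b):(a + b) from 2:1, removing common factors,
# until we reach Archimedes' 265:153
gcd <- function(p, q) if (q == 0) p else gcd(q, p %% q)
a <- 2; b <- 1
for (step in 1:7) {
  a2 <- a + 3*b; b2 <- a + b
  g <- gcd(a2, b2)
  a <- a2 / g; b <- b2 / g
  cat(a, "/", b, "=", a / b, "\n")
}
# final line: 265 / 153 = 1.732026, vs sqrt(3) = 1.732051
```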

Not in the least mysterious, and a bonehead like me figured it out almost immediately. I'll wager a master like Archimedes realized something like this in even less time than it took me. It's so simple, he probably wouldn't have even bothered to write it down.


(Later edit:)
What I have above appears to be basically the Babylonian Method, on which Brent at The Math Less Traveled writes here.

I must have seen mention of this before (at least the phrase "Babylonian Method" sounds somewhat familiar). I have no doubt Archimedes was familiar with some version of it.

Tuesday, February 17, 2009

Nifty log-concave function post

John D Cook has a nifty post up about log-concave functions (in the latest Carnival of Mathematics). These include the log-concave densities, which have been popular objects of research in statistics in the last decade or so - they're a wide class of densities that have (as you might guess from John's article) some nice properties.

I'd like to write a longer, basic post about them sometime. The Wikipedia page doesn't really do them justice.

Monday, January 5, 2009

What mathematicians do

Imagining that mathematicians spend all their time doing calculation is very like imagining that writers spend all their time spelling words.