Benford’s law as applied to the Voynich (paragraphs)

I was musing on the application of Zipf’s Law to the Voynich. Several people have carried out studies into this, and all come back saying the text falls within expected parameters. I did my own Zipf study and got broadly the same results.

Now, Zipf’s law says that word frequencies in natural language should fall on a logarithmic scale: the most common word appears roughly twice as often as the second most common, three times as often as the third, and so on. The Voynich text does. That doesn’t prove anything other than that the text is not purely random.

For more, and a full explanation of how Zipf can be used to analyse the text, see section 4 of Sravana Reddy and Kevin Knight’s paper What We Know About the Voynich Manuscript.
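
As an aside, checking Zipf on any plain text takes only a few lines. Here’s a minimal sketch of the idea (the basic rank-frequency check, not the method Reddy and Knight use):

    from collections import Counter

    def zipf_check(path, top=20):
        # Rank words by frequency; under Zipf, frequency is roughly C / rank,
        # so rank * frequency should stay roughly constant down the table
        words = open(path, encoding="utf-8").read().lower().split()
        for rank, (word, freq) in enumerate(Counter(words).most_common(top), 1):
            print(rank, word, freq, rank * freq)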

But I then tried to turn Zipf around. After all, Zipf’s “law” is really only a sub-clause of Benford’s law, which states that in any large naturally occurring dataset, the leading digits 1-9 should fall on a logarithmic scale.

You might think that in any large dataset, a number would start with 1 about 11% of the time (1 being one of nine possible leading digits). It usually doesn’t. It appears about 30% of the time. Strange, eh?

So the percentage of the time each leading digit appears produces a chart like this:

[Chart: Benford’s expected rates. 1 should appear 30.1% of the time, 2 17.6%, and so on down to 9 at 4.6%.]
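
Those percentages come straight from Benford’s formula: the expected rate for a leading digit d is log10(1 + 1/d). A quick sketch to reproduce the chart’s numbers:

    import math

    # Expected Benford rate for each leading digit 1-9:
    # 1 -> 30.1%, 2 -> 17.6%, ... 9 -> 4.6%
    for d in range(1, 10):
        print(d, f"{math.log10(1 + 1/d):.1%}")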

Does Benford’s Law apply to, let us say, paragraph length? If it applies to word distribution, why not to paragraph length?

Now, Benford’s is often used in fraud detection, and I’ve used it myself for similar things. It applies to any exponential dataset (i.e. one that doubles, then doubles again, in the same time span), but it also applies to a lot of datasets where an exponential growth pattern isn’t obvious but there is constant and natural variation: the sizes of cities (within an economic area), income distribution, etc. It doesn’t work for datasets that are limited in some way (shoe sizes, growth patterns, IQ scores).
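
To see why exponential data lands on Benford’s curve, here’s a toy illustration of my own (not from any real dataset): take a quantity that keeps doubling and tally the leading digits.

    import math
    from collections import Counter

    # 1000 successive doublings; the leading digits settle onto Benford's curve
    values = [2**n for n in range(1000)]
    digits = Counter(str(v)[0] for v in values)
    for d in "123456789":
        print(d, f"{digits[d]/len(values):.1%}",
              f"(Benford: {math.log10(1 + 1/int(d)):.1%})")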

So, in principle, I can’t see why it wouldn’t work for language. Zipf is a logarithmic measure of how often words appear in a text. Let’s extend that idea to the number of words in each paragraph.

I loaded an English-language text selected at random from Project Gutenberg: The Maid of Sker by R. D. Blackmore. I trimmed it to the same number of words as the entire Voynich transcript dump from the Voynich Information Browser (VIB), approximately 41,000 words. The first 16 chapters, as I remember.

One of the principles of Benford’s is that 0.1, 1, 10 and 100 all count as the same digit: 1. We take only the first significant digit on the left, ignoring 0, which can never be a leading significant digit.
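
In code, that “first digit on the left” rule looks something like this (a sketch; for whole-number word counts you can simply take the first character of the number as text):

    def leading_digit(x):
        # First significant digit: 0.1, 1, 10 and 100 all map to 1
        x = abs(x)
        assert x > 0, "zero has no leading significant digit"
        while x < 1:
            x *= 10
        while x >= 10:
            x /= 10
        return int(x)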

So I ran the text through a word parser, counting the number of words in each paragraph. I dumped this into OpenOffice Calc and counted the number of times the digits 1-9 appear as the first digit of each count (it doesn’t matter whether there are 2, 20 or 200 words in the paragraph, they all go into the “2” bucket). I then compared each digit’s rate against the logarithmic Benford rate for the total count.
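
Here’s a minimal sketch of that whole pipeline in Python (my reconstruction; the original run used a word parser plus OpenOffice Calc, and this version assumes paragraphs are separated by blank lines):

    import math
    from collections import Counter

    def paragraph_benford(path):
        # Words per paragraph -> leading-digit sample rates vs Benford rates
        text = open(path, encoding="utf-8").read()
        counts = [len(p.split()) for p in text.split("\n\n") if p.strip()]
        digits = Counter(str(c)[0] for c in counts)  # 2, 20 or 200 words -> "2"
        total = sum(digits.values())
        for d in "123456789":
            print(d, f"sample {digits[d]/total:.2%}",
                  f"Benford {math.log10(1 + 1/int(d)):.2%}")

    paragraph_benford("maid_of_sker.txt")  # filename is made up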

Here are the results for The Maid of Sker (525 paragraphs):

[Chart: The Maid of Sker]

Now, what we’re looking for is for the blue and red lines to match. The red Benford rate is the theoretical rate for each digit; the blue sample rate is the actual occurrence. If the blue is a lot bigger or a lot smaller than the red, we’re seeing… well, something artificial is the best way I can put it.

And as you can see, this randomly chosen text matches pretty damn perfectly.
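
If you’d rather have a number than an eyeball judgement, one standard way to score the match is a chi-square goodness-of-fit statistic against the Benford rates. A sketch, written to take the digit Counter from the pipeline sketch above:

    import math

    def benford_chi_square(digit_counts):
        # Chi-square statistic of observed leading-digit counts vs Benford.
        # With 8 degrees of freedom, anything well above 15.51 is a poor
        # fit at the 5% level
        total = sum(digit_counts.values())
        stat = 0.0
        for d in range(1, 10):
            expected = total * math.log10(1 + 1/d)
            stat += (digit_counts.get(str(d), 0) - expected) ** 2 / expected
        return stat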

Here’s the same analysis done on Il piccolo santo, an Italian-language play by Roberto Bracco. Only 4,051 words and 350 paragraphs, but it’s in a different language from English AND it’s in a different style, being a script designed to be read (or sung) aloud.

[Chart: Il piccolo santo]

Not a good match for Benford. Why? I’m assuming it’s because it’s a script designed to be read aloud. There are an awful lot of short, terse sentences, with word counts in the low teens. What we have here is a dataset that’s artificially limited, inasmuch as it’s designed to be read (or sung) aloud.

Let’s try something in German: Die Tote und andere Novellen by Heinrich Mann. About 10,000 words. Only 85 paragraphs! Germans, it seems, don’t like short and snappy paragraphs. What the hey, I’ve formatted it now, so in it goes.

[Chart: Die Tote und andere Novellen. Calm and methodical, these Germans.]

It’s a short data sample, but it fits nicely, and it’s a perfect example of Benford. Only 85 paragraphs, but the word-count distribution is a good logarithmic fit. Don’t get carried away, though: leading digits 6, 7 and 8 only appeared 4 times each.

See the difference between the English & German, and the Italian?

The English & German texts are novels, stream of consciousness. As such, paragraph length is pretty much random yet interlinked: a perfect dataset for applying Benford’s to.

The Italian text was limited: it was designed to be spoken aloud, and as such consisted mainly of short, stubby phrases. So it was a bad match for Benford.

Let’s try the Voynich.

I ran the same process, and at once hit a quandary. The transcript includes an awful lot of “labels”, which appear as one-word paragraphs in the transcript. Most of them are single words written beside objects in the VM. Include them, or leave them out?

I tried it both ways, first with all the one-word paragraphs (labels) included (1788 paragraphs):

[Chart: labels included. Ouch, skewed data.]

There were hundreds of labels included in the text, and it seems they skew the data by gobbling up almost half of the paragraph count.

Let’s try it without the single-word labels (1016 paragraphs):

[Chart: labels removed. Damn it, now it’s skewed the other way!]

See what I mean about Benford not working if you artificially limit the dataset in any way? I’ve removed ALL the single-word labels and, it seems, cut out too many “1”s along with them.
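
For the record, the difference between those two runs is one trivial filter (a sketch, with counts being the words-per-paragraph list from the transcript):

    # counts: words per paragraph, labels included (1788 entries in my run)
    with_labels = counts
    # Dropping every one-word paragraph also throws away the genuine short
    # paragraphs, which starves the "1" bucket (1016 entries left)
    without_labels = [c for c in counts if c > 1]

The filter can’t tell a label apart from a genuinely short paragraph; that distinction needs positional information a flat word-count list doesn’t carry.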

Back to the VIB. I then extracted the complete T.T. transcript of the Text, Biological, Cosmological and Pharmaceutical sections, on the basis that these are the sections with the fewest labels. This limited me to 706 paragraphs.

Back to the spreadsheet and I produced this:

[Chart: four-section extract. Well, that was a waste of time.]

Back to square uno. There are just too many “labels” in the text: over 60% of the paragraphs in the restricted text are one-word labels. What I need is a transcription that detects and strips out the “labels” while leaving the genuine one-word paragraphs in the main text.

Ask, and ye shall receive: Stolfi has already worked out the paragraph text, which I “borrowed”. He actually wanted the opposite: he was analysing the labels and wanted to discard all the other text. The actual transcription scheme doesn’t matter; the important things are the word boundaries and the paragraph breaks. Stolfi gave me 1134 paragraphs.

[Chart: Stolfi’s paragraph text. Nope.]

There are still quite a few one-word label paragraphs, but very, very few “teens” (paragraphs with 10-19 words) appear in Stolfi’s text.

What we’re seeing is a lot of paragraphs bunched around the 40-word mark.

Here are the stats for the Stolfi text:

Digit   Occurrences   Sample rate   Benford rate
1       177           15.62%        30.10%
2       139           12.27%        17.61%
3       158           13.95%        12.49%
4       170           15.00%         9.69%
5       136           12.00%         7.92%
6       131           11.56%         6.69%
7       104            9.18%         5.80%
8        60            5.30%         5.12%
9        58            5.12%         4.58%
Total  1133          100.00%       100.00%

The word counts per paragraph, in CSV format, are as follows:

75,5,1,1,39,4,2,3,145,5,2,2,90,5,1,1,47,6,97,7,76,5,91,5,45,4,50,5,75,3,27,4,35,6,36,7,65,5,67,3,50,6,47,2,62,4,70,4,53,6,34,5,78,3,133,3,70,8,102,3,51,2,55,4,70,4,51,9,95,4,2,2,48,3,1,1,95,5,2,2,125,1,77,3,77,9,53,7,3,3,60,7,75,3,69,4,80,4,39,6,48,8,53,5,42,4,82,6,57,3,73,3,43,3,61,5,53,3,68,6,35,89,3,116,5,113,4,38,7,2,2,51,2,37,4,54,6,61,2,40,7,37,4,42,2,222,4,49,6,76,6,2,2,114,5,129,2,48,9,58,7,2,3,43,3,44,3,55,5,59,5,88,4,38,9,51,6,65,8,94,2,46,8,36,2,37,5,53,5,52,3,59,5,2,3,48,5,134,2,67,5,89,3,174,2,60,3,77,2,78,2,3,3,98,6,76,2,211,5,69,6,59,1,67,29,3,100,4,35,4,33,2,42,6,46,3,51,6,53,9,59,19,3,89,3,69,6,100,4,58,3,42,4,64,3,64,5,156,3,119,4,47,3,123,9,85,8,75,2,118,4,86,6,89,10,87,6,60,3,73,4,122,3,35,5,54,3,48,4,60,2,35,7,31,5,37,6,44,2,34,1,47,3,51,6,94,4,89,4,62,5,121,5,114,10,110,3,106,4,50,4,66,3,80,4,69,7,73,5,1,1,93,5,43,2,91,1,3,79,3,3,73,2,77,4,114,6,61,1,96,7,5,11,67,6,47,4,117,7,42,5,34,6,43,3,39,7,55,6,59,3,55,5,35,4,53,5,59,22,7,37,4,98,10,55,11,35,9,71,191,2,47,4,82,4,54,4,65,2,165,98,11,12,49,18,40,7,50,2,85,5,160,5,173,4,135,6,116,3,109,64,5,51,3,82,7,115,2,51,4,112,9,148,3,95,7,112,7,63,5,93,83,4,74,3,53,3,82,4,70,4,69,3,123,84,117,18,16,98,3,96,8,60,7,48,9,194,5,40,24,44,59,9,32,41,4,2,4,59,4,9,72,3,39,33,82,368,7,92,9,198,9,218,106,150,114,4,642,231,103,164,255,145,3,135,9,242,7,156,139,182,171,211,282,193,298,296,90,126,78,68,55,49,76,118,198,75,81,62,92,336,93,112,93,115,81,112,81,58,63,167,109,159,194,156,295,130,139,227,102,70,205,76,93,128,150,114,193,133,145,148,82,60,142,179,140,4,4,263,295,58,32,57,24,25,24,19,21,44,39,118,67,106,145,57,43,59,65,120,25,62,4,25,6,60,5,61,27,39,37,73,7,65,6,79,6,64,4,78,6,73,4,65,8,65,63,6,74,14,101,11,180,13,94,8,75,107,11,118,11,50,86,6,121,4,72,12,55,4,58,5,96,6,67,5,231,3,81,73,29,19,38,34,53,32,41,53,42,29,16,22,76,50,37,32,38,3,31,30,34,49,73,7,99,72,41,84,62,19,33,36,19,32,33,54,40,32,36,42,57,84,66,41,57,39,54,33,20,28,27,29,27,28,12,17,23,24,15,25,31,24,62,4,41,26,21,27,25,62,23,25,33,18,36,36,38,51,18,37,24,27,73,40,18,23,30,25,46,21,41,28,38,29,46,20,32,32,15,42,35,47,37,30,28,65,17,17,30,43,26,66,8,49,7,119,7,72,9,30,11,46,6,25,9,65,6,51,9,125,4,47,5,68,7,26,7,43,5,28,8,44,7,61,6,26,10,37,2,79,6,26,9,21,5,20,3,63,8,94,6,107,8,8,102,5,5,36,7,7,98,7,7,60,5,6,37,8,9,34,8,6,89,6,6,36,6,7,90,4,6,35,7,7,65,4,4,61,3,3,39,9,9,155,2,2,50,8,63,6,81,4,41,10,22,4,81,3,49,8,42,4,72,12,68,5,62,6,43,4,43,5,74,11,74,7,44,4,41,7,76,9,64,6,49,6,24,5,68,4,46,7,39,6,27,10,56,6,24,3,25,6,82,8,62,7,54,11,54,12,46,5,45,6,63,7,41,4,78,6,24,7,62,4,28,9,28,6,30,12,61,6,49,11,47,10,68,8,28,10,87,11,48,8,72,3,148,13,29,7,70,12,371,11,4,4,94,7,641,1,136,5,73,9,46,10,66,6,517,4,46,5,91,10,51,10,141,5,48,10,70,8,130,5,50,7,7,8,45,11,55,3,58,3,57,9,46,7,44,6,148,10,89,6,47,4,56,7,67,10,39,7,24,10,27,9,45,7,22,5,55,6,45,12,41,4,114,7,49,5,84,8,32,12,54,8,47,5,69,5,30,10,45,8,29,10,56,9,67,5,43,3,16,33,27,71,34,28,30,33,52,27,37,26,51,22,17,35,29,14,30,83,24,55,46,43,45,27,72,41,45,43,22,36,14,27,22,24,31,25,53,31,40,22,29,62,33,43,31,52,43,15,33,47,19,26,13,44,26,24,48,18,24,24,35,40,11,22,31,26,48,30,138,138,40

…just in case you want to copy that into Excel.
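
Or skip Excel entirely: the same table drops out of a few lines of Python (a sketch, assuming the CSV line above has been saved to a file; the filename is made up):

    import math
    from collections import Counter

    # The CSV line above, saved to a file (hypothetical filename)
    csv_line = open("stolfi_counts.csv", encoding="utf-8").read()
    counts = [int(x) for x in csv_line.split(",") if x.strip()]
    digits = Counter(str(c)[0] for c in counts)
    total = sum(digits.values())
    print("Digit  Occurrences  Sample rate  Benford rate")
    for d in "123456789":
        print(f"{d}      {digits[d]:>5}        {digits[d]/total:>7.2%}      "
              f"{math.log10(1 + 1/int(d)):>7.2%}")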

So, what do I think we’re seeing with this text?

There are two options here. One is that instead of stream of consciousness (i.e. a novel, or someone writing their ideas down in paragraphs as they think of them), it’s a manual.

The second, somewhat more likely option is that the text simply doesn’t contain paragraphs. Instead, the scribe runs all the sentences together until they hit a natural “barrier”, whether that’s a change of topic, an image, or the end of the page.

So don’t bother looking for paragraphs as markers of blocks of information. There aren’t any, statistically speaking.

Why is this important?

Well….

  • I don’t know how consistent this is with the writing style of the 15th century. A point to investigate.
  • It indicates a specific pattern to the text. What sorts of texts are written in a similar style?
  • If it’s a modern forgery, the forger wasn’t subconsciously attempting to imitate a writing style, but instead attempting to follow some sort of pattern.

Although I reserve the right to change my opinion while I keep looking – I’m running more tests as we speak!
