THE RESEARCH OF THE VOYNICH MANUSCRIPT:
The Strategies and the Results.


Jan B. Hurych


In seven years we will be celebrating the reappearance of the VM, the Voynich manuscript that has ever since baffled researchers all over the world. There are, however, not many reasons to celebrate our progress in the search for its solution. We have gathered a lot of auxiliary data, scrutinized them and used them. And what have we got so far? A list of old dogmas from the Voynich era, some merely coincidental findings, and several new theories, remarkable only for contradicting each other.



It is not only because of the three basic unknowns of the manuscript - the author, the script and the language - but also due to the lack of any general strategy. That is understandable, since each researcher has his own area of expertise, but it is also very regrettable, because many works were undertaken, then abandoned and forgotten, without us properly learning from our failures. With the shining exception of Mary d'Imperio's book, the rest either repeat well-known facts or venture into the land of illusions. That is not to say that no progress was made, but we apparently still have a long way to go. Couldn't we have done a lot more during those one hundred years?

The problem starts with the fact that the VM is a rather non-traditional document, and conventional methods of research usually do not get us very far. Very often the results lead to contradictory conclusions which can only be resolved by further facts, and unfortunately we do not have them. I do not mean that conventional statistical and other scientific methods, which have proved themselves in other fields, are useless here, but even those methods may need further improvement. That is because we try to apply them to incomplete or incompatible sets of data. It is also possible that some methods cannot be improved any further (say, carbon dating for certain periods of time) and have to be replaced by other, more advanced methods. The use of computers is of course only as good as the data we give them. Maybe applying the methods of artificial intelligence could help us in the future - or maybe not. The use of self-learning programs is still being contemplated but - as far as I know - has not been applied yet.

What methods of research are we using now? Apart from "guess, test and rest", the favorite methods are similarity, analogy or coincidence, combined with inference, that is with scientific induction. How far does it get us? Let's see: the typical example is the discovery of the "sunflower", which led to the hypothesis of an "American" origin of the VM, a dating "after Christopher Columbus" and what not. Yes, it was Georgius Baresch who already noticed that the VM plants were not from Europe, and while it could well be a sunflower, no other plant in the VM was confirmed as being from America. Also, otherwise serious researchers were trying to find at least one other herb there that would remind us of some known plant (that is, the whole plant, not just a leaf or blossom or root), anywhere on this Earth, but their efforts were apparently futile from the very beginning.

Why? It is now obvious that at least the majority of the flowers there are not known to us, and there is major uncertainty about the "sunflower" itself - there are other plants with a similar blossom as well. How probable, then, could the identification of the other plants in the VM "herbal" be?



Here we have to pause and go back a little in time. The basic method of VM research is not deduction (mostly because we do not know all the facts or premises) but induction, i.e. generalization, "reaching the universal from the particular". It was Francis Bacon who not only formulated the theory of induction but also pointed out its treacherous points when it is not applied properly.

That point was also made in the now classic article by T. C. Chamberlin (1890, "The Method of Multiple Working Hypotheses"), summed up nicely in the subtitle: "With this method the dangers of parental affection for a favorite theory can be circumvented". How many times have we seen VM researchers subconsciously changing their "working" theory into a "ruling" theory and finding only the facts that support it? His multiple working hypotheses, when investigated simultaneously, i.e. in parallel, may not only reveal mutual connections between the causes of the phenomena but also force us to keep an open mind at all times.

He also listed the drawbacks of such a method, some of which were later eliminated by the use of Strong Inference, proposed by John R. Platt in his article "Strong Inference" (Science, 16 October 1964). What is the recipe for strong inference? In four points it is as follows (a small illustrative sketch comes after the list):
1. Devising alternative hypotheses;
2. Devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses;
3. Carrying out the experiment so as to get a clean result;
4. Recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain, and so on.
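
To make the four points more concrete, here is a minimal sketch in Python of how they can be organized as an elimination cycle. The hypothesis names, the evidence dictionaries and the test functions are invented purely for illustration; the sketch shows only the structure (parallel hypotheses, discriminating tests, elimination, recycling), not an actual VM analysis.

```python
# A minimal sketch of Platt's strong-inference loop, using made-up VM hypotheses
# and placeholder tests. Only the elimination structure follows the four points above.

def test_natural_language(evidence):
    # Placeholder: e.g. compare word-entropy statistics with known languages.
    return evidence.get("entropy_like_natural_language", False)

def test_cipher(evidence):
    # Placeholder: e.g. look for repetitions a simple cipher would preserve.
    return evidence.get("cipher_like_repetitions", False)

def test_gibberish(evidence):
    # Always survives - exactly the non-falsifiable weakness discussed later in the text.
    return True

hypotheses = {
    "plaintext in an unknown natural language": test_natural_language,
    "ciphertext of a known language": test_cipher,
    "meaningless gibberish": test_gibberish,
}

def strong_inference(hypotheses, rounds_of_evidence):
    """Keep only hypotheses that survive every crucial test; recycle with new evidence."""
    surviving = dict(hypotheses)
    for evidence in rounds_of_evidence:              # point 3: carry out the experiments
        surviving = {name: test for name, test in surviving.items()
                     if test(evidence)}              # point 2: each test may exclude hypotheses
    return surviving                                 # point 4: recycle on what remains

# Hypothetical evidence from two "experiments":
rounds = [
    {"entropy_like_natural_language": True, "cipher_like_repetitions": False},
    {"entropy_like_natural_language": True},
]
print(list(strong_inference(hypotheses, rounds)))
```

Note how the "gibberish" hypothesis can never be eliminated by any test in this toy setup - which is precisely the falsifiability problem discussed below.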

That might look simple enough, but it is far from it: the set should include all possible hypotheses in order to guarantee success. Another, and the most critical, point is the test, the crucial experiment. As we can see, we do not test to prove the correctness of one hypothesis; we test to separate out the empty theories. Again, it is mainly an elimination process, and we have to choose very carefully the kind of test we will perform. This process should be mastered and applied by every researcher before he claims any, however minor, success. Platt even coined the "touchstone question": asking - in your own mind - on hearing any scientific explanation or theory put forward, "But sir, what experiment could disprove your hypothesis?"

Now we are getting to falsifiability, or testability: Bacon already warned that we cannot proclaim a hypothesis valid until we are sure we have examined all possible cases for falsification - that is, unless we know where or when it is not valid. Even perfectly valid hypotheses should be falsifiable, claims Karl Popper. Those that we cannot technically falsify cannot be proven either. A typical case here is the empty hypothesis of "encoded gibberish", which is not falsifiable - how do you prove you decoded the text to the original plaintext, the proper "gibberish"? Such a theory can be neither proven nor disproved. We have to keep in mind, however, that even with "strong inference" and all premises being true, the conclusion is only probable, meaning it can never be a hundred percent certain, as it usually is with deduction. And here comes my part of the story . . .



As we can see, it is more important to look for exceptions than for affirmations - finding affirmations is usually an impossible task anyway, considering that not all data are available or testable. As things stand, many approaches to the VM were limited to finding something that reminds us of what we already know from elsewhere, something more or less similar and familiar. Then we hurry to apply the idea to the whole set. Of course we do not and cannot examine all cases, and so we rather jump to a conclusion, but that is not how scientific induction works. We part with reality too soon, and instead of strong inference we get only a well-intended but totally wrong fiction bordering on illusion. Can we really look for usual things and use them to explain unusual things? Hardly, especially for the VM, which is "a riddle wrapped in a mystery inside an enigma", as the famous quote by Winston Churchill goes. There, nothing seems to be what it appears. So what good is it to look for similarities in appearances only?

This gave me the idea that there must be another, more productive method for VM research, and I think I found it. The method is actually very simple: we have to do quite the opposite - we should look for exceptions, not for similarities. It is a psychological fact that he who is looking for similarity will find it even where it does not exist. In reality, if we find something similar in one instance, it is much harder to find another occurrence, which is usually not that similar and therefore harder even to spot. And then comes the case where there is so little similarity that there is practically none at all.

That is when and where my advice comes in handy: instead of looking for similarities, we have to search for something unusual, irregular or even impossible. Such occurrences are especially valuable: we know they are unusual, so they apparently have something to tell us. They may give us new, important information - but of course only if we know how to evaluate them.

For instance, say there is a plant with a rather impossible shape of root (I know there is, but let us put it hypothetically). Of course we must not stop there, and our next question would be "Why is it so?" What was the reasoning of the author, or the purpose behind it all? Was it just an error, inexperience, low skill of the artist, or maybe something else? We can also count how many times such a deviation appears anywhere else in the VM and look for its variations. And the next question, going back again, is: "Could we explain at least one of those exceptions in some way, and what does it mean in a general sense?" By answering such questions and explaining such exceptions, we may even get some valuable ideas.
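
Counting such deviations is easy to organize once they are tagged; a toy sketch is below. The folio numbers and anomaly labels are purely hypothetical - the point is only that recurrences and variants of an exception can be tallied, and the most frequent ones examined first.

```python
# A toy tally of "exceptions", assuming each folio has already been tagged with the
# anomalies noticed there (folio numbers and anomaly labels below are invented).
from collections import Counter

observed_anomalies = {
    "f17r": ["impossible root shape"],
    "f33v": ["impossible root shape", "blossom drawn from two viewpoints"],
    "f41r": ["impossible root shape (variant: forked)"],
}

# How often does each deviation (or its variant) recur across the manuscript?
counts = Counter(anomaly for anomalies in observed_anomalies.values()
                 for anomaly in anomalies)
for anomaly, n in counts.most_common():
    print(f"{n}x  {anomaly}")
```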

Yes, surprisingly enough, such a root may tell us more than the above-mentioned "sunflower". We may conclude, for instance, that the root was simply invented, and all the other plants as well, and - if we have more proof - that they apparently have some common idea behind them, for instance steganography. In that case, the key to the text may be in the pictures. True, we have to prove it: we have to find the system that explains all of it, something like a common denominator, and more importantly we have to test such a theory. But if we do get confirmation, then we may get much further than from some similarities which, unless tested, may exist only in our minds anyway.

After all, if the "sunflower" plant is a real sunflower, what does it tell us? True, the VM could then have reached Europe only after 1492 (if we do not count the Vikings :-) or any time later. But the rest of the knowledge in the VM would then apparently have resided in America as well, and since the aboriginal nations of the Americas did not use vellum as we do, it would have been brought over in drafts and copied later in Europe - so the VM could be much younger anyway.

Let's take another example: the research into the VM's history has been relatively rich, but as I show in my article "The Voynich Manuscript - Do We Really Have Any Provenance?", it is also very dubious in places. Taking one famous scientific person after another and trying to fit them as authors is another exercise in futility. We have the handwriting of those people and we know their written works - and we also know they fit neither the hand nor the ideas of the VM. Before we go hunting for facts supporting a hypothesis about one particular author, we have to find out why he would write in a way quite strange to his contemporaries, why he tried to hide the content, and for whom he then wrote it. We certainly have no proof that the VM was written for an ordinary reader - maybe for nobody except the author himself. If we take all the specifics into account, however, we have to come up with the hypothesis that the VM is surely hiding some secret. Of course, the "real suspect" for authorship would be the one who had a reason to hide something. Find the reason and find its traces in the VM - and you may eliminate a lot of "suspects" that way.

We can see here that starting with one person and trying to prove it was him is putting the cart before the horse. Besides, the attempt should first be made to disprove him rather than to prove him. Some disproving facts could be strong enough to eliminate him in the first round already. After a reasonable search we may even conclude that the author cannot be found among the "rich or famous" and that it was apparently somebody not commonly known, or even somebody elusive (as is, for instance, the person of Georgius Baresch). What's more, disproving can be the easier job: it is sufficient to eliminate a hypothesis by finding one example where it does not work, while to prove something one needs to test all premises and cases.



Just to make my point clear: I am not judging here the strategies used up to now. Those were effective methods, but many were abandoned, mainly for the above reasons. Not always, but often enough, they hit the wall simply because similarities alone are only tools and not real strategies. And what was once inspirational eventually ran out of steam for a simple reason: there are never enough similarities to make a rule. Something else, something new must be discovered so that we are able to sum it up and make the hypothesis testable. Also, I do not pretend that the strategy of looking for exceptions is anything new - after all, many people in the past did just that and often drew the proper conclusions from it. Such an approach also seems to be more productive and more dependable than just looking for similarities. Even if we find strong similarities, they are sometimes very superficial - and how certain could the general conclusions based on them be?

22nd November, 2007.