
Student Uses AI To Decipher Ancient Greco-Roman Scroll, Wins … – Slashdot





Isn’t this sort of thing what old-school image processing excels at, i.e. extracting faint images from noise? Obviously ML models are the flavour of the moment, but I wonder whether it couldn’t have been done using a boatload of maths instead.
First: you are erroneously hallucinating that there is a difference between “ML” and “a boatload of maths.” Machine learning is a branch of statistics. It is the biggest boatload of maths.
Second: the raw data looks like this [scrollprize.org]: researchers are looking for a “crackle” pattern indicative of where ink should be. Nothing is visible to the naked eye because the black carbon ink is chemically indistinguishable from the carbonized papyrus beneath.
The picture in the article is misleadingly taken out of context: it shows the output of Farritor’s model with coloured annotations manually added by linguists. No machines were involved in assigning identities to the letters; there was little enough text that humans could parse it directly. Maybe someday an OCR algorithm will be developed for preprocessed Herculaneum scrolls, but there’s no shortage of expert human labour, as most Classics departments haven’t had a new manuscript to edit in a very long time.
Farritor’s model was trained on manually-labelled data; he identified the “crackle” patterns in smaller sections of the available images, as Casey had before him, and enlarged the training set until it was adequate to start finding new signals on its own. This is by far the least human-effort-intensive approach to the problem, and it still took many months to surface because of the extreme barrier to entry in terms of expertise.
If there were another way, it would have been done by now.
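To make that bootstrapping loop concrete, here’s a toy self-training sketch in Python. Everything in it is invented for illustration (the synthetic patches, the nearest-prototype stand-in for the classifier, the confidence rule); the real pipeline used a CNN on the scan volumes, but the grow-the-training-set mechanic is the same idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: "ink" patches have a higher mean intensity than
# "background" patches (a crude proxy for the crackle texture).
ink = rng.normal(0.8, 0.05, size=(200, 16))
bg = rng.normal(0.2, 0.05, size=(200, 16))

# A tiny hand-labelled seed set, like the first manually-annotated crackle.
X = np.vstack([ink[:5], bg[:5]])
y = np.array([1] * 5 + [0] * 5)          # 1 = ink, 0 = background
pool = np.vstack([ink[5:], bg[5:]])      # unlabelled patches

for _ in range(3):
    # "Train": one prototype (mean patch) per class, standing in for the CNN.
    proto = {c: X[y == c].mean(axis=0) for c in (0, 1)}
    # Predict on the pool; confidence = margin between the class distances.
    d0 = np.linalg.norm(pool - proto[0], axis=1)
    d1 = np.linalg.norm(pool - proto[1], axis=1)
    pred = (d1 < d0).astype(int)
    margin = np.abs(d0 - d1)
    confident = margin > np.percentile(margin, 50)
    # Promote confident predictions into the training set and go again.
    X = np.vstack([X, pool[confident]])
    y = np.concatenate([y, pred[confident]])
    pool = pool[~confident]

print(f"training set grew from 10 to {len(y)} examples")
```

Each round enlarges the labelled set with the model’s own most confident outputs, which is why one well-labelled seed region was enough to start surfacing new letters.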
“Machine learning is a branch of statistics. It is the biggest boatload of maths.”
Yes, I realise that, but the stats maths in ML is non-specific. I was referring to very specific image-processing maths. Plus, you could accurately say that computers themselves are just maths realised in hardware form, but that would completely miss the point.
” Nothing is visible to the naked eye”
When has that ever been an issue in the last 100+ years of IR, UV, X-ray, and who knows what other imaging types?
“If there were another way, it would have been done by now.”
Giant repositories of unreadable texts do not exist, as people tended to throw them away. A few exceptional archaeological discoveries have been spared; Dr. Seales, who perfected the scroll unrolling process, looked at a couple of them on the way to analysing the Herculaneum papyri. He has given exhaustingly long lectures on the topic [youtube.com] in the past. The En-Gedi scroll used iron-based ink, which made the actual analysis trivial once the unrolling was complete.
The data being used in this case are X-ray tomographic scans captured using a synchrotron (a particle accelerator similar to the LHC). This is the same class of technology used to perform crystallography on proteins, but tuned for one large object rather than millions or billions of copies of a single small molecule. In principle the technique is powerful enough to reconstruct the scrolls atom by atom, although the scans used here are not at quite that resolution.
I can’t emphasise enough that after 2000 years, there is no chemical difference between the burnt papyrus of the page and the burnt pine resin of the ink. All that exists in the physical object are patterns of how the stylus deformed the papyrus during the writing process, and how the ink caused the fibrous structure to change as it dried. This is why many letters are damaged or left no trace at all.
Anyway. Image processing as a field is no longer a major topic that has large research grants behind it, and the experts who know the techniques are ageing. The field as a whole was basically killed dead in 2012, after the AlexNet [wikipedia.org] model (a neural network) grossly outperformed the state of the art on the ImageNet challenge. I wouldn’t go so far as saying all the researchers got sacked, but they definitely had to make a hard turn into new areas of research to keep their jobs.
This was an example of “the bitter lesson [incompleteideas.net]”: there is no point in hand-crafting a large algorithm using expert knowledge when you can train an AI model to do the job 95% as well with 1% of the development time. Since expert knowledge of this data doesn’t exist (and would take researchers decades of work to divine, researchers who have largely left the field), there isn’t a practical alternative.
Yes, avionics engineering does indeed have different requirements from archaeology. You have done a very big smart and get a very big cookie.
Common knowledge. And what? They should have put an ML model in place of a hard coded autopilot?
The issue, surely, is not whether it is 95% as good with 1% of the effort, but whether it can do better than hand-tuned methods with less than 100% of the effort, or whether 95–99% as good is as good as it gets.
Yes, 95% as good is fine for something like the Archimedes Palimpsest or a damaged cuneiform tablet. In fact, it’s more than fine for the tablets, because we’ve got so many and very few are digitised in a useful way. After ISIS destroyed large numbers of tablets, we’ve seen the need to do bulk recording and transcription.
So a classic way of solving this is to systematically throw convolutions or cross-correlations at the data until something becomes visible: basically, to search the space of convolution/cross-correlation kernels until you find one that brings out the signal.
Well, guess what? That’s exactly what a convolutional neural network does.
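That parallel can be shown in a few lines. Below is a minimal numpy sketch (the image, the buried stroke, and the candidate kernels are all made up) of the classic version of the search: slide each candidate kernel over a noisy image and keep the one with the strongest response. A convolutional layer computes exactly this inner loop; training just replaces the enumeration of kernels with gradient descent over their weights:

```python
import numpy as np

def cross_correlate(img, kernel):
    """Valid-mode 2D cross-correlation: the primitive a CNN layer applies."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A weak vertical stroke buried in heavy per-pixel noise.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, size=(32, 32))
img[8:24, 15] += 2.5

# The "classic" search: try candidate averaging kernels and keep the one
# with the strongest response. A CNN learns such kernels instead.
candidates = {
    "vertical": np.ones((12, 1)) / 12,
    "horizontal": np.ones((1, 12)) / 12,
}
maxima = {name: cross_correlate(img, k).max() for name, k in candidates.items()}
print(maxima)
```

The kernel aligned with the stroke averages away the noise along its length, so its peak response stands out; the mismatched kernel just averages noise.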
Imagine what he could do with a beowulf cluster of RTX 4090s!
Why the hell would you do that? Nobody doing serious AI work uses desktop GPU cards. A rather ancient P100 would likely smoke an RTX 4090, and a V100 most certainly would. Then there are the A100s and H100s, both of which leave an RTX 4090 for dead.
The Tom’s Hardware coverage links an Nvidia blog – https://blogs.nvidia.com/blog/… [nvidia.com] – containing:

His achievement in identifying 10 letters within a small patch of scroll earned him a $40,000 prize.

His achievement in identifying 10 letters within a small patch of scroll earned him a $40,000 prize.
Yeah, it’s reading the characters, not understanding the meaning of the word. That definition was evidently known in 2016, because “purple dye” appears in the earliest revision of the Wiktionary Ancient Greek page for the word. Ancient Greek isn’t a lost language; wasn’t knowledge of it the reason the Rosetta Stone could be used to understand hieroglyphics?
The Tom’s coverage, to be generous, played a game of telephone with the original context, in a way that left *some* thinking that **an AI** went and **understood** the meaning of something. But the software was a machine learning algorithm, and it didn’t understand meaning any more than an app that scans handwriting and turns it into plaintext does.
This buried the accomplishment and the resourcefulness of the winners (there are multiple related prizes, some still unclaimed).
It’s not reading the characters; those boxes were drawn in by classicists. Rather it’s generating the black and white image underneath; see my comments here [slashdot.org] and here [slashdot.org].
Thanks for the correction, evidently I’m wrong.
The Rosetta stone contains Ancient Greek, and Ancient Egyptian written in demotic script and hieroglyphics.
The demotic script resembles an ancestor form of modern Coptic, so knowledge of that was used to decipher it alongside the Greek, and the hieroglyphics were then assumed to be a translation of same.
Ironic that our ability to read these languages essentially derives from the equivalent of a modern government notice done in multiple languages.
As a caveat: Between Homer an
Did Greek really change as much as English did? I sort of doubt it. Not being able to read either, I can’t be absolutely sure, but Greece was never conquered by a foreign invader speaking a very different language. And languages don’t change at the same rate even when undisturbed by externalities.
To start with, today’s Greece was settled in several waves by different Greek-speaking tribes, like the Achaians, the Dorians, the Ionians, and the Aiolians, each with a different dialect of Ancient Greek. Homer’s Greek, for instance, was a literary blend based mainly on the Ionic dialect.
Then the Greek cities (poleis) were under constant threat of conquest while at the same time expanding all the time by founding colonies. There were many Greek cities, for instance, conquered by the Persians.
Yes, but after the Dorian invasion, I believe that all the conquests (in Greece) until the Romans were by those speaking approximately the same language. (Like German and Austrian, not really the same, but not that different. E.g. I believe that they could all understand Homer without translation.) It’s not like the Persians had been successful. (I think that’s comparable with French and Anglo-Saxon, though possibly not Norman French.)
Here are a few things that the article couldn’t hope to be able to tell you (and that are somewhat absent from scrollprize.org also):
– “Porphyry” exists in English as the typical word for the red rocks called porphyrite (using old-school greekscii, I think it would be written porqurith) in Greek and Latin, so it’s not entirely unknown. The colour is also familiar to us as “Tyrian purple,” although many armchair classicists have no idea that our modern concept of purple originated after several breakthroughs in dyes and pigments in the 19th century. This is the colour beloved by Roman emperors and aristocrats that required immense numbers of crustaceans to produce. (It was eventually replaced with cheaper red dyes.)
– The actual source image looks more like this [scrollprize.org]. Luke Farritor produced the black and white base image. The coloured boxes were added by linguists, not Farritor’s model.
– The Herculaneum scrolls come from the private “working library” of an Epicurean philosopher, Philodemus. Most of them are different drafts of his writing. He was a Greek who had emigrated to Rome, and lived in the early 1st century AD. The library seems to have been left untouched for decades before the eruption.
He appears to have adopted what we barbaroi call early Roman cursive [wikipedia.org], which has many differences from how Greek was written by professional scribes elsewhere in the Empire. In particular, “A” often just looked like a lambda, and “R” had many strange shapes, usually looking somewhat like a “C” or “T” with the bottom extending far below the baseline, and has more in common with the letter “r” than the letter “R”.
Once you understand these peculiarities, it looks like the word is written with a mixture of Greek and Latin letters, which is par for the course when dealing with old texts that weren’t written by professional scribes for a wealthy client.
Doesn’t decypher mean learning the meaning of something that wasn’t known before (or rediscovering what was once known and then forgotten)? The decoding of something that is in a code? I know not all codes are secret, but if it’s not secret or mysterious… that’s just reading.
I went to check and see if this is a word that has never been understood before (which seems massively unlikely to me: ancient Greek, on scrolls, in a Roman ruin… ancient Greek is not a forgotten language). Scans tech from 2019 was predated by
First, the word is spelled “decipher”.
Second, it means “To read or interpret (ambiguous, obscure, or illegible matter).” A secondary meaning is “To convert from a code or cipher to plaintext; decode.” The first meaning is entirely appropriate here because of how damaged the scroll is.

First, the word is spelled “decipher”.

It is lovely that you can write in American, but you have apparently not heard of English. They are closely-related languages with some distinct differences in spelling.

First, the word is spelled “decipher”.
It is lovely that you can write in American, but you have apparently not heard of English. They are closely-related languages with some distinct differences in spelling.
It’s not spelled decypher [cambridge.org] in Britain, either. It would be lovely if you learned your own language!
In this case, the issue isn’t the definition of the word; it is picking it out from clusters of ink fragments in a 3D scan of an ancient rolled-up document.
My concern would be more that the ‘AI’ is suffering from digital pareidolia. The only way to be sure is to have a human look at the output and compare it to the original… and even then you’d want a lot of surrounding text deciphered to ensure the suggested word is appropriate in context.
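That human check can at least be made quantitative. Here’s a sketch of the kind of sanity test one might run, with invented arrays standing in for a held-out, human-annotated patch and the model’s binary ink map:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-ins: a human-labelled ink mask for a held-out patch,
# and a model prediction that gets ~5% of pixels wrong.
truth = rng.random((64, 64)) < 0.2
flip = rng.random((64, 64)) < 0.05
pred = np.where(flip, ~truth, truth)

tp = np.sum(pred & truth)        # ink the model found
fp = np.sum(pred & ~truth)       # "ink" that isn't there (pareidolia)
fn = np.sum(~pred & truth)       # real ink the model missed
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision={precision:.2f} recall={recall:.2f}")
```

Low precision on held-out annotations would be the red flag here: the model inventing ink where the annotators saw none.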
It could decipher doctor’s handwritten prescriptions, even upside-down!
RIP gpu prices.
No, imagine what he could do with a beowulf cluster of 4090s!
+1
Well, it cost Nvidia $40K; that’s like a third of a GPU these days. Not exactly free…
