Unidentified aerial phenomena, commonly referred to as UFOs, have been the focus of research by sociologists, scholars of religion, anthropologists, philosophers, and astronomers. The information age now offers new and innovative ways to study these phenomena, and author Diana Walsh Pasulka sat down with astronomer and computer scientist Jacques Vallee to discuss how “big data” and information processing will influence the field of study.
In a presentation you delivered in California in 2016, you described the methods you used to identify historical cases of sightings of anomalous aerial phenomena. Your methods resemble those used by scholars of religion in that you locate primary sources and elaborate their historical contexts, but you also used new research methods involving technology. You have been at the forefront of computer innovation since the 1960s, so I am intrigued by how you integrate historical research with digital technology. Can you discuss your methods and how they yielded the results that were eventually included in your book?
The study we just concluded is the product of a group of specialists (scholars, historians, researchers of ufology, archivists) linked through the Internet in an exchange managed by my colleague Chris Aubeck. The work builds on the intersection of classical scholarship and modern computer media to assess historical cases of extraordinary phenomena. In that sense the process is very similar to research in the history of religions, with the added benefit that the emerging patterns of ancient UFO sightings can be compared to modern reports.
This is very much a work in progress: future versions may focus on a nucleus of only 100 cases rather than 400 or 500, or they may expand to over 1,000 if we can get better access to data from Middle Eastern or Asian records.
You have issued a warning to new researchers in the field, particularly those who believe that “big data” and data-mining techniques will shed light on the field and offer new insights. Can you elaborate?
For those who have just discovered the world of “big data” through the exploits of supercomputers winning at chess or Go against the best human champions, the technology seems almost magical: dump unstructured threads of information into a large binary warehouse and let “deep thinking” extract the salient patterns. This technology actually works when the data itself is well behaved, particularly for consumer trends, industrial maintenance, or medical research. When the subject area lacks an ontology, as in the case of unexplained aerial phenomena, the same techniques are problematic because the software is likely to extract spurious or misleading patterns. Much of my career has been spent developing metadata software and, more recently, investing in big-data companies, so I am a believer in the relevant techniques, but one cannot skip the arduous work of preliminary data “scrubbing.” In particular, we have to keep in mind that the observed data may be the result of multiple phenomena rather than a single source. The problem is one of discernment and intelligence rather than brute-force statistics.
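The scrubbing point can be illustrated with a minimal sketch (the records, fields, and explanations below are invented for illustration and are not Vallee's actual data or pipeline): when raw reports mix several known phenomena with genuinely unexplained ones, naive frequency mining over the whole pile surfaces patterns that belong to the conventional cases, and those patterns change once explained cases are filtered out.

```python
from collections import Counter

# Invented sample records: each has free text and, where manual review
# found one, a conventional explanation.
records = [
    {"text": "bright light moving slowly at dusk", "explained": "Venus"},
    {"text": "bright light splitting into fragments", "explained": "bolide"},
    {"text": "silent dark triangle with steady lights", "explained": None},
    {"text": "bright light hovering then darting away", "explained": None},
    {"text": "bright light descending behind hills", "explained": "flare"},
]

def term_counts(recs):
    """Tally word occurrences across a set of report texts."""
    c = Counter()
    for r in recs:
        c.update(r["text"].split())
    return c

raw = term_counts(records)
scrubbed = term_counts([r for r in records if r["explained"] is None])

# On the raw pile, "bright light" dominates, an artifact of the
# conventional cases; the scrubbed residue has a different profile.
print(raw.most_common(3))
print(scrubbed.most_common(3))
```

The point is not this toy tally but the order of operations: pattern extraction is only meaningful after the explained cases have been identified and set aside.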
Do you have advice for researchers today? How can we use new digital strategies to streamline our research?
The rapid expansion of computer records, as more and more old newspapers and books are digitized, has greatly facilitated access to original sources. At the same time, researchers are developing ingenious techniques for extracting information across long periods during which the meaning of many terms has changed (think of the word “meteor,” which today refers to the sighting of a falling aerolithe but used to mean any luminous phenomenon in the sky). It has become feasible to conduct large-scale research on thousands of records at minimal cost, without travel or administrative burden.
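One way to handle that semantic drift when querying digitized archives is to expand a modern search term into the vocabulary of the period. The sketch below is a hypothetical illustration of the idea (the lookup table and era labels are invented, not an actual research lexicon):

```python
# Invented period-vocabulary table: maps a modern term to the words a
# digitized archive of each era would actually contain.
PERIOD_TERMS = {
    "meteor": {
        "pre-1900": ["meteor", "aerolithe", "fiery globe", "prodigy"],
        "post-1900": ["meteor", "fireball", "bolide"],
    },
}

def expand_query(term, era):
    """Return the period-appropriate search terms for a modern word.

    Falls back to the term itself when no mapping is known.
    """
    return PERIOD_TERMS.get(term, {}).get(era, [term])

print(expand_query("meteor", "pre-1900"))
```

A real lexicon would be built by historians from the sources themselves; the code only shows where such a table would plug into an archive search.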
There remains the problem of abstracting the data and making it available to others for critique, review, and further research. A meeting was organized in 2015 at the Paris headquarters of the French Space Agency, gathering researchers from six nations in an effort to further collaboration in the exchange of data on unexplained aerial phenomena, but that work is just beginning. The phenomenon is global and very complex, and only 5–10% of the observations have actual research value. As a result, individual (occasionally heroic) attempts to hoard large quantities of information in hopes of “solving” the problem have proved naive, overly costly, and short-lived. It seems to me an open-source strategy will be the best way to motivate the research community and harness its resources.
This seems like a crucial moment in the study of these phenomena: we have access to digital technologies that help us identify patterns, but we cannot ignore the field research and historical methods that help us, first, identify something truly unexplainable and, second, draw correlations to other phenomena or, most importantly, rule those correlations out. This is arduous work, but it results in knowledge.
Headline image credit: Jacques Vallee, 2015.
Many trials on the same data with differing definitions and filters might be applied, but this is difficult because every witness and every researcher has subtly (or sometimes not so subtly) different definitions of what they are describing. The categories may need to be relatively “soft.”
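The multiple-trials idea can be sketched in a few lines (the reports and filter definitions below are invented, purely to illustrate the approach): run the same tally under several category definitions, from strict to loose, and compare what each admits.

```python
# Invented sample reports with two attributes per case.
reports = [
    {"duration_s": 3, "shape": "light"},
    {"duration_s": 240, "shape": "triangle"},
    {"duration_s": 45, "shape": "disc"},
    {"duration_s": 600, "shape": "light"},
]

# Three "soft" category definitions applied to the same data.
filters = {
    "strict": lambda r: r["duration_s"] > 60 and r["shape"] != "light",
    "moderate": lambda r: r["duration_s"] > 30,
    "loose": lambda r: True,
}

for name, keep in filters.items():
    kept = [r for r in reports if keep(r)]
    print(name, len(kept))
```

Cases that survive every reasonable definition are the ones worth deeper study; cases that appear only under the loosest filter are probably artifacts of the category itself.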
That image at the top of your article is a “lens flare” – not a UFO.
Hello, my English is a little bit bad, so please be indulgent. :-)

For me there are two main problems to solve. The first is to establish a wide database of UFO cases, which raises the problem of indexing: by indexing you lose information. So for me the best way is to apply textual big-data analysis based on word occurrence, which in turn raises two further problems: translation, and slang, especially in cases taken from social media.

To find patterns, “classical” big-data analysis indeed seems insufficient because of the multiple possible explanations. But you could reclassify and structure your textual data into trees or heuristic schematics, and so on, somewhere between the techniques of the hard sciences and those of the human sciences. For me this is logical, because ufology is a kind of human science: we study both the UFO in terms of its technology and the aliens' actions, protocols, and behaviors.

Sorry for my bad English; my mother tongue is French.