‘Today’s world is complex and unreliable. Tomorrow is expected to be more so.’ – Jennifer M. Gidley, The Future: A Very Short Introduction

From the beginning of time, humanity has been driven by a paradox: fearing the unknown, yet constantly curious to know it. Over time, science and technology have developed, meaning that we […]
The first machine known as the typewriter was patented on 23rd June 1868, by printer and journalist Christopher Latham Sholes of Wisconsin. Though it was not the first personal printing machine attempted—a patent was granted to Englishman Henry Mill in 1714, yet no machine appears to have been built—Sholes’ invention was the first to be practical enough for mass production and use by the general public.
Why should a trained scientist be seriously interested in science past? After all, science looks to the future. Moreover, as Nobel laureate immunologist Sir Peter Medawar once put it: “A great many highly creative scientists…take it for granted, though they are usually too polite or too ashamed to say so, that an interest in the history of science is a sign of failing or unawakened powers.”
Creativity research has come of age. Today, the nature of the creative process is investigated with the tools of modern cognitive neuroscience, among them neuroimaging, genetics, and computational modeling. Yet the brain mechanisms of creativity remain a mystery, and studies of the brains of “creative” individuals have so far failed to produce conclusive results.
A puzzling observation: the progress epitomized by Moore’s law of integrated circuits never resulted in an equivalent evolution of user interfaces. Over the years, interaction with computers has evolved disappointingly little. The mouse was invented in the 1960s, the same decade as hypertext. Push buttons and the QWERTY layout existed in the 19th century and the display-plus-keyboard setup was used in the Apollo program.
Over the past few decades, the digital games industry has taken the entertainment market by storm, transforming a niche into a multi-billion-dollar market and captivating the hearts of millions along the way. Today, the once-sparse space is flooded with new games clamouring for recognition.
Tier 1 genomic applications, backed by strong evidence of their clinical utility, support population screening to identify those at heightened risk for inherited cancers and cardiovascular disease. Though they make up less than 10% of the population, these individuals and families account for a disproportionate share of morbidity and mortality and can benefit from targeted prevention efforts.
In November 2017, the Future of Life Institute in California—which focuses on ‘keeping artificial intelligence beneficial’—released a slick, violent video depicting ‘slaughterbots’. It went viral. The tiny (fictional) drones in the video used facial recognition systems to target and destroy civilians.
Virtual Reality. Augmented Reality. Gamified Learning. Blended Learning. Mobile Learning. The list of technologies that promise to revolutionise medical education (or education in general) could go on, creating an exciting yet daunting task for the course leaders and educators who have to evaluate them.
Recently, we’ve heard that Volvo are abandoning the internal combustion engine, and that both the United Kingdom and France will ban petrol and diesel cars from 2040. Other countries like China are said to be considering similar mandates.
Nicolae Popescu was born in the small city of Alexandria, a two-hour bus ride south of Bucharest. After organising a digital scam to sell hundreds of fictitious cars on eBay and pocketing $3 million, he was arrested in 2010 but was eventually released on a technicality.
The 10th annual International Open Access Week takes place 23–29 October 2017. This year, the theme is “Open In Order To…”, which is “an invitation to answer the question of what concrete benefits can be realized by making scholarly outputs openly available?” To celebrate Open Access Week, we talked to Scott Edmunds, Executive Editor for GigaScience.
Big Data analytics have become pervasive in today’s economy. While they produce countless novelties for businesses and consumers, they have led to increasing concerns about privacy, behavioral manipulations, and even job losses. But the handling of vast quantities of data is anything but new.
Internet-related legal issues are still treated as fringe issues in both public and private international law. Anyone doubting this claim need only take a look at the tables of contents of journals in those respective fields. However, approaching Internet-related legal issues in this manner is becoming increasingly untenable. Let us consider the following: Tech companies feature prominently on lists ranking the world’s most powerful companies.
Much has been written about autonomous, driverless vehicles. Though they will undoubtedly have a huge impact as artificial intelligence (AI) develops, the shift to electric cars is equally important, and will have all sorts of consequences for the United Kingdom. The carbon dioxide emissions from petrol and diesel cars account for about 10% of global energy-related CO2 emissions.
The possibility of human-level Artificial General Intelligence (AGI) remains controversial. While its timeline remains uncertain, the question stands: if engineers are able to develop truly intelligent AGI, how would human-computer interactions change? In the following excerpt from AI: Its Nature and Future, artificial intelligence expert Margaret A. Boden discusses the philosophical consequences of truly intelligent AGI.