All eyes are on the U.S. political landscape heading into the 2018 Midterm Elections in November. With all 435 seats in the House of Representatives and about one-third of Senate seats up for grabs, the next decade of politics lies in the hands of voters.
We live in the age of scientific optimism. Tomorrow’s new knowledge will vastly expand our understanding of ourselves and our mastery of the world. The flip side of scientific optimism, however, is that today’s knowledge is fraught with gaps and errors of which we are not yet aware. Furthermore, knowledge and understanding do not always grow gradually like the rings of a tree.
The Soviet Union launched the first man-made satellite, Sputnik, into space in October 1957, initiating the scientific rivalry between the USSR and the United States at the height of the Cold War. In the subsequent decades, the Soviet and American space programs traded milestones as they each embarked upon manned space flight and the exploration of space.
With over 10 million active researchers, more than 2 million scientific articles published each year, and an uncontrolled spread of bibliometric indicators, contemporary science is undergoing a profound change that is modifying consolidated procedures, ethical principles that were deemed inalienable, and traditional mechanisms for the validation of scientific outputs that have worked successfully for the last century.
The Democratic Party’s 2008 presidential primary was supposed to be the coronation of Hillary Clinton. She was the most well-known candidate, had the most support from the party establishment, and had, by far, the most financial resources.
The coronation went off script. Barack Obama, a black man with an unhelpful name, won the Democratic nomination and then the presidential election against Republican John McCain because the Obama campaign had a lot more going for it than Obama’s eloquence and charisma.
Most practicing scientists scarcely harbor any doubts that science makes progress. For what they see is that, despite the many false alleys into which science has strayed across the centuries, and despite the waxing and waning of theories and beliefs, the history of science, at least since the ‘early modern period’ (the 16th and 17th centuries), is one of steady accumulation of scientific knowledge. For most scientists this growth of knowledge is progress. Indeed, to deny either the possibility or actuality of progress in science is to deny its raison d’être.
‘Today’s world is complex and unreliable. Tomorrow is expected to be more so.’ – Jennifer M. Gidley, The Future: A Very Short Introduction

From the beginning of time, humanity has been driven by a paradox: fearing the unknown, yet constantly curious to know. Over time, science and technology have developed, meaning that we […]
The first machine known as the typewriter was patented on 23rd June 1868 by printer and journalist Christopher Latham Sholes of Wisconsin. Though it was not the first personal printing machine attempted (a patent was granted to Englishman Henry Mill in 1714, but no machine appears to have been built), Sholes’ invention was the first to be practical enough for mass production and use by the general public.
Why should a trained scientist be seriously interested in science past? After all, science looks to the future. Moreover, as Nobel laureate immunologist Sir Peter Medawar once put it: “A great many highly creative scientists…take it for granted, though they are usually too polite or too ashamed to say so, that an interest in the history of science is a sign of failing or unawakened powers.”
Creativity research has come of age. Today, the nature of the creative process is investigated with every tool of modern cognitive neuroscience, neuroimaging, genetics, and computational modeling among them. Yet the brain mechanisms of creativity remain a mystery, and studies of the brains of “creative” individuals have so far failed to produce conclusive results.
A puzzling observation: the progress epitomized by Moore’s law of integrated circuits never resulted in an equivalent evolution of user interfaces. Over the years, interaction with computers has evolved disappointingly little. The mouse was invented in the 1960s, the same decade as hypertext. Push buttons and the QWERTY layout existed in the 19th century and the display-plus-keyboard setup was used in the Apollo program.
Over the past few decades, the digital games industry has taken the entertainment market by storm, transforming a niche into a multi-billion-dollar market and captivating the hearts of millions along the way. Today, the once-deserted space is flooded with cascades of new games clamouring for recognition.
Tier 1 genomic applications, backed by strong evidence of their clinical utility, support population screening to identify those at heightened risk for inherited cancers and cardiovascular disease. Though they make up less than 10% of the population, these individuals and families account for a disproportionate share of morbidity and mortality and can benefit from targeted prevention efforts.
In November 2017, the Future of Life Institute in California—which focuses on ‘keeping artificial intelligence beneficial’—released a slick, violent video depicting ‘slaughterbots’ [some viewers may find this video distressing]. It went viral. The tiny (fictional) drones in the video used facial recognition systems to target and destroy civilians.
Virtual Reality. Augmented Reality. Gamified Learning. Blended Learning. Mobile Learning. The list of technologies that promise to revolutionise medical education (or education in general) could go on, creating an exciting yet daunting task for the course leaders and educators who have to evaluate them.
Recently, we’ve heard that Volvo are abandoning the internal combustion engine, and that both the United Kingdom and France will ban petrol and diesel cars from 2040. Other countries like China are said to be considering similar mandates.