Kateryna Yushchenko was a Ukrainian computer scientist who developed one of the world's first high-level programming languages to use indirect addressing (also known as pointers), called the Address programming language. Yushchenko was the first woman in the USSR to become a Doctor of Physical and Mathematical Sciences in programming.
Image of Kateryna Yushchenko at Samarkand University in Uzbekistan
by Samoylen, CC BY-SA 3.0
Harold Lawson is widely credited with inventing pointers in 1964; however, it could be argued that Kateryna Yushchenko had already invented them in the 1950s in Ukraine.
It is fair to say that Kateryna Yushchenko was born into turbulent times in Ukraine, on 8 December 1919. In 1917, in the midst of World War 1, the February Revolution in Russia saw Tsar Nicholas II deposed; he and his family were later executed. The interim government was overthrown by Lenin’s Bolsheviks during the October Revolution of the same year. In the middle of this Russian unrest, the Ukrainian War of Independence began on 10 June 1917, resulting in the establishment of the short-lived Ukrainian People’s Republic (1917–1920), which proclaimed its independence from the Russian Republic on 22 January 1918. In 1920, just after Kateryna’s birth, the whole of the Ukrainian territory fell into Bolshevik hands. On 18 March 1921, the Peace of Riga treaty between the Second Polish Republic, Soviet Russia (acting also on behalf of Soviet Belarus) and Soviet Ukraine sealed the fate of the Ukrainian People's Republic, and the Ukrainian War of Independence ended. The Ukrainian People's Republic was officially rebranded the Ukrainian Soviet Republic, and then the Ukrainian Socialist Soviet Republic, one of the co-founders of the Soviet Union.
The USSR came into being on 30 December 1922. In the early years of Soviet Ukraine, when Kateryna was a child, there was a revival of Ukrainian culture due to the Bolshevik policy of Korenization (indigenisation), possibly intended to boost overall support for the Soviet system. By 1928 Stalin had consolidated power in the USSR and began an abrupt reversal of the Korenization policy. In 1932–33, when Kateryna was in her early teens, Ukraine, and presumably Kateryna and her family, experienced the Holodomor, or Terror Famine. According to the findings of the Court of Appeal of Kyiv in 2010, the population losses due to the famine amounted to 10 million, with 3.9 million direct famine deaths and a further 6.1 million birth deficits. Some scholars believe that the famine was planned by Joseph Stalin to eliminate a Ukrainian independence movement, and possibly ethnic Ukrainians, via genocide. Others suggest that the man-made famine was a consequence of Soviet industrialisation. Some believe it was simply to push Ukrainian peasants into submission, drive them into the collectives and ensure a steady supply of grain for Soviet industrialisation. The first reports of mass malnutrition and deaths from starvation emerged in January 1933 from two urban areas of the city of Uman, about three and a half hours from where Kateryna lived in Chyhyryn. Kateryna would then have been 13 years old. It would have been a time of unbelievable suffering: more than 2,500 people were convicted of cannibalism during the Holodomor.
In 1937, when Kateryna was 17 years old and had just started her undergraduate degree at Kyiv University, her father, a geography and history teacher, was denounced as a Ukrainian nationalist by the Soviets and arrested. Kateryna’s mother, a housewife, tried to prove her husband’s innocence by showing the secret service agents documents testifying to the fact that he had participated in the revolutionary movement. She was sentenced to ten years’ imprisonment, and the evidence she brought was burned in front of her; sadly, she did not return. Kateryna was then expelled from university. Who can imagine how this 17-year-old girl, who had lost both parents and had her education taken from her, must have felt? Incredibly, she carried on and was determined to continue her studies. The only institution that accepted her, with a scholarship, food and accommodation provided by the State, was Samarkand University in Uzbekistan, 2,422 miles (3,895 km) away. To put this into perspective, the distance between London and Kyiv is 1,496 miles, and it is 1,784 miles from London to Moscow.
“After all the misery and humiliation of trying to continue pursuing science, it seemed to be possible salvation. I was completely devoted to studying... The opportunity to finally complete my education has given me the strength to survive the grief that came. But I always remembered my parents and such a distant Ukraine.” – Kateryna Yushchenko
In 1939 World War 2 broke out, and she would not return to Ukraine until after the war.
After World War 2, the Institute of Mathematics of the USSR Academy of Sciences was opened in Lviv. Kateryna met Boris Gnedenko (then head of the Institute), who saw her diploma and recruited her to the department of probability theory. She worked on special problems of probability theory, in areas which remain important for the development of quantum mechanics today. In 1950, when Kateryna was 30 years old, the Institute moved to Kyiv, and Kateryna moved with it. Under the direction of Boris Gnedenko, she obtained a Ph.D., becoming the first woman in the USSR to obtain a Ph.D. in Physical and Mathematical Sciences in programming. For a period of seven years, Yushchenko held the position of Senior Researcher at the Kyiv Institute of Mathematics of the Ukrainian SSR Academy of Sciences (1950–57).
In 1952, the Institute purchased a set of computer analysis machines for research, and Kateryna was appointed head of this laboratory. The same year, MESM, the first universally programmable electronic computer in the Soviet Union and the first of its kind in continental Europe, was transferred to the Institute of Mathematics for scientists, including Yushchenko, to use. The device was inconvenient: it worked slowly, acted up due to its vacuum tubes, and had only a tiny memory. In the process of working with MESM, it became clear that more complex tasks were difficult to solve by writing simple machine programs. There was a need for a high-level programming language, and in 1955 Yushchenko created the Address programming language, which used indirect addressing (aka pointers). Thanks to the Address language, programs no longer depended on their location in memory. She wrote many books about address programming. Her invention was two years ahead of Fortran, three years ahead of Cobol (which was developed in part by Grace Hopper, who created the first compiler and popularised the term computer "bug"), and five years ahead of ALGOL. It was a breakthrough! Yushchenko’s invention came to be used in most Soviet, and later Chinese-made, machines, and the Address programming language became the first fundamental achievement of the Soviet school of theoretical programming. In 1961 she co-authored the book “Elements of Programming”, which was used across the USSR and the countries of the Eastern Bloc. She continued to work at the Institute for 40 years, until she was 70, supervising 56 PhDs. She died at the age of 81 in 2001.
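The idea behind indirect addressing can be illustrated with a minimal sketch in Python (the Address language itself operated on real machine memory; the list-of-numbered-cells model below is purely illustrative):

```python
# Model memory as a list of numbered cells. A "pointer" is a cell whose
# value is the index (address) of another cell, so a program can refer
# to data indirectly, wherever that data happens to sit in memory.
memory = [0] * 8

memory[5] = 42   # the data itself lives in cell 5
memory[2] = 5    # cell 2 holds the *address* of the data: a pointer

def dereference(mem, address):
    """Follow one level of indirection: read the cell the pointer names."""
    return mem[mem[address]]

value = dereference(memory, 2)   # reads memory[5] via the pointer in cell 2

# If the data moves, only the pointer changes - the dereferencing code is
# untouched. This is the independence from memory layout described above.
memory[7] = memory[5]   # move the data to cell 7
memory[2] = 7           # update the pointer
moved_value = dereference(memory, 2)
```

Both `value` and `moved_value` come out as 42, even though the data changed address between the two reads.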
The MESM computer was operated until 1957 https://habr.com/en/company/ua-hosting/blog/387837/
By Mrs Staves
Zuckerberg’s Metaverse: The Not-So-Far Future
In the past few months, you may have heard of something called the ‘Metaverse’. Maybe you’ve heard that it’s going to take over the internet; or maybe you’ve been told that we’re all going to live there tomorrow. Or perhaps you’re convinced it’s not real – living in a virtual universe only happens in science fiction stories, right?
Well, yes. That is – for now.
The Metaverse doesn’t technically exist yet; it’s more of an idea, or a dream of what the not-so-far future could bring, dreamt up by Facebook billionaire Mark Zuckerberg, the tech tycoon who has been watching (and selling) our every move online since the start of his company, Thefacebook (then Facebook, now Meta), in 2004. He describes the idea of the Metaverse as “a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you”, similar to how virtual reality works nowadays. However, he also emphasises that we will not only be able to enjoy alternate experiences and realities from the comfort of our own homes, but that this is something we will be able to share with other people – a revolutionary idea in an age where tech is, so far, generally limited to one’s own experience.
This is all still up in the air for now, though, and it is unlikely that you’ll wake up in a virtual world tomorrow. However, Zuckerberg’s persistence in chasing the idea is inspiring a lot of people, and waking the public – and you – to the fact that this future is coming increasingly close to us as a society. It also poses a real question that perhaps these Silicon Valley stargazers have forgotten to answer: are we ready?
Find more information about the metaverse and what it will be like at https://www.theverge.com/22701104/metaverse-explained-fortnite-roblox-facebook-horizon or watch this video to see Zuckerberg’s interpretation of what he believes to be the future of tomorrow.
By Iva Shehu (12.JSD)
SANS foundation cyber security GIAC certification
I have recently passed the SANS foundations cyber security fundamentals exam, organised and proctored by GIAC. The SANS Institute is a global information security thought leader, and GIAC stands for Global Information Assurance Certification.
In 2020, I signed up for the Cyber Discovery programme. Cyber Discovery was launched in 2017 as a four-year programme, funded and supported by the Department for Digital, Culture, Media and Sport (DCMS) as part of the UK’s National Cyber Security Strategy. It was created and delivered by SANS Institute as a free, extracurricular programme for 13 to 18 year-olds across the UK. More than 100,000 young people took part in Cyber Discovery, progressing from an initial assessment phase into the vast, online cyber learning platform Cyberstart.
I completed the various challenges and games. This helped me to qualify for the cyber essentials programme, which taught basic cyber offence and defence skills. Once I finished the course, I registered for the exam, and by passing it I obtained the above-mentioned SANS cyber security fundamentals certificate.
TeenTech Awards 2021
Last year, I entered the TeenTech Awards competition in the data science category. I came up with my idea based on my interest in geography and coding. For this project, I worked on building a prediction model to forecast when the UK will achieve net-zero emissions status (prediction: 2048). I learnt the Python programming language and read about different packages for forecasting and machine learning.
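A much-simplified sketch of this kind of forecast is shown below: fit a straight line to (year, emissions) data and extrapolate to the year emissions reach zero. The emissions figures are invented purely for illustration, and a real model would use many more data points and techniques than a single linear trend.

```python
# Toy net-zero forecast: ordinary least-squares line through made-up
# annual emissions figures, extrapolated to zero.
years = [2010, 2012, 2014, 2016, 2018, 2020]
emissions = [600, 560, 520, 480, 440, 400]  # Mt CO2e - invented numbers

n = len(years)
mean_x = sum(years) / n
mean_y = sum(emissions) / n

# Least-squares slope and intercept for emissions = slope * year + intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, emissions)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

# Net zero is reached where the fitted line crosses zero emissions.
net_zero_year = -intercept / slope
```

With these invented figures the trend is exactly linear (down 20 Mt per year), so the extrapolated net-zero year comes out as 2040.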
I was one of three UK finalists. I gave a presentation of my project and prediction to a panel of judges from industry and academia. Based on the presentation and the broad scope of the topic, I was given the “Big and Bold Award” in the TeenTech data science category. The process taught me a practical application of the Python language and allowed me to work with a professional mentor from industry. Overall, it was a good learning experience.
By Vidya Ram
A group of four of us had the pleasure of taking part in Cyber Centurion VII this year, an annual cyber security competition run by Cyber Security Challenge UK. There are three rounds, each lasting six hours, which this year took place in November, December and March. The objective of each round is to find and fix cybersecurity vulnerabilities on computer networks in order to defend them against attack (think of it as reverse hacking!). For each round, we had three ‘images’ (systems) to defend – an Ubuntu image, a Windows image, and a Windows Server. We worked on securing these, anticipating what potential hackers might exploit and stopping them from doing so. It was an incredibly fun and rewarding experience, and we all learned a lot!
Here is what the team had to say about their experiences:
‘I enjoyed every minute of this, from working hard to find flaws in the system, to hearing the rewarding ping of the scoring system telling us we’d gained points! I’m looking forward to doing this again next year and learning more about cybersecurity techniques and tools - it’s a useful skill for everyone to have!’ - Jenny Z
‘The competition was an incredible experience which was definitely worth the time. The best part was how much I learnt through the rounds, as it forces you to cover more and more as the rounds go on. It was incredibly rewarding, and I hope I get the chance to do it again next year.’ - Kaitlyn C
‘Since this was our first year doing Cyber Centurion, we had no idea what to expect. However, using the resources and practice rounds, we quickly gained knowledge and experience unlike any cyber security competition we'd done. One of my favourite aspects would definitely be the anticipation of watching the leaderboards change, and the thrill of finally getting points after spending hours over an issue. I particularly liked how Cyber Centurion covers the blue team aspect of cyber security (that is, defending as opposed to attacking), and cannot wait to compete with my team again.’ - Reva B
‘Cyber Centurion has been an amazing experience. I’ve loved working with my friends and learning new skills constantly throughout the process. This competition has helped me understand the field of cyber security more and I’m excited to play again.’ - Sarina K
If this seems of interest to anyone, or if you’d just like to learn more about cyber security, then please feel free to contact any of us via email! We’d all love to see more people becoming involved in cyber!
By Jenny, Kaitlyn, Reva and Sarina
Machine learning, as Arthur Samuel put it, is a field of study that gives computers the ability to learn without being explicitly programmed. This means that whilst in classical programming you provide the computer with rules and data and it gives you the answer, in machine learning you can provide the computer with data and the answer and it will work out the rules. This can be explained using simple addition. To do the sum 2+3 in classical programming, you would give the computer the data (the numbers 2 and 3) and the rule (add the numbers together), and the computer would produce the result, 5. In machine learning, however, you can give the computer the data (the numbers 2 and 3) and the answer (5), and it can work out the rule, provided it has been given enough data points to be able to say accurately that you add the numbers each time.
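As a small illustrative sketch of "working out the rule from data and answers", the Python snippet below fits a tiny model, prediction = w1*a + w2*b, to a handful of made-up (number, number, sum) examples using gradient descent. Both weights end up close to 1, so the rule the computer recovers is "add the two numbers":

```python
# Toy machine learning of the addition rule: we give the computer pairs
# of numbers (the data) and their sums (the answers), and let it find
# the rule itself by adjusting two weights to reduce its error.
data = [(2, 3, 5), (1, 4, 5), (3, 3, 6), (5, 2, 7), (4, 4, 8), (0, 6, 6)]

w1, w2 = 0.0, 0.0
learning_rate = 0.01
for _ in range(2000):
    for a, b, answer in data:
        error = (w1 * a + w2 * b) - answer  # how wrong the current rule is
        w1 -= learning_rate * error * a     # nudge each weight to shrink
        w2 -= learning_rate * error * b     # the error on this example

# Both weights converge towards 1, i.e. the learned rule is a + b.
prediction = w1 * 2 + w2 * 3
```

After training, asking the model about 2 and 3 gives a prediction very close to 5, even though "add them" was never programmed in.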
An example of machine learning is classification software. For example, if you are trying to get a computer to accurately distinguish between pictures of cats and dogs, you would give it a large, varied data set of pictures of both dogs and cats, and you would tell it which pictures contain dogs and which contain cats. The hope is that, eventually, you would be able to give it a new picture and it would accurately tell whether it is a picture of a cat or a dog by associating certain features with each group. For example, the algorithm might identify that cats have a specific ear shape or that dogs’ paws are a specific colour. However, this is also the reason why we must provide the computer with a varied data set. If we only had pictures of black dogs or only ginger cats in the data set, the computer would classify a black cat as a dog. When distinguishing between cats and dogs this doesn’t seem like too big a deal. However, if we were using such an algorithm to recognise people, and we didn’t have enough variety in the data set, i.e. not enough women or people of colour, then the algorithm would be biased against these groups of people, and this could have fatal consequences.
Now that I have explained what machine learning is and how it differs from classical programming, I will explain the different types of machine learning and how they work. First, there is supervised learning. The aim of supervised learning is to predict the target variable given predictor variables. An example of this is a machine learning algorithm which predicts house prices (the target variable) based on location, square footage and the condition of the house (the predictor variables). In this instance, the machine learning algorithm would be given a data set with many different houses, and it would be told their locations, square footage and condition, amongst other things. It would also be told the prices of these houses, and based on this information it would predict the price of a new house. The two sub-categories of supervised learning are classification and regression. In classification, the target variable has categories (like dog vs cat), whilst in regression, the target variable is continuous (like house prices).
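A minimal regression sketch of the house-price example is shown below, using least squares in NumPy. All the figures are invented for illustration, and a real model would use far more houses and predictor variables:

```python
import numpy as np

# Hypothetical training data: each row is [square footage, condition score],
# and prices are in thousands of pounds. All numbers are made up.
features = np.array([
    [50.0, 3.0],
    [80.0, 4.0],
    [120.0, 2.0],
    [100.0, 5.0],
])
prices = np.array([150.0, 240.0, 330.0, 310.0])

# Supervised regression: find weights so that features @ weights ~ prices.
weights, *_ = np.linalg.lstsq(features, prices, rcond=None)

# Predict the price of an unseen house from its predictor variables.
new_house = np.array([90.0, 4.0])
predicted_price = new_house @ weights
```

The learned weights play the role of the "rule": once fitted, any new house's predictor variables can be turned into a price estimate.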
The second type of machine learning is unsupervised learning. The difference between supervised and unsupervised learning is that supervised learning uses a labelled data set whilst unsupervised learning uses an unlabelled data set. The aim of unsupervised learning is to discover underlying patterns in the data. So, in unsupervised learning, an algorithm might be given images of cats and dogs and be told to separate the images into 2 groups. However, the algorithm will not add labels to the output.
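A sketch of unsupervised learning is given below: a tiny k-means loop that separates unlabelled one-dimensional points into two groups without ever being told what the groups mean (the data points are made up):

```python
# Toy unsupervised learning: cluster 1-D points into 2 groups with k-means.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centres = [points[0], points[3]]  # naive initialisation: two of the points

for _ in range(10):
    # Assignment step: each point joins its nearest centre.
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centres[i]))
        clusters[nearest].append(p)
    # Update step: move each centre to the mean of its cluster.
    centres = [sum(c) / len(c) for c in clusters]

# The algorithm has split the data into two groups, but the groups carry
# no labels - it never knows what the clusters "are", only that they differ.
```

Here the centres settle near 1.0 and 8.1, splitting the points into two unlabelled groups of three.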
The third type of machine learning is reinforcement learning. The aim of reinforcement learning is to learn a series of actions. In reinforcement learning, the machine is not given a predefined data set and thus the machine has to collect its own data. For example, if a machine learning algorithm were to distinguish between cats and dogs using reinforcement learning, it would do so using feedback. So, if it were given an image of a dog and it classified it as a cat, we would give negative feedback so that, eventually, the algorithm would be able to correctly classify the images.
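A toy sketch of this feedback-driven learning is shown below. The program starts with no data set at all, only two possible answers; it acts, receives feedback, and shifts towards the rewarded answer. The "environment" function here is a stand-in for a human teacher, and the whole setup is invented purely for illustration:

```python
import random

# Reinforcement-style loop: the agent learns from rewards, not from a
# predefined, labelled data set.
random.seed(0)
preferences = {"cat": 0.0, "dog": 0.0}

def feedback(action):
    # Stand-in for a teacher: reward "dog", penalise "cat".
    return 1.0 if action == "dog" else -1.0

for _ in range(100):
    # Explore occasionally; otherwise pick the currently preferred action.
    if random.random() < 0.1:
        action = random.choice(["cat", "dog"])
    else:
        action = max(preferences, key=preferences.get)
    preferences[action] += feedback(action)  # positive or negative feedback

# After enough feedback, the agent settles on the rewarded answer.
best_action = max(preferences, key=preferences.get)
```

Because "dog" is only ever rewarded and "cat" only ever penalised, the preferences drift apart and `best_action` ends up as "dog".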
There are platforms on which you can set up end-to-end machine learning training pipelines. For simpler use cases, you can also use out-of-the-box services such as IBM Watson and Google Vision AI. These require input data and output data, but you don’t need to code the backend of the algorithm yourself. This is an easier way to ‘code’ an algorithm; you can then give it new data and see if it can classify it correctly.
This is one of the most innovative and interesting fields at the moment.
The global population is ageing at a rapid pace. In the UK alone, one-fifth of the population, roughly 11.8 million citizens, is aged 65 or over. This ever-growing elderly population suffers from multiple chronic diseases (multimorbidity) such as cardiovascular disease, osteoporosis and dementia. Although there has been an increase in life expectancy, the vast majority tend to spend much of later life in ill health. As a result, there is an increased need for hospital visits and hospitalisation, placing a huge burden on the healthcare system.
Polypharmacy, the use of multiple drugs, or of more drugs than is medically appropriate, is a burgeoning concern amongst older patients with multimorbidity. Research undertaken by the National Institutes of Health has shown that polypharmacy has become increasingly and alarmingly common in older adults, with the highest number of drugs taken by those residing in nursing homes. Nearly 50% of the elderly population take one or more medications that are not medically necessary. Over-prescription of unnecessary medications leads to drug–drug interactions, contributing to an increased risk of falls in the elderly population, delirium and other related healthcare complications. Current evidence in the medical literature clearly establishes a strong link between polypharmacy and detrimental clinical consequences in later life, and hospitals consequently see an increased number of admissions and re-admissions. This increased demand places undue strain on the NHS workforce and infrastructure, leading to a mismatch between supply and demand.
Advancements in digital health technologies such as telemedicine and Artificial Intelligence (AI) have contributed to the use of remote-monitoring devices in elderly patients. AI technologies and interconnected personal devices have made it possible to audit, analyse and assimilate extensive medical data across the elderly population. Research conducted by Professor Arnold Milstein at Stanford University using thermal imaging cameras and AI algorithms has identified patients in the community at risk of falls and injuries, allowing district nurses to visit their homes and prevent these events before they happen. The use of thermal imaging and other medical technologies has been shown to reduce hospital admissions through prophylactic intervention and early treatment of infections such as urinary tract infections. It has also assisted the remote monitoring of ageing and vulnerable patients, and has delivered highly targeted diagnostics, healthcare and treatment. The use of technology and AI in healthcare has opened up access to personalised and precision medicine.
The ongoing Covid-19 pandemic has pushed digital technology to the forefront of medicine through virtual clinics and the telemonitoring of patients who are unable to visit hospital due to self-isolation and distancing measures. Broader use of this technology in the daily lives of elderly patients will help to identify those in need of help before they become unwell and need hospital care. The use of Artificial Intelligence and the remote monitoring of patients using advanced digital health technology will undeniably revolutionise healthcare delivery in the future by taking hospital care to the doorstep of communities.
Recently, there has been much speculation surrounding the colonisation of other planets. From SpaceX to NASA, there have been an array of meetings, plans and discussions surrounding the future of Mars, and whether it should one day be colonised.
One argument for the colonisation of Mars is presented by futurist Michio Kaku, who points out that 99.9% of the life forms that have existed on Earth have gone extinct. On this planet, he claims, we either adapt or die. With the multitude of problems facing our planet and a growing private sector in space exploration, the frequent discussion of Mars is understandable. Issues such as global warming, antibiotic resistance and nuclear disaster threaten the planet, as do the countless asteroids that may hit Earth at any given moment. In the case of our planet’s destruction, many argue that a ‘backup planet’ is a viable solution. Such an argument was also supported by the late Stephen Hawking, who conjectured that we need to colonise another planet within the next 100 years to avoid extinction.
Although such a topic undoubtedly stirs excitement among the population, the reality is that the colonisation of Mars is highly impractical. Ideas such as home-building robots, genetically modified plants that can survive on Mars and other necessary technologies are, in many respects, hugely challenging to achieve. Whilst it is easy to succumb to the fantasy of life on Mars, one must not forget the many dangers associated with space travel. Life on a planet with little gravity, high doses of radiation and micrometeorites is hardly appealing.
Of course, with sufficient research and investment, these issues could be tackled. But why should the government and the taxpayer invest such large sums of money in another planet, as opposed to their own? Even in the event of large-scale disasters such as global warming or an atomic bomb, the Earth would be far more habitable than Mars. Many have concerns over polluted water, and yet the only water on Mars is in the frozen ice caps. Many have concerns over the volume of carbon dioxide in the atmosphere, and yet the atmosphere of Mars is 96% carbon dioxide. It is certainly hard to envision a scenario in which Mars is more habitable than Earth. Why, then, spend so much money, time and resources fixing these problems, instead of focusing on rebuilding our own planet?
The issue of asteroids still remains. Many theorise that the potential for asteroids to destroy Earth is a valid reason to seek shelter elsewhere and colonise another planet as a potential backup. However, if an asteroid were on course for Earth, surely instead of relocating the population it would be far simpler to build asteroid-deflecting technology? In the unlikely case that no area on Earth was safe, one could invest in constructing deep-sea colonies in biodomes. Although this sounds challenging, it is still more feasible than relocating a population to a planet nine months away.
To summarise, although the idea of travelling to Mars is both exciting and tempting, one needs to look at the practical implications of this. Spending billions on robots, housing, GMO food and all the necessary technology to achieve such a feat is far less reasonable than focusing on renewable technologies, and our own planet. In times of great uncertainty, we should not be focusing on the colonisation of space, but rather the current state of Earth, and tackling our climate crisis.
By Tatiana, 11L
The technology involved in genetic editing has made huge breakthroughs in the past few years. What began as an unrealistically difficult and ambitious endeavour in the increasingly complex world of medicine has now manifested into a reality through technology such as CRISPR. Genetic editing holds the power to not only treat but prevent countless diseases, transforming the world of medicine and possibly even diverging the path of human evolution itself.
The debate as to whether genetic editing is justified has been fiercely battled for years. The first genetically edited babies were born in China in November 2018. The scientist responsible for this, He Jiankui, was found guilty of “illegal medical practices”. He served three years in prison and was fined a huge 3 million yuan (£327,360). The Chinese court even insisted Jiankui “crossed the bottom line of ethics in scientific research and medical ethics.” Large numbers of people agree with this claim, arguing genetic editing can never be justified.
The main reasons supporting this argument include the claim that genetic editing involves humans ‘playing God’. Religious believers often insist that only God should have the right to edit such a crucial element of our individuality, and that humans should be happy with their genetic identity as it is ‘God’s gift’, even if this genetic identity involves a disease.
The misuse of genetic editing has been a cause of much concern. Its potential use to enhance characteristics such as physical strength, looks or even intelligence would be unfair to ‘unedited’ humans and possibly biased towards the wealthy, as the poor would likely be unable to afford genetic editing. A ‘black market’ in gene editing – much like ‘back-alley’ abortions – may develop, where those who cannot afford gene editing turn to unauthorised and unregulated facilities with likely higher complication rates due to the lack of sanitation and of doctors able to perform the procedure.
Furthermore, if everyone decided to genetically edit themselves, there would be a reduction in genetic variation in the human species. A further concern is that eradicating genetic diseases would result in overpopulation, thus greatly contributing to the ever-worsening issues of global warming and the depletion of essential natural resources.
There is also a strong ethical issue associated with all types of gene editing – is it really correct for people to use the system to ‘customise’ their own children? Surely only the child should have the right to alter their appearance and should do it when they are old enough to understand the significance of this irreversible decision.
Genetic editing may also give rise to eugenics in dictatorships, where political or government groups forcefully try to modify the gene pool of some of their subjects. This may be to ensure a mental and physical advantage in warfare or scientific careers.
In addition to ethical issues and the potential misuse of genome editing, there are concerns over safety and possible complications. Germline therapy (a type of genetic editing where DNA is transferred into the cells that produce reproductive cells) poses a potential infection risk through the use of viral vectors that enable DNA to be transferred into these cells. No one can truly predict how these resulting genes may interact during fertilisation and what genetic defects may arise.
On the other hand, one can argue that genetic editing can easily be justified. After all, a long time ago surgery would have been considered a human taking the opportunity to ‘play God’. Surgery was previously extremely risky due to poor hygiene, little access to powerful anaesthetic and sub-optimal techniques with high complication rates. However, surgery is now much safer: millions of people undergo it and change their lives for the better. Many people predict this future for gene editing. There is nothing wrong with people wanting to rid themselves of a disease to empower themselves and become healthy again; we all have the right to be as healthy as possible. Nature can be very cruel to us. People cannot choose whether they end up with genetically inherited diseases such as haemophilia, which can completely destroy one’s life and damage one’s mental as well as physical health. If research in genetic editing continues, we will have the power to live long and happy lives, and couples can be reassured that their unborn children will too, since germline therapy ensures the disease will not be inherited in the family again.
If gene editing becomes widespread and advanced enough, it will be the key to controlling human evolution – humans will eventually be much more intelligent creatures who are more mentally and physically resilient to the variety of challenges life brings. In fact, instead of waiting hundreds of thousands of years for beneficial mutations to arise (as with natural selection), we could start to see beneficial changes every year. Many people regard gene therapy as unsafe; however, as with all new therapies, medicines and vaccinations, genetic editing will be rigorously tested and researched before it is released to the public as a standard procedure, certifying its safety.
In conclusion, genetic editing could greatly benefit people, increase longevity, and change the scale of human happiness and productivity by multiple orders of magnitude. It could eliminate thousands of diseases and many forms of pain and anxiety arising from them. There are only a handful of areas of research in the world with this much potential. However, whilst it may be a wonderful addition to medical science, there needs to be firm monitoring to ensure genetic editing is as risk-free as possible, and it must be strictly controlled to avoid misuse. Genetic editing has risks, and we must proceed with caution; but many new technologies have risks, and we are eventually able to use them to greatly benefit people throughout the world. We should not let fear hold back progress in this extremely promising new area of research.
By Lana, 11N
Einstein’s theories of special and general relativity were singularly revolutionary and radical at their time of conception; established laws of physics and previous understandings of reality itself were uprooted. Albert Einstein, one of the most famous scientists of all time, proposed his theory of special relativity in 1905 and his theory of general relativity in 1915, contributing fundamental ideas that form the basis of modern physics. By combining the ideas of Isaac Newton and James Clerk Maxwell, Einstein conceptualised a new reality. The seemingly inexplicable discrepancy between Maxwell’s finding that the speed of light (c) is constant regardless of motion and Newton’s laws of motion was reconciled through Einstein’s proposal of spacetime. Einstein challenged Newton’s understanding of the universe as a ‘clockwork universe’, where a metre is always a metre and one second is the same anywhere in the universe. Einstein’s spacetime stated that space and time are not two separate dimensions but are unified in a dynamic, four-dimensional continuum. This fabric of the universe is sensitive and responsive to the presence of mass and energy, and dictates how mass and light move through the universe. Time and time again, different phenomena, such as the Eddington solar eclipse observations of 1919, have proven Einstein’s theory right.
The theory of special relativity also concluded that simultaneity, things happening at the same time, is relative to motion, which Einstein explained through a simple thought experiment: suppose one person is standing still, equidistant from two trees, and two bolts of lightning strike both trees at the same time. A second person who was also equidistant from the two trees, but was instead on a moving train, would have seen one tree struck before the other. This led to the conclusion that both time and distance are relative to motion. Such is the genius of Einstein: his musings, questioning, and inherent curiosity led to such revolutionary ideas, explained through equally enlightening and simple terms.
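The conclusion that time itself is relative to motion can be made quantitative. As a worked illustration (a standard textbook result, not part of the original essay), special relativity predicts that a moving clock runs slow by the Lorentz factor:

```latex
% Time dilation: a clock moving at speed v runs slow
% by the Lorentz factor gamma
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta t' = \gamma\,\Delta t
% Example: at v = 0.8c,
% \gamma = 1/\sqrt{1 - 0.64} = 1/0.6 \approx 1.67,
% so one second on the moving clock corresponds to about
% 1.67 seconds for the stationary observer.
```

At everyday speeds the factor is immeasurably close to 1, which is why Newton's picture works so well in daily life.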
What is particularly striking is the use of thought experiments themselves, as opposed to the empirical and experimental methods we are accustomed to today. All of these theories were derived from sparse evidence, considering how revolutionary they were; evidence which, even now, continues to arise and repeatedly proves Einstein’s genius. He did not observe these things before formulating his theories: they were predictions, and this required immense creativity, intellect and imagination.
A decade after special relativity, Einstein introduced acceleration into his theory, which resulted in the theory of general relativity. Einstein completely changed the Newtonian notion of gravity as a force into the idea that gravity is the distortion of the fabric of spacetime, caused by massive objects. Understanding of this distortion has led to much scientific discovery, including the study of stars and galaxies that lie behind massive objects, using gravitational lensing! This is when light passing a massive object, such as a black hole or a galaxy cluster, is bent, so that the object acts as a lens through which we can see things behind it.
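To attach a number to this bending: general relativity predicts the deflection angle for light grazing a mass M at closest approach b (a standard result, included here for illustration, not derived in the essay):

```latex
% Deflection angle of light passing a mass M
% at impact parameter (closest approach) b
\alpha = \frac{4GM}{c^2 b}
% For light grazing the Sun (M \approx 2\times10^{30}\,\mathrm{kg},
% b \approx 7\times10^{8}\,\mathrm{m}) this gives roughly
% 1.75 arcseconds -- the tiny shift in star positions that
% Eddington's 1919 eclipse expedition set out to measure.
```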
Einstein’s genius has enlightened and enriched our understanding of the reality of the universe. His theories continue to guide us, more than a century later. His genius stretches into the future, inspiring infinite discoveries.
The mystery of our universe and its end has left scientists baffled since it was discovered that our solar system was the smallest part of a cosmos larger than was ever previously thought possible. While the Earth is predicted to be vaporised in about 6 billion years, the universe will continue long after that. The issue with finding an answer to how our universe will eventually end is that with such a large part of our universe being made up of elusive dark matter and dark energy, and the potential ‘end’ of the universe being trillions of years into the future, it is difficult to come up with one definitive theory. So far, there are three major competing theories hypothesising potential ways that the universe will end.
The Big Freeze theory grounds itself in the field of thermodynamics, the study of heat. In the universe, events, processes and, more generally, everything occur because of temperature differences between regions. This theory suggests that since heat always flows, eventually it will be evenly distributed throughout the entire universe. At this point, also referred to as ‘heat death’, all stars will have run out of fuel and died, all matter will have decayed, and the only thing remaining would be a few particles that would, over time, also be carried apart by the expansion of the universe. Even the black holes left behind by the largest stars would give off Hawking radiation, slowly evaporating away too. A rather bleak theory, this suggests that the universe will eventually end up cold and empty.
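To give a sense of the timescales involved, the standard estimate for how long a black hole of mass M takes to evaporate through Hawking radiation (a textbook formula, added here for illustration) is:

```latex
% Hawking evaporation time for a black hole of mass M
t_{\mathrm{evap}} = \frac{5120\,\pi\,G^2 M^3}{\hbar\, c^4}
% For a solar-mass black hole (M \approx 2\times10^{30}\,\mathrm{kg})
% this is of order 10^{67} years -- vastly longer than the
% current age of the universe (about 1.4\times10^{10} years).
```

Because the time grows as the cube of the mass, the supermassive black holes at the centres of galaxies would be the very last objects to disappear.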
In many ways the Big Crunch theory is a direct opposite to the Big Freeze. Thanks to the theory of relativity, and Edwin Hubble’s observations of redshifted light from distant galaxies, it was discovered that the universe is expanding. As a result of this discovery it has been speculated that although the universe is expanding right now, it could contain so much matter that gravity eventually becomes the dominant force, causing the universe’s expansion to slow down, stop and then reverse into contraction. The universe would contract faster and faster, becoming denser and hotter as it does so, until all matter finally implodes in on itself in a final singularity. This is often thought of as, in effect, a reverse Big Bang. However, this theory is now less widely supported because of a more recent discovery that makes it improbable: the rate at which the universe is expanding is increasing.
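The ‘threshold’ of matter needed for a Big Crunch is usually expressed as the critical density: if the universe’s average density exceeds it, gravity can eventually halt and reverse the expansion. As a rough illustration (a standard cosmology formula, with an assumed value for the Hubble constant):

```latex
% Critical density of the universe for Hubble constant H_0
\rho_c = \frac{3 H_0^2}{8 \pi G}
% For H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}} this gives
% \rho_c \approx 9\times10^{-27}\ \mathrm{kg\,m^{-3}},
% equivalent to only about five hydrogen atoms per cubic metre.
```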
The Big Rip, the final major theory, grounds itself entirely in the behaviour of dark energy. Dark energy is thought to be responsible for the universe’s accelerating expansion, and since its density remains constant despite the universe growing, it is thought that more and more dark energy ‘pops’ into existence in order to keep up the rate of expansion. Oddly enough, this does not contradict the fundamental law of conservation of energy. This law states that in an isolated system, the total amount of energy will remain constant, but it relies on spacetime itself staying the same: because energy and momentum are defined relative to spacetime, if spacetime stays the same the total amount of energy remains the same, but since spacetime is changing, the total energy can change too. It is suggested that eventually so much dark energy will have popped into existence that its repulsive push will overcome the gravitational forces holding ordinary matter together. This will essentially rip the universe apart, with the largest and most loosely bound structures, such as galaxies, torn apart first, then stars and planets, then humans and other living creatures, and finally atoms themselves, before the universe is entirely ripped apart.
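The claim that dark energy’s density stays constant while the universe grows can be stated using the cosmic scale factor a(t) (a standard cosmology result, included here for illustration): ordinary matter thins out as space expands, but dark energy, in its simplest form, does not:

```latex
% How energy densities scale with the cosmic scale factor a(t)
\rho_{\mathrm{matter}} \propto a^{-3}, \qquad
\rho_{\mathrm{radiation}} \propto a^{-4}, \qquad
\rho_{\Lambda} = \mathrm{const.}
% As a grows, matter and radiation dilute away while dark
% energy's density stays fixed -- so dark energy inevitably
% comes to dominate, and the total amount of dark energy
% grows in proportion to the volume, a^3.
```

A Big Rip specifically requires an extreme form of dark energy whose density actually increases over time, which is why this scenario remains speculative.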
All of these theories have their merits and provide plausible explanations for what will one day happen to our universe, but none is yet supported by conclusive evidence. What we can be sure of is that by the time these doomsday scenarios might happen, trillions of years from now, humans will have evolved so far that those living at that time could probably no longer be called humans anymore, if life still continues to exist for that long.
By Yuval, Year 11