A group of four of us had the pleasure of taking part in Cyber Centurion VII this year, an annual cyber security competition run by Cyber Security Challenge UK. There are three rounds, each lasting six hours, which this year took place in November, December and March. The objective of each round is to find and fix cybersecurity vulnerabilities on computer networks in order to defend them against attack (think of it as reverse-hacking!). For each round, we had three ‘images’ (systems) to defend - an Ubuntu image, a Windows image, and a Windows Server image. We’d work on securing these, anticipating what potential hackers might exploit, and stopping them from doing so. It was an incredibly fun and rewarding experience, and we all learned a lot! Here is what the team had to say about their experiences:

‘I enjoyed every minute of this, from working hard to find flaws in the system, to hearing the rewarding ping of the scoring system telling us we’d gained points! I’m looking forward to doing this again next year and learning more about cybersecurity techniques and tools - it’s a useful skill for everyone to have!’ - Jenny Z

‘The competition was an incredible experience which was definitely worth the time. The best part was how much I learnt through the rounds, as it forces you to cover more and more as the rounds go on. It was incredibly rewarding, and I hope I get the chance to do it again next year.’ - Kaitlyn C

‘Since this was our first year doing Cyber Centurion, we had no idea what to expect. However, using the resources and practice rounds, we quickly gained knowledge and experience unlike any cyber security competition we'd done before. One of my favourite aspects would definitely be the anticipation of watching the leaderboards change, and the thrill of finally getting points after spending hours on an issue. I particularly liked how Cyber Centurion covers the blue team aspect of cyber security (that is, defending as opposed to attacking), and I cannot wait to compete with my team again.’ - Reva B

‘Cyber Centurion has been an amazing experience. I’ve loved working with my friends and learning new skills constantly throughout the process. This competition has helped me understand the field of cyber security more and I’m excited to play again.’ - Sarina K

If this seems of interest to anyone, or if you’d just like to learn more about cyber security, then please feel free to contact any of us via email! We’d all love to see more people becoming involved in cyber!

By Jenny, Kaitlyn, Reva and Sarina
Machine learning, as Arthur Samuel put it, is a field of study that gives computers the ability to learn without being explicitly programmed. This means that whilst in classical programming you provide the computer with rules and data and it gives you the answer, in machine learning you provide the computer with data and the answer and it works out the rules.

This can be explained using simple addition. To do the sum 2 + 3 in classical programming, you would give the computer the data (the numbers 2 and 3) and the rules (add the numbers together), and the computer would produce a result of 5. However, in machine learning, you would give the computer the data (the numbers 2 and 3) and the answer (5), and it could work out the rule - provided it has been given enough data points to be able to say with confidence that you add the numbers each time.

An example of machine learning is classification software. For example, if you are trying to get a computer to accurately distinguish between pictures of cats and dogs, you would give it a large, varied data set of pictures of both dogs and cats, and you would tell it which pictures contain dogs and which contain cats. The hope is that, eventually, you could give it a new picture and it would accurately tell whether it is a picture of a cat or a dog, by associating certain features with each group. For example, the algorithm might identify that cats have a specific ear shape or that dogs’ paws are a specific colour. However, this is also why we must provide the computer with a varied data set. If we only had pictures of black dogs or only ginger cats in the data set, the computer might classify a black cat as a dog. When distinguishing between cats and dogs this doesn’t seem like too big a deal.
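The addition example described earlier can be sketched in a few lines of Python. This is an illustrative toy (plain gradient descent on a two-weight linear model, with made-up training pairs), not any particular library’s method: the program is shown pairs of numbers and their sums, and it learns weights close to 1 and 1 - that is, it recovers the rule ‘add the two inputs’ on its own.

```python
import random

random.seed(0)

# Data and answers, but no rule: pairs of numbers and their sums.
data = [((a, b), a + b) for a in range(10) for b in range(10)]

w1, w2 = 0.0, 0.0   # the computer's current guess at the rule
lr = 0.01           # learning rate: how big each correction is

for _ in range(1000):
    (a, b), answer = random.choice(data)
    prediction = w1 * a + w2 * b
    error = prediction - answer
    # Nudge each weight in the direction that shrinks the error.
    w1 -= lr * error * a
    w2 -= lr * error * b

# After training, both weights are close to 1, so the model computes a + b.
print(round(w1, 2), round(w2, 2))
print(round(w1 * 2 + w2 * 3, 1))  # close to 5, the answer to 2 + 3
```

The rule ‘add them’ was never written into the program; it emerged from the data and the answers alone, which is exactly the inversion of classical programming described above.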
However, if we were using this algorithm to recognise people, and we didn’t have enough variety in the data set - not enough women, say, or people of colour - then the algorithm would be biased against these groups of people, and this could have serious consequences.

Now that I have explained what machine learning is and how it differs from classical programming, I will explain the different types of machine learning and how they work.

First, there is supervised learning. The aim of supervised learning is to predict a target variable given predictor variables. An example of this is a machine learning algorithm which predicts house prices (the target variable) based on location, square footage and condition of the house (the predictor variables). In this instance, the machine learning algorithm would be given a data set with many different houses and would be told their locations, square footage and condition, amongst other things. It would also be told the prices of these houses, and based on this information it would predict the price of a new house. The two sub-categories of supervised learning are classification and regression. In classification, the target variable has categories (like dog vs cat), whilst in regression, the target variable is continuous (like house prices).

The second type of machine learning is unsupervised learning. The difference between supervised and unsupervised learning is that supervised learning uses a labelled data set whilst unsupervised learning uses an unlabelled one. The aim of unsupervised learning is to discover underlying patterns in the data. So, in unsupervised learning, an algorithm might be given images of cats and dogs and be told to separate the images into two groups. However, the algorithm will not add labels to the output.

The third type of machine learning is reinforcement learning. The aim of reinforcement learning is to learn a series of actions.
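The unsupervised grouping described above (separating unlabelled data into two groups) can be sketched as a tiny one-dimensional k-means, a standard clustering algorithm. The eight numbers below are a made-up, unlabelled ‘feature’ - they could stand in for anything measured from an image - and the algorithm separates them into two groups without ever being told what the groups mean:

```python
# Unlabelled data: two obvious clumps, but the algorithm is not told that.
points = [1.0, 1.2, 0.8, 1.1, 8.0, 8.3, 7.9, 8.1]

c1, c2 = points[0], points[1]  # arbitrary starting guesses for two centres

for _ in range(10):
    # Assign every point to its nearest centre...
    group1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    group2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    # ...then move each centre to the mean of its group.
    if group1:
        c1 = sum(group1) / len(group1)
    if group2:
        c2 = sum(group2) / len(group2)

print(sorted([c1, c2]))  # one centre ends up near 1, the other near 8
```

Just as in the cats-and-dogs example, the output is only the groups themselves; it is up to a human to decide what each group actually represents.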
In reinforcement learning, the machine is not given a predefined data set, and thus it has to collect its own data. For example, if a machine learning algorithm were to distinguish between cats and dogs using reinforcement learning, it would do so using feedback. So, if it were given an image of a dog and classified it as a cat, we would give negative feedback so that, eventually, the algorithm would be able to classify the images correctly.

There are platforms on which you can set up end-to-end machine learning training pipelines. For simpler use cases, you can also use out-of-the-box services such as IBM Watson and Google Vision AI. These require input data and output data, but you don’t need to code the backend of the algorithm yourself. This is an easier way to ‘code’ an algorithm; you can then give it new data and see if it can classify the data. This is one of the most innovative and interesting fields at the moment.

Ahana 9C
The global population is ageing at a rapid pace. In the UK alone, one-fifth of the population (roughly 11.8 million citizens) is aged 65 or older. This ever-growing elderly population suffers from multiple chronic diseases (multimorbidity), such as cardiovascular disease, osteoporosis and dementia. Although there has been an increase in life expectancy, the vast majority tend to spend their later years in ill health. As a result, there is an increased need for hospital visits and hospitalisation, placing a huge burden on the healthcare system.

Polypharmacy, the use of multiple drugs, or of more drugs than is medically appropriate, is a burgeoning concern amongst older patients with multimorbidity. Research undertaken by the National Institutes of Health has shown that polypharmacy has become alarmingly common in older adults, with the highest number of drugs taken by those residing in nursing homes. Nearly 50% of the elderly population take one or more medications that are not medically necessary. The increased use of polypharmacy and the over-prescription of unnecessary medications lead to drug-drug interactions, contributing to an increased risk of falls in the elderly population, delirium and other related healthcare complications. Current evidence in the medical literature clearly establishes a strong link between polypharmacy and detrimental clinical consequences in later life. As a result, hospitals see an increased number of admissions and re-admissions. This increased healthcare demand places undue strain on the NHS workforce and infrastructure, leading to a supply and demand mismatch.

Advancements in digital health technologies such as telemedicine and Artificial Intelligence (AI) have contributed to the use of remote-monitoring devices in elderly patients.
AI technologies and interconnected personal devices have made it possible to audit, analyse and assimilate extensive medical data across the elderly population. Research conducted by Professor Arnold Milstein at Stanford University, using thermal imaging cameras and AI algorithms, has identified patients in the community at risk of falls and injuries, allowing district nurses to visit their homes and prevent these events before they happen. The use of thermal imaging and other medical technologies has been shown to reduce hospital admissions through such prophylactic interventions and through the early treatment of infections such as urinary tract infections. This has also assisted the remote monitoring of ageing and vulnerable patients, and has delivered highly targeted and direct diagnostics, healthcare and treatment. The use of technology and AI in healthcare has opened up access to personalised and precision medicine.

The ongoing Covid-19 pandemic has pushed digital technology to the forefront of medicine through virtual clinics and the telemonitoring of patients who are unable to visit hospital due to self-isolation and distancing measures. Broader use of this technology in the daily lives of elderly patients will help to identify those in need of help before they become unwell and need hospital care. The use of Artificial Intelligence and the remote monitoring of patients using advanced digital health technology will undeniably revolutionise healthcare delivery in the future by taking hospital care to the doorstep of communities.

Shriyaa, 10M
Recently, there has been much speculation surrounding the colonisation of other planets. From SpaceX to NASA, there have been an array of meetings, plans and discussions surrounding the future of Mars, and whether it should one day be colonised. One argument for the colonisation of Mars is presented by futurist Michio Kaku, who points out that 99.9% of life forms on Earth have gone extinct. On this planet, he claims, we either adapt or die. With the multitude of problems facing our planet, and a growing private sector in space exploration, the frequent discussion of Mars is understandable. Issues such as global warming, antibiotic resistance and nuclear disaster threaten the planet, as do the countless asteroids that may hit Earth at any given moment. In the case of our planet’s destruction, many argue that a ‘backup planet’ is a viable solution. Such an argument was also supported by the late Stephen Hawking, who conjectured that we needed to colonise another planet within the next 100 years to avoid extinction.

Although such a topic undoubtedly stirs excitement among the population, the reality is that the colonisation of Mars is highly impractical. Ideas such as home-building robots, genetically modified plants that can survive on Mars and other necessary technologies are, in many respects, a huge challenge to attain. Whilst it is easy to succumb to the fantasy of life on Mars, one must not forget the many dangers associated with space travel. Life on a planet with little gravity, high doses of radiation and micrometeorites is hardly appealing. Of course, with sufficient research and investment, these issues could be tackled. But why should the government and the taxpayer invest such large sums of money in another planet, as opposed to their own? Even in the event of large-scale disasters such as global warming, or an atomic bomb, the Earth would be far more habitable than Mars.
Many have concerns over polluted water, and yet the only water on Mars is frozen in the ice caps. Many have concerns over the volume of carbon dioxide in the atmosphere, and yet the atmosphere of Mars is 96% carbon dioxide. It is certainly hard to envision a scenario in which Mars is more habitable than Earth. Why, then, spend so much money, time and resources fixing these problems, instead of focusing on rebuilding our own planet?

The issue of asteroids still remains. Many theorise that the potential for asteroids to destroy Earth is a valid reason to seek shelter elsewhere, and to colonise another planet as a potential backup. However, if an asteroid were on course for Earth, surely instead of relocating the population, it would be far simpler to build asteroid-deflecting technology? In the unlikely case that no area on Earth was safe, one could invest in constructing deep-sea colonies in biodomes. Although this sounds challenging, it is still more feasible than relocating a population to a planet nine months away.

To summarise, although the idea of travelling to Mars is both exciting and tempting, one needs to look at its practical implications. Spending billions on robots, housing, GMO food and all the necessary technology to achieve such a feat is far less reasonable than focusing on renewable technologies, and on our own planet. In times of great uncertainty, we should not be focusing on the colonisation of space, but rather on the current state of Earth, and on tackling our climate crisis.

By Tatiana, 11L
The technology involved in genetic editing has made huge breakthroughs in the past few years. What began as an unrealistically difficult and ambitious endeavour in the increasingly complex world of medicine has now become a reality through technology such as CRISPR. Genetic editing holds the power not only to treat but to prevent countless diseases, transforming the world of medicine and possibly even altering the path of human evolution itself. The debate as to whether genetic editing is justified has been fiercely fought for years.

The first genetically edited babies were born in China in November 2018. The scientist responsible, He Jiankui, was found guilty of “illegal medical practices”. He was sentenced to three years in prison and fined a huge 3 million yuan (£327,360). The Chinese court even insisted Jiankui “crossed the bottom line of ethics in scientific research and medical ethics.” Large numbers of people agree with this claim, arguing that genetic editing can never be justified. The main reasons supporting this argument include the idea that genetic editing involves humans ‘playing God’. Religious believers often insist that only God should have the right to edit such a crucial element of our individuality, and that humans should be happy with their genetic identity as it is ‘God’s gift’, even if this genetic identity involves a disease.

The misuse of genetic editing has been a cause of much concern. Its potential use to enhance characteristics such as physical strength, looks, or even intelligence would be unfair to ‘unedited humans’ and possibly biased towards the wealthy, as the poor will likely be unable to afford genetic editing. A ‘black market’ in gene editing, much like ‘back alley’ abortions, may develop, where those who cannot afford gene editing choose unauthorised and unregulated facilities with likely higher complication rates, due to the lack of sanitation and of doctors able to perform the procedure.
Furthermore, if everyone decided to genetically edit themselves, there would be a reduction in genetic variation in the human species. A further concern is that eradicating genetic diseases would result in overpopulation, thus greatly contributing to the ever-worsening issues of global warming and the depletion of essential natural resources. There is also a strong ethical issue associated with all types of gene editing: is it really right for people to use the system to ‘customise’ their own children? Surely only the child should have the right to alter their appearance, and should do so when they are old enough to understand the significance of this irreversible decision. Genetic editing may also give rise to eugenics in dictatorships, where political or government groups forcefully try to modify the gene pool of some of their subjects. This may be to ensure a mental and physical advantage in warfare or scientific careers.

In addition to ethical issues and the potential misuse of genome editing, there are concerns over safety and possible complications. Germline therapy (a type of genetic editing where DNA is transferred into the cells that produce reproductive cells) poses a potential infection risk through the use of viral vectors that enable DNA to be transferred into these cells. No one can truly predict how the resulting genes may interact during fertilisation and what genetic defects may arise.

On the other hand, one can argue that genetic editing can easily be justified. After all, a long time ago surgery would have been considered a human taking the opportunity to ‘play God’. Surgery was previously extremely risky due to poor hygiene, little access to powerful anaesthetic, and sub-optimal techniques with high complication rates. However, surgery is currently much safer: millions of people undergo it and change their lives for the better.
Many people predict this future for gene editing. There is nothing wrong with people wanting to rid themselves of a disease to empower themselves and become healthy again; we all have the right to be as healthy as possible. Nature can be very cruel to us: people cannot choose whether they end up with genetically inherited diseases such as haemophilia, which can completely destroy one’s life and damage their mental as well as their physical health. If research in genetic editing continues, we will have the power to live long and happy lives. Couples can be reassured that their unborn children will too, since germline therapy ensures the disease will not be inherited in the family again. If gene editing becomes widespread and advanced enough, it will be the key to controlling human evolution: humans will eventually become much more intelligent creatures who are more mentally and physically resilient to the variety of challenges life brings in our day-to-day lives. In fact, instead of waiting hundreds of thousands of years for beneficial mutations to arise (as with natural selection), we could start to see beneficial changes every year. Many people regard gene therapy as unsafe; however, as with all new therapies, medicines and vaccinations, genetic editing will be rigorously tested and researched before it is released to the public as a standard procedure, certifying its safety.

In conclusion, genetic editing could greatly benefit people, increase longevity, and change the scale of human happiness and productivity by multiple orders of magnitude. It could eliminate thousands of diseases and many forms of pain and anxiety arising from them. There are only a handful of areas of research in the world with this much potential. However, whilst it may be a wonderful addition to medical science, there needs to be firm monitoring to ensure genetic editing is as risk-free as possible. Furthermore, it must be strictly controlled to avoid misuse.
Genetic editing has risks, and we must proceed with caution; but many new technologies have risks, and we are eventually able to use them to greatly benefit people throughout the world. We should not let fear hold back progress on this extremely promising new area of research.

By Lana, 11N
Einstein’s theories of special and general relativity were singularly revolutionary and radical at their time of conception; established laws of physics and previous understandings of reality itself were uprooted. Albert Einstein, one of the most famous scientists of all time, proposed his theory of special relativity in 1905, and his theory of general relativity in 1915, contributing fundamental ideas that form the basis of modern physics. By combining the ideas of Isaac Newton and James Clerk Maxwell, Einstein conceptualised a new reality. The seemingly inexplicable discrepancy between Maxwell’s finding that the speed of light (c) is constant regardless of motion, and Newton’s laws of motion, was reconciled through Einstein’s proposal of spacetime. Einstein challenged Newton’s understanding of the universe as a ‘clockwork universe’, where a metre is always a metre and one second is the same anywhere in the universe. Einstein’s spacetime stated that space and time are not two separate dimensions; they are not separate from each other, but are unified in a dynamic, four-dimensional continuum. This fabric of the universe is sensitive and responsive to the presence of mass and energy, and dictates how mass and light move through the universe. Time and time again, different phenomena, such as the Eddington solar eclipse observations of 1919, have proven Einstein’s theory right.
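One measurable consequence of special relativity is time dilation: a clock moving at speed v runs slow by the Lorentz factor, 1 divided by the square root of (1 - v²/c²). As a small worked illustration, this standard textbook formula can be evaluated in a few lines of Python:

```python
import math

C = 299_792_458.0  # speed of light in metres per second

def lorentz_factor(v):
    """Seconds that pass for a stationary observer per second on a clock moving at speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A clock moving at 80% of the speed of light runs slow by a factor of about 1.667:
print(round(lorentz_factor(0.8 * C), 3))
```

At everyday speeds the factor is immeasurably close to 1, which is why Newton’s picture - one second the same everywhere - seemed correct for so long.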
The theory of special relativity also concluded that simultaneity - things happening at the same time - is relative to motion, which Einstein explained through a simple thought experiment: suppose one person is standing still, equidistant from two trees, and two bolts of lightning strike both trees at the same time. A second person who was also equidistant from the two trees, but was instead on a moving train, would have seen one tree struck before the other. This led to the conclusion that both time and distance are relative to motion. Such is the genius of Einstein: his musings, questioning and inherent curiosity led to such revolutionary ideas, explained in equally enlightening and simple terms. What is particularly striking is the use of thought experiments themselves, as opposed to the empirical and experimental methods we are accustomed to today. All of these theories were derived from sparse evidence, considering how revolutionary they were; evidence which, even now, continues to arise and repeatedly proves Einstein’s genius. He did not experience these things before formulating his theories: they were predictions, and this required immense creativity, intellect and imagination.

A decade after special relativity, Einstein introduced acceleration into his theory, which resulted in the theory of general relativity. Einstein completely changed the Newtonian notion of gravity as a force into the idea that gravity is the distortion of the fabric of spacetime, caused by massive objects. Understanding of this distortion has led to much scientific discovery, including the study of stars and galaxies that lie behind massive objects, using gravitational lensing! This is when the light around a massive object, such as a black hole, becomes bent; the object then acts as a lens to see things behind it. Einstein’s genius has enlightened and enriched our understanding of the reality of the universe. His theories continue to guide us, more than a century later.
His genius stretches into the future, inspiring infinite discoveries.

Research sources:
https://www.space.com/36273-theory-special-relativity.html
https://www.space.com/17661-theory-general-relativity.html
https://physics.stackexchange.com/questions/314050/will-moving-observer-see-time-dilation
https://www.gresham.ac.uk/lectures-and-events/einstein
https://www.gresham.ac.uk/lectures-and-events/was-einstein-right
http://www.physics.org/article-questions.asp?id=55

The mystery of our universe and its end has left scientists baffled since it was discovered that our solar system is the smallest part of a cosmos larger than was ever previously thought possible. While the Earth is predicted to be vaporised in about 6 billion years, the universe will continue long after that. The issue with finding an answer to how our universe will eventually end is that, with such a large part of our universe being made up of the elusive dark energy and dark matter, and the potential ‘end’ of the universe being trillions of years into the future, it is difficult to come up with one definitive theory. So far, there are three major competing theories hypothesising potential ways that the universe will end.

The Big Freeze theory grounds itself in the field of thermodynamics, the study of heat. In the universe, events, processes and, more generally, everything occur due to a heat difference between different sources. This theory suggests that since heat always moves, eventually heat will be evenly distributed throughout the entire universe. At this point, also referred to as ‘heat death’, all stars will run out of fuel and die, all matter will decay, and the only thing remaining would be a few particles that would also, over time, be shifted away by the expansion of the universe. Even the largest stars that collapse into black holes would eventually give off Hawking radiation, eventually evaporating too. A rather bleak theory, this suggests that the universe will end up cold and empty.
In many ways the Big Crunch theory is a direct opposite to the Big Freeze. Thanks to the theory of relativity, and to observations of the redshifted light of distant galaxies, it was discovered that the universe is expanding. As a result of this discovery, it has been speculated that although the universe is expanding right now, it will eventually reach a threshold where there is so much matter in the universe that gravity becomes the dominant force, causing the universe’s expansion to slow down, stop, and then reverse into contraction. The universe would contract faster and faster, becoming denser and hotter as it does so, until all matter finally implodes in on itself in a final singularity. This is often thought of as, in effect, a reverse Big Bang. However, this theory is less widely supported because of a more recent discovery that makes it improbable: the rate at which the universe is expanding is increasing.

The Big Rip, the final major theory, grounds itself entirely in the activity of dark energy. Dark energy is thought to be responsible for the universe’s accelerating expansion, and since its density remains constant despite the universe growing, it is thought that more and more dark energy ‘pops’ into existence in order to keep up the rate of expansion. Oddly enough, this does not contradict the fundamental law of conservation of energy. This law states that in an isolated system the total amount of energy will remain constant, but it relies on the system staying the same over time: because energy and momentum are defined with respect to spacetime, total energy is only guaranteed to be conserved if spacetime itself does not change - and in an expanding universe, it does. It is suggested that eventually so much dark energy will have popped into existence that its density will exceed that of ordinary matter, so the expansion driven by the dark energy will overcome the gravitational forces of ordinary matter.
This will essentially rip the universe apart, with larger objects of a lower density, like planets and stars, being ripped apart first, then humans and other living creatures, and finally atoms being destroyed before the universe is entirely ripped apart.

All of these theories have their merits and provide good explanations for what may one day happen to our universe, but none yet has conclusive evidence behind it. What we can be sure of is that by the time these doomsday scenarios might happen, trillions of years from now, humans will have evolved so far that we probably will not be able to call those living at that time humans anymore - if life still continues to exist for that long.

Sources:
http://www.bbc.co.uk/earth/story/20150602-how-will-the-universe-end
www.wired.co.uk/article/how-will-universe-end
https://www.sciencedaily.com/terms/supercooling.htm
http://www.preposterousuniverse.com/blog/2010/02/22/energy-is-not-conserved/

By Yuval, Year 11
There is a ‘liminal space’ between cellular life and death. Despite the fact that they are often thought of as mutually exclusive, the boundary is not as straightforward as it seems. Many have grappled to denote the moment of death for humans: is it when the heart stops beating? When breathing stops? A lack of detectable brain activity? Divergent answers arise because death is a process, and a process is not, by definition, irreversible. In regard to cells, it is predominantly assumed that once a cell passes critical checkpoints, the death process is irrevocable. Such checkpoints include condensation of the nucleus, collapse of DNA, disintegration of the mitochondria, and cell shrinkage. Moreover, these events are often intentional. An essential component of life is programmed cell death, with over 20 forms proposed. Among these, apoptosis is the most notable and well-studied, due to its regulatory mechanisms in cell suicide and its crucial roles in embryonic development, in maintaining a balance of cellular multiplication, and in regulating internal conditions (homeostasis) by eradicating undesired, faulty or dangerous cells in the body. Apoptosis comes from the Greek for ‘falling off’, and it expedites the habitual turnover of cells, analogous to leaves falling from a tree in autumn. A number of triggers are involved in apoptosis, but ultimately they activate a decisive group of ‘executioner’ proteins named caspases. These enzymes, by cleaving hundreds of various types of proteins within a cell, inflict destruction on cellular targets, attack structural proteins and deconstruct the cytoskeleton, causing the cell to shrink into blebs and die.

With all this, doubt also follows. The fence which segregates life and death is porous even at the level of cells, the rudimentary units of life. A growing body of evidence has recently demonstrated that cells believed to be dead or terminal are able to revive themselves, or at least partly revive, and hence reverse apoptosis under the right conditions.
This phenomenon is referred to as anastasis (Greek for ‘rising to life’) and can occur both in vitro and in vivo. A significant role that anastasis plays involves the maintenance of differentiated cells that are difficult to replace, such as neurons and cardiomyocytes. In this way, anastasis can counter many of the complications resulting from apoptosis. A variety of degenerative diseases, such as Alzheimer’s and Parkinson’s, are associated with apoptosis not functioning correctly. This is because protein aggregation can activate an enzyme that triggers apoptosis, resulting in the death of neurons and loss of brain function.

However, if we compare apoptosis to the demolition of a building, the detrimental effects that arise when anastasis takes place can also be understood. The caspases involved in the breakdown of cellular structures are somewhat like demolition workers destroying buildings. If someone decides afterwards, ‘I don’t want it to be destroyed, please rebuild it’, then the damage has to be repaired, and this process of restoration may go wrong: you won’t have a complete replica of the original. Therefore, when anastasis takes place, the resurrected cells may bear chromosomal abnormalities and acquire mutations. This can engender a multiplier effect, where particular mutations cause unchecked cell growth and proliferation. Thus, this revival process may cause normal cells to become cancerous, gaining new mutations and transmuting into more hostile and metastatic cancers. In this way, cancer cells are said to employ anastasis as a way to ‘cheat death’, using it as an escape tactic to survive cell-death-inducing anti-cancer therapy (e.g. chemotherapy and radiotherapy). The correlation between anastasis and cell regeneration, the rise of disorders, and cell death decisions is yet to be elucidated, as additional research is required to confirm a direct link.
Ultimately, if there truly is such a link, the resurrection of cells could inform multiple fields of science and deepen our understanding of how cell survival and destruction are controlled. It could also point towards novel therapeutic approaches for brain damage, cancer, tissue injury and regenerative medicine, by mediating the reversibility of apoptosis. By Gaya, Year 11
Introduction: As a second patient is seemingly cured of HIV after receiving a stem-cell treatment for his cancer (Hodgkin lymphoma), it raises the question of whether HIV really is ‘incurable’, as we have always assumed. This article will endeavour to explain how HIV and AIDS affect the human body, how Adam Castillejo was ‘cured’, and what all this means for the future of modern medicine. HIV: HIV stands for human immunodeficiency virus. It is a virus that attacks the cells that would normally help the body fight infection (CD4 helper cells), making a person more vulnerable to other infections and diseases. CD4 helper cells are a subset of white blood cells that do not neutralise infections themselves, but rather trigger the body’s response to infections. HIV follows seven steps to multiply in the body, beginning when the virus encounters a CD4 cell: binding, fusion, reverse transcription, integration, replication, assembly and budding.
Not only does HIV attack CD4 cells, it also uses them to make more of the virus. HIV hijacks the cells’ replication machinery to create new copies of itself, which ultimately causes the CD4 cells to swell and burst. Once the virus has destroyed enough CD4 cells that the count drops below 200, a person has progressed to AIDS. You can get or transmit HIV through specific activities only. Most commonly, people get or transmit HIV through sexual behaviours and needle or syringe use. Only certain fluids from a person who has HIV can transmit the virus: blood, semen, rectal fluids, vaginal fluids and breast milk. It can also be transmitted from mother to baby during pregnancy via the placenta. AIDS: AIDS is acquired immune deficiency syndrome. It is the name used to describe a number of potentially life-threatening infections and illnesses that occur when your immune system has been severely damaged by HIV. Many people confuse HIV and AIDS: HIV is the virus that causes AIDS. HIV destroys CD4 T cells, the white blood cells that play a significant role in helping your body fight disease. The fewer CD4 T cells you have, the weaker your immune system becomes. You can have an HIV infection, with few or no symptoms, for years before it turns into AIDS. AIDS is diagnosed only once your immune system has been severely damaged: when the CD4 T cell count falls below 200, or when an AIDS-defining complication arises, such as a serious infection or cancer. Thanks to better antiretroviral treatments, most people with HIV today don’t develop AIDS; untreated, HIV typically progresses to AIDS in about 8-10 years. TREATMENT: Antiretroviral medicines are normally used to treat HIV. They work by stopping the virus replicating in the body, allowing the immune system to repair itself and preventing further damage.
These come in the form of tablets, which need to be taken every day. HIV can develop resistance to a single HIV medicine very easily, but taking a combination of different medicines makes this much less likely. On top of this, some people are actually naturally resistant to HIV. CCR5 is the receptor most commonly used by HIV-1 (the strain of HIV that dominates around the world) to enter cells, but a small number of people carry two mutated copies of the CCR5 receptor, which means the virus cannot penetrate the cells it normally infects. Researchers say it may be possible to use gene therapy to target the CCR5 receptor in people with HIV. Adam Castillejo was the second patient to be cured of HIV. He received a stem-cell transplant to treat his Hodgkin lymphoma, and this rendered him virus-free, because the stem-cell donor carried the uncommon mutation that now gives Mr Castillejo protection against HIV. STEM CELLS: To understand how Mr Castillejo became HIV-free, it is important to look more closely at how stem cells work. Stem cells are special human cells that have the ability to develop (differentiate) into many different cell types, from muscle cells to brain cells. In some cases, they can also repair damaged tissues. Stem cells are divided into two main forms: embryonic stem cells and adult stem cells. The embryonic stem cells used in research today come from unused embryos donated to science after in vitro fertilisation procedures. These embryonic stem cells are pluripotent, meaning they can develop into almost any type of cell in the body. There are two types of adult stem cells. One type comes from fully developed tissues, like the brain, skin and bone marrow. There are only small numbers of stem cells in these tissues, and they tend to generate only certain types of cells.
For example, a stem cell derived from the kidney will only generate more kidney cells. The second type is induced pluripotent stem cells: adult stem cells that have been manipulated in a lab to take on the pluripotent characteristics of embryonic stem cells. Although induced pluripotent stem cells don’t appear to be clinically different from embryonic stem cells, scientists have not yet found one that can develop into every kind of cell and tissue. The only stem cells currently used to treat disease are hematopoietic stem cells, the blood-cell-forming adult stem cells found in the bone marrow. Stem cell transplants are currently used in the treatment of cancer. In a typical stem cell transplant for cancer, very high doses of chemotherapy are used, sometimes along with radiation therapy, to try to kill all the cancer cells. This treatment also kills the stem cells in the bone marrow, so soon afterwards, stem cells are given to replace those that were destroyed. These stem cells are given into a vein, much like a blood transfusion. Over time, they settle in the bone marrow and begin to grow and make healthy blood cells; this process is called engraftment. There are two main types of stem cell transplant. In an autologous stem cell transplant, the stem cells come from the same person who will receive the transplant. Your own stem cells are removed, or harvested, from your bone marrow or blood before you get the treatment that destroys them, and then frozen. After you receive high doses of therapy, the stem cells are thawed and given back to you. One advantage of an autologous transplant is that you’re getting your own cells back, so you don’t have to worry about the new (engrafted) cells attacking your body, or about catching a new infection from another person.
But there can still be graft failure, meaning the cells don’t settle into the bone marrow and make blood cells as they should. A disadvantage of an autologous transplant is that cancer cells may be collected along with the stem cells and later put back into your body. Another disadvantage is that your immune system is still the same as it was before the transplant: since the cancer cells were able to escape attack from your immune system before, they may be able to do so again. To help prevent this, some treatment centres treat the stem cells before giving them back to the patient, to try to kill any remaining cancer cells; this is called purging. This too has drawbacks, because some normal stem cells can be lost in the process, so your body may take longer to start making normal blood cells, leaving you with dangerously low levels of white blood cells or platelets for longer and increasing the risk of infections or bleeding problems. In an allogeneic stem cell transplant, the stem cells come from a matched related or unrelated donor. In the most common type of allogeneic transplant, the stem cells come from a donor whose tissue type closely matches the patient’s. Blood taken from the placenta and umbilical cord of newborns, called cord blood, is a newer source of stem cells for allogeneic transplants. This small volume of blood has a high number of stem cells that tend to multiply quickly, but it normally does not contain enough stem cells for large adults, so it is mostly used for children and smaller adults. One advantage of this type of transplant is that the donor stem cells make their own immune cells, which can help kill any cancer cells that remain after high-dose treatment; this is called the graft-versus-cancer effect. Also, the donor can often be asked to donate more stem cells or even white blood cells if needed, and stem cells from healthy donors are cancer-free.
The cons: the transplant might not take, as the transplanted donor stem cells could die or be destroyed by the patient’s body before settling in the bone marrow. The immune cells from the donor may also attack not just the cancer cells but healthy cells in the patient’s body; this is called graft-versus-host disease. There is a very small risk of certain infections from the donor cells, but as donors are tested before they donate, this is rare. A higher risk comes from infections you have had previously, which your immune system has kept under control. These can resurface after an allogeneic transplant because your immune system is suppressed by medicines called immunosuppressive drugs, and such infections can cause serious problems, even death. Of course, this all raises the question of why we cannot use stem cell transplants to cure all HIV patients. The answer is that it would not be possible to find enough genetically matched bone marrow donors with the naturally occurring mutation to treat the 33 million people with HIV, even if that were desirable, safe and ethical. Also, most people with HIV already have very compromised immune systems, so carrying out stem cell transplants would carry a significant risk that may not be outweighed by the benefits. GENE THERAPY: A lot of research has gone into the potential of gene therapy for treating and preventing disease in general, and HIV specifically. Gene therapy is an experimental technique that uses genes to treat or prevent disease. In the future, it may allow doctors to treat a disorder by inserting a gene into a patient’s cells instead of using drugs or surgery. Several approaches to gene therapy are being researched, including: - Replacing a mutated gene that causes disease with a healthy copy of the gene. - Inactivating, or ‘knocking out’, a mutated gene that is functioning improperly and causing unwanted consequences.
- Introducing a new gene into the body to help fight a disease. Gene therapy is a promising treatment option for a number of diseases, including inherited genetic disorders, some types of cancer and certain viral infections. However, the technique is risky and is still under study to make sure it will be safe and effective; it is currently being tested only for diseases that have no other cures. Gene therapy is promising for curing HIV because the people who are naturally resistant to the virus carry two mutated copies of the CCR5 receptor. By targeting CCR5, gene therapy could in principle give other people the same protective mutation and thereby cure them of HIV. However, gene therapy remains a controversial technique, and whilst many believe it will become a staple of 21st-century medicine, experts say society will be better served if medical researchers proceed slowly and prudently. It is therefore likely to be decades before gene therapy becomes normalised in modern medicine, and for specific treatments, such as this supposed HIV cure, many years of testing, clinical trials and regulatory approval would have to pass before the treatment becomes available. By Kiran Kuri, Year 13
Robots have always been lurking at the back of directors’ minds. From The Terminator (1984) to Wall-E (2008), robots may have evolved, but they remain the go-to option for a merciless villain ready to take over the world or end humanity as we know it. In this article, I will be covering three different movies with three different types of robots sharing a similar motive. My first movie is 2001: A Space Odyssey. Released in 1968 and directed by Stanley Kubrick, its main antagonist is more an artificial intelligence than a robot. In case you aren’t familiar with this movie, it’s about a space mission to Jupiter, run with the help of the AI HAL 9000, to discover what a mysterious monolith was aiming at. HAL’s increasingly erratic behaviour convinces two of the astronauts, Bowman and Poole, to disconnect him. By lip-reading their private conversation, HAL learns of the plan and kills Poole and the rest of the crew; only Bowman survives to disconnect him. Only as HAL is shut down is the truth revealed: the mission’s real purpose had been kept secret from the crew all along. This is one of the earliest movies about a killer AI, setting the template that robots are ordained to kill us all: a curse dressed up as a blessing. My second choice is Wall-E. This movie, in short, is about a space programme to evacuate humans from Earth while it is made habitable again. The film is full of robots, but I would like to focus on Auto, the artificially intelligent steering wheel. He takes over (unofficially) as the captain of the mission and turns a 5-year plan into a 700-year one. His mission directive, A113, is to never return to Earth, and he will do anything to keep to it. This shows how helpful AI can turn on humanity and enslave us. There is also a striking resemblance between HAL and Auto: both head a space mission with a secret twist that affects humans, and both have a similar appearance, white with a red light in the middle. This could just be a nod to Kubrick by Disney, who often include such references.
However, it could have a deeper meaning… what if HAL had been restored generations later, with no one aware of the harm he had once caused? That, really, is for you to ponder. My third and final choice is Big Hero 6. This Disney animation, in a nutshell, is about a healthcare robot, Baymax, who befriends his late creator’s younger brother, Hiro. Hiro and his friends team up to uncover a mystery involving microbots, small robotic pieces that join together to form larger structures. Instead of talking about the antagonist, I am going to explore Baymax, the robot who is always willing to help. Without spoiling the suspense too much: when he is reprogrammed to be destructive, Baymax, unlike our two other antagonists, uses his new skills to save people from danger, not to put people in danger. Baymax also shows a sense of understanding that neither HAL nor Auto did. When Hiro is about to replace his microchip, Baymax plays a video showing Hiro’s elder brother struggling to create him, reminding Hiro not to make too many changes to something that took shape through someone else’s tireless effort. In essence, most movie robots are considered more of a bane than a boon, yet some always go out of their way to help. It is also interesting to see how robots and AI are shown to have two faces, one to help and one to destroy; yet this does not mean they are decisively shady. By Anagha Sreeram, 8C