Cell A Bias and Coincidental Thinking

Many of us find the concept of fate extremely seductive; it is admittedly difficult not to wonder whether the stars have perfectly aligned when extraordinary coincidences occur. Most of us have dreamed about a long-forgotten friend only to run into him the very next day, but should we necessarily ascribe coincidences like this to a higher power? Coincidences of this type are usually a collision of two individual events, and any individual event has the potential to transpire in one of two ways: it either occurs, or it does not. Either the cloud above you looks like an elephant or it does not. Either the phone rings at 8:52 P.M. or it does not.

Sometimes, two individual events coincide in a way that strikes you as noteworthy: Patricia calls you moments after you thought of her, or you dream about a long-forgotten friend only to run into him the very next day. This collision of events is where the term coincidence earns its keep. Notice that coincidences rely on two or more individual events colliding in a way that strikes you as transcendentally directed, perhaps by some divine being or by fate itself. Most of us have experienced a seemingly eerie combination of variables coinciding unexpectedly: you have a frightening dream about your mother and wake up to a voicemail saying she has gone to the hospital, or you mention in passing that you have never broken a bone and then take an awkward fall, to your femur's detriment, later that day. But how often do occurrences of this type, collisions of two individual events, come whizzing past our faces only to go unnoticed? Moreover, how often do coincidences simply fail to occur?

Here is where Cell A Bias, the tendency to ascribe authorship or divine intervention to random events, comes into play. (Cell A refers to the top-left cell in the two-by-two matrix seen below, the same cell that we call coincidence.) Consider the example of Patricia calling you moments after you think of her. The two individual events are you thinking of Patricia and her calling you. Each event can either occur or fail to occur, which gives us four possible combinations. This can be diagrammed using a two-by-two matrix, as seen below.

 

                                  Patricia calls           Patricia does not call
  You think of Patricia           Cell A (coincidence)     Cell B (non-event)
  You do not think of Patricia    Cell C (non-event)       Cell D (non-event)

As stated, most forms of Cell A Bias arise from our mistakenly interpreting two events as indicative of some underlying reason or purpose.

Since any combination besides Cell A involves a non-event, the resulting possibilities are: Cell A (Event), Cell B (Non-Event), Cell C (Non-Event), and Cell D (Non-Event), as shown in the matrix above. Notice there are three non-event cells and only one event cell. If you think of Patricia but she does not call you moments later, you do not register that as a salient event. Likewise, should Patricia happen to call you without your prior consideration, no event is marked. Obviously, if neither event occurs (the phone does not ring, nor do you think of Patricia), no connection is made. We need not debate which combinations are more likely than others. There are, in fact, countless non-events happening right now. You are neither thinking of a particular loved one, nor are they calling you at this particular instant. Your father twists his ankle, but you did not dream about it the previous night. However, your brain cannot keep tabs on every non-event, so Cells B, C, and D go unnoticed while the “perfect storm” brings events in Cell A to the forefront of your consciousness.
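
To get a feel for how lopsided these cells are, here is a minimal simulation (a sketch in Python; the daily probabilities and the ten-year window are illustrative assumptions, not measured values). Each day you may or may not happen to think of Patricia, and she may or may not happen to call, independently.

```python
import random

random.seed(42)

P_THINK = 0.02   # assumed daily chance you happen to think of Patricia
P_CALL = 0.02    # assumed daily chance Patricia happens to call you
DAYS = 3650      # roughly ten years of days

cells = {"A": 0, "B": 0, "C": 0, "D": 0}
for _ in range(DAYS):
    think = random.random() < P_THINK   # event 1: you think of her today
    call = random.random() < P_CALL     # event 2: she calls you today
    if think and call:
        cells["A"] += 1   # Cell A: the coincidence you notice and remember
    elif think:
        cells["B"] += 1   # Cell B: you think of her, she does not call
    elif call:
        cells["C"] += 1   # Cell C: she calls without you thinking of her
    else:
        cells["D"] += 1   # Cell D: neither event occurs

print(cells)
```

Even with two independent two-percent chances, a day or two of Cell A "coincidences" typically turns up over the decade, buried under thousands of Cell B, C, and D days that nobody remembers; multiply this by the thousands of distinct event pairs in a lifetime and the occasional eerie coincidence becomes all but guaranteed.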

Evolutionarily, it actually makes sense for us to assign authorship to natural events that most likely have none. Our brains simply cannot hold memories of every time a coincidence failed to occur, so the only memories we do retain are those of seeming authorship. If Cell A Bias allows us to make more meaningful connections about patterns in the world, whether true or not, and those connections help our brains retain more information, then the bias likely had survival value. We therefore find coincidences of this sort meaningful simply because we fail to notice the trillions of non-events happening all around us.

In fact, simple probability and the law of large numbers practically guarantee that some coincidences must occur; it would be extraordinary if no events ever happened to coincide. In this context, Cell A Bias tells us that we should be more skeptical of coincidences and avoid automatically ascribing them to a higher power or fate. Of course, a higher power could be the reason coincidences arise, but an awareness of Cell A Bias should at least give you pause the next time you notice one.

ATOMS Program Matters for Students and Science

A typical fifth-grade science class involves a lot of worksheets, tests, and homework assignments. The topics may be interesting, but students don't scream with excitement every time they get another worksheet. Kids would much rather complete a project, like exploding a baking soda volcano or making their own ice cream. For some Pennsylvania kids, this educational dream becomes a reality every summer through the Advanced Training for Outstanding Mathematics and Science (ATOMS) program.

The ATOMS program serves elementary and middle school students who have high math and science grades. The program is provided by Appalachia Intermediate Unit 8 (IU8) and serves four counties in southwestern Pennsylvania. Besides enhancing students' STEM education, the program allows students to socialize with high-achieving peers and to build their sense of self.

Kids love the ATOMS program’s hands-on activities. One ATOMS scholar, Josh, said his favorite activity was building and testing a slingshot, which taught him about physics concepts. Another student, Savana, enjoyed learning about volume by building a tower from straws and tape, filling it with coins, and weighing it. By learning hands-on skills, students are given an educational advantage. Ideas and memorization are at the core of traditional classroom learning, and ATOMS fills the gap of hands-on experience. Years before they enter college, the kids participate in tactile activities that form a foundation for future STEM education.  

Because the kids are highly engaged, they remember much of what they learn at ATOMS. This can help them in school, says Lisa Prebish, Gifted Support teacher at Penn Cambria School District. In the classroom, she has noticed students referring to skills they learned at ATOMS. One popular ATOMS activity is the egg drop, in which teams of students must construct a container that will protect their egg from breaking, as it is dropped from a ladder. This project might help students later in their education, Prebish says. A main part of Penn Cambria’s home economics course is the “egg baby” project. For a week or two, students become the “parents” of a fragile egg, which they must carry everywhere. Students might use their ATOMS egg drop container design to develop a container for their “egg baby,” protecting it from harm and succeeding in the assignment.

The ATOMS program also gives kids a chance to socialize. ATOMS scholars are unlikely to feel socially ostracized because their peers are of similar academic abilities, Prebish notes. Unlike in a regular classroom, high-achieving kids are unlikely to be bullied. Students feel accepted because they are with like-minded peers, which allows for intellectual and enthusiastic conversations about science. The program also brings kids, and ideas, from different schools together, since kids can choose which site to attend. Looking back on his ATOMS experience, Josh said, “You have people that really want to be there… and it’s really a great experience to talk to people that are outside of your school, but kind of think similarly to what you do.” The program encourages group work, and the projects are often completed via team effort. Because the kids establish a rapport, group work is productive and enjoyable. Savana said the program “was fun because you got to hear about other people’s ideas and not just your own.”

Another focus of the ATOMS program is fostering a sense of self in students. By working together on science projects they love, students can be confidently creative. They are encouraged to try new things, share their ideas, and accomplish goals. A third student, Alexa, loved that she got to be creative in her ATOMS activities. She said, “You got to be messy and work together and solve things… You got to eat in some of the classes, and do arts and crafts, and make messes, and it was just a lot of fun.” The program shows kids that being “messy,” whether by cluttering up a workspace or by making mistakes in the process of solving a problem, is totally acceptable. They learn that creativity is a necessary part of the scientific process, and that science is a part of daily life. For example, Prebish recalled an ATOMS program that focused on cooking. Students spent the week creating elements of an Italian dinner and learning about the science of the kitchen. At the end of the week, they had a feast. By cooking a meal together as a science project, the students learned that STEM is applicable to both a laboratory and a kitchen.

By participating in the program, ATOMS scholars grow in their love of science. The kids learn that science is much more than calculating numbers or completing a worksheet. They realize that they can build machines and combine ingredients to do amazing tasks, and they are introduced to other bright minds of their generation. Students of ATOMS experience science with an enthusiasm that will help them to become future community leaders. To ATOMS program scholars, science really matters.

New Techniques in Reproductive Medicine: Mitochondrial Transfer

At a certain point in life, many people want to settle down and start a family. They look forward to having kids. However, not every couple is able to conceive a healthy child. In some cases, the mother might have reproductive issues that prevent the parents from having a child at all. Or the mother could pass down a rare genetic condition that she inherited through her mitochondria, which could affect the child's development. In these situations, the parents might opt for adoption. There are, however, other methods that might enable the family to conceive a healthy child.

The mitochondrion plays an important role in the body by producing the energy that cells need to function. Babies normally inherit their mitochondrial DNA only from the mother. If the mother has a defect or mutation in her mitochondrial DNA, the mutated DNA will be passed on to the child. In some cases, these mutations can prove fatal to the baby by causing problems in the respiratory, endocrine, cardiovascular, muscular, and nervous systems. Such defects can cause a miscarriage, or they can kill the baby shortly after birth or within a few years. The mother will most likely not know she has unhealthy mitochondria until she goes to a doctor to figure out why she cannot carry a healthy baby. Doctors can confirm this by looking into the family history of disease and at the characteristics of muscle fiber tissue, which can appear as "ragged red" fibers when the mitochondria are mutated. These results reveal whether the genetic defect lies in the mitochondrial genome. Once the doctors determine that the mother has unhealthy mitochondria that could affect the development of her child, she can decide how she wants to start a family of her own. If the mother is set on conceiving her own child rather than adopting, she could look into mitochondrial transfer to ensure her child develops healthy mitochondria.

Mitochondria are essential because they provide the body with energy. In normal conception, the child's DNA comes from both the mother and the father, with all of the mitochondrial DNA coming from the mother. If there is a mutation in the mitochondrial DNA, the child will be at risk of an energy shortfall and might not fully develop. The nuclear DNA sits in the nucleus, while the mitochondrial DNA sits in the cytoplasm, still inside the mother's egg cell. With mitochondrial transfer, however, a tiny 0.1 percent of the child's DNA comes from a donor with healthy mitochondrial DNA. In this process, the nucleus, which is the cell's control center, is removed from the mother's egg cell containing the unhealthy mitochondria. The donor's nucleus is likewise removed from the donor's egg cell, which contains healthy mitochondria. The mother's nucleus is then inserted into the donor's egg cell. This reconstructed egg can then be fertilized by the father's sperm, and the resulting embryo is transferred into the mother's uterus, where it will develop, allowing the parents to have a healthy baby.

The first successful mitochondrial transfer of this kind was performed on a Jordanian couple by physician John Zhang at a clinic in Mexico. The couple had initially traveled from Jordan to New York University and met with Dr. James Grifo, who had tested mitochondrial transfer in mice. He then directed them to Dr. Zhang, who had been working on the technique. The couple had lost their first two children to Leigh syndrome, a fatal disorder associated with mutated mitochondrial DNA, so they decided to try the mitochondrial transfer method. Luckily for them, the method was successful, and in 2016 they had a baby boy who inherited nuclear DNA from both his parents and healthy mitochondrial DNA from the donor.

According to researchers, there are still issues to be resolved with mitochondrial transfer therapy. Some of the diseased mitochondria can be carried over from the mother's egg into the donor egg. The technique involves sucking up the mother's nucleus with a tiny glass straw and transferring it to the donor egg cell, which no longer contains a nucleus. However, some disease-carrying mitochondria can be sucked up in this process and therefore transferred to the donor egg as well. Even a small amount of mutated DNA might cause problems in the following generations: if the carried-over mitochondria replicate, they could eventually lead to disorders such as Leigh syndrome, which can be fatal to the child. Others counter that, on balance, mitochondrial transfer would keep deadly diseases from being passed on to the next generation.

Some researchers claim that altering the DNA of an embryo is unethical. In fact, the United States Congress has barred the FDA from using federal funds to review these practices because altering a human embryo is deemed unethical. While some argue that altering genes could cure life-threatening diseases, others feel that this would pave the way for so-called "designer babies." The genes of such babies could be altered so the child adheres to certain beauty standards, or to ensure the child has a high IQ, which would give the child an unfair advantage in society.

Aside from the scientific community, the public has also voiced concern about the ethics of this procedure. Critics point out that the people affected are not yet born and therefore cannot consent to the treatment. Catholic ethicists have complained that the method "dilutes parenthood" because some of the DNA comes from a donor. The alteration to the genome is permanent and could be passed down to future generations. There is also no way to know the long-term impact of mitochondrial transfer, because other problems in the genome might affect the baby later in life.

Research on this practice is ongoing, and there is not yet enough evidence to guarantee its safety. While it could prevent one disease, it could also cause other problems, because the genome is altered. With more work, however, this technique could be used to help the estimated one in four thousand women affected by mitochondrial disease. Mitochondrial transfer therapy first has to be approved by the government, which can only be accomplished through multiple successful trials. Only then can the procedure become more commonly used.

Gene Therapy on the Horizon?

Cancer. Cardiovascular disease. HIV. Cystic Fibrosis. Hemophilia. Can you imagine a world in which these diseases could be controlled and potentially eradicated? Gene therapy is an innovative technique that creates a promising future for the cure of these devastatingly destructive diseases.  

Gene therapy is a relatively new area of science that classically involves using healthy genes to replace mutated genes that cause disease. A second approach, non-classical gene therapy, focuses on inhibiting or repairing mutations. Gene therapy is used on two types of cells: germ cells and somatic cells. Germline gene therapy involves inserting healthy DNA fragments into germ cells (sperm and egg cells) so that a patient's children do not inherit a targeted genetic defect, thereby eliminating the mutation in future generations. However, due to various ethical concerns (e.g., the children who would be affected by germline therapy cannot decide whether they want the treatment), the U.S. government does not sponsor research on germline therapy. In contrast, somatic gene therapy receives federal funding because it does not affect germ cells: it targets somatic cells in the body, such as cells from the bone marrow, so the modified genes are not passed to offspring.

The concept of classical gene therapy appears very straightforward: simply insert a new gene to replace a mutated one. Unfortunately, the process is a bit more complicated. A functional DNA fragment is carried by a vector (e.g., a virus or plasmid), which is either injected into the area of the body containing the mutated cells or introduced into cells that have been removed from the body and are then transplanted back after the injection. Classical gene therapy has recently gained recognition in the news for its success in treating Leber Congenital Amaurosis.

Leber Congenital Amaurosis (LCA) is a rare inherited retinal disease that affects approximately 2,000 people in the United States.  A fully-functioning retina enables visual recognition by translating light into signals that are sent to the brain. However, individuals with LCA are born with a mutation in a gene which causes retinal dysfunction. There are several types of LCA so symptoms generally vary, but patients afflicted by LCA often have severe vision impairment from birth. Patients who are not completely blind are generally farsighted and have low visual acuity permitting them to perceive only general stimuli such as hand motions and bright lights. Surprisingly, people with LCA generally have seemingly normal eyes upon initial examination, hence the term amaurosis, meaning loss of vision without evident change to the eye.  An electroretinogram, an eye test used to determine how the retina responds to light, or a genetic screen are needed for a more definitive diagnosis.  Once a diagnosis has been made, treatment options primarily include low-visual aids or orientation and mobility training so those afflicted with this disease can learn to navigate independently. A new treatment option called Luxturna may become the first gene therapy treatment to target this inherited disease.  The treatment involves injecting specially engineered viruses into the eye.  The viruses carry a correct copy of the mutated gene to the retinal cells.  Although the treatment does not restore 20/20 vision, patients who participated in the clinical trial reported life-altering improvements in visual acuity, perception of color, night vision, and their ability to navigate.

At this point you may be wondering why we do not see widespread benefits of gene therapy in many facets of medicine. First, gene therapy is still in the early stages of development. Although the use of functional DNA to modify genes was first suggested in 1970, nearly 20 years passed before gene therapy could be used on a patient: a four-year-old girl with a genetic disease that caused immunodeficiency and prevented her body from fighting infection. The child showed significant improvement after the treatment and was eventually able to attend school without worrying about developing severe infections. The success of this treatment prompted additional gene therapy clinical trials in the 1990s. Unfortunately, in 1999, a patient died while participating in a clinical trial for gene therapy. This publicized tragedy severely impacted the field, as many gene therapy trials were put on hold while new regulations were put in place. Finally, in 2003 the dark cloud began to lift when the first gene therapy was approved in China to treat head and neck cancer. However, most of the research that has made it to clinical testing is ongoing and has not yet been granted FDA approval.

Although there seem to be astounding benefits to gene therapy, like many medical treatments it carries risks. When researchers inject viruses that carry a functional copy of a mutated gene, there is the risk of a severe immune response, an infection caused by the viral vector, or even tumor formation if the newly introduced genes are inserted at the wrong location in the DNA. You can therefore probably understand why researchers spend countless hours creating an effective treatment that maximizes benefits and minimizes risks. Naturally, the FDA is also extremely careful in granting its approval. Finally, we do not see a widespread impact of gene therapy because it is extremely expensive. For example, the new gene therapy treatment for LCA is estimated to cost nearly one million dollars, and prices for other gene therapy treatments are similarly high.

Although the promise of gene therapy gives us hope for a future in which disease is more easily eradicated, there is still a lot of work to be done.

Cancer Cures from Coral Reefs

As far back as Mesopotamia and Ancient Egypt, scholars have recorded and cataloged natural medicines. Most of these treatments were sourced from plants found on land because it was impossible for humans to effectively explore the oceans. However, as medicine progressed and sought to find treatments to more diseases, it became necessary to broaden the search beyond the shoreline.

Up until the advent of SCUBA equipment, it was often difficult for scientists to explore the ocean. As William Fenical, a chemist who scours the ocean searching for sources of new drugs, noted in a Nature article, "until about 1970, no one even thought of the ocean." Its depths are still largely unexplored and contain many organisms, particularly stationary and slow-moving ones, that produce unique chemicals as a means of self-defense. On the surface, it may appear that they are merely producing toxins. However, the effects of these chemicals can be harnessed for medical purposes, with applications ranging from pain management to Alzheimer's treatment. Because of this, the ocean has a unique potential as a source of medicine and is frequently referred to as 'the 21st century's medicine cabinet.'

A main area of promise is the development of new cancer drugs. Yondelis, approved by the FDA in 2015, was derived from a creature called a tunicate, also known as a sea squirt. In addition, one of the first prominent chemotherapy treatments was derived from sea life. Cytarabine, also known as Cytosar-U, was originally isolated from a sea sponge in 1959. After showing promise in lab experiments, it was developed into a drug that became a common treatment for leukemia and lymphoma. In fact, Cytosar-U is now on the WHO List of Essential Medicines, which provides guidelines for medications needed to provide comprehensive health care. Although Cytosar-U proved to be one of the ocean’s greatest success stories, there has not been a massive onslaught of new drugs with marine origins. Many other promising compounds, like eleutherobin, an anticancer agent sourced from corals, faced slow research because they were difficult to collect and synthesize. By 2012, only three drugs from marine sources had been approved by the FDA, and only one had been approved by the EU. Even with these limited numbers of approved treatments, there are still some promising drugs awaiting or entering clinical trials.

One of these candidates is bryostatin 1, isolated from a marine pest called the brown bryozoan. It has shown potential as an immunotherapy drug for cancer, HIV/AIDS, and Alzheimer's disease, but development has been slowed by limited supply, according to a Stanford News Service interview with principal investigator Paul Wender, who stated, "we started to realize that clinical trials a lot of people were thinking about were not being done because they didn't have enough material." It is wildly inefficient to gather this drug from natural sources: in one case, scientists collected only 18 grams of bryostatin 1 from 14 tons of bryozoans. In addition, bryozoans only produce bryostatin in very specific environments, meaning that some populations are useless for isolating the compound. Research ground to a halt simply because there was no way to produce enough to go around. Fortunately, there has been a breakthrough in the production of bryostatin 1. Wender's lab managed to develop a process to synthesize the compound with a whopping 4.8 percent yield, thousands of times more efficient than extracting bryostatin from its natural source. With this development, the lab hopes to eventually synthesize 20 grams of the compound per year, enough to treat up to 20,000 cancer patients, giving bryostatin 1 a fighting chance to become a useful drug.
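
To get a sense of the scale of that improvement, here is a rough back-of-the-envelope comparison in Python using the figures above. It treats both numbers as simple mass fractions, which is only an approximation, since a synthetic percent yield is measured against starting material rather than raw biomass.

```python
# Rough comparison of bryostatin 1 recovery, using the figures quoted above.
GRAMS_PER_TON = 1_000_000            # assuming metric tons

collected_g = 18                     # grams recovered in the collection effort
biomass_g = 14 * GRAMS_PER_TON       # from 14 tons of bryozoans

natural_fraction = collected_g / biomass_g   # about 0.00013 percent by mass
synthetic_fraction = 0.048                   # the reported 4.8 percent yield

print(f"natural recovery: {natural_fraction:.5%}")
print(f"improvement: roughly {synthetic_fraction / natural_fraction:,.0f}x")
```

The result, on the order of tens of thousands of times better, is consistent with the article's "thousands of times more efficient."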

Interest in ocean-derived drugs has surged just as the environments this research relies on become increasingly threatened by climate change. The shallower coastal regions that are most easily explored by scientists are rapidly declining: up to 30 percent of coral reefs worldwide are estimated to have already degraded. Reefs are critical to marine biodiversity, with about 25 percent of marine species found in or around coral reefs. The breakdown of these fragile and diverse ecosystems means that many species of marine life will go extinct, or nearly so, before scientists even know they exist. For scientists trying to take full advantage of nature's medicine cabinet, this disappearance adds another layer of urgency to their search. As the ecosystem begins to die, so does the chance of finding new treatments. The preservation of marine environments is critical ecologically, but viewed from a medical perspective it becomes an even more pressing issue.

Medicalizing Molly for PTSD

Young adults associate ecstasy with imagery of dark rooms lashed by beams of light, pulsing with repetitive EDM beats and dense with sweat and dancing. However, this generation of users may soon be displaced by a demographic twice its age.

Ecstasy, or MDMA, is not having an identity crisis or rebranding itself; if anything, it is returning to its roots. Before MDMA was classified as a Schedule I drug in 1985, psychiatrists and therapists drew on its disinhibiting effects to supplement their patients’ psychotherapy sessions. Today, it is being researched as a potential treatment for post-traumatic stress disorder (PTSD) in veterans.

Currently, selective serotonin reuptake inhibitors (SSRIs) are the approved pharmaceutical treatment for PTSD. SSRIs block the reabsorption of serotonin, a neurotransmitter associated with mood and happiness, and thereby prolong its effects. MDMA blocks reabsorption in a similar way and additionally stimulates the release of serotonin. This flood of serotonin, along with other neurotransmitters including dopamine and norepinephrine, is responsible for the euphoria that recreational MDMA users seek.

The problem is that SSRIs have not seen sufficient success in treating combat-related PTSD in veterans. PTSD treatment incorporates a therapy component, and for therapy to be helpful, a patient must be willing and able to engage with the therapist; however, many people with the disorder are affected by their traumatic experiences to a degree that they are unable to discuss them, even in a psychiatric setting. Hypervigilance, fear and avoidance of recalling traumatic experiences, and emotional and social isolation are predominant symptoms of PTSD. Researchers hope that MDMA's effects of reducing fear and increasing relaxation, empathy, and trust could offset these anxiety-inducing barriers to therapy. Unlike SSRIs, which are prescribed daily, MDMA is intended to be used as an adjunct to therapy sessions.

Research shows that MDMA quiets the fearful and avoidant behaviors of people with the disorder, which, along with the release of prolactin and oxytocin, may stimulate trust and bonding between patient and therapist. In a small-scale trial by the Multidisciplinary Association for Psychedelic Studies (MAPS), 63 percent of participants with PTSD reported reduced anxiety and improved sleep; 68 percent reported fewer nightmares, flashbacks, or intrusive memories and an increased ability to feel emotions; 79 percent reported less excessive vigilance and less avoidance of people and places; and 89 percent noticed better general well-being and increased self-awareness. Currently, the FDA has approved phase III trials of MDMA-assisted psychotherapy for PTSD. If those trials succeed, the treatment will proceed to phase IV, the post-marketing surveillance stage, in which the drug becomes available on the market while still being monitored for rare or long-term side effects that did not appear in the earlier phases. These studies are underway on a global scale while more are in the planning stages.

MDMA is not without its problems. Because the drug releases an immense amount of serotonin, it depletes the brain's serotonin reserves, so subsequent doses cannot reach the intensity of euphoria experienced before. In fact, the desired effect becomes more unattainable as negative side effects increase. One of these post-high effects is a period of anhedonia, an inability to experience pleasure. Then again, we have all heard prescription drug commercials that rattle off a slew of side effects, not all of them minor, and the recreational dose of MDMA is larger than the dose being investigated for psychiatric benefits. Given the millions of people living with PTSD, it is about time for a more reliably successful treatment.

Healers or Dealers?

Modern medicine has transformed the human experience, benefitting society in countless ways. For someone living in the 21st-century United States, the magnitude of that impact is hard to even imagine.

But the healthcare system is not perfect. The pharmaceutical industry is a key player in modern medicine and, as a for-profit industry, it does not always act in the direct interests of consumers. Total spending on pharmaceutical promotion grew from $11.4 billion in 1996 to $29.9 billion in 2005, and worldwide sales for 2018 were predicted to reach $1.3 trillion.

But that might not be so impressive. After all, it is the world. But when you realize that 44.5% of those sales are made in North America... maybe it’s a little impressive.
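
As a quick back-of-the-envelope check on that scale (a sketch using only the two figures quoted above), North America's share alone works out to well over half a trillion dollars:

```python
# Scale check using the figures quoted above.
worldwide_sales = 1.3e12         # predicted 2018 worldwide sales, in dollars
north_america_share = 0.445      # 44.5 percent of sales made in North America

north_america_sales = worldwide_sales * north_america_share
print(f"${north_america_sales / 1e9:,.0f} billion")   # roughly $580 billion
```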

High blood pressure, acid reflux, type II diabetes: these are symptoms of a sick society, and we cannot necessarily blame companies for the rise in consumption of their respective drugs. In addition, pharmaceutical companies invest billions in the development and testing of new drugs. When things work smoothly, that investment generates profit; when they fail, it tends to generate lawsuits.

But sometimes profit and efficacy do not fully align. There are some areas of business that are not suited to health care, and, upon closer examination, unsettling questions arise. Why do treatments need to be advertised? What happens when the best treatment for the patient only makes half the profit of a treatment that works only half as well? What if a treatment generates profit but can indirectly harm the patient?

Only in the United States and New Zealand can we enjoy being bombarded by heavily filtered actor portrayals promising that a pill can "fix it." That is because these are the only two developed countries to have legalized direct-to-consumer advertising (DTCA), the practice of pharmaceutical corporations advertising their drugs to the general public.

Defenders of DTCA insist it can inform patients. I would question whether a 21st-century citizen with serious symptoms and zero insight would be brought to epiphany via commercial. Google is available for anyone with concerns, and the power of suggestion is considerable. And while yes, WebMD might just tell everyone they have stage 4 cancer, the television is telling them Awesoma® will make the world pretty and they will get to ride horses into the sunset while eating organic free-range berries (common side effects include thoughts of suicide and vomiting blood).

This is not an ideal system. DTCA affects many categories of pharmaceutical drugs. Among the most clearly impacted is psychiatry, where diagnoses for less severe symptoms are quite nuanced and often left to general practitioners with little training in psychiatry. The rate at which general practitioners prescribe psychiatric medications has increased by 150% in the last 10 years, possibly in part due to DTCA. A 2005 study on the effects of DTCA found that patients who asked for a medication (for instance, a pill advertised in a specific commercial) were more likely to be prescribed a drug, regardless of whether they met the criteria for a clinical diagnosis. According to Forbes, word of mouth has increased the off-label use of atypical antipsychotics such as Seroquel and Zyprexa as sleeping aids and mood stabilizers. Both of these drugs can cause tardive dyskinesia (uncontrollable and irreversible muscle movements), while Zyprexa can cause massive weight gain even in small doses. While drug companies cannot legally advertise off-label uses, they have done so before and have also been accused of misrepresenting their drugs in sales. In addition, psychiatric drugs are preferred to therapy by insurance companies for financial reasons. This can lead to cases where doctors prescribe SSRIs or benzodiazepines to manage the symptoms of panic disorder, which responds just as well, and with longer-lasting effects, to cognitive behavioral therapy. While there are factors at play besides pharmaceutical companies, these companies play an important role through the means they use to push their treatments and through their ability to influence the research that insurers consult when deciding which treatments to cover. Independent studies have found pharmaceutical research to be, at times, biased toward positive results for the sponsor's product.

And as for profit coinciding with indirect harm to patients: 80% of heroin addicts in the US report starting with pills. Pharmaceutical companies are not only morally culpable on this account; they have been found guilty in courts of law. Purdue Pharma marketed OxyContin as "non-addictive," giving opioids a path into common usage. They have also advertised heavily to doctors and lobbied for the use of opioids. The "war on drugs" in this country never touched the legal prescriptions, which are alarmingly easy to come by. When my little sister got her wisdom teeth out, she was handed a full bottle of hydrocodone (Vicodin, an opioid painkiller) with zero warnings. She "thought it was something like Motrin." It came in the same bag as the Motrin. How many of us would figure that it is safe because the doctor prescribed it? We trust the white coat and orange bottle. And while many doctors have the best intentions, they are not omniscient or infallible. As the Russians say, trust but verify.

I doubt my sister’s dentist was plotting her downfall into the mire of heroin addiction. But what if it was an ongoing prescription for migraines and the delivery was just as careless? It suddenly becomes possible.

Prescription medication can be very confusing. The fact that it’s “medicine” makes it easy to trust the comforting glow of pharmaceutical advertisements. The average person doesn’t know the difference between amoxicillin and amoxapine, or levothyroxine and thiothixene, or hydroxyzine and doxycycline.  This letter scramble makes it easy to accept whatever the television or doctor might say. We are dependent on experts.

Fixing this problem would require massive infrastructure change, either in the industry itself or in the way it is regulated. Considering the divided political climate, this is not likely to happen any time soon. All we can do as patients is make sure we are aware of the full risks of what we are prescribed. Because even if negative results are not our fault, they will be our problem.

The Murky Mess of Medical Malpractice

In 2003, 17-year-old Jesica was admitted to Duke University Hospital for a heart condition that required a heart and double-lung transplant. Jesica was under the care of an experienced surgeon who had performed over 100 heart transplants. The surgery was going smoothly, and the surgeon was preparing to complete the operation. However, he soon received a call from the immunology lab that something terrible had happened: the donated organs were incompatible with Jesica's blood type. Frantic, the surgeon tried to correct this mistake by performing a second surgery, but it was too late. A simple mistake, neglecting to check blood type, cost Jesica her life.

The American Board of Professional Liability Attorneys formally defines medical malpractice as negligence by a health care provider that results in injury, damage, or loss. According to a study of claims from 1991 to 2002 available through the National Center for Biotechnology Information, 7.6% of all physicians faced a medical malpractice claim annually, while only 1.7% ended up paying compensation to the patients filing the claims. This data can be interpreted in one of two ways: either patients are being denied just compensation for negligence by health care professionals, or patients are filing unfounded claims against physicians, inflating the overall number of malpractice claims. Although there is no way to verify either interpretation, both scenarios have significant and often lifelong effects on patients and health care professionals.

For the past two years, lobbyists have been attempting to persuade members of Congress to overhaul the current medical malpractice laws. The proposed reforms aimed to place strict limits on the amount of patient compensation that can be awarded in malpractice suits, but they were rejected when the healthcare bill failed to pass. The difficulty in creating legislation around medical malpractice lies in the immense effect these laws have not only on the victims of medical malpractice, but also on health care professionals and the way they practice medicine. On one end of the spectrum, patient rights advocates believe limitations could leave victims unjustly compensated for injury and loss. On the other, some advocates hope to cap the compensation patients can receive in order to ensure that doctors are able to practice medicine freely and thereby drive the cost of healthcare down.

Advocates for no limits on financial compensation believe that most patients filing claims against health care professionals have a justified grievance and are not being fairly compensated. Patients who have suffered a crippling injury, and families that have lost a loved one as a result of negligence by health care professionals, are entitled to fair compensation. Money can sometimes alleviate financial burdens that are a direct result of a clinician's negligence, but more often than not, the mistakes that physicians make cannot truly be atoned for by any amount of money. What is particularly interesting is that across all specialties, patients were awarded, on average, $274,887 in malpractice indemnity payments. When the consequence is the loss of a family member, however, this amount may seem minuscule. Considering how few malpractice claims actually result in indemnity payments, it seems likely that cases of medical malpractice are evaluated thoroughly before compensation is provided. This means that if limits on compensation were imposed, patient rights would be compromised, since cases that truly deserve financial support would not receive it.

On the other hand, looking at the effects that medical malpractice insurance has on health care professionals reveals a different story. One major consequence of high malpractice insurance costs is the practice of defensive medicine, in which health care professionals order more tests and procedures than necessary in order to protect themselves from the risk of a malpractice suit. Although this might prevent some errors, it also drives up the overall cost of health care; in fact, defensive medicine is one of the major contributing factors to increased co-pays and premiums. This means that fewer people will have access to health care, since medical expenses are increasingly moving out of reach for the average American family.

Another, more controversial effect of defensive medicine driven by the rising cost of medical malpractice insurance is the potential impediment to progress. Since doctors are constantly worried about being involved in malpractice lawsuits, they are less likely to take risks in their work and therefore less likely to try new therapies and drugs that might lead to impactful, life-changing patient outcomes. Of course, lowering the stakes of malpractice in order to encourage discovery also increases the risk of medical negligence by health care professionals, which brings us back to the fundamental tension between patient rights and health care professionals' freedom to practice.

There truly is no "correct" solution to the issues surrounding medical malpractice insurance and limits on financial awards; moving to either extreme means losses on the other side of the debate. If the amount of compensation were limited, there is a significant risk that patients would not receive just compensation for the injuries and losses they have suffered due to the negligence of health care professionals. On the other hand, limiting compensation awards could mean that health care professionals would not have to resort to defensive medicine, which not only drives the cost of health care up but also has the potential to impede the progress of health care practices and techniques. Perhaps the best way to address this convoluted debate lies in awareness of the scope of the issue. Both doctors and patients need to bring their stories forward to help policymakers make the best decisions possible for the benefit of all parties.

Collateral Damage: The Elbow Injury Epidemic in Baseball

Now that the 2017 Major League Baseball regular season has drawn to a close, fans of the sport will have to find some way to pass the time until the next season starts in April. Many will spend the winter months looking over stats. For these self-described “nerds,” baseball offers a goldmine of data. A full baseball season consists of 2,430 games with thousands of at-bats serving as data points that can provide clear insight into how players performed that year. Home runs, stolen bases, strikeouts, OPS, WAR and dozens of other metrics are not just used by fans at a bar arguing over who should win the Most Valuable Player award, but also by the front offices of teams deciding who is deserving of a big-money contract before the next season.

For these team executives, a much simpler statistic can indicate how a player’s season went: ulnar collateral ligament tears. If a player’s number under this column doesn’t say “0” at the end of the season, then they likely had a rough year and may be in for a rough season the following year as well. This is because elbow UCL tears are devastating. By the nature of baseball, it is not an injury you can simply play through. A torn UCL absolutely requires surgery and rehab that can take around a full year before a player can play on a baseball diamond again.

The UCL connects the humerus to the ulna and is crucial for stabilizing the elbow during both flexion and extension. For the average joe, incurring an injury to the UCL is exceedingly unlikely. If you play baseball, however, whether you are a little-leaguer or an MLB superstar, the threat of a tear is very real. The throwing motion almost universally used to throw a baseball exerts extreme stress on the elbow; it has been reported to subject the UCL to 60 newton-meters of torque, more than the UCLs tested in cadavers can withstand. As baseball has become more competitive over the last century, players have been throwing harder and harder, and the forces endured by their UCLs have become concomitantly greater. It is for this reason that baseball players, and pitchers in particular, often suffer UCL tears, an issue that is practically endemic to the sport.

In the past, these injuries were less common simply because the average MLB pitcher did not throw as hard as he does today. This century alone, the speed of the average MLB fastball has increased from 88 to 91 mph. For those who did have the misfortune of sustaining the injury, their careers were jeopardized. When Tommy John of the Los Angeles Dodgers tore his UCL in 1974, he resorted to an experimental reconstructive surgery performed by team doctor Frank Jobe. The procedure was an overwhelming success and is still widely performed on baseball players to this day, bearing the name "Tommy John Surgery" (TJS). Recipients of TJS who commit to their rehabilitation generally have a very high likelihood of playing baseball again at the same level as before the rupture. Still, dozens of MLB players and hundreds of amateur athletes undergo the year-long recovery process every year. Professionals who are injured fortunately have access to skilled physicians and the financial resources needed for the long recovery, but sadly the same is not necessarily true for afflicted children. While the problem may be less widespread and insidious than something like concussions in football, it is still a sad reality.

This sort of problem is nothing new to the game of baseball. As the sport exerts so much strain on the arm, injuries in the past largely centered around the shoulder. During the second half of the 20th century, rotator cuff injuries would consistently end baseball careers. Not much was understood about the nature of the injuries or how to treat them, so players who blew out their shoulders and required reconstructive surgeries were often out of luck. Strangely, sports doctors today don’t hear much about those sorts of injuries in baseball anymore. Whereas those injuries had been about as prevalent as UCL injuries are today, the problem has been significantly ameliorated. Clearly something must have changed in baseball to account for this change in affliction. Have pitchers simply shifted the demands of throwing from their shoulders to their elbows? Common sense would tell you “no” and, if anything, the freak athletes who throw over 100 mph today must surely place more stress on their shoulders than their predecessors. So what changed?

What happened is the baseball community reacted to the crisis and worked towards a culture change. Never before has such a large emphasis been placed on shoulder muscle strengthening and longevity training. Stretching programs have become the norm for preparing all parts of the body for performance of any kind, but in baseball a great amount of focus is particularly applied to shoulders. Gone are the days of the laissez-faire approach to pitching that was common in the past. During those times, if a pitcher felt ready to pitch and was selected by their managers to do so, then they would do so and nobody would bat an eye. Fast-forward to today and you see pitching rotations by which starting pitchers in the MLB only play every fifth game. Furthermore, managers have more delicate control over rookies and can decide at the beginning of the season that they will only allow those players to pitch a specific number of innings that year. Reform can be seen in youth leagues as well; there are now regulations on throwing curveballs before a certain age (as this pitch requires a particularly violent follow-through), pitch count limits during games and tournaments, closer pitcher’s mounds and more. Baseball coaches and players saw the problem that existed and successfully worked towards a future in which shoulder injuries would be less prevalent.

Certainly, the same can and must be done with elbow injuries. Just because a reliable treatment exists for UCL tears doesn't mean we should accept so many athletes, especially children, needing to endure a year of intense recovery. The dilemma, of course, is that you can never tell someone to throw slower and expect them to oblige. The idea that "if I don't throw hard, someone else will" is very real in such a competitive sport played by so many people. Perhaps there is more we can do to educate coaches and players about ways to care for the elbow so that catastrophic injuries can be kept to a minimum. Major organizations including MLB and Little League Baseball have pushed the "Pitch Smart" initiative. This program has been adopted by youth leagues and involves educating coaches on how to train their players to throw with less strain on the elbow. Also included are pitch counts based on age, as well as mandatory rest days after a game in which a certain number of pitches were thrown. The hope is that this will extend the culture change created by addressing shoulder injuries. By having kids throw less and with a more streamlined motion, childhood elbow injuries will ideally be curtailed, and those who make it to the major leagues will have the tools and endurance to pitch a full, healthy career.

A 20/20 Perspective on Blind Culture

Picture a blind individual. What physical characteristics or lifestyle would you associate with this person? An image familiar to most is the figure sporting the signature white cane and a pair of dark spectacles and, of course, accompanied by man's best friend, a guide dog. Or, when asked to consider the lives of the visually impaired, one might envision only a tragic existence in the dark, pitying the blind as forever dependent on the assistance of others to manage their daily lives, their interactions limited to braille and verbal cues.

However, these are outdated images and fallacies that reflect only the negative misconceptions society has bred about the blind community. Because the blind are a widely misunderstood and under-recognized group, it is all the more important to dispel these stereotypes and prevent such unwarranted judgment.

Contrary to popular belief, not everyone in the blind community experiences complete vision loss or sees "total darkness" as we might expect. In fact, only 10-15% of the blind population sees total darkness, while most retain residual vision through light, color, or form perception. Braille, a system of raised dots read by the fingers, is commonly assumed to be the distinct written language of the blind, and most of the visually impaired community is assumed to be fluent in it. Legal blindness is defined as visual acuity of 20/200 or worse, even while wearing corrective lenses. To put it in perspective, the smallest letters a legally blind person can read from 20 feet away, someone with "normal" vision could read from 200 feet away. However, fewer than 10% of legally blind people in the United States can read braille, and only 10% of blind children are taught braille. This is due to the increasingly popular notion that the system is too outdated and troublesome to teach. Instead, audio texts, technical aids for print enhancement, and voice recognition software have become more popular. Unfortunately, the declining use of braille leaves more of the blind population illiterate, which is linked to a lower probability of employment and lower levels of independence later in life. The surge in technological "shortcuts" to communication has contributed significantly to plummeting literacy rates among the blind. Braille continues to be acknowledged as the community's unique means of written communication, yet its declining usage in recent decades has prompted some to question the true significance of its role in blind culture.

A common conception of the blind renders the image of a character donning dark glasses, grasping a white cane, and being led by a canine companion. Yet less than 2% of the visually impaired use a cane for mobility, and only about 10,000 guide dog teams currently work in the U.S., suggesting that less than 2% of the blind population even uses a guide dog. Many also believe that one gains "super hearing" after losing vision. However, hearing does not improve biologically at all; instead, one learns to be more attentive to surrounding sounds, which improves listening rather than hearing. Finally, the blind do not spend their lives pining for a cure to their impairment, and they may lead lives as full as anyone else's. Blind blogger Rachel Finlay proudly states, "My experience with disability has shaped who I am as a person. It's made me resilient and resourceful. It's made me curious about the world, determined to succeed and develop a willingness to try to understand others because I know what it's like to be misunderstood and misjudged… My life is great and blindness and learning to accept it is part of that". And Rachel is not the only one who thinks this way: a 2015 study conducted by Blind Veterans UK on attitudes to blindness revealed that 60% of those polled would not see blindness as a barrier to leading a happy life.

We have developed such a warped image of the blind because society has maintained only a vague awareness of blind culture, and the visually impaired have often been viewed as tragic cases of defect or as objects of pity. In fact, the very existence of a blind culture has been disputed over the past two centuries.

Blind culture is defined by the natural link between members of the community through their shared struggles, and by the bonds they create over their experiences and understanding of the world. A primary issue many share is economic hardship and the barriers to employment they face in their job searches. According to Cornell University's Employment and Disability Institute, only 28% of working-age Americans with visual disabilities had full-time jobs in 2015. In addition, the National Industries for the Blind, a non-profit employment agency, surveyed hundreds of hiring managers across the U.S. and found that as many as 54% of them believed there were few jobs at their company that blind employees could perform. Overall, by 2016 the unemployment rate for the blind in the U.S. was an appallingly high 62.3%. These statistics reflect the stigma against the blind in employment: society has evidently drawn distinctions that mark them as incapable and unwanted in the workforce, severely limiting their job prospects. Among those defying this stigma, however, is Gary O'Donoghue, the Washington correspondent for the British Broadcasting Corporation (BBC), who has lived with total blindness since childhood. He attended specialized boarding schools, eventually heading to Oxford University, where he studied philosophy and languages. O'Donoghue then landed a job as a reporter at the BBC, where he made a name for himself as a journalist despite the obstacles he faced compared to his coworkers. Today, O'Donoghue is an outspoken activist who addresses the under-representation of disabled workers in the media and encourages them to pursue such "unthinkable" careers. He advises, "It's about finding the strategies to overcome barriers when they present themselves and live an independent life that takes on the world. It's about doing those challenging, high-profile jobs that people who can see traditionally have thought might not be for us".

How are the visually impaired educated so that they can eventually navigate the world freely? Contrary to popular assumptions about their dependence, most go about their lives independently by adulthood. Many attend either a specialized school for the blind or integrate into public schools with the assistance of specialized instructors in order to reach that level of self-sufficiency.

The Western Pennsylvania School for Blind Children (WPSBC), located in the heart of Oakland, Pittsburgh, is an example of a chartered school that provides specialized instruction and therapy for blind and deaf children up to age twenty-one, with the end goal of smoothly integrating them into society. When I visited the facility, I noted that the majority of the students currently enrolled required the use of wheelchairs. Most also had additional disabilities, including cerebral palsy, which affects muscle control and posture, and intellectual disabilities that left them both wheelchair-bound and with limited ability to communicate. To help students reach their highest potential, WPSBC provides a team of classroom staff, including physical and occupational therapists, certified instructors of the visually impaired, mobility instructors, speech and language pathologists, nurses, and behavior support specialists, who work together to meet the students’ interests and needs. This holistic method of education has been observed to be more effective than a clinical approach that addresses each of a child’s needs in isolation. The combination of trained teachers and special equipment offered by specialized schools creates a comfortable and optimal setting for learning.

The blind community is still considered a minority group saddled with many unfavorable yet ungrounded beliefs, but if we begin to understand its members, we can learn to communicate with them and treat them as equals. Because of these preconceived notions, they begin life at a disadvantage in employment and advancement opportunities, and the stigma only compounds it. Even when they are equal or superior in ability and education to their peers, they are still viewed as less able. As eloquently stated by arguably one of the most famous members of the community, Helen Keller, “The best and most beautiful things in the world cannot be seen or even touched; they must be felt with the heart”. Their visual impairment does not make them any less able to experience, fulfill, and appreciate the most important things in life. Let us not turn a blind eye to this ignored but complex culture.

Mutually Inclusive: Finding Faith in Modern Science

“The most beautiful thing we can experience is the Mysterious.” Einstein said this of his spirituality in a 1954 essay later republished by NPR. While most consider science and religion to be mutually exclusive entities perpetually at odds, 51% of scientists believe in a god or higher power. In fact, some scientists find science to be the basis of their conversion from atheism to religion. One such example is Francis Collins, the director of the National Institutes of Health, who converted to Christianity and argues that the idea of a Christian God is compatible with Darwin’s theory of evolution.

Historically, early scientists were often members of the religious community first. During the Golden Age of Islam, a scientific revolution unfolded and many Islamic cities became centers of knowledge and learning; Islamic scholars such as Abu Ja’far contributed to mathematics, medicine, and astronomy. Later, during the Middle Ages in Europe, Christian clergy were pioneers of science. The Christian churches were in charge of nearly all education in Europe, and it was primarily clergy who had the time and resources to focus on science. Newton, known as a father of modern science, was also intensely religious.

It was not until the 18th century that the separation of science and religion became truly apparent. During the Enlightenment, philosophers such as Kant championed the divide between what they considered spiritualism and rationalism. Today, religious organizations have a reputation for being anti-science and anti-reason. Yet some members of religious communities have embraced scientific research on religious grounds: to them, the research reinforces their belief that God has endowed humans with the unique ability to believe in God.

What is this unique ability that believers claim differentiates humans from apes? According to Dr. Dean Hamer of the NIH, the capacity to believe in God is genetic, conferred by a gene affectionately referred to as the “God gene.” This gene, VMAT2, regulates monoamines, a class of mood-controlling brain chemicals. Hamer conducted an experiment showing that subjects who carried a particular variant of this gene were more susceptible to spiritual experiences. Other researchers agree that religious tendencies are shaped by genes that regulate the dopamine and serotonin neurotransmitter systems.

A perhaps unexpected group predisposed to religiosity is people with epilepsy. One study revealed that those with epilepsy reacted most strongly to words associated with religion when compared with neutral and erotic words, whereas those without epilepsy reacted most strongly to the erotic words. This is perhaps because epilepsy often results from activity in the temporal lobe, the same area of the brain where religious experience is believed to arise.

Another, more artificial trigger of religious experience is something called the God Helmet. Stanley Koren and Dr. Michael Persinger of Laurentian University created this apparatus, which, when worn, can induce a spiritual experience; about 80% of wearers report the sensation. The God Helmet uses electrodes to alter the electromagnetic field over the temporal lobes, leading the left temporal lobe to interpret the induced activity in the right side of the brain as a felt presence. On this basis, Persinger argues that religion is nothing more than the effect of such electromagnetic activity.

Regardless of whether religion is divinely or biologically inspired (or perhaps both), the effects of religion and spirituality on the brain are undeniable to theists and atheists alike. There has been no shortage of studies claiming that religion acts on the brain much like many recreational drugs, and in some ways this is true: religion does trigger the brain’s reward system and can have a long-lasting impact on brain structure and function.

Andrew Newberg of the University of Pennsylvania is at the forefront of a field of research focusing on religion and the brain, commonly referred to as neurotheology, which has gained much attention over the past few years. Newberg conducted a study in which he used Single Photon Emission Computed Tomography (SPECT) to image the brains of Tibetan monks and Christian nuns while they engaged in meditation and prayer, respectively. He found that the frontal lobe, responsible for focus, and the limbic system, which regulates emotion, showed increased activity, whereas the parietal lobe, which helps orient us in space and time, showed decreased activity. This loss of orientation is fascinating: it suggests that the participants were no longer able to differentiate themselves from the universe, that they were truly experiencing a state of transcendence.

Newberg studied one final group, whose results differed wildly from the monks and nuns. He observed Pentecostal Christians while they were ‘speaking in tongues.’ Unlike the monks and nuns, who experienced an increase in frontal lobe activity during periods of religious engagement, the Pentecostal Christians experienced a decrease. Furthermore, the language center of the brain showed almost no activity, which some claim proves that God speaks through people; others claim it proves that the Pentecostal Christians were simply working themselves into a state, and not a divine one.

The varieties of religion do show neurological differences, although not in the way one might assume. Newberg found that differences across religious sects were negligible. However, people who believe a higher power is present in their everyday lives activate neural pathways associated with fear when thinking about their religious beliefs, and those who are guided by religious doctrine associate their beliefs with language pathways. Atheists, meanwhile, connect their beliefs with pathways associated with images.

It is interesting to note that atheists are similar to theists in some neurological ways. Like the nuns and Tibetan monks, atheists show a marked increase in activity in the prefrontal cortex, which controls emotion and attention; in atheists, this may present itself as an analytical mindset. Yet what causes atheism? It is largely uncertain, but a growing body of research is addressing the question. Dr. Patrick McNamara of Boston University performed MRI scans on people with Parkinson’s disease, which notably depletes dopamine, a neurotransmitter linked to religious belief. As individuals with Parkinson’s lose dopamine, they often lose their religiosity.

Apart from differences in brain activity, various religious practices also cause structural changes. In addition to the effects observed by Newberg, other, more negative effects have been found. Born-again Christians in particular experience notable declines in brain function over the course of their lifetimes. This type of Christianity is often accompanied by hippocampal atrophy, which is associated with depression, dementia, and Alzheimer’s disease. Many in the scientific community attribute this to the fact that born-again Christians must overcome their old ways of thinking after their ‘rebirth’; the act of ‘rebirth’ causes cognitive dissonance, which puts excess stress on the brain.

Evolutionarily speaking, religion has advantages. Religion is able to bind together a community, increasing chances of survival. Those in religious communities are more likely to make choices for the good of the group rather than make selfish decisions. Among early American communes, those that were secular were four times more likely to fail than religious communes.

While the relationship between science and religion is fraught with conflict, there is an undeniable link between the two. This link has been explored since the dawn of science and has been the subject of heated debate, from early Christian and Muslim scholars to modern-day theologians. As Carl Sagan once said, however, “science is not only compatible with spirituality; it is a profound source of spirituality.”

The Real American Psycho

In American Psycho, Bret Easton Ellis introduced the American public to the trope of the Manhattan businessman turned serial killer in the character of Patrick Bateman. For ages, the word psychopath has evoked imagery of rogue killers with axes and twisted senses of humor running through their own distorted reality. Although businessmen-turned-killers are not the norm in reality, psychopathy in business is surprisingly common.

Antisocial personality disorder (ASPD), commonly known as psychopathy or sociopathy, describes people who habitually disregard and violate the rights of others with a complete lack of remorse. Although the majority of those with ASPD live with their symptoms unnoticed and undiagnosed, common symptoms include a lack of empathy, superficial emotions, charm, secrecy, manipulation, compulsive lying, an authoritarian nature, paranoia, and a narcissistic self-image. Psychologists also describe those with ASPD as having a severely malformed conscience or no conscience at all. Despite popular culture’s perception, these callous traits do not make everyone with ASPD a killer.

In a dog-eat-dog business world, the cultural understanding is that successful businesspeople must separate emotional ties from business ties in order to be cold, calculating, and ruthlessly determined. Although these behaviors are detrimental to maintaining long-term relationships in their personal lives, the disorder’s antisocial traits can be applied effectively in the business world. While only one percent of the American population has been diagnosed with ASPD, 21 percent of American executives show psychopathic traits, roughly the same proportion found among prison inmates. People with ASPD thrive in these settings by operating as master con artists and violating moral values to get ahead. To many with ASPD, business is a game whose objective is to gain power over as many people as possible.

How do so many people with ASPD secure these positions of power? Companies tend to overlook personality traits and seek out relevant skills and past accomplishments. During interviews, those with ASPD exhibit their mastery of maintaining positive impressions and managing their personal image, setting themselves apart from other applicants. As exceptional albeit superficial speakers, they are eloquent oral communicators with above-average IQs. Add their unusual ability to read people quickly, and those with ASPD can size up their competition and convincingly project a more powerful and qualified version of themselves to the public.

Nevertheless, not everyone with ASPD succeeds in high-ranking positions. According to a recent sub-categorization of people with ASPD, successful psychopaths effectively con the public with a superficial image without letting their egocentric nature obstruct their success. Unsuccessful psychopaths, commonly found within the American justice system, lack those oral communication skills and that charming behavior, and ultimately resort instead to violent intimidation and social aggression.

Thankfully, Patrick Bateman is purely fictional, and psychopaths are not the murderers literature makes them out to be. Instead, those with ASPD succeed in business thanks to their mastery of maintaining a personal image and their ability to manipulate others for personal gain. The public’s perception of ASPD is inaccurate and, frankly, exactly the facade psychopaths crave.

Super Kids: the Future of Genetic Engineering?

It’s 2085, and David is two minutes old. Eight months earlier, his mother and father feared for David’s life when their doctor informed them that their baby showed three copies of chromosome 3, essentially a death sentence. Luckily, the doctor explained, advances in genetic engineering allowed for the safe removal of the extra chromosome, and now newborn David sits happily in his mother’s arms.

Long before the development of modern genetics, genetic manipulation had its origins around 10,000 B.C., when ancient farmers selectively bred crops with specific qualities to increase yield. These farmers crossed different combinations of wild grasses to breed the precursors of modern staples like wheat and rice (http://www.sciencegroup.org.uk/ifgene/history.htm). Genetic engineering has come a long way since those early examples of exploiting genetics.

David is five months old. His mom smiles at his sleeping face. She remembers her doctor coming in after David’s first treatment, his face bearing bad news. He explained that David’s prenatal tests also showed trisomy 21 -- Down’s Syndrome. It wasn’t too late to alter this, the doctor explained. She and her husband couldn’t sleep for the next week. She still gets scolded by her mother: “How could you change something so important about him?!” She puts another blanket on sleeping David, wondering if she made the right decision.

Modern genetic engineering primarily uses a gene-editing technology called CRISPR-Cas9. Pioneered in 2012, CRISPR gene editing involves a guide-RNA, a molecule researchers design to match a specific portion of DNA. The guide-RNA partners with a protein called Cas9, which binds to the DNA and unwinds it; the guide-RNA then pairs with the newly opened strand, and Cas9 snips the DNA at that site. The cut DNA now has to be repaired, and the repair can disable the gene, fix the gene, or (of most interest to researchers) insert a new gene into the DNA (https://www.sciencenewsforstudents.org/article/explainer-how-crispr-works). Essentially, researchers can manipulate which pieces of cellular machinery are active to influence how the cut gets repaired, and can therefore selectively insert new genes into someone’s genome.
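
To make the targeting step more concrete, here is a minimal sketch, written in Python, of how one might scan a DNA sequence for candidate Cas9 target sites. It assumes the commonly cited SpCas9 rules (a 20-nucleotide protospacer immediately followed by an “NGG” PAM, with the cut landing roughly three base pairs upstream of the PAM) and is purely illustrative, not a lab-grade guide-design tool.

# Illustrative sketch (not from the article): scan a DNA string for candidate
# SpCas9 target sites. Assumes a 20-nt protospacer immediately followed by an
# "NGG" PAM, with the cut falling about 3 bp upstream of the PAM.

GUIDE_LENGTH = 20  # length of the stretch the guide-RNA would match

def find_cas9_sites(sequence):
    """Return candidate target sites on the forward strand of `sequence`."""
    sequence = sequence.upper()
    sites = []
    # Slide along the sequence looking for an NGG PAM with at least
    # 20 nucleotides in front of it to serve as the protospacer.
    for i in range(GUIDE_LENGTH, len(sequence) - 2):
        pam = sequence[i:i + 3]
        if pam[1:] == "GG":  # any base followed by two Gs counts as NGG
            sites.append({
                "protospacer": sequence[i - GUIDE_LENGTH:i],  # guide-RNA target
                "pam": pam,
                "cut_position": i - 3,  # approximate cut location
            })
    return sites

if __name__ == "__main__":
    demo = "ATGCGTACCGTTAGGCTTACGATCGGATCCGGTTAAGCGTACGGA"
    for site in find_cas9_sites(demo):
        print(site)

A real guide-design pipeline would also scan the reverse strand and score each candidate for off-target matches elsewhere in the genome, which is where most of the practical difficulty lies.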

According to Beth Mole, a writer at Nature.com, researchers are currently working on ways to silence the effects of Down’s Syndrome with genetic modifications (http://www.nature.com/news/researchers-turn-off-down-s-syndrome-genes-1.13406). The strategy borrows the gene XIST, the X-inactivation gene, which normally functions in females to silence one of their two X chromosomes. XIST produces molecules that act as a blanket of sorts, coating the entire surface of the chromosome and preventing expression of the genes on it. When the XIST gene is added via CRISPR-Cas9 to one of the three copies of chromosome 21 in someone with Down’s Syndrome, the researchers believe they can silence that chromosome completely, and that a child with such a modification would develop normally. In principle, this technology could be applied to any trisomy, not just Down’s Syndrome.

David is seven years old. He likes to race his classmates at school, and his mom recently signed him up for a club track league. At his first race, David wins the 100-meter dash in 8.87 seconds; in 2009, Usain Bolt set the world record at 9.58, a time that would be laughably slow today.

While we might have some time before we must decide whether to create super-athletes, the genetic possibility exists. In “Clinical Genetics in Nursing Practice,” author Felissa Lashley writes about a mutation in the gene that encodes ɑ-actinin-3 (https://books.google.com/books?id=chieefSNpwEC&pg=PA516&lpg=PA516&dq=are+superathletes+possible+in+the+future+genetics&source=bl&ots=UOfU28Vqyw&sig=tzvyklijrGBZiQlgXLCfBd8icAA&hl=en&sa=X&ved=0ahUKEwivjfD2q5vXAhUEMSYKHcvQCvsQ6AEIKzAA#v=onepage&q=are%20superathletes%20possible%20in%20the%20future%20genetics&f=false). This protein is involved in generating muscle force through the contraction of muscle fibers, and certain variants of the gene have been associated with enhanced athletic performance. Lashley further argues that “using genotyping to select athletes based on possession of identified favorable genetic polymorphisms and discouraging others has serious ethical and societal implications.”

Recent surveys suggest that public opinion might already be swaying in favor of genetic modification and the creation of this “super-athlete” class. In 2009, the New York University School of Medicine released a study revealing that “10 percent of parents surveyed would approve of genetic testing to ensure their child was athletic.” Just five years later, 26 percent of 1,001 surveyed Americans said it would be a good thing if “prospective parents can alter the DNA of their children to produce smarter, healthier or more athletic offspring” (https://theconversation.com/genetically-engineered-athletes-could-be-heading-this-way-soon-42166).

David is 14. He will start college in a few weeks. Most of his classmates will be around his age. He is a gifted student in math. He could out-compute his father, a mathematician himself, by the time he was 10. His high school offered advanced calculus and theoretical math classes, and in college he will study mathematical theories that were nonexistent 50 years ago--before the advent of genetic engineering.

Genetic engineering is becoming a hotly contested area of research as the field grows larger and more powerful. With gene therapies already being developed and used to fight respiratory and immunodeficiency disorders, the possibility of using genetic engineering for aesthetic and athletic purposes looms in the near future. As such, it is important for us as a society to have the difficult conversations now and to consider the possible realities of a world in which super-athletes and genetically engineered geniuses walk among us.