Pick your color? More like pick your poison.

How nail salon jobs expose workers to harmful chemicals

Linh reached for a rubber glove with no intention of wearing it. Instead, she wrapped the floppy beige fingers around the top of a stubborn nail polish bottle for extra grip as she unscrewed it. The bottle squeaked open, releasing a faint but noxious smell. The glove was dropped back in the drawer.

As she began to paint my fingernails, I asked Linh to tell me about herself. She moved from Vietnam to Dorchester, Mass., a year and a half ago with her husband and two children, now in their twenties, and got the job at this South End salon through a family friend. As she doused my nails with liquid from a generic bottle hand-labeled “Acetone,” I asked if she ever wore gloves while she worked. “For pedicures,” she said, but didn’t elaborate. I don’t speak Vietnamese, and she speaks very little English. Our conversation was brief.

Wearing gloves to paint nails might seem unnecessary or even inconvenient for such dexterous work. But exposure to the chemicals found in nail products is linked to numerous health risks, some as serious as miscarriages and thyroid and lung cancer.

As of 2017, the Bureau of Labor Statistics estimated that 104,020 people in the U.S. were employed as nail technicians. Massachusetts has the third largest population of nail technicians per capita after New Jersey and New York. In fact, there might even be more nail salons in Massachusetts than McDonald’s restaurants.

Now, recent reports are shedding light on the dangers of working around the toxic chemicals found in nail products.

In a study published last month in the journal Workplace Health & Safety, researchers from the University of Wisconsin-Madison found that the air in a typical nail salon has chemical levels that exceed federal occupational exposure limits in the U.S., despite the use of ventilation systems. The most prevalent chemicals they found, formaldehyde and acetone, both released by nail polish, are associated with skin and eye irritation, respiratory issues and, in rare cases, leukemia and sinus cancer, according to the study.

Gloves can only protect a worker’s hands from chemical exposure, which is why the Boston Public Health Commission also encourages nail technicians to wear masks, but neither product is required by law.

“Gloves and masks can be expensive, and a lot of workers say they don’t want to wear them because they’re intimidating to the client,” said Tran Huynh, an occupational health professor at Drexel University who published a study in 2018 on the health concerns of Vietnamese nail salon workers in Philadelphia.

Plus, to be effective, workers must use specific types of gloves and masks: thick nitrile or latex gloves and N-95 respirators, the kind of masks handed out in Sacramento during the wildfires this fall. The thin beige surgical gloves in Linh’s drawer, had she worn them, wouldn’t have protected her hands from the chemicals. And Linh wasn’t wearing a mask while she painted my nails. According to BPHC program coordinator Stephanie Seller, the N-95 respirators are supposed to protect workers from breathing in the dust created when filing nails, especially acrylics, the artificial nails that adhere on top of natural ones.

So why, in light of this research and the protective equipment we know works, do nail salons neglect to keep their workers safe?

One issue is cost. Last October, all Boston nail salons were supposed to have installed updated ventilation systems in order to carry odors and dust out of the salons and pump in clean air. But while the policy is moving in the right direction, says Huynh, the salons can’t always afford to make the changes.

“Many family-owned salons operate on low profit margins so strict regulations such as installing local exhaust ventilation might negatively impact small businesses due to potential high cost of implementing and maintaining the system,” Huynh writes in the study.

But costs aside, are these ventilation systems enough to keep workers safe? “There are still huge gaps in the research about how these chemicals affect people, and as a result a lot of safety policies are delayed or ineffective,” Huynh said.  

Huynh’s research and personal history are intertwined. Her mother and aunt worked in a nail salon after her family moved from Vietnam to the U.S. “That’s why I started to do this research,” she says. “My mom used to tell me about the headaches and the pain in her hands, but no one had real answers why.”

Those painful memories have become the motivation behind her work. Through her research, Huynh says she hopes to provide vulnerable immigrant workers, more than half of whom are Vietnamese, with the information they need to protect themselves.

“My goal is to work with policymakers so that people can stay safe without hurting their business,” she said. 

Photo: Unsplash.

How inequities in funding for research and treatment are hurting those with rare cancers

Cancer research and treatment have progressed remarkably in a short time: in 1947, patients could only choose between surgery and radiation, and just seven decades later came the milestone approval of the first cancer-specific gene therapy. But as it turns out, only the cancers that have been more extensively characterized have reaped the benefits of this scientific progress. Because research funding for studying cancers and developing treatments is unevenly distributed across cancers, patients with rare cancers have seen minimal progress compared to patients with more common cancers.

According to an editorial published in 2017 in the British Journal of Cancer, one in four cancer patients can’t make full use of the developments in science because of how funding is allocated to the rare forms of the disease they suffer from. The problem is wildly ironic: ordinarily, a disease being classified as ‘widespread’ is bad news, yet here the lower the prevalence of a cancer, the less research funding it sees. Those suffering from rare cancers are at an enormous disadvantage and have hardly reaped the benefits of the last 100 years of cancer advances.

This disparity in funding and research between rare and common cancers is evident from National Cancer Institute (NCI) funding. A 2012 study by researchers at California State University Long Beach looked into the funding levels for a variety of cancers based on the diseases’ direct impact on patients’ lives. This impact, or “burden” as the researchers dubbed it, was investigated through a number of metrics, including years of life lost (YLL), which takes into account the age at which people die and weighs that against the severity of the disease. They found that when NCI funding relative to YLL was calculated as a ratio, more common cancers such as breast cancer, prostate cancer and leukemia were categorized as “overfunded” (ratios > 1). On the other hand, cancers considered relatively rare, including stomach, oral and uterine cancers, were identified as “underfunded” (ratios < 1).
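
For a concrete sense of how such a funding-to-burden ratio works, here is a minimal sketch. All of the dollar and YLL figures below are invented for illustration only; they are not the study’s actual data.

```python
# Illustrative sketch of a funding-to-burden ratio like the one in the
# 2012 study. The funding and years-of-life-lost (YLL) numbers below
# are made up for demonstration, not taken from the paper.
cancers = {
    # name: (hypothetical NCI funding, hypothetical YLL, same units)
    "breast":   (600, 400),
    "prostate": (300, 150),
    "stomach":  (15, 180),
    "uterine":  (20, 190),
}

for name, (funding, yll) in cancers.items():
    ratio = funding / yll  # funding normalized per unit of burden
    status = "overfunded" if ratio > 1 else "underfunded"
    print(f"{name}: ratio = {ratio:.2f} -> {status}")
```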

While this blatant inequality in resource allocation for cancer patients has been established in the scientific literature, explanations for why it persists vary widely. Manuel Salto-Tellez, a chair and professor of molecular pathology at Queen’s University Belfast, identified what he believes is the main reason for the large funding gap. Referring to the paper he co-authored, ‘Rare cancers: the greatest inequality in cancer research and oncology treatment,’ he noted, “Rare cancers research standards are difficult to meet for the individual researchers because of the lack of critical mass for a single institution to find.” In other words, funding for rare cancers is challenging to secure because patient samples are scarce and supporting clinical trials are few. Despite this, rare cancer research proposals and clinical trial designs are held to the same standards as those for more common cancers, producing the significant funding gap: recruiting a sizable clinical trial remains a requirement even for a disease with few diagnosed patients, something nearly impossible for researchers to achieve.

When asked what could help remedy the situation and get more research off the ground for rare cancers, Salto-Tellez emphasized modifying how these proposals and trials are evaluated. This could be done by “protecting some funding for [these] neglected areas of science,” and purposefully designating funds to the cancers that lack them.

For his part, Salto-Tellez hopes to close the gap between rare and common cancers by personally “encouraging donors to invest in this area,” he says, while simultaneously conducting his own research “according to appropriate standards of quality and opportunity.” With this, he hopes rare cancers can finally begin to profit from the scientific advancements in the field, thereby stepping out of the funding shadow cast by the well-understood and common cancers.

Image: “A comparison of cancer burden and research spending reveals discrepancies in the distribution of research funding,” BMC Public Health, 2012.

Perovskite solar cells reach new heights

New solar cells reach 23 percent efficiency in just six years, compared to silicon’s 26 percent after 50 years

A new type of solar cell may soon surpass traditional silicon panels after new research, published this month in the journal Science, uncovered a key part of the puzzle. 

Since their inception in the 1960s, solar cells made out of silicon have been standard. But scientists struggle with the high cost of producing them, since manufacturing requires expensive equipment and high temperatures. That’s where perovskites come in — using the new material, scientists reached 23.7 percent efficiency in just six years, compared to the 26 percent silicon cells took 50 years to reach.

“We set up a system that would allow us to make solar cells at a very, very rapid pace, much faster compared to silicon solar cells,” said Juan-Pablo Correa-Baena, an author on the paper who did his postdoctoral research on solar cells at MIT. “[It] gave us the opportunity to explore not only the traditional perovskite materials we’ve been exploring, but also opening it up to many new materials.”

Perovskites are a unique class of compounds in which researchers can “mix and match” the elements involved, Correa-Baena said. He first came across perovskites during a postdoc at EPFL in Lausanne, Switzerland, where researchers experimented with adding alkali metals to the perovskite recipe. It was a huge breakthrough, and the paper on the topic has been cited more than 1,000 times.

The problem, Correa-Baena said, was that no one really understood why alkali metals were so successful, or even why perovskites in general were good. That’s what he began experimenting with in Tonio Buonassissi’s lab at MIT.

“We still didn’t really know about the fundamentals of this material,” Correa-Baena said. “What is it, fundamentally, that has changed? What are the physics and the chemistry that has changed when we add these alkali metals?”

There are three major properties of perovskites that make them well suited for solar cells. One has to do with diffusion length, a measure of how far electrons are able to travel within the material. It’s a basic necessity of any solar cell: if the electrons can’t freely travel to the edge of the material to be collected and used for energy, the cell can’t function.

Perovskites also have a direct band gap, which means they absorb light efficiently. Silicon, by contrast, has an indirect band gap, so a silicon film must be about 1 millimeter thick, enormous by solar-cell standards, because more material is needed to absorb the same light. Perovskite films can be made as thin as 1 micron.

The other special thing about perovskites is how rapidly research has progressed. Scientists took 50 years to reach 26 percent efficiency with silicon because the equipment needed for production was so difficult to obtain. Perovskites don’t have that problem.

“There was a very rapid change in how people were researching,” Correa-Baena said. “If you want to change the focus of the lab to this perovskite thing, how much money would you need? You would not need millions of dollars to build very complicated furnaces, but you would need something like a couple hundred thousand.”

Correa-Baena emphasized that perovskites are still not ready for full-scale commercialization, although startups do exist in the United States that are exploring the idea. He says the most important aspect is time: If it only took six years to achieve this level of research on an entirely new material, how much further could it go?

“These solar cells are much easier to make and are generating jobs locally,” Correa-Baena said. “This will make a difference. People will start buying into it. This is a good business model. This is going to be the future.”

Photo: Unsplash.

Whales, a large carbon sink, could help address climate change but challenges persist

Whales have been hunted for centuries across cultures and continents for their fat, meat and bone. And while whaling peaked during the early industrial revolution, the practice is still alive and well in many corners of the world. Populations have never fully recovered. But could restoring whales to pre-industrial numbers help address climate change?

A 2010 PLOS ONE study by scientists at the University of Maine, the University of British Columbia and the Gulf of Maine Research Institute found that, compared to pre-exploitation levels, large baleen whales at their current numbers store 9.1 million fewer tons of carbon. According to the researchers, restoring whale populations to pre-industrial levels could remove roughly 160,000 tons of carbon per year, a figure that compares favorably to unproven geoengineering schemes such as iron fertilization.

That’s a large carbon sink we may do well to coax back to life.

Increased carbon levels in the ocean can acidify the blood of many organisms beyond its natural range. According to Jelle Atema, an oceanographer and professor of biology at Boston University, the consequences could be dangerous for whales.

“Of course, animals are adapting, but [what] we don’t know is the rate at which [they] kind of adapt to the changes in pH,” said Atema. “There’s a lot of activity going on in that area to figure that out.”

Atema has researched the effects of altered acidity levels on crayfish, and reports that his team found it can disturb their perception of food. Amino acids act as signal molecules for food, he says, and increased acidity limits that signaling. This could disturb the food webs that sustain ocean environments.

Atema says that different organisms will be affected at varying rates, but that time is running out for many ocean ecosystems.

“It’s a level that can be expected to change if we don’t do anything about it. And it’s pretty clear that that level is very dangerous to a lot of life.”

There is a disturbing irony in whales being destroyed by the issue they could help solve. It doesn’t have to be this way.

Photo by Paola Ocaranza on Unsplash.

Does it get better? Weight-based victimization and LGBTQ+ body dissatisfaction

For those who experience constant discrimination for their identity, school should act as a place of solace. However, recent findings have revealed that this isn’t necessarily the case.

I think I was a junior in high school when I first became critical of my body. I became obsessed with how I looked in the mirror and how my partners viewed me. It’s a given that most, if not all, of us have felt a similar way. Researchers point out that body dissatisfaction increases among children and young adults as they develop into adulthood. When we question where these sentiments come from, it isn’t shocking to find that recent research suggests weight-based victimization may be a major influence on our perception of body image.

For many lesbian, gay, bisexual, transgender, and queer individuals, this is especially true. A study recently published in Pediatric Obesity found that 77 percent of LGBTQ+ adolescents experience some form of weight-based victimization. This held for many LGBTQ+ youth regardless of body shape: 55 to 64 percent of adolescents in the study who were identified as having an underweight body mass index reported experiencing similar discrimination. Graduation from high school can be seen as liberation by many, but weight-based victimization against LGBTQ+ people can shape body image and satisfaction for a lifetime. Many queer individuals, especially gay men, also face in-group discrimination: research into social media usage and body image found that using image-centric outlets like Instagram can contribute to body dissatisfaction among gay men.

Ryan Watson, a human development researcher at University of Connecticut who co-authored the Pediatric Obesity study, has observed that “many folks report that there is body shaming even among their gay peers.”

Asher Pandjiris, a New York-based therapist specializing in trauma and its impact on the body, spoke in an interview with NBC about her experience treating disordered eating in the LGBTQ+ community. She believes that much of the disordered eating in the LGBTQ+ community stems from internalized homophobia experienced at a younger age. At the same time, many of these unattainable standards are policed and upheld by the queer community itself.

But what does this mean in the long-term? “Research shows that the effects of bullying and victimization manifest strongly later in life for all people,” says Watson. “Given the accounts for these experiences of victimization are oftentimes more prevalent and bias-based in particular for LGBTQ+ youth, I expect that these effects could be stronger and more detrimental for LGBTQ+ populations as they age.”

And just as Watson said, the literature confirms these outcomes. A 2011 study published in the Journal of School Health found that LGBTQ+-related school victimization is linked to poor mental health and an elevated risk for contracting sexually transmitted infections and HIV. If victimization related to sexual orientation and gender identity can lead to negative health outcomes, weight-based victimization against this community can only serve to perpetuate the same outcome.

It’s difficult to know what exactly we can do about such a nebulous issue. Watson believes intervention must come from administrators, teachers, students and family members. “School administration should be invested in continuing to create policies that are specific to anti-bullying characteristics. Teachers need to be able to recognize what constitutes bullying and intervene. Students need to hold their peers accountable. Parents need to socialize their children to be accepting of all identities and body shapes.”

Beyond individuals in the school system, others can address these complex issues. For pediatricians, a 2017 policy statement published by the American Academy of Pediatrics recommends that providers ask youth with obesity about their experiences with weight-based victimization and stigma related to their weight. For the everyday person, we must celebrate and uplift those who do not fit within these confines, and hold each other accountable for upholding unrealistic standards.

As is evident, this is not an issue that individuals can address on their own, but one that should be addressed at all levels of society. Do not forget the power that your voice has, for the smallest actions we take can bring about the biggest changes.

AI has a long way to go to be used as a ‘fake news detector’

Because machine learning models are trained on historical data, an AI tool to detect fake news in real time is far off, researchers say.

It may seem to the general public and their Alexas and self-driving Ubers that AI is capable of anything. But it looks like, at least for now, it can’t be relied upon to weed fake news out of our news feeds.

A recent study by MIT cognitive scientists indicates that AI-powered fake news detectors can achieve impressive success rates, but by the authors’ own admission, such technology needs a lot more work before society can trust it as a steady tool to root out fraudulent online content. Look closer, and you’ll see that the high rates of accuracy were attained by training a neural network on a very specific window of past coverage and testing it on the same window.

Considering the stark differences between how current AI technology makes decisions and how journalists verify fact, it’s clear that right now, AI isn’t a viable cure for the scourge of fake news that infected the 2016 election cycle and appears to be sticking around.

This use case runs into a widespread problem in translating AI research into everyday application: AI is “trained” on existing datasets, while real-world use involves brand-new data every day.

“The data sets in the new topics are so different from what we experienced in the past, it’s going to be very difficult to have success,” said Xavier Boix, a postdoctoral fellow at MIT’s Center for Brain, Minds and Machines, and an author of the study. “The way we see this is this should be a tool in a toolbox. Just having a detector based on the text, that’s clearly limited. It could be something used, but not in isolation.”

The MIT work was innovative in that it focused on detecting veracity based solely on an article’s text, and not any attributes of the creator or its publication. The study “trained” a neural network using 12,000 fake news articles from more than 200 websites and 11,000 real articles from the New York Times and the Guardian, all published from Oct. 26 to Nov. 25, 2016. When put to the test, the detector achieved 93 percent accuracy.

The algorithm identified “real verbs” such as “adapting,” “backing” and “praised,” as well as “fake verbs” including “getting,” “spending” and “lying.” However, when tested on a pool of articles that did not contain the word “Trump,” the accuracy dipped to 87 percent. This led Boix to think that such a detector needs intensive training on the very subject matter it’s meant to be testing.
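
To make that limitation concrete, here is a minimal, hypothetical sketch. It is not the MIT team’s neural network; it uses a simple bag-of-words classifier, and the sample headlines below are invented. The point it illustrates is the same one Boix describes: word weights learned from one news cycle transfer poorly to a different one.

```python
# Sketch of the train/test-window problem. NOT the MIT model; a toy
# TF-IDF + logistic regression classifier on invented headlines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical 2016-era training articles (0 = real, 1 = fake),
# echoing the "real verbs" and "fake verbs" the study identified.
train_texts = [
    "Candidate praised for backing new jobs plan",
    "Officials adapting policy after budget review",
    "You won't believe what they are getting away with",
    "Shocking truth about spending they keep lying about",
]
train_labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Articles from a *new* news cycle use different vocabulary, so the
# learned word weights transfer poorly -- the limitation Boix describes.
new_cycle = ["Unprecedented claims surface about the latest election"]
print(model.predict_proba(new_cycle))  # probabilities for [real, fake]
```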

Boix admitted that though the model has pretty well mastered detecting fraud in the 2016 election, it isn’t ready to move into the future.

“If you train your model in the past election and apply it to articles of a new election, I would bet that the model would probably fail,” Boix said. “The best way to have the model work is to have recent data and have them up to date. That’s one of the current limitations of AI — the data set. If the data set doesn’t capture the domain that you want to apply the model, the model is going to fail.”

This observation is in line with what critics of AI fake news detectors often say.

Ernest Davis, a professor of computer science at NYU, co-wrote a New York Times op-ed in November titled, “No, A.I. Won’t Solve the Fake News Problem.” He’s skeptical of detection technology that relies only on the text of an article.

“There are hardly any inherent clues in the material,” Davis said. “It’s up to people to chase the truth, and that’s not something AI is capable of. [The MIT study] is helpful … but it can be evaded. But it does raise the cost of creating fake news. It’s not useless.”

Boix, too, framed his team’s work as a potential tool, but not a catch-all safeguard.

“The way we see this is this should be a tool in a toolbox … Something you could do is have this when you’re browsing, if you ran some algorithm it could tell you, ‘This sentence here is indicative of fake news,’” he said. “It’s not going to give you an answer for sure, you can’t trust this 100 percent, but maybe it could give you a warning. It could tell you ‘Hey look, this language here is similar [to] language used in the past in fake news.’”

Davis drew on the fundamental differences between the way AI makes decisions and the way journalistic fact-checkers do. “If you accept only official sources, you’re cutting off some people speaking truth to power. That’s not technological, it’s political.” His Times op-ed pointed out that AI can detect patterns in text, but not in implicit ideas or concepts.

Boix said there’s a recent trend of actively guiding the AI research corps’ collective resources toward doing work for “social good.”

“I think now that we see the power in these technologies and these new developments, we are realizing that that could be used for good and bad,” Boix said. “There has been a movement in the community pushing researchers to use these technologies for social good, to use these projects to have a positive impact for society.”

Did this project teach the scientist anything new about neural networks, his career’s work, or was it simply an exercise in reaching for societal good? He said he was genuinely surprised that the team wasn’t able to reverse-engineer the algorithm’s decision-making process by examining its results and patterns.

“That kind of shocked me a bit,” Boix said. “It would be nice if they could explain what reasoning they do.”

As social media and other online platforms look for ways to keep fraud off the public’s pages, researchers like Davis remain skeptical of the algorithms being heralded as solutions.

Photo by Kayla Velasquez on Unsplash.

It’s not a bird or a plane: It’s a drone

Drones getting too close to animals has caused a public backlash to the technology’s use. But scientists say the benefits outweigh the risks.

The footage of the baby bear and its mother went viral. It showed the duo walking along a picturesque snowy mountainside from so close a distance that it could not have been taken with a hand-held camera. The shot required something else: a drone. But while the device captured a moment between mother and cub, the drone pilot pushed his luck. He flew the drone too close, spooking the bears and sending the smaller of the two tumbling down the mountain.

The scene set off waves of discussion on social media outlets like Twitter, reviving the debate about the ethics of using drones near wildlife. But researchers were already looking at the problem.

In a 2015 study, Mark Ditmer, a postdoctoral researcher at Boise State University, looked into the physiological response wild black bears had to drones by monitoring their heart rates and thereby their stress levels. He found that when drones would approach, the heart rates of the wild bears would skyrocket, showing an increased level of stress.

“We saw pretty evidently that they had these huge spikes in heart rate,” said Ditmer as he described the effect of the drones on the bears. “This shows that this is sort of a different stimulus for them.”

However, a continuation of that study showed something different: a new research article published this past January found that bears can become habituated to the devices when exposed to drones for enough time. This time the research was conducted on captive bears, which meant scientists could control their environment, leaving the drones as the main variable.

Science often enters uncharted territory with the creation of new tools, technologies and methods. Drones, now widely available to consumers, have become an area of increased interest among scientists and conservationists because of their accessibility.

While there is still more research to be done in this area, the new insights about habituation have helped to calm some of the moral questions surrounding drone use while emphasizing the positive impacts drones could have for both research and combating poaching.

Indeed, some groups are already using drones to draw attention to, or scare off, poachers. The Air Shepherds have used the latest surveillance drone technology to dissuade and stop poachers in Africa before they are able to kill animals like rhinos and elephants. If a poacher is spotted within a protected area, park rangers are dispatched to stop the illegal activity.

There are potential drawbacks, since it takes time for park rangers to reach the area, which could give poachers an opportunity to get away. Still, the Air Shepherds claim that in one surveillance area, where roughly 19 rhinos had been killed per month, no animals were killed during the six-month period a drone was monitoring.

And drones are not only being used for terrestrial species. Many marine researchers have also taken to the devices. “For many marine animals, their behaviour is not altered and you can capture an aerial view of its natural behaviour,” wrote Enie Hensel, the lead author of a 2018 study on drones in marine research, in an email. Hensel notes in her study that drones are valuable for identifying and counting individual sharks, turtles and rays from a bird’s-eye view.

But as more research becomes possible with drones, it seems unlikely that the discussions surrounding their logistics, ethics, and regulations will come to an end soon.

According to Ditmer, when it comes to rules and regulations, drones are a moving target. But he is careful to point out that the restrictions are actually tougher on scientists than on consumers flying drones recreationally.

For his first study on black bears, Ditmer and his team were required to keep the drone in their line of sight at all times, and the operator even needed a certified pilot’s license to fly it. He can only hope that the public operates their drones under those regulations.

With drone technology continuing to evolve, quieter, more efficient and less expensive drones will become increasingly available, opening the door to more scientific research and conservation work. But as drones and their applications become more ubiquitous, so too will the moral and ethical discussions about their use so close to wildlife.

Pearls of wisdom for oyster reef restoration efforts

A recent study suggests that oyster reefs are more sensitive to environmental variables like elevation and sedimentation than previously thought.

Whether grilled, fried, or raw on a half shell, oysters are popular menu items in restaurants across America. What is less known, however, is the important role oysters play in their environments, and how depleted their populations have become over the last few centuries due to overharvesting and pollution. In their effort to clean waterways and mitigate damages caused by storms that are worsening due to climate change, many organizations are trying to restore coastal oyster reefs to even a fraction of their former glory. Could science provide a more efficient way to grow oysters?

In a study published in January in Marine Ecology Progress Series, then-PhD candidate Christopher Baillie and Professor Jonathan Grabowski of Northeastern University’s Marine Science Center concluded that oyster reefs are much more sensitive to subtle changes in elevation than previously thought. Their research could affect how these restoration efforts proceed by improving understanding of what environmental conditions are best for oysters.

Baillie and Grabowski studied the effects of a one-meter range in intertidal elevation, sedimentation, predation, algal cover, and settlement of other marine animals on oyster population health, which was measured by oyster mortality rates and shell lengths. The research, motivated by the limited resources available to reef-restoration organizations, was conducted in Ipswich, Mass.

Now conducting postdoctoral research at East Carolina University, Baillie summarized his biggest finding in a southern drawl: “Vertical elevation affects oyster reef restoration but a lot of factors don’t.” The deepest units had three times more oyster settlement than units at the two shallower intertidal elevations.

“The depth range was relatively small in terms of the overall tidal range, but even over these small gradients there were pretty dramatic differences in results,” added Baillie. He prescribed careful consideration of limited resources to ensure the best results in restoration.

While elevation was the most important factor in oyster mortality, with deeper-situated oysters showing better survival and growth rates, the difference in mortality between the two shallower groups was attributed to sedimentation: more sediment meant higher mortality. When oysters dominated the waterway centuries ago, they kept turbidity (water haziness) and sedimentation down. Now that the oysters are mostly gone, sedimentation has worsened, leaving few stable spots for oysters to settle.

“It’s a vicious cycle,” Baillie said.

Oysters ingest and filter about 50 gallons of water per day, retaining the nutrients they need. By ingesting contaminated water, they remove nitrogen pollution, which fuels algal growth that depletes the water’s oxygen. The oysters then deposit some of the pollutants in a solid packet in the sand and store the rest of the nitrogen in their shells and tissues.

In addition to filtering water, oyster reefs act as breakwaters, helping to shield coasts from rough waves that can cause erosion and flooding. These natural buffers intercept underwater currents on the seafloor, minimizing the energy of the waves on the surface. Before Europeans arrived on the east coast of what is now the United States, oysters were ubiquitous in coastal waterways. In the Mystic and Charles Rivers around Boston, oysters were in fact so prolific that large ships could not pass.

But now, due to overharvesting and pollution caused by industrialization, urbanization and farming, oyster populations around the U.S. have plummeted. In New England, oysters are “functionally extinct,” says Baillie.

For his part, Baillie hopes that “there is enough interest to bring some of [the oysters] back.” The value that oysters hold in their ecosystems is what sparked his research interest in restoration efforts.

“Also, the outdoors is more appealing than fluorescent lighting,” he said, laughing.

Photo: Oyster restoration at MacDill Air Force Base in Florida. Airman 1st Class Sarah Hall-Kirchner.

One cell to rule them all: How a single cell may be the key to battling cancer

According to American Cancer Society projections, more than half a million Americans are expected to die of cancer this year.

While current methods of treatment, such as chemotherapy and radiotherapy, are successful in removing a large number of cancer cells from the body, there is still a chance of recurrence with these therapies. Destroying a tumor only to have it grow back again is not a cure. Cancers evolve, gaining the resistance they need to fight back or evade conventional therapy.

Medicine needs a new method of cancer treatment. Fast.

Scientists now believe they have found a new way to fight the disease, and it involves cancer stem cells, or CSCs, a rare subset of cells within tumors that are responsible for generating cancer cells. One of their greatest strengths, according to a recent study in the journal Cell, is their ability to withstand harsh conditions and adapt to them.

Because of this, CSCs can resist conventional therapy, which is why tumors are able to grow back after being destroyed. While a patient may be under the impression that the cancer cells are gone, cancer stem cells likely remain, lurking in wait, ready to initiate another tumor.

But these CSCs also represent a new opportunity. Scientists are searching for methods that target not just cancer cells in bulk but the cancer stem cells themselves. The hope is that by eliminating these tumor-initiating cells, the cancer will be unable to regrow.

Kilmer McCully, an associate professor of pathology at Harvard University, is looking for new therapies to eradicate these cells. His research has “revealed a new class of compounds, furanonaphthoquinones, which specifically inhibit the growth of cancer stem cells,” he said.

One compound in this new class is napabucasin, which can inhibit the regeneration of cells and induce apoptosis, or programmed cell death, in a wide array of cancer cells. A treatment using this compound to target cells would kill off the tumor and prevent its CSCs from functioning as well.

While therapy focusing on cancer stem cells alone would represent progress in the fight against the disease, researchers have multiple obstacles ahead of them.

For instance, as CSCs make up a very small percentage of the total cancer cells in a tumor, specifically targeting such a rare set of cells within a mass of billions will be a difficult task.

Plus, there’s debate as to whether all cancers have these CSCs. In a 2010 New York Times Magazine article titled “The Cancer Sleeper Cell,” physician and author Siddhartha Mukherjee notes that the likelihood of all cancers following a “stem-cell model” is exceedingly small. While breast cancer and brain cancer, for example, are known to follow a stem-cell model, other forms of cancer may not.

But McCully argues that just because a treatment will not cure all cancers does not mean it shouldn’t be researched. Despite the obstacles ahead, scientists like him remain optimistic about cancer stem cell research.

“I believe there will be a success in the future in the treatment of cancer… using autologous human stem cells,” McCully said, referring to a type of transplant in which doctors harvest stem cells from a patient’s own bloodstream and return them to the patient.

Even now, the science community is seeing new strides in stem cell research. In a recent study at the University of Toledo, researchers have discovered that Connexin43 (Cx43), a gap junction protein in human lungs, is capable of suppressing lung cancer stem cells.

Discoveries like these lay the groundwork for the future of cancer research. While the stem-cell model may not hold for all types of cancer, the discovery of even a single cure could mean life or death for millions of people around the world.

Photo by National Cancer Institute on Unsplash.

Researchers at M.I.T. are developing a system to track food contamination through RFID tags

E. coli-tainted romaine lettuce. A Salmonella outbreak tied to cereal. Listeria in deli ham. The list goes on. In 2018, it seemed as if every week another food product was being recalled due to a food-borne pathogen outbreak. Romaine was the worst culprit, responsible for multiple outbreaks that sickened hundreds of Americans.

So, who’s protecting our food?

The Centers for Disease Control and Prevention provides the link between illnesses in people and the food safety systems of government agencies and food producers. The CDC, the Food and Drug Administration, and the United States Department of Agriculture’s Food Safety and Inspection Service, or FSIS, collaborate at the federal level to promote food safety.

But when the government shuts down, how do consumers, farmers and factories safely avoid food-borne pathogens? What if there were a better, more technical way to ensure the food we’re eating is safe?

Researchers at the MIT Media Lab have developed a system built around a reader that senses minute changes in the wireless signals emitted by RFID, or radio frequency identification, tags when those signals interact with food, changes that can reveal potential contamination.

“Scientists and governments have long recognized the need to assess food quality and safety. The vast majority of existing solutions, however, rely on expensive equipment in specialized food labs,” said Unsoo Ha, a postdoc at MIT and first author of the study. “Unfortunately, extracting samples from every purchased item and sending them to food labs for testing is impractical for lay consumers. To solve the issue, we proposed RFIQ which is based on cheap stickers.”

“In recent years, there have been so many hazards related to food and drinks we could have avoided if we all had tools to sense food quality and safety ourselves,” said Fadel Adib, an assistant professor at the MIT Media Lab and co-author of the paper, in an MIT press release about the research.

The team’s RFIQ system, which can predict whether a material is pure or tainted and at what concentration, was tested on baby formula laced with melamine and on alcohol diluted with methanol, or fake alcohol. In those experiments, it identified contamination with average accuracies of 97 and 96 percent, respectively. Changes in the signals emitted by the tags correspond to the levels of certain contaminants within the food product.
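
For intuition about the classification step a system like this performs, here is a deliberately simplified, hypothetical sketch. The feature values, units and model below are invented for illustration; they are not the paper’s actual signal-processing pipeline.

```python
# Hypothetical sketch of classifying a food sample from RFID signal
# features. The real RFIQ system extracts features from a tag's
# frequency response; everything below is invented for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row: made-up features of a tag's wireless response measured
# against a known-pure baseline (e.g., shift in resonant frequency,
# change in signal amplitude).
readings = np.array([
    [0.02, 0.1],   # pure sample
    [0.01, 0.2],   # pure sample
    [0.85, 2.3],   # contaminated sample
    [0.91, 2.1],   # contaminated sample
])
labels = ["pure", "tainted", "tainted", "pure"][::-1]  # see note below
labels = ["pure", "pure", "tainted", "tainted"]        # aligned to rows

clf = KNeighborsClassifier(n_neighbors=1).fit(readings, labels)

# Classify a new reading from a tagged container.
print(clf.predict([[0.88, 2.4]]))  # -> ['tainted']
```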

With more research and experimentation, Adib and his co-authors wrote that “wireless sensing can glean increasingly more sensitive information about the health and safety of our food and environment.”

Earlier this year, a 35-day government shutdown rocked federal agencies, notably the FDA, furloughing hundreds of workers and halting many food inspections, leaving consumers hoping what they were buying was safe to eat. That’s risky, says Darin Detwiler, a professor of food policy and director of the Regulatory Affairs of Food and Food Industries program at Northeastern University.

“Without the federal oversight, we are at increased risk, given the rapid movement of perishable foods to market,” said Detwiler.

A system like the one being researched at the MIT Media Lab could help consumers, especially as our food system grows larger and more complex and, with it, the possibility of food-borne illnesses.

“I hope that RFIQ could help consumers to identify the food quality and safety even without the legal protections, which means the technology can democratize food quality and safety,” said Ha. “Hopefully the technology will bring [quality and safety] to the hands of everyone.”

The Food and Drug Administration reports about 48 million cases of food-borne illness each year, meaning one in six Americans gets sick from contaminated food, resulting in 128,000 hospitalizations and 3,000 deaths. The U.S. Department of Agriculture estimates that food-borne illnesses cost more than $15.6 billion each year.

While the government is back up and running, shutdowns are common, and that puts our food at risk. Without federal oversight and inspections, food-borne pathogens can persist.

“We cannot live in a plastic bubble and be afraid of everything we eat,” said Detwiler. “We need to be proactive in our ability to make decisions to best protect ourselves and our family.”

Photo by Massimiliano Martini on Unsplash.