Wednesday, November 21, 2012

Short List

I've been told to always cite my sources unless I want to be banned from reading or writing anything ever again and kept locked in a little cave somewhere in northern Canada.  I try to cite my sources in the original posts, but I thought it would be a good idea to point out some of the main people and groups who have influenced my thinking.  Added bonus:  a nice compact list of cool people to check out!

The Skeptics' Guide to the Universe - A weekly podcast covering science and skepticism.  Quite excellent!  

Skeptoid - Another weekly science podcast, hosted by Brian Dunning.

Richard Wiseman's many books, videos, articles, and blogs - Richard Wiseman is a psychologist who studies the quirky sides of human nature, such as luck, humor, and illusions.

Penn and Teller - Magicians-turned-skeptics with a great sense of humor

James Randi - "The Amazing Randi" is also a magician-turned-skeptic.  The James Randi Educational Foundation works on teaching critical thinking.

The Skeptics Society and Skeptic Magazine - Pretty much what the names suggest: a society and magazine focused on uncovering the truth about psychics, aliens, and the like, led by the fearless Michael Shermer.

The New York Skeptics and the New England Skeptical Society - Two groups that organize the Northeast Conference for Science and Skepticism (NECSS) every year.  The event is fantastic!

George Hrab - Host of the Geologic podcast and a skeptical rock musician (in both senses of the phrase - that's right, skepti-rock exists).

I know there are a ton more I could list, but hopefully this will suffice to keep me from being locked in small, dark caves for a while.  These guys are all pretty cool, so check them out when you can!


Thursday, October 4, 2012

How Our Brains Trick Us - Selective Attention

To explain selective attention, I'll start with this example from psychologist Richard Wiseman: http://www.youtube.com/watch?v=jM5ekCEqYQM

I think selective attention is one of my favorite brain failures to talk about, if only because of the sheer ridiculousness we feel once we realize what happened.  And it comes up in all sorts of places we wouldn't necessarily expect.

Selective attention plays a role in magic all the time, as Penn and Teller so often show.  The misdirection principle they mention is the basis of most sleight-of-hand tricks.  All you need is to distract the audience somehow.  Next time you see somebody make something small disappear, check what they're doing with their hands.  We always have the image of a magician making a coin or a ball disappear with a large flourish of his hand.  That flourish is no accident.  We are programmed to follow it with our eyes, allowing the magician to do anything he wants with the other hand.  Pocket the coin, perhaps.  Or pull it out from behind somebody's ear.  Our brains see the quick motion of the (actually empty) hand, and they see that the magician is also tracking that hand with his own eyes.  They figure they had better track that motion too, just to be on the safe side.  The result of this selective attention is that we are tricked, nice and easy.  Even after you know how such tricks work, it is still difficult to train yourself not to follow the magician's moving hand, but go ahead and see if you can!

This is how pickpocketing works as well.  Here, mentalist and illusionist Derren Brown demonstrates the principles of pickpocketing.  The video is a bit long, but you don't have to watch much of it to get the idea.  As long as the pickpocket keeps the subject distracted, with motion or speech, the subject won't notice that he's being pickpocketed.  It's just a consequence of our selective attention.  We can't focus on everything at once, and the pickpocket knows how to be subtle and light so our attention isn't drawn to the pickpocketing motions.  More misdirection.

Selective attention may even have something to do with whether we view ourselves as lucky.  Richard Wiseman once put an ad in a newspaper asking people to contact him if they considered themselves very unlucky or very lucky.  He then had the responders participate in a test in which each subject was given a newspaper and the task of counting all the pictures in it.  On the second page, in large font, there was a notice saying "Stop counting - there are 43 pictures in this newspaper," and another one halfway through the paper reading "Stop counting, tell the experimenter that you have seen this and win $250."  The people who considered themselves very lucky found these notices far more often than the unlucky people did.  Wiseman reported in an article in the Skeptical Inquirer that the unlucky people were more anxious, and anxiety makes people less likely to notice the unexpected.  The unlucky people were so focused on the task at hand that they were unaware of the other opportunities around them.  So if you've always wanted to be lucky, start expecting the unexpected to happen and you'll be able to see when it does!

We wouldn't want to turn off our selective attention all the time.  It's important to be able to focus on the dangerous things that are changing around you while ignoring the unimportant changes.  But I think knowing that you can be tricked is just as interesting as watching the magic show happen.  It is still difficult to figure out exactly how your brain is being fooled, and harder still to prevent it from happening.  Teaching your brain how to think differently and how to avoid getting distracted is an incredible trick in itself!

Friday, September 7, 2012

Stephen Colbert + Neil deGrasse Tyson

I know this interview is a bit old, but it doesn't matter.  It is an excellent example of how science can be made accessible to the public.  Neil deGrasse Tyson is simply brilliant.  I can't add any more without sounding like an over-excited fangirl, so just go watch the interview.

Sunday, August 26, 2012

How Our Brains Trick Us - False Memories

Continuing our "How Our Brains Trick Us" series, we come to the existence of false memories.  Now this one is really good, because we all think we can trust our memories.  I mean, we use eyewitness testimony as evidence in court, right?  So our memories must be really solid.

Hahahahaha, no way!

Our brains can, and will, create their own false memories.  Consider the results that K. A. Wade, M. Garry, J. D. Read, and D. S. Lindsay, of Victoria University of Wellington in New Zealand and the University of Victoria in Canada, present in their 2002 paper, "A picture is worth a thousand lies: Using false photographs to create false childhood memories".

For each test subject, they put together a collection of real photographs depicting events like birthdays and family outings from the time the subject was about four years old with one doctored photograph of the subject taking a hot air balloon ride with one of his or her parents.

A real image of the subject and a parent was inserted into a photograph of a hot air balloon, creating the false image used in the study.

The researchers then interviewed the subjects about the events shown in the photographs.  Fifty percent of the subjects came up with partial or complete descriptions of the fabricated hot air balloon ride.  When it was revealed that the photograph was fake, a fact confirmed by multiple family members, many of the subjects were incredulous that their memories had been wrong.  But all it took was a little prodding to get them describing how the wind felt on their faces and what the buildings on the ground looked like from so high up.  And they believed what they were saying.

This was a follow-up to another study, published in 1995: "The Formation of False Memories," by Elizabeth Loftus and Jacqueline Pickrell.  After telling their subjects that they had once been lost in a mall and asking them to remember and explain what they felt during the situation, about 30% of the subjects came up with narratives.  About something that never happened.  So while the visual aid of the doctored photo helped jog false memories in a larger percentage of subjects, just suggesting that some childhood event occurred is enough to get our brains to decide that it really did happen.

That's fairly terrifying if you ask me.

But not as terrifying as the first time you went on a roller coaster and it got stuck at the top of the hill.  Wasn't that just awful?  Waiting there, not knowing when you would get down, staring down at all the other carnival goers wandering about with their ice cream cones as if absolutely nothing was wrong?

While it is unsettling to think of childhood memories as modifiable, it is more disturbing to think about some of the more serious areas where we depend on memory.  I already mentioned judicial testimony.  What about criminal interrogations?  Therapy sessions regarding child abuse?  If just asking somebody a question can potentially prompt false recall of false past events, can we make good decisions based on the results of such sessions?

Now, I'm not saying that all of our memories are false and there is a whole big conspiracy going on to implant fabricated histories in our minds so we don't know what has really been happening to us throughout all of our lives.  But we should know what follies our brains are capable of so we can better judge the information presented to us by other people's and our own memories.

Now if you'll excuse me, I'm off to mod my tin foil hat so it's memory-tampering proof too.

Tuesday, July 31, 2012

How Our Brains Trick Us - Pareidolia


I am currently working as a TA for the Summer Science Program (SSP), which takes high school students who are bored with their regular class work and show exceptional interest and talent in math or science, and gives them five and a half weeks of lectures in astronomy, physics, computer science, and calculus, alongside a project to track and determine the orbit of a near-Earth asteroid.  All of the TAs are allowed to give a guest lecture during the program, and I decided to do mine on a few of the ways our brains trick us.  This post will focus on pareidolia, with the other examples from my lecture to follow in later posts.  Some of the larger effects that I draw out from individual phenomena are actually the result of multiple psychological and biological factors working together; I have still included them as examples because they are at least partially caused by the brain trick I am exploring.  There are too many logical fallacies and human biases to cover all of them in a 50-minute lecture.

While I have not done any original research on this topic (yet!), I’ve found that I am very interested in skepticism and critical thinking, and in what happens when we lack them.  The scientific method has critical thinking built into it.  It mandates that we make our hypotheses before we see our data and that we use the rules of statistics and probability to interpret that data.  Common practices such as double-blinded studies help minimize the effect human biases have on experimental results.  But what happens once we leave the laboratory?  Surely our brains don’t stop being biased.  They don’t stop interpreting things subjectively.  How are they tricking us when we stop paying attention?

One example of our brains tricking us is “pareidolia,” the phenomenon of seeing patterns in random noise.  Our brains are excellent at pattern recognition, and, generally, this is a good thing.  It lets us learn languages, play music, do science and math, make art, and develop social skills.  But our brains overdo it.  They see patterns wherever they can.  This may be a leftover evolutionary trait.  Back when we still had to worry about being eaten, if you heard a rustle in the tall savanna grass, you had better recognize the pattern: the last time and the time before that, a rustle in the grass meant there was a lion, and you’d better run.  It is much better to accept a false positive in this case (I run away because I think there is a lion, but it was really just the wind rustling the grass) than a false negative (I stay because I figure it’s just the wind and end up being eaten by a lion).  So extending this grass-rustling/lion pattern makes sense.

Recognizing facial expression patterns is also very important.  If you want to get along with the people in your group, you have to know if they’re upset with you or happy or worried.  So our brains love to see faces.  They are looking for any piece of information that will tell them what is going on, if there is any danger, if anybody looks like they are angry.  That’s why when faced with pure random noise, our brains try so hard to find a clue about what is going on, and they start to see things that aren’t really there.  Bam!  Pareidolia.

What this leads to in the wider scheme of things is cultural beliefs in things like Bigfoot, aliens, and ghosts.  One famous example of pareidolia is the “Face on Mars.”  

(Image from Wikipedia)

In 1976, the spacecraft Viking I took an image of a Martian rock outcropping that resembles a face.  Some people interpret this as evidence for intelligent life on Mars.  Others might recognize that seeing such a pattern in Martian rocks is just a result of pareidolia and our brains’ tendency to see faces when none are there.

More recently, the Mars rover, Spirit, took an image of a Martian rock formation that resembles a large, hairy, upright walking ape.  

(Image from abclocal.go.com)

This is claimed to be evidence of Bigfoot on Mars.  Such patterns tend to disappear once images are taken with better equipment or from a different angle.  When the Face on Mars was imaged again in 2001 by the Mars Global Surveyor, its facial features were no longer visible.

(Image from http://www.msss.com/)

Interesting.

Pareidolia also gives us Jesus toast, Jesus tortillas (Jesus anything, really), and Homer Simpson on Mercury.

(Images are from (left to right) skepticmoney.com, skeptico.blogs.com, theness.com/neurologicablog)


Our brains can make faces out of pretty much anything...

(All images from geekosystem.com)


There is also auditory pareidolia.  Our brains gather information from the sounds around us, so they are also expecting auditory patterns.  This leads to exciting results, such as listening to clips from skeptoid.com of arbitrary sine waves, which sound like nothing at first.  But once we are given a suggestion of what could be there and listen to the original clip again, we start to hear patterns, intelligible speech, that wasn’t there before.  All our brains need is a little bit of a clue as to what we might be hearing in that random bit of noise, and that’s enough for them to latch on to a pattern.  Our brains will take any bit of information they can get to make sense of the world around us, even if it means constructing an artificially ordered world.

But now that you know it happens, you can take care not to fall into the same trap.  Just as you wouldn’t think that a cloud shaped like a bunny rabbit actually is a bunny rabbit, you need not think that what appears to be Bigfoot on Mars actually is Bigfoot on Mars.  And not all rock musicians have secret messages to Satan that can be revealed by playing their songs backward.

But sometimes Darth Vader really is on your toast:

(Image from coolest-gadgets.com)

Monday, June 4, 2012

Transit of Venus

Seeing as how this started as an astronomy blog, it would be wrong not to write about Venus transiting the sun tomorrow.   This link is quite handy, but if you don't feel like clicking it, just know that Venus will be moving in front of the sun as seen from Earth on June 5, 2012.  Venus transits come in pairs, with eight-ish years between the two transits in a pair and more than a century between pairs.  So you ought to try and catch this one!  It will be visible from around 3:00 pm to sunset in the Pacific time zone.  Just as with the solar eclipse earlier this year, don't look directly at the sun without some sort of protective eye gear.
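To make the pairing concrete, here is a small sketch of the pattern.  The `transit_years` helper is my own illustration, not from any astronomy library: successive transits are separated by gaps of roughly 8, 105.5, 8, and 121.5 years, which together make up the full 243-year cycle.

```python
# Illustrative only: generates approximate transit years from the repeating
# gap pattern; the real dates shift by some months within these gaps.
def transit_years(start=2004.5, count=6):
    """Approximate Venus transit years, beginning with the June 2004 transit.

    The pattern repeats every 243 years, built from gaps of roughly
    8, 105.5, 8, and 121.5 years between successive transits.
    """
    gaps = [8, 105.5, 8, 121.5]
    years = [start]
    for i in range(count - 1):
        years.append(years[-1] + gaps[i % len(gaps)])
    return years

# The pairing is visible immediately: two transits 8 years apart,
# then more than a century before the next pair.
print([int(y) for y in transit_years(count=4)])
```

The 2004/2012 pair, and the century-plus wait before the next one, fall straight out of the gap pattern.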

I used to look at solar eclipse pictures, and similar phenomena, as if the object being eclipsed were a circle that was becoming a crescent or just losing a piece of itself.  But for this past solar eclipse, I was able to think about what I was seeing in three dimensions rather than two.  This made it more obvious to me that the moon was actually orbiting the Earth and blocking out the sun's light.  It was way cooler to look at the eclipse this way.  I'm hoping to do the same thing when I observe Venus's transit.  It kind of gives you a sense of the scale of the solar system.  The Earth and Venus are similar sizes, so when you see how small Venus is in comparison to the sun, that's how small Earth is too.  Incredible!

Thursday, May 31, 2012

The Hidden Side of Science

The myth:  Science is a straightforward process and all research has an immediate application.

Reality:  The public mostly reads about large scientific breakthroughs, which helps lead to misconceptions like the one stated above.  I understand that most of the small results in science are really only interesting to the people working in the field in which those results took place.  But the majority of science news stories end with a paragraph explaining the applications that may come about as a result of the findings, often including the phrase "such technology is expected to be available in five to ten years."  Of course, this rarely happens as predicted.  Since news sources mostly print science success stories and tend to emphasize potential applications, we end up with the idea that scientists do science knowing what the applications will be and that scientists know exactly what they are doing when it comes to research.  Really, scientists rarely start research knowing exactly what will result.  They may have some idea of what to expect, and they have defined rules that dictate which possible results will support their hypothesis and which will refute it.  Scientists typically spend a long, long time working to get these conclusive results, if they ever manage to get results at all.

Sometimes I get the feeling that the average person sees science as progressing like this:
  • A scientist wants to solve a problem of an engineering or medical sort
  • The scientist thinks really hard and/or messes around with different chemicals/apparatuses and/or gets hit on the head by an apple
  • The scientist comes up with the solution and the problem is fixed
  • Science continues, yay!
But really, science looks kind of like this:

  • There are many different areas and large groups working together
  • There is a ton of crossover among disciplines
  • And there are many, many dead ends
Additionally, a single scientist will probably get about this far down one of the above paths during her entire career:  __.
This tiny step may be down a dead end, but the ending is so far away that nobody can see it.  So yes, as a scientist, there is a possibility that you will spend your entire life working on something that is wrong.  But take solace in the facts that a) you have helped science by discovering a dead end, thereby making sure nobody else will spend time there, and b) even if the overall theory you were working on is false, that one little bit of science you did might be useful somewhere else.  Your results may have refuted your hypothesis, but you managed to develop some new lab technique.  That's great!  Or perhaps it was a piece of code.  You can try packaging it and giving or selling it to other scientists who might need it.

So scientists make mistakes during research.  It happens.  For example, consider the widely publicized story of the so-called faster-than-light neutrinos (extraordinarily light, uncharged particles) reported by the OPERA experiment, which detects neutrinos sent from CERN, the European particle physics laboratory.  Their results suggested that they had detected neutrinos that moved faster than the speed of light.

Now that is an extraordinary claim!  Einstein's theory of special relativity, which has been experimentally confirmed over and over again, tells us that nothing can move faster than the speed of light.  If neutrinos were moving faster than light, much of our understanding of physics would have to be entirely rewritten.

After repeating the experiment multiple times, asking the scientific community for independent confirmation, and launching an investigation of their equipment, the researchers finally found that a loose fiber-optic cable and a faulty timing oscillator had skewed their measurements.

All scientists can make mistakes, but what distinguishes good scientists from bad scientists and pseudoscientists is how they react to those mistakes.  Specifically, good scientists do not jump to conclusions.  They are skeptical of their own results when those results appear to disagree with strongly experimentally verified theories.  This is because they know that scientific consensus comes from examining a large body of research comprised of innumerable studies, not from an individual experiment.  Scientists are fallible and experiments don't go perfectly, but science has built-in rules that minimize the damage caused by any single mistake.  And slowly, after so many researchers and mistakes and corrections, we make scientific progress.

But no matter what reading science articles makes you believe, not all of this progress comes in the form of direct application.  And this is OK!  Basic scientific research is still necessary, even if an application is not immediately obvious.  Mandating that all research result in applications is unfeasible, but that doesn't mean basic research doesn't have value.  Remember when I said that scientists often don't know what the ultimate result of their research will be?  All this means is that applications can appear from unexpected places.  For example, Einstein's theory of general relativity describes the fundamental structure of spacetime and revolutionized the way we think about gravity.  Yet it is essential to GPS, a technology that people like my mom and I depend on to get anywhere farther than three blocks from our house.  Just because scientists can't see the end of a certain scientific path doesn't mean it won't be a fruitful path.  And with so many paths in the first place, they don't know which ones will result in applications, so we need many, many people going down all these different paths.

Some research starts off with a final product, application, or improvement in mind, but for other projects, the end results are a mystery (if we don't factor in prior plausibility).  So when people, either jokingly or indignantly, proclaim that scientists are wasting their time on some sort of basic research, or that they are purposely neglecting things like cancer research, it becomes obvious that those people do not know how science works.  Basic research should be promoted, not denounced.  Science is so intricate and interdisciplinary that nobody knows where the most valued and sought-after results will come from.  Which is why schemes like Eric Cantor's YouCut, which (at least when it launched) let the public vote on which National Science Foundation projects were frivolous and should have their funding cut, may actually cut the basic research that leads to a cancer treatment or some new amazingly strong and light material.  That's not to say that all experiments are equal.  Prior plausibility is a factor here.  There is not much reason to repeat Millikan's oil drop experiment over and over again, expect new results, and call it novel research.  However, developing life-changing technology takes time and, as with all final products, the processes that lead to it must start at the beginning.  For scientific, engineering, and technological breakthroughs, the beginning is located in the basic research stage, and the process starts when a scientist utters "I wonder..."


Tuesday, April 17, 2012

A Science Feature Story

Turning Trash into Treasure:
How can we use rotting food as fuel?

At the back of any dorm refrigerator you are apt to find all sorts of questionable products.  Perhaps some black sludge that used to be a banana, a plate of white fuzz that was once take-out, or a bottle of grape juice that should now be called wine.  But maybe we should not be so quick to flood these refrigerators with bleach.  New developments in energy science are allowing researchers to convert food waste into various types of biofuels.  That wilting, mushy celery stalk you were about to throw out?  Using the proper technology, it can be turned into natural gas.  Of course, it takes a bit more than an old dorm fridge to turn rotting food into an alternative source of energy in a way that is both economically sensible and environmentally safe.

The Science
The idea of biofuels is not new.  For decades we have been using corn and sugar to make liquid ethanol as a fuel for cars and other vehicles.  But using food that we might otherwise eat causes food prices to climb dramatically.  A group of scientists at the Fraunhofer Institute for Interfacial Engineering and Biotechnology in Stuttgart, Germany, has come up with a way to use food waste to create biogas: methane obtained from organic sources instead of from underground.  They take food that spoiled before it could be bought at a wholesale market and let it ferment in large containers.  The methane released by the food is captured in another container, and the resulting natural gas can then be highly compressed and used to power cars.  And it’s not just grocery stores that have excess food.  Produce waste can be collected from universities, restaurants, military facilities, pretty much anywhere food has to be prepared on a large scale.

There are several different methods by which biomass (organic material, in this case excess fruits and vegetables) can be used to extract methane.  Most of the differences come from how the main container is partitioned, which affects how the food waste and bacteria that help fermentation will mix and how the released gasses will be siphoned out.  In general, slurry that will help the biomass decompose is mixed with the biomass at the bottom of the container (see figure 1). 


The gases that are released are allowed to flow up into another chamber.  The slurry/bio-waste mix is unable to enter the section holding the gases due to partitions and/or gravity.  The gases can then be captured from the container by pushing them out using pressure gradients or simply allowing them to float out into another container (Polprasert, 167, 171, 174, 176).

Other factors that vary among facilities include what material is used to facilitate fermentation, the temperature of the process, and what the biomass consists of.  The organic matter processed ranges from corn and sugar beets to sewer sludge.  What makes all these different materials potential energy sources, though, is that as they decompose, they release gases, including methane.

Here’s a simple experiment, although it may be better to see a demonstration than try it on your own.  Go to a pond with a thick layer of muck on the bottom and bring along a funnel, a lighter, and a friend.  It helps if there are trees that often drop their leaves into the pond.  Stick your hand in the muck (go ahead, nice and deep) and mix it around.  With your other hand, hold the funnel with the wide side just under the surface of the water and the narrow end pointing into the air.  The funnel should be positioned over the spot where you are mixing up the mud.  As bubbles escape from the mud, they will be captured by the funnel.  Have your friend, while staying as far away as possible, switch on the lighter right above the narrow end of the funnel.  BOOM!

All that muck at the bottom of the pond is made of decaying organic matter: dead fish and algae, sticks and leaves that sank to the bottom, whatever.  As it decays, it releases methane, which gets trapped under the other layers of mud.  When you stir up the mud with your hand, you allow these bubbles of methane to escape to the surface.  The funnel focuses the methane in one place so it can be ignited with the lighter.  The result: we see that methane can be burned to release energy.

Biogas plants follow the same basic principle as above, but magnified.  Organic material decomposes, the resulting gases are collected, and, after a bit more processing, they can be used for energy.  Bacteria that decompose organic material and create methane as a byproduct already exist in pond water and dirt, but the facilities have to add their own bacteria, hence they use fertilizer, sewer sludge, or some other source of bacteria in the mixing chamber of the main holding container (Fraunhofer, 2012).

The bacteria first perform hydrolysis on the cells of the fruits and vegetables.  This process breaks down the longer, more complex carbohydrates in the cells into simple sugars.  Another type of bacteria then converts the sugars into various organic acids.  Still another type of bacteria breaks down the organic acids further.  Finally, archaea, a domain of microorganisms distinct from bacteria, turn the acids into methane and carbon dioxide.  Throughout the process, other gases and nutrients, such as hydrogen and nitrogen, are created as by-products (Marchaim, 1992).  While the researchers in Germany are interested in the methane as a fuel, they have good plans for the numerous by-products as well (see figure 2).



The Environment
One of the by-products of the process is carbon dioxide.  Since one of the goals of using fermenting fruits and vegetables to generate energy is to reduce the amount of greenhouse gases released into the atmosphere, it would be advisable to put that CO2 to good use.  The researchers in Stuttgart have teamed up with researchers working on an algae cultivation project in Reutlingen, Germany.  The algae secrete oil that can be used in diesel engines, but only if they have enough carbon dioxide, sunlight, and nutrients.  The carbon dioxide from the biogas generation process helps the algae grow.  The filtrate water left over from the mixing also goes to the algae, as it contains enough phosphorus and nitrogen to keep them well fed.  Additionally, once the bacteria cannot break down the original mixture any further, the Stuttgart researchers send the remaining sludge to the Paul Scherrer Institute in Switzerland and the Karlsruhe Institute of Technology in Germany, where it is converted to methane (Fraunhofer, 2012).

So the by-products of the process don't seem to be a cause for environmental concern.  But is biogas as energy efficient as other fuels?  And even if natural gas is the way to go, is there an advantage to getting it from fermenting produce instead of underground?

Some research suggests that biogas is far more efficient than other biofuels, such as ethanol.  That is, the ratio of output energy to input energy is higher for biogas than other biofuels, given the same amount of land on which the crops were grown (see figure 3).



Biogas from wheat is more energy- and resource-efficient than ethanol from wheat; biogas from sugar beets is more efficient than ethanol from sugar beets, and so on. The higher energy efficiency of biogas is partially due to how the associated by-products are used. To make liquid biofuel generation more efficient, the by-products can be used to make biogas. This requires biofuel plants to alter their facilities to be able to make such a conversion, which costs energy in its own right, but the alterations only have to be applied once to each facility (Borjesson and Mattiasson, 2008). The advantage of using spoiled produce that is past its expiration date instead of crops like corn or sugar beets is that the crops can be used for food, while spoiled fruits and vegetables do not have much use.
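As a rough sketch of the comparison being made, the figure of merit here is just the energy delivered by the fuel divided by the energy spent growing, transporting, and processing the feedstock, per hectare of cropland. The numbers below are placeholders of my own, not values from Borjesson and Mattiasson; only the ratio calculation itself is the point:

```python
def energy_ratio(output_gj_per_hectare, input_gj_per_hectare):
    """Energy delivered by the fuel divided by the energy spent producing it,
    both normalized per hectare of cropland."""
    return output_gj_per_hectare / input_gj_per_hectare

# Placeholder figures (GJ per hectare) purely to illustrate the comparison;
# see Borjesson and Mattiasson (2008) for measured values.
pathways = {
    "biogas from wheat":  energy_ratio(90, 18),
    "ethanol from wheat": energy_ratio(60, 20),
}

for name, ratio in pathways.items():
    print(f"{name}: {ratio:.1f} GJ out per GJ in")
```

Whatever the real numbers are, "more efficient" in this comparison always means a larger output/input ratio for the same area of cropland.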

Another advantage food waste has over crops is that we do not need to create more growing space to produce our fuel source. The crops have to be planted, but if we use material that would have gone to a landfill, we would actually increase the amount of available land, since landfill space would be reduced. Some land would need to be set aside for building biogas plants though.

Additionally, biogas would help decrease greenhouse gas emissions. First of all, switching from gasoline or diesel powered vehicles to natural gas powered ones would reduce the amount of CO2 released by vehicles. Moreover, processing food waste means we are sending less material to landfills. Capturing methane over landfills is much more difficult and the odors are harder to control than in the biogas generation process. Finally, using manure as a means of introducing the necessary bacteria to break down the fruits and vegetables cuts some of the methane produced by livestock. Of course, burning methane will still add some carbon dioxide to the atmosphere, but less so than other fossil fuels (Borjesson and Mattiasson, 2008) (see figure 4).


But we don’t need organic material to obtain natural gas. We can also extract natural gas from underground. So why bother with biogas? First of all, underground pockets of methane are limited and non-renewable. It is also more difficult to access the methane. A method known as hydraulic fracturing, or hydrofracking, is necessary. Engineers must drill to the natural gas far beneath the surface, and extracting the gas from underground often involves environmentally questionable chemicals. Since we must drill past underground aquifers, there is a chance that the chemicals could leak and contaminate drinking water (see figure 5). Furthermore, since underground extraction changes the pressure underneath localized patches of the surface of the Earth, we increase the possibility of a cave-in. So there is a strong case for moving away from hydrofracking and switching to fermentation to get our methane.



The Economy

While fermenting fruits and veggies to generate biogas appears to hold up environmentally, it will never get anywhere unless we have economic incentives to implement it. Natural gas is already used frequently for heat, electricity, and cooking, so residential, commercial, and industrial buildings that already use natural gas will not have to be modified. The area that would take the most work, and money, to make biogas feasible is the vehicle industry. Some trucks and cars already run on compressed natural gas, but the majority do not. However, various energy companies have already gotten involved with the biogas researchers in Germany and are developing new vehicles to run on natural gas (Fraunhofer, 2012). We will also have to tailor filling stations to biogas-powered cars.

Because there is an added cost from developing the new vehicles and filling stations, biogas would have to be less expensive than gasoline and liquid biofuels. Sugar cane-based ethanol from Brazil costs between $0.25 and $0.50 per liter. Biogas would have to cost $0.05 to $0.15 per equivalent liter to be feasible. In 2008, biogas derived from organic waste and manure cost between $0.45 and $0.55 per equivalent liter, so the price would have had to come down for using biogas in cars to be economically reasonable. Organic waste-derived biogas was still more feasible than crop-based biogas, which cost as much as $0.80 per equivalent liter. However, these calculations did not account for government subsidies. Additionally, the price of oil only has to rise above $70 per barrel for biofuels in general to become economical by comparison (Borjesson and Mattiasson, 2008). As of March 2012, light crude oil costs $106 per barrel (CNNMoney, 2012). If oil prices continue to rise, it is only a matter of time before biofuels, including biogas, become the more economically reasonable choice.
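The break-even arithmetic above can be written out as a quick sanity check.  The prices are the ones quoted from Borjesson and Mattiasson (2008); the function name is just an illustrative label.

```python
# Per-liter prices (USD, equivalent liters) quoted from the 2008 paper.
waste_biogas = (0.45 + 0.55) / 2   # biogas from organic waste and manure
crop_biogas = 0.80                 # crop-based biogas, upper estimate
competitive_ceiling = 0.15         # must cost $0.05-0.15 to compete with ethanol

def is_competitive(price, ceiling=competitive_ceiling):
    """True if the fuel price is low enough to compete with sugar cane ethanol."""
    return price <= ceiling

print(is_competitive(waste_biogas))  # False: the price must fall first
print(waste_biogas < crop_biogas)    # True: waste-derived still beats crop-based
```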

Another economic advantage of biogas is that, while natural gas from fossilized organic materials, oil, and coal is located in a limited number of places around the world, biogas plants could be built almost anywhere (Borjesson and Mattiasson, 2008). Any sort of rotting produce works. Citrus can be used in Southern California, while old corn is used in the Midwest. The process may have to be altered slightly (food waste heavy with citrus fruits proves to be a particular problem, because the acid lowers the pH beyond the bacteria's comfort level and countermeasures must be taken to reduce acidity), but finding the solution for a slightly different mixture of produce will only take a bit more experimenting (Fraunhofer, 2012). As a result of this flexibility, biogas plants can be tailored to complement the local economy.

For example, the University of Illinois at Urbana-Champaign has a large agricultural program. Food waste could be collected from the dining halls; pH levels and temperatures could be adjusted depending on which fruits and vegetables make up the majority of the material; and extra manure from the cows on campus could be mixed with the produce to provide the necessary bacteria. The university could generate some of its own electricity and get plenty of positive publicity from being seen as “green.”

While there is still too much economic momentum behind gasoline for us to switch over to biogas immediately, we should remember that as we deplete our oil resources, gas prices will continue to increase, making alternatives look more inviting than ever. Additionally, it is nearly impossible to extrapolate technology trends over any extended period of time. Biogas facilities may become more efficient, as may natural gas-powered vehicles. And researchers may develop hybrid cars that run on both natural gas and electricity (Borjesson and Mattiasson, 2008).

In the end, turning a profit at a biogas plant may come down to whether the by-products and remaining sludge can be sold as fertilizer or put to some other use. The researchers at Stuttgart seem to have found a use for their by-products, as described earlier. However, if the by-products must be sold or used productively, biogas generation may be constrained to large-scale facilities. Obtaining biogas from biomass may not be feasible on the scale of an individual home if the owners lack the opportunity to put the by-products of the process to good use (Marchiam, 1992).

The Future?

If biogas plants ever become mainstream, we would have to provide proper maintenance of the equipment involved in order for biogas to continue supplying us with energy, environmental benefits, and a potential way to sell our garbage to the grid. Even in areas where the composition of the food waste and sewer sludge or fertilizer is fairly consistent, maintenance employees will need to make sure that the nutrients in the mixture are not changing over time. If the environment changes in such a way that the bacteria fermenting and breaking down the food cannot do their job, we will need to find out where the change came from and how to counteract it (Marchiam, 1992).

It seems possible that, with time, we will separate our food waste like we do our recycling and garbage in order for it to be collected and sent to the correct facilities. This would cut down on material going to landfills and provide the biogas plants with a steady source of fuel. Since many people already spend time composting, it would be a small change to go from collecting food for compost to collecting food for biogas generation.

No matter how nice biogas sounds, though, it alone won’t save us. We will still need to use other sources of fuel to cut back on our greenhouse gas emissions. In the end, because biogas will still emit some amount of carbon dioxide, we may ultimately have to move away from it. For now, however, it may ease the transition from traditional fossil fuels to renewable sources, and it gives us something productive to do with our trash in the meantime.


Works Cited 

Borjesson, P. and Mattiasson, B. (2008). Biogas as a resource-efficient vehicle fuel. Trends in Biotechnology, 26, 7-13. http://dx.doi.org/10.1016/j.tibtech.2007.09.007

CNNMoney. (2012). Commodities. Retrieved from http://money.cnn.com/data/commodities/

Fraunhofer. (2012). Fuel from market waste [Press release]. Retrieved from http://www.fraunhofer.de/en/press/research-news/2012/february/fuel-from-market-waste.html

Macdonald, N. (2008, January 11). Powering up with biogas. The Dominion Post. Retrieved from http://www.lexisnexis.com/lnacui2api/results/docview/docview.do?docLinkInd=true&risb=21_T13955099375&format=GNBFI&sort=BOOLEAN&startDocNo=1&resultsUrlKey=29_T13955099379&cisb=22_T13955099378&treeMax=true&treeWidth=0&csi=256380&docNo=9

Marchiam, U. (1992). Biogas Processes for Sustainable Development. Retrieved from: http://www.fao.org/docrep/T0541E/T0541E00.HTM

Polprasert, C. (2007). Organic Waste Recycling: Technology and Management. Available from: http://books.google.com/books?id=owycqJMjoZoC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false

US Environmental Protection Agency: Office of Research and Development (November 2011). Plan to Study the Potential Impacts of Hydraulic Fracturing on Drinking Water Resources. Retrieved from http://www.epa.gov/hfstudy/HF_Study_Plan_110211_FINAL_508.pdf

Friday, March 9, 2012

Six Word Memoirs

Music? Science? Music? Science? Music? Science.

Thought I knew myself.  Not anymore.

The best laid plans go awry.

Less structure means more future possibilities?

Be excellent to others... and yourself.

Tuesday, February 21, 2012

A Practice Science News Article

This was my first introduction to writing a quick science news article of the kind that might run in the New York Times.  We were supposed to create the article based on a press release and the original paper.  My article was about a small advance toward making quantum computers.  I've already gotten comments from my professor, but more feedback is welcome, from people with and without a science or journalism background.


That’s one small twist for electrons, one ten-second leap for computer technology


Quantum computers would be able to perform 100 billion calculations nearly instantaneously and solve problems in a few months that would take even our fastest computers millions of years.  While scientists have made plenty of theoretical developments regarding quantum computing, advances in engineering such devices have been slow in coming.  But an international team of researchers is now one step closer to a physical quantum computer after developing a new method to control certain properties of electrons.


Scientists Stephen Lyon and Alexei Tyryshkin controlled the electrons in a bar of silicon, cooled to just above absolute zero, by sending pulses of microwaves across the bar, thereby arranging the electrons in an ordered manner.  The orderliness of the electrons is what allows quantum computers to work.  As long as the electrons remain ordered, the computer has access to the information stored in the particles.

Previously, Lyon’s group was able to keep the electrons ordered for about 60 milliseconds.  Using their new method, they were able to maintain order for 10 seconds.  Other researchers have kept electrons ordered for at least hundreds of seconds, but their techniques do not use silicon, which will put them at a disadvantage when it comes time to manufacture the computer parts.

The property that allows information to be stored in electrons is the same property that Lyon and Tyryshkin must control.  This property is known as “spin” and it is a fundamental characteristic of electrons.  Spin can be thought of as the property of electrons that allows each electron to create a small magnetic field.  An electron’s spin can either be “up” or “down”, similar to how standard binary computers use either a 0 or a 1 to encode information.  An electron’s spin can also be in a “superposition”, in which its spin is both up and down.  This superposition has no analog on a macroscopic scale and is what gives quantum computers their incredible computing ability.
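The up/down/superposition distinction can be sketched numerically.  This is an illustrative aside, not part of Lyon and Tyryshkin's work: a qubit state written as a two-component vector whose squared amplitudes are measurement probabilities.

```python
# A minimal sketch of spin states as two-component vectors.
import math

spin_up = [1, 0]    # analogous to a classical 1
spin_down = [0, 1]  # analogous to a classical 0

# An equal superposition: both amplitudes nonzero at once,
# something a classical bit cannot do.
superposition = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# Squared amplitudes give the probabilities of measuring up or down,
# and they must sum to 1.
prob_up = superposition[0] ** 2
prob_down = superposition[1] ** 2
print(round(prob_up + prob_down, 10))  # 1.0
```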

The microwaves that the group pulsed across the silicon allowed the researchers to control the electrons’ spin.  “The first pulse twists them, the second reverses them, and at some point the sample itself produces a microwave pulse, and we call that the echo,” Lyon stated in a press release in January.  “By doing the second pulse, getting everything to reverse, we get the electrons into phase.”  While the electrons are “in phase,” the spins of the electrons are coordinated, and the information encoded in the spin of the electrons is accessible for calculations.

Once the spin of the electrons becomes uncoordinated, the information is no longer accessible.  Therefore, developments that allow scientists control over the electrons’ spin for increasingly longer periods of time are absolutely necessary before a quantum computer can be built.  As Tyryshkin puts it, “The bottom line is, you want [coherence] as long as possible.”

Magnetism is one of the ways the electrons can become uncoordinated.  One of the reasons Lyon and Tyryshkin were successful in increasing the time the electrons’ spin remained coherent was because they were able to reduce the magnetism interfering with the electrons in the silicon bar.  Some versions of silicon will produce their own magnetic fields, depending on how the atoms are structured, while some are magnetically silent.  The team had to find the version of silicon that would produce the least magnetism.  This happened to be silicon-28.

Lyon and Tyryshkin purified the silicon so it contained very few contaminants, such as silicon-29, which produces plenty of magnetism.  In fact, they purified their sample so well that they worried it would not respond to the microwave pulses.  The response the silicon puts out under the microwaves is how the researchers know the electrons’ spins have become aligned.  To make sure that the silicon would produce a response to the microwaves, they had to introduce some phosphorus to the sample, a process that took extreme precision.

“A lot of the work boils down to getting the phosphorous far enough apart,” Lyon said.  Too much and the magnetism that will disorder the electrons returns; too little and the sample remains unresponsive to the microwaves and the scientists have no way of knowing how the electrons are reacting.  “It has taken quite a bit of work to get to this point,” Lyon said.  “Nine years of refining measurements and materials.”  Michael Thewalt, a physics professor at Simon Fraser University and Kohei Itoh, a professor at Keio University, helped obtain the necessary amounts of silicon.

The temperature of the silicon also plays a part in reducing magnetic noise.  The silicon was cooled to 2 kelvin (a kelvin is a unit of temperature, like a degree Fahrenheit or Celsius).  For comparison, the vacuum of space is about 3 kelvin, and at 0 kelvin, or absolute zero, all thermal motion vanishes.  The low temperature reduces magnetic activity, which helps keep the electrons ordered, and their information available, longer.

While Lyon and his team have made substantial progress toward making quantum computing a reality by twisting and turning electrons at will (their work was published in Nature Materials in December 2011), extending electron coherence time is only one obstacle delaying the creation of a quantum computer.  Researchers still need to find a way to increase the amount of information that the computer can handle at one time.

Currently, Lyon and Tyryshkin have the equivalent of one “qubit” of information.  A qubit is the smallest unit of information a quantum computer could deal with, similar to a 0 or a 1 for a regular computer.  Tyryshkin explained that future computers will need to control and access many more qubits.  “Right now, we are using one,” he said.  “If we could come up with a thousand, that would be a very interesting machine.”

Scientists have yet to determine how many qubits are necessary and how long the electrons would have to remain coordinated in order to create a functional quantum computer.  But when all the pieces of research and engineering finally come together, it will signal a computing revolution.

Wednesday, February 15, 2012

Detecting Planets

I wrote this as an extended definition or explanation of some scientific concept.  Specifically, I explain some of the methods we use to detect extra-solar planets.


Searching for a Needle in the Cosmos
We have long searched for a planet as hospitable to life as the Earth.  Such a planet would have to have a solid surface and be large enough to maintain its orbit, but not so large as to crush any life with its gravity.  The planet would also have to be at the correct distance from its host star to stay at a reasonable temperature, and it would need a sufficient atmosphere made of non-toxic gases.  We have evaluated the ability of the planets within our solar system to host life.  Some have potential, but most fall short of our requirements.  This is not a problem for astronomers, though, because there are billions and billions of planets hundreds of light years away, happily orbiting stars that are not our sun.  Any one of them could have the traits needed to host life.  Not wanting to leave these extra-solar planets uninvestigated, astronomers have developed techniques to detect planets outside our solar system.  These techniques not only detect planets, but also tell us whether said planets have the characteristics necessary for supporting life.  How can this be?  How are astronomers able to detect these pinpoints of rock and gas in our ever-expanding universe, these needles in a cosmological haystack?  And how can they determine if these planets can host life?
            One of the most common methods astronomers use to detect and measure the mass of extra-solar planets is the transit method, in which the brightness of the planet’s host star is consistently measured.  The transit method only works for star-planet pairs that are oriented in such a way that the planet will pass in front of and obscure part of the star as seen from Earth.  This “edge-on” view is in contrast with an orientation where the planet looks like it is tracing out a circle around the star.  Astronomers will record and graph the brightness of the star over some amount of time.  Typically, the graph will show some variation in brightness.
            Many factors may account for the variability of a star’s brightness.  The star may be producing stellar spots, which are equivalent to sun spots, dark patches of low temperatures in comparison to the rest of the star.  The star may be rotating or it may occasionally be obscured with dust.  The way astronomers deduce what is responsible for the variability in the brightness is by looking for a particular pattern in the changes. 
What sort of pattern should we expect to see if a planet is causing the changes in brightness?  Try imagining a planet off to the side of a star.  We would measure the full, regular brightness of the star.  This level of brightness would continue until the planet started to transit, or pass, in front of the star.  Then the detected brightness of the star would continually decrease until the entire planet overlapped the star.  The measured brightness would stay at a minimum value until the leading edge of the planet reached the edge of the star.  At that point, the brightness level would gradually increase back to its original, maximum value, as more of the star was revealed as the planet moved past it.  As the planet continued its orbit, it would eventually pass behind the star.  We would then measure a small decrease in brightness since the light reflected by the planet from the star would no longer reach Earth.  Then, as the planet came around the star again, the whole process would repeat itself.  So a plot tracking the brightness of a star would indicate the presence of a planet if it showed a large dip, then an increase back to a maximum value, then a small dip, then an increase back to the maximum value again, and the pattern repeated.
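The pattern just described can be written as a toy light curve.  The dip depths and phase ranges below are arbitrary illustration, not real measurements.

```python
# A toy transit light curve: full brightness, a deep dip while the planet
# crosses the star, recovery, then a shallow dip when the planet passes
# behind the star and its reflected light is lost.
def brightness(phase):
    """Relative brightness at orbital phase in [0, 1)."""
    if 0.00 <= phase < 0.05:
        return 0.99   # planet in front of the star: deep dip
    if 0.50 <= phase < 0.55:
        return 0.999  # planet behind the star: shallow dip
    return 1.0        # otherwise: full, unobscured brightness

curve = [brightness(i / 100) for i in range(100)]
print(min(curve))  # the deepest point, during the transit
```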
            The transit method for detecting extra-solar planets helps astronomers figure out how large a planet is, which, combined with information gathered from other sources, can help them infer what the planet is made of and how strong gravity is there.  From there, the astronomers can hypothesize about whether such a planet would be hospitable to life.  The depth of the large dip, measured as a fraction of the star’s full brightness, gives astronomers the ratio of the cross-sectional areas of the planet and the star.  If the radius of the star is already known, then astronomers can calculate the radius of the planet.  By considering the densities of planets with similar radii, astronomers can then estimate the density, and therefore the mass, of the planet in question.  Knowing how large a planet is can then tell them whether it is more likely to be gaseous or solid.  Currently, astrobiologists are more interested in solid planets, because they are more likely to be similar to Earth, and so more likely to be hospitable to life.
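As a rough worked example with invented numbers: since the transit depth equals the square of the planet-to-star radius ratio, the planet's radius follows from the star's radius and the measured dip.

```python
# depth = (R_planet / R_star)**2, so R_planet = R_star * sqrt(depth).
# The numbers here are illustrative, not from a real observation.
import math

star_radius_km = 696_000   # roughly the sun's radius
transit_depth = 0.0001     # brightness drops by 0.01% during transit

planet_radius_km = star_radius_km * math.sqrt(transit_depth)
print(round(planet_radius_km))  # 6960 km, close to Earth-sized
```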
            We need to know more than how large a planet is before we can conclude if it would be hospitable to life.  Life, at least as we know it, can only survive within a certain range of temperatures.  To estimate the temperature range of a planet, astronomers must know how large the host star is and how far away from the star the planet is. The “radial velocities” method of detecting extra-solar planets, in which the gravitational effect of the planet on the star is measured, helps inform astronomers about the distance between a detected planet and the star it orbits.  Methods used to calculate the mass of a star are a subject for a different time.
            We often think of a planet orbiting the center of a star, but, in reality, the star and the planet are orbiting a common point in space.  Consider a see-saw.  If two five-year-olds of equal mass are playing on the see-saw, they can sit equally far from the center to remain balanced.  If we replace one of the five-year-olds with a football player, who is presumably heavier, then the football player must sit closer to the center of the see-saw to maintain balance.  The center of the see-saw happens to be where the “center of mass” of the five-year-old and the football player is located.  Now imagine the see-saw is taken away, but the five-year-old and the football player remain suspended in the air.  The center of mass of the two is still where it was when the see-saw was there.  If we moved the football player or the five-year-old, the center of mass would also move.  The center of mass of an object, or a group of objects, is a point in space at which all the mass in the system is balanced.  The center of mass may not necessarily be made up of physical matter.
Star and planet systems work pretty much the same way balancing a see-saw does, only instead of moving up and down while the center of mass stays in place, the star and planet orbit the center of mass while it stays in place.  A large difference between the mass of the planet and the mass of the star means that the center of mass will be closer to the star.  As the planet gets more massive, the center of mass moves away from the star.  This does not mean that the more massive planets are closer to the star; just that they have a greater effect on the location of the point that the star and the planet orbit around than smaller planets.  For example, Mercury is closer to the sun than Jupiter, but Jupiter is far more massive.  Because the difference in mass between the sun and Jupiter is smaller than the difference between the sun and Mercury, Jupiter affects the sun’s motion more than Mercury does.  A larger distance between the center of mass and a star makes the difference between the star revolving in place and the star actually moving in a small elliptical orbit.
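The see-saw analogy reduces to one weighted average.  The masses and positions below are made up for illustration.

```python
# Made-up masses (kg) and positions (m) along a 4 m see-saw.
def center_of_mass(m1, x1, m2, x2):
    """Balance point of mass m1 at position x1 and mass m2 at position x2."""
    return (m1 * x1 + m2 * x2) / (m1 + m2)

# Two equal five-year-olds balance exactly in the middle.
print(center_of_mass(20, 0.0, 20, 4.0))   # 2.0

# Swap one child for a 100 kg football player: the balance point shifts
# toward the heavier mass, just as the center of mass of a star-planet
# system sits closer to the star.
print(center_of_mass(20, 0.0, 100, 4.0))  # ~3.33
```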
            How do astronomers take advantage of the fact that a planet can affect a star’s motion by shifting the center of mass away from the star?  If a planet is large enough to cause a star to move in a noticeable orbit around the center of mass, the star is sometimes moving away from Earth and sometimes moving toward Earth (assuming, once again, that we have an edge-on perspective of the system).  As the star moves away from the Earth, the radiation emitted by the star has a longer wavelength, and as the star is moving toward the Earth, the radiation is compressed and has a shorter wavelength.  Astronomers measure the changes in the wavelength of the radiation emitted by the star.  If the changes in wavelength are large, astronomers know that the star must be traveling in a bigger orbit.  After calculating how large the orbit of the star is based on measurements of the changes in the wavelength, astronomers can estimate how large a planet would need to be in order to have an effect of the calculated size on the orbit of the star, assuming they have used other methods to find the mass of the star.
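The wavelength-to-velocity step uses the non-relativistic Doppler formula, which is a good approximation at these small speeds.  The wavelengths below are invented for illustration.

```python
# v = c * (observed_wavelength - rest_wavelength) / rest_wavelength
# A positive result means the star is moving away from us.
C = 299_792_458  # speed of light, m/s

def radial_velocity(rest_nm, observed_nm):
    """Line-of-sight velocity from a shifted spectral line (wavelengths in nm)."""
    return C * (observed_nm - rest_nm) / rest_nm

# A shift of 0.00001 nm on a 500 nm line corresponds to about 6 m/s,
# comparable to the meter-per-second precision mentioned later.
print(round(radial_velocity(500.0, 500.00001), 1))
```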
Once the astronomers know how large a planet is, they can then find how far away the planet is from the star through Newton’s law of universal gravitation.  For every planet-star pair there is something called the “Goldilocks zone”, the range of distances from the star in which the planet would be at a temperature hospitable to life.  Astronomers and astrobiologists are hoping that their calculations will lead them to find a small (Earth-sized), rocky planet within its Goldilocks zone, as this planet would have a higher probability of being hospitable to life than other kinds of planets.
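The step from Newton's law of universal gravitation to an orbital distance is Kepler's third law.  A minimal sketch, using approximate physical constants and the sun's mass as an example star:

```python
# a**3 = G * M_star * T**2 / (4 * pi**2), solving for the orbital distance a.
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2 (approximate)
M_SUN = 1.989e30  # solar mass, kg (approximate)
YEAR = 365.25 * 24 * 3600  # one Earth year, in seconds
AU = 1.496e11     # Earth-sun distance, m

def orbital_distance(star_mass_kg, period_s):
    """Semi-major axis in meters for a planet with the given orbital period."""
    return (G * star_mass_kg * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

a = orbital_distance(M_SUN, YEAR)
print(round(a / AU, 2))  # ~1.0: an Earth-like year implies an Earth-like distance
```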
            The main constraint on these methods is that it is easier to find large planets, ones a few times the size of Jupiter.  Most of these tend to be gaseous, which isn’t quite what we are looking for.  However, some rocky planets with the mass of several Earths have been discovered.  These discoveries should motivate improvement of the instruments used to uncover extra-solar planets.  Currently, astronomers can detect changes in a star’s velocity as small as one meter per second, well below the roughly ten meters per second by which Jupiter perturbs the sun’s motion.  Though it may take years of research, it is within our abilities to create instruments able to detect Earth-sized extra-solar planets.  Once we can do that, we may eventually discover a planet as welcoming to life as Earth.  Such a discovery would be as incredible as finding a diamond-encrusted needle glinting amidst a pile of dust and hay.

Author’s Note:  In the course of writing this paper, astronomers published results saying they had detected planets Earth-sized and smaller through the Kepler mission.

Sunday, February 5, 2012

How I Found Science

My introductory astronomy class has been over for a while now, but I am currently taking “Writing about Science”, a class designed to teach the strategies of writing about science for non-scientists.  I’ve enjoyed the assignments so far, and while I have no idea if I’ll keep this blog going, I thought I’d share some of the stories I’ve written.  This first one is about how I came to be intrigued by science. 



The Penny and the Feather
I was in kindergarten when I met a real psychic for the first time.  I’m not talking about those frauds who tell you to watch out for earthquakes in California, especially in the two weeks preceding and following a full moon.  I mean a person who could actually predict the future accurately.
It was the day of a school assembly and we were all crowded into the gym.  I sat on the hard wood floor, near the front of the room.  Just a few rows away was a table with all sorts of contraptions on it.  I don’t remember most of what was on there, but I do remember a grey cylinder with some black tubes connecting to a clear chamber.  There was a woman standing behind the table.  She eventually got our attention and the assembly began.
There were probably plenty of standard demonstrations of how chemicals can change color when mixed together and how light refracts and splits into a rainbow when it goes through a prism.  Just as I don’t remember most of the equipment that was on the table, I don’t remember most of the experiments that the woman completed.  Except for one.  I was happily – complacently - watching the assembly when the woman said the most ridiculous thing I had ever heard.  She said that she would remove all the air in the clear chamber by using the grey cylinder to pump it out.  This would create a vacuum in the clear chamber.  Next, she would allow a penny and a feather to start falling inside the chamber at the same time.  She then told us, with a completely straight face, that the penny and the feather would hit the bottom of the chamber at the same time.
I was incredulous!  Everybody knew that the feather would slowly float down and land after the penny did.  How could this woman claim such a thing?  “Prove it,” I thought to myself.
The woman behind the table flipped a switch and there was a loud buzzing noise interspersed with glugging sounds as the air was sucked out of the clear chamber.  Once the air was gone, she stopped the pump and set up the feather and the penny at the top of the chamber.  After a count of three, she let go and the most spectacular thing happened.  The penny and the feather hit the bottom of the chamber at the same time!  The woman had been right!  She managed to back up her claim with physical evidence and I had no choice but to change my understanding of the world.  This was the power of science.  The power to predict the future accurately.  The power to run experiments over and over again and always find the same result or else uncover a flaw in our current thoughts.  The power to use logic to solidify one’s beliefs and understanding of the world.  This power was intoxicating.  And I quickly became an addict.
Once I realized I had been wrong about one thing, I had to wonder what else I was mistaken about.  I started testing claims that I had heard, but never sought evidence for.  That penny and feather ended up changing the way I experienced the world and I was surprisingly ok with this. In fact, I greatly welcomed and enjoyed science’s intrusion into my life.  Of course, the hovercraft I was allowed to play with after the assembly may also have had something to do with my favorable impression of science.  Whatever the cause, I was motivated to develop the power held by the woman from the assembly. Perhaps soon I, too, will be able to tell the future.