Channel: As Many Exceptions As Rules

Life is Elemental

Biology concepts – elements, biomolecules, biochemistry, trace elements, selenocysteine, stop codon

The blue whale is often 30 m (100 ft) long and can reach a mass of more than 175 tons (160,000 kg). As such, it is the largest animal ever to grace the face of the Earth. Yet it shares its intricate biochemistry with even the smallest organisms on the planet. The commonality is due to the chemical elements that make up the biomolecules of all living things. A few elements are used in most biological compounds, and many elements are used in just a few. This is today’s story.
A blue whale is the largest animal on the face of the Earth - ever. You could swing a tennis racquet while standing inside a chamber of its heart (left). Built very differently, the watermeal plant is the size of a grain of salt. Comparing the two organisms at the macrolevel is like comparing lug nuts and twinkies, or pink and Darth Vader. But looks are often deceiving.

At the genetic level, about 50% of the genes from whales and watermeal are exactly the same, coding for the same structural proteins or enzymes. At the biochemical level there’s even more similarity; even if the gene products are different, most of the processes that huge whales and tiny flowering plants carry out are exactly the same.

They are so similar for one overarching reason, and that reason points out an amazing commonality. Both the world’s largest animal and the world’s smallest flower come from a common ancestor. It may have been many moons since their family had that argument at the summer picnic that drove them apart forever, but they are still related nonetheless.

And since they have a common ancestor, they are going to harbor many of the same traits as that ancestor – including the ways they carry out the reactions and functions in their cells. The totality of the molecules that are present in an organism and how they interact to perform different jobs is termed an organism’s biochemistry.
Biochemistry is the set of chemical reactions that take place in living organisms, like glycolysis and the citric acid cycle shown above. Though different organisms may have subtle differences in the proteins, or may even eliminate some of the steps, the overall pathways are conserved across all life on Earth. Look at the molecules: the elements they are built from are abundant and common to all life. This is evidence of evolution, and it is why biochemistry is shared so completely.

Biochemistry refers to how information flows through organisms via biochemical signaling and how chemical energy flows through cells via metabolism. All life on Earth uses basically the same biochemistry since we all came from a common ancestor – to the best of our knowledge.

Organisms on Earth have similar biochemistry in part because they use the same types of macromolecules. Life as we know it is based on the interactions (biochemistry) of lipids, carbohydrates, proteins, and nucleic acids. Each of these macromolecules is amazing and contains many exceptions, so we will deal with each in the next few posts.

Whales and watermeal (all life, for that matter) are organic (Greek, pertaining to an organ), since their biochemistry is based on carbon, but there are many exceptions to our important molecules being organic. What is the most abundant molecule in living things? Water. Is water organic? No.

What creates the electrochemical gradient that fires our neurons? Sodium, chloride, and potassium. Are they organic? No. So the next time someone makes a joke about being a carbon-based life form, you can say you are just partly organic, and then let them ponder whether you are some kind of cyborg.

So what do the macromolecules have in common that is related to the biochemistry of life? They are made up of the same chemical elements. In fact, almost all biomolecules are made up of just six different elements: carbon (C), hydrogen (H), oxygen (O), nitrogen (N), phosphorus (P), and sulfur (S).

Carbon’s importance lies in its ability to bond to many different elements, and because it can accept electrons in a bond or donate electrons to a bond. Carbon can bond to four different elements at the same time. This increases the possibility of complexity and is one reason our molecules are based on carbon. The situations are similar for oxygen, sulfur, nitrogen, and phosphorus.

Sulfur is one of the less abundant elements, used mostly as a structural element in proteins, although it shows up in bone and other skeletal materials as well. Still, the average adult male (80 kg/175 lb) contains about 160 grams of sulfur; this would be about a salt shaker’s worth.

Carbon is the basis of life on Earth because of its ability to form single, double, and triple bonds, and because it can bond with so many different elements. On the left is the top right corner of the periodic table, showing carbon and silicon in the same column (family). Elements in the same family have similar chemical properties, so some scientists believe that life on other planets could be based on silicon. This is how we got the look and feel of the alien in the Sigourney Weaver movies of the same name. But silicon is more abundant on Earth than carbon, so why don’t we look like the alien?
Only two of the twenty common amino acids that make up proteins contain sulfur (methionine and cysteine). But don’t minimize its importance just because it is present in only two of the protein building blocks. The sulfurs in proteins often interact with one another, determining the protein’s three-dimensional structure. And for proteins, 3-D structure is everything - their function follows their form.

Sulfur is important in other ways as well. Some bacteria substitute sulfur (in the form of hydrogen sulfide) for water in the process of photosynthesis. Other bacteria and archaea use sulfur instead of oxygen as the final electron acceptor in cellular metabolism. This is one way organisms can be anaerobic (live without oxygen).

In a more bizarre example, sea squirts use sulfuric acid (H2SO4) in their stomachs instead of hydrochloric acid – just how they don’t digest themselves is a mystery. Just about every element has some off label uses; we could find weird uses for C, H, O, N, and P as well. Heck, nitric oxide (NO) works in systems as diverse as immune functions and vasodilation (think Viagra).

So these are the “elements of life” – right? Well, yes and no, you can’t survive without them, but you also can’t survive with only them. There are at least 24 different elements that are required for some forms of life. Two dozen exceptions to the elements of life rule – sounds like an area ripe for amazing stories.

Some of these exceptions are called trace elements, needed in only small quantities in various organisms. It may be difficult to define “trace,” since some elements are needed in only small quantities in some organisms, but in great quantities (or not at all) in others. Take copper (Cu) for instance. Humans use it for some enzymatic reactions and need little, but mollusks use a copper-based protein (hemocyanin) to carry oxygen in their blood (like we use iron).

Let’s start with a list of the exceptions; a list will allow you to do some investigating on your own to see how they are used in biologic systems.

Aluminum (Al)        0.0735 g
Arsenic (As)              0.00408 g
Tyrian Purple, or royal purple, is a dye made from the bodies of several mollusks from the eastern Mediterranean. The spiny dye snail (left) is one such mollusk that produces the purple dye from its hypobranchial mucus glands. The dye is based on a bromine-containing compound that the snails use to protect their eggs from microbial predators (right) and for hunting. The dye was prized because instead of fading with time and sun exposure, it actually became brighter. Used as early as 1500 BCE by the Phoenicians, Tyrian Purple was worth its weight in silver for two thousand years.
Boron (B)                   0.0572 g
Bromine (Br)            0.237 g
Cadmium (Cd)          0.0572 g
Calcium (Ca)             1142.4 g
Chlorine (Cl)             98.06 g
Chromium (Cr)        0.00245 g
Cobalt (Co)                 0.00163 g
Copper (Cu)               0.0817 g
Fluorine (F)               3.023 g
Gold (Au)                     0.00817 g
Iodine (I)                     0.0163 g
Iron (Fe)                      4.9 g
Magnesium (Mg)      22.06 g
Manganese (Mn)       0.0163 g
Molybdenum (Mo)   0.00812 g
Nickel (Ni)                   0.00817 g
Potassium (K)           163.44 g
Selenium (Se)             0.00408 g
Silicon (Si)                   21.24 g
Sodium (Na)               114.4 g
Tin (Sn)                        0.0163 g
Tungsten (W)            no level given for humans  
Vanadium (V)            0.00245 g
Zinc (Zn)                      2.696 g

You can see that for each element I gave a mass in grams. This corresponds to the amount that can be found in an 80 kg (175 lb) human male. But don’t confuse the mass found with the mass needed.
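Since the masses are given for one reference body size, the list can be turned into a rough estimator for other body weights. A minimal sketch (my own illustration, using a subset of values transcribed from the table above; the linear-scaling-with-body-mass assumption is only approximate):

```python
# Element masses (grams) in a reference 80 kg human male, transcribed
# from the list above (a subset, for brevity).
REFERENCE_MASS_KG = 80
ELEMENT_GRAMS = {
    "Ca": 1142.4, "K": 163.44, "Na": 114.4, "Cl": 98.06,
    "Mg": 22.06, "Fe": 4.9, "Zn": 2.696, "Cu": 0.0817,
    "Se": 0.00408, "Co": 0.00163,
}

def grams_in_body(element: str, body_mass_kg: float) -> float:
    """Estimate grams of an element, assuming simple linear scaling."""
    return ELEMENT_GRAMS[element] / REFERENCE_MASS_KG * body_mass_kg

print(round(grams_in_body("Ca", 60), 1))   # prints 856.8
print(round(grams_in_body("Se", 100), 5))  # prints 0.0051
```

Even a 100 kg person carries only about five thousandths of a gram of selenium, which makes its essential role below all the more striking.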

Barium (Ba) isn’t used in any known biologic system, yet you have some in your body. It is the 14th most abundant element in the Earth’s crust, so it can enter the food chain via herbivores or decomposers and then find its way up to us. You probably have a couple hundredths of a gram in you right now.

Bromine (Br) is a crucial element for algae and other marine creatures, but as far as we know, mammals don’t need any. In fact, this brings up an interesting thing about chemistry. Chloride is integral for human life; just about anything that requires an electrochemical gradient will use chloride ions, to say nothing of stomach acid (HCl).

However, chlorine gas is a chemical weapon that will burn out your lungs (and did in WWI). Bromine gas is very similar to chlorine gas - so elements whose dissolved ions are useful can be lethal as gases.

How about something supposedly inert, like gold (Au)? We use it for jewelry because it is rare and supposedly it doesn’t cause allergy (wrong - see this previous post). But some bacteria have an enzyme with gold placed in the active center. Gold is rare, so why would it be used for crucial biology? Most elements used in biology are far more common.

Finally, we should describe a couple of the uses of non-standard elements:

Selenium in proteins is important for stopping damage from oxygen, but in case you don’t think that is important enough, how about insulin function? From the cartoon above, you can see that selenoprotein function affects insulin receptor substrates (IRS) that in turn control DNA function, cell survival (Akt), and carbohydrate management.
Selenium is a rare element, being only the 60th most common element in the Earth’s crust. Yet, without 0.00408 grams of selenium on board, a human is only so much worm food. Selenium is only essential for mammals and some higher plants, but it performs a unique role in those organisms.

In a few proteins, particularly glutathione peroxidase, selenium will take the place of sulfur in certain cysteine amino acids. Selenocysteine is an amazing exception because it is not coded for by the standard genetic code! Instead, the stop codon UGA (a three-nucleotide run that normally calls for protein production to stop) is modified to become a selenocysteine-coding codon.
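The recoding logic can be sketched as a toy translator (my own illustration, not real ribosome machinery; the codon table is truncated, and the `secis_present` flag stands in for the downstream signal that triggers the switch):

```python
# Toy sketch of selenocysteine recoding. UGA is normally a stop codon,
# but when the selenocysteine insertion signal is present, the cell
# reads UGA as selenocysteine (one-letter code U).
CODONS = {"AUG": "M", "UGU": "C", "UGC": "C", "GGU": "G",
          "UGA": "*", "UAA": "*", "UAG": "*"}  # truncated codon table

def translate(mrna: str, secis_present: bool = False) -> str:
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if codon == "UGA" and secis_present:
            protein.append("U")        # recoded: insert selenocysteine
            continue
        aa = CODONS.get(codon, "X")
        if aa == "*":                  # genuine stop codon
            break
        protein.append(aa)
    return "".join(protein)

print(translate("AUGUGUUGAGGU"))                      # prints MC
print(translate("AUGUGUUGAGGU", secis_present=True))  # prints MCUG
```

The same message yields a truncated peptide or a longer, selenocysteine-containing one depending on context, which is exactly the kind of dual behavior described for these genes.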

The selenocysteine amino acid changes the shape of the protein, and is found to be the active site for proteins such as glutathione peroxidase and glutathione S-transferase. These enzymes are crucial for cellular neutralization of reactive oxygen molecules that do damage by reacting with just about any other cellular biomolecule.

So selenocysteine is an endogenous biomolecule that is important for protecting our bodies – as important as the antibiotics we use from other organisms. But a 2013 study shows that some antibiotics (doxycycline, chloramphenicol, G418) actually interfere with the production of selenocysteine proteins by inhibiting the modification of the UGA codon. In many cases, the amino acid arginine is inserted instead of selenocysteine, reducing the functionality of the enzymes. Yet another reason to not overprescribe antibiotics.

One last exception - silicon is important for many grasses. Remember, this is silicon, the element; not silicone, the polymer used in breast implants and caulk; and not silica, the mineral SiO2. Silicon is taken up by grasses of many types: crops, weeds, and water plants (although silicon in grasses may take the form of silica).

Silicon (top) is an element that is used in many ways, including
in computer chips. Silica is a combination of silicon and oxygen
(middle) which is part of many products as well, including the
lightest material on Earth, aerogel, used in NASA projects.
Silicone is a rubbery material (bottom) that is used in caulks and
in many other things, including creepy movie prosthetics.
In some grasses, the inclusion of silicon makes them less likely to be victims of herbivory (being grazed on by herbivores). Herbivores avoid high silica-containing grasses because they aren’t digested well. A 2008 study showed that this reduced digestibility is related to silicon-mediated reduction in leaf breakdown through chewing and chemical digestion.

Another protective function of silicon in grasses was illustrated by a 2013 study. In halophytic (salt-loving) grasses that live on seashores, increased silicon uptake resulted in increased nutrient mineral uptake, and increased transpiration, the crucial process for water movement through the plant.

In addition, these plants have better salt tolerance in the presence of increased silicon, even though they already have specific mechanisms for reducing the damage that could be induced by such high salt concentrations. Silicon reduced the amount of sodium found in the saltwater grasses. Pretty important for an element that is considered non-essential.

Next week, let’s start to look at the biomolecules made from C, H, O, N, P, and S. Proteins are macromolecules made up of amino acids, and amino acids are exceptional.

Tobe R, Naranjo-Suarez S, Everley RA, Carlson BA, Turanov AA, Tsuji PA, Yoo MH, Gygi SP, Gladyshev VN, & Hatfield DL (2013). High error rates in selenocysteine insertion in mammalian cells treated with the antibiotic doxycycline, chloramphenicol, or geneticin. The Journal of biological chemistry, 288 (21), 14709-15 PMID: 23589299
 
Mateos-Naranjo E, Andrades-Moreno L, & Davy AJ (2013). Silicon alleviates deleterious effects of high salinity on the halophytic grass Spartina densiflora. Plant physiology and biochemistry : PPB / Societe francaise de physiologie vegetale, 63, 115-21 PMID: 23257076

For more information or classroom activities, see:

Elements of life - 

Trace elements in diet –

Trace elements in plants –

What is biochemistry –

Sulfur –

Bromine –

Selenium/selenocysteine –

Silicon based life –



So Many From So Few

Biology concepts – protein, amino acids, non-standard amino acids, peptide bond


Severe dietary protein deficiency leads to distinct symptoms, and if not resolved, death. Called kwashiorkor (a Ghanaian word meaning “disease from the second born”), the deficiency leads to changes in osmotic potential in the body’s cells as compared to the blood. Hypoalbuminemia (low levels of the blood protein albumin) leads to fluid leaving the vessels and accumulating in the abdomen, called ascites. This often occurs when infants stop nursing (like when a second child is born); they take in enough calories but not enough protein.
Heterotrophic organisms, including us humans, must consume protein in order to survive. Meat is a great source, by far the best protein source per unit mass and the best for obtaining necessary protein subunits (amino acids). If you look at complete protein sources compared to caloric intake, four of the top five foods are: turkey/chicken; fish; pork chops; and lean beef.

Tofu comes in sixth and soybeans are seventh. This is why humans have sharp canine teeth – we're meat eaters. You can live happily (well, somewhat happily) as a vegetarian; you just have to work much harder at it.

So why is protein so important? How about, because it is one of the four major biomolecules and without it you die a horrible death? Sounds like a good reason to me.

Proteins reside in every cell of every living organism, from prokaryotes to your favorite uncle. There isn’t a job in a cell that proteins don’t have their hands in; proteins even perform numerous tasks at the extracellular level. Heck, that spider web hanging from your dusty Stairmaster is made of protein!

From prokaryotes to spiny echidnas to rosebushes, let’s look where proteins are involved in life. Proteins provide the structure from which cells hold their shape and onto which they build a membrane. Proteins do the talking, providing chemical signals and ways to sense chemical signals.

Proteins do the dirty work; as enzymes they put molecules together, cut them apart, and change their parts around. And most times, they make these reactions happen faster than they would otherwise and without being used up in the process.


Enzymes are specific for a very few molecules (called substrates). Enzymes have a particular shape, and this allows the correct substrate to bind and be acted on; this is called the lock and key model. Notice that the enzyme itself is not altered by the reaction, so it can work again on another substrate molecule. However, there are exceptions – suicide enzymes are inactivated by their own action, so they only work once.
Proteins allow for movement, like the contractile proteins in your muscles or the proteins that make up flagella and cilia. Proteins even act as defenders of the cell, as antibodies and myriad other immune molecules.

A typical cell may contain 10 billion protein molecules. However, not every cell has the same proteins. Many proteins are necessary for every cell, but others have specialized functions needed in only some cells. The exception is unicellular organisms. Their one cell must be able to produce every kind of protein they might ever need.

Space is at a premium, so cells can’t waste room on proteins that aren’t needed right now. Therefore, making protein must be efficient, tightly regulated, and fast. Over 2000 new protein molecules are made every second in most cells, while some proteins exist only to destroy unneeded or old proteins.
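Those two figures invite a quick back-of-envelope check (my own arithmetic, not a number from the post): at that synthesis rate, how long would one cell take to build its entire complement of proteins from scratch?

```python
# Back-of-envelope arithmetic using the figures quoted above. This
# ignores protein degradation and the huge variation between cell
# types; it's only meant to give a feel for the scale.
proteins_per_cell = 10_000_000_000  # ~10 billion protein molecules
synthesis_rate = 2_000              # new proteins made per second

seconds = proteins_per_cell / synthesis_rate
days = seconds / 86_400             # 86,400 seconds per day
print(int(seconds))    # prints 5000000
print(round(days, 1))  # prints 57.9
```

Roughly two months at face value; real cells, of course, synthesize and degrade proteins continuously rather than building the whole set in one pass.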

Humans can make about 2 million different proteins, but we only have about 25,000 genes that code for them. We accomplish this by having some genes produce many different proteins, just by changing the parts of the gene used. These alternative splice variant proteins may have different functions even though they come from the same gene. For example, the cSlo gene is required for hearing, and each one of its 576 different splice variants is responsible for sensing a different frequency. Biology is just so dang efficient.
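The combinatorics behind splice variants is simple multiplication: if each exon “slot” offers several alternatives, the number of possible mRNAs is the product of the choices. A small sketch (the exon map below is hypothetical, not the real cSlo layout):

```python
from itertools import product

# Hypothetical exon map: each "slot" lists the alternative exons the
# spliceosome can choose (illustrative labels, not real cSlo exons).
exon_choices = [
    ["e1"],                 # constitutive exon, always included
    ["e2a", "e2b"],         # 2 alternatives
    ["e3a", "e3b", "e3c"],  # 3 alternatives
    ["e4"],                 # constitutive exon
]

variants = ["-".join(choice) for choice in product(*exon_choices)]
print(len(variants))  # prints 6  (1 * 2 * 3 * 1 distinct mRNAs)
print(variants[0])    # prints e1-e2a-e3a-e4
```

A handful of alternative exons multiplies out quickly, which is how a single gene can plausibly reach hundreds of variants.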

Now that you know how important proteins are, let’s find out what they are. Proteins are polymers (poly = many, mer = subunit) made up of bonded amino acid mers. Proteins come in many sizes; the Trp-cage protein of gila monster spit is a polymer of only 20 amino acids, while the titin protein of your muscles is over 38,000 amino acids long.

Maybe we'll dig into the degeneracy of the genetic code when we talk about nucleic acids, but for now let’s just accept that DNA triplets code for different amino acids, and the order of the codons determines the order in which amino acids are linked to form a specific protein. The order of the different amino acids is the key. Why? I’m glad you asked.

Amino acids (or aa’s) are all small molecules made up of carbon, hydrogen, oxygen, nitrogen, and sometimes sulfur – five of last week’s “elements of life.” It’s the arrangement of these elements that makes an amino acid. Refer to the picture below for a visual aid. The central carbon is bound to four other groups (often called moieties). One is simply a hydrogen. Another is an amino group (contains the nitrogen). The third is a carboxylic acid group. Get it? Amino acid.


While not the most exciting images, these cartoons should
help you understand the structure of the amino acid (left)
and the building of the proteins (right). Each amino acid has
the same structure, except for whatever the R group might
be. The amino end of amino acid 2 is joined to the carboxylic
acid of amino acid 1. The next peptide bond would be between
the carboxy end of amino acid 2 and the amino end of amino
acid 3. Notice how water is created each time a peptide bond
is made.

The fourth group is what makes each aa different. Called an R group, this side chain can be small or big, neutral or charged, and gives the aa its properties. The R stands for something, but that story is just too long.

In glycine, the R group is merely another H, but in tryptophan it contains complex rings. We have talked about how tryptophan is the least used amino acid; it is bulky and introduces big bends in the peptide. We’ll show that bends, kinks, and other interactions between aa’s are important for protein function.

Most organisms can make all the amino acids they need, but mammals are the exception. We have abandoned (genetically) pathways for making some aa’s, so we must get them from our diet. These are the essential amino acids, of which there are nine if you are healthy. Tryptophan must be acquired by all animals – good thing plants still have the recipe.

Ribosomes (made of proteins and nucleic acids) link the individual aa’s together in the order demanded by DNA via the mRNA. The bond that connects them is called a peptide bond, and is a “dehydration” or “condensation” reaction.

Look at the amino acid picture again; the peptide bonding process kicks out water, i.e. dehydration (de = lose, hydro = water). Water forms from seemingly nowhere, like condensation on your mirror. See how fitting the names are?
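One practical consequence of the dehydration reaction: a peptide’s mass is the sum of its free amino acids minus one water per bond formed. A quick sketch (my own example; the mass table is a small subset of standard average values):

```python
# Average masses (g/mol) for a few free amino acids; standard
# averages, rounded to two decimals.
AA_MASS = {"G": 75.07, "A": 89.09, "S": 105.09, "C": 121.16}
WATER = 18.02  # one water molecule leaves per peptide bond formed

def peptide_mass(sequence: str) -> float:
    """Mass of a peptide: free amino acids minus (n - 1) waters."""
    free = sum(AA_MASS[aa] for aa in sequence)
    return round(free - WATER * (len(sequence) - 1), 2)

print(peptide_mass("GG"))   # prints 132.12 (glycylglycine)
print(peptide_mass("GAC"))  # prints 249.28
```

Two glycines (75.07 g/mol each) join to give 132.12 g/mol, not 150.14, because one water left during bond formation.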

When in a protein chain (also called a peptide), the order of aa’s is called the protein’s primary (1˚) structure. The primary structure in turn dictates the secondary (2˚) structure, which is a folding of small regions of the protein based on the interactions of the side chains of closely associated amino acids.

In turn, the folding of small regions brings together aa’s from farther apart, and they fold up based on their interactions. This is the tertiary (3˚) structure of the protein. If a protein needs more than one peptide chain to be functional, the shape that those different chains form when they interact is called the quaternary (4˚) structure.

These cartoons can help you picture how an individual amino acid can affect the structure of an entire protein. In the secondary structure cartoon, there are two basic forms that nearby amino acids can take, helices and sheets; other parts form no regular pattern at all. The tertiary and quaternary cartoons are for hemoglobin, showing how non-amino acid components may be involved (heme), and how the individual peptides fit together.

The hemoglobin that carries oxygen in our red blood cells is made up of four protein subunits. Why is this important? Because what the protein does in life is completely dependent on its three-dimensional shape. Lots of aa’s means lots of potential shapes. This is in itself one of the greatest exceptions, since one of the basic tenets of biology is “form follows function.” But with proteins, function follows form.

For the greatest number of possible combinations and shapes, it’s lucky that DNA codes for 20 aa’s. Or are there more? Proteinogenic aa’s are those that can be added into a growing peptide chain, and there are actually 22 of them. The two exceptions are selenocysteine (like cysteine with selenium substituting for sulfur) and pyrrolysine (like lysine with a ring structure added to the end).

We talked last week about the functions of selenocysteine and how it can be incorporated into a peptide even though there isn’t a normal mRNA codon dedicated to it. Pyrrolysine is similar in that it becomes coded for after the modification of what is usually a stop codon, in this case UAG (a signal to add pyrrolysine is located after the UAG codon).

Pyrrolysine is used by methanogenic (methane producing) archaea and bacteria. It's important in the active site of the enzymes that actually produce the methane. New research is showing that more organisms than previously believed use pyrrolysine. A 2011 study identified more than 16 archaea and bacteria with pyrrolysine coding mRNA modifications, but it looks like there may be more.

While the mammalian titin protein is the largest protein known (38,136 amino acids), there is a close second in a bacterium called Chlorobium chlorochromatii CaD3. A gene has been found for a protein of 36,000 amino acids, but we don’t know yet if the protein is actually made. In archaea, the halomucin protein from the square prokaryote Haloquadratum walsbyi is 9,200 amino acids long and is exported to protect the organism from its extreme environment.

A 2013 study indicates that the typical modification of the mRNA that occurs 100 bp downstream of the UAG stop codon isn't even there in some pyrrolysine-coding genes. One hypothesis is that in genes without the modification, the UAG sometimes acts as a stop codon and sometimes incorporates a pyrrolysine. Therefore, there are truncated (prematurely stopped) and full-length versions of the protein in the cell, and the relative number of each can be affected by local conditions and stressors.

In this paper, the authors developed a different predictor, one that doesn’t rely solely on the presence of the modification. Using it, they have identified many new candidate genes in archaea and bacteria that could be using pyrrolysines. Here’s my question – all organisms use selenocysteine, but it seems only archaea and a few bacteria use pyrrolysine. Why did it go away in higher organisms? Can it only be used for methane production? Please, no methane production jokes.

Pyrrolysine and selenocysteine are coded for by mRNA and are added to proteins, so we definitely have 22 aa’s, but could there be more? You betcha. There are over 300 non-standard amino acids, but that isn’t such a big deal. Remember the definition of an amino acid: a central carbon with a hydrogen, a carboxylic acid, an amino group, and something else attached. It’s no wonder there are so many of them.


Bacteria kill bacteria all the time. They make their own antibiotics, called bacteriocins, by modifying short peptides so that they interfere with cell wall synthesis in other strains. To do this, they modify amino acids in peptides to non-standard amino acids, including lanthionine and 2-aminoisobutyric acid. Those that contain lanthionine are called lantibiotics and are hot commodities right now.
A few non-standard aa’s can be found in proteins, like carboxyglutamate which allows for better binding of calcium, and hydroxyproline, crucial in connective tissue function. These are formed by modifying the amino acids already added to the growing peptide chain.

Other non-standard aa’s are produced as intermediates in other pathways and are not used in proteins. The list of them is long and their functions are even more varied; some act as neurotransmitters, others are important in vitamin synthesis, especially in plants. Still think life uses just 20 amino acids?

Next week we can finish up proteins. Life is very selective with the form of its amino acids – except when it isn’t.


Theil Have C, Zambach S, & Christiansen H (2013). Effects of using coding potential, sequence conservation and mRNA structure conservation for predicting pyrrolysine containing genes. BMC bioinformatics, 14 PMID: 23557142

Gaston MA, Jiang R, & Krzycki JA (2011). Functional context, biosynthesis, and genetic encoding of pyrrolysine. Current opinion in microbiology, 14 (3), 342-9 PMID: 21550296


For more information or classroom activities, see:

Dietary proteins –

Functions of proteins –

Standard amino acids –

Peptide bond –

Protein structure –

Non-standard amino acids -

Three Lefts Make A Right

Biology concepts - chirality, homochiral, enantiomer, stereoisomer, racemic mixture, biofilm, antimicrobial peptide, protein, amino acid


Geordi was the chief engineer on Star Trek: The Next Generation, but I was partial to Scotty on the original. They both worked on the matter-antimatter reactors that powered everything from the warp drive to the phasers. Don’t laugh – NASA is working on a warp drive as we speak, and we have been able to produce antimatter for years. Not sure how antimatter will solve our energy problems though; it takes much more energy to produce it than we get back from a matter-antimatter reactor.
Today we’ll start with a story that may seem to have nothing to do with biology. Hopefully we can draw a parallel later.

For every type of subatomic particle there is an opposite particle. There are protons and anti-protons, electrons and anti-electrons (positrons), neutrons and anti-neutrons. Together, they make up matter and antimatter – think Star Trek.

So why is our universe made of matter and not antimatter? It turns out that when matter meets antimatter, they obliterate one another. This is bad – Scotty was always trying to prevent a matter/antimatter catastrophe on the Enterprise.

When the universe was young, there were both antimatter and matter, and lots of annihilations. Scientists hypothesize that there were slightly more particles of matter than antimatter, so when the fireworks were over, only matter remained; so matter matters.

What’s this got to do with protein exceptions, our subject for last week and this? We’ll get to that, a little background first. Remember that the shape of the protein is important for its function, and its shape is dependent on the amino acid order and the structure of those amino acids.

Look at your hands. They’re mirror images of one another, but no matter how you try, you can’t make your left glove fit exactly on your right hand. Unlike a ball reflected in a mirror where the two images will be superimposable on one another, there is no way to do this with say….. your left and right shoes.


If you reflect your right hand in a mirror, you get an image exactly like your left hand. But your right and left hands can’t be superimposed – try it: turn one over, turn the other over, turn them around the other direction, detach one – you still can’t do it. It is the same with chiral molecules. The different groups come at you or away from you, and once you reflect them, no two can be superimposed – try it, at least one group will be pointing the wrong direction.
Molecules like this in chemistry are called chiral; the amino acid central carbon (chiral carbon) has four different groups, so no flipping will make the two mirror images look exactly the same. The two different amino acid variations are called stereoisomers, specifically, enantiomers.

The different enantiomers (enantio = opposite) have different chemical properties. Without getting into too much chemistry (nobody wants that), different enantiomers cause light waves to rotate in different directions.

One version of a molecule called glyceraldehyde rotates light to the right (dextrorotary or D, dextra = right), while the other is levorotary (levo = left). Amino acid enantiomers are comparable to the structure of glyceraldehyde, so amino acids that parallel the two glyceraldehydes are assigned a D- or L- label, i.e. L-alanine has a structure similar to the glyceraldehyde enantiomer that rotates light to the left.

This is a bit of a misnomer because light rotation depends on many factors. In fact, many amino acids labeled L-type because of their similarity to L-glyceraldehyde actually rotate light to the right – that little factoid won’t be on anyone’s final exam.

Most proteins fold on their own, and they fold the same way every time. But what might happen if the protein sometimes used a certain L-amino acid and sometimes the D-version? The two resulting proteins would fold differently and therefore have different possible functions. Heaven forbid!

To avoid this, nature has devised an ingenious solution - only L-amino acids are used. This assures that all proteins of a specific type assume the same shape, and it turns out that proteins are most stable when they are made of all L- or all D-amino acids. This is the rule of homochirality.


Polarized light has practical implications. Your 3-D movie
glasses are polarizers. The light from the screen
is a mix of two oppositely polarized images. One lens lets in
one, while the other lets in the opposite light. This splits
the picture into two images, one for each eye, and they are
separated by a small distance. This gives a stereoptic image, just
like your eyes seeing something in real space.
This is one of the best-known rules of biology, right behind the “form follows function” we talked about last week. But we saw that proteins were exceptions to that rule, so there must be exceptions to this one as well.

One question before the exceptions -just why did life opt for all L-proteins instead of all D-proteins? Do they annihilate one another when they come into contact, like matter and anti-matter? Thankfully, no.

Were there more L-amino acids available on the early Earth so life made a choice and stuck with it? Maybe – this is one of the hypotheses currently being investigated. It’s also possible that some life developed as D-protein makers, but they were out-competed somehow and we descended from the L-protein winners. It isn’t as easy a question as the physics matter/antimatter issue - more on this biology exception next week. There are so many more exceptions in biology as compared to physics – that’s why I love life more than physics itself.

The most mundane exception to homochirality is the one amino acid that isn’t chiral. Glycine’s R group is just a hydrogen, so the central carbon has two bound hydrogens, and the mirror images can be superimposed. There’s no L-glycine or D-glycine, just glycine. Don’t worry, the rest of the exceptions are better.
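The logic of superimposability can even be sketched in code. A tetrahedral center holds four groups; rotating the whole molecule corresponds to an even permutation of those groups, while a mirror reflection corresponds to a single swap. This Python sketch (the group labels are just illustrative names, not a chemistry library) shows why four different groups make a chiral center while glycine’s two hydrogens do not:

```python
from itertools import permutations

def orientations(groups):
    """All label orderings reachable by rotating the molecule:
    the even permutations of the four substituent positions."""
    reachable = set()
    for p in permutations(range(4)):
        # parity via inversion count: even = pure rotation
        inversions = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
        if inversions % 2 == 0:
            reachable.add(tuple(groups[i] for i in p))
    return reachable

def is_chiral(groups):
    """Chiral if the mirror image (one swap = reflection)
    cannot be rotated back onto the original."""
    mirror = (groups[1], groups[0], groups[2], groups[3])
    return mirror not in orientations(groups)

alanine = ("NH2", "COOH", "CH3", "H")  # four different groups
glycine = ("NH2", "COOH", "H", "H")    # two identical hydrogens

print(is_chiral(alanine))  # True
print(is_chiral(glycine))  # False
```

With all four groups distinct, no rotation recreates the mirror image; make any two groups identical and the distinction vanishes, which is exactly glycine’s situation.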

It turns out that the rule of homochirality is more of a guideline - there are examples of important D-amino acids (D-aa) in plants, animals, and prokaryotes. We don’t have time or space to go into many of them, but I will highlight two exceptions that are simply amazing. Let’s start with the bacteria since they own the planet, and we probably inherited our D-amino acid uses from them.

Bacteria use D-amino acids in various ways. First, new research shows that as bacterial numbers go up and resources (food) go down, D-aa-containing proteins might prepare bacteria for the bad times ahead. A 2009 study shows that Bacillus subtilis and Vibrio cholerae make large amounts of D-aa as they age, including D-tryptophan, D-tyrosine, D-phenylalanine, and D-leucine. As the amounts increase, they start to have an effect on the bacterial cell wall.


A gram positive bacterium has a thick cell wall, which
includes peptidoglycan. The glycan parts are NAM and
NAG. The NAM has a tetrapeptide attached to it, and this
is where the D-aa can be used. This strengthens the cell
wall and makes it even thicker. The lipoteichoic acid in
the outer layer is where a change to a D-aa prevents
defensins from sitting in the cell wall and disrupting
the buried plasma membrane.
The D-aa are incorporated into the growing peptidoglycan, the elastic and stress-bearing component of the cell wall, and they also regulate enzymes that control the thickness and structure of the peptidoglycan. By putting D-aa into the cell wall, the bacteria make themselves strong for the lean times ahead.

What is more, a 2010 study showed that in B. subtilis and S. aureus, the increasing numbers of bacteria and their increasing concentration of D-aa’s leads to a breakdown of the extracellular matrix that holds all the bacteria together (the biofilm). D-aa's were able to prevent biofilm formation and degrade existing biofilm, again preparing the bacteria to go off on their own as resources dwindle.

S. aureus also uses D-aa-proteins to avoid being killed. We have antimicrobial peptides (AMPs) on our skin and mucosal surfaces that are always looking to kill bacteria. They often work by poking holes in the bacterial membrane. However, a 2013 study shows that by switching out L-alanine for D-alanine in its cell wall, S. aureus can render the AMPs ineffective. The D-alanine gives the protein a different shape, so the AMPs can’t fit in and do their job. Smart bacteria.

Animals have gotten into the act, especially the gastropod mollusks, i.e., some snails and sea hares (marine slugs). D-tryptophan is their exception of choice. The cone snails (genus Conus), the most predatory and venomous of the marine snails, use D-tryptophan in the active peptides of their venom, called contryphans. Large cone snails have been known to kill humans, but the role of the D-tryptophan in the venom activity is not yet known.


The cone snails use many peptide venoms to incapacitate
their faster prey; they move like snails, don’t ya know. The
siphon has the black stripe and tests the water for prey.
Then the proboscis below (kind of pinkish orange at the
tip) sends out the radula that envenomates the victim.
The sea hare, Aplysia kurodai, also uses D-tryptophan in a cardioexcitatory neuropeptide called NdWFamide (it speeds up the slug’s heartbeat). This protein has also been found in terrestrial slugs, so it may be that many mollusks are D-aa users.

How about us? Do humans use D-amino acids? Sure we do. But there are tricksters here. D-alanine is found in all mammals, but we don’t know what it might be doing. Its levels change with the time of day (circadian cycling), so who knows what it might be controlling. A 2013 study set out to determine what drove the circadian changes. They tested several things: changing diet or fasting didn’t matter, and the enzymes that degrade D-alanine didn’t change in level or function either. But when they studied germ-free mice, the D-alanine levels didn’t change.

The scientists determined that it’s our gut bacteria making D-alanine, and circadian changes in intestinal absorption rates is the reason that the D-alanine levels fluctuate. It doesn’t mean that D-alanine isn’t doing something, but it sure had us fooled for a while.

Mammals don't stop there; D-serine and D-aspartate are so important that we have special enzymes called racemases whose job it is to convert their L-forms to D-forms (racemic mixtures contain both D- and L-versions of a molecule). The most amazing exception must be D-serine. However, we will see that there is a Goldilocks effect to D-aa’s: you don’t want too much or too little.

A certain brain neuron receptor (called NMDAR, important for learning and memory) is activated by an amino acid called L-glutamate, but it needs help from either glycine or D-serine to set off the electrical impulse. It turns out that in Lou Gehrig’s disease (amyotrophic lateral sclerosis), lower motor neurons die because they undergo aberrant excitation. In genetic cases of ALS, patients have too much D-serine!


Most patients with ALS are diagnosed after age 50 and
live about five years. Stephen Hawking – the world-famous
cosmologist – was diagnosed at 21 and has
lived with the disease for 50 years! Hawking has round-
the-clock care, for the only muscles that still work for him
control breathing, swallowing, and eye movement. Unlike
him, most people’s breathing and swallowing are not
spared, and this is how they die. He uses his eye
movements to run his computer.
A 2012 study showed that D-amino acid oxidase (DAAO) is mutated in these patients and doesn’t do its job of breaking down D-serine. Mice with a mutated DAAO were shown to have decreased lower motor neurons and more ALS signs.

So too much D-Ser is bad, but how about too little? Many studies have shown that schizophrenia patients have low levels of D-Ser. It might be that DAAO is too active, or perhaps serine racemase is underactive, or maybe there is just too little L-Ser to make D-Ser from – we don’t know yet.

A 2013 study has also implicated D-aspartic acid in schizophrenia. D-Asp can replace L-glutamate in activating NMDA receptors, and schizophrenic patients have low D-Asp levels in the brain and blood. The D-Ser and D-Asp data implicate glutamatergic receptor activation in schizophrenia, so much work is underway to find ways to increase these D-aa’s in our brains - the very things that the rules say we shouldn’t be using in the first place. But let’s not raise them too much; no one wants ALS!

Next week, we switch our attention to carbohydrates, the energy sources in our cells. Every cell on Earth is designed to make ATP from glucose - except for those cells that ONLY use fructose.



Lam H, Oh DC, Cava F, Takacs CN, Clardy J, de Pedro MA, Waldor MK. (2009). D-amino Acids Govern Stationary Phase Cell Wall Re-Modeling in Bacteria Science, 18 (325), 1552-1555 DOI: 10.1126/science.1178123

Kolodkin-Gal I, Romero D, Cao S, Clardy J, Kolter R, Losick R. (2010). D-Amino Acids Trigger Biofilm Disassembly Science, 328 (5978), 627-629 DOI: 10.1126/science.1188628
Sasabe J, Miyoshi Y, Suzuki M, Mita M, Konno R, Matsuoka M, Hamase K, Aiso S. (2012). D-amino acid oxidase controls motoneuron degeneration through D-serine. Proc Natl Acad Sci U S A, 109 (2), 627-32 DOI: 10.1073/pnas.1114639109

Simanski M, Gläser R, Köten B, Meyer-Hoffert U, Wanner S, Weidenmaier C, Peschel A, & Harder J (2013). Staphylococcus aureus subverts cutaneous defense by d-alanylation of teichoic acids. Experimental dermatology, 22 (4), 294-6 PMID: 23528217

 Errico F, Napolitano F, Squillace M, Vitucci D, Blasi G, de Bartolomeis A, Bertolino A, D'Aniello A, & Usiello A (2013). Decreased levels of d-aspartate and NMDA in the prefrontal cortex and striatum of patients with schizophrenia. Journal of psychiatric research PMID: 23835041




Sugars Speak In Code

Biology concepts – carbohydrates, monosaccharides, hexose, glycocode, starch, glycogen, carbohydrate linkage, bacterial persisters, fructolysis


Refined sugar is produced from two main sources, sugar cane (37
different species of grass from the genus Saccharum, bottom right),
and sugar beet (Beta vulgaris, top right). Sugar cane accounts for
80% of the sugar produced today. The cane or the beets are ground
and the sugary juice is collected with water or on its own. To refine
the sugar, which still has molasses from the fiber, is processed with
lime or soda and evaporated to produce crystals. The color is
removed by activated charcoal to produce the white sugar we most
often see (top middle). Brown sugar is sugar in which the molasses
has not been removed and still coats the crystals (bottom middle).
Unprocessed sugar from cane is shown on the bottom right, while raw
sugar (not whitened) is on the top right.
It would be hard to argue with the claim that without sugars, none of us would be here. Glucose provides us with short and medium term storage of energy to do cellular work, but would you believe that certain parts of reproduction use a completely different energy source? All hail fructose!

Sugars are better termed carbohydrates, because they are basically carbon (carbo-) combined with water (-hydrate). The general formula is Cn(H2O)n; for instance, the formula for glucose is C6H12O6.
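As a quick sanity check, the Cn(H2O)n formula can be turned into a few lines of Python (the atomic masses are the standard approximate values):

```python
# The carbohydrate formula Cn(H2O)n, checked with approximate
# atomic masses (C = 12.011, H = 1.008, O = 15.999 g/mol).

def carb_formula(n):
    """Molecular formula C_n H_2n O_n for an n-carbon sugar."""
    return f"C{n}H{2 * n}O{n}"

def carb_mass(n):
    """Approximate molar mass (g/mol) of Cn(H2O)n."""
    return n * 12.011 + 2 * n * 1.008 + n * 15.999

print(carb_formula(6))         # C6H12O6 (glucose, fructose, galactose...)
print(round(carb_mass(6), 2))  # 180.16 g/mol
```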

The simplest sugars are the monosaccharides (mono = one, and sacchar from the Greek = sugar). They can be composed of 4-7 carbons, called tetroses (4 carbon sugars), pentoses (5), hexoses (6), and heptoses (7).

Things aren’t so simple though, even for the simple sugars. Let’s use the hexoses as an example, although what we say will also apply to the other sugars. We said the formula for glucose is C6H12O6, so that makes it a hexose. Is it the only hexose – heck no! Hexoses can be aldoses or ketoses, depending on their structure (see picture). Even more confusing, -OH groups can be located on different carbons, making them act differently chemically.


This chart is a brief introduction to the complexities of simple
sugars. They can vary in the number of carbons (triose vs.
pentose vs. hexose). They can also vary in their structure even
if they have the same number of carbons (glucose vs. galactose).
Yet another difference can come in their reactive group on the
end, being either a ketone group (ketoses) or an aldehyde
group (aldoses).
There are actually 12 different hexoses – some names you know; glucose, fructose, or galactose. Others are less common; idose, tagatose, psicose, altrose, gulose – you won’t find those in your Twinkies. Then there are the deoxysugars, carbs that have lost an oxygen. Fucose is also called 6-deoxy-L-galactose, while 6-deoxy-L-mannose is better known as rhamnose.

If this wasn’t difficult enough, stereoisomers again rear their ugly head, as they did last week with the proteins. Hexoses have three (ketoses) or four (aldoses) chiral carbons each, so hexoses can have eight or 16 stereoisomers! Every isomer may act differently from every other; this allows for many functions. But wait – there’s more trouble when we start linking sugars together.
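The eight-or-sixteen count is simple arithmetic: each chiral carbon can take one of two configurations, so n chiral centers allow up to 2^n stereoisomers. In code:

```python
# The 2^n rule: a molecule with n chiral centers can have up to
# 2**n stereoisomers, since each center is independently one of
# two configurations.

def max_stereoisomers(n_chiral_centers):
    return 2 ** n_chiral_centers

print(max_stereoisomers(3))  # ketohexoses: 8
print(max_stereoisomers(4))  # aldohexoses: 16
```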

Simple sugars can be joined together to build disaccharides (two sugars), oligosaccharides (3-10), and polysaccharides (more than 10). The subunits are connected by a condensation (dehydration) reaction. Just like with the amino acid linkages in proteins, a water molecule is expelled when two sugars are joined together. Sucrose (table sugar) is a disaccharide made up of a glucose linked to a fructose.
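You can see the expelled water in the molar masses: with approximate values, glucose plus fructose minus one water gives the accepted mass of sucrose.

```python
# Condensation check: when two monosaccharides join, one H2O
# (~18.02 g/mol) is expelled, so the disaccharide weighs less
# than the sum of its parts.
GLUCOSE, FRUCTOSE, WATER = 180.16, 180.16, 18.02  # approximate molar masses

sucrose = GLUCOSE + FRUCTOSE - WATER
print(round(sucrose, 2))  # 342.3 g/mol, the accepted mass of sucrose
```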

Just where the linkage takes place is also important. Our example again can be glucose. Many glucoses can be linked together with an alpha-1,4 linkage. Long chains of glucoses linked in this way are called starch or glycogen, based on the different branching patterns they show. Mammals store glucoses as glycogen, while plants store them as starches.


Amylose is one type of starch, amylopectin being another.
They are different from cellulose only in the way the sugars
are linked together. You can see that in starch the CH2OH
groups are all on the same side, while in cellulose they alternate.
This may seem like a small difference, but we can digest only
starch (or glycogen, which has the same types of linkages),
not cellulose.
Humans can digest both starch and glycogen because we have enzymes that can break alpha-1,4 linkages. But if you change the chemical shape of the bond (see picture) to a beta-1,4 linkage, the glucose polymer becomes cellulose.

Plants make a lot of cellulose for structure, but even though it is made completely of glucose, humans can’t digest it at all! Ruminant animals can digest cellulose, but it takes some powerful gut bacteria to help out, and one of the side effects is a powerful dose of methane. Cows are among the greatest sources of methane on the planet!
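A toy model makes the point about linkages: the monomer is identical in starch and cellulose, and digestibility depends only on whether our enzymes recognize the bond. This sketch (the polymer representation is invented purely for illustration) treats a polysaccharide as a list of its linkages:

```python
# Why linkage type matters: human amylases cleave only alpha-1,4
# bonds, so chains of the very same glucose differ in digestibility
# purely by how the units are joined.

HUMAN_ENZYMES = {"alpha-1,4"}      # bonds our enzymes can hydrolyze
starch    = ["alpha-1,4"] * 9      # 10 glucoses, 9 alpha linkages
cellulose = ["beta-1,4"] * 9       # same sugar, different bond

def digestible(polymer, enzymes=HUMAN_ENZYMES):
    """True if every linkage in the chain can be cleaved."""
    return all(bond in enzymes for bond in polymer)

print(digestible(starch))     # True  -> we harvest the glucose
print(digestible(cellulose))  # False -> passes through as fiber
```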

We have talked about carbohydrates as energy sources, but pretty much every biological function and structure in every form of life involves carbohydrates.

Carbohydrates are important structural elements. Cellulose, thousands of beta-1,4-linked glucoses, helps give plants their rigidity, especially in non-woody plants, but in woods as well (linked together by lignin). As such, cellulose is by far the most abundant biomolecule on planet Earth.

Chitin is another structural carbohydrate. Chitins make up the spongy material in mushrooms, and the crunchy stuff of insect exoskeletons.  You don’t get much more structural than keeping your insides inside.

Carbohydrates are often part of more complex molecules as well. Nucleic acids like RNA and DNA have a five-carbon ribose or deoxyribose at the core of their monomers. Glycolipids and glycoproteins (glyco- from Greek, also means sweet) are common in every cell. Over 60% of all mammalian proteins are bound to at least one sugar molecule.

The different sugar-linked complexes are part of the glycome (similar to genome or proteome), including oligo- and polysaccharides, glycoproteins, proteoglycans (a glycoprotein with many sugars added), glycolipids, and glycocalyxes (sugar coats on cell surfaces). None of these carbohydrate additions are coded for by the genetic code, yet a great diversity of glycomodifications are found on most structures of the cell.


The carbohydrate code is still a mystery to us. The glycans can be
linked together by N-type or O-type linkages, the order of the sugars
can vary, the numbers of each type of sugar can vary, and the branching
can vary. Every difference adds to the complexity of the code and can
direct a different message to the cell or the molecules with which
these glycans come into contact.
The diversity and complexity of these added carbohydrates is highly specific and highly regulated – this is the glycocode or carbohydrate code. Yet, we haven’t even come close to breaking the code, i.e., what series of what sugars means what.
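A back-of-the-envelope count hints at why the glycocode is so hard to break. Unlike a peptide, each junction between sugars can also vary in how the units are linked, so even short chains explode combinatorially. The numbers below (10 common sugars, roughly 8 linkage options per junction) are illustrative assumptions, not exact glycobiology:

```python
# Counting linear chains: a sugar choice at each position and a
# linkage choice at each junction between positions.

def linear_variants(length, n_sugars, n_linkages):
    return n_sugars ** length * n_linkages ** (length - 1)

peptides = 20 ** 3                     # tripeptides from the 20 amino acids
glycans = linear_variants(3, 10, 8)    # linear trisaccharides, our assumptions

print(peptides)  # 8000
print(glycans)   # 64000
```

And that is before counting branching, which peptides don’t do at all.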

The glycocode is important for cell-cell communication, immune recognition of self and non-self, and differentiation and maturation of specific cell types. Dysfunction in the glycocode leads to problems like muscular dystrophy, mental defects, and the metastasis of cancer – we better get cracking on the code breaking.

In the middle of 2013, a new method was developed for detecting the order and branching of sugars on different molecules. This method uses atomic force microscopy (AFM) to actually bump over the individual sugars on each molecule and identify them by their atoms, even on live cells. I’m proud to say that my father-in-law played a role in developing AFM for investigation of atom distributions on the surfaces of solid materials, mostly superconductors.

The glycome is even more diverse because different types of organisms make different sugars. One thing I find interesting is that mammals don’t make sucrose. No matter what we mammals do, we won’t taste like table sugar when eaten – more’s the pity. I wonder what a sweet pork chop might taste like.


Proof that many foods have sugars – the Maillard reaction. That gorgeous
browning of your bread or steak comes from a chemical interaction
between the sugars and amino acids of the food. In the process, hundreds
of different compounds are made, each with a different flavor
profile. The example in the chart above is for caramelizing onions. Each
food and its chemical makeup produces a different set of Maillard
products. You roast your coffee beans for the same reason. This is why
Food Network always suggests ways for you to get great searing and
browning of food.
We use sucrose as sugar because it is relatively easy to obtain from the plants that do make it, like sugarcane or sugar beets. Fructose (often called fruit sugar) is actually sweeter on its own; almost twice as sweet as sucrose and three times as sweet as glucose. This explains why so many sweetened foods are full of high fructose corn syrup (see our previous discussion of high fructose corn syrup).

We all know that organisms use glucose as an energy source, first through its breakdown to pyruvate via glyceraldehyde-3-phosphate (G3P) in glycolysis; the pyruvate then travels through the citric acid cycle to produce enough NADH and FADH2 to generate a lot of ATP. But fructose can be used as well.

Fructose undergoes fructolysis, different from glycolysis only in the fact that one more step must be taken to generate G3P (the glyceraldehyde is phosphorylated to G3P by the enzyme triokinase). In humans, almost all fructose metabolism takes place in the liver, as a way to either convert fructose to glucose to make glycogen, or to replenish triglyceride stores – so be good to your liver.
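The relationship between the two pathways can be sketched as a merge point: fructolysis takes its own first steps, then joins glycolysis at G3P. The step lists below are simplified from the text (a roadmap, not a kinetic model):

```python
def merge_point(path_a, path_b):
    """First intermediate that two metabolic routes share."""
    seen = set(path_a)
    return next((step for step in path_b if step in seen), None)

# Simplified intermediates; both routes funnel into G3P and then pyruvate.
glycolysis = ["glucose", "G6P", "F6P", "F1,6BP", "G3P", "pyruvate"]
fructolysis = ["fructose", "F1P", "glyceraldehyde", "G3P", "pyruvate"]

print(merge_point(glycolysis, fructolysis))  # G3P
```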

The big exception is how important fructose is in mammalian reproduction. Spermatozoa cells use fructose as their exclusive carbohydrate for production of ATP while stored in the testes. This fructose comes not from the diet but the conversion of glucose to fructose in the seminal vesicles.

Why use a different carbohydrate source just for sperm? Seminal fluid is high in fructose, not glucose. Perhaps this is a factor in seminal fluid viscosity. If that problem was solved by using fructose, then the cells swimming in it would probably evolve to use it as an energy source.

I asked Dr. Fuller Bazer of Texas A&M about this and he pointed out that fructose can be metabolized several different ways, and some of these lead to more antioxidants and fewer reactive oxygen species - it would be important to leave sperm DNA undamaged, especially since we have previously talked about how they are more susceptible to oxidative damage.

Bazer also pointed out that unlike glucose, fructose is not retrieved from tissues and put back into circulation. Once it’s sequestered to the male sexual accessory glands, it would stay there. Still lots to be learned in this area.


Fructose is sweeter than glucose. Sucrose is one glucose joined to one
fructose, so the ratio is 50:50. In most honey, the fructose:glucose ratio
is about 55:45, so it is often sweeter than table sugar. Since it is higher
in fructose, some people liken it to high fructose corn syrup, but there
are many compounds in honey that also help the immune system, etc.
However, recent evidence is showing that some honey is being diluted
with high fructose corn syrup and some bees are being fed HFCS. The
benefits from true honey are then lost.
A 2013 study shows that maternal intake of fructose can also affect reproduction. Pregnant rats fed 10% fructose in their drinking water had significantly fewer babies, but a greater percentage of the offspring were male (60% versus 50%). The fructose did not arrest female embryos from developing or have a sex-specific effect on sperm motility, suggesting that the sugar has a direct effect on the oocyte that increases the chances of being fertilized to produce a male. Weird.

Using sugars other than glucose may be a big deal for mammals, but bacteria can thrive on many different sugars. E. coli can process glucose, but if other sources of sugar are around, they will switch over in a heartbeat – if they had a heart. E. coli has a whole different set of genes for lactose metabolism, found in something called the Lac operon. The operon gets turned on only if lactose is present and glucose is not.

The ability of bacteria to use other sugars might save us as well. Some bacteria can just shut down their metabolism if antibiotics are present and hang out until the drugs are gone. These are called persister organisms, and they are different from antibiotic-resistant bacteria. A 2011 study showed that if you give sugar in combination with some kinds of antibiotics, the persisters just can’t resist the sweet treat and will not shut down their metabolism. The antibiotics then become effective. Using sugars we don't metabolize well, like fructose or mannitol, ensures that they will be around to help kill the bacteria. Amazing.

We have just brushed the surface of sugary exceptions. Next week we will see how nature first selected a single type of sugar to use in biology, and then went right out and broke its own rule.



Gunning AP, Kirby AR, Fuell C, Pin C, Tailford LE, & Juge N (2013). Mining the "glycocode"--exploring the spatial distribution of glycans in gastrointestinal mucin using force spectroscopy. FASEB journal : official publication of the Federation of American Societies for Experimental Biology, 27 (6), 2342-54 PMID: 23493619

Gray C, Long S, Green C, Gardiner SM, Craigon J, & Gardner DS (2013). Maternal Fructose and/or Salt Intake and Reproductive Outcome in the Rat: Effects on Growth, Fertility, Sex Ratio, and Birth Order. Biology of reproduction PMID: 23759309

Allison KR, Brynildsen MP, & Collins JJ (2011). Metabolite-enabled eradication of bacterial persisters by aminoglycosides. Nature, 473 (7346), 216-20 PMID: 21562562 



It’s Not Just Our Tooth That’s Sweet

Biology concepts – homochirality, carbohydrates, chiral discrimination, glycoside, H antigen


It isn’t just biomolecules that show chirality. There is also
chiromorphology, like snail shells that usually turn to the right
(dextral, or D-). There are factors in early embryonic
development that cause the body and shell to be right handed
in most gastropod species, yet other species are left handed.
There are also instances where a right-handed species will
produce a left-handed individual, so shell collectors have to be
on the lookout for abnormal individuals.
A couple of weeks ago we talked about how, in most cases, life uses exclusively the left-handed enantiomers of amino acids to make proteins. This homochirality is also seen in the sugars we talked about last week, but in this case, mostly D-sugars are utilized in biological systems.

What isn’t amazing is that it happens to be L- for amino acids and D- for carbohydrates; the fact that they’re different is no big deal. Evolution just wants the parts to fit together, so if an enzyme evolved to use D-sugars, it’s not a surprise that the D-sugar would be favored in the pathway from then on.

But it might not have been random either. No one knows for sure, but hypotheses abound for how homochirality in these biomolecular monomers was established.

One 2009 paper was concerned with the maintenance of homochirality rather than its establishment. Dr. Soren Toxvaerd stated that if you don’t believe life as we see it today occurred in a singular event, then it must have developed over a long period of time. Evidence indicates that small changes in the self-assembly of biomolecules took place over at least thousands of years.

If life took a long time to develop, then prebiotic (before life) earth must have been fairly stable in terms of enantiomer concentrations. But we know that homochiral solutions will turn to racemic mixtures (containing both L- and D- enantiomers) in a short time, days for amino acids and just hours for sugars. So how could the environment have been stable enough for life to develop over time?
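The instability described here can be put in rough quantitative terms. Racemization is a first-order process that drives enantiomeric excess (ee) toward zero as ee(t) = ee0·e^(−2kt). The rate constants below are made-up placeholders chosen only to echo the days-versus-hours contrast in the text:

```python
import math

# First-order racemization: enantiomeric excess decays exponentially,
# ee(t) = ee0 * exp(-2*k*t), where k is the per-enantiomer conversion
# rate. The two k values below are illustrative, not measured rates.

def ee(t_hours, k_per_hour, ee0=1.0):
    return ee0 * math.exp(-2 * k_per_hour * t_hours)

print(round(ee(24, 0.01), 3))  # slow racemizer, one day later: ~0.619
print(round(ee(24, 0.5), 6))   # fast racemizer, one day later: essentially 0
```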


One possible hypothesis about the establishment of homochirality
was put forth in 2010 by Koji Tamura, PhD in the Journal of
Cosmology. Put very simply, RNA may have developed before
proteins. RNA evolved to use only D-ribose because a mixture
would have been a symmetry violation. The action of D-ribose
would have been driven toward L-amino acids because of shape
problems with attaching D-amino acids to tRNAs. Now prove it.
Louis Pasteur, he of bacteria-free milk and germ theory, may have shown us the way. He discovered chiral discrimination. Racemic mixtures, under the right conditions, will separate into pools of homochirality. There is an energy gain and stability to packing homochiral molecules together; the other enantiomer will be excluded. This could help explain life using one enantiomer only.

What is more, hydrothermal vents and black smokers have just the needed conditions for both chiral discrimination and for self-assembly of biomolecules. Interesting huh? Think it’s a coincidence that black smokers harbor some of the oldest archaea on Earth? We may owe our very existence to plumes of superheated water and the xenophobia of enantiomers.

Lastly in this area, it may be that sugars and amino acids selected each other for homochirality. Glyceraldehyde is 1) highly discriminate for its enantiomers, 2) was present in large amounts in prebiotic oceans, 3) is used in self-assembly of many biomolecules, and 4) D-glyceraldehyde very much likes to bind to L-serine. So a slight excess in either one of these could have helped select for the other, and if this was stable, it could have caught on like “Gangnam Style.” This may be why life uses mostly D-sugars and L-amino acids and why I know the name Psy.

Now that we have delved into the mire that is maintenance of homochirality in sugars, let’s look at the rule breakers. D-sugars aren’t the only game in town.

Bacteria, oh bacteria! Once again, they lead the way in rule breaking. Last week we discussed how E. coli can generate ATP from several different sugars - glucose, lactose, etc. It takes different enzymes to metabolize each sugar, so if they are going to invest the energy in maintaining those genes and making those enzymes, there better be a good reason.


Paracoccus species 43P has been shown to have an L-glucose
metabolic pathway. This organism is very closely related to
Paracoccus denitrificans. P. denitrificans is believed to be the
organism that was engulfed to become the eukaryotic
mitochondrion. It closely resembles the mitochondrion, and
although random genes needed for aerobic respiration have
been found in many prokaryotes, P. denitrificans is the only
prokaryote in which all the necessary genes have been found.
A 2012 study tried growing soil bacteria on medium that contained only L-glucose as an energy source. One species of bacterium, Paracoccus sp. 43P, was able to metabolize L-glucose to pyruvate and glyceraldehyde-3-P, and then use that for ATP production. The researchers discovered an L-glucose-specific dehydrogenase enzyme, and this enzyme was active in the fluids from broken-up Paracoccus cells. The process is similar to one in E. coli, but here it is L-glucose specific.

Mammals can’t manage as well as some bacteria; we can’t metabolize L-glucose at all. However, that doesn’t mean it can't work for us. L-glucose has been proposed as an artificial sweetener, especially for type II diabetics. One form of L-glucose can stimulate insulin release, so this would be doubly good for type II diabetics. Unfortunately, L-glucose costs 50% more than gold; therefore, don't look for it next to the Truvia anytime soon.

One, but only one, study has been published showing rats metabolized L-fructose and L-gulose, but not L-glucose. From 1995, the authors waited until the end of the paper to explain that the metabolism was being carried out by the rodents’ gut bacteria, not by the rats themselves. No wonder it was only one paper.

Just because we can’t metabolize L-sugars doesn’t mean that we mammals are left out in the cold. Some sugars are used in the L-form even if they aren’t broken down to make ATP. The most egregious example of this is a hexose sugar called L-altrose. Why is it different from some other exceptions here? Because altrose doesn’t even occur in nature as a D-sugar; only the L-form has ever been found. It was first isolated in 1987 from a bacterium called Butyrivibrio fibrisolvens, which is found in the GI tract of ruminant animals (cows and such).


Ruminants are mammals that have more involved digestive
strategies. Ruminants have many types of GI bacteria to help
them break down tough plant material; it isn’t surprising that
some of them can use nonstandard carbohydrates in their
physiology.  “Ruminating” is the act of re-chewing food that
has been partially softened by bacterial action in the first
compartment of the stomach, and then brought back to the
mouth as “cud.” I ruminate on ideas all the time, but I think I
will stop – I’m going to call it “further thought” from now on.
Ruminants go the extra mile. They digest longer and work on food harder, using bacteria to help with much of the work. Therefore, it isn’t strange to note that L-altrose has also been seen in another ruminant bacterium, Yersinia enterocolitica. Remember though, this altrose isn’t being used in energy production; it's found in the glycolipid of their outer membrane, LPS (lipopolysaccharide).

It turns out that L-sugars are common in bacterial LPS. I found examples from several different bugs, including L-quinovose (6-deoxy-L-glucose), L-rhamnose, and L-fucose (6-deoxy-L-galactose).

When it comes to L-sugars, plants can get into the act as well. Rhamnose (6-deoxy-L-mannose) occurs in nature, and can be isolated from several plants of the genera Rhamnus and Uncaria, including buckthorn, poison sumac, and many other plants.

Rhamnose from plants takes the form of a glycoside. There’s that word again, glyco-. A glycoside in general terms is any molecule bound to a sugar. In plants, attaching sugars to create glycosides is a common way to inactivate molecules so that they can be stored for later use. When needed, the sugar residues of glycosides are cleaved away by special enzymes, and then the protein, enzyme, lipid, etc. becomes active.


Digoxin (sometimes called digitalis) is a cardiac glycoside
from foxglove plants. It is used to treat atrial rhythm
problems or heart failure. First used by William Withering in
1785, digitalis is said to be the first of the modern day
therapeutics. But it can kill you too, both the plants and the
drugs. One nurse was sentenced to 18 life sentences after he
was convicted of killing more than 40 patients with digoxin.
Glycosides can be differentially regulated because there are many sugars that can be used, and several different possible linkages for each sugar/substrate combination. Therefore, cells can precisely control just when and where the glycosides are activated. This may allow cells to function for longer periods of time, but isn’t the reason that rhamnose and fucose (both L-sugars) are being included in obscenely expensive anti-aging creams.

Some evidence suggests that rhamnose and fucose can inhibit the activation of the elastase enzyme in skin cells. Elastase is known to increase in expression and activity as skin cells in culture divide several times. Therefore, companies want you to believe that rhamnose will keep your skin from looking old. Forget that keratinocytes in a petri dish bear as much resemblance to your skin as Watchmen does to Hamlet.

That was a bit sarcastic, but the cosmetic industry is a pet peeve of mine. And while I’m exposing my soul, I might as well admit to being a bit of a speciesist. I like the exceptions best when they involve Homo sapiens, so the last exception for today has to do with our own uses for a deoxy-L-sugar, fucose. I must admit that several uses of fucose apply to many mammals, but being the speciesist that you know I am, I ignore them to focus on humans.

Fucose (6-deoxy-L-galactose) is crucial for the turning of an unloved spermatozoon and a lonely oocyte into a very premature teenager. Both the development and maturation of gamete cells and the development of the embryo depend on the recognition and communication of surface molecules that include fucose. But wait, there’s more.


The H antigen is linked to the red blood cell through a fucose
residue, but not in the “h” antigen mutant. Because of this, it
is not recognized for modification to the A or B antigen, and
the typical H antigen is not there to prevent development of
the H antibody.
Fucose is also a component of many glycans, including substance H. Also called the H antigen, this molecule is a precursor to the A and B antigens found on red blood cells. For people with A, B, or AB blood, the H antigen is modified to become the mature A or B antigen, but in people with O blood, the H antigen doesn’t mature and remains an H. Therefore, principal factors in every human’s development and physiology are determined in part by a sugar that we shouldn’t be using – according to the rules anyway.

However, not all is goodness and light when it comes to fucose. Some folks have a mutation in their H antigen gene that prevents its maturation to the A or B antigen. All cells would have the mutant H antigen, called h. This is different from being type O (meaning not having any A or B antigen, but still having the H antigen).

The hh or Oh blood type is called the Bombay type, and is very rare. Bombay individuals can donate blood to anyone, regardless of blood type (because they do not express any antigen to be attacked). However, because they make A, B, and H antibodies, they can receive blood only from another person with Bombay blood type. Since Bombay occurs about three times in a million births – good luck with that search for blood.
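For readers who like the logic laid out explicitly, here is a toy Python sketch (my own simplification, not a clinical rule set) of the compatibility described above. It covers only ABO plus the Bombay phenotype, and ignores Rh and every other blood group system:

```python
# Which antigens each red blood cell type carries.
ANTIGENS = {
    'A':      {'A', 'H'},
    'B':      {'B', 'H'},
    'AB':     {'A', 'B', 'H'},
    'O':      {'H'},      # type O cells still carry the unmodified H antigen
    'Bombay': set(),      # hh/Oh cells carry no A, B, or H antigen at all
}

def antibodies(blood_type):
    """You make antibodies against every antigen your own cells lack."""
    return {'A', 'B', 'H'} - ANTIGENS[blood_type]

def can_receive(recipient, donor):
    """A transfusion is safe if the donor's cells carry no antigen
    the recipient has antibodies against."""
    return not (ANTIGENS[donor] & antibodies(recipient))
```

Run it and you see the text's point: `can_receive(r, 'Bombay')` is true for every recipient, but a Bombay recipient is compatible only with another Bombay donor.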

Let’s tackle the nucleic acids and their exceptions starting next week. By training I am a molecular biologist; I know an exceptional number of nucleic acid exceptions.


Shimizu T, Takaya N, & Nakamura A (2012). An L-glucose catabolic pathway in Paracoccus species 43P. The Journal of biological chemistry, 287 (48), 40448-56 PMID: 23038265

Toxvaerd S (2009). Origin of homochirality in biosystems. International journal of molecular sciences, 10 (3), 1290-9 PMID: 19399249
 
For more information or classroom activities, see:

Bombay blood type –

Glycosides –

Racemization –
http://journalofcosmology.com/SearchForLife108.html

RNA Takes First Place

Biology concepts – nucleic acids, DNA, RNA, central dogma of molecular biology, ribozyme, RNA world hypothesis


The Library of Congress in Washington DC was designed as a 
showplace as well as a repository. The main reading room looks as
much like a museum or a cathedral as it does a library. If I could
figure out how to get away with it, I would live in the LOC.
Did you know that there are more than 155.3 million informational items (books and such) in the Library of Congress? Established in 1800 with 3000 volumes, the library was originally housed in the Capitol Building. Unfortunately, all the books were lost when the British burned Washington in 1814. No worries, the LOC then purchased Thomas Jefferson’s personal library of over 6500 books and set up shop in a new building, although not the 1892 library building that exists today (left).

In a way, you can think of the molecular workings of the cell like the Library of Congress. You need information storage – these are the books. Each book (a chromosome, or part of one) contains the instructions (genes) needed to make products (proteins) the cell may need.

Each time you want to make a certain molecule, you must consult the book (chromosome) that has the correct instruction page (DNA gene). But you may be making many copies of your product in a short period, so one book might not be enough.

You could keep many copies of each book, maybe thousands, but this would take up too much room. The LOC already covers 2.1 million sq. feet (and that’s just one main building). What if you needed 1500 copies of One Good Turn (an interesting book about the history of the screw and screwdriver) because at some time or another, 1500 people wanted to learn how to build a square screwdriver?

To avoid this need for extra space, you make copies of pages (mRNA) from the books (chromosomes) that can be taken out of the library (nucleus) and used for making the products. Each time you want a product, a translator (tRNA and ribosome) must be used. This converts the copied instructions (mRNA) into a usable product (protein).

When one or several translations have been made, the copied instructions start to tear and get worn, and finally break down. Good thing we still have the original copy of the book stored in the nucleus… I mean library. We can go back and make more copies later if we need them. Humans are amateurs; we only have about 25,000 sets of instructions stored in 46 books, nowhere near the 155.3 million items of the LOC.


The central dogma of molecular biology says that DNA is replicated to
DNA, so daughter cells get a full set of instructions. DNA is also
transcribed to mRNA, which is a copied message of the instructions to
build one protein. Finally, the mRNA acts as a code that is translated
into an amino acid polymer – a protein. HIV and other retroviruses
laugh at the central dogma, going the opposite direction, RNA to
DNA. Retrotransposons laugh at HIV, as they can do all that and more.
Cells take this library/nucleic acid analogy further. Sure, they have DNA, mRNA, and tRNA so that they can carry out the central dogma of molecular biology – DNA goes to mRNA goes to protein (via tRNA and rRNA) – but they have so much more. Just as there are many kinds of information storage at the LOC – books, images, recordings, manuscripts, pamphlets – there are different kinds of nucleic acids as well.
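For the programmers in the audience, the book-copying flow of the central dogma can be sketched in a few lines of Python. This is a toy illustration of my own, not anything from a real genome browser: real cells use the full 64-codon table and read the template strand, while this sketch uses a handful of codons and the coding-strand shortcut:

```python
# Toy central dogma: DNA -> mRNA (transcription) -> protein (translation).
# Only a few codons are included, purely for illustration.
CODON_TABLE = {
    'AUG': 'Met', 'UUU': 'Phe', 'GGC': 'Gly', 'AAA': 'Lys',
    'UAA': 'STOP', 'UAG': 'STOP', 'UGA': 'STOP',
}

def transcribe(coding_strand):
    """mRNA carries the coding strand's sequence with U in place of T."""
    return coding_strand.replace('T', 'U')

def translate(mrna):
    """Read codons three bases at a time until a stop codon appears."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == 'STOP':
            break
        protein.append(amino_acid)
    return protein

# The "page copy" AUGUUUGGCUAA is translated to Met-Phe-Gly.
print(translate(transcribe('ATGTTTGGCTAA')))
```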

Ever hear of small nuclear RNAs, or micro RNAs, or plasmid DNAs for that matter? We have talked about plasmids as extrachromosomal pieces of DNA that can code for genes, especially antibiotic resistance genes in prokaryotes.

But the list of RNAs is far more impressive. There are regulatory RNAs that control gene expression (whether or not a protein is made from a gene), RNAs that control modification of other RNAs or work in DNA replication. There are even RNAs that are parasitic, like some viral genomes (RNA viruses) and retrotransposons.

Of these, retrotransposons may be the most interesting. A transposon is a piece of DNA that can jump around from place to place in the chromosomes of a cell. Barbara McClintock won a Nobel Prize for showing that transposable elements were responsible for the different colors of maize kernels.


Ancient viral RNA got inserted into plant and animal genomes. The
retrotransposon can be transcribed to mRNA, and then could be
reverse transcribed back into DNA or translated into protein. The
DNA can then insert itself anywhere in the genome. Since several
mRNA transcripts can be made from one transcribed retrotransposon,
and since several pieces of DNA can be reverse transcribed from just
one mRNA, we have the potential for millions of retrotransposons in
the genome – and that’s exactly what we have found. The bottom
cartoon shows HIV. Since reverse transcription makes more mistakes
than DNA replication, many more mutants can be produced. This is
one reason HIV is so hard to treat – it’s always changing.
Retrotransposons use the library analogy to fill the shelves with hundreds of copies of themselves. If plant nuclei were like libraries, up to 80% of their book pages would be retrotransposons!

In and of themselves, retrotransposons represent an exception in nucleic acids. They are mRNA sequences that can turn back into DNA. Transcription is the process of using DNA to produce an mRNA, so going the opposite direction is called reverse transcription. This is also what retroviruses like HIV do.

In the case of retrotransposons, the copies held in the chromosome will be transcribed to an mRNA, and some of those copies might be translated into protein. Other copies will be reverse transcribed back to DNA by an enzyme called reverse transcriptase and will insert themselves somewhere in the genome (see picture).

In this way, retrotransposons can make more copies of themselves and end up all over the chromosomes of the organism. Mutation occurs at a higher rate in reverse transcription than in DNA replication because reverse transcriptase makes more mistakes than replication enzymes. This is why HIV is so hard to treat; it mutates so often that drug design can’t keep up with the changes in the viral proteins.
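Some quick back-of-the-envelope Python shows why an error-prone copier generates diversity so fast. The error rates and genome length below are ballpark figures commonly quoted for illustration, not measurements from any study cited here:

```python
# How likely is a single complete copy of a genome to contain
# no mutations at all, given a per-base error rate?
def error_free_probability(error_rate_per_base, genome_length):
    """Chance that one full-length copy picks up zero errors."""
    return (1.0 - error_rate_per_base) ** genome_length

HIV_GENOME = 9000  # the HIV RNA genome is roughly 9-10 kb

# Ballpark rates: reverse transcriptase ~1 error per 10,000 bases;
# a proofreading DNA polymerase closer to 1 per 1,000,000,000.
p_rt   = error_free_probability(1e-4, HIV_GENOME)
p_repl = error_free_probability(1e-9, HIV_GENOME)
```

With these numbers, well over half of all reverse-transcribed copies carry at least one mutation, while a proofreading replication enzyme almost never slips – which is the arithmetic behind HIV’s moving-target problem.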

So how can the same mRNA sometimes be translated, and other times end up in a new place on the DNA? A 2013 study has investigated how one type of retrotransposon manages these different outcomes. The BARE retrotransposon of plants has just one coding sequence for a protein, but the study results show that it actually makes three distinct mRNAs from this one piece of DNA.


Sam Kean is the author of The Violinist’s Thumb, a very readable
book on molecular biology. He goes through how fruit flies were
recruited to disprove DNA heredity and ended up as the strongest
evidence for it; how DNA is linked very strongly to linguistics and
math; and how Stalin tried to breed a race of half human - half
chimps. This is in addition to showing how most DNA on Earth is
descended from viruses.
One transcript (mRNA) is modified so it can be translated but cannot be reverse transcribed. The second transcript is packaged in small bundles to be reverse transcribed later back to DNA. The third transcript type is smaller and actually houses the bundles of mRNAs to be reverse transcribed. So this retrotransposon balances itself between making protein and inserting itself into new places in the genome.

If plants have so much nucleic acid in the form of retrotransposons, could these be the remnants of ancient viral infections? You betcha, and it doesn’t stop with plants. In his fascinating book, The Violinist’s Thumb, Sam Kean lays out a compelling argument that most human DNA is actually just viral nucleic acid remnants, much of it being mutated versions of old RNAs.

Old RNA is probably the best way to describe all nucleic acids, because the generally accepted view of the evolution of life on Earth is that everything started with RNA. This is called the RNA world hypothesis, and it proposes that the job that DNA does now was first done by RNA.

The hypothesis also says that what protein enzymes now do – cutting things up, putting things together, and modifying existing structures – was originally done by RNAs as well, called catalytic RNAs.

We have evidence for this hypothesis; specifically, we know of many RNAs that have enzymatic activity. Called ribozymes (a cross between ribo for RNA and zyme for enzyme), some RNAs carry out enzymatic roles in our cells and in the cells of every eukaryote and prokaryote ever analyzed for their presence.


Ribozymes, a form of catalytic RNA, are present in most cells. They come
in two flavors based on what someone thought their secondary structure
looked like – the hammerhead or the hairpin. Scientists aren’t the most
imaginative when it comes to naming things. They both sit down on an
RNA where they recognize their specific sequence, and make a cut in the
strand. In the cartoon, N stands for any nucleotide, and X stands for
unknown. On the right side is a diagram showing how one ribozyme can
act again and again to cleave RNAs.
So now we are aware of two exceptions when it comes to the central dogma of molecular biology and RNA – 1) RNA can be converted back into DNA and 2) RNA can act like a protein enzyme.

One essential ribozyme function is the synthesis of protein. The ribosome (a ribonucleoprotein, because it is made up of many RNAs and proteins) translates the codons of mRNA into a sequence of amino acids. It uses the RNA to link the individual amino acids together via peptide bonds. I’d say that’s essential.

Other ribozymes work on themselves. Many mRNAs, when first copied from DNA, have sequences within them that are not used in the final product. These are called intervening sequences (or introns), and are cut out (spliced) as part of the transcript processing. Group I and II introns are self-splicing. They fold over on themselves and cause their own excision from the RNA of which they are a part!

Group I introns can be found in the mRNAs, rRNAs, and tRNAs of most prokaryotes and lower eukaryotes, but the only places we have found them so far in higher eukaryotes are the introns of plants and the introns of mitochondrial and chloroplast genomes. Yet more evidence for the plastid endosymbiosis hypothesis.

If the RNA world hypothesis is to be strengthened, we must find a catalytic RNA that can replicate long strings of RNA “genes.” If RNA was both the storage material and the enzymatic material, there must have been an RNA-dependent RNA polymerase that was itself a piece of RNA. An RNA replicase has not been found, probably because life moved on to using DNA as the long-term repository of genetic information. But we should be able to make an RNA replicase as a proof of concept.


The RNA world hypothesis is an idea of how early life on Earth transmitted
information and carried out functions. RNA did everything, stored info.,
replicated itself, and carried out enzymatic activity. A – E represent a
possible sequence, although no times can be assigned yet. According to this
theory – the last thing that developed was enzymatic proteins – but new
evidence suggests that proteins were important for the development of
tRNAs so they must have been around earlier. Step B is an area of interest,
as scientists are trying to make an RNA that could replicate any RNA, even itself.
A few ribozymes can polymerize a few nucleotides into short RNAs. The problem is that we need to show that there is an RNA that could replicate long strings of RNA that could then go on to have biological function. Until 2011, the best we’d produced was a ribozyme (called R18) that could polymerize just 14 ribonucleotides.  

Then a study was published showing that a modification of R18 could synthesize much longer strings and could replicate many different RNA templates. In this publication, the authors could synthesize ribonucleic acids of 95 bases, almost as long as the R18 replicase itself. Another study has shown that some catalytic RNAs can self-replicate at an exponential rate, making thousands of copies of themselves while still having catalytic function.

It seems that the RNA hypothesis is getting stronger, but there remain some hurdles.
A July, 2013 study shows that primitive protein enzymes (called urenzymes, where ur = primitive) activate tRNAs much faster than do ribozymes. These primitive proteins date to before the last common ancestor, so they have been around nearly as long as life itself. tRNA urenzymes suggest a tRNA-enzyme co-evolution, providing evidence that catalytic proteins and the conventional central dogma were important in early life – a result that does not support the RNA world hypothesis. I’m glad – the hunt goes on.

In the next weeks, let’s take a look at nucleic acid structures and their building blocks. Think DNA is double stranded? – not always. Think A, C, G, T, and U are the only nucleotides life uses? – not even close.



Chang W, Jääskeläinen M, Li SP, & Schulman AH (2013). BARE Retrotransposons Are Translated and Replicated via Distinct RNA Pools. PloS one, 8 (8) PMID: 23940808

Li L, Francklyn CS, & Carter CW (2013). Aminoacylating Urzymes Challenge the RNA World Hypothesis. The Journal of biological chemistry PMID: 23867455

Ferretti AC, & Joyce GF (2013). Kinetic properties of an RNA enzyme that undergoes self-sustained exponential amplification. Biochemistry, 52 (7), 1227-35 PMID: 23384307


For more information or classroom activities, see:

Nucleic acids –
Central dogma of molecular biology –

Types of RNA –

Retrotransposons –

RNA world hypothesis –

Catalytic RNA (ribozymes) –

DNA is As Easy As A, B, Z

Biology concepts – forms of DNA, history of DNA structure, triplex DNA, tetraplex DNA, protooncogene


From left to right we have the main players in our little discussion
of how science can be done without experimentation. Francis Crick,
the man who never met a comb; James Watson, student of the
greatest scientists of the time; Rosalind Franklin, denied a Nobel
Prize because she died from cancer at the age of 36; Maurice
Wilkins, Franklin’s colleague, whom she treated like a red-headed
stepchild; and Linus Pauling, who won the Nobel Prize in Chemistry
for his work on chemical bonds and protein structure, and the
Nobel Peace Prize for his work against nuclear weapon proliferation.
Not every scientific success is driven by experimentation, trial, error, and eventual triumph. Sometimes, you just need to be paying attention. Oh, and be a little devious.

James Watson and Francis Crick, along with Maurice Wilkins and Rosalind Franklin, worked out the structure of DNA in 1953. Years before, Oswald Avery had shown that nucleic acids alone could be transferred between organisms to change their phenotype. This meant that it was the DNA, not the proteins, that was responsible for heredity.

Watson and Crick wanted to identify DNA's structure because, as is the case so often in biology, knowing the structure is crucial to knowing the function. The way DNA was put together would give clues as to how it passed on information to the daughter cells.

Watson and Crick were the exception in that they didn’t really do any of their own experimentation in this quest for the structure of DNA. They built some models based on other peoples’ data, and made some great insights that led them to the truth.

We know now that DNA is a double helix, but in the early 1950’s, no one had any idea about this. Rosalind Franklin had a structural picture of DNA that, when shown to Watson by Wilkins, immediately caused him to know that DNA was a helix.

The constituents and order of the building blocks of DNA (nucleotides) had been worked out by organic chemist Alexander Todd in the 1940s – phosphate group, sugar, base. But were the phosphates on the inside of the helix or were the bases? And how were the different strands held together?

Watson and Crick thought DNA might be a triple helix. Don’t laugh, so did other eminent scientists, including Linus Pauling, the Wizard of Cal Tech, who would eventually be awarded not one, but two Nobel prizes.


Alexander Todd showed that a nucleotide was composed of a
phosphate, a ribose or deoxyribose sugar, and a nitrogenous
base – in that order. This was some elegant work showing which
moiety was bound to which. Pauling used this information to
construct his triple helix model (right image), but he had the
phosphates on the inside, and suggested that they were held
together by hydrogen bonds – wrong on both counts. By the way,
a nucleoside is just the same structure minus the phosphate group.
The other thing Watson and Crick had was an unwitting spy. Linus Pauling’s son Peter was a recent addition to the lab at Cambridge. Peter became friends with Crick and Watson. Through Peter, they knew Pauling was also working on a triple helix; they read his manuscript and knew it was flawed.

About this same time, Crick and Watson were reminded of Chargaff’s conclusion that each cell contained the same amounts of A (adenine) and T (thymine), as well as the same amounts of G (guanine) and C (cytosine). Jerry Donohue, another addition from the land of Linus Pauling who liked to flap his lips, pointed out that A could bind to T through their hydrogens, and G could base pair with C.

You notice that nowhere have we talked about Watson and Crick’s data; so far they had only built a triple helix model with the phosphates in the center – and it was really wrong.

The final piece of the puzzle was a May, 1952 X-ray crystallography image of DNA made by Rosalind Franklin that Maurice Wilkins showed to Watson. Immediately, this image put to rest any doubts that DNA was a helix, and it gave accurate measurements for how wide the molecule was and the distance between complete turns.

Using this data, Watson and Crick returned to their model making and solved the puzzle in short order (by March, 1953). Their April 1953 paper was an exception in itself; it was only one page long. It contains the most understated sentence in the history of science since Alexander Fleming said, “Hey, all my bacteria are dead.”

The consistent base pairing of A and T or G and C led them to write, “It has not escaped our notice that the specific pairing we have postulated suggests a possible copying mechanism for the genetic material.” All this meant was that they realized that the DNA structure was a perfect explanation for how it replicates so that the genetic information is passed on to each new generation. Ho hum.
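The pairing rule is simple enough to check in a few lines of Python (my own toy example, not anything from the 1953 paper): Chargaff’s equal amounts of A and T, and of G and C, fall straight out of complementary base pairing.

```python
# Watson-Crick base pairing: A with T, G with C.
PAIRS = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def complement(strand):
    """The partner strand, read in the opposite (antiparallel) direction."""
    return ''.join(PAIRS[base] for base in reversed(strand))

# Put a strand together with its complement and count the bases:
# Chargaff's ratios appear automatically.
strand = 'ATTGCCGAC'
double_strand = strand + complement(strand)
```

Whatever sequence you start with, the double-stranded total always contains as many A’s as T’s and as many G’s as C’s, which is exactly what a “copying mechanism” needs.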

So that’s it - DNA is a double helix molecule with the bases on the inside. Well, not quite. I can think of many exceptions to these rules, but let’s talk about just a couple or three. DNA comes in at least three different double helices, A, B, and Z. This was apparent early in the studies of DNA, but only molecular biologists ever remember it.


On the left are the three forms of DNA most often encountered. You
see that the A form is more compact, has a deeper major groove and
a smaller rise for each turn as compared to the B form. The Z form
occurs in the middle of the B form, when special repeats of bases are
found. On the right are two X-ray images of DNA crystals. The left one
is the A form, and the right one is the B form. The A form, being more
compact, gives a poorly resolved image. You can’t blame Rosalind
Franklin for opining that DNA wasn’t a helix based on the A form
images that she first produced.
The B form of DNA is the one we see most often in biological systems. It is a right-handed helix with a major and minor groove that allows proteins good access to the DNA. On the other hand, the A form of DNA is more compact and occurs only when water is scarce. This doesn’t mean that your DNA changes form when you are dehydrated, it means that the A form is seen mostly in underhydrated crystals of DNA in the laboratory.

The A form was the first form to be imaged by Rosalind Franklin. Since it was compact, it gave a muddied X-ray image, as seen in the picture. Information on water content and the length of each turn was also confusing, and threw off Watson and Crick for a while. The aha! image that Watson got a look at was Franklin’s attempt at B DNA, and it gave him all the information he needed to finish the model.

Z DNA does occur in nature, but usually not as the sole form of DNA. When certain runs of bases are encountered (alternating purine/pyrimidine repeats, called CpG repeats), and when the salt concentration in the region is high, the DNA can locally switch to a left-handed helix. What we usually see is a B-Z-B region.


DNA computing is not a completely new idea. In 2003 and again
in 2010, Israeli scientists built chips that computed using DNA –
about 330 trillion operations/second!  It all started in 1994, when
a California scientist stored information in DNA to do a simple
math problem. The new study using carbon nanodots will allow
for even greater and faster computing power via light.
Z DNA is turning out to be more important than once thought. Z forms are important in transcription, as the reading of DNA to produce mRNA often induces a transient switch to the Z form. Z DNA is also important for the regulation of certain genes, including some genes important in preventing cancer. There exist some proteins that specifically bind Z-DNA in order to regulate the transcription of these genes.

Using this tendency of B DNA to switch to Z-DNA under the right conditions, some researchers are using carbon nanodots to create optical logic gates; they light up if bound to Z form, and don’t if bound to B form. By controlling the conditions and the sequence of small runs of DNA, you can turn the lights on and off, similar to the 1’s and 0’s of computers. You get it now – this has the potential to become a DNA-based nano-computer!

The A, B, and Z forms of DNA aren’t the only exception to the structure of this nucleic acid. These three are all double helices, but that doesn’t mean that all DNA exists as a double helix.

Some DNA is single stranded (ss). In every cell of every organism there is transient formation of ssDNA when DNA is replicated, transcribed, recombined, and repaired. ssDNA is also seen in some viruses, the best known and first discovered of these being the parvovirus.


Single stranded DNA viruses enter a cell and their ssDNA becomes
double stranded by using the host’s replication apparatus. Then the
dsDNA is transcribed to make both types of proteins needed to make
more viral particles. Meanwhile, one strand of the dsDNA is copied to
ssDNA again so it can be packaged into the viral particle. On the right
is the result of one ssDNA virus, parvovirus B19. It causes slapped
cheek syndrome, with a skin rash on the trunk and limbs as well.
Parvovirus B19 causes a common childhood disease called fifth disease (erythema infectiosum, or slapped cheek syndrome). The common name comes from the fact that it is categorized as the fifth of the childhood skin rash diseases – measles, German measles, scarlet fever, and another bacterial infection that has been dropped from the list.

Fifth disease is usually self-limiting, but new evidence is suggesting that there can be long term ramifications of a B19 infection. In 2013 alone, case studies have been published linking parvovirus B19 to acute kidney infections, neurologic complications, muscle cell death, and a purple tissue swelling called Wells Syndrome. All from a single strand of DNA.

On the other hand, some DNA exists in the form of three intertwined strands, called triplex DNA. Often used in the laboratory to manipulate gene expression, triplexes also form in cells on their own. The same protooncogene (c-myc) that we referred to in the section on Z DNA also has areas that form triplex DNA and work to control how much protein is made from this gene.

The top left image shows how a duplex of DNA can rearrange to
form a quadruplex. The top right cartoon shows how this
tetraplex looks, forming three planar squares of interacting
bases. The bottom image shows how a tetraplex unit can form
within a dsDNA. This often occurs in the end pieces of our
chromosomes (telomeres) and in the regulatory parts of
some genes.

Admit it, you laughed at Watson, Crick, and Pauling when you discovered that at first they all thought DNA was a triple helix. Who’s laughing now? Of course they still got the orientation wrong, with the phosphates on the inside. If you feel the need, go ahead and snicker at the guys with the three Nobel Prizes. How many have you got?

A newer discovery is quadruplex DNA; four strands come together to form a rectangle-like structure, where four bases bond together. It has been known for a few years that these complexes exist in the telomeres of mammals. Telomeres are on the ends of chromosomes and need special consideration to be replicated and preserved. The quadruplex structures aid in the preservation of our chromosome ends. This is important, as dysfunctions in telomere replication are thought to be responsible for up to 85% of cancers.

Quadruplex structures are also being predicted and seen outside the telomeres. A new study used an antibody that recognizes quadruplex DNA to visualize and quantify these structures in living human cells. Their data shows that many DNA quadruplexes are associated with cell cycle progression, suggesting that manipulating them could become important in cancer treatment. And like clockwork, evidence also shows that the c-myc protooncogene forms quadruplexes as well – is there any structure this gene won't form?

Next week we can continue our look at nucleic acids by looking at the exceptions to the rules of the building blocks, nucleotides. It’s not quite as easy as uracil (U) for RNA and thymine (T) for DNA. And why is U used only in RNA anyway?



Biffi G, Tannahill D, McCafferty J, & Balasubramanian S (2013). Quantitative visualization of DNA G-quadruplex structures in human cells. Nature chemistry, 5 (3), 182-6 PMID: 23422559

Feng L, Zhao A, Ren J, Qu X. (2013). Lighting up left-handed Z-DNA: photoluminescent carbon dots induce DNA B to Z transition and perform DNA logic operations. Nucleic Acids Research DOI: 10.1093/nar/gkt575



For more information or classroom activities, see:

Search for the structure of DNA –

DNA activities –

Forms of DNA –

Triplex DNA –



The Language Of Our DNA

Biology concepts – nitrogenous base, nucleoside, nucleotide, DNA, RNA, second messengers, G protein coupled receptors, cAMP, cGMP, cyclic dinucleotides


Grammar isn’t easy. Small changes can lead to large
differences in meaning. It is like this with the terminology
in molecular biology as well. TK is a tyrosine kinase, while
TK1 is a thymidine kinase. Thymine is a nitrogenous base
of DNA while thiamine is a vitamin. Not knowing the
difference can keep you from that PhD you’ve been wanting.
It may be that English grammar is the only subject that can approach the number of exceptions one finds in biology. When do you use “who” instead of “whom”? “Its” only gets an apostrophe when it isn’t possessive; I before E except after C; plural nouns add “s” but you take away the “s” to make a plural verb; their vs. there vs. they’re. It’s exasperating – English grammar should be taught only to those over 25 years of age, when one is mature enough to handle the stress.

Today we are going to talk about the building blocks of DNA and RNA – they can be as confusing as grammar. Terms and structures will look and sound similar, but their functions are very different. We’ll try to minimize the confusing details and maximize the amazing differences.

The basic building block of a nucleic acid is the nucleotide. This is a complex molecule made up of one or more phosphate groups, a ribose or deoxyribose sugar, and one of five nitrogenous bases (A, C, G, T, or U – those are the 5 – for now). Already it's a little confusing, but we can add more complexity; if you have just the base and sugar, it is called a nucleoside, not a nucleotide. Let's use one base as an example.

Adenine (A) is the name of one nitrogenous base. If it is bound to a ribose, it is called adenosine; if it is bound to deoxyribose, it is called deoxyadenosine (dA). If you add a phosphate, you get the nucleotide, but the name depends on how many phosphates: one phosphate = adenosine monophosphate (AMP) or deoxyadenosine monophosphate (dAMP); 2 phosphates = adenosine or deoxyadenosine diphosphate (ADP or dADP); 3 phosphates = the triphosphate (ATP or dATP).

The other nitrogenous bases use the same system – mostly. Cytosine (C) and guanine (G) form cytidine or guanosine nucleosides or nucleotides. The exceptions are thymine (T) and uracil (U). T is formed from dUMP by adding a methyl (-CH3) group, but not from UMP. Therefore, you don't really find thymidine, only deoxythymidine. Since they know it only comes in one form, scientists go ahead and call it thymidine - thanks a lot.
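If this naming scheme sounds like grammar rules, that's because it behaves like them – regular enough to write down as a lookup. Here's a toy Python sketch of the rules above (the function and table names are my own inventions, purely for illustration):

```python
# Toy sketch of the nucleoside/nucleotide naming rules described above.
NUCLEOSIDES = {  # base -> nucleoside name
    "adenine": "adenosine",
    "guanine": "guanosine",
    "cytosine": "cytidine",
    "uracil": "uridine",
    "thymine": "thymidine",  # the exception: effectively always the deoxy form
}
PHOSPHATES = {0: "", 1: " monophosphate", 2: " diphosphate", 3: " triphosphate"}

def name(base, deoxy=False, phosphates=0):
    nucleoside = NUCLEOSIDES[base]
    # thymidine only exists as the deoxy form, so "deoxy" is left implicit
    prefix = "deoxy" if deoxy and base != "thymine" else ""
    return prefix + nucleoside + PHOSPHATES[phosphates]

print(name("adenine", phosphates=3))              # adenosine triphosphate (ATP)
print(name("adenine", deoxy=True, phosphates=1))  # deoxyadenosine monophosphate (dAMP)
print(name("thymine", deoxy=True, phosphates=1))  # thymidine monophosphate
```

Notice that the thymine exception lives in a single special case – just like the prose version.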


As alluded to in the text, base plus sugar equals nucleoside. Add
a phosphate, or two, or three and you have nucleotides. The sugar
can be ribose or deoxyribose, the difference being the OH at the
second carbon position. On the right are the possible bases; the purines
have two rings, the pyrimidines have one. Notice how adding a
methyl group to uracil makes thymine or how taking away an
amine group from cytosine makes uracil. These will be important later.
In addition to the modification of U to make T, there is the removal of the 2’-OH to make deoxyribose out of ribose. This removal is made after the nucleoside is formed. Together, the modification of U to dU and the modification of dU to dT are strong evidence that RNA predates DNA and supports the RNA world hypothesis that we talked about two weeks ago.

We said above that nucleotides are the building blocks of DNA and RNA. Specifically, it's the triphosphate nucleotides (NTP or dNTP, where N means any of the bases) that are used for incorporation into the growing chains of RNA and DNA. The energy for the bond comes from releasing two of the phosphates, so the nucleotides in DNA and RNA are bonded through one phosphate linkage.

The building of nucleic acids comes from pools of NTPs and dNTPs in the cell. Evidence shows that the pool of dNTPs is about 1/10 that of NTPs. This means that there are only enough dNTPs in the cell to support DNA replication for about 30 seconds. This implies that it's the rate of turning NMPs into dNMPs (then to dNTPs) that controls things like cell cycle and cell division; no replication of DNA, no division.
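To illustrate the arithmetic behind a figure like that 30 seconds, here is a back-of-the-envelope sketch. Every number in it is an invented placeholder chosen to land at 30 seconds, not a measured value:

```python
# Back-of-the-envelope: how long would a fixed dNTP pool last during
# replication? All numbers below are invented placeholders for
# illustration only, not measured cellular values.
dntp_pool = 1.5e7      # free dNTP molecules available (assumed)
forks = 10_000         # active replication forks (assumed)
rate_per_fork = 50     # nucleotides incorporated per fork per second (assumed)

seconds = dntp_pool / (forks * rate_per_fork)
print(f"pool exhausted in ~{seconds:.0f} s")  # ~30 s with these toy numbers
```

The point of the exercise: the pool is tiny relative to the demand, so the enzyme that refills it ends up holding the keys to cell division.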


Ribonucleotide reductase turns NDPs into dNDPs. It is well
controlled; the catalytic site is where the reaction takes
place, so the NDP goes there. The activity site requires an
ATP to activate or a dATP to inactivate the enzyme (this
keeps the dNTP levels in check). The specificity site says
which NDP can be acted on. When dATP or ATP is bound
at the specificity site, the enzyme accepts UDP and CDP
into the catalytic site; if dGTP is bound, ADP can be acted
on; if dTTP is bound in the specificity site, GDP enters the
catalytic site.
The concentration of dT is especially important, since it only comes from modifying dU. If you add some extra thymidine to cells, they will think that they have enough dNTPs. This turns off the enzyme (ribonucleotide reductase) that converts NDPs to dNDPs. As a result, you won't have enough dNTPs to make DNA and the cell will just stop.
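The allosteric rules from the caption above are regular enough to write as a lookup table. This is just a sketch of the rules as stated, not a model of the real enzyme:

```python
# Ribonucleotide reductase allosteric logic, as described in the caption.
# Activity site: ATP switches the enzyme on, dATP switches it off.
ACTIVITY = {"ATP": "active", "dATP": "inactive"}

# Specificity site: which bound ligand admits which NDP substrate(s)
# into the catalytic site.
SPECIFICITY = {
    "ATP":  {"UDP", "CDP"},
    "dATP": {"UDP", "CDP"},
    "dGTP": {"ADP"},
    "dTTP": {"GDP"},
}

def can_reduce(activity_ligand, specificity_ligand, substrate):
    """True if the enzyme is switched on AND the substrate matches
    whatever is bound at the specificity site."""
    return (ACTIVITY[activity_ligand] == "active"
            and substrate in SPECIFICITY[specificity_ligand])

print(can_reduce("ATP", "dTTP", "GDP"))   # True: on, and dTTP admits GDP
print(can_reduce("dATP", "dTTP", "GDP"))  # False: dATP shuts the enzyme off
```

The dATP feedback is the elegant part: the enzyme's own end products throttle it, which is exactly why flooding cells with thymidine stalls them.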

Uses for the nucleotides A, G, and C besides inclusion in DNA or RNA are more apparent (nature hates unitaskers). ATP should be near and dear to all our hearts - all our organs, for that matter. ATP is the energy currency of the cell. The energy released when two phosphates are lost as a nucleotide is incorporated into a growing nucleic acid is the same energy released when ATP is hydrolyzed to ADP during an enzyme reaction or the relaxation of a muscle.

An adenosine variant, called cyclic AMP (cAMP) is just as crucial as any other biomolecule you can name. An uncountable number (O.K., I’m sure someone knows) of cellular reactions are regulated by the levels of cAMP in the cell.

Cyclic GMP is a signaling compound similar to cAMP. Each controls a varied number of regulatory pathways and second messengers to convey information in the cell. There are also cyclic dinucleotides. Bacteria use c-di-AMP and c-di-GMP as second messengers. This has been known for some time, but a new study shows that these cyclic dinucleotides stimulate specific inflammation in a mammalian host by triggering production of the proinflammatory molecule IL-1beta – via a completely new pathway. These are most definitely important molecules outside the nucleic acids.


cAMP and cGMP are single nucleotides in which the phosphate group
binds to the sugar at two points – it circularizes. Just because they
aren’t shown here, don’t think that cUMP or cCMP don’t exist – they
do, and they are second messengers too. In the case of the cyclic
dinucleotides, the phosphate of each nucleotide is joined to two
different sugar molecules. It is still circular, but in a way that
involves both nucleotides. The cGMP and cAMP are used in higher
organisms, the c-di-GMP and c-di-AMP are used by bacteria for
various operations, everything from gene regulation to virulence.
Cyclic di-GMP may be important for secondary signaling, but GTP and GDP also get into the game. G protein coupled receptors start many of the second messenger systems. There are many types of G protein coupled receptors, but that will have to wait for another day.

CTP can act as an enzyme cofactor, especially in the production of one of the phospholipids that is most important in biological membranes (phosphatidylcholine). A similar reaction using CTP as a cofactor is the focus of a new study because the product of the reaction is important in the life cycle of the parasite that causes malaria (P. falciparum). The new study shows that the levels of CTP and CDP regulate the efficiency of the enzyme using CTP, so manipulating these levels might be a target for anti-malarial drugs.

Lastly, uridine (U) is important outside of nucleic acids as well. When combined with an adenosine and four (yes, 4) phosphates, it is called uridine adenosine tetraphosphate (Up4A). This dinucleotide has recently been identified as an important controlling molecule in vascular endothelium physiology. It causes a contraction in several types of muscle cells in vessel walls, thereby regulating the tension of the walls, called vascular tone. In this way, Up4A helps manage pressure, and its dysfunction is important in many vascular diseases.

As we discussed a couple of weeks ago, DNA is double stranded and the bases are paired - A with T and G with C. Chargaff first showed that the levels of dG and dC and of dA and dT were always the same in a cell. Donohue then showed that they could base pair by hydrogen bonds.


Different amounts of G+C vs. A+T in regions of DNA lead
to different staining of the chromosome regions. GC regions
are more dense, so some stains are excluded and they show
up unstained. This difference in GC content has functional
consequences as well. High GC areas are more gene dense,
and have regulatory regions as well. A new study shows that
in chickens, high GC regions are associated with regulatory
regions of genes – the higher the GC content, the more
expression from that gene.
If you know how much dG is in a cell, then you know how much dC is there. But this doesn't mean that G+C = A+T. The %GC content is different in different species. P. falciparum is a very low GC organism – only about 20% of the nucleotides of its DNA are G or C – while other prokaryotes are up to 78% GC. See the picture caption for more on this subject.
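Computing %GC from a sequence is about as simple as bioinformatics gets; a minimal sketch:

```python
# Compute the %GC of a DNA sequence. For double-stranded DNA,
# Chargaff's rules (A pairs T, G pairs C) mean the %GC of one
# strand reflects the whole duplex.
def gc_content(seq):
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 100.0 * gc / len(seq)

print(f"{gc_content('ATGCGCATTA'):.0f}% GC")  # 40% GC
```

Run that over a whole genome and you get the species-level numbers quoted above, from P. falciparum's ~20% up to the high-70s in some prokaryotes.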

So DNA has dA, dC, dG, and dT, while RNA uses U instead of T. Why? Such a simple question, but not many people bother to ask. There is more than one reason, but they’re all related to long-term protection of genetic information.

The cytosine base can be deaminated (removal of an amine group) to form uracil. In RNA, this mistaken identity would lead to an incorrect translation or perhaps a loss of function of a structural RNA. Fortunately, these are short-term problems because each RNA is short lived. But if U was used in DNA, then how would the repair enzymes know which U’s were correct and which were actually deaminated C’s?

Since dT is used in DNA instead of dU, any dU must be a deaminated C and should be replaced. If it were allowed to remain, then an incorrect U would be copied as an incorrect A (U is like T because it pairs with A) and this would be forever kept in the DNA - a permanent mistake. Not good.
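That logic – any U found in DNA must be a deaminated C – can be sketched as a trivial scan. This is a cartoon of what the repair machinery accomplishes, not of how it actually works:

```python
# Cartoon of uracil repair in DNA. Because DNA uses T instead of U,
# any U encountered must be a deaminated C, so it is always safe to
# swap it back. In RNA this rule would be impossible to apply, since
# U is a legitimate base there.
def repair_deaminated_cytosines(dna):
    return dna.replace("U", "C")

damaged = "ATGUCGTA"  # the U here arose by deamination of a C
print(repair_deaminated_cytosines(damaged))  # ATGCCGTA
```

The one-liner is the whole point: using T gives the cell an unambiguous error signal, which is exactly what a long-term storage molecule needs.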

Second, uracil forms a stable product when damaged by radiation, while radiation damage to T’s can be detected and replaced by repair enzymes. So again, using dT in DNA leads to a more stable, more protected, long-term storage molecule.

A third reason for dT in DNA is related to base pairing. U pairs best with A, but it can base pair with G, T, or C. This increases the chances of mismatched pairs in the DNA double strand - not good for keeping information pristine in the long run. Protection against damage is also illustrated by the fact that dT is basically methylated U.


This is a cartoon representation of a tRNA that is charged
with a phenylalanine amino acid.  The different loops are
associated with efficiency of action with the template,
binding to the ribosome, and binding of the amino acids.
The T loop actually contains a T base (at grey arrow), it’s
an RNA, but it includes a T – that’s the very definition
of an exception.
Methyl groups have a tendency to protect the bases from enzymes that break down DNA (nucleases). We will talk about this more next week. So again, using dT in DNA is more protective than using uracil.

Whew, good thing we use U for RNA and T for DNA, right? Well… not always. tRNAs are a huge exception, which we will talk about much more in future posts. Thymidine is found in the T arm or T loop of tRNA; here it is important for binding the tRNA to the ribosome during translation. A DNA nucleotide in an RNA??? What gives?

Remember, T only occurs naturally as dT. T ends up in tRNA by virtue of a modification that methylates a U. Once modified, you can't tell it from any other T – except that now it is bound to a ribose, not deoxyribose. English grammar seems a lot easier by comparison, doesn't it?



Abdul-Sater AA, Tattoli I, Jin L, Grajkowski A, Levi A, Koller BH, Allen IC, Beaucage SL, Fitzgerald KA, Ting JP, Cambier JC, Girardin SE, Schindler C. (2013). Cyclic-di-GMP and cyclic-di-AMP activate the NLRP3 inflammasome. EMBO Rep.

Nagy GN, Marton L, Krámos B, Oláh J, Révész Á, Vékey K, Delsuc F, Hunyadi-Gulyás É, Medzihradszky KF, Lavigne M, Vial H, Cerdan R, Vértessy BG. (2013). Evolutionary and mechanistic insights into substrate and product accommodation of CTP:phosphocholine cytidylyltransferase from Plasmodium falciparum FEBS J. DOI: 10.1111/febs.12282

Rao YS, Chai XW, Wang ZF, Nie QH, Zhang XQ. (2013). Impact of GC content on gene expression pattern in chicken Genet Sel Evol. DOI: 10.1186/1297-9686-45-9





Post of the Living Dead

Biology concepts – ethnobotany, pathophysiology of diseases, zombies?


You know you have a phenomenon when you can knit a
zombie. These you can take apart and reassemble, just
like real zombies. Knitting a zombie somehow weakens
the argument that we use them to address our deep-seated
fears; these might show up at a baby shower.
Are there movies made these days that don’t have zombies in them? How did I miss the zombie aspect when I studied Lincoln and the Civil War? University classes have been developed to discuss the social implications of zombie life and why they have become so prevalent in our media.

Some “experts” believe that zombies hold fascination for us because of our need to address fears without directly confronting them. Death, the breakdown of social order, chaos, perhaps science run amok – these are all found in the zombie stories.

In fact, it could be proposed that all these fears stem from the need for expected outcomes and a sense of finality. After you die, who cares about control, order, or even diet – but what if you were condemned to be undead? What if everyone else was undead and you had to deal with their lack of finality? We watch zombie movies to deal with our own fears on a subconscious level.

Did you know that zombie legends may have some basis in truth? In the early 1980's an ethnobotanist from Harvard named Wade Davis claimed to have found two potions used to create zombies.

In Haiti, the fear of zombies has been present for many years. Based in African folklore and maintained by the descendants of slaves that revolted and remained in Haiti, witch doctors (bokors) are very real sources of reverence and fear for the average man and woman. Just how the bokors managed to create zombies was the question asked by Dr. Davis.

Ethnobotanists are scientists that study different
civilizations (ethno-) for how they use plants (-botanist)
in medicine and culture. Mark Plotkin spent years with
tribes in the Amazon studying the use of plants by the
medicine men. After returning with many plants from
which to extract compounds, Plotkin started Shaman
Pharmaceuticals to test the compounds for medicinal uses.
Plotkin now runs an Amazon Preservation organization.

Davis claimed to have found a powdered potion that could make a person appear to be dead. Based on a toxin called tetrodotoxin, Davis said the powders were most commonly blown into the faces of victims and the toxin would inhibit firing of voltage-gated channels in the nerves. Heart rates would go to near zero and the affected individuals would seem dead.

In many cases, tetrodotoxin doesn't just make people appear dead, it kills them for real. Blue-ringed octopods, several species of fish, and other animals contain this toxin and it will most certainly kill. The sushi dish called fugu is a pufferfish that contains tetrodotoxin. Even when prepared correctly (which takes a government-issued license), the consumer will experience numbness in their tongue and mouth. Davis stated that dosing the victim often resulted in real death; making zombies is apparently not so easy.

After a hasty burial, the bokor would dig up the poisoned victim before true death occurred, and a zombie slave was produced by giving a second potion. This poison was based on a Jimson weed (Datura stramonium) extract, leading to the second name for the plant, zombie cucumber.

Jimson weed extract is a psychoactive agent that leads to stupor and amnesia, but no loss of consciousness. These effects are said to be the basis of the zombie's loss of free will and slow, dragging movements. In such a state, the affected could be turned into slave labor, or used to frighten others into compliance with the bokors' wishes.

Davis’ samples and explanations were not accepted by the majority of the scientific community. The toxins described are very toxic; small mistakes in dosage would most certainly be fatal. And a motive of creating cheap labor is not convincing; most labor in Haiti is cheap.

But don’t try to tell that to the Haitians. They actually have a law forbidding the creation of zombies. Article 249 of the code of laws states, “It shall also be qualified as attempted murder the employment which may be made against any person of substances which, without causing actual death, produce a lethargic coma more or less prolonged.” Scary.

So let’s talk zombies. Some people will get into intricate detail, but let’s confine ourselves to two kinds of zombies. One kind of zombie was dead but is now reanimated by some means – usually some kind of parasite or virus that brings some tissue back to life. The other sort of zombie always remains living, but loses all ability to direct its own thoughts and actions; it is driven only to acquire flesh – usually bbrraains!


The top cartoon shows how the valves in your veins work to
keep blood from pooling in the lower extremities. For veins
above your heart, you don’t need the valves, so they don’t
have them. On the bottom, you can see what happens in the
case of varicose veins, when blood will begin to pool. The
picture on the bottom right shows what happens when
valves go bad (insufficiency). Can you get varicose veins
above your heart – yes, but it usually requires
severe liver damage.
As a biologist, the undead zombie troubles me for several reasons. No heartbeat, little brain activity – just what is moving them and how do they digest the brains they eat? I know that some sites have very intricate back stories as to how they stay upright and mobile, but they don’t hold water scientifically.

If they have no heartbeat, how do they circulate their blood? I saw one website that stated unequivocally that muscle movement (contraction and relaxation) pushes the blood around the body. Ridiculous, right? Not really. You do this every minute of every day. On the artery side of our circulation, the muscle in our vessel walls works with the contraction of the heart to push the blood along, but across the capillary bed, there is little pressure left to push blood through the veins.

So you use the action of your large muscle groups to squeeze the blood in the veins. When you walk, bicycle, or step dance, the contractions push the blood up against gravity. Your veins have one-way valves, so after the blood passes a valve, it won’t flow backwards. If this wasn’t so, all your blood would end up in your feet, and you’d have to buy shoes like Shaquille O’Neal’s.

On the other hand, if we are talking about living dead zombies, then there are many possible biologic causes. Rabies is caused by a virus passed through the bite of an infected animal. The result is an infection that can take one of two forms. In furious rabies, the victim will display erratic behavior, aggressiveness, foaming at the mouth, and will eventually start biting people.

The paralytic form of rabies will cause slow movements, dragging feet, lack of coherent behavior, and foaming at the mouth. The foaming is due to the inflammation of the throat that makes it so painful to swallow that victims will choose to drool and foam instead. Either form of rabies could mimic what we think of as zombies.

The top picture is Cujo, from the Stephen King story of the
same name. The dog was huge, and then caught rabies from
a little field animal. He lost all sense of personal hygiene and
liked to bite anything that moved – just like zombies. Below is
a photomicrograph of the Negri (named for Adelchi Negri)
bodies that appear in the brain tissue of rabies victims. They
stain darkly and are made up of ribonucleoproteins made by
the rabies virus.

Most people believe that you get bit, and then you develop rabies, so it is easy to put cause with effect. But a new study shows that rabies can incubate for decades before symptoms manifest. The man in question showed signs while he lived in the U.S., but the viral strain he harbored is found only in Brazil, a place he hadn't lived in or visited for many years, although he did remember a biting incident in his home country. If the symptoms occur out of nowhere, does that support or negate the idea that he might be a zombie?

If you want an example of living dead, look no further than African sleeping sickness (also called trypanosomiasis). The bite of the tsetse fly transmits the Trypanosoma brucei parasite, which then takes up residence in the brain.

The symptoms get progressively worse, starting with headaches and moving through slurred speech and then to a zombie like state with sleep being impossible at night and staying awake being difficult during the day. They enter a living nightmare, unable to respond or act, until they fall into a coma and die after a few weeks. Sounds like a zombie to me.


African sleeping sickness is an often fatal disease with parasites
invading the brain and doing damage. Survivors are usually
permanently brain damaged. The tsetse fly (below) is the vector
that carries the trypanosome. While it takes a blood meal, it
vomits the organisms into the wound.
Current estimates are that nearly 200,000 people are infected with T. brucei, although new cases are dropping significantly – yea. This is a significant disease that requires diligent public health measures to keep the number of flies under control and to protect the population from the flies. It was once thought that humans had to move into the woodland to meet up with the tsetse flies, but new studies show that the flies do enter buildings.

Recently, a study showed that things thought to be repellent to tsetse flies, such as smoke or human presence in a confined space, did not stop the flies from alighting on humans inside buildings. The number of landings also increased as the temperature increased. Buildings must now be considered a possible venue for disease transmission. But at least with this disease you can just go to sleep, it isn’t like your ear will fall off.

Leprosy (Hansen’s disease), on the other hand, can present some of the bizarre symptoms seen in zombies. The causative organism is Mycobacterium leprae, which has been around for thousands of years. There was no treatment for most of those years; an effective antibiotic regimen was not devised until the 1940’s. Because of this, leper colonies were used for two thousand years to separate the victims from the rest of the population. This is interesting, because it’s relatively hard to catch Hansen’s disease.

On the left is a leprosy victim. Remember that leprosy itself
doesn’t cause loss of body parts, but secondary infections do.
The 9 banded armadillo is one of the very few animals that is
susceptible to leprosy, so it is often used in the laboratory. Below
on the right, current treatments are very effective at alleviating
the swelling and skin lesions associated with the disease. Leprosy
shouldn’t be considered a killing or disfiguring disease anymore.

The disease has a very long course because it is a very slow growing bacterium; the doubling time is about 14 days. M. leprae has a predilection for setting up shop in skin and nervous tissue, and this is the reason it is linked to zombies. Leprosy itself doesn’t cause body parts to fall off, but it does compromise the blood flow and immune reaction in extremities.

Secondary infections can then gain a foothold and cause loss of fingers and such. This tendency for loss of noses, ears, etc. is exacerbated because the organism prefers it a bit cooler, so it tends toward areas with lots of surface area and less blood flow. A new study shows that M. leprae causes damage to the olfactory bulb, affecting the size of the nervous tissue responsible for smell sense. This would be especially bad for zombies, since they are said to act mostly on a sense of smell to attract them to potential victims.

The damage to skin and nerves could additionally contribute to a look of decay about the face and body, and a shuffling gait – also zombie like properties. Even zombies don’t want to catch leprosy.

Next week, can we use rigorous biological analysis to determine if zombies are a form of life? I sure hope so, or it’ll be a short post.



Boland TA, McGuone D, Jindal J, Rocha M, Cumming M, Rupprecht CE, Barbosa TF, de Novaes Oliveira R, Chu CJ, Cole AJ, Kotait I, Kuzmina NA, Yager PA, Kuzmin IV, Hedley-Whyte ET, Brown CM, & Rosenthal ES (2013). Phylogenetic and epidemiologic evidence of multi-year incubation in human rabies. Annals of neurology PMID: 24038455

Vale GA, Hargrove JW, Chamisa A, Hall DR, Mangwiro C, & Torr SJ (2013). Factors affecting the propensity of tsetse flies to enter houses and attack humans inside: increased risk of sleeping sickness in warmer climates. PLoS neglected tropical diseases, 7 (4) PMID: 23638209

Veyseller B, Aksoy F, Yildirim YS, Açikalin RM, Gürbüz D, & Ozturan O (2012). Olfactory dysfunction and olfactory bulb volume reduction in patients with leprosy. Indian journal of otolaryngology and head and neck surgery : official publication of the Association of Otolaryngologists of India, 64 (3), 261-5 PMID: 23998032






The Living Dead - Living Or Dead?

Biology concepts – characteristics of life, cell theory, reproduction, homeostasis, evolution


How could this monster have come from the Walking Dead
TV show? In what way is he/she walking, and while
she is in dire need of a makeover, can you 
consider her dead?
A popular zombie TV show is called "The Walking Dead." The George Romero films call zombies the “living dead.” Can you say that corpse-like people have life? Let’s recall the two varieties of zombies we talked about last week: those that were dead, but have been reanimated (brought back to life), and those that haven’t died specifically, but exhibit zombie behaviors.

In either case, they are moving and eating and moaning, and generally disrupting the social order. So are they forms of life? To answer the question, we first have to ask what it takes to be considered alive. What characteristics of living things separate them from non-living things?

Don’t laugh, it’s not always so easy to tell if something is alive or not. There have been several different systems for defining life, everything from mechanism, which says life is just a special set of chemical reactions, to vitalism, which says life consists of a vital force all its own and doesn’t obey the laws of physics.

Philosophers and scientists throughout history have searched for a single-attribute definition of life – the one thing that separates all life from all non-life – but it hasn’t been very successful. How about, “living things die?” Although it sounds sexy at first, this isn’t a very helpful definition.

Death is just an absence of life, so wouldn’t you need to define life first? Anyway - living things can die, but that doesn’t mean they must die. There is a species of jellyfish, Turritopsis dohrnii, which very well may be immortal. Most jellyfish start out as an immature polyp which develops into a medusa (the bell shape with all the trailing filaments). Then they get old and die.


This is the medusa form of T. dohrnii. The many hairs have
nematocysts to capture prey, while the red area contains
what nervous tissue the animal has, as well as the digestive
system. At any point in its life cycle, it can regress to its
infant form (polyp) and then send out clones of itself and
grow up again. It would be as if you and your parents were
the same age!
But T. dohrnii can revert from medusa to polyp and then start the process all over again. It’s hard to say that they are truly immortal, they can still get eaten or sick, and how would we know if they did live forever; have you ever done anything forever?? Wait - downloading a movie on a 2009 MacBook Pro takes forever.

Think about the number of exceptions to biologic rules we have discovered together in the past couple of years. Do you really think there can be one attribute of living things to which there is no exception, counter-example, or borderline case?

Some scientists say that we can't define what constitutes life because, as of now, we are limited to knowing only life on Earth. Life elsewhere may be completely different. But according to this line of thinking, you could never define life, because you couldn’t ever know if you had found every candidate in the universe.

For now, a list of characteristics that living things all possess and non-living things do not is most appropriate. Some folks use four characteristics, some five, six, seven or eight. I tend to go with a seven characteristic set, because that’s the number of items that most people are able to remember. You think it’s a coincidence that telephone numbers are seven digits?

Let’s take a look at the seven characteristics and see how many are fulfilled by zombies.

Cells – Living things are made of cell(s). This has been well discussed for hundreds of years and we haven’t found any exceptions to this rule. In fact, scientists have extended the idea of cells even further. Called the cell theory, the idea is that life is made of cells, the cell is the basic unit of life, and all cells come from other cells. Anyone disagree?


The top left image is of a piece of cork as Robert Hooke
would have seen it in his microscope. The right top image
is a higher power microscope image of cork. Hooke thought
the empty spaces looked like the empty rooms of monks
(below), so he called them cells. In cork, you are really seeing
the remnants of cells, only the cell walls are left.
Cell theory didn’t come about all at once. Different scientists added different parts, including Rudolph Virchow, the father of modern pathology (pathos = disease, and ology = study of).  The TED video series has a good video about the weird history of the cell theory, including how Virchow’s contribution was probably stolen from someone else.

Organization – Living things are organized at one or many levels. Even a single-celled organism has organization. It has a membrane to keep its insides inside and everything else outside. This is organization. A bacterium (usually) has a single, large circular DNA on which it has coded all its genes. That’s organization.

Multicellular organisms inherently have more organization, since they have cells that must communicate with one another and start to have specialization of function, but all cells must transfer their hereditary material, and this requires great organization.

A new review has started to collate the evidence that this complex organization extends to the nucleus-less bacteria. It seems that bacterial RNA functions are sequestered in specific parts of the cell. These spatial relations seem also to affect the functions of the cell, so even at the lowest level of biologic organization, there seems to be a lot going on.

Biologic organization moves from small to big - cells, tissues, organs, organ systems, organisms. Bacteria jump straight from cell to organism, but we can go further. We can start with individuals and then move to populations, communities, ecosystems, and biomes. Even single celled organisms participate in these levels of organization. How about zombies?

Growth and development – Living organisms increase in size and mass over their life cycle. This may be subtle, like budding yeast producing small offspring that then grow to become the same size as their parent.

Growth can take the form of hypertrophy, where existing cell(s) become bigger, or hyperplasia, where existing cells divide to become more cells. I leave it to you to decide which one occurs in single-celled organisms.


Growth and development includes the increase in cell number
as an organism grows. But cell division isn’t uncontrolled. You go
from 1 cell to about 75 trillion, but there is also a lot of cell
death. In the fetal state, if division was not accompanied by a
whole lot of apoptosis (programmed cell death), a human
newborn would weigh over a ton!
In multicellular organisms, both forms can occur. For instance, your muscles hypertrophy, while your prostate undergoes hyperplasia. Really, most instances of hyperplasia in humans are pathologic.

That doesn’t mean that increasing cell number is always bad in humans. It’s how we develop from kids to adults, how we go from a single celled zygote to an infant. Do child zombies grow up to become adult zombies? I haven’t seen enough zombie movies to render an intelligent opinion.

Energy – Living organisms acquire, store, transduce (change from one form to another) and expend energy. Acquiring energy comes in three primary forms – you gather energy from the sun (photosynthesis), from chemicals (chemosynthesis), or from eating other living things (heterotrophy). Zombies crave the flesh of other humans, so they are heterotrophs, cannibals to be specific.

Using energy means doing work – living things do work. Cells build and break down molecules (metabolism) and use those molecules to do work – produce heat, move, grow, sense and respond. Zombies move, although not very quickly. How slow must you be to end up a zombie meal?

Response – Living organisms have systems in place to maintain optimal growing conditions for themselves, even as the conditions around them change. In scientific terms, we call this homeostasis (homeo = similar to, and stasis = stand still).

Remember in National Treasure when Jon Voight said, “maintain the status quo.” That’s homeostasis in a nutshell. However, the processes to accomplish this can be quite complex. Just think about how many things must happen for you to try and stay cool in the heat or stay warm in the cold. Shivering alone is a very complex process of mini-spasms in your muscles.


Maintaining your blood glucose is one form of homeostasis.
Using energy lowers blood glucose, eating raises it, but you
need it to be steady. Hormones like glucagon raise the levels of
glucose in the blood, while insulin lowers it. Ghrelin and
somatostatin work on your hunger and all these work through
several different cell types in your digestive tract; alpha, beta,
delta, and epsilon cells are all involved.
A new study has begun to examine how your body weight is maintained whether you take in too many calories or too few. Your set point weight will be maintained until too much change has occurred over time and a new set point is established. This study found that many organs are involved, controlled by the brain, in the effort to maintain a weight even when too much energy has been taken in. This neuronal pathway may be a new way to manipulate weight gain and loss.

Reproduction – Reproduction could mean a couple of different things. It could refer to the replication of DNA in each cell, with the passing on of hereditary material to each of the daughter cells.

In some organisms, replication and reproduction (budding or binary fission) occur together. This is most often asexual reproduction, but yeast are single-celled organisms that can exchange some DNA in a sexual mode and then divide to produce two offspring.

Do you think zombie cells replicate and form two daughter cells? If so, they do a lousy job of it. Zombies look so decayed and lose so much tissue that I find it hard to believe that they are replacing lost cells by mitosis. How about reproduction as defined by producing more versions of the parent?

For many organisms, sexual reproduction is how they produce more individuals of their species. Sexual reproduction requires replication by mitosis in all cells of the organism, as well as meiosis to produce gamete cells (sperm and ova). I don’t know if zombies undergo sexual reproduction – and I’m not going to ask.


Zombie production is more like bacterial conjugation than
like reproduction. One bacterium possesses a characteristic
coded for by DNA. It joins with another bacterium and
transfers a copy of the different characteristic. Now both
bacteria possess that trait. Sounds like one zombie biting a
person and creating another zombie from them, doesn’t it?
In the infectious disease form of zombies, where the dead body is reanimated by the bacteria, virus, or parasite growing within them, they make more zombies by biting someone and passing on the infection. And they bite two friends, and they bite two friends, and so on…. The population grows geometrically.
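That “bite two friends, and they bite two friends” arithmetic can be sketched in a few lines of code. This is a toy model, not epidemiology: it simply assumes every new zombie bites two fresh victims each generation.

```python
# Toy sketch of geometric growth in a zombie "infection".
# Assumption (not from the text): each new zombie bites exactly
# two new victims per generation, and no one fights back.
def zombie_population(generations, bites_per_zombie=2, start=1):
    """Return the running total of zombies after each generation."""
    totals = [start]
    new = start
    for _ in range(generations):
        new = new * bites_per_zombie        # each new zombie bites two friends
        totals.append(totals[-1] + new)     # add them to the horde
    return totals

print(zombie_population(4))  # [1, 3, 7, 15, 31]
```

Four generations in, one zombie has become thirty-one; that doubling of new cases each round is what “grows geometrically” means.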

This is definitely production, but does it qualify under the definition of biologic reproduction? Aren't you more changing an existing organism than producing a new one? I think this is more about conjugation rather than reproduction (see picture).

Adaptation – This means evolution. Nothing stays the same in nature. Even organisms that have been around for millions of years, like crocodiles and cockroaches, are always changing. Changing environments, fluctuating numbers of predators, etc. are constantly putting pressure on organisms. Mutations occur with or without changes around the organism, but pressures make some of the mutations positive changes and some negative changes. Those mutations that lead to more offspring and more surviving offspring are kept – natural selection, and over time there are many of these adaptations – evolution.

Remember, individuals do not evolve, only populations. So you couldn’t see a zombie evolve, even if you chose to stick around to watch – bad idea. But do zombies as a species evolve?

So how do zombies stack up against our seven characteristics? Are they an exception or a borderline case? How about other things – viruses, for instance? Viruses are a common example for arguing about what life is. How about flame? That’s an interesting discussion to have.

Next week we take on another aspect of zombies – free will. Do you think they want to eat brains? Are there other examples in nature where something can steal your free will?



TedEd video (2013). The Wacky History of the Cell Theory TED

Campos M, & Jacobs-Wagner C (2013). Cellular organization of the transfer of genetic information. Current opinion in microbiology, 16 (2), 171-6 PMID: 23395479

Yamada T, Tsukita S, & Katagiri H (2013). Identification of a novel interorgan mechanism favoring energy storage in overnutrition. Adipocyte, 2 (4), 281-4 PMID: 24052907



For more information or classroom activities, see:

Immortal jellyfish –

Characteristics of life –



Free Will Ain’t Free

Biology concepts – neural parasitology, domoic acid toxicity 


Zombies don’t have a choice in how they behave. Free will
is a thing of the past; they don’t even have the ability to resist
a dance routine with Michael Jackson. Michael seems awfully
at ease in the midst of the undead. Creepy….yes. Scary…. yes.
A thriller?…..maybe no.
It would be hard to believe that zombies develop a taste for human flesh, especially brains, out of nowhere. It would seem that they don’t have a choice; their activities are decided for them. They’ve lost their free will.

Don’t scoff at this; nature is full of examples where one organism can cause another organism to change its behavior – just think of all the silly things boys do trying to impress girls. But first a couple of stories where a change in behavior has less to do with parasitism.

In August 2013, residents of Moscow began reporting that the pigeons were acting odd. They would walk around in a funk, not get out of the way of traffic, and not fly away from danger. One family reported that their dinner one evening was disrupted by a pigeon on their window ledge that lost its balance and fell into their kitchen.

These zombie pigeons (the pecking dead, as one website called them) were freaking out the population, so the scientists went to work. It seems that many of the dead and affected pigeons were carrying salmonella bacteria and/or had Newcastle disease. The virus that causes this disease, unimaginatively called the Newcastle disease virus (NDV), can be transmitted to humans, so it's a good thing the population got freaked out.

The virus causes the birds to stagger about, stumble around in circles, and turn their heads upside down – much like vodka does in humans. However, when humans get NDV, they most likely will just have a flu-like episode.

Zombie birds also led to a famous movie. Alfred Hitchcock’s classic film, The Birds, is the story of a terrifying attack on a small fishing village by many flocks of different birds. They attacked people, flew into car windows and houses, and caused deaths and damage. Seems silly doesn’t it, being killed by a shore bird? I think I would have them give some other reason in my obituary.


Tippi Hedren was the female lead in Hitchcock’s The Birds. Hitch
had seen her in some commercials and chose her over Grace
Kelly….. GRACE KELLY! Later on, he developed an unhealthy
obsession with Tippi, and who wouldn’t, with all that running and
screaming and bird doo?
It turns out that the movie was based on a 1961 incident near Monterey Bay, California. The birds went nuts and no one knew why – that makes it creepier. It wasn’t until 1995 when another episode of bizarre behavior in sea lions led to the answer.  The sea lions in 1995 and 2010-11 were acting like zombies as well. They wouldn’t get out of the way of boats or they would come up on land and just keep scooting inland until they died.

In 1987, it was recognized that a toxin produced by certain species of marine algae was responsible for the zombie-like behaviors. Called domoic acid, the toxin is produced by the algae and accumulates in marine organisms that feed on contaminated phytoplankton or algae. Normally, levels of domoic acid are too low to cause problems, but in years when the algae overgrow, called a bloom, the levels will rise dramatically.

Although the acid seems to have no effect on lower life forms like shellfish, bigger animals are strongly affected, including humans. When the sea lions or birds feed on contaminated food, they begin to display the bizarre behaviors. In the case of the 1961 birds, there happened to be a collection of samples from the bay that had been kept all these years. Tests on the shellfish and algae samples from 1961 showed high levels of domoic acid.

In a strange coincidence, a new paper has been published about how infections can move through a flock of birds. It uses a mathematical model based on many predictors and factors. The model is called the Zombie-City model, based on how a zombie population might grow in a population of unsuspecting humans. But we want to focus on the loss of free will in nature’s creatures.

Free will in lower animals?  It does exist. Most people believe that the behaviors of insects and the like are merely responses to environmental and situational cues, and that any variation in behavior is due to misreading of cues or random errors. But studies in fruit flies show that they can pick out their own patterns of behavior when given a blank canvas.


The Emerald Cockroach Wasp (A. compressa) is a solitary insect;
it doesn’t live communally as many bees and wasps do.  Only
the females have stingers, so making zombies is definitely a
reproductive strategy. In 1941, they were introduced to
Hawaii to try and control the cockroach population, but it
didn’t work. They just don’t lay enough eggs.
One such case of co-opted free will in an insect is the Jewel Wasp (Ampulex compressa) and the American Cockroach (Periplaneta americana). The wasp lives in Africa and Asia, so this isn’t something we could use to get rid of NYC cockroaches. P. americana isn’t even native to the Americas. It was introduced from Africa as early as 1625, before it was officially named.

What the wasp steals is the roach’s ability to decide if it wants to walk or run. Most wasps sting to kill, but the Jewel Wasp stings the cockroach in the brain, altering its behavior with its venom. A 2010 study showed that the wasp stings the roach continuously for up to three minutes, trying to locate a particular part of the cockroach’s brain.

What it is searching for is called the subesophageal ganglion, the part of the brain that allows the roach to initiate walking and running movements. When that part of the brain is flooded with venom, the cockroach stands still, with no will to begin leg movements. It isn’t paralyzed – it’s just a zombie.

Another study has started to investigate just how the wasp venom robs the cockroach of its will to walk. There is an insect neurotransmitter called octopamine that is released by some of P. americana’s neurons. It is this transmitter that allows the cockroach to initiate walking.

The study hasn’t pinpointed just how the venom interrupts the octopamine signaling, but they know that if they deplete the amines in the brain, they see the same effect. If they add back octopamine, they can rescue the cockroach’s natural behavior. However, the study also showed that the venom doesn’t reduce octopamine levels and it doesn’t prevent its release, so there's still more work to be done.


The wasp first stings the cockroach in the abdomen, just a
quick sting to temporarily weaken the front legs. Then it
stings the brain and when the cockroach stops
moving, it cuts off part of one antenna. I don’t know why. It
grasps the antenna in its jaws and herds the roach to its nest.
Instead of pulling, they should evolve saddles.
Why does the wasp turn the cockroach into a zombie? I’m glad you asked. Remember, the cockroach isn’t paralyzed, it just hasn’t the will to walk on its own. So the wasp tugs on the cockroach’s antennae and herds the roach into its underground nest. There the wasp lays an egg in the cockroach’s abdomen and the emerging larva feeds on the cockroach until they are ready to emerge eight days later.

So why not just kill the cockroach with the sting and lay the egg? The larva needs fresh meat, and a dead cockroach rots in one day. To make the meal satisfactory for the eight days needed, the cockroach must remain alive, but in a state where it can’t attack the wasp or the larva; hence the zombification.

It gets even creepier. The wasps have gotten so good at this strategy that they now go to the trouble of cleaning their meal. A 2013 study shows that the wasp larvae produce several antimicrobial chemicals that rid the cockroach of any contaminating bacteria or parasites as the larvae munch on it. I know I’d clean a zombie before I ate it.

There are several other examples of theft of free will, including a couple of fungi that make ants stop their normal work and climb high in trees to allow for the best spread of the fungal spores as they mature. There’s also a hairworm that forces grasshoppers to commit suicide by jumping into water, just so the worm can complete its life cycle. But I don’t want to leave this subject without hitting the king of neural parasitology – Toxoplasma gondii.


The spiny ant is the zombie victim of the O. unilateralis fungus.
When infected, the ant stops doing its job for the colony and
falls out of the tree canopy. After wandering the forest floor, it
will bite the underside of a leaf and never let go. It just stays
there waiting to die. Then the fungus sprouts a fruiting body
with spores out the top of its head, and the spores shoot off
into the air. The low altitude (more humid) and under-leaf
position give the fungus the greatest chance to survive. Fossil
evidence shows that this has been occurring for at least 48
million years.

T. gondii is a single-celled eukaryotic parasite that has a complex life cycle. It can reproduce asexually in any of the hosts it infects, but can only reproduce sexually in cats, of all things. This is important because sexual reproduction is an obligate life cycle stage for the parasite and contributes to its evolutionary health.

The parasite has taken steps to ensure that it finds its way into cats by changing the behaviors of the mice and rats it finds itself inside. It messes with rodent brain chemistry (it tends to form cysts in the brain) in ways that make rodents unafraid of cats. In fact, a recent study found that the organism confuses the rodents into believing that cat urine smells like a potential mate!

T. gondii activates a certain neuronal transcription factor, which leads to increased production of different proteins in the brain. In rodents and humans, this leads to an increase in dopamine (similar to octopamine in the cockroach) production and a decrease in tryptophan usage.

Because the cysts target areas of the rodent brain that control fear, the changes in behavior involve, but are not limited to, fear. There isn’t any evidence that the cysts have a selective range in the human brain, but considering the changes that occur in men as described below, it is a possibility.


On the left are four T. gondii parasites and on the right are the cysts
that they can form in the brain. Recent evidence shows
that cysts of T. gondii can be linked to increased chance of suicide
attempt and more violent attempts, more depression and
neurotic behavior, and an increased chance of having children
that will develop schizophrenia. In humans, this may have
something to do directly with the parasite, or it may be that high
levels of immune modulators lead to the changes in thoughts and
behaviors. Scary thought, our immune system could drive us
to kill ourselves.
There is a suspicion that T. gondii produces an enzyme that catalyzes the formation of a dopamine precursor. Dopamine is a catecholamine, so it is involved in the fight or flight response; definitely a fear component. The interesting thing is that even a latent (asymptomatic) infection affects men and women differently. Bizarrely, a 2011 study showed that infected men are more attracted to cat urine, while infected women find it less attractive.

In general, men with a long-term T. gondii infection show lower IQs, are taller (about 3 cm on average), and are more likely to break rules, take risks, be jealous, and exhibit anti-social behaviors. Right now – I’m not so proud to be a guy.

On the other hand, women with long-term toxoplasmosis infections tend to be more outgoing, friendlier, more promiscuous, and more attractive to men. Wow, Mars and Venus to the nth degree! The question still remains – how do the forced behavior changes in humans benefit the organism?

Next week, we return to our discussion of nucleic acid exceptions by discussing instances where organisms can rewrite their genetic code.


House PK, Vyas A, & Sapolsky R (2011). Predator cat odors activate sexual arousal pathways in brains of Toxoplasma gondii infected rats. PloS one, 6 (8) PMID: 21858053

Flegr J, Lenochová P, Hodný Z, & Vondrová M (2011). Fatal attraction phenomenon in humans: cat odour attractiveness increased for toxoplasma-infected men while decreased for infected women. PLoS neglected tropical diseases, 5 (11) PMID: 22087345

Banks CN, & Adams ME (2012). Biogenic amines in the nervous system of the cockroach, Periplaneta americana following envenomation by the jewel wasp, Ampulex compressa. Toxicon : official journal of the International Society on Toxinology, 59 (2), 320-8 PMID: 22085538

Herzner G, Schlecht A, Dollhofer V, Parzefall C, Harrar K, Kreuzer A, Pilsl L, & Ruther J (2013). Larvae of the parasitoid wasp Ampulex compressa sanitize their host, the American cockroach, with a blend of antimicrobials. Proceedings of the National Academy of Sciences of the United States of America, 110 (4), 1369-74 PMID: 23297195


For more information or classroom activities, see:

Moscow zombie pigeons-

Domoic acid toxicity –

Jewel wasp and cockroach –

Toxoplasma gondii

Fungal parasites and ants –

Hairworm and grasshopper -

Zombie apocalypse case study for class -



Rewriting the Genetic Code

Biology concepts – DNA, RNA, tRNA, nonstandard nucleotides, codon, anticodon, genetic code, selenocysteine, isodecoder, mitochondria


Just looking at the Imperial Hotel in Tokyo doesn’t really give us an idea
of why it inspired Frank Lloyd Wright’s son to invent Lincoln Logs.
 It was the interlocking beams of the basement in which his vision was
born. They were supposed to protect the hotel from earthquake
damage. It worked. In 1923, the same year the hotel was finished,
there was a great earthquake in Tokyo and the Imperial was one of
the few buildings that survived. It also survived the bombings of
WWII. So they tore it down in 1968.
In 1916 John Lloyd Wright invented Lincoln Logs. The construction set was based on his memory of the Imperial Hotel in Tokyo, an edifice designed by his father, Frank Lloyd Wright. The construction set had specific pieces that fit together in a specific way.

The first edition of Lincoln Logs, sold in 1918, gave instructions for building Abraham Lincoln’s boyhood home and Uncle Tom’s cabin. The parts were commensurate for building those structures. Each set of instructions called for the small pieces to be put together in a certain order so that the resulting product conferred a meaning – this is where Lincoln grew up or this is where Tom lived.

DNA and RNA are similarly constructed. There are a few pieces (nucleotides A, C, G, T, and U) that can be used to build different structures. Each small piece can be joined with other small pieces to become part of the whole structure, a structure with meaning. In the case of DNA and mRNA, three nucleotides in a row can confer meaning for one protein building block. The entire series of nucleotides then has the meaning of an entire protein.

The three nucleotide codons relate to a certain amino acid building block to be inserted into a growing protein. This code, the genetic code, gives meaning to the string of DNA nucleotides in genes and the string of nucleotides in the mRNA transcribed from the gene. This is usually where our learning about nucleic acids ends.


The top left picture is Marshall Nirenberg, the initial decoder of the
genetic code. The right photo is Robert Holley, discoverer of tRNA.
Below is the genetic code in graphic style. The four large letters
represent the possible first bases of a codon (in mRNA). The light
yellow letters are the possible 2nd bases, and the darker yellow letters
are the final possibilities. Outside are the amino acids that are coded
by the individual codons. Note that most have more than one codon
and some have codons that begin with different letters, like serine
at 1:00 and 8:30.
The history of the genetic code is worth knowing, as is the history of about every part of science. I often use history to illustrate points in the blog. It is said that those who ignore history are doomed to repeat it, and science has its own version of this axiom, “Six months in the lab can save you a whole afternoon in the library.” Think about it. And besides saving you from repeating others' work, knowing history helps you ask better questions.

But I digress – let’s talk briefly how we decoded the pathway of gene to protein. It begins with Watson and Crick publishing the structure of DNA in 1953. We knew how the different bases could be ordered, but we still didn’t know how they called for a specific amino acid sequence.

In 1955, Francis Crick thought he had an idea about how it might occur, but he didn’t have all the players. He called his idea the Adapter Hypothesis. What he was missing was the adapter, the piece that he said carried amino acids and put them in the correct order.

One neat trick came from George Gamow, a nuclear physicist best known for his role in theorizing the Big Bang (the birth of elements from a cosmic explosion, not the TV show). We had four nucleotides to encode information and 20 (you and I know there are 22) amino acids to be coded for. He used some “way beyond me” math to determine that the most efficient mechanism would have three nucleotides code for one amino acid.

This was followed by an interesting experiment done by Marshall Nirenberg at the National Institutes of Health near Washington, DC. He made a synthetic RNA of a single nucleotide (UUUUU….). He then combined this with the innards of a bunch of cells (cell lysate) so that everything needed to make a protein would be present. He detected a peptide of phenylalanine amino acids. What is more, there were 1/3 as many amino acids as there were nucleotides!

So UUU coded for phenylalanine. This was followed by many more experiments using different sequences of nucleotides, and the code was decoded. Along with this knowledge came the discovery of tRNA by Robert Holley in 1965. This RNA combined an anticodon sequence to recognize a codon on mRNA and carried the appropriate amino acid at the other end. The tRNA was Crick’s adapter, and perhaps the code would have been decoded years earlier if the adapter had been pursued in earnest.
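The logic of Nirenberg’s poly-U experiment, and of decoding in general, amounts to a simple lookup: read the mRNA three nucleotides at a time and translate each codon. Here’s a toy sketch with only a handful of codons filled in (the full table has 64 entries); the function name and the tiny table are mine, purely for illustration.

```python
# Toy decoder: read an mRNA three nucleotides (one codon) at a time.
# Only a few codons are included here; the real genetic code has 64.
CODON_TABLE = {
    "UUU": "Phe", "UUC": "Phe",
    "AUG": "Met",
    "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):      # step by whole codons
        aa = CODON_TABLE.get(mrna[i:i+3], "?")
        if aa == "STOP":                      # releasing-factor territory
            break
        peptide.append(aa)
    return peptide

print(translate("UUUUUUUUU"))  # ['Phe', 'Phe', 'Phe'] – Nirenberg’s poly-U result
```

Feed it a synthetic poly-U message and you get a chain of phenylalanines, one amino acid per three nucleotides, exactly the 3:1 ratio Nirenberg measured.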


The process of turning an mRNA into a protein involves the ribosome and
the tRNAs. When an mRNA is bound to a ribosome, the three nucleotides
in the codon (pink letters) match with three letters of a tRNA anticodon
(blue letters). Different tRNAs will drift in and out until the right one is
bound. The tRNA has the amino acid (aa) bound to the end opposite the
anticodon. If this is the first position of the peptide, it will occur in the P
(peptidyl) site. The second tRNA will be added to the A (acceptor) site and
the ribosome will shift as it creates a peptide bond between the two aa’s.
The shift puts the first aa in the E (exit) site to release the tRNA, the 2nd aa
goes to the P site, and the A site is open for the next tRNA.
There are 64 possible codons that can be made from four nucleotides (4 x 4 x 4), but Holley found fewer than 64 tRNAs – not one for each codon. Even I know that this kind of math doesn’t work. It turned out that the genetic code was degenerate; more than one codon calls for a particular amino acid. Most amino acids have 2-4 codons assigned to them (we have talked about the exceptions to that rule).

In most cases, codons that call for the same amino acid have the same first two nucleotides; it’s the third position (wobble position) that varies. It was discovered that the third position of the anticodon binds to the mRNA very loosely, so the codon/anticodon binding is usually determined by the first two nucleotides. This allows a single tRNA to recognize more than one codon.
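You can see the wobble pattern by grouping all 64 codons on their first two bases. A quick sketch (the grouping code is mine, for illustration) shows there are exactly 16 two-base prefixes, and a family like GGN – all four glycine codons – differs only at the wobble position:

```python
from itertools import product

# Build all 64 codons and group them by their first two (non-wobble) bases.
BASES = "UCAG"
groups = {}
for codon in ("".join(p) for p in product(BASES, repeat=3)):
    groups.setdefault(codon[:2], []).append(codon)

print(len(groups))    # 16 two-base prefixes (4 x 4)
print(groups["GG"])   # ['GGU', 'GGC', 'GGA', 'GGG'] – all code for glycine
```

A tRNA that binds the first two bases tightly and the third loosely can cover several members of one of these families at once, which is why 40-55 tRNAs suffice for 64 codons.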

It turns out that there are 40-55 different tRNAs, depending on the organism. Why so many? As an example, arginine is coded for by several codons (CGG, CGA, CGC, CGU, AGA, and AGG). It is impossible for one tRNA to recognize both AGA and CGG, so there must be more than one tRNA for arginine.

Serine and leucine are like this as well, and there are most certainly some amino acids whose tRNAs can’t bind to all four possible nucleotides in the wobble position (like glycine), so they would need more than one tRNA. These are the isodecoder tRNAs (different anticodons, but code for same amino acid).

There are also different isodecoder tRNA genes, having different sequences outside the anticodon but coding for the same amino acid. Humans have about 274 genes for our 55 different tRNAs. This implies that the different sequences might have some functions other than just helping to add the right amino acid to a growing peptide sequence.

A 2010 minireview talked about those possible tRNA functions. In one discussed study, a cleaved tRNA is shown to have increased expression when cells are proliferating. Reducing the levels of this cleavage product reduced the rate of cell division. In another study, a tRNA cleavage product silenced the expression of a specific gene. I’ve said it before: nature abhors a unitasker.

UGA, UAA, and UAG are the most common stop codons (see the text for the
exceptions). When the stop codon ends up in the A site, no tRNA fits properly,
but a releasing factor (RF) can be bound. There are at least two RFs: RF-1
recognizes UAA and UAG; RF-2 recognizes UAA and UGA. When bound, they
cause the ribosome to fall apart.

There are also three codons that don’t code for an amino acid. These are the stop codons that tell the ribosome to stop making the protein and release it.

So we have coding codons and noncoding codons. Experiments in other organisms in the 1960s and 1970s indicated that all life uses the same genetic code, making it the universal genetic code. And here begin the exceptions.

The genetic code is almost universal. Considering how many genes from how many organisms there are, the number of exceptions is relatively low. But they are still too numerous for us to talk about them all. That doesn’t mean we can’t talk about a few of the most interesting.

Mitochondria are the source of many of the exceptions. The endosymbiotic theory states that a bacterium was engulfed by an archaeon and the two agreed to allow each other to do what they do best. These engulfed bacteria became mitochondria and chloroplasts. But they didn’t always follow the same path.


We are finding that tRNAs can have multiple functions. 1- This is the
usual route, the tRNA codes for an amino acid in a growing peptide.
2- Some tRNAs carry the same amino acid, but have
differences in structure. The change in structure means they don’t
bind the amino acid, so they are free to do other things. 3 and 4- These
non-aa bound tRNAs may be used for regulating expression of specific
genes, usually at the end of the gene. 5- There are probably functions
we don’t know yet.
Remember that mitochondria have their own genomes and machinery for transcribing DNA to mRNA, and translating mRNA to protein. This includes their own set of tRNAs. Since they are all packaged in a closed system, there is no demand for mitochondria to use the same genetic code as nuclear genes. And in many cases, they don’t.

In animal and protist mitochondria, but not plants, the stop codon UGA instead codes for the amino acid tryptophan. You’d think that this would leave them with just two possible stop codons, and some do. But in vertebrates, the codons AGA and AGG (usually code for arginine) have been converted to stop codons. So we actually have four mitochondrial stop codons.

Furthermore, animal mitochondria have switched up another codon; AUA codes for methionine instead of isoleucine. In yeast mitochondria, all the CU_ codons code for threonine instead of leucine. Again I ask… why? I ask that a lot. Not so much why the genetic code has changed in mitochondria, but why it hasn’t in plants.  You tackle that one on your own.
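One way to picture these reassignments is as a small overlay on top of the standard table: the mitochondrion keeps most of the universal code and overrides a few entries. A sketch, using only the four vertebrate mitochondrial changes mentioned above (the function and dictionary names are mine):

```python
# Standard-code meanings of the four codons that vertebrate
# mitochondria reassign.
STANDARD = {"UGA": "STOP", "AGA": "Arg", "AGG": "Arg", "AUA": "Ile"}

# Vertebrate mitochondrial overrides, per the reassignments above.
VERTEBRATE_MITO = {"UGA": "Trp", "AGA": "STOP", "AGG": "STOP", "AUA": "Met"}

def decode(codon, overrides=None):
    """Look up a codon, letting any organelle-specific overrides win."""
    table = dict(STANDARD)
    if overrides:
        table.update(overrides)
    return table.get(codon, "?")

print(decode("UGA"))                   # STOP (standard code)
print(decode("UGA", VERTEBRATE_MITO))  # Trp  (mitochondrial reassignment)
```

The same codon string means different things in the nucleus and in the mitochondrion; nothing about the chemistry of the nucleotides changed, only the lookup table.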

Nuclear genes have far fewer exceptions to the universality of the genetic code. A protist or two have converted two stop codons to code for glutamine, and the bacterium Mycoplasma capricolum has converted the stop codon UGA to a tryptophan codon. Beyond that, we have a couple of exceptions we have already discussed a bit, selenocysteine and pyrrolysine.

The interesting story is selenocysteine (SeC). We said that it is coded for by a stop codon plus a special stem/loop structure downstream called the SECIS structure. This makes it the 21st amino acid. If it is coded for, even indirectly, it’s going to need a tRNA. In this case, a serine tRNA is modified in a two-step process to carry a SeC.


These are two Euplotes crassus organisms, a marine ciliate
protist, undergoing sexual reproduction. They are
interesting for many reasons, but one is that they use a slight
variation of the genetic code, and the other reason has to do with
something called a frameshift. The codons are read in 3’s, but some
genes in E. crassus require a shift in the reading frame to produce
the correct protein. This means that they go along a 3, 3, 3, 3, then
the ribosome has to move 1 nucleotide over, and then it starts
reading 3, 3, 3, 3 again. The one nucleotide doesn’t code for
anything, but must be there to change the reading frame.
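The +1 frameshift described in the caption above – read 3, 3, 3, skip one nucleotide, then read 3, 3, 3 again – can be sketched in code. This is a toy illustration, not E. crassus biology: the sequence and shift position are made up, and real frameshift sites are signaled by sequence context, not a fixed index.

```python
# Sketch of +1 programmed frameshifting: read codons in threes,
# skip one nucleotide at the shift point, then resume in threes.
# The shift position is an arbitrary index, for illustration only.
def read_with_frameshift(mrna, shift_at):
    codons, i = [], 0
    while i + 3 <= len(mrna):
        if i == shift_at:
            i += 1                  # the skipped nucleotide codes for nothing
            continue
        codons.append(mrna[i:i+3])
        i += 3
    return codons

#                              position 6 is the skipped G
print(read_with_frameshift("AAACCCGUUUGGG", shift_at=6))
# ['AAA', 'CCC', 'UUU', 'GGG']
```

Without the skip, the ribosome would read GUU, UGG, G… – a completely different (and here nonsensical) set of codons downstream of the shift point.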
A recent paper identified that the stop codon UGA in Euplotes crassus codes for both Sec and cysteine. Which one gets put into the growing peptide is based on how far the site is from the SECIS structure.

The same group has a new paper that says humans can also end up with cysteine in the Sec site (originally a UGA stop codon). How can these two examples of cysteine in a Sec site take place, especially since the cysteine and SeC tRNAs are completely different?!

It turns out that the levels of selenium and of a molecule called thiophosphate (a sulfur analog of the selenium donor) determine which amino acid ends up on the tRNA. In some cases, the serine tRNA can be made into a cysteine tRNA instead of a Sec tRNA. So here we have a case of a UGA stop codon converted to a Sec codon, then converted to a cysteine codon. Exceptional.

Next week, we can finish up nucleic acid exceptions. Do you think A, G, C, T, and U are it when describing nucleotides? Not even close.




Xu XM, Turanov AA, Carlson BA, Yoo MH, Everley RA, Nandakumar R, Sorokina I, Gygi SP, Gladyshev VN, & Hatfield DL (2010). Targeted insertion of cysteine by decoding UGA codons with mammalian selenocysteine machinery. Proceedings of the National Academy of Sciences of the United States of America, 107 (50), 21430-4 PMID: 21115847

Thoru Pederson (2010). Regulatory RNAs derived from transfer RNA? RNA DOI: 10.1261/rna.2266510



For more information or classroom activities, see:

Genetic code –

Isodecoder tRNAs –
http://ymalblog.blogspot.com/2011/10/misfolded-human-trna-isodecoder-binds.html


 

Covering All Our Bases

Biology concepts – nucleoside, tRNA, RNA editing, nonstandard bases, DNA oxidation


Specialized pieces are needed to best build special Lincoln
Log structures, like this castle. This is much like how
specialized nucleosides are needed to carry out special
functions of RNAs. Really – a log castle? Wouldn’t the
Black Knight just burn it?
Last week, we used Lincoln Logs as a model for the different nucleic acids. The small logs mean little until you put them together in an order from which you can build something – a cabin, for example. This week we can take the analogy a little further.

Some editions of Lincoln Logs have specialized pieces for building special buildings. These buildings have different purposes, like a sawmill or a bank, and the specialized pieces help them carry out their function of being that building.

Lo and behold, there are special building blocks for building specialized nucleic acid structures; usually these are RNAs for which the usual building blocks just won’t do. These are the exceptions to the nucleotide rules of A, C, G, and T for DNA and A, C, G, and U for RNA.

There are a few nonstandard nucleotides found in DNA molecules, but to date all of these have turned out to be damaged bases. Oxidized guanosines have been the most commonly identified, because guanine is more susceptible to oxidation than the other bases. However, a recent study identified a 6-oxothymidine in the placental DNA of a smoker.

More than 20 oxidized DNA bases have been found at one time or another. Their importance lies in their inability to direct correct base pairing in a replicating DNA or a transcribed RNA. In particular, an 8-oxoguanosine in a DNA molecule often base pairs with A instead of C, while an 8-oxoguanosine nucleotide that was damaged before being incorporated into DNA will often be put in where a T rightfully should have been placed.

Both of these problems would lead to mistakes in replication or transcription. Some of these mistakes could be in places that matter. If they change a codon, they might cause the wrong amino acid to be incorporated and the resulting protein might be nonfunctional. Or they could create or destroy a stop codon or a splice site. These would definitely alter the resulting protein. Mistakes like this spell disease or cancer.

The top left image shows how 8-oxoguanine is produced by oxidative damage or radiation. The bottom left shows its effects on DNA: there can be a mismatched base pairing between G and A instead of G and C when the G is damaged. One possible result is shown on the right. Huntington’s disease may involve the mismatching of unrepaired 8-oxoguanosines with adenosines. As a result, areas of the brain are lost and the fluid-filled ventricles are enlarged.

Oxoguanosine has been the most studied of the oxidized bases, and several diseases have been linked to this mutation. Many cancers show it – leukemias, breast cancer, colorectal cancer, and others. In addition, Parkinson’s disease, Huntington’s disease, Lou Gehrig’s disease (ALS), and cystic fibrosis have been correlated with 8-oxoguanosine.

Don’t make the mistake of assuming that 8-oxoguanosine is the cause of any or all of these diseases; most have many potential causes. The point is that this mutation may contribute to these diseases in some cases, so we need to find out how to better prevent or repair it. However, your body is pretty good at doing this itself – if everything is behaving normally.

There are specific repair pathways dedicated to removing and replacing oxidized bases (base excision repair or BER) or for nucleotides that contain oxidized bases (nucleotide excision repair or NER) in DNA. In RNA, the major process to deal with 8-oxoguanosine is to destroy the damaged RNA. There are actually several overlapping and redundant repair pathways for 8-oxoguanosine, suggesting that this mutation is particularly damaging and must be dealt with for proper cell function.

It is when the body’s sensing and repair mechanisms don’t work that the problems begin. Therefore, science needs to find better ways to tell when the natural processes aren’t working and develop artificial ways to reverse the damage. A 2013 review points the way toward detecting mutated guanines in bodily fluids and tissues.

Specifically, this study looked at methods of detecting 8-oxoguanosine levels in plasma, urine, and cerebrospinal fluid and what those changes might mean. The levels found represent a balance between the production and repair of the mutations, so an increase means that more mistakes are being made, or fewer are being repaired. Either way, it means that something must be done.


This is a cartoon showing RNA processing. IT IS NOT TO BE CONFUSED WITH RNA EDITING!! In processing of eukaryotic mRNAs, the front end (5’ terminus) is capped so the molecule will last longer. Then the tail end (3’ terminus) is augmented with a bunch of A’s, called the poly-A tail. Finally, the introns are removed and the exons (the parts that code for protein) end up in a continuous sequence.
But what about nonstandard bases that are actually supposed to be in nucleic acids? The vast majority of these are found in the RNAs and help to point out yet another exception. You think that the RNA transcribed from DNA is the same RNA that functions or is translated to protein? Not always.

RNA editing takes place all the time; RNA bases are changed after the RNA is transcribed from DNA. In the majority of cases, the editing converts one standard nucleoside to another standard nucleoside, or adds/subtracts nucleotides.

Insertion/deletion edits for uracils can increase or decrease the length of the transcript. The mRNA is paired with a guide RNA (gRNA) and base-pairing takes place. For insertion, when there is a mismatch between the mRNA and the gRNA, the editosome inserts a U, so the mRNA transcript gets longer. In deletion editing, if there is an unpaired U in the mRNA, it gets cut out, so the transcript gets shorter.

This was first discovered in a parasite called Trypanosoma brucei, the causative agent of African Sleeping Sickness. There are so many positions at which these insertions/deletions take place that it has come to be known as pan-editing.

In other cases, the editing takes the form of C being replaced by a U. In some cases this results in a protein sequence different than that coded for by the DNA - on purpose!! If that isn’t an exception, I don’t know what is. Other times, the changing of a C to a U creates a stop codon.

In the human apolipoprotein B transcript, the intestinal version undergoes the C to U editing that creates a stop codon, so the intestinal apolipoprotein B is only 48% of the full length (B48). In the liver, no editing takes place, so the full-length protein is made (B100).
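The apoB trick is small enough to sketch in code. The sequence, edit position, and mini codon table below are invented for illustration; only the principle comes from the story above: deaminating a single C turns a glutamine codon (CAA) into a stop codon (UAA), truncating the protein.

```python
# Toy illustration of C-to-U editing creating a stop codon.
# The sequence and position are made up; the CAA -> UAA change is the
# real chemistry behind apoB48.
def edit_c_to_u(mrna, pos):
    """Deaminate the C at index `pos` to a U."""
    assert mrna[pos] == "C"
    return mrna[:pos] + "U" + mrna[pos + 1:]

CODE = {"AUG": "Met", "GCU": "Ala", "CAA": "Gln", "UAA": "Stop"}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODE[mrna[i:i + 3]]
        if aa == "Stop":
            break
        peptide.append(aa)
    return peptide

liver = "AUGGCUCAAGCU"              # unedited: Met-Ala-Gln-Ala
intestine = edit_c_to_u(liver, 6)   # CAA becomes UAA

print(translate(liver))      # ['Met', 'Ala', 'Gln', 'Ala']
print(translate(intestine))  # ['Met', 'Ala'] - truncated at the new stop
```

One base changed, and the same gene yields two proteins of very different lengths.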


Here are two examples of RNA editing. The top image shows the insertion/deletion mechanism, where a guide RNA binds to the mRNA; where there are mismatches a U is inserted, and where there are unmatched U’s, they are removed. In the bottom example a base is changed, which changes the codon, so a different amino acid is inserted during translation.
There is a lot of C to U editing in plants – I mean, a lot. So much editing goes on that there is now a 2013 database and algorithm to do nothing but predict C to U and U to C edits. Yes, there are U to C edits as well, but only in plant mitochondria and plastids. As far as is known, U to C edits work to destroy stop codons.

Then there is A to I editing. Wait you say, there’s no I in nucleic acids (well, there are actually two “i”s, but you know what I mean). “I” stands for inosine, the first specialized Lincoln Log and our first nonstandard nucleoside. Adenosine (A) is deaminated to form an inosine (I).

There are many functions for inosine editing. Changes from A to I in mRNA alter the protein made, since inosines get read as G’s. Genomically coded A’s end up being read as G’s in the mRNA, and this changes the gene product! We have many more inosine changes than other primates do. Many of these A to I edits in humans are related to brain development and may be part of the reason we are smarter than chimps.

There is also A to I editing in regulatory RNAs called miRNAs (micro RNAs). The miRNAs suppress (prevent) translation of some transcripts, but editing of the pre-miRNA makes it bind less well to the protein complexes that process the pre-miRNA to mature miRNA. More editing means fewer mature miRNAs, which leads to decreased regulation, more transcript translation, and increased protein. This may be one way A to I editing increases human brain power.


Micro RNA is important for controlling how much of a transcript will be translated to protein. The miRNA can be edited, which changes the amount that is processed by the protein complex, and therefore changes the amount that is incorporated into the complex that will degrade mRNAs.
The search is on to discover what regulates which A’s get turned into I’s in several types of RNAs; the full set is called the inosome (like genome). The inosome is yet another code we haven’t figured out yet. But inosine doesn’t have to be in a nucleic acid to have an effect. Sometimes it functions just by itself.

Inosine and adenosine accumulate extracellularly during hypoxia/ischaemia (lack of oxygen or blood flow) in the brain and may act as neuroprotectants. A new study extends this protective action to the spinal cord in rats in a hypoxic environment. To characterize hypoxia-evoked A and I accumulation, they examined the effect of hypoxia on the extracellular levels of adenosine and inosine in isolated spinal cords from rats. "Isolated" means the rats and their spinal cords were not necessarily in the same room at the time - so it could be a while before this helps humans.

But perhaps the most common use for I is to alter tRNA binding to amino acids and to the target codons. A to I editing can occur in the anticodon, and change which amino acid is placed in the growing peptide. This is especially true in many organisms for the amino acid isoleucine. Many tRNAs will insert an isoleucine into the protein only when the anticodon of the tRNA has been edited to contain an I in the first position (equivalent to the wobble position of the mRNA codon).
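The wobble behavior can be sketched with the classic pairing rules: the first anticodon base pairs with the third codon base, and inosine there pairs with U, C, or A. This is a simplified illustration with function names of my own choosing, not a full decoding model:

```python
# Simplified wobble rules: which codon third-position bases each possible
# first (wobble) anticodon base can pair with. Inosine (I) pairs with U, C, A.
WOBBLE = {"I": "UCA", "G": "UC", "U": "AG", "C": "G", "A": "U"}
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}  # Watson-Crick pairs

def codons_read(anticodon):
    """Codons (5'->3') readable by an anticodon written 5'->3'."""
    wobble, middle, third = anticodon
    # Codon positions 1 and 2 pair strictly with anticodon positions 3 and 2;
    # only position 3 of the codon gets the loose wobble pairing.
    stem = PAIR[third] + PAIR[middle]
    return {stem + base for base in WOBBLE[wobble]}

print(codons_read("IAU"))  # {'AUU', 'AUC', 'AUA'} - the isoleucine codons
print(codons_read("GAA"))  # {'UUU', 'UUC'} - the phenylalanine codons
```

One inosine-edited tRNA covers three codons, which is exactly why the cell bothers with the edit.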


This menacing creature is a worm that lives at the bottom of the Sea of Cortez. It thrives in the methane ice on the ocean floor, making it a psychrophile. It can’t even survive or reproduce if kept above freezing.
What is more, there are other nonstandard nucleosides that serve similar functions, usually with the amino acids isoleucine or methionine. Agmatidine is present in many archaeal anticodons, where it decodes isoleucine codons. Agmatidine also appears at other points in the isoleucine tRNA and is important for charging the tRNA with its isoleucine.

Other nonstandard (modified) nucleosides also work in tRNAs. Lysidine, dihydrouridine, and pseudouridine are some of the more common specialized Lincoln Logs – or maybe we should stick to calling them nonstandard nucleosides. They can be found in the tRNAs of organisms from each of the three domains of life (archaea, bacteria, and eukaryotes). For example, psychrophiles – organisms that grow at very low temperatures – have 70% more dihydrouridines, because dihydrouridines help the tRNAs flex as they need to, even at subfreezing temperatures.

There are over 100 nonstandard nucleosides, found mostly, but not exclusively, in tRNAs. Many times they function to increase tRNA binding to transcripts via the anticodon-codon interaction, or to increase the binding of the amino acid to the tRNA. They ultimately work to increase translation efficiency. They are weird and they are exceptions, but we can’t live without them.

Next week we can spend some time talking about exceptions in the realm of lipids, the last of our four biomolecules.


Paz-Yaacov N, Levanon EY, Nevo E, Kinar Y, Harmelin A, Jacob-Hirsch J, Amariglio N, Eisenberg E, & Rechavi G (2010). Adenosine-to-inosine RNA editing shapes transcriptome diversity in primates. Proceedings of the National Academy of Sciences of the United States of America, 107 (27), 12174-9 PMID: 20566853

Takahashi T, Otsuguro K, Ohta T, & Ito S (2010). Adenosine and inosine release during hypoxia in the isolated spinal cord of neonatal rats. British journal of pharmacology, 161 (8), 1806-16 PMID: 20735412

Lenz H, & Knoop V (2013). PREPACT 2.0: Predicting C-to-U and U-to-C RNA Editing in Organelle Genome Sequences with Multiple References and Curated RNA Editing Annotation. Bioinformatics and biology insights, 7, 1-19 PMID: 23362369

Poulsen HE, Nadal LL, Broedbaek K, Nielsen PE, & Weimann A (2013). Detection and interpretation of 8-oxodG and 8-oxoGua in urine, plasma and cerebrospinal fluid. Biochimica et biophysica acta PMID: 23791936

Wang P, Fisher D, Rao A, & Giese RW (2012). Nontargeted nucleotide analysis based on benzoylhistamine labeling-MALDI-TOF/TOF-MS: discovery of putative 6-oxo-thymine in DNA. Analytical chemistry, 84 (8), 3811-9 PMID: 22409256



For more information or classroom activities, see:

RNA editing –




Sugars Speak In Code

Biology concepts – carbohydrates, monosaccharides, hexose, glycocode, starch, glycogen, carbohydrate linkage, bacterial persisters, fructolysis


Refined sugar is produced from two main sources: sugar cane (37 different species of grass from the genus Saccharum, bottom right) and sugar beet (Beta vulgaris, top right). Sugar cane accounts for 80% of the sugar produced today. The cane or the beets are ground, and the sugary juice is collected with water or on its own. To refine the sugar, the juice, which still has molasses from the fiber, is processed with lime or soda and evaporated to produce crystals. The color is removed by activated charcoal to produce the white sugar we most often see (top middle). Brown sugar is sugar in which the molasses has not been removed and still coats the crystals (bottom middle). Unprocessed sugar from cane is shown on the bottom right, while raw sugar (not whitened) is on the top right.
It would be hard to argue against the idea that without sugars, none of us would be here. Glucose provides us with short- and medium-term storage of energy to do cellular work, but would you believe that certain parts of reproduction use a completely different energy source? All hail fructose!

Sugars are better termed carbohydrates, because they are basically carbon (carbo-) combined with water (-hydrate). The general formula is Cn(H2O)n; for instance, the formula for glucose is C6H12O6.
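That "carbon plus water" generalization is easy to check with a quick sketch (function names are mine), using standard atomic weights:

```python
# The general carbohydrate formula Cn(H2O)n, written out and weighed.
def carb_formula(n):
    """Molecular formula for a simple sugar with n carbons."""
    return f"C{n}H{2 * n}O{n}"

def carb_mass(n):
    """Approximate molar mass in g/mol, from standard atomic weights."""
    C, H, O = 12.011, 1.008, 15.999
    return n * C + 2 * n * H + n * O

print(carb_formula(6))         # C6H12O6 - a hexose such as glucose
print(round(carb_mass(6), 1))  # 180.2 g/mol
```

Note this holds only for monosaccharides; joining sugars expels a water molecule, so a disaccharide like sucrose is C12H22O11, one H2O short of the formula.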

The simplest sugars are the monosaccharides (mono = one, and sacchar from the Greek = sugar). They can be composed of 3-7 carbons, called trioses (3-carbon sugars), tetroses (4), pentoses (5), hexoses (6), and heptoses (7).

Things aren’t so simple though, even for the simple sugars. Let’s use the hexoses as an example, although what we say will also apply to the other sugars. We said the formula for glucose is C6H12O6, so that makes it a hexose. Is it the only hexose – heck no! Hexoses can be aldoses or ketoses, depending on their structure (see picture). Even more confusing, -OH groups can be located on different carbons, making the sugars behave differently chemically.


This chart is a brief introduction to the complexities of simple sugars. They can vary in the number of carbons (triose vs. pentose vs. hexose). They can also vary in their structure even if they have the same number of carbons (glucose vs. galactose). Yet another difference can come in the reactive group on the end, either a ketone group (ketoses) or an aldehyde group (aldoses).
There are actually 12 different hexoses – some names you know; glucose, fructose, or galactose. Others are less common; idose, tagatose, psicose, altrose, gulose – you won’t find those in your Twinkies. Then there are the deoxysugars, carbs that have lost an oxygen. Fucose is also called 6-deoxy-L-galactose, while 6-deoxy-L-mannose is better known as rhamnose.

If this wasn’t difficult enough, stereoisomers again rear their ugly head, as they did last week with the proteins. Hexoses have three (ketoses) or four (aldoses) chiral carbons each, so hexoses can have eight or 16 stereoisomers! Every isomer may act differently from every other; this allows for many functions. But wait – there’s more trouble when we start linking sugars together.
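The arithmetic behind those counts: each chiral carbon can take one of two configurations, so the possibilities double with every chiral center. A one-line sketch:

```python
# Each chiral carbon doubles the number of possible configurations,
# so a sugar with n chiral centers has 2**n stereoisomers.
def stereoisomers(chiral_carbons):
    return 2 ** chiral_carbons

print(stereoisomers(3))  # 8  - ketohexoses (3 chiral carbons)
print(stereoisomers(4))  # 16 - aldohexoses (4 chiral carbons)
```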

Simple sugars can be joined together to build disaccharides (two sugars), oligosaccharides (3-10), and polysaccharides (more than 10). The subunits are connected by a condensation (dehydration) reaction. Just like with the amino acid linkages in proteins, a water molecule is expelled when two sugars are joined together; hydrolysis is the reverse reaction that uses water to break them apart. Sucrose (table sugar) is a disaccharide made up of a glucose linked to a fructose.

Just where the linkage takes place is also important. Our example again can be glucose. Many glucoses can be linked together with an alpha-1,4 linkage. Long chains of glucoses linked in this way are called starch or glycogen, based on the different branching patterns they show. Mammals store glucoses as glycogen, while plants store them as starches.


Amylose is one type of starch, amylopectin being another. They differ from cellulose only in the way the sugars are linked together. You can see that in starch the CH2OH groups are all on the same side, while in cellulose they alternate. This may seem like a small difference, but we can digest only starch (or glycogen, which has the same type of linkages), not cellulose.
Humans can digest both starch and glycogen because we have enzymes that can break alpha-1,4 linkages. But if you change the chemical shape of the bond (see picture) to a beta-1,4 linkage, the glucose polymer becomes cellulose.

Plants make a lot of cellulose for structure, but even though it is made completely of glucose, humans can’t digest it at all! Ruminant animals can digest cellulose, but it takes some powerful gut bacteria to help out, and one of the side effects is a powerful dose of methane. Cows are among the greatest sources of methane on the planet!

We have talked about carbohydrates as energy sources, but pretty much every biological function and structure in every form of life involves carbohydrates.

Carbohydrates are important structural elements. Cellulose, thousands of beta-1,4-linked glucoses, helps give plants their rigidity, especially in non-woody plants, but in woods as well (linked together by lignin). As such, cellulose is by far the most abundant biomolecule on planet Earth.

Chitin is another structural carbohydrate. Chitins make up the spongy material in mushrooms, and the crunchy stuff of insect exoskeletons.  You don’t get much more structural than keeping your insides inside.

Carbohydrates are often part of more complex molecules as well. Nucleic acids like RNA and DNA have a five-carbon ribose or deoxyribose at the core of their monomers. Glycolipids and glycoproteins (glyco- from Greek, also means sweet) are common in every cell. Over 60% of all mammalian proteins are bound to at least one sugar molecule.

The different sugar-linked complexes are part of the glycome (similar to genome or proteome), including oligo- and polysaccharides, glycoproteins, proteoglycans (a glycoprotein with many sugars added), glycolipids, and glycocalyxes (sugar coats on cell surfaces). None of these carbohydrate additions are coded for by the genetic code, yet a great diversity of glycomodifications are found on most structures of the cell.


The carbohydrate code is still a mystery to us. Glycans can be attached by N-type or O-type linkages, the order of the sugars can vary, the numbers of each type of sugar can vary, and the branching can vary. Every difference adds to the complexity of the code and can send a different message to the cell or to the molecules these glycans come into contact with.
The diversity and complexity of these added carbohydrates is highly specific and highly regulated – this is the glycocode or carbohydrate code. Yet, we haven’t even come close to breaking the code, i.e., what series of what sugars means what.

The glycocode is important for cell-cell communication, immune recognition of self and non-self, and differentiation and maturation of specific cell types. Dysfunction in the glycocode leads to problems like muscular dystrophy, mental defects, and the metastasis of cancer – we better get cracking on the code breaking.

In the middle of 2013, a new method was developed for detecting the order and branching of sugars on different molecules. This method uses atomic force microscopy (AFM) to actually bump over the individual sugars on each molecule and identify them by their atoms, even on live cells. I’m proud to say that my father-in-law played a role in developing AFM for investigation of atom distributions on the surfaces of solid materials, mostly superconductors.

The glycome is even more diverse because different types of organisms make different sugars. One thing I find interesting is that mammals don’t make sucrose. No matter what we mammals do, we won’t taste like table sugar when eaten – more’s the pity. I wonder what a sweet pork chop might taste like.


Proof that many foods have sugars – the Maillard reaction. That gorgeous
browning of your bread or steak comes from a chemical interaction
between the sugars and amino acids of the food. In the process, hundreds
of individual different compounds are made, each with a different flavor
profile. The example in the chart above is for caramelizing onions. Each
food and its chemical make up produces a different set of Maillard
products. You roast your coffee beans for the same reason. This is why
Food Network always suggests ways for you to get great searing and
browning of food.
We use sucrose as table sugar because it is relatively easy to obtain from the plants that do make it, like sugarcane or sugar beets. Fructose (often called fruit sugar) is actually sweeter on its own; almost twice as sweet as sucrose and three times as sweet as glucose. This explains why so many sweetened foods are full of high fructose corn syrup (go here for our previous discussion of high fructose corn syrup).

We all know that organisms use glucose as an energy source, first through its breakdown to pyruvate via glyceraldehyde-3-phosphate (G3P) in glycolysis; the pyruvate then travels through the citric acid cycle to produce enough NADH and FADH2 to generate a lot of ATP. But fructose can be used as well.

Fructose undergoes fructolysis, different from glycolysis only in that an extra step must be taken to generate G3P (the phosphate is added by the enzyme triokinase). In humans, almost all fructose metabolism takes place in the liver, as a way to either convert fructose to glucose to make glycogen, or to replenish triglyceride stores – so be good to your liver.

The big exception is how important fructose is in mammalian reproduction. Spermatozoa use fructose as their exclusive carbohydrate for production of ATP while stored in the testes. This fructose comes not from the diet but from the conversion of glucose to fructose in the seminal vesicles.

Why use a different carbohydrate source just for sperm? Seminal fluid is high in fructose, not glucose. Perhaps this is a factor in seminal fluid viscosity. If some problem is solved by using fructose, then the cells swimming in it would probably evolve to use it as an energy source.

I asked Dr. Fuller Bazer of Texas A&M about this and he pointed out that fructose can be metabolized several different ways, and some of these lead to more antioxidants and fewer reactive oxygen species - it would be important to leave sperm DNA undamaged, especially since we have previously talked about how they are more susceptible to oxidative damage.

Bazer also pointed out that unlike glucose, fructose is not retrieved from tissues and put back into circulation. Once it’s sequestered to the male sexual accessory glands, it would stay there. Still lots to be learned in this area.


Fructose is sweeter than glucose. Sucrose is one glucose joined to one
fructose, so the ratio is 50:50. In most honey, the fructose:glucose ratio
is about 55:45, so it is often sweeter than table sugar. Since it is higher
in fructose, some people liken it to high fructose corn syrup, but there
are many compounds in honey that also help the immune system, etc.
However, recent evidence is showing that some honey is being diluted
with high fructose corn syrup and some bees are being fed HFCS. The
benefits from true honey are then lost.
A 2013 study shows that maternal intake of fructose can also affect reproduction. Pregnant rats fed 10% fructose in their drinking water had significantly fewer babies, but a greater percentage of the offspring were male (60% versus 50%). The fructose did not stop female embryos from developing or have a sex-specific effect on sperm motility, suggesting that the sugar has a direct effect on the oocyte that increases the chances it will be fertilized to produce a male. Weird.

Using sugars other than glucose may be a big deal for mammals, but bacteria can thrive on many different sugars. E. coli can process glucose, but if other sources of sugar are around, they will switch over in a heartbeat – if they had a heart. E. coli has a whole different set of genes for lactose metabolism, found in something called the Lac operon. The operon gets turned on only if lactose is present and glucose is not.
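The lac operon’s logic amounts to a two-input switch: transcription is on only when lactose is present AND glucose is absent. Here is a deliberately simplified sketch (real regulation, via the lac repressor and catabolite repression, is graded rather than strictly binary):

```python
# The lac operon as a boolean switch: on only with lactose and no glucose.
def lac_operon_on(lactose_present, glucose_present):
    return lactose_present and not glucose_present

# Walk through all four input combinations.
for lactose in (False, True):
    for glucose in (False, True):
        state = "ON" if lac_operon_on(lactose, glucose) else "off"
        print(f"lactose={lactose!s:5} glucose={glucose!s:5} -> {state}")
# Only lactose=True, glucose=False prints ON.
```

E. coli effectively computes this AND-NOT function with proteins instead of transistors, which is why the operon became the textbook example of gene regulation.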

The ability of bacteria to use other sugars might save us as well. Some bacteria can just shut down their metabolism if antibiotics are present and just hang out until the drugs are gone. These are called persister organisms, and they are different from antibiotic-resistant bacteria. A 2011 study showed that if you give sugar in combination with some kinds of antibiotics, the persisters just can’t resist the sweet treat and will not shut down their metabolism. The antibiotics then become effective. Using sugars we barely metabolize, like fructose or mannitol, ensures that they will be around to help kill the bacteria. Amazing.

We have just brushed the surface of sugary exceptions. Next week we will see how nature first selected a single type of sugar to use in biology, and then went right out and broke its own rule.



Gunning AP, Kirby AR, Fuell C, Pin C, Tailford LE, & Juge N (2013). Mining the "glycocode"--exploring the spatial distribution of glycans in gastrointestinal mucin using force spectroscopy. FASEB journal : official publication of the Federation of American Societies for Experimental Biology, 27 (6), 2342-54 PMID: 23493619

Gray C, Long S, Green C, Gardiner SM, Craigon J, & Gardner DS (2013). Maternal Fructose and/or Salt Intake and Reproductive Outcome in the Rat: Effects on Growth, Fertility, Sex Ratio, and Birth Order. Biology of reproduction PMID: 23759309

Allison KR, Brynildsen MP, & Collins JJ (2011). Metabolite-enabled eradication of bacterial persisters by aminoglycosides. Nature, 473 (7346), 216-20 PMID: 21562562 


For more information or classroom activities, see:

Testing for carbohydrates in foods –

Structures of carbohydrates –

Glycocode/carbohydrate code –

Give Thanks For The Cranberry

Biology concepts – epigynous berries, seed dispersion, scarification, drupe, endocarp


Ocean Spray alone sells 86.4 million cans of jellied cranberry
sauce each year. No matter which sauce you prefer, I bet it
has a lot of added sugar. Cranberries alone are tart enough
to shrink your head.
Cranberry sauce is a Thanksgiving staple, but it’s a lot like fruitcake at Christmas – you either love it or hate it. Let me give you some reasons to love it.

Cranberry (Vaccinium macrocarpon) is one of very few commercially grown fruits native to North America. The vine needs cool temperatures and acidic, sandy soil conditions, so New England, Southern Canada and the Pacific Northwest are prime growing locations. Similar latitudes in Europe also support growth of cranberries (Vaccinium oxycoccus) in their bogs. We have previously talked about bogs where the acid conditions preserve human remains and produce bog mummies.

But there is an exception in the Southern Hemisphere – Chile in South America. In the northern part of Southern Chile, volcanic ash soils mimic the sandy soils of peat bogs, both in consistency and acidity. Runoff from the Andes Mountains allows for water, and the temperatures are similar to those in Washington and Oregon - perfect for cranberry growing.

The Ocean Spray Company harvests berries in North America in autumn, but it needs berries in the summer too. In January of 2013, Ocean Spray bought the cranberry processing interests in Chile. The harvesting period in Chile is March to May, just in time to supplement Ocean Spray’s dwindling supplies.

Cranberries are tart compared to other fruits; they have five times as much acid as their close cousins, the blueberries. Why? It may be the acidic soils they grow in. In terms of evolution, growing in peat bogs was a good choice. Not many things can grow in a bog, so competition is low. Competition for what is the question – there is very little nitrogen in the soil of a bog, and the water is acidic too.

Plants need fresh water and nitrogen to survive, so the cranberry evolved better nitrogen tapping mechanisms, as well as leaves and stems that can retain their fresh water very well. Not many other organisms have adapted to these conditions, but the cranberry thrives, transferring the acids to its leaves, stems and fruits.

This is the bog copper butterfly (Lycaena epixanthe), which lives its entire life on a cranberry vine. It not only survives the acidic conditions of the plant – it eats it up. It lays its eggs on the underside of the leaves, and the pupae and larvae can survive a flood that covers the plant for months.

This acidity is also a help when it comes to pests. Several acidic compounds have been isolated from V. macrocarpon that stop insects from eating the leaves and stems. I’m guessing insects don’t like Sour Patch Kids. The exception is the butterfly Lycaena epixanthe; it spends its entire life feeding on the cranberry plant.

The second reason for the high acid content of the cranberry is that it doesn’t need to be sweet. The blueberry is much sweeter, but it has to be. Blueberry bushes spread their seeds by having birds, rodents, or humans eat them in one place and excrete them in their feces somewhere else; sweetness promotes consumption.

Seed dispersal is the most basic reason for any plant producing a fruit. If a seed falls directly beneath the parent plant, no one wins. Both parent and child will require the same nutrients, and they will end up competing for everything. Things would also get very crowded.

Several mechanisms of seed dispersal have evolved. Wind is a popular way to disperse seeds. You’ve seen those helicopter seeds from Maple trees – they catch the air and twirl down vertically, but also move horizontally. Sycamore trees have tufts on their seeds to catch the wind as well.

Fruiting is also a way to disperse seeds. Animals need carbohydrates, and fruits are an important source for many animals. When they eat the fruit, they also eat the seeds. Later on, the animal grabs a copy of Sports Illustrated, locks the door, and deposits the seeds somewhere else.


These are some of the types of fruits. The peach is a drupe. It
has an edible mesocarp. The coconut is also a drupe, but its
mesocarp is more fibrous (flake coconut). The tomato is a true
berry. Its pericarp and locules are all edible. The raspberry is an
aggregate fruit, many ovules and mesocarps held together. The
raspberry is also a drupe, which you know when you get those
seeds stuck in your teeth. Each little fruit is a druplet.
In fact, some seeds must pass through the digestive tract of an animal in order to germinate. Some seeds, like those of drupes (drupa = overripe olive), have a hard endocarp (seed coat), derived from the ovary wall. In fact, that’s what makes a drupe a drupe. Fruits like peaches, almonds, coconuts, and olives are considered drupes, and each little part of a blackberry or raspberry is a druplet.

The germinating embryonic plant isn’t strong enough to break through the drupe endocarp on its own. Something must be done to weaken the endocarp. The weakening (scarification) may come from scratching the surface, freeze/thaw, fire (for the Ponderosa Pine), or perhaps from the digestive enzymes of an animal. Many berries, like blackberries, currants, and raspberries require digestive scarification in order to germinate. But the cranberry isn’t one of these berries.

Why don’t cranberries need to be eaten for seed dispersal? Because they float! When the bog (or similar sandy wetland) floods, the berries are carried away from the parent plant, away to some far off place that may or may not be suitable for cranberry vine growth. That’s the problem with floating; you gotta go with the flow.

Cranberries float because they have air pockets trapped within them. Floating fruit isn’t that exceptional; apples float too. It’s a good thing; think how many lives this has saved during bobbing-for-apples season!

On top we see the coconut – it’s a drupe with a tough exocarp.
You can see the germinating plant coming through one of the
eyes. Seed dispersal for the coconut is shown on top right. We
don’t know where palms come from originally, because they
could spread around the world in just one generation. The
cranberry also floats, because of the air pockets shown on the
bottom right. The frog is just a bonus – cute, huh?

Given their buoyancy, it amazes me that it wasn’t until the 1960s that someone thought of flooding the bogs in order to harvest the cranberries. Harvesters use machines that shake the vines and release the ripe berries.

Cranberry plants grow very low to the ground, and their long runners (rhizomes) can extend six or more feet from the parent vines and sink roots to become new plants. Because of their short stature, it only takes about 18 inches of water to flood a cranberry bog for the wet harvest. So those commercials with the two goobers standing waist-high in water in their waders are a bit of a stretch.

The cranberry was probably at the first Thanksgiving; they are hardy and ready to be harvested just about the time we are sitting down to our turkey and stuffing. But, the pilgrims misled us – the cranberry isn’t a real berry! And don’t say it was because the pilgrims were from across the ocean. The cranberry is closely related to the European lingonberry, so the mistake had already been made.

The cranberry is a false berry, also called an epigynous berry (epi = upon, and gynous = ovary). A berry is a fleshy fruit derived from a single ovary. False berries develop from an inferior ovary and contain tissues from parts of the flower other than the ovary, while true berries develop from superior ovary tissue only (see picture). Other examples of epigynous berry-producing plants are bananas, coffee and cucumbers.


Here is one difference between real and false berries. All true
berries are hypogynous, where the ovary (in red) is above
where the petals and pistil come out. False berries have an
inferior ovary. Another difference is that the true berry is
made from only the ovary, while the false berry incorporates
other parts of the flower. Below on the left is the red currant,
and on the right is the cranberry. As a berry, the currant is true
and the cranberry is false. But really, can you tell the difference?
The V. macrocarpon false berry fruit is indispensable as a Thanksgiving sauce, but medicine has found other uses for cranberry compounds. In the first 10 months of 2013 alone there were 86 papers published on the merits of cranberry compounds.

Most people who know about medicinal cranberries have had a urinary tract infection (UTI). For a hundred years or so, old wives (and young wives) have espoused the virtues of cranberry juice in preventing or treating UTIs.

Recent years have seen many studies try to validate the home remedy. As to whether cranberries work, there is evidence on both sides. Hundreds of published reports say it’s the best thing since sliced bread, and hundreds say it doesn’t do a darn thing. Such is science – and that’s a good thing. Argue away so we know we get it right in the end.

One 2013 study found that sweetened dried cranberries added to the diet made a real difference in women who were susceptible to UTIs. Half the women didn’t have a single UTI while on the study, and all of them had fewer incidents than before.

As for why cranberries may work, scientists first thought it was the acid that killed the UTI-causing bacteria. Then it was believed that cranberry compounds prevented the attachment of the bacteria to the wall of the urogenital epithelium via the bacterial fimbriae (appendages for attachment). This may actually be true, but other actions are also possible.

Another 2013 study showed that for the UTI causative agent Proteus mirabilis, powdered cranberry was very effective for preventing UTI. In this experiment, the researchers found that the organisms did not swim well or swarm when exposed to cranberry compounds. In fact, the genes that express proteins for their flagella (for motility) were inhibited by cranberry powder.

In addition, their urease virulence factor was also suppressed. A virulence factor is any molecule that helps an infectious organism to colonize and/or obtain nutrition from a host, or helps it to evade or suppress the host immune system.

This is a dividing bacterium showing the fimbriae that help it
attach to surfaces. You can see the difference between these
and the flagella that help in the motility of the organism. It
may be that cranberry compounds mess with both to
prevent UTIs.

Not to be a downer, but a different group carried out a meta-analysis (an organized compilation of many studies involving a lot of statistical math) of many cranberry/UTI studies in 2013 and determined that cranberry compounds have no effect on the prevention or treatment of UTIs. So, all that talk about just how cranberry molecules suppress UTIs (fimbriae, acid, down regulation of host molecules) can be ignored if you don't believe they work.

The news is better on other fronts. In obese men, cranberry juice was able to inhibit the stiffening of blood vessels, an important factor in development of cardiovascular disease (CVD). The effect was greatest in men with metabolic syndrome – a combination of high blood pressure, blood glucose, and cholesterol, as well as obesity.

A second study confirmed this by showing that 1 cup of cranberry juice each day reduces blood glucose levels and CVD risk in men with type II diabetes. And this is just the beginning; 2013 studies also show how cranberry compounds may help you age well – which makes sense, since some vines have been producing cranberries since before the American Civil War. Other studies show that cranberry is a potent anti-viral agent in addition to preventing bacterial UTIs. Respect the berry – uh, false berry!

Next week, let’s talk about another symbol of Thanksgiving: the Indian corn you think of as merely decorative is actually a fascinating story of discovery.



Burleigh AE, Benck SM, McAchran SE, Reed JD, Krueger CG, & Hopkins WJ (2013). Consumption of sweetened, dried cranberries may reduce urinary tract infection incidence in susceptible women -- a modified observational study. Nutrition journal, 12 (1) PMID: 24139545

McCall J, Hidalgo G, Asadishad B, & Tufenkji N (2013). Cranberry impairs selected behaviors essential for virulence in Proteus mirabilis HI4320. Canadian journal of microbiology, 59 (6), 430-6 PMID: 23750959

Lorenzo AJ, & Braga LH (2013). Use of cranberry products does not appear to be associated with a significant reduction in incidence of recurrent urinary tract infections. Evidence-based medicine, 18 (5), 181-2 PMID: 23416416

Ruel G, Lapointe A, Pomerleau S, Couture P, Lemieux S, Lamarche B, & Couillard C (2013). Evidence that cranberry juice may improve augmentation index in overweight men. Nutrition research (New York, N.Y.), 33 (1), 41-9 PMID: 23351409

Shidfar F, Heydari I, Hajimiresmaiel SJ, Hosseini S, Shidfar S, & Amiri F (2012). The effects of cranberry juice on serum glucose, apoB, apoA-I, Lp(a), and Paraoxonase-1 activity in type 2 diabetic male patients. Journal of research in medical sciences : the official journal of Isfahan University of Medical Sciences, 17 (4), 355-60 PMID: 23267397


For more information or classroom activities, see:

Seed dispersal mechanisms –

Scarification –

Different types of fruits –

Fimbriae and flagellae –




Corn Color Concepts

Biology concepts – maize, transposon, antigenic variation, cereal grain, food grain, caryopsis


The Corn Palace in Mitchell, South Dakota, uses corncobs
to make murals on the sides of the building - yes, the mural
on the right is made of corncobs. Each year’s murals have a
different theme, and they use 13 different shades of corn in
their artwork, but after the drought of 2012 they only had 8
shades to work with for 2013. This is a picture of the palace
as it appeared in 1907. Notice the questionable decoration
on the center minaret – of course this was 25 years before
the rise of the Nazi party.
Thanksgiving decorations typically include some colorful ears of dried corn, commonly referred to as “Indian corn.” However, this corn has a history much more involved than mere decoration. People might be less inclined to hang it around their house if they knew how much it has in common with the organisms that cause gonorrhea, Lyme disease, and Pneumocystis pneumonia.

One of the first misconceptions we have to get out of the way is that corn is actually corn. The word corn doesn’t literally refer to the stuff on the cob we eat in the summer and the stuff we pop on a cold afternoon. What we call corn is much more accurately called maize.

The word "corn" comes from an old german/french word. In most uses before the 1600’s, corn meant the major crop for one particular area or region. In England, corn meant wheat; in Scotland or Ireland, it most likely means oats. There is even mention of corn in the King James Bible. This was translated several times and hundreds of years before maize arrived in Europe. The “corn” of the Bible most likely means the wheat and barley that were grown in the Middle East at the time.

When Columbus took maize (Zea mays) across the Atlantic to Europe, he might have referred to it as the chief crop of the Indians; therefore, it was Indian corn. After a while, domesticated maize became so ubiquitous that the word “Indian” was dropped, and all maize became corn – like all facial tissue becoming Kleenex.

The history of maize is, well, a-maizing. The corn we know today is the most domesticated of all crops. It can’t survive on its own; it has to be managed by man. Rice and wheat have naturally wild versions of themselves that still grow in nature, but there is no wild corn; it is purely man-made.


Today’s “corn” is actually a selective breeding result from a
grass called teosinte and a grass called gamagrass. Genetic
experiments have confirmed that each of these grasses was
involved in the evolution of maize. There was also some back
crossing of early maize with the grasses again. You can see
how the kernels and plants have changed over time.
The earliest corn-like plant was called teosinte. It's a grain plant with very small, vertical kernels. This plant was bred with something else, maybe gamagrass, and over time became early maize. Early maize was then bred back to teosinte, and the cob emerged. A recent article from Florida State shows that corn was being bred and harvested as early as 5300 BCE.

The early plants were quite variable, growing from 2 to 20 feet tall. The ears, when they developed, were small and had only eight rows of kernels. More breeding took place, especially when the plants were brought north. At that time, ears grew near the top of the plant, and the growing season in the north was too short to allow full development.

Maize is a grass, so it has the nodes and internodal growth as we discussed a few months ago. Corn grows about 1 node unit for each full moon; the Indians needed a corn that would mature in just three moon cycles. So they planted kernels from stalks that had the lowest ears, thereby selecting for plants they could harvest before it got too cold. Their selection was for size and production, but colors came along for the ride.

There are many color genes possible in maize. A new version, called glass gem corn, shows just how many colors are possible (see picture). Indian corn, as we define it now, can be found in most of these colors; sometimes ears are all one color, sometimes they are combinations of colors. It all depends on who is growing nearby, but we need to know a little more about corn in general to explain this.

This is Carl’s glass gem corn. The photographer swears there
was no manipulation of this image. The corn is just this
pretty! I’d hate to eat it. This strain was the result of many
years of selective breeding, and the seeds were passed
down through a couple growers before they got this result.

Maize is a food grain, meaning that it has small fruits with hard seeds, with or without the hulls or fruit layers attached. More specifically, maize is a cereal grain, because it comes from a grass. Wheat is a grass; so are barley, rice, and oats. Basically, these are the grains your morning cereal is made from, so which came first, the breakfast “cereal” or the “cereal” grain? The answer is out there.

And by the way - yes, grains are types of fruits. The fruit is more precisely called a caryopsis (karyon = nut); a small fruit and seed from a single ovary, which doesn’t split open when mature (indehiscent). One of the characteristics of most grains is that the pericarp (the fruit) is fused to the seed coat, so it is difficult to talk of the fruit without including the seed.


The point of this cartoon is to show you that there are
many layers to the kernel. The whole thing is not the
embryonic plant, just the germ. Some people say wheat
germ is healthy to eat. It would take a lot of kernels to get
much germ. You can see the hull is made up of several
layers as well; this is where the color is expressed. The
endosperm is what tastes good. It is many cells, all
storing the sugars.
The hull is a little more vague. Corn has a husk (the leaves that surround the ear), which is often considered the same thing as a hull. But each kernel on the ear also has a hull, the epidermis that is more brittle when dried. In other plants, husk and hull mean the same thing.

It's the hull that shows the color of a kernel of maize. You can pop blue, red, or purple corn, but the popcorn will still be whitish yellow. The color genes are present in all the cells of a kernel, but they are only expressed in the epidermis or hull; this will be important in a minute or two.

So how can Indian corn have kernels of different colors? The same way that you and your siblings look different. Each kernel is a different seed, so each is a different potential plant. The male flowers of the corn tassel send out grains of pollen to pollinate the female flowers. Each pollen grain has a sperm cell, and each has undergone the same process of mitosis and meiosis as human sperm – there is genetic variation there.

The female flowers are the silks on the ear of corn. Each silk is connected to a different ovary (potential kernel). Again, each egg is a different version of the maternal plant’s genome. Different silks could be pollinated by different male plant pollens floating around in the air – nothing says that all the kernels must have the same dad.

What we call Indian corn is just corn that has not been bred
so much as to have only one color gene, and can be pollinated by
different dads. You can see that Indian corn can have several
colors or one major color. The interesting parts are those
spots and streaks. Read on for more about them.

So, it isn’t too difficult to see that different kernels could be different colors, either from random assortment and Mendelian genetics, or from different pollens meeting different eggs. The reason we eat yellow corn or white corn or yellow/white corn is because the color genes have been selected for by breeding, and the pollination process is highly controlled. This is not the case with Indian corn.
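If you like to tinker, the logic of a mixed ear can be sketched in a few lines of Python. This is a toy model with a single made-up color locus (real maize has many color genes, as described below); the allele names and dominance scheme are my own invention for illustration:

```python
import random

# Toy model: one hypothetical color locus where purple (P) is
# dominant over yellow (p). The mother plant is heterozygous (Pp),
# and each silk can be pollinated by a different pollen grain.
FATHER_POOL = ["P", "p"]  # pollen alleles drifting around the field

def kernel_color(rng):
    maternal = rng.choice(["P", "p"])   # each egg draws one allele
    paternal = rng.choice(FATHER_POOL)  # each silk, its own pollen grain
    genotype = maternal + paternal
    return "purple" if "P" in genotype else "yellow"

rng = random.Random(1)  # seeded so the 'ear' is reproducible
ear = [kernel_color(rng) for _ in range(12)]
print(ear)  # a mixed ear, like Indian corn
```

Because every kernel draws its own pair of alleles, no two ears come out quite alike – the same reason siblings look different.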

So that’s the story for corn color – or is there more? Look closely at Indian corn above; some kernels have streaks or spots of color. How does that happen?! This is completely different from having kernels of different color, and relates to one of the great exceptions in DNA biology.

Barbara McClintock found that by observing the chromosomes of maize very carefully, specifically chromosome nine, and by looking at the resulting kernels from selective breeding, she could match changes in the chromosome to changes in color streaks and spotting.

She noticed changes in the length of the arm in some cells, and related this to the movement of genes along the chromosome. To this point, all scientists believed that genes stayed in the same place on a chromosome forever. McClintock saw genes jumping from one place to another. She called them transposons.


The mechanism of transposon control in corn is a bit
complicated. The C gene codes for pigment, but can be
disrupted by the Ds transposon. (top). If Ds never moves
out, then the kernel will be white in this example. If the Ds
gene never moves in, the kernel will be completely purple.
If it jumps out and in or in and out, then you get spots. The
bottom image shows that the earlier the change, the larger the
spot, because more daughter cells will have the functional
or dysfunctional gene.
But this jumping is not haphazard. It was under the control of another gene. When one gene (Ds) was activated to jump by another gene (Ac), its new position disrupted a third gene’s (C) sequence (Ds = disrupter, Ac = activator, and C = color).

When Ds was located inside C, no color was produced, but when it was not, the daughter cells could produce color. A kernel has many cells that divide and divide, so some progeny could switch back and forth and produce cells on the hull that may or may not be able to produce the color protein (see picture). If the move to disrupt C occurred early, more daughters would be produced and more of the surface would lack color. If it was late, the spot would be smaller (see bottom image to left).
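That timing logic is easy to capture in a toy calculation. Below is a minimal Python sketch – the single-progenitor, perfect-doubling model is my simplification, not actual kernel development:

```python
def kernel_spot_fraction(divisions, jump_at):
    """Toy model: one hull progenitor cell starts with Ds parked
    inside the C color gene (no pigment). If Ds jumps out at
    division `jump_at`, that cell's descendants regain color and
    keep doubling for each remaining division."""
    total_cells = 2 ** divisions
    colored_cells = 2 ** (divisions - jump_at)
    return colored_cells / total_cells

# An early jump (division 2 of 10) colors a big sector of the hull...
print(kernel_spot_fraction(10, 2))   # 0.25
# ...while a late jump (division 8) leaves only a tiny speck.
print(kernel_spot_fraction(10, 8))   # 0.00390625
```

The earlier the transposition, the more daughter cells inherit the restored C gene, and the larger the colored spot.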

This idea of jumping genes was revolutionary … and not well accepted at first. Even though Barbara’s science was impeccable, others just weren’t as good at spying the small changes in the chromosome. It took a while for the laboratory techniques to catch up to Barb’s eyes – then they gave her the Nobel Prize.

From our new knowledge of transposons have come many discoveries – some not so savory. Some infectious agents, both bacterial and eukaryotic, use jumping genes to escape our immune system. Neisseria gonorrhoeae was one of the first shown to do this. Our immune system, given time, will find bacteria that have taken up residence inside us; in gonorrhea's case, through sexual transmission.

N. gonorrhoeae has found that if it can change its costume, our immune system must start over looking for it. The proteins it has on its surface are what our immune cells recognize; we call them antigens. Gonorrhea organisms can go through antigenic variation; they have many surface antigen genes, and can switch them out if they are detected.


Variable surface glycoproteins are like selecting for antibiotic resistant
bacteria. One organism may switch its VSG for antigenic variation,
just like one bacterium might pick up a resistance gene.
When the immune system finds and mounts a response to the
organisms with the “blue” VSG, they are killed, but now the “green”
VSG organisms can proliferate. This is like when the antibiotics kill
off the susceptible bacteria, the resistant ones (green) then
have more room and food to overgrow.
They do this by moving different surface antigen genes in and out of an expression site. Only the surface antigen gene in the expression site is transcribed and translated to protein, but genes can jump in and jump out when needed. Antigenic variation also occurs with Borrelia burgdorferi, the causative agent of Lyme disease, the Plasmodium falciparum of malaria, and Pneumocystis jirovecii, a eukaryote that causes the pneumonia most AIDS patients contract.

In the case of Pneumocystis, a 2009 study showed that there are over 73 major surface glycoprotein (MSG) genes that can be switched in and out. They differ by an average of 19%, so the protein sequence of each is markedly different. Even though we don’t know the function of the MSG, it would appear that it is designed to increase the variation of the organism, probably to avoid an immune response.
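The one-expression-site trick can be sketched as a simple swap. The gene names and pool size below are loosely modeled on the ~73 MSG genes mentioned above, but the code is a hypothetical cartoon, not Pneumocystis biology:

```python
import random

# Cartoon of one-expression-site antigenic variation: only the gene
# sitting in the expression site gets made into surface protein;
# switching swaps in a silent gene from the pool.
silent_pool = [f"msg_{i:02d}" for i in range(1, 74)]  # ~73 MSG genes

expression_site = silent_pool.pop(0)  # one gene expressed at a time

def switch():
    """Swap the expressed antigen gene for a random silent one."""
    global expression_site
    new_gene = random.choice(silent_pool)
    silent_pool.remove(new_gene)
    silent_pool.append(expression_site)  # the old gene falls silent
    expression_site = new_gene
    return expression_site

before = expression_site  # the immune system has learned this costume
after = switch()          # ...so the organism changes into a new one
print(before, "->", after)
```

Each switch forces the immune system to start its search over, which is exactly why these pathogens are so hard to clear.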

Still have that warm and fuzzy feeling about Indian corn as a representative of Thanksgiving?

Next week, we start to look at the last of the four biomolecules - lipids. Can you believe some people can't carry any fat on their body, no matter how much they eat?


Pohl ME, Piperno DR, Pope KO, Jones JG. (2007). Microfossil evidence for pre-Columbian maize dispersals in the neotropics from San Andres, Tabasco, Mexico. Proc Natl Acad Sci U S A. , 104 (16), 6870-6875 DOI: 10.1073/pnas.0701425104

Keely SP, & Stringer JR (2009). Complexity of the MSG gene family of Pneumocystis carinii. BMC genomics, 10 PMID: 19664205



For more information or classroom activities, see:

History of maize –

Transposons –

Antigenic variation -



The Skinny On Fat

Biology concepts – lipids, fatty acid, saturated fat, trans fat, interesterification, adipose tissue, lipodystrophy, LDL and HDL


This is Lizzie Velasquez, a 24 year old with a genetic
form of lipodystrophy. She must consume 5000-
8000 calories and eat 80 times each day just to survive.
Her condition is called neonatal progeroid syndrome,
which includes premature aging and an oversized head
along with the lipodystrophy. She has dealt with more
than any person should have to, and now is a
motivational speaker – "it’s going to get better" is her
theme. Her second book, Be Beautiful, Be You is a
must read. The picture is from one of her public talks.
Most of us worry about gaining weight. We would love to be skinnier, lighter, trimmer, svelter (a new word?). But what if you had the opposite problem – you couldn’t gain any weight, no matter how much you ate?

There is a group of disorders known as the lipodystrophies (lipo = fat, dys = bad, and trophy = nourishment) in which afflicted people can't store any fat. Their stories tell us that being skinny is no blessing.

Lipodystrophies can be congenital (con = with, genitus = to beget), so they are present from conception, or they can be acquired. In congenital cases, the genetic mutation sometimes has little to do with fat, and sometimes it has everything to do with it. There are four known mutations in four different proteins that can all lead to a lipodystrophy.

People with a congenital lipodystrophy tend to develop type II diabetes. They also get arthritis and other disorders. Some mutations also carry higher risks of mental retardation and most increase the risk of cardiac disease and cirrhosis of the liver.  These can kill you.

Acquired forms often result from drug treatment. In HIV retroviral treatment, there can be lipodystrophy and lipoatrophy – which is loss of fat from one particular anatomic location, usually the face. On the other hand, visceral fat (fat around the internal organs) is increased during anti-HIV treatment. It matters, since visceral fat is associated with more heart and liver disease.


Lipoatrophy refers to the loss of fat in a particular area of the
body. On the left is the facial atrophy seen in patients on anti-
viral therapy in HIV infection. On the right is a specific
lipoatrophy surrounding an insulin injection site for diabetes.
A 2013 study sought to determine why the opposite things happen with fat in different places. They tracked different markers in each location and found that mitochondrial changes were the same in visceral adipose tissue (VAT) and subdermal adipose tissue (SAT). But the signals to build fat decreased only in SAT. Most telling, inflammatory signals were much greater in SAT than in VAT; it may be that less inflammation leads to less fat wasting. Strange that fat would be linked to inflammation – or maybe not - keep reading.

Fat may be considered evil, but it serves a purpose. The problems that lipodystrophy patients encounter underline that fat is a necessary tissue for animals. Problems arise when you accumulate too much of it, either under your skin, around your organs, or in your blood. If you don’t use the calories you take in for energy, your body will store them for later.

Chemically, a fat is made up of three fatty acids (see below) attached to a 3-carbon glycerol molecule. In adipose tissue (from Latin adipem = fat) and subcutaneous fat, these triglycerides, also called triacylglycerols (tri = 3, glycer = glycerol, and acyl = acid), are stored until they are released to the blood stream as free fatty acids. The fatty acids can then be broken down and used to generate ATP in the cells.

Fats are a much more efficient storage form of energy as compared to glucose or glycogen. There is 4.5 x more energy in fat as compared to the same mass of glycogen or glucose. In addition, since fats are hydrophobic (hydro = water, and phobic = fearing), they can be stored without water. These two factors mean that a lot of energy can be stored in a little space.


A fat molecule is really a triglyceride. The left structure is a
typical triglyceride with three fatty acids in black connected
to the bluish glycerol by ester bonds in dark red. A trans
double bond is shown in green. On the right is the partial
hydrogenation process that converts a polyunsaturated fat
into a trans fat. Usually there is a mix of products, with some
cis bonds being converted to trans bonds.
Glucose first gets stored as glycogen, but we make only a certain amount of glycogen. Usually a human has about one lazy day’s worth of glycogen. Energy beyond that gets stored as fat, and that’s a good thing. Imagine how large we would all be if all our energy reserves were in the form of glycogen + water. A normal human adult male would weigh an extra 110 pounds (50 kg)!
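That 110-pound figure falls out of simple arithmetic. Here's a quick back-of-the-envelope sketch in Python – the 14 kg of stored fat is my assumed figure for a typical adult male, and the 4.5× ratio is the one quoted above:

```python
KG_TO_LB = 2.2046

fat_stores_kg = 14.0  # assumed fat reserve of a typical adult male
energy_ratio = 4.5    # fat vs. hydrated glycogen energy per unit mass

# Storing the same energy as glycogen + water takes ~4.5x the mass
glycogen_equivalent_kg = fat_stores_kg * energy_ratio
extra_kg = glycogen_equivalent_kg - fat_stores_kg

print(f"extra mass: {extra_kg:.0f} kg (~{extra_kg * KG_TO_LB:.0f} lb)")
# -> extra mass: 49 kg (~108 lb), close to the ~110 lb quoted above
```

Change the assumed fat reserve and the extra mass scales with it, but the take-home point stands: fat is a remarkably compact fuel tank.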

A fatty acid is a chain of carbons with a carboxyl group (HO-C=O) on one end. If the chain of carbons contains only single bonds, then the fatty acid is called saturated. If there is one double bond between carbons, then it is an unsaturated fatty acid, and if there are two or more double bonds (unsaturations), then it is a polyunsaturated fatty acid.

The same terminology is used for triglycerides (fats) made from the fatty acids. A fat with only saturated fatty acids is a saturated fat. The double bond type also makes a difference for the fatty acid and fat. If the bond is in one configuration, it is called cis, and it creates a bend in the chain. If the double bond is in the other configuration, then it is called trans, and it is much straighter, like a saturated fatty acid.

You have heard of the benefits of polyunsaturated fats as opposed to saturated fats, and of the evils of trans-fats. Saturated fats tend to produce bad results in the blood stream. Their breakdown results in more acetates which stimulate cholesterol production. Also, saturated fats tend to clump together and form blockages in vessels. This leads to atherosclerosis and can kill you.


The left cartoon shows the buildup of plaque over time in
atherosclerosis. It takes a long time, but we all seem to be
working hard to make it happen. The right image is a
photomicrograph showing the blockage in a large coronary
artery. Think the amount of blood getting through is enough
to nourish your heart? Think it’s going to have a happy ending?
However, saturated fats are good at promoting liver and lung health, so some saturated fat in your diet is not a bad thing. Trans-fats, on the other hand, are harder to discuss. They can be made in a factory by removing some double bonds from polyunsaturated fats by partial hydrogenation. They also occur naturally, but are very rare compared to cis fats, so they don’t usually cause a problem.

The vast majority of trans fats we eat are industrially made. The trans double bonds are created in the hydrogenation process (adding hydrogens to reduce the number of double bonds). Some cis- double bonds become trans- double bonds during the process.

Industry likes saturated and trans-fats because they tend to be more solid at room temperature (higher melting temperatures). Saturated and trans-fats have more hydrogens (see picture above). The straighter shape of the trans-fat chain also raises the melting temperature.

The extra hydrogens and straighter chains lead to more interactions between the different molecules – they hold on to one another more tightly. Melting is basically making the different molecules separate by adding energy, so the added hydrogens have the end result of raising the melting temperature. This is good for making things like margarine.

Unfortunately, trans-fats tend to increase low-density lipoprotein (LDL) production; these contribute greatly to artery clogging and heart disease. The blocking of arteries is bad enough, but if they occur in the brain, or if part of a plaque breaks off, travels to the brain and blocks a vessel – that’s a stroke. There’s not much that’s worse than a stroke.

Saturated fats also raise the levels of LDL’s - so why are trans-fats worse for you than saturated fats? The levels of LDLs are only one aspect in disease promotion, the level of the good-for-you HDLs (high density lipoproteins) is just as important. It's the ratio of LDL:HDL that matters.


In the space of the vessel are some large cells with very
light cytoplasm that looks like Swiss cheese. There are
many small clear droplets and some larger ones. These
are fat droplets and give the cells their name – foam cells.
You can see that some have more than one nucleus. Two
diseased macrophages will often merge into
multinucleate giant cells.
When you eat saturated fats, both the LDL and the HDL levels increase, so the ratio stays generally the same. With trans-fats, the LDL production goes up but the HDL level stays the same or decreases. This leads to a bad ratio and disease progression.
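The ratio logic is simple enough to check with toy numbers. The values below are illustrative figures I made up, not clinical lipid-panel numbers:

```python
def ldl_hdl_ratio(ldl, hdl):
    """The ratio is what matters for disease risk, not LDL alone."""
    return ldl / hdl

baseline = ldl_hdl_ratio(100, 50)               # 2.0

# Saturated fat: LDL and HDL both rise -> ratio roughly unchanged
saturated = ldl_hdl_ratio(100 * 1.2, 50 * 1.2)  # still 2.0

# Trans fat: LDL rises while HDL stays flat -> ratio gets worse
trans = ldl_hdl_ratio(100 * 1.2, 50)            # 2.4

print(baseline, saturated, trans)
```

Raising both numbers by the same factor leaves the ratio alone; raising only the numerator is what tips the balance toward disease.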

The next question then is how do HDLs help prevent disease caused by LDLs? LDLs supply cholesterol to the cells that need it – and that’s all your cells (more on this next week). But if there is too much LDL, then they start to accumulate in the vessels and can form things like foam cells.

Foam cells are tissue macrophages located in/on the vessel walls. The job of macrophages is to eat things, so these macrophages eat up the extra LDLs in the area - but they don’t break them down. They build up and start to look like foam inside the cell. Unfortunately, the macrophages then become part of the problem; as they accumulate they form fat streaks in the vessel wall. This is the beginning of plaque formation and atherosclerosis.

A 2005 review looked at how HDLs are health promoting. It turns out that they steal cholesterol from LDLs, but they don’t promote the formation of foam cells and plaques because of their different structure. Therefore, having more HDLs will rescue more cholesterol from LDLs and transport it to the liver for eventual destruction.


The top cartoon shows how HDL helps get rid of cholesterol after it has been phagocytosed by a macrophage foam cell. A pre-HDL interacts with receptors on the macrophage, which then transfer cholesterol to the HDL. This is taken to the liver, where it is broken down and reused for bile production. This is called reverse cholesterol transport. The bottom image shows the functions of HDL, even beyond its ability to negate the unhealthy effects of LDLs.
By stealing the cholesterol from LDL, HDLs also stop many mechanisms that can lead to vessel blocking: the stimulation of vessel inflammation by LDLs, the formation of clots in the vessels (HDLs are anti-thrombotic; a thrombus is a clot), and the oxidation of LDLs.

Oxidation of LDLs produces oxygen radicals that can damage vessel cells and promote plaque formation. But HDL complexes include an enzyme called paraoxonase, which prevents the oxidation of closely associated LDL molecules. Preventing oxidation also reduces the production of pro-inflammatory molecules in the vessel wall and decreases the recruitment of some inflammatory cells to the area. Hurrah for HDL!

But wait – of course there’s an exception. HDLs from patients with existing diseases, like coronary artery disease (CAD) or chronic kidney dysfunction (CKD), actually contribute to plaque formation rather than prevent it! A 2013 review talked about how HDLs from CAD patients limit the anti-inflammatory and repair processes in the vessel cells, and in CKD patients promote inflammation and raise blood pressure. I guess the best way to prevent atherosclerosis is to not develop atherosclerosis.

Overall, you want to reduce fat intake, but especially trans-fats and saturated fats. Food labels are now required to show how much trans fat is in the product, but the manufacturers are getting around the regulation. They combine different fatty acids in a fat and call the products interesterified fats. Partial hydrogenation is still a major factor, but they aren’t called trans-fats. This allows them to stay below the FDA radar. Interesterified fats don’t exist in nature – that should tell us all we need to know.

What we need is a way to partially hydrogenate the polyunsaturated fats that does not create trans-fats – you work on that while I butter my bagel. Next week we can look at more aspects of fats, and how they are different from the other lipids.



Gallego-Escuredo JM, Villarroya J, Domingo P, Targarona EM, Alegre M, Domingo JC, Villarroya F, & Giralt M (2013). Differentially Altered Molecular Signature of Visceral Adipose Tissue in HIV-1-Associated Lipodystrophy. Journal of acquired immune deficiency syndromes (1999), 64 (2), 142-8 PMID: 23714743

Xu S, Liu Z, & Liu P (2013). HDL cholesterol in cardiovascular diseases: The good, the bad, and the ugly? International journal of cardiology, 168 (4), 3157-9 PMID: 23962777

Barter, P. (2005). The role of HDL-cholesterol in preventing atherosclerotic disease. European Heart Journal Supplements, 7 (Suppl F), F4-F8 DOI: 10.1093/eurheartj/sui036





Request For Feedback


Good afternoon readers,

I am seeking feedback from readers and users of the blog, As Many Exceptions As Rules.

If you have read the blog and wish to express an opinion, or if you have made use of the blog in a classroom or other, I would appreciate a short e-mail detailing the ways the blog is being used.

Any feedback you have would be appreciated, either as a comment below this post or to the following address:



You can also send by snail mail to:

Mark E. Lasbury, MS, MSEd, PhD
5060 E. 71st Street
Indianapolis, IN 46220

The Colors of Alien Plants

Biology concepts – photosynthesis, chlorophyll, pigmentation, astrobiology, exoplanet, dormancy

The King Crimson Norway Maple in our front yard is at least 50 ft. tall. It isn’t a rare tree, but I like it a lot. In fact, it is invasive and native only to Asia. You can’t plant one in several eastern states in the US, as they are taking over in some deciduous forests.
There is a large King Crimson Norway Maple (Acer platanoides 'King Crimson') in our front yard. Healthy and round, it is a fine showpiece. We are also blessed with a 15-foot tall burning bush (Euonymus alata 'Compacta') not more than thirty feet from the maple. The burning bush straddles the property line with our neighbor, so when it needs work, it’s theirs, and when it is beautiful in autumn, it’s ours. Together, they make our landscaping come alive with color and provide ample shade.

They’re autotrophs (auto = self, and troph = feed) as are most plants. They make their own carbohydrates from sunlight, carbon dioxide, and water. Our sun radiates light energy that can be captured and transduced to chemical energy, but not all stars are the same and not all plants are green, so…..

Question of the Day: How can starlight support non-green plants, and might it be different elsewhere?

Chlorophyll is one of several plant pigments, and chlorophyll itself comes in several flavors, but the primary plant chlorophylls are a and b. The “a” version is the major pigment for photosynthesis, absorbing light at the two ends of the visible spectrum – blues and reds (see picture). Green and yellow light get reflected, and this is what we see. Chlorophyll probably evolved to use red and blue because blue is high energy and red is abundant.
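The energy argument can be checked with the photon-energy relation E = hc/λ: shorter wavelengths carry more energy per photon. The wavelengths below are typical values I've chosen to represent blue, green, and red light, not figures from the post.

```python
# Photon energy E = h*c / wavelength; shorter wavelength = higher energy.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_ev(wavelength_nm):
    """Energy of one photon, converted from joules to electron-volts."""
    joules = H * C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19

for color, nm in [("blue", 450), ("green", 550), ("red", 680)]:
    print(f"{color:5s} {nm} nm -> {photon_energy_ev(nm):.2f} eV")
# Blue photons carry the most energy per photon; red the least.
```

So a blue photon at 450 nm delivers roughly 50% more energy than a red one at 680 nm, which is why blue light can power a lot of photosynthesis even when less abundant.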

Chlorophyll b is an accessory pigment that plants use in smaller amounts. The “b” version absorbs light from near the same wavelengths as chlorophyll a, but they pass the energy on to the “a” version for use in photosynthesis. The two chlorophylls differ at only one of their 55 carbon atoms.

Green light is higher energy than red light, but less abundant in our atmosphere. Blue light is much higher energy, so it can power a lot of photosynthesis even if it isn’t that abundant. Therefore, it isn’t surprising that our plants appear green: chlorophylls absorb and use red light because it is abundant and blue light because it is high energy, while green isn’t worth bothering with and is reflected.
There are also chlorophylls c, d, and f. Chlorophyll c is also an accessory pigment which transfers energy to chlorophyll a, but it is very different structurally. Chlorophyll c is found only in some marine algae, and actually comes in three similar structures; c1, c2, and c3.

Chlorophyll f absorbs in the near infrared (NIR, not visible but close to red) range. Discovered in 2010 as the major chlorophyll in stromatolites of Australia, it is the first new chlorophyll identified in the last 60 years. However, its usefulness in photosynthesis has not yet been confirmed.

Chlorophyll d, on the other hand, is found to be the primary chlorophyll in some cyanobacteria. A recent study showed that this chlorophyll absorbs NIR light as well. Though NIR is lower energy than red light, the 2012 paper shows that these cyanobacteria are just as efficient at photosynthesis as plants with chlorophyll a. This works out well since, in water, the higher energy wavelengths are absorbed near the surface and the only light that penetrates to the cyanobacteria is the NIR.

This is important for the science of astrobiology, predicting what life might look like on other planets and trying to identify which planets might hold life. Knowing that low energy light can still power photosynthesis tells us that we should not discount the planets around red dwarf stars. These stars have light of different wavelengths than our sun. Autotrophs from planets around red dwarfs may use NIR chlorophylls exclusively; therefore they might reflect all light and appear almost white.
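The red dwarf reasoning can be made concrete with Wien's displacement law, which gives the wavelength where a star's emission peaks. The stellar temperatures below are representative values I'm assuming (roughly our sun and a typical red dwarf), not figures from the post.

```python
# Wien's displacement law: a blackbody's emission peaks at
# lambda_max = b / T, with b the Wien displacement constant.
WIEN_B = 2.898e-3   # m*K

def peak_wavelength_nm(temp_k):
    """Peak emission wavelength in nanometers for a star of temperature temp_k."""
    return WIEN_B / temp_k * 1e9

print(f"Sun-like star (5778 K): {peak_wavelength_nm(5778):.0f} nm")  # visible green
print(f"Red dwarf (3000 K):     {peak_wavelength_nm(3000):.0f} nm")  # near-infrared
```

A cooler red dwarf peaks near 1000 nm, squarely in the NIR, which is why autotrophs around such stars might rely on NIR-absorbing chlorophylls.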

On the other hand, light from different stars might drive evolution of different chlorophylls, so plants on other planets might not be green at all, but could reflect just lower energy light and appear red, or reflect just higher energy waves and be blue – blue plants, cool!

Current possible habitable exoplanets have been numbered and are under investigation. Scientists look for planets in the habitable zone, meaning they are of a temperature to have liquid water. They also look for rocky planets that are about the same size as Earth to provide the same amount of gravity. They also look for planets around stars with the same kind of light as our sun – maybe they shouldn’t limit it to stars like ours. Those with “Kepler” in their name come from the orbiting Kepler telescope, which is now in danger of never working again.
Based on the light reflected from exoplanets (planets outside our solar system), a 2007 study in the journal Astrobiology says we might be able to predict the color of their possible plants and the wavelengths they might use. Furthermore, a 2012 study suggested that binary systems with two stars, each giving off different wavelengths of light, might force the evolution of dual photosynthetic mechanisms, leading perhaps to alternating plant colors, depending on which sun is shining.

Chlorophylls provide energy through photosynthesis, but they also have a cost. The old saying, “It takes money to make money” applies to plants as well. It takes energy to make chlorophyll, so it only pays to make chlorophyll when there is ample sunlight to put through photosynthesis. When the days get shorter on Earth, the profit margin for producing chlorophyll goes down, so the plant just stops making it.

This is when we start to see the other pigments, those that might play a role on other planets. Other major pigments are the yellow, orange or red carotenoids and the flavonoids. When the plant reduces chlorophyll production, the green color is then a lower percentage of the total pigment in the leaf and the other colors can show through. This gives the bright colors of fall foliage.

But these same pigments can make it seem that a green plant is a non-green plant. Plants that produce large amounts of purple, brown, or maroon pigments have leaves that are so dark that they appear black. Purple, black, and red plants have chlorophyll aplenty, it’s just that the color is masked by other pigments.

Carotenoids are a diverse group of pigments, but yellows and oranges seem to predominate. Carrots get their color from carotene, one type of carotenoid. Xanthophyll is another, which reflects yellow light wavelengths. While chlorophylls absorb red and blue light, carotenoids absorb the blue wavelengths, as well as green light, reflecting only the lower energy yellow, orange, and red light.

Retinal is the major pigment used in our vision. Transduction of light energy into chemical energy and a nerve impulse is powered by a cis- to trans- conversion of part of the molecule. Is it any wonder that this ability to capture light energy can also be applied to photosynthesis?
By absorbing the green light that would usually be bounced back from chlorophyll, they can prevent us from seeing them as green. Additionally, non-green plant pigments can contribute to photosynthesis, serving as accessory pigments to chlorophyll.

Carotenoids absorb light energy, and while they can’t convert this directly to chemical energy through photosynthesis on Earth, they can transfer this energy to chlorophyll, which then carries it through photosytems I and II of photosynthesis.

In addition, some archaea use retinal (another pigment) to extract energy from the green wavelengths of light. So, why aren’t plants truly black? Wouldn’t it be most efficient to absorb all wavelengths of light for photosynthesis and reflect nothing, thereby appearing black to us? Wouldn’t this be the most efficient use of the sun’s energy?

The answer is easy – evolution doesn’t work to maximum efficiency. Natural selection is random and works with what it is given – nothing in nature is engineered by decision to maximize efficiency. But that doesn’t mean there can’t be black plants around other stars, having undergone completely different evolutionary paths.

Even if something used carotenoids, retinal, xanthins and chlorophylls, could it extract energy by absorbing all light waves that strike the plant? Um, no. No plant comes close to absorbing all the light that it can use, and no plant is made of only pigment molecules. There will always be reflections from other molecules.

Plus, if all light was absorbed, can you imagine how hot the plant would get? Imagine a blacktop parking lot being alive; you can fry an egg on an asphalt surface during the summer!

Purple heart (left) and black pepper pearl (right) have lots of pigments that make them colored purple and almost black. However, they have chlorophyll too; it is just masked by the other colors. They do photosynthesis just like other plants, but they certainly look interesting.
Carotenoids are longer lived than chlorophyll. When autumn comes around, the plant breaks down chlorophyll so that the components can be reused, but the carotenoids stick around much longer. Therefore, the yellows and oranges are not masked by the greens, and the leaves change colors.

Anthocyanins of the flavonoid class are another set of plant pigments. These colors are also more stable than chlorophylls. Our King Crimson Maple makes a lot of red anthocyanin pigments that absorb the green light coming in to the leaf and perhaps a lot of the green light reflected by the chlorophyll. Therefore, as the amount of the anthocyanins in a leaf increases, the green color is masked by the red.  

Plants can use anthocyanins as “sunscreen” because in addition to absorbing green light, they also absorb ultraviolet light. Even though plants and animals need oxygen, they can also be damaged by the production of oxygen radicals (highly reactive compounds) produced by ultraviolet light energy striking oxygen-containing molecules and breaking them apart. Ultraviolet light can especially damage DNA, so anthocyanins can protect cells from mutations that might lead to inefficient activity or even cancer. It might be that on other planets, anthocyanins could be photosynthetic and plants live on UV light.

Sunscreen protects our skin from damage, just as red pigments protect the plant leaves. Even more, eating plants high in anthocyanins, like red grapes, blackberries, and blueberries, can transfer those antioxidant molecules to us for protection of our tissues and blood…. but don’t eat your Norway Maple.

On the left is our King Crimson. The yellow arrow shows the darker leaves that get more sunshine. The green arrow shows the shaded leaves that make much less red pigment because they don’t need the protection. On the right is the burning bush in the open, so it has more carotenoids that show up in the fall.
When fall comes, or it is time for the fruits to ripen, plants start to produce even more anthocyanins (as in green apples turning red), because as other compounds in the plant breakdown more oxygen radicals will be produced. Therefore, the plant needs more protection.

Returning to our maple and our burning bush, it would seem that the maple leaves are dark red, almost purple, because of the high anthocyanin pigment concentration relative to the chlorophyll concentration (red + green = almost purple). But not all of them are purple (see picture). Other examples of this, the purple heart plant and Oxalis regnelli, remain purple all through their growing cycle.

Our burning bush is deep red in autumn because it is not shaded at all, so it produces more anthocyanin to protect its leaves in the summer. If it were shaded part of the time, it might be more pink. If the leaves need protection, they make more anthocyanin, and if not, they don’t.  Don't ask me about shade on other planets.

Next week, your Fourth of July ice cream may have a side effect - ever wonder how "brain freeze" works?



Behrendt, L., Schrameyer, V., Qvortrup, K., Lundin, L., Sorensen, S., Larkum, A., & Kuhl, M. (2012). Biofilm Growth and Near-Infrared Radiation-Driven Photosynthesis of the Chlorophyll d-Containing Cyanobacterium Acaryochloris marina Applied and Environmental Microbiology, 78 (11), 3896-3904 DOI: 10.1128/AEM.00397-12

O'Malley-James, J., Raven, J., Cockell, C., & Greaves, J. (2012). Life and Light: Exotic Photosynthesis in Binary and Multiple-Star Systems Astrobiology, 12 (2), 115-124 DOI: 10.1089/ast.2011.0678 

Kiang, N., Segura, A., Tinetti, G., Govindjee, Blankenship, R., Cohen, M., Siefert, J., Crisp, D., & Meadows, V. (2007). Spectral Signatures of Photosynthesis. II. Coevolution with Other Stars And The Atmosphere on Extrasolar Worlds Astrobiology, 7 (1), 252-274 DOI: 10.1089/ast.2006.0108

Sweet Suffering

Biology concepts – nociception, cranial nerve, headache, referred pain, vasodilation, mechanoreceptor

Nancy Johnson, a Philadelphia housewife, received a patent for the hand crank ice cream freezer in 1843. She sold the patent for 200 dollars because she couldn’t afford the manufacturing cost. She received the patent on Sept. 9, which makes me think the idea sprang from a 4th of July problem of ice availability. It took her two months or so to solve the problem.
The Fourth of July means fireworks, but it also means breaking out the ice cream maker and churning up some cold sweetness. Is it any wonder that July is National Ice Cream Month?

But all is not happiness and light in ice cream land – ever had “brain freeze?” The typical ice cream headache is sensed as a pain in your head. Pain-sensing neurons are located throughout your body, except for your brain – so…

Question of the Day: If your brain doesn’t have pain receptors, why does brain freeze hurt?

Nociception (from Latin noxa = harm) is the input of stimuli from the environment that will be sensed as pain, but how can ice cream be a noxious stimulus? Not much research has been done in this area, which is surprising given the number of names for the phenomenon – brain freeze, ice cream headache, cold stimulus headache, even sphenopalatine ganglion cephalgia (ceph = head, and algia = pain).

The incidence of cold-stimulus headache has only been looked at in three populations. Some Danes (15%) and Taiwanese teenagers (41%) have ice cream headaches, but a more complete 2012 study has been published from Brazil. These researchers indicate that 37% of over 400 people did experience cold headache, with migraine sufferers more susceptible (50%). Apparently, cold weather people don’t experience ice cream headache as much, maybe because they eat less ice cream.
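For a sense of the statistical weight behind that 37% figure, here is a back-of-the-envelope confidence interval, using the 414-volunteer count from the Brazilian study's title. This is my own rough normal-approximation calculation, not an analysis from the paper.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

low, high = proportion_ci(0.37, 414)
print(f"37% of 414 -> 95% CI roughly {low:.1%} to {high:.1%}")
```

With over 400 subjects, the estimate is fairly tight (roughly 32% to 42%), which is why the larger Brazilian study is more convincing than the small Danish sample.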

Let me describe the typical ice cream headache for those of you who haven’t had the pleasure. The pain, usually in the forehead (60%) or sides of head (48%), begins just a few seconds after a big mouthful of ice cream, typically lasts about 20-60 seconds, and then subsides over a short period of time. The faster the cold is applied, i.e. the faster you gobble down your dessert, the more likely the headache. This is an important point, as we will see later.

Neurons that sense thermal stimuli, both hot and cold, have receptors on the sensory endings. Nociceptors for pain in the shallow skin do not have receptors; they are merely free nerve endings. The mechanical nociceptors respond to deformation pressure, but also to incision wounds. Chemical nociceptors respond through TRP channels, much like those for heat. This is why capsaicin pepper spray hurts; it is like being burned.
There are specific nociceptive receptors in skin and tissues for thermal stimuli, chemical stimuli, and mechanical stimuli. A thermal stimulus is most commonly heat; you learn at an early age to take your finger off the iron. But what about cold, why might it be sensed as pain?

Specific channels found only in nociceptive nerve endings allow for the flow of sodium ions to start an electrical impulse. There are different versions of the Na+ channel that respond to cold, the more active Nav1.7 and the much less active Nav1.8. It wasn’t until 2007 that scientists even found a reason for the 1.8 receptor.

There is a desensitization of neural endings as they fire over and over. It is harder for the neurons to keep rebuilding their electrical potential after repeated impulses; like when you stop feeling your backside against the chair after you’ve been sitting for a while.

The researchers found that if the 1.7 receptors keep receiving a cold input, they stop firing and you will not be aware of the continued cold. But this is where the 1.8 receptors come in. Nav1.8’s are harder to stimulate, but they will react to the continued signal even when the 1.7 receptors have been desensitized. This is why you feel the intense cold as pain.
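One way to picture the hand-off between the two channels is a toy sketch: a fast-desensitizing Nav1.7 response and a weaker but sustained Nav1.8 response. The decay constant, delay, and amplitudes below are purely illustrative numbers, not a biophysical model of the real channels.

```python
import math

def nav17_response(t, tau=2.0):
    """Fires strongly at first, then desensitizes exponentially (toy model)."""
    return math.exp(-t / tau)

def nav18_response(t, delay=1.0, amplitude=0.4):
    """Harder to activate, but keeps firing once the stimulus persists (toy model)."""
    return amplitude if t > delay else 0.0

# Under sustained cold, Nav1.7 fades while Nav1.8 carries the lingering signal
for t in [0, 1, 3, 6, 10]:
    total = nav17_response(t) + nav18_response(t)
    print(f"t={t:2d}s  Nav1.7={nav17_response(t):.2f}  "
          f"Nav1.8={nav18_response(t):.2f}  total={total:.2f}")
```

By ten seconds the Nav1.7 signal has essentially vanished, and nearly all of the remaining "cold as pain" signal comes from Nav1.8, matching the hand-off described above.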

The scientists in the 2007 study were able to inactivate the 1.8 channels in mice. They then desensitized the 1.7 receptors with a cold stimulus and the mice would run around on dry ice without feeling any pain at all. They would stay there until they froze solid if the researchers didn’t pick them up.

The desensitization of the normal receptors is what is behind cold analgesia (a = without, algen = to feel pain). On the other hand, the Nav1.8 receptors are responsible for cold hyperalgesia (hyper = more or beyond). So, is this why the ice cream hurts your head?

Hyperalgesia is an amplified pain response. Things that should hurt a little end up hurting a lot. The chart shows that sensitivity to pain is not changed; it takes the same amount of stimulus to cause pain. Neither is the maximum amount of pain felt changed. The hyperalgesia is in the middle: a stimulus causes more pain than it should. In a weird twist, long-term use of painkillers (opioids) can actually result in hyperalgesia – too bad for addicts.
Nope, cold receptors probably aren’t the reason for pain during an ice cream headache. Mechanical receptors might be more important. I say “might be” because scientists don’t really know what causes the cold-stimulus headache. They have a couple of theories though, and both make sense.

First is the vascular theory of headache, related to the body’s desire to retain heat. A loss of heat is potentially dangerous, especially in the brain. When cold food is passed over the palate (the roof of the mouth), the cold stimulus is passed through the bones of the palate and to the blood vessels that enter the brain from the sinuses.

The brain doesn’t want to allow this cold stimulus to cool the blood going to the brain, so the nerve (trigeminal nerve, cranial nerve V) for much of the head and neck will cause the vessels to constrict. Problem solved, right? No, the brain also needs oxygen, so vasoconstriction isn’t the best idea. Constriction means less blood; less blood means less oxygen.

Therefore, the nerve causes the vessels to undergo a rebound vasodilation, also called the trigeminoparasympathetic reflex. This is similar to when the blood vessels of the face and other skin are exposed to cold, and then your skin appears reddened from the vasodilation. Of course, the reddened skin (dilation of skin capillaries) doesn’t hurt until the Nav1.7 cold receptors start to desensitize and the Nav1.8 receptors kick in. In the case of ice cream on your palate, the faster you stuff it in your mouth, the larger the constriction, and the larger the rebound vasodilation.

The trigeminal nerve is divided into three divisions: the ophthalmic nerve (V1), the maxillary nerve (V2), and the mandibular nerve (V3). V1 and V2 carry only sensory (afferent) information, but V3 carries both sensory and motor signals. The entire nerve is above the level of the spinal column, so it is called a cranial nerve (there are 12). The pain signals from the palatal region are most often referred to the ophthalmic region.
The pain is from the vasodilation. Dilation stretches the vessel wall and this is sensed by the mechano-nociceptive receptors. Just because the brain tissue itself doesn’t contain pain receptors doesn’t mean that the blood vessels don’t.

Arguing against this theory is the fact that things other than ice cream can stimulate an ice cream headache. Some folks get the same headache when they have a cold breeze pass across their head, or when they scuba dive. In these cases, we need a different reason for the pain in the forehead.

A second hypothesis is available and has to do with something called referred pain. Heart attacks are famous for referred pain. It is common during myocardial infarction (heart attack) to have pain in the left arm or the jaw or neck. Sometimes this is the only pain that is felt, while in other heart attacks there is no referred pain at all.

Referred pain occurs when there is a noxious stimulus in a deep tissue, from a place that there is normally little pain stimulation. There are fewer nociceptive receptors in organs and vessels as compared to the skin and other shallow structures that get hurt more often. In referred pain, the discomfort is sensed in some other location, not where the stimulation occurred.

How does this error in localization happen? The brain sends nerves out to the body (efferent neurons), and there are also nerves that carry information from the peripheral body to the brain (afferent neurons). In the majority of cases, these afferent and efferent signals travel a distance in the spinal column and then exit to the brain on one end and to the peripheral body on the other.

On the left is a close up cartoon showing spinal nerves leaving the vertebral column. At each level, a spinal nerve leaves the column on each side. A cartilage disc separates each of the vertebrae and ensures that there is sufficient space for the spinal nerve to exit the column without being impinged. When the cartilage degrades, you can end up with a herniated disc and a pinched nerve. On the right, you see that spinal nerves leave the column at all levels, from the neck to the sacrum. Those that exit together at the coccygeal end are called the cauda equina (horse’s tail).
Specific afferent neurons gather sensory information from some superficial (skin/muscle) part of the body or from a deeper part of the body. In many cases, the afferents from a superficial area and those from a deep region will enter the spinal column at the same place. This is where the problem starts.

The brain isn’t used to having a pain stimulus come from a vessel or an organ, so it sometimes gets confused, and tries to sort "present" information in the context of "past" experience. The sensory information gets switched as to its apparent source. Therefore, the brain may assign the pain to superficial area innervated by the afferent neurons that enter the spinal column at that same level.

In a heart attack, afferent neurons that would sense damage to be interpreted as pain enter the spinal column at T1-T4 levels (from between the first and fourth thoracic vertebrae). These also happen to be the levels that collect sensory information from the left arm, left side of chest, neck, parts of the jaw, and the upper back. When the signals are confused by the brain, the signals interpreted as pain are assigned to one or more of the areas with common spinal level innervation. Hence, your heart attack may hurt in your left arm, jaw, neck, chest, or back.
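The convergence idea can be sketched as a simple lookup: the brain may assign deep pain to any superficial region whose afferents share a spinal entry level with the organ. The level assignments below are simplified from the paragraph above, and the data structure and function names are hypothetical illustrations, not anatomy software.

```python
# Hypothetical sketch of shared spinal entry levels (simplified from the text)
ENTRY_LEVELS = {
    "heart":      {"T1", "T2", "T3", "T4"},   # deep organ
    "left arm":   {"T1", "T2"},               # superficial regions below
    "jaw/neck":   {"T1"},
    "left chest": {"T2", "T3"},
    "upper back": {"T2", "T3", "T4"},
}

def referral_sites(organ):
    """Superficial regions sharing at least one spinal entry level with `organ`."""
    organ_levels = ENTRY_LEVELS[organ]
    return sorted(site for site, levels in ENTRY_LEVELS.items()
                  if site != organ and levels & organ_levels)

print(referral_sites("heart"))
# -> ['jaw/neck', 'left arm', 'left chest', 'upper back']
```

Every superficial region that overlaps T1-T4 is a candidate referral site, which is why a heart attack can "hurt" in the arm, jaw, chest, or back.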

For a cold stimulus headache, the idea is the same, but the anatomy is just a little different. Nerves that innervate the head don’t necessarily enter or leave the spinal column. They sense things and send signals to areas above the level where the spinal cord begins. The trigeminal nerve (cranial nerve V) carries afferents from all the cranial vessels but also from parts of the face and forehead and sends efferents to the head and face.
The marine plankton organism Gambierdiscus toxicus (150 µm dia.) lives in saltwater and is the food item for several species of marine organisms. It produces several different types of ciguatera toxins, which can work their way up the food chain as bigger things eat littler things. When we eat fish that is contaminated with the toxin, we have a trip much like one on LSD. The toxins can also produce a severe cold allodynia in the mouth and all over the body.

The theory says that when the nociceptive receptors are triggered because of the cold stimulus on the palate, either directly or via the rebound dilation of the cranial blood vessels, the pain is wrongly assigned by the brain as coming from the forehead. Your ice cream headache is a mistake your body makes. Just be glad you don’t have cold allodynia (allo = other, and dynia = pain), a condition where any cool or cold sensation is sensed as pain. A 2011 dental study indicates that cold allodynia is not only in response to subtle stimuli, but the pain also lasts much longer than in the control population.

Worse would be a cold allodynia induced by fish. Seem impossible? Well, several kinds of fish can carry ciguatoxins, which can induce hallucinations (ichthyosarcotoxism) and a potent cold allodynia. I worry for many of the judges on Iron Chef America when a chef decides to make fish ice cream. Now that I know about a “hallucinogenic fish toxin-induced pain from anything cool” – well, I’ll have to pass on the fish ice cream.

Begin considering the animals you think are the toughest. Next week I will give you my contender, an animal you've probably never heard of.



Zimmermann, K., Leffler, A., Babes, A., Cendan, C., Carr, R., Kobayashi, J., Nau, C., Wood, J., & Reeh, P. (2007). Sensory neuron sodium channel Nav1.8 is essential for pain at low temperatures Nature, 447 (7146), 856-859 DOI: 10.1038/nature05880

de Oliveira, D., & Valenca, M. (2012). The characteristics of head pain in response to an experimental cold stimulus to the palate: An observational study of 414 volunteers Cephalalgia, 32 (15), 1123-1130 DOI: 10.1177/0333102412458075
