Like 15 million people worldwide, I have been so taken with the 2010 TED talk by Brene Brown on her research on ‘vulnerability’ that when I heard her TED Radio interview recently, I had to take notes. The TED Radio Hour is great because it takes several TED speakers and interviews them around a consistent theme. The theme for this podcast was ‘Making Mistakes’. Apart from Brown’s interview, I enjoyed the one with jazz vibraphone player Stefon Harris, whose take on the idea that there are no mistakes in jazz seemed to hold a lesson for many aspects of problem solving in life. The joy of podcasting is being able to replay something, to really listen to and contemplate what is being said, especially the various gems that came from Brene Brown. So here are the gems I found in that interview, with a little paraphrasing.
Shame is the fear of disconnection. “I’m not good enough for other people”, “If I tell people …. they won’t want to be around me”.
Out of shame we numb vulnerability (we don’t tell how we are or even listen to how others are). We can’t numb vulnerability in isolation so, effectively, we numb joy and happiness as well.
Vulnerability is our most accurate measure of courage, e.g. talking or listening in a difficult conversation. When we don’t act with vulnerability, we end up in shame. Vulnerability is necessary if we want to come through for others. Vulnerability is uncertainty, risk, emotional exposure. It is intimacy, trust, connection. Vulnerability is a raw bid for connection, as distinct from a ‘normalising’ conversation. A normalising conversation is a broadcast of complaint meant to enrol others in your complaint, like we often do on Facebook. Vulnerability is showing how we are being in our emotional response to a difficulty. Difficulties are what our days are made up of. (Me: We can either be vulnerable and happy, or complaining and unhappy. How to deal with complaint in a vulnerable manner: that is a whole wonderful possibility in life.)
Vulnerability is the birthplace of creativity, innovation, change. (Take it to the bank).
SHOW UP RIGHT NOW. Perfection isn’t possible, and even if it were, no one wants it of you. WE WANT VULNERABILITY.
Oh, and about Stefon Harris. In jazz there are no mistakes. A mistake would be not perceiving what someone else just did. That would be an opportunity missed. To get somewhere you think you might like to be, be patient and listen.
The 2012 Nobel Prize in Physiology or Medicine has been jointly awarded to Sir John B. Gurdon and Professor Shinya Yamanaka. They received the award ‘for the discovery that mature cells can be reprogrammed to become pluripotent’. What does that mean?
Humans are made of many tissues: bone, skin, muscle and more. Look at the tissues under a microscope and you’ll see they are made of cells. Cells are the building blocks of life. An adult human is composed of trillions of cells of many types, but all humans start out as a single, fertilised egg cell. After fertilisation, this cell starts to divide, forming new cells. As the embryo develops, it contains cells that have the potential to form many other types of cell. Stem cells that can go on to form most other cell types are said to be ‘pluripotent’.
Eventually, as the foetus grows, these pluripotent cells specialise – that is, they mature into specific types of cells. Some will become skin cells while others will form muscle and other body parts. Originally, the theory was that once a cell matured into one cell type, it couldn’t become another – for example, a white blood cell couldn’t become a neuron in the brain.
The work of Gurdon and Yamanaka showed this isn’t the case. Both were able to take mature, specialised cells and show that they could be changed, or ‘reprogrammed’, to revert to immature cells. These cells could then mature into a different type of cell.
Their discoveries have helped scientists better understand how cells develop and specialise. Their work has also opened up a number of medical applications, such as potential treatments for diseases like cystic fibrosis and Huntington’s disease.
GREAT BALLS OF LIGHTNING
You’re at home, sitting on the couch. Outside, there is thunder and lightning. You notice something at the window: a strange, glowing ball of light. As you watch, it appears to pass through the glass. It wanders through the air before abruptly disappearing.
What you’ve seen isn’t magic, it’s a puzzling phenomenon called ball lightning. Rare and mysterious, it is often described by witnesses as a ball of light, about the size of a grapefruit, that moves slowly through the air before disappearing. It is often seen during thunderstorms, but not always.
Ball lightning is so rare that scientists don’t know what it actually is, and some people doubt whether it exists at all. Possible causes proposed by scientists include microwave radiation, nuclear energy and reactions caused by more typical lightning. Now researchers from CSIRO and the Australian National University have used maths to come up with a new explanation.
They propose that certain conditions in the atmosphere cause charged particles (or ‘ions’) to clump together on a surface like the outside of a glass window. These negatively charged ions attract positive ions, which group together on the other side of the glass.
Having too much of the same charge in one place is unstable. A more stable situation is to have the charges balanced, which can happen when charges move. This movement is called a ‘discharge’, and often takes the form of a spark.
In this case, it is proposed that the positive charge on the inside of the glass discharges by removing electrons from gas molecules, creating a mixture of electrons and ionised gas molecules. Under certain conditions, the discharge would give off light. Instead of a short, sharp spark, the discharge could form a glowing, moving ball: ball lightning.
This is the first time that a simple mathematical formula has been used to explain ball lightning. However, that doesn’t necessarily mean the mystery is solved – scientists will continue to debate the subject. So even though the numbers might add up, ball lightning may remain a mystery for a little bit longer.
Water, water everywhere?
It’s a small molecule, made of oxygen and hydrogen atoms in a V-shape. It’s colourless, odourless and expands when it freezes into a solid. It’s water, and without it, we wouldn’t be here.
Scientists have discovered organisms that can live without light or oxygen, while others thrive with very little heat. But scientists have yet to find life that doesn’t need water. Without water, many biochemical reactions necessary for life would be impossible, including photosynthesis and respiration.
Water also plays a role in the Earth’s climate. Water vapour forms clouds, which produce rain. Ocean currents transport vast quantities of water across the planet, and these currents can influence climate patterns.
It may seem that there is water everywhere, but in Australia there often isn’t enough of it. Australia is surrounded by water, but the salty water of the oceans can’t be directly consumed by humans, or used for agriculture. Australia is the driest inhabited continent and this makes the limited freshwater supplies even more valuable.
This week is National Water Week. The purpose of the week is to raise awareness in the community about water issues in Australia. It encourages the community, businesses and industry to reduce water consumption.
There are a number of ways to save water. CSIRO scientists are involved in a number of projects to use water more efficiently in agriculture and industry, as well as developing varieties of grain that use less water.
At home, you can make a difference. Having a shorter shower is the biggest saving that can be made inside the home, while fixing leaking toilets and taps can save a surprising amount of water. In the garden, planting hardy, native species that don’t require much water, as well as watering your plants and lawn at night, can save water. Even just being aware of the amount of water you use and trying to use less can help.
The whole tooth and nothing but the tooth
Say cheese and flash that beautiful smile. You should be proud of those choppers; after all, teeth have been around for nearly half a billion years.
The fossilised teeth of backboned animals, or vertebrates, are a paleontologist’s dream – they’re tough enough to survive weathering and often contain distinctive features that reveal a lot about an animal’s diet. But when did the first tooth appear?
Winding back the clock roughly 440 million years, we find aquatic ancestors called placoderms. These armoured fish provide us with the oldest evidence of vertebrate jaw bones. New body parts rarely pop into existence. Instead, a body part evolves or changes just enough to perform another job. In this case, the front gill arches of the placoderms’ ancestors became useful for getting a grip on lunch.
There have been some big questions about whether these early jaws sprouted teeth and, if so, what those teeth might have been like. Were placoderms all gums? Did they have something like toothy pegs? Or could the first fish flash a sharp-looking grin?
Western Australia has well-preserved placoderm fossils with hints to help solve this mystery. However, the clues are inside the fossilised jaw bones. Cracking them open to take a peek is out of the question.
Instead, researchers from Curtin University, the University of Bristol and London’s Natural History Museum worked with physicists in Switzerland, using X-rays to peer inside the fossils. These X-rays are millions of times stronger than those at your local hospital – they are created by a type of particle accelerator called a synchrotron.
They found evidence of distinct structures that could be compared to what we’d today consider to be teeth. This suggests that teeth evolved not long after the first jaws appeared.
Science by the people
Citizen science is on the rise. More and more, amateurs, or ‘citizen scientists’, are being given opportunities to help scientists.
Citizen science projects typically use volunteers to assist in the collection or analysis of data. This data can then be used by professional scientists in their research. Examples include bird watching groups doing bird surveys, amateur astronomers sharing their observations and games such as Foldit.
Becoming a professional scientist is no easy feat. It generally requires years of study, research and often multiple university degrees to be considered an expert. One of the key features of science is its rigorous standards for experiments and obtaining data. This is to make sure the data is accurate so better conclusions can be drawn. There have been concerns about the quality of the data gathered by citizen scientists, as they may lack skills professional scientists have gained through years of training.
Yet in some projects this is not the case. A comparison between bird surveys conducted by amateurs and professionals in the Mount Lofty Ranges in South Australia showed that, overall, the results were quite similar.
There were some discrepancies. The citizen scientists reported seeing a few species proportionately more often and other species relatively less often than the professionals. Professionals are thought to be more systematic in their method and better able to identify species by their calls.
So it seems that the professionals do have some advantages. But rather than rejecting the value of citizen science, the researchers who did the comparison suggest that professional data can be used to calibrate the data collected by citizen scientists.
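To get a feel for what such a calibration might look like, here is a minimal sketch. The method (a simple straight-line fit) and all of the bird counts are invented for illustration; they are not from the Mount Lofty study, whose actual calibration approach may differ.

```python
# A minimal sketch of calibrating citizen-science counts against
# professional counts, assuming both groups surveyed some shared sites.
# All numbers below are hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Bird counts at sites surveyed by both groups (hypothetical data):
# amateurs here tend to report slightly higher counts.
amateur_counts = [12, 30, 45, 8, 22]
professional_counts = [10, 28, 40, 9, 20]

a, b = fit_line(amateur_counts, professional_counts)

def calibrate(amateur_count):
    """Estimate what a professional would likely have recorded."""
    return a * amateur_count + b

# Apply the correction to a site only amateurs visited.
print(round(calibrate(35), 1))
```

The idea is simply that sites surveyed by both groups reveal any systematic bias in the amateur data, and the fitted correction can then be applied to amateur-only sites.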
Some fields of science require highly specialised knowledge and expensive equipment, meaning they will be closed off to citizen scientists. But other fields of science, particularly the environmental sciences, could greatly benefit from the participation of citizens. Citizen science lowers the costs of doing research and engages the community in the scientific process.
Feral felines avoid top dogs
Dogs chase cats – it’s one of the facts of life. However, what seems to be true in the backyard might not be the case in the Australian bush.
Dingoes are wild dogs that have lived in Australia for several thousand years, while cats arrived much more recently – only a few hundred years ago. Since their introduction, feral cats have caused significant damage to the Australian environment. Because cats are such recent arrivals, many native species haven’t developed strategies to avoid them, making those species easy prey.
Both feral cats and dingoes are predators, however, dingoes are larger than cats. This means the dingoes can eat prey, such as wallabies, that are too big for cats. Because of this, dingoes are considered to be apex predators, while cats are not. Apex predators sit at the top of the food chain.
Apex predators can control the numbers not just of prey species, but of other predators too. This can be because they kill or eat other predators, or because they outcompete lesser predators for prey. In Australia, there is evidence that a higher number of dingoes leads to a reduction in the number of feral cats.
Researchers recently set up cameras to monitor feral cat and dingo behaviour in the Taunton National Park in Queensland. They found that cats and dingoes were often active in the same areas. In other words, it didn’t appear as if the cats were put off by the presence of dingoes in an area.
One theory is that the presence of dense vegetation means that the smaller cats can hide from the dingoes. This is supported by another study from Cape York in northern Queensland which found that more complex habitats allowed feral cats to coexist with dingoes.
While dingoes may limit feral cat numbers, these studies show that the relationship between the two predators is more complex than the dogs simply chasing away the cats.
Something to Bragg about
If Lawrence Bragg were still alive, he really could be boastful. This November marks the centenary of X-ray crystallography, a powerful technique Bragg helped to develop for studying the structure of chemicals.
Born in Adelaide, Lawrence remains the youngest person ever to win a Nobel Prize, the most prestigious award in science. Lawrence was just 25 years old when he shared the 1915 Nobel Prize in Physics with his father William.
Bragg’s work is still important today because it forms the basis of X-ray crystallography, a method for determining how atoms are arranged in a crystal. This kind of structural information is important for developing new technologies such as materials and medicines.
When a beam of X-rays strikes a crystal, the atoms in the crystal cause the X-rays to bend and spread out, forming a distinctive pattern. This phenomenon is known as diffraction and occurs when electromagnetic waves, such as visible light or X-rays, encounter an obstacle or narrow opening.
The resulting diffraction pattern for each substance is unique and depends on the spacing and arrangement of atoms within the crystal. Using Bragg’s famous law, it is possible to interpret the diffraction pattern and determine how the atoms are positioned within the crystal.
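For readers curious about the maths, Bragg’s law relates the X-ray wavelength, the spacing between planes of atoms and the angle at which the diffracted beams reinforce each other. The worked numbers below are illustrative, using a common laboratory X-ray source:

```latex
% Bragg's law: bright diffraction spots appear when X-rays reflecting
% off parallel planes of atoms (spacing d) interfere constructively.
n \lambda = 2 d \sin\theta
% n is a whole number, lambda the X-ray wavelength, theta the angle.
% Rearranged to find the atomic spacing from a measured angle:
d = \frac{n \lambda}{2 \sin\theta}
% Illustration: for copper K-alpha X-rays (lambda = 0.154 nm) and a
% first-order (n = 1) reflection at theta = 15.9 degrees,
% d = 0.154 / (2 x 0.274), which is roughly 0.28 nm -- about the
% spacing between planes of atoms in a crystal of table salt.
```

So by measuring the angles of the bright spots in a diffraction pattern, crystallographers can work backwards to the distances between atoms in the crystal.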
Table salt and diamonds were some of the first crystals to be studied by the Braggs using X-ray crystallography. Since then, the technique has been used to study the structure of countless substances. Perhaps most famously, it was used to determine the double helix structure of DNA, the chemical instructions on which life is based.
Understanding the chemical structure of substances helps scientists understand their properties, so it can be an important step in solving difficult problems. CSIRO scientists are using X-ray crystallography to study the structure of a protein found in the brain called amyloid beta. This protein is associated with Alzheimer’s disease, which results in memory loss. Understanding the structure of amyloid beta may help find a treatment for the disease.
So Lawrence Bragg is one person who might deserve to be just a little bit immodest. The method he helped to develop one hundred years ago remains one of the most important and powerful tools for investigating chemical structure today.
The case of the phantom island
A research team on board Australia’s Marine National Facility research vessel, Southern Surveyor, have made an unusual discovery: an island that isn’t there.
We rely on maps all the time. Street directories and websites such as Google Maps help us to find our way around unfamiliar places, and help prevent us getting lost. In some situations, accurate maps are more than just a convenience – they are a necessity. For example, nautical maps used by ships show many hazards such as shallow water, sand bars and coral reefs. Knowing where these are allows a ship’s crew to navigate a safe path to their destination. In case of an emergency, using a map to locate the nearest dry land may be of crucial importance.
‘Sandy Island’ appeared on many maps, including Google Maps and some meteorological maps. It seemed to be a rather large island (28 kilometres long) located in French territorial waters between Queensland and the island of New Caledonia – two places we’re sure are real!
The group of scientists from the University of Sydney, who were leading a research team on board Southern Surveyor, noticed the island wasn’t on the ship’s nautical charts, or on French government maps. The scientists were taking seafloor samples to try to work out the age and history of the seafloor across the Coral Sea.
It was late at night, around 10.30 pm when they arrived where Sandy Island was supposed to be. The captain went slowly for fear of running aground, but lo and behold there was 1300 metres of water below them, and no island!
How this phantom island found its way onto so many maps is not yet clear. What is clear is that a few places are going to have to update their maps, especially as accurate maps are important for nautical safety.
The mystery of Sandy Island shows how important it is to check sources of information. It is also a reminder that direct observation is a good way to confirm what we know. We know so little about our oceans; only 12 per cent of Australia’s territorial waters have been mapped.
Practice makes perfect?
When you learn something new, be it a musical instrument or how to ride a bike, you usually need to practise. Practising helps us get better at skills and learn things we didn’t know before. It makes sense that to get better at something we should practise for longer – right?
It might not be so simple. When you learn something new, it’s not just what you do in training that matters, it’s also what happens afterwards. Your brain needs time to process the new information, so that it stays in your memory for later. This is called consolidation, and the best time for it to happen is while you sleep.
Researchers from the University of New South Wales wanted to know more about ‘wakeful consolidation’; that is, consolidation that happens while you’re still awake. They set up an experiment where three groups of participants tracked dots on a computer screen. One group practised for one hour straight and another for two hours straight. The third group practised for two hours in total, but they had a break after one hour. During the break participants weren’t allowed to sleep, but could do anything else, like go for a walk or read a book.
By the second day, the researchers found that the group that had the break were better at the activity than those who practised for two hours without one. What’s more, of the groups that didn’t have a break, the group that practised for only one hour did better than those that practised for two.
The researchers suggest that this is because the group that practised longer didn’t have time to consolidate what they learned in the first hour. The group that did have a break, and the group that practised for a shorter time, did have time to consolidate, so they performed better the next day.
This research has important implications for teaching and learning. It provides evidence that, in order to learn most effectively, students shouldn’t slug it out too long on the one topic. Taking breaks means you might be able to learn more, for less.
From the Scientific American blog, by Steven Ross Pomeroy, 22 August 2012. Pomeroy is assistant editor for Real Clear Science, a science news aggregator, and regularly contributes to RCS’ Newton Blog. As a writer, he believes that his greatest assets are his insatiable curiosity and his ceaseless love for learning.
In the wake of the recent recession, we have been consistently apprised of the pressing need to revitalize funding and education in STEM fields — science, technology, engineering, and math. Doing this, we are told, will spur innovation and put our country back on the road to prosperity.
Renewing our focus on STEM is an unobjectionably worthwhile endeavor. Science and technology are the primary drivers of our world economy, and the United States is in the lead.
But there is a growing group of advocates who believe that STEM is missing a key component – one that is equally deserving of renewed attention, enthusiasm and funding. That component is the Arts. If these advocates have their way, STEM would become STEAM.
Their proposition actually makes a lot of sense, and not just because the new acronym is easy on the ears. Though many see art and science as somewhat at odds, the fact is that they have long existed and developed collaboratively. This synergy was embodied in great thinkers like the legendary Leonardo Da Vinci and the renowned Chinese polymath Su Song. One of Carl Jung’s mythological archetypes was the artist-scientist, which represents builders, inventors, and dreamers. Nobel laureates in the sciences are seventeen times likelier than the average scientist to be a painter, twelve times as likely to be a poet, and four times as likely to be a musician.
Camouflage for soldiers in the United States armed forces was invented by American painter Abbott Thayer. Earl Bakken based his pacemaker on a musical metronome. Japanese origami inspired medical stents and improvements to vehicle airbag technology. Steve Jobs described himself and his colleagues at Apple as artists.
At TED 2002, Mae Jemison, a doctor, dancer, and the first African American woman in space, said, “The difference between science and the arts is not that they are different sides of the same coin… or even different parts of the same continuum, but rather, they are manifestations of the same thing. The arts and sciences are avatars of human creativity.”
By teaching the arts, we can have our cake and eat it, too. In 2008, the DANA Arts and Cognition Consortium, a philanthropic organization that supports brain research, assembled scientists from seven different universities to study whether the arts affect other areas of learning. Several studies from the report correlated training in the arts to improvements in math and reading scores, while others showed that arts boost attention, cognition, working memory, and reading fluency.
Dr. Jerome Kagan, an emeritus professor at Harvard University who was listed in one review as the 22nd most eminent psychologist of the 20th century, says that the arts contribute amazingly well to learning because they regularly combine the three major tools that the mind uses to acquire, store, and communicate knowledge: motor skills, perceptual representation, and language.
“Art and music require the use of both schematic and procedural knowledge and, therefore, amplify a child’s understanding of self and the world,” Kagan said at the Johns Hopkins Learning, Arts, and the Brain Summit in 2009.
With this realization in mind, educators across the nation are experimenting with merging art and science lessons. At the Wolf Trap Institute in Virginia, “teaching artists” are combining physical dance with subjects like math and geometry. In Rhode Island, MIT researcher Jie Qi introduced students to paper-based electronics as part of her master’s thesis exploring the use of technology in expressive art. Both programs excited students about science while concurrently fueling their imaginations. A potent blend of science and imagination sounds like the perfect concoction to get our country back on track.
Celebrated physicist Richard Feynman once said that scientific creativity is imagination in a straitjacket. Perhaps the arts can loosen that restraint, to the benefit of all.