Monday, May 13, 2013

Climate Change and Weather Predictions: The differences in regards to Chaos Theory

A paper I wrote for my meteorology class on 5/11/2013:



            Back in 1963, meteorologist Edward Lorenz was studying patterns of rising warm air in the atmosphere. His studies led to a model central to chaos theory, the Lorenz attractor (an attractor, in math, being a region of a system’s state space: “the smallest unit which cannot be itself decomposed into two or more attractors with distinct basins of attraction” (Weisstein, 2013)). This “strange attractor” came from studies showing how even a slight change in input could make the outcome completely unexpected. Take a look at the figure below, where the blue and magenta trajectories travel together until one takes an unexpected diversion based on a very slight change in input:
Illustration of deterministic chaos. Imagine two systems started at slightly different initial conditions. They will follow each other closely for some time, but within a short time our ability to predict them breaks down (front and side view of the Lorenz attractor).
(Axelsen, 2010)
 

What the Lorenz attractor models is the chaos of weather prediction (Axelsen, 2010).
            What Jacob Axelsen points out in his discussion of chaos theory and weather is that climate change predictions are not ruled by the same chaos (2010). Climate change can be predicted; it is the daily weather within that climate that remains hard to pin down. This is because the inputs for weather are sensitive and easily perturbed: air masses and water systems that respond to tiny changes. Climate prediction rests on much more stable inputs, such as incoming radiation and the well-understood behavior of greenhouse gas molecules (Axelsen, 2010). Weather, essentially, is not going to be easy to forecast as climate change takes place, but the climate inputs themselves can be predicted.
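Lorenz’s equations are simple enough to integrate on a home computer, which makes the sensitivity described above concrete. Below is a minimal Python sketch (using Lorenz’s classic parameter values sigma=10, rho=28, beta=8/3; the step size and the tiny perturbation are illustrative choices of mine, not from any of the sources cited here) that follows two trajectories started a hair apart and watches their separation blow up:

```python
# Two Lorenz-system trajectories started almost identically: a minimal sketch
# of sensitive dependence on initial conditions (deterministic chaos).

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations with the classic parameters."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step of size dt."""
    k1 = lorenz(state)
    k2 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = lorenz(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def separation(p, q):
    """Euclidean distance between two states."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# Identical systems except for a one-part-in-a-hundred-million nudge in x.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)
dt, steps = 0.01, 3000  # integrate out to t = 30

max_sep = 0.0
for _ in range(steps):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    max_sep = max(max_sep, separation(a, b))

# The initially invisible difference grows by many orders of magnitude.
print(max_sep)
```

The trajectories track each other closely for a while, then diverge until their separation is as large as the attractor itself, which is exactly why a forecast's useful horizon is limited no matter how good the measurements are.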
            Much of the more consistent study of Earth’s past climate has come from glacial, Arctic and Antarctic ice cores. Ice cores provide two major components of climate change predictability: temperature and carbon dioxide (CO2) levels. Past atmospheric CO2 can be measured by studying air bubbles trapped in the cores, while temperature is determined from the way the ice formed (Ferguson, 2013). Ice core records reach back as far as 80,000 years (Ferguson, 2013), so scientists have a good representation of past climate for comparing the correlation between temperature and atmospheric CO2 levels.
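The comparison scientists make between the two core measurements is, at its simplest, a correlation. The sketch below shows the arithmetic with a plain Pearson coefficient; the CO2 and temperature numbers in it are made-up placeholders purely for illustration, not real ice-core data:

```python
# Illustrative only: computing a Pearson correlation between paired
# CO2 and temperature samples, the kind of comparison made with ice cores.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired samples: CO2 (ppm) and temperature anomaly (deg C).
co2 = [190, 200, 230, 260, 280, 300]
temp = [-8.0, -7.5, -5.0, -3.0, -1.0, 0.0]
print(pearson_r(co2, temp))
```

A coefficient near +1 means the two records rise and fall together, which is the pattern the real core data shows.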
            CO2 is one of several greenhouse gases in Earth’s atmosphere. It is not the most plentiful; water vapor is far more abundant and actually absorbs more radiation (Ahrens, 2012). However, the concern with CO2 is the amount generated by anthropogenic means through the relatively recent practice of burning fossil fuels for power and technology; CO2 is a byproduct of burned fossil fuels.
            More recent data has been collected directly from the atmosphere over the past few decades at the NOAA Earth System Research Laboratory at Mauna Loa, Hawaii. The data collected there is represented below:
 
“The carbon dioxide data (red curve), measured as the mole fraction in dry air, on Mauna Loa constitute the longest record of direct measurements of CO2 in the atmosphere. They were started by C. David Keeling of the Scripps Institution of Oceanography in March of 1958 at a facility of the National Oceanic and Atmospheric Administration [Keeling, 1976]. NOAA started its own CO2 measurements in May of 1974, and they have run in parallel with those made by Scripps since then [Thoning, 1989]. The black curve represents the seasonally corrected data.”


            Based on this data, and the correlation of temperature and CO2 levels in ice cores, the prediction that Earth’s surface temperature will increase makes sense. The warming of Earth makes some climatic changes easy to predict: the mountain ranges of the Northwest U.S. will experience more rainfall than snowfall, affecting water supplies; tropical inland regions will get drier; and heat off the West African coast, along with the warming of the equatorial Atlantic, will add fuel to hurricane development, making storms stronger and longer lasting. These general climatic predictions can all be made (Ahrens, 2012).
            There is one major aspect of weather that could make climate change predictions go either way. Cloud formation can be expected to increase with climate change. What can’t be predicted is whether those clouds will increase or decrease Earth’s surface temperature. Scientists are currently building models to figure this out. Clouds are large condensed bodies of water, which could have a greenhouse effect, trapping outgoing heat. They could also have an albedo effect, reflecting incoming solar radiation back to space before it reaches the surface. Either way, clouds and their likely increase in formation are one of the major sticking points in climate change predictability (Ahrens, 2012).
            In the end, two things are certain: CO2 is increasing due to human activity, and CO2 is a greenhouse gas, so increasing it strengthens the greenhouse effect. Questions remain, though, in trying to predict exactly how climate change and warming will unfold as a result. Will increased CO2 create a bloom of plankton that actually draws CO2 levels back down? Will the increased cloud formation help cool the surface or add to the warming? These may be the questions that chaos theory makes hard to answer when it comes to climate change.
References:
Ahrens, C. D. (2012). Essentials of meteorology: An invitation to the atmosphere (6th ed.). Belmont: Brooks/Cole Pub Co.
Axelsen, J. (2010, July 9). Chaos theory and global warming: can climate be predicted. Retrieved from http://www.skepticalscience.com/chaos-theory-global-warming-can-climate-be-predicted-intermediate.htm            
Ferguson, W. (2013, March 1). Ice core data help solve a global warming mystery. Retrieved from http://www.scientificamerican.com/article.cfm?id=ice-core-data-help-solve
Weisstein, E. W. (2013). Attractor. From MathWorld. Retrieved from http://mathworld.wolfram.com/Attractor.html

Saturday, September 8, 2012

The Time Machine Design Contest


On the Saturday night before the first day of fifth grade for Lily (daughter, 10 years old), she and I decided to have a contest. We would each design a time machine and a winner would be declared. Since dad was the only one around to judge, you can guess who won (he's partial to anything Lily designs, and to be fair, she's quite an artist). Below are our designs, done completely independently with no input from one another or anyone else. Feel free to voice your own favorite in the comments :)

Mom's (my) Design...Pretty self-explanatory.



Lily's design requires more description:


Paraphrasing Lily's verbal description: This time travel device is portable and is designed to be worn on the wrist. The map is a touch screen, used to select where you would like to go. There is a B.C./A.D. selector to go along with the year/month/day selector. There is also a "Change History Mode" with an on/off indicator, used when you would like to alter the past, or, as Lily put it: "If you want to go back in time and warn yourself about something, like, 'hey! don't do that!'" The back of the device offers a way to take others with you; finger pads allow up to two friends to go along. You will also notice the color selection she offers (located at the top of the page above the design), and, she points out, when not in use, the device can be worn on a belt. Lily's is similar to mine in that it has a "wormhole finder," but hers is located in the device much like a modern smartphone.

:) :) :) :) :)

Lily and I had quite a bit of fun designing and explaining our time machines, and I am pleased to share them with you. Perhaps, when we have a finished product, we will also share video of our time-traveling adventures! 



Saturday, July 7, 2012

Nebulas, a Guest Post

This is another guest post by my daughter, Lily, now ten years old. Her passion for space is inspiring! :)


Picture credit: 
http://ircamera.as.arizona.edu/NatSci102/NatSci102/lectures/spectroscopy.htm 

Nebulas, 
By Lily Satterlee

Do you wonder where stars come from? The answer is nebulas. They’re large clouds of dust and gas, mostly hydrogen and helium. Behind the clouds are newborn stars.


These stars are made when movement in the nebula causes clumps of elements and debris to form. These clumps increase the gravity around them as their mass grows.
Evolution of a star; from nebula birth to solar system.
Picture credit:
http://www.daviddarling.info/encyclopedia/N/nebhypoth.html
The gravity attracts more elements and debris, and the clumps increase in mass and gravity even more. The increase in mass and gravity also causes an increase in density and energy to make… a star.


The most famous nebulas are the Orion Nebula and the Eagle Nebula, which has the Pillars of Creation.
Orion Nebula picture by the Hubble Telescope


The Pillars of Creation in the Eagle Nebula, Hubble Photo

Nebulas are also where stars die. When a star dies, the nebula reacts.
As you can see, our galaxy would not be what it is without the nebula.

Butterfly Nebula, Hubble Photo.

Thursday, May 17, 2012

Urban Agriculture


photo credit: http://www.visualizenashua.com/idea/urbancommunity-gardens/




            “Food is a form of energy…but it’s also a form of power. And when we encourage people to grow some of their own food, we are encouraging them to take power into their own hands: power over their diet, power over their health and power over their pocketbooks…” (Doiron)
            Supermarkets bring in food from all over the world to sell to U.S. citizens. The choices are abundant. Despite this variety of choice and what is usually a picturesque product in the produce aisles, citizens are losing confidence in the quality of product they are consuming (Doiron). The disconnect between the producer and the consumer that exists in the food industry has spurred a rise in Community Supported Agriculture (CSA) systems in the U.S. CSA systems involve an agreement between the grower and the buyer; locally grown food is delivered for a subscription price weekly (Patel). CSA systems are one type of growing trend in the world today of urban agriculture which can simply be defined as “the growing, processing, and distribution of food and other products through intensive plant cultivation and animal husbandry in and around cities.” (Brown and Carter 3). Urban agriculture provides more than additional food for the market. For individuals and communities, there are social, ecological and economic benefits.
            Raj Patel, in the eighth chapter of his book Stuffed and Starved: The Hidden Battle for the World Food System, describes the rise of CSA systems and farmers’ markets in the United States, with over 1,000 CSA systems by the end of the 20th century (Lovell 2505) and over 7,175 farmers’ markets in 2011 (Union of Concerned Scientists 1-4). The history of urban agriculture in the United States dates back to the 1890s, when lots in cities like Detroit, New York and Philadelphia were cultivated to provide food for residents, and to the 1930s during the Great Depression, when city farms offered not only a form of subsistence but also employment (Lovell 2505). One of the most iconic developments in U.S. urban agriculture was the Victory Garden, promoted by the government during World War II in response to food rationing (Lovell 2505). A staggering 40% of all produce consumed in the U.S. during this era was grown in Victory Gardens (Doiron).
            Throughout the world there is historic precedent for the benefits of urban agriculture. In the Middle Ages, kitchen gardens were prevalent, providing for the residents of a household. In the 16th century, Machu Picchu was a city designed to support an agricultural system within its critical infrastructure (Smit). Shanghai, China is thought to be the birthplace of urban agriculture, and today 60% of the vegetables and 90% of the eggs consumed by residents are products of its urban agricultural system (Lovell 2504).
            Today the U.S. has several thriving community supported agriculture systems. In Seattle, the Department of Neighborhoods has run the P-Patch program since 1973, now encompassing over 23 acres of gardens; three of those are market gardens that “offer low income people supplemental income and opportunities to connect with the larger community” (Department of Neighborhoods). Also in Seattle, Seattle Central Community College runs an education program called the Sustainable Agriculture Education (SAgE) Initiative, which inspired a student to start the urban farming collective Alleycat Acres (Cimons). New York City’s Green Thumb program has more than 600 gardens and produces for over 20,000 urban residents (Lovell 2505). In chapter eight of his book, Patel describes the People’s Grocery of West Oakland, a cooperative market that provides everything from the soil to the shelves for the residents of this “food desert.”
Photo Credit:  http://www.diabetesfamilies.com/blog/food-deserts
            There are benefits to the individual or household that grows its own food, whether on a private garden plot or within a community garden. As Roger Doiron indicated in his talk about garden plots, individuals can eat healthier and save money. Urban gardening also offers physical and psychological benefits: the physical labor provides recreation and relaxation in a more natural environment than the city usually offers, and psychologically, the act of working to produce from and connect with the environment gives a sense of accomplishment and satisfaction (Bukvic 99). In chapter 10 of his book, Patel talks about food sovereignty, the aim of putting the power of food production back in farmers’ hands. Achieving this, Patel points out, requires gaining a taste for local, seasonal food, training one’s palate to enjoy fresh instead of processed. He notes that food that has not been prepped, packaged and shipped tastes better to the individual almost 100% of the time.
            Urban gardens offer social benefits as well, chief among them social cohesion. In the U.S. and Europe particularly, it is documented that urban agriculture is not an activity limited to the lower socioeconomic population; higher-income participants, immigrants and the elderly all take part in community supported agriculture systems (Bukvic 96). The community urban garden brings together neighbors who otherwise would not interact. Community gardens also offer extracurricular opportunities for youth. One of the positive aspects of Seattle’s P-Patch program is its 30 gardens dedicated to youth participants, offering an educational and productive experience for this young demographic (Seattle Department of Neighborhoods). This interaction can help youth learn about horticulture and the environment, as well as give them cooperation and job skills important to growth and development (Hung). In an informal survey conducted on Cafemom, a website designed for mothers, one third of participants noted that their local school district operated some type of community garden for the kids to run. Of those who commented, all thought their own child could benefit from participating (Satterlee).
Raj Patel also describes many of the social benefits of Community Supported Agriculture systems. When he discusses the People’s Grocery in chapter eight, he details how this system runs its own garden and marketplace. Not only does the People’s Grocery bring fresh, locally grown food to the food desert of West Oakland, it also runs an educational program, offering nutritional classes at the local YMCA. These educational benefits are not limited to groups like the People’s Grocery. Even without a formal organization, a community garden offers environmental education through “a connection to an agroecological system” (Lovell 2502).
Photo Credit: Jennifer Esperanza/Flickr
            In West Oakland, the People’s Grocery offers a positive outlet in an obviously redlined district. This community has one supermarket and 36 convenience and liquor stores, according to Patel in chapter eight. An organization like the People’s Grocery gives residents a chance to produce together and to protest the limited food choices the industry has relegated them to. Patel notes that the People’s Grocery is “aware that theirs is a political project.” This is a multifunctional form of activism; residents can plant, grow, harvest and sell everything locally and make a dent in the market of liquor and convenience that has been imposed on them. Patel admits that this activism is not likely to change West Oakland easily, and that when it does start to have a larger impact, there will be a bigger push-back from the larger industry. That fight will only be helped by the interaction and growth that West Oakland’s citizens have experienced through the People’s Grocery project.
            Food security is a main motivator behind the push to implement more urban gardening programs. While the thinking behind food security was for the longest time merely sustainable production, researchers have come to include distribution, consumption, and the disposal and recycling of waste (Koc et al. 32). This security matters in a wavering global economy. Besides restoring the consumer confidence lacking in major production farms, the ability to provide local food can stimulate the local economy through market jobs and co-ops. In addition to the People’s Grocery cooperative, Patel describes several other food co-ops from the San Francisco Bay Area in chapter eight of his book. The Arizmendi Bakery and the Cheeseboard Collective have worker co-op models similar to the People’s Grocery. These community-run cooperatives provide living wages across the board for local production and marketing. Entrepreneurship is another positive economic aspect of urban gardening. Growing a plot into a product for farmers’ markets can expand into products for restaurants and catering services; an example of successful urban gardening entrepreneurship is Nuestras Raices, Inc., a community gardening cooperative that started out providing for eight families in the poorest part of Holyoke, Massachusetts, and grew to support 100 families plus local food businesses such as those described above (Brown and Carter 9).
            There are many environmental benefits for cities that embrace urban agriculture. Patel describes a prototype in Austria where 75 percent less waste and 63 percent less air pollution were generated. Due to local availability, this prototype used 72 percent less energy and, with its waste-water benefits, 48 percent less water than its rural counterparts (Patel). Less waste in the city is accounted for by gardens using organic waste that would otherwise end up in a landfill, and by easing the burden on waste-water treatment plants through the use of city gray water and storm water (Bukvic 101). Another interesting environmental benefit is reduced meat consumption. When shopping in local markets, especially farmers’ markets, shoppers have been shown to buy 75% more fruits and vegetables than at conventional supermarkets (Union of Concerned Scientists 1). This is significant when the environmental impact of producing meat is considered: an estimated one-fifth of the world’s land is used for raising livestock, about twice that used for growing crops. In addition to the land used for livestock and for the grain that feeds them, meat requires much more energy to transport due to the need for refrigeration (Koc et al. 147-148).
            The environmental and personal health impacts aren’t completely devoid of risk. One concern with growing crops in an urban environment is soil contamination; heavy metals and pesticide residues are a risk, as is polluted groundwater (Bukvic 101). Notably, however, polluted groundwater and contaminated water sources are not necessarily more likely in an urban environment than in a rural one (Lovell 2513). Without attention, resident health can be affected, though many organized CSA systems have a process in place to test for contamination before planting (Department of Neighborhoods).

            There are other challenges to urban gardening as well. One, which Patel briefly describes in the last chapter of his book through the example of the bulldozed South Central Farm in Los Angeles, CA, is land security. Many urban gardens were or are the result of a fed-up neighborhood illegally taking over empty lots and trying to make them “green” (Lovell 2511). This illegal use of lots, empty or not, invites the fate of the fourteen-year-old South Central Farm. City and community departments such as Seattle’s P-Patch program offer some security through ordinances already in place when land is acquired for use (Department of Neighborhoods). Another challenge, especially for those with limited income and in poorer communities, is start-up costs: tools, labor, soil or mulch, processing and time can all be difficult to obtain as a low-income urban citizen (Brown and Carter 15). One of the problems encountered in the more populated and developed areas of a city is sunlight availability (Lovell 2512), which has been partly answered by the rooftop gardens that have become prevalent in Chicago. Seasonal and climatic limits are also a challenge; without the produce-preserving knowledge that is more prevalent in rural areas, it can be difficult to keep production going throughout the year (Brown and Carter 16).
            These challenges sound quite tame in comparison to the vast benefits of Community Supported Agriculture systems and urban gardening. Today’s average urban and peri-urban household is busy; the average household typically spends a total of just 31 minutes preparing, eating and cleaning up after a meal (Doiron). This is a social phenomenon that makes the future of urban gardening seem bleak, and yet it is growing more and more in popularity. It is also becoming a necessity; as of 2007, the world population went from primarily rural to primarily urban (Doiron). From the individual benefits to the environmental and social benefits, as the world population grows and the global economy remains shaky, this process shows great promise for achieving food security.


Works Cited:
Brown, Katherine H., and Anne Carter. “Urban Agriculture and Community Food Security in the United States: Farming from the City Center to the Urban Fringe.” The Community Food Security Coalition’s North American Urban Agriculture Committee. October 2003. Web. 18 March 2012.
Bukvic, Karmen. “The importance of Ljubljana’s plot gardening for individuals, the environment and the city.” Urbani izziv 21.1 (2010). 94-105. Proquest. Web. 08 March 2012.
Cimons, Marlene. "Promoting Sustainable Agriculture." U.S.News & World Report 2011: 1. ProQuest Nursing & Allied Health Source; ProQuest Research Library. Web. 19 Mar. 2012.
Doiron, Roger. “Roger Doiron: My subversive (garden) Plot.”  TED.com. TED Talks, September 2011. Web. 19 Mar 2012
Hung, Yvonne. “East New York Farms: Youth Participation in Community Development and Urban Agriculture.” Children, Youth and Environments 14.1 (2004). 20-31. web. 12 March 2012.
Koc, Mustafa. For Hunger-Proof Cities: Sustainable Urban Food Systems. Ottawa, Canada: International Development Research Center, 1999. Print.
Lovell, Sarah Taylor. “Multifunctional Urban Agriculture for Sustainable Land Use Planning in the United States.” Sustainability 2.1 (2010). 2499-2522. Open Access web. 03 March 2012.
Patel, Raj. Stuffed and Starved: The Hidden Battle For the World Food System. Brooklyn, N.Y.: Melville House Publishing, 2007. Electronic book.
Satterlee, Dorian. “Survey for Moms with Kids: Does your school have an agriculture/horticulture/gardening club?” Cafemom.com. March 2012.
Seattle Department of Neighborhoods. “P-Patch Community Gardening Program Factsheet.” (2011). Web. 15 March 2012.
Smit, Jac. “Community-Based Urban Agriculture as History and Future.” City Farmer, Canada’s office of Urban Agriculture. (2002). Web. 18 March 2012.
Union of Concerned Scientists. “Good Food is Right around the Corner.” Earthwise 14.2 (2012). 1-4. Print.

Sunday, April 1, 2012

Nanotechnology and Food

             The word technology often makes the public nervous when it is coupled with the word food. It is understandable that when it comes to what is ingested and used to sustain the human body, the consensus is that Mother Nature knows best. It might seem, then, that the idea of nanotechnology being incorporated with food would not be welcomed with open arms. Yet, if a dairy farmer were to gaze at his product at the nano-level, it might intrigue him to see the natural occurrence of nanoparticles in the casein micelles that inspire such technology (Institute of Medicine).

            Nanotechnology is by definition broad; it is the conduct of science, engineering and technology at the nanoscale of 1 to 100 nanometers, according to the National Nanotechnology Initiative (NNI) at the Nano.gov website. This scale is not new to human digestion; as indicated in the introduction, most of the processes in the body take place at the nano-level (Institute of Medicine). What makes this technology different and unique is that at the nanometer range, materials take on new properties and novel functions (Poole and Owens 4). Due to its interdisciplinary possibilities, funding and investment for research are quite high and could drive innovation in food processing and products (Neethirajan 39). The benefits of using nanotechnology in food production would include nutrition enhancement and safety regulation enhancement, yet there remains a knowledge gap about risks that needs to be evaluated by national, global and private organizations.
Image credit: http://www.sustainpack.com/nanotechnology.html
            Nanotechnology currently offers many benefits outside the food industry. Manufactured nanomaterials have actually existed for thousands of years, evidenced in iridescent goblets from the fourth century A.D. and in stained glass used for centuries afterward (Poole and Owens 1). Today nanotechnology is found in everyday sports items like tennis rackets and baseball bats, in rechargeable batteries for automobiles, and in household cleaning products (Nano.gov). Besides the funding advantage that such an interdisciplinary science enjoys, advances in one field can also cross over into others.

Raj Patel, in the introduction to his book Stuffed and Starved: The Hidden Battle for the World Food System, presents a problem in today’s society: while 800 million global citizens go hungry, one billion are at the same time overweight. Both groups are malnourished. Accompanying this problem of inadequate food is the issue of sustainable food. Currently, production has kept up with exponential human growth, and the hunger of 800 million is likely due to a corrupt food market system and various global conflicts, not overpopulation (Patel). Yet evidence of the inability to sustain our current production starts with the beginning of agriculture itself. While the human population spent most of its existence on Earth in a steady state with little growth, the introduction of agriculture spurred exponential population growth (Sagan 16). As human quality of life now depends on the continuation of agriculture, it is important to ensure its sustainability, not only for the population but also for a changing climate.
Wireless nanosensor networks. Image credit:
http://www.nano.org.uk/forum/viewtopic.php?p=8875


One field of nanotechnology application is in food quality monitoring. Nanosensors offer the ability to track contaminants from the farm to the table. Beginning in the fields, nanosensors, through remote sensing devices that may be applied to crops, can monitor pest infestation, soil conditions and growth, helping to minimize pesticide use and utilize the full potential of cropland (Meetoo 392). Currently being proposed for monitoring grain bins are nanosensors that can detect insects or fungus through thousands of nanoparticles distributed on single, lightweight sensors. Other sensors are being designed to detect E. coli and salmonella. These bacteria sensors, useful in the bulk and limited quantity transportation of foods, include Nano Bioluminescent spray being developed by Agromicron Ltd. The spray contains nanoparticles that react with bacteria and produce a visual glow to indicate infestation (Neethirajan 40). The application of sensors in the food system would be beneficial in assuring food safety and spoilage prevention. 
Another area of application in development is that of food packaging. This sector of the food industry seems to be advancing quickly, likely due to its indirect contact with food. It includes the use of nanosensors, but also takes advantage of the lightweight characteristic of nanotechnology. Silicon-based nanoparticles offer a lightweight, more heat-resistant and stronger covering for foods that require vacuum covering to stay fresh (Meetoo 394). Metal nanoparticles can be used for antimicrobial packaging, preventing bacterial and fungal growth on food and resisting dirt. Even edible food nanoparticles are being researched for such applications (Neethirajan 41).
Some of the most advantageous yet intimidating applications are those of encapsulation: nanoparticles containing nutrients, flavor enhancers or texture enhancers, released in a controlled way. This technology has been incorporated by an Australian company, George Weston Foods. Using encapsulation, the company fortifies its bread with fish oil and masks the taste and smell by keeping the oil encapsulated until digestion (Neethirajan 43). This is just one example of using the technology in this way.
Image Credit:
http://www.foodsci.uoguelph.ca/deicon/casein.html
The aforementioned milk protein, the casein micelle, offers a natural model for encapsulation delivery. Water molecules are polar; they have a positive end and a negative end. Micelles are made up of surfactants that have hydrophilic (water-favoring) heads, which are also polar, and hydrophobic (water-repelling) non-polar tails (Poole and Owens 326). These surfactants assemble in nature into a nanoparticle, the micelle, that acts as a biological delivery system. Scientists can take this design and synthesize a vitamin delivery system using these proteins (Neethirajan 43). Though encapsulation appears to present the most risk due to its direct interaction with food products, it also offers the most promise due to the natural blueprints available.
This particular delivery system can be combined with the sensor system to cater to individual needs and tastes. Sensors in nanocapsules can trigger a release of nutrients if they detect a deficiency in the consumer. Microwaves can trigger sensors to release specific flavor or color enhancers. The same system can also be used for texture, adding the desired fatty mouthfeel to low-fat foods (Neethirajan 44).
Technology of any kind used in food processing and production is often viewed skeptically by the general public. This applies not only to Western consumers but also to what Raj Patel refers to in his book as the “global south,” the developing countries that, while among the biggest agricultural producers, are also the hungriest. Patel, in the “For Africa!” section of chapter 6 of his book, points out that countries such as Zambia have rejected food aid from the U.S. due to the inclusion of notorious genetically modified organisms (GMOs) that the U.S. Food and Drug Administration (FDA) allows but that Zambia’s own scientists have been unable to vet for themselves. This aversion is completely understandable, and should be addressed by creating global cooperation when it comes to incorporating nanotechnology into food; research should not rest solely in the hands of profit-seeking corporations that want to use it in their food products.
These concerns are not lost on regulators and scientists in the U.S., or on global organizations. In 2010, the United Nations Food and Agriculture Organization (FAO) and the World Health Organization (WHO) held a joint expert meeting on the potential applications and safety concerns of nanotechnology in the food and agriculture sectors. In 2009, the Institute of Medicine of the National Academies hosted a workshop on the same subject, with members of the FDA, the Environmental Protection Agency (EPA) and the National Science Foundation contributing to the discussion.
Image credit: 2007 How Stuff Works

One of the knowledge gaps in such a young technology, one only recently considered for application in the food industry, is how nanoparticles are distributed once ingested (World Health Organization 29). Most toxicology research so far has been done in the occupational sector, where workers are exposed to nanoparticles for short periods and the likely route of intake is inhalation or absorption through the skin rather than ingestion or oral intake, a point raised at both meetings. In summary, the World Health Organization’s assessment of risks reads as follows:
“Future needs and ways forward to prevent human health risks at international and national levels concern knowledge (scientific and market data), resources (funding for studies, facilities and trained investigators), and processes (international scientific collaboration on characterization, methods design and testing; international, multistakeholder collaboration on guidelines development and harmonization; public engagement and societal governance).”
The knowledge, resource and process needs were laid out in the meeting report, with emphasis on the necessity for collaboration and public engagement.
            At the 2009 Institute of Medicine workshop, many safety concerns were raised. Because nanomaterials fall within the biological size scale, interactions at the biological level are possible, including cellular and even DNA interference. The Institute of Medicine published these discussions in the later chapters of its book, Nanotechnology in Food Products. In chapter three, speaker Fred Degnan, an attorney, noted that even though the FDA encourages early and frequent dialogue with the industry producing nanotechnology, the agency should provide written guidance on what research and development it requires to approve nanomaterials in food products. This would be a vast improvement over the FDA’s handling of GMOs. As Patel describes in chapter six of Stuffed and Starved, the FDA left research into the safety of new GM crops entirely in the hands of the profit-seeking private sector that was engineering them for consumption. Where the FDA could have done much more research of its own, it relied on the word of an industry that had already invested significant time and money in crops that were supposed to make food more nutritious for the world's population. The FDA speaker at the workshop did acknowledge that the burden of proof of safety lies with the manufacturers (Institute of Medicine).
            Though the FDA has not yet publicly produced a set of written guidelines, the European Food Safety Authority (EFSA) has, which is a start toward a more uniform regulatory process for major global food manufacturers. In the abstract of its guidance, the EFSA states that it “has developed a practical approach for assessing potential risks arising from applications of nanoscience and nanotechnologies in the food and feed chain.” The overview lays out a flow chart, beginning with the question of whether the material in question is even an engineered nanomaterial (ENM) and how to proceed from there in assessing the risk (9). This guidance should be taken into consideration and used as a model by other regulatory and health agencies and organizations at the national and global levels. Such consistency would aid collaboration among top-tier scientists, academics and manufacturers, and would give the process transparency for the public.
            Consumer education is the most important aspect of integrating nanotechnology into food production. In an informal survey of fewer than one hundred people, two things stand out about public awareness of the subject: the public understands little about the actual technology, and it does not want manufacturers to be the ones researching its use in food (Satterlee). A more formal survey of a similar nature, conducted by the National Science Foundation, found that not only did half of the participants know “little or nothing” about the technology, only six percent wanted to see it applied to food (Institute of Medicine). Julia Moore, of the Woodrow Wilson International Center for Scholars, spoke at the Nanotechnology in Food Products workshop and, after analyzing the surveys taken on the subject, said: “public opinion is really up for grabs when it comes to nanotechnology. The public really doesn’t know very much to have an opinion.” Scientists and organizations still have the opportunity to shape public opinion, and transparency is going to count for a lot.
            One lesson from the failure of public information on GMOs may be best summed up in I'd Like to Thank the Academy, in chapter six of Patel's book. Patel tells the story of a whistle-blowing scientist, Ignacio Chapela, who published an article in the peer-reviewed journal Nature on the cross-contamination of genetically modified maize in Mexico; the article was mysteriously retracted. To avoid this kind of corruption in the research of other food technologies, it is promising that such a wide collaboration is involved. From academics to global organizations, the importance of transparency cannot be stressed enough if the benefits of nanotechnology are to be safely integrated into the food system. As consistency and guidance are produced, all involved in regulation and research will be aware that it is their responsibility to ensure safe and effective applications of the technology.




Works Cited
European Food Safety Authority. “Guidance on the risk assessment of the application of nanoscience and nanotechnologies in the food and feed chain.” EFSA Journal 9.5 (2011) : 1-36. Web. 24 Feb 2012.
Institute of Medicine of the National Academies. Nanotechnology in Food Products Workshop Summary. Washington, D.C.: National Academies Press, 2009. Electronic book.
Meetoo, Danny D. “Nanotechnology and the food sector: From the farm to the table.” Emirates Journal of Food and Agriculture. 23.5 (2011): 387-403. Web.
Nano.gov. National Nanotechnology Initiative. Web. 22 Feb 2012.
Neethirajan, Suresh. “Nanotechnology for the Food and Bioprocessing Industries.” Food and Bioprocess Technology. 4.1 (2010): 39-47. Web.
Patel, Raj. Stuffed and Starved: The Hidden Battle For the World Food System. Brooklyn, N.Y.: Melville House Publishing, 2007. Electronic book.
Poole, Charles P. and Frank J. Owens. Introduction to Nanotechnology. New Jersey: John Wiley & Sons, Inc, 2003. Print.
Sagan, Carl. Billions & Billions. New York: Random House, 1997. Print.
Satterlee, Dorian. “Survey on Nanotechnology and Food.” SurveyMonkey, Feb. 2012. Web.
World Health Organization. “FAO/WHO Expert meeting on the application of nanotechnologies in the food and agriculture sectors: potential food safety implications Meeting report.” Rome : FAO and WHO, 2010. 1-130. Web.

Saturday, December 31, 2011

Subatomic Particles and The Standard Model

As the name might suggest, subatomic particles are particles that are smaller than an atom... Which is an interesting conundrum for the atom: The Greek root for the word atom, "atomon," means "that which cannot be divided." 
When atoms were first discovered, they were thought to be fundamental: indivisible particles that made up all the elements. But as compounds and solutions were broken down into elements, and those elements were sorted into categories, it became clear that even individual atoms had to possess smaller building blocks.


"...experiments which "looked" into an atom using particle probes indicated that atoms had structure and were not just squishy balls. These experiments helped scientists determine that atoms have a tiny but dense, positive nucleus and a cloud of negative electrons (e-)."(Berkeley Lab, 2011)


Picture credit: wikispace History of the Atom




Soon enough, scientists had determined that an atom is made up of three kinds of subatomic particles: protons and neutrons in the nucleus, and that surrounding cloud of much smaller electrons. But are these three particles fundamental? Well, the electrons are.


So electrons are (to date considered) fundamental subatomic particles. But what, then, are protons and neutrons made of? 
Protons, it turns out, are made of two "up" quarks and one "down" quark, held together with a "cloud of gluons" (R. Nave).
Neutrons are made up of two "down" quarks and one "up" quark. 
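The charge assignments work out neatly: up quarks carry +2/3 of the elementary charge and down quarks carry −1/3, so the quark recipes above reproduce the proton's charge of +1 and the neutron's charge of 0. Here is a minimal sketch of that arithmetic (the function and variable names are just illustrative):

```python
from fractions import Fraction

# Electric charges of the first-generation quarks, in units of the
# elementary charge e (standard values).
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def baryon_charge(quarks):
    """Sum the quark charges to get the net charge of a baryon."""
    return sum(CHARGE[q] for q in quarks)

proton = ["up", "up", "down"]     # uud
neutron = ["up", "down", "down"]  # udd

print(baryon_charge(proton))   # → 1
print(baryon_charge(neutron))  # → 0
```

Using exact fractions avoids floating-point rounding, so the +1 and 0 come out exactly.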


What scientists have developed to describe the fundamental particles is the Standard Model. This theory has been supported by experiments in particle accelerators such as the Large Hadron Collider (LHC) at CERN.
The Standard Model has 12 fundamental matter particles: six quarks and six leptons. The up and down quarks are just two of the six; the others are the charm, strange, top and bottom quarks.
The leptons are the electron, the muon and the tau, plus their three neutrinos: the electron neutrino, muon neutrino and tau neutrino.
picture credit: Cern, http://public.web.cern.ch/public/en/science/standardmodel-en.html

These particles come in three generations. The up and down quarks, for example, make up the first generation of quarks. Second- and third-generation particles are heavier and unstable, quickly decaying to the more stable first generation. This is why our protons and neutrons are made of first-generation quarks, and why it is electrons that occupy the cloud surrounding the atom's nucleus.
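The twelve matter particles can be laid out by generation; a small illustrative table (the grouping below is my own arrangement of the standard names, not an official dataset):

```python
# The 12 matter fermions of the Standard Model, grouped by generation.
GENERATIONS = {
    1: {"quarks": ("up", "down"),
        "leptons": ("electron", "electron neutrino")},
    2: {"quarks": ("charm", "strange"),
        "leptons": ("muon", "muon neutrino")},
    3: {"quarks": ("top", "bottom"),
        "leptons": ("tau", "tau neutrino")},
}

# Six quarks + six leptons = 12 fundamental matter particles.
all_quarks = [q for g in GENERATIONS.values() for q in g["quarks"]]
all_leptons = [l for g in GENERATIONS.values() for l in g["leptons"]]
assert len(all_quarks) == 6 and len(all_leptons) == 6
```

Everyday matter is built entirely from generation 1; generations 2 and 3 show up in accelerators and cosmic rays before decaying away.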


The Standard Model also includes forces and their carrier particles, which play a role in holding atoms together. Carrier particles carry three of the four known forces: the strong and weak nuclear forces and electromagnetism. Note that gravity is not included, which is part of the reason the model is not considered complete. The carrier particles are all bosons: photons carry electromagnetism, the W and Z bosons carry the weak force, and gluons carry the strong force. If gravity could be added to the Standard Model, a carrier particle called the graviton would be included, but so far scientists have not been able to produce any results that would add the force and its carrier. This is one of the many goals of the LHC and its collaborators.












References:
Berkeley Lab. The Particle Adventure. http://particleadventure.org/standard-model.html. Accessed 29 Dec 2011.

Nave, C. R. HyperPhysics, Georgia State University. http://hyperphysics.phy-astr.gsu.edu/hbase/particles/proton.html .

CERN, European Organization for Nuclear Research http://public.web.cern.ch/public/en/science/standardmodel-en.html . 2008.

Sunday, December 11, 2011

Nobel Prize 2011 Physics: Dark Energy and Accelerating Expansion of the Universe





When you throw a ball into the air, gravity will eventually stop its upward movement and accelerate it back toward you, right? Well, what if that ball kept going up? For that matter, what if it kept going up and increasing its speed as it did so?
We would have to assume that something, some force, is working harder than the force of gravity. This is not so hard to believe as far as the forces go: of the four known basic forces (the weak and strong nuclear forces, the electromagnetic force and gravity), gravity is observably the weakest. But let's assume the ball is not fitted with a magnet headed toward a massive piece of iron, and that we have not fitted it with nuclear reactor boosters... what, then, could be working against gravity?


This is the conundrum that Nobel Laureates Saul Perlmutter, Brian Schmidt and Adam Riess faced when they discovered that the universe is expanding... at an accelerating rate.



For a long time, scientists believed that the universe was static, due in part to a paradox Newton was aware of after his discovery of the force of gravity. According to his law, Newton realized that if the universe were finite, it should be collapsing as the stars attracted one another... but that did not appear to be the case, and so the universe was taken to be static.
Image credit: http://www.popgive.com/2010/06/brain-twisting-paradoxes.html
Olbers’ paradox is the argument that the darkness of the night sky conflicts with the assumption of an infinite and eternal static universe. It is one of the pieces of evidence for a non-static universe such as the current Big Bang model. 
The problem was that a static universe had to be infinite, and an infinite universe could not be: if the universe were infinite, our night sky would be as bright as day from all the stars shining from the endless reaches of space (whether the light came from close by or gazillions of light-years away, an infinite universe means stars "forever").
 Newton was aware of the paradox but stuck by the static universe theory. Years later, Einstein realized that according to his theory of General Relativity the universe should be expanding or collapsing; to fix that, he came up with the cosmological constant, cancelling the effect of gravity on a large scale and keeping the static universe as the rule (SDSS, Expanding Universe).


Image credit: grandunificationtheory.com 
Edwin Hubble, with the assistance of "larger telescopes...being built that were able to accurately measure the spectra, or the intensity of light as a function of wavelength, of faint objects (SDSS, Expanding Universe)," then discovered that the universe was indeed expanding. Observing distant galaxies, Hubble found that their redshift increased the farther they were from the earth. This led Einstein to call his cosmological constant his "biggest blunder." Scientists now knew that the universe was not only finite but expanding, which meant there was a point in time when the universe was incredibly small and dense; it had a beginning... a "Big Bang."
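Hubble's observation, redshift (and hence recession velocity) growing in proportion to distance, is summarized by Hubble's law, v = H0 × d. A toy calculation, assuming a round illustrative value of H0 ≈ 70 km/s per megaparsec (the measured value has been refined over the years):

```python
# Hubble's law: recession velocity v = H0 * d.
H0 = 70.0  # km/s per megaparsec (illustrative round value)

def recession_velocity(distance_mpc):
    """Velocity (km/s) at which a galaxy recedes, per Hubble's law."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at about 7000 km/s;
# double the distance, double the velocity.
print(recession_velocity(100))  # → 7000.0
print(recession_velocity(200))  # → 14000.0
```

The linear relationship is the whole point: the farther the galaxy, the faster it recedes, exactly what a uniformly expanding universe predicts.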


Enter Nobel Laureates Brian Schmidt, Adam Riess and Saul Perlmutter. From Nobelprize.org's popular information:


 "Saul Perlmutter headed one of the two research teams, the Supernova Cosmology Project, initiated a decade earlier in 1988. Brian Schmidt headed another team of scientists, which towards the end of 1994 launched a competing project, the High-z Supernova Search Team, in which Adam Riess was to play a crucial role."


Using Type Ia supernovae (the deaths of white dwarf stars in binary star systems, to be specific) as the basis for their measurements, the two competing teams came to the same surprising conclusion: yes, the universe is expanding, but the expansion is not slowing down as previously believed. While trying to determine the fate of our universe, the teams found that the distant supernovae were much fainter than expected. This find was the key to the roles that the mysterious dark energy and dark matter play in the cosmos. Where a vacuum of nothing in space should be, there is something, and that something must be working against gravity to accelerate expansion in a universe that was supposed to be slowing down. Dark energy and dark matter are believed to make up 95% of our universe, while we, the earth, the moon, the sun and all the stars... all other matter... comprise only the last 5%.
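That "fainter than expected" measurement rests on the inverse-square law: a standard candle's observed flux is F = L / (4πd²), so a supernova four times fainter than expected sits twice as far away. A minimal sketch of that reasoning (the function name is illustrative):

```python
import math

# For a "standard candle" like a Type Ia supernova, the intrinsic
# luminosity L is (approximately) known, so the observed flux gives
# the distance via the inverse-square law: F = L / (4 * pi * d**2).
def distance_from_flux(luminosity, flux):
    """Distance implied by an observed flux from a standard candle."""
    return math.sqrt(luminosity / (4 * math.pi * flux))

L = 1.0
flux_expected = 1.0 / (4 * math.pi)        # flux if the source is at d = 1
flux_measured = 0.25 / (4 * math.pi)       # observed 4x fainter

d_expected = distance_from_flux(L, flux_expected)   # → 1.0
d_measured = distance_from_flux(L, flux_measured)   # → 2.0
```

Supernovae being dimmer than a decelerating universe predicts means they are farther away than expected, which is the observational signature of accelerating expansion.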


With the discovery of acceleration came the true value of Einstein's cosmological constant. Without the cosmological constant, the formula for expansion would not allow for acceleration. So Einstein's blunder could turn out to be the value of that vacuum of space that contains "something."


So, like a toddler who keeps asking "why" with each answered question, our universe presents new unknowns with each discovery!







References:
Sloan Digital Sky Survey (SDSS). http://skyserver.sdss.org/dr1/en/astro/universe/universe.asp . viewed on 12/11/2011.

"The Nobel Prize in Physics 2011 - Popular Information". Nobelprize.org. 11 Dec 2011 
http://www.nobelprize.org/nobel_prizes/physics/laureates/2011/popular.html


Perlmutter, S. (2003). Supernovae, Dark Energy and the Accelerating Universe. Physics Today, vol. 56, no.