
Friday, August 13, 2010

Agrobiotechnology

Agrobiotechnology is a range of tools, including traditional breeding techniques, that alter living organisms, or parts of organisms, to make or modify products; improve plants or animals; or develop microorganisms for specific agricultural uses. Modern agricultural biotechnology includes the tools of genetic engineering. It covers the research on and development of agricultural products, such as crop varieties and crop protection products, by modifying genes to confer desirable properties such as pest resistance or improved nutritional profiles.
The first genetically engineered product went on the market in 1994. The FDA determined that a new tomato, which could be shipped vine-ripened without rotting rapidly, was as safe as other commercial tomatoes. Since then, more than 50 other genetically engineered foods have been determined by the agency to be as safe as their conventional counterparts.


The Grocery Manufacturers of America estimates that between 70 percent and 75 percent of all processed foods available in U.S. grocery stores may contain ingredients from genetically engineered plants. Breads, cereal, frozen pizzas, hot dogs and soda are just a few of them.


Soybean oil, cottonseed oil and corn syrup are ingredients used extensively in processed foods. Soybeans, cotton and corn dominate the 100 million acres of genetically engineered crops that were planted in the United States in 2003, according to the U.S. Department of Agriculture (USDA). Through genetic engineering, these plants have been made to ward off pests and to tolerate herbicides used to kill weeds. Other crops, such as squash, potatoes, and papaya, have been engineered to resist plant diseases.

More than 50 biotech food products have been evaluated by the FDA and found to be as safe as conventional foods, including canola oil, corn, potatoes, soybeans, squash, sugar beets and tomatoes.

Memtable - Part 2 - Implications

The Memtable is a very useful gadget for society, so it is going to gain major acceptance from people, especially people in the workforce such as businesses and corporations, because it makes their lives easier and saves a lot of their time. However, it has no obvious effect on our culture, and I do not expect it to influence our culture in any way, because it is only a tool we use to make our lives simpler and easier.



 
The only ethical challenge this gadget poses to society is that it might make people lazier and less eager to do certain things, such as organizing or remembering information that is already saved in the Memtable, or leave them unable to do many things manually because they depend on this machine to do it for them. This could be a minor disadvantage of this machine and many other similar gadgets, but its advantages outweigh its disadvantages, so this will not affect the demand for it much. Overall, I think the Memtable is a very innovative technology and will be a very useful tool to society as a whole.

Memtable - Part 1 - Intro

MemTable is an interactive touch table that supports co-located group meetings by capturing both digital and physical interactions in its memory. Everyone can be a scribe at the MemTable. The goal of the project is to demonstrate hardware and software design principles that integrate recording, recalling, and reflection during the life cycle of a project in one tabletop system. The project has been under development for the last year.

MemTable poses an important question to HCI designers: How can the multi-person interactions we design be integrated with our work practices into systems that have history and memory? What is the social computing space of the future?

 MemTable’s hardware design prioritizes ergonomics, social interaction, structural integrity, and streamlined implementation. Its software supports heterogeneous input modalities for a variety of contexts: brainstorming, decision making, event planning, and story-boarding. The user interface introduces personal menus, capture elements, and tagging for search purposes.
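To make the capture, recall, and tagging ideas concrete, here is a minimal TypeScript sketch of one plausible data model for a shared tabletop memory. The entry kinds mirror the five input types shown in the use case below, but the interfaces, function names, and sample entry are invented for illustration and are not MemTable's actual implementation.

```typescript
// A toy data model for a shared tabletop "memory" (illustrative only,
// not the real MemTable code).

type EntryKind = "text" | "image" | "sketch" | "laptopCapture" | "audio";

interface MemoryEntry {
  kind: EntryKind;    // which input modality produced this entry
  author: string;     // who acted as scribe for this item
  createdAt: Date;    // when it was captured during the meeting
  content: string;    // text body, or a path/URL to the captured media
  tags: string[];     // labels used later for search and recall
}

const tableMemory: MemoryEntry[] = [];

// Capture: anyone at the table can add an entry at any time.
function capture(entry: MemoryEntry): void {
  tableMemory.push(entry);
}

// Recall: filter the shared history by tag, e.g. during a project review.
function recallByTag(tag: string): MemoryEntry[] {
  return tableMemory.filter((entry) => entry.tags.includes(tag));
}

// Hypothetical usage from a planning meeting.
capture({
  kind: "sketch",
  author: "Architect",
  createdAt: new Date(),
  content: "floorplan-v1.png",
  tags: ["layout", "brainstorming"],
});
console.log(recallByTag("layout").length); // -> 1
```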

Users in this meeting were discussing potential themes for a restaurant in their neighborhood. A chef, an architect, a designer, and a financial advisor each had individual concerns which they could record to the table. Its physical design also creates an ergonomic and formal setting for structured collaborative discussions where any person can be a scribe at any time.

Use case example, photo 1: Marcelo and Natan had a meeting to discuss a new prototype of a food printer before demo week. Here is a screenshot of the review panel. Note the 5 different types of input supported by the system: text, image capture, sketching, laptop capture, and audio.



Use case example, photo 2: Marcelo and Natan bring out version 2 of their prototype and sketch improvements together.

Sunday, August 8, 2010

DNA Computers

DNA computers will be the next generation of computers, made from the building blocks of genes. Because of their speed, miniaturization, and data-storage potential, DNA computers are being considered as a replacement for silicon-based computers. Current DNA computer research has already shown that DNA computers are capable of solving complex mathematical problems and storing enormous amounts of data.

Limitations of Silicon Chips

Silicon-based computer chips have been around for more than 40 years, and manufacturers have been successful in making silicon-based chips smaller, more complex and faster than their predecessors. According to Moore's Law, the number of transistors that fit on a chip doubles roughly every eighteen months to two years, which is what has allowed chips to keep shrinking while getting faster. However, there is a limit to how small, fast and compact silicon computer chips can be.
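As a rough worked example of what that doubling rate implies, here is a tiny TypeScript calculation assuming the commonly quoted eighteen-month doubling period; the starting transistor count is an arbitrary illustrative number, not a real chip.

```typescript
// Rough illustration of exponential growth under Moore's Law, assuming
// transistor counts double every 18 months; the starting count is arbitrary.
const doublingPeriodYears = 1.5;
const startTransistors = 1_000_000; // hypothetical chip in year 0

for (const years of [3, 6, 9, 15]) {
  const factor = 2 ** (years / doublingPeriodYears);
  console.log(`${years} years: ~${Math.round(startTransistors * factor).toLocaleString()} transistors`);
}
```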

Advantages of DNA Computers

DNA computers show promise because they do not have the limitations of silicon-based chips. For one, DNA-based chip manufacturers will always have an ample supply of raw materials, as DNA exists in all living things; this means generally lower overhead costs. Secondly, manufacturing DNA chips does not produce toxic by-products. Last but not least, DNA computers will be much smaller than silicon-based computers, as one pound of DNA chips could hold all the information stored in all the computers in the world.

With the use of DNA logic gates, a DNA computer the size of a teardrop will be more powerful than today's most powerful supercomputer. A DNA chip smaller than a dime will have the capacity to perform 10 trillion parallel calculations at one time as well as hold ten terabytes of data. The capacity to perform parallel calculations, let alone trillions of them at once, is something silicon-based computers cannot match. As such, a complex mathematical problem that could take silicon-based computers thousands of years to solve could be done by DNA computers in hours. For this reason, the first uses of DNA computers will most probably be code cracking, route planning and complex simulations for the government.

History of DNA Computers, and When Will They Be in Use?

The first person who thought of and experimented with DNA as an alternative to silicon chips was Leonard Adleman, a computer scientist at the University of Southern California. His 1994 experiment, which used DNA to solve a complex mathematical problem, was inspired by the book Molecular Biology of the Gene by James Watson.

DNA computers will work through the use of DNA-based logic gates. These logic gates are very similar to those used in our computers today; the only difference is the composition of the input and output signals. In current logic-gate technology, binary signals from silicon transistors are converted into instructions that the computer can carry out. DNA computers, on the other hand, use DNA codes in place of electrical signals as inputs to the DNA logic gates. DNA computing is, however, still in its infancy, and though it may be very fast at producing possible answers, narrowing those answers down still takes days.
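To give a feel for that "generate many possible answers in parallel, then narrow them down" style of computation, which is what Adleman's 1994 experiment did chemically for a small Hamiltonian-path problem, here is a purely illustrative TypeScript sketch run in ordinary software. The graph, the random candidate generation, and all names are invented for the example; real DNA computing performs these steps with strands in a test tube, not with code like this.

```typescript
// Toy, in-silicon analogue of Adleman-style generate-and-filter computation.

type Edge = [number, number];

// Hypothetical directed graph: 4 vertices, a handful of edges.
const vertices = [0, 1, 2, 3];
const edges: Edge[] = [[0, 1], [1, 2], [2, 3], [0, 2], [1, 3]];

// "Generation" step: create many random candidate paths, the software
// stand-in for the massively parallel ligation of strands in solution.
function generateCandidates(length: number, count: number): number[][] {
  const candidates: number[][] = [];
  for (let i = 0; i < count; i++) {
    const path: number[] = [];
    for (let j = 0; j < length; j++) {
      path.push(vertices[Math.floor(Math.random() * vertices.length)]);
    }
    candidates.push(path);
  }
  return candidates;
}

// "Filtering" step: keep only paths whose consecutive vertices are joined
// by real edges and that visit every vertex exactly once.
function isHamiltonianPath(path: number[]): boolean {
  const usesRealEdges = path.every((v, i) =>
    i === 0 || edges.some(([a, b]) => a === path[i - 1] && b === v));
  const visitsAllOnce = new Set(path).size === vertices.length;
  return usesRealEdges && visitsAllOnce;
}

const survivors = generateCandidates(vertices.length, 100000).filter(isHamiltonianPath);
console.log(survivors.length > 0 ? survivors[0] : "no Hamiltonian path found");
```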

This is an IBM computer chip that uses DNA origami, which involves single DNA molecules self-assembling in a solution as a result of a reaction between a long single strand of viral DNA and a mixture of different short synthetic oligonucleotide strands. These short segments act as staples that can be modified to provide attachment sites for nanoscale components at resolutions (separations between sites) as small as 6 nanometers (nm).

Tuesday, August 3, 2010

Nanotechnology

Introduction

Manufactured products are made from atoms. The properties of those products depend on how those atoms are arranged. If we rearrange the atoms in coal, we get diamonds. If we rearrange the atoms in sand (and add a pinch of impurities) we get computer chips. If we rearrange the atoms in dirt, water and air we get grass.
Since we first made stone tools and flint knives we have been arranging atoms in great thundering statistical herds by casting, milling, grinding, chipping and the like. We’ve gotten better at it: we can make more things at lower cost and greater precision than ever before. But at the molecular scale we’re still making great ungainly heaps and untidy piles of atoms.

That's changing: nanotechnology is about rearranging atoms whichever way we want. In special cases we can already arrange atoms and molecules exactly as we want. Theoretical analyses make it clear we can do a lot more. Eventually, we should be able to arrange and rearrange atoms and molecules much as we might arrange LEGO blocks. In not too many decades we should have a manufacturing technology able to:

•    Build products with almost every atom in the right place.
•    Do so inexpensively.
•    Make most arrangements of atoms consistent with physical law.

Often called nanotechnology, molecular nanotechnology or molecular manufacturing, it will let us make most products lighter, stronger, smarter, cheaper, cleaner and more precise.
The technology will let us work at the molecular scale while building products at the macroscopic scale.

The advantages of nanotechnology

One of the basic principles of nanotechnology is positional control. At the macroscopic scale, the idea that we can hold parts in our hands and assemble them by properly positioning them with respect to each other goes back to prehistory: we celebrate ourselves as the tool using species. Our wisdom and our knowledge would have done us scant good without an opposable thumb: we’d still be shivering in the bushes, unable to start a fire.

At the molecular scale, the idea of holding and positioning molecules is new and almost shocking. However, as long ago as 1959 Richard Feynman, the Nobel prize winning physicist, said that nothing in the laws of physics prevented us from arranging atoms the way we want: “…it is something, in principle, that can be done; but in practice, it has not been done because we are too big.”

What would it mean if we could inexpensively make things with every atom in the right place?

Products could be much lighter, stronger, and more precise.
•    For starters, we could continue the revolution in computer hardware right down to molecular gates and wires — something that today’s lithographic methods (used to make computer chips) could never hope to do.
•    We could inexpensively make very strong and very light materials: shatterproof diamond in precisely the shapes we want, by the ton, and over fifty times lighter than steel of the same strength.
•    We could make a Cadillac that weighed fifty kilograms, or a full-sized sofa you could pick up with one hand.
•    We could make surgical instruments of such precision and deftness that they could operate on the cells and even molecules from which we are made — something well beyond today’s medical technology.
The list goes on — almost any manufactured product could be improved, often by orders of magnitude.

What will we be able to make?

Nanotechnology should let us make almost every manufactured product faster, lighter, stronger, smarter, safer and cleaner. We can already see many of the possibilities as these few examples illustrate. New products that solve new problems in new ways are more difficult to foresee, yet their impact is likely to be even greater. Could Edison have foreseen the computer, or Newton the communications satellite?
1. Improved transportation
2. Atom computers
3. Military applications
4. Solar energy
5. Medical uses

How long?

The single most frequently asked question about nanotechnology is: How long? How long before it will let us make molecular computers? How long before inexpensive solar cells let us use clean solar power instead of oil, coal, and nuclear fuel? How long before we can explore space at a reasonable cost?
The scientifically correct answer is: I don’t know.
From relays to vacuum tubes to transistors to integrated circuits to Very Large Scale Integrated circuits (VLSI) we have seen steady declines in the size and cost of logic elements and steady increases in their performance.





Conclusion: 

Nanotechnology is predicted to be developed by 2020, but much depends on our commitment to its research.
•    Extrapolation of these trends suggests we will have to develop molecular manufacturing in the 2010 to 2020 time frame if we are to keep the computer hardware revolution on schedule.
•    Of course, extrapolating past trends is a philosophically debatable method of technology forecasting. While no fundamental law of nature prevents us from developing nanotechnology on this schedule (or even faster), there is equally no law that says this schedule will not slip.
•    Much worse, though, is that such trends imply that there is some ordained schedule — that nanotechnology will appear regardless of what we do or don’t do. Nothing could be further from the truth. How long it takes to develop this technology depends very much on what we do. If we pursue it systematically, it will happen sooner. If we ignore it, or simply hope that someone will stumble over it, it will take much longer. And by using theoretical, computational and experimental approaches together, we can reach the goal more quickly and reliably than by using any single approach alone.
While some advances are made through serendipitous accidents or a flash of insight, others require more work. It seems unlikely that a scientist would forget to turn off the Bunsen burner in his lab one afternoon and return to find he’d accidentally made a Space Shuttle.
Like the first human landing on the moon, the Manhattan project, or the development of the modern computer, the development of molecular manufacturing will require the coordinated efforts of many people for many years. How long will it take? A lot depends on when we start.

Sunday, August 1, 2010

LuminAR

LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them into a new category of robotic, digital information devices. The LuminAR Bulb combines a Pico-projector, camera, and wireless computer in a compact form factor.











This self-contained system provides users with just-in-time projected information and a gestural user interface, and it can be screwed into standard light fixtures everywhere. The LuminAR Lamp is an articulated robotic arm, designed to interface with the LuminAR Bulb. Both LuminAR form factors dynamically augment their environments with media and information, while seamlessly connecting with laptops, mobile phones, and other electronic devices. LuminAR transforms surfaces and objects into interactive spaces that blend digital media and information with the physical space. The project radically rethinks the design of traditional lighting objects, and explores how we can endow them with novel augmented-reality interfaces.


This project, by an MIT student named Natan Linder, is a sentient desk lamp that has little hope of replacing your desk lamp any time soon, but it may well become a commercial offering in the near future because of heavy endorsement and support from big companies like Intel and Microvision.
It uses a Microvision Show WX projector, which is both focus-free and carefree to use. It is still under research, but this release is definitely a preview of what a personal computer, or some of its applications, might look like in the next decade.
source - LuminAR

Saturday, July 31, 2010

< HTML5 >

Definition from Wikipedia:

“HTML5 is a standard for structuring and presenting content on the World Wide Web”

Introduction:

HTML5 is a name that will soon be famous, as it will redefine the internet as we know it. We have seen colossal improvements in the way we use the internet, not only for interaction but also as a tool of entertainment, and browser capabilities and connection speeds have grown rapidly. Yet there is still one more leap for the internet industry as a whole, and the browser market in particular: how can the leading browsers of this "internet boom era" cope with ever-increasing speeds and use them to deliver the best product to today's consumer of internet technology? One answer was the emergence of massively popular websites like YouTube, Facebook, and many more, which added entertainment to the web but created problems for the less savvy internet user: installing and upgrading plugins, and choosing among many different browsers on the basis of performance and security scrutiny. The W3C, the World Wide Web Consortium, saw how impractical it was for innovators to produce richer web applications and services when consumers had to go through so much unnecessary added technical complexity.

So the World Wide Web Consortium's (W3C) answer was HTML5: a proposal for an all-in-one web presentation platform that will ease the World Wide Web's pain caused by the rich internet application revolution, by tackling the same gap that Flash, Silverlight, and JavaFX are trying to fill.

What does HTML5 promise?

The specification boasts capabilities covering video and graphics on the web, as well as a batch of APIs (Application Programming Interfaces). The HTML5 standard also includes features like video playback and drag-and-drop that have previously depended on third-party browser plug-ins such as Adobe Flash, Microsoft Silverlight, and Google Gears.
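To give a concrete taste of what "no plug-in required" means, here is a small TypeScript sketch using only the browser's built-in DOM APIs to play a video and accept a dropped image. The file name, element text, and structure are made up for the example; this is an illustration of the standard APIs, not code from the specification.

```typescript
// Native <video> playback: no Flash or Silverlight required.
const video = document.createElement("video");
video.src = "demo.webm";   // hypothetical video file
video.controls = true;     // browser-provided playback controls
document.body.appendChild(video);

// Native drag-and-drop: let the user drop an image file onto the page.
const dropZone = document.createElement("div");
dropZone.textContent = "Drop an image here";
document.body.appendChild(dropZone);

dropZone.addEventListener("dragover", (event) => {
  event.preventDefault();  // required so the drop event will fire
});

dropZone.addEventListener("drop", (event) => {
  event.preventDefault();
  const file = event.dataTransfer?.files[0];
  if (file && file.type.startsWith("image/")) {
    const img = document.createElement("img");
    img.src = URL.createObjectURL(file); // preview without uploading
    dropZone.appendChild(img);
  }
});
```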


Additional description of the features of HTML5 from Wikipedia: “HTML5 introduces a number of new elements and attributes that reflect typical usage on modern websites. Some of them are semantic replacements for common uses of the generic block <div> and inline <span> elements, for example <nav> (website navigation block) and <footer> (usually referring to the bottom of a web page or to the last lines of HTML code). Other elements provide new functionality through a standardized interface, such as the multimedia elements <audio> and <video>.”

In addition to specifying markup, HTML5 specifies scripting application programming interfaces (APIs). Existing document object model (DOM) interfaces are extended and de facto features documented.




There are also new APIs, such as:

· The canvas element for immediate mode 2D drawing.
· Timed media playback.
· Offline storage database (offline web applications).
· Document editing.
· Drag-and-drop.
· Cross-document messaging.
· Browser history management.
· MIME type and protocol handler registration.
· Microdata.
· Geolocation.
· Web SQL Database, a local SQL database.
· Indexed Database API, an indexed hierarchical key-value store (formerly WebSimpleDB).


Some of the new features are part of HTML5 and some are maintained in separate specifications. (Wikipedia)
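To show a few of the APIs in that list in action, here is a minimal TypeScript sketch exercising canvas 2D drawing, web storage, and geolocation. The rectangle, the stored key name, and the log messages are all invented for illustration.

```typescript
// Illustrative use of three HTML5 APIs; the values drawn, stored, and
// logged here are arbitrary examples.

// Immediate-mode 2D drawing with the canvas element.
const canvas = document.createElement("canvas");
canvas.width = 200;
canvas.height = 100;
document.body.appendChild(canvas);

const ctx = canvas.getContext("2d");
if (ctx) {
  ctx.fillStyle = "steelblue";
  ctx.fillRect(10, 10, 180, 80);        // a filled rectangle
  ctx.strokeStyle = "white";
  ctx.strokeText("HTML5 canvas", 50, 55);
}

// Web storage: read the value saved on the previous visit, then update it.
const lastVisit = localStorage.getItem("lastVisit");
console.log("Previous visit:", lastVisit ?? "first visit");
localStorage.setItem("lastVisit", new Date().toISOString());

// Geolocation: ask the browser (with the user's permission) for a position.
navigator.geolocation.getCurrentPosition(
  (position) => {
    console.log("Latitude:", position.coords.latitude);
    console.log("Longitude:", position.coords.longitude);
  },
  (error) => console.log("Geolocation unavailable:", error.message)
);
```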

How will it be integrated in the internet society?

HTML5 promises to replace proprietary add-ons once and for all by providing these capabilities as an industry standard built into every new website and browser. Because it is backed by a coalition of major players, such as Google's YouTube and Opera with its new CSS3 support, big websites will adopt this reformation of the web almost automatically.

Soon after the announcement of this technology reform, many browsers, such as Mozilla Firefox 3.6.8 and Safari, had already embedded some of the promising features in their upgrades. Here is a demo of some of the technologies presented by the leading PC and mobile phone company Apple™. Adobe Flash and Microsoft Silverlight will therefore soon see their turf invaded by HTML5.

For all the excitement created by introducing this "behind the scenes" yet globally impactful technology, this standard web presentation format is still a work in progress, and its makers say it will be at least five to ten years before it is done, so it is too early to make any comparisons at this time. Some plugins such as Silverlight, Adobe AIR, and JavaFX will still be necessary, as they provide more advanced features such as a richer programming model (C#), 3D, and out-of-browser capabilities. With those features, those kinds of plugins will, for now, continue to provide a richer internet experience.

The release timeline of HTML5 and its counterpart CSS3 means it will take a decade until their efforts are finalized and implemented consistently across all browsers; until then, widespread platforms such as Flash will continue to deliver a ubiquitous platform that is consistent and able to provide richer, more engaging user experiences.

Google's Fête agrees. They say that HTML5 is only a starting point, and companies such as Google will add their own advancements, such as the ability to drag and drop images into a browser, which is now featured in Gmail and BOX.net.

If you want to see how much of HTML5 is implemented in your browser, click "test" to see your browser's percentage score.