Tuesday, September 27, 2011

Piezoelectric Remote Control Changes Channels When You Bend It

Piezoelectricity, a phenomenon in which motion creates a small voltage, is one of our favorite up-and-coming areas of research, whether it's powering DARPA's beetle-bots or dampening bumps in the road. Now, Japan's Murata Manufacturing claims it has come up with a remote control that works by harnessing the piezoelectric power generated by, well, bending and twisting the remote itself.

Called the Leaf Grip Remote Controller, the remote looks mostly like two handles with a transparent screen between them. That screen, or, more accurately, that film, has two layers of piezoelectric sensors, one of which detects bending, while the other detects a twisting motion. The film is unusual in that it shows no signs of having any pyroelectric properties, which often go hand in hand with piezoelectrics. Pyroelectrics, as the name suggests, generate voltage when exposed to changes in temperature, which Murata says "is a drawback because they cannot detect bending and twisting vibrations separately from changes in temperature."

The remote works essentially like you'd think it would: Twist the sides to move up and down channels, bend the film to change volume, that kind of thing. We do, of course, have to question the usefulness of such a control method over, you know, buttons, but it's still pretty impressive that they've managed to pull it off (if they have; we haven't seen the remote in action). What do you guys think? Any interest in bending your remote control?
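
For the curious, here's a toy sketch of how firmware might turn those two sensor signals into commands. Everything here -- the thresholds, the names, the priority rule -- is invented for illustration; Murata hasn't published how the Leaf Grip actually maps gestures.

```python
# Hypothetical gesture mapping for a bend/twist remote. All values and
# names are invented for illustration, not taken from Murata's design.

def classify_gesture(bend_mv, twist_mv, threshold_mv=50):
    """Map bend/twist sensor voltages (millivolts) to a remote command.

    Positive twist -> channel up, negative twist -> channel down;
    positive bend -> volume up, negative bend -> volume down.
    Twist takes priority when both signals exceed the threshold.
    """
    if abs(twist_mv) >= threshold_mv:
        return "CHANNEL_UP" if twist_mv > 0 else "CHANNEL_DOWN"
    if abs(bend_mv) >= threshold_mv:
        return "VOLUME_UP" if bend_mv > 0 else "VOLUME_DOWN"
    return "IDLE"

print(classify_gesture(bend_mv=80, twist_mv=10))   # bend only -> volume up
print(classify_gesture(bend_mv=10, twist_mv=-90))  # twist only -> channel down
```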



--
Sateesh.smart

The Future of Skin

Of all our human organs, skin is arguably one of the most abused — yet it's also arguably the most reliable. It protects everything inside us, helping us avoid harm by sensing obstacles in our way, making sure we stay hydrated, and ensuring we keep ourselves at the right temperature. It constantly replenishes itself, sloughing off former layers that we've either burned or dried out or scraped or ignored, while new ones grow in their places.

Many of skin's properties would be useful in other applications — like helping people with artificial limbs regain some of what they've lost. And an electronic skin, or at least some tactile sensory ability, could help machines understand the delicate differences in force that are required to grip an apple, a hand or a piece of steel.

Researchers trying to duplicate its beneficial properties are building teeny stretchable electronics that can give artificial limbs a real sense of touch.

And scientists are making several changes to human skin itself, turning it into a 21st century interface capable of much more than feeling another person's caress. With conductive tattoos that turn skin into a human-machine communications device, skin is getting plenty of upgrades.


--
Sateesh.smart

Baffling CERN Results Show Neutrinos Moving Faster Than the Speed of Light

Don't go throwing out your physics texts just yet, but there's some strange and unprecedented news brewing at CERN today that could potentially undo large parts of the Standard Model, and it has nothing to do with particle collisions at the LHC or elusive god particles. Physicists running routine neutrino experiments between CERN's Geneva HQ and the Gran Sasso laboratory in Italy 455 miles away have found that their neutrinos seem to be traveling faster than the speed of light. That's right: faster than the fastest known speed in the universe. It's certainly not something we could have predicted when putting together our latest FYI, which investigates whether anything can move faster than light.

Just a refresher--not that you need it--but nothing travels faster than the speed of light. In physics as we understand it, it is the absolute and ultimate speed limit in our universe. We've tested and retested the speed of light, measured it in as many ways as we can think of, and much of modern physics is built upon the idea that nothing can exceed it.

So naturally this result is potentially huge. But, as noted above, it's not yet time to tear down the whole of modern physics and start all over. Here's what's going on: CERN physicists are firing neutrinos--which scarcely interact with normal matter and thus can pass straight through the Earth--to a detector in Italy. The aim here was to test the frequency of oscillations (that's when one flavor of neutrino spontaneously shifts to another flavor), so the Geneva team was sending a beam of muon neutrinos toward Gran Sasso, and the Gran Sasso team was recording how many ended up there as tau neutrinos.

But in doing so, they started to notice something odd. The neutrinos from CERN were showing up at Gran Sasso a few billionths of a second early--in other words, they appeared to be getting from Switzerland to Italy faster than light would travel the same distance.
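
A quick back-of-envelope check, using the roughly 730-kilometre baseline and the roughly 60-nanosecond early arrival that have been reported, shows just how small the effect is against the total flight time:

```python
# Back-of-envelope check using figures reported for this experiment:
# ~730 km CERN -> Gran Sasso baseline, ~60 ns early arrival.

C = 299_792_458.0          # speed of light, m/s
distance_m = 730_000.0     # approximate baseline
early_s = 60e-9            # reported early arrival, ~60 nanoseconds

light_time_s = distance_m / C             # total flight time at c
fractional_excess = early_s / light_time_s

print(f"light travel time: {light_time_s * 1e3:.3f} ms")
print(f"implied speed excess over c: {fractional_excess:.2e}")
```

So the neutrinos would be beating light by only a couple of parts in 100,000 -- tiny, but far larger than the experiment's stated errors.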

This isn't an isolated anomaly, but has been going on for years. The team has now measured some 15,000 batches of neutrinos coming across that distance, and they say they've reached a point where the statistical significance is such that, were they trying to prove anything else, it would count as a formal scientific discovery. But try as they might, they can't explain what's happening.

Nobody, least of all the researchers involved, is ready to call the Standard Model's upper speed limit busted just yet. But they also can't explain what's happening, which is why they are opening up their data to scrutiny from the wider scientific community. So now we'll have to wait and see if others in the physics world can replicate their results or come up with some kind of explanation as to why the neutrinos appear to be breaching a fundamental physical law.



--
Sateesh.smart

Sunday, September 25, 2011

The neutrino catcher that's rocking physics

Meet OPERA, a massive experiment that lies deep inside the Gran Sasso mountain near L'Aquila, Italy, and is shaking the world of physics.

OPERA stands for Oscillation Project with Emulsion-tRacking Apparatus. It detects neutrinos - an elusive type of subatomic particle - originating 730 kilometres away at the CERN physics laboratory near Geneva, Switzerland.

OPERA's detector is made of two huge, identical modules, each weighing 625 tonnes. The modules contain sheets of lead that produce electrically charged particles called taus when hit by neutrinos. The taus leave tracks on sheets of photographic film and strike strips of plastic to produce brief flashes of light, revealing the precise time of the neutrino's arrival.

The OPERA collaboration is making waves with its recent announcement that it has detected neutrinos apparently travelling faster than the speed of light, in violation of Einstein's cosmic speed limit.


--
Sateesh.smart

Second big satellite set to resist re-entry burn-up

Even if NASA's 6-tonne UARS satellite does not cause any injury or damage when it re-enters the Earth's atmosphere today, there is more space junk headed our way next month. A defunct German space telescope called ROSAT is set to hit the planet at the end of October – and it is even more likely than UARS to cause injury or damage in populated areas.

No one yet knows where UARS (Upper Atmosphere Research Satellite) will fall to Earth. Although most of the craft's mass will be reduced to an incandescent plasma, some 532 kilograms of it in 26 pieces are forecast to survive – including a 150-kilogram instrument mounting.

NASA calculates a 1-in-3200 chance of UARS causing injury or damage. But at the end of October or beginning of November, ROSAT – a 2.4-tonne X-ray telescope built by the German aerospace lab DLR and launched by NASA in 1990 – will re-enter the atmosphere, presenting a 1-in-2000 chance of injury.

The higher risk stems from the requirements of imaging X-rays in space, says DLR spokesperson Andreas Schütz. The spacecraft's mirrors had to be heavily shielded from heat that could have wrecked its X-ray sensing operations during its eight-year working life. But this means those mirrors will be far more likely to survive a fiery re-entry.

Broken mirror, bad luck

On its ROSAT website, DLR estimates that "up to 30 individual debris items with a total mass of up to 1.6 tonnes might reach the surface of the Earth. The X-ray optical system, with its mirrors and a mechanical support structure made of carbon-fibre reinforced composite – or at least a part of it – could be the heaviest single component to reach the ground."

At the European Space Agency in Darmstadt, Germany, the head of the space debris office, Heiner Klinkrad, agrees that ROSAT's design means more of it will hit the surface. "This is indeed because ROSAT has a large mirror structure that survives high re-entry temperatures," he says.

ROSAT was deactivated in 1999 and its orbit has been decaying since then. "ROSAT does not have a propulsion system on board which can be used to manoeuvre the satellite to allow a controlled re-entry," says space industry lawyer Joanne Wheeler of London-based legal practice CMS Cameron McKenna. "And the time and position of ROSAT's re-entry cannot be predicted with any precision due to fluctuations in solar activity, which affect atmospheric drag."

Solar swelling

US Strategic Command tracks all space objects and the US-government-funded Aerospace Corporation lists both upcoming and recent re-entries on its website. But ROSAT is not yet on the upcoming list because its re-entry time is far from certain.

The moment a craft will re-enter is difficult to predict because it is determined by two main factors. First, the geometry of the tumbling satellite as it enters the upper atmosphere, which acts as a brake. Second, the behaviour of the upper atmosphere itself, which grows and shrinks with the amount of solar activity, says Hugh Lewis, a space debris specialist at the University of Southampton, UK.

"Solar activity causes the atmosphere to expand upwards, causing more braking on space objects. The reason UARS is coming back sooner than expected is a sudden increase in solar activity. Indeed, we expect to see a higher rate of re-entries as we approach the solar maximum in 2013," he says.

But don't expect it to be raining spaceships – what's coming down is partly a legacy of 1990s space-flight activity. "Some of the re-entries we see today [with UARS and ROSAT] are a heritage of years with high launch rates, which were a factor of two higher than they are today," says Klinkrad.

"The trend is towards smaller satellites, with more dedicated payloads," he says, rather than "all-in-one" satellite missions on giant craft like UARS. That means debris from future missions should be smaller.



--
Sateesh.smart

Hardy 6-tonne satellite zooms towards Earth

Hit the deck! A 6-tonne, abandoned NASA satellite with some particularly hardy parts is zooming towards Earth.

The Upper Atmosphere Research Satellite (UARS) was launched in 1991 to study chemical changes in the Earth's upper atmosphere and decommissioned in 2005. NASA says it will probably re-enter Earth's atmosphere on 23 September, give or take a day.

"This is the largest NASA satellite to come back uncontrolled for quite a while," says Nick Johnson, chief scientist for NASA's Orbital Debris Program Office at the Johnson Space Center in Houston, Texas.

Most of the satellite's mass should burn up but the agency has identified 26 parts likely to survive, including a 150-kilogram instrument mount and three 50-kilogram batteries. In total, more than half a tonne of debris should crash on Earth in a debris footprint some 700 kilometres long.

Cow flat

Based on the number of pieces the satellite is expected to break into, the total area it could potentially hit and the number of people in those areas, NASA estimates that it has a risk of approximately 1 in 3200 of injuring somebody.
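
To see how an estimate like that is put together, here's a deliberately simplified version of the arithmetic. The casualty-area and population-density numbers below are assumptions chosen for illustration, not NASA's actual inputs:

```python
# Simplified sketch of a debris casualty-risk estimate. NASA's real model
# is far more detailed; both input numbers here are assumed for illustration.
# expected casualties ~ (total "casualty area" of surviving pieces)
#                       x (average population density under the orbit)

piece_casualty_area_m2 = 26 * 0.8     # 26 pieces, ~0.8 m^2 each (assumed)
avg_pop_density_per_m2 = 1.5e-5       # ~15 people per km^2, averaged over
                                      # oceans and empty land too (assumed)

expected_casualties = piece_casualty_area_m2 * avg_pop_density_per_m2
print(f"expected casualties: {expected_casualties:.1e}")
print(f"roughly a 1-in-{1 / expected_casualties:,.0f} risk")
```

With these (made-up but plausible) inputs the answer lands near the 1-in-3200 figure, which is the point: the risk is dominated by how much hard debris survives and how empty most of the planet is.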

That's well above the risk of 1 in 10,000 that NASA now requires before it launches a satellite, but UARS was designed and launched before NASA drew up those rules.

Johnson points out that no one has ever been seriously injured by space debris, although there are anecdotal reports that a cow was killed by debris from the Skylab space station, which re-entered over Australia in 1979.

Hugh Lewis, an expert in space-debris modelling at the University of Southampton, UK, agrees that the risk is low. "You have to remember that over 20,000 objects larger than 10 centimetres wide have re-entered the atmosphere since the beginning of the space age - and there have been no reported injuries from these events," he says. "At 1 in 3200, the probability of the surviving pieces of UARS causing injury are remote."

Legal unknown

If people or property were to be hit by the UARS debris they would chart new legal territory, says Joanne Wheeler, a specialist in space law at CMS Cameron McKenna in London, since neither the UN Outer Space Treaty nor the UN Convention on International Liability for Damage Caused by Space Objects has ever been tested in court.

NASA has not yet said where UARS will hit, other than somewhere between 57 degrees north and 57 degrees south. That area spans northern Canada through to the southern tip of South America and most of Europe and Asia and so encompasses most of the world's 7 billion population, along with a huge swathe of ocean.

Any wannabe space-junk collectors, be warned. On its website, NASA says: "If you find something you think may be a piece of UARS, do not touch it. Contact a local law enforcement official for assistance."



--
Sateesh.smart

Faster-than-light neutrino claim bolstered

Representatives from the OPERA collaboration spoke in a seminar at CERN today, supporting their astonishing claim that neutrinos can travel faster than the speed of light.

The result is conceptually simple: neutrinos travelling from a particle accelerator at CERN in Switzerland arrived 60 nanoseconds too early at a detector in the Gran Sasso cavern in Italy. And it relies on three conceptually simple measurements, explained Dario Autiero of the Institute of Nuclear Physics in Lyon: the distance between the labs, the time the neutrinos left Switzerland, and the time they arrived in Italy.

But actually measuring those times and distances to the accuracy needed to detect differences of billionths of a second (1 nanosecond = 1 billionth of a second) is no easy task.

Devil in the details

"These are experiments where the devil is in the details – the details of how each piece of equipment works, and how it all goes together," said Rob Plunkett of Fermilab in Batavia, Illinois.

The detector in the Gran Sasso cavern is located 1400 metres underground. At that depth Earth's crust shields OPERA (which stands for Oscillation Project with Emulsion-tRacking Apparatus) from noise-inducing cosmic rays, but also obscures its exact latitude and longitude. To pinpoint its position precisely, the researchers stopped traffic in one lane of a 10-kilometre long highway tunnel for a week to place GPS receivers on either side.

The GPS measurements, which were so accurate they could detect the crawling drift of the planet's tectonic plates, gave precise benchmarks for each side of the tunnel, allowing the researchers to triangulate the underground detector's position in the planet. Combining that with the known position of the neutrino source at CERN gave a distance of 730,534.61 metres, plus or minus 20 centimetres.

To determine exactly when the neutrinos left CERN and arrived at Gran Sasso, the team hooked both detectors to caesium clocks, which can measure time to an accuracy of one second in about 30 million years. That linked the labs' timekeepers to within one nanosecond.
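
For a sense of what that clock specification means at the nanosecond scale the experiment needs, a quick unit conversion:

```python
# "One second in 30 million years" expressed as drift per day --
# a simple unit conversion, nothing experiment-specific.

SECONDS_PER_YEAR = 365.25 * 86400
drift_rate = 1.0 / (30e6 * SECONDS_PER_YEAR)   # fractional frequency error

ns_per_day = drift_rate * 86400 * 1e9
print(f"clock drift: ~{ns_per_day:.2f} ns per day")
```

That works out to well under a nanosecond of drift per day -- comfortably inside the 60-nanosecond effect being measured.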

"These kinds of techniques that we have been using are maybe unusual in high energy physics, but they are quite standard in metrology," Autiero said. Just to be sure, the collaboration had two independent metrology teams from Switzerland and Germany check their work. It all checked out.

The researchers also accounted for an odd feature of general relativity in which clocks at different heights keep different times.
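
The size of that relativistic effect is easy to estimate: two clocks separated in height tick at rates differing by roughly g x height / c^2. The 1000-metre height difference below is an illustrative figure, not the labs' actual geometry:

```python
# Order-of-magnitude estimate of gravitational time dilation between
# clocks at different heights: fractional rate difference ~ g*dh/c^2.
# The 1000 m height difference is illustrative, not the labs' real value.

g = 9.81             # m/s^2
c = 299_792_458.0    # m/s
dh = 1000.0          # assumed height difference between the clocks, m

fractional_rate = g * dh / c**2
ns_per_day = fractional_rate * 86400 * 1e9
print(f"fractional rate difference: {fractional_rate:.2e}")
print(f"accumulated offset: ~{ns_per_day:.1f} ns per day")
```

A few nanoseconds per day is small, but it accumulates -- which is exactly why an experiment chasing a 60-nanosecond effect has to correct for it.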

A 'beautiful experiment'

Other physicists are impressed. "This is certainly very precise timing, more than you need to record for normal accelerator operations," Plunkett told New Scientist. His project, the MINOS experiment at Fermilab, has already requested an upgrade to its timing system so it can replicate the results, perhaps as soon as 2014.

"I want to congratulate you on this extremely beautiful experiment," said Nobel laureate Samuel Ting of the Massachusetts Institute of Technology in Cambridge during the question and answer session that followed Autiero's talk. "The experiment is very carefully done, and the systematic error carefully checked."

But only time will tell whether the result holds up to additional scrutiny, and whether it can be reproduced. There is still room for uncertainty in the neutrinos' departure time, Plunkett says, because there is no neutrino detector on CERN's end of the line. The only way to know when the neutrinos left is to extrapolate from data on the blob of protons used to produce them.

"Of course we need to approach it sceptically," he says. "I believe everyone will be pulling together to figure this out."



--
Sateesh.smart

Thursday, September 22, 2011

Frying pan forms map of dead star's past

Ten millennia ago, when humanity was busy inventing agriculture, a bright new star in the constellation we know as Centaurus may have awed our ancestors. After exploding as a supernova, the star would have faded from view within a year or so – and eventually from living memory, until, 25 years ago, a radio telescope near Canberra, Australia, found its curious remains.
And what remains they are. Remnants of supernovae tend to be seen as roundish clouds expanding symmetrically from the original explosion site, but the Centaurus remnant is shaped like a frying pan.
Fresh calculations confirm that this unique shape is not just a curiosity: it is a map of the supernova's entire history, which includes one of the fastest-moving stars in the galaxy.
Back in 1986, the original discovery team – led by Michael Kesteven, then of the Herzberg Institute of Astrophysics in Victoria, British Columbia, Canada – noted the odd shape of the remnant.

Cosmic lighthouse

In addition to the usual vaguely round shell of hot, expanding gas, this one had something long and narrow sticking out of it. The team wondered: was this some sort of jet shot out from the dying star?
There was little further progress until 2009, when another team found an important new clue to the shape of this peculiar object, which they nicknamed the frying pan. Led by Fernando Camilo of Columbia University in New York, they had another look at the object with another Australian telescope: the Parkes Observatory radio telescope in New South Wales.
At the end of the frying pan's handle they discovered a neutron star – the crushed core of the star that had died in the supernova. This neutron star is spinning fast and, like a cosmic lighthouse, gives off radio beams that sweep around as the star rotates, appearing to us as regular pulses. Such pulsing neutron stars are appropriately called pulsars.
The team concluded that the supernova explosion hurled this stellar corpse from the blast site, leaving behind a glowing trail, which is still seen today as the handle of the frying pan.

Entire history

Now new radio measurements and calculations by a team led by Stephen Ng of McGill University in Montreal, Canada, confirm this picture.
The pulsar is unlikely to be an interloper that just happened to pass through the supernova remnant – the odds of that are about 800 to 1, given that the remnant is relatively small and pulsars are few and far between, the team says.
Other glowing pulsar trails have been found, but this is the first that is connected all the way back to its supernova remnant. That makes it a record of the supernova's entire history, from the explosion right up to the dead core's current position.
"Most pulsar tails are too old – the supernova remnant has already dissipated," says Ng.

Asymmetrical mystery

The pulsar is thought to have left the glowing trail because of the wind of energetic particles that it spews into the surrounding space. This wind ploughs into the galaxy's interstellar gas, energising it and making it glow.
Based on the width of the trail, Ng's team estimates the pulsar's speed at between 1000 and 2000 kilometres per second. That could make it the fastest star known: the current record-holder is another neutron star, racing at 1500 kilometres per second from the Puppis A supernova remnant where it was born.
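
To put that speed in perspective, here's the distance such a star covers in the roughly 10,000 years since the explosion, taking the mid-range 1500 km/s estimate:

```python
# Distance covered by a neutron star moving at ~1500 km/s over the
# ~10,000 years since the supernova -- a sanity check on the trail length.

LIGHT_YEAR_M = 9.461e15    # metres per light year
SECONDS_PER_YEAR = 3.156e7

speed_m_s = 1500e3         # mid-range of the 1000-2000 km/s estimate
age_s = 10_000 * SECONDS_PER_YEAR

distance_ly = speed_m_s * age_s / LIGHT_YEAR_M
print(f"distance travelled: ~{distance_ly:.0f} light years")
```

Tens of light years of travel since the blast is consistent with the handle of the frying pan stretching well clear of the remnant's shell.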
Such speeding neutron stars are a puzzle, because supernova explosions should be pretty symmetrical, making it unlikely that they would provide a strong kick in any one direction. Solving the puzzle of the frying pan's origin may have deepened another one.

--
Sateesh.smart

Will the universe end in a big snap?

Focusing on a logical but gruesome end for the universe could reveal elusive quantum gravity

IMAGINE one day you wake up and look at yourself in the mirror only to find that something is terribly wrong. You look grainy and indistinct, like a low-quality image blown up so much that the features are barely recognisable. You scream, and the sound that comes out is distorted too, like hearing it over a bad phone line. Then everything goes blank.

Welcome to the big snap, a new and terrifying way for the universe to end that seems logically difficult to avoid.

Dreamed up by Massachusetts Institute of Technology cosmologist Max Tegmark, the snap is what happens when the universe's rapid expansion is combined with the insistence of quantum mechanics that the amount of information in the cosmos be conserved. Things start to break down, like a computer that has run out of memory.

It is cold comfort that you would not survive long enough to watch this in the mirror, as the atoms that make up your body would long since have fallen apart. But take heart: accepting this fate would without question mean discarding cherished notions, such as the universe's exponential "inflation" shortly after the big bang. And that is almost as unpalatable to cosmologists as the snap itself.

So rather than serving as a gruesome death knell, Tegmark prefers to think of the big snap as a useful focal point for future work, in particular the much coveted theory of quantum gravity, which would unite quantum mechanics with general relativity. "In the past when we have faced daunting challenges it's also proven very useful," he says. "That's how I feel about the big snap. It's a thorn in our side, and I hope that by studying it more it will turn out to give us some valuable clues in our quest to understand the nature of space." That would be fitting as Tegmark did not set out to predict a gut-wrenching way for the universe to end. Rather, he was led to this possibility by some puzzling properties of the universe as we know it.

According to quantum mechanics, every particle and force field in the universe is associated with a wave, which tells us everything there is to know about that particle or that field. We can predict what the waves will look like at any time in the future from their current state. And if we record what all the waves in the universe look like at any given moment, then we have all the information necessary to describe the entire universe. Tegmark decided to think about what happens to that information as the universe expands (arxiv.org/abs/1108.3080).

To understand his reasoning, it's important to grasp that even empty space has information associated with it. That's because general relativity tells us that the fabric of space-time can be warped, and it takes a certain amount of information to specify whether and in what way a particular patch of space is bent.

One way to visualise this is to think of the universe as divided up into cells 1 Planck length across - the smallest scale that is meaningful, like a single pixel in an image. Some physicists think that one bit of information is needed to describe the state of each cell, though the exact amount is debated. Trouble arises, however, when you extrapolate the fate of these cells out to a billion years hence, when the universe will have grown larger.

One option is to accept that the added volume of space, and all the Planck-length cells within it, brings new information with it, sufficient to describe whether and how it is warped. But this brings you slap bang up against a key principle of quantum mechanics known as unitarity - that the amount of information in a system always stays the same.

What's more, the ability to make predictions breaks down - the very existence of extra information means we could not have anticipated it from what we already knew.

Another option is to leave quantum mechanics intact, and assume the new volume of space brings no new information with it. Then we need to describe a larger volume of space using the same number of bits. So if the volume doubles, the only option is to describe a cubic centimetre of space with only half the number of bits we had before (see diagram).

This would be appropriate if each cell grows, says Tegmark. Where nothing previously varied on scales smaller than 1 Planck length, now nothing varies on scales smaller than 2 Planck lengths, or 3, or more, depending on how much the universe expands. Eventually, this would impinge on the laws of physics in a way that we can observe.
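
A toy calculation makes the scaling concrete: if the total number of bits is fixed, each cell's volume must grow with the universe's volume, so the smallest meaningful length grows as the cube root of the expansion factor. (The Planck length value below is approximate.)

```python
# Toy version of the fixed-information argument: with a constant number
# of bits describing a growing volume, each cell's volume scales with the
# universe's volume, so its length grows as the cube root of that factor.

PLANCK_LENGTH = 1.6e-35   # metres, approximate

def cell_size_after_expansion(volume_factor, initial_cell=PLANCK_LENGTH):
    """Smallest meaningful length after the universe's volume grows
    by `volume_factor`, assuming total information stays fixed."""
    return initial_cell * volume_factor ** (1 / 3)

print(cell_size_after_expansion(2))   # volume doubles -> length x ~1.26
print(cell_size_after_expansion(8))   # volume x8 -> length doubles
```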

Photons of different energies only travel at the same speed under the assumption that space is continuous. If the space-time cells became large enough, we might start to notice photons with a very short wavelength moving more slowly than longer wavelength ones. And if the cells got even larger, the consequences would be dire. The trajectories of waves associated with particles of matter would be skewed. This would change the energy associated with different arrangements of particles in atomic nuclei. Some normally stable nuclei would fall apart.



--
Sateesh.smart

Sunday, September 18, 2011

Nokia app powers portable brain scanner

YOU can now hold your brain in the palm of your hand. For the first time, a scanner powered by a smartphone will let you monitor your neural signals on the go.

By hooking up a commercially available EEG headset to a Nokia N900 smartphone, Jakob Eg Larsen and colleagues at the Technical University of Denmark in Kongens Lyngby have created a completely portable system.


This is the first time a phone has provided the power for an EEG headset, which monitors the electrical activity of the brain, says Larsen. The headset would normally connect wirelessly to a USB receiver plugged into a PC.

Wearing the headset and booting up an accompanying app designed by the researchers creates a simplified 3D model of the brain that lights up as brainwaves are detected, and can be rotated by swiping the screen. The app can also connect to a remote server for more intensive number-crunching, and then display the results on the cellphone.

"Traditionally, in order to do these kind of EEG measurements you have big lab set-ups that are really expensive," says Larsen. "You have to bring people in, isolate them and give them specific tasks." The smartphone EEG would let researchers study people's brain signals in more natural environments such as at home or in the workplace. Teams can also use the smartphone's other features to conduct experiments such as displaying pictures or videos that elicit a specific brain response, or monitoring groups of people as they work together on a task.
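
As a flavour of the kind of signal processing such an app might do, here's a minimal sketch -- not the Danish team's code -- that estimates alpha-band (8-12 Hz) power from one channel of samples:

```python
# Minimal EEG-style processing sketch (illustrative only, not the
# Danish team's app): estimate alpha-band power from one channel of
# samples using an FFT, and compare it with a higher band.
import numpy as np

def band_power(samples, fs, lo=8.0, hi=12.0):
    """Return mean spectral power of `samples` between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

# Synthetic one-second signal: a strong 10 Hz rhythm plus noise.
fs = 128
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

alpha = band_power(signal, fs)
beta = band_power(signal, fs, lo=20, hi=30)
print(alpha > beta)  # the 10 Hz rhythm dominates the alpha band
```

A real headset streams many channels continuously, but the per-channel maths is this simple -- which is why a 2011 smartphone could keep up, with heavier number-crunching offloaded to a server as the article describes.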

The system might also assist people with conditions such as epilepsy, attention-deficit hyperactivity disorder and addiction by cutting down on the number of hospital visits they need to make, in the same way that home-based heart monitors do.

"Our vision is for EEG to be a regular thing that you have at home," says Arkadiusz Stopczynski, who worked on the project with Larsen. "It's much better than going to the lab, sitting there for 1 hour of EEG and going home."

"The realisation of a real-time brain-mapping system on a cellphone is a nice task," says Gunther Krausz of G.Tec Medical Engineering, a firm based in Schiedlberg, Austria, which supplies EEG systems to researchers. But as a research tool, he says, the phone can't compare with dedicated medical devices. "You need sophisticated stimulation devices and data-processing," which cannot be done with the app alone.
--
Sateesh.smart

Wednesday, September 14, 2011

Discovery of 50 exoplanets, the largest group of alien worlds

Astronomers have announced the discovery of 50 (yes, five-zero) exoplanets, the largest group of alien worlds announced at one time. Sixteen of these worlds are "super-Earths" -- exoplanets that possess masses larger than Earth, yet much smaller than the gas giants.
This time, however, the announcement doesn't come from NASA's orbital exoplanet hunter, the Kepler Space Telescope; it comes from the European Southern Observatory's (ESO) High Accuracy Radial velocity Planet Searcher (or HARPS for short).
Enter HD 85512b, an exoplanet with a mass 3.6 times that of the Earth. This "super-Earth" is exciting in that not only can it be considered a jumbo-sized Earth, but it also orbits its sun-like star (HD 85512) on the inner rim of the star's habitable zone.
"This is the lowest-mass confirmed planet discovered by the radial velocity method that potentially lies in the habitable zone of its star, and the second low-mass planet discovered by HARPS inside the habitable zone," said Lisa Kaltenegger, of the Max Planck Institute for Astronomy, Heidelberg, Germany and Harvard Smithsonian Center for Astrophysics, Boston, who is an expert on the habitability of exoplanets.

HARPS differs greatly from the methods employed by Kepler to detect exoplanets. Located at La Silla Observatory in Chile, HARPS is a spectrometer that analyzes the light from stars.
Over a period of time, HARPS may detect a slight shift in the frequency of a target star's light. If this shift is periodic, it means something is orbiting the star. By precisely measuring the tiny shift, the HARPS team can deduce the orbiting body's mass and orbital period (from which its orbital radius can be calculated).
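
Here's a minimal version of that arithmetic for a circular orbit with the planet much lighter than its star. The HD 85512 inputs (stellar mass, velocity amplitude, period) are approximate published figures, not numbers taken from this article:

```python
# Minimal radial-velocity arithmetic, assuming a circular orbit and
# planet mass << stellar mass. Inputs for HD 85512 are approximate
# published values, used here only for illustration.
import math

G = 6.674e-11          # gravitational constant, SI
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
AU = 1.496e11          # m

def planet_from_rv(K, period_s, m_star):
    """Minimum planet mass (m sin i) and orbital radius from the star's
    velocity amplitude K (m/s) and the orbital period (s)."""
    m_planet = K * (period_s / (2 * math.pi * G)) ** (1 / 3) * m_star ** (2 / 3)
    a = (G * m_star * period_s**2 / (4 * math.pi**2)) ** (1 / 3)  # Kepler's third law
    return m_planet, a

m, a = planet_from_rv(K=0.77, period_s=58.4 * 86400, m_star=0.69 * M_SUN)
print(f"~{m / M_EARTH:.1f} Earth masses at ~{a / AU:.2f} AU")
```

Feeding in a sub-metre-per-second wobble and a two-month period recovers a planet of a few Earth masses orbiting well inside Mercury's distance -- the regime where HARPS found HD 85512b.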
However, one thing HARPS cannot deduce is an exoplanet's physical size. So although finding a 3.6-Earth-mass world is exciting, we have no idea whether it's the same size as the Earth or puffed up like a mini-Neptune -- in other words, whether HD 85512b is rocky, gaseous, or something more exotic.
This is where Kepler is different: it is able to deduce an exoplanet's physical size. When a world passes in front of its parent star, the exoplanet's silhouette blocks a certain amount of light, corresponding to its physical size.
But Kepler can only see exoplanets if they are orbiting their star "edge-on" -- making HARPS's "radial velocity method" a very powerful tool for detecting worlds that would otherwise remain hidden from Kepler's "transit method" eye.
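
The transit method's key relation is one line of arithmetic: the fraction of starlight blocked equals the square of the planet-to-star radius ratio. Earth-Sun numbers give a sense of the scale involved:

```python
# Transit depth = (planet radius / star radius)^2.
# Earth and Sun radii used for scale.

R_EARTH = 6.371e6   # m
R_SUN = 6.957e8     # m

depth = (R_EARTH / R_SUN) ** 2
print(f"an Earth transiting a Sun-like star dims it by {depth * 1e6:.0f} ppm")
```

Measuring a dip of under a hundred parts per million is why transit hunting for Earth-sized worlds needs a space telescope like Kepler.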
Also, HARPS is no newbie to the field of exoplanet hunting. In the last eight years the instrument has detected 150 new exoplanets, and with the help of these new data, it has made a groundbreaking discovery: Approximately 40 percent of sun-like stars possess at least one exoplanet less massive than Saturn.
Also, the majority of exoplanets of Neptune mass or less appear to be in systems with multiple planets.
Needless to say, this finding will reinvigorate the discussions of the potential for life on these low-mass worlds, particularly when super-Earths are pottering around inside their stars' habitable zones.
But as discussed by Ray Villard in a recent Discovery News article ("Pale Red Dot: Desert Planets as Abodes for Life?"), dry, desert worlds may have a higher likelihood of supporting alien life than worlds drenched in water, like Earth. Desert planets have habitable zones three times wider than Earth-like worlds.
--
Sateesh.smart

Star Blasts Planet With X-rays

A nearby star is pummeling a companion planet with a barrage of X-rays 100,000 times more intense than the Earth receives from the sun.

New data from NASA's Chandra X-ray Observatory and the European Southern Observatory's Very Large Telescope suggest that high-energy radiation is evaporating about 5 million tons of matter from the planet every second. This result gives insight into the difficult survival path for some planets.

The planet, known as CoRoT-2b, has a mass about three times that of Jupiter -- roughly 1,000 times that of Earth -- and orbits its parent star, CoRoT-2a, at a distance roughly 10 times the distance between Earth and the moon.
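Those two figures, the evaporation rate and the planet's mass, let us sanity-check how dramatic the mass loss really is. The rough estimate below is our own arithmetic, not a figure from the study; the Jupiter-mass and seconds-per-year constants are standard values.

```python
# Back-of-the-envelope check: even at 5 million metric tons per second,
# a ~3 Jupiter-mass planet loses mass only slowly in relative terms.

M_JUPITER = 1.898e27          # kg
SECONDS_PER_YEAR = 3.156e7

rate = 5e6 * 1000.0           # 5 million metric tons/s -> kg/s
planet_mass = 3 * M_JUPITER   # kg

# Fraction of the planet's mass stripped away per billion years
loss_per_gyr = rate * SECONDS_PER_YEAR * 1e9 / planet_mass
print(f"fraction of mass lost per billion years: {loss_per_gyr:.4f}")
```

At this rate the planet sheds only a few percent of its mass per billion years, so the "frying" matters more for the atmosphere's structure than for the planet's survival.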

The CoRoT-2 star and planet -- so named because the French Space Agency's Convection, Rotation and planetary Transits, or CoRoT, satellite discovered them in 2008 -- are relatively nearby neighbors of the solar system, at a distance of 880 light-years.

"This planet is being absolutely fried by its star," said Sebastian Schroeter of the University of Hamburg in Germany. "What may be even stranger is that this planet may be affecting the behavior of the star that is blasting it."

According to optical and X-ray data, the CoRoT-2 system is estimated to be between about 100 million and 300 million years old, meaning that the star is fully formed. The Chandra observations show that CoRoT-2a is a very active star, with bright X-ray emission produced by powerful, turbulent magnetic fields. Such strong activity is usually found in much younger stars.

"Because this planet is so close to the star, it may be speeding up the star's rotation and that could be keeping its magnetic fields active," said co-author Stefan Czesla, also from the University of Hamburg. "If it wasn't for the planet, this star might have left behind the volatility of its youth millions of years ago." Support for this idea comes from observations of a likely companion star that orbits CoRoT-2a at a distance about a thousand times greater than the separation between the Earth and our sun. This star is not detected in X-rays, perhaps because it does not have a close-in planet like CoRoT-2b to keep it active.

Another intriguing aspect of CoRoT-2b is that it appears to be unusually inflated for a planet in its position.

"We're not exactly sure of all the effects this type of heavy X-ray storm would have on a planet, but it could be responsible for the bloating we see in CoRoT-2b," said Schroeter. "We are just beginning to learn about what happens to exoplanets in these extreme environments." These results were published in the August issue of Astronomy and Astrophysics. The other co-authors were Uwe Wolter, Holger Mueller, Klaus Huber and Juergen Schmitt, all from the University of Hamburg.

NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra's science and flight operations from Cambridge, Mass.

--
Sateesh.smart

NASA Announces Design for New Deep Space Exploration System

NASA is ready to move forward with the development of the Space Launch System -- an advanced heavy-lift launch vehicle that will provide an entirely new national capability for human exploration beyond Earth's orbit. The Space Launch System will give the nation a safe, affordable and sustainable means of reaching beyond our current limits and opening up new discoveries from the unique vantage point of space.

 
The Space Launch System, or SLS, will be designed to carry the Orion Multi-Purpose Crew Vehicle, as well as important cargo, equipment and science experiments, to Earth's orbit and destinations beyond. Additionally, the SLS will serve as a backup for commercial and international partner transportation services to the International Space Station.

"This launch system will create good-paying American jobs, ensure continued U.S. leadership in space, and inspire millions around the world," NASA Administrator Charles Bolden said. "President Obama challenged us to be bold and dream big, and that's exactly what we are doing at NASA. While I was proud to fly on the space shuttle, kids today can now dream of one day walking on Mars."


The SLS rocket will incorporate technological investments from the Space Shuttle program and the Constellation program in order to take advantage of proven hardware and cutting-edge tooling and manufacturing technology that will significantly reduce development and operations costs. It will use a liquid hydrogen and liquid oxygen propulsion system, which will include the RS-25D/E from the Space Shuttle program for the core stage and the J-2X engine for the upper stage. SLS will also use solid rocket boosters for the initial development flights, while follow-on boosters will be competed based on performance requirements and affordability considerations. The SLS will have an initial lift capacity of 70 metric tons (mT) and will be evolvable to 130 mT. The first developmental flight, or mission, is targeted for the end of 2017.

This specific architecture was selected largely because it utilizes an evolvable development approach, which allows NASA to address high-cost development activities early in the program and take advantage of higher buying power before inflation erodes the available funding of a fixed budget. This architecture also enables NASA to leverage existing capabilities and lower development costs by using liquid hydrogen and liquid oxygen for both the core and upper stages. Additionally, this architecture provides a modular launch vehicle that can be configured for specific mission needs using a variation of common elements. NASA may not need to lift 130 mT for each mission, and the flexibility of this modular architecture allows the agency to use different core stage, upper stage, and first-stage booster combinations to achieve the most efficient launch vehicle for the desired mission.

"NASA has been making steady progress toward realizing the president's goal of deep space exploration, while doing so in a more affordable way," NASA Deputy Administrator Lori Garver said. "We have been driving down the costs on the Space Launch System and Orion contracts by adopting new ways of doing business and project hundreds of millions of dollars of savings each year."

The Space Launch System will be NASA's first exploration-class vehicle since the Saturn V took American astronauts to the moon over 40 years ago. With its superior lift capability, the SLS will expand our reach in the solar system, allowing us to explore cis-lunar space, near-Earth asteroids, Mars and its moons, and beyond. We will learn more about how the solar system formed, where Earth's water and organics originated, and how life might be sustained in places far from Earth's atmosphere, expanding the boundaries of human exploration. These discoveries will change the way we understand ourselves, our planet, and its place in the universe.
 


--
Sateesh.smart

Tuesday, September 13, 2011

Electrified roads could power cars from the ground up

THE cars of the future could be powered by electrified roadways. Such technology would allow electric cars to forgo their heavy batteries, which not only add to a vehicle's weight, increasing the energy needed to move it, but also force it to sit idle while recharging.

The idea has been around for decades. Previous attempts used an electrified coil in the road to create an electromagnetic field that interacts with a coil attached to the car. "Since the coils must be exactly aligned face-to-face to achieve a high energy efficiency, such schemes may be useful for [charging] vehicles in a parking lot, but never very effective for cars while running," says Masahiro Hanazawa at Toyota Central R&D Labs in Nagakute, Aichi, Japan.

Hanazawa and Takashi Ohira at Toyohashi University of Technology, also in Aichi, are developing a system that transmits electric power through steel belts placed inside two tyres and a metal plate in the road. "Our approach exploits a pair of tyres, which are always touching a road surface," says Hanazawa.

To test how much energy would be lost as electricity travelled through the tyres' rubber, Hanazawa and Ohira set up a lab experiment in which they put metal plates on the floor and inside a tyre. "Less than 20 per cent of the transmitted power is dissipated in the circuit," says Ohira. The team presented its work in May at the International Microwave Workshop Series on Innovative Wireless Power Transmission in Kyoto, Japan.

With enough power the system could run typical passenger cars, says Ohira, and the team are now developing a small-scale prototype to prove it. He admits, however, that the system's energy loss is "much higher than regular batteries".

John Boys, an electrical engineer at the University of Auckland, New Zealand, notes that with this system, the metal pads on the road would need as much as 50,000 volts to power the car, the same voltage used to operate tasers. "You wouldn't want to step on that," he says.
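Boys's 50,000-volt figure reflects a basic power-transmission trade-off: for a fixed power draw P = V * I, raising the voltage lowers the current, and resistive loss falls with the square of the current. The sketch below uses assumed, illustrative numbers, not figures from the Toyohashi system.

```python
# Why such high voltage? Resistive loss in a transmission path scales
# as I^2 * R, and for fixed power P the current is I = P / V.

def current_amps(power_watts, volts):
    return power_watts / volts

def resistive_loss_watts(power_watts, volts, resistance_ohms):
    i = current_amps(power_watts, volts)
    return i * i * resistance_ohms

P = 20_000.0    # assumed ~20 kW passenger-car cruise load
R = 10.0        # assumed transmission-path resistance, ohms

# Loss drops by 100x for every 10x increase in voltage
for v in (500.0, 5_000.0, 50_000.0):
    print(f"{v:>8.0f} V -> {resistive_loss_watts(P, v, R):10.2f} W lost")
```

The flip side, as Boys points out, is that the same high voltage that makes transmission efficient makes the exposed road plates dangerous.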

What's more, at the energy levels needed, the electric plates would produce a large magnetic field that would "cause significant radio-frequency interference that might create chaos with all manner of electrical systems", says Boys.

It would be expensive to "rip up the roads and install the necessary infrastructure", says Daniel Friedman at the University of New South Wales in Sydney, Australia. But he adds that one way around that may be to limit metal plates to main highways, and then run cars on other roads using small batteries.



--
Sateesh.smart

Mega space storm would kill satellites for a decade

A MAJOR solar storm would not only damage Earth's infrastructure, it could also leave a legacy of radiation that keeps killing satellites for years.

When the sun belches a massive cloud of charged particles at Earth, it can damage our power grids and fry satellites' electronics. But that's not all. New calculations suggest that a solar megastorm could create a persistent radiation problem in low-Earth orbit, disabling satellites for up to a decade after the storm first hit.

It would do this by destroying a natural buffer against radiation - a cloud of charged particles, or plasma, that normally surrounds Earth out to a distance of four times the planet's radius.

The relatively high density of plasma in the cloud prevents the formation of electromagnetic waves that would otherwise accelerate electrons to high speeds, turning them into a form of radiation. This limits the amount of radiation in the innermost of two radiation belts that surround Earth.

But solar outbursts can erode the cloud. In October 2003, a major outburst whittled the cloud down so that it extended to only two Earth radii. A repeat of the huge outburst of 1859 - which is expected to happen eventually - would erode the cloud to almost nothing.

Yuri Shprits of the University of California in Los Angeles led a team that simulated how such a large storm would affect the radiation around Earth.

They found that in the absence of the cloud, electromagnetic waves accelerated large numbers of electrons to high speed in Earth's inner radiation belt, causing a huge increase in radiation there. The inner radiation belt is densest at about 3000 kilometres above Earth's equator, which is higher than low-Earth orbit. But the belt hugs Earth more tightly above high latitude regions, overlapping with satellites in low-Earth orbit.

Speeding electrons cause electric charge to accumulate on satellite electronics, prompting sparks and damage. Increasing the number of speeding electrons would drastically shorten the lifetime of a typical satellite, the team calculates (Space Weather, DOI: 10.1029/2011sw000662).

The researchers say that the destructive radiation could hang about for a long time, spiralling around Earth's magnetic field lines. In 1962, a US nuclear test carried out in space flooded low-Earth orbit with radiation that lasted a decade and probably ruined several satellites.

"When you get this radiation that far in, it tends to be quite long-lived and very persistent," says Ian Mann of the University of Alberta in Edmonton, Canada, who was not involved in the study.

Thicker metal shielding around satellite electronics would help, says Shprits. The persistent radiation would also be hazardous for astronauts and electronics on the International Space Station.



--
Sateesh.smart

Saturn-lookalike galaxy has a murky past

Centuries before telescopes revealed galaxies to us, philosopher Immanuel Kant suggested that "island universes" – agglomerations of stars and gas – existed.
It's a fitting description for the peculiar galaxy pictured right. Consisting of a bright yellow spherical core surrounded by a symmetrical luminous ring of blue-tinged stars, Hoag's object looks like it's doing an impression of the planet Saturn. This is unlike any other galaxy – and so perplexing to astronomers that it might as well exist in another universe.
"It's one of these weird little objects you point at without fully understanding what they mean," explains François Schweizer of the Carnegie Observatories in Pasadena, California.
He has done his bit in a 60-year struggle to pin down the forces and events that might have created the enigmatic galaxy. Recent observations suggest the core came first and the ring was added later – but the details are still puzzling.

Funhouse mirror

Galaxies come in a variety of shapes, but most fit into one of just a few categories – simple spheroids or ellipses, flattened discs, elegant spirals like the Milky Way or irregularly shaped blobs. But Hoag's object is so bizarre that when astronomer Arthur Hoag first spotted it in 1950, he wasn't even sure it was a galaxy.
Instead he thought it might be a ring-shaped puff of gas given off by a dying star: a type of object called a planetary nebula that is commonly found in the Milky Way. But Hoag himself was unsatisfied with that explanation, noting that the ring was not emitting light at the wavelengths characteristic of the hot gas in such clouds.
He further speculated that the ring could be an optical illusion, the result of a phenomenon called gravitational lensing, in which a foreground galaxy's gravity bends the light of a more distant galaxy, giving it a peculiar shape – like looking at it in a funhouse mirror.
But that explanation failed, too. In 1974, observations of the core of Hoag's object showed it weighs far too little to cause the extreme gravitational lensing needed to turn a background galaxy into a ring.

Galactic cannibal

Then in 1987, a team led by Schweizer suggested that Hoag's object formed when a lightweight galaxy passed by a heavier, elliptical one whose gravity tore the lighter traveller apart. The heavier galaxy is what we see as the bright core, while gas stolen from the smaller galaxy settled into orbit and fuelled copious star formation, creating the ring.
One way to test this scenario is to look for faint galaxy fragments around Hoag's object, since these are often seen in the aftermath of a galaxy torn asunder. But the most sensitive observations yet, made with a 6-metre telescope in Russia, have found nothing of the kind, says a team led by Ido Finkelman of Tel Aviv University in Israel in a study to appear in Monthly Notices of the Royal Astronomical Society.
That makes the team sceptical of the galactic cannibalism scenario, though they admit that if the carnage happened more than 3 billion years ago, there might not be any detritus left to see.

Wild optimism

As an alternative, they suggest the core of Hoag's object simply sucked in gas floating in intergalactic space via gravity to build up the supply needed to form the ring.
It's a reasonable idea, says Schweizer, who was not involved in the new study. But he is not ready to rule out an ancient act of cannibalism.
Further observations are needed to point the way to the answer: "Perhaps we haven't even yet thought of the correct one," he suggests.
Hoag's object is a surprisingly difficult nut to crack. The first report on it by Hoag himself seems wildly optimistic in hindsight. "Since the object is unique, I suggest that a proper identification would be a worthwhile short-term project," he wrote.

Sunday, September 11, 2011

Quantum computer chips pass key milestones

Quantum computer users may soon have to wrestle with their own version of the "PC or Mac?" question. A design based on superconducting electrical circuits has now performed two benchmark feats, suggesting it will be a serious competitor to rival setups using photons or ions.

"The number of runners in the race has just gone up to three," says Andrew White of the University of Queensland, Australia, who builds quantum computers based on photons and was not involved in the new result.

The defining feature of a quantum computer is that it uses quantum bits or qubits. Unlike ordinary bits, these can exist in multiple states at once, known as a superposition. They can also be entangled with each other, so their quantum states are linked, allowing them to be in a sort of "super" superposition of quantum states.

This means quantum computers could perform multiple calculations simultaneously, making them much faster than ordinary computers at some tasks.

Previously, setups using photons or trapped ions as qubits have made the most headway in early calculations. Now Matteo Mariantoni of the University of California, Santa Barbara, and colleagues have boosted the computing power of a rival design, first demonstrated in 2003, that uses tiny, superconducting wires instead.

Wire loops

Mariantoni's team used a chip embedded with micrometre-sized loops of wire made of a mixture of aluminium and rhenium. When these wires were cooled to within a whisker of absolute zero, they became superconducting, meaning their electrons coupled up as structures called "Cooper pairs".

The pairs in each wire were made to resonate as an ensemble. Because each ensemble could exist as a superposition of multiple different resonating states, they acted as qubits.

Mariantoni's team entangled these qubit wires using a second type of wire, known as a bus, that snaked all around the chip. First they tuned this bus so that it took on some of the quantum information in one of the qubits. Then they transferred this information to further qubit wires, thus entangling the qubits.

Benchmark tests

The design made strides in solving calculations often used as benchmarks for testing quantum computers' capabilities.

It ran a calculation known as the quantum Fourier transform, a central component of the most famous quantum algorithm, Shor's algorithm. If Shor's algorithm were run on a system with enough qubits, it would allow huge numbers to be factorised quickly. That has not happened yet, but if it ever did, many current encryption systems would break down, since they rely on the fact that ordinary computers cannot do this.
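For readers curious what the quantum Fourier transform actually computes: the n-qubit QFT is just an N-by-N unitary matrix (N = 2^n) applied to the state vector. The toy simulation below is our illustration of the mathematics, not how the superconducting chip implements the gate sequence.

```python
import cmath

# The QFT matrix has entries omega^(j*k) / sqrt(N), where N = 2**n
# and omega = exp(2*pi*i / N) is an Nth root of unity.

def qft_matrix(n_qubits):
    N = 2 ** n_qubits
    omega = cmath.exp(2j * cmath.pi / N)
    return [[omega ** (j * k) / N ** 0.5 for k in range(N)]
            for j in range(N)]

def apply(matrix, state):
    """Multiply a state vector by a matrix (plain Python, no NumPy)."""
    return [sum(matrix[j][k] * state[k] for k in range(len(state)))
            for j in range(len(matrix))]

# The QFT of the 2-qubit basis state |00> is the uniform superposition:
# every measurement outcome becomes equally likely.
U = qft_matrix(2)
out = apply(U, [1, 0, 0, 0])
print([round(abs(a) ** 2, 3) for a in out])   # -> [0.25, 0.25, 0.25, 0.25]
```

On hardware the same transform is built from a short sequence of one- and two-qubit gates rather than one big matrix, which is what makes it a meaningful benchmark of gate quality.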

The researchers also used entangled qubits to create a system known as a "Toffoli OR phase gate", which is a critical step towards building codes that do quantum error correction. This required entangling three qubits – a first for superconducting quantum circuits. "Getting three bits to play well together is hard," says White.

Ordinary chips

The advances may seem like baby steps, since both Shor's algorithm and the Toffoli gate have been realised with relatively low numbers of photons and trapped ions.

But the reason the new result is exciting is that it could be hard to scale up these systems, which tend to be delicate and require specialised equipment, while the superconducting system uses chips like an ordinary computer. "The beautiful thing about a solid circuit is that it's something you can write using lithographic technology," says White. "It looks much easier than say ion traps or photonic approaches."

But future quantum computers might not come down to an either/or choice like that between a Mac and a PC. Instead, true to their quantum nature, they may be "superpositions" of different designs. "I don't think anybody knows what the best architecture will be," says White. "Probably we will end up using hybrids of the various approaches."



--
Sateesh.smart

Ask Engineer by MIT