Aug 9, 2013

IBM creates Corelet programming language to make software that operates like the human brain


At the International Joint Conference on Neural Networks held this week in Dallas, researchers from IBM have taken the wraps off a new software front-end for its neuromorphic processor chips. The ultimate goal of these most recent efforts is to recast Watson-style cognitive computing, and its recent successes, into a decidedly more efficient architecture inspired by the brain. As we shall see, the researchers have their work cut out for them — building something that on the surface looks like the brain is a lot different from building something that acts like the brain.
Head researcher of IBM’s Cognitive Computing group, Dharmendra Modha, announced last November that his group had simulated over 500 billion neurons using the Blue Gene/Sequoia supercomputer at the Lawrence Livermore National Laboratory (LLNL). His claims, however, continue to draw criticism from others who say that the representation of these neurons is too simplistic. In other words, the model neurons generate spikes like real neurons, but the underlying activity that creates those spikes is not modeled in sufficient detail, nor are the details of connections between them.


To interact with IBM’s “True North” neural architecture simulator, the researchers have developed an object-oriented language they call Corelet. The building blocks, called corelets, are composed from 256-neuron neuromorphic cores and are designed to do specific tasks. The “True North” library already has some 150 pre-designed corelets to do things like detect motion or image features, or even learn to play games. To play pong, for example, a layer of input neurons would be given information about the “ball” and “paddle” motions, an output layer would send paddle motion updates, and intermediate layers would perform some indeterminate processing.
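
IBM describes Corelet only at a high level, so the following is a purely hypothetical Python sketch of the composition idea: small pre-built cores with a fixed neuron budget are wired into a larger corelet that exposes only its inputs and outputs. All class and method names are invented for illustration and are not IBM's actual API.

```python
# Hypothetical sketch of corelet-style composition (names invented, not IBM's Corelet language).

class Core:
    """A fixed block of 256 neurons that performs one narrow task."""
    def __init__(self, name, task):
        self.name = name
        self.task = task          # e.g. "detect_motion"
        self.neurons = 256        # each neuromorphic core exposes a fixed neuron count

class Corelet:
    """Composes cores (or other corelets), exposing only named inputs and outputs."""
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = inputs      # named input connectors
        self.outputs = outputs    # named output connectors
        self.parts = []           # internal wiring stays hidden from the user

    def add(self, part):
        self.parts.append(part)
        return self

# A toy "pong player" assembled from library pieces, as described above.
pong = Corelet("pong_player",
               inputs=["ball_motion", "paddle_position"],
               outputs=["paddle_command"])
pong.add(Core("motion", "detect_motion"))
pong.add(Core("predict", "estimate_trajectory"))
pong.add(Core("control", "issue_paddle_updates"))

print(pong.name, "uses", sum(c.neurons for c in pong.parts), "neurons")
```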


The problem with assigning specific functional tasks to specific cores is that a further rift with real brains is introduced — a rift even beyond the simplicity of the models of individual neurons. Real neural networks don’t just do one thing, but many simultaneously. I think that if the researchers were seriously attempting to capture particular functions of real brains they would not be building complex million- or billion-neuron systems that look like the image above. Instead, they would be building rather more specific systems composed of just a handful of richly modeled neurons that mimic actual functions of real nervous systems — like, for example, the spinal reflex circuit:

Like a pong controller, a simple network such as this would have inputs, outputs, and intermediate neurons, but unlike pong, the spiking capability and activity would bear traceable relevance to the task at hand. Systems of neurons built on top of a circuit, like a reflex arc, could be added later — but without the underlying relevance to the real world, not only are they meaningless, but also impossible to comprehend. If, however, researchers insist on jumping right away to massive neuron-count models, perhaps we might suggest a thought experiment to probe how arbitrary networks might be functionally organized.


If an individual neuron is going to generate meaningful spikes, the consensus is that the neuron needs to have some minimum level of complexity. For the thought experiment, then, let a neuron be represented by a whole person, and the spike of the neuron be the clap of that person. When assembled into a room, we know from general experience that a large group of clapping human neurons can evolve synchronized clapping from initially random applause within a few seconds — no big deal. We might imagine the crowd of clappers could also quickly provide an answer to the question 2+2 by similarly organizing into beats of four. The magic, and relevance, for designing network chips comes in when you begin to add the specializations of input and output.





IBM Watson: Now IBM wants to produce a system that derives its intelligence from thinking, rather than merely searching through vast amounts of data.
Instead of presenting the simple 2+2 query to the whole network, we can present it to just a few input units, who transmit the message in whatever way they see fit. Simultaneously, different queries can be presented to other input units. The output units can then be instructed to listen for messages and transmit outputs in the way that they see fit. The key addition we would require here is that the intermediate human units can move about some limited space to better hear activity of their particular choosing. Finally, we would need to add some driving energetic force to incentivize any behavior in the first place, and also limit the number of claps or spikes they can produce. An example of this organizing incentive might be jelly beans that are sprinkled onto the hungry crowd as it moves about.

If the amount of clapping an individual can perform is then directly limited by the amount of jelly bean energy each unit can accrue, the energy-incentive loop is closed and we have all the essentials for a neural computing system. If, instead of trying to model extremely complex neurons in an attempt to capture and comprehend network behaviors, we simply created the real network just described and recorded its behavior for observation, I would offer that greater insight into network dynamics relevant to real brains would be gained than from any attempt using billions of simple processing elements.
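
For readers who prefer something concrete, the thought experiment reduces to a toy agent simulation. The sketch below is purely illustrative and assumes nothing beyond the rules described above: clapping costs energy, and scattered jelly beans replenish it, closing the energy-incentive loop.

```python
import random

# Toy sketch of the clapping-crowd thought experiment (illustrative only).
random.seed(0)

N_UNITS = 50
energy = [1.0] * N_UNITS          # each "human neuron" starts with a little energy
CLAP_COST = 0.5                   # a clap (spike) spends energy
BEANS_PER_STEP = 10               # jelly beans sprinkled onto the crowd each step
claps_per_step = []

for step in range(100):
    # Incentive: scatter jelly beans to random units.
    for _ in range(BEANS_PER_STEP):
        energy[random.randrange(N_UNITS)] += 0.2

    # Behaviour: a unit claps only if it can afford the energy cost.
    claps = 0
    for i in range(N_UNITS):
        if energy[i] >= CLAP_COST:
            energy[i] -= CLAP_COST
            claps += 1
    claps_per_step.append(claps)

# Total clapping settles near the rate the jelly-bean supply can sustain.
print("mean claps/step:", sum(claps_per_step[50:]) / 50)
```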

When we realize that each individual neuron, each cell, bears inside the full survival instinct and repertoire that enabled its amoeba-like forebear to thrive and reproduce on its own in a hostile world, we have some appreciation for the repurposed power possessed in each one. Ignoring the complexity of individual neurons beyond simple electrical behavior is folly if we desire to build computing systems with the power of the brain.

New flexible micro-supercapacitor paves way for tiny electronics



Before the age of the smartphone, mobile phone manufacturers were locked in an arms race to see who could create a smaller, but still usable, device. Smartphones came along, and now the arms race is more or less focused on how big a screen can be while still being accepted by consumers. During this arms race, the way to keep phones from being unwieldy is to make them thin. Researchers have created a new supercapacitor so small that, if it were used in smartphones, it could make the devices even thinner and lighter than they are now.

Normally, electrodes in supercapacitors are made from carbon or polymers that can conduct electricity with ease. Researchers at the Leibniz Institute for Solid State and Materials Research in Dresden, led by Oliver G. Schmidt, turned away from the usual electrode materials and instead used manganese dioxide — an unconventional choice, because the material isn’t known for being adept at conducting electricity. However, the material is cheaper than the usual electrodes, and also not as harmful to the environment. So, in order to make manganese dioxide conductive, the team turned to something a supervillain might do to a captive hero: vaporize it with an electron beam.


Once the manganese was vaporized and Lex Luthor finally defeated Superman, the atoms in the vaporized gas reformed into thin, flexible strips. The strips were still only as conductive as the non-gaseous manganese dioxide, so the team connected thin layers of gold to the films to increase the conductivity. The team found that the new micro-supercapacitor was not only flexible enough to save some space and shrink down the size of mobile devices, but that it also stored more energy and provided more power per unit volume than competing supercapacitors.

Though the manganese is cheaper than a carbon-based electrode, adding the thin gold strips — which are expensive — counteracts the reduced cost. So, the team is currently working on a way to bring the cost back down. This likely means the researchers will have to turn to a material other than gold sometime down the road, or they could perhaps conduct an aggressive takeover of the Cash4Gold business and accrue the needed gold that way.

In essence, the flexible supercapacitor works, but not for the team’s initial goals. The researchers aimed to create a flexible supercapacitor with a high energy density, but at a low cost. Adding the gold to help achieve that high energy density, unfortunately, increased the cost beyond an acceptable amount. If there’s something to take from this experiment, though, it’s that the supercapacitor itself was a success, and bringing it to the consumer market is now more about cost than anything else.


Research paper: DOI: 10.1039/C3EE41286E – “On chip, all solid-state and flexible micro-supercapacitors with high performance based on MnOx/Au multilayers”


Marvel at the most detailed photos of the Sun ever taken


Astronomers at the Big Bear Solar Observatory have captured the most detailed visible-light images of the Sun ever taken. In the image above, you can see the terrifying detail of a sunspot, where intense magnetic activity prevents the convective flow of superheated plasma. In the image below, the Sun’s photosphere (the surface region that emits light) shows off its “ultrafine magnetic loops.”

These images were captured by the New Solar Telescope, which is equipped with a 1.6-meter clear-aperture Gregorian telescope and the Visible Imaging Spectrometer (VIS). With a huge aperture and modern imaging sensor, the NST is the largest and best solar telescope on the planet — and indeed, it was built specifically by the New Jersey Institute of Technology (NJIT) to study the activity of the Sun. Scientific observations began in 2009, but it seems it took more than four years for the conditions to be just right to capture these photos.

In the image at the top of this story, you see the most detailed photo ever of a sunspot. The dark patch in the middle is the umbra, with the “petals” forming the penumbra. The texture around the outside is what most of the surface of the Sun looks like. As with most solar phenomena, we don’t know exactly what causes a sunspot, but it appears to be some function of the Sun’s intense magnetic fields and differential rotation (where internal regions of the Sun rotate at different speeds).
Basically, something causes the magnetic field to collapse in on itself. This intense magnetic field is vertical (normal to the Sun’s surface), pointing straight down, blocking the Sun’s normal convection and in turn reducing the sunspot’s surface temperature. This is why sunspots appear darker — a sunspot might be just 2,700-4,200 degrees Celsius, while a normal patch of the Sun is around 5,500 C. The lighter, petal-like regions are where the magnetic field is more inclined, allowing for some convection to occur.
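
The darkness follows directly from blackbody radiation: emitted flux scales with the fourth power of temperature (the Stefan-Boltzmann law), so even a moderate temperature drop dims a patch of the surface dramatically. Here is a back-of-the-envelope check using the figures quoted above (standard physics, not a calculation from the Big Bear team):

```python
# Relative surface brightness of a sunspot vs. normal photosphere,
# using the Stefan-Boltzmann scaling: flux ~ T^4 (temperatures in kelvin).
T_photosphere = 5500 + 273   # ~5,500 C quoted above
T_umbra       = 3500 + 273   # middle of the 2,700-4,200 C range quoted above

ratio = (T_umbra / T_photosphere) ** 4
print(f"umbra emits ~{ratio:.0%} of the flux of the normal photosphere")
# roughly 18%, which is why the spot looks nearly black by contrast
```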
 
 
The second image, above, appears to just be a close-up of the Sun’s photosphere, captured through the Visible Imaging Spectrometer’s H-alpha filter (red light produced by energetic hydrogen atoms). The lines/loops of hydrogen plasma are created by magnetic fields that emanate from the Sun’s inner layers. Basically, it just gives us a better idea of just how crazy the surface of the Sun is. In the image below, captured by NASA’s TRACE space telescope, you can see what a sunspot looks like from another angle.


The New Solar Telescope, and space-based telescopes such as NASA’s STEREO, are of vital scientific importance because they give us more data about one of the most significant objects in the universe:  the Sun. By learning more about sunspots, solar flares, and other heliophysical phenomena, we stand a better chance of weathering whatever the Sun throws at us and prospering here on Earth. 

Aug 8, 2013

ReRAM, the memory tech that will eventually replace NAND flash, finally comes to market



A new memory technology company, Crossbar, has broken cover with a new ReRAM design it claims will allow for commercialization of the technology. The company’s claims aren’t strictly theoretical; today’s announcement reveals that the design firm has successfully implemented the architecture in silicon. While that’s not the same as initiating mass production, it’s an important step in the search for a NAND flash replacement.

ReRAM (also known as RRAM) works by creating resistance rather than directly storing charge. An electric current is applied to a material, changing the resistance of that material. The resistance state can then be measured, and a “1” or “0” is read as the result. Much of the work done on ReRAM to date has focused on finding appropriate materials and measuring the resistance state of the cells. ReRAM designs are low voltage, endurance is far superior to flash memory, and the cells are much smaller — at least in theory.
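
As a conceptual sketch only, not Crossbar's design, a resistive cell can be thought of as a two-state resistor: a programming pulse drives it to a low- or high-resistance state, and a small read voltage measures which state it is in. The resistance values below are illustrative placeholders.

```python
# Toy model of a ReRAM cell: the bit lives in the resistance state, not in stored charge.
# Values are illustrative placeholders, not Crossbar's specifications.

LOW_RES  = 1e3    # ohms, "set" state   -> logical 1
HIGH_RES = 1e6    # ohms, "reset" state -> logical 0
READ_V   = 0.2    # small read voltage that leaves the state untouched

class ReRAMCell:
    def __init__(self):
        self.resistance = HIGH_RES          # cells start erased (logical 0)

    def write(self, bit):
        # A programming pulse switches the material's resistance.
        self.resistance = LOW_RES if bit else HIGH_RES

    def read(self):
        # Measure current at a low read voltage and threshold it.
        current = READ_V / self.resistance
        return 1 if current > READ_V / (10 * LOW_RES) else 0

cell = ReRAMCell()
cell.write(1)
print(cell.read())   # -> 1
cell.write(0)
print(cell.read())   # -> 0
```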
 
Crossbar memory characteristics. (Click to zoom in)


Crossbar has been working to turn theoretical advantages into practical ones. The company’s design is ready for mass production but will target low-density applications for now — think embedded microcontrollers. Demonstrating the capabilities of the part now is important to grabbing investor attention. Crossbar might be a small player, but it’s a small player in a field that’s attracting a lot of prominent attention from major companies; SK Hynix, Panasonic, and HP are all working on ReRAM designs. Long-term, the same principles that make ReRAM function might allow its use as a DRAM replacement, though mass storage ReRAM and ReRAM-DRAM might use different architectures, with one emphasizing long-term storage and the other accelerating random access.

Flash in the pan

ReRAM is the most likely candidate for replacing NAND flash and make no mistake — we need a NAND flash replacement. Sub-20nm NAND roadmaps are peppered with references to 1X and 1Y technology as a means of implying node scaling when lower nodes aren’t actually on the table. The broad plan is to rely on 3D die stacking as a means of improving cost-per-GB as opposed to transitioning to smaller 2D process geometries. Flash will still scale to 14nm within the next few years, but every smaller process node sharply increases the amount of error correction (ECC) required, degrades longevity, and requires greater over-provisioning and more intelligent recovery schemes at the controller level. This, in turn, slows down performance and increases die sizes. SLC (single-level cell) NAND doesn’t really suffer from these issues, but it’s inordinately expensive.



We don’t know where, exactly, the limit is, but the ITRS predicts that NAND below 7nm, in 2D or 3D form, isn’t going to happen, period. That’s more or less when CMOS itself runs out of steam, and even getting down to 7nm is currently dubious given the troubles with EUV lithography and the advent of double/quad patterning. The endurance issue will eventually bite into enterprise and database use, or force those industries to adopt SLC NAND. The bottom line is that regardless of when it happens, NAND scaling isn’t going to continue indefinitely.


The current hope is that ReRAM will be ready for widescale adoption by the 2017-2018 timeframe. The first 3D NAND devices are currently expected in 2015, which means commercial ReRAM deployment would begin well before NAND hits its absolute scaling limit. Given the difficulty of ramping an entirely new technology, it wouldn’t surprise us if NAND’s last generations focus primarily on low-end consumer applications, while ReRAM comes in at the top for the enterprise market, where endurance and write requirements are difficult to meet with smaller NAND geometries.

Put in context, then, the work Crossbar is doing to bring ReRAM to market is essential early work towards building the practical standard of the future. Not that ReRAM is guaranteed — there’s always the possibility of a setback, or that another technology suddenly has a breakthrough moment. But as things stand today, ReRAM appears to be the memory technology with the fewest obstacles standing between it and commercialization as a long-term replacement for NAND.


Jul 31, 2013

Tiny twisted magnets could boost hard drive capacity by 20 times



Quantum physicists at the University of Hamburg have finally worked out how to read and write data using skyrmions — tiny twisted knots of magnetism that could allow for storage densities 20 times greater than today’s hard drives — allowing for hard drives that might one day store hundreds of terabytes of data, or alternatively finger-tip-sized drives that can carry a few terabytes.

Since they were first hypothetically described in the 1960s by a British physicist called Tony Skyrme (yes, they’re named after him), skyrmions have remained fairly elusive. At the time, skyrmions never really took off as theoretical physicists were more interested in quarks and string theory. In more recent years, though, as our tools for observing and testing quantum effects have improved, the skyrmion has come back into vogue.

Basically, a skyrmion is a twisted vortex of magnetized palladium atoms. The magnetization of an atom is defined by the spin of its electrons — depending on which way they spin, the magnetic pole is either at the top or the bottom of the atom (like a tiny little bar magnet). In general, magnetized atoms align in one direction, causing macroscopic samples to exhibit the same behavior — i.e. an actual bar magnet. In a skyrmion, however, the atoms don’t align; instead, they form a twisted vortex (pictured above).

Due to a property known as topological stability, these vortices are surprisingly hardy. Much in the same way that it’s impossible to remove the twist from a Möbius strip without destroying it completely, these skyrmions can be pushed around, but the vortex remains. Most importantly, though, the topological stability of skyrmions persists at tiny scales. In this case, the researchers were able to create stable vortices that consisted of just 300 atoms — just a few nanometers. In conventional hard drives, where conventional ferromagnetism is used and there’s no topological stability, each magnetic site (i.e. bit) needs to be much larger (tens of nanometers), otherwise neighboring bits can corrupt and interfere with each other.


The researchers at the University of Hamburg, led by Kirsten von Bergmann, used a scanning tunneling microscope (STM) to create and destroy skyrmions. By using the tip of the STM to apply a stream of “twisted” (polarized) electrons, the north-south-aligned palladium atoms can be converted into skyrmions (the black dots in the video above). By applying electrons with the opposite spin, the skyrmions can be deleted.

This is the first time that skyrmions have been created and deleted since their theoretical conception in the ’60s — but we’re still a long way away from skyrmion-based 100-terabyte hard drives. Scanning tunneling microscopes are room-sized devices, and in this case the palladium had to be cooled with liquid helium (4.2 Kelvin, -269 Celsius) before the skyrmions would play ball. In the short term, heat-assisted magnetic recording (HAMR) promises massive improvements to hard drive density, and it should be ready for commercial deployment soon. Still, as computers get ever smaller, and data storage requirements grow exponentially, skyrmions in particular and topological stability in general will likely be the focus of lots of future research.

Research paper: DOI: 10.1126/science.1240573 – “Writing and Deleting Single Magnetic Skyrmions”

Self-organizing ‘giant surfactants’ can take chips below 10nm



In the quest for faster processors that generate less heat, engineers have worked hard over the years to perfect more intricate fabrication procedures. Packing more transistors into a smaller space has allowed computing power to balloon in recent years, but how much further can we go? A team of researchers at the University of Akron have developed a new type of nanomaterial that could make semiconductors more efficient than ever.

The researchers, led by Dr. Stephen Z.D. Cheng of UA’s College of Polymer Science and Polymer Engineering, call the material a giant surfactant. While made up of individual nanoparticles, the giant surfactant takes its name from the fact that the assembled molecule is similar in scale to run-of-the-mill macromolecules. However, giant surfactants are of interest because they retain their surfactant functionality on the nanoscale.

A surfactant is a general term for any compound that can lower the surface tension of a liquid. A great many substances have surface tension, but water is the one that people are most familiar with. Surface tension is the force that allows water to form droplets, rather than simply flow outward. Surfactants are important in making semiconductors because the fluids used in various production steps have surface tension, and controlling that quality is vital to guiding them into narrow trenches and other small features. Without very precise control, transistors can’t be placed very close together.

A giant surfactant could revolutionize the production of electronics by allowing engineers to build considerably more dense chips. The University of Akron researchers used nanopatterning to construct the giant surfactant structures from the nanoscale components, although the nanomolecules do most of the work — nanopatterning is a kind of self-assembly.

Giant surfactants form a thin-film, organized lithographic pattern on semiconductor crystals, which acts as a guide for the production process. Because the molecules self-assemble, the structure is incredibly consistent, which could mean less waste from faulty transistors in the final product.
 


Current semiconductor manufacturing processes have reached 22nm, which refers to the size of the smallest features on a chip. Intel’s Ivy Bridge and Haswell are both based on the 22nm process, whereas the most recent ARM CPU cores are still 32nm and 28nm. It is not clear that Moore’s law will hold up much longer with current materials, but the researchers believe giant surfactants could make continued advancement possible. In fact, Dr. Cheng claims that giant surfactants could enable sub-10nm spacing of components.

As computing increasingly moves to mobile devices, having smaller, more powerful processors is of high importance. The lattice formed by giant surfactants provides a ready-made template for creating the necessary chips. The team hopes this is not just a discovery of great scientific interest, but one of enormous practical importance as well. The University of Akron Research Foundation is seeking to patent the materials developed by Dr. Cheng and his colleagues.


New material identified by US Navy could revolutionize computer chip heat dissipation


 
One of the greatest challenges in semiconductor design is finding ways to move waste heat out of a structure and into whatever dissipation area is designed for it. This issue doesn’t get a lot of play — CPU and system cooling, when discussed, tends to focus on finding more efficient ways to remove heat from a heatsink lid or the top of the die. The question of how efficiently heat can be transferred to that point is just as important as what happens to it afterwards. Researchers working at the US Naval Research Laboratory in partnership with Boston College have found a new, extremely efficient transmission method. The secret? Cubic boron arsenide.

According to the research team, the room temperature thermal conductivity of boron arsenide (BAs) is higher than 2,000 W/(m·K). That’s on a level with diamond or graphite, which have the highest bulk values known, but are both extremely difficult to work with or integrate into a product. Mass synthesis and precise application of diamond and graphite are both difficult, which limits practical uses of their capabilities. Boron arsenide could prove more tractable.

The reason boron arsenide conducts heat so effectively is due to vibrational waves (phonons) within the lattice structure. In a conventional metal, heat is carried by electrons. Since electrons also carry an electrical charge, there’s a correlation between a metal’s thermal conductivity and its electrical conductivity at room temperature. Metals like copper and aluminum, which transmit heat well, also tend to carry electricity fairly well, particularly when compared to iron, which is a poor carrier, or lead, which is basically the grumpy llama of the metallic world.
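
That electron-borne correlation is quantified by the Wiedemann-Franz law, which says the ratio of thermal to electrical conductivity in a metal is roughly the Lorenz number times temperature. The quick check below uses textbook values for copper (not figures from the Navy paper); boron arsenide is interesting precisely because its heat rides on phonons instead of electrons, so it sidesteps this electronic limit.

```python
# Wiedemann-Franz check for copper: in a metal, heat and charge are both carried by
# electrons, so thermal conductivity tracks electrical conductivity.
L = 2.44e-8          # Lorenz number, W*ohm/K^2
T = 300.0            # room temperature, K
sigma_cu = 5.96e7    # electrical conductivity of copper, S/m (textbook value)

kappa_predicted = L * sigma_cu * T
print(f"predicted k(Cu) ~ {kappa_predicted:.0f} W/(m*K)")   # about 436; measured is ~400
# Boron arsenide's predicted ~2,000 W/(m*K) comes from lattice phonons, not electrons,
# so this electron-based relation does not apply to it.
```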

The work being done here is theoretical and based on modeling the known lattice structure of boron arsenide, but the math checks out. The lattice structure and known properties of semiconductors, including semiconductor work being done in the III-V group of which boron is part, point to potential applications in solar cells and radiation-hardened circuits. One of the other advantages of boron, unlike a material like diamond, is that III-V semiconductor manufacturing is already an area of ongoing research. Boron can be bonded to gallium arsenide (BGaAs), though data on its efficacy in this configuration is somewhat limited.

Should the researchers’ prediction prove valid, there are undoubtedly uses for this capability. Gallium arsenide is a tricky substrate to work with, which is one reason why silicon has remained the industry standard, but multiple manufacturers are expected to deploy III-V materials in coming years as CMOS scaling becomes ever more difficult. Moving heat away from the transistor could allow for higher performance and reduce the need for cooling in any application where heat buildup is detrimental to product function (which is to say, most of them). Boron has also earned scrutiny in recent years thanks to the way it partners up with graphene. As shown in the image at the top of the story, boron nitride and graphene can be grown side by side, creating nanowires of graphene that are isolated by the boron. These types of applications suggest a great deal more attention may be focused on boron in the future, particularly if production can be ramped to industrial levels.

Source: extremetech.com



Jul 30, 2013

Our Destiny Lies Not in Our Stars, But in Our Bacteria

Think you know all about evolution (assuming you accept it)? We have a gut feeling there’s more to it than you think.


Color-enhanced scanning electron micrograph showing Salmonella typhimurium invading cultured human cells. (PHOTO: PUBLIC DOMAIN)

Strictly by the numbers, the vast majority — estimated by many scientists at 90 percent — of the cells in what you think of as your body are actually bacteria, not human cells. The number of bacterial species in the human gut is estimated to be about 40,000. ... The total number of individual bacterial cells in the gut is projected to be on the order of 100 trillion.
There’s been a lively academic debate—the “hologenomic theory of evolution”—drawn along these lines, asking whether, if microbes are so much a part of us, they must surely affect our evolution. Hologenomic refers to a critter’s genetic package and that of its itty-bitty entourage, or as biologist Seth Bordenstein (“a scientist, educator, optimist, consultant, and non-linear thinker”) puts it, “the aggregate genome and microbiome of animals.” He and his colleagues at his Vanderbilt lab believe these combined sets of genetic stuff form a “persistent, evolutionary unit.”

Given the weight of biomatter in an animal, and the yeoman’s work these bugs do, it’s an intuitively sensible proposition. Still, until lately it’s been somewhat of a microbiology wallflower, or perhaps Wallin-flower, since Ivan Wallin suggested something along the lines of such “symbiosis and speciation” in 1927.

To fast forward more than eight decades, here’s a recent tweet from Bordenstein: “Biologists no longer study if there r genomes in life’s structures but whether all those genomes r interconnected beyond self #hologenome”

Bordenstein and his post-doc Robert Brucker concocted an elegant proof of this—their study appears online in the journal Science—using gut bacteria not from people but from three different species of parasitic wasp.

Two of the species are believed to have diverged 400,000 years ago, the third about a million years ago. As a result, this gave the researchers two species with somewhat similar genomes and somewhat similar but not identical “microbiomes,” while the third species’ genome and microbiome were further removed. When the various species mate with each other, those differences are apparent—offspring from the two more closely related species tend to survive, while those from one of the close and the far species tend to die out over time. (Bordenstein has described these wide-gulf hybrids as having “chaotic and totally different” gut flora, hardly a recipe for thriving.)

Feeding the wasps sterile food so that the gut flora wouldn’t develop, Bordenstein and Brucker interbred the insects in the lab. It turns out the presence of the gut bacteria actually tamps down successful hybridization: among what we might call the bug-less insects, the survival rate for hybrids was pretty much the same regardless of which pair of parents they had. As if that weren’t interesting enough, in a sort of second-stage check, when the gut bugs were introduced to the hybrids, the ones with the wider genetic (and microbiomic) gulf tended not to survive another generation.

“Our results move the controversy of hologenomic evolution from an idea to an observed phenomenon,” Bordenstein was quoted in a release from Vanderbilt. “The question is no longer whether the hologenome exists, but how common it is?” And while the research has been met with interest, not everyone accepts the whole of the hologenomic concept yet. As evolutionary geneticist John Werren—who considers this wasp work important—asked Science’s Kai Kupferschmidt, “They are not co-evolving as a single unit, so why would we call them a single genome?”

But for those of us not on the frontlines of microbiology, whether hologenomic or not, the title of Valerie Brown’s piece seems more accurate than ever: “Bacteria R Us.”

Earth acts as a giant particle accelerator, creating the dangerous Van Allen radiation belts



One of the many, many worries people had when first sending humans to the Moon had to do with the Van Allen radiation belts. These are layered, two-lobed areas of space around the Earth that have an unusually high density of high-energy charged particles, including electrons. These electrons damage electronics, penetrating deep into a spacecraft and often causing harmful releases of energy in semiconductors or electrical relays. When the Apollo missions sent humans through these belts of space, NASA simply had no idea what to expect since prior human flights had never gone far enough out to cross the fields. The astronauts zipped through unharmed, however, and today the Van Allen belts aren’t thought to pose a significant danger to living things so long as they are shielded and don’t spend too long inside.

However, even after all that, we still had no clear understanding of why the belts were so dangerous to electronics — an invisible force appeared to be ramping up these charged particles to nearly the speed of light, but where was that force coming from? We eventually developed two competing theories, one saying the source of accelerating energy was foreign, the other arguing that it was local. We know the particles mostly come from gusts of solar wind, but is there something intrinsic to our area of space that gives the particles a boost? This week the journal Science published the answer: it’s the Earth’s own magnetic field that makes the Van Allen belts so dangerous.

The cause, it seems, is lower-energy electrons that give off just the right frequency of electromagnetic radiation, in this case in the radio portion of the spectrum. It’s a powerful enough source of energy to be detectable with a hand-held antenna and headphones, though that can’t be too surprising given the level of energy it can impart to particles in the Van Allen belts. Lead researcher Geoffrey Reeves likens the effect to hitting a tether ball: “The waves have just the right frequency to hit that tether ball each time it comes around, at just the right time, so it goes faster.” Eventually, these electron tether balls approach relativistic speeds.
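
The tether-ball analogy is ordinary resonance: a periodic push delivered at a system's natural frequency adds energy every cycle, while an off-frequency push largely cancels itself out. The toy oscillator below illustrates the principle only; it is not the paper's model of wave-particle interaction in the belts.

```python
import math

# Drive a simple oscillator on and off its natural frequency and compare how much
# energy each drive pumps in. Purely illustrative; not the belts' plasma physics.

def final_energy(drive_freq, natural_freq=1.0, steps=20000, dt=0.01):
    x, v = 0.0, 0.0
    for i in range(steps):
        t = i * dt
        # Restoring force plus a small periodic push (semi-implicit Euler integration).
        a = -natural_freq**2 * x + 0.05 * math.cos(drive_freq * t)
        v += a * dt
        x += v * dt
    return 0.5 * v**2 + 0.5 * natural_freq**2 * x**2

print("on resonance :", final_energy(1.0))   # energy grows steadily, cycle after cycle
print("off resonance:", final_energy(1.5))   # pushes mostly cancel; energy stays tiny
```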



For decades it was believed that Earth had only two Van Allen belts, but just a few months ago the Van Allen probes discovered a third, much farther out than the first two. It turned out to be transient, eventually being blown away by a strong solar shockwave. Still, as we become increasingly dependent on global communications technology, a detailed understanding of these belts of space will become more important. Everything from GPS satellites to research telescopes shield their electronics and generally shut down when entering them to minimize the chance of damage.

Even then, solar storms and local geomagnetic phenomena can swell the fields dramatically, sometimes engulfing whole fleets of satellites with little warning. It’s only recently that scientists have truly appreciated how volatile these fields of space can be. Right now, their changes are often unpredictable — but this breakthrough might help us change that. Understanding the nature of the space around our planet will be critical to predicting its actions.

Interestingly, there is a proposal to actually destroy the Van Allen belts with a program called the High Voltage Orbiting Long Tether (HiVOLT). This system of five 100 kilometer-long charged tethers would be deployed from satellites and would deflect the charged particles. This would disperse them remarkably quickly, with projections putting the electron flux at just 1% of today’s level after two months of operation.

Regardless, understanding the Van Allen belts will be necessary if we want to have any hope of continuing to advance our mechanization of the skies at the current pace.


Research paper: doi:10.1126/science.1237743 – “Electron Acceleration in the Heart of the Van Allen Radiation Belts”

Jul 29, 2013

Cure cancer completely - cancer treatment


It was in 1988, while presiding over the Antonio parish in Pouso Novo, Brazil, that Father Romano Zago first learned from the local people about a potent all-natural healing recipe derived from the Aloe Arborescens plant, which is said to work by detoxifying the body and promoting a healthy immune system.

He began experimenting with the recipe, as well as recommending it to his friends and members of his church whenever they were ill, allowing him to observe first-hand the positive results they were experiencing from using it. Father Zago later traveled to Jerusalem and Italy, where he continued to offer his recipe and to see improvement in the immune systems of those who tried it. This inspired him to devote the remainder of his life to further research and to educating others on the benefit of including the juice of Aloe Arborescens in their healing plans.
 
Father Zago published two books that included the original recipe, as well as the results that he witnessed. He also cites numerous scientific articles demonstrating the plant’s therapeutic and anti-tumor potential, as well as an encyclopedic bibliography of current information on the scientific studies that validate the healing and curative properties possessed by the Aloe arborescens plant.
 
Health benefits of the Aloe Arborescens: There has been much publicized scientific research and literature on the synergistic benefits of the 300 phyto-therapeutic, biochemical and nutrient constituents of the common Aloe Vera plant, which can help build the body’s defenses and enhance the immune system to protect against diseases. However, this particular recipe is derived from its “cousin plant,” the Aloe Arborescens, which contains about 200% more medicinal substances and almost 100% more anti-cancer properties.
 
This recipe, as well as the commercial product that is available online, is considered to be a stage 4 supplemental treatment that can be used in combination with other non-toxic treatments, as well as with conventional treatments to lessen any toxic side effects.
 

Aloe Arborescens Protocol

The Aloe Arborescens protocol is designed to supercharge the immune system quickly as well as to flood the body with super-nutrients. It can be made from scratch if you have access to the actual plant, or you can find a quality commercial product online, such as the Aloe Arborescens Based Brazilian Formula for Supreme Immune Health Support.
 
 
Recipe:
1) Half a kilo (or 1.1 lbs) of pure, raw honey (i.e., NOT synthetic or refined honey)
2) 350 grams of Aloe Arborescens leaves (approximately 3 or 4 leaves, depending on their size)
3) 40 to 50 ml of distillate (you can use whiskey, cognac, or some other pure alcohol); this amount is equal to roughly 8 to 10 teaspoons, and is used as a preservative for the finished product.
4) Mix together in a blender until fully blended.
Dosage: take 1 tablespoon, 3 times per day on an empty stomach, at least 30 minutes before a meal.
  • It is important to shake the bottle well before pouring the dosage.
  • Store the finished product in a dark glass bottle kept in a dark area of the refrigerator (preferably a drawer which will not get much light).
  • The product should not come into contact with direct sunlight.
  • Take the product immediately upon measuring and pouring it out.
The daily dose should be taken 10 days on, 10 days off, 10 days on, 10 days off, etc., until the cancer patient has regained their strength; or it can safely be taken without any days off if the patient so desires. Because it is all-natural, it can safely be added to other non-toxic treatments, and can also be useful for those taking chemotherapy or radiation treatments to lessen side-effects.
 

Google is evidently working on real-time mobile translation tech

Yes, like the Babel fish

 
Google has its sights set on the future with projects like Google Fiber and Google Glass, and now it's adding real-time voice-to-voice translation to that list as well.

Google's Vice President of Android Hugo Barra said this week that Google is now in the early stages of creating real-time translation software that it hopes to perfect within the next "several years," according to The UK Times.

The company already has prototype phones that can translate speech in real time, so that a user speaks into the device in one language and the person on the other end hears it in a different one, like the fictional Babel fish in "The Hitchhiker's Guide to the Galaxy" or the TARDIS in "Doctor Who."

"That is where we're headed," Barra told the publication. "We've got tons of prototypes of that sort of interaction, and I've played with it every other week to see how much progress we've made."

Same old hurdles

Google's speech-to-speech translation project is reportedly being developed as part of Google Now, the Google services suite that's designed to predict your needs before you know them.

The real-time translation is reportedly better for certain language pairs, such as Portuguese and English, but accuracy remains an issue.

Anyone who's tried to use Apple's Siri or Android's voice-to-text services knows that a little background noise can cause a lot of inaccuracies, and that's something Google is wrestling with still.


The groundwork for real-time, voice-to-voice translation certainly exists, though, between that speech recognition software and Google's online Google Translate service.
Google said that on that service alone it translates a billion entries per day in 71 languages, and it just added new languages from places like the Philippines, South East Asia and Indonesia.

Don't stop me now

Google discussed voice translation software back in 2010, when Google Distinguished Research Scientist and head of machine translation Franz Och offered this:

"We think speech-to-speech translation should be possible and work reasonably well in a few years' time. Clearly, for it to work smoothly, you need a combination of high-accuracy machine translation and high-accuracy voice recognition, and that's what we're working on.

"If you look at the progress in machine translation and corresponding advances in voice recognition, there has been huge progress recently."

It would have been nice if he had been right - we'd probably have real-time voice translation on our Galaxy S4 right now. But at least we know they're still working on it.

Who’s killing the bees? New study implicates virtually every facet of modern farming

The problem is fairly well known: Colony Collapse Disorder (CCD) is sudden and devastating, wiping out entire hives so quickly that scientists have had a hard time pinpointing the exact cause. 
 Bee populations have plummeted, and possible explanations have ranged all over the map, from cellphone radiation to global warming, but a new study published in PLOS ONE suggests a complex combination of issues. Our agricultural chemicals, techniques, and pest species are all working together to create an incredibly lethal situation for the modern worker bee. Solving the problem will require much more than yet another adjustment to what farmers spray on their fields — and yet solve the problem we must, as without bees almost the entirety of the modern agricultural system will fail. And then we’ll be without humans, too.



 Nosema ceranae, a major killer of honey bees.

The study found that combinations of some of the most common fungicides and herbicides were, occasionally, reaching lethal doses in bees. The bigger problem is that even a non-lethal dose can increase the bees’ susceptibility to the parasite Nosema ceranae, which has already been accused of contributing to CCD. Could this be the answer? Well, it’s almost certainly a big part of it — and it’s good that we know this. The troubling part is that there doesn’t seem to be any one DDT-like Satan chemical to blame here. This is simply a product of exposing a very fragile species to a lot of different, highly active foreign chemicals. Perhaps the most troubling part of the study, however, was what it found about where these chemicals are being picked up.
Farmers used to keep bees for themselves; it was one of the fundamental skills that made farmers farmers. As the agricultural business became more specialized, bee farms began to crop up to do this job for them. Rather than keep bees year round, most farmers now pay a bee farm to cart a hungry hive over and let the bees loose in their fields only when specifically needed for pollination. However, these bee farms tend to keep only one type of insect, usually the European honey bee. They’re good pets, and their honey provides a secondary source of revenue. The problem is that not every bee collects pollen equally well from every type of plant. So when you let a honey bee loose in a field full of, say, blueberry plants, they can collect far less pollen than a more specialized pollinator like a bumble bee, and they’re forced to hunt further afield.
This study found that some of the most damaging chemicals were not being collected from food crops, which have their dangers but are ultimately fairly well regulated. Rather, bees are increasingly picking up chemicals from weeds and other pest plants in the fields surrounding the crops they are supposed to be pollinating. This is a problem since, for obvious reasons, we have far fewer regulations on what farmers can spray on weeds.
Everyone from farmers to agri-chemical engineers must take note of this study. Bees are one of the most important complex species on the planet, up there with earthworms and flies in terms of occupying an absolutely critical node in the web of natural interdependence. It is estimated that bees are responsible for the pollination of one third of the world’s crops; without them, many farms would simply fail. On the other hand, food is pretty important, as well. Everything from the concentration to the variety of chemicals used seems to contribute to the problem, but it’s those very concentrations and varieties that allow food to maintain the price and abundance we see today — and even that often isn’t very good.


If pro-bee alarmism were to spark a huge pull back in our use of chemicals in farming, it would impact both the chemical manufacturers and the farms themselves. Rampant weeds and pest insects, now veritable super-versions built to fight our chemical defenses, could drastically change the landscape of the food business if left unchecked. On the other hand, checking them seems to be leading unavoidably to the downfall of our most important industrial species.
Perhaps we should consider supplementing our never-ending efforts to modify crops and chemicals with concurrent efforts to modify bees. It’s a simple truism that we’d be better off if farms at least partially returned to being whole-cycle operations that keep bees and rotate crops. However, that’s unlikely to happen in the race to provide food to an ever more cash-strapped population. But why not look into co-developing bees with the chemicals we know they will encounter?
Natural resistances can be bred, and we’re approaching an age in which we could even directly induce them with manipulation of DNA. Though the idea would be controversial with environmentalists, we could hypothetically co-develop crops, bees, and chemicals to exist together comfortably. The biggest downside would be creating yet more patentable species, and yet more ways for biotechnology to hold the modern food crop hostage.

Research paper: doi:10.1371/journal.pone.0070182 – “Crop Pollination Exposes Honey Bees to Pesticides Which Alters Their Susceptibility to the Gut Pathogen Nosema ceranae”


 

7nm, 5nm, 3nm: The new materials and transistors that will take us to the limits of Moore’s law



At Semicon West 2013, the annual mecca for chipmakers and their capital equipment manufacturers, Applied Materials has detailed the road beyond 14nm, all the way down to 3nm and possibly beyond.

The talk, delivered by Adam Brand of Applied Materials, mostly focused on the material and architectural challenges of mass-producing transistors at 14nm and beyond. At this point, 14nm seems to be the final node where silicon — even when in the shape of a fin (as in FinFETs) — will be thick enough to prevent quantum tunneling and gate leakage.

Transistor gate length (Lg), over time. The plateau was between 45nm and 28nm, until Intel’s 22nm FinFET (thin channel transistor) kicked in.


Beyond 14nm, as we move to 10 and 7nm, a new fin material will be required — probably silicon-germanium (SiGe), or perhaps just pure germanium. SiGe and Ge have higher electron mobility than Si, allowing for lower voltages, and thus reducing power consumption, tunneling, and leakage. SiGe has been used in commercial CMOS fabrication since the late ’80s, too, so switching from silicon won’t be too painful. (The primary reason that we’ve been using silicon for so long is that the entire industry is based on silicon. The amount of time, money, and R&D that would be required to deploy new machines for handling new materials that we know relatively little about would be astronomical.)
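
The reason mobility matters so much is that supply voltage enters the first-order CMOS dynamic-power equation squared (power is roughly the activity factor times switched capacitance times voltage squared times frequency). The numbers below are made up for illustration; only the voltage-squared scaling is the point.

```python
# First-order CMOS dynamic power: P ~ alpha * C * V^2 * f.
# Numbers below are illustrative, not from Applied Materials' talk.

def dynamic_power(alpha, capacitance, voltage, frequency):
    return alpha * capacitance * voltage**2 * frequency

alpha = 0.1          # activity factor (fraction of gates switching per cycle)
C     = 1e-9         # total switched capacitance, farads
f     = 2e9          # clock frequency, Hz

p_silicon = dynamic_power(alpha, C, 0.80, f)   # hypothetical 0.80 V silicon channel
p_sige    = dynamic_power(alpha, C, 0.65, f)   # hypothetical 0.65 V SiGe channel

print(f"0.80 V: {p_silicon:.2f} W, 0.65 V: {p_sige:.2f} W "
      f"({1 - p_sige/p_silicon:.0%} lower)")
```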


According to Brand, SiGe will take us to 7nm — but after that, we’re probably looking at a new transistor structure. Just as FinFET created a larger surface area, mitigating the effects of quantum tunneling, both Gate All Around (GAA) FETs and vertical tunneling FETs (TFETs) would again allow for shorter gates and lower voltages. As you can see in the diagram below, a GAA FET essentially consists of nanowire sources and drains surrounded by a gate. A vertical TFET is similar in that it uses nanowires, but the actual method of operation is very different from conventional FETs. Again, though, TFETs allow for lower operating voltages. Another option is a somewhat conventional FinFET, but with the fin constructed out of III-V semiconductors such as gallium arsenide (GaAs), which again have higher electron mobility than silicon.

The path beyond 14nm is treacherous, and by no means a sure thing, but with roadmaps from Intel and Applied Materials both hinting that 5nm is being researched, we remain hopeful. Perhaps the better question to ask, though, is whether it’s worth scaling to such tiny geometries. With each step down, the process becomes ever more complex, and thus more expensive and more likely to be plagued by low yields. There may be better gains to be had from moving sideways, to materials and architectures that can operate at faster frequencies and with more parallelism, rather than brute-forcing the continuation of Moore’s law.

For the complete set of slides, hit up the Semicon West 2013 website [PDF]. Unless you’re a PhD-wielding process chemist working at Intel or TSMC, though, the contents may go over your head.




MIT successfully implants false memories, may explain why we remember things that didn’t happen



Researchers at MIT have implanted false memories into the brains of mice, causing them to be fearful of an event that didn’t actually occur. This is a very important study that demonstrates just how unreliable memories can be, and goes a long way to explaining why humans regularly recall things that didn’t actually happen — such as alien abductions, or when giving eyewitness testimony that they believe to be true, but is actually a false memory.

This breakthrough comes from the same team that discovered that memories are stored in individual neurons – and the process of implanting (or “incepting,” as the researchers call it, in homage to the film Inception) false fears is essentially the same, but with a vital extra step added to the end.

The researchers place a mouse in a brand new environment. As the mouse explores this environment (Place A), new memories are created in the hippocampus (the region of the mammalian brain that we know is deeply involved with memory formation). In Place A, the mouse has the time of its life. The mouse is then relocated to a different environment (Place B). While in Place B, the neuroscientists stimulate the memory of Place A using optogenetics (more on that below), while simultaneously delivering electric shocks to the mouse’s feet, causing fear and pain. Then, when the mouse is returned to Place A, it freezes in fear. This is because the mouse’s brain has somehow confused the fear of electric shocks in Place B with its memory of Place A — in other words, a false memory has been created.
 

Optogenetics is, as the name suggests, meddling with the genetics of cells so that they are sensitive to light. In this case, the MIT researchers used a virus to infect the neurons in the specific region of the hippocampus where Place A memories are formed. This virus changes the neurons’ DNA so that they produce a protein switch that is sensitive to light. Then, when these neurons are struck by light (a hole is drilled in the mouse’s skull and a laser is shot into that region of the hippocampus), the memory is turned on. Optogenetics is one of the most exciting developments in neuroscience as it allows us to interact with very specific regions of the brain in vivo — in living, breathing, memory-forming test subjects.

Now, freezing in fear isn’t on the same level as the elaborate false memories that humans sometimes conjure up, but it shows incontrovertibly that false memories can be created — and, more importantly, that the physiological process of creating and recalling false memories and real memories is very similar. This doesn’t explain how we create such fantastical false memories as being abducted by aliens, but it does explain why we so vehemently believe that these memories are real. More research is needed, but it seems that, as far as we’re concerned, false and real memories are both equally real.


The next step, of course, is to actually do something with these findings. The research group would like to use its memory manipulation technology to fix or treat undesirable brain function, such as anxiety and depression. Being able to delete or reprogram bad memories, a la Eternal Sunshine of the Spotless Mind, would probably make short work of many mental woes. Perhaps more exciting, though, is the potential to directly encode new memories into our neurons — kind of like when Trinity learns to fly a helicopter in The Matrix. That’s probably a few years away yet, though.

Research paper: DOI: 10.1126/science.1239073 – “Creating a False Memory in the Hippocampus”

Jul 26, 2013

Google Translate adds handwriting support and tries to make sense of your scrawl


Why type it when you can write it? Google Translate users can now scribble symbols using a new handwriting input tool. While this is unlikely to be a faster option for translating a lot of languages, it does prove useful for inputting certain foreign characters. Want to find out what that Russian or Chinese phrase means? Don’t bother trying to work out how to input these characters via your keyboard, just draw them.

This is an option that has been available to users of the Google Translate Android app for a little while, but it's now also available to desktop users. Things are a great deal easier if you have access to a graphics tablet, but the onscreen handwriting input panel can also be used in conjunction with a mouse.

Analysis of whatever you scrawl into the input box works much like predictive texting -- a number of possibilities are listed to choose from and an instant translation is provided.

Support currently stretches to 45 languages, and not all of them are symbol based. This is handy for those occasions that require the addition of an all-important accent or other embellishment that could completely change the meaning of a word.

Oh, and if you find that handwriting input is not immediately obvious, just look beneath the regular input panel, click the keyboard icon there, and the option will present itself.

Jul 25, 2013

45 years of Intel: but can it keep pace with portability?

Intel celebrates four and a half decades 

 

Intel's 45 today: founded in 1968, the Intel Corporation takes its name from a portmanteau of Integrated Electronics, although it was nearly called Moore Noyce after its founders Gordon E Moore and Robert Noyce - a decision that was abandoned when they realised it sounded like "more noise".

Today we think of Intel as a processor firm but it started off making memory, and while it marketed its first microprocessor in 1971 there was a period where its future appeared to be not in processors, but in digital watches.

But Intel's processors hit the big time with the arrival of the IBM PC, which turned out to be quite popular: the combination of Intel processors, PC-compatible hardware and Microsoft operating systems would dominate computing for three decades.

It's amazing how far we've come, both in terms of engineering and what that engineering has enabled us to do. We've gone from computers that were little more than glorified adding machines to astonishingly powerful devices of all shapes and sizes - and that has changed the world.

Harder, better, faster, stronger

Intel's Gordon E Moore created Moore's Law, the prediction that transistor counts on integrated circuits would double every two years, and we saw that prediction take effect not just in processing power but in storage capacities and camera megapixels too.
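
For a sense of what that doubling implies, here is the bare compounding arithmetic, starting from the roughly 2,300 transistors of Intel's first microprocessor, the 4004 of 1971 (an illustration of the trend, not a claim about any specific 2013 product):

```python
# Moore's law as simple compounding: transistor count doubles every two years.
# Starting point: Intel's 4004 (1971) with ~2,300 transistors.

start_year, start_count = 1971, 2300
year = 2013
doublings = (year - start_year) / 2          # 21 doublings
projected = start_count * 2 ** doublings

print(f"{doublings:.0f} doublings -> ~{projected / 1e9:.1f} billion transistors")
# about 4.8 billion, the same ballpark as the largest chips actually shipping in 2013
```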

The acceleration of technology took us from 8086 to 286, 286 to 386, 386 to 486, 486 to Pentium and on to multi-core processors in the blink of an eye, and we've long since passed the stage when PCs weren't powerful enough for the things we wanted them to do.

That turned out to be something of a Pyrrhic victory for Intel, though: in recent years the battlefield hasn't been about power, but portability.

The tablet and smartphone, not the PC, have become many people's primary computing devices. As Jeremy Laird wrote recently, "the desktop CPU war is over... it's all about ultramobile."

PC sales have been in the longest, steepest decline in the industry's history, and in some emerging markets the PC is being bypassed altogether as people buy tablets as their first computers.

Should that cast a cloud over the birthday celebrations? We don't think so. The PC market may be shrinking but it's still pretty big and largely Intel-powered, and while Intel-powered tablets haven't quite set the world on fire we have high hopes for Haswell.

Some of our happiest computing experiences had Intel inside; here's to many more.

How to put humans on Mars, and get them home safely again



Younger generations haven’t experienced the kind of staggeringly monumental historic events that older generations have, such as the World Wars or the Moon landing. Our historic events so far — mostly related to personal technology, such as the rise of the PC and the internet — are more of a slow, incremental burn. However, a team of UK scientists from Imperial College London is aiming to give younger generations a staggering historic moment of their own, and has designed a mission to land three humans on Mars.

The mission consists of two spacecraft. The first is a Martian lander, equipped with a heat shield, that would carry the crew during launch into Earth orbit. The second craft would be a habitat vehicle, which is the craft the crew would live in during the voyage. The habitat vehicle would consist of three floors, and measure in at around 33 feet (10m) tall and 13 feet (4m) in diameter. So, while the habitat might be a little cramped for three humans, it should do. The astronauts would be situated in the lander during takeoff, and would move to the habitat when the dual-craft reaches Earth orbit. Once the astronauts are safely within the habitat, a rocket would shoot the dual-craft off on its journey to Mars, which would take a surprisingly short nine months at minimum.
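That nine-month figure lines up with a minimum-energy (Hohmann-style) transfer between the two planets’ orbits. The quick Python sketch below is not taken from the Imperial College study; it simply assumes circular, coplanar orbits to show where a number like that comes from:

import math

# Rough Hohmann-transfer travel time between Earth and Mars,
# assuming circular, coplanar orbits (not the Imperial College figures).
MU_SUN = 1.32712e20       # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11             # astronomical unit, m
r_earth = 1.0 * AU        # Earth's orbital radius
r_mars = 1.524 * AU       # Mars's orbital radius

a_transfer = (r_earth + r_mars) / 2                      # semi-major axis of the transfer ellipse
t_seconds = math.pi * math.sqrt(a_transfer**3 / MU_SUN)  # half the ellipse's orbital period
print(f"One-way transfer time: {t_seconds / 86400:.0f} days "
      f"(~{t_seconds / (86400 * 30.44):.1f} months)")

This works out to roughly 260 days, or about eight and a half months, so a nine-month trip is very much in the right ballpark.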

Perhaps sounding like something Jeff Goldblum would think of when attempting to save the planet from hostile aliens, the dual-craft would then split apart by around 200 feet (60 meters), while remaining attached by a tether. Then, thrusters on both vehicles would spin them around a central point, creating artificial gravity similar to Earth’s in the habitat. Not only would this help the astronauts feel at home for the better part of a lonely year, but it is also thought to reduce the bone and muscle atrophy that extended periods of weightlessness cause. If the craft required greater maneuverability, in an emergency such as a solar flare or incoming debris, the tether could be retracted and the craft piloted more easily.
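For a sense of the spin involved, here is a rough Python sketch. It assumes the habitat circles at about 30 meters from the spin axis (half of the 200-foot separation; the true radius would depend on the relative masses of the two craft), which is not a figure from the study itself:

import math

# Spin rate needed for ~1 g of artificial gravity on a tethered pair of craft.
# Assumption (not from the study): the habitat circles at ~30 m, half of the
# ~60 m separation; the actual radius depends on the mass ratio of the craft.
g = 9.81          # target centripetal acceleration, m/s^2
radius = 30.0     # distance from the spin axis, m

omega = math.sqrt(g / radius)        # a = omega^2 * r  ->  omega = sqrt(a / r)
rpm = omega * 60 / (2 * math.pi)
rim_speed = omega * radius

print(f"Angular velocity: {omega:.2f} rad/s (~{rpm:.1f} rpm), rim speed ~{rim_speed:.0f} m/s")

That comes out to about 5.5 revolutions per minute, a gentle enough spin that the crew should barely notice it.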
Body atrophy isn’t the only threat facing the crew: nine months in cramped quarters would drive anyone insane, so the team would have to look for ways to keep the astronauts occupied. The craft would also have to be well-stocked with medicine, and the crew trained to use it, as practically no one stays in perfect health for nine months straight. Superconducting magnets, as well as water flowing through the shell of the craft, would be employed to help reduce both cosmic and solar radiation.

Perhaps the biggest point in the concept’s favor is that each stage of the mission has already been proven to work individually.

Once the dual-craft reaches Mars, the two halves would be drawn back together by the tether; the crew would move back into the lander, detach from the habitat, and descend to the Red Planet’s surface. The mission would involve sending a habitat and return vehicle to Mars before the astronauts arrived, so the crew would have shelter upon landing as well as a way home. The crew would spend anywhere from two months to two years on Mars, depending on the goals of the mission and on when the positions of Mars and Earth allow for a faster return journey. On the way back home, the mission would dock with the ISS, and the crew would take a craft back to Earth from there.

 
Unfortunately for space enthusiasts, there is no real timetable for this mission. However, considering every individual step has already been demonstrated on its own, the proposed journey could plausibly work. Hopefully today’s younger generations will see this kind of voyage take place in their lifetimes, as they’re surely not impressed by the rise of smartphones and the internet anymore.

(Image credit: gdefon.ru)


Light stopped completely for a minute inside a crystal: The basis of quantum memory



Scientists at the University of Darmstadt in Germany have stopped light for one minute. For one whole minute, light, normally the fastest thing in the known universe at around 300 million meters per second in a vacuum, was stopped dead still inside a crystal. This effectively creates light memory, where the image carried by the light is stored in the crystal. Beyond being utterly cool, this breakthrough could lead to the creation of long-range quantum networks — and perhaps, tantalizingly, this research might also give us some clues on accelerating light beyond the universal speed limit.

Back in 1999, scientists slowed light down to just 17 meters per second, and then two years later the same research group stopped light entirely — but only for a few fractions of a second. Earlier this year, the Georgia Institute of Technology stopped light for 16 seconds — and now, the University of Darmstadt has stopped light for a whole minute.

To stop light, the German researchers use a technique called electromagnetically induced transparency (EIT). They start with a cryogenically cooled, opaque crystal of yttrium silicate doped with praseodymium. (The image above is unrelated; sadly there isn’t an image of the actual crystal that was used to stop light.) A control laser is fired at the crystal, triggering a quantum interference effect that renders it transparent to a narrow band of light. A second light source (the data/image source) is then beamed into the now-transparent crystal. The control laser is then turned off, turning the crystal opaque again. Not only does this leave the light trapped inside, but the opacity means that the light inside can no longer bounce around — the light, in a word, has been stopped.


With nowhere to go, the energy from the photons is picked up by atoms within the crystal, and the “data” carried by the photons is converted into atomic spin excitations. To get the light back out of the crystal, the control laser is turned back on, and the spin excitations are re-emitted as photons. These atomic spins can maintain coherence (data integrity) for around a minute, after which the light pulse/image fizzles. In essence, this entire setup allows the storage and retrieval of data from light memory (or should that be optical memory?).
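As a rough way to picture the trade-off, here is a toy Python model of that storage-and-retrieval cycle. The one-minute coherence time comes from the article; the simple exponential decay and the efficiency parameter are illustrative assumptions, not the measured behaviour of the crystal:

import math

# Toy model of EIT-style light storage: the retrievable signal fades as the
# atomic spin coherence decays. The ~60 s coherence time is from the article;
# the exponential form is a simplification, not the measured decay curve.
COHERENCE_TIME_S = 60.0

def retrieval_efficiency(storage_time_s, write_read_efficiency=1.0):
    """Fraction of the stored light pulse recovered after a given storage time."""
    return write_read_efficiency * math.exp(-storage_time_s / COHERENCE_TIME_S)

for t in (1, 10, 30, 60, 120):
    print(f"store {t:>3} s -> retrieve ~{retrieval_efficiency(t):.0%} of the signal")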

In the image above, you can see that the scientists successfully stored a simple image (three horizontal lines) in the crystal for 60 seconds. It should be possible to store data for longer periods, too, using other crystals — such as europium-doped yttrium silicate — and by using specially tailored magnetic fields.

Light-based memory that preserves quantum coherence (such as polarization and entanglement) is vital for the creation of a long-range quantum network. Just as with conventional, electronic routers, quantum routers must be able to store incoming packets and then retransmit them — which is exactly what today’s discovery allows. Even so, there are still a few barriers to overcome before we can roll out a quantum internet — namely, we must find a method of coherently storing light that introduces so little noise that single photons can still be reliably stored and retrieved, and we need to do it at room temperature, too. Cryogenics might be acceptable at the data center level, but I can’t imagine having a cryogenically cooled router in my house.
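To make the router analogy concrete, here is a purely hypothetical Python sketch of the store-and-forward idea: a "packet" (standing in for a photonic qubit) is parked in memory and must be forwarded before its coherence time runs out. The class, names, and numbers are illustrative only and are not drawn from the paper:

import time

# Hypothetical store-and-forward buffer for a quantum router: an incoming
# packet is held in memory and can only be retransmitted while it remains
# within its coherence window. Purely illustrative; not from the research.
COHERENCE_TIME_S = 60.0

class QuantumBuffer:
    def __init__(self):
        self._slots = {}  # packet id -> arrival timestamp

    def store(self, packet_id):
        """Record when a packet entered the memory."""
        self._slots[packet_id] = time.monotonic()

    def retrieve(self, packet_id):
        """Return True if the packet is still coherent enough to retransmit."""
        arrived = self._slots.pop(packet_id, None)
        if arrived is None:
            return False
        return (time.monotonic() - arrived) < COHERENCE_TIME_S

buf = QuantumBuffer()
buf.store("qubit-42")
print("forwardable:", buf.retrieve("qubit-42"))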


Research paper: DOI: 10.1103/PhysRevLett.111.033601 – “Stopped Light and Image Storage by Electromagnetically Induced Transparency up to the Regime of One Minute”

 
