Aug 9, 2013

IBM creates Corelet programming language to make software that operates like the human brain


At the International Joint Conference on Neural Networks held this week in Dallas, researchers from IBM have taken the wraps off a new software front-end for its neuromorphic processor chips. The ultimate goal of these most recent efforts is to recast Watson-style cognitive computing, and its recent successes, into a decidedly more efficient architecture inspired by the brain. As we shall see, the researchers have their work cut out for them — building something that on the surface looks like the brain is a lot different from building something that acts like the brain.
Dharmendra Modha, head researcher of IBM’s Cognitive Computing group, announced last November that his group had simulated over 500 billion neurons using the Blue Gene/Sequoia supercomputer at the Lawrence Livermore National Laboratory (LLNL). His claims, however, continue to draw criticism from others who say that the representation of these neurons is too simplistic. In other words, the model neurons generate spikes like real neurons, but the underlying activity that creates those spikes is not modeled in sufficient detail, nor are the details of the connections between them.


To interact with IBM’s “True North” neural architectural simulator, the researchers have developed an object-oriented language they call Corelet. Building blocks, called corelets, are composed from the 256-neuron neuromorphic cores and designed to perform specific tasks. The “True North” library already includes some 150 pre-designed corelets that do things like detect motion or image features, or even learn to play games. To play pong, for example, a layer of input neurons would be given information about the “ball” and “paddle” motions, an output layer would send paddle motion updates, and intermediate layers would perform some indeterminate processing.
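IBM hasn’t published Corelet’s full API, so the class names and parameters below are entirely hypothetical. Still, the compositional idea (wiring input, intermediate, and output neuron groups into a reusable block, pong-controller style) can be sketched in ordinary Python:

```python
import random

class NeuronGroup:
    """A bundle of simple threshold neurons (a hypothetical stand-in for a 256-neuron core)."""
    def __init__(self, size, threshold=1.0):
        self.size = size
        self.threshold = threshold
        self.potential = [0.0] * size

    def inject(self, currents):
        # accumulate input current into each neuron's membrane potential
        for i, c in enumerate(currents):
            self.potential[i] += c

    def step(self):
        # neurons at or above threshold spike and reset; others retain their charge
        spikes = [p >= self.threshold for p in self.potential]
        self.potential = [0.0 if s else p for p, s in zip(self.potential, spikes)]
        return spikes

class Corelet:
    """A reusable block: input -> intermediate -> output, like the pong controller described above."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = random.Random(seed)
        self.inp = NeuronGroup(n_in)
        self.hid = NeuronGroup(n_hidden)
        self.out = NeuronGroup(n_out)
        # random synaptic weights stand in for whatever a trained corelet would use
        self.w_ih = [[rng.uniform(0.3, 1.0) for _ in range(n_hidden)] for _ in range(n_in)]
        self.w_ho = [[rng.uniform(0.3, 1.0) for _ in range(n_out)] for _ in range(n_hidden)]

    def tick(self, stimulus):
        self.inp.inject(stimulus)
        in_spikes = self.inp.step()
        self.hid.inject([sum(w[j] for w, s in zip(self.w_ih, in_spikes) if s)
                         for j in range(self.hid.size)])
        hid_spikes = self.hid.step()
        self.out.inject([sum(w[k] for w, s in zip(self.w_ho, hid_spikes) if s)
                         for k in range(self.out.size)])
        return self.out.step()
```

A real corelet would of course map onto 256-neuron hardware cores with learned synaptic weights rather than random ones; the sketch only illustrates the layered, composable structure.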


The problem with assigning specific functional tasks to specific cores is that a further rift with real brains is introduced — a rift even beyond the simplicity of the models of individual neurons. Real neural networks don’t just do one thing, but many simultaneously. I think that if the researchers were seriously attempting to capture particular functions of real brains they would not be building complex million- or billion-neuron systems that look like the image above. Instead, they would be building rather more specific systems composed of just a handful of richly modeled neurons that mimic actual functions of real nervous systems — like, for example, the spinal reflex circuit:

Like a pong controller, a simple network such as this would have inputs, outputs, and intermediate neurons, but unlike pong, the spiking capability and activity would bear traceable relevance to the task at hand. Systems of neurons built on top of a circuit, like a reflex arc, could be added later — but without the underlying relevance to the real world, they are not only meaningless but also impossible to comprehend. If, however, researchers insist on jumping right away to massive neuron count models, perhaps we might suggest a thought experiment to probe how arbitrary networks might be functionally organized.


If an individual neuron is going to generate meaningful spikes, the consensus is that the neuron needs to have some minimum level of complexity. For the thought experiment then, let a neuron be represented by a whole person, and the spike of the neuron be the clap of the person. When assembled into a room, we know from general experience that a large group of clapping human neurons can quickly evolve synchronized clapping from initially random applause within a few seconds — no big deal. We might imagine the crowd of clappers could also quickly provide an answer to the question 2+2, by similarly organizing beats of four. The magic, and relevance, for designing network chips comes in when you begin to add the specializations of input and output.





IBM Watson: Now IBM wants to produce a system that derives its intelligence from thinking, rather than merely searching through vast amounts of data.
Instead of presenting the simple 2+2 query to the whole network, we can present it to just a few input units, who transmit the message in whatever way they see fit. Simultaneously, different queries can be presented to other input units. The output units can then be instructed to listen for messages and transmit outputs in the way that they see fit. The key addition we would require here is that the intermediate human units can move about some limited space to better hear activity of their particular choosing. Finally, we would need to add some driving energetic force to incentivize any behavior in the first place, and also to limit the number of claps or spikes they can produce. An example of this organizing incentive might be jelly beans that are sprinkled onto the hungry crowd as it moves about.

If the amount of clapping an individual can perform is directly confined by the amount of jelly bean energy each unit can accrue, the energy-incentive loop is closed and we have all the essentials for a neural computing system. If, instead of trying to model extremely complex neurons in an attempt to capture and comprehend network behaviors, we simply created the real network just described and recorded its behavior for observation, I would offer that we would gain greater insight into network dynamics relevant to real brains than from any attempt using billions of simple processing elements.
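For what it’s worth, the closed loop just described is easy to prototype in software. The sketch below uses entirely made-up parameters (clap cost, jelly-bean energy, crowd size) and is no substitute for the real human network, but it captures the essential constraint: total clapping is bounded by total jelly-bean energy.

```python
import random

def simulate_clappers(n_units=50, steps=200, clap_cost=1.0, bean_energy=3.0, seed=42):
    """Toy model: each unit claps when it has energy and either hears enough
    neighbors clapping or feels a random urge; jelly beans land on random
    units each step, replenishing the energy budget."""
    rng = random.Random(seed)
    energy = [2.0] * n_units
    clapped_last = [rng.random() < 0.5 for _ in range(n_units)]  # random initial applause
    history = []
    for _ in range(steps):
        heard = sum(clapped_last)
        clapping = []
        for i in range(n_units):
            wants = heard > n_units // 3 or rng.random() < 0.1  # imitate the crowd, or clap at random
            can = energy[i] >= clap_cost                        # no energy, no spike
            if wants and can:
                energy[i] -= clap_cost
                clapping.append(True)
            else:
                clapping.append(False)
        # sprinkle jelly beans onto a handful of random units
        for _ in range(n_units // 5):
            energy[rng.randrange(n_units)] += bean_energy
        clapped_last = clapping
        history.append(sum(clapping))
    return history
```

Recording `history` from a run like this (or, better, from a real room of people) is the kind of observable network dynamic the thought experiment is after.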

When we realize that each individual neuron, each cell, bears inside the full survival instinct and repertoire that enabled its amoeba-like forebear to thrive and reproduce on its own in a hostile world, we have some appreciation for the repurposed power possessed in each one. Ignoring the complexity of individual neurons beyond simple electrical behavior is folly if we desire to build computing systems with the power of the brain.

New flexible micro-supercapacitor paves way for tiny electronics



Before the age of the smartphone, mobile phone manufacturers were locked in an arms race to see who could create a smaller, but still usable device. Smartphones came along, and now the arms race is more or less focused on how big a screen can be while still being accepted by consumers. During this arms race, the way to keep phones from being unwieldy is to make them thin. Researchers have created a new supercapacitor so small that, if it were used in smartphones, it could make the devices even thinner and lighter than they are now.

Normally, electrodes in supercapacitors are made from carbon or polymers that can conduct electricity with ease. Researchers at the Leibniz Institute for Solid State and Materials Research in Dresden, led by Oliver G. Schmidt, turned away from the usual electrode materials and instead used manganese dioxide — an unconventional choice, because the material isn’t known for being adept at conducting electricity. However, the material is cheaper than the usual electrodes, and also not as harmful to the environment. So, in order to make manganese dioxide conductive, the team turned to something a supervillain might do to a captive hero: vaporize it with an electron beam.


Once the manganese was vaporized and Lex Luthor finally defeated Superman, the atoms in the vaporized gas reformed into thin, flexible strips. The strips were still only as conductive as the non-gaseous manganese dioxide, so the team connected thin layers of gold to the films, increasing the conductivity. The team found that the new micro-supercapacitor was not only flexible enough to save some space and shrink down the size of mobile devices, but that it also stored more energy and provided more power per unit volume than competing supercapacitors.

Though the manganese is cheaper than a carbon-based electrode, adding the thin gold strips — which are expensive — counteracts the reduced cost. So, the team is currently working on a way to reduce the cost once again. This likely means the researchers will have to turn to a material other than gold somewhere down the road, or they could perhaps conduct an aggressive takeover of the Cash4Gold business and accrue the needed gold that way.

In essence, the flexible supercapacitor works, but not for the team’s initial goals. The researchers aimed to create a flexible supercapacitor with a high energy density, but at a low cost. Adding the gold to help achieve that high energy density, unfortunately, increased the cost beyond an acceptable amount. If there’s something to take from this experiment, though, it’s that the supercapacitor itself was a success, and bringing it to the consumer market is now more about cost than anything else.
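The energy figures in play follow directly from the standard capacitor relation E = ½CV². The numbers below are hypothetical placeholders, not measurements from the Dresden team, but they show how capacitance, voltage window, and volume combine into an energy density:

```python
# Energy stored in any capacitor: E = 1/2 * C * V^2
capacitance_f = 0.02   # hypothetical: 20 mF for a micro-supercapacitor
voltage_v = 1.2        # hypothetical operating voltage window
volume_cm3 = 0.001     # hypothetical: a 1 mm^3 device footprint

energy_j = 0.5 * capacitance_f * voltage_v ** 2
energy_density = energy_j / volume_cm3   # joules per cubic centimeter
print(f"Stored energy: {energy_j * 1000:.1f} mJ, density: {energy_density:.1f} J/cm^3")
```

The volumetric denominator is why shrinking the device matters so much: halving the volume at the same capacitance and voltage doubles the energy density.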


Research paper: DOI: 10.1039/C3EE41286E – “On chip, all solid-state and flexible micro-supercapacitors with high performance based on MnOx/Au multilayers”


Marvel at the most detailed photos of the Sun ever taken


Astronomers at the Big Bear Solar Observatory have captured the most detailed visible-light images of the Sun ever taken. In the image above, you can see the terrifying detail of a sunspot, where intense magnetic activity prevents the convective flow of superheated plasma. In the image below, the Sun’s photosphere (the surface region that emits light) shows off its “ultrafine magnetic loops.”

These images were captured by the New Solar Telescope (NST), a 1.6-meter clear-aperture Gregorian telescope equipped with the Visible Imaging Spectrometer (VIS). With its huge aperture and modern imaging sensor, the NST is the largest and best solar telescope on the planet — and indeed, it was built specifically by the New Jersey Institute of Technology (NJIT) to study the activity of the Sun. Scientific observations began in 2009, but it seems it took more than four years for the conditions to be just right to capture these photos.

In the image at the top of this story, you see the most detailed photo ever taken of a sunspot. The dark patch in the middle is the umbra, with the “petals” forming the penumbra. The texture around the outside is what most of the surface of the Sun looks like. As with most solar phenomena, we don’t know exactly what causes a sunspot, but it appears to be some function of the Sun’s intense magnetic fields and differential rotation (where internal regions of the Sun rotate at different speeds).
Basically, something causes the magnetic field to collapse in on itself. This intense magnetic field is vertical (normal to the Sun’s surface), pointing straight down, blocking the Sun’s normal convection and in turn reducing the sunspot’s surface temperature. This is why sunspots appear darker — a sunspot might be just 2,700-4,200 degrees Celsius, while a normal patch of the Sun is around 5,500 C. The lighter, petal-like regions are where the magnetic field is more inclined, allowing for some convection to occur.
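The brightness difference follows from the Stefan-Boltzmann law, which says radiated flux scales with the fourth power of absolute temperature. A quick back-of-the-envelope check using the temperatures quoted above shows why the umbra looks black by comparison:

```python
# Stefan-Boltzmann: radiated flux scales as T^4 (temperatures in kelvin)
T_photosphere = 5500 + 273.15  # ~5,500 C, a normal patch of the Sun
T_umbra = 3500 + 273.15        # mid-range of the 2,700-4,200 C quoted above

relative_brightness = (T_umbra / T_photosphere) ** 4
print(f"The umbra emits roughly {relative_brightness:.0%} of the photosphere's flux")
```

Even at roughly a fifth of the flux, a sunspot is still blindingly bright in absolute terms; it only looks dark against its hotter surroundings.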
 
 
The second image, above, appears to be a close-up of the Sun’s photosphere, captured through the Visible Imaging Spectrometer’s H-alpha filter (red light produced by energetic hydrogen atoms). The lines/loops of hydrogen plasma are created by magnetic fields that emanate from the Sun’s inner layers. Basically, it gives us a better idea of just how crazy the surface of the Sun is. In the image below, captured by NASA’s TRACE space telescope, you can see what a sunspot looks like from another angle.


The New Solar Telescope, and space-based telescopes such as NASA’s STEREO, are of vital scientific importance because they give us more data about one of the most significant objects in the universe: the Sun. By learning more about sunspots, solar flares, and other heliophysical phenomena, we stand a better chance of weathering whatever the Sun throws at us and prospering here on Earth.

Aug 8, 2013

ReRAM, the memory tech that will eventually replace NAND flash, finally comes to market



A new memory technology company, Crossbar, has broken cover with a new ReRAM design it claims will allow for commercialization of the technology. The company’s claims aren’t strictly theoretical; today’s announcement reveals that the design firm has successfully implemented the architecture in silicon. While that’s not the same as initiating mass production, it’s an important step in the search for a NAND flash replacement.

ReRAM (also known as RRAM) works by creating resistance rather than directly storing charge. An electric current is applied to a material, changing the resistance of that material. The resistance state can then be measured and a “1” or “0” is read as the result. Much of the work done on ReRAM to date has focused on finding appropriate materials and measuring the resistance state of the cells. ReRAM designs are low voltage, endurance is far superior to flash memory, and the cells are much smaller — at least in theory.
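Crossbar hasn’t disclosed its cell design, so the resistance values below are purely illustrative, but the resistance-as-bit principle can be modeled in a few lines:

```python
class ReRAMCell:
    """Toy model of a resistive memory cell: a SET pulse forms a conductive
    filament (low resistance reads as '1'), a RESET pulse ruptures it
    (high resistance reads as '0'). Values are illustrative only."""
    LOW_OHMS = 1e3
    HIGH_OHMS = 1e6
    READ_THRESHOLD = 1e4

    def __init__(self):
        self.resistance = self.HIGH_OHMS  # cells start in the erased (high-resistance) state

    def write(self, bit):
        # a voltage pulse of one polarity sets the cell; the opposite polarity resets it
        self.resistance = self.LOW_OHMS if bit else self.HIGH_OHMS

    def read(self):
        # a small sense current measures resistance without disturbing the stored state
        return 1 if self.resistance < self.READ_THRESHOLD else 0
```

Note that the read is non-destructive, unlike DRAM: sensing the resistance doesn’t change it, which is part of ReRAM’s appeal as a storage technology.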
 
Crossbar memory characteristics.


Crossbar has been working to turn theoretical advantages into practical ones. The company’s design is ready for mass production but will target low-density applications for now — think embedded microcontrollers. Demonstrating the capabilities of the part now is important to grabbing investor attention. Crossbar might be a small player, but it’s a small player in a field that’s attracting a lot of prominent attention from major companies; SK Hynix, Panasonic, and HP are all working on ReRAM designs. Long-term, the same principles that make ReRAM function might allow its use as a DRAM replacement, though mass storage ReRAM and ReRAM-DRAM might use different architectures, with one emphasizing long-term storage and the other accelerating random access.

Flash in the pan

ReRAM is the most likely candidate for replacing NAND flash, and make no mistake — we need a NAND flash replacement. Sub-20nm NAND roadmaps are peppered with references to 1X and 1Y technology as a means of implying node scaling when lower nodes aren’t actually on the table. The broad plan is to rely on 3D die stacking as a means of improving cost-per-GB, as opposed to transitioning to smaller 2D process geometries. Flash will still scale to 14nm within the next few years, but every smaller process node sharply increases the amount of ECC required, degrades longevity, and requires greater over-provisioning and more intelligent recovery schemes at the controller level. This, in turn, slows down performance and increases die sizes. SLC (single-level cell) NAND doesn’t really suffer from these issues, but it’s inordinately expensive.



We don’t know where, exactly, the limit is, but the ITRS predicts that NAND below 7nm, in 2D or 3D form, isn’t going to happen, period. That’s more or less when CMOS itself runs out of steam, and even getting down to 7nm is currently dubious given the troubles with EUV lithography and the advent of double/quad patterning. The endurance issue will eventually bite into enterprise and database use, or force those industries to adopt SLC NAND. The bottom line is that regardless of when it happens, NAND scaling isn’t going to continue indefinitely.


The current hope is that ReRAM will be ready for widescale adoption by the 2017-2018 timeframe. The first 3D NAND devices are currently expected in 2015, which means commercial ReRAM deployment would begin well before NAND hits its absolute scaling limit. Given the difficulty of ramping an entirely new technology, it wouldn’t surprise us if NAND’s last generations focus primarily on low-end consumer applications, while ReRAM comes in at the top for the enterprise market, where endurance and write requirements are difficult to meet with smaller NAND geometries.

Put in context, then, the work Crossbar is doing to bring ReRAM to market is essential early work towards building the practical standard of the future. Not that ReRAM is guaranteed — there’s always the possibility of a setback, or that another technology will suddenly have a breakthrough moment. But as things stand today, ReRAM appears to be the memory technology with the fewest obstacles standing between it and commercialization as a long-term replacement for NAND.


Jul 31, 2013

Tiny twisted magnets could boost hard drive capacity by 20 times



Quantum physicists at the University of Hamburg have finally worked out how to read and write data using skyrmions — tiny twisted knots of magnetism that could allow for storage densities 20 times greater than today’s hard drives. That could one day mean hard drives that store hundreds of terabytes of data, or, alternatively, fingertip-sized drives that carry a few terabytes.

Since they were first theoretically described in the 1960s by a British physicist called Tony Skyrme (yes, they’re named after him), skyrmions have remained fairly elusive. At the time, skyrmions never really took off, as theoretical physicists were more interested in quarks and string theory. In more recent years, though, as our tools for observing and testing quantum effects have improved, the skyrmion has come back into vogue.

Basically, a skyrmion is a twisted vortex of magnetized palladium atoms. The magnetization of an atom is defined by the spin of its electrons — depending on which way they spin, the magnetic pole is either at the top or the bottom of the atom (like a tiny little bar magnet). In general, magnetized atoms align in one direction, causing macroscopic samples to exhibit the same behavior — i.e. an actual bar magnet. In a skyrmion, however, the atoms don’t align; instead, they form a twisted vortex (pictured above).

Due to a property known as topological stability, these vortices are surprisingly hardy. Much in the same way that it’s impossible to remove the twist from a Möbius strip without destroying it completely, these skyrmions can be pushed around, but the vortex remains. Most importantly, though, the topological stability of skyrmions persists at tiny scales. In this case, the researchers were able to create stable vortices consisting of just 300 atoms — a few nanometers across. In conventional hard drives, where conventional ferromagnetism is used and there’s no topological stability, each magnetic site (i.e. bit) needs to be much larger (tens of nanometers), otherwise neighboring bits can interfere with and corrupt each other.
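The density arithmetic is easy to sanity-check: areal density scales as the inverse square of the linear bit pitch. Using illustrative pitches of tens of nanometers for a conventional bit and a few nanometers for a skyrmion:

```python
import math

conventional_pitch_nm = 25.0  # illustrative: "tens of nanometers" per conventional bit
skyrmion_pitch_nm = 5.0       # illustrative: "a few nanometers" per ~300-atom vortex

# areal density goes as 1 / pitch^2, so the gain is the squared pitch ratio
density_gain = (conventional_pitch_nm / skyrmion_pitch_nm) ** 2
print(f"Areal density gain: ~{density_gain:.0f}x")

# conversely, the quoted 20x density figure implies this linear shrink:
print(f"A 20x density gain needs a {math.sqrt(20):.1f}x smaller bit pitch")
```

The quoted 20x figure thus sits comfortably inside what a shrink from tens of nanometers to a few nanometers would allow.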


The researchers at the University of Hamburg, led by Kirsten von Bergmann, used a scanning tunneling microscope (STM) to create and destroy skyrmions. By using the tip of the STM to apply a stream of “twisted” (polarized) electrons, the north-south-aligned palladium atoms can be converted into skyrmions (the black dots in the video above). By applying electrons with the opposite spin, the skyrmions can be deleted.

This is the first time that skyrmions have been created and deleted since their theoretical conception in the ’60s — but we’re still a long way away from skyrmion-based 100-terabyte hard drives. Scanning tunneling microscopes are room-sized devices, and in this case the palladium had to be cooled with liquid helium (4.2 Kelvin, -269 Celsius) before the skyrmions would play ball. In the short term, heat-assisted magnetic recording (HAMR) promises massive improvements to hard drive density, and it should be ready for commercial deployment soon. Still, as computers get ever smaller, and data storage requirements grow exponentially, skyrmions in particular and topological stability in general will likely be the focus of lots of future research.

Research paper: DOI: 10.1126/science.1240573 – “Writing and Deleting Single Magnetic Skyrmions”

Self-organizing ‘giant surfactants’ can take chips below 10nm



In the quest for faster processors that generate less heat, engineers have worked hard over the years to perfect more intricate fabrication procedures. Packing more transistors into a smaller space has allowed computing power to balloon in recent years, but how much further can we go? A team of researchers at the University of Akron has developed a new type of nanomaterial that could make semiconductors more efficient than ever.

The researchers, led by Dr. Stephen Z.D. Cheng of UA’s College of Polymer Science and Polymer Engineering, call the material a giant surfactant. While made up of individual nanoparticles, the giant surfactant takes its name from the fact that the assembled molecule is similar in scale to run-of-the-mill macromolecules. However, giant surfactants are of interest because they retain their surfactant functionality on the nanoscale.

A surfactant is a general term for any compound that can lower the surface tension of a liquid. A great many liquids have surface tension, but water is the one people are most familiar with. Surface tension is the force that allows water to form droplets rather than simply flow outward. Surfactants are important in making semiconductors because the fluids used in various production steps have surface tension, and controlling that quality is vital to guiding them into narrow trenches and other small features. Without very precise control, transistors can’t be placed very close together.
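The scale of the problem can be estimated with the Young-Laplace relation, ΔP = 2γ/r, which gives the capillary pressure a liquid must overcome at a curved interface of radius r. At transistor-scale radii the pressures are enormous, which is why lowering the surface tension γ matters so much:

```python
# Young-Laplace capillary pressure at a curved liquid interface: dP = 2 * gamma / r
gamma_water = 0.072   # surface tension of water at room temperature, N/m
radius_m = 10e-9      # an illustrative 10 nm feature radius

delta_p = 2 * gamma_water / radius_m   # pressure in pascals
print(f"Capillary pressure at 10 nm: {delta_p / 1e6:.1f} MPa")

# halving the surface tension with a surfactant halves that pressure barrier
print(f"With gamma halved: {delta_p / 2 / 1e6:.1f} MPa")
```

Tens of megapascals is hundreds of atmospheres, so even a modest reduction in surface tension makes a real difference to how fluids behave inside nanoscale trenches.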

A giant surfactant could revolutionize the production of electronics by allowing engineers to build considerably denser chips. The University of Akron researchers used nanopatterning to construct the giant surfactant structures from the nanoscale components. The nanomolecules do most of the work, though — nanopatterning is a kind of self-assembly.

Giant surfactants form a thin-film, organized lithographic pattern on semiconductor crystals, which acts as a guide for the production process. Because the molecules self-assemble, the structure is incredibly consistent, which could mean less waste from faulty transistors in the final product.
 


Current semiconductor manufacturing processes have reached 22nm, a measure of the size of the smallest features on a chip. Intel’s Ivy Bridge and Haswell are both based on the 22nm process, whereas the most recent ARM CPU cores are still at 32nm and 28nm. It is not clear that Moore’s law will hold up much longer with current materials, but the researchers believe giant surfactants could make continued advancement possible. In fact, Dr. Cheng claims that giant surfactants could enable sub-10nm spacing of components.

As computing increasingly moves to mobile devices, having smaller, more powerful processors is of high importance. The lattice formed by giant surfactants provides a ready-made template for creating the necessary chips. The team hopes this is not just a discovery of great scientific interest, but one of enormous practical importance as well. The University of Akron Research Foundation is seeking to patent the materials developed by Dr. Cheng and his colleagues.


New material identified by US Navy could revolutionize computer chip heat dissipation


 
One of the greatest challenges in semiconductor design is finding ways to move waste heat out of a structure and into whatever dissipation area is designed for it. This issue doesn’t get a lot of play — CPU and system cooling, when discussed, tends to focus on finding more efficient ways to remove heat from a heatsink lid or the top of the die. The question of how efficiently heat can be transferred to that point is just as important as what happens to it afterwards. Researchers working at the US Naval Research Laboratory in partnership with Boston College have found a new, extremely efficient transmission method. The secret? Cubic boron arsenide.

According to the research team, the room-temperature thermal conductivity of boron arsenide (BAs) is higher than 2,000 W·m⁻¹·K⁻¹. That’s on a level with diamond or graphite, which have the highest bulk values known but are both extremely difficult to work with or integrate into a product. Mass synthesis and precise application of diamond and graphite are both difficult, which limits practical uses of their capabilities. Boron arsenide could prove more tractable.
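To see what that number buys in practice, Fourier’s law (q = kΔT/L) gives the steady-state conductive heat flux through a slab. A sketch comparing ordinary copper (roughly 400 W·m⁻¹·K⁻¹) with the predicted BAs value, under an assumed temperature drop and thickness:

```python
def heat_flux(k, delta_t, thickness_m):
    """Fourier's law for steady-state conduction: q = k * dT / L, in W/m^2."""
    return k * delta_t / thickness_m

k_copper = 400.0  # W/(m*K), typical bulk copper
k_bas = 2000.0    # W/(m*K), the predicted value for cubic BAs

# same assumed 10 K drop across a 0.5 mm heat spreader
for name, k in [("copper", k_copper), ("BAs", k_bas)]:
    print(f"{name}: {heat_flux(k, 10.0, 0.5e-3) / 1e6:.0f} MW/m^2")
```

For the same geometry and temperature drop, the flux scales linearly with k, so BAs would move five times the heat of copper, or, equivalently, hold the same flux at one-fifth the temperature rise.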

The reason boron arsenide conducts heat so effectively comes down to vibrational waves (phonons) within the lattice structure. In a conventional metal, heat is carried by electrons. Since electrons also carry an electrical charge, there’s a correlation between a metal’s thermal conductivity and its electrical conductivity at room temperature. Metals like copper and aluminum, which transmit heat well, also tend to carry electricity fairly well, particularly when compared to iron, which is a poor carrier, or lead, which is basically the grumpy llama of the metallic world.

The work being done here is theoretical and based on modeling the known lattice structure of boron arsenide, but the math checks out. The lattice structure and known properties of semiconductors, including ongoing work in the III-V group of which boron is part, point to potential applications in solar cells and radiation-hardened circuits. One of the other advantages of boron, unlike a material like diamond, is that III-V semiconductor manufacturing is already an area of ongoing research. Boron can be bonded to gallium arsenide (BGaAs), though data on its efficacy in this configuration is somewhat limited.

Should the researchers’ prediction prove valid, there are undoubtedly uses for this capability. Gallium arsenide is a tricky substrate to work with, which is one reason why silicon has remained the industry standard, but multiple manufacturers are expected to deploy III-V materials in coming years as CMOS scaling becomes ever more difficult. Moving heat away from the transistor could allow for higher performance and reduce the need for cooling in any application where heat buildup is detrimental to product function (which is to say, most of them). Boron has also earned scrutiny in recent years thanks to the way it partners up with graphene. As shown in the image at the top of the story, boron nitride and graphene can be grown side by side, creating nanowires of graphene that are isolated by the boron. These types of applications suggest a great deal more attention may be focused on boron in the future, particularly if production can be ramped to industrial levels.

Source: extremetech.com



 

Copyright © 2014 Vivarams. All rights reserved.
