Search This Blog

Tuesday, June 19, 2012


Supercomputer


The Blue Gene/P supercomputer at Argonne National Lab runs over 250,000 processors using normal data center air conditioning, grouped in 72 racks/cabinets connected by a high-speed optical network.[1]
A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), and later at Cray Research. While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear, and by the end of the 20th century massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm.[2][3]
Systems with a massive number of processors generally take one of two paths: in one approach, e.g. in grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available.[4] In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. The use of multi-core processors combined with centralization is an emerging direction.[5][6] Currently, the IBM Sequoia is the fastest in the world.[7]
Supercomputers are used for highly calculation-intensive tasks such as quantum physics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion).

History

The history of supercomputing goes back to the 1960s when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[8] The CDC 6600, released in 1964, is generally considered the first supercomputer.[9][10]
Cray left CDC in 1972 to form his own company.[11] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became one of the most successful supercomputers in history.[12][13] The Cray-2, released in 1985, was an eight-processor liquid-cooled computer; Fluorinert was pumped through it as it operated. It performed at 1.9 gigaflops and was the world's fastest until 1990.[14]
While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994, with a peak speed of 1.7 gigaflops per processor.[15][16] The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[17][18][19] The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.[20]
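The Message Passing Interface lets cooperating processes on separate nodes exchange data explicitly rather than sharing memory. As a rough illustration of that idea (not real MPI — Python's multiprocessing module stands in for it here, and the partial-sum workload is purely invented for the example), each "node" below computes a piece of a sum and sends its result back to a collector:

```python
# Toy message-passing sketch: worker processes compute partial results
# and send them over pipes, in the spirit of MPI-style communication.
from multiprocessing import Process, Pipe

def node(rank, conn):
    # Each node sums its own slice of the range, then sends (rank, result).
    partial = sum(range(rank * 100, (rank + 1) * 100))
    conn.send((rank, partial))
    conn.close()

def run_cluster(num_nodes=4):
    parents, procs = [], []
    for rank in range(num_nodes):
        parent, child = Pipe()
        p = Process(target=node, args=(rank, child))
        p.start()
        parents.append(parent)
        procs.append(p)
    # Collect one message per node, then wait for all nodes to finish.
    results = dict(conn.recv() for conn in parents)
    for p in procs:
        p.join()
    return sum(results.values())

if __name__ == "__main__":
    total = run_cluster(4)   # same answer as sum(range(400))
```

Real MPI programs follow the same pattern at much larger scale: explicit sends and receives between ranks, with the interconnect (mesh, torus, InfiniBand) carrying the messages.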

Hardware and architecture

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance.[8] However, in time the demand for increased computational power ushered in the age of massively parallel systems.
While the supercomputers of the 1970s used only a few processors, in the 1990s, machines with thousands of processors began to appear and by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm. Supercomputers of the 21st century can use over 100,000 processors (some being graphic units) connected by fast connections.[2][3]
Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers.[21][22][23] The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components.[24] There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures.[14][25]
The CPU share of TOP500
Systems with a massive number of processors generally take one of two paths: in one approach, e.g. in grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available.[4] In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect becomes very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects.[26][27] The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.[5][6]
As the price/performance of general-purpose graphics processors (GPGPUs) has improved, a number of petaflop supercomputers such as Tianhe-I and Nebulae have started to rely on them.[28] However, other systems such as the K computer continue to use conventional processors such as SPARC-based designs, and the overall applicability of GPGPUs in general-purpose high-performance computing has been the subject of debate: while a GPGPU may be tuned to score well on specific benchmarks, its applicability to everyday algorithms may be limited unless significant effort is spent tuning the application towards it.[29] Nevertheless, GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by replacing CPUs with GPUs.[30][31][32]
A number of "special-purpose" systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, achieving higher price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle,[33] Deep Blue,[34] and Hydra[35] for playing chess; Gravity Pipe for astrophysics;[36] MDGRAPE-3 for protein structure computation in molecular dynamics;[37] and Deep Crack[38] for breaking the DES cipher.

Robotics

Robotics is the branch of technology that deals with the design, construction, operation, structural disposition, manufacture and application of robots,[1] and with the computer systems for their control, sensory feedback, and information processing.[2] These technologies deal with automated machines that can take the place of humans in hazardous environments or manufacturing processes, or that simply resemble humans. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics.
The concept and creation of machines that could operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century.[3] Throughout history, robotics has often been seen to mimic human behavior, and often to manage tasks in a similar fashion. Today, robotics is a rapidly growing field, as we continue to research, design, and build new robots that serve various practical purposes, whether domestically, commercially, or militarily. Many robots do jobs that are hazardous to people, such as defusing bombs, exploring shipwrecks, and clearing mines.
                               

Tracked robots
Tank tracks provide even more traction than a six-wheeled robot. Tracked wheels behave as if they were made of hundreds of wheels, and are therefore very common for outdoor and military robots, where the robot must drive on very rough terrain. However, they are difficult to use indoors, such as on carpets and smooth floors. Examples include NASA's Urban Robot "Urbie".

Snaking
Several snake robots have been successfully developed. Mimicking the way real snakes move, these robots can navigate very confined spaces, meaning they may one day be used to search for people trapped in collapsed buildings. The Japanese ACM-R5 snake robot can even navigate both on land and in water.
                         

Human-robot interaction


Kismet can produce a range of facial expressions.
If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they will be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, and so any interface will need to be extremely intuitive. Science fiction authors also typically assume that robots will eventually be capable of communicating with humans through speech, gestures, and facial expressions, rather than a command-line interface. Although speech would be the most natural way for the human to communicate, it is unnatural for the robot. It will probably be a long time before robots interact as naturally as the fictional C-3PO.

Speech recognition

Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech.[84] The same word, spoken by the same person, may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, etc. It becomes even harder when the speaker has a different accent.[85] Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first "voice input system" which recognized "ten digits spoken by a single user with 100% accuracy" in 1952.[86] Currently, the best systems can recognize continuous, natural speech, up to 160 words per minute, with an accuracy of 95%.[87]
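One classic way to cope with the timing variability described above — the same word spoken faster or slower — is dynamic time warping (DTW), which aligns two sequences before comparing them. The sketch below is a deliberately simplified illustration on one-dimensional toy "feature" sequences; real recognizers work on multidimensional acoustic features (and modern ones use statistical or neural models instead):

```python
# Minimal dynamic time warping (DTW) distance between two sequences.
# DTW lets a slow and a fast rendition of the same word align frame by
# frame, so their distance stays small despite the speed difference.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best alignment cost of a[:i] against b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match frames
    return cost[n][m]

# A word spoken slowly is still closer to its fast rendition
# than to a different word entirely.
slow = [0, 0, 1, 1, 2, 2, 3, 3]   # stretched version of `fast`
fast = [0, 1, 2, 3]
other = [3, 3, 2, 1]
assert dtw_distance(slow, fast) < dtw_distance(slow, other)
```

Here `dtw_distance(slow, fast)` is zero, because every frame of the slow sequence aligns exactly with a frame of the fast one.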

Robotic voice

Other hurdles exist when allowing the robot to use voice for interacting with humans. For social reasons, synthetic voice proves suboptimal as a communication medium,[88] making it necessary to develop the emotional component of robotic voice through various techniques.[89][90]

Gestures

One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. In both of these cases, making hand gestures would aid the verbal descriptions. In the first case, the robot would be recognizing gestures made by the human, and perhaps repeating them for confirmation. In the second case, the robot police officer would gesture to indicate "down the road, then turn right". It is likely that gestures will make up a part of the interaction between humans and robots.[91] A great many systems have been developed to recognize human hand gestures.[92]

Facial expression

Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and soon may be able to do the same for humans and robots. Robotic faces have been constructed by Hanson Robotics using their elastic polymer called Frubber, allowing a large number of facial expressions due to the elasticity of the rubber facial coating and embedded subsurface motors (servos).[93] The coating and servos are built on a metal skull. A robot should know how to approach a human, judging by their facial expression and body language. Whether the person is happy, frightened, or crazy-looking affects the type of interaction expected of the robot. Likewise, robots like Kismet and the more recent addition, Nexi,[94] can produce a range of facial expressions, allowing them to have meaningful social exchanges with humans.[95]

Artificial emotions

Artificial emotions can also be generated, composed of a sequence of facial expressions and/or gestures. As can be seen from the movie Final Fantasy: The Spirits Within, the programming of these artificial emotions is complex and requires a large amount of human observation. To simplify this programming in the movie, presets were created together with a special software program. This decreased the amount of time needed to make the film. These presets could possibly be transferred for use in real-life robots.

Personality

Many of the robots of science fiction have a personality, something which may or may not be desirable in the commercial robots of the future.[96] Nevertheless, researchers are trying to create robots which appear to have a personality:[97][98] i.e. they use sounds, facial expressions, and body language to try to convey an internal state, which may be joy, sadness, or fear. One commercial example is Pleo, a toy robot dinosaur, which can exhibit several apparent emotions.[99]

Control


Puppet Magnus, a robot-manipulated marionette with complex control systems
The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases – perception, processing, and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). This information is then processed to calculate the appropriate signals to the actuators (motors) which move the mechanical structure.
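The three phases above form a loop that runs continuously. The sketch below is a minimal, hypothetical illustration of that sense-process-act cycle — a one-dimensional "robot" driven toward a goal position with a simple proportional controller; the gain, time step, and state layout are illustrative choices, not any real robot's API:

```python
# A minimal sense -> process -> act control loop.

def sense(state):
    # Perception: read a (here noiseless) position sensor.
    return state["position"]

def process(measurement, goal, gain=0.5):
    # Processing: compute an actuator command from the error.
    error = goal - measurement
    return gain * error            # proportional control signal

def act(state, command, dt=1.0):
    # Action: drive the actuator, moving the mechanical structure.
    state["position"] += command * dt

def control_loop(start, goal, steps=20):
    state = {"position": start}
    for _ in range(steps):
        m = sense(state)
        u = process(m, goal)
        act(state, u)
    return state["position"]
```

With a gain of 0.5 the position error halves every step, so `control_loop(0.0, 10.0)` converges very close to the goal after 20 iterations.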
The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands. Sensor fusion may first be used to estimate parameters of interest (e.g. the position of the robot's gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction) is inferred from these estimates. Techniques from control theory convert the task into commands that drive the actuators.
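As a concrete taste of sensor fusion, one simple special case of the estimation techniques mentioned above is inverse-variance weighting: two noisy readings of the same quantity are combined so that the more reliable sensor counts for more. The function and numbers below are an illustrative sketch, not a production estimator (a real system would more likely use a Kalman filter):

```python
# Fuse two noisy readings of the same quantity (e.g. a gripper position)
# by inverse-variance weighting: lower variance => more weight.

def fuse(reading_a, var_a, reading_b, var_b):
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    estimate = (w_a * reading_a + w_b * reading_b) / (w_a + w_b)
    fused_variance = 1.0 / (w_a + w_b)   # fused estimate is more certain
    return estimate, fused_variance

# Sensor A says 10.0 (variance 1.0); sensor B says 12.0 (variance 4.0).
est, var = fuse(10.0, 1.0, 12.0, 4.0)
# The estimate sits between the readings, closer to the trusted sensor,
# and the fused variance is smaller than either sensor's alone.
assert 10.0 < est < 12.0 and var < 1.0
```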
At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a "cognitive" model. Cognitive models try to represent the robot, the world, and how they interact. Pattern recognition and computer vision can be used to track objects. Mapping techniques can be used to build maps of the world. Finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For example, a planner may figure out how to achieve a task without hitting obstacles, falling over, etc.

Autonomy levels


TOPIO, a humanoid robot, played ping pong at Tokyo IREX 2009.[100]
Control systems may also have varying levels of autonomy.
  1. Direct interaction is used for haptic or tele-operated devices, and the human has nearly complete control over the robot's motion.
  2. Operator-assist modes have the operator commanding medium-to-high-level tasks, with the robot automatically figuring out how to achieve them.
  3. An autonomous robot may go for extended periods of time without human interaction. Higher levels of autonomy do not necessarily require more complex cognitive capabilities. For example, robots in assembly plants are completely autonomous, but operate in a fixed pattern.
Another classification takes into account the interaction between human control and the machine motions.
  1. Teleoperation. A human controls each movement, each machine actuator change is specified by the operator.
  2. Supervisory. A human specifies general moves or position changes and the machine decides specific movements of its actuators.
  3. Task-level autonomy. The operator specifies only the task and the robot manages itself to complete it.
  4. Full autonomy. The machine will create and complete all its tasks without human interaction.
                                                                                                     

Saturday, May 26, 2012

ANGRY BIRDS SPACE

The chase is on! After a giant claw kidnaps their eggs, the Angry Birds chase it into a wormhole and find themselves floating in a strange new galaxy – surrounded by space pigs! Luckily the Angry Birds have super powers of their own…

Angry Birds Space introduces you to new adventures on planets and in zero gravity, resulting in spectacular gameplay ranging from slow-motion puzzles to lightspeed destruction. With brand new birds, brand new superpowers, and a whole galaxy to explore, the sky is no longer the limit!

Also featuring the DANGER ZONE, the most difficult Angry Birds levels ever! Can you master the Danger Zone?

Buy the activation key to play the full version of Angry Birds Space on your PC and get access to content updates!

Minimum System Requirements:

PLEASE NOTE THAT YOU MUST TRY THE FREE DEMO VERSION FIRST.


OS: Windows XP SP2
RAM: 512 MB
CPU: 1 GHz
Graphics: OpenGL 1.3 compatible
Internet connection: required for activation
For the free download: http://shop.angrybirds.com/us/games/pc-games/angry-birds-space-pc-version.html