
Tuesday, June 19, 2012


Supercomputer


The Blue Gene/P supercomputer at Argonne National Laboratory runs over 250,000 processors using normal data-center air conditioning, grouped in 72 racks/cabinets connected by a high-speed optical network.[1]
A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), and later at Cray Research. While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear, and by the end of the 20th century massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm.[2][3]
Systems with a massive number of processors generally take one of two paths. In one approach, e.g. grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is used opportunistically whenever a computer is available.[4] In the other, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. The use of multi-core processors combined with centralization is an emerging direction.[5][6] Currently, IBM's Sequoia is the fastest supercomputer in the world.[7]
Supercomputers are used for highly calculation-intensive tasks such as quantum physics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion).

History

The history of supercomputing goes back to the 1960s when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[8] The CDC 6600, released in 1964, is generally considered the first supercomputer.[9][10]
Cray left CDC in 1972 to form his own company.[11] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became one of the most successful supercomputers in history.[12][13] The Cray-2, released in 1985, was an eight-processor liquid-cooled computer; Fluorinert was pumped through it as it operated. It performed at 1.9 gigaflops and was the world's fastest until 1990.[14]
While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaflops per processor.[15][16] The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[17][18][19] The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface.[20]

Hardware and architecture

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance.[8] However, in time the demand for increased computational power ushered in the age of massively parallel systems.
While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear, and by the end of the 20th century massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm. Supercomputers of the 21st century can use over 100,000 processors (some being graphics units) connected by fast interconnects.[2][3]
Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers.[21][22][23] The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components.[24] There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures.[14][25]
The CPU share of TOP500
Systems with a massive number of processors generally take one of two paths. In one approach, e.g. grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is used opportunistically whenever a computer is available.[4] In the other, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect becomes very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects.[26][27] The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.[5][6]
As the price/performance of general-purpose graphics processors (GPGPUs) has improved, a number of petaflop supercomputers such as Tianhe-I and Nebulae have started to rely on them.[28] However, other systems such as the K computer continue to use conventional processors such as SPARC-based designs, and the overall applicability of GPGPUs in general-purpose high-performance computing applications has been the subject of debate: while a GPGPU may be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent to tune the application towards it.[29] However, GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by replacing CPUs with GPUs.[30][31][32]
A number of "special-purpose" systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, achieving higher price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle,[33] Deep Blue,[34] and Hydra,[35] for playing chess; Gravity Pipe for astrophysics;[36] MDGRAPE-3 for molecular dynamics in protein structure computation;[37] and Deep Crack,[38] for breaking the DES cipher.

Robotics

Robotics is the branch of technology that deals with the design, construction, operation, structural disposition, manufacture and application of robots,[1] and computer systems for their control, sensory feedback, and information processing.[2] These technologies deal with automated machines that can take the place of humans, in hazardous or manufacturing processes, or that simply resemble humans. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics.
The concept and creation of machines that could operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century.[3] Throughout history, robotics has often been seen to mimic human behavior, and often to manage tasks in a similar fashion. Today, robotics is a rapidly growing field, as we continue to research, design, and build new robots that serve various practical purposes, whether domestically, commercially, or militarily. Many robots do jobs that are hazardous to people, such as defusing bombs and exploring shipwrecks and mines.

Tracked robots
Tank tracks provide even more traction than a six-wheeled robot. Tracked wheels behave as if they were made of hundreds of wheels, and are therefore very common for outdoor and military robots, where the robot must drive on very rough terrain. However, they are difficult to use indoors, on surfaces such as carpets and smooth floors. Examples include NASA's Urban Robot "Urbie".

Snaking
Several snake robots have been successfully developed. Mimicking the way real snakes move, these robots can navigate very confined spaces, meaning they may one day be used to search for people trapped in collapsed buildings. The Japanese ACM-R5 snake robot can even navigate both on land and in water.

Human-robot interaction


Kismet can produce a range of facial expressions.
If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they will be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, and so any interface will need to be extremely intuitive. Science fiction authors also typically assume that robots will eventually be capable of communicating with humans through speech, gestures, and facial expressions, rather than a command-line interface. Although speech would be the most natural way for the human to communicate, it is unnatural for the robot. It will probably be a long time before robots interact as naturally as the fictional C-3PO.

Speech recognition

Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech.[84] The same word, spoken by the same person, may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, etc. It becomes even harder when the speaker has a different accent.[85] Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first "voice input system" which recognized "ten digits spoken by a single user with 100% accuracy" in 1952.[86] Currently, the best systems can recognize continuous, natural speech, up to 160 words per minute, with an accuracy of 95%.[87]

Robotic voice

Other hurdles exist when allowing the robot to use voice for interacting with humans. For social reasons, synthetic voice proves suboptimal as a communication medium,[88] making it necessary to develop the emotional component of robotic voice through various techniques.[89][90]

Gestures

One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. In both of these cases, making hand gestures would aid the verbal descriptions. In the first case, the robot would be recognizing gestures made by the human, and perhaps repeating them for confirmation. In the second case, the robot police officer would gesture to indicate "down the road, then turn right". It is likely that gestures will make up a part of the interaction between humans and robots.[91] A great many systems have been developed to recognize human hand gestures.[92]

Facial expression

Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and soon may be able to do the same for humans and robots. Robotic faces have been constructed by Hanson Robotics using their elastic polymer called Frubber, allowing a large number of facial expressions due to the elasticity of the rubber facial coating and embedded subsurface motors (servos).[93] The coating and servos are built on a metal skull. A robot should know how to approach a human, judging by their facial expression and body language. Whether the person is happy, frightened, or crazy-looking affects the type of interaction expected of the robot. Likewise, robots like Kismet and the more recent addition, Nexi,[94] can produce a range of facial expressions, allowing them to have meaningful social exchanges with humans.[95]

Artificial emotions

Artificial emotions can also be generated, composed of a sequence of facial expressions and/or gestures. As can be seen from the movie Final Fantasy: The Spirits Within, the programming of these artificial emotions is complex and requires a large amount of human observation. To simplify this programming in the movie, presets were created together with a special software program. This decreased the amount of time needed to make the film. These presets could possibly be transferred for use in real-life robots.

Personality

Many of the robots of science fiction have a personality, something which may or may not be desirable in the commercial robots of the future.[96] Nevertheless, researchers are trying to create robots which appear to have a personality:[97][98] i.e. they use sounds, facial expressions, and body language to try to convey an internal state, which may be joy, sadness, or fear. One commercial example is Pleo, a toy robot dinosaur, which can exhibit several apparent emotions.[99]

Control


Puppet Magnus, a robot-manipulated marionette with complex control systems
The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases – perception, processing, and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). This information is then processed to calculate the appropriate signals to the actuators (motors) which move the mechanism.
The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands. Sensor fusion may first be used to estimate parameters of interest (e.g. the position of the robot's gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction) is inferred from these estimates. Techniques from control theory convert the task into commands that drive the actuators.
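The perception-processing-action step described above can be sketched in a few lines. This is a hypothetical example: sensor fusion is reduced to averaging noisy readings, and control theory to a simple proportional law with an assumed gain of 0.5.

```python
def fuse(readings):
    # Sensor fusion, reduced here to averaging noisy position readings.
    return sum(readings) / len(readings)

def control_step(readings, target, gain=0.5):
    # Perception -> processing -> action: estimate the gripper position
    # from sensors, then compute a proportional actuator command that
    # drives it toward the target.
    estimate = fuse(readings)
    command = gain * (target - estimate)
    return command

# Three noisy readings of a gripper at roughly 1.0; target position 2.0.
print(control_step([0.9, 1.0, 1.1], 2.0))  # about 0.5
```

In a running robot this step would execute many times per second, each iteration shrinking the error between the estimated and target positions.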
At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a "cognitive" model. Cognitive models try to represent the robot, the world, and how they interact. Pattern recognition and computer vision can be used to track objects. Mapping techniques can be used to build maps of the world. Finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For example, a planner may figure out how to achieve a task without hitting obstacles, falling over, etc.
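Motion planning of the kind just described can be illustrated with a minimal sketch: a breadth-first search over a hypothetical occupancy-grid map, standing in for the far more sophisticated planners real robots use.

```python
from collections import deque

def plan_path(grid, start, goal):
    # Breadth-first search over a grid map: finds a shortest obstacle-free
    # path from start to goal, or None if the goal is unreachable.
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the chain of predecessors back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

# 0 = free space, 1 = obstacle; plan around the wall in the middle column.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (0, 2))
print(len(path))  # shortest path has 7 cells
```

The grid here is the "map of the world" the text mentions; swapping BFS for A* or a sampling-based planner changes the search strategy but not the overall structure of plan-then-act.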

Autonomy levels


TOPIO, a humanoid robot, played ping pong at Tokyo IREX 2009.[100]
Control systems may also have varying levels of autonomy.
  1. Direct interaction is used for haptic or tele-operated devices, and the human has nearly complete control over the robot's motion.
  2. Operator-assist modes have the operator commanding medium-to-high-level tasks, with the robot automatically figuring out how to achieve them.
  3. An autonomous robot may go for extended periods of time without human interaction. Higher levels of autonomy do not necessarily require more complex cognitive capabilities. For example, robots in assembly plants are completely autonomous, but operate in a fixed pattern.
Another classification takes into account the interaction between human control and the machine motions.
  1. Teleoperation. A human controls each movement, each machine actuator change is specified by the operator.
  2. Supervisory. A human specifies general moves or position changes and the machine decides specific movements of its actuators.
  3. Task-level autonomy. The operator specifies only the task and the robot manages itself to complete it.
  4. Full autonomy. The machine will create and complete all its tasks without human interaction.
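The classification above can be expressed as a small sketch (hypothetical names; real systems draw these boundaries less cleanly):

```python
from enum import Enum

class Autonomy(Enum):
    # The four levels of the human-control classification above.
    TELEOPERATION = 1   # human specifies every actuator change
    SUPERVISORY = 2     # human gives general moves; machine fills in details
    TASK_LEVEL = 3      # human specifies only the task
    FULL = 4            # machine creates and completes its own tasks

def operator_involvement(level):
    # What the operator must still specify at each autonomy level.
    return {Autonomy.TELEOPERATION: "every motion",
            Autonomy.SUPERVISORY: "general moves",
            Autonomy.TASK_LEVEL: "task only",
            Autonomy.FULL: "none"}[level]

print(operator_involvement(Autonomy.TASK_LEVEL))  # task only
```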

Saturday, May 26, 2012

ANGRY BIRDS SPACE

The chase is on! After a giant claw kidnaps their eggs, the Angry Birds chase it into a wormhole and find themselves floating in a strange new galaxy – surrounded by space pigs! Luckily the Angry Birds have super powers of their own…

Angry Birds Space introduces you to new adventures on planets and in zero gravity, resulting in spectacular gameplay ranging from slow-motion puzzles to lightspeed destruction. With brand new birds, brand new superpowers, and a whole galaxy to explore, the sky is no longer the limit!

Also featuring the DANGER ZONE, the most difficult Angry Birds levels ever! Can you master the Danger Zone?

Buy the activation key to play the full version of Angry Birds Space on your PC and get access to content updates!

Minimum System Requirements:

PLEASE NOTE THAT YOU MUST TRY THE FREE DEMO VERSION FIRST.


OS: Windows XP SP2
RAM: 512 MB
CPU: 1 GHz
Graphics: OpenGL 1.3 compatible
Internet connection: required for activation
Free download: http://shop.angrybirds.com/us/games/pc-games/angry-birds-space-pc-version.html

Thursday, December 22, 2011


Dell

This article is about the corporation known as Dell Inc. For other uses, see Dell (disambiguation).
Dell, Inc.
Type: Public
Industry:
Founded: Austin, Texas, U.S. (November 4, 1984)
Founder(s):
Headquarters: 1 Dell Way, Round Rock, Texas, United States[1]
Area served: Worldwide
Key people: Michael Dell (Chairman & CEO)
Products:
Revenue: ▲ US$ 61.49 billion (2011)[2]
Operating income: ▲ US$ 3.43 billion (2011)[2]
Net income: ▲ US$ 2.63 billion (2011)[2]
Total assets: ▲ US$ 38.59 billion (2011)[2]
Employees: 103,300 (2011)[2]
Website:
Dell, Inc. (NASDAQ: DELL) is an American multinational information technology corporation based at 1 Dell Way, Round Rock, Texas, United States, that develops, sells and supports computers and related products and services. Bearing the name of its founder, Michael Dell, the company is one of the largest technology corporations in the world, employing more than 103,300 people worldwide.[2] Dell is listed at number 41 in the Fortune 500.[3]
Dell has grown since its inception both by increasing its customer base and through acquisitions; notable acquisitions include Alienware (2006) and Perot Systems (2009). As of 2009, the company sold personal computers, servers, data storage devices, network switches, software, and computer peripherals. Dell also sells HDTVs, cameras, printers, MP3 players and other electronics built by other manufacturers. The company is well known for its innovations in supply chain management and electronic commerce.
Fortune Magazine listed Dell as the sixth largest company in Texas by total revenue.[4] It is the second largest non-oil company in Texas – behind AT&T – and the largest company in the Austin, Texas area.[5]

History

Dell traces its origins to 1984, when Michael Dell created PCs Limited while a student at the University of Texas at Austin. The dorm-room-headquartered company sold IBM PC-compatible computers built from stock components.[6] Dell dropped out of school to focus full-time on his fledgling business after getting about $300,000 in expansion capital from his family.
In 1985, the company produced the first computer of its own design, the "Turbo PC", which sold for US$795.[7] PCs Limited advertised its systems in national computer magazines for sale directly to consumers and custom assembled each ordered unit according to a selection of options. The company grossed more than $73 million in its first year of operation.
The company changed its name to "Dell Computer Corporation" in 1988 and began expanding globally. In June 1988, Dell's market capitalization grew by $30 million to $80 million from its June 22 initial public offering of 3.5 million shares at $8.50 a share.[8] In 1992, Fortune magazine included Dell Computer Corporation in its list of the world's 500 largest companies, making Michael Dell the youngest CEO of a Fortune 500 company ever.[citation needed]
In 1996, Dell began selling computers through its website, and in 2002, it expanded its product line to include televisions, handhelds, digital audio players, and printers. Dell's first acquisition occurred in 1999 with the purchase of ConvergeNet Technologies.
Dell surpassed Compaq to become the largest PC manufacturer in 1999. In 2002, when Compaq merged with Hewlett Packard (the 4th place PC maker), the combined Hewlett Packard took the top spot but struggled and Dell soon regained its lead.
In 2003, the company was rebranded as simply "Dell Inc." to recognize the company's expansion beyond computers.
In 2004, Michael Dell resigned as CEO while retaining the title of Chairman, handing the CEO title to Kevin Rollins who was the President and COO. Under Rollins, Dell began to loosen its ties to Microsoft and Intel, the two companies which were responsible for Dell's dominance in the PC business. During that time, Dell acquired Alienware, which introduced several new items to Dell products, including AMD microprocessors. To prevent cross-market products, Dell continues to run Alienware as a separate entity, but still a wholly owned subsidiary.[9]
However, in 2005, while earnings and sales grew, sales growth slowed considerably, and the company's stock lost 25% of its value that year. This has been attributed to a decline in consumers purchasing PCs through the Web or on the phone, as increasing numbers were visiting consumer electronics retail stores. In addition, many analysts were looking to innovating companies as the next source of growth in the technology sector; Dell's low spending on R&D, which had worked well in the commoditized PC market, prevented it from making inroads into more lucrative segments such as MP3 players. Lastly, Dell's reputation for poor customer service came under increasing scrutiny on the Web. By the fourth quarter of 2006, Dell lost its title of the largest PC manufacturer to Hewlett-Packard, which was invigorated under Mark Hurd.[10][11][12]
After four out of five quarterly earnings reports were below expectations, Rollins resigned in 2007 and Michael Dell assumed the role of CEO again. The founder announced a change campaign called "Dell 2.0," reducing headcount and diversifying the company's product offerings. The company acquired EqualLogic on January 28, 2008 to gain a foothold in the iSCSI storage market. Because Dell already had an efficient manufacturing process, integrating EqualLogic's products into the company drove manufacturing prices down.[13]
In 2009, Dell acquired Perot Systems, a technology services and outsourcing company founded by H. Ross Perot.
On September 21, 2009, Dell announced its intent to acquire Perot Systems, based in Plano, Texas, in a reported $3.9 billion deal.[14] Perot Systems brought applications development, systems integration, and strategic consulting services through its operations in the U.S. and 10 other countries. In addition, it provided a variety of business process outsourcing services, including claims processing and call center operations.[15]
On August 16, 2010, Dell announced its intent to acquire the data storage company 3PAR.[16] On September 2, 2010 Hewlett-Packard offered $33 a share, which Dell declined to match.[17]
On February 10, 2010, Dell acquired KACE Networks, a leader in systems management appliances. The terms of the deal were not disclosed.[18]
On November 2, 2010, Dell acquired Software-as-a-Service (SaaS) integration leader Boomi. Terms of the deal were not disclosed.[19]


Dell facilities

Dell's headquarters are located in Round Rock, Texas.[20] As of 2010 the company employs about 16,000 people in the facility,[21] which has 2,100,000 square feet (200,000 m2) of space.[22] As of 1999 almost half of the general fund of the City of Round Rock originated from sales taxes generated by the Dell headquarters.[23]
Dell previously had its headquarters in the Arboretum complex in northern Austin, Texas.[24][25] In 1989 Dell occupied 127,000 square feet (11,800 m2) in the Arboretum complex.[26] In 1990 Dell had 1,200 employees in its headquarters.[24] In 1993 Dell submitted a document to Round Rock officials, titled "Dell Computer Corporate Headquarters, Round Rock, Texas, May 1993 Schematic Design." Despite the filing, during that year the company said that it was not going to move its headquarters.[27] In 1994 Dell announced that it was moving most of its employees out of the Arboretum, but that it was going to continue to occupy the top floor of the Arboretum and that the company's official headquarters address would continue to be the Arboretum. The top floor continued to hold Dell's board room, demonstration center, and visitor meeting room. Less than one month prior to August 29, 1994, Dell moved 1,100 customer support and telephone sales employees to Round Rock.[28] Dell's lease in the Arboretum had been scheduled to expire in 1994.[29]
The company sponsors Dell Diamond, the home stadium of the Round Rock Express, the AAA minor league baseball affiliate of the Texas Rangers major league baseball team.
By 1996 Dell was moving its headquarters to Round Rock.[30] As of January 1996 3,500 people still worked at the then-current Dell headquarters. One building of the Round Rock headquarters, Round Rock 3, had space for 6,400 employees and was scheduled to be completed in November 1996.[31] In 1998 Dell announced that it was going to add two buildings to its Round Rock complex, adding 1,600,000 square feet (150,000 m2) of office space to the complex.[32]
In 2000 Dell announced that it would lease 80,000 square feet (7,400 m2) of space in the Las Cimas office complex in unincorporated Travis County, Texas, between Austin and West Lake Hills, to house the company's executive offices and corporate headquarters. 100 senior executives were scheduled to work in the building by the end of 2000.[33] In January 2001 the company leased the space in Las Cimas 2, located along Loop 360. Las Cimas 2 housed Dell's executives, the investment operations, and some corporate functions. Dell also had an option for 138,000 square feet (12,800 m2) of space in Las Cimas 3.[34] After a slowdown in business required reducing employees and production capacity, Dell decided to sublease its offices in two buildings in the Las Cimas office complex.[35] In 2002 Dell announced that it planned to sublease its space to another tenant; the company planned to move its headquarters back to Round Rock once a tenant was secured.[34] By 2003 Dell moved its headquarters back to Round Rock. It leased all of Las Cimas I and II, with a total of 312,000 square feet (29,000 m2), for about a seven year period after 2003. By that year roughly 100,000 square feet (9,300 m2) of that space was absorbed by new subtenants.[36]
In 2008 Dell switched the power sources of the Round Rock headquarters to more environmentally friendly ones, with 60% of the total power coming from TXU Energy wind farms and 40% coming from the Austin Community Landfill gas-to-energy plant operated by Waste Management, Inc.[22]
The US and India are the only countries which have all of Dell's business functions and provide support globally: Research and Development, manufacturing, finance, analysis, customer care.[41]