AMD introduces new Phenom chips
Advanced Micro Devices on Thursday announced new Phenom chips, including quad-core chips and its first triple-core processors for desktop PCs.
The company's triple-core Phenom X3 8000 series processors give mainstream PC buyers who want more performance than a dual-core processor an option that stops short of quad-core prices, said Pat Moorhead, vice president of advanced marketing at AMD.
The chips could be used for high-definition video playback, casual mainstream gaming and productivity applications, Moorhead said.
The company's first triple-core processors include the Phenom X3 8400, which runs at 2.1GHz, and the Phenom X3 8600, which runs at 2.3GHz. Both will come with 1.5MB of L2 cache and 2MB of L3 cache.
AMD also launched three Phenom quad-core processors on Thursday: the Phenom X4 9750, which runs at 2.4GHz; the Phenom X4 9850, which runs at 2.5GHz; and the Phenom 9100e, a low-voltage quad-core processor that runs at 1.8GHz and has a 65-watt power envelope during maximum usage. All the processors contain 2MB of L2 cache and 2MB of L3 cache.
PC makers will ship products with the quad-core processors in the second quarter, AMD said.
The triple-core processors are already shipping in volume to PC makers, AMD said. U.S. vendor ZT Systems will list PCs with the new triple-core Phenoms on Monday, with other "major OEMs" and system vendors shipping products next quarter, AMD said. Many major vendors, including Dell and Hewlett-Packard, have already hinted at including the processors in desktops.
Dell has listed plans to use the chip in its OptiPlex 740 business desktop systems. It will ship the triple-core OptiPlex in the second quarter, a company spokeswoman recently said, but she declined to specify which processor will run the desktop. Hewlett-Packard has also listed a desktop on its Bulgarian-language Web site with AMD's Phenom Triple-Core 8600B processor.
Mesh Computer, a PC vendor in the United Kingdom, is offering the Matrix XXX Plus desktop with the Phenom X3 8400 processor and the Matrix XXX Pro desktop with the Phenom X3 8600 processor.
Because the triple-core chip is a new concept -- set between the widely accepted dual- and quad-cores -- it's unclear how it will fit in the market, said Dean McCarron, founder and principal at Mercury Research.
"You're going to get a performance enhancement with the extra core above and beyond a dual-core," McCarron said. But it also falls shy of a quad-core.
AMD designed the triple-core chip as a way to produce a cheaper part: it is built on a quad-core die with one core disabled, McCarron said.
The triple-core chip gives AMD a tactical advantage over Intel, McCarron said. Intel will need to answer it with a product priced in the same range that delivers similar performance. Intel can take a dual-core or quad-core processor, adjust features such as cache, and price it similarly to AMD's triple-core part, McCarron said.
Apple releases iPhone SDK beta 2
Apple on Thursday released a new version of its iPhone SDK for developers. iPhone SDK beta 2 includes Interface Builder, a component of Apple's development tools that lets developers create the interface for their applications.
That appears to be the only major change in the latest build, according to the SDK's readme file, which continues to list some known issues. Apple says "this second beta is known to be incompatible with installation folders other than the default /Developer."
Given the importance of the user interface on the Mac, Interface Builder is a critical tool in the development process, and some developers had chosen to hold off on their efforts until the SDK was revised.
Apple unveiled the iPhone SDK at a special event earlier this month, allowing developers to begin building applications for the iPhone and iPod touch. Several high-profile companies have already jumped onboard, demoing their applications at the event.
Highlighting the demos was AOL with a native AIM client; other applications from Electronic Arts, Salesforce, and Apple were also shown.
Gone in 2 minutes: Mac gets hacked first in contest
It may be the quickest $10,000 Charlie Miller ever earned. He took the first of three laptop computers -- and a $10,000 cash prize -- Thursday after breaking into a MacBook Air at the CanSecWest security conference's PWN 2 OWN hacking contest.
Show organizers offered a Sony Vaio, Fujitsu U810, and the MacBook as prizes, saying that they could be won by anybody at the show who could find a way to hack into each of them and read the contents of a file on the system using a previously undisclosed "0day" attack.
Nobody was able to hack into the systems on the first day of the contest, when contestants were allowed to attack the computers only over the network. On Thursday the rules were relaxed so that attackers could direct contest organizers using the computers to do things like visit Web sites or open e-mail messages.
Miller, best known as one of the researchers who first hacked Apple's iPhone last year, didn't take much time. Within 2 minutes, he directed the contest's organizers to visit a Web site that contained his exploit code, which then allowed him to seize control of the computer, as about 20 onlookers cheered him on.
He was the first contestant to attempt an attack on any of the systems.
Miller was quickly given a nondisclosure agreement to sign, and he's not allowed to discuss particulars of his bug until the contest's sponsor, TippingPoint, can notify the vendor.
Contest rules state that Miller could only take advantage of software that was preinstalled on the Mac, so the flaw he exploited must have been accessible by, or possibly inside, Apple's Safari browser.
Last year's contest winner, Dino Dai Zovi, exploited a vulnerability in QuickTime to take home the prize.
Dai Zovi, who congratulated Miller after his hack, didn't participate in this year's contest, saying it was time for someone else to win.
Multi-core to leave developers in dust?
Multi-core chip rivals AMD and Intel have been beating their chests of late, but to what end, I wonder, as developers labor to keep up.
AMD, for one, has fixed the embarrassing flaw that delayed the quad-core Barcelona chip. As Terry Malloy put it in On the Waterfront, so what?
Meanwhile, Intel and Microsoft pat themselves on the back because they've donated $20 million to UC Berkeley and the University of Illinois to found the Universal Parallel Computing Research Centers. Well, it's about time.
Why so negative? The dirty little secret (and it's not all that secret) is that the gap between hardware and software has never been greater. Today's software can barely (if at all) take advantage of quad-core processors, but Intel and AMD seem to be giddy with rivalry, rushing to push out chips with even more cores. Intel has already demonstrated an 80-core processor, and you can expect x86 servers with as many as 64 processor cores in 2009 and desktops with that many by 2012, says Forrester analyst James Staten.
That's not to say that the IT industry is scoffing at the potential benefits of multi-core processing. But the mountain between IT and some future multi-core promised land -- namely, the task of developing parallelized apps that keep pace with continual core advances -- is huge, says David Patterson, the Pardee Professor of Computer Science at UC Berkeley and director of its parallel computing lab. "It's the biggest challenge in 50 years of computing. If we do this, it's a chance to reset the foundation of computing."
In the short run, Patterson says, we can parallelize legacy software and gamble on getting value out of eight cores. But that would be only an interim solution, as such apps would not scale to 32 or 64 cores, he adds.
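To make that scaling worry concrete, here is a minimal C++ sketch (my own illustration, not Patterson's example) of the interim approach: a legacy summation loop hand-split across worker threads. Discovering the core count at runtime is the easy part; the coarse chop-the-array-into-chunks decomposition, plus the serial work around it, is what stops paying off as cores multiply.

    // Illustrative only: a legacy accumulation loop hand-parallelized.
    // "workers" is discovered at runtime rather than hardcoded, but the
    // coarse decomposition is still the part that won't scale forever.
    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<double> data(1000000, 1.0);
        unsigned workers = std::max(1u, std::thread::hardware_concurrency());

        std::vector<double> partial(workers, 0.0);
        std::vector<std::thread> pool;
        std::size_t chunk = data.size() / workers;

        for (unsigned w = 0; w < workers; ++w) {
            std::size_t begin = w * chunk;
            std::size_t end = (w + 1 == workers) ? data.size() : begin + chunk;
            pool.emplace_back([&, w, begin, end] {
                // Each thread sums its own slice; no shared mutable state.
                partial[w] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0.0);
            });
        }
        for (auto& t : pool) t.join();

        // The final reduction is serial work that Amdahl's law charges for.
        std::cout << std::accumulate(partial.begin(), partial.end(), 0.0)
                  << "\n";
    }

Even in this toy, thread start-up and the serial reduction are overhead that grows relative to useful work as the per-core chunks shrink -- a small-scale version of why eight-core retrofits won't simply stretch to 32 or 64 cores.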
What is frustrating is that this problem didn't exactly sneak up on the industry. Chip development cycles are very long, and key software developers are well aware of what's moving through the pipeline. Sure, software always lags hardware. Many of us complained that we didn't have software that would take advantage of 500MHz back in the '90s. But what Patterson and others call the multi-core revolution poses problems for developers that are qualitatively different from the problems of the past. Why wait so long to get serious about solving them?
Making sense of the multi-core muddle
The cynical explanation for this growing gap is that Intel and AMD are running on a treadmill that requires selling more and more transistors to support the cost of developing and building fabs. As long as buyers are willing to spend the money for cool new hardware, who cares if they don't really need it?
Ray DePaul, president and CEO of RapidMind, which sells a multi-core software development platform, has a different take.
"The first multi-core chips were dual core, and that lulled everyone into thinking this is OK," DePaul says.
Taking advantage of the second core was relatively easy with existing software. But four cores is another story.
"It's the classic disruptive technology," DePaul says. "If the Microsofts and the Intels always got it right, you'd never see a Google or an AMD."
RapidMind hopes to avoid following in the wake of companies such as Thinking Machines and nCUBE, which attempted to build businesses around solving the parallel computing problem without success. I'm not qualified to say whether the RapidMind solution, which includes an embedded API to allow legacy software to take advantage of multiple cores, is viable. But I agree with DePaul when he says, "The business opportunity is far more mainstream than it was because every desktop is shipped with a multi-core processor."
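To give a flavor of the embedded-API idea -- this is a hypothetical sketch of the general approach, and emphatically not RapidMind's actual interface -- the library owns the threads, and application code only states what to compute per element:

    // Hypothetical sketch of an embedded data-parallel API (not
    // RapidMind's real interface): callers describe per-element work;
    // the library decides how many cores to spread it across.
    #include <algorithm>
    #include <cmath>
    #include <future>
    #include <iostream>
    #include <thread>
    #include <vector>

    template <typename T, typename F>
    std::vector<T> parallel_map(const std::vector<T>& in, F f) {
        unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        std::vector<T> out(in.size());
        std::size_t chunk = (in.size() + workers - 1) / workers;
        std::vector<std::future<void>> tasks;

        for (std::size_t begin = 0; begin < in.size(); begin += chunk) {
            std::size_t end = std::min(in.size(), begin + chunk);
            tasks.push_back(std::async(std::launch::async, [&, begin, end] {
                for (std::size_t i = begin; i < end; ++i) out[i] = f(in[i]);
            }));
        }
        for (auto& t : tasks) t.get();  // rethrows any worker exception
        return out;
    }

    int main() {
        std::vector<double> xs(8, 2.0);
        // Application code never mentions a thread or a core count...
        auto ys = parallel_map(xs, [](double x) { return std::sqrt(x); });
        std::cout << ys[0] << "\n";  // ...so it can ride along as cores multiply.
    }

The point of hiding the thread count behind the library is exactly DePaul's: code written this way doesn't have to be rewritten when four cores become eight or sixteen.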
RapidMind spun out of the University of Waterloo in Ontario, where co-founder Michael McCool studied the problems of parallel computing for years. A one-time competitor called PeakStream was purchased by Google last year. It's unclear what the search giant intends to do with the technology, though it may well use it internally to bolster its already enormous computing resources.
In addition to the business opportunity, there's an employment opportunity here as well. Developers who can handle parallel processing or concurrent processing are going to be in great demand. Indeed, UC's Patterson says: "We feel a sense of allegiance to our undergrads but don't know what to teach them. Course work is all focused on sequential [programming] problems."
I don't feel like doing the math, but I'll bet Intel and Microsoft earn $20 million in a matter of hours. So, yeah, I congratulate them for funding some research, but they and other industry heavyweights need to do a lot more. If not, maybe we'll wise up and stop buying what they're selling.
The death of the silicon computer chip
The reign of the silicon chip is over, according to physicists who predict that the conventional silicon chip has no longer than four years left to run.
Meeting at the Institute of Physics’ Condensed Matter and Material Physics conference this week, researchers speculate that the silicon chip will be unable to sustain the same pace of increase in computing power and speed as it has in previous years.
As Gordon Moore himself predicted in 2005, the physical limitations of today's miniaturised electronic devices will eventually lead to silicon chips so saturated with transistors that they cannot hold any more digital information.
Scientists are now investigating alternative components that may pave the way to faster, more powerful computers of the future and potentially extend Moore’s Law of technological advancement.
One team of researchers at the University of Leeds in the UK has proposed replacing silicon chips with carbon nanotubes, which are electrically conductive tubes of pure carbon tens of thousands of times thinner than a human hair.
Already, some elements of computer circuits, such as transistors, have been constructed from individual carbon nanotubes. However, scientists have so far been unable to arrange nanotubes precisely into circuit patterns, which first requires determining how each tube conducts electricity.
In a development that is expected to bring carbon nanotubes one step closer to commercial use, the Leeds University researchers have developed a technique of growing nanotubes on a perforated ceramic grid.
The technique allows the research team to determine the electrical properties of individual nanotubes, after which the tubes are accurately positioned on a surface using a tweezer-like device.
“With this technique we can make carbon nanotube devices of a complexity that is not achievable by most other means,” said Chris Allen, of the Quantum Information Group at the University of Leeds.
Meanwhile, other groups of scientists claim that superconductors are key to future computing, as they may be able to harness the power of quantum physics to boost computer power tremendously.
Superconductors are materials that conduct electricity with zero electrical resistance, which effectively means that an electric current can circulate around a superconducting loop for an indefinite period of time.
By linking the electric current in a loop to a quantum superposition state, superconductors may act as quantum bits, or qubits, in quantum computing.
Qubits are able to exist in multiple states at any one time, which massively increases the amount of information that can be encoded in a quantum computer’s memory.
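To put a number on that "massively": in standard quantum-computing notation (textbook math, not from the conference report), a register of n qubits is a superposition over all 2^n classical bit strings at once,

    \[
      \lvert \Psi \rangle \;=\; \sum_{x \in \{0,1\}^{n}} c_{x}\,\lvert x \rangle,
      \qquad \sum_{x} \lvert c_{x} \rvert^{2} = 1,
    \]

so describing the state takes 2^n complex amplitudes. The twenty qubits Mooij mentions below correspond to about a million (2^20) amplitudes, and fifty to roughly 10^15 (2^50).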
According to physicist Hans Mooij, one of the biggest challenges in making quantum computers this way is to progress from two to three qubits that communicate with each other.
To sustain future developments in quantum computing, Mooij and his team of researchers at the Delft University of Technology in the Netherlands have developed an approach designed to scale from three working, communicating qubits to much larger groups of superconducting qubits.
“With our qubit, once we have three set up we can move on to twenty or fifty,” he said.