
Wednesday, 8 December 2010

Hacks and the Linux-Free PS3

It's not clear why Linux fans would even want to run it on a PS3, "when a console is NOTHING but 'DRM... in a box,'" says Slashdot blogger hairyfeet. "Even when [Sony] allowed Linux you didn't get access to the full machine -- no GPU access -- which left it an underpowered POWER-based PC."
"Never get between a geek and a processor" would be an excellent maxim for tech companies to live by, but it's one that gets ignored again and again.
Take Sony's (NYSE: SNE) latest misguided move. Not only is it what inspired Montreal consultant and Slashdot blogger Gerhard Mack to utter those sage words, but it's also what has now prompted George Hotz -- author of the original hack into the PS3 -- to vow he'll craft yet another hack to get around its latest firmware update.
"A note to people interested in the exploit and retaining OtherOS support, DO NOT UPDATE," Hotz wrote in a follow-up post last week. "I will look into a safe way of updating to retain OtherOS support, perhaps something like Hellcat's Recovery Flasher."
Apparently addressing Sony, Hotz added, "I never intended to touch CFW, but if that's how you want to play... "
In the meantime, "my investigation into 3.21 has begun," he wrote.
'This Has Me Seeing Red'
Indeed, the more-or-less forced Thursday update has sparked an ire whose equal has not been seen in a long, long time.
"This really has me seeing red," wrote Anonymous Coward among the 700-plus comments on Slashdot, for example. "I realize Sony is a business and they are simply trying to protect their rights. But this is removing functionality I paid for and own.
"They are taking away something that belongs to me," Anonymous Coward added. "I am really pissed that they couldn't figure out a better way to thwart hackers."
Similar sentiments could be heard all over Linux Devices, as well as on Digg, on SlashGear and on LXer, among many others.
Determined to dig deeper, Linux Girl conducted a small poll at the blogosphere's once-thriving Other OS Saloon.
'Sure to Attract the Ire of All'
"I've been following this story with some interest, as Sony is one of my favorite companies to hate," Hyperlogos blogger Martin Espinoza began. "Since the CD rootkit debacle, Sony has been on every hacker's mind."
With the Linux install option, "Sony successfully increased the cachet of the PS3 among the geek set," Espinoza noted. "Its removal is sure to attract the ire of all.
"Even cluster managers will in some cases be sorry to see this come to pass," he added. "Though they do not need firmware updates for game support, those same firmware updates have ramifications for system stability."
It's not even clear the move is a legal one, Espinoza told LinuxInsider.
"Eliminating functionality of the online service would be one thing, but altering the console itself eliminates functionality that may have swayed the purchaser's decision," he pointed out.
'It Was Always Running in a Hypervisor'
The move is disappointing, Slashdot blogger David Masover agreed. "Then again, it was always running in a hypervisor, always deliberately crippled to some extent in the name of preventing piracy -- or independent game manufacturers who don't want to pay Sony's licensing fees.
"I took one look at the PS3, read 'hypervisor,' and decided not to buy one," Masover recalled.
In fact, "I don't know that anyone who bought such a tightly controlled device in the first place deserves anything other than a hearty 'I told you so,'" Masover concluded. "Same goes for anyone with an iPhone, by the way."
'DRM in a Box'
"What did everybody expect?" Slashdot blogger hairyfeet agreed.
"While I have avoided Sony products since the rootkit fiasco, in this case I can understand their position," he told LinuxInsider. "They allow a way to run Linux on the PS3 and what happens? Some script kiddie hacker cooks up a way to compromise the hypervisor by using Linux."
It's also not clear why Linux fans would even want to run the OS on a console, "when a console is NOTHING but 'DRM... in a box,'" hairyfeet pointed out. "Even when they allowed Linux you didn't get access to the full machine -- no GPU access -- which left it an underpowered POWER-based PC."
It's possible Sony only implemented the Linux install option "to keep hobbyists from wanting to break their DRM," Mack suggested. "Now that the option is gone, expect more holes to be punched in their DRM."
'Off-Target Since Day One'
The situation "reminds me of the old adage, 'The big print giveth, and the fine print taketh away'," said Barbara Hudson, a blogger on Slashdot who goes by "Tom" on the site.
"The big print was, 'Price Cut on PS3'; the fine print was, 'and so were the features,'" she explained.
The marketing of the PS3 has been "off-target since day one," Hudson told LinuxInsider.
"Hugely overpriced at the beginning, Sony was always playing catch-up," she said. "Paying a (US)$100 premium so you can get a game console that also doubles as a noisy, heat-generating Blu-ray player doesn't cut it now that quieter, much more energy-efficient Blu-ray players are hitting the $100 price point."
Cutting features, then, "is the last thing they should want to do," she added. "Then again, it's not the first such move -- they also removed PS1 and PS2 compatibility, presumably not just to cut chip counts and cost, but to force consumers to buy new games."
'Penalizing Loyalty'
The real issue, of course, is whether the move will affect sales, Hudson pointed out.
"When I went shopping for a game console, it was a toss-up between a PS3 and a Wii," she recounted. "After trying my daughter's Wii, there was no way I was going to buy a PS3. Sony needs to focus on planning to make their next-generation product more attractive to everyone, or they'll never catch up."
Toward that end, there are a few lessons the company could learn from Nintendo, Hudson suggested:
1. "Lose the hard drive."
2. "Cut the energy bill. The PS3 uses 180 watts to play a Blu-Ray movie, while standalone players use less than 20 watts."
3. "Don't toss out backwards compatibility -- you're penalizing loyalty."
4. "Better controllers."
5. "Lose the 'hard-core-gamer, boyz-in-basements' sexist image."
In the meantime? "If you want to use GNU/Linux on some gadget, buy it from someone else," blogger Robert Pogson recommended. "That will make Sony and you both happy."

By Katherine Noyes
LinuxInsider
Part of the ECT News Network

Sunday, 5 December 2010

Leveraging Linux for Supercomputing

High-performance computing (HPC) applications such as numerical simulation -- whether for forecasting, mechanical and structure simulation, or computational chemistry -- require a large number of CPUs for processing. To meet these needs, customers must buy a large-scale system that enables parallel processing so that the simulation can be completed in the shortest possible time. Such solutions are available in two forms: scale-up and scale-out.
Traditionally, scale-up customers have had no choice but to purchase high-cost, proprietary shared-memory symmetric multiprocessing (SMP) systems running proprietary operating systems such as AIX, Solaris and HP-UX to meet their HPC needs. These SMP systems require significant investment in system-level architecture by computer manufacturers.
While SMP systems with up to eight processors can use off-the-shelf chipsets to provide most of the required system-level functionality, systems with more processors require significant investment in R&D. The result has been an expensive solution built on proprietary technology, custom hardware and custom components. Most SMP systems with eight or more processors also use non-x86 processors, which has contributed greatly to their high price.
Then came the Beowulf project, which helped pave the way to an entirely new alternative to SMP.
Linux Helps Pioneer a Cluster Revolution
As x86 server systems became the commodity server infrastructure, users began to look for other, more accessible and affordable ways to handle their large workloads. They applied cluster technology to unify computers so that they could handle compute-intensive operations.
The Beowulf cluster project pioneered the use of off-the-shelf, commodity computers running open source, Unix-like operating systems such as BSD and GNU/Linux for HPC. It wasn't long before companies like IBM (NYSE: IBM) and HP (NYSE: HPQ) adopted the concept and began to sell cluster systems of their own in place of traditional SMPs, and for good reason: Beowulf clusters offered a lower initial purchase price, an open architecture and better performance than SMP systems running proprietary Unix.
Despite Linux's market penetration, ease of use and portability, proprietary Unix coupled with traditional SMP systems still maintained a significant footprint in the market. The reason for this was that large-memory applications, as well as multi-threaded applications, could not fit into off-the-shelf and small-scale x86 servers running Linux. Linux clusters, however, captured a significant portion of the market where Message-Passing Interface (MPI) applications were used.
Regardless of their pervasiveness in the market, however, clusters still pose some key challenges to users, including the complexity of installing and managing multiple nodes, as well as the need for distributed storage and job scheduling -- tasks that can generally be handled only by highly trained IT personnel.
That's where virtualization for aggregation comes in.
Virtualization for Aggregation
Server virtualization and its purpose are familiar to the industry by now: By decoupling the hardware from the operating environment, users can convert a single server into multiple virtual servers to increase hardware utilization.
Virtualization for aggregation does the reverse: It combines a number of commodity x86 servers into one virtual server, providing a larger, single system resource (CPU, RAM, I/O, etc.). Users are able to manage a single operating system while leveraging virtualization for aggregation's ability to enable a high number of processors with large, contiguous shared memory.
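The payoff is in the programming model: on a single large system image, threads simply share one address space, with no explicit data movement between nodes. A minimal sketch in Python -- with threads standing in for the many cores of an aggregated VM, and the sizes purely illustrative -- shows the contrast with message passing:

```python
import threading

# One large, contiguous array visible to every thread -- no copies, no messages.
shared = [0] * 400

def worker(rank):
    # Each thread writes directly into its own slice of the shared memory.
    for i in range(rank * 100, (rank + 1) * 100):
        shared[i] = i

threads = [threading.Thread(target=worker, args=(r,)) for r in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(shared)
print(total)  # 79800
```

On a cluster this style of code cannot run without a distributed shared-memory layer; on an aggregated VM it runs unchanged, which is the point of replacing proprietary SMP hardware with aggregation software.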
One of the great benefits of a system built with virtualization for aggregation is that it eliminates the complexity of managing a cluster, allowing users to manage their systems more easily and reduce overall management time. This is especially helpful for projects that have no dedicated IT staff.
Thus, aggregation provides an affordable, virtual x86 platform with large, shared memory. Server virtualization for aggregation replaces the functionality of custom and proprietary chipsets with software and utilizes only a tiny fraction of a system's CPUs and RAM to provide chipset-level services without sacrificing system performance.
Virtualization for aggregation can be implemented in a completely transparent manner and does not require additional device drivers or modifications to the Linux OS.
Using this technology to create a virtual machine (VM), customers can run both distributed and large-memory applications optimally, using the same physical infrastructure and open source Linux. With x86, Linux can scale up like traditional, large-scale proprietary servers.
Linux scalability to support these large VMs is critical for the success of aggregated VMs. Recent enhancements to the Linux kernel, such as support for large NUMA systems, make it possible.
Now that Linux provides a scalable OS infrastructure, applications requiring more processing or memory for better performance can implement virtualization for aggregation, while taking advantage of the price and performance advantages of commodity components.
Even more exciting is that virtualization for aggregation can create the largest SMP systems in the world. These systems are so large that current workloads do not even exhaust their memory and CPU capacity -- meaning that in the future, users with compute-intensive needs can begin coding applications without worrying about these limitations.

By Shai Fultheim
LinuxInsider
Part of the ECT News Network