
Sunday, 5 December 2010

Switching from PC to Mac and back

We live in a cross-platform world. Windows-powered PCs still dominate in the workplace, but Macs have captured substantial market share and even greater mind share among the affluent and well connected.
As I explained two weeks ago, I’m running a PC and a Mac side by side as part of a long-term commitment to developing more expertise in Apple’s platform and, along the way, helping my readers bridge the Mac-PC gap more smoothly.
So far, it’s been a mostly delightful, if occasionally challenging, experience. Although I’ve owned a Mac for several years, I’ve probably used this one more in the past two weeks than in the previous six months combined. In this post, I’ll share three of the lessons I’ve learned from switching between platforms, including insights about old habits, new hardware, and the joys of cross-platform software and services.
The software that has made this setup possible for me is an open-source package called Synergy, which allows two or more computers (running Windows, OS X, or Linux) to share a single keyboard and mouse. I finally broke down and spent some quality RTFM time with the program’s documentation, a process that gave me a series of small headaches but solved a few bigger ones.
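For reference, Synergy describes the screen layout in a small plain-text configuration file. Here’s a minimal sketch of a two-machine setup (the machine names win-pc and macbook are hypothetical, and options can vary by version, so check the documentation that ships with your copy):

    # synergy.conf: the Windows PC sits to the left of the Mac.
    section: screens
        win-pc:
        macbook:
    end

    section: links
        win-pc:
            right = macbook
        macbook:
            left = win-pc
    end

The server runs on the machine whose keyboard and mouse you want to share (synergys -f --config synergy.conf), and the other machine runs the client pointed at it (synergyc win-pc).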
The Synergy software has an unfortunate interaction with Internet Explorer. When Synergy is running, it causes the New Tab button in Internet Explorer to stop working in some circumstances and can even temporarily freeze IE. I first encountered these symptoms a week or so ago, and I assumed they were caused by a bug in Internet Explorer 9, but the problems persisted even after I uninstalled the IE9 beta and went back to IE8 on Windows 7.

After much troubleshooting, including resetting IE to its default configuration and uninstalling every add-on, I finally concluded that the Synergy software was to blame. This Stack Overflow thread confirmed that I’m not the only person experiencing this issue, and it also offered what appears so far to be an effective workaround—running Synergy as a standard user rather than as a system service.
I’ve made a conscious effort to spend roughly half my time in each OS over the past two weeks, a task made easier now that Office 2011 for Mac has finally been released and is available through TechNet.
Overall, I find more similarities than differences between PCs and Macs these days. Both Windows 7 and OS X Snow Leopard are mature, highly usable operating systems with an ample selection of quality third-party software and hardware to choose from. Aside from a few PC-only features like Blu-ray playback and built-in support for TV tuner hardware, I haven’t found any task that I can’t accomplish on either platform. The difference is the degree of difficulty, which varies depending on your experience and personal preferences.
I suspect a lot of my readers can relate to what I’m trying to do here. If you use Windows at the office and a Mac at home, you know what I’m talking about. If you have a desktop PC running Windows 7 and a MacBook Pro or MacBook Air running OS X, you’ve probably run into some of the same issues I have.
Below, I call out the three biggest lessons I’ve learned along the way.
The keyboard is the biggest pain point. Nagging inconsistencies in basic keyboard operation have been, without question, my greatest source of frustration as I’ve switched between PCs and Macs. It’s a little like learning a new language.
Cross-platform tools and services are a blessing. It helps immensely to have some tools that look and act the same in both places. Here are some of the tools I’ve found indispensable so far.
Hardware matters. There’s no question that a Mac is easier to maintain than an equivalent PC. But a lot of that simplicity comes as a direct result of a lack of choices. Here’s how I’m resolving those trade-offs.

By Ed Bott

Leveraging Linux for Supercomputing

High-performance computing (HPC) applications such as numerical simulation -- whether for forecasting, mechanical and structure simulation, or computational chemistry -- require a large number of CPUs for processing. To meet these needs, customers must buy a large-scale system that enables parallel processing so that the simulation can be completed in the shortest possible time. Such solutions are available in two forms: scale-up and scale-out.
Traditionally, scale-up customers have had no choice but to purchase high-cost, proprietary shared-memory symmetric multiprocessing (SMP) systems for their HPC needs, running proprietary operating systems such as AIX, Solaris and HP-UX. These SMP systems require significant investment in system-level architecture by computer manufacturers.
While SMP systems with up to eight processors can use off-the-shelf chipsets to provide most of the required system-level functions, systems with more processors require significant investment in R&D. The result of that high R&D investment has been an expensive solution that uses proprietary technology based on custom hardware and components. Most SMP systems with eight processors or more use non-x86 processors, which has greatly contributed to the high price of SMP systems.
Then came the Beowulf project, which helped pave the way to an entirely new alternative to the traditional SMP.
Linux Helps Pioneer a Cluster Revolution
As x86 server systems became the commodity server infrastructure, users began to look for other, more accessible and affordable ways to handle their large workloads. They applied cluster technology to unify computers so that they could handle compute-intensive operations.
The Beowulf cluster project pioneered the use of off-the-shelf, commodity computers running open source, Unix-like operating systems such as BSD and GNU/Linux for HPC. It wasn't long before this concept was adopted by companies like IBM (NYSE: IBM) and HP (NYSE: HPQ), which began to sell their own cluster systems in place of traditional SMPs, and for good reason: Beowulf clusters offered a lower initial purchase price, an open architecture and better performance than SMP systems running proprietary Unix.
Despite Linux's market penetration, ease of use and portability, proprietary Unix coupled with traditional SMP systems still maintained a significant footprint in the market. The reason for this was that large-memory applications, as well as multi-threaded applications, could not fit into off-the-shelf and small-scale x86 servers running Linux. Linux clusters, however, captured a significant portion of the market where Message-Passing Interface (MPI) applications were used.
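For context, MPI applications express their parallelism as explicit messages passed between processes spread across a cluster's nodes. Here is a minimal sketch using the mpi4py Python binding (my choice of binding for illustration, not something the article specifies):

    # hello_mpi.py -- launch across nodes with, e.g.: mpirun -n 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD    # communicator spanning every process in the job
    rank = comm.Get_rank()   # this process's ID within the job
    size = comm.Get_size()   # total number of cooperating processes
    print(f"rank {rank} of {size} on {MPI.Get_processor_name()}")

Each process runs the same program on its own node and coordinates only through messages, which is exactly the workload shape that commodity clusters serve well.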
Even so, regardless of their pervasiveness in the market, clusters still pose some key challenges to users, including the complexity of installing and managing multiple nodes, as well as the need for distributed storage and job scheduling, tasks that can generally be handled only by highly trained IT personnel.
That's where virtualization for aggregation comes in.
Virtualization for Aggregation
Server virtualization and its purpose are familiar to the industry by now: By decoupling the hardware from the operating environment, users can convert a single server into multiple virtual servers to increase hardware utilization.
Virtualization for aggregation does the reverse: It combines a number of commodity x86 servers into one virtual server, providing a larger, single system resource (CPU, RAM, I/O, etc.). Users manage a single operating system while gaining a high number of processors with large, contiguous shared memory.
One of the great benefits of a system built with virtualization for aggregation is that it eliminates the complexity of managing a cluster, allowing users to manage their systems more easily and reduce overall management time. This is especially helpful for projects that have no dedicated IT staff.
Thus, aggregation provides an affordable, virtual x86 platform with large, shared memory. Server virtualization for aggregation replaces the functionality of custom and proprietary chipsets with software and utilizes only a tiny fraction of a system's CPUs and RAM to provide chipset-level services without sacrificing system performance.
Virtualization for aggregation can be implemented in a completely transparent manner and does not require additional device drivers or modifications to the Linux OS.
Using this technology to create a virtual machine (VM), customers can run both distributed and large-memory applications optimally, using the same physical infrastructure and open source Linux. With x86, Linux can scale up like traditional, large-scale proprietary servers.
Linux scalability to support these large VMs is critical for the success of aggregated VMs. Recent enhancements to the Linux kernel, such as support for large NUMA systems, make it possible.
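As a quick illustration, one way to see how many NUMA nodes the kernel exposes is to read the sysfs topology. A minimal sketch, assuming a standard Linux /sys layout:

    import os

    # Each NUMA node managed by the kernel appears as /sys/devices/system/node/nodeN.
    node_dir = "/sys/devices/system/node"
    if os.path.isdir(node_dir):
        nodes = [d for d in os.listdir(node_dir)
                 if d.startswith("node") and d[len("node"):].isdigit()]
        print(f"kernel exposes {len(nodes)} NUMA node(s), {os.cpu_count()} logical CPU(s)")
    else:
        print("no NUMA topology exposed by this kernel")

On an aggregated VM, this same interface reports the combined topology of the underlying servers, which is one reason kernel-level NUMA support matters here.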
Now that Linux provides a scalable OS infrastructure, applications that require more processing power or memory for better performance can run on virtualization for aggregation while taking advantage of the price and performance benefits of commodity components.
Even more exciting is that virtualization for aggregation can create the largest SMP systems in the world. These systems are so large that current workloads do not come close to using their full memory and CPU capacity -- meaning that in the future, users with compute-intensive needs can begin coding applications without worrying about these limitations.

By Shai Fultheim
LinuxInsider
Part of the ECT News Network 

Thursday, 2 December 2010

How To Program Your DirecTV Remote

Contemporary remote controls have become quite complex. In the old days, a remote control was a simple electronic device: one power button, volume up and down buttons, channel selection and perhaps a mute control. Remote controls today have become more universal, controlling the user's television, satellite receiver, VCR, DVD player, stereo and any other part of the home entertainment system.
The problem arises when the viewer is unable to understand the options on the remote control, which renders it useless. DirecTV's remote control is simple to understand if these directions are followed. The DirecTV remote incorporates specific features and special options: a four-position slide switch for easy component selection, a code library for popular video and stereo components, a code search to help program control of older or discontinued components, and memory protection to ensure that the user will not have to reprogram the remote when the batteries are replaced.
1. Choose the Device
The first step is to choose which device to program. Most remotes have separate buttons that correspond to the various devices:
SAT - controls the satellite receiver
TV - controls the television set
VCR - controls the VCR
AUX - controls one of several additional units, such as a home stereo
The user presses the button of the device to be programmed until the corresponding light on the remote control begins to flash.
2. Find the Code
Once the device desired for programming is chosen, the appropriate code for the particular unit is needed.
Codes for most manufacturers and brands can be found in the back of the remote control user manual. Satellite subscribers can also typically find codes on their provider's website, and if all else fails, the remote's manufacturer can be contacted for the necessary codes.
3. Program the Device
Using the keypad on the remote control, enter the number that's listed first for the device. When this is finished, enter the appropriate key to indicate completion of input. For some remotes, this might be the asterisk (*) while other remotes might use the pound key (#). The mode light on the remote control will flash again and, if the code was correct, the device can now be controlled with the remote control. The user should test the results by turning the power on and off. Does it work? If so, the device is now programmed.
There is no cause for alarm if the code doesn't work the first time.
Remotes come with several codes for the various device brands. If the first code doesn't work, start over with the next code, and the next, until the right one is discovered and the device is programmed (the sketch below models this loop).
What if the device isn't listed at all? Look through the list for "general" codes. If those codes are not found, then try scanning for the device. The user manual should have specific advice for devices without a listed code.
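To make the trial-and-error logic concrete, the code search boils down to a simple loop. Here's a hypothetical sketch in Python (the try_code helper and the sample codes are invented for illustration; real codes come from the manual):

    # Hypothetical model of the code-search procedure described above.
    def find_working_code(device_codes, try_code):
        # try_code stands in for the manual steps: hold the device button,
        # enter the code, press # (or *), then test power on and off.
        for code in device_codes:
            if try_code(code):      # True if the device turned on and off
                return code
        return None                 # none worked: try the "general" codes
                                    # or the remote's scanning mode

    # Example usage, with code values copied from the manual (invented here):
    # working = find_working_code(["10178", "11265", "10019"], try_code)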
Written by David Johnson

The R4 SDHC card

The R4 SDHC card from Team R4 SDHC was the first card of its kind to accept memory cards larger than 2GB. The card accepts high-capacity (HC) memory cards, hence the name R4 SDHC. Support for high-capacity memory cards is now standard, but the R4 SDHC was the card that took the lead.
At the time, 8GB microSDHC cards were just beginning to emerge in the market, and the number of customers who flocked to the R4 to take advantage of them made it a serious contender for the fastest-selling DS card of its day, alongside the reliable R4v2 and the DSTT card. The only obvious downside of the R4 SDHC card was that, because of the increased memory capacity, loading times on the card were around 6 to 8 seconds, a whole lot slower than the 2 to 3 seconds generally seen on the R4v2 and the DSTT.
Recently, at the beginning of June 2010, the R4 SDHC team unveiled a brand-new iteration of the card, dubbed the 2.10T version. The card at first baffled some of the community, mainly because the packaging changed to a gold color not unlike that of the infamous clone "R4i Gold" card. The wording on the card was also deliberately changed from "R4 Revolution" to "R4 Renovation". The reason for these changes was that the design of the card itself was totally redone, so the team wanted to distinguish the older black box from the new one as much as possible.
Loading times on the card have been slashed to an impressive 3 to 4 seconds, and the card itself is made from a stronger, slightly lighter plastic with components rearranged to give the card even more mechanical stability. The interface was also changed to closely mimic that of the standard R4i SDHC (the R4's sister card, which is compatible with the newer Nintendo DSi and DSi XL consoles). The operation of the card itself has remained the same, but the software is now compatible with newer programs and games in addition to being faster.
Author: cattyboy