Buying Guide: Best Sandy Bridge motherboard: 8 tested

Posted: 10 Apr 2011 04:00 AM PDT

Sandy Bridge processors certainly put the cat amongst the pigeons when Intel released them. Some of those metaphorical pigeons were its own loyal customers.

Those who had bought a Socket 1156 platform must have felt like they had received a slap round the chops, as barely 12 months after releasing the splendid Lynnfield, Intel had produced a CPU family that redefined the standard for performance and price.

Sandy Bridge's release is doubly harsh as the new processor uses a different socket, making for an expensive upgrade for anyone trying to stay up to date.

It seems odd that Intel couldn't work the new architecture into a package that would fit the existing LGA 1156 socket. After all, the package size is identical, and even though the LGA 1155 socket has a different pin layout (it isn't just a case of having one less pin) and orientation notch layout, is the move to a new socket really necessary?

To support Intel's second generation Core architecture we have the new 6-series Express (Cougar Point) chipsets. The two chipsets launched along with the new processors will also be the most visible in the channel: the performance P67 and the mainstream consumer H67.

Motherboards featuring these two chipsets already feature heavily in manufacturers' line-ups. But these two aren't the only ones supporting the new processors. Intel has already announced the Q67, B65 and H61, with varying differences in their feature sets for various other market sectors (more on these later).

Chipset overview

DIRT 3: If Sandy Bridge was an '80s car rendered in DiRT 3, then it'd be the Audi Quattro...

The 6-series chipset architecture, built on a 65nm process with a TDP of 6.1W, can be seen as a development of the popular 5-series chipsets, with some key changes and improvements.

First and foremost is the upping of the bandwidth of the eight PCI-e 2.0 lanes from the current 2.5GT/s (a maximum of 250MB/s per lane per direction) to a more respectable 5GT/s, which is more in line with what AMD chipsets currently support. This doubling of the transfer rate means there is now up to 500MB/s of bandwidth in each direction.
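
If you're wondering where those per-lane figures come from, PCI-e 1.x and 2.0 both use 8b/10b encoding, so only eight of every ten bits on the wire carry data. Here's a quick back-of-the-envelope check of those numbers – our own illustrative sketch in Python, not anything from Intel's spec sheets:

    # PCI-e 1.x/2.0 use 8b/10b encoding: 10 bits on the wire per 8 data bits.
    def lane_bandwidth_mb_s(transfer_rate_gt_s):
        """Usable bandwidth per lane, per direction, in MB/s."""
        data_bits_per_second = transfer_rate_gt_s * 1e9 * (8 / 10)  # strip encoding overhead
        return data_bits_per_second / 8 / 1e6                       # bits -> bytes -> MB

    print(lane_bandwidth_mb_s(2.5))  # 250.0 - the old 2.5GT/s chipset lanes
    print(lane_bandwidth_mb_s(5.0))  # 500.0 - the new 6-series 5GT/s lanes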

Combining these eight PCI-e lanes with the 16 lanes in the Sandy Bridge processors gives a total of just 24 PCI-e lanes, a hell of a lot less than Intel's X58 (40 lanes) or even AMD's 890FX (42 lanes), so it's something to consider if you are thinking about upgrading from your present system.

The DMI interface (linking the CPU and chipset) has also had a speed increase, up to 20Gb/s for both the upstream and downstream lanes. The other new feature of the 6-series chipset is native support for SATA 6Gb/s, albeit on a maximum of two ports: the remaining four ports are SATA 3Gb/s.

The inclusion of native SATA 6Gb/s and the increase in PCI-e bandwidth reduce bottlenecks and allow for better performance from the likes of USB 3.0, for which motherboard manufacturers will still have to use one or more third-party controllers, as Intel hasn't yet got around to building it in natively.

Intel has been slow off the mark with this but it's rumoured to be one of the features of the next generation (Panther Point) chipset due out sometime in 2012.

Memory support is the same as on the previous 5-series chipsets: two channels of up to DDR3-1333, with a maximum of 32GB (using 8GB dual-sided DIMMs) supported. Another of the old legacy parts disappears with the P67, H67 and H61 chipsets, as there is no native support for that good old boy, the PCI slot.

If manufacturers want to put one on a board with these chipsets they're going to have to add a third-party chip to control it, adding a few more bucks to the cost of the board. Up to 14 USB 2.0 ports are supported, as well as integrated Gigabit Ethernet.

USB 3.0: NEC chips can be found offering up USB 3.0 support on the latest mobos

At the present time Intel has five new chipsets in the 6-series: the mainstream P67 Express and H67 Express; the budget H61 Express; the B65 Express for the business sector; and the Q67 for the corporate boys. What's the difference?

P67 Express

Along with the H67, the P67 Express is the first of the new chipsets to see the light of day.

Aimed at the mainstream performance user, it supports the 16 PCI-e 2.0 lanes built into the Sandy Bridge CPUs with either a single full-speed x16 slot or two slots running at x8/x8, supporting both SLI and CrossFireX – but not at full speed, because of the limited number of available PCI-e lanes.
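
To put that x8/x8 split into raw numbers, here's the per-lane arithmetic from earlier applied to the two slot configurations – again just an illustrative sketch, using the 500MB/s per-lane figure quoted above:

    # Per-slot, per-direction link bandwidth for the two P67 configurations.
    PER_LANE_MB_S = 500                # one PCI-e 2.0 lane at 5GT/s

    single_card = 16 * PER_LANE_MB_S   # one card at x16: 8000 MB/s
    each_of_two = 8 * PER_LANE_MB_S    # two cards at x8/x8: 4000 MB/s apiece
    print(single_card, each_of_two)    # 8000 4000

In other words, splitting to x8/x8 halves each card's link bandwidth rather than the total, which is why dual-card setups don't run at full speed.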

It supports up to 14 USB 2.0 ports, two SATA 6Gb/s and four SATA 3Gb/s ports, and the latest version of Intel's Rapid Storage Technology.

It's also the only one of the new chipsets that presently supports Intel's Performance Tuning (overclocking to you and me). But on the flip side it's also the only one of the line-up that doesn't support Intel's FDI (Flexible Display Interface), and as such it cannot access the integrated graphics of the Sandy Bridge cores.

So it only uses dedicated graphics and as a result doesn't support Protected Audio Video Path content protection technology, Intel's InTru 3D or Clear Video technologies.

H67 Express

If you want to use all the loveliness of the integrated HD2000/3000 graphics capability of the Sandy Bridge processors then the mainstream consumer H67 Express is definitely the option to go for. It supports a variety of outputs: DVI, HDMI and DisplayPort – as well as Protected Audio Video Path and Intel's InTru 3D technology.

Protected Audio Video Path (PAVP) is Intel's content protection technology, which enables the Sandy Bridge graphics core to decode encrypted video purely in hardware. It comes in two flavours.

First is Paranoid, where the video stream is encrypted and the decoding accelerated by the graphics core, with PAVP reserving 96MB of system memory for its own use. It's used mainly for streaming lossless audio formats such as DTS-HD MA or Dolby TrueHD. Using this setting not only reserves system memory, it also disables the Aero interface in Windows.

The Lite setting does much the same thing as Paranoid but without reserving any system memory, and is ideal for playing HDCP-protected content, the core only accessing system memory when it's needed and returning it when it's finished.

Officially the H67 has had the ability to support dual graphics removed: the 16 PCI-e lanes provided by the Sandy Bridge core can't be split 8/8, so it only supports a single x16 PCI-e slot running at full speed. That's not to say you won't see any H67-based boards with two slots. It's just that, because of the lack of PCI-e lanes, the second slot is reduced to running at x4 speed.

Here's one thing to look out for if you buy a motherboard with an H-series chipset that has a graphics card slot: as soon as you drop in a discrete graphics card, the integrated graphics are automatically switched off.

While the H67 doesn't support Intel's Performance Tuning technology and has a locked memory multiplier, it does allow some overclocking to be done to the graphics core. Just like the P67, it supports Intel's Rapid Storage Technology.

H61 Express Chipset

The H61 is aimed squarely at the budget end of the spectrum. With just six PCI-e 2.0 lanes plus the single x16 PCI-e slot, it supports only two DIMM slots and only SATA 3Gb/s. And although it can look after four ports, it has no built-in RAID support (though that should come as no real shock given where it's aimed).

As with its bigger sibling the H67, it also supports the integrated graphics of the Sandy Bridge core, but like the P67 it doesn't support Intel's Clear Video technology. It will only support 10 USB 2.0 ports.

The B65 Express

The B65 is aimed at the SMB market. It has a feature set similar to the H67, with eight PCI-e lanes and support for the graphics core of the Sandy Bridge processors, but with subtle differences.

It only supports up to 12 USB 2.0 ports and its six SATA ports are split between a single SATA 6Gb/s and five SATA 3Gb/s ones. It also brings back support for PCI slots, which is understandable for this market segment.

Intel Q67 Express

This chipset is probably one you won't come across much, if at all, in the consumer space as it's aimed at the corporate boys. It's also expected to be joined by the Q65 at some stage, which is very much the same but with support for only a single SATA 6Gb/s port, and hardware and software AHCI instead of Intel's Rapid Storage Technology.

As can be expected for a chipset aimed at this part of the market, it comes with a whole bucketful of Intel technologies aimed at protecting both the system and the data stored on it, as well as a few others not found in the other chipsets: Intel Virtualization Technology for Directed I/O (VT-d), Intel Anti-Theft Technology, Active Management Technology (AMT) 7.0, vPro Technology and Intel Trusted Execution Technology.

Like the B65, the Q67 retains the PCI slot, which is understandable given where systems built with motherboards using either of these chipsets are going to end up, where the use of PCI expansion cards is a lot healthier than in the consumer space.

On the horizon

If you're looking to overclock, it's the P67, with no integrated graphics support. If you want integrated graphics, it's the H67, but then you can't overclock. What's really needed is a hybrid of the two chipsets.

BUILDING THE FOUNDATIONS: Sandy Bridge is a great basis for a gaming machine. Don't you agree?

If the rumour mill is to be believed then this may well happen, with a sixth chipset joining the party in Q2 2011. That's the rumoured Z68, which could be the most interesting of them all, bringing together the P67's native dual graphics and overclocking functions with the H67's support for FDI.

Another technology rumoured to be part of the Z68 is RST (Rapid Storage Technology) SSD Caching. This could mean that an SSD can be used as a cache drive while a hard drive handles the bulk storage, and the driver build that accompanies it should also allow RAID support for drives over 2.2TB. There's still no native USB 3.0 support on offer, though.
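
As a rough illustration of how this sort of SSD caching works in general – a hypothetical sketch of the concept only, not Intel's actual RST implementation – frequently read blocks get promoted to the small, fast SSD while everything still lives on the big, slow hard drive:

    # Hypothetical block-level SSD cache: hits are served from the SSD,
    # misses fall through to the HDD, get promoted, and are LRU-evicted.
    from collections import OrderedDict

    class SSDCache:
        def __init__(self, hdd, capacity_blocks):
            self.hdd = hdd                      # backing store: block number -> data
            self.capacity = capacity_blocks     # the SSD is far smaller than the HDD
            self.cache = OrderedDict()          # LRU order, oldest entry first

        def read(self, block):
            if block in self.cache:             # cache hit: fast SSD path
                self.cache.move_to_end(block)
                return self.cache[block]
            data = self.hdd[block]              # cache miss: slow HDD path
            self.cache[block] = data            # promote the block to the SSD
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict the least recently used block
            return data

    hdd = {n: f"block-{n}" for n in range(1000)}  # pretend hard drive
    drive = SSDCache(hdd, capacity_blocks=64)     # pretend 64-block SSD cache
    drive.read(7)                                 # miss: read from the HDD, then cached
    drive.read(7)                                 # hit: served from the SSD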

Variations on a theme

Of course, there are no hard and fast rules as to what these chipsets are used for – there never are. Motherboard manufacturers will always pick and choose which market segment they think a chipset would be good for, and even which chipset features they want to use, regardless of where Intel or AMD may think they should be aimed.

For instance, Gigabyte has the GA-P65A-UD3, which is built around the B65 chipset but is being used as an entry-level board in the consumer space. Then there's the quite frankly bizarre PH67 chipset, which is the H67 with the display connectivity removed. Yes, we were scratching our heads at this one too.

Intel may have aimed the P67 Express at the performance end of the mainstream market, but there's a problem with the number of PCI-e lanes that the chipset and processor combined can provide for the graphics card slots.

The only real way around it is to use a third-party chip to provide more lanes, either with Lucid's Hydralogix chip as Sapphire has done with its Pure Black P67 Hydra, or as Gigabyte and Asus have done with their flagship P67 boards (GA-P67A-UD7 and Maximus IV Extreme respectively).

Nvidia's NF200 chip allows for two full-speed x16 slots in CrossFire or SLI modes, and for 3-way SLI. MSI, which led the way in using Lucid's original Hydralogix chip, has a new version of it in its bonkers new P67 board, the Big Bang Marshal. It has eight, yes eight, PCI-e slots and no, they don't all run at x16 speed; only four of them do. If for some unimaginable reason you want to stick eight cards in it, they'll only run at x8 speed.

Details are scarce, but it seems this version of the chip is more like a bridging chip than one that allows the mixing and matching of graphics cards.

Manufacturers have been quick to pick up on what the H67 can offer for small motherboards. Many board makers have stuck the H67 on a micro ATX board, an ideal format for it because of its support for integrated graphics, but some have been even more inventive and gone for an even smaller format: mini-ITX, which for those unfamiliar with the concept is a motherboard built on a PCB that's just 17cm square.

Whether all these tiny boards from Asus (P8H67-I Deluxe), ECS (H67H2-I), Foxconn (H67S), Gigabyte (GA-67NUSB3) or Zotac (H67-ITX WiFi) will make it to these shores is unknown, but even Intel itself has got into the act with the DH67CF. Rumours abound that Gigabyte even has a P67-based mini-ITX board in the pipeline.

Even in this niche market, mobo makers are keen to outshine the opposition. So we have the Asus board offering Bluetooth and SO-DIMM memory support; the Zotac H67-ITX WiFi with built-in 802.11b/g/n wireless networking; and Gigabyte's GA-67NUSB3 providing dual HDMI output.

By combining one of these boards with one of the forthcoming low-voltage Sandy Bridge CPUs you can build some pretty impressive and feature-rich small format systems such as media centres or HTPCs. And with notebook drives now reaching 1TB, storage space in one of these wee boxes is no longer the problem it once was.

Although Intel has announced the budget H61 chipset, no official release date had been set at the time of going to press. But that hasn't stopped Jetway, Foxconn and MSI from producing boards built around it.

Asus P8P67M-Pro - £110
P67 Chipset

Riffing on the 'small, but perfectly formed' design ethos, Asus' P8P67M-Pro packs a lot into its teeny, tiny footprint.

If you cast your mind back to a few years ago, micro ATX (mATX) boards were seen very much as the poor relations of the motherboard world; they didn't support the latest processors or chipset technologies and weren't exactly overly blessed in the features department either.

One look at the P8P67M-Pro from Asus, though, shows exactly where the small format motherboard now sits in the consumer's, and more importantly the manufacturers', consciousness.

Read the full Asus P8P67M-Pro review

ECS P67H2-A Black Extreme - £150
P67 Chipset

Everybody's doing it and so is ECS, releasing its top-end Sandy Bridge motherboard, the P67H2-A Black Extreme. ECS may not be as well known as some of its rivals, but it does seem to come out with some interesting boards, especially its Black Extreme series for enthusiasts.

And the latest addition to this line is no exception. It combines Intel's Sandy Bridge pairing of Socket 1155 CPU support and the P67 chipset with Lucid's Hydra graphics technology. That extra chip allows the board to support mix-and-match combinations of graphics cards.

Read the full ECS P67H2-A Black Extreme review

Foxconn H61MX - £60
H61 Chipset

While all the flashy high-end motherboards make the news, win awards and make many a geek swoon, the real bread 'n' butter business is done down at the coal face of the value market segment.

Here, making boards that don't cost much to build and shifting them in huge numbers is the name of the game.

Although you may find some mobo manufacturers using the business-orientated B65 chipset in this market segment, Intel's de facto chipset for the value end of the market is the H61 Express. It's basically a cut-down version of the H67, with only SATA 3Gb/s support and 10 USB 2.0 ports instead of 14, making it an ideal platform for entry-level PCs.

Read the full Foxconn H61MX review

Foxconn H67S - £65
H67 Chipset

The new Sandy Bridge processors are stupidly, impossibly, hilariously quick. By some metrics, they're the most impressive new CPUs in memory. However, one of the downsides is the requirement for a new motherboard.

Still, if you're going to make the migration to a new Sandy Bridge system, why not consider a small-form factor board such as the H67S?

As the name suggests, it's based on the H67 chipset and supports the heavily revised integrated graphics core that appears in each and every Sandy Bridge CPU. The new core is available with either six or 12 execution units, known respectively as Intel HD Graphics 2000 and 3000.

Read the full Foxconn H67S review

MSI H61MU-E35 - £60
H61 Chipset

There are a couple of routes to take if you're designing a motherboard for the value end of the market: do you just give the basics at a good price, or do you add features you think people will want and, indeed, pay extra for in a board in this market segment?

While Foxconn has taken one route with Intel's H61 value chipset in the H61MX, MSI has taken the polar opposite with the H61MU-E35. They might both be built around Intel's value chipset and both be built on a micro ATX PCB, but there the similarity ends.

Read the full MSI H61MU-E35 review

MSI P67A-GD65 - £140
P67 Chipset

This P67 board from MSI represents something very different from its top-end Big Bang board. It isn't necessarily aimed at the high-end enthusiast segment; this is a board for people wishing to build up their new platform with a reasonable feature set at a reasonable price.

The bonus of having so much moved over to the CPU is that actually manufacturing motherboards is a little cheaper, and so you get a lot for your £140.

There's full SLI and CrossFire licensing, the latest line in USB 3.0 and SATA 6Gbps storage interfaces and the 'Military Class' components that make these MSI boards rather robust.

Read the full MSI P67A-GD65 review

Sapphire Pure P67 Hydra - £160
P67 Chipset

We've already taken a look at Sapphire's first Intel-based motherboard: the Pure Black X58, which, together with the AMD-based White Fusion, announced Sapphire's re-emergence into the motherboard market.

Hot on its heels comes the second high-end Intel-based board, the Pure Black P67 Hydra, which, as you might gather from the name, not only uses Intel's current flagship chipset for Sandy Bridge CPUs, the P67 Express, but also makes use of Lucid's HydraLogix graphics technology.

Read the full Sapphire Pure P67 Hydra review

Zotac H67-ITX WI-FI - £120
H67 Chipset

Back in the mists of time (okay, October 2001), when VIA released its first ITX EPIA motherboards, my, how people laughed. A tiny little motherboard, they all said; why, it's just a toy!

But a few people understood the concept and could see the possibilities, and from time to time the format would peek above the parapet to see if anybody was still interested.

Fast forward to today and maybe, just maybe, Intel's H67 chipset could be the making of the concept. And if Zotac's H67-ITX WiFi is any indication of where we're heading, then we're in for some interesting times ahead.

Read the full review

Benchmark analysis

It's demonstrative of just how much important goodness has been moved from the motherboard to the CPU itself that the benchmark results below are so damned close. In performance terms there is relatively little that manufacturers can do to separate their boards from the fierce competition.

That said, the mATX P8P67M-Pro does a reasonably good job of putting some distance between itself and the other boards in a few of our benchmarks at least, most specifically the gaming benchmarks, where Asus boards traditionally do well.

It's also interesting to note that the new H61 boards give a pretty good showing in performance terms, demonstrating that there's little sacrifice in speeds, despite the loss of key parts of the feature set compared to the similarly-priced H67.

The performance of the two mini-ITX boards has likewise been impressive, and the Sandy Bridge platform could well give rise to some frankly awesome tiny gaming machines. Just look at the discrete card performance of the likes of Zotac's H67-ITX WiFi.

[Benchmark charts]

The best Sandy Bridge motherboard is...

Asus P8P67M-Pro

Trying to find an out and out winner in a range of Sandy Bridge motherboards that you'd actually buy is surprisingly tricky. With so much functionality being removed from the motherboards themselves there's very little between any of these boards in performance terms.

Where the different boards do differentiate themselves, though, is in the feature sets they offer: can they give you decent multi-GPU options? Do you have the latest in SATA and USB I/O technology? And how fully featured is the BIOS?

Of course, in this test we've been looking at the affordable end of the Sandy Bridge motherboard lineup, and if you were really interested in getting the most out of your second gen Intel Core CPU, whatever the cost, this isn't where you'd be looking.

At the £300 mark you can see the difference in performance. Just look at the Asus Maximus IV Extreme or the MSI Big Bang Marshal: those pricey, chunky fellows offer far more in the way of processing and overclocking grunt. But those boards cater for a relatively small percentage of users.

These mid-price to low-end boards here represent the bulk of Sandy Bridge motherboards that us normal folk are likely to buy. So which one should you pick up then?

To us the obvious winner is the excellent mATX P8P67M-Pro from Asus. It's got a huge range of features from SATA 6Gbps and USB 3.0 to fully fledged SLI and CrossFire support. That's a lot of functionality to pack into a small form factor motherboard.

It's not the fastest mobo in all of our tests; indeed, some of the H61 boards post better Cinebench R11.5 scores, but it's a great all-round package. And it's got Asus' great gaming pedigree too, which means it's more than capable of offering great frame rates.

We reviewed the CyberPower Gamecube, an excellent little gaming machine, and with this wee mobo we're looking at the basis for another small giant of the gaming PC market. Being able to have a serious gaming system clothed in a small chassis, without having to compromise, is a very attractive proposition. After all, you don't want a behemoth of a chassis bulging out from beneath your desk if there's no need for it to be that size.

Interestingly, the even smaller mini-ITX boards were almost as impressive. The Foxconn H67S and the pricier Zotac H67-ITX WiFi offer decent performance figures in a frankly ridiculous form factor. Of course they don't have anywhere near the feature set of even their slightly bigger brethren, but for a fixed function they're worth a look.

The only issue with the Zotac board, as packed as it is, is that huge price tag. You may get USB 3.0 and an HDMI socket, but it's tough to say whether that's really worth paying twice as much as the almost-as-good Foxconn H67S.

It's still worth mentioning our personal bugbear with the make-up of desktop Sandy Bridge: the lack of processor graphics access when a discrete card is used. In an H67, or even H61, setup, losing access to the QuickSync transcode core is a disappointment if you want the option to game with your rig too.

Lucid's Virtu GPU virtualisation tech should be ready soon, and ought to just be a simple update for the relevant boards. And that could end up being the death knell for the P67 platform. It will give you simultaneous access to both the Sandy Bridge graphics core and a discrete PCI-e card depending on which is better suited to a particular task.

There's also the spectre of the as yet unconfirmed Z68 chipset on the horizon, offering P67 performance with a processor graphics connection to boot. At the moment though if you're after a decent, feature-rich P67, then the bargainous P8P67M-Pro will serve pretty much all your needs.







Buying Guide: Best Sandy Bridge motherboard: 8 tested

Posted: 10 Apr 2011 04:00 AM PDT

Sandy Bridge processors certainly put the cat amongst the pigeons when Intel released them. Some of those metaphorical pigeons were their own loyal customers.

Those who had bought a Socket 1156 platform must have felt like they had received a slap round the chops, as barely 12 months after releasing the splendid Lynnfield, Intel had produced a CPU family that redefined the standard for performance and price.

Sandy Bridge's release is doubly harsh as the new processor uses a different socket, making for an expensive upgrade for anyone trying to stay up to date.

It seems odd that Intel couldn't work the new architecture into a package that would fit the existing LGA 1156 socket. After all, the package size is identical, and even though the LGA 1155 socket has a different pin layout (it isn't just a case of having one less pin) and orientation notch layout, is the move to a new socket really necessary?

To support Intel's Second Generation core architecture we have the new 6X Express (Cougar Point) series of chipsets. The two chipsets that were launched along with the new processors will also be the most visible in the channel; the performance P67 and the mainstream consumer H67.

Motherboards featuring these two chipsets already feature heavily in manufacturers' line-ups. But these two aren't the only ones supporting the new processors. Intel has already announced the Q67, B65 and H61, with varying differences in their feature sets for various other market sectors (more on these later).

Chipset overview

Dirt 3

DIRT 3: If Sandy Bridge was an 80's car rendered in DiRT 3, then it'd be the Audi Quattro...

The 6-series chipset architecture, built on 65nm process with a TDP of 6.1W, can be seen as a development of the popular 5-series chipsets with some key changes and improvements.

First and foremost is the upping of the bandwidth of the eight PCI-e 2.0 lanes from the current 2.5GT/s (max of 250MB/s per lane per direction) to a more respectable 5GT/s, which is more in line with what AMD chipsets currently support. This doubling of the transfer rate means that now there is up to 500MB/s of bandwidth in each direction.

Combining these 8 PCI-e lanes with the 16 lanes in the Sandy Bridge processors gives a total of just 24 PCI-e lanes, a hell of a lot less than Intel's X58 (40 lanes) or even AMD's 890FX (42 lanes) so it's something to consider if you are thinking about upgrading from your present system.

The DMI interface (linking the CPU and chipset) has also had a speed increase up to 20GB/s for both upstream and downstream lanes. The other new feature of the 6-series chipset is the native support of SATA 6Gb/s, albeit only by a maximum of two ports: the remaining maximum of four ports are SATA 3Gb/s.

The inclusion of native SATA 6Gb/s and an increase in bandwidth of the PCI-e interface reduces bottlenecks and allows for better performance of things such as USB 3.0, for which motherboard manufacturers will have to use a third-party controller(s), as Intel still hasn't got around to building this in natively.

Intel has been slow off the mark with this but it's rumoured to be one of the features of the next generation (Panther Point) chipset due out sometime in 2012.

The memory support is as the previous 5-series chipsets; two channels of up to DDR3-1333 with a maximum of 32GB (8GB dual sided DIMM modules) supported. Another of the old legacy parts disappears with the P67, H67 and H61 chipsets as there is no native support for that good old boy, the PCI slot.

If manufacturers want to put one on a board with these chipsets they're going have to add a third-party chip to control it, adding a few more bucks to the cost of the board. Up to 14 USB 2.0 ports are supported as well as integrated Gigabit Ethernet.

USB 3.0

USB 3.0: NEC chips can be found offering up USB 3.0 support on the latest mobos

At the present time Intel has five new chipsets in the 6 series: the mainstream P67 Express and H67 Express; the budget H61 Express; the B65 Express Chipset for the business sector; and the Q67 for the corporate boys. What's the difference?

P67 Express

Along with the H67, the P67 Express is the first of the new chipsets to see the light of day.

Aimed at the mainstream performance user, it supports the 16 PCI-E 2.0 lanes built into the Sandy Bridge CPUs with either a single x16 full speed slot or two slots running at x8/x8 speed supporting both SLI and CrossfireX – but not at full speed because of the limited amount of available PCI-e lanes.

It supports up to 14 USB 2.0 ports, two SATA 6Gb/s and four SATA 3GB/s ports and the latest version of Intel's Rapid Storage Technology.

It's also the only one of the new chipsets that presently supports Intel's Performance Tuning (overclocking to you and me). But on the flip side it's also the only one of the line-up that doesn't support Intel's FDI (flexible display interface) and as such it cannot access the integrated graphics of the Sandy Bridge cores.

So it only uses dedicated graphics and as a result doesn't support Protected Audio Video Path content protection technology, Intel's InTru 3D or Clear Video technologies.

H67 Express

If you want to use all the loveliness of the integrated HD2000/3000 graphics capability of the Sandy Bridge processors then the mainstream consumer H67 Express is definitely the option to go for. It supports a variety of outputs: DVI, HDMI and DisplayPort – as well as Protected Audio Video Path and Intel's InTru 3D technology.

Protected Audio Video Path (PAVP) is Intel's content protection technology, which enables the Sandy Bridge graphics core to decode encrypted video purely in hardware. It comes in two flavours.

First is Paranoid, where the video stream is encrypted and the decoding accelerated by the graphics core, with PAVP reserving 96MB of system memory for its own use. It's used mainly for streaming lossless audio formats such as DTS-HS MA or Dolby TrueHD. Using this setting not only uses system memory it also disables the Aero interface in Windows.

The Lite setting does much the same thing as Paranoid but without reserving any system memory and is ideal for playing HDCP-protected content, the core only accessing system memory when it is needed and returning it back when it's finished.

Officially the H67 has had the ability to support dual graphics removed, as the 16 PCI-e lanes provided by the Sandy Bridge core are not split 8/8 so it only supports a single x16 PCI-e slot running at full speed. That's not to say you won't see any H67-based boards with two slots. It's just that because of the lack of PCI-E lanes the second slot is reduced to running at x4 speed.

Here's one thing to look out for if you buy a motherboard with the H-series chipset, which has a graphics card slot: as soon as you drop in a discrete graphics card, the integrated graphics are automatically switched off.

While the H67 doesn't support Intel's Performance Tuning technology and has a locked memory multiplier, it does allow some overclocking to be done to the graphics core. Just like the P67, it supports Intel's Rapid Storage Technology.

H61 Express Chipset

The H61 is aimed squarely at the budget end of the spectrum. With just six PCI-e 2.0 lanes and a single x16 PCI-e lane, it supports only two DIMM slots and only has 3Gb/s SATA support. And although it can look after four ports, it has no built-in RAID support (though that should come as no real shock looking at where it's aimed at).

As with its bigger sibling the H67, it also supports the integrated graphics of the SandyBridge core but like the P67 it doesn't support Intel's Clear Video technology. It will only support 10 USB 2 ports.

The B65 Express

The B65 is aimed at the SMB market. It has a feature set similar to the H67, with eight PCI-e lanes and support for the graphics core of the Sandy Bridge processors, but with subtle differences.

It only supports up to 12 USB 2.0 ports and its six SATA ports are split between a single SATA 6Gb/s and five SATA 3Gb/s ones. It also brings back support for PCI slots, which is understandable for this market segment.

Intel Q67 Express

This chipset is probably one you won't come across much, if at all, in the consumer space as it's aimed at the corporate boys. It's also expected to be joined by the Q65 at some stage, which is very much the same but with support for only a single SATA 6Gb/s and hardware and software ACHI instead of Intel's Rapid Storage Technology.

As can be expected for a chipset aimed at this part of the market it comes with a whole bucketful of Intel technologies aimed at protecting both the system and the data stored on it, as well as a few others not in the other chipsets; Intel Virtualization Technology for Directed I/O (VT-d), Intel® Anti-Theft Technology, AMT Version 7, Active Management Technology, vPro Technology and Intel Trusted Execution Technology.

Like the B65, the Q67 retains the PCI slot, which is understandable given where systems built with motherboards using either of these chipsets are going to end up, where the use of PCI expansion cards is a lot healthier than in the consumer space.

On the horizon

If you're looking to overclock, it's the P67 and no integrated graphics support. If you want that, then it's the H67, but you can't overclock. What's really needed is a hybrid of the two chipsets.

Batman

BUILDING THE FOUNDATIONS: Sandy Bridge is a great basis for a gaming machine. Don't you agree?

If the rumour mill is to believed then this may well happen, with a sixth chipset joining the party in Q2 2011. That's the rumoured Z68, which could be the most interesting of them all, bringing together the P67 native dual graphics and OC functions, plus the H67's support for FDI.

Another technology rumoured to be part of the Z68 is RST (Rapid Storage Technology) SSD Caching, this could mean that it enables an SSD drive to be used as a cache drive while using a hard drive for storage, and the driver build that accompanies it will also allow for RAID support for drives over 2.2TB. There's still no native USB 3.0 support on offer though.

Variations on a theme

Of course, there are no hard and fast rules as to what these chipsets are used for – there never is, and motherboard manufacturers will always pick and choose which market segment they think a chipset would be good for and even which chipset features they want to use, regardless of where Intel or AMD may think they should be aimed at.

For instance, Gigabyte has the GA-P65A-UD3, which is built around the B65 chipset but is being used as an entry-level board in the consumer space, and the quite frankly bizarre PH67 chipset, which is the H67 with the display connectivity removed. Yes, we were scratching our heads at this one too.

Intel may have aimed the P67 Express at the performance end of the mainstream market but there's a problem with the amount of PCI-e lanes that the chipset combined with the processor can provide for the graphic cards slots.

The only real way around it is to use a third-party chip to provide more lanes, either with Lucid's Hydralogix chip as Sapphire has done with its Pure Black P67 Hydra, or as Gigabyte and Asus have done with their flagship P67 boards (GA-P67A-UD7 and Maximus IV Extreme respectively).

Nvidia's NF200 chip allows for two full x16 speed slots in CrossFire or SLI modes and for 3-way SLI. MSI, which leads the way with using Lucid's original version of the Hydralogix, has a new version of the chip in its bonkers new P67 board, the Big Bang Marshal. It has eight, yes eight PCI-e slots and no, they don't all run at x16 speed, only four of them do. If for some unimaginable reason you want to stick eight cards in it, they'll only run at x8 speed.

Details are scare but it seems that this version of the chip is more like a bridging chip than one that allows mix and matching of graphics cards.

Manufacturers have been quick to pick up what the H67 can offer for small motherboards and while many of the board makers have been quick to stick the H67 in a Mini ATX format. This an ideal format for it because of its support for integrated graphics, some have been even more inventive and gone for an even smaller format – mini- ITX, which for those unfamiliar with the concept is a motherboard built on a PCB that's just 17.1cm square.

Whether all these tiny boards from Asus (P8H67-I Deluxe), ECS (H67H2-I), Foxconn (H67S), Gigabyte (GA-67NUSB3) or Zotac (H67-ITX WiFi) make it to these shores is an unknown, but even Intel itself has got into the act with the DH67CF. Rumours abound that Gigabyte even has a P67-based mini-ITX board in the pipeline.

Even in this niche market, mobo makers are keen to outshine the opposition. So we have the Asus board offering Bluetooth and SO-DIMM memory support; the Zotac H67-ITX WiFi with built-in 802.11b/g/n wireless networking; and Gigabyte's GA-67NUSB3 providing dual HDMI output.

By combining one of these boards with one of the forthcoming low-voltage Sandy Bridge CPUs you can build some pretty impressive and feature-rich small format systems such as a media centres or HTPCs. And with notebook drives now reaching 1TB, storage space in one of these wee boxes is no longer the problem it once was.

Although Intel has announced the budget H61 chipset, no official release date has been set at time of going to press. But that hasn't stopped Jetway, Foxconn and MSI from producing boards built around it.

Asus P8P67M-Pro - £110
P67 Chipset

Asus p8p67m-pro

Riffing on the 'small, but perfectly formed' design ethos, Asus' P8P67M-Pro is packing a lot into it's teeny, tiny footprint.

If you cast your mind back to a few years ago, micro ATX (mATX) boards were seen very much as the poor relations of the motherboard world; they didn't support the latest processors or chipset technologies and weren't exactly overly blessed in the features department either.

One look at the P8P67M-Pro from Asus though shows exactly where the small format motherboard is now in the consumer, and more importantly the manufacturer's, consciousness.

Read the full Asus P8P67M-Pro review

ECS P67H2-A Black Extreme - £150
P67 Chipset

ECS p67h2-a black extreme

Everybody's doing it and so is ECS, releasing its top-end Sandy Bridge motherboard, the P67H2-A Black Extreme. ECS may not be as well known as some of their rivals but it does seem to come out with some interesting boards, especially its Black Extreme series for the enthusiasts.

And the latest addition to this line is no exception. It combines Intel's Sandy Bridge combination of the Socket 1155 CPU support and the P67 chipset with Lucid's Hydra graphics technology. That extra chip allows the board to support mix and match combinations of graphics cards.

Read the full ECS P67H2-A Black Extreme review

Foxconn H61MX - £60
H61 Chipset

Foxconn h61mx

While all the flashy high-end motherboards make the news, win awards and make many a geek swoon, the real bread 'n' butter end of the market is down at the other end of the coal face in the value market segment.

Here making boards that don't cost that much to build and shifting them in huge numbers is the name of the game.

Although you may find some mobo manufacturers using the B65 business-orientated chipset in this market segment, Intel's de-facto chipset for the value end of the market is the H61 Express, which is basically a cut down version of the H67, with only SATA 3Gbps support and offers 10 USB 2.0 ports instead of 14, making it an ideal platform for entry level PCs.

Read the full Foxconn H61MX review

Foxconn H67S - £65
H67 Chipset

Foxconn h67s

The new Sandy Bridge processors are stupidly, impossibly, hilariously quick. By some metrics, they're the most impressive new CPUs in memory. However, one of the downsides is the requirement for a new motherboard.

Still, if you're going to make the migration to a new Sandy Bridge system, why not consider a small-form factor board such as the H67S?

As the name suggests, it's based on the H67 chipset and supports the heavily revised integrated graphics core that appears in each and every Sandy Bridge CPU. The new core is available with either six or 12-execution units, respectively known as the Intel HD Graphics 2000 and 3000 models.

Read the full Foxconn H67S review

MSI H61MU-E35 - £60
H61 Chipset

MSI h61mu-e35

There are a couple of routes to take if you are designing a motherboard for the value end of the market, do you just give the basics at a good price or do you add features you think people will want and, indeed, pay extra for in a board in this market segment.

While Foxconn has taken one route with Intel's H61 value chipset with the H61MX, MSI has taken the polar opposite with the H61MU-E35. They might be both built around Intel's value chipset and both built on a Micro ATX PCB, but there the similarity ends.

Read the full MSI H61MU-E35 review

MSI P67A-GD65 - £140
P67 Chipset

MSI p67a-gd65

This here P67 board from MSI represents something very different from the it's top-end Big Bang board. This isn't necessarily aimed at the high-end enthusiast segment, this is a board for people wishing to build up their new platform with a reasonable feature-set and at a reasonable price.

The bonus of having so much move over to the CPU is that actually manufacturing motherboards is a little cheaper and so you get a lot for your £140.

There's full SLI and CrossFire licensing, the latest line in USB 3.0 and SATA 6Gbps storage interfaces and the 'Military Class' components that make these MSI boards rather robust.

Read the full MSI P67A-GD65 review

Sapphire Pure P67 Hydra - £160
P67 Chipset

Sapphire pure p67 hydra

We've already taken a look at Sapphire's first Intel-based motherboard: the Pure Black X58, which together with the AMD based White Fusion announced Sapphire's re-emergence into the motherboard market.

Hot on its heels comes the second high-end Intel based board, the Pure Black P67 Hydra, which, as you might gather from the name, not only uses Intel's current flagship chipset for the Sandy Bridge CPU's, the P67 Express, but also makes use of Lucid's HydraLogic graphics technology.

Read the full Sapphire Pure P67 Hydra review

Zotac H67-ITX WI-FI - £120
H67 Chipset

Zotac h67-itx wifi

Back in the mists of time (Okay, October 2001) when VIA released its first ITX EPIA motherboards, my how people laughed. A tiny little motherboard they all said, why it's just a toy!

But a few people understood the concept and could see the possibilities, and from time to time the format would peek above the parapet to see if anybody was still interested.

Fast forward to today and maybe, just maybe, Intel's H67 chipset could be the making of the concept. And if Zotac's H67-ITX WiFi is any indication of where were heading, then we're in for some interesting times ahead.

Read the full review

Benchmark analysis

It's demonstrative of just how much important goodness has been moved from the motherboard to the CPU itself that the benchmark results below are so damned close. In performance terms there is relatively little that manufacturers can do to separate their boards from the fierce competition.

That said the MATX P8P67M-Pro does a reasonably good job of putting some distance between it and the other boards in a few of our benchmarks at least. Most specifically in the gaming benchmarks where Asus boards traditionally do well.

It's also interesting to note that the new H61 boards give a pretty good showing in performance terms, demonstrating that there's little sacrifice in speeds, despite the loss of key parts of the feature set compared to the similarly-priced H67.

The performance of the two mini-ITX boards has likewise been impressive and the Sandy Bridge platform could well give rise to some frankly awesome, tiny gaming machines. Just look at the discrete card performance the likes of Zotac's H67-ITX WiFi.

bench 1

bench 2

The best Sandy Bridge motherboard is...

Asus p8p67m-pro

Trying to find an out and out winner in a range of Sandy Bridge motherboards that you'd actually buy is surprisingly tricky. With so much functionality being removed from the motherboards themselves there's very little between any of these boards in performance terms.

Where the different boards do differentiate themselves though is in assessing the feature set that they offer: Can they give you decent multi-GPU options? Do you have the latest in SATA and USB I/O technology? And how fully-featured is the relevant BIOS?

Of course, in this test we've been looking at the affordable end of the Sandy Bridge motherboard lineup, and if you were really interested in getting the most out of your second gen Intel Core CPU, whatever the cost, this isn't where you'd be looking.

At the £300 mark you can see the difference in performance. Just look at the Asus Maximus IV Extreme or the MSI Big Bang Marshal boards, those pricey, chunky fellows offer far more in the way of processing and overclocking grunt. But those boards cater for a relatively small percentage of users.

These mid-price to low-end boards here represent the bulk of Sandy Bridge motherboards that us normal folk are likely to buy. So which one should you pick up then?

To us the obvious winner is the excellent mATX P8P67M-Pro from Asus. It's got a huge range of features from SATA 6Gbps and USB 3.0 to fully fledged SLI and CrossFire support. That's a lot of functionality to pack into a small form factor motherboard.

It's not the fastest mobo in all of our tests, indeed some of the H61 boards post better Cinebench R11.5 scores, but it's a great all-round package. And it's also got Asus' great gaming pedigree too, which means that it's more than capable of offering great frame rates.

We reviewed the CyberPower Gamecube, an excellent little gaming machine, and with this wee mobo we 're looking at the basis for another small giant of the gaming PC market for sure. Being able to have a serious gaming system clothed in a small chassis, without having to make a compromise, is a very attractive proposition. After all you don't want to have a behemoth of a chassis bulging out from beneath your desk if there's no need for it to be that size.

Interestingly, the even smaller mini- ITX boards were almost as impressive. The Foxconn H67S and the pricier Zotac ITX WiFi offer decent performance figures in a frankly ridiculous form factor. Of course they don't have anywhere near the feature set of even their slightly bigger brethren, but for a fixed function they're worth a look.

The only issue with the Zotac board, as packed as it is, is that huge price tag. You may get USB 3.0 and a HDMI socket, but it's tough to say whether that's really worth paying twice as much as the almost-as-good Foxconn H67S.

It's still worth mentioning our personal bug bear with the make up of the desktop Sandy Bridge – the lack of processor graphics access when a discrete card is used. In a H67, or even H61 setup, losing access to the QuickSync transcode core is a disappointment if you want to have the opportunity to game with your rig too.

Lucid's Virtu GPU virtualisation tech should be ready soon, and ought to just be a simple update for the relevant boards. And that could end up being the death knell for the P67 platform. It will give you simultaneous access to both the Sandy Bridge graphics core and a discrete PCI-e card depending on which is better suited to a particular task.

There's also the spectre of the as yet unconfirmed Z68 chipset on the horizon, offering P67 performance with a processor graphics connection to boot. At the moment though if you're after a decent, feature-rich P67, then the bargainous P8P67M-Pro will serve pretty much all your needs.





Buying Guide: Best Sandy Bridge motherboard: 8 tested

Posted: 10 Apr 2011 04:00 AM PDT

Sandy Bridge processors certainly put the cat amongst the pigeons when Intel released them. Some of those metaphorical pigeons were their own loyal customers.

Those who had bought a Socket 1156 platform must have felt like they had received a slap round the chops, as barely 12 months after releasing the splendid Lynnfield, Intel had produced a CPU family that redefined the standard for performance and price.

Sandy Bridge's release is doubly harsh as the new processor uses a different socket, making for an expensive upgrade for anyone trying to stay up to date.

It seems odd that Intel couldn't work the new architecture into a package that would fit the existing LGA 1156 socket. After all, the package size is identical, and even though the LGA 1155 socket has a different pin layout (it isn't just a case of having one less pin) and orientation notch layout, is the move to a new socket really necessary?

To support Intel's Second Generation core architecture we have the new 6X Express (Cougar Point) series of chipsets. The two chipsets that were launched along with the new processors will also be the most visible in the channel; the performance P67 and the mainstream consumer H67.

Motherboards featuring these two chipsets already feature heavily in manufacturers' line-ups. But these two aren't the only ones supporting the new processors. Intel has already announced the Q67, B65 and H61, with varying differences in their feature sets for various other market sectors (more on these later).

Chipset overview

Dirt 3

DIRT 3: If Sandy Bridge was an 80's car rendered in DiRT 3, then it'd be the Audi Quattro...

The 6-series chipset architecture, built on 65nm process with a TDP of 6.1W, can be seen as a development of the popular 5-series chipsets with some key changes and improvements.

First and foremost is the upping of the bandwidth of the eight PCI-e 2.0 lanes from the current 2.5GT/s (max of 250MB/s per lane per direction) to a more respectable 5GT/s, which is more in line with what AMD chipsets currently support. This doubling of the transfer rate means that now there is up to 500MB/s of bandwidth in each direction.

Combining these 8 PCI-e lanes with the 16 lanes in the Sandy Bridge processors gives a total of just 24 PCI-e lanes, a hell of a lot less than Intel's X58 (40 lanes) or even AMD's 890FX (42 lanes) so it's something to consider if you are thinking about upgrading from your present system.

The DMI interface (linking the CPU and chipset) has also had a speed increase up to 20GB/s for both upstream and downstream lanes. The other new feature of the 6-series chipset is the native support of SATA 6Gb/s, albeit only by a maximum of two ports: the remaining maximum of four ports are SATA 3Gb/s.

The inclusion of native SATA 6Gb/s and an increase in bandwidth of the PCI-e interface reduces bottlenecks and allows for better performance of things such as USB 3.0, for which motherboard manufacturers will have to use a third-party controller(s), as Intel still hasn't got around to building this in natively.

Intel has been slow off the mark with this but it's rumoured to be one of the features of the next generation (Panther Point) chipset due out sometime in 2012.

The memory support is as the previous 5-series chipsets; two channels of up to DDR3-1333 with a maximum of 32GB (8GB dual sided DIMM modules) supported. Another of the old legacy parts disappears with the P67, H67 and H61 chipsets as there is no native support for that good old boy, the PCI slot.

If manufacturers want to put one on a board with these chipsets they're going have to add a third-party chip to control it, adding a few more bucks to the cost of the board. Up to 14 USB 2.0 ports are supported as well as integrated Gigabit Ethernet.

USB 3.0

USB 3.0: NEC chips can be found offering up USB 3.0 support on the latest mobos

At the present time Intel has five new chipsets in the 6 series: the mainstream P67 Express and H67 Express; the budget H61 Express; the B65 Express Chipset for the business sector; and the Q67 for the corporate boys. What's the difference?

P67 Express

Along with the H67, the P67 Express is the first of the new chipsets to see the light of day.

Aimed at the mainstream performance user, it supports the 16 PCI-E 2.0 lanes built into the Sandy Bridge CPUs with either a single x16 full speed slot or two slots running at x8/x8 speed supporting both SLI and CrossfireX – but not at full speed because of the limited amount of available PCI-e lanes.

It supports up to 14 USB 2.0 ports, two SATA 6Gb/s and four SATA 3GB/s ports and the latest version of Intel's Rapid Storage Technology.

It's also the only one of the new chipsets that presently supports Intel's Performance Tuning (overclocking to you and me). But on the flip side it's also the only one of the line-up that doesn't support Intel's FDI (flexible display interface) and as such it cannot access the integrated graphics of the Sandy Bridge cores.

So it only uses dedicated graphics and as a result doesn't support Protected Audio Video Path content protection technology, Intel's InTru 3D or Clear Video technologies.

H67 Express

If you want to use all the loveliness of the integrated HD2000/3000 graphics capability of the Sandy Bridge processors then the mainstream consumer H67 Express is definitely the option to go for. It supports a variety of outputs: DVI, HDMI and DisplayPort – as well as Protected Audio Video Path and Intel's InTru 3D technology.

Protected Audio Video Path (PAVP) is Intel's content protection technology, which enables the Sandy Bridge graphics core to decode encrypted video purely in hardware. It comes in two flavours.

First is Paranoid, where the video stream is encrypted and the decoding accelerated by the graphics core, with PAVP reserving 96MB of system memory for its own use. It's used mainly for streaming lossless audio formats such as DTS-HS MA or Dolby TrueHD. Using this setting not only uses system memory it also disables the Aero interface in Windows.

The Lite setting does much the same thing as Paranoid but without reserving any system memory and is ideal for playing HDCP-protected content, the core only accessing system memory when it is needed and returning it back when it's finished.

Officially the H67 has had the ability to support dual graphics removed, as the 16 PCI-e lanes provided by the Sandy Bridge core are not split 8/8 so it only supports a single x16 PCI-e slot running at full speed. That's not to say you won't see any H67-based boards with two slots. It's just that because of the lack of PCI-E lanes the second slot is reduced to running at x4 speed.

Here's one thing to look out for if you buy a motherboard with the H-series chipset, which has a graphics card slot: as soon as you drop in a discrete graphics card, the integrated graphics are automatically switched off.

While the H67 doesn't support Intel's Performance Tuning technology and has a locked memory multiplier, it does allow some overclocking to be done to the graphics core. Just like the P67, it supports Intel's Rapid Storage Technology.

H61 Express Chipset

The H61 is aimed squarely at the budget end of the spectrum. With just six PCI-e 2.0 lanes and a single x16 PCI-e lane, it supports only two DIMM slots and only has 3Gb/s SATA support. And although it can look after four ports, it has no built-in RAID support (though that should come as no real shock looking at where it's aimed at).

As with its bigger sibling the H67, it also supports the integrated graphics of the SandyBridge core but like the P67 it doesn't support Intel's Clear Video technology. It will only support 10 USB 2 ports.

The B65 Express

The B65 is aimed at the SMB market. It has a feature set similar to the H67, with eight PCI-e lanes and support for the graphics core of the Sandy Bridge processors, but with subtle differences.

It only supports up to 12 USB 2.0 ports and its six SATA ports are split between a single SATA 6Gb/s and five SATA 3Gb/s ones. It also brings back support for PCI slots, which is understandable for this market segment.

Intel Q67 Express

This chipset is probably one you won't come across much, if at all, in the consumer space as it's aimed at the corporate boys. It's also expected to be joined by the Q65 at some stage, which is very much the same but with support for only a single SATA 6Gb/s and hardware and software ACHI instead of Intel's Rapid Storage Technology.

As can be expected for a chipset aimed at this part of the market it comes with a whole bucketful of Intel technologies aimed at protecting both the system and the data stored on it, as well as a few others not in the other chipsets; Intel Virtualization Technology for Directed I/O (VT-d), Intel® Anti-Theft Technology, AMT Version 7, Active Management Technology, vPro Technology and Intel Trusted Execution Technology.

Like the B65, the Q67 retains the PCI slot, which is understandable given where systems built with motherboards using either of these chipsets are going to end up, where the use of PCI expansion cards is a lot healthier than in the consumer space.

On the horizon

If you're looking to overclock, it's the P67, but you get no integrated graphics support. If you want the integrated graphics, it's the H67, but you can't overclock the CPU. What's really needed is a hybrid of the two chipsets.

Batman

BUILDING THE FOUNDATIONS: Sandy Bridge is a great basis for a gaming machine. Don't you agree?

If the rumour mill is to be believed then this may well happen, with a sixth chipset joining the party in Q2 2011. That's the rumoured Z68, which could be the most interesting of them all, bringing together the P67's native dual-graphics and overclocking functions with the H67's support for FDI (the Flexible Display Interface that carries the processor graphics output).

Another technology rumoured to be part of the Z68 is RST (Rapid Storage Technology) SSD Caching. This would enable an SSD to be used as a cache in front of a hard drive used for bulk storage, and the driver build that accompanies it should also allow RAID support for drives over 2.2TB. There's still no native USB 3.0 support on offer, though.
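
To see why that's appealing, here's the general idea as a toy sketch in Python – a small, fast store sitting in front of a big, slow one. It's purely illustrative and makes no claim to mirror how Intel's RST implementation actually works.

from collections import OrderedDict

class CachedDisk:
    """Toy model of SSD caching: a small, fast cache (the 'SSD') in front
    of a large, slow backing store (the 'HDD'), with least-recently-used
    eviction. Illustrative only - not Intel's actual caching algorithm."""

    def __init__(self, hdd, cache_blocks):
        self.hdd = hdd                    # block number -> data
        self.ssd = OrderedDict()          # the cache
        self.capacity = cache_blocks

    def read(self, block):
        if block in self.ssd:             # cache hit: fast path
            self.ssd.move_to_end(block)
            return self.ssd[block]
        data = self.hdd[block]            # cache miss: slow path
        self.ssd[block] = data            # keep a copy on the 'SSD'
        if len(self.ssd) > self.capacity:
            self.ssd.popitem(last=False)  # evict the oldest block
        return data

disk = CachedDisk({n: "block-%d" % n for n in range(1000)}, cache_blocks=64)
disk.read(42)  # slow: fetched from the 'HDD' and cached
disk.read(42)  # fast: served straight from the 'SSD'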

Variations on a theme

Of course, there are no hard and fast rules as to what these chipsets are used for – there never are. Motherboard manufacturers will always pick and choose which market segment they think a chipset is good for, and even which chipset features they want to use, regardless of where Intel or AMD may think they should be aimed.

For instance, Gigabyte has the GA-P65A-UD3, which is built around the B65 chipset but pitched as an entry-level board in the consumer space. Then there's the quite frankly bizarre PH67 chipset, which is the H67 with the display connectivity removed. Yes, we were scratching our heads at that one too.

Intel may have aimed the P67 Express at the performance end of the mainstream market, but there's a problem with the number of PCI-e lanes that the chipset and processor combined can provide for the graphics card slots.

The only real way around it is to use a third-party chip to provide more lanes: either Lucid's HydraLogix chip, as Sapphire has done with its Pure Black P67 Hydra, or Nvidia's NF200, as Gigabyte and Asus have done with their flagship P67 boards (the GA-P67A-UD7 and Maximus IV Extreme respectively).

The NF200 chip allows for two slots at full x16 speed in CrossFire or SLI modes, and for 3-way SLI. MSI, which led the way in using the original version of the HydraLogix, has a new version of Lucid's chip in its bonkers new P67 board, the Big Bang Marshal. It has eight – yes, eight – PCI-e slots, although they don't all run at x16 speed; only four of them do. If for some unimaginable reason you want to stick eight cards in it, they'll only run at x8 speed.

Details are scarce, but it seems this version of the chip is more of a bridging chip than one that allows the mixing and matching of graphics cards.
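
The arithmetic behind those slot speeds is straightforward: PCI-e 2.0 carries roughly 500MB/s per lane in each direction, so a slot's bandwidth scales directly with its lane count. A quick sketch:

# PCI-e 2.0: 5GT/s with 8b/10b encoding works out at roughly 500MB/s
# per lane, per direction, so slot bandwidth scales with lane count.
MB_PER_LANE = 500

def slot_bandwidth_gbs(lanes):
    """One-directional bandwidth of a PCI-e 2.0 slot, in GB/s."""
    return lanes * MB_PER_LANE / 1000

for lanes in (16, 8, 4):
    print("x%-2d = %.1fGB/s per direction" % (lanes, slot_bandwidth_gbs(lanes)))
# x16 = 8.0GB/s, x8 = 4.0GB/s, x4 = 2.0GB/s - the price you pay on an
# H67 board's second slot, or with all eight of the Marshal's slots filled.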

Manufacturers have been quick to pick up on what the H67 can offer small motherboards. Many board makers have put the H67 on a micro ATX board – an ideal format for it, given the chipset's support for integrated graphics – but some have been even more inventive and gone for an even smaller format: mini-ITX which, for those unfamiliar with the concept, is a motherboard built on a PCB that's just 17cm square.

Whether all these tiny boards from Asus (P8H67-I Deluxe), ECS (H67H2-I), Foxconn (H67S), Gigabyte (GA-67NUSB3) or Zotac (H67-ITX WiFi) make it to these shores remains to be seen, but even Intel itself has got into the act with the DH67CF. Rumours abound that Gigabyte even has a P67-based mini-ITX board in the pipeline.

Even in this niche market, mobo makers are keen to outshine the opposition. So we have the Asus board offering Bluetooth and SO-DIMM memory support; the Zotac H67-ITX WiFi with built-in 802.11b/g/n wireless networking; and Gigabyte's GA-67NUSB3 providing dual HDMI output.

By combining one of these boards with one of the forthcoming low-voltage Sandy Bridge CPUs, you can build some pretty impressive and feature-rich small form factor systems, such as media centres or HTPCs. And with notebook drives now reaching 1TB, storage space in one of these wee boxes is no longer the problem it once was.

Although Intel has announced the budget H61 chipset, no official release date had been set at the time of going to press. But that hasn't stopped Jetway, Foxconn and MSI from producing boards built around it.

Asus P8P67M-Pro - £110
P67 Chipset

Asus p8p67m-pro

Riffing on the 'small, but perfectly formed' design ethos, Asus' P8P67M-Pro packs a lot into its teeny, tiny footprint.

If you cast your mind back a few years, micro ATX (mATX) boards were seen very much as the poor relations of the motherboard world; they didn't support the latest processors or chipset technologies, and weren't exactly overly blessed in the features department either.

One look at the P8P67M-Pro from Asus, though, shows exactly where the small format motherboard now sits in the consumer's – and, more importantly, the manufacturer's – consciousness.

Read the full Asus P8P67M-Pro review

ECS P67H2-A Black Extreme - £150
P67 Chipset

ECS p67h2-a black extreme

Everybody's doing it and so is ECS, releasing its top-end Sandy Bridge motherboard, the P67H2-A Black Extreme. ECS may not be as well known as some of its rivals, but it does seem to come out with some interesting boards, especially in its Black Extreme series for enthusiasts.

And the latest addition to this line is no exception. It combines Intel's Sandy Bridge pairing of Socket 1155 CPU support and the P67 chipset with Lucid's Hydra graphics technology. That extra chip allows the board to support mix-and-match combinations of graphics cards.

Read the full ECS P67H2-A Black Extreme review

Foxconn H61MX - £60
H61 Chipset

Foxconn h61mx

While all the flashy high-end motherboards make the news, win awards and make many a geek swoon, the real bread 'n' butter end of the market is down at the coal face: the value segment.

Here, making boards that don't cost much to build and shifting them in huge numbers is the name of the game.

Although you may find some mobo manufacturers using the business-orientated B65 chipset in this segment, Intel's de facto chipset for the value end of the market is the H61 Express. It's basically a cut-down version of the H67: it has only SATA 3Gbps support and offers 10 USB 2.0 ports instead of 14, making it an ideal platform for entry-level PCs.

Read the full Foxconn H61MX review

Foxconn H67S - £65
H67 Chipset

Foxconn h67s

The new Sandy Bridge processors are stupidly, impossibly, hilariously quick. By some metrics, they're the most impressive new CPUs in memory. However, one of the downsides is the requirement for a new motherboard.

Still, if you're going to make the migration to a new Sandy Bridge system, why not consider a small form factor board such as the H67S?

As the name suggests, it's based on the H67 chipset and supports the heavily revised integrated graphics core that appears in each and every Sandy Bridge CPU. The new core is available with either six or 12 execution units, known respectively as Intel HD Graphics 2000 and 3000.

Read the full Foxconn H67S review

MSI H61MU-E35 - £60
H61 Chipset

MSI h61mu-e35

There are a couple of routes to take if you're designing a motherboard for the value end of the market: do you just provide the basics at a good price, or do you add features you think people will want – and, indeed, pay extra for – in a board in this market segment?

While Foxconn has taken one route with Intel's H61 value chipset in the H61MX, MSI has taken the polar opposite with the H61MU-E35. They might both be built around Intel's value chipset and both be built on a micro ATX PCB, but there the similarity ends.

Read the full MSI H61MU-E35 review

MSI P67A-GD65 - £140
P67 Chipset

MSI p67a-gd65

This here P67 board from MSI represents something very different from its top-end Big Bang board. It isn't necessarily aimed at the high-end enthusiast segment; this is a board for people wishing to build up their new platform with a reasonable feature set at a reasonable price.

The bonus of having so much moved over to the CPU is that actually manufacturing motherboards is a little cheaper, so you get a lot for your £140.

There's full SLI and CrossFire licensing, the latest line in USB 3.0 and SATA 6Gbps storage interfaces and the 'Military Class' components that make these MSI boards rather robust.

Read the full MSI P67A-GD65 review

Sapphire Pure P67 Hydra - £160
P67 Chipset

Sapphire pure p67 hydra

We've already taken a look at Sapphire's first Intel-based motherboard, the Pure Black X58, which together with the AMD-based White Fusion announced Sapphire's re-emergence into the motherboard market.

Hot on its heels comes the second high-end Intel-based board, the Pure Black P67 Hydra, which, as you might gather from the name, not only uses Intel's current flagship chipset for the Sandy Bridge CPUs, the P67 Express, but also makes use of Lucid's HydraLogix graphics technology.

Read the full Sapphire Pure P67 Hydra review

Zotac H67-ITX WI-FI - £120
H67 Chipset

Zotac h67-itx wifi

Back in the mists of time (okay, October 2001) when VIA released its first ITX EPIA motherboards, my how people laughed. A tiny little motherboard, they all said; why, it's just a toy!

But a few people understood the concept and could see the possibilities, and from time to time the format would peek above the parapet to see if anybody was still interested.

Fast forward to today and maybe, just maybe, Intel's H67 chipset could be the making of the concept. And if Zotac's H67-ITX WiFi is any indication of where we're heading, then we're in for some interesting times ahead.

Read the full Zotac H67-ITX WiFi review

Benchmark analysis

It's demonstrative of just how much important goodness has been moved from the motherboard to the CPU itself that the benchmark results below are so damned close. In performance terms there is relatively little that manufacturers can do to separate their boards from the fierce competition.

That said, the mATX P8P67M-Pro does a reasonably good job of putting some distance between itself and the other boards in a few of our benchmarks at least – most specifically the gaming benchmarks, where Asus boards traditionally do well.

It's also interesting to note that the new H61 boards give a pretty good showing in performance terms, demonstrating that there's little sacrifice in speeds, despite the loss of key parts of the feature set compared to the similarly-priced H67.

The performance of the two mini-ITX boards has likewise been impressive, and the Sandy Bridge platform could well give rise to some frankly awesome tiny gaming machines. Just look at the discrete card performance of the likes of Zotac's H67-ITX WiFi.

bench 1

bench 2

The best Sandy Bridge motherboard is...

Asus p8p67m-pro

Trying to find an out-and-out winner in a range of Sandy Bridge motherboards that you'd actually buy is surprisingly tricky. With so much functionality being removed from the motherboards themselves, there's very little between any of these boards in performance terms.

Where the different boards do differentiate themselves, though, is in the feature sets they offer: can they give you decent multi-GPU options? Do you get the latest in SATA and USB I/O technology? And how fully featured is the BIOS?

Of course, in this test we've been looking at the affordable end of the Sandy Bridge motherboard lineup, and if you were really interested in getting the most out of your second gen Intel Core CPU, whatever the cost, this isn't where you'd be looking.

At the £300 mark you can see the difference in performance. Just look at the Asus Maximus IV Extreme or the MSI Big Bang Marshal: those pricey, chunky fellows offer far more in the way of processing and overclocking grunt. But they cater for a relatively small percentage of users.

These mid-price to low-end boards here represent the bulk of Sandy Bridge motherboards that us normal folk are likely to buy. So which one should you pick up then?

To us the obvious winner is the excellent mATX P8P67M-Pro from Asus. It's got a huge range of features from SATA 6Gbps and USB 3.0 to fully fledged SLI and CrossFire support. That's a lot of functionality to pack into a small form factor motherboard.

It's not the fastest mobo in all of our tests – indeed, some of the H61 boards post better Cinebench R11.5 scores – but it's a great all-round package. It's also got Asus' gaming pedigree, which means it's more than capable of delivering great frame rates.

We reviewed the CyberPower Gamecube, an excellent little gaming machine, and with this wee mobo we're looking at the basis for another small giant of the gaming PC market. Being able to have a serious gaming system clothed in a small chassis, without having to compromise, is a very attractive proposition. After all, you don't want a behemoth of a chassis bulging out from beneath your desk if there's no need for it to be that size.

Interestingly, the even smaller mini-ITX boards were almost as impressive. The Foxconn H67S and the pricier Zotac H67-ITX WiFi offer decent performance figures in a frankly ridiculous form factor. Of course, they don't have anywhere near the feature set of even their slightly bigger brethren, but for a fixed function they're worth a look.

The only issue with the Zotac board, as packed as it is, is that huge price tag. You may get USB 3.0 and an HDMI socket, but it's tough to say whether that's really worth paying twice as much as the almost-as-good Foxconn H67S.

It's still worth mentioning our personal bugbear with the make-up of the desktop Sandy Bridge: the lack of access to the processor graphics when a discrete card is used. In an H67, or even H61, setup, losing access to the QuickSync transcode core is a disappointment if you want the opportunity to game with your rig too.

Lucid's Virtu GPU virtualisation tech should be ready soon, and ought to be a simple update for the relevant boards. It will give you simultaneous access to both the Sandy Bridge graphics core and a discrete PCI-e card, depending on which is better suited to a particular task – and that could end up being the death knell for the P67 platform.

There's also the spectre of the as yet unconfirmed Z68 chipset on the horizon, offering P67 performance with a processor graphics connection to boot. At the moment, though, if you're after a decent, feature-rich P67, then the bargainous P8P67M-Pro will serve pretty much all your needs.



Review: Audio-Technica AT120E/T

Posted: 10 Apr 2011 02:30 AM PDT

Audio-Technica is one of the two leading cartridge brands from Japan (the other of course being Denon). It maintains a good range of models, both moving-magnet and moving-coil, the AT120E/T being one of the former.

It looks quite budget and the packaging is nothing fancy, but there's some clever technology inside, including AT's 'Paratoroidal Signal Generator' assembly.

This is the company's implementation of a moving-magnet generator: it's not fundamentally different from other types, but it is ingenious and is claimed to involve fewer electrical and mechanical connections than other MM designs.

An aluminium cantilever couples it to the stylus, which has a simple elliptical profile. The stylus is removable and needs to be removed to fit the cartridge, although we were surprised by how hard a pull is needed to separate the stylus assembly from the rest of the unit.

Mounting is via two semi-circular holes in the plastic top plate – not the strongest assembly ever, and we would advise against over-zealous bolt-tightening.

AT quotes an unusually wide range of tracking weights for this cartridge – 1 to 1.8g: we settled on 1.5g, which gave secure tracking on all normal discs.

A couple of audiophile test discs gave it slight problems at any setting, but that's common to many cartridges. It seems fairly insensitive to Vertical Tracking Angle (VTA) and we found it to sound sweet and well detailed across a range of discs.

Its bass is a little unpredictable, generally strong but sometimes slightly underwhelming with sustained bass notes. Tonal balance is good, extended in the treble but not over-bright.



Review: Scheu Analog Cello (Jelco SA-250 tonearm)

Posted: 10 Apr 2011 02:00 AM PDT

Although a relative newcomer to the UK, Scheu Analog has been in business since the late 1980s. That makes it one of a number of turntable manufacturers that started up just as the LP was allegedly in its death throes.

Perhaps for that reason, the company's range is not vast (there are only four turntables and three arms available), but it is distinctly exotic-looking in design. The Cello (including a Jelco SA-250 arm) is a rectangular slab of acrylic with three feet, an arm mount and a bearing.

The most obvious difference from other designs is that the motor is housed, very ingeniously, in the front left foot. It's a small DC motor with electronic speed control and a small toggle switch that selects the speed.

tonearm

Instead of a drive belt there's a drive string. In fact, a thin thread can be a highly satisfactory substitute, and Scheu simply provides a small bobbin of fine nylon thread just over a tenth of a millimetre in diameter. However, as it's up to the happy owner to cut a length of this and tie a knot in the right place, you may end up exhausting your supply of expletives.

Eventually, one gets a good, tight loop of the right length, which gives good drive to the frosted-finish acrylic platter.

Sound quality

Once again, there was some disagreement among our listeners about the Scheu's bass: is it powerful and propulsive or a little shy? A little investigative work suggests the probable cause.

Playing a variety of familiar discs suggested that the bass is rather better in lively, dynamic moments (a rock drum kit being a perfect example) than in sustained tuned notes (church organ, bass tuba and so forth). The latter tends to sound a little underwhelming, although taken in isolation it's not at all bad.

But the transient energy conveyed by the deck goes a long way towards making the sound very attractive. A closely related area of performance, 'pace', was also singled out for praise.

This is a little more subtle than rhythm, involving as it does not just excitement but also, at suitable moments, restraint. That's necessary to keep the sound from being too frantic, something that we've all come across now and then.

Higher frequencies are well served by the Cello, with good detail across the board and very good life and vibrancy. The treble is always very clean, but prepared to scream and shout when the music demands it – no shrinking violet here!

This can be deceptive, though, the treble occasionally seeming restrained in recordings that are only moderately busy in the top octaves. There was a little puzzlement expressed at the Cello's handling of dynamics. Just occasionally, detail seems a little muted, though as mentioned above it is generally good, and surface noise seems slightly more prominent than via some of the other decks.

Imaging is good, instruments occupying a consistent and stable position in space and there is some decent depth in the image, too.



In Depth: Apps: the future of tech or a passing fad?

Posted: 10 Apr 2011 02:00 AM PDT

App - seldom has such a small word had such a big impact. It's always on the lips of tech's biggest and brightest firms: Apple, Microsoft, Google and BlackBerry. Smartphones, tablets, TVs, fridges, cars: if your devices don't already sport the ability to run small, slick, one-trick programs, then the option is only an upgrade away.

The term has become so ubiquitous that the American Dialect Society - a learned group dedicated to the study of the English Language in America - dubbed 'app' its 2010 Word of the Year.

In case you're interested, it fended off stiff competition from the Cookie Monster's 'nom'. Praise indeed.

But what exactly is an app? Is 'app' really nothing more than a slightly irritating, cutesy term for 'application', or are apps indeed distinct from the software we PC users have enjoyed since the days of DOS? How will they affect the future of computing?

If you're interested in the etymological origins of the word 'app', you can probably thank Apple CEO Steve Jobs. Apple began propelling the word into the popular consciousness in 2008, but Jobs' love of the term dates back to the 1980s and NeXT.

Back then it was one of his favourite terms, presumably due to its friendliness compared to the Microsoft-endorsed 'software program' and the middle-ground 'application'.

A state of mind

Colin Walsh, owner of Celsius Game Studios, thinks apps aren't so much about a shift in software as about how people think about it:

"We've always had apps in the sense of focused programs that do one thing well. But because of the nature of portable devices, which by their design force a single-app/single-purpose user model, more people and more developers now think about specialised applications."

Walsh thinks this is good, because it's forcing the market away from "bloated monstrosities that do a whole bunch of things poorly," resulting in better software. This in turn engages people who wouldn't otherwise interact with many products, often encouraging them to source their own.

Mat Gadd, a software engineer at Advance IT Group, says people in his office have long referred to 'applications', but now his mother - who is in her 50s - has an iPhone 4 and knows what an app is.

"For her Windows PC I had to install software for her, but now she calls me to chat about the latest and greatest app she's discovered and started using on her iPhone," he says.

For developers, this rapid shift in climate is quite a lot to take in, and it points to an unfamiliar future. "As Apple has embraced the mainstream consumer, apps are seen as the same as any other piece of media you can buy through iTunes - a true commodity rather than some artisan creation," explains James Thomson, founder of Mac (and now also iOS) developer TLA Systems.

"Some people will appreciate the craftsmanship involved, and truly excellent software will always find an audience, but we're no longer selling to the same technical hobbyist market we might have sold shareware to a decade or two ago."

Thomson's comment, with its overt consumerist angle, happens to echo the thoughts of Ian Bogost. In his blog post 'What is an app?', Bogost expands on the American Dialect Society's definition of 'app' as "the shortened slang term for a computer or smartphone application" by suggesting that it's also the applications themselves that are shortened and colloquialised.

Mac app store

MAC APP STORE: In making apps simple to buy, Apple has reinvigorated the software market

Instead of software suites - equivalent to a massive CD boxset aiming to cover everything and everyone - apps are more akin to singles. They're bite-sized, one-purpose applications, built for a specific function.

And mirroring what Apple did to the music industry, breaking the market into pieces and giving people somewhere to shop for the bits, Apple broke software into its component parts too, creating an App Store to provide users with a quick-fix solution to almost any problem.

An Apple a day

You might think that, with apps individually being more focused than traditional applications, you'd end up with a more coherent system, but the reality is more complicated.

Due to the typically isolated nature of an app, the notion of system-wide coherence is becoming deprecated in the app age. Logic Colony software developer Krishna Kotecha reckons this distinction between applications and more tightly focused apps will become "increasingly pronounced", although the focus and simple interfaces of apps will let typical users "get the most out of their software".

Certainly, usability has been key from the start with Apple's app model. Walsh argues that Apple "saved the mobile software ecosystem from itself", and says that before the App Store, it was incredibly difficult to sell apps.

The App Store deals with payment and distribution, along with making apps more discoverable. As iOS developer Jedrzej Jakubowski enthuses, "Customers are trained and encouraged to buy apps on a regular basis, which they can do in a very simple fashion."

Kotecha adds that even the much-criticised review process and 'walled garden' aren't a problem. "Unless you want to play fast and loose with platform development rules, private APIs, or user privacy. Apple has done a good job protecting the value of the iOS app ecosystem for both users and developers, because users don't experience the problems that break their PCs, such as malware and spyware. Users aren't afraid of buying and installing apps, and that's a really powerful thing."

Despite coming from the more open Mac development environment, Thomson agrees, and adds that the walled-garden criticism is overplayed: "Apple has found a good middle ground between the PC and games console approaches to software distribution. We can put apps in front of a broad consumer audience, and while there are restrictions, there aren't as many as if you wanted to write for, say, a Nintendo handheld."

A bad Apple

Aside from the well-publicised review process, the App Store has some shortcomings that impact on existing products, but these also point to how Apple may improve its offering in the future.

Thomson would like to see more flexibility in pricing, with the ability to offer upgrades - a common source of friction with users - although he concedes that this is "perhaps a sign of old-school thinking, given that Apple now rarely offers upgrade pricing on its own products".

Elsewhere, although the app-buying process is discoverable, apps themselves are trickier to unearth due to the sheer number available and Apple's relatively limited search functionality; Thomson notes that many people rely heavily on the charts to 'discover' apps, often keeping a very limited number of products selling well and condemning the remainder.

There's also a tendency for app prices - especially for games - to 'race to the bottom', which has trained users to expect apps to cost 99 cents, or 59p. These needn't be considered entirely bad things.

As developer Neil Inglis points out, "from a consumer standpoint, the two big things are cheap apps and an almost endless supply of apps - I can't remember the last time I looked for an app to perform a task and couldn't find something".

However, because of the difficulty in recouping development costs, there's a tendency for many apps to be very simple, hastening the atomisation of software. This direction means things are going to be tough for any app that's not marketable to a widely understood niche, relegating general or experimental apps to relative obscurity.

Facebook app

FACEBOOK: Well-designed apps can simplify and improve the user experience of more complex web services

But atomisation can be a benefit to developers and users alike, assuming an app has a real use. As Kotecha explains, "It keeps apps focused tightly on one thing, and focused apps aren't a bad thing at all, because most non-technical users barely use a fraction of the functionality available on their desktop applications."

Perhaps as a result of this shift, cross-pollination is increasing between Apple systems, mostly with concepts from iOS infiltrating Mac OS X. This, along with the massive success of the App Store, meant it was inevitable that Apple would bring the 'app' model to the desktop.

Announced in 2010, the Mac App Store arrived in January 2011, and although only available to a minority of Mac users (those running Snow Leopard who have updated the operating system to version 10.6.6), the store had over a million downloads within its first day.

As on iOS, the Mac-based store distils the process of purchase and installation down to a couple of clicks and a password, and it also potentially increases exposure for apps, along with dealing with the supply chain, freeing developers to concentrate on other things. (Because of this, several long-time Mac developers took only a few days to go Mac App Store-only with their products.)

It's clear that this is where Apple sees the future of apps on the desktop. Unlike the iOS App Store, the Mac App Store isn't the only way to install third-party applications, but Thomson argues that "it will soon be the default way most Mac users find and buy software".

Gadd agrees, pointing to a potentially rosy future for developers as users realise they can click on a Dock icon, then, without much effort, "find an app to do a specific task more easily". He adds: "Since users can search, browse and buy securely within one application, that'll make them feel more at ease. You'll be less likely to find people worrying they'll 'break' the computer if they download something."

Dumbing down

Although the store has been a success, big names like Microsoft and Adobe were notably absent from its launch; elsewhere, popular utilities like Default Folder X are barred for not adhering to Apple's relatively strict rules regarding the type of application allowed for sale in its store.

It was also notable that most launch products were casual games or simple utilities, often ported from iOS. Plenty disregard or break Mac conventions, threatening a cornerstone of usability on a windowed system: consistency - knowing what's likely to happen when you perform a certain action.

Those worries have been compounded by demos of Mac OS X Lion, which champions an iOS-like full-screen app view, devoid of dock, windows and taskbars, leading to what some call the 'appisation' of the Mac - a dumbing-down that could spell the end of traditional computing if Microsoft subsequently follows suit.

Some developers shudder at this prospect. "A desktop computer isn't a tablet or a phone - the interaction model is different and only certain apps benefit from the full-screen approach," argues Walsh, who hopes the windowing model remains.

Others aren't so sure, suggesting that Apple's app model could soon become commonplace throughout the computing world. "The atomisation is a good thing," says Jakubowski. "Prices go down, apps are more focused and are better at what they do. Competition increases, leading to more well-designed apps."

Kotecha also isn't against Apple's plans: "Full-screen apps are the way to go on smaller devices, where window management is frustrating." As computing becomes more mobile and portable, Jakubowski expects the app model to take hold, relegating the application window to a niche "desktop computer world" where the "hassles of window management and filesystems will stay in an environment where they still matter".

Store wars

Of course, Apple is hardly alone in the battle of the apps. Although arguably the leader in the field - a surprising turn of events in itself, given Apple's relatively niche position on the desktop - other companies are fighting back.

Chrome web store

OPEN BOOK?: The Chrome Web Store may be online, with a selection of web apps, but it's not nearly as open as what Mozilla's planning to launch in 2012

Google in particular has risen to the challenge, although it's clear that the search giant's vision for the future of apps is wildly different from Apple's own. Where Apple seeks to control and enhance user experience by way of curation, emphasis on quality and a single place to purchase apps, Google's primary motivation remains ad revenue.

Therefore, in Google's mobile offering, Android Market is almost the polar opposite of Apple's App Store: there's barely any curation, open (albeit often minor) intellectual property breaches are rife, and, crucially, it's not the only place Android users can download third-party software for their devices.

Various carriers offer storefronts, and even Amazon is prepping an Android store. But as much as Apple's stance regarding apps is propelling the concept forwards, there's some consensus that Google's efforts do the opposite.

Android users typically remain apathetic towards apps, not caring about them or citing usability and quality issues. "Part of the problem is [that] it's tough for developers to test apps across the different device and OS combinations on the market," explains W3i cofounder Robert Weber. "This leads to crap getting out into the market - Google should focus on providing better tools to Android developers for testing."

There's also been the thorny issue of payment mechanisms; in stark contrast to the App Store, with its millions of credit cards linked to iTunes, Android Market initially only enabled US and UK developers to offer paid apps.

Value for money

The payment situation has slowly improved, but the user experience is now such that 'free' is the expected price for most apps.

While advocates of open software laud this notion, Kotecha thinks it's detrimental to the platform as a whole. "Quality apps take time to build," he says. "There must be a clear monetisation mechanism for app development if a platform wants quality apps."

Without this, the perception of Android apps - indeed, apps as a whole, given that Android is gaining traction and now leading the way in smartphone market share - could become one of low quality and poor usability.

Zattikka producer Robin Clarke thinks this kind of lowbrow, anything-goes ecosystem leads to "a situation where nobody is willing to stick their neck out and invest in making their offering good enough to be the de-facto standard," and hopes Google will "lead by example and make Android Market competitive with the iOS App Store."

It's possible that Amazon could take up that particular challenge, since its vision of an Android app store appears closer to Apple's store than Google's, with app-screening and an emphasis on quality. And while Amazon will discount apps as and when it sees fit, quickfire discounts are common on Apple's stores, too.

If this is indeed what transpires in 2011, it will likely revolutionise Android apps and perhaps dismiss the belief that apps should be free. Weber reckons Android developers will finally start making some decent money, prompting more developers to take an interest in the platform, increasing competition and raising the quality of apps.

And Weber adds that the scenario also provides a major advantage over Apple, in that by offering competing but 'fragmented' distribution points, a "higher quantity of developers can get a piece of each of those distribution points," rather than on iOS where there's only one store with one set of charts.

Heading online

With fragmentation affecting monetisation, usability and the public perception of apps as useful, you'd think a trend towards a locked-down Apple-style approach would be inevitable. But some in the industry would prefer to obliterate the concept of locking down apps for good.

Brain

BEST OF THE BEST, SIR!: AppBrain.com aims to present the best of the Android Market

The main driving force behind this model is Mozilla, which wants to leverage open web standards with its Open Web Applications concept, and create an app ecosystem that isn't reliant on any one device or locked down to any one store. This would remove Apple-style censorship and carrier lock-ins (Android itself may be touted as 'open', but carriers regularly block stores and services from Android devices or force their own products to act as the main entry point to apps).

Pascal Finette, Director of Mozilla Labs, claims that web apps also offer additional benefits: "From a developer perspective, they dramatically lower development costs and often complexity, especially when a developer wants to target multiple platforms - there's no porting an Objective C iPhone app to Java in Android. For users, web apps provide similar functionality to native apps, but allow for portability - you can take an app from desktop to mobile and vice versa."
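
To make that cost argument concrete: in Mozilla's proposal a web app is, in essence, an ordinary URL plus a small JSON manifest describing it, rather than separate native codebases per platform. Here's a minimal sketch in Python; the field names are illustrative of the idea, not a definitive Mozilla specification.

import json

# A hypothetical Open Web App manifest: one small JSON description plus
# a hosted URL stands in for separate Objective-C and Java codebases.
# Field names and values here are illustrative assumptions.
manifest = {
    "name": "Example Notes",
    "description": "A simple note-taking web app",
    "launch_path": "/index.html",
    "icons": {"128": "/img/icon-128.png"},
    "developer": {"name": "Example Ltd", "url": "https://example.com"},
}

print(json.dumps(manifest, indent=2))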

In the cloud

Finette optimistically believes that, in the mid-to-long-term, the vast majority of apps will be written on top of the Open Web stack, but this isn't yet a popular viewpoint.

Despite being a month older than the Mac App Store, the Chrome Web Store is already seemingly floundering: unable to drive traffic to developers, sales have been described rather generously as 'lacklustre' in the technology press - and monetisation isn't the only problem it's facing.

"Apple's app model destroyed the advantage of pure web apps compared to native code," argues Kotecha. "The biggest problems native apps had were discoverability and reliability in terms of impact on the system," he explains.

"But iOS dealt with those problems, and you can now have an app running on a mobile device that talks to a server and stores data locally, but that also provides a rich user experience. Comparatively, web apps seem like a very weak proposition. Native code will always allow a developer the freedom to be more creative in their software development."

Similarly, developers seem dismissive of those who advocate 'open' and cloud storage as the only future for apps. "'Open' is the most meaningless word in the tech industry today," grumbles Kotecha. "Where it's being used, you don't need to look very far to see someone's business agenda at play - when Mozilla says 'install anywhere', it means 'install anywhere Mozilla technology can be installed'."

And while Gadd says that he'd "love for there to be some form of reliable, always-on connectivity," such things aren't, and may never be, entirely viable: "Between ISP failures, power failures and lack of mobile coverage alone, relying on being able to reach a remote service to do any work is asking for trouble."

On storage, Kotecha says pitching 'cloud' versus 'local' is a red herring: "Users and apps will use both the cloud and local storage. Even if cloud-based technology becomes widespread, the need for local storage isn't going anywhere - after all, you still carry a wallet with you, even though you have a bank account."

And a similar hybrid approach is likely to be how apps evolve, thinks Weber: "Especially on mobile, native clients with web functionality will dominate, despite all the hype about HTML5 apps stealing the limelight away from existing native mobile apps."

Looking at the app market to date, iterative changes are predictable: the best iOS apps will continue to evolve, remaining focused and usable, but increasing in sophistication, enabling users to perform increasingly complex tasks.

Android will follow suit to some extent, perhaps driven by Amazon at the high end, and Google flooding the market with free, ad-supported apps.

Mobile stores on other platforms will see rapid growth as hardware improves and lessons are learned from market leaders, and while browsers won't take over, they will expose new features to apps and offer richer experiences for those choosing to favour web apps.

App to the future

It remains to be seen whether Android's market lead will result in a shift in emphasis regarding target platforms, leading to more demand for tight-budget, cross-platform development and so to compromises on iOS. Kotecha thinks it's feasible the app approach will simply raise everyone's game, "driving mainstream users to be more demanding of user experiences in all areas they encounter software".

Train times

FOCUSSED FUNCTION: Apps often force focus on one task, rather than being everything to all

Windows is perhaps the platform that could benefit most from such thinking, and rumours of an app store for Windows 8 abound. Walsh wonders whether Microsoft could use an app store to distance itself from past accusations of vendor lock-in and monopoly abuse by directing users to third-party software from day one.

"Couple that with the potential for keeping people on Windows and the money they could make in the process, it's pretty much a no-brainer for the company," he thinks. "That being said, I've little faith in Microsoft getting it quite right first time, but they'll keep at it until it's decent."

Gadd adds that such a store could also reduce viruses and malware on Windows PCs, if Microsoft adopted an approvals process akin to Apple's and pushed its app store as the default.

TV or not TV?

The other major platform being touted as perfect for apps is television. "But TV is a difficult fit for apps, apart from games and passive media," says Walsh. "The use-case just isn't there - no one wants to browse the web or edit documents on their TV, and I think that has to do with how TVs are situated in many homes as an entertainment device and not a 'work' device. Maybe that will change, but I'm not holding my breath."

Kotecha quips that the platform that has successfully dominated TV will continue to do so: TV programmes. That rather scuppers the long-held belief that convergence would inevitably lead to the emergence of a single dominant platform, and that that platform would be the television. It's the app economy that has destroyed this idea.

The smartphone boom has made people realise once again that they want different devices for different tasks: social networking, music and gaming on smartphones; reading and light work on tablets; more complex tasks on desktop PCs.

No one device can do all these things satisfactorily. However, because the app economy has also driven a thirst for single-purpose, focused apps, services can more easily thrive across multiple platforms.

"I don't think that any one device or system will dominate the market," concludes Jakubowski. "Rather we'll see a multitude of devices with certain services available on all of them, delivered through apps."



In Depth: Apps: the future of tech or a passing fad?

Posted: 10 Apr 2011 02:00 AM PDT

App - seldom has such a small word had such a big impact. It's always on the lips of tech's biggest and brightest firms: Apple, Microsoft, Google and BlackBerry. Smartphones, tablets, TVs, fridges, cars: if your devices don't already sport the ability to run small, slick, one-trick programs, then the option is only an upgrade away.

The term has become so ubiquitous that the American Dialect Society - a learned group dedicated to the study of the English Language in America - dubbed 'app' its 2010 Word of the Year.

In case you're interested, it fended off stiff competition from the Cookie Monster's 'nom'. Praise indeed.

But what exactly is an app? Is 'app' really nothing more than a slightly irritating, cutesy term for 'application', or are apps indeed distinct from the software we PC users have enjoyed since the days of DOS? How will they affect the future of computing?

If you're interested in the etymological origins of the word 'app', you can probably thank Apple CEO Steve Jobs. Apple began propelling the word in the popular consciousness in 2008, but Jobs' love of the term dates back to the 1980s and NeXT.

Back then it was one of his favourite terms, presumably due to its friendliness compared to the Microsoft-endorsed 'software program' and the middle-ground 'application'.

A state of mind

Colin Walsh, owner of Celsius Game Studios, thinks apps aren't so much about a shift in software as about how people think about it:

"We've always had apps in the sense of focused programs that do one thing well. But because of the nature of portable devices, which by their design force a single-app/single-purpose user model, more people and more developers now think about specialised applications."

Walsh thinks this is good, because it's forcing the market away from "bloated monstrosities that do a whole bunch of things poorly," resulting in better software. This in turn engages people who wouldn't otherwise interact with many products, often encouraging them to source their own.

Mat Gadd, a software engineer at Advance IT Group says people in his office have long referred to 'applications', but now his mother - who is in her 50s - has an iPhone 4 and knows what an app is.

"For her Windows PC I had to install software for her, but now she calls me to chat about the latest and greatest app she's discovered and started using on her iPhone," he says.

For developers, this rapid shift in climate is quite a lot to take in, and it points to an unfamiliar future. "As Apple has embraced the mainstream consumer, apps are seen as the same as any other piece of media you can buy through iTunes - a true commodity rather than some artisan creation," explains James Thomson, founder of Mac (and now also iOS) developer TLA Systems.

"Some people will appreciate the craftsmanship involved, and truly excellent software will always find an audience, but we're no longer selling to the same technical hobbyist market we might have sold shareware to a decade or two ago."

Thomson's comment, with its overt consumerist angle, happens to echo the thoughts of Ian Bogost. In his blog post 'What is an app?', Bogost expands on the American Dialect Society's definition of 'app' as "the shortened slang term for a computer or smartphone application" by suggesting that it's also the applications themselves that are shortened and colloquialised.

Mac app store

MAC APP STORE: In making apps simple to buy, Apple has reinvigorated the software market

Instead of software suites - equivalent to a massive CD boxset aiming to cover everything and everyone - apps are more akin to singles. They're bite-sized, one-purpose applications, built for a specific function.

And mirroring what Apple did to the music industry, breaking the market into pieces and giving people somewhere to shop for the bits, Apple broke software into its component parts too, creating an App Store to provide users with a quick-fix solution to almost any problem.

An Apple a day

You might think that, with apps individually being more focused than traditional applications, you'd end up with a more coherent system, but the reality is more complicated.

Due to the typically isolated nature of an app, the notion of system-wide coherence is becoming deprecated in the app age. Logic Colony software developer Krishna Kotecha reckons this distinction between applications and more tightly focused apps will become "increasingly pronounced", although the focus and simple interfaces of apps will let typical users "get the most out of their software".

Certainly, usability has been key from the start with Apple's app model. Walsh argues that Apple "saved the mobile software ecosystem from itself", and says that before the App Store, it was incredibly difficult to sell apps.

The App Store deals with payment and distribution, along with making apps more discoverable. As iOS developer Jedrzej Jakubowski enthuses, "Customers are trained and encouraged to buy apps on a regular basis, which they can do in a very simple fashion."

Kotecha adds that even the much-criticised review process and 'walled garden' aren't a problem. "Unless you want to play fast and loose with platform development rules, private APIs, or user privacy. Apple has done a good job protecting the value of the iOS app ecosystem for both users and developers, because users don't experience the problems that break their PCs, such as malware and spyware. Users aren't afraid of buying and installing apps, and that's a really powerful thing."

Despite coming from the more open Mac development environment, Thomson agrees, and adds that the walled-garden criticism is overplayed: "Apple has found a good middle ground between the PC and games console approaches to software distribution. We can put apps in front of a broad consumer audience, and while there are restrictions, there aren't as many as if you wanted to write for, say, a Nintendo handheld."

A bad Apple

Aside from the well-publicised review process, the App Store has some shortcomings that impact on existing products, but these also point to how Apple may improve its offering in the future.

Thomson would like to see more flexibility in pricing, with the ability to offer upgrades - a common source of friction with users - although he concedes that this this is "perhaps a sign of old-school thinking, given that Apple now rarely offers upgrade pricing on its own products".

Elsewhere, although the app-buying process is discoverable, apps themselves are trickier to unearth due to the sheer number available and Apple's relatively limited search functionality; Thomson notes that many people rely heavily on the charts to 'discover' apps, often keeping a very limited number of products selling well and condemning the remainder.

There's also a tendency for app prices - especially for games - to 'race to the bottom', which has trained users to expect apps to cost 99 cents, or 59p. These needn't be considered entirely bad things.

As developer Neil Inglis points out, "from a consumer standpoint, the two big things are cheap apps and an almost endless supply of apps - I can't remember the last time I looked for an app to perform a task and couldn't find something".

However, because of the difficulty in recouping development costs, there's a tendency for many apps to be very simple, hastening the atomisation of software. This direction means things are going to be tough for any app that's not marketable to a widely understood niche, relegating general or experimental apps to relative obscurity.

Facebook app

FACEBOOK: Well-designed apps can simplify and improve the user experience of more complex web services

But atomisation can be a benefit to developers and users alike, assuming an app has a real use. As Kotecha explains, "It keeps apps focused tightly on one thing, and focused apps aren't a bad thing at all, because most non-technical users barely use a fraction of the functionality available on their desktop applications."

Perhaps as a result of this shift, cross-pollination is increasing between Apple systems, mostly with concepts from iOS infiltrating Mac OS X. This, along with the massive success of the App Store, meant it was inevitable that Apple would bring the 'app' model to the desktop.

Announced in 2010, the Mac App Store arrived in January 2011, and although only available to a minority of Mac users (those running Snow Leopard who have updated the operating system to version 10.6.6), the store had over a million downloads within its first day.

As on iOS, the Mac-based store distils the process of purchase and installation down to a couple of clicks and a password, and it also potentially increases exposure for apps, along with dealing with the supply chain, freeing developers to concentrate on other things. (Because of this, several long-time Mac developers took only a few days to go Mac App Store-only with their products.)

It's clear that this is where Apple sees the future of apps on the desktop. Unlike the iOS App Store, the Mac App Store isn't the only way to install third-party applications, but Thomson argues that "it will soon be the default way most Mac users find and buy software".

Gadd agrees, pointing to a potentially rosy future for developers as users realise they can click on a Dock icon, then, without much effort, "find an app to do a specific task more easily". He adds: "Since users can search, browse and buy securely within one application, that'll make them feel more at ease. You'll be less likely to find people worrying they'll 'break' the computer if they download something."

Dumbing down

Although a success, big names like Microsoft and Adobe were notably absent from the Mac App Store's launch; elsewhere, popular utilities like Default Folder X are barred for not adhering to Apple's relatively strict rules regarding the type of application allowed for sale in its store.

It was also notable that most launch products were casual games or simple utilities, often ported from iOS. Plenty disregard or break Mac conventions, threatening a cornerstone of usability on a windowed system: consistency - knowing what's likely to happen when you perform a certain action.

Those worries have been compounded by demos of Mac OS X Lion, which champions an iOS-like full-screen app view, devoid of dock, windows and taskbars, leading to what some call the 'appisation' of the Mac - a dumbing-down that could spell the end of traditional computing if Microsoft subsequently follows suit.

Some developers shudder at this prospect. "A desktop computer isn't a tablet or a phone - the interaction model is different and only certain apps benefit from the full-screen approach," argues Walsh, who hopes the windowing model remains.

Others aren't so sure, suggesting that Apple's app model could soon become commonplace throughout the computing world. "The atomisation is a good thing," says Jakubowski. "Prices go down, apps are more focused and are better at what they do. Competition increases, leading to more well-designed apps."

Kotecha also isn't against Apple's plans: "Full-screen apps are the way to go on smaller devices, where window management is frustrating." As computing becomes more mobile and portable, Jakubowski expects the app model to take hold, relegating the application window to a niche "desktop computer world" where the "hassles of window management and filesystems will stay in an environment where they still matter".

Store wars

Of course, Apple is hardly alone in the battle of the apps. Although arguably the leader in the field - a surprising turn of events in itself, given Apple's relatively niche position on the desktop-other companies are fighting back.

Chrome web store

OPEN BOOK?: The Chrome Web Store may be online, with a selection of web apps, but it's not nearly as open as what Mozilla's planning to launch in 2012

Google in particular has risen to the challenge, although it's clear that the search giant's vision for the future of apps is wildly different from Apple's own. Where Apple seeks to control and enhance user experience by way of curation, emphasis on quality and a single place to purchase apps, Google's primary motivation remains ad revenue.

Therefore, in Google's mobile offering, Android Market is almost the polar opposite of Apple's App Store: there's barely any curation, open (albeit often minor) intellectual property breaches are rife, and, crucially, it's not the only place Android users can download third-party software for their devices.

Various carriers offer storefronts, and even Amazon is prepping an Android store. But as much as Apple's stance regarding apps is propelling the concept forwards, there's some consensus that Google's efforts do the opposite.

Android users typically remain apathetic towards apps, not caring about them or citing usability and quality issues. "Part of the problem is [that] it's tough for developers to test apps across the different device and OS combinations on the market," explains W3i cofounder Robert Weber. "This leads to crap getting out into the market - Google should focus on providing better tools to Android developers for testing."

There's also been the thorny issue of payment mechanisms; in stark contrast to the App Store, with its millions of credit cards linked to iTunes, Android Market initially only enabled US and UK developers to offer paid apps.

Value for money

The payment situation has slowly improved, but user expectations are now such that 'free' is the assumed price for most apps.

While advocates of open software laud this notion, Kotecha thinks it's detrimental to the platform as a whole. "Quality apps take time to build," he says. "There must be a clear monetisation mechanism for app development if a platform wants quality apps."

Without this, the perception of Android apps - indeed, apps as a whole, given that Android is gaining traction and now leading the way in smartphone market share - could become one of low quality and poor usability.

Zattikka producer Robin Clarke thinks this kind of lowbrow, anything-goes ecosystem leads to "a situation where nobody is willing to stick their neck out and invest in making their offering good enough to be the de-facto standard," and hopes Google will "lead by example and make Android Market competitive with the iOS App Store."

It's possible that Amazon could take up that particular challenge, since its vision of an Android app store appears closer to Apple's store than Google's, with app-screening and an emphasis on quality. And while Amazon will discount apps as and when it sees fit, quickfire discounts are common on Apple's stores, too.

If this is indeed what transpires in 2011, it will likely revolutionise Android apps and perhaps dismiss the belief that apps should be free. Weber reckons Android developers will finally start making some decent money, prompting more developers to take an interest in the platform, increasing competition and raising the quality of apps.

And Weber adds that the scenario also provides a major advantage over Apple, in that by offering competing but 'fragmented' distribution points, a "higher quantity of developers can get a piece of each of those distribution points," rather than on iOS where there's only one store with one set of charts.

Heading online

With fragmentation affecting monetisation, usability and the public perception of apps as useful, you'd think a trend towards a locked-down Apple-style approach would be inevitable. But some in the industry would prefer to obliterate the concept of locking down apps for good.

Brain

BEST OF THE BEST, SIR!: AppBrain.com aims to present the best of the Android Market

The main driving force behind this model is Mozilla, which wants to leverage open web standards with its Open Web Applications concept, and create an app ecosystem that isn't reliant on any one device or locked down to any one store. This would remove Apple-style censorship and carrier lock-ins (Android itself may be touted as 'open', but carriers regularly block stores and services from Android devices or force their own products to act as the main entry point to apps).

Pascal Finette, Director of Mozilla Labs, claims that web apps also offer additional benefits: "From a developer perspective, they dramatically lower development costs and often complexity, especially when a developer wants to target multiple platforms - there's no porting an Objective C iPhone app to Java in Android. For users, web apps provide similar functionality to native apps, but allow for portability - you can take an app from desktop to mobile and vice versa."

In the cloud

Finette optimistically believes that, in the mid-to-long-term, the vast majority of apps will be written on top of the Open Web stack, but this isn't yet a popular viewpoint.

Despite being a month older than the Mac App Store, the Chrome Web Store already seems to be floundering: it has proved unable to drive traffic to developers, sales have been described rather generously as 'lacklustre' in the technology press - and monetisation isn't the only problem it's facing.

"Apple's app model destroyed the advantage of pure web apps compared to native code," argues Kotecha. "The biggest problems native apps had were discoverability and reliability in terms of impact on the system," he explains.

"But iOS dealt with those problems, and you can now have an app running on a mobile device that talks to a server and stores data locally, but that also provides a rich user experience. Comparatively, web apps seem like a very weak proposition. Native code will always allow a developer the freedom to be more creative in their software development."

Similarly, developers seem dismissive of those who advocate 'open' and cloud storage as the only future for apps. "'Open' is the most meaningless word in the tech industry today," grumbles Kotecha. "Where it's being used, you don't need to look very far to see someone's business agenda at play - when Mozilla says 'install anywhere', it means 'install anywhere Mozilla technology can be installed'."

And while Gadd says that he'd "love for there to be some form of reliable, always-on connectivity," such things aren't, and may never be, entirely viable: "Between ISP failures, power failures and lack of mobile coverage alone, relying on being able to reach a remote service to do any work is asking for trouble."

On storage, Kotecha says pitching 'cloud' versus 'local' is a red herring: "Users and apps will use both the cloud and local storage. Even if cloud-based technology becomes widespread, the need for local storage isn't going anywhere - after all, you still carry a wallet with you, even though you have a bank account."

And a similar hybrid approach is likely to be how apps evolve, thinks Weber: "Especially on mobile, native clients with web functionality will dominate, despite all the hype about HTML5 apps stealing the limelight away from existing native mobile apps."

Looking at the app market to date, iterative changes are predictable: the best iOS apps will continue to evolve, remaining focused and usable, but increasing in sophistication, enabling users to perform increasingly complex tasks.

Android will follow suit to some extent, perhaps driven by Amazon at the high end, and Google flooding the market with free, ad-supported apps.

Mobile stores on other platforms will see rapid growth as hardware improves and lessons are learned from market leaders, and while browsers won't take over, they will expose new features to apps and offer richer experiences for those choosing to favour web apps.

App to the future

It remains to be seen whether Android's market lead will result in a shift in emphasis regarding target platforms, leading to more demand for tight-budget, cross-platform development and resulting in compromises on iOS; Kotecha thinks it's feasible the app approach will simply raise everyone's game, "driving mainstream users to be more demanding of user experiences in all areas they encounter software".

Train times

FOCUSSED FUNCTION: Apps often force focus on one task, rather than trying to be everything to everyone

Windows is perhaps the platform that could benefit most from such thinking, and rumours of an app store for Windows 8 abound. Walsh wonders whether Microsoft could use an app store to distance itself from past accusations of vendor lock-in and monopoly abuse by directing users to third-party software from day one.

"Couple that with the potential for keeping people on Windows and the money they could make in the process, it's pretty much a no-brainer for the company," he thinks. "That being said, I've little faith in Microsoft getting it quite right first time, but they'll keep at it until it's decent."

Gadd adds such a store could also reduce viruses and malware on Windows PCs, if Microsoft adopted an approvals process akin to Apple's and pushed its app store as the default.

TV or not TV?

The other major platform being touted as perfect for apps is television. "But TV is a difficult fit for apps, apart from games and passive media," says Walsh. "The use-case just isn't there - no one wants to browse the web or edit documents on their TV, and I think that has to do with how TVs are situated in many homes as an entertainment device and not a 'work' device. Maybe that will change, but I'm not holding my breath."

Kotecha quips that the platform that has successfully dominated TV will continue to do so: TV programmes. That rather scuppers the long-held belief that convergence would inevitably lead to the emergence of a single dominant platform - and that the platform would be the television. It's the app economy that has destroyed this idea.

The smartphone boom has made people realise once again that they want different devices for different tasks: social networking, music and gaming on smartphones; reading and light work on tablets; more complex tasks on desktop PCs.

No one device can do all these things satisfactorily. However, the app economy has also driven a thirst for single-purpose, focused apps, meaning services can more easily thrive across multiple platforms.

"I don't think that any one device or system will dominate the market," concludes Jakubowski. "Rather we'll see a multitude of devices with certain services available on all of them, delivered through apps."



Review: Scheu Analog Cello (Jelco SA-250 tonearm)

Posted: 10 Apr 2011 02:00 AM PDT

Although a relative newcomer to the UK, Scheu Analog has been in business since the late 1980s. This makes it one of a number of turntable manufacturers that started up just as the LP was allegedly in its death throes.

Perhaps for that reason, the company's range is not vast (there are only four turntables and three arms available), but it is distinctly exotic-looking in design. The Cello (including a Jelco SA-250 arm) is a rectangular slab of acrylic with three feet, an arm mount and a bearing.

The most obvious difference from other designs is that the motor is housed, very ingeniously, in the front left foot. It's a small DC motor with electronic speed control and a small toggle switch that selects the speed.

tonearm

Instead of a drive belt there's a drive string. In fact, a thin thread can do the job very satisfactorily, and Scheu simply provides a small bobbin of fine nylon thread, just over a tenth of a millimetre in diameter! However, as it's up to the happy owner to cut a length of this and tie a knot in the right place, you may end up exhausting your supply of expletives.

Eventually, one gets a good, tight loop of the right length, which gives good drive to the frosted-finish acrylic platter.

Sound quality

Once again, there was some disagreement among our listeners about the Scheu's bass: is it powerful and propulsive or a little shy? A little investigative work suggests the probable cause.

Playing a variety of familiar discs suggested that the bass is rather better in lively, dynamic moments (a rock drum kit being a perfect example) than in sustained tuned notes (church organ, bass tuba and so forth). The latter tends to sound a little underwhelming, although taken in isolation it's not at all bad.

But the transient energy conveyed by the deck goes a long way towards making the sound very attractive. A closely related area of performance, 'pace', was also singled out for praise.

This is a little more subtle than rhythm, involving as it does not just excitement but also, at suitable moments, restraint. That's necessary to keep the sound from being too frantic, something that we've all come across now and then.

Higher frequencies are well served by the Cello, with good detail across the board and very good life and vibrancy. The treble is always very clean, but prepared to scream and shout when the music demands it – no shrinking violet here!

This can be deceptive, though, the treble occasionally seeming restrained in recordings that are only moderately busy in the top octaves. There was a little puzzlement expressed at the Cello's handling of dynamics: just occasionally, detail seems a little muted, though, as mentioned above, it is generally good. Surface noise seems slightly more prominent than via some of the other decks.

Imaging is good, instruments occupying a consistent and stable position in space and there is some decent depth in the image, too.






Review: Rega P7 (RB700 tonearm)

Posted: 10 Apr 2011 01:30 AM PDT

In so many respects the P7 turntable is a classic Rega, but it actually shares very few components with the famous old Planar models.

It has an AC motor mounted directly behind the bearing, but it's a low-voltage motor powered from an external generator, which also allows electronic speed switching. It has a short belt drive to the sub-platter, but there is actually a pair of round-section belts and the sub-platter is metal.

There's a hard, rigid platter with a felt mat, but instead of the original glass this one is made of ceramic, complete with Michell-style underslung weights around the periphery.

We have mixed feelings about this platter recipe: sure, its dimensional stability is second to none, but the hardness is not relevant when there's a felt mat in the way.

The chassis is still particle board, with a metal trim, which serves both visual and functional purposes, helping to damp and disperse resonances in the chassis.

The arm on the P7 is the RB700. Here the inheritance from the classic RB300 is even clearer, and many of the parts are identical or, at least, identical in measurements. There are various changes, though, perhaps the most significant being the method of mounting: this arm mounts via three screws through holes in the stainless steel base plate, rather than the single large-diameter nut of the older models.

Rega rb700

The arm tube is given a special coating. There's a third hole in the headshell for cartridges that take a third bolt (including Rega's, of course). In common with the RB300, the counterweight is made of tungsten and is intended to be adjusted for perfect balance, downforce then being applied by a calibrated spring.

Most arms rely on moving the counterweight to set downforce, but a spring has the advantage of slightly increasing downforce as the arm rides up over a warp, making for more secure tracking of warped discs.

Sound quality

One of the occasionally amusing results of blind listening is that products apparently from diametrically opposed schools of thought emerge sounding quite similar - and few would have expected a Rega to receive many of the same comments as the Pro-Ject Xperience.

That's what happened, though, with one listener even pointing out the similarity directly. It was felt in general that the P7 had the edge in control and resolution and also integrated the bass better with the mid and treble, but its imaging and general presentation were thought quite a lot like those of the Pro-Ject.

In keeping with the traditions of the brand, a good rocking performance is invariably on offer with any disc of decent merit. In fact it's interesting how Rega has managed to keep that aspect intact from its earliest models, perhaps very slightly lessening the raw impact and at the same time adding more insight and better tonal balance.

Effectively, if you always enjoyed the Rega sound, this could be just the deck for you!



In Depth: Sandy Bridge and AMD Fusion: hybrid chips explained

Posted: 10 Apr 2011 12:00 AM PDT

We recently reviewed one of AMD's latest Fusion processors as part of the Asus E35M1-M Pro motherboard. Like Intel's recent Sandy Bridge chips, these processors put a programmable graphics engine on the same piece of silicon as the CPU. It's a modern minor miracle in metal.

Like the first wave of Intel's CPU/GPU hybrids, these initial Fusion chips from AMD cleverly and cautiously avoid underwhelming us by targeting netbooks, where performance expectations are pretty low. For Intel, it was bolting a rudimentary graphics core onto an Atom to create Pinetrail.

AMD's new Bobcat CPU architecture is more forward looking, but by going up against Pinetrail and low-end notebooks, it looks perhaps better than it is.

AMD will follow the trail that has been forged here, though. Over the last 12 months, Intel has moved up the value chain with its CPU-die graphics. Arrandale brought the technology to Core i3 and Core i5 chips, and while the launch of Sandy Bridge at CES may have been spoiled by the swift recall of compatible motherboards, it was still briefly triumphant.

Faster and cheaper than the outgoing Nehalem architecture, Sandy Bridge is eminently suited to heavy lifting tasks that can harness the parallel pipelines of GPUs, such as video and photo-editing. The implications for mobile workstations alone are a little mind numbing, and more than one commentator believes it augurs the death of the discrete graphics card.

The single silicon future isn't quite upon us, however. Without wanting to put a timescale on things, the chances are PC Format readers will still be buying add-in GPUs for a while yet, for one reason and one reason only. Integrated graphics of any sort are and always have been rubbish for gaming.

Fusion and Sandy Bridge may still have that new chip smell about them, but they've been in the public eye for a long time. Fusion was formally announced to the world in 2006, around the same time that AMD purchased graphics firm ATI. Sandy Bridge appeared on Intel's roadmaps a year or so later.

Familiarity should not breed contempt, though. It's hard not to be excited by what is arguably the biggest change to PC architecture for 20-plus years. This new approach to chip design is such a radical rethink of how a computer should be built that AMD has even come up with a new name for it: goodbye CPU, hello Accelerated Processing Unit (APU).

While we wish AMD the best of luck trying to get that new moniker to bed in, it is worth digging around on the company's website for the Fusion launch documents. The marketing has been done very well - the white paper PDF does an excellent job of explaining why putting graphics onto the processor die makes sense, and you can find it in a single click from the front page.

By comparison, Intel's site is more enigmatic and downplays the role of on-die graphics. The assumption seems to be that those who care already know, but most consumers are more interested in the number of cores a chip has.

The next big thing?

That's fair enough: computers are commodities now and a simple 'Intel Inside' has always been the company's best way of building its brand. It also works as a neat way of avoiding hubris.

In the world of PC tech, it's hard to work out what the next big thing will be and what is going to blow over.

Nvidia

BATTLE LINES ARE DRAWN: Nvidia has some challenging times ahead

The CPU/GPU hybrid seems like a no-brainer, but a lot has changed in graphics since 3dfx first introduced the masses to the 3D video co-processor. It's been 15 years since Voodoo boards began to transform the sprite-based world of PC gaming, and with almost every new generation there's been some new feature that's promised to change the world.

Some of these breakthroughs have been welcome, and rightly gone on to achieve industry standard status. Routines for anti-aliasing, texture filtering and hardware transform and lighting were early candidates for standard adoption, and we now take it for granted that a new GPU will have programmable processors, HD video acceleration and unified shaders.

Other innovations, though, haven't found their way into the canon. At various times we've been promised by graphics vendors that the future lies in technologies that never picked up strong hardware support, such as voxels, tiling, ray tracing and variously outmoded ways of doing shadows.

Then there are those fancy sounding features that seem to have been around forever but still aren't widely used, such as tessellation, ring buses, GPU physics and even multicard graphics.

On a minor - related - note, hands up if you've ever used the digital vibrance slider in your control panel. Thought not.

All of which is not to say that Fusion and Sandy Bridge won't catch on. According to the roadmaps, it's going to be pretty hard to buy a CPU that doesn't have the graphics processor built in by the end of this year.

First-generation Core architecture chips are rapidly vanishing from Intel stockists, while AMD is expected to introduce a mainstream desktop combination chip, codenamed Llano, this summer, with a high-performance pairing based on the new Bulldozer CPU sometime in 2012.

If you buy a new PC by next spring it'll almost certainly have a hybrid processor at its heart.

There are three big advantages to hybrids. The first is that they're self-evidently very cost effective to manufacture compared to separate chips for the CPU and graphics and all the circuitry between.

Core i3

CORE I3: The first Sandy Bridge CPUs have an Intel HD 2000 performing the graphical grunt - or lack thereof

For similar reasons, they also require less power - there are fewer components and interconnects that need a steady flow of electrons, and even on the chip die itself Sandy Bridge and Fusion allow more sharing of resources than last year's Pinetrail and Arrandale forebears, which put graphics and CPU on two separate cores in the same package.

Finally, slapping everything on one piece of silicon improves the performance of the whole by an extra order of magnitude. Latency between the GPU and CPU is reduced - a factor very important for video editing - and both AMD and Intel are extending their CPU-based abilities for automatic overclocking to the GPU part of the wafer too.

It goes without saying that these advantages will only be apparent if you actually use the GPU part of the chip. But you can have the best of both worlds: switchable graphics, such as Nvidia's Optimus technology, are well established in notebooks already, including the MacBook Pro. These flick between integrated and discrete chips depending on whether or not you're gaming, in order to save battery life or increase framerates.

That's not really a technology that will find much footing on the desktop, where the power saving would be rather minimal for the effort involved, but that's not to say there are no opportunities there. A 'silent running' mode using the on-die GPU would be invaluable for media centres and all-in-one PCs, for example.

More importantly for us, once every PC has integrated graphics there's every likelihood that some GPU-friendly routines that are common in games - such as physics - could be moved there by default. It'd be better than the current messy approach to implementing hardware acceleration for non-graphics gaming threads.

Can't do it all

That scenario, however, depends on an element of standardisation that may yet be lacking. Sandy Bridge contains an HD 3000 graphics core which, by all accounts, is Intel's best video processor to date.

HD 3000 is very good at encoding and decoding HD video, plus it supports DirectX 10.1 and OpenGL 2.0. That means it can use the GPU part of the core to accelerate programs such as Photoshop, Firefox 4 and some video transcoders too.

The problem is that, as has become almost traditional, Intel is a full generation behind everyone else when it comes to graphics. It's still living in a Vista world, while we're all on Windows 7. Don't even ask about Linux, by the way, as there's no support for Sandy Bridge graphics at all there yet (although both Intel and Linus Torvalds have promised it's coming).

Fusion, by comparison, has a well-supported DirectX 11-class graphics chip based on AMD's current Radeon HD6000 architecture. That means it supports the general-purpose GPU APIs OpenCL and DirectCompute. Sandy Bridge doesn't - which may delay take-up of these technologies for the time being.
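
To give a sense of what that support means in practice, here's a minimal sketch of an OpenCL C kernel (OpenCL's C dialect) of the sort a DirectX 11-class chip like Fusion's can run, but which launch-era Sandy Bridge graphics can't. The kernel name and its gain/offset parameters are purely illustrative, not taken from any shipping application.

    // Illustrative OpenCL C kernel: applies a gain and offset to every
    // sample in a buffer. Each work-item handles one element, and the
    // OpenCL runtime spreads the work-items across the GPU's stream
    // processors. All names here are hypothetical.
    __kernel void scale_and_offset(__global const float *in,
                                   __global float *out,
                                   const float gain,
                                   const float offset)
    {
        size_t i = get_global_id(0); // this work-item's element index
        out[i] = in[i] * gain + offset;
    }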

GPCPU diagram

THE TIMES ARE A CHANGING: This is arguably the biggest change to PC architecture in two decades

Conversely, Sandy Bridge does feature the new Advanced Vector Extensions (AVX) to the SSE part of the x86 instruction set. These are designed to speed up applications such as video processing by batching together groups of data, and won't be in AMD chips until Bulldozer arrives later this year.

You'll need Windows 7 SP1 to actually make use of AVX, mind you, but by the time applications that use the new instructions appear, most of us will be running that OS revision or later.
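
If you want a feel for what that batching actually buys you, here's a minimal sketch in Python using NumPy, whose compiled array loops are exactly the sort of code compilers map onto SIMD instructions such as SSE and AVX on supporting hardware. The buffer sizes are arbitrary assumptions for illustration - this is not Intel sample code.

    import time
    import numpy as np

    # Two large buffers of 32-bit floats - the sort of bulk data a video
    # filter chews through frame after frame.
    a = np.random.rand(8_000_000).astype(np.float32)
    b = np.random.rand(8_000_000).astype(np.float32)

    # One element at a time, in an interpreted Python loop.
    start = time.perf_counter()
    slow = [x * 0.5 + y for x, y in zip(a[:100_000], b[:100_000])]
    print(f"scalar, 100,000 elements: {time.perf_counter() - start:.3f}s")

    # Whole arrays at once. NumPy's compiled inner loops are the kind of
    # code that gets batched into SIMD instructions - with AVX, eight
    # float32 operations per instruction rather than one.
    start = time.perf_counter()
    fast = a * 0.5 + b
    print(f"batched, 8,000,000 elements: {time.perf_counter() - start:.3f}s")

Run it and the batched version finishes the far larger job in a fraction of the time - the same principle AVX applies in hardware.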

Gaming performance

All this is academic if you're just interested in playing games. A quick look at the benchmarks will show you that while the HD 3000 is better than its predecessors for gaming, it's still not much fun unless you're prepared to sacrifice a lot of image quality.

By comparison, the first desktop Fusion chips - codenamed Llano and based on the current K10 architecture - will have more in the way of graphics firepower. Even so, the 80-pipeline Radeon core in today's Bobcat-based chips is only half the power of AMD's most basic discrete card, the HD 6450. We wouldn't recommend one of those, either.

Feedback from the games developers we've spoken to has been mixed so far, but on the whole they've welcomed hybrids, and some have gone as far as saying that Sandy Bridge will meet the minimum graphics specifications for a number of quite demanding games - Shogun 2 and Operation Flashpoint: Red River are good examples.

Ultimately, though, developers are interested in hitting these system requirements because it opens up a large market for their games among laptop owners. As far as innovation goes, the cool kids still code at the cutting edge.

What of the future?

You might be tempted to point out that it's early days yet, and CPU-GPU hybrids - for want of a better phrase - are still in their infancy. Give it a couple more years and the discrete graphics business could be dead.

John Carmack himself recently noted that he didn't see any big leaps for the PC's graphical prowess in the near future. That gives combination processors time to catch up, doesn't it?

Well, here are some numbers to convince you otherwise. Moore's Law says the number of transistors that can be cost effectively incorporated inside a chip will double every two years. A four-core Sandy Bridge processor with integrated HD 3000 graphics packs around 995 million transistors. According to Intel's oft-quoted founder, that means that we could expect CPUs in 2013 to boast 2,000 million transistors on a single die.
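
The doubling arithmetic is simple enough to sketch in a few lines of Python - a back-of-the-envelope projection, applying the two-year cadence Moore's Law assumes to Sandy Bridge's transistor count:

    # Apply Moore's Law's two-year doubling to Sandy Bridge's budget.
    transistors = 995  # millions: quad-core Sandy Bridge with HD 3000, 2011
    for year in (2011, 2013, 2015, 2017):
        print(f"{year}: ~{transistors:,.0f} million transistors")
        transistors *= 2

That gives roughly 2,000 million transistors in 2013 and 4,000 million by 2015 - keep those figures in mind for what follows.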

Sandy bridge

SANDY SILICON: Sandy Bridge is eminently suited to heavy-lifting tasks such as video and photo-editing

Sounds like a lot, until you realise that 2009's Radeon HD5850 already has a transistor count of 2,154 million - which would all have to be squashed onto a CPU die to replicate the performance levels.

By applying Moore's Law, then, it'll be at least 2015 or 2016 before you'll be able to play games at the resolution and quality you were used to 18 months ago. What are the odds of seven-year-old technology being considered more than adequate for an entry-level system? Not high, are they?

By that reasoning, hybrid processors are going to struggle to ever move up into the 'mainstream' performance class. You see, even though graphics engine development has slowed considerably in recent years, games are still getting more demanding. Even World of Warcraft needs a fairly hefty system to be seen at its best these days, and Just Cause, while no spectacular looker, can strain a respectable system beyond breaking point if you turn all the graphical settings up and the pretty bits on.

We're also finding new ways to challenge our hardware: multi-monitor gaming is now so common it's included in Valve's Steam Survey. That means higher resolutions and more anti-aliasing, neither of which comes cheap in processing terms.

And even if Moore's Law were somehow accelerated so that in five years' time integrated GPUs could be cutting edge, a 4,000 million transistor chip is an enormous undertaking that would be expensive to get wrong. Yields would have to be exceptionally good to make it financially viable - it's a much safer bet to produce separate parts for the top performance class.

What's more, high-end graphics chips are very different from CPUs in terms of manufacture and operating environment. They're almost mutually exclusive, in fact.

A Core i7, for example, consumes less power and runs much cooler than a GeForce GTX 570. Could you get the same graphics power without the 100°C-plus heat? Unlikely.

The tolerance for error is also much lower - if a single pixel is corrupted on screen because the GPU is overheating, no one notices, but if a CPU repeatedly misses a read/write operation you have a billion-dollar recall to take care of.

The GPU is dead

It would be lovely to be proved wrong, though. If a scenario arises where we can get the same sort of performance from a single chip - with all the overclocking features - that we currently enjoy from the traditional mix of discrete GPU and dedicated CPU, then we'd be first in the queue to buy one.

As things stand, though, both Nvidia and AMD reported growth in their graphics divisions during 2010, despite the appearance of Intel's Arrandale hybrids early in the year. Perhaps this is the more likely outcome: it's easy to envision a world in, say, a decade or so, where the market for PC 'separates' is similar to the hi-fi one.

Because after a century of miniaturisation and integration there, valve amps are still the apotheosis of style and performance. In other words, a CPU/GPU hybrid chip may be good enough one day, but it won't be the best. And the best is what we will always demand.



Tutorial: How to find missing iTunes album artwork

Posted: 09 Apr 2011 04:00 AM PDT

There's nothing quite like browsing through your iTunes music library, looking at all the gorgeous album cover artwork as you go, be that on your Mac, your iPhone, iPad or iPod.

At least, it is until you come to some tracks that haven't got any artwork attached to them, at which point the black square with the pair of quavers rather spoils the wow factor.

So we're going to help you get that pizazz back by making sure your whole iTunes library has its cover artwork present and intact.

Before we start, make sure you've got the artwork thumbnail visible in iTunes - it appears in the bottom-left corner, and you can toggle it on and off using the rectangle with the triangle in it, next to the repeat button. Just above the thumbnail, you'll see it either says Selected Item or Now Playing - flip between these by clicking on the words; the former will be what we're mostly interested in here.

And remember, you can enjoy album artwork in a variety of ways - see those little buttons to the left of the search bar at the top of the iTunes window? The right-most three each display your albums differently, so try each view to see which you like best.

View options

If you switch into Cover Flow view (the furthest-right option), you'll see a little button with an arrow in each corner just below all your album covers. Clicking this will make Cover Flow take over your entire screen for truly immersive browsing. Right-click while you're in this view to adjust the font size, too.

And if you enjoy leaving the Visualizer on as eye-candy while you play your tunes, the album artwork will display alongside the track information here.

Jazz it up

If you buy your music directly from the iTunes Store, it comes with the artwork already attached, so there's no need to change anything there. But if you prefer the more traditional method of going down to the record shop, buying physical CDs and then importing them to iTunes, you can save yourself a lot of time later by adding the artwork when you import the songs in the first place.

When you pop a CD in your drive, iTunes looks up the track names on the internet, but by default, not the artwork. And because you can't edit what's on a CD, you can't add the artwork until you've imported the album.

But there's an option in iTunes' Preferences that will automatically download any missing cover art for songs you import into iTunes. Tick the Automatically download missing album artwork option in Preferences > Store and click OK.

Note that if you're importing a CD, it'll only get the artwork once all the tracks have safely copied to your Mac, so don't be alarmed if nothing appears while it's busy. If you import a CD and it's not able to find the artwork, it's worth getting into the habit of adding it manually straight away, so that you don't amass more and more art-less music in your collection.

And don't forget that if you've got an iPod, iPhone or iPad with music on it (or audiobooks, for that matter), any artwork you add to your iTunes library will be transferred to your mobile player next time you sync with your Mac, so that you can take the visual experience with you anywhere you go.

So your mission, should you choose to accept it (and we suggest you do, because the results will be stunning), starts here. Follow these simple steps to produce an art-filled music library.

How to complete your iTunes album artwork collection

01. Automatically get album art

step 1

Make sure you're connected to the internet and fire up iTunes.

Now go to the Advanced menu and select the Get Album Artwork option. What this does is send the details of songs that are missing cover art to Apple, to see if there's a match for them. As the pop-up message will inform you, Apple doesn't keep any information about the contents of your music library.

Tick the Do not ask me again box, then press Get Album Artwork. You'll see the top bar work its way through your library - depending on the speed of your internet connection and how many songs (and indeed movies, apps and books) you've got, this could take some time.

When it's done, pop your view into List mode (the leftmost of the four buttons next to the search bar), make sure the artwork thumbnail is set to Selected Item, and browse through your library to check whether any tracks are still missing their art. If there are, we've got some more tricks up our sleeve. Read on.

02. The Dashboard widget

Online retailer Amazon is a fantastic source of cover artwork, and there's a little widget you can install on your Mac's Dashboard to help you get it.

You can get the widget from here - click Download on the page and it'll ask if you'd like to install it. Do so, and click Keep when it appears, rippling as it does on your Dashboard.

Now, back in iTunes, select a song that's art-less, pop up your Dashboard and click the green iTunes button on the new widget. All being well, you'll see an image pop up below.

There may even be more than one to choose from, and if so, you'll see a checkerboard appear in the top-right corner of the art. Click this to see miniature versions of the other options and click the one you'd like to use. When you're happy, press Set as album art in iTunes. And that's all there is to it!

The only drawback of this widget is that it can only handle single songs or albums at a time - if you select tracks that require different artwork, they'll all be given the same cover.

It's also worth noting that you can click the little 'i' at the right-hand end of the widget to display more options, including which national Amazon site it searches on.

03. Scripts

Scripts

As well as iTunes' built-in album art search, various people have written scripts that attempt to do the same thing.

Fetch Art is one such example that's worth trying - even though it searches Amazon in the same way the widget above does, its results can differ. It's available from fetchartblog.blogspot.com, and although it hasn't been updated for a while, it works with iTunes 10.

When you install it, it'll appear in iTunes' Script menu in the menu bar. Select a single track or album and choose the Fetch Art option. Its results will appear in a new window, and if you like what you see, simply click Copy to iTunes.

04. Manual search

If you've followed the previous steps, your library's artwork should be more or less complete. If there are a few straggler tracks still without artwork, you can add them manually.

To make things a bit easier for yourself, why not install another script, which puts all the tracks missing artwork into a single playlist so you have them in one place? Get the script from here, then copy it to your Library/iTunes/Scripts folder to make it appear in iTunes' Script menu.

In your iTunes library, press Command+A to select all your songs, then go to Script menu > Find songs w-o artwork.
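
Incidentally, if you're comfortable with a little Python, you can run the same missing-art audit outside iTunes with the mutagen tagging library. This is a minimal sketch, assuming the standard iTunes Media folder layout - adjust MUSIC_DIR to suit:

    from pathlib import Path

    from mutagen.id3 import ID3, ID3NoHeaderError
    from mutagen.mp4 import MP4

    # Assumed location - point this at wherever your music actually lives.
    MUSIC_DIR = Path.home() / "Music" / "iTunes" / "iTunes Media"

    def has_artwork(track: Path) -> bool:
        """True if the file already has embedded cover art."""
        if track.suffix.lower() == ".mp3":
            try:
                return bool(ID3(str(track)).getall("APIC"))  # APIC = attached picture
            except ID3NoHeaderError:
                return False  # no ID3 tag at all, so no art
        if track.suffix.lower() in (".m4a", ".mp4"):
            tags = MP4(str(track)).tags
            return bool(tags and tags.get("covr"))  # 'covr' atom holds cover art
        return True  # other formats: skip

    for track in sorted(MUSIC_DIR.rglob("*")):
        if track.suffix.lower() in (".mp3", ".m4a", ".mp4") and not has_artwork(track):
            print(track)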

To get the missing art for each track, type the artist and album name into Google Images (or if you've caught the scripts bug, there's one that performs the search for you available here) and click the image you want.

On the right of the new window that opens, click Full-size image, then right-click (or Ctrl+click) the cover art, select Copy Image and switch back to iTunes. Highlight the track, right-click the artwork area in the lower-left corner and choose Paste.
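
If that copy-and-paste routine gets old, the same job can be scripted with mutagen too - a sketch only, with hypothetical file names, and note that iTunes may need to re-read the file before it notices the new art:

    from mutagen.id3 import ID3, ID3NoHeaderError, APIC
    from mutagen.mp4 import MP4, MP4Cover

    def embed_cover(track_path: str, jpeg_path: str) -> None:
        """Attach a JPEG front cover to an MP3 or M4A file's tags."""
        with open(jpeg_path, "rb") as f:
            image = f.read()

        if track_path.lower().endswith(".mp3"):
            try:
                tags = ID3(track_path)
            except ID3NoHeaderError:
                tags = ID3()  # the file had no tag yet
            tags.add(APIC(encoding=3, mime="image/jpeg",
                          type=3, desc="Cover", data=image))  # type 3 = front cover
            tags.save(track_path)
        else:  # .m4a / .mp4
            audio = MP4(track_path)
            audio["covr"] = [MP4Cover(image, imageformat=MP4Cover.FORMAT_JPEG)]
            audio.save()

    embed_cover("track.mp3", "cover.jpg")  # hypothetical file names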

Repeat this process for your remaining tracks - it may seem tedious, but just think of the looks on your friends' and relatives' faces as you nonchalantly flip through in Cover Flow mode. You know it's worth it…



Tutorial: How to find missing iTunes album artwork

Posted: 09 Apr 2011 04:00 AM PDT

There's nothing quite like browsing through your iTunes music library, looking at all the gorgeous album cover artwork as you go, be that on your Mac, your iPhone, iPad or iPod.

At least, it is until you come to some tracks that haven't got any artwork attached to them, at which point the black square with the pair of quavers rather spoils the wow factor.

So we're going to help you get that pizazz back by making sure your whole iTunes library has its cover artwork present and intact.

Before we start, make sure you've got the artwork thumbnail visible in iTunes - it appears in the bottom-left corner, and you can toggle it on and off using the rectangle with the triangle in it, next to the repeat button. Just above the thumbnail, you'll see it either says Selected Item or Now Playing - flip between these by clicking on the words; the former will be what we're mostly interested in here.

And remember, you can enjoy album artwork in a variety of ways - see those little buttons to the left of the search bar at the top of the iTunes window? The right-most three all display album view in a different way, so try viewing your music library in each way to see which you like best.

View options

If you switch into Cover Flow view (the furthest-right option), you'll see a little button with an arrow in each corner just below all your album covers. Clicking this will make Cover Flow take over your entire screen for truly immersive browsing. Right-click while you're in this view to adjust the font size, too.

And if you enjoy leaving the Visualizer on as eye-candy while you play your tunes, the album artwork will display alongside the track information here.

Jazz it up

If you buy your music directly from the iTunes Store, it comes with the artwork already attached, so there's no need to change anything there. But if you prefer the more traditional method of going down to the record shop, buying physical CDs and then importing them to iTunes, you can save yourself a lot of time later by adding the artwork when you import the songs in the first place.

When you pop a CD in your drive, iTunes looks up the track names on the internet, but by default, not the artwork. And because you can't edit what's on a CD, you can't add the artwork until you've imported the album.

But there's an option in iTunes' Preferences that will automatically download any missing cover art for songs you import into iTunes. Tick the Automatically download missing album artwork option in Preferences > Store and click OK.

Note that if you're importing a CD, it'll only get the artwork once all the tracks have safely copied to your Mac, so don't be alarmed if nothing appears while it's busy. If you import a CD and it's not able to find the artwork, it's worth getting into the habit of adding it manually straight away, so that you don't amass more and more art-less music in your collection.

And don't forget that if you've got an iPod, iPhone or iPad with music on it (or audiobooks, for that matter), any artwork you add to your iTunes library will be transferred to your mobile player next time you sync with your Mac, so that you can take the visual experience with you anywhere you go.

So your mission, should you choose to accept it (and we suggest you do, because the results will be stunning), starts here. Follow these simple steps to produce an art-filled music library.

How to complete your iTunes album artwork collection

01. Automatically get album art

step 1

Make sure you're connected to the internet and fire up iTunes.

Now go to the Advanced menu and select the Get Album Artwork option. What this does is send the details of songs that are missing cover art to Apple, to see if there's a match for them. As the pop-up message will inform you, Apple doesn't keep any information about the contents of your music library.

Tick the Do not ask me again box, then press Get Album Artwork. You'll see the top bar work its way through your library - depending on the speed of your internet connection and how many songs (and indeed movies, apps and books) you've got, this could take some time.

When it's done, pop your view into List mode (the left most of the four buttons next to the search bar), make sure the artwork thumbnail is set to Selected Item, and browse through your library to check if any tracks are still missing their art. If there are, we've got some more tricks up our sleeve. Read on.

02. The Dashboard widget

Online retailer Amazon is a fantastic source of cover artwork, and there's a little widget you can install on your Mac's Dashboard to help you get it.

You can get the widget from here - click Download on the page and it'll prompt you if you'd like to install it.Do so, and click Keep when it appears, rippling as it does on your Dashboard.

Now, back in iTunes, select a song that's art-less, pop up your Dashboard and click the green iTunes button on the new widget. All being well, you'll see an image pop up below.

There may even be more than one to choose from, and if so, you'll see a checkerboard appear in the top-right corner of the art. Click this to see miniature versions of the other options and click the one you'd like to use. When you're happy, press Set as album art in iTunes. And that's all there is to it!

The only drawback of this widget is that it can only handle single songs or albums at a time - if you select tracks that require different artwork, they'll all be given the same cover.

It's also worth noting that you can click the little 'i' at the right-hand end of the widget to display more options, including which national Amazon site it searches on.

03. Scripts

Scripts

As well as iTunes' built-in album art search, various people have written scripts that attempt to do the same thing.

Fetch Art is one such example that's worth trying - even though it searches Amazon in the same way that the above widget does, its results differed. It's available from fetchartblog.blogspot.com, and although it hasn't been updated for a while, it works with iTunes 10.

When you install it, it'll appear in iTunes' Script menu in the menu bar. Select a single track or album and choose the Fetch Art option. Its results will appear in a new window, and if you like what you see, simply click Copy to iTunes.

04. Manual search

If you've followed the previous steps, your library's artwork should be more or less complete. If there are a few straggler tracks still without artwork, you can add them manually.

To make things a bit easier for yourself, why not install another script, which will put all tracks missing art into a single playlist, so you have them all in one place. Get the script from here. Copy it to your Library/iTunes/Scripts folder to make it appear in iTunes' Script menu.

In your iTunes library, press Command+A to select all your songs, then go to Script menu > Find songs w-o artwork.

To get the missing art for each track, type the artist and album name into Google Images (or if you've caught the scripts bug, there's one that performs the search for you available here) and click the one you want.

On the right of the new window that opens, click Full-size image, then right-click (or Ctrl+click) the cover art, select Copy Image and switch back into iTunes. Highlight the track, right-click the artwork area in the lower left corner and choose Paste.

Repeat this process for your remaining tracks - it may seem tedious, but just think of the looks on your friends' and relatives' faces as you nonchalantly flip through in Cover Flow mode. You know it's worth it…



Tutorial: How to find missing iTunes album artwork

Posted: 09 Apr 2011 04:00 AM PDT

There's nothing quite like browsing through your iTunes music library, looking at all the gorgeous album cover artwork as you go, be that on your Mac, your iPhone, iPad or iPod.

At least, it is until you come to some tracks that haven't got any artwork attached to them, at which point the black square with the pair of quavers rather spoils the wow factor.

So we're going to help you get that pizazz back by making sure your whole iTunes library has its cover artwork present and intact.

Before we start, make sure you've got the artwork thumbnail visible in iTunes - it appears in the bottom-left corner, and you can toggle it on and off using the rectangle with the triangle in it, next to the repeat button. Just above the thumbnail, you'll see it either says Selected Item or Now Playing - flip between these by clicking on the words; the former will be what we're mostly interested in here.

And remember, you can enjoy album artwork in a variety of ways - see those little buttons to the left of the search bar at the top of the iTunes window? The right-most three all display album view in a different way, so try viewing your music library in each way to see which you like best.

View options

If you switch into Cover Flow view (the furthest-right option), you'll see a little button with an arrow in each corner just below all your album covers. Clicking this will make Cover Flow take over your entire screen for truly immersive browsing. Right-click while you're in this view to adjust the font size, too.

And if you enjoy leaving the Visualizer on as eye-candy while you play your tunes, the album artwork will display alongside the track information here.

Jazz it up

If you buy your music directly from the iTunes Store, it comes with the artwork already attached, so there's no need to change anything there. But if you prefer the more traditional method of going down to the record shop, buying physical CDs and then importing them to iTunes, you can save yourself a lot of time later by adding the artwork when you import the songs in the first place.

When you pop a CD in your drive, iTunes looks up the track names on the internet, but by default, not the artwork. And because you can't edit what's on a CD, you can't add the artwork until you've imported the album.

But there's an option in iTunes' Preferences that will automatically download any missing cover art for songs you import into iTunes. Tick the Automatically download missing album artwork option in Preferences > Store and click OK.

Note that if you're importing a CD, it'll only get the artwork once all the tracks have safely copied to your Mac, so don't be alarmed if nothing appears while it's busy. If you import a CD and it's not able to find the artwork, it's worth getting into the habit of adding it manually straight away, so that you don't amass more and more art-less music in your collection.

And don't forget that if you've got an iPod, iPhone or iPad with music on it (or audiobooks, for that matter), any artwork you add to your iTunes library will be transferred to your mobile player next time you sync with your Mac, so that you can take the visual experience with you anywhere you go.

So your mission, should you choose to accept it (and we suggest you do, because the results will be stunning), starts here. Follow these simple steps to produce an art-filled music library.

How to complete your iTunes album artwork collection

01. Automatically get album art

step 1

Make sure you're connected to the internet and fire up iTunes.

Now go to the Advanced menu and select the Get Album Artwork option. What this does is send the details of songs that are missing cover art to Apple, to see if there's a match for them. As the pop-up message will inform you, Apple doesn't keep any information about the contents of your music library.

Tick the Do not ask me again box, then press Get Album Artwork. You'll see the top bar work its way through your library - depending on the speed of your internet connection and how many songs (and indeed movies, apps and books) you've got, this could take some time.

When it's done, pop your view into List mode (the left most of the four buttons next to the search bar), make sure the artwork thumbnail is set to Selected Item, and browse through your library to check if any tracks are still missing their art. If there are, we've got some more tricks up our sleeve. Read on.

02. The Dashboard widget

Online retailer Amazon is a fantastic source of cover artwork, and there's a little widget you can install on your Mac's Dashboard to help you get it.

You can get the widget from here - click Download on the page and it'll prompt you if you'd like to install it.Do so, and click Keep when it appears, rippling as it does on your Dashboard.

Now, back in iTunes, select a song that's art-less, pop up your Dashboard and click the green iTunes button on the new widget. All being well, you'll see an image pop up below.

There may even be more than one to choose from, and if so, you'll see a checkerboard appear in the top-right corner of the art. Click this to see miniature versions of the other options and click the one you'd like to use. When you're happy, press Set as album art in iTunes. And that's all there is to it!

The only drawback of this widget is that it can only handle single songs or albums at a time - if you select tracks that require different artwork, they'll all be given the same cover.

It's also worth noting that you can click the little 'i' at the right-hand end of the widget to display more options, including which national Amazon site it searches on.

03. Scripts

Scripts

As well as iTunes' built-in album art search, various people have written scripts that attempt to do the same thing.

Fetch Art is one such example that's worth trying - even though it searches Amazon in the same way that the above widget does, its results differed. It's available from fetchartblog.blogspot.com, and although it hasn't been updated for a while, it works with iTunes 10.

When you install it, it'll appear in iTunes' Script menu in the menu bar. Select a single track or album and choose the Fetch Art option. Its results will appear in a new window, and if you like what you see, simply click Copy to iTunes.

04. Manual search

If you've followed the previous steps, your library's artwork should be more or less complete. If there are a few straggler tracks still without artwork, you can add them manually.

To make things a bit easier for yourself, why not install another script, which will put all tracks missing art into a single playlist, so you have them all in one place. Get the script from here. Copy it to your Library/iTunes/Scripts folder to make it appear in iTunes' Script menu.

In your iTunes library, press Command+A to select all your songs, then go to Script menu > Find songs w-o artwork.

To get the missing art for each track, type the artist and album name into Google Images (or if you've caught the scripts bug, there's one that performs the search for you available here) and click the one you want.

On the right of the new window that opens, click Full-size image, then right-click (or Ctrl+click) the cover art, select Copy Image and switch back into iTunes. Highlight the track, right-click the artwork area in the lower left corner and choose Paste.

Repeat this process for your remaining tracks - it may seem tedious, but just think of the looks on your friends' and relatives' faces as you nonchalantly flip through in Cover Flow mode. You know it's worth it…
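Finally, if the copy-and-paste loop proves too tedious, you can write the artwork straight into the files yourself. This sketch uses mutagen, a third-party Python tagging library, to embed a saved JPEG as the front cover of an MP3; the file names are hypothetical, it assumes the file already has an ID3 tag, and iTunes may need to re-read the file before the art shows up:

    from mutagen.id3 import ID3, APIC

    # Load the MP3's existing ID3 tag and attach cover.jpg as the front cover.
    tags = ID3("song.mp3")
    with open("cover.jpg", "rb") as f:
        tags.add(APIC(
            encoding=3,           # 3 = UTF-8
            mime="image/jpeg",
            type=3,               # 3 = front cover
            desc="Cover",
            data=f.read(),
        ))
    tags.save()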



How your mains power supply affects your home cinema

Posted: 09 Apr 2011 02:00 AM PDT

Every component in your home cinema is a delicate electrical device. Yet people are often willing to pay for a monstrous flat panel, beefy receiver or state-of-the-art 3D Blu-ray player and connect them to the mains via a cheap, multi-plug power strip.

This will do the job, but may affect reliability or even performance, which is where we get to the controversial subject of mains power conditioning.

In the golden-eared world of hi-fi, mains conditioning is a topic of hot debate. Indeed, a long-running spat between specialist Russ Andrews Accessories and the Advertising Standards Authority (ASA) has brought the arguments into sharper focus than ever before (you can read about it at www.asa.org.uk).

But are the arguments for mains power conditioning equally applicable to home cinema?

Every component in a UK home cinema is designed to use a standard mains voltage of 240V at 50Hz. The problem is that your household electricity doesn't usually reflect this ideal power supply; in most cases it is 'dirty' and prone to over- and under-voltage issues.

Getting down and dirty

To fully understand what's going on, we must first look at where your power comes from. Electricity is generated in a plant that's typically dozens of miles from your home.

To get to your living room it has to navigate substations, high voltage lines, transformers and, finally, the power lines that connect to your home. Along the way, a lot could have happened to that electricity; it could have picked up radio frequency interference (RFI) or electromagnetic interference (EMI).

As more wireless devices come into use, we're seeing increased issues with both. Power lines can act like aerials, picking up signals from mobile phones, radio and television broadcasts and even wi-fi, all of which can degrade the quality of your power by contaminating it with noise.

But your house is the single biggest culprit of electrical contamination. Refrigerators, washer/dryers, light dimmers, hair dryers, computers and other AV components all create noise as they pull power from the mains. Fridges and washer/dryers in particular consume enormous amounts of electricity and have noisy power supplies.

This noise and power consumption affect other products on the same power line. Just about all of your AV gear is full of microprocessors and other delicate technology, and dirty electricity and over- and under-voltages can put stress on these sensitive components, causing them to underperform.

Measuring mains noise and voltage fluctuation is easy; proving that it has a visible or audible effect on home cinema equipment is more difficult, and this is part of the ASA's beef. But it's incontrovertible that your home's electricity supply isn't perfect, so there's a common-sense case for installing a power conditioner.

There are two main types of power conditioner: traditional passive types and complete AC regenerators. The former are cheaper and use various types of electrical filters to tackle RFI and EMI. This can be pretty effective.

The more extreme method of cleaning your electricity is the AC regeneration route. This tends to be more costly, but is certainly the most effective.

Cleanliness next to Godliness

Some cinema enthusiasts have gone as far as running dedicated lines directly from their fuse box to their home cinema gear, with no other sockets attached to the lines. This is the ideal option, which should of course only be tackled by a qualified electrician.

So a power conditioner is the practical option in most cases, and there are plenty on the market from companies such as ISOL-8, IsoTek, PS Audio, and Monster Power.

We talked to ISOL-8's Nic Poulson about the philosophy of mains power conditioning, and about the PowerLine 1080 specifically, and asked him what mains conditioners are designed to tackle.

"Taking a closer look at a mains supply reveals it's not a simple issue, and often not what we may expect," said Poulson. "Zooming in on an AC mains cycle reveals there is a lot going on; pollution and distortion of our energy supply is an inevitable consequence of its use, and it is everywhere.

"Electrical appliances in homes each turn some of the energy used into noise, either as radio interference or distortion of the mains itself. All are connected to, and share, the same physical conduit: the power grid.

"Most AV equipment is surprisingly vulnerable to this pollution, and limitations in the component parts commonly used means the interference is passed through to the sensitive circuits beyond, compromising performance.

"High energy spikes from switch contact arcing can cause clicks; broadband noise from rectifier diodes can create a haze veiling detail, and industrial inductive loads can cause significant local distortion of the mains waveform. These effects are often the reason why your system sounds better at night, when there is less local electrical activity."

So, what's the solution? Poulson opines: "Removing mains-borne noise can dramatically improve your system. A carefully designed filter or regenerator, isolating each individual component in your system, will provide superior, more predictable performance. The benefits are often not subtle. Other solutions, such as specialist mains cables, often provide improvements due to a limited degree of filtering that comes as a consequence of their physical geometry, but these are usually unpredictable; tackling the source of the problem is a more powerful tool than just coping with its effects."

The PowerLine 1080 is a 2.3kg lump featuring a powder-coated steel chassis and an anodised 6mm-thick CNC'd aluminium top plate designed to be non-magnetic and non-resonant. It has an IEC input connector, but you'll have to supply your own mains lead (ISOL-8's ISOLink shielded mains cable, for instance).

Each of the four power outlets has a hinged cover designed to IEC standard IP54, protecting unused outlets from dirt, dust and little fingers. Internal wiring is silver-plated copper with PTFE insulation. One outlet has a dedicated filter to eliminate noise generated by video displays, while another has a filtered high current direct connection for amplifiers. The remaining two outlets are for source components.

It's all about the current

"It's specifically designed for plasmas, LCDs, and projectors," explains Poulson. "Basically, it's all about current. Displays don't need enormous amount of current; they don't normally need high peak current either. When that's the case, it allows us to put a far steeper curve in than I ordinarily I would for an audio component.

"The PowerLine 1080's output is very different to all of the others. It can only deliver around four or five amps maximum. It really has got a very very steep curve, plus we also built in a Notch Filter, which is designed to tackle a lot of the addition noise coming out of the plasma. The notch syncs specifically to that plasma noise, it's hyper-tailored specifically for the plasma display. The other two line sockets have moderate filtering just for source components, while the one for the amplifier is only parallel filtered because we don't want to limit current."

So how can the impact of a power conditioner be measured? "We use three things. One, we look and listen, which is one of the finest arbiters. Then we have a Voltech power analyser (more about these at www.voltech.com), plus we use new-generation PicoScope capture devices (PC-based oscilloscopes, from www.picotech.com), so we can freeze moments in time and look at them."

On our in-house reference system, our subjective impression was of a touch more clarity overall on our installed LED TV when we used the PowerLine 1080. Display noise was reduced, and colours appeared a little more vibrant.

Attached to our receiver, we heard a slight reduction in noise - a subtle effect, but sound effects appeared to emerge from a quieter background. Improvements were more pronounced at home, supporting the argument that the perceived effect of a power conditioner depends on the state of your mains in the first place.

So, if you want to get the best out of your system, think of what you're putting into it - even if it adds only a level of surge protection, a power conditioner can still be a wise investment.



Review: Monitor Audio BX5

Posted: 09 Apr 2011 01:30 AM PDT

Monitor Audio has been producing the Bronze series, its entry-level full-size speaker range, for some years now. And the latest update takes the line-up from BR to BX status and features a full choice of standmounts, floorstanders and supporting multichannel equipment.

Design refinements include single-bolt driver fixings and HiVE reflex ports borrowed from the more expensive ranges.

The £500 BX5 tested here, however, is the smaller of the two floorstanding models. It's a relatively compact 870mm tall with its plinth, and can be made shorter still by screwing the floor spikes further into the bottom of the speaker. Doing this affects stability, but has no real effect on sonic performance.

Finished in the relatively light Natural Oak finish, this is a handsome and well-proportioned loudspeaker. Fit and finish are excellent, with smart touches like magnetic grille tabs that give the speaker a flush front. The only slightly discordant note is the plastic Monitor Audio logo on the top, but the overall effect is still excellent for the asking price.

Easy driver

The BX5 has a claimed sensitivity of 90dB/W and an impedance of eight ohms, making it a very easy load for an amp to drive. We obtained excellent results with a 70-watt Electrocompaniet ECI3 and the considerably less powerful Peachtree Audio iDecco. Both amps can drive the BX5 to very high levels, which suggests the speaker can be used with pretty much any amp of 25 watts and up.
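As a rough back-of-an-envelope check (our sums, not Monitor Audio's): output rises by 10 × log10(power in watts) dB over the one-watt sensitivity figure, so 25 watts into a 90dB/W speaker yields around 90 + 10 × log10(25), or roughly 104dB at one metre - comfortably loud for domestic listening, even before allowing for room gain.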

The speaker is also relatively easy to position. The cabinet is both front and rear ported, but Monitor Audio supply a pair of foam bungs that can be used to reduce flow from the rear port and allow relatively close placement to a rear wall.

Thrill a minute

Set a little over two metres apart with a slight toe-in, and 20 centimetres or so from the wall, the BX5s present a believable and full soundstage with a lively, open feel.

Given an upbeat piece of music, the Monitor Audios are a thrill a minute to listen to, and their timing and pace are extremely enjoyable. Given the relatively small cabinets, the bass response is impressive in both depth and presence, and this low end underpins performances and imparts considerable authority. It's aided by the lack of "honking" or colouration from the bass ports.

Given more relaxed music, the BX5 never truly loses its slightly boisterous side, but equally it could never be called dull or uninteresting, and its assured timing means that even complex pieces stay cohesive and easy to follow.

Live performances

Tonality with voices and instruments is good, although they can become slightly hard-edged if the BX5 is really pushed. At more usual domestic listening levels, however, the performance is always assured and confident.

Live performances, in particular, give the BX5 the chance to place musicians and the audience accurately in relation to one another and give a real sense of the original performance space.

Poor recordings can push up the harshness, but well-recorded and high-resolution material allows the BX5 to demonstrate remarkable insight, considering the price point.

Sound choice

A combination of talents makes the BX5 a sound choice at the asking price. Partnered with a relatively neutral amp of almost any normal output, it is an engaging and entertaining performer - especially if your listening tastes lean towards more up-tempo music.

Its performance, coupled with its compact size and attractive appearance, means that this is a very talented and likeable speaker that should slot happily into many different modestly priced hi-fi systems and deliver the goods.
