
Integrated Graphics Processors: AMD Fusion vs. Intel Core i3 and Intel Pentium.

An integrated GPU plays an important role for both gamers and undemanding users.

The quality of games, movies, online video and images depends on it.

Principle of operation

The graphics processor is built into the computer's motherboard - this is what integrated graphics looks like.

As a rule, it is used to remove the need to install a separate graphics adapter.

This technology helps to reduce the cost of the finished product. In addition, due to their compactness and low power consumption, such processors are often installed in laptops and low-power desktop computers.

Integrated GPUs have filled this niche to such an extent that 90% of laptops on US store shelves use such a processor.

Instead of dedicated memory on a separate video card, integrated graphics typically borrows the computer's RAM.

However, this solution somewhat limits performance: the CPU and the GPU share the same memory bus.

This "neighborhood" therefore affects performance, especially when working with complex graphics and during gameplay.
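To put a rough number on this limitation, the sketch below compares the memory bandwidth an integrated GPU has to share with the CPU against the dedicated bandwidth of a discrete card. The dual-channel DDR4-2400 and 128-bit GDDR5 configurations are illustrative assumptions, not figures from this article.

```python
# Rough, illustrative comparison of memory bandwidth available to an
# integrated GPU (shared system RAM) versus a discrete card (dedicated VRAM).
# The configurations below are assumptions for the sake of the example.

def bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: float) -> float:
    """Peak theoretical bandwidth in GB/s: (bus width / 8 bytes) * MT/s / 1000."""
    return bus_width_bits / 8 * transfer_rate_mt_s / 1000

# Dual-channel DDR4-2400: 2 x 64-bit channels at 2400 MT/s, shared by CPU and iGPU.
shared_ram = bandwidth_gb_s(bus_width_bits=2 * 64, transfer_rate_mt_s=2400)

# A typical entry-level discrete card: 128-bit GDDR5 at 7000 MT/s, GPU-exclusive.
dedicated_vram = bandwidth_gb_s(bus_width_bits=128, transfer_rate_mt_s=7000)

print(f"Shared system RAM: {shared_ram:.1f} GB/s (split between CPU and iGPU)")
print(f"Dedicated VRAM:    {dedicated_vram:.1f} GB/s (GPU only)")
# -> roughly 38.4 GB/s shared versus 112 GB/s dedicated
```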

Types

Integrated graphics fall into three groups:

  1. Shared memory graphics are a device based on shared memory management with the main processor. This significantly reduces the cost, improves the energy saving system, but degrades the performance. Accordingly, for those working with complex programs, this kind of integrated GPU is more likely to be unsuitable.
  2. Discrete graphics - a video chip and one or two video memory modules are soldered on the motherboard. This technology significantly improves image quality and makes it possible to work with 3D graphics with the best results. True, you will have to pay a lot for this, and if you are looking for a high-power processor in all respects, then the cost can be incredibly high. In addition, the electricity bill will rise slightly - the power consumption of discrete GPUs is higher than usual.
  3. Hybrid discrete graphics - a combination of the two previous types, made possible by the PCI Express bus. Memory is accessed both through the soldered-on video memory and through system RAM. With this approach manufacturers wanted to create a compromise solution, but it still does not eliminate the disadvantages.

Manufacturers

As a rule, large companies handle the development and manufacture of integrated graphics processors, but many smaller enterprises are also involved in this area.

Enable

Enabling the integrated video card is done in the BIOS, and it is not difficult. Find the Primary Display or Init Display First option. If you don't see anything like that, look for Onboard, PCI, AGP or PCI-E (it all depends on the buses used on the motherboard).

By choosing PCI-E, for example, you enable the PCI Express video card and disable the integrated one.

Thus, to enable the integrated video card, you need to find the appropriate parameters in the BIOS. The start-up process is often automatic.

Disable

Disabling is best done in BIOS. This is the simplest and most unpretentious option, suitable for almost all PCs. The only exceptions are some laptops.

Again, search BIOS for Peripherals or Integrated Peripherals if you are on a desktop.

For laptops, the name of the function is different, and not always the same. So just find something related to graphics. For example, the required options can be placed in the Advanced and Config sections.

Disabling is also done in different ways. Sometimes it is enough to select "Disabled" and put the PCI-E video card first in the list.

If you are a laptop user, do not be alarmed if you cannot find a suitable option - you may simply not have such a function at all. For all other devices, the same simple rules apply: no matter how the BIOS looks, the underlying settings are the same.

If you have two video cards and they are both shown in Device Manager, then the matter is quite simple: right-click on one of them and select "Disable". However, keep in mind that the display may go out. Most likely it will.

However, this is also a solvable problem: simply restart the computer or the software.

Perform all subsequent configuration on the remaining adapter. If this method does not work, roll back your changes using safe mode. You can also fall back on the previous method - through the BIOS.
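If you are not sure which adapters Windows actually sees before disabling one, you can list them from the command line. The sketch below is a minimal illustration that shells out to the standard wmic tool (still present on most Windows installations, though deprecated on the newest builds); it only reads information and changes nothing.

```python
# Minimal sketch: list the video adapters Windows reports, so you know what
# you are about to disable in Device Manager. Windows-only.
import subprocess

def list_video_adapters() -> list[str]:
    result = subprocess.run(
        ["wmic", "path", "win32_VideoController", "get", "Name"],
        capture_output=True, text=True, check=True,
    )
    # First non-empty line is the column header; the rest are adapter names.
    lines = [line.strip() for line in result.stdout.splitlines() if line.strip()]
    return lines[1:]

if __name__ == "__main__":
    for adapter in list_video_adapters():
        print(adapter)  # e.g. one integrated and one discrete adapter
```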

Two programs - NVIDIA Control Panel and Catalyst Control Center - let you configure the use of a specific video adapter.

This is the least troublesome of the three methods: the screen is unlikely to turn off, and you will not accidentally lose settings as you might in the BIOS.

For NVIDIA, all settings are in the 3D section.

You can choose your preferred video adapter for the entire operating system and for certain programs and games.

In Catalyst software, the same function is located in the Power option under the Switchable Graphics sub-item.

Thus, switching between GPUs is not difficult.

There are different methods, in particular, both through programs and through BIOS. Enabling or disabling one or another integrated graphics may be accompanied by some failures, mainly related to the image.

It may go out, or distortion may appear. Nothing should affect the files on the computer itself, unless you have changed something else in the BIOS.

Conclusion

As a result, integrated graphics processors are in demand due to their low cost and compactness.

For this you will have to pay with the level of performance of the computer itself.

In some cases a discrete solution is essential - discrete GPUs are ideal for working with 3D images.

The industry leaders are Intel, AMD and NVIDIA. Each of them offers its own graphics accelerators, processors and other components.

The latest popular models are Intel HD Graphics 530 and AMD A10-7850K. They are quite functional, but they have some flaws, in particular regarding power, performance and the cost of the finished product.

You can enable or disable a processor's integrated graphics core yourself through the BIOS, utilities and various programs, but the computer may well do it for you automatically. It all depends on which video card the monitor is connected to.

AMD launched new mobile processors and announced desktop chips with integrated graphics at a special event ahead of CES 2018. Radeon Technologies Group, a division of AMD, announced the Vega mobile discrete graphics chips. The company also revealed plans to move to new process technologies and promising architectures: Radeon Navi on the graphics side and Zen+, Zen 2 and Zen 3 on the processor side.

New processors, chipset and cooling

First Ryzen desktops with Vega graphics

Two Ryzen desktop models with integrated Vega graphics will go on sale on February 12, 2018. The 2200G is an entry-level Ryzen 3 processor, while the 2400G is an entry-level Ryzen 5 processor. Both models dynamically boost frequencies by 200 and 300 MHz from 3.5 GHz and 3.6 GHz base frequencies, respectively. In fact, they replace the ultra-budget Ryzen 3 1200 and 1400 models.

The 2200G has only 8 graphics units, while the 2400G has 3 more. The 2200G graphics cores go up to 1,100 MHz and the 2400G up to 1,250 MHz. Each graphics unit contains 64 shaders.
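From these figures one can estimate the raw shader resources and peak FP32 throughput of both integrated GPUs. The sketch below assumes the usual GCN rule of two floating-point operations (one fused multiply-add) per shader per clock; real-world performance will of course be lower than these theoretical peaks.

```python
# Estimate shader counts and peak FP32 throughput of the Raven Ridge iGPUs
# from the figures quoted above (8 or 11 graphics units, 64 shaders each).
SHADERS_PER_UNIT = 64

def peak_fp32_tflops(graphics_units: int, clock_ghz: float) -> float:
    shaders = graphics_units * SHADERS_PER_UNIT
    # Each shader can issue one fused multiply-add (2 FLOPs) per clock.
    return shaders * 2 * clock_ghz / 1000

print(f"Ryzen 3 2200G: {8 * SHADERS_PER_UNIT} shaders, "
      f"{peak_fp32_tflops(8, 1.10):.2f} TFLOPS peak")   # ~1.13 TFLOPS
print(f"Ryzen 5 2400G: {11 * SHADERS_PER_UNIT} shaders, "
      f"{peak_fp32_tflops(11, 1.25):.2f} TFLOPS peak")  # ~1.76 TFLOPS
```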

The cores of both processors bear the same codename as the mobile processors with integrated graphics - Raven Ridge (literally "raven ridge", a rock formation in Colorado). They nevertheless plug into the same AMD AM4 (PGA) socket as all other Ryzen 3, 5 and 7 processors.

Reference: AMD sometimes refers to processors with integrated graphics not as CPUs (Central Processing Unit) but as APUs (Accelerated Processing Unit).
AMD desktop processors with integrated graphics are labeled with a G at the end, from the first letter of the word graphics. Mobile processors from both AMD and Intel are marked with a U at the end, from ultrathin or ultra-low power respectively.
At the same time, you should not think that, because the model numbers of the new Ryzen begin with the number 2, the architecture of their cores belongs to the second generation of the Zen microarchitecture. This is not the case - these processors still belong to the first generation.

Ryzen 3 2200G / Ryzen 5 2400G
  • Cores: 4 / 4
  • Threads: 4 / 8
  • Base frequency: 3.5 GHz / 3.6 GHz
  • Boost frequency: 3.7 GHz / 3.9 GHz
  • L2 + L3 cache: 6 MB / 6 MB
  • Graphics units: 8 / 11
  • Maximum graphics frequency: 1,100 MHz / 1,250 MHz
  • CPU socket: AMD AM4 (PGA)
  • Base heat dissipation: 65 W
  • Configurable heat dissipation: 45-65 W
  • Codename: Raven Ridge
  • Recommended price*: 5,600 ₽ ($99) / 9,500 ₽ ($169)
  • Release date: February 12, 2018

New Ryzen mobiles with Vega graphics

AMD already brought the first mobile Ryzen to market last year, codenamed Raven Ridge. The entire Ryzen mobile family is designed for gaming laptops, ultrabooks and hybrid laptop-tablets. But there were only two such models, one each in the middle and upper segments: Ryzen 5 2500U and Ryzen 7 2700U. The entry-level segment was empty, and at CES 2018 the company fixed that: two models were added to the mobile family at once - Ryzen 3 2200U and Ryzen 3 2300U.

AMD VP Jim Anderson Showcases Ryzen Mobile Family

The 2200U is Ryzen's first dual-core CPU, while the 2300U is a standard quad-core, but both run four threads. The base frequency of the 2200U cores is 2.5 GHz, while the 2300U runs at a lower 2 GHz. Under increasing load, though, both models boost to the same rate - 3.4 GHz. However, laptop manufacturers can lower the power ceiling, because they also need to account for energy costs and design a cooling system. There is also a difference in cache size: the 2200U has only two cores, and therefore half the level 1 and level 2 cache.

The 2200U has only 3 graphics units, while the 2300U has twice as many - just as it has twice the processor cores. But the difference in graphics frequencies is not as significant: 1,000 MHz versus 1,100 MHz.

Ryzen 3 2200U / Ryzen 3 2300U / Ryzen 5 2500U / Ryzen 7 2700U
  • Cores: 2 / 4 / 4 / 4
  • Threads: 4 / 4 / 8 / 8
  • Base frequency: 2.5 GHz / 2.0 GHz / 2.0 GHz / 2.2 GHz
  • Boost frequency: 3.4 GHz / 3.4 GHz / 3.6 GHz / 3.8 GHz
  • Level 1 cache: 192 KB (2200U) / 384 KB (quad-core models), 96 KB per core
  • Level 2 cache: 1 MB (2200U) / 2 MB (quad-core models), 512 KB per core
  • Level 3 cache: 4 MB (4 MB per core complex)
  • RAM: dual-channel DDR4-2400
  • Graphics units: 3 / 6 / 8 / 10
  • Maximum graphics frequency: 1,000 MHz / 1,100 MHz / 1,100 MHz / 1,300 MHz
  • CPU socket: AMD FP5 (BGA)
  • Base heat dissipation: 15 W
  • Configurable heat dissipation: 12-25 W
  • Codename: Raven Ridge
  • Release date: January 8, 2018 (2200U, 2300U) / October 26, 2017 (2500U, 2700U)

The first mobile Ryzen PRO

For Q2 2018, AMD is slated to release mobile versions of the Ryzen PRO, its enterprise-grade processors. The mobile PRO specifications are identical to the consumer versions, with the exception of the Ryzen 3 2200U, which gets no PRO implementation at all. The difference between regular and PRO mobile Ryzen lies in additional hardware technologies.

Ryzen PRO processors - full copies of regular Ryzen, but with additional features

For example, security is provided by TSME, hardware-based on-the-fly RAM encryption (Intel offers only resource-intensive software-based SME encryption). And for centralized fleet management, the open DASH standard (Desktop and mobile Architecture for System Hardware) is available - support for its protocols is built into the processor.

Laptops, ultrabooks and hybrid notebooks with Ryzen PRO should primarily be of interest to companies and government agencies that plan to purchase them for employees.

Ryzen 3 PRO 2300U / Ryzen 5 PRO 2500U / Ryzen 7 PRO 2700U
  • Cores: 4
  • Threads: 4 / 8 / 8
  • Base frequency: 2.0 GHz / 2.0 GHz / 2.2 GHz
  • Boost frequency: 3.4 GHz / 3.6 GHz / 3.8 GHz
  • Level 1 cache: 384 KB (96 KB per core)
  • Level 2 cache: 2 MB (512 KB per core)
  • Level 3 cache: 4 MB (4 MB per core complex)
  • RAM: dual-channel DDR4-2400
  • Graphics units: 6 / 8 / 10
  • Maximum graphics frequency: 1,100 MHz / 1,100 MHz / 1,300 MHz
  • CPU socket: AMD FP5 (BGA)
  • Base heat dissipation: 15 W
  • Configurable heat dissipation: 12-25 W
  • Codename: Raven Ridge
  • Release date: second quarter of 2018

New AMD 400-series chipsets

The second generation of Ryzen relies on the second generation of system logic: the 300 series of chipsets is replaced by the 400 series. The flagship of the series is expected to be the AMD X470, with simpler and cheaper chipsets such as the B450 to follow. The new logic improves everything related to RAM: access latency is reduced, the upper frequency limit is raised and headroom for overclocking is added. The 400 series also increases USB bandwidth and improves the processor's power consumption and heat dissipation.

But the processor socket has not changed. The AMD AM4 desktop socket (and its non-removable mobile variant, AMD FP5) is a particular strength of the company. The second generation uses the same connector as the first, and it will not be replaced in the third and fifth generations either. AMD has promised, in principle, not to change AM4 until 2020. For 300 series motherboards (X370, B350, A320, X300 and A300) to work with the new Ryzen, you only need to update the BIOS. Moreover, in addition to forward compatibility there is also backward compatibility: old processors will work on the new boards.

At CES 2018, Gigabyte even showed a prototype of the first motherboard based on the new chipset - the X470 Aorus Gaming 7 WiFi. This and other motherboards based on X470 and lower chipsets will appear in April 2018, along with the second generation of Ryzen on the Zen + architecture.

New cooling system

AMD also introduced the new AMD Wraith Prism cooler. While its predecessor, the Wraith Max, was backlit in solid red, the Wraith Prism features RGB lighting around the fan perimeter. The fan blades are made of transparent plastic and are also illuminated in millions of colors. Lovers of RGB lighting will appreciate it, and haters can simply turn it off, although in that case there is little point in buying this particular model.


Wraith Prism is a complete replica of Wraith Max, but backlit with millions of colors

The rest of the characteristics are identical to the Wraith Max: direct contact heat pipes, programmed airflow profiles in acceleration mode, and almost silent operation at 39 dB under standard conditions.

There is no word yet on how much the Wraith Prism will cost, whether it will come bundled with processors, and when it will be available to buy.

New Ryzen laptops

In addition to mobile processors, AMD is also promoting new laptops based on them. In 2017, the HP Envy x360, Lenovo Ideapad 720S and Acer Swift 3 models came out on Ryzen mobiles. In the first quarter of 2018, the Acer Nitro 5, Dell Inspiron 5000 and HP series will be added to them. They all run on last year's Ryzen 7 2700U and Ryzen 5 2500U mobiles.

The Acer Nitro family is a gaming line. The Nitro 5 models are equipped with 15.6-inch IPS displays with a resolution of 1920×1080, and some will be supplemented with a discrete Radeon RX 560 graphics chip with 16 graphics units inside.

The Dell Inspiron 5000 line of laptops offers 15.6 and 17-inch models with either hard drives or solid-state drives. Some models of the line will also receive a discrete Radeon 530 graphics card with 6 graphics units. This is a rather strange configuration, because even the integrated graphics of the Ryzen 5 2500U has more graphics units - 8 of them. But the advantage of a discrete card can lie in higher clock speeds and separate graphics memory chips (instead of a portion of RAM).

Reduced prices for all Ryzen processors

Processor (socket) Cores/Threads Old price* New price*
Ryzen Threadripper 1950X (TR4) 16/32 56 000 ₽ ($ 999) -
Ryzen Threadripper 1920X (TR4) 12/24 45 000 ₽ ($ 799) -
Ryzen Threadripper 1900X (TR4) 8/16 31 000 ₽ ($ 549) 25 000 ₽ ($ 449)
Ryzen 7 1800X (AM4) 8/16 28 000 ₽ ($ 499) 20 000 ₽ ($ 349)
Ryzen 7 1700X (AM4) 8/16 22 500 ₽ ($ 399) 17 500 ₽ ($ 309)
Ryzen 7 1700 (AM4) 8/16 18 500 ₽ ($ 329) 17 000 ₽ ($ 299)
Ryzen 5 1600X (AM4) 6/12 14 000 ₽ ($ 249) 12 500 ₽ ($ 219)
Ryzen 5 1600 (AM4) 6/12 12 500 ₽ ($ 219) 10 500 ₽ ($ 189)
Ryzen 5 1500X (AM4) 4/8 10 500 ₽ ($ 189) 9 800 ₽ ($ 174)
Ryzen 5 1400 (AM4) 4/8 9 500 ₽ ($ 169) -
Ryzen 5 2400G (AM4) 4/8 - 9 500 ₽ ($ 169)
Ryzen 3 2200G (AM4) 4/4 - 5 600 ₽ ($ 99)
Ryzen 3 1300X (AM4) 4/4 7 300 ₽ ($ 129) -
Ryzen 3 1200 (AM4) 4/4 6 100 ₽ ($ 109) -

Plans until 2020: Navi graphics, Zen 3 processors

2017 was a watershed year for AMD. After years of troubles, AMD completed the development of the Zen core microarchitecture and released the first generation of CPUs: the Ryzen, Ryzen PRO and Ryzen Threadripper family of PC processors, the Ryzen and Ryzen PRO mobile processor families, and the EPYC server family. In the same year, the Radeon group developed the Vega graphics architecture: based on it, the Vega 64 and Vega 56 video cards were released, and at the end of the year, Vega cores were integrated into Ryzen mobile processors.


Dr. Lisa Su, CEO of AMD, says the company will release 7nm processors before 2020

New items not only attracted the interest of fans, but also captured the attention of ordinary consumers and enthusiasts. Intel and NVIDIA had to respond hastily: Intel released the six-core Coffee Lake processors, an unplanned second "tock" of the Skylake architecture, and NVIDIA expanded the 10-series Pascal graphics cards to 12 models.

Rumors about AMD's future plans have been piling up throughout 2017. So far, Lisa Su, CEO of AMD, has only indicated that the company plans to exceed the 7-8% annual performance growth rate in the electronics industry. Finally, at CES 2018, the company showed a roadmap not just until the end of 2018, but up to 2020. The basis of these plans is to improve chip architectures through the miniaturization of transistors: a progressive transition from the current 14 nanometers to 12 and 7 nanometers.

12 nanometers: the second generation of Ryzen on Zen +

The Zen+ microarchitecture, the second generation of the Ryzen brand, is based on a 12 nanometer process technology. In fact, the new architecture is a revised Zen. The production node at GlobalFoundries factories is moving from 14 nm 14LPP (Low Power Plus) to 12 nm 12LP. The new 12LP process technology should give the chips a 10% performance increase.

Reference: The GlobalFoundries factory network is AMD's former manufacturing facility, spun off in 2009 and merged with other contract manufacturers. In terms of market share for contract manufacturing, GlobalFoundries shares the second place with UMC, significantly behind TSMC. Chip developers - AMD, Qualcomm and others - order production both from GlobalFoundries and other factories.

In addition to the new technical process, the Zen + architecture and chips based on it will receive improved AMD Precision Boost 2 (precise overclocking) and AMD XFR 2 (Extended Frequency Range 2) technologies. In Ryzen mobile processors, you can already find Precision Boost 2 and a special modification of XFR - Mobile Extended Frequency Range (mXFR).

The Ryzen, Ryzen PRO and Ryzen Threadripper families of PC processors will be released in the second generation, but so far there is no information about a generation update for the Ryzen and Ryzen PRO mobile families or the server EPYC line. It is known, however, that some second-generation Ryzen models will have two modifications from the very beginning: with and without graphics integrated into the chip. The entry-level and mid-range Ryzen 3 and Ryzen 5 models will come in both variants, while the high-end Ryzen 7 will not get a graphics-equipped variant. Most likely, the codename Pinnacle Ridge is assigned to the core architecture of these processors (literally, the sharp crest of a mountain, one of the peaks of the Wind River range in Wyoming).

The second generation Ryzen 3, 5 and 7 will begin shipping in April 2018 alongside the 400 series chipsets, while the second generation Ryzen PRO and Ryzen Threadripper will not arrive until the second half of 2018.

7 nanometers: 3rd generation Ryzen on Zen 2, discrete Vega graphics, Navi graphics core

In 2018, the Radeon group will release discrete Vega graphics for laptops, ultrabooks, and laptop tablets. AMD does not share specific details: it is known that discrete chips will work with compact multilayer memory such as HBM2 (integrated graphics use RAM). Separately, Radeon emphasizes that the height of the memory chips will be only 1.7 mm.


Radeon executive reveals integrated and discrete Vega graphics

And in the same 2018, Radeon will transfer graphics chips based on the Vega architecture from the 14 nm LPP process technology immediately to 7 nm LP, completely leaping over 12 nm. But first, the new graphics units will only ship for the Radeon Instinct line. This is a separate family of Radeon server chips for heterogeneous computing: machine learning and artificial intelligence - the demand for them is provided by the development of self-driving cars.

And already at the end of 2018 or early 2019, ordinary consumers will get Radeon and AMD products built on a 7-nanometer process technology: processors on the Zen 2 architecture and graphics on the Navi architecture. Moreover, the design work for Zen 2 has already been completed.

AMD partners who will create motherboards and other components for the third generation of Ryzen are already getting acquainted with Zen 2 chips. AMD is gaining such momentum because the company has two teams "leapfrogging" each other to develop promising microarchitectures. They started by doing Zen and Zen+ in parallel. When Zen was completed, the first team moved on to Zen 2, and when Zen+ was completed, the second team moved on to Zen 3.

7 nanometers "plus": the fourth generation of Ryzen on Zen 3

While one AMD department is solving the problems of mass-producing Zen 2, another is already designing Zen 3 for the technology node designated "7nm+". The company does not disclose details, but judging by indirect data it can be assumed that the process will be improved by complementing the current deep ultraviolet lithography (DUV, Deep Ultraviolet) with new extreme ultraviolet lithography (EUV, Extreme Ultraviolet) with a wavelength of 13.5 nm.


GlobalFoundries has already installed new equipment to move to 5nm

Back in the summer of 2017, one of the GlobalFoundries factories purchased more than 10 lithographic systems from the TWINSCAN NXE series from the Dutch ASML. With partial use of this equipment within the framework of the same 7 nm process technology, it will be possible to further reduce power consumption and increase the performance of chips. There are no exact metrics yet - it will take some more time to debug new lines and bring them to acceptable capacities for mass production.

AMD expects to begin selling chips built on the "7 nm+" node, starting with processors on the Zen 3 microarchitecture, by the end of 2020.

5 nanometers: the fifth and subsequent generations of Ryzen on Zen 4?

AMD has not made an official announcement yet, but one can safely assume that the company's next frontier will be the 5 nm process technology. Experimental chips at this node have already been produced by a research alliance of IBM, Samsung and GlobalFoundries. Dies based on the 5 nm process will require not partial but full use of extreme ultraviolet lithography with an accuracy of better than 3 nm. This resolution is provided by the TWINSCAN NXE:3300B lithography systems that GlobalFoundries purchased from the Dutch company ASML.


A layer as thick as one molecule of molybdenum disulfide (0.65 nanometers) exhibits a leakage current of only 25 femtoamperes / micrometer at 0.5 volts.

But the difficulty also lies in the fact that the 5 nm process will probably have to change the shape of the transistors. The long-established FinFETs (fin-shaped transistors) may give way to promising GAA FETs (gate-all-around transistors). It will take several more years to set up and deploy mass production of such chips. The consumer electronics sector is unlikely to receive them before 2021.

Further reduction of technological nodes is also possible. For example, back in 2003, Korean researchers created a 3 nanometer FinFET. In 2008, a nanometer-scale graphene transistor was created at the University of Manchester. And research engineers at the Berkeley laboratory conquered the sub-nanometer scale in 2016: such transistors can use both graphene and molybdenum disulfide (MoS2). True, at the beginning of 2018 there was still no way to produce a whole chip or wafer from the new materials.

Hawaii GPU Specifications

  • Chip codename: "Hawaii"
  • 6.2 billion transistors (Tahiti's Radeon HD 7970 has 4.3 billion)
  • 4 geometry processors
  • 512-bit memory bus: eight 64-bit controllers with support for GDDR5 memory
  • Core frequency up to 1000 MHz (dynamic)
  • 44 GCN computing units, including 176 SIMD cores, consisting of a total of 2816 ALUs for floating point calculations (integer and floating point formats are supported, with FP32 and FP64 precision)
  • 176 texture units, with support for trilinear and anisotropic filtering for all texture formats
  • 64 ROPs with support for full-screen anti-aliasing modes with programmable sampling of more than 16 samples per pixel, including FP16 or FP32 framebuffer format. Peak performance up to 64 samples per cycle, and in Z only mode - 256 samples per cycle

Radeon R9 290X Graphics Card Specifications

  • Core frequency: up to 1000 MHz
  • Number of universal processors: 2816
  • Number of texture units: 176, blending units: 64
  • Memory type: GDDR5
  • Memory capacity: 4 gigabytes
  • Computing Performance (FP32) 5.6 teraflops
  • Maximum theoretical fill rate: up to 64 gigapixels per second.
  • Theoretical texture sampling rate: up to 176 gigatexels per second.
  • PCI Express 3.0 bus
  • Power consumption up to 275 W
  • One 8-pin and one 6-pin power connector;
  • Two-slot design
  • The recommended price for the US market is $ 549 (for Russia - 19,990 rubles).
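The headline numbers in this list follow directly from the unit counts and the peak 1000 MHz clock. The short sketch below reproduces them, assuming the usual counting rules (two FLOPs per ALU per clock, one pixel per ROP and one texel per TMU per clock); these are theoretical peaks, not measured performance.

```python
# Reproduce the Radeon R9 290X headline figures from its unit counts
# at the peak 1000 MHz core clock (theoretical peaks only).
CLOCK_GHZ = 1.0
ALUS, TMUS, ROPS = 2816, 176, 64

fp32_tflops  = ALUS * 2 * CLOCK_GHZ / 1000   # 2 FLOPs per ALU per clock
fill_gpix_s  = ROPS * CLOCK_GHZ              # one pixel per ROP per clock
texel_gtex_s = TMUS * CLOCK_GHZ              # one texel per TMU per clock

print(f"FP32 compute: {fp32_tflops:.2f} TFLOPS")   # ~5.63 (spec sheet: 5.6)
print(f"Fill rate:    {fill_gpix_s:.0f} Gpix/s")   # 64
print(f"Texture rate: {texel_gtex_s:.0f} Gtex/s")  # 176
```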

Radeon R9 290 Graphics Card Specifications

  • Core frequency: up to 947 MHz
  • Number of universal processors: 2560
  • Number of texture units: 160, blending units: 64
  • Effective memory frequency: 5000 MHz (4 × 1250 MHz)
  • Memory type: GDDR5
  • Memory capacity: 4 gigabytes
  • Memory bandwidth: 320 gigabytes per second.
  • Computing Performance (FP32) 4.9 teraflops
  • Maximum theoretical fill rate: Up to 60.6 gigapixels per second.
  • Theoretical texture sampling rate: up to 152 gigatexels per second.
  • PCI Express 3.0 bus
  • Two Dual Link DVI, one HDMI, one DisplayPort
  • Power consumption up to 275 W
  • Two-slot design
  • The recommended price for the US market is $ 399 (for Russia - 13,990 rubles).

From the name of the top-end novelty, it is clear that the naming system for AMD video cards has changed. The innovation is partially justified by the fact that such a system has long been used in the company's own APUs (the A8 and A10 families, for example) and by other manufacturers (Intel Core i5 and i7 processors have a similar naming system), but for video cards the previous naming scheme was clearly more logical and understandable. It is interesting what made AMD change it right now, since they still had at least a Radeon HD 9000 line in reserve, and the "HD" prefix could have been changed to something else.

The division into the R7 and R9 families also remains not entirely clear to us: why does the 260X still belong to the R7 family, while the 270X already belongs to the R9? With the Radeon R9 290X considered here, though, everything is a little more logical: it belongs to the top-end R9 family and has the highest model number in the series - 290. But why start the leapfrog with "X" suffixes? Why not get by with numbers alone, as in the previous family? If three digits are not enough and numbers like 285 and 295 are unappealing, four digits could have been kept in the name: R9 2950 and R9 2970. But then the system would hardly differ from the previous one, and marketers need to justify their jobs somehow. Well, the name of a video card is a minor matter, as long as the product is good and justifies its price.

And there are no problems with that: the recommended price of the Radeon R9 290X is lower than that of the competitor's corresponding top-end solution in the same price segment. The release of the Radeon R9 290X is clearly aimed at fighting the NVIDIA GeForce GTX 780 based on the GK110 chip, which at the time of its release was the competitor's top board (we do not count the GeForce GTX Titan, since that model has always been more of an image product) and which has a higher recommended price even after NVIDIA's price cuts on its top models.

The recommended price for the Radeon R9 290 is also lower than the price of the corresponding competing solution in the same price segment. The Radeon R9 290 is clearly intended to fight the NVIDIA GeForce GTX 780 based on the GK110 chip, the competitor's junior top-end board (after all, the GeForce GTX Titan has been around for a long time, and the GTX 780 Ti has already been announced and will be released soon). The NVIDIA model has a higher MSRP ($499 versus $399), but it can deliver higher performance in games - games are not Fire Strike from 3DMark, which happens to favor AMD.

Both top AMD graphics card models have four gigabytes of GDDR5 memory. Since the Hawaii graphics chip has a 512-bit memory bus, it would theoretically have been possible to fit 2 GB on them, but that amount of GDDR5 memory is already too small for a top solution, especially since the Radeon HD 7970 had 3 GB of memory, and modern projects like Battlefield 4 already recommend at least 3 GB of video memory. Four gigabytes will definitely be enough for any modern game at the highest settings and resolutions, and even for the future, when multi-platform games for the next generation of consoles - PS4 and Xbox One - are released.

As far as power consumption is concerned, this is not a simple question. Although on paper the power consumption of the new model has not increased too much compared to the Radeon HD 7970 GHz, there are nuances. Like some previous top-end solutions, the AMD Radeon R9 290X has a special switch on the card that allows you to choose one of two BIOS firmwares. This switch is located at the end of the video card next to the mounting plate with the video outputs. Naturally, a PC reboot is required after switching for the changes to take effect. By default, all Radeon R9 290X cards have two BIOS versions flashed, and these modes differ noticeably from each other in power consumption. On the younger R9 290 the switch is physically present, but unlike the older model only one mode is available.

"Quiet Mode" - the position of the "one" switch, closest to the mounting plate of the video card. This mode is for players who are concerned about the noise of the game system. For example, those who play with headphones in a quiet room and have a PC with quiet cooling systems.

Uber Mode - switch position two, farthest from the mounting plate with the video outputs. This mode is designed for maximum performance in games, testing and CrossFire systems. As the names suggest, the quiet mode provides less noise from the cooling system at the price of slightly reduced performance, while the uber mode provides the maximum possible performance at the cost of higher power consumption and fan noise. It is good that the user has a choice and is free to use either mode according to his needs, without restrictions.

Architectural features

The new Hawaii graphics chip, which underlies the AMD Radeon R9 290 (X) series graphics cards, is based on the already familiar Graphics Core Next (GCN) architecture, slightly modified to boost compute capabilities and to fully support all DirectX 11.2 features - as was previously done in the Bonaire chip (Radeon HD 7790), which also became the basis for the Radeon R7 260X. The architectural changes in Bonaire and Hawaii concern improved compute capabilities (support for more concurrently executing threads) and a new version of AMD PowerTune technology, which we will discuss later.

New DirectX 11.2 features include tiled resources, which take advantage of the Hawaii GPU's hardware virtual memory features known as partially resident textures (PRT). Using virtual video memory, it is easy to get efficient hardware support for algorithms that let applications use huge numbers of textures and stream them into video memory. PRT increases the efficiency of video memory use in such tasks, and similar techniques are already used in some game engines.

We have already described PRT in the material dedicated to the release of the Radeon HD 7970, but in Bonaire and Hawaii these capabilities have been expanded. These video chips support all the additional features that were added in DirectX 11.2, mainly related to level of detail (LOD) algorithms and texture filtering.

Although the GCN capabilities were expanded, AMD's main focus in designing the new top-end GPU was to improve the power efficiency of the chip, since the Tahiti was already consuming too much power, and Hawaii included more compute units. Let's take a look at what AMD engineers managed to do to bring a competitive product to the market:

The new graphics processor is logically divided into four parts (Shader Engine), each of which contains 11 enlarged computing units (Compute Units), including texture units, one geometric processor and one rasterizer, as well as several ROP units. In other words, the block diagram of the most modern AMD chip has become even more similar to the diagram of NVIDIA chips, which also have a similar organization.

In total, the Hawaii graphics chip includes 44 Compute Units containing 2816 stream processors, 64 ROPs and 176 TMUs. The GPU in question has a 512-bit memory bus consisting of eight 64-bit controllers, as well as 1 MB of L2 cache. It is produced on the same 28 nm process technology as Tahiti, but already contains 6.2 billion transistors (Tahiti has 4.3 billion).

But this applies only to a full-fledged chip with all active blocks, which is used in the Radeon R9 290X. The younger R9 290 received a chip with 40 active Compute Units containing 2560 stream processors and 160 texture units. But the number of ROP units was not cut down, there are 64 of them left. The same applies to the memory bus, it remains 512-bit, consisting of eight 64-bit controllers.
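These totals are easy to sanity-check from the GCN building blocks described here: 64 stream processors and 4 texture units per Compute Unit, with 44 or 40 active units depending on the model. A minimal sketch:

```python
# Sanity-check the Hawaii unit totals from its GCN building blocks:
# 4 shader engines x 11 CUs, 64 stream processors and 4 TMUs per CU.
SP_PER_CU, TMU_PER_CU = 64, 4

def totals(compute_units: int) -> tuple[int, int]:
    """Return (stream processors, texture units) for a given CU count."""
    return compute_units * SP_PER_CU, compute_units * TMU_PER_CU

print("R9 290X:", totals(4 * 11))  # (2816, 176) with all 44 CUs active
print("R9 290: ", totals(40))      # (2560, 160) with 40 CUs active
```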

Let's take a look at the block diagram of the shader engines that make up the Hawaii GPU. This is the chip's large-scale building block, and there are four such engines:

Each of the Shader Engine includes one geometry processor and one rasterizer, which are capable of processing one geometry primitive per clock cycle. It seems that the geometric performance of Hawaii has not only improved, but should be well balanced compared to previous AMD GPUs.

A GCN shader engine can contain up to four large Render Back-end (RB) blocks, each of which includes four ROPs. The number of Compute Units in a shader engine can also vary, but in this case there are 11 of them, even though the instruction and constant caches are shared between groups of four Compute Units. It would therefore be more logical to include not 11 but 12 compute units per shader engine, but it seems that number no longer fit within Hawaii's power consumption limits.

The GCN compute unit includes various functional blocks: texture fetch modules (16 of them), texture filtering modules (four), a branch prediction unit, a scheduler, execution units (four vector and one scalar), a first-level cache (16 KB per compute unit), memory for vector and scalar registers, and shared memory (64 KB per compute unit).

Since there are four shader engines in the Hawaii GPU, it has four geometry processing and rasterization engines in total. Accordingly, the new top-end GPU from AMD is able to process up to four geometric primitives per clock. In addition, Hawaii has improved geometry buffering and increased caches for geometry primitives. Taken together, this provides a serious increase in performance with large amounts of calculations in geometry shaders and the active use of tessellation.

The compute capabilities of the new processor, albeit a graphics one, have also changed somewhat. The chip includes two DMA engines, which provide full use of the PCI Express 3.0 bus with a declared bidirectional bandwidth of 16 GB/s. Relatively new is the possibility of asynchronous computing, carried out using eight (in the case of the Hawaii chip) Asynchronous Compute Engines (ACEs).

The ACEs run in parallel with the GPU and each is capable of handling eight instruction streams. This organization enables independent scheduling and multitasking, access to data in global memory and L2 cache, and fast context switching. This is especially important in computational tasks, as well as in gaming applications using GPUs for both graphics and general computing. Also, this innovation can theoretically be an advantage when using low-level access to GPU capabilities using APIs such as Mantle.

Let's get back to Hawaii's capabilities as they apply to graphics workloads. With the expected proliferation of UltraHD monitors and the resulting growth in resolution requirements, it becomes necessary to increase ROP throughput. The Hawaii chip includes 16 Render Back-end (RBE) blocks, twice as many as Tahiti. These sixteen RBEs contain 64 ROPs capable of handling up to 64 pixels per clock, which can be very useful in some cases.

As for the memory subsystem, Hawaii has 1 megabyte of L2 cache, which is divided into 16 partitions of 64 KB each. Both a 33% increase in the amount of cache memory and an increase in internal bandwidth by a third are declared. The total throughput of L2 / L1 caches is declared equal to 1 TB / s.

Memory is accessed using eight 64-bit controllers, which together make up a 512-bit bus. The memory chips in the Radeon R9 290X are clocked at 5.0 GHz for a total memory bandwidth of 320 GB / s, more than 20% higher than the Radeon HD 7970 GHz. At the same time, the chip area occupied by the memory controller was reduced by 20%, compared to the 384-bit controller in Tahiti.
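The 320 GB/s figure follows directly from the bus width and the effective memory clock; a quick check:

```python
# Peak memory bandwidth of the R9 290X: 512-bit bus at 5.0 GT/s effective.
bus_bits, effective_gt_s = 512, 5.0
bandwidth_gb_s = bus_bits / 8 * effective_gt_s  # bytes per transfer x transfer rate
print(f"{bandwidth_gb_s:.0f} GB/s")             # -> 320 GB/s
```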

Mantle Low Level Graphics API

The introduction of a new graphics API called Mantle was quite unexpected. AMD stepped into the sphere of Microsoft's interests, with its DirectX, and decided on a certain... let's say, confrontation. Of course, the reason for the move is that AMD supplies the GPUs for all next-generation gaming consoles - to Sony, Microsoft and Nintendo - and AMD wanted to turn that into a tangible advantage.

AMD decided to release this API largely under the influence of DICE and EA, whose Frostbite game engine underlies Battlefield and several other titles. The engineers at DICE, which builds the Frostbite engine, consider the PC an excellent gaming platform. They have been working with AMD for a long time to develop and implement new technologies in Frostbite 3 - the company's new engine, which underlies more than 15 games across the Battlefield, Need for Speed, Star Wars, Mass Effect, Command & Conquer, Dragon Age and Mirror's Edge series, among others.

No wonder AMD has seized upon an opportunity like deep optimizations of Frostbite for their GPUs. This game engine is very modern and supports all the important features of DirectX 11 (even 11.1), but developers wanted to make full use of the capabilities of PC systems, move away from the limitations of DirectX and OpenGL and use the CPU and GPU more efficiently, since some functionality that exceeds the DirectX specifications and OpenGL remains unused by developers.

The Mantle Graphics API offers all the hardware capabilities of AMD graphics cards without being constrained by current software limits and using a thinner software interface between the game engine and GPU hardware resources, similar to the way it is done on game consoles. And taking into account the fact that all future gaming consoles of the "desktop" format (Playstation 4 and Xbox One, first of all) are based on AMD graphics solutions based on the GCN architecture familiar from PCs, AMD and game developers have an interesting opportunity - a special graphics API that will allow game engines to be programmed on a PC in the same style as on consoles, with minimal API impact on the game engine code.

According to preliminary data, using Mantle provides a nine-fold advantage in the execution time of draw calls compared to other graphics APIs, which reduces the load on the CPU. Such a multiple advantage is only possible in an artificial environment, but some superiority will be provided in typical 3D gaming conditions.

This low-level, high-performance graphics API was developed at AMD with significant input from leading game developers, especially DICE, and the soon-to-be-released Battlefield 4 is the first project to use Mantle; other game developers will be able to use the API in the future, although it is not yet known exactly when.

The release version of Battlefield 4 will only support DirectX 11.1, and support for the Mantle API is planned for December, when a free update is released, further optimized for AMD Radeon graphics cards. On PC systems with GCN architecture video cards, the Frostbite 3 engine will use Mantle, which will reduce the load on the CPU by parallelizing the work on eight computing cores, and will introduce special low-level performance optimizations with full access to the GCN hardware capabilities.

Mantle leaves the public with more questions than answers. For example, it is not very clear how the low-level Mantle driver, with its direct access to GPU resources, will work in a Windows operating system with DirectX, which normally manages GPU resources itself, and how those resources will be shared between a game running Mantle and the Windows system. Some questions were answered at the APU13 summit, but that amounted to a short list of partners and one demo program, without much technical detail.

Initially, enthusiasts expected that the next-generation consoles would also support Mantle, but in reality this will not be the case, simply because it is neither necessary nor beneficial for the console makers. Microsoft has its own graphics API and has already confirmed that the Xbox One will use DirectX 11.x exclusively - close in capabilities to DirectX 11.2, which is also supported by modern AMD video chips. Other graphics APIs like OpenGL and Mantle won't be available on Xbox One - and that is Microsoft's official position. The same probably goes for the Sony PlayStation 4, although representatives of that company have not yet officially announced anything on the subject.

In addition, according to some reports, Mantle will not be available to game developers other than DICE for several more months. And if you put all the available information together, the prospects for Mantle at the moment really do look vague. AMD, in turn, claims that Mantle was never intended for use in consoles, and that it is simply a low-level API "similar" to the console ones. How it is similar, if the APIs are still different, is not entirely clear. Perhaps only in its "low" level and proximity to the hardware, but that is clearly not something all developers need, and it will require additional development time.

As a result, in the absence of support for Mantle on consoles, this graphics API can be used exclusively on the PC, which reduces interest in it. Many people even remember such graphical APIs of the distant past as Glide. And although the difference with Mantle is great, there is a high probability that without support on consoles and on two-thirds of dedicated GPUs (approximately this share has been occupied by the corresponding solutions from NVIDIA for several years), this API will never really become popular. It will likely be used by individual game developers who take an interest in low-level GPU programming and receive appropriate support from AMD.

The main question is how close Mantle is to the low-level console APIs and whether it really helps reduce development or porting costs. It also remains unclear how big the real advantage is from the transition to low-level GPU programming and how many capabilities of graphics chips are not disclosed in the existing popular APIs that can be used together with Mantle.

TrueAudio audio processing technology

We have already talked about this technology in as much detail as possible in the theoretical material dedicated to the release of a new line from AMD. With the release of the Radeon R7 and R9 series, the company introduced the world to AMD TrueAudio technology, a programmable audio engine that is only supported on the AMD Radeon R7 260X and R9 290 (X). It is the Bonaire and Hawaii chips that are the latest in terms of technology, they have the GCN 1.1 architecture and other innovations, including support for TrueAudio.

TrueAudio is a programmable audio engine embedded in AMD GPUs, the first being the Bonaire chip on which the Radeon R7 260X is based, and the second being Hawaii. TrueAudio provides guaranteed real-time processing of audio tasks on a system with a compatible GPU, regardless of the installed CPU. For this purpose, several Tensilica HiFi EP Audio DSP cores are integrated into the Hawaii and Bonaire chips, along with supporting logic:

The TrueAudio capabilities are accessed through popular sound processing libraries, whose developers can use the resources of the built-in audio engine via the dedicated AMD TrueAudio API. With new technologies like this, the most important issue is partnership with the developers of audio engines and sound libraries. AMD works closely with many companies known for their work in this field: game developers (Eidos Interactive, Creative Assembly, Xaviant, Airtight Games), audio middleware developers (FMOD, Audiokinetic), audio algorithm developers (GenAudio, McDSP) and others.

TrueAudio technology is quite interesting given the stagnation in hardware processing of audio on the PC. There remains the question of the relevance of the solution at the moment. We doubt that game developers will rush to embed this technology into their projects, taking into account extremely limited compatibility (at the moment TrueAudio is supported only on three video cards: Radeon HD 7790, R7 260X and R9 290X) without additional motivation from AMD. But we welcome all innovations in sophisticated audio processing and hope that the technology will spread.

Improved PowerTune power management and overclocking settings

AMD's PowerTune power management technology has also received some enhancements in the Radeon R9 290X graphics card. We already wrote about these improvements in the Radeon HD 7790 review: for more efficient power management, the latest AMD graphics chips have several states with different frequency and voltage values, which allows higher clock speeds than before. The GPU always runs at the optimal voltage and frequency for the current GPU load and power consumption, and switching between states is based on these values.

The Hawaii chip integrates the second-generation serial VID interface, SVI2. All the latest GPUs and APUs have this voltage regulator, including Hawaii and Bonaire, as well as all Socket FM2 APUs. The voltage regulation step is 6.25 mV, with 255 possible values between 0.00 V and 1.55 V, and the regulator can drive multiple power rails.

In the new algorithm, introduced back with Bonaire, PowerTune does not have to drop the frequency sharply when the consumption limit is exceeded; the voltage is lowered along with it. Transitions between states have become very fast: so as not to exceed the set consumption limit even briefly, the GPU switches PowerTune states 100 times per second. As a result, Hawaii simply does not have a single operating frequency - there is only an average over a certain period of time. This approach helps to squeeze everything out of the available hardware, improves energy efficiency and reduces the noise of cooling systems.
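Since the chip re-evaluates its state 100 times per second, the only meaningful "clock speed" is an average over some window. The toy sketch below illustrates the idea; the 10 ms sample values are invented purely for the example.

```python
# Illustration only: with PowerTune switching states every 10 ms, the
# effective clock is just the time-average of the per-interval clocks.
# The sample values below are made up for the example.
samples_mhz = [1000, 1000, 940, 880, 1000, 960, 1000, 920, 1000, 1000]  # 10 x 10 ms

effective_clock = sum(samples_mhz) / len(samples_mhz)
print(f"Effective clock over 100 ms: {effective_clock:.0f} MHz")
```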

Accordingly, new features have been added to the Catalyst Control Center driver settings in the OverDrive tab - it has been completely redesigned in order to get the most out of the innovations in PowerTune for the R9 290 series solutions.

The first thing you notice is the relationship between the Power Limit and the GPU clock. These parameters are now linked to each other in the energy consumption and heat dissipation diagram. Due to the fact that consumption and performance are directly related in the new PowerTune algorithm in Hawaii, such an interface makes overclocking more intuitive and understandable.

It also reflects the fully dynamic GPU frequency control introduced in the R9 290 series. Overclocking is now specified by increasing the corresponding value (GPU Clock) by a certain percentage; the option available on previous solutions of specifying an exact frequency is gone.

The second thing that has been seriously changed in the new OverDrive interface is fan speed control, which has also been completely overhauled. In previous generations, the OverDrive tab only let the user set a fixed fan speed that was constantly maintained. In the new interface the setting is called "Maximum Fan Speed" and sets the upper limit of the fan speed. The actual fan speed will vary based on GPU load and temperature, rather than remaining fixed as before.

By default, the rotational speed of the cooler on the Radeon R9 290X depends on the current settings of the loaded BIOS firmware. Manually changing the maximum fan speed allows you to select any other value. And when overclocking, it is advisable to take into account not only the power and frequency settings, but also to increase the fan speed limit, otherwise the maximum performance will be limited by the GPU temperature and its cooling.

Changes in AMD CrossFire Technology

One of the most interesting hardware innovations in the AMD Radeon R9 290 series video cards is support for AMD CrossFire technology without the need to connect video cards to each other using special bridges. Instead of dedicated communication lines, GPUs communicate with each other over the PCI Express bus using a hardware DMA engine. At the same time, the performance and image quality is exactly the same as with the connecting bridges. This solution is much more convenient, and AMD claims that they have not encountered compatibility problems on different motherboards.

It is important that for maximum performance in AMD CrossFire mode on all Radeon R9 290X video cards, it is advisable to set the BIOS switch to "Uber Mode", and good cooling should be provided for all boards, otherwise the new PowerTune technology will lower the GPU clock speeds, which will lead to a drop in performance.

The CrossFire technology provides excellent scaling on multi-chip systems with the R9 290X, if we take into account the average frame rate (for CrossFire, there are still questions about the smoothness of the footage, which we studied earlier). The following diagram shows the comparative performance of a single AMD Radeon R9 290X and two such cards working together on rendering using AMD CrossFire technology.

All the games shown in the diagram provide an excellent increase in average frame rate when a second video card is added - up to twofold. In the worst case, these applications show 80% CrossFire efficiency, and the average is 87%.

When a third AMD Radeon R9 290X board is added to the CrossFire system, the efficiency is expected to drop even lower, but three such video cards still provide a 2.6-fold increase in speed relative to a single board, which is also pretty good.
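Scaling efficiency here is simply the measured speed-up divided by the number of boards. The sketch below applies that to the figures quoted above (a 1.74x average speed-up corresponds to the 87% two-card efficiency, and 2.6x to three cards).

```python
# CrossFire scaling efficiency = measured speed-up / number of boards,
# using the figures quoted above for the Radeon R9 290X.
def efficiency(speedup: float, boards: int) -> float:
    return speedup / boards * 100

print(f"2 boards, 1.74x average speed-up: {efficiency(1.74, 2):.0f}% efficiency")
print(f"3 boards, 2.6x speed-up:          {efficiency(2.6, 3):.0f}% efficiency")
```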

AMD Eyefinity technology and UltraHD support

AMD is one of the leaders in the field of display output: the company was among the first to implement DVI Dual Link support for monitors with a resolution of 2560×1600 pixels, DisplayPort support, output to three or more monitors from a single GPU (Eyefinity), 4K output over HDMI, and so on.

4K resolution, also known as Ultra HD, is 3840×2160 pixels - exactly four times the resolution of Full HD (1920×1080) - and is very important to the industry. The problem remains the low prevalence of Ultra HD monitors and TVs at present: 4K TVs are sold only in very large and expensive models, and the corresponding monitors are extremely rare and also overpriced. But the situation is about to change, as analysts predict a bright future for Ultra HD devices.

AMD supports two types of Ultra HD displays: TVs that only support 30 Hz or lower at 3840×2160 and connect via HDMI or DisplayPort, and tiled monitors that are driven as two 1920×2160 halves at 60 Hz. The second type of monitor is also supported through DisplayPort 1.2 MST hubs, which were released recently.

To support split monitors, a new VESA Display ID 1.3 standard has been introduced, which describes additional display capabilities. The new VESA standard will automatically "glue" the image for such monitors, if supported by both the monitor and the driver. This is planned for the future, but for now, such tiled 4K monitors require manual configuration. AMD says the latest versions of the Catalyst driver already have auto-configuration capabilities for the most popular monitor models.

In addition, AMD Radeon graphics cards will also support a third type of Ultra HD display, which needs only a single stream to work at ultra-high resolution with a 60 Hz refresh rate. The Radeon R9 290X delivers sufficient 3D performance for multi-monitor configurations, which is essential at maximum game settings and the highest rendering resolutions on such systems. The AMD Radeon R9 290X also has an advantage over the NVIDIA GeForce GTX 780 in the form of more video memory, which matters at resolutions like 5760×1080 pixels and 4K.

The AMD Radeon R9 290X graphics card supports UltraHD resolutions for both HDMI 1.4b (with a low refresh rate not exceeding 30 Hz) and DisplayPort 1.2. Moreover, the performance of the new solution makes it possible to play at maximum settings in this resolution, getting an acceptable frame rate in almost any game.

The ability to use multiple monitors is also very important for PC game enthusiasts. Eyefinity technology has been updated in the Radeon R9 series of graphics cards, and the new Radeon R9 290X graphics card supports configurations of up to six displays. The AMD Radeon R9 series supports up to three HDMI / DVI displays with AMD Eyefinity technology.

This function requires a set of three identical displays that support identical timings, the output is configured at system startup, and it does not support hot-plugging a display for a third HDMI / DVI connection. To take advantage of the ability to connect more than three displays on the AMD Radeon R9 290X, either DisplayPort capable monitors or certified DisplayPort adapters are required.

First, let's look at the theoretical indicators. Let's try to figure out how much the new Radeon R9 290X should be faster than the previous top-end Radeon HD 7970 GHz. So far, we do not take into account the possible efficiency improvement associated with minor architectural changes in GCN, but if we consider all blocks in the R9 290X and HD 7970 to be identical, then we get the following picture:

With a not-so-big difference in area and theoretically almost the same level of power consumption (not shown in the table), the peak geometry processing rate has almost doubled, compute and texture performance have increased by about 30%, video memory bandwidth by 20%, and the fill rate by as much as 90%! The latter value will be very important in view of the expected popularization of UltraHD resolution in the near future, because the number of pixels on screen will noticeably increase.
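Those percentages can be reproduced from unit counts and clocks. The Radeon HD 7970 GHz Edition figures used below (2048 stream processors, 128 TMUs, 32 ROPs at 1050 MHz) are its commonly published specifications, quoted here as an assumption since they do not appear in this article.

```python
# Reproduce the generation-over-generation gains from unit counts and clocks.
# HD 7970 GHz Edition figures (2048 SPs, 128 TMUs, 32 ROPs, 1050 MHz) are its
# commonly published specs and are an assumption for this comparison.
def rates(sp, tmu, rop, clock_ghz):
    return {"fp32_tflops": sp * 2 * clock_ghz / 1000,  # 2 FLOPs per SP per clock
            "gtex_s": tmu * clock_ghz,                 # texels per clock
            "gpix_s": rop * clock_ghz}                 # pixels per clock

r9_290x = rates(2816, 176, 64, 1.00)
hd7970g = rates(2048, 128, 32, 1.05)

for key in r9_290x:
    gain = (r9_290x[key] / hd7970g[key] - 1) * 100
    print(f"{key}: +{gain:.0f}%")
# -> compute and texture rate roughly +31%, fill rate roughly +90%
```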

All these improvements have raised the effective performance per square millimeter of die area. It would be interesting to know how energy efficiency has changed, but AMD does not like to publish TDP figures for its modern top solutions, and the 275 W figure given for the new board is questionable. We can only hope that energy efficiency has not deteriorated. Performance, though, should definitely improve by at least 20-30% compared to the Radeon HD 7970, and in some cases by even more.

As if to confirm the increased capabilities, especially in fill rate, AMD cites the average frame rates achieved in Battlefield 4, which is due out in a matter of days. Battlefield 4 is the next installment in the hit Battlefield series developed by DICE and is arguably the most anticipated game of the year.

It's important to us that Battlefield 4 and its developer DICE are part of the AMD Gaming Evolved Partner Program, so there won't be any problems optimizing Battlefield 4 for GCN GPUs. Moreover, the new Frostbite 3 game engine, on which the Battlefield 4 project is based, uses many of the most advanced capabilities of AMD video chips, and a version with support for the Mantle API is expected in December. In the meantime, let's take a look at the performance in the regular version of the game:

As you can see, even in "quiet" mode the Radeon R9 290X clearly outperforms the competing GeForce GTX 780 at both tested resolutions. However, it is theoretically possible that at such high resolutions the NVIDIA card is held back by its smaller amount of video memory compared to the R9 290X. A larger amount of video memory is, of course, another advantage of AMD's new product, but it would be interesting to see a comparison at a lower resolution, where it is not a determining factor.

Theoretical conclusions

At the end of October 2013, AMD offered the market the Radeon R9 290X video card with very competitive pricing and capabilities, followed a little later by the younger Radeon R9 290. Based on the theoretical characteristics above, the recommended prices and the cards' performance in games, one can argue that AMD's new top video card models offer an excellent combination of price, performance and functionality.

The functionality of the new products is further strengthened by two very interesting AMD initiatives: a sound DSP engine built into the modern chips in the form of TrueAudio technology, and the new low-level Mantle graphics API. Their development was made possible in large part by the fact that AMD supplies the graphics solutions for all of the next-generation game consoles. And although the prospects of these initiatives in PC games are still vague and they have not yet gained much popularity among game developers, this is only the beginning, and with a proper approach to promoting its technologies AMD can make them succeed.

Powered by the new Hawaii GPU, these solutions have become a powerful engine that should drive the new technologies, Mantle and TrueAudio, as well as the company's entire current product line. High-end graphics cards are the products that help sell everything else, and the Radeon R9 290 (X) series boards should handle that role quite well. The only questionable points are the likely high power consumption of the new product and insufficient supply on the market; there are obvious problems with the availability of these cards.

AMD Radeon R9 280X Graphics

  • Chip codename: "Tahiti"
  • Core frequency: up to 1000 MHz
  • Number of universal processors: 2048
  • Number of texture units: 128, blending units: 32
  • Effective memory frequency: 6000 MHz (4 × 1500 MHz)
  • Memory type: GDDR5
  • Memory bus: 384 bit
  • Memory capacity: 3 gigabytes
  • Memory bandwidth: 288 gigabytes per second
  • Computing Performance (FP32): 4.1 teraflops
  • Maximum theoretical fill rate: 32.0 gigapixels per second
  • Theoretical texture sampling rate: 128.0 gigatexels per second.
  • Two CrossFire connectors
  • PCI Express 3.0 bus
  • One 8-pin and one 6-pin power connectors
  • Two-slot design
  • MSRP: $ 299

AMD Radeon R9 280 Graphics

  • Chip codename: "Tahiti"
  • Core frequency: up to 933 MHz
  • Effective memory frequency: 5000 MHz (4 × 1250 MHz)
  • Memory type: GDDR5
  • Memory bus: 384 bit
  • Memory capacity: 3 gigabytes
  • Memory bandwidth: 240 gigabytes per second.
  • Maximum theoretical fill rate: 30.0 gigapixels per second
  • Theoretical texture sampling rate: 104.5 gigatexels per second.
  • PCI Express 3.0 bus
  • Connectors: two DVI Dual Link, HDMI 1.4, DisplayPort 1.2
  • Power consumption: up to 250 W
  • One 8-pin and one 6-pin power connectors
  • Two-slot design

In the company's new lineup, the 280X sits one step below the top-end R9 290 (X), which came out a little later. The R9 280X is based on the successful Tahiti video chip, which until recently was the top-end part, and is almost a complete analogue of the Radeon HD 7970 GHz model, but it went on sale at $ 299 (in the US market). Among the advantages of the model, AMD cites its 3 gigabytes of video memory, which will be in demand at high resolutions such as 2560 × 1440 and Ultra HD in demanding games like Battlefield 4; moreover, 3 GB of video memory is the official recommendation of that game's developers.

As for comparing performance and price with previous solutions, AMD, following its competitor, has taken a liking to comparisons with video cards from many years ago. Of course, the new product looks just fine when compared with the Radeon HD 5870, which came out a full four years earlier:

The graphics cards are compared in the modern 3DMark benchmark suite, so it is no surprise that the R9 280X is more than twice as fast as the old flagship. More importantly, this performance is offered for around $ 300, which is pretty good, although some Radeon HD 7970 models already sell for nearly that amount. As for the competitor's solutions, AMD claims an average advantage of 20-25% over the similarly priced NVIDIA GeForce GTX 760.

The numerical name chosen for the R9 280 fits well into the naming scheme of AMD's video card line, unlike some other solutions. The card did not have to be given a non-round number; it was simply stripped of the "X" suffix that belongs to the older R9 280X. This worked out well because room for a junior modification on the Tahiti chip had been set aside in advance.

The Radeon R9 280 occupies a position in the mid-price range, between the R9 270X and R9 280X, that is, between full-fledged models based on the Pitcairn and Tahiti chips, and in performance it is very close to the Radeon HD 7950 Boost known from the previous generation. The differences from last year's board come down to slightly increased clock speeds and typical power consumption, and the difference is small. The recommended price for the Radeon R9 280 currently matches the price of the competitor's comparable solution from the same price segment, the GeForce GTX 760, which is the main rival for the new Radeon model.

The new product from the Radeon R9 series, like the older R9 280X, carries three gigabytes of GDDR5 memory, which is quite enough for resolutions above 1920 × 1080 (1200) pixels even in modern demanding games at maximum graphics quality settings. In fact, this is almost the ideal amount for a video card in the middle and upper-middle price ranges, because there is simply no point in installing more of the fast and expensive GDDR5 memory. Perhaps even 1.5 GB would be enough for some games, but not at high resolutions or on multi-monitor systems.

The characteristics of the reference Radeon R9 280 board, its PCB design and its cooler do not differ from those of the Radeon HD 7950 Boost, but this is not very important, since all of AMD's partners immediately offered their own variants with custom PCB and cooler designs, as well as versions with higher GPU frequencies. The video card requires additional power through one 8-pin and one 6-pin connector, and it has two DVI outputs, one HDMI 1.4 and one DisplayPort 1.2.

The Radeon R9 280 can be considered a cut-down version of the R9 280X, since the graphics processors of both models have similar characteristics, except that in the younger one four compute units are disabled (28 of the 32 remain active), which gives 1792 stream processors instead of the 2048 in the full version. The same applies to the texture units: their number has dropped from 128 TMUs to 112 TMUs, since each GCN compute unit contains four texture units.

The rest of the chip has not been cut down: all 32 ROPs remain active, as do the memory controllers. The Tahiti GPU in the Radeon R9 280 therefore has the same 384-bit memory bus, assembled from six 64-bit channels, as the older R9 280X.

The operating frequencies of the new model are slightly higher than those of the Radeon HD 7950 Boost: the graphics processor received a slightly increased turbo frequency of 933 MHz, while the video memory of the new product operates at the usual 5 GHz. The use of fairly fast GDDR5 memory on a 384-bit bus gives a relatively high bandwidth of 240 GB/s.
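
As a quick cross-check of the figures in this section, a few lines of arithmetic reproduce the derived unit counts and the memory bandwidth (a minimal illustration using the numbers quoted above):

```python
# Cut-down Tahiti in the Radeon R9 280: 28 of 32 GCN compute units are active.
active_cus = 28
stream_processors = active_cus * 64   # 64 ALUs per GCN compute unit -> 1792
texture_units     = active_cus * 4    # 4 TMUs per GCN compute unit  -> 112

# 384-bit bus with 5 GHz effective GDDR5: bytes per transfer x transfer rate.
bandwidth_gb_s = 384 / 8 * 5.0        # -> 240 GB/s

print(stream_processors, texture_units, bandwidth_gb_s)
```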

Judging by the very close specifications, the theoretical performance of the Radeon R9 280 should be practically identical to the Radeon HD 7950 Boost in every respect, and the new product should trail the older R9 280X, based on a full Tahiti chip, by about 15%. In the popular 3DMark Fire Strike benchmark, according to the company's own measurements, the new Radeon R9 280 is about 13% slower than the Radeon R9 280X, which is close to the theoretical difference.

In general, the Radeon R9 280 has entered the market as an attractive card in terms of price and performance, outperforming the similarly priced NVIDIA GeForce GTX 760 in almost all games. Introduced in March, the Radeon R9 280 became one of the most advantageous offers in this price niche; users should be satisfied with the speed it delivers for relatively little money.

Radeon R9 270 (X) Series Graphics Accelerators

  • Chip codename: "Curacao"
  • Manufacturing technology: 28 nm
  • 2.8 billion transistors
  • Unified architecture with an array of common processors for streaming processing of multiple types of data: vertices, pixels, etc.
  • Hardware support for DirectX 11.1, including Shader Model 5.0
  • 256-bit memory bus: four 64-bit controllers with support for GDDR5 memory
  • Core frequency up to 925 MHz
  • 20 GCN computational units, including 80 SIMD cores, consisting of a total of 1280 ALUs for floating point calculations (integer and floating point formats are supported, with FP32 and FP64 precision)
  • 80 texture units, with support for trilinear and anisotropic filtering for all texture formats
  • 32 ROPs with support for anti-aliasing modes with the possibility of programmable sampling of more than 16 samples per pixel, including FP16 or FP32 framebuffer format. Peak performance up to 32 samples per cycle, and in Z only mode - 128 samples per cycle
  • Integrated support for up to six monitors connected via DVI, HDMI and DisplayPort

AMD Radeon R9 270X Graphics

  • Core frequency: up to 1050 MHz
  • Memory type: GDDR5
  • Memory bus: 256 bit
  • Memory capacity: 2 or 4 gigabytes
  • Computing Performance (FP32): 2.7 teraflops
  • Maximum theoretical fill rate: 33.6 gigapixels per second
  • Theoretical texture sampling rate: 84.0 gigatexels per second.
  • One CrossFire connector
  • PCI Express 3.0 bus
  • Connectors: two DVI Dual Link, HDMI 1.4, DisplayPort 1.2
  • Power consumption: up to 180 W
  • Two-slot design
  • MSRP: $ 199 (4GB model - $ 229)

Radeon R9 270 Graphics Card Specifications

  • Core frequency: 925 MHz
  • Number of universal processors: 1280
  • Number of texture units: 80, blending units: 32
  • Effective memory frequency: 5600 MHz (4 × 1400 MHz)
  • Memory type: GDDR5
  • Memory bus: 256 bit
  • Memory capacity: 2 gigabytes
  • Memory bandwidth: 179 gigabytes per second
  • Computing Performance (FP32): 2.37 teraflops
  • Theoretical texture sampling rate: 74.0 gigatexels per second.
  • CrossFire connector
  • PCI Express 3.0 bus
  • Connectors: two DVI Dual Link, HDMI 1.4, DisplayPort 1.2
  • Power consumption: up to 150 W
  • Two-slot design
  • MSRP: $ 179

Radeon R7 265 Graphics Card Specifications

  • Core frequency: 900 (925) MHz
  • Number of universal processors: 1024
  • Number of texture units: 64, blending units: 32
  • Effective memory frequency: 5600 MHz (4 × 1400 MHz)
  • Memory type: GDDR5
  • Memory bus: 256-bit
  • Memory capacity: 2 gigabytes
  • Memory bandwidth: 179 gigabytes per second
  • Computing Performance (FP32): 1.89 teraflops
  • Maximum theoretical fill rate: 29.6 gigapixels per second
  • Theoretical texture sampling rate: 59.2 gigatexels per second.
  • CrossFire support
  • PCI Express 3.0 bus
  • Connectors: two DVI Dual Link, HDMI 1.4, DisplayPort 1.2
  • Power consumption: up to 150 W
  • One 6-pin power connector
  • Two-slot design
  • MSRP: $ 149

The R9 270X is in the middle of AMD's Radeon lineup and is based on the new Curacao video chip, which is practically the twin of Pitcairn. The names of the models Radeon R9 270 and 270X differ only in the additional symbol "X" in the name of the older model. In the previous family, such a difference was denoted by the numbers xx50 and xx70, which was somewhat more logical and understandable. But we have almost got used to the new system, especially since “extreme” indices are now loved not only by AMD.

The Radeon R9 270X video card almost completely repeats the Radeon HD 7870 known from the previous line, but it will sell in the North American market for only $ 199. It does differ from last year's card in speed: the GPU and video memory clock frequencies have been raised, which should have a positive effect on performance. Moreover, the maximum frequencies themselves mean little now; in practice the GPU can run at an even higher frequency, and the R9 270X will be closer in speed to the Radeon HD 7950 than to the HD 7870.

The Radeon R9 270 occupies a position in the lower part of the middle of the new line and is also very close to the Radeon HD 7870 known from the previous line. The difference from last year's board is a slightly lower GPU clock speed. As we have come to expect, the recommended price for the Radeon R9 270 turns out to be slightly lower than the price of the competitor's corresponding solution from the same price segment. It is not so easy to pick a rival for the Radeon R9 270: the new product seems clearly aimed at fighting the similarly priced NVIDIA GeForce GTX 660, yet AMD compares its solution with the GeForce GTX 650 Ti Boost, which sells for much less and is rather a competitor for the R7 260X.

The rest of the characteristics of the reference Radeon R9 270 board, its PCB design and its cooler are not that important, since AMD's partners have been offering several models with their own PCB designs, original coolers and higher GPU frequencies since the announcement.

The models under consideration have two gigabytes of video memory, which is quite enough for resolutions up to 1920 × 1080 (1200) even in modern demanding games at high settings. Traditionally, the performance and price of the new products are compared with previous solutions. This time the comparison again uses a four-year-old model, the Radeon HD 5850, which in its day sold for an even slightly higher price:

Unsurprisingly, the Radeon R9 270X also provides more than double the performance of one of the older models in modern benchmarks. The second card, the Radeon HD 6870, is outperformed by almost the same margin. As for the comparison with NVIDIA video cards, AMD pits the new product against the GeForce GTX 660, claiming that its $ 199 solution is 25-40% faster than the competitor in a specially selected set of modern games.

If we turn to the later-released Radeon R7 265, the first curious thing is the chosen name, which exposes the imperfection of AMD's naming system for video cards. First, the video card had to be given a non-round number between 260 and 270, since the "X" suffix is already taken by the R7 260X and there is simply no room left for a junior modification on the Pitcairn chip. It could have been worse, though: they might have given the new product yet another suffix, "L" for example, which would have led to even more confusion.

Secondly, judging by the name, the Radeon R7 265 for some reason belongs to the R7 series rather than the R9, which includes only a slightly more powerful solution based on the same Pitcairn chip. It turns out that the R7 line now contains both Pitcairn-based video cards, which lack TrueAudio support and some GCN 1.1 capabilities, and Bonaire-based solutions that do support these technologies, while similar Pitcairn-based boards end up in the entirely different R7 and R9 families. In short, the resulting confusion is simply wild, as we warned in the first articles on AMD's updated lineup and naming system.

The Radeon R7 265 occupies a position toward the bottom of the company's new lineup, between the R9 270 and the R7 260X, and in performance it is very close to the familiar Radeon HD 7850 from the previous generation; the difference between them is not too great. The recommended price for the Radeon R7 265 fully corresponds to the price of the competitor's comparable solution from the same price segment, the GeForce GTX 750 Ti, which remains the only rival for the Radeon R7 265 now that the GeForce GTX 650 Ti Boost is no longer produced.

The most powerful model in the Radeon R7 series, like the older R9 270, carries two gigabytes of GDDR5 memory, which is quite enough for resolutions up to 1920 × 1080 (1200) even in modern demanding games at high quality settings. For such an inexpensive video card there is simply no point in installing more of the fast and expensive GDDR5 memory, while a smaller amount would hurt its performance badly.

The characteristics of the reference Radeon R7 265 board, its PCB design and its cooler do not differ from those of the Radeon R9 270, and they are not particularly important anyway, since AMD's partners immediately offered other variants with custom PCB designs, original coolers and higher GPU frequencies. All of them make do with a single 6-pin power connector, though they may differ in the set of display outputs.

The Radeon R7 265 can be considered a cut-down version of the R9 270. The graphics processors of both models are very similar, except that in the younger one four compute units are disabled (16 of 20 remain active), which gives 1024 stream processors instead of the full version's 1280. The same applies to the texture units: their number has dropped from 80 TMUs to 64 TMUs, since each GCN compute unit contains four texture units. The rest of the chip is unchanged: all the ROPs remain in place, as do the memory controllers. That is, this GPU has 32 active ROPs and four 64-bit memory controllers, which together form a 256-bit bus.

The operating frequencies of the new model are identical to those of the Radeon R9 270: the graphics processor in the Radeon R7 265 received the same 900 MHz base frequency and 925 MHz turbo frequency, and the video memory operates at 5.6 GHz. The use of fairly fast GDDR5 memory gives a relatively high bandwidth of 179 GB/s. The memory capacity of this model is 2 GB, which is quite logical for a budget video card. The typical power consumption has not changed either: the official figure for the Radeon R7 265 remains the same as for the R9 270, 150 W, although in practice the younger model should consume slightly less.

Naturally, the new Radeon R7 265 supports all the technologies that other models on the same GPU do; we have written about all the new technologies supported by AMD graphics chips in the respective reviews. Judging by the theoretical figures, comparing the Radeon R7 265 with the R7 260X gives mixed conclusions: the newcomer is much faster in ROP throughput and has far higher video memory bandwidth, but in raw compute and texturing speed it is even slightly behind the junior model.
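
A minimal sketch makes this mixed picture concrete, using the peak figures from the specification lists above:

```python
# Peak figures taken from the spec lists above (turbo values).
r7_265  = {"fp32_tflops": 1.89, "texture_gt_s": 59.2, "fill_gp_s": 29.6, "bandwidth_gb_s": 179}
r7_260x = {"fp32_tflops": 2.0,  "texture_gt_s": 61.6, "fill_gp_s": 17.6, "bandwidth_gb_s": 104}

for metric in r7_265:
    print(f"{metric:15s} R7 265 / R7 260X = {r7_265[metric] / r7_260x[metric]:.2f}")

# Compute and texturing come out slightly lower (about 0.95x), while fill rate
# and memory bandwidth are roughly 1.7x higher on the R7 265.
```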

AMD Radeon R7 260X Graphics

  • Chip codename: "Bonaire"
  • Core frequency: up to 1100 MHz
  • Number of universal processors: 896
  • Number of texture units: 56, blending units: 16
  • Effective memory frequency: 6500 MHz (4 × 1625 MHz)
  • Memory type: GDDR5
  • Memory bus: 128 bit
  • Memory capacity: 2 gigabytes
  • Memory bandwidth: 104 gigabytes per second.
  • Computing Performance (FP32): 2.0 teraflops
  • Maximum theoretical fill rate: 17.6 gigapixels per second
  • Theoretical texture sampling rate: 61.6 gigatexels per second.
  • One CrossFire connector
  • PCI Express 3.0 bus
  • Connectors: two DVI Dual Link, HDMI 1.4, DisplayPort 1.2
  • Power consumption: up to 115 W
  • One 6-pin power connector
  • Two-slot design
  • MSRP: $ 139

This model has an even lower price of $ 139 and is an almost complete copy of the Radeon HD 7790, being based on the same graphics processor, codenamed Bonaire. The differences from the old model of the previous line are a slightly increased frequency and two gigabytes of video memory. This is understandable: memory requirements grow very quickly over time, and this will become even more obvious when multi-platform games for the next-generation consoles arrive.

The Radeon R7 260X offers enough performance for undemanding gamers, allowing high quality settings in most games. AMD compares the performance and price of the new product with just one card from previous generations, the Radeon HD 5870, again four years old:

Apparently, the outdated top-end board was chosen to show that the performance of the former high-end segment is now available for only $ 139 (again, all prices are for the US market), and the new product even has some headroom. Among the competing solutions AMD mentions the NVIDIA GeForce GTX 650 Ti, and in the company's charts the new R7 260X turns out to be 15-25% faster than the competitor.

AMD Radeon R7 250 Graphics

  • Chip codename: "Oland XT"
  • Core frequency: up to 1050 MHz
  • Number of universal processors: 384
  • Number of texture units: 24, blending units: 8
  • Effective memory frequency: 4600 MHz (4 × 1150 MHz)
  • Memory type: GDDR5 or DDR3
  • Memory bus: 128 bit
  • Memory bandwidth: 74 gigabytes per second.
  • Computing Performance (FP32): 0.8 teraflops
  • Maximum theoretical fill rate: 8.4 gigapixels per second
  • Theoretical texture sampling rate: 25.2 gigatexels per second.
  • PCI Express 3.0 bus
  • Ports: DVI Dual Link, HDMI 1.4, VGA
  • Power consumption: up to 65 W
  • Two-slot design
  • MSRP: $ 89

This is perhaps one of the few video cards in AMD's entire new line that has no obvious predecessor in the company's retail lineup, since the Oland chip is used in retail desktop solutions for the first time (it was previously used in OEM solutions of the Radeon HD 8000 family, which is not well known to the general public). This is the most affordable graphics card based on the Graphics Core Next architecture, designed for the entry-level price segment: it costs less than $ 90.

Radeon R7 250 video cards will be available in both dual-slot and single-slot versions, depending on the manufacturer. Naturally, such a video card needs no additional power; it makes do with what it receives through the PCI-E slot. Let's see what it has to offer in terms of performance:

Once again AMD compares the fresh model with a solution from the distant Radeon HD 5000 family, this time a mid-range card, the HD 5770, which in its day enjoyed considerable success in the market. The current budget model provides higher performance than the old one, and at almost half the price. At present this is the entry level for modern 3D games; only the APUs and one more new video card of the R7 family fall below it in performance.

AMD Radeon R7 240 Graphics

  • Chip codename: "Oland Pro"
  • Core frequency: up to 780 MHz
  • Number of universal processors: 320
  • Number of texture units: 20, blending units: 8
  • Effective memory frequency: 4600 MHz (4 × 1150 MHz) or 1800 MHz (2 × 900 MHz)
  • Memory type: GDDR5 or DDR3
  • Memory bus: 128 bit
  • Memory capacity: 1 (GDDR5) or 2 gigabytes (DDR3)
  • Memory bandwidth: 74 (GDDR5) or 23 (DDR3) gigabytes per second.
  • Computing Performance (FP32): 0.5 teraflops
  • Maximum theoretical fill rate: 6.2 gigapixels per second
  • Theoretical texture sampling rate: 15.6 gigatexels per second.
  • PCI Express 3.0 bus
  • Power consumption: up to 30 W
  • Single-slot design

In fact, this is an even cheaper version of a video card based on the Oland chip. It has a slightly trimmed GPU running at lower frequencies, and most of these cards on the market are likely to come with slower DDR3 memory, which will take a toll on their 3D performance. For such cheap cards, however, performance is no longer the point. Moreover, even less expensive solutions of the R5 family may appear in the future, but that is a separate story.

It is not surprising that AMD's partners were ready to supply the new families almost from the moment of the announcement, even with their own board designs, coolers and factory overclocking. Indeed, for many of the new products it is enough to flash a slightly modified BIOS and change the box and cooler designs, and the new products are ready:

Actually, practical tests of the new video cards are not that interesting, because you can simply take the results of the previous-generation cards, of which the new models are almost exact copies, and add the 5-15% gained from higher frequencies and improved power management. After all, only the R7 240, R7 250 and R9 290 (X) differ noticeably from the boards of the Radeon HD 7000 family; the rest of the cards are renamed old boards.

AMD Radeon R9 295X2 Graphics

  • Codename "Vesuvius"
  • Manufacturing technology: 28 nm
  • 2 chips with 6.2 billion transistors each
  • Unified architecture with an array of common processors for streaming processing of multiple types of data: vertices, pixels, etc.
  • Hardware support for DirectX 11.2, including Shader Model 5.0
  • Dual 512-bit memory bus: twice eight 64-bit controllers with support for GDDR5 memory
  • GPU frequency: up to 1018 MHz
  • Twice 44 GCN computational units (2 × 176 SIMD cores), consisting of a total of 5632 ALUs for floating point calculations (integer and floating point formats are supported, with FP32 and FP64 precision)
  • 2x176 texture units, with support for trilinear and anisotropic filtering for all texture formats
  • 2x64 ROPs with support for anti-aliasing modes with the possibility of programmable sampling of more than 16 samples per pixel, including with FP16 or FP32 framebuffer format. Peak performance up to 128 samples per cycle, and in Z only mode - 512 samples per cycle
  • Integrated support for up to six monitors connected via DVI, HDMI and DisplayPort

Radeon R9 295X2 Graphics Card Specifications

  • Core frequency: up to 1018 MHz
  • Number of universal processors: 5632
  • Number of texture units: 352, blending units: 128
  • Effective memory frequency: 5000 MHz (4 × 1250 MHz)
  • Memory type: GDDR5
  • Memory capacity: 2 × 4 gigabytes
  • Memory bandwidth: 2 × 320 gigabytes per second.
  • Computing Performance (FP32) 11.5 teraflops
  • Maximum theoretical fill rate: 130.3 gigapixels per second
  • Theoretical texture sampling rate: 358.3 gigatexels per second.
  • PCI Express 3.0 bus
  • Connectors: DVI Dual Link, four Mini-DisplayPort 1.2
  • Power consumption up to 500 W
  • Two 8-pin auxiliary power connectors
  • Two-slot design
  • The recommended price for the US market is $ 1499 (for Russia - 59,990 rubles).

The full name of the new dual-GPU model is interesting in itself and once again exposes the problems of AMD's naming system, which we have written about more than once. This is already the second video card to receive a non-round number, this time between 290 and 300, since the 300 series cannot be used yet and 290 was already taken by the single-chip cards. But why was the novelty then also given the new "X2" suffix? It could have been called either R9 290X2 or R9 295, but no, both had to be used at once.

It is logical that the Radeon R9 295X2 occupies the highest position in the company's new lineup, well above the R9 290X, since in both performance and price it sits noticeably higher than the single-chip version. The recommended price for the Radeon R9 295X2 is $1500, which is closest to the price of the competitor's "exclusive" single-chip solution from the same segment, the GeForce GTX Titan Black. The GTX 780 Ti can partly serve as a point of comparison as well, although it is noticeably cheaper. And until NVIDIA announces and ships a dual-GPU gaming solution of its own, the top single-GPU GeForce models remain the only rivals for the Radeon R9 295X2.

The dual-chip Radeon video card is equipped with 4 GB of GDDR5 memory per GPU, a consequence of the 512-bit memory bus of the Hawaii chips. Such a large amount is more than justified for a product of this level, since in some modern games, with maximum settings, anti-aliasing and high resolutions, a smaller amount of memory (2 gigabytes per chip, for example) is sometimes not enough. This applies even more to rendering at UltraHD resolution, in stereo mode, or on multiple monitors in Eyefinity mode.

Naturally, such a powerful dual-GPU video card has an effective cooling system that differs from the traditional coolers of AMD reference cards, but we will discuss it a little later. The power consumption of a board with two powerful GPUs can be mentioned now: it is not just high, it sets a new record for the official power figure of a reference-design board, even among dual-chip ones. For obvious reasons the card carries two 8-pin power connectors, which is likewise explained by its gigantic power consumption.

Architectural features

Since the video card codenamed "Vesuvius" is based on two "Hawaii" GPUs, which we have written about more than once, all the detailed specifications and other features can be found in the article dedicated to the announcement of the company's single-chip flagship, the Radeon R9 290X. That material thoroughly analyzes both the current Graphics Core Next architecture and the specific GPU; here we will briefly repeat only the most important points.

The Hawaii graphics chip, which underlies the graphics card, is based on the Graphics Core Next architecture, which in version 1.1 has been slightly modified in terms of computing power and to fully support all DirectX 11.2 features. But the main challenge in the design of the new top-end GPU was to improve energy efficiency and add additional computing units, compared to the Tahiti. The chip is produced on the same 28 nm process technology as Tahiti, but it is more complex: 6.2 billion transistors versus 4.3 billion. The Radeon R9 295X2 uses two such chips:

Each GPU contains 44 compute units of the GCN architecture, comprising 2816 stream processors, 64 ROPs and 176 TMUs, all of which are operational; none of them has been disabled for the dual-chip solution. Total texturing performance exceeds 358 gigatexels per second, which is a lot, and the fill rate (ROP throughput) of the Radeon R9 295X2 is a high 130 gigapixels per second. The new dual-chip Radeon has a dual 512-bit memory bus, assembled from sixteen 64-bit channels across the two chips, which provides a combined memory bandwidth of 640 GB/s, a record figure.
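
These aggregate figures follow directly from the per-chip unit counts and the roughly 1018 MHz clock; a minimal sketch using the numbers quoted above (note that the two 4 GB memory pools are independent, so the 640 GB/s is the sum of two 320 GB/s buses rather than one unified bus):

```python
chips, clock_ghz = 2, 1.018
alus, tmus, rops = 2816, 176, 64        # per chip
bus_bits, mem_effective_ghz = 512, 5.0  # per chip

fp32_tflops  = chips * 2 * alus * clock_ghz / 1000       # ~11.5 TFLOPS
texture_gt_s = chips * tmus * clock_ghz                  # ~358 Gtexels/s
fill_gp_s    = chips * rops * clock_ghz                  # ~130 Gpixels/s
bandwidth    = chips * bus_bits / 8 * mem_effective_ghz  # 640 GB/s combined

print(fp32_tflops, texture_gt_s, fill_gp_s, bandwidth)
```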

The Radeon R9 295X2 supports all the technologies that other models based on the same GPU do; we have written repeatedly about the new technologies supported by AMD graphics chips in the corresponding reviews. In particular, the solution reviewed today supports the new Mantle graphics API, which helps to use the hardware capabilities of AMD graphics processors more efficiently, as well as all the other modern AMD technologies implemented and improved in the new chips of the line: TrueAudio, PowerTune, ZeroCore, Eyefinity and others.

Design features and system requirements

The Radeon R9 295X2 graphics card not only delivers maximum 3D performance, it also looks solid, as befits its status as a top video system. The product has a fairly sturdy and reliable design, including a metal back plate and a cooling system shroud. The board's appearance was not neglected either: the Radeon logo at the end of the cooler shroud is backlit, and the central fan of the card is illuminated as well.

The new card is just over 30 cm long (more precisely, 305-307 mm), and in thickness it is a two-slot solution rather than the three-slot designs used by some powerful enthusiast models. As a result, the video card looks great and suits maximum-performance gaming systems, such as the ready-made Maingear Epic PCs and similar machines from other manufacturers' most powerful gaming series:

Naturally, given that even a single-chip Radeon R9 290X consumes almost 300 W, with two GPUs operating at the same frequency and with the same number of active functional units, the power consumption of the dual-chip card could not be kept within 375 W, which was previously the standard even for powerful dual-chip solutions. AMD therefore decided to release an uncompromising solution for enthusiasts, with two 8-pin auxiliary power connectors and a requirement of as much as 500 watts.

Accordingly, using the Radeon R9 295X2 in a system imposes rather high requirements on the power supply, much higher than those of single-chip video cards, even the most powerful ones. The power supply must have two 8-pin PCI Express power connectors, each capable of delivering 28 A on a dedicated line. Overall, the power supply must provide at least 50 A across the two power lines feeding the video card, and that is without taking into account the requirements of the rest of the system components.
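
A rough current-budget estimate shows where such requirements come from (a simplified sketch: it assumes the ATX 12 V rail and the nominal 75 W available from the PCI Express slot, and ignores conversion losses):

```python
board_power_w = 500     # stated consumption of the Radeon R9 295X2
slot_power_w  = 75      # nominal PCI Express x16 slot budget (assumption for this estimate)
rail_voltage  = 12.0

# Worst-case current the two 8-pin cables must deliver together.
connector_current_a = (board_power_w - slot_power_w) / rail_voltage
print(f"~{connector_current_a:.0f} A across the two 8-pin lines")
# ~35 A, which is why AMD asks for 28 A per line and at least 50 A in total, with headroom.
```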

Naturally, if two Radeon R9 295X2 cards are installed in one PC, the requirements double and a second pair of 8-pin connectors is needed. Using any adapters or splitters is strongly discouraged. An official list of recommended power supplies is promised.

Note that the Radeon R9 295X2 supports the well-known ZeroCore Power technology, which significantly reduces power consumption in the "deep idle" or "sleep" state with the display device off. In this mode an idle GPU is almost completely shut down, turning off most of its functional blocks and consuming less than 5% of its full-power draw. For dual-chip boards it is even more important that while the operating system is simply drawing the desktop, the second GPU does not work at all; in that case one of the Radeon R9 295X2 chips is put into a deep sleep with minimal power consumption.

Cooling system

Since even one Hawaii GPU runs very hot, consuming more than 250 W in some cases, AMD decided to use water cooling in the dual-chip solution, since water is far more efficient than air at transferring heat (by a factor of about 24). More precisely, the cooling device designed for the Radeon R9 295X2 by Asetek is a hybrid, combining water and air cooling for different elements of the video card.

So the new dual-GPU Radeon R9 295X2 uses a sealed, maintenance-free cooling system that includes an integrated pump, a large heat exchanger with a 120 mm fan, a pair of rubber hoses, and a separate heatsink with a fan for cooling the memory chips and the power circuitry.

Asetek's water cooling system is designed to remove as much heat as possible from the pair of GPUs, and the cold plates pressed against both chips have special microchannels to improve heat transfer. The fan on the heat exchanger runs at a speed that varies automatically with the coolant temperature, and the fan that cools the memory and power circuitry likewise adjusts its speed to the degree of heating.

Despite the complex hybrid cooler, AMD's new dual-GPU card comes completely ready for installation: you simply insert it into the expansion slot as usual and mount the heat exchanger on the PC case. But because of such a massive cooling system, there are additional requirements and recommendations for installing the Radeon R9 295X2.

The PC case must have at least one mount for a 120 mm fan. With a pair of Radeon R9 295X2 cards, two such mounts are needed, and if the CPU is cooled by a similar device, three. It is also advisable to place the card's heat exchanger above the card itself for more efficient coolant circulation, first making sure that the 38 cm of tubing is long enough for such placement.

A 120 mm fan is mounted on the heat exchanger so that it blows air through the radiator, and it is recommended to orient it in the case so that the hot air is exhausted from the PC. Additional case fans are also recommended for cooling such a powerful and hot-tempered system, which is hardly surprising.

Performance evaluation

For a fairly reliable assessment of the probable performance of the two-chip new product from AMD, it is enough to consider only theoretical indicators, in comparison with the single-chip model Radeon R9 290X, since CrossFire provides close to 100% efficiency at high resolutions.

Comparing the parameters of the company's dual-chip and single-chip top models, one can see that the Radeon R9 295X2 differs little from a pair of R9 290X cards in a CrossFire configuration. All the GPU parameters of the new product are unchanged compared to the single-chip analogue (the 18 MHz frequency bump, less than 2%, hardly counts): neither the number of execution units, nor the frequency, nor the memory bus has been cut. This means the performance of the R9 295X2 is up to twice that of the R9 290X.

The board with a pair of GPUs outperforms the most powerful single-chip cards from AMD and NVIDIA by 60 to 85%, and in games the Radeon R9 295X2 also beats its rivals, especially at the highest quality settings and UltraHD resolution. AMD's dual-chip board has in fact become one of the best choices for enthusiasts gaming under such conditions on UltraHD displays. The Radeon R9 295X2 delivers this performance across a wide range of modern games, including the most demanding:

Where single-chip solutions cannot manage even an average of 30 FPS, AMD's new dual-GPU product always stays at or above that mark, and more often than not well above it. In fact, it is almost twice as fast as the single-chip flagships under such conditions.

Graphics Accelerator Model Radeon R9 285

  • Chip codename: "Tonga"
  • Manufacturing technology: 28 nm
  • 5 billion transistors
  • Unified architecture with an array of common processors for streaming processing of multiple types of data: vertices, pixels, etc.
  • Hardware support for DirectX 12, including Shader Model 5.0
  • 384-bit memory bus: six 64-bit controllers with support for GDDR5 memory
  • Core frequency up to 918 MHz (dynamic)
  • 32 GCN computational units, including 128 SIMD cores, consisting of a total of 2048 ALUs for floating point calculations (integer and floating point formats are supported, with FP32 and FP64 precision)
  • 128 texture units, with support for trilinear and anisotropic filtering for all texture formats
  • 32 ROPs with support for full-screen anti-aliasing modes with programmable sampling of more than 16 samples per pixel, including FP16 or FP32 framebuffer format. Peak performance up to 32 samples per cycle, and in Z only mode - 128 samples per cycle
  • Integrated support for up to six monitors connected via DVI, HDMI and DisplayPort

AMD Radeon R9 285 Graphics

  • Chip codename: "Tonga"
  • Core frequency: up to 918 MHz
  • Number of universal processors: 1792
  • Number of texture units: 112, blending units: 32
  • Effective memory frequency: 5500 MHz (4 × 1375 MHz)
  • Memory type: GDDR5
  • Memory bus: 256 bit
  • Memory capacity: 2 gigabytes
  • Memory bandwidth: 176 gigabytes per second.
  • Computing Performance (FP32): 3.3 teraflops
  • Maximum theoretical fill rate: 29.8 gigapixels per second
  • Theoretical texture sampling rate: 102.8 gigatexels per second.
  • PCI Express 3.0 bus
  • Connectors: two DVI Dual Link, HDMI 1.4, DisplayPort 1.2
  • Power consumption: up to 190 W
  • Two 6-pin power connectors
  • Two-slot design
  • MSRP: $ 249

The name of this AMD solution has once again exposed an unfortunate naming system. Since the "round" numbers were already taken, the video card had to be given a non-round number between 280 and 290, because the "X" suffix is occupied by the R9 280X and there was no room left for a Tonga-based modification. This happened because when the initial line was announced, the Tonga chip had not yet been planned for, so no place was reserved in the names for this modification. In addition, a solution based on the full Tonga XT video chip is also expected; it will probably be called the R9 285X.

In the lineup the novelty sits between the R9 270X and R9 280X, full-fledged models based on the Pitcairn and Tahiti chips respectively, and in speed it falls somewhere between them, despite carrying a higher number than the R9 280X. Judging by the theory, the Radeon R9 285 should be very close in performance to the Radeon R9 280 and the older Radeon HD 7950 Boost. At the time of the announcement, the recommended price for the Radeon R9 285 corresponded to the prices of the AMD model it replaces and of the competitor's comparable solution from the same price segment, the GeForce GTX 760, which is the main rival for the new model.

Unlike the Radeon R9 280, the new product has not three but two gigabytes of GDDR5 memory, since the memory bus of the chip used here was cut from 384-bit to 256-bit, onto which 1, 2 or 4 GB can be installed: 1 GB is too little, 4 GB is too expensive, and 2 GB suits this price point well. True, in some cases this amount may not be enough at resolutions above 1920 × 1080 pixels in the most modern and demanding games at maximum quality settings, not to mention multi-monitor systems. But there are hardly many such users, and 2 GB can be considered a near-ideal amount of memory for a video card in this price range.

The market offers video cards from such partners of the company as Sapphire, PowerColor, HIS, ASUS, MSI, XFX, Gigabyte and others. Most of AMD's partners have released proprietary PCB designs and cooling designs, as well as solutions with higher GPU frequencies. It should be noted that the reference video card requires additional power supply via two 6-pin power connectors, in contrast to the 8-pin and 6-pin for the Radeon R9 280.

Architectural and functional features

We have already talked about the Graphics Core Next (GCN) architecture in as much detail as possible using Tahiti, Hawaii and others as examples. The Tonga GPU used in the Radeon R9 285 is based on the latest version of this architecture - GCN 1.2, like other modern solutions from the company. The new GPU gets all of the enhancements from Bonaire and Hawaii in terms of computational capabilities, support for some additional DirectX capabilities, AMD TrueAudio technology, and an enhanced version of AMD PowerTune.

Recall that the basic building block of the architecture is the GCN compute unit, from which all AMD graphics processors are assembled. Each compute unit has a dedicated local data share for exchanging data and extending the local register file, as well as a read/write first-level cache and a full texture pipeline with fetch and filtering units; it is divided into subsections, each of which works on its own stream of commands. Each GCN unit handles scheduling and the distribution of work independently. Let's see what Tonga (in its Radeon R9 285 configuration) looks like:

So, in its characteristics the Radeon R9 285 is very close to the R9 280, which in turn can be considered a cut-down version of the R9 280X. The trimmed Tonga chip has 28 GCN compute units, giving 1792 stream processors in total (a full chip has 2048, as expected). The same applies to the texture units: in the cut-down Tonga their number is reduced from 128 TMUs to 112 TMUs, since each GCN compute unit contains four texture units.

The number of ROPs has not been cut: the chip retains the same 32 units. But there are fewer memory controllers: the Tonga GPU in the Radeon R9 285 has only four 64-bit memory channels, giving a 256-bit bus in total, as opposed to the 384-bit, six-channel bus of the Tahiti-based solutions. This is probably down to AMD's desire to reduce cost.

The operating frequencies of the new model are slightly lower than those of the Radeon HD 7950 Boost and the Radeon R9 280. More precisely, the new Tonga-based solution received a slightly lower maximum frequency of 918 MHz (rather than the 933 MHz of the R9 280), but by itself this is not so important thanks to the improved AMD PowerTune technology, which we have also discussed many times in the Bonaire and Hawaii reviews.

The Tonga GPU supports the latest version of PowerTune, which delivers the highest possible 3D performance within a specified power envelope. In specialised applications with very high power draw, the GPU drops its frequency below nominal when it hits the power limit, while in gaming applications it maintains the highest operating frequency possible under the current conditions.

In addition, PowerTune provides rich overclocking capabilities for the Tonga GPU. In the driver settings the user can adjust several parameters, such as the target GPU temperature, the relative fan speed of the cooler and the maximum power consumption level, and the video card does the rest, setting the highest possible frequency and the other parameters (GPU voltage, fan speed) for the changed conditions.
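
To make the idea concrete, here is a deliberately simplified sketch of a power- and temperature-limited boost loop of the kind PowerTune implements. It is only an illustration of the principle, not AMD's actual algorithm; the frequency table, limits and power model are invented for the example:

```python
def pick_boost_frequency(freq_table_mhz, power_limit_w, temp_limit_c,
                         estimate_power_w, read_temperature_c):
    """Return the highest frequency whose estimated power and the current
    temperature both stay under their limits (illustrative only)."""
    for freq in sorted(freq_table_mhz, reverse=True):
        if estimate_power_w(freq) <= power_limit_w and read_temperature_c() <= temp_limit_c:
            return freq
    return min(freq_table_mhz)  # fall back to the base state

# Toy example: power is modelled as growing linearly with frequency.
freqs = [500, 700, 800, 900, 918]
chosen = pick_boost_frequency(freqs, power_limit_w=190, temp_limit_c=95,
                              estimate_power_w=lambda f: 0.21 * f,
                              read_temperature_c=lambda: 80)
print(chosen)  # 900, because 918 MHz would exceed the 190 W limit in this toy model
```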

Although the nominal operating frequency of the GPU in the Radeon R9 285 did not increase, the video memory frequency was raised from 5 GHz to 5.5 GHz to compensate, at least partly, for the narrower 256-bit memory bus. The use of faster GDDR5 memory on a 256-bit bus gives a bandwidth of 176 GB/s, which is still noticeably lower than the 240 GB/s of the Radeon R9 280.

The Tonga GPU has received some architectural modifications. It is based on the latest generation of the Graphics Core Next architecture and features an updated instruction set (ISA), improved geometry and tessellation performance, a more efficient lossless framebuffer compression method, a better image scaling engine (used when outputting at non-native resolutions) and new versions of the video encoding and decoding engines. Let's consider these changes in more detail.

AMD says the Tonga has improved geometry handling, as we saw earlier in the same Hawaii chip. The new GPU can process up to four primitives per clock cycle and provides twice or four times more tessellation performance in difficult conditions. We will definitely check this data in the next part of our material, but for now let's take a look at the graph from AMD:

The Tonga GPU has received ISA changes similar to those in the Bonaire and Hawaii chips (only these three chips are based on the improved GCN architecture): new instructions designed to accelerate various computations and media processing on the GPU, the ability to exchange data between SIMD lanes, and improved control over the compute units and the distribution of tasks.

From the player's point of view, the more important change is the new, more efficient lossless compression method for the frame buffer, because the Radeon R9 285's 256-bit memory bus has to be compensated for somehow compared to the 384-bit bus of the Tahiti-based solutions. Similar methods have long been used in graphics processors: the frame buffer is stored in video memory in compressed form, and the GPU reads and writes the compressed data directly. But AMD's new method provides 40% more effective compression than previous GPUs, which is especially important given Tonga's relatively narrow memory bus.
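
A back-of-the-envelope illustration of why this matters, using the bandwidth figures above (the 40% number is AMD's own claim, it applies to framebuffer traffic rather than all memory traffic, and real gains depend heavily on frame content):

```python
r9_285_raw = 5.5 * 256 / 8   # 176 GB/s: 5.5 GHz effective GDDR5 on a 256-bit bus
r9_280_raw = 5.0 * 384 / 8   # 240 GB/s on the wider 384-bit bus

# If framebuffer traffic compresses ~40% better than before, the effective
# bandwidth for that part of the traffic is roughly:
r9_285_effective = r9_285_raw * 1.4
print(r9_285_raw, r9_280_raw, r9_285_effective)   # 176.0 240.0 246.4
```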

It is quite natural that the new video chip received full support for AMD TrueAudio sound processing technology. We have also talked about it more than once in our materials dedicated to the release of solutions from the new line of AMD. With the release of the Radeon R7 and R9 series, the company introduced the world to TrueAudio technology, a programmable audio engine, which was supported on the AMD Radeon R7 260X and R9 290 (X), and now has appeared in the R9 285. It is the Bonaire, Hawaii and Tonga chips that have all the latest innovations, including support for TrueAudio.

TrueAudio is a programmable audio engine built into the AMD GPU that provides guaranteed real-time processing of audio tasks regardless of the installed CPU. To this end, several Tensilica HiFi EP Audio DSP cores are integrated into these AMD graphics processors; their capabilities are accessed through popular sound processing libraries, whose developers can tap the resources of the built-in audio engine via a dedicated TrueAudio API. AMD has long worked closely with many companies known for their work in this area, including game developers and developers of audio middleware and audio algorithms, and several games with TrueAudio support have already been released.

The new Radeon R9 285 video card also supports the company's other technologies, which we have already covered in the relevant reviews. In particular, the announced solution supports the new Mantle graphics API, which helps to use the hardware capabilities of AMD graphics processors more efficiently, since Mantle is not constrained by the shortcomings of the existing graphics APIs, OpenGL and DirectX. To achieve this, a thinner software layer is used between the game engine and the GPU's hardware resources, much as has long been done on game consoles.

Among other changes, AMD highlights high-quality scaling of the output image: the scaler uses an advanced filter with a large number of taps, 10 horizontal and 6 vertical. The new hardware scaling works at resolutions up to and including 4K (UltraHD) and improves image quality when outputting at non-native resolutions.
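
For readers curious about what a multi-tap scaling filter does, here is a minimal software sketch of separable resampling with a windowed-sinc kernel. It only illustrates the general technique (and assumes modest scaling factors); it is not a description of AMD's hardware filter:

```python
import numpy as np

def windowed_sinc(x, taps):
    """Windowed-sinc weights for a filter with `taps` samples (radius = taps / 2)."""
    a = taps / 2
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def resample_line(line, dst_len, taps):
    """Resample one row or column of pixels to dst_len samples using `taps` filter taps."""
    line = np.asarray(line, dtype=float)
    src_len = len(line)
    scale = src_len / dst_len
    out = np.empty(dst_len)
    for i in range(dst_len):
        center = (i + 0.5) * scale - 0.5                 # source-space position of the output sample
        first = int(np.floor(center)) - taps // 2 + 1
        idx = np.arange(first, first + taps)
        w = windowed_sinc(idx - center, taps)
        w /= w.sum()                                     # normalise so flat areas stay flat
        out[i] = np.dot(w, line[np.clip(idx, 0, src_len - 1)])
    return out

def scale_image(img, dst_h, dst_w, h_taps=10, v_taps=6):
    """Separable scaling: filter rows with h_taps, then columns with v_taps."""
    rows = np.stack([resample_line(r, dst_w, h_taps) for r in img])
    return np.stack([resample_line(c, dst_h, v_taps) for c in rows.T]).T
```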

Among the completely new capabilities of the Tonga chip are new versions of the video processing units: the Unified Video Decoder (UVD) and the Video Coding Engine (VCE). These units work at resolutions up to and including UltraHD (4K), and in these versions the performance of decoding and encoding video data, as well as transcoding from one format to another, is significantly higher.

The new UVD block supports decoding of H.264, VC-1, MPEG4 and MPEG2, as the previous version did, with MJPEG now added to the list. Moving a video stream from FullHD to UltraHD means roughly four times the decoding load, and the CPU's power may no longer be sufficient. According to AMD, if software decoding of FullHD video loads the CPU at 20-25%, then at UltraHD resolution under the same conditions the CPU is already about half loaded.

To reduce the load on the CPU, the Tonga GPU at the heart of the Radeon R9 285 includes a redesigned UVD decoding unit with full hardware decoding of H.264 High Profile Level 5.2 data at resolutions up to 4K, which gives a significant reduction in resource usage when decoding and playing such videos compared with a purely software approach:

The performance of the VCE unit has also been significantly improved, providing encoding speeds up to 12x faster than real time at FullHD resolution. The new VCE block supports fully hardware-accelerated encoding of H.264 data for the Baseline and Main profiles, and UltraHD resolution is also supported. AMD believes the new product delivers best-in-class H.264 encoding performance, based on the following internal benchmarks:

On closer examination of the test conditions, it turns out that different software was used: Cyberlink Media Espresso for AMD and Arcsoft Media Converter 8 for NVIDIA, since the former does not yet support hardware video encoding on NVIDIA chips, and under such conditions the results cannot be called entirely fair. Still, it gives a rough estimate: AMD's solution, by the company's own figures, turned out to be 30-50% faster than its counterpart from the competitor.

It remains to add just a little information about the Never Settle: Space Edition loyalty program. We remember that AMD graphics cards have been shipped with a couple of free digital games for some time now. This program is called Never Settle, and in the case of AMD's Radeon R9 285 (and other graphics cards from the company from now on), it has been updated to Never Settle: Space Edition.

Never Settle: Space Edition launches today, on the day of the Radeon R9 285 announcement, and features several highly anticipated space-themed games due out this year. From now on, any AMD Radeon R9 series graphics card comes with a choice of titles from a wide list of games, including Alien: Isolation and Star Citizen.

Alien: Isolation was released on October 7th, and buyers of Radeon R9 graphics cards received a serial number for the game on launch day. The Star Citizen Mustang Omega Variant Racer special offer includes the Arena Commander and Murray Cup Race Series multiplayer modules.

Radeon R9 buyers from today onward will be able to use an exclusive red-and-black racing spaceship skin, the Mustang Omega Variant Racer, from October 1st in the alpha versions of this still-in-development project.

To receive free games after purchasing a Radeon, you choose from a library of game projects. A buyer of a card from the Radeon R9 line, including the R9 285, falls under the Radeon Gold Reward and can choose up to three free games out of 29 projects. Those who buy the Radeon R7 260 get the Silver Reward and choose two games out of 28, while the purchase of a Radeon R7 240 or R7 250 brings the Bronze Reward and one game from a list of 18.

Theoretical performance evaluation

To make a quick preliminary assessment of the performance of AMD's new solution, let us look at the theoretical numbers and the company's own benchmark results. Judging by the theoretical figures (there is an oddity in the table with the texturing speed calculation: it appears the numbers for different video cards were computed at different frequencies, the turbo frequency for the new product and the base frequency for the older boards), the new Radeon R9 285 should show gaming performance close to its predecessor, the Tahiti-based R9 280, and lag behind the older R9 280X by 15-20% at most.

It is clear that the novelty will lag behind the older Radeon R9 280X, based on a fully enabled Tahiti chip, everywhere, but the Radeon R9 280 can be faster than the new card in cases where rendering speed is limited by memory bandwidth, which is lower on the Tonga-based board because of its narrower memory bus, despite the higher memory clock.
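
The texturing oddity mentioned above is easy to reproduce, since texel fill rate is simply the TMU count multiplied by the GPU clock. The sketch below uses approximate public specification figures (treat them as assumptions) purely to illustrate how taking the turbo clock for one card and the base clock for another skews the comparison.

```python
# Texel fill rate = TMUs x GPU clock; the spec figures below are approximate
# and serve only to illustrate the formula, not to restate the article's table.
def gtexels_per_s(tmus, clock_mhz):
    return tmus * clock_mhz / 1000.0   # GTexel/s

print(gtexels_per_s(112, 918))   # Radeon R9 285 (Tonga) at its boost clock
print(gtexels_per_s(112, 933))   # Radeon R9 280 (Tahiti Pro) at its boost clock
print(gtexels_per_s(128, 1000))  # Radeon R9 280X (Tahiti XT)
```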

Let's take a look at the preliminary performance figures for AMD's new card relative to the Radeon R9 280 it replaces and to a similarly priced competitor's solution in real applications. First, the results of the popular 3DMark benchmark suite and AMD's favorite Fire Strike test in two settings presets: Performance and Extreme.

The benchmark numbers show how the Radeon R9 285 is positioned in the market relative to other solutions. In this particular benchmark, AMD measured the new Radeon R9 285 as slightly faster than the Radeon R9 280, which can be explained by the GPU running at a higher real-world frequency. The similarly priced competitor from NVIDIA is clearly inferior to the new card, yielding to it in rendering speed by about a quarter.

Do not forget that this is data from an interested party and just one pseudo-gaming test from a synthetic benchmark. Let's see how AMD's new product fares in games, comparing it only with the competing GeForce GTX 760 in several gaming applications used for testing in AMD's labs:

A resolution of 2560x1440 and game settings chosen to show the new product in the best light were used, with the frame rate staying above the 30 FPS mark. In this comparison, AMD's Radeon R9 285 again delivers better performance than its competitor across the entire suite of applications.

Additionally, data from other measurements are given. For example, in Battlefield 4 at 2560x1440 and High settings, the Radeon R9 285 is 15% faster than the GeForce GTX 760. In Crysis 3 at 2560x1440 and Very High game settings, AMD's new product is 13% faster, and in Bioshock Infinite at the same resolution and Ultra settings - 15% faster than GeForce GTX 760.

In general, it's sheer joy for the new member of the Radeon R9 family. What about compute applications? There are even fewer questions here, since Radeon cards have always been faster than comparable GeForce boards in such workloads, especially if the test applications are carefully chosen to be favorable.

Judging by the diagram, the new Radeon R9 285 outperforms the GeForce GTX 760 in GPGPU applications using OpenCL by an even more obvious margin. In general, if we are to believe AMD's figures, the Radeon R9 285 should successfully replace the Radeon R9 280, which was so attractive in terms of price-to-performance. The new product should slightly outperform the Tahiti-based model and, even more so, should be faster than the comparably priced NVIDIA GeForce GTX 760 in almost all applications.

The new Radeon R9 285, although it brings nothing radically new or exciting, is quite a strong solution in its price class. The new product is slightly faster than the Radeon R9 280 and is offered at the same price. In addition, the Tonga GPU differs from Tahiti in several improvements, the main ones being accelerated geometry processing, support for several new technologies and redesigned video-processing blocks; in these areas the new mainstream AMD chip surpasses even the top-end Hawaii.

Introduction

In the development of all computer technology in recent years, a clear course towards integration and the accompanying miniaturization can be traced. This applies not so much to conventional desktop personal computers as to the huge fleet of consumer-level devices - smartphones, laptops, players, tablets and so on - which are being reborn in new form factors and absorbing ever more functions. Desktops are affected by this trend last of all. Of course, in recent years the vector of user interest has shifted somewhat towards small computing devices, but it is hard to call this a global trend. The basic architecture of x86 systems, which assumes separate processor, memory, video card, motherboard and disk subsystem, remains unchanged, and this is what limits the possibilities for miniaturization. Each of the listed components can be shrunk, but a qualitative change in the overall dimensions of the resulting system cannot be achieved that way.

However, over the past year there seems to have been a certain turning point for personal computers. With the introduction of modern semiconductor process technologies with finer nodes, developers of x86 processors have been able to gradually move functions of devices that were previously separate components into the CPU. Nobody is surprised anymore that the memory controller and, in some cases, the PCI Express controller have long been part of the central processor, while the motherboard chipset has degenerated into a single chip, the south bridge. But in 2011 a much more significant event happened: a graphics controller began to be built into processors for mainstream desktops. And we are not talking about feeble video cores capable only of running the operating system interface, but about fully fledged solutions whose performance can be set against discrete entry-level graphics accelerators and which probably surpass all the integrated video cores previously built into chipsets.

The pioneer was Intel, which released Sandy Bridge processors with integrated Intel HD Graphics for desktop computers at the beginning of the year. True, Intel assumed that good integrated graphics would interest primarily users of mobile computers, and desktop CPUs were offered only a cut-down version of the video core. The shortsightedness of this approach was later demonstrated by AMD, which brought Fusion processors with fully fledged Radeon HD graphics cores to the desktop market. These offerings immediately gained popularity not only as office solutions but also as the basis for low-cost home computers, which forced Intel to reconsider its view of CPUs with integrated graphics. The company updated its Sandy Bridge desktop line, adding faster Intel HD Graphics to its desktop offerings. As a result, users who want to build a compact integrated system now face the question: which manufacturer's platform makes more sense? After comprehensive testing, we will try to give recommendations on choosing a processor with an integrated graphics accelerator.

Terminology question: CPU or APU?

If you are already familiar with the integrated graphics processors that AMD and Intel offer desktop users, then you know that these manufacturers try to distance their products from each other as much as possible, instilling the idea that a direct comparison is incorrect. The main "confusion" is introduced by AMD, which assigns its solutions to a new class, APUs, rather than to conventional CPUs. What's the difference?

APU stands for Accelerated Processing Unit. Turning to the detailed explanations, it turns out that from a hardware point of view this is a hybrid device that combines traditional general-purpose computing cores with a graphics core on a single semiconductor chip. In other words, it is essentially a CPU with integrated graphics. However, there is still a difference, and it lies at the software level. The graphics core included in an APU must have a universal architecture in the form of an array of stream processors capable of working not only on synthesizing a three-dimensional image but also on solving computational problems.

That is, the APU concept offers more than simply combining graphics and computing resources within a single semiconductor chip. The idea is to create a symbiosis of these disparate parts, in which some of the calculations can be performed by the graphics core. True, as always in such cases, software support is required to tap into this promising capability.

AMD Fusion processors with a video core, known under the codename Llano, fully meet this definition; they are true APUs. They integrate graphics cores of the Radeon HD family which, among other things, support ATI Stream technology and the OpenCL 1.1 programming interface, through which computation on the graphics core is genuinely possible. In theory, a number of applications can benefit from execution on an array of Radeon HD stream processors, including cryptographic algorithms, rendering of three-dimensional images, or post-processing of photos, audio and video. In practice, however, everything is much more complicated. Implementation difficulties and dubious real-world performance gains have so far held back widespread support for the concept. Therefore, in most cases an APU can be viewed as nothing more than a simple CPU with an integrated graphics core.
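
For a sense of what "computation on the graphics core via OpenCL 1.1" looks like in code, here is a minimal, hedged sketch using the pyopencl bindings (not AMD's own sample code); it merely adds two float arrays on whatever OpenCL device the runtime exposes, which on a Llano system could be the Radeon core.

```python
# A minimal OpenCL example via pyopencl: element-wise addition of two arrays
# on whichever OpenCL device is available (CPU or GPU).
import numpy as np
import pyopencl as cl

a = np.random.rand(1000000).astype(np.float32)
b = np.random.rand(1000000).astype(np.float32)

ctx = cl.create_some_context()   # picks an available device, e.g. the integrated Radeon
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)   # the stream processors did the work
```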

Intel, by contrast, uses more conservative terminology. It continues to refer to its Sandy Bridge processors containing the integrated HD Graphics core by the traditional term CPU. This has some grounds, because the OpenCL 1.1 programming interface is not supported by Intel graphics (compatibility will arrive in the next-generation Ivy Bridge products). So Intel does not yet provide for any joint work by the dissimilar parts of the processor on the same computational tasks.

With one important exception. The graphics cores of Intel processors contain a specialized Quick Sync block aimed at hardware acceleration of video encoding. Of course, as with OpenCL, it requires special software support, but it really is capable of speeding up transcoding of high-definition video by almost an order of magnitude. So, in the end, Sandy Bridge can to some extent also be called a hybrid processor.
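
As an illustration of the "special software support" involved (a modern example, not the Media SDK path that MediaEspresso-era tools used), an ffmpeg build with Quick Sync support exposes the hardware encoder as h264_qsv; selecting it instead of the software encoder moves the encode onto the fixed-function block. File names and bitrate below are hypothetical.

```python
# Hedged sketch: offloading an H.264 encode to Quick Sync via ffmpeg's h264_qsv
# encoder (requires an ffmpeg build with QSV support). Names are placeholders.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "episode_1080p.mkv",
     "-c:v", "h264_qsv", "-b:v", "4M",   # hardware H.264 encode on the Quick Sync block
     "-c:a", "copy",                      # leave the audio track untouched
     "iphone_ready.mp4"],
    check=True,
)
```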

Is it legitimate to compare AMD APUs and Intel CPUs? From a theoretical standpoint an equals sign cannot be placed between an APU and a CPU with a built-in video accelerator, but in real life we have two names for essentially the same thing. AMD Llano processors can accelerate parallel computing and Intel Sandy Bridge can only use graphics power when transcoding video, but in practice both of these capabilities are almost never used. So, from a practical point of view, any of the processors discussed in this article is an ordinary CPU and a video card assembled inside a single chip.

Processors - Test Participants

In fact, you shouldn't think of processors with integrated graphics as some special offering aimed at a group of users with atypical requests. Universal integration is a global trend, and such processors have become the standard offering in the lower and middle price range. Both AMD Fusion and Intel Sandy Bridge have ousted graphics-less CPUs from the current line-ups, so even if you do not plan to rely on the integrated video core, there is little alternative to these same processors with graphics. Fortunately, nobody forces you to use the built-in video core, and it can be turned off.

Thus, having set out to compare CPUs with integrated GPUs, we arrive at a more general task: comparative testing of modern processors costing from $60 to $140. Let's see what suitable options AMD and Intel can offer in this price range, and which specific processor models we managed to include in the tests.

AMD Fusion: A8, A6 and A4

For desktop processors with an integrated graphics core, AMD offers the dedicated Socket FM1 platform, compatible exclusively with the Llano family of processors: A8, A6 and A4. These processors have two, three or four general-purpose Husky cores with a microarchitecture similar to Athlon II, and a Sumo graphics core that inherits the microarchitecture of the junior representatives of the Radeon HD 5000 series.



The Llano family line-up looks quite self-sufficient, including processors of varying computing and graphics performance. However, there is one regularity in the model range: computing performance is tied to graphics performance, that is, the processors with the most cores and the highest clock frequency always come with the fastest video cores.

Intel Core i3 and Pentium

Intel counters the AMD Fusion processors with its dual-core Core i3 and Pentium, which do not have a collective name of their own but are also equipped with graphics cores and have comparable prices. Of course, the more expensive quad-core processors also have graphics cores, but they play a clearly secondary role there, so the Core i5 and Core i7 were not included in this testing.

Intel did not create a separate infrastructure for low-cost integrated platforms, so Core i3 and Pentium processors can be used in the same LGA1155 motherboards as the rest of the Sandy Bridge line. To use the integrated video core, a motherboard based on the H67, H61 or Z68 chipset is required.



All Intel processors that can be considered competitors to Llano are based on a dual-core design. At the same time, Intel does not place much emphasis on graphics performance: most of these CPUs carry the weaker HD Graphics 2000 core with six execution units. An exception was made only for the Core i3-2125, which is equipped with the most powerful graphics core in the company's arsenal, HD Graphics 3000 with twelve execution units.

How we tested

Having got acquainted with the set of processors featured in this testing, it's time to turn to the test platforms. Below is a list of the components from which the test systems were assembled.

Processors:

AMD A8-3850 (Llano, 4 cores, 2.9 GHz, 4 MB L2, Radeon HD 6550D);
AMD A8-3800 (Llano, 4 cores, 2.4 / 2.7 GHz, 4 MB L2, Radeon HD 6550D);
AMD A6-3650 (Llano, 4 cores, 2.6 GHz, 4 MB L2, Radeon HD 6530D);
AMD A6-3500 (Llano, 3 cores, 2.1 / 2.4 GHz, 3 MB L2, Radeon HD 6530D);
AMD A4-3400 (Llano, 2 cores, 2.7 GHz, 1 MB L2, Radeon HD 6410D);
AMD A4-3300 (Llano, 2 cores, 2.5 GHz, 1 MB L2, Radeon HD 6410D);
Intel Core i3-2130 (Sandy Bridge, 2 cores + HT, 3.4 GHz, 3 MB L3, HD Graphics 2000);
Intel Core i3-2125 (Sandy Bridge, 2 cores + HT, 3.3 GHz, 3 MB L3, HD Graphics 3000);
Intel Core i3-2120 (Sandy Bridge, 2 cores + HT, 3.3 GHz, 3 MB L3, HD Graphics 2000);
Intel Pentium G860 (Sandy Bridge, 2 cores, 3.0 GHz, 3 MB L3, HD Graphics);
Intel Pentium G840 (Sandy Bridge, 2 cores, 2.8 GHz, 3 MB L3, HD Graphics);
Intel Pentium G620 (Sandy Bridge, 2 cores, 2.6 GHz, 3 MB L3, HD Graphics).

Motherboards:

ASUS P8Z68-V Pro (LGA1155, Intel Z68 Express);
Gigabyte GA-A75-UD4H (Socket FM1, AMD A75).

Memory: 2 x 2 GB DDR3-1600 SDRAM 9-9-9-27-1T (Kingston KHX1600C8D3K2/4GX).
Hard disk: Kingston SNVP325-S2/128GB.
Power supply: Tagan TG880-U33II (880 W).
Operating system: Microsoft Windows 7 SP1 Ultimate x64.
Drivers:

AMD Catalyst Display Driver 11.9;
AMD Chipset Driver 8.863;
Intel Chipset Driver 9.2.0.1030;
Intel Graphics Media Accelerator Driver 15.22.50.64.2509;
Intel Management Engine Driver 7.1.10.1065;
Intel Rapid Storage Technology 10.5.0.1027.

Since the main purpose of this test was to study the capabilities of processors with integrated graphics, all tests were carried out without using an external graphics card. The built-in video cores were responsible for displaying the image on the screen, 3D functions and accelerating HD video playback.

It should be noted that, due to the lack of DirectX 11 support in Intel graphics cores, testing in all graphics applications was carried out in DirectX 9 / DirectX 10 modes.

Performance in common tasks

Overall performance

To assess processor performance in common tasks, we traditionally use the Bapco SYSmark 2012 test, which simulates user work in popular office programs and applications for creating and processing digital content. The idea of the test is very simple: it produces a single metric that characterizes the weighted average speed of the computer.
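
BAPCo's exact weighting is proprietary, but the general idea of collapsing several scenario timings into one index can be sketched as follows, assuming normalization against a reference system and a geometric mean; all numbers below are invented for illustration.

```python
# Sketch of a SYSmark-style composite rating: each scenario runtime is compared
# with a reference ("calibration") system, and the ratios are combined into a
# single score scaled so that the reference machine gets 100. Values are made up.
from statistics import geometric_mean

reference_runtimes = {"Office": 600, "Media": 900, "Web": 450}   # seconds, reference PC
measured_runtimes  = {"Office": 500, "Media": 840, "Web": 430}   # seconds, tested PC

ratios = [reference_runtimes[s] / measured_runtimes[s] for s in reference_runtimes]
overall = 100 * geometric_mean(ratios)
print(f"overall rating: {overall:.0f}")
```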



As you can see, AMD's Fusion series processors look downright poor in traditional applications. AMD's fastest quad-core Socket FM1 processor, the A8-3850, barely outperforms the dual-core Pentium G620, which costs half as much. All the other representatives of the A8, A6 and A4 series are hopelessly behind their Intel competitors. This is a natural result of basing the Llano processors on the old microarchitecture inherited from the Phenom II and Athlon II. Until AMD introduces processor cores with higher per-core performance, even the company's quad-core APUs will find it very difficult to compete with Intel's current and regularly updated solutions.

A deeper understanding of the SYSmark 2012 results can be gained from the performance estimates obtained in the individual usage scenarios. The Office Productivity scenario simulates typical office work: preparing documents, processing spreadsheets, working with e-mail and surfing the Internet. The scenario uses the following set of applications: ABBYY FineReader Pro 10.0, Adobe Acrobat Pro 9, Adobe Flash Player 10.1, Microsoft Excel 2010, Microsoft Internet Explorer 9, Microsoft Outlook 2010, Microsoft PowerPoint 2010, Microsoft Word 2010 and WinZip Pro 14.5.



The Media Creation scenario simulates the creation of a commercial using pre-shot digital images and video. For this purpose, popular packages from Adobe are used: Photoshop CS5 Extended, Premiere Pro CS5 and After Effects CS5.



Web Development is a scenario within which the creation of a website is modeled. Applications used: Adobe Photoshop CS5 Extended, Adobe Premiere Pro CS5, Adobe Dreamweaver CS5, Mozilla Firefox 3.6.8 and Microsoft Internet Explorer 9.



The Data/Financial Analysis scenario is dedicated to statistical analysis and forecasting of market trends, performed in Microsoft Excel 2010.



The 3D Modeling scenario is all about creating 3D objects and rendering static and dynamic scenes using Adobe Photoshop CS5 Extended, Autodesk 3ds Max 2011, Autodesk AutoCAD 2011 and Google SketchUp Pro 8.



The last scenario, System Management, covers creating backups and installing software and updates; it involves several different versions of the Mozilla Firefox installer and WinZip Pro 14.5.



The only type of application in which AMD Fusion processors achieve acceptable performance is 3D modeling and rendering. In such tasks the number of cores is a weighty argument, and the quad-core A8 and A6 can deliver higher performance than, for example, an Intel Pentium. But even in the most favorable case AMD's offerings fall short of the level set by the Core i3 processors, which support Hyper-Threading technology.

Application performance

To measure processor speed when compressing data, we use the WinRAR archiver to pack a folder of assorted files totalling 1.4 GB at the maximum compression setting.



We measure performance in Adobe Photoshop using our own benchmark, a creatively reworked Retouch Artists Photoshop Speed Test that includes typical processing of four 10-megapixel images taken with a digital camera.



For testing audio transcoding speed we use the Apple iTunes utility to convert the contents of a CD to AAC format. Note that a characteristic feature of this program is that it can use only a couple of processor cores.



To measure the speed of video transcoding into H.264 format, the x264 HD test is used, which measures the time taken to process source MPEG-2 video recorded at 720p with a 4 Mbps bitrate. The results of this test are of great practical importance, since the x264 codec at its core underlies numerous popular transcoding utilities such as HandBrake, MeGUI and VirtualDub.
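
For reference, an x264-style throughput figure (frames encoded per second) can be reproduced by hand roughly as follows, assuming an x264 binary on the PATH and a raw .y4m source whose frame count is known in advance; the file name and numbers are hypothetical.

```python
# Rough sketch: time a single x264 encode and express the result as fps.
import subprocess
import time

SOURCE = "source_720p.y4m"   # hypothetical 720p raw test clip
FRAMES = 1500                # its frame count, known in advance

start = time.time()
subprocess.run(["x264", "--bitrate", "4000", "--output", "out.264", SOURCE], check=True)
fps = FRAMES / (time.time() - start)
print(f"encoding throughput: {fps:.1f} fps")
```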



Testing the final rendering speed in Maxon Cinema 4D is performed using the specialized Cinebench benchmark.



We also used the Fritz Chess Benchmark, which evaluates the speed of the popular chess algorithm used in the programs of the Deep Fritz family.



Looking at the diagrams above, we can only repeat everything already said about the SYSmark 2012 results. The AMD processors the company offers for integrated systems can claim acceptable performance only in those computational tasks where the load is well parallelized, for example 3D rendering, video transcoding, or iterating over and evaluating chess positions. And even then, competitive performance is observed only for the senior quad-core A8-3850, whose clock frequency is raised at the expense of power consumption and heat dissipation. The AMD processors with a 65-watt thermal envelope give way to any Core i3 even in the cases most favorable to them. Against this background, the Intel Pentium family looks quite decent next to Fusion: these dual-core processors perform roughly on par with the three-core A6-3500 under well-parallelized loads and surpass the older A8 in programs like WinRAR, iTunes or Photoshop.

In addition to these tests, to check how the power of graphics cores can be used for everyday computing tasks, we examined video transcoding speed in Cyberlink MediaEspresso 6.5. This utility supports computing on graphics cores, including both Intel Quick Sync and ATI Stream. Our test measured the time taken to transcode a 1.5 GB 1080p video (a 20-minute episode of a popular TV series) into H.264, downscaled for viewing on an iPhone 4.



The results split into two groups. The first includes the Intel Core i3 processors, which support Quick Sync technology. The numbers speak better than words: Quick Sync transcodes HD video several times faster than any other toolkit. The second, larger group unites all the other processors, among which CPUs with more cores come first. The Stream technology promoted by AMD, as we can see, does not manifest itself at all, and the dual-core Fusion APUs show no better results than the Pentium processors, which transcode video using their computational cores alone.

Graphics core performance

The group of 3D gaming tests opens with the results of the 3DMark Vantage benchmark, which was used with the Performance profile.









A change in the nature of the load immediately changes the leaders. The graphics core of any AMD Fusion processor in practice outperforms any Intel HD Graphics. Even the Core i3-2125, equipped with the HD Graphics 3000 core with twelve execution units, only reaches the performance level of the AMD A4-3300, which has the weakest integrated graphics accelerator, the Radeon HD 6410D, of all the Fusion processors in this test. All the rest of Intel's processors are two to four times slower than AMD's in 3D performance.

Some compensation for the weaker graphics can be found in the CPU test results, but it should be understood that CPU and GPU speed are not interchangeable parameters. One should strive for a balance between these characteristics, and how well they are balanced in the compared processors we will see further on, when analyzing gaming performance, which depends on the power of both the GPU and the computing part of these hybrid processors.

To study the speed of work in real games, we selected Far Cry 2, Dirt 3, Crysis 2, the beta version of World of Planes and Civilization V. Testing was carried out at a resolution of 1280x800, and the quality level was set to Medium.















In the gaming tests the picture is very positive for AMD's offerings. Despite their rather mediocre computational performance, powerful graphics let them show good (for integrated solutions) results. Almost always, representatives of the Fusion series deliver a higher frame rate than Intel's platform with Core i3 and Pentium processors.

Even the fact that Intel began building in the more capable HD Graphics 3000 core did not save the Core i3's position. The Core i3-2125 equipped with it turned out to be about 50% faster than its counterpart, the Core i3-2120 with HD Graphics 2000, but the graphics built into Llano are faster still. As a result, even the Core i3-2125 can only compete with the cheap A4-3300, while the rest of the Sandy Bridge processors look even worse. And if we add to the results shown in the diagrams the lack of DirectX 11 support in Intel's video cores, the situation for this manufacturer's current solutions looks even more hopeless. Only the next-generation Ivy Bridge microarchitecture can fix it, with a graphics core that receives both much higher performance and modern functionality.

Even if we disregard specific numbers and look at the situation qualitatively, AMD's offerings look like a much more attractive option for an entry-level gaming system. The older Fusion A8 processors, with certain compromises in screen resolution and image quality settings, let you play almost any modern game without resorting to an external video card. We cannot recommend any Intel processors for cheap gaming systems: the various HD Graphics options have not yet matured enough for this role.

Energy consumption

Systems based on processors with integrated graphics cores are gaining popularity not only because of the possibilities they open up for miniaturization. In many cases consumers choose them for the opportunity to reduce the cost of the computer. Such processors not only save on a video card, they also allow you to assemble a system that is cheaper to run, since its total power consumption will obviously be lower than that of a platform with discrete graphics. A side bonus is quieter operation, since lower consumption translates into lower heat output and the possibility of using simpler cooling systems.

That is why developers of processors with integrated graphics cores try to minimize the power consumption of their products. Most of the CPUs and APUs reviewed in this article have a rated typical heat dissipation of 65 W, which has become an unspoken standard. However, as we know, AMD and Intel treat the TDP parameter somewhat differently, so it will be interesting to assess the real-world consumption of systems with different processors.

The graphs below show two power consumption values. The first is total system consumption (without a monitor), the sum of the power draw of all components in the system. The second is the consumption of the processor alone, measured on its dedicated 12-volt power line. In both cases the efficiency of the power supply is not taken into account, since our measuring equipment is installed after the power supply and records the voltages and currents entering the system via the 12, 5 and 3.3-volt lines. During the measurements the processor load was created by the 64-bit version of the LinX 0.6.4 utility, and the FurMark 1.9.1 utility was used to load the graphics cores. In addition, to correctly estimate idle power consumption, we activated all available power-saving technologies as well as Turbo Core (where supported).
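
The bookkeeping behind these graphs reduces to summing volts times amps over the measured lines; here is a small sketch with invented readings, where the dedicated CPU connector is treated as the 12 V EPS line.

```python
# Sketch of the power accounting: total system draw is the sum of V x I over all
# lines measured after the PSU, while "CPU only" is the dedicated 12 V EPS line.
# All voltage/current readings below are invented for illustration.
lines = {
    "12V_EPS_cpu": (12.0, 4.6),   # volts, amps on the CPU power connector
    "12V_main":    (12.1, 2.3),
    "5V":          (5.0, 3.1),
    "3.3V":        (3.3, 1.8),
}

total_w = sum(v * i for v, i in lines.values())
cpu_w = lines["12V_EPS_cpu"][0] * lines["12V_EPS_cpu"][1]
print(f"system: {total_w:.1f} W, CPU line: {cpu_w:.1f} W")
```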



At idle, all systems showed approximately the same total power consumption. At the same time, as we can see, Intel processors draw almost nothing from the processor power line at idle, while competing AMD solutions consume up to 8 watts on the dedicated 12-volt CPU line. This does not mean that the Fusion family cannot enter deep power-saving states. The difference comes from the power delivery scheme: in Socket FM1 systems the processor line feeds the computing cores, the graphics core and the north bridge built into the processor, whereas in Intel systems the processor's north bridge takes its power from the motherboard.



Under maximum computational load it becomes clear that the power-efficiency issues inherent in AMD's Phenom II and Athlon II have not gone away with the move to a 32 nm process. Llano uses the same microarchitecture and loses to Sandy Bridge in the same way in performance per watt. The top Socket FM1 systems consume about twice as much as systems with LGA1155 Core i3 processors, even though the computing performance of the latter is clearly higher. The gap in power consumption between the Pentium and the junior A4 and A6 is not as huge, but the situation does not change qualitatively.



Under graphics load the picture is much the same: Intel processors are significantly more economical. But in this case AMD Fusion's significantly higher 3D performance can serve as a good justification. Note that in the gaming tests the Core i3-2125 and A4-3300 squeezed out about the same number of frames per second, and under load on the graphics core their consumption also ended up very close.



Simultaneous load on all units of the hybrid processors yields a result that can be roughly represented as the sum of the two previous graphs. The A8-3850 and A6-3650, with their 100-watt thermal packages, break away noticeably from the rest of the 65-watt offerings from AMD and Intel. However, even without them, Fusion processors are less economical than Intel solutions in the same price range.



When processors serve as the basis of a media center busy playing high-definition video, an atypical situation arises. The computing cores are mostly idle, and decoding of the video stream is handled by specialized blocks built into the graphics cores. As a result, platforms based on AMD processors achieve good energy efficiency; overall their consumption does not greatly exceed that of systems with Pentium or Core i3 processors. Moreover, the lowest-clocked AMD Fusion, the A6-3500, offers the best economy in this usage scenario.

Conclusions

At first glance, summing up the test results is easy. AMD and Intel processors with integrated graphics have shown very different advantages, which allows us to recommend either one or the other depending on the planned use of the computer.

Thus, the strong point of the AMD Fusion family is the integrated graphics core with relatively high performance and support for the DirectX 11 and OpenCL 1.1 programming interfaces. These processors can therefore be recommended for systems where the quality and speed of 3D graphics matter. At the same time, the Fusion processors use general-purpose cores based on the old and slow K10 microarchitecture, which translates into low performance in computational tasks. Therefore, if you are interested in the best performance in common non-gaming applications, you should look towards Intel's Core i3 and Pentium, even though these CPUs have fewer processing cores than competing offerings from AMD.

Of course, on the whole AMD's approach to designing processors with an integrated video accelerator seems more rational. The company's APU models are well balanced in the sense that the speed of the computing part is quite adequate to the speed of the graphics, and vice versa. As a result, the older A8 series processors can be considered a possible basis for entry-level gaming systems. Even in modern games, such processors and their integrated Radeon HD 6550D accelerators can provide acceptable playability. With the junior A6 and A4 series and their weaker graphics cores the situation is more complicated. Their performance is no longer enough for a general-purpose entry-level gaming system, so such solutions make sense only for multimedia computers that will run graphically simple casual games or online role-playing games of previous generations.

However, whatever is said about balance, the A4 and A6 series are poorly suited for demanding computing applications. Within the same budget, Intel's Pentium line can offer significantly faster computing performance. In truth, against the background of Sandy Bridge only the A8-3850 can be considered a processor with acceptable speed in common programs. And even then its good results do not show up everywhere and, moreover, come at the cost of increased heat dissipation, which will not please every owner of a computer without a discrete video card.

On the other hand, it is a shame that Intel still cannot offer a graphics core of worthy performance. Even the Core i3-2125, equipped with the fastest Intel HD Graphics 3000 in the company's arsenal, performs in games only at the level of the AMD A4-3300, since speed here is limited by the built-in video accelerator. All the other Intel processors carry a video core one and a half times slower, and in 3D games they look very pale, often delivering completely unacceptable frame rates. Therefore, we would not recommend considering Intel processors as the basis of a system intended for 3D graphics at all. The Core i3 and Pentium video core does an excellent job of displaying the operating system interface and playing high-definition video, but it is not capable of more. So the most suitable niche for Core i3 and Pentium processors is systems where the computing power of general-purpose cores and good energy efficiency matter; in these respects, no AMD offering can compete with Sandy Bridge.

Finally, it should be noted that Intel's LGA1155 platform is much more promising than AMD's Socket FM1. When buying an AMD Fusion processor, you must be mentally prepared for the fact that a computer based on it can be upgraded only within very narrow limits. AMD plans to release only a few more Socket FM1 models from the A8 and A6 series with slightly increased clock frequencies, and their successors coming next year, known under the codename Trinity, will not be compatible with this platform. With LGA1155, not only can the far more computationally capable Core i5 and Core i7 be installed today, but the Ivy Bridge processors planned for next year should also work in motherboards purchased now.