webdoctors - Thursday, November 10, 2016 - link
> NVIDIA gained more revenue in the gaming segment this quarter than all of AMD’s Computing and Graphics segment earned in their last quarter.

Damn, the numbers don't lie. The echo chamber of online bloggers makes it sound like NV products are expensively evil and AMD stuff is competitive this quarter, but the silent majority has spoken at the checkout stand, and it's clear NVDA is in a league of its own compared to Intel and AMD. Definitely a company that's executing amazingly.
StrangerGuy - Thursday, November 10, 2016 - link
All Polaris did was make the 1070/1060 look like a steal, what with PCIEgate, overpriced AIB 480s, lousy DX11 performance, and mediocre perf/W. Also, contrary to what AMD fanboys want you to believe, there are plenty of non-bankrupt people who can spend $100+ on a better NV card, and that money is nothing to them over the card's 2-3 year lifespan.

xype - Friday, November 11, 2016 - link
> money is nothing to them for the card's 2-3 year lifespan.

I think this pretty much nails it. If I needed a decent GPU _right now_ and would only use it for 6 months, AMD would be a no-brainer.
But buying for 2-4 years—and most people these days do, so it doesn’t apply only to the affluent—means that $100 over 2-4 years is an investment of $2-4 a month.
In 2 years’ time, I know _I_ would want the better performance. Even if GPUs make a huge leap in that time (and they won’t), buying cheap now only to have to buy again in 2 years gets as expensive as buying a mid-to-high range GPU now. Only the latter option has the added benefit of giving me better performance in the present.
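The back-of-the-envelope math above, as a quick sketch (the $100 premium and the 2-4 year lifespan are just the figures from the comment, not market data):

```python
def monthly_premium(price_premium: float, years: float) -> float:
    """Amortize a one-time price premium over the card's lifespan."""
    return price_premium / (years * 12)

# A $100 premium spread over a 2-4 year lifespan:
for years in (2, 4):
    print(f"{years} years: ${monthly_premium(100, years):.2f}/month")
# → 2 years: $4.17/month
# → 4 years: $2.08/month
```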
beginner99 - Friday, November 11, 2016 - link
AMD owns performance per watt, but yes, the 390(X) and RX 480 were a step back. Still, AMD has lower resale value, and hence going via eBay can save you 50% or more compared to Nvidia for the same performance. You would be dumb not to do that.
Morawka - Tuesday, November 15, 2016 - link
Nvidia owns performance per watt. AMD owns performance per dollar... at least they used to.
Shadow7037932 - Saturday, November 12, 2016 - link
PCIEgate? What's that?

Space Jam - Saturday, November 12, 2016 - link
>PCIEgate

All talk, not an actual problem for the majority of people.
>overpriced AIB 480s
They have been pricey, but so too have been the
>lousy DX11 performance
AMD DX11 performance is better than ever. Yeah, it isn't Nvidia tier, but it's by no means bad.
>mediocre perf/W
AMD is lagging behind there, but it's HARDLY mediocre. Polaris matches Maxwell in power efficiency and Maxwell isn't anything bad just yet.
renz496 - Saturday, November 12, 2016 - link
AFAIK the talk about Polaris matching Maxwell efficiency is a subjective one. Polaris gains its efficiency from the node shrink, not so much from architecture changes. If Maxwell were directly shrunk to 14nm/16nm FinFET, it would be even more power efficient than Polaris at 14nm.

JoeyJoJo123 - Monday, November 14, 2016 - link
>if maxwell was directly shrink

But Maxwell's not getting a die shrink... Maxwell's efficiency is set in stone, as Nvidia's moved on to Pascal now. And currently AMD's Polaris efficiency is very close to where Maxwell's at.
Yeah it's too bad that AMD's performance/watt is roughly equivalent to Nvidia's last gen, but the GTX 970 didn't suddenly become a shitty card overnight. It's still a good card with good performance/watt capable of modern games at 1080p and 1440p, or even 4k at reduced fidelity. Likewise, the RX480 is similar and accomplishes the same thing.
I find it odd that Team Red's equivalent of a GTX 970 is crapped on so hard by enthusiasts like this.
JoeyJoJo123 - Monday, November 14, 2016 - link
It's as if people on Team Green suddenly forget and forgive that Nvidia sold them a 3.5GB VRAM GTX 970.

And then Nvidia buyers point the finger at AMD over a scandal that was literally fixed in less than 10 days via a driver update modifying the card's power usage, while Nvidia GTX 970 owners never got the speed of the last .5GB of their VRAM "restored" with a driver update and are still waiting to collect a class action lawsuit settlement.
There's a lot of unwarranted negative stigma against AMD online, and it shows in the attitudes people spread. The RX 480 is the rough equivalent of a GTX 970, both cards, new, have roughly the same price on the market today, and the GTX 970 sold like hotcakes despite the VRAM problems.
The only thing I find disappointing about AMD's Radeon video cards right now is that they still haven't released their high end lineup, which, despite making up a minority of actual GPU sales, still serves the purpose of a halo product that people can aspire to.
Guspaz - Monday, November 14, 2016 - link
The 970 sold like hotcakes because it was a good card for a good price, and the 3.5GB issue didn't actually have any impact on real-world use. Case in point: the R480 has 8GB of RAM and still provides roughly similar performance. Which is actually a problem, because the R480 is currently priced about the same as the much faster 1060.

JoeyJoJo123 - Monday, November 14, 2016 - link
>Case in point: the R480 has 8GB of RAM and still provides roughly similar performance.

I don't think you really understand the way RAM works: you either have enough RAM for all processes, or you don't, and suddenly there is massive performance degradation, as the process is now missing cache and has to go to swap to find the data on a much slower SSD or, in the worst case, an HDD.
3.5GB was and still is enough for most titles at 1080p and 1440p today. Few titles exceed this amount of VRAM, but when they do (ex: massive Skyrim texture mods), VRAM usage goes way up and game performance degrades a lot.
So to say "the R480 has 8GB of RAM and still provides roughly similar performance" is to imply that it should be providing *better* performance because it haz moar vramz, which is *only* true if the application being compared between a GTX 970 and an RX480 was occupying more than 3.5GB of VRAM.
Consider the speed of a desktop if a user has 32GB of RAM installed. It never slows down due to lack of RAM when using standard desktop applications. What if they installed another 32GB of RAM for a total of 64GB of RAM? Well it still wouldn't speed up any standard desktop applications, as they weren't able to saturate the first 32GB of RAM to begin with.
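A toy model of the cliff-edge behavior described above (the numbers are illustrative, not benchmarks):

```python
def relative_fps(vram_needed_gb: float, vram_available_gb: float,
                 spill_penalty: float = 0.25) -> float:
    """Toy model: performance is flat while the working set fits in
    VRAM, then drops sharply once textures spill over the bus."""
    if vram_needed_gb <= vram_available_gb:
        return 1.0          # everything resident: full speed
    return spill_penalty    # spilling to system RAM: big hit

# Same 3 GB workload: extra VRAM buys nothing...
assert relative_fps(3.0, 3.5) == relative_fps(3.0, 8.0) == 1.0
# ...but exceed the 3.5 GB budget and only the 8 GB card holds up.
print(relative_fps(4.0, 3.5), relative_fps(4.0, 8.0))  # → 0.25 1.0
```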
medi03 - Tuesday, November 15, 2016 - link
The 960 sold like hotcakes too, despite having pathetic perf/$. Hell, Prescott outsold the Athlon 64 like 3 to 1?
Why pretend average consumer choices are reasonable?
medi03 - Tuesday, November 15, 2016 - link
Oh, give me a break with this BS. The 480 and 470 are solid low-range GPUs, and the 470 has no competition.
Ariknowsbest - Friday, November 11, 2016 - link
Local markets also have an impact: if Nvidia has price parity with AMD, people will most likely buy Nvidia. I just looked through the most sold cards in Finland and Sweden, and the high end 1060 and 1070 dominate the top, followed by the 1080, the 1050 Ti, and then the 480 Nitro+.

Mining is probably where most AMD cards end up.
Byte - Friday, November 11, 2016 - link
Exactly, all my Radeons are just mining. And don't forget Nvidia has its most amazing mobile cards in almost a decade. They can finally perform near desktop level, not silly rebadging like the last freakin' 5 years. AMD has basically no presence in the mobile GPU sector. Polaris 11 was supposed to be the return everyone (well, actually no one) was waiting for, but it's still a no-show, probably because Polaris uses just too much power and Pascal is decimating it. Zen is going to look like a joke; it MIGHT just beat a Nehalem. Which is sad, as I just went to my parents' to take a look at my old i7 920 that wasn't booting. Man, that thing is old and slow. How long were we waiting for that!?

nikon133 - Sunday, November 13, 2016 - link
They don't have a presence in mobile GPUs, true... but then, Nvidia has no presence in consoles. I am surprised that console exclusivity is not making a more positive impact on AMD's side. Maybe those APUs are really a dime a dozen, though the PS4 Pro and Scorpio APUs shouldn't be that cheap... and they have sold in the vicinity of 70 million APUs in the PS4 and X1 so far. Even if they are cheap, quantity should make up a bit for the low price.

ZeDestructor - Monday, November 14, 2016 - link
Rumour has it nV, Intel and IBM all walked away from the console market because margins were too small. By the looks of it, AMD isn't really raking it in from there either.

Shadow7037932 - Saturday, November 12, 2016 - link
Mining is no longer profitable on a GPU. ASICs dominate mining these days.

medi03 - Tuesday, November 15, 2016 - link
Except the 1050 Ti is hardly a good buy; the 470 trumps it while being only slightly more expensive.

medi03 - Wednesday, November 16, 2016 - link
Huh? AMD revenue in Q3 was 1.3 billion.
http://venturebeat.com/2016/10/20/amd-reports-1-3-...
nagi603 - Thursday, November 10, 2016 - link
Well, considering AMD has basically given up the high-end GPU market for the time being (meaning Nvidia can keep prices high without any sort of competition) and Nvidia still has good options in all other segments, this is not much of a shocker.

HollyDOL - Friday, November 11, 2016 - link
Damn, why didn't I buy their stock in February?

Samus - Friday, November 11, 2016 - link
February... 2014? That was when Maxwell hit the market and it was clear AMD was going to be screwed.

As nVidia scaled Maxwell up, there was no competition for efficiency or performance per mm² (which translates directly to higher margins on production).
Pascal is simply the nail in the coffin that we all saw coming. It has been obvious for many years, essentially since the Radeon 7970, that AMD has had nothing, and all of their products since, up until the R480, are loosely based on 3-generation-old hardware.
CiccioB - Friday, November 11, 2016 - link
Maybe AMD fanboys will stop with the endless price/performance saga they have been pushing since AMD came out with a subpar architecture compared to the competition (aka TeraScale).

A good price/performance ratio (while the competition does not follow the same path) just means that the company is underpricing its solutions <b>because there is not enough demand for them!</b> And that brings low margins, low income, and no money to invest in the next architecture.
I constantly read "if AMD had the money to... they could...". Money is made by creating good products that can sell for what they deserve. If AMD can't sell them with good enough margins, it just means they are not good enough for the market. It means that AMD will constantly lose money while nvidia makes a bunch of it.
If it was not clear with Kepler vs GCN, then Maxwell vs GCN 1.2, and then again Pascal vs Polaris: AMD just came out with a shame retarded architecture that cannot compete, despite selling it almost for free to show their product has some appeal.
Please stop with this lament about price and performance, and the future, and the miraculous drivers and the future, and Mantle, and the future, and the DX12 and the future and everything that does not talk about resources/performance ratio (which is an indication of the quality of the architecture).
AMD simply needs more (quite a lot more) resources than nvidia to reach the same performance point. It means that AMD cannot fight a real battle on price, as the same-performing solution is always going to cost nvidia less (and nvidia is not going to underprice it).
So all this talk about how low-priced AMD products are is useless. It is not driving nvidia's better solutions to lower prices except in the short term, and more importantly it is not giving AMD money to create the next killer architecture. It is not an advantage to the market (and thus to us) in the long term. And we have had this situation for 9 years now... it is time to change things for real.
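The margin argument above in rough numbers (all prices and unit costs here are hypothetical, purely to illustrate the mechanism):

```python
def gross_margin(price: float, unit_cost: float) -> float:
    """Fraction of the selling price left over after unit costs."""
    return (price - unit_cost) / price

# Hypothetical: two cards with equal street performance, but one needs
# a bigger, costlier chip and must undercut on price to sell at all.
print(f"{gross_margin(price=250, unit_cost=120):.0%}")  # leaner chip, full price → 52%
print(f"{gross_margin(price=200, unit_cost=150):.0%}")  # bigger chip, discounted → 25%
```

Same performance sold, half the money left for the next architecture.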
MrSpadge - Friday, November 11, 2016 - link
I somewhat agree with your first two paragraphs, but then you start to exaggerate beyond credibility. Look at, e.g., how gracefully the HD7970 has aged compared to the GTX680. You can still play at decent settings today with that "shame retarded architecture", whereas the nVidia card is seriously struggling.

Don't get me wrong, my last 3 cards have been nVidias due to their superior power efficiency. But that won't make me bash AMD in an undeserved way.
renz496 - Friday, November 11, 2016 - link
Those HD7k cards age better because the major consoles (PS4 and Xbone) use the same architecture. AMD themselves never deny the advantage they have because of the consoles. Why do you think AMD pushed a low-level API with Mantle? Part of the reason was to ease their burden on driver development: since the consoles use the same architecture as their current GPUs, game developers would already be familiar with using a low-level API on their hardware. In short, they wanted to streamline their architecture with the consoles. But because of that, AMD probably did not dare to make radical changes to their architecture; change too much and they might lose the "same architecture as console" benefit. Hence their current Polaris relies heavily on the node shrink to gain its power efficiency.

MrSpadge - Friday, November 11, 2016 - link
That's not the only reason. GCN was rather forward looking, which pays off as compute workloads/shaders become more complex nowadays. That's apart from the fact that AMD simply gave the chip more raw power than its nVidia counterpart from "back then".

owan - Friday, November 11, 2016 - link
Don't underestimate the fact that AMD has stuck with the same chips through multiple generations. The 7970 was several years old and still being sold, which means it's been an optimization target for the driver team for a much longer period than the 680 was over at Nvidia. AMD had to keep pushing performance as much as it could just to keep those chips relevant in new games, while Nvidia made a significant change with Maxwell and didn't look back at Kepler.

renz496 - Friday, November 11, 2016 - link
AMD always tries to be forward looking, but that does not mean it always gives them the advantage they need in the future. It's just that this time, owning the contracts for both the PS4 and Xbone, they have more influence with game developers and also with MS on what features get added to the DirectX API. Take tessellation, for example: even the HD4k series had the hardware for it. Some people assumed that once tessellation became part of the API, the HD4k series would have an absolute advantage over the nvidia 200 series. But in the end that advantage never turned into reality, because to use tessellation the hardware still needed to be fully compliant with MS Shader Model 5.

Both AMD and nvidia push what they believe should be added to games, but right now AMD has more influence than they did in the past. That's why nvidia no longer ignores the console market completely: when AMD has its hands on all of the consoles, it puts nvidia at a disadvantage.

Nagorak - Saturday, November 12, 2016 - link
Nagorak - Saturday, November 12, 2016 - link
I see a lot of Nvidia cheerleaders posting here, and there is no doubt Nvidia has been doing better than AMD. That's hardly breaking news; it's been a tough few years for AMD. However, as you point out, AMD has built itself a solid base by becoming the standard for both major consoles. AMD is currently in a rebuilding phase, and it looks like they're doing what is necessary to get themselves back in the game.
CiccioB - Monday, November 14, 2016 - link
Are you talking about the future? Aren't 9 years enough to close the gap? What should we wait for? The entire gaming market making game engines optimized only for AMD HW? Would that be enough to grant AMD some advantage over the competition?
CiccioB - Monday, November 14, 2016 - link
They aged better for the simple reason that they were unbalanced: more compute-capable than, for example, geometry-capable. It was the development of heavier and heavier shader code that finally let them overcome a competitor that was (and still is) more balanced.
In fact, most of AMD's compute advantage has gone against Maxwell, which is more effective at GPGPU than Kepler was.
It takes a 9 TFLOPS GPU (like Fiji) to equal GM200, which has only 5.6 TFLOPS of compute capacity. And the latest Polaris 10 is in the same league as the 970 (and under the 980) despite being faster (on paper) than that old Maxwell chip.
So it's true that the recent shift to more compute-centric engines made GCN age better than Kepler, but it ends there: GCN 1.2 has not aged better than Maxwell, and for other reasons it is even much worse (see the tiny GM206 vs Tonga, or the 4GB-limited Fiji vs GM200).
So it was not a "future proof" vision but luck that (or the ability to steer how) game engines shifted more toward pixel shading instead of geometry work; even doubling shaders and power consumption would not otherwise have "future proofed" AMD's architecture.
Polaris has no new feature that lets it battle Pascal any better than GCN 1.2 could battle Maxwell.
And it is still power hungry. It is at the previous generation's level of efficiency, something that never happened before. Even GF100 was inefficient, but it could still reach the top of the podium; here we have inefficiency and low rank. Nothing about that bodes well for the future, unless you consider the price the card is sold at, which is a lower tier than it was originally meant to occupy.
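A back-of-the-envelope version of the per-TFLOPS argument above, using only the figures quoted in this comment (real game performance varies per title, so treat this purely as illustration):

```python
# Illustrative perf-per-TFLOP comparison, assuming (as the comment does)
# that the two cards deliver roughly equal gaming performance.
fiji_tflops = 9.0    # Fiji peak FP32 throughput, per the comment
gm200_tflops = 5.6   # GM200 peak FP32 throughput, per the comment

# Raw FLOPS Fiji needs per unit of gaming performance, relative to GM200:
ratio = fiji_tflops / gm200_tflops
print(f"Fiji needs ~{ratio:.2f}x the raw FLOPS for the same gaming output")
```

In other words, on these numbers GCN extracts roughly 60% less gaming performance per theoretical FLOP than Maxwell, which is the efficiency gap the comment is pointing at.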
medi03 - Tuesday, November 15, 2016 - link
Chizow?
TristanSDX - Friday, November 11, 2016 - link
Let AMD stop selling CPUs and GPUs and buy shares of NV and Intel instead. Without competition, NV and Intel would raise prices and their shares would rapidly skyrocket, and AMD would earn more cash, much faster, than by selling their current products.
smilingcrow - Friday, November 11, 2016 - link
Pure capitalism at its finest.
AnotherGuy - Friday, November 11, 2016 - link
I just don't get some people here who get so happy at nvidia's numbers and speak like nvidia is their mother or father... they are making too much money out of customers like you, who buy cards at ridiculous prices... jeez
HollyDOL - Friday, November 11, 2016 - link
When I remember the cost of my first (parents') PC 25 years ago, the cost of a GTX 1080 doesn't seem that bad, tbh. I'd be lying if I said I wouldn't be happier to get one for half the price, but from a long-term price/wages point of view, it's a bargain.
Nagorak - Saturday, November 12, 2016 - link
I wouldn't describe the 1080 as a bargain in any respect, despite the fact that I own one myself. It's definitely a luxury-type purchase.
Yojimbo - Saturday, November 12, 2016 - link
From a historical perspective it is a bargain, but that can be said of almost any electronic component. I think the value of the GPU relative to the other components in a gaming system has been going up. From that perspective, although inflation-adjusted prices have only come down a little, value per dollar has still been rising. VR will only continue that trend, especially if it starts to rely on accelerated physics simulations.
JKflipflop98 - Saturday, November 12, 2016 - link
In 1993, the Los Alamos labs had a pair of supercomputers for their research; they were #23 and #24 on the TOP500. A GTX 1080 has as much compute power as those machines.
Yeah, I agree it's a bargain, what we get for our money today. Tomorrow will only be better.
JKflipflop98 - Saturday, November 12, 2016 - link
I really think Intel needs to buy Nvidia. GeForce cards on Intel's FinFET process? Yes, please.
CiccioB - Monday, November 14, 2016 - link
They are already fine on TSMC 16nm. The one that needs a better process to overcome architectural deficiencies right now is AMD. Maybe it's them who should contact Intel to make GPUs on a 10nm process; maybe that way GCN 1.4 could finally reach Pascal's efficiency.
It's like the old joke about Atom vs ARM efficiency... the former was only ever more efficient with two process nodes of advantage; otherwise it was worse (and it didn't even integrate everything an ARM SoC does, while being much bigger and costlier :D).
medi03 - Wednesday, November 16, 2016 - link
AMD's revenue in Q3 2016: $1.3 billion. Can someone tell me how "NVIDIA gained more revenue in the gaming segment this quarter than all of AMD's Computing and Graphics segment earned in their last quarter" makes sense?
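The confusion is between AMD's total revenue and its Computing and Graphics segment, which is only part of that total (the rest sits in Enterprise, Embedded and Semi-Custom, where the console chips live). A rough reconciliation, using segment figures as I recall them from the two companies' quarterly reports (treat all numbers as approximate, not quoted from the filings):

```python
# Approximate Q3 2016 segment figures, in billions of USD.
# These are recalled-from-memory estimates, not exact reported values.
amd_computing_graphics = 0.47  # CPUs + consumer GPUs only
amd_enterprise_semi = 0.84     # consoles, embedded, server
amd_total = amd_computing_graphics + amd_enterprise_semi

nvidia_gaming = 1.24           # NVIDIA's gaming segment alone (Q3 FY2017)

print(f"AMD total revenue:        ~${amd_total:.2f}B")
print(f"AMD C&G segment:          ~${amd_computing_graphics:.2f}B")
print(f"NVIDIA gaming > AMD C&G:  {nvidia_gaming > amd_computing_graphics}")
```

So the headline can be true even with AMD's total near $1.3B: NVIDIA's gaming segment alone is being compared against only the Computing and Graphics slice of AMD's revenue, not the whole company.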