# AMD HD 6950 and 6970 released



## topgear (Dec 15, 2010)

*First a little Specs :*

*images.techtree.com/ttimages/story/113790_specs_1.jpg

image courtesy of techtree.com

One more with prices and a better comparison with its cousins:

*i51.tinypic.com/2h8bchz.gif

Used info from anandtech.com for this one.

*The HD 6950 has a TDP of 140 W and the HD 6970 a TDP of 190 W*

The duo (good old design!):

*www.guru3d.com/imageview.php?image=27894

*www.guru3d.com/imageview.php?image=27897

images courtesy of guru3d.com

One more tiny detail :

*www.guru3d.com/imageview.php?image=27892

image courtesy of guru3d.com

*What's That New Switch?*

If you look at the photo below you'll notice a tiny micro-switch next to the two CrossFireX connectors. The HD 6900 cards now have a dual BIOS: one firmware-flashable BIOS and one (non-flashable) default BIOS. With the switch you can select BIOS 1 or 2.

AMD likely implemented the feature to reduce the RMA rate. They know very well that the enthusiast community often re-flashes its cards, often unsuccessfully, after which the cards enter a very expensive RMA procedure at AMD's cost.

So this feature is a bit of a failsafe. For the real tweakers, however, it's an advantage: you could overclock and over-volt the product based on a predetermined set of preferences of your choice, flash that into the BIOS, and with the flick of a switch have either a performance or a default BIOS at your disposal.

It's definitely not a bad thing to have on any graphics card really and we certainly appreciate the implementation.

*The top 3 reviews* (IMO):

*www.tomshardware.com/reviews/radeon-hd-6970-radeon-hd-6950-cayman,2818.html

*www.guru3d.com/article/radeon-6950-6970-review/

*www.anandtech.com/show/4061/amds-radeon-hd-6970-radeon-hd-6950

*The first-ever review* : iXBT.com: AMD Radeon HD 6950/6970: slightly weaker than the GeForce GTX 570/580, but also cheaper

iXBT.com: AMD Radeon HD 6950/6970: description of the cards and synthetic test results

Game benchmarks : iXBT.com: AMD Radeon HD 6950/6970: performance measurements in games and other 3D tests, and overall conclusions

*More reviews of HD6970 and HD6950*

*www.neoseeker.com/Articles/Hardware/Reviews/amd_radeon_hd6970_hd6950/

*www.hardwarecanucks.com/forum/hardware-canucks-reviews/38899-amd-radeon-hd-6970-hd-6950-review.html

*www.overclockersclub.com/reviews/amd_hd6970_hd6950_review/

*www.overclock3d.net/reviews/gpu_displays/his_hd6970_hd6950_review/1

*hardocp.com/article/2010/12/14/amd_radeon_hd_6970_6950_video_card_review

*fudzilla.com/reviews/item/21186-xfx-radeon-hd-6970-and-hd-6950-tested

*www.rage3d.com/reviews/video/amd_hd6970_hd6950_launch_review/

*HD 6970 Reviews* :

*www.techtree.com/India/Reviews/AMD_HD_6970_Cayman_Review/551-113790-537.html

*www.techpowerup.com/reviews/HIS/Radeon_HD_6970/

*www.techspot.com/review/348-amd-radeon-6970/

*www.bit-tech.net/hardware/graphics/2010/12/15/ati-radeon-hd-6970-review/1

*benchmarkreviews.com/index.php?option=com_content&task=view&id=606&Itemid=72

*www.techradar.com/reviews/pc-mac/pc-components/graphics-cards/amd-radeon-hd-6970-915716/review

*HD6950 Reviews* :

*www.pcworld.com/reviews/product/752477/review/amd_radeon_hd_6950.html

*www.techradar.com/reviews/pc-mac/pc-components/graphics-cards/amd-radeon-hd-6950-915689/review

*www.hitechlegion.com/reviews/video-cards/7411-sapphire-radeon-hd-6950-review

*www.techpowerup.com/reviews/HIS/Radeon_HD_6950/


----------



## Chirag (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

AMD's 6970 pricing revealed - Graphics Cards - Build - News - Atomic MPC

6970 is ~$370. Competing with GTX 570. Looks good.


----------



## damngoodman999 (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*



topgear said:


> got another review of HD6970 ( 1536 SPs ) and HD6950 ( 1408 SPs ) and I think it can be trusted :
> 
> iXBT.com: AMD Radeon HD 6950/6970: slightly weaker than the GeForce GTX 570/580, but also cheaper
> 
> ...



Topgear, I never knew you knew Russian ...


----------



## desiibond (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

@topgear, from those reviews it looks like the GTX 580 will keep the crown for a while longer. The HD 6970 is a bit slower with AA and AF enabled at Full HD resolution.


----------



## Krazzy Warrior (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*



damngoodman999 said:


> Topgear, I never knew you knew Russian ...


iXBT.com: AMD Radeon HD 6950/6970: description of the cards and synthetic test results > Google Translate

Google rocks!


----------



## clear_lot (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

BENCHMARKS FROM FUDZILLA:

XFX Radeon HD 6970 and HD 6950 tested


A question: is fudzilla as reliable as toms/anandtech/guru3d/techpowerup ?


----------



## Chirag (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

^^
no.


----------



## clear_lot (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

GURU3D:        Radeon HD 6950 & 6970 review

TECHPOWERUP 6970 : HIS Radeon HD 6970 2 GB Review - Page 1/33 | techPowerUp

TECHPOWERUP 6950 :   HIS Radeon HD 6950 2 GB Review - Page 1/33 | techPowerUp




These are the de facto standard reviews; no monkey business in them.


----------



## desiibond (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

Looks like the HD 6950 is gonna be a heck of a buy. It should be priced around $300, which puts it at the GTX 470 price point while being much faster than the GTX 470 in most games. I'm surprised to see it touching the GTX 570 too in some games.


----------



## Cilus (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

The reviews are mixed according to Guru3D: in some games like Battlefield: Bad Company and DiRT the performance is merely equal to the HD 5870, whereas in other games like Anno and Crysis it performs superbly.
However, it looks like the main target for the HD 69xx series was the GTX 480, as the GTX 580 is a clear winner in all cases. But on power consumption, idle power, and temperature under full load, these cards are hard to beat.
I think pricing is the next thing to look at, and AMD has been good at that for a long time.


----------



## Zangetsu (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

so the crown for the current fastest graphics card on the market goes to ........????


----------



## Skud (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*



KaranTh85 said:


> so the crown for the current fastest graphics card on the market goes to ........????



It's still the Radeon HD 5970, and the fastest single GPU is the GTX 580.


----------



## damngoodman999 (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

So what will the exact price of the HD 6950 be in India?


----------



## Skud (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

links:-

Anandtech:- AMD's Radeon HD 6970 & Radeon HD 6950: Paving The Future For AMD - AnandTech :: Your Source for Hardware Analysis and News

hardOCP:- Introduction - AMD Radeon HD 6970 and HD 6950 Video Card Review | [H]ard|OCP

TomsHardware:- Radeon HD 6970 And 6950 Review: Is Cayman A Gator Or A Crock? : Radeon HD 6970 And 6950 Arrive

The 6950 really seems like the card to get. It should be somewhere between 18K and 20K in India, depending on the model.


----------



## Faun (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*



damngoodman999 said:


> Top gear i never knew that u know RUSSIAN language ...



lol...privet


----------



## vickybat (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

Guys, check out the Cayman review by Tom's Hardware here.

The 6970 is no match for the mighty GTX 580 in any benchmark, and even the GTX 570 trumps it at some resolutions or performs close to it. The same goes for the 6950, which is not that far behind its big brother. But AMD has improved on its Cypress architecture by adopting a VLIW4 model in place of VLIW5, i.e. dropping the special-function ALU, which AMD claims was unnecessary. More about it in the review.

Lastly, where AMD excels is in improved CF scaling, which sees two 6950s trump two GTX 570s (marginally, though). But that is due to the 6950's large frame buffer and improved drivers. Expect nvidia to fight back with its own driver optimisations.

So the conclusion is that the 580 and 570 are still the top performers, and the 6950 seems to be the Cayman to buy. But we can't deny the value in a pair of 6850s, which beats all Caymans, fights closely with the 580, and overtakes it at higher resolutions.

Expect a GTX 560 to take on that segment.


----------



## coderunknown (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

but why was the pricing something like 28k (HD 6970) as posted on the earlier page? was it a mistake, or did SMC or whoever the dealer was lose its mind?


----------



## Faun (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

^^yep that pricing is not commensurate with the performance.


----------



## Cilus (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

vickybat, yes, you're quite right: the GTX 580 is the winner here. AMD was actually targeting the GTX 480 and was completely surprised by Nvidia's trump card, the GTX 580. But it is still a lovely product. I think attractive pricing is the next thing we'll see from AMD, and it should be lower than the GTX 570.


----------



## clear_lot (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

^ +1

@ Sam.Shab

i posted the prices from SMC. their prices are very strange. i think what they do is, when a new card and its junior version release, their knee-jerk reaction is to price them at 28k and 22k respectively, in the hope of catching the early uninformed.
how else can you explain 6970 = 580 and 6950 = 570 in pricing?

anyway, how are they priced compared to international prices? i guess we are paying Rs. 5000 extra for each card.


----------



## desiibond (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

@vickybat. The HD 6950 goes head-on against mid-range cards, and I see it sitting next to the HD 6870. It costs about the same as a GTX 480 while performing much better. The HD 6970, on the other hand, is in a bad spot: it costs more than the GTX 570, but for the same price one can get dual HD 6850s in CF mode, which pwns both the 570 and the HD 6970.


----------



## Ishu Gupta (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

A 15-17k range would be great for HD6950.


----------



## ico (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*



desiibond said:


> It costs more than GTX570 but then for the same price, one can get dual HD6850s in CF mode that pawns both 570 and HD6970


HD 6850 in CrossFire actually even edges out a single GTX 580 in most benchmarks.



Ishu Gupta said:


> A 15-17k range would be great for HD6950.


which would mean the HD 6870 and 6850 getting even cheaper. 

Here are the prices, refer to these while making comparisons.

HD 6850 = $170
HD 6870 = $240
HD 6950 = $299
GTX 570 = $349
HD 6970 = $369
GTX 580 = $510
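Since several posts weigh single cards against CrossFire pairs, here is a quick arithmetic sketch over the launch prices listed above (plain price math only; no performance numbers are assumed):

```python
# Launch prices in USD, as listed above
prices = {
    "HD 6850": 170, "HD 6870": 240, "HD 6950": 299,
    "GTX 570": 349, "HD 6970": 369, "GTX 580": 510,
}

# Cost of a dual-card (CrossFire/SLI) setup for each model
pair_cost = {name: 2 * usd for name, usd in prices.items()}

print(pair_cost["HD 6850"])                    # 340: two 6850s undercut one GTX 580
print(pair_cost["HD 6950"])                    # 598
print(prices["HD 6970"] - prices["GTX 570"])   # 20: the 6970's premium over the GTX 570
```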


----------



## Ishu Gupta (Dec 15, 2010)

*Re: All Graphics Cards related queries here.*

Yeah. Only thing against 6850CF is people prefer single cards over two cards.


----------



## vickybat (Dec 15, 2010)

Yes, 6850 CF edges past the 580, but at ultra-high resolutions, i.e. 2560x1600, due to the large frame buffer. They represent undeniable value.

But talking about single cards, the Fermi duo still bests the Caymans in all departments. The 570 also performs close to the 6970 and beats it in some benches, and so does the 6950 (it trails slightly). *But the GTX 580 is untouchable currently*. So AMD has to lower prices in order to take on the Fermi duo, and I guess they won't be that low: the 6950 will still cost around 20k and the 6970 above 25k in India, so they will face stiff competition from nvidia. Even in power consumption nvidia is very close this time. So the cards to recommend here are the nvidia GTX 570 and the AMD Radeon HD 6950 (nvidia performs a tad better).

*Nvidia's answer to 6850 CF will come in the form of the GTX 560, and when SLI'd it should even take down a 6870 CF (at least from what GF110 promises).*

This conclusion says it all.


----------



## Joker (Dec 15, 2010)

here is my conclusion after reading all the reviews...

AMD's Cayman is no match for the GTX 580 in terms of single cards, but... i've been surprised by the CF scaling. currently, if you take two HD 6950s or two HD 6970s in CF, you get really excellent performance at the $600 and $740 price points respectively, which is great, with temperature and power under control too.



vickybat said:


> * But the gtx 580 is untouchable currently*.


it is a much bigger die with more transistors.. it should be better.


----------



## vickybat (Dec 15, 2010)

Joker said:


> here is my conclusion after reading all the reviews...
> 
> AMD's cayman is not a match for GTX 580 in terms of single cards but...i've been surprised by their CF scaling. currently if take two HD 6950 and HD 6970 in CF each, you get really excellent performance at the price point of $600 and $740 respectively which is great even with temperature and power under control.



Very true. CF has improved. 6950 CF scales well and beats GTX 570 SLI in some benchmarks, but that's due to improved driver optimisations and the large frame buffer. Expect nvidia to release new drivers to improve SLI scaling and fight back *(we all know about nvidia's driver prowess)*. A GTX 580 SLI will still be at the top, but that pushes the cost much higher than the Caymans.

Tessellation is also a factor, and though AMD has improved a lot and stresses geometry more than in its earlier iterations, they are no match for nvidia. This is an important consideration IMO, because I expect games like Crysis 2 to utilize tessellation.


----------



## desiibond (Dec 15, 2010)

yes. the pricing of the HD 6970 is a bit higher than it needs to be. With the amount of profit they raked in on the DX11 series, AMD should've been more competitive. $310 for the 6970 and $270 for the HD 6950 would've been superb, and i do expect prices to fall below these very soon. 

And yes, the GTX 580 will be the fastest single-GPU card available in the market for a few more weeks at least, and I doubt AMD has an answer to it in the near future. The Fermi architecture is finally making some inroads right now. Two things that AMD needs to get right:

1. Tessellation performance (at very high settings, the HD 69xx cards lag behind the competition)
2. Physics engine (if every game manufacturer decides to make use of physics processing in games, it will be big trouble for AMD, as their Havok support is yet to make a mark)


----------



## vickybat (Dec 15, 2010)

@ desiibond

Yes buddy you are absolutely correct.

About this PhysX thing, AMD is really held back because its cards can't run the PhysX code, which then falls back to the CPU using the older x87 instruction set. If upcoming games utilize more of it, then surely AMD will be in big trouble.

Talking from my own experience, my 5750 failed to run Mafia II at 1600x900 with all settings high and APEX PhysX on; the FPS were a paltry 5-10. I think a GTS 450 would have faired a lot better in this scenario than my current Radeon. Was it a wrong buying decision on my side? What do you think on this, buddy? Will all of the 6 series fare like this in PhysX-based games?


----------



## Faun (Dec 15, 2010)

To be fair, you should use fare. And no, anything within the range of a GTX 460 would have been smothered on activating APEX PhysX.


----------



## vickybat (Dec 15, 2010)

^^

Sorry, it was a typo; I meant fare. But don't you think the GTS 450 would have been better than the 5750, because of its ability to at least run the PhysX code?

Maybe not in the league of a GTX 460, but it would at least have given playable frame rates. Now my question is: how will the 6 series fare in PhysX-based games?


----------



## ico (Dec 15, 2010)

desiibond said:


> 2. Physics engine (if every game manufactuer decide to make use of physics processing in games, it will be big trouble for AMD as their Havok is yet to make a mark.


PhysX is a joke, seriously. You really think AMD can implement "nVidia PhysX"??? It is nVidia's technology, and only the game developers who decide to go nVidia's way will use it. I've played Batman: Arkham Asylum with PhysX turned OFF and ON. PhysX turned ON just adds some fancy papers flying here and there and some extra cracks in the walls, nothing else, and that too with a ~60% frame-rate deficit. These effects aren't something that can't be achieved by developers without going the "nVidia PhysX way." Those developers are developing games for nVidia PhysX cards, so be it. Not every developer will do that if they really want their games to be blockbusters.

Havok does those physics calculations on the CPU itself, and it is owned by Intel. Every game makes use of physics processing; most just leave it to the CPU.


----------



## Arun_joseph (Dec 15, 2010)

PhysX is good, but not many games use nVidia PhysX!


----------



## Faun (Dec 15, 2010)

^^
NVIDIA GeForce GTS 450 GF106 Video Card | GeForce GTS 450,Review,Benchmarks,Performance,NVIDIA,Fermi GF106,Graphics,Video Card,3D Vision,NVIDIA GeForce GTS 450 Fermi GF106 Graphics Card Video Game and 3D Vision Benchmark Performance Review

Draw your conclusions.


----------



## ico (Dec 15, 2010)

^^ There's hardly a difference of Rs. 500 between the GTS 450 and the HD 5770 here in India. I'd go for the faster card for games, which is the HD 5770, and this is what generally gets suggested in the PC Configuration section when someone is purely going for a gaming machine, unless he mentions rendering/CUDA. PhysX isn't the deciding factor when it only supports a handful of games. Metro 2033 and Mafia II heavily favour the nVidia camp, whereas BF:BC2 is known to favour AMD cards. Just read the Crysis and CoD:MW2 benchmarks (which are again optimized for nVidia, but without fancy PhysX, so still neutral to both camps) and make the judgement.


----------



## clear_lot (Dec 15, 2010)

6970 is very overpriced in india. i wonder why.


----------



## Joker (Dec 15, 2010)

knee-jerk prices by smcinternational.in.. should get fixed.


----------



## Cilus (Dec 15, 2010)

Regarding the pricing, I completely agree with desiibond; it should be a little lower. But let's watch for a couple of days; the price drop could happen very quickly.
AMD can play a little with their pricing because of the lower manufacturing cost, due to the use of a 256-bit data bus, which is industry standard and pretty easy to implement compared to Nvidia's 320- and 384-bit memory buses.
I think the release of new drivers may also increase performance, as the Guru3D review verified that in some games it performs merely on par with the older HD 5870.


----------



## damngoodman999 (Dec 15, 2010)

Arun_joseph said:


> PhysX is good, but not many games use nVidia PhysX!



Latest games are coming with PhysX! And whether it is Nvidia or Ageia PhysX, Nvidia owns it!!


----------



## asingh (Dec 15, 2010)

Cilus said:


> Regarding the pricing, I completely agree with desiibond; it should be a little lower. But let's watch for a couple of days; the price drop could happen very quickly.
> AMD can play a little with their pricing because of the lower manufacturing cost, due to the use of a 256-bit data bus, which is industry standard and pretty easy to implement compared to Nvidia's 320- and 384-bit memory buses.
> I think the release of new drivers may also increase performance, as the Guru3D review verified that in some games it performs merely on par with the older HD 5870.



How will this affect the manufacturing cost?



damngoodman999 said:


> Latest games are coming with PhysX! And whether it is Nvidia or Ageia PhysX, Nvidia owns it!!



DG.Man, you post well, but seriously, what were you thinking here? How many games do we see coming out with some type of physics (nVidia PhysX or Havok)? Having the game do physics (paper flying, moving cloth) is different from exclusive physics being implemented via an engine. I have seen PhysX games being rendered, and as *ico* posted, it is nothing great. Just something extra. It is not the differentiating factor between ATI and nVidia. Also, I have seen Havok (CPU) physics in games like Red Faction: Guerrilla and TimeShift, but it does not make or break the game.

At the end of the day, it is about how good an FPS the accelerator can generate, coupled with the overall system.


----------



## vickybat (Dec 15, 2010)

hmm, I guess all PhysX-based games can be played on AMD cards by simply turning PhysX off. Mafia II plays rock solid on my 5750 with APEX PhysX off but everything else set to high. But since newer games are using it, it remains to be seen how a game looks with PhysX on versus off on an nVidia card.

Maybe Mafia II will have some spectacular vehicle explosions, gunfights, and realistic falling debris, to name a few, with PhysX on an nVidia card.

Whether it's great or not, I want to ask whether that extra is worth switching to an nVidia card from an equivalent AMD card when making a purchase decision. And how well does AMD's 6xxx series fare in a PhysX title, of course after turning on the PhysX option?

I know it doesn't understand the PhysX code, but will the frame rates drop to unplayable levels even on high-end AMD cards?

Please, guys, I need some answers, as this will be highly helpful for future buyers.


----------



## Ishu Gupta (Dec 15, 2010)

I have seen B:AA with and without PhysX (a 5870 with a 9600), and PhysX was better, but nothing immense. I don't care for it.


----------



## ico (Dec 15, 2010)

vickybat said:


> But since newer games are using this,


How many new games?



vickybat said:


> I know that it doesn't understand the physx code but will the framerates drop to unplayble levels even on highend amd cards?


We had a lengthy discussion on this in another thread. Yes, the frame rates will drop to unplayable levels on high-end AMD cards too.

nVidia PhysX is a proprietary physics engine. nVidia owns it; it is their property. Sources say that its SDK is free.

Traditionally, physics processing has always been done on the CPU, and it still is in 97.5% of games. PhysX processing, however, is done on the GPU, and when you turn it ON in a game you experience a frame-rate loss, as the GPU takes on more load. On nVidia cards the game will still be playable because they "support it," but not on AMD cards, because they don't.

*What happens when you turn PhysX on?*
Some paper flying and extra cracks in the walls with some flying debris, plus a ~60% frame-rate loss on nVidia cards. An unplayable game on AMD cards.

*But are these effects something which can't be implemented the traditional way?*
Of course the game developers can easily do this the "traditional way."

*But why did game devs use PhysX then?*
nVidia sponsored them, gave them $$$, or simply struck a deal with them. Nothing wrong with that, though.

*Is the performance loss in nVidia cards justified for the effects?*
No, it isn't.

*CUDA* however is no joke and is a great thing in nVidia cards.
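To put the ~60% figure quoted above into perspective, a back-of-the-envelope sketch (the 60% penalty is the rough estimate from this post, not a measured value):

```python
def fps_after_loss(base_fps: float, loss_fraction: float) -> float:
    """Frame rate left after losing a given fraction of it."""
    return base_fps * (1.0 - loss_fraction)

def frame_time_ms(fps: float) -> float:
    """Per-frame time budget in milliseconds at a given frame rate."""
    return 1000.0 / fps

base = 60.0                            # a comfortable baseline frame rate
after = fps_after_loss(base, 0.60)     # the ~60% PhysX penalty quoted above

print(after)                           # 24.0 fps
print(round(frame_time_ms(base), 2))   # 16.67 ms per frame
print(round(frame_time_ms(after), 2))  # 41.67 ms per frame
```

So a card cruising at 60 fps would be dragged down to roughly 24 fps, with each frame taking about two and a half times as long to render.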


----------



## asingh (Dec 15, 2010)

^^
Errm..or you could say "the way it's meant to be played".

Buy another GPU for PhysX.


----------



## Ishu Gupta (Dec 15, 2010)

Quality post ico. Agree with all of it.


----------



## clear_lot (Dec 15, 2010)

> Traditionally, physics processing has always been done on the CPU, and it still is in 97.5% of games. PhysX processing, however, is done on the GPU, and when you turn it ON in a game you experience a frame-rate loss, as the GPU takes on more load. On nVidia cards the game will still be playable because they "support it," but not on AMD cards, because they don't.



A little point here: the PhysX implementation is done by the game developers. The very massive frame drops experienced by AMD cards occur because, in the absence of an nVidia card, the CPU does PhysX, which is very slow because of the old x87 instructions.

The massive frame drops on nVidia cards (though smaller than on ATI) with PhysX enabled occur because PhysX is computationally expensive even for an nVidia card; it cannot do the rest of the rendering AND the PhysX calculations together.
To realistically enable GPU PhysX AND normal rendering, a SEPARATE NVIDIA CARD is a must. This extra card does nothing but PhysX;
the main rendering can be done by the primary nVidia/ATI card.
The separate nVidia card must be better than a 9600 GT for realistically playable frame rates.

source: Analysis: PhysX On Systems With AMD Graphics Cards : Introduction
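The CPU-fallback bottleneck described above can be pictured with a deliberately naive sketch (a hypothetical particle update, not real PhysX code): a GPU runs thousands of these updates in parallel, while a serial, poorly optimized CPU path has to grind through them one at a time.

```python
def step_particles(positions, velocities, gravity=-9.8, dt=1.0 / 60.0):
    """One Euler integration step over every particle, done serially.

    This serial loop stands in for the slow CPU fallback: each particle
    is updated one after another, whereas a GPU processes them in bulk.
    """
    for i in range(len(positions)):
        velocities[i] += gravity * dt          # apply gravity for one frame
        positions[i] += velocities[i] * dt     # move the particle
    return positions, velocities

# Two debris particles starting at rest
pos, vel = [0.0, 10.0], [0.0, 0.0]
step_particles(pos, vel)                       # one 1/60 s physics tick
```

With tens of thousands of debris particles per frame, that loop is exactly the kind of work that murders frame rates when it lands on a single x87-bound CPU thread instead of the GPU.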


----------



## damngoodman999 (Dec 15, 2010)

asingh said:


> DG.Man, you post well, but seriously what were you thinking here. How many games do we see with some type of physics (nVidia Physx or Havoc) coming out. Having the game do physics (paper flying/moving cloth) is different to exclusive physics being implemented via an engine. I have seen PhysX games being rendered and as *Ico* posted it is nothing great. Just something extra. It is not the differentiating factor between ATI and nVidia. Also I have seen the Havoc (CPU) physics on games like Red Falcon Guerilla and Timeshift but it does not 'make or break' the game.
> 
> End of the day it is how good FPS the accelerator can generate coupled to the overall system.



Yes, I do accept in many ways what ico says, that due to PhysX the FPS reduces by 60% even on an Nvidia GPU! But still, if someone is getting a card for more than 10K, surely they'll want eye candy too; their focus will be on more texture detail and PhysX. So is sacrificing PhysX worth it at all?

But I saw somewhere while *googling* that ATI cards provide PhysX-like effects in many games; not like Nvidia cards, but ATI cards try a bit to achieve something like PhysX (breaking particles, debris, leaves).

Is that so? Then why didn't AMD concentrate on providing an alternative to PhysX?

*Havok* is a joke; I never saw it after Red Faction: Guerrilla!


----------



## asingh (Dec 15, 2010)

^^
I think, till date, ATI cards cannot process GPU-accelerated physics of any type.


----------



## vickybat (Dec 15, 2010)

I agree with *damngoodman999* here. A high-end card like the GTX 570 will give playable frame rates with PhysX on, but, say, a 6950 or 6970 won't be able to provide playable frame rates with that feature on. So why would anybody buy an AMD card and then buy a dedicated PhysX card later*?*

AMD is not thinking about a PhysX alternative, and that is a drawback they should work on fixing, or they should provide something out of the box. Mafia II plays brilliantly with PhysX on; I have seen it on my friend's GTX 460.

@ ico

Batman: Arkham City, Mirror's Edge 2, the next Unreal Tournament, Ghost Recon: Future Soldier, Dragon Age: Origins 2, DmC (not sure), etc. are some titles supporting PhysX, and many more are on the way. If nVidia backs a AAA title like Batman: Arkham City and enables PhysX, then all high-end AMD owners, current or future, will miss out on the complete eye candy.

What's your opinion on this?


----------



## ico (Dec 15, 2010)

How many games came out in 2010?? How many of them supported PhysX??



damngoodman999 said:


> Is that So , Then Y dint AMD concentrate on providing alternative for Physx ??


If a developer wants to add those effects, he can easily do that without using PhysX, and hence without giving us a terrible frame-rate penalty.


----------



## Piyush (Dec 15, 2010)

^^valid point


----------



## asingh (Dec 15, 2010)

damngoodman999 said:


> *Havok* is a joke; I never saw it after Red Faction: Guerrilla!



Bad Company 2 uses the Havok engine alongside the Frostbite engine. Guess all jokes stop...! 



vickybat said:


> I agree with *damngoodman999* here. A high card like gtx 570 will give playable framerates with physx on but lets a say a 6950 and 6970 won't be able to provide playable frame rates with that feature on. So why will anybody buy an amd card and buy a dedicated physx card later*?*
> 
> Amd is not thinking about a physx alternative and that is a drawback which they should work on fixing or providing something out of the box. Mafia 2 plays brilliantly with physx on and i have seen it on my friend's gtx 460.



As far as I know, if a game with PhysX API calls is run with JUST an ATI card, it will render correctly and fine; only the PhysX effects will not render. How can you say that PhysX-enabled games will not run on an HD 6970, and playably at that? I am able to run Metro 2033 all maxed out just fine on my system. PhysX cannot be turned ON when just an ATI card is used, but yes, the installer forces you to install the PhysX pack, else the game will not launch. The PhysX API calls are simply not made in the game, and it renders 'normally'. 

nVidia has invested in PhysX. Game developers sign up to use this tool and become part of the program. It is portable to consoles too. As of now, the Unreal Engine 3, Gamebryo, Vision, Instinct, Trinigy, Diesel, Unity 3D, Hero, and BigWorld engines support PhysX, and publishers like EA, THQ, 2K Games, and Sega use it too. Why does ATI need to do this...?


----------



## ico (Dec 15, 2010)

Here's a video:

[youtube]_8D2Kql392c[/youtube]


----------



## vickybat (Dec 15, 2010)

asingh said:


> As far as I know, if a game with PhysX API calls is run with JUST an ATI card, it will render correctly and fine; only the PhysX effects will not render. How can you say that PhysX-enabled games will not run on an HD 6970, and playably at that?




No buddy, you got me wrong; I never said that. In fact, AMD cards will render PhysX titles fine, but with PhysX turned off. 

Let's take the example of Mafia II, and we have two cards:
1. GTX 570
2. Radeon HD 6970
We set up two systems with exactly the same hardware except the GPU, which of course will differ.
*System A* has the GTX 570 plugged in and *System B* has the Radeon 6970.

If we fire up Mafia II side by side on both systems at high settings with APEX PhysX on (now my questions):

1. Will *System B* deliver playable frame rates?
2. Will *System A* deliver playable frame rates?
3. Will there be significant differences between the two systems?


----------



## ico (Dec 15, 2010)

^^ I have answered it already a number of times. The GTX 570 will obviously run it better, and the HD 6970 will run it at an unplayable rate, because in the HD 6970's case PhysX will be processed by the CPU, for which PhysX is totally not optimized.


----------



## vickybat (Dec 15, 2010)

ico said:


> Here's a video:
> 
> [youtube]_8D2Kql392c[/youtube]



That looks pretty significant to me. We are missing out on a lot of scattering effects, and it looks less realistic with PhysX turned off.

Now, we pay for a high-end GPU to enable all the effects in a game, the way the developers meant it to be played. To churn out more FPS we could simply turn down the resolution, but that takes away all the joy.

So don't you think that if a lot of quality titles come out this year with nVidia PhysX, all the high-end AMD owners will be missing out on something?



ico said:


> ^^ I have answered it already a number of times. The GTX 570 will obviously run it better, and the HD 6970 will run it at an unplayable rate, because in the HD 6970's case PhysX will be processed by the CPU, for which PhysX is totally not optimized.



Ok, but why the significant drop in frame rates? If only the PhysX code is unsupported, then the PhysX-based effects won't get rendered, IMO, whereas the rest of the game should render normally, sans PhysX effects.

The 6970 is a powerful GPU and should render the rest of the game flawlessly, without the effects of course, so why unplayable frame rates?

Can you please explain this, buddy?


----------



## ico (Dec 15, 2010)

Do you really think that those pop-corn effects churn out 60% of your frame rate even on an nVidia card? Do you really think that they can't be implemented without PhysX? How many quality titles supported PhysX in 2010???

PhysX is a gimmick. Not my criteria of choosing a card. That's all I can say.

If X card whether from nVidia or ATi runs Crysis better in my price range, I'll go for X.

High end AMD owners can go for a small nVidia card like GTS 450.

If it looks pretty significant to you, go for an nVidia card. I'll nonetheless go for a GTX 580 in my budget of 30k if I'm buying a card, and yea, PhysX won't be the deal maker for me. I'll decide my purchase according to the card's true power.



vickybat said:


> Ok but why the significant drop in framerates? If only the physx code is not supported then physx based effects won't get rendered imo whereas the rest of th game should be rendered normally sans physx effects.
> 
> 6970 is a powerful gpu and should render rest of the game flawlessly without the effects ofcourse but why unplayable frame rates?
> 
> Can you please explain this buddy?





vickybat said:


> If we fire mafia 2 side by side in both the systems at high settings *and apex physx on :* (now my questions)


What I said was only for PhysX enabled and in reply to your post.


----------



## asingh (Dec 15, 2010)

@VickyBat:

Because it is pathetic and lame coding by nVidia. When an nVidia card is not detected by the subsystem, everything is offloaded to the CPU. These are mathematical calculations with extremely heavy floating-point workloads, and the CPU, which is supposed to be managing things, is now actually doing them.

It is not ATI's fault that these 'high end' effects cannot be rendered on their die and get offloaded to the CPU. It is the game developers and the PhysX engine which force the game to run in this manner. See this link here. Yes, the green cards are on top, but they are made for this game. You will also see some HD 5xxx series cards in there giving 'playable' rates. Now why is that...??

The Mafia 2 developers signed up for this program with nVidia. Basically it is home ground advantage to put it in layman terms. Of course the green cards will win.

Regarding your system A,B question:
System A will run better because it is customized to run on nVidia cards. ATI cards have a disadvantage here, which is not their fault. Why do you think we run Vantage benchmarks with the PhysX capability off? To give both sides an even playing field.

See this playable screenshot.
And this too, with PhysX off. The average is 101.2.
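The offloading effect described above can be sketched as a toy frame-time budget. All the millisecond figures below are made-up illustrative assumptions, not measurements from any benchmark:

```python
# Toy model: frame rate = 1000 ms divided by the total per-frame work.
# The numbers below are invented for illustration only.

def fps(render_ms, physics_ms):
    """Frames per second for a frame costing render_ms + physics_ms."""
    return 1000.0 / (render_ms + physics_ms)

render = 12.0        # ms/frame of pure graphics work
physics_gpu = 3.0    # ms when a GeForce runs PhysX on its own shaders
physics_cpu = 28.0   # ms when the same work falls back to an unoptimized CPU path

print(round(fps(render, physics_gpu)))  # ~67 fps: comfortably playable
print(round(fps(render, physics_cpu)))  # ~25 fps: the "unplayable rate" described above
```

The point is that the Radeon itself isn't slow; the frame simply stalls waiting for a CPU physics step that was never tuned to run there.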


----------



## Joker (Dec 15, 2010)

vickybat said:


> Ok but why the significant drop in framerates? If only the physx code is not supported then physx based effects won't get rendered imo whereas the rest of th game should be rendered normally sans physx effects.
> 
> 6970 is a powerful gpu and should render rest of the game flawlessly without the effects ofcourse but why unplayable frame rates?
> 
> Can you please explain this buddy?


he explained everything to you and still you're clueless.


----------



## vickybat (Dec 15, 2010)

Might not be in 2010, but maybe in 2011. So you are saying anybody interested in PhysX effects should turn away from the AMD camp and look nowhere except nVidia, right?

Even I think like you. I would choose a card according to its true power. But to be honest, *ICO*, I sometimes regret not choosing the GTS 450 when I made my purchase. At least it would have given me playable framerates in PhysX titles like Mirror's Edge and Mafia 2.

But that is a personal choice anyway.


----------



## ico (Dec 16, 2010)

vickybat said:


> Might not be in 2010 but may be in 2011. So you are saying anybody interested in physx effects should turn away from amd camp and look nowhere except nvidia right?
> 
> Even i think like you. I would choose a card according to its true power. But to be honest *ICO* i sometimes regret of not choosing the gts 450 when i made my purchase.Atleast it would have given me playable framerates in *physx titles **with PhysX enabled* like mirror's edge and mafia 2.
> 
> But that is a personal choice anyways.


Fixed your post and here are the benchmarks:
Benchmark Results: Call Of Duty: Modern Warfare 2 (DX9) : Nvidia GeForce GTS 450: Hello GF106, Farewell G92
Benchmark Results: Crysis (DX10) : Nvidia GeForce GTS 450: Hello GF106, Farewell G92
Benchmark Results: Aliens Vs. Predator (DX11) : Nvidia GeForce GTS 450: Hello GF106, Farewell G92
Benchmark Results: DiRT 2 (DX11) : Nvidia GeForce GTS 450: Hello GF106, Farewell G92
Benchmark Results: Battlefield: Bad Company 2 (DX11) : Nvidia GeForce GTS 450: Hello GF106, Farewell G92
Benchmark Results: Just Cause 2 : Nvidia GeForce GTS 450: Hello GF106, Farewell G92


----------



## vickybat (Dec 16, 2010)

asingh said:


> @VickyBat:
> 
> Because it is pathetic and lame coding by nVidia. When an nVidia card is not realized by the sub system it is all off loaded to the CPU. Now these are mathematical calculations with extremely high floating points --- and the CPU which is supposed to be managing things is now actually doing them.
> 
> ...




Ok buddy, I am satisfied. It's because of nVidia's proprietary PhysX engine that AMD cards are unable to process the effects and the poor CPU has to do them. But don't you think that lame coding is nVidia's advantage? From a competition point of view, why would nVidia allow their proprietary code to be handled by AMD GPUs? Don't you think AMD should stress a PhysX equivalent and give its cards an advantage?

So I guess I will be content playing Mafia 2 with APEX PhysX off.



ico said:


> Fixed your post and here are the benchmarks:
> Benchmark Results: Call Of Duty: Modern Warfare 2 (DX9) : Nvidia GeForce GTS 450: Hello GF106, Farewell G92
> Benchmark Results: Crysis (DX10) : Nvidia GeForce GTS 450: Hello GF106, Farewell G92
> Benchmark Results: Aliens Vs. Predator (DX11) : Nvidia GeForce GTS 450: Hello GF106, Farewell G92
> ...




The 5750 and GTS 450 are quite similar, but none of the above titles has PhysX. Anyway, we have to turn off PhysX on an AMD card and it will render fine. I am satisfied with what I have.


----------



## aby geek (Dec 16, 2010)

so is 6970 CF only weaker than 5970 CF and the Asus Ares?

and fudzilla is reliable, they were at least right about the prices

and man, Metro 2033 is Crysis's daddy: no decent fps at any setting or res. are there any benchmarks for this game on the 5970 and Asus Ares?


----------



## ico (Dec 16, 2010)

vickybat said:


> Don't you think amd should stress on a physx equivalent and give its cards an advantage?


You think AMD should come up with their own gimmick and create more retarded confusion? How naive.


----------



## vickybat (Dec 16, 2010)

aby geek said:


> so 6970 cf only weaker to 5970 cf and asus ares ?
> 
> and fudzilla is reliable they were atleast right about the prices
> 
> and man metro 2033 is crysis daddy no decent fps at any setting or res. are there any benchmarks for this game for 5970 and asus ares



you're forgetting GTX 580 SLI. 6970 CF is also weaker than it.


----------



## Joker (Dec 16, 2010)

damngoodman999 said:


> Latest games are coming with Physx ! whether it is Nvidia or Aegia Physx -> Nvidia Owns It !!


mafia 2 and metro 2033 are the only ones i could name in 2010. lol.

and yes..i own a nvidia card and i think it is a gimmick. lol.


----------



## vickybat (Dec 16, 2010)

ico said:


> You think AMD should come up with their own gimmick and create more retarded confusion? How naive.



No, I'm not being naive, but for the sake of competition AMD should up the ante.

After all, confusion will lead to better competition until something universal is accepted.

Just my thought, but I don't know if it will come to fruition.

AMD seriously has less software support and needs to work on this area.
Don't you think the same?


----------



## Joker (Dec 16, 2010)

get this thread back to topic..enough offtopic shite.


----------



## topgear (Dec 16, 2010)

damngoodman999 said:


> Top gear i never knew that u know RUSSIAN language ...





Faun said:


> lol...privet





Krazzy Warrior said:


> iXBT.com: AMD Radeon HD 6950/6970: îïèñàíèå âèäåîêàðòû è ðåçóëüòàòû ñèíòåòè÷åñêèõ òåñòîâ > Google Translate
> 
> Google rocks!



@ *damngoodman999* and *Faun* - I don't know the Russian language; look at *Krazzy Warrior*'s post. I think you guys got your answers. Let's not discuss it any further.



Joker said:


> get this thread back to topic..enough offtopic shite.



Looks like PhysX is taking all the limelight!

@ *ico* - thanks for making this one a separate thread.

BTW, updated the first post with specs and links to more reviews.


----------



## mohiuddin (Dec 16, 2010)

vickybat said:


> Very true. Cf has improved. 6950cf are the cards that scales well and beats gtx 570 sli in some benchmarks but thats due to improved driver optimisations and large frame buffer.Expect nvidia to release new drivers to improve sli scaling and thus fight back.*(we all know about nvidia's drivers prowess*). But a gtx 580 sli will still be at top but that increases the costs much higher than caymans.
> 
> Tesselation is also a factor and though amd has improved a lot and stress on geometry more unlike its earlier iterations but they are no match for nvidia. This is an important consideration imo because i see games like crysis 2 to utilize tesselation.



yea, in comparison to the 6970, the 570 is a better buy... but wait for 2~3 driver updates from AMD. In price, though, the 6950 is unbeatable.
On tessellation, see this:
Radeon HD 6950 & 6970 review

AMD Radeon HD 6970 and HD 6950 Review - Page 15

and this


----------



## mohiuddin (Dec 16, 2010)

vickybat said:


> Might not be in 2010 but may be in 2011. So you are saying anybody interested in physx effects should turn away from amd camp and look nowhere except nvidia right?
> 
> Even i think like you. I would choose a card according to its true power. But to be honest *ICO* i sometimes regret of not choosing the gts 450 when i made my purchase.Atleast it would have given me playable framerates in physx titles like mirror's edge and mafia 2.
> 
> But that is a personal choice anyways.



like *Ico* said, a 60% hit from enabling that bu**sh*t on a single nv card; an enthusiast will surely buy an extra card for PhysX (if he's really a fan of it). so it is indeed not even a tiny factor in choosing a gfx card... but seriously, in my case (and I think for other price-conscious players) PhysX s*cks balls... and again, it has already been said that those effects done with PhysX could easily be achieved by built-in software. PhysX is worthless because it isn't optimised for today's CPUs: no SSE2 instructions, not multicore optimised. nv said, 'it is the developer who made the choice, we have that SSE2 support'. my question is: why did the developers make that choice?
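The SSE2 complaint above can be sketched in code. This is not PhysX source, just an illustrative Python stand-in: scalar one-at-a-time integration versus the 4-wide batching an SSE2 loop would do. Both produce identical results; on real hardware the batched form simply maps to roughly 4x fewer packed-float instructions, which is the optimization the CPU fallback was criticized for skipping:

```python
# Toy particle integration: position += velocity * dt.

def integrate_scalar(pos, vel, dt):
    # one element at a time: the legacy x87-style path
    return [p + v * dt for p, v in zip(pos, vel)]

def integrate_batched(pos, vel, dt, width=4):
    # process `width` elements per step, mimicking an SSE2 packed-float loop
    out = []
    for i in range(0, len(pos), width):
        out.extend(p + v * dt for p, v in zip(pos[i:i + width], vel[i:i + width]))
    return out

positions = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
velocities = [2.0] * 8

# Same answer either way; only the instruction count on real silicon differs.
assert integrate_scalar(positions, velocities, 0.5) == integrate_batched(positions, velocities, 0.5)
```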


----------



## damngoodman999 (Dec 16, 2010)

YouTube - GTX 480 Vs HD 5870 Crysis Showdown

Sometimes I have to agree with ATI: with PhysX off, the ATI cards sure gonna kick the nVidia high-ends. If someone needs the eye candy, better go with nVidia!!


----------



## Cilus (Dec 16, 2010)

asingh said:


> How will this affect the manufacturing cost.


If you are using a very wide bus, it introduces lots of other problems. For example:
1. When you use a very wide bus, the lines are grouped into logical divisions. In nVidia's case each group is 64 bits wide, and the GTX 580 has 6 of them. This is done to reduce induction between the bus channels: each bus line has its own (very weak) electromagnetic field and can induce noise or wrong electrical signals in the nearest ones. So the grouping is necessary to cut that induction, and the wider the bus, the more precautions are needed to stop that effect.

2. It increases die size also. At nano-scale nodes (55nm or 40nm) there is an effect called "quantum tunneling": when the insulating barriers on the die become very thin, electrons sometimes overcome their binding forces and travel through a barrier that they classically could not cross. This introduces a leakage current flowing the wrong way, which increases power consumption and heat dissipation, so special hardware has to be used to fight those problems.

3. Even the normal fabrication of a 256-bit bus is simpler than fabricating a 384-bit bus; all the tech sites like Guru3D and Tom's Hardware mention it. This is the reason nVidia was not able to reduce the price of their high-end 200-series GPUs like the GTX 285 and 275 to fight back: they use a 448-bit or 512-bit memory bus, resulting in huge manufacturing cost.
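To put rough numbers on point 3, here is a quick bandwidth arithmetic sketch. The effective memory clocks are the commonly quoted reference specs, so treat them as assumptions:

```python
# Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).

def bandwidth_gb_s(bus_width_bits, effective_mhz):
    """Peak bandwidth in GB/s for a given bus width and effective memory clock."""
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

# GTX 285: a huge 512-bit bus, but slower GDDR3 (~2484 MT/s effective)
print(round(bandwidth_gb_s(512, 2484), 1))   # 159.0 GB/s on a big, expensive bus
# HD 6970: only a 256-bit bus, but fast 5500 MT/s GDDR5
print(round(bandwidth_gb_s(256, 5500), 1))   # 176.0 GB/s from the cheaper, narrower bus
```

This is why faster memory on a narrower bus can beat a wide bus on both bandwidth and manufacturing cost.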


----------



## vickybat (Dec 16, 2010)

@ *damngoodman999*

No buddy, even with PhysX off, ATI is lagging behind nVidia's high-end cards this time. The 6970 & 6950 trail behind the GTX 580 and GTX 570 duo in all the benchmarks and at all resolutions.

And talking about SLI vs CF: SLI needs some further optimisation for the newer cards, and a new driver would probably fix it. The competitor to 6850 CF will come in the form of GTX 560 SLI. This time it's all nVidia in the top segment, and expect that trend to continue down to midrange territory.

If ati can get the pricing right, we can see some decent competition.

@ *cilus*

Nice piece of info buddy. But what has bus width to do with die size?

If manufacturers know about the "quantum tunneling effect", why would they fabricate thinner dies in the first place? A silicon die with more area can incorporate a larger number of transistors, and the nano architecture, i.e. 55nm or 40nm, refers to the transistor size.

Transistors being small in size, they can be incorporated into smaller dies without sacrificing their numbers, and the total number of transistors is directly proportional to performance. But to reduce power consumption and heat, the die shrink and nano fabrication are necessary, which also cuts down cost.

Both AMD and nVidia are currently using 40nm transistors in their dies, but AMD's die size is smaller than nVidia's, and the latter has more transistors.



mohiuddin said:


> like *Ico* said 60% hit with enabling that bu**sh*t with a single nv card, an enthusiast surely will buy an xtra card for phys(if he really a fan of that).so, it is indeed not a even tiny factor for choosing gfx card...but , seriously ,in case of mine(and i think other price concern players) physx s*cks balls...and again it is already being saidthat those effects with physx ,could easily be achieved by in_built softwares.physx is worthless, coz, it isn't optimised for todays cpus, no SSE2 instruction,no multicore optimised.nv said ,'it is developer who made the choice ,we have those sse2 supports'. my question is 'why developers made the choice?



Post properly, because that sms style looks crappy imo.

AMD has no trick up its sleeve from a software point of view. nVidia did this to gain the upper hand, and if more games support PhysX, more nVidia cards will sell.

Why would nVidia want AMD cards to support PhysX? That's the reason for not adding the SSE2 instructions: just to cripple AMD in nVidia titles.

And a person who wants PhysX would never go for an AMD card, even the high-end ones. They would prefer nVidia, and current AMD owners who want to go the PhysX way will add a second nVidia card.

nVidia is supporting more titles than AMD, and some are even AAA titles. That's why the developers do what nVidia tells them to do.

And if that sort of effect can be done without PhysX code, why doesn't AMD do it first and then do the talking? At least we would have some fair competition.



mohiuddin said:


> yea, in comparison to 6970, 570 is a better buy...but wait for 2~3 driver updates from amd.But, in price, 6950 is unbeatable.
> in tesselletion, see this.
> Radeon HD 6950 & 6970 review
> 
> ...



After AMD's 2-3 driver updates, nVidia will also have had 2-3 driver updates.

The bottom line is that both Caymans lag their nVidia counterparts in all the benchmarks. Even the 570 matches the 6970 in performance, and nVidia will release a driver to improve its SLI performance.

In tessellation AMD still lags nVidia, and at higher resolutions the gap increases.
If anything will save AMD's Caymans now, it is going to be pricing.


----------



## Cilus (Dec 16, 2010)

vickybat, if you put in a huge memory bus then it will take some extra space, right? That's why a huge memory bus always increases the total die size. Plus, the extra hardware to arrange the bus and to prevent the effects I've mentioned earlier increases it further.
Check the die sizes of the GTX 285 and 280, which use a 512-bit memory bus. They are HUGE.



> If manufactures know about "quantum tunneling effect", why will they fabricate thinner dies in the first place? A sillicon die which has more area can incorporate large no. of transistors & nano architecture i.e 55nm , 40nm refers to transistor size.


They are using it because they know how to counter it. And the larger the die, the more often this effect occurs, so you need some extra hardware to counter it.

Regarding PhysX, we are fighting again.
As I've mentioned earlier, as per all the main review sites, PhysX is a nice add-on but not at all a deciding factor. It is the gaming performance which is still the deciding factor to this day.


> And a person who wants physx would never go for an amd card even high end ones.


This is not entirely true, bro. I've checked several forums and the number of ATI+nVidia configurations for PhysX is pretty high.
Let me give you an example. Say you have 17-18K to spend on a gfx card setup. On the nVidia front, the best performance will come from a custom-PCB GTX 470, available @ 16K.
On the Radeon front the card is, say, the HD 6870, available @ 14.5K. Both cards deliver almost the same gaming performance (forget PhysX for now).
Now, if you enable PhysX on a single GTX 470, your FPS takes a sudden shock and drops to almost half of what it was with PhysX disabled. So your gaming performance is now actually worse than the HD 6870's, down at the level of a GTX 460 768 MB, or at best the 1 GB version.
Let's consider the other scenario: get an *HD 6870 @ 14.5K + GT 240 @ 4K = 18.5K*.
*Gaming performance: superb; PhysX performance: above average.*
Less heat, less power consumption.

Now which solution looks better to you? With a fixed amount to spend, buying a single nVidia card for both PhysX and games is not a good choice unless you are buying a very, very high-end card. Otherwise you have to be satisfied with a GTX 470 delivering GTX 460 performance when PhysX is enabled. At each price point, except very recently (the release of the HD 69xx series), AMD is still leading the benches. That's why PhysX simply can't be any sort of deciding factor at all.
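The budget arithmetic above can be written out explicitly. The prices and the "drops to almost half" figure are taken from the post; the baseline fps is an assumed placeholder, not a benchmark:

```python
# Comparing the two ~18K setups from the post above (illustrative numbers).
baseline_fps = 60.0                  # assumed fps of either card with PhysX off

# Option 1: single custom-PCB GTX 470 doing both rendering and PhysX
gtx470_cost = 16_000
gtx470_physx_fps = baseline_fps / 2  # "drops to almost half" when PhysX is on

# Option 2: HD 6870 for rendering + a cheap GT 240 as a dedicated PhysX card
combo_cost = 14_500 + 4_000          # = 18.5K
combo_physx_fps = baseline_fps       # rendering fps is untouched

print(gtx470_cost, gtx470_physx_fps)  # 16000 30.0
print(combo_cost, combo_physx_fps)    # 18500 60.0
```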


----------



## vickybat (Dec 16, 2010)

Joker said:


> get this thread back to topic..enough offtopic shite.




Nobody is talking offtopic shite here. Anything involving the Cayman GPUs is not offtopic at all in this thread.



Cilus said:


> Regarding PhysX, we are again fighting.
> As I've mentioned earlier as per all the main review sites PhysX is a nice add on but not at all deciding factor. It is the gaming performance which is still the deciding factor till day.
> 
> This is not entirely true bro. I've checked several forums and the numbers of ATI+nVidia configuration for PhySX is pretty high.
> ...



I got that, bro, but the 6870 will take a significant framerate hit compared to the GTX 470. The 6870 will be unplayable, but the 470 will still provide playable framerates and, as you say, be in the league of a GTX 460, which is good.

Now let's say we have a 20-25k budget: what should we choose, a GTX 570 or a Radeon 6950, considering PhysX? Let's say both cost the same in India; what should a buyer choose if he or she prioritizes PhysX?

And what if in future nVidia releases a driver that disables the second nVidia card when it detects an AMD card in the system? We all know nVidia can, and probably will, release a driver to stop this. What will happen then, with PhysX in the picture?


----------



## ico (Dec 16, 2010)

You prioritize PhysX? Get an nVidia card at any cost. End of discussion. I can name hardly two games that came out in 2010 which support PhysX.


----------



## Joker (Dec 16, 2010)

vickybat said:


> No buddy , even with physx off, ati is lagging behind nvidia high end cards this time. The 6970 & 6950 trails behind the gtx 580 and gtx 570 duo in all the benchmarks and all resolutions.


first thing... there is NO competition to the HD 6950... it is in a unique range and the GTX 560 is still 1.5 months away.
HD 6970 = GTX 570... a draw... and it wasn't even targeted at nVidia's top card, as acknowledged by AMD itself... they were not able to move to 32nm properly so they had to fall back to 40nm, hence the new architecture isn't delivering well here.
GTX 580 will get pipped by the 6990 in April, which will actually be a 6950x2, and it most definitely will, because AMD chips are scaling excellently.

now can nvidia make a GTX 580 X2???? a thermally impossible task... GTX 580 is a huge die, 520 sq mm, and they obviously can't do it without clock deficits.

AMD just doesn't have the top card, but it is still way more competitive in the midrange segment.


----------



## ico (Dec 16, 2010)

vickybat said:


> I got that bro but 6870 will have a significant performance hit compared to gtx 470 in framerates. 6870 will be unplayable but 470 will still provide great playable framerates and as you say in the league of a gtx 460 which is good.


GTX 470 is crap. It overheats and has power problems.

GTX 460 is excellent around the 10k price mark. Slightly above it is the HD 6850 @ 11.3k, which performs better. Then we have their OCed versions, respectively.



vickybat said:


> Now lets say if we have 20-25k budget, what should we chose? A gtx570 or radeon 6950 considering physx. Lets say both will cost the same in india


They will not cost the same. HD 6950 will be cheaper.



vickybat said:


> what shall a buyer choose if he or she prioritizes physx.


You know the answer and you still keep on asking the same question 100 times? 


vickybat said:


> And if in future nvidia releases a driver that stops the second nvidia card if it realizes an amd in the vicinity then. We all know nvidia can or will release a driver to stop this. What will happen then considering physx in the picture?


afaik, nVidia has already done that. You have to use this patch: Hybrid PhysX mod v1.03 / v1.04ff



vickybat said:


> And if that sort of effects can be done without physx code, why doesn't amd do it and then do the talking. Atleast we will have some fair competition.


AMD is a chip maker, not a game developer. AMD is already fair; it's nVidia which started this PhysX thing. One company, which has been dominated by AMD for the last 3 years, needed a gimmick to sell off their cards during that period because they knew they were getting spanked in all corners.

I might sound like an AMD fanboy from my posts, but I have always owned a nVidia chip. lol.


----------



## vickybat (Dec 16, 2010)

Joker said:


> first thing...there is NO competition to HD 6950...it is in a unique range and GTX 560 is still 1.5 months away.
> HD 6970 = GTX 570...a draw.....and it wasnt even targetted at nvida's top card acknowledged by AMD itself...they were not able to move to 32nm properly so they had to switch back to 40nm hence the new architecture isnt delivering well here.
> GTX 580 will get pipped by 6990 in April which will actually be 6950x2 and most definitely it will cuz AMD chips are scaling excellent.
> 
> ...




Check this for the competition to the 6990. The 6970 is their top-end single-GPU card, not the 6990, because that one is going to be dual.

I guess that answers your question.



ico said:


> GTX 470 is crap. Overheats and power problems.
> 
> GTX 460 is excellent around the 10k price mark. Slightly above it is the HD 6850 @ 11.3k which performs better. Then we have their OCed versions respectively..



The 470 is not crap; many gamers around the world are satisfied with its performance. It has done its job well.




ico said:


> They will not cost the same. HD 6950 will be cheaper.



From Indian sites, we see the 6950 costing around 20k or more, which is in the 570's league, and the latter is a much better card.




ico said:


> You know the answer and you still keep on asking the same question 100 times?



I'm not asking any questions now cause i already know the answers.



ico said:


> afaik, nVidia has already done that. You've to use this patch: Hybrid PhysX mod v1.03 / v1.04ff



AMD has never matched nVidia in driver support. You will definitely see better SLI scaling with the upcoming drivers. What I meant to say was: if nVidia somehow disables it, or that mod stops working in a future driver release, where will the AMD owners prioritizing PhysX go with their second PhysX card?

That's what I was asking, buddy.




ico said:


> AMD is a chip maker, not a game developer.  AMD is already fair, it's nVidia which started that PhysX thing; one company which was being dominated by AMD since 3 years needed a gimmick to sell their cards off during that period because they knew they are getting spanked all corners.



Nonetheless, nVidia cards were always selling and never lagged in that department. I know AMD is a chipmaker, but it should offer better software support for proper utilization of its chips. Does AMD have an answer to CUDA?

It does, actually, in the form of Stream, but where is the support? This is where AMD lags, and PhysX is just another example, though, as you say, not as much as CUDA.



ico said:


> I might sound like an AMD fanboy from my posts, but I have always owned a nVidia chip. lol.



It's not about being loyal to any camp. Even I had never owned an AMD card before now. LOL


----------



## Cilus (Dec 16, 2010)

Vickybat, you are stuck on PhysX and that's why you are complaining so much against Radeon. The thing is, PhysX amounts to nothing at all in current-generation games. If an experienced person did not tell you what differences enabling PhysX makes, you probably wouldn't be able to see them at all.
I'm telling you this because I've checked both Batman: Arkham Asylum and Mafia II in my friend's machine @ 1440x900 with a GTX 285 and in my system with my 6870. Believe me, ico is right: the difference is so small it is not at all worth the FPS drop and reduced playability.
It isn't even worth your fighting over here...
Even Tom's Hardware explicitly mentioned that only if you have an nVidia PhysX-capable card gathering dust should you go for PhysX. Otherwise, no need at all. They are a highly technical site, they reviewed it quite well, and the conclusion: PhysX is nothing you must have.


----------



## ico (Dec 16, 2010)

vickybat said:


> From indian sites, we see 6950 to be costing around 20k or more which is in 570's league and the latter is a much better card.


They'll get revised.



vickybat said:


> Amd has never matched nvidia in driver support.


More like a big myth. nVidia is better overall, I agree, but only on Linux is AMD truly no match for nVidia.



vickybat said:


> What i meant to say was if nvidia sort of disables or that mod doesn't work in a future driver release, where will the amd owners prioritizing physx will go with their second physx card?


A patch will be out again; cat and mouse. I suggest you start a poll; you'll see how many people prioritize PhysX. Not as many as you are thinking or making it out to be. It is neither a deal maker nor a deal breaker for most. Only two average games came out in 2010 which utilized PhysX; hardly anyone prioritizes it. If someone thinks that playing a countable few games with extra effects is important to him, you again know what he will have to do: sell off the AMD card and get nVidia.



vickybat said:


> I know amd is a chipmaker but it should offer better software support for proper utilization of its chips.


Do you know what DirectX API and OpenGL are for?



vickybat said:


> Does amd has an answer to cuda?
> 
> It does actually in the form of STREAM but where is the support? This is where AMD lags and physx is just another example but as you say not so much like cuda.


CUDA can be a deal maker or breaker if you are into rendering, as I have already posted thrice.

The thing is, CUDA again is a proprietary standard, not an "open" standard. FireStream utilizes OpenCL, which is "open." nVidia supports both CUDA and OpenCL.

And heck, both are in their infancy.


----------



## vickybat (Dec 16, 2010)

Cilus said:


> Vickybat, you are sticking with PhysX and that's why you are so much complaining against Radeon. The thing is PhysX is nothing at all in all current generation games. If an experience person does not tell you what are the differences coming for enabling PhysX, you probably won't be able to see it actually.
> I'm telling it because Batman Arkum Asylum and Mafia II both I've checked in my friend's machine @ 1440X900 resolution with a GTX285 and in my system with my 6870. Believe me, ico is right, the difference is so so small and not at all worth the FPS drop and reduced playability.
> It does not worth even your fighting over here... :
> Even at Tom's Hardware, they explicitly mentioned that if you have a Nvidia PhySX enable card lying on dust, then only go for PhysX. Otherwise... no need at all. They are highly technical site and they reviewed it quite well and the conclusion..Nothing for must have PhySX.



No buddy, I am not fighting for PhysX; I just wish AMD had better software support. Not just a PhysX equivalent but a CUDA equivalent and others as well.

Don't you agree it's no match for nVidia in this department?


----------



## Cilus (Dec 16, 2010)

AMD can't have CUDA, it is an nVidia patent, but they do have ATI Stream, which may not be as mature as CUDA (read: mature, not less powerful) but is currently showing significant growth. Initially the number of software packages supporting CUDA but not ATI Stream was higher, but nowadays most of the major software, like Adobe, Cyberlink, Corel, 3D Studio etc., supports ATI Stream.
In fact, in video encoding ATI cards are faster.


----------



## vickybat (Dec 16, 2010)

ico said:


> They'll get revised.
> 
> 
> More like a big myth. nVidia is better overall, I agree but only on Linux, AMD is no match for nVidia.
> ...


----------



## ico (Dec 16, 2010)

vickybat said:


> Its good nvidia supports both open computing language and ofcourse their own cuda but don't you think  amd  should move beyond and develop its own framework? This is what i was saying all the time about support and development.


You just don't seem to understand one thing. AMD doesn't believe in reinventing the wheel. It will support something if it is "open" and common to all i.e. standards which are truly "open and universal" and are not proprietary. AMD supports OpenCL (should I again repeat it 10 times?) which is "open" and not proprietary like CUDA.

If a company X believes in fragmentation, the other doesn't have to follow the same route. *Get over it.* What you want is both companies coming up with their own competing technologies/frameworks for things which should rather be common to every developer, causing fragmentation + retarded confusion.



vickybat said:


> Check this buddy.
> 
> And also this.


This by no means is related to the discussion we are having here.



vickybat said:


> Both the camps support them so whats the big deal?


The big deal is your understanding. Those two things are universally accepted and hence supported by both camps. AMD is supporting the universally used standards and it is MORE than enough. It doesn't have to go beyond and cause fragmentation. *End of discussion.*


----------



## Joker (Dec 16, 2010)

*vickybat*

i suggest you sell off your HD 5750 and get a GTS 450... be satisfied and stop this useless discussion.

both companies have different ideologies and are managed differently. so **** it and get over it.


----------



## Cilus (Dec 16, 2010)

vickybat, your 5750 also supports lossless bitstream audio over HDMI, and even if you had bought an old HD 4850 card, you would still get bitstream audio over HDMI.
All the 6xxx series cards support 3D Blu-ray playback, and you don't need any specific display, like nVidia specifies, to watch Blu-ray. A normal 3D TV can be used with the Radeon 6xxx series cards.

This is off-topic for the discussion, however.


----------



## vickybat (Dec 16, 2010)

ico said:


> You just don't seem to understand one thing. AMD doesn't believe in reinventing the wheel. It will support something if it is "open" and common to all i.e. standards which are truly "open and universal" and are not proprietary. AMD supports OpenCL (should I again repeat it 10 times?) which is "open" and not proprietary like CUDA.
> 
> If a company X believes in fragmentation, the other doesn't have to follow the same route. *Get over it.* What you want is both companies coming up with their own competing technologies/frameworks on simple things which should rather be common to a developer and cause fragmentation + retarded confusion.



AMD's silly ideologies are not taking us anywhere. I don't have a personal grudge, but I am not satisfied with the support, that's all. That is just a personal opinion though.




ico said:


> This thing is by no means related to the discussion which we are having here.



That wasn't for you; it was for Cilus.




ico said:


> The big deal is your understanding. Those two things are universally accepted and hence supported by both camps. AMD is supporting the universally used standards and it is MORE than enough. It doesn't have to go beyond and cause fragmentation. *End of discussion.*



AMD is simply limiting itself by sticking to universally accepted standards, which the other camp supports anyway, so it doesn't set them apart. Developers are getting paid to develop; they are not doing it for free, i.e. their job is not "open". lol * Maybe it's more than enough for you, but it's not everyone's cup of tea. *

The same thing happens in consoles when you compare multiplatform games with exclusives. The differences are fairly big. That's because the developers put more effort into a proprietary engine, e.g. Uncharted 2. Had it been multiplatform, it wouldn't have been the masterpiece it is today. So universally accepted standards are not necessarily the way to follow; thinking out of the box sets new benchmarks and raises the bar (if thought through properly).

Guess it's not taking us anywhere, so we surely *end this discussion* right here.


----------



## vickybat (Dec 16, 2010)

Joker said:


> *vickybat*
> 
> I suggest you sell off your HD 5750 and get a GTS 450... be satisfied and stop this useless discussion.
> 
> Both companies have different ideologies and are managed differently. So **** it and get over it.




No buddy, I'm not that rich.  I have to be content with what I have currently.

But for future upgrades, if things stay the same or at least follow this trend, then it will always be NVIDIA.


----------



## asingh (Dec 16, 2010)

End of the day I do not care two hoots what the GPU does as long as it is spinning out games to me quick and fast -- with all the eye-candy cranked up. My summary of this thread so far has been:

*Drivers:*
Both are equally fine. If one browses other forums, nVidia and ATI have just the same amount of problems. Saying ATI drivers are worse than nVidia's is a pure myth. People are equally happy and just as dissatisfied. ATI does not have profiles as of now, but they release CAP installers which are tweaked to enhance the driver for games. And instead of whining about drivers, there are various tools out there which let you tweak and enhance driver functions (not OC) which are not otherwise available. For example, profiling is available for CCC using RadeonPro, developed by a cool dude on guru3d.com.

*XfireX/SLI:*
It is really easy to read up reviews and run comments. Once a user has used a multi-GPU subsystem they will understand the pros and cons of two GPUs installed. Real experience counts here. I never comment on SLI because I never used it. Sounds good though. XfireX, of course, I know what I am talking about.

*PhysX/Physics:*
Well, it looks nice. If my GPU can render it, WHY NOT. Again, I club tessellation and physics into one bucket. If the VGA can do it and I still get quick frame rates (since I only play FPS/TPS), I am fine. Nothing to run after and be all crazy about. This thread sounds like PhysX is more important than FPS.

*CUDA/STREAM:*
Okay, what is this used for? How many of us have really used this, or will use it so much that we treat it as a differentiator? It's just another on-the-side addition to the CPU via the GPU. Again, not a maker or breaker for GPUs.

Honestly, when I bought my GPUs and the ones before, I never even looked at these factors. Pure gimmick and marketing tricks by BOTH camps. It's only the FPS I want my GPUs to deliver, nothing else...! And that is what I will pay for.


----------



## vickybat (Dec 16, 2010)

Well said *asingh* and that is how it should be.

Currently we are getting more FPS from the green camp in the topend segment and that makes them highly recommended.


----------



## damngoodman999 (Dec 16, 2010)

asingh said:


> End of the day I do not care two hoots what the GPU does as long as it is spinning out games to me quick and fast -- with all the eye-candy cranked up. My summary of this thread so far has been:
> 
> *Drivers:*
> Both are equally fine. If one browses other forums, nVidia and ATI have just the same amount of problems. Saying ATI drivers are worse than nVidia's is a pure myth. People are equally happy and just as dissatisfied. ATI does not have profiles as of now, but they release CAP installers which are tweaked to enhance the driver for games. And instead of whining about drivers, there are various tools out there which let you tweak and enhance driver functions (not OC) which are not otherwise available. For example, profiling is available for CCC using RadeonPro, developed by a cool dude on guru3d.com.
> ...



Yes, everything you say is correct.

PhysX - it depends on whatever NVIDIA does with it, but ATI can still give awesome FPS & eye candy.

CUDA - it doesn't play a big part in games.

Xfire/SLI - Xfire has superbly improved from 2008 to 2010; all games which utilize SLI also use Xfire. Great work, ATI.


DRIVERS - here come the problems which I faced myself:

ATI - a driver upgrade sometimes gives poorer performance than the previous version [that's weird]
Upgrading drivers without removing the OLD drivers gives a BSOD, which never happens with NVIDIA!!
CCC disappears / won't function sometimes
Heat increased after a driver update??


----------



## ico (Dec 16, 2010)

damngoodman999 said:


> [*]Upgrading drivers without removing the OLD drivers gives a BSOD, which never happens with NVIDIA!!


It happened to me only a week ago!


----------



## asingh (Dec 16, 2010)

@ICO;DGMan:
Well, you have the option to uninstall, right? Why not use that?

Drivers need to be understood. At times there is no need to update. I am still using 10.5a; it is good enough.


----------



## ico (Dec 16, 2010)

asingh said:


> @ICO;
> Well, you have the option to uninstall, right? Why not use that?


Well, in my case it was Apple's Boot Camp which had an update, so it had to upgrade all my Windows drivers.


----------



## vickybat (Dec 16, 2010)

^^
Do you have a mac?


----------



## Piyush (Dec 16, 2010)

^^yup he does


----------



## mohiuddin (Dec 16, 2010)

The 6950 is taking a new place - no one to compete with!!! As the GTX 5xx cards are revised versions of the GTX 4xx (their ALU/SIMD/SP layout is almost the same), I guess driver updates will not affect them as much as the 6900 driver updates will, because the 6900s use a totally different VLIW layout... AMD has a long way to go with their driver updates.

Guys, this may be old, but I didn't see any link to it here, so posting it.

Radeon HD 6950 CrossfireX review

if i did something wrong, pardon me.


----------



## vickybat (Dec 16, 2010)

@ MOHIUDDIN

Well, don't say there's no one to compete with, as you can see the 570 beating it in all the benchmarks in both single and multi-GPU avatars. VLIW refers to AMD's architectural arrangement of stream processors, and the special-function SP has been removed in VLIW4.

Driver updates will happen and continue to improve performance in both camps. GF100 is not completely similar to GF110; there are differences.

In the benches the GTX 570 is also scaling almost twice: its score in Metro was 27 FPS single and 54 in SLI at 1920x1080 with PhysX off, whereas the 6950 got 25 and in Xfire it was 50.

So both are scaling well and expect these numbers to improve.
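As a quick sanity check, the scaling claim is just arithmetic on the FPS numbers quoted above (the helper name is only for illustration):

```python
# Multi-GPU scaling factor = multi-GPU FPS / single-GPU FPS.
# FPS numbers are the Metro 2033 figures quoted above (1920x1080, PhysX off).

def scaling_factor(single_fps, multi_fps):
    """Return the speed-up from adding a second GPU."""
    return multi_fps / single_fps

gtx570_sli = scaling_factor(27, 54)   # 2.0 -> near-perfect scaling
hd6950_cfx = scaling_factor(25, 50)   # 2.0 -> near-perfect scaling

print(f"GTX 570 SLI scaling: {gtx570_sli:.2f}x")
print(f"HD 6950 CFX scaling: {hd6950_cfx:.2f}x")
```

Both setups come out at roughly 2x on these particular numbers, which is what "scaling almost twice" means here.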

What makes the 6950 sweet is its estimated price, but we have to wait for Indian prices.

Technically the 6950 was made to compete with the GTX 570, as they are their respective companies' second flagship models, and we are seeing the 570 as the clear winner here.


----------



## clear_lot (Dec 17, 2010)

^ Indian prices are a bit*h


----------



## Cilus (Dec 17, 2010)

Just got the review of HD 6950 Crossfire on guru3d. It's 1.8x to sometimes 2x scaling... really unbelievable! *Who said AMD has bad driver support?* In some 60% of the benchmarks it is just below GTX 580 SLI and almost all of the time on par with GTX 570 SLI.
I think the HD 6950 is gonna be the next star, as at its price point it has no competitor.

asingh, I also support you regarding the driver issue and PhysX, but CUDA and Stream are not that bad, as they have real utility which clearly benefits users.
I myself do GPU-based video editing and encoding using Cyberlink tools and some open-source software like MediaCoder. The performance boost is really very high. I can convert videos in less than half an hour, which took almost 2 to 2.4 hrs on my Phenom II 955.
Similarly, I also use Media Player Classic Home Cinema to play my Blu-ray rips and HD content using my GPU, adding several post-processing filters to enhance quality, upscale non-HD content to HD resolution, etc.

Now, I'm not at all a professional in those fields, yet I still got several advantages from CUDA/Stream. Imagine how much a professional could benefit.
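For a rough sense of the gain described above, here is the back-of-the-envelope arithmetic on the times quoted (CPU: ~2 to 2.4 hrs, GPU: under half an hour):

```python
# Rough speed-up of GPU-accelerated encoding vs. CPU-only encoding,
# using the conversion times quoted above.

def speedup(cpu_hours, gpu_hours):
    """How many times faster the GPU path is than the CPU path."""
    return cpu_hours / gpu_hours

low  = speedup(2.0, 0.5)   # 4.0x at the low end
high = speedup(2.4, 0.5)   # 4.8x at the high end

print(f"GPU encoding is roughly {low:.0f}x to {high:.1f}x faster")
```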


----------



## damngoodman999 (Dec 17, 2010)

asingh said:


> @ICO;DGMan:
> Well you have the option to uninstall right. Why not use that..?
> 
> Drivers need to be understood. At times there is no need to update. I am stilling using 10.5a, it is good enough.



OK, we'll END this discussion! ATI sure needs to concentrate on DRIVERS! IMO

I called a Chennai dealer about the XFX & MSI prices on the HD 6950 - it's 19,000/-. Whoa, only 10% more performance than the HD 6870 for 5K more; that's horrible!!

What do you guys recommend? Is getting the HD 6870 better, or the HD 6950?


----------



## Liverpool_fan (Dec 17, 2010)

asingh said:


> *Drivers:*
> Both are equally fine. If one browses other forums nVidia and ATI have just the same amount of problems. Saying ATI drivers are worse than nVidia is a pure myth.


Not in Linux. 
But yeah, they are improving steadily in that respect, and the ATI OSS drivers are coming along well too.


----------



## vickybat (Dec 17, 2010)

damngoodman999 said:


> OK, we'll END this discussion! ATI sure needs to concentrate on DRIVERS! IMO
> 
> I called a Chennai dealer about the XFX & MSI prices on the HD 6950 - it's 19,000/-. Whoa, only 10% more performance than the HD 6870 for 5K more; that's horrible!!
> 
> What do you guys recommend? Is getting the HD 6870 better, or the HD 6950?





Buddy, if the 6950 is 19k it would be wise to spend 3k more and go for the GTX 570, which is a much faster card, and the GF110 GPUs are power efficient as well as thermally cool.

The MSI N570 GTX is available for 22k at SMC, which would be much better.


----------



## damngoodman999 (Dec 17, 2010)

vickybat said:


> Buddy, if the 6950 is 19k it would be wise to spend 3k more and go for the GTX 570, which is a much faster card, and the GF110 GPUs are power efficient as well as thermally cool.
> 
> The MSI N570 GTX is available for 22k at SMC, which would be much better.



It's too high for me; I play rarely. The HD 6870 should be good.

The HD 6950 is only 5 to 8 FPS faster than the HD 6870 - is that right?


----------



## vickybat (Dec 17, 2010)

@ damngoodman999
Wait for GTX 570 prices to fall. Your current GTX 260 can still handle games well. Believe me, it will be a future-proof buy because newer games which favour tessellation will perform better on the 570. 

Just wait a bit, buddy, as a price drop is imminent.


----------



## topgear (Dec 18, 2010)

Here's a comparison among these new hot GPUS and one old one - I know it's far from perfect but it's a good starting point anyway : 

GeForce GTX 570 ( Temp 79C Oced )

Core Clock: 800MHz
Shader Clock: 1600MHz
Memory Clock: 4500 MHz

MW2 1920*1200 161
BC2 1920*1200  64

Radeon HD 6970 ( Temp ?? C Oced )

Core Clock: 950MHz
Shader Clock: 950MHz
Memory Clock: 6192 MHz

MW2 1920*1200 147
BC2 1920*1200  60

Radeon HD 6970 :

Temp : 80C at Stock Clock Speed
GPU power consumption : ~ 207 Watts

GeForce GTX 570 :

Temp : 77C at Stock Clock Speed
GPU power consumption : ~ 213 Watts

Radeon HD 6950 :

Temp : 77C at Stock Clock Speed
GPU power consumption : ~ 158 Watts

Radeon HD 5870 :

Temp : 77C at Stock Clock Speed
GPU power consumption : ~ 209 Watts

Benchmark Resolution : 1920*1200

COD MW2 DX9 :

GTX 570 > HD6970 > HD 6950 > HD 5870

FC2 DX 10 :

GTX 570 > HD6970 > HD 6950 > HD 5870

Anno 1404 :

HD6970 > HD6950 > HD 5870 > GTX 570

Crysis Warhead :

HD6970 > GTX 570 > HD6950 > HD5870

3DMark Vantage :

GTX 570 > HD6970 > HD 5870 > HD6950

Metro 2033 : DX11 PhysX off

HD6970 > GTX 570 > HD6950 > HD5870

DiRT 2 :

GTX 570 > HD6970 > HD 5870 > HD6950

BFBC2 :

HD6970 = GTX 570 > HD 5870 > HD6950

3D Mark 11 :

HD6970 > GTX 570 > HD6950 > HD5870

( All benches based on guru3d )

*Here goes the prices :*



Cilus said:


> Guys, one good news for you, guess...Ya, you are correct, the pricing of HD 6950 and HD 6970 in India. Got this from Tech enclave dealer's paradise.
> 
> *HIS HD 6950: 18.5K
> HIS HD 6970: 22.5K*
> ...



More Benches : ( Based on toms hw )

Lost Planet 2 :

GTX 570 > HD6970 > HD 6950 > HD 5870

Just Cause 2 (DX11)

HD6970 > HD 5870 > HD6950 = GTX 570

Aliens Vs. Predator (DX11)

HD6970 > GTX 570 = HD 5870 > HD6950


----------



## clear_lot (Dec 18, 2010)

^ so whats the conclusion?


----------



## damngoodman999 (Dec 18, 2010)

clear_lot said:


> ^ so whats the conclusion?



We should draw the conclusion ourselves: the HD 6950 is the better buy & VFM, but if it were 17K it would be great.

I am surely going to wait; no good game is releasing for up to a month - only DEAD SPACE on Jan 25... so waiting is worth it!


----------



## Joker (Dec 18, 2010)

Verdict is... HD 6970 = GTX 570... at the same price, which is 22.5k, I'd go with NVIDIA's offering.

The GTX 580 remains the fastest single-GPU card... although still slower than the HD 5970, I'll again go with NVIDIA's offering because it is a single GPU.

No competition for AMD in the midrange section... it is beefed up with the HD 6850, HD 6870 and HD 6950. Also very, very improved CrossFire scaling with the new AMD GPUs... almost double performance.


----------



## clear_lot (Dec 18, 2010)

6970 = 570
I will go with the 570 as it runs cooler and quieter.


----------



## Ishu Gupta (Dec 18, 2010)

^
and for CUDA (Physx too).


----------



## vickybat (Dec 18, 2010)

clear_lot said:


> 6970 = 570
> I will go with the 570 as it runs cooler and quieter.



Great choice. GO FOR IT. Try the msi version.


----------



## aby geek (Dec 19, 2010)

@vickybat I meant to ask whether a 6970 CF is only weaker than a 5970 CF and the Asus Ares among Radeon cards.

So am I right that the red team only has 2 cards better than that?

And how much do drivers count for in the performance extraction in this comparison?


----------



## vickybat (Dec 19, 2010)

@ aby geek

Well buddy, the 5970 is a dual GPU made up of two 5850's on a single PCB. The Ares is a customized card made by Asus which is a combination of two 5870's. Now, since both these cards are dual GPUs, you cannot compare them with a 6970 as it is a single-GPU card. Crossfiring a 5970 or Ares means 4 GPUs running simultaneously against the two GPUs of a 6970 CF. So a 5970 CF or Ares CF will beat a 6970 CF (though they won't scale as well as the 6970 CF), but the cost goes way up, especially with the Ares.

They can be compared against AMD's upcoming 6990 (read: Antilles), which is a dual GPU and will definitely be more powerful than a 5970 or Ares.

Nvidia is also coming up with their dual-GPU solution, still unnamed but hopefully the GTX 595. Early leaks and rumors suggest it will have two GF110 chips, most probably from the GTX 580.


----------



## aby geek (Dec 19, 2010)

Uff, who told you to CF the 5970 and Ares? They are dual already, I know. So how does a 6970 CF hold up against a single 5970/Ares?

I don't know why, but I feel the 6990 will be weaker than the 5970 - maybe only by 1%, but weaker.


----------



## vickybat (Dec 19, 2010)

Uh oh, you mentioned 5970 CF so I thought two 5970's! lol 

Okay, two 6970's will beat a single 5970 and Ares fair and square in all benchmarks. The 5970 is set to retire and will be succeeded by the 6990. And rest assured the 6990 will be a clear winner, as it will have two 6950 chips. A 6950 CF is also way ahead of a single 5970 and Ares.

Check this and this


----------



## topgear (Dec 19, 2010)

Some more from my observation : ( considering only single gpu setup )

12 Benchmarks : @ stock speed HD 6950 vs. HD5870

HD 6950 ( winner in 7 ) > HD 5870 ( winner in 5 )

The price of HD 5870 is lower but the power consumption is way too high anyway.

*lowest prices *( as found on newegg ) :

HD 6970 $369.99 > GTX 570 $349.99 > HD 6950 $299.99 > HD 5870 $269.99 > ( GTX 470 $239.99 = HD 6870 for ref. only )

12 Benchmarks : @ stock speed HD 6970 vs. GTX 570

HD 6970 ( winner in 6 ) > 570 ( winner in 5 ) ( Tie 1 BFBC2 )

But In OC ( 2 Benchmarks ) : GTX 570 > HD 6970

and temp Results :

HD 6970 > GTX 570 ( lower is better )

Performance wise GTX 570 = HD 6970 ( *according to benchmarks mentioned in this post* ) but GTX 570 price is lower and also it runs a little cooler.

In DX11 titles : ( Total 7 )

HD 6970 wins 4 DX 11 title

GTX 570 Wins 2 DX 11 Title

DX 11 title Tie : 1 ( BFBC2 )

But when OCed, the GTX 570 outperformed the HD 6970 in BFBC2.

The HD 5870 wins 4 DX11 titles and the HD 6950 wins 3 DX11 titles.
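These win counts can be tallied mechanically. The sketch below encodes my reading of the 12 per-title orderings posted earlier (guru3d + Tom's HW); ties credit both cards, so the counts come out one higher than the "winner in 6 / winner in 5" figures above:

```python
from collections import Counter

# First-place card in each of the 12 benchmarks compiled earlier in
# this thread; BFBC2 was a tie, so both cards get credit for it.
winners = [
    ["GTX 570"],              # COD MW2 (DX9)
    ["GTX 570"],              # Far Cry 2 (DX10)
    ["HD 6970"],              # Anno 1404
    ["HD 6970"],              # Crysis Warhead
    ["GTX 570"],              # 3DMark Vantage
    ["HD 6970"],              # Metro 2033
    ["GTX 570"],              # DiRT 2
    ["HD 6970", "GTX 570"],   # BFBC2 (tie)
    ["HD 6970"],              # 3DMark 11
    ["GTX 570"],              # Lost Planet 2
    ["HD 6970"],              # Just Cause 2
    ["HD 6970"],              # Aliens vs. Predator
]

tally = Counter(card for bench in winners for card in bench)
print(tally)  # HD 6970: 7 (6 wins + tie), GTX 570: 6 (5 wins + tie)
```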


----------



## clear_lot (Dec 19, 2010)

@aby geek


> Uff, who told you to CF the 5970 and Ares? They are dual already, I know. So how does a 6970 CF hold up against a single 5970/Ares?
> 
> I don't know why, but I feel the 6990 will be weaker than the 5970 - maybe only by 1%, but weaker.




6970> 5870

6970cf >> 5970.  due to much better cf scaling.


----------



## Zangetsu (Dec 20, 2010)

So which is the coolest card in the ATI HD 6XXX series????


----------



## tkin (Dec 20, 2010)

vickybat said:


> @ aby geek
> 
> Well buddy, the 5970 is a dual GPU made up of two 5850's on a single PCB. The Ares is a customized card made by Asus which is a combination of two 5870's. Now, since both these cards are dual GPUs, you cannot compare them with a 6970 as it is a single-GPU card. Crossfiring a 5970 or Ares means 4 GPUs running simultaneously against the two GPUs of a 6970 CF. So a 5970 CF or Ares CF will beat a 6970 CF (though they won't scale as well as the 6970 CF), but the cost goes way up, especially with the Ares.
> 
> ...


The HD 5970 is two full HD 5870 GPUs (i.e. 1600-shader Cypress) on one PCB; the clock speeds are lowered to HD 5850 levels but the shader count remains the same.


----------



## vickybat (Dec 21, 2010)

@ tkin
Yes buddy, you're right. The 5970 has 1600x2 SPs but the clocks are down, on par with the 5850. Thanks again.


----------



## topgear (Dec 21, 2010)

Here's some more (all results based on guru3d, and the prices are the lowest found on Newegg for each GPU; the GTX 580 is out of stock as of now). So let's begin comparing more:

Radeon HD 5970 $499.99

GTX 580 $509.99 ***

GTX 480 $429.99

HD 6950 $299.99

Refer to these posts before :

*www.thinkdigit.com/forum/technology-news/135044-amd-hd-6950-6970-released-4.html#post1312641

*www.thinkdigit.com/forum/technology-news/135044-amd-hd-6950-6970-released-4.html#post1312976

*CFX Benchmark comparison of HD 6950 :*

DX9: Call of Duty - Modern Warfare 2

GTX 580 > ( GTX 465 SLi > ) HD 6950 CFX > HD 5970

DX10: Far Cry 2

( GTX 480 SLi > )HD 6950 CFX > GTX 580 > HD 5970

DX10: Anno 1404 - Dawn of Discovery :

HD 5970 > ( HD 6870 CFX > ) HD 6950 CFX > GTX 580

DX10: Crysis Warhead :

( GTX 580 SLi > )HD 6950 CFX > GTX 580 > HD 5970

DX10: 3DMark Vantage :

( GTX 570 SLi > )HD 6950 CFX > HD 5970 > GTX 580

DX11: Metro 2033 - The Last Refuge :

( GTX 480 SLi > )HD 6950 CFX > HD 5970 > GTX 580

DX11: Colin McRae DiRT 2 :

( GTX 480 SLi > )HD 6950 CFX > HD 5970 > GTX 580

DX11: Battlefield Bad Company 2 :

( GTX 480 SLi > )HD 6950 CFX > HD 5970 > GTX 580

DX11: 3DMark 11 :

HD 6950 CFX > HD 5970 > GTX 580

Power Consumption :

HD 6950 ~ 329W ( 89C )

HD 5970 Single : ~ 245W CFX ~539W ( 83C )

GTX 580 Single : ~ 280W SLI ~ 530W ( 87C )

GTX 480 ~ 500W SLI ( 95C )

So what do you guys think about all this? Share your views!


----------



## Faun (Dec 21, 2010)

^^6850 is better.


----------



## Gmith (Dec 21, 2010)

topgear said:


> *First a little Specs :*
> 
> *images.techtree.com/ttimages/story/113790_specs_1.jpg
> 
> ...




Waaaaaaaooooooo! Amazing!


----------



## aby geek (Dec 21, 2010)

umm guys is there any diff between crossfire and crossfirex?


----------



## Piyush (Dec 21, 2010)

aby geek said:


> umm guys is there any diff between crossfire and crossfirex?



I don't think so,
and if there is something like that, then someone please shed some light on it.


----------



## asingh (Dec 21, 2010)

aby geek said:


> umm guys is there any diff between crossfire and crossfirex?



Just nomenclature. The official name is ATI CrossFireX.


----------



## mohiuddin (Dec 22, 2010)

See, what 69xx have hidden under their cfx.

AMD Radeon HD 6970 2GB Video Card in Crossfire - Unigine Heaven Benchmark :: TweakTown USA Edition


----------



## vickybat (Dec 22, 2010)

mohiuddin said:


> See, what 69xx have hidden under their cfx.
> 
> AMD Radeon HD 6970 2GB Video Card in Crossfire - Unigine Heaven Benchmark :: TweakTown USA Edition



Seems like a biased review to me. None of the NVIDIA cards are SLI'd, so what are they comparing against? We all know that current cards scale better in their multi-GPU avatars, so nothing new here.

And considering single GPUs, the 570 beats the 6970 most of the time, and a 570 SLI can stand neck and neck with a 6970 CF.

The best-value CF setup would be a 6850 CF.


----------



## mohiuddin (Dec 22, 2010)

vickybat said:


> Seems like a biased review to me. None of the NVIDIA cards are SLI'd, so what are they comparing against? We all know that current cards scale better in their multi-GPU avatars, so nothing new here.
> 
> And considering single GPUs, the 570 beats the 6970 most of the time, and a 570 SLI can stand neck and neck with a 6970 CF.
> 
> The best-value CF setup would be a 6850 CF.



See this... again, will you say biased?

Overclock3D :: Review :: Powercolor HD6970 Crossfire Review :: Up Close


----------



## Ishu Gupta (Dec 22, 2010)

It beats 580SLI in Crysis. 

The review needs a 570SLI to compare.


----------



## vickybat (Dec 23, 2010)

mohiuddin said:


> See this... again, will you say biased?
> 
> Overclock3D :: Review :: Powercolor HD6970 Crossfire Review :: Up Close




Buddy, Crysis and Crysis Warhead are old games now. They responded to the 6970's large frame buffer, and that's why the increase in Crysis. Now, adding two 6970's, the VRAM increased to 4GB and took the limelight.

But look at the results in Metro 2033. Totally the opposite. When a small amount of tessellation came into play, the NVIDIA cards shone and took a significant performance leap. 

The same thing is gonna happen with Crysis 2, as it will incorporate tessellation and DX11 features which might (or will) work better on NVIDIA's architecture.

But no doubt the 6xxx series is scaling much better - almost twice - which is very good. Expect the same from NVIDIA with new and improved drivers. Future games will work better on NVIDIA cards as things stand currently.

I said that review was biased because they had no SLI setups in their test.


----------



## topgear (Dec 23, 2010)

Faun said:


> ^^6850 is better.



I think you meant to say HD 6950.


----------



## Ishu Gupta (Dec 23, 2010)

vickybat said:


> Now, adding two 6970's, the VRAM increased to 4GB and took the limelight.


Eh? VRAM doesn't stack in CF/SLI.
The info stored in VRAM has to be the same on both cards.

So it's still 2GB.


----------



## vickybat (Dec 23, 2010)

@ ishu

Yes, it does not get stacked, but each card uses its own VRAM. In this case each 6970 uses its own 2GB.


----------



## asingh (Dec 23, 2010)

^^
Almost correct. Each card will use its own VRAM, but the overall system (game) will only see 2GB from the master card. Each memory stripe is mirrored to both GPUs' memories, so technically the HD 6970 XFireX subsystem only has 2GB to work with. So saying that the HD 6970 CF has the advantage of 4GB of VRAM is not right, since it does not work that way. If textures have to be pre-loaded to VRAM, they only have 2GB of space available, and likewise for similar VRAM requirements; they DO NOT have a 4GB playing field.
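The point above can be sketched numerically: with mirrored memory, the effective VRAM is the per-card amount, not the sum across cards (a toy model for illustration, not actual driver behavior):

```python
# Toy model of CF/SLI memory mirroring: every card holds the SAME copy
# of the textures and buffers, so the pool a game can actually use is
# the per-card VRAM, not the total installed across cards.

def effective_vram(per_card_gb, num_cards, mirrored=True):
    """Return (physically installed GB, GB usable by the game)."""
    installed = per_card_gb * num_cards
    usable = per_card_gb if mirrored else installed
    return installed, usable

installed, usable = effective_vram(2, 2)  # two HD 6970s in CFX
print(f"Installed: {installed} GB, usable by the game: {usable} GB")
```

So a 6970 CFX setup has 4 GB on the shelf but only a 2 GB playing field, exactly as described above.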


----------



## topgear (Dec 24, 2010)

Heard that the GTX 560 is going to be released in Jan 2011. I think it will be a real competitor to the HD 6950.


----------



## clear_lot (Dec 24, 2010)

^ Amen!
Read somewhere that a 560 can be OC'ed to be comparable to a 570/480.


----------



## mohiuddin (Dec 24, 2010)

vickybat said:


> Buddy, Crysis and Crysis Warhead are old games now. They responded to the 6970's large frame buffer, and that's why the increase in Crysis. Now, adding two 6970's, the VRAM increased to 4GB and took the limelight.
> 
> But look at the results in Metro 2033. Totally the opposite. When a small amount of tessellation came into play, the NVIDIA cards shone and took a significant performance leap.
> 
> ...



If each card is using a separate framebuffer, then what is the advantage in CFX? If it had a large VRAM advantage in Crysis, then it would beat the 580 as a single card... In reality, CFX scaling is what made the 69xx better here. In Metro, both SLI and CFX are giving 2x performance. And I'm pretty sure that anyone buying a 69xx CFX or 5xx GTX SLI rig is not going to play below 1920x1200 resolution... so the 69xx CFX is a clear winner.


----------



## aby geek (Dec 24, 2010)

That overclockers.net review has HD 6870 SLI written in the Unigine benches, lol.


----------



## vickybat (Dec 24, 2010)

mohiuddin said:


> If each card is using a separate framebuffer, then what is the advantage in CFX? If it had a large VRAM advantage in Crysis, then it would beat the 580 as a single card... In reality, CFX scaling is what made the 69xx better here. In Metro, both SLI and CFX are giving 2x performance. And I'm pretty sure that anyone buying a 69xx CFX or 5xx GTX SLI rig is not going to play below 1920x1200 resolution... so the 69xx CFX is a clear winner.




The GTX series is also scaling well. In fact, in newer games like Metro, CFX on the 69xx series is behind NVIDIA. That's what I was talking about. SLI is not scaling badly at all and will improve with future driver releases. Wait for newer games and see. The framebuffer is utilized better when two or more cards come into play - in this case 2GB for AMD and 1.5GB/1.25GB for NVIDIA.

And both the 6970 and 6950 are behind the NVIDIA 5 series in all benchmarks. So in performance NVIDIA is ahead; AMD can only fight back on price. Moreover, the GTX 560 is also on the way and has the potential to topple both the 6870 and 6850, and maybe perform close to the 6950.


----------



## asingh (Dec 24, 2010)

^^


> Wait for newer games and see. The framebuffer is utilized better when two or more cards come into play - in this case 2GB for AMD and 1.5GB/1.25GB for NVIDIA.


What do you mean by this?


----------



## Joker (Dec 25, 2010)

vickybat said:


> In fact, in newer games like Metro, CFX on the 69xx series is behind NVIDIA.


choose a neutral game to compare scaling 

anyways

*media.bestofmicro.com/R/7/273139/original/CrossFire%20Metro%202033%20Scaling.png



vickybat said:


> And both the 6970 and 6950 are behind the NVIDIA 5 series in all benchmarks.


Check out topgear's post citing benchmarks... HD 6970 = GTX 570, with each leading in its respective benchmarks. But the latter is the better deal.


----------



## topgear (Dec 25, 2010)

^^ In single-card config, here's how the results of the 7 DX11 apps turned out:

Metro 2033 : DX11 PhysX off

HD6970 

DiRT 2 :

GTX 570

BFBC2 :

GTX 570* = HD6970 ( when OCed GTX 570 wins )

3D Mark 11 :

HD6970

Lost Planet 2 :

GTX 570

Just Cause 2 (DX11)

HD6970

Aliens Vs. Predator (DX11)

HD6970

GTX 570 is the winner in 3 titles and the HD 6970 the winner in 4 DX11 titles.



clear_lot said:


> ^ Amen!
> Read somewhere that a 560 can be OC'ed to be comparable to a 570/480.



No doubt that is possible - read the EVGA GTX 460 FTW! review on Tom's Hardware.

In that comparison it's neck and neck with the GTX 470 and HD 6870, and in some games it even outperformed them.


----------



## clear_lot (Dec 25, 2010)

@ topgear
Where do you get these benchmarks from? Please include the name of the site as well.

Yeah, read about the EVGA FTW too. What I would have liked to see was each card OC'ed and then compared, as both the 460 and 6850 OC like hell.
Meanwhile, on guru3d there is a review of the MSI 6850 Cyclone. It reached >1GHz on the core. F***ing awesome!


----------



## topgear (Dec 26, 2010)

Read my previous posts citing benchmarks - they are all from guru3d and Tom's HW.
Of the above benches, the first 4 are based on guru3d and the last 3 on Tom's HW.

Will check that review ..


----------



## mohiuddin (Dec 26, 2010)

clear_lot said:


> @ topgear
> Where do you get these benchmarks from? Please include the name of the site as well.
> 
> Yeah, read about the EVGA FTW too. What I would have liked to see was each card OC'ed and then compared, as both the 460 and 6850 OC like hell.
> Meanwhile, on guru3d there is a review of the MSI 6850 Cyclone. It reached >1GHz on the core. F***ing awesome!



Performance-wise the EVGA GTX 460 1GB is equal to or better than a stock GTX 470, but it has only 4~5% OC headroom left. The GTX 460 is a slightly better overclocker than the 6850 on the reference cooler, but a custom-cooled 6850 can reach 1.05GHz and overtake an OCed GTX 460.

HD 6870 & HD 6850 vs. GTX 460 1GB: An Overclocking Study



vickybat said:


> The gtx series is also scaling well. In fact in newer games like metro, cfx in 69xx series is behind nvidia. Thats what i was talking about. Sli is not at all scaling bad and will improve on future driver release. Wait for newer games and see. The framebuffer is utilized better when two or more cards come into play. In this case 2gb for amd and 1.5gb and 1.25gb for nvidia.
> 
> And both 6970 and 6950 are behind nvidia 5 series in all benchmarks. So in performance nvidia is ahead. amd can only fight back in prices. Moreover gtx 560 is also on the way and has the potential to topple both 6870 and 6850 and maybe perform close to 6950.



I was talking about multi-GPU solutions, not single. In Metro, both SLI (5xx) and CFX (6xxx) scaling is equal to 2x - see guru3d's 6950 CFX review. In the case of a single GPU, the 6970 is a clear winner against the 570, and a 6950 @ 870MHz/5.8GHz is equal to the 570 at 8x MSAA and 2560x1600 resolution. Maybe because of the 2GB VRAM, but mind it, we are getting that 2GB 6950 at a much better price.


----------



## vickybat (Dec 26, 2010)

mohiuddin said:


> Performance-wise the EVGA GTX 460 1GB is equal to or better than a stock GTX 470, but it has only 4~5% OC headroom left. The GTX 460 is a slightly better overclocker than the 6850 on the reference cooler, but a custom-cooled 6850 can reach 1.05GHz and overtake an OCed GTX 460.
> 
> HD 6870 & HD 6850 vs. GTX 460 1GB: An Overclocking Study
> 
> ...



Can you justify the statement in bold? I don't see the 6970 as a clear winner at all. If the 6950 can be overclocked, so can the 570, which will blow the pants off the 6950.

The GTX 570 is priced perfectly for the performance it delivers, and if the price is reduced, we can get an even sweeter deal.


@ *asingh*

Buddy, I meant that multiple GPUs can better utilize a large framebuffer when rendering in real time (read: procedural rendering) than a single one can (though from the same family or generation of GPUs).

For example, two 6950's in CF will take better advantage of the 2GB VRAM than a single 6950. The same goes for a single GTX 570 with 1.25GB and two 570's in SLI, which utilize the same amount of VRAM as a single board. But I think the 570 would have scaled or performed much better with a larger framebuffer, say something equivalent to the Caymans'.


----------



## ico (Dec 26, 2010)

Price quotes from friend:

HD 6850 @ 11k
HD 6870 @ 14.5k
HD 6950 @ 17k
HD 6970 @ 21.4k
GTX 570 @ 22k
GTX 580 @ 29k


----------



## asingh (Dec 26, 2010)

@VickyBat:
I think in multi-GPU subsystems the effective framebuffer does not go to 2x. The working data set is mirrored on each card, so each card holds its own full copy. Not sure how you can conclude 2 GPUs will take better advantage of VRAM vs. a single accelerator.

The nVidia boards have less VRAM, but their memory bus is wider compared to ATI's.


----------



## tkin (Dec 26, 2010)

asingh said:


> @VickyBat:
> I think in multi-GPU subsystems the effective framebuffer does not go to 2x. The working data set is mirrored on each card, so each card holds its own full copy. Not sure how you can conclude 2 GPUs will take better advantage of VRAM vs. a single accelerator.
> 
> The nVidia boards have less VRAM, but their memory bus is wider compared to ATI's.


There are different techniques. One other approach is to split the frame into two or more parts depending on the number of GPUs; load balancing makes sure each GPU gets a portion of the frame proportional to its processing power. This is called scissor mode, and ATI used it before in mixed CrossFire setups (much like Lucid Hydra).
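The proportional split described above can be sketched in a few lines. This is purely illustrative, not real driver code; the function name and power weights are assumptions made up for this example:

```python
# Illustrative sketch only (not real driver code): "scissor" mode with load
# balancing hands each GPU a horizontal band of the frame sized in proportion
# to that GPU's relative processing power.

def scissor_split(frame_height, gpu_power):
    """Return one (start_row, end_row) band per GPU, sized proportionally."""
    total = sum(gpu_power)
    bands, row = [], 0
    for i, power in enumerate(gpu_power):
        if i == len(gpu_power) - 1:
            rows = frame_height - row  # last GPU takes the remaining rows
        else:
            rows = round(frame_height * power / total)
        bands.append((row, row + rows))
        row += rows
    return bands

# A card paired with one half as fast: the fast card renders two thirds.
print(scissor_split(1080, [2.0, 1.0]))  # [(0, 720), (720, 1080)]
```

The fixed-ratio variant the ATI description mentions later is the same idea with the ratio chosen once per application instead of recomputed per frame.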


----------



## asingh (Dec 26, 2010)

^^
Guess Alternate Frame Rendering (AFR), Scissor, and Super Tile are also available. CCC does not offer these (at least I could not find them). But there is a nice tool out on guru3d, RadeonPro, which lets one do driver tweaks. It's quite awesome!


----------



## topgear (Dec 27, 2010)

Here's what I've compiled about SLI and CF from two websites to understand these two a little better. I know it's not perfect, but it should give us a clearer idea:

First, CrossFire:

A CrossFire system comes with three ways of splitting the workload:

*AFR - Alternate Frame Rendering.*

Alternate frame rendering is a technique that Nvidia's SLI can also use. AFR is as simple as it sounds: one frame is fully rendered by one card while the second card is already working on the next frame, before switching back again. This can almost double performance, up to the point where the work on one frame becomes too much and other modes become more useful. Also, some games cannot use AFR because each frame can depend on the frame before it; frame 2 cannot be rendered until frame 1 has been completed, defeating the purpose of AFR entirely. In a CrossFire system I would recommend using Radeon cards capable of running an adequate frame rate on their own, or this mode becomes less efficient.

Be aware that you require one of each type to make this work, and they must be from the same range, i.e. both X800s. _However, unlike Nvidia's solution, you can mix and match varieties of these cards. For example, you can pair an X800 XT CrossFire edition with a standard X800 Pro. You won't see as much benefit as having two X800 XTs, but it is possible and could help some people with lower-speed X800/X850 cards._
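The round-robin idea behind AFR can be sketched like this. It is an illustrative toy, not how a driver actually works, and the helper name is made up; it also makes the dependency problem visible: if frame N+1 needs frame N's result, the second GPU stalls.

```python
# Illustrative sketch only: AFR hands out whole frames round-robin, so with
# two cards the even frames go to GPU 0 and the odd frames go to GPU 1.

def afr_assign(num_frames, num_gpus=2):
    """Map each frame index to the GPU that renders it."""
    return [frame % num_gpus for frame in range(num_frames)]

print(afr_assign(6))     # [0, 1, 0, 1, 0, 1]
print(afr_assign(5, 3))  # [0, 1, 2, 0, 1] with three GPUs
```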

*Supertiling*

Supertiling is exclusive to ATI; Nvidia doesn't have anything similar in its SLI technology. Supertiling shares the workload of a frame between the two graphics cards by splitting it up in the form of a chessboard. Tile one is sent to the first graphics card for processing, tile two is sent to the second graphics card, then tile three goes to the first card again, and so on until the frame is fully rendered.

Supertiling sounds like a very complex method of splitting up a screen, but compare it with the traditional method, which is to split the screen in half horizontally: the top half of the screen is sent to the first graphics card and the bottom half to the second. _Supertiling does not support OpenGL and it doesn't give optimal performance all the time._
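The chessboard assignment described above can be sketched as follows. This is illustrative only; the tile granularity and function name are assumptions, not ATI's actual scheme:

```python
# Illustrative sketch only: supertiling assigns tiles to the two GPUs in a
# chessboard pattern, so adjacent tiles always belong to different cards and
# each card gets an even mix of cheap and expensive screen regions.

def supertile_owner(tile_x, tile_y):
    """Chessboard assignment: returns 0 or 1, the GPU owning this tile."""
    return (tile_x + tile_y) % 2

# Owner grid for a frame split into 4x4 tiles:
for y in range(4):
    print([supertile_owner(x, y) for x in range(4)])
# rows alternate [0, 1, 0, 1] and [1, 0, 1, 0]
```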

*Scissor Frame Rendering (SFR)*

Scissor frame rendering, or SFR, is very similar to the original method of SLI and to Nvidia's Split Frame Rendering. SFR is the basic approach: cut the screen in half and send one half to each graphics card. There are some differences, however, between the original Voodoo 2 SLI, Nvidia's Split Frame Rendering, and ATI's Scissor Frame Rendering.

The original SLI simply cut the frame in half; no mathematics or load calculation was required, the frame was just cut in half and sent to the two graphics cards. Nvidia's solution is the same but has a dynamic load calculation built in: it does not necessarily split the screen 50/50, but calculates the load at the top and bottom of the frame and splits the frame accordingly.

ATI's solution is a little bit of both of the above. The screen is not always split 50/50, but the split is not calculated on the fly either. This saves some calculations and frees up clock cycles, but can cause a slight drop from optimal efficiency. The ratio is set by the application/game, and that value is fixed throughout the session: if the screen is split 60/40, then that is how it will stay. The main advantage of the ATI solution here is that it can split the screen horizontally like the others, but also vertically, if a vertical split would give better results.

Source

Now SLi :

The graphics memory is NOT doubled in SLI mode. If you have two 128MB graphics cards, you do NOT have an effective 256MB. Most of the games operate in AFR (alternate frame rendering). The first card renders one full frame and then the next card renders the next frame and so on. If you picture SLI working in this way it is easy to see that each frame only has 128MB of memory to work with. 

If you feel you need 256MB or even 512MB of graphics memory for certain games, then you need to make sure that each card has that much memory. Also keep in mind that if you mix a 128MB GeForce 6600GT with a 256MB GeForce 6600GT that BOTH cards will operate with 128MB of graphics memory (the lowest common memory size of the two cards).

Source
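The memory rule in the two SLI paragraphs above boils down to taking the minimum. A trivial sketch, with a made-up helper name, just to make the rule concrete:

```python
# Illustrative sketch only: in SLI/CrossFire the working set is mirrored on
# every card, so the usable framebuffer is the smallest card's VRAM, not the
# sum of all cards.

def effective_vram(cards_mb):
    """Usable graphics memory for a multi-GPU setup (mirrored-data model)."""
    return min(cards_mb)

print(effective_vram([128, 128]))  # 128, not 256
print(effective_vram([256, 128]))  # both cards fall back to 128
```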


----------



## vickybat (Dec 27, 2010)

sapphire radeon 6950 @ 20.3k

Check this


----------



## ico (Dec 27, 2010)

vickybat said:


> sapphire radeon 6950 @ 20.3k
> 
> Check this



PowerColor Radeon HD6950 2GB PCI-E 2.1 Graphics Card @ 17.5k.

I suggest you go out, look around in shops and get real quotes like my friend did. The site you quoted has everything overpriced by around 2-4k.


----------



## vickybat (Dec 27, 2010)

Thanks ico ,will do the same. In my city only asus, msi and xfx are available. Will try to get their real prices and will post them.


----------



## pauldmps (Dec 27, 2010)

*Radeon HD 6950 Unlocks to HD 6970*

Source: Techtree.com India > News > Hardware > Radeon HD 6950 Unlocks to HD 6970

Tutorial: AMD Radeon HD 6950 to HD 6970 Mod | techPowerUp

Those who settled for the Cayman Pro (HD 6950) over its more expensive cousin will be delighted to hear that it can be easily, and so far reliably, unlocked into the more powerful Cayman XT (HD 6970). As it turns out, the Cayman Pro and XT are essentially the same GPU die, with 128 shaders disabled. Fortunately for us, AMD hasn't locked the shaders at the silicon level, but simply configured the VGA BIOS to disable them. All it takes is a simple BIOS flash to unlock the shaders and transform your cheaper HD 6950 into a full-fledged HD 6970.

Techpowerup.com has a detailed guide replete with all the dope you need to unlock the Cayman XT. Recovery is easy with the Caymans, thanks to a recovery BIOS feature. Just make sure you back up your existing BIOS before you meddle with it. The unlock process is reported to have been successful across AMD engineering samples, as well as on the HIS and Asus retail versions of the card, which again are one and the same.


----------



## clear_lot (Dec 27, 2010)

*Re: All Graphics Cards related queries here.*

*AMD Radeon HD 6950 to HD 6970 Mod*

AMD Radeon HD 6950 to HD 6970 Mod | techPowerUp


the admin wizzard there is really a wizard. and hes completely reputable.
so...


----------



## aby geek (Dec 27, 2010)

*Re: All Graphics Cards related queries here.*

yup, all 6950s can become 6970s by flashing the BIOS; they are unlockable. dual BIOS ftw


----------



## topgear (Dec 28, 2010)

*Re: All Graphics Cards related queries here.*

@ *clear_lot* - great news buddy!

so after offering unlockable cpus, amd is offering unlockable gpus as well, and with a backup bios the procedure can be tried even by less experienced users.

wow! now this is what I call absolute value for money. Kudos to the guy who put in the great effort for this hack.


----------



## topgear (Dec 28, 2010)

So now AMD is actually offering great value in the gpu department as well. With this hack we'll be able to save 5k, and we'll get lower power consumption at the same performance level as an added bonus:

*tpucdn.com/articles/159/images/power.jpg


----------



## clear_lot (Dec 28, 2010)

*Re: All Graphics Cards related queries here.*



> wow! now this is what I call absolute value for money. Kudos to the guy who put in the great effort for this hack.



as i said


> the admin wizzard there is really a wizard. and hes completely reputable.


    :worship:


----------



## ico (Dec 28, 2010)

niiiceeee.


----------



## Faun (Dec 28, 2010)

Is 6950 worth the dough if I can unlock it to 6970 ?


----------



## mohiuddin (Dec 28, 2010)

vickybat said:


> Can you justify the statement in bold? I don't see the 6970 to be a clear winner at all. If 6950 can be overclocked, so can the 570 which will blow the pants out of 6950.
> 
> The gtx 570 is priced perfectly for the perfomance it delivers and if price will be reduced, then we can get an even sweeter deal.
> 
> ...



yea, the 6970 is a clear winner. forget about fps above 50... will u play with a >16k gpu at 1600x1080 with no AA? above 1900x1080 and at 8xAA, who is the clear winner? please don't consider those biased games and some bugs with Bad Company 2 AA on the 69xx, coz those will be fixed with a driver update soon. I didn't say the gtx 570 can't be overclocked; i just wanted to make it clear that only a 70MHz OC can kick a stock gtx 570!


----------



## vickybat (Dec 28, 2010)

mohiuddin said:


> yea, the 6970 is a clear winner. forget about fps above 50... will u play with a >16k gpu at 1600x1080 with no AA? above 1900x1080 and at 8xAA, who is the clear winner? please don't consider those biased games and some bugs with Bad Company 2 AA on the 69xx, coz those will be fixed with a driver update soon. I didn't say the gtx 570 can't be overclocked; i just wanted to make it clear that only a 70MHz OC can kick a stock gtx 570!




First of all, learn to post properly and then talk. You need some serious lessons. Now you have to justify all your above statements (lol). A gtx 570 overclocked by 50MHz can kick a 6970 OCed by 70MHz. Those biased games you are talking about are the future, and all upcoming ones will be "biased" too (lol). Drivers will be updated in both camps. The 6970 is crap and not worth the money.

For 6970 vs gtx 570 check this, this & this. Check them at 1900x1200. *DON'T ARGUE ANYMORE ON THIS TOPIC BECAUSE IT'S POINTLESS*.

The 6950 seems to be the sweet spot now, since it can be unlocked to a 6970 (read the above posts in case you are unaware). Really a great buy at its lower price. The 6970 will be history.



Faun said:


> Is 6950 worth the dough if I can unlock it to 6970 ?



Yes buddy, it's worth the dough.


----------



## mohiuddin (Dec 28, 2010)

^what do u mean by 'post properly'? did i say the 6970 is a good buy or something like that? dude, u gave results for Metro with AAA and Lost Planet 2 (lol), and Bad Company 2 with 8xAA!! then i will ask u to see the Just Cause 2 results. In Lost Planet 2 the 6970 is behind the gtx 470. Now u may say 'hmmm... it is coz of tessellation'. Then what will u say about the Unigine Heaven results? Don't be insane, talking about future games like that. It seems that u r a total blind fan of the green team. again, see the guru3d reviews. yea, it is pointless arguing with a blind-minded person.



vickybat said:


> A gtx 570 overclocked by 50MHz can kick a 6970 OCed by 70MHz.



clarify this.


----------



## ico (Dec 28, 2010)

HD 6970 = GTX 570. Same price here and same performance. But I'll go for the GTX 570.


----------



## mohiuddin (Dec 28, 2010)

ico said:


> HD 6970 = GTX 570. Same price here and same performance. But I'll go for the GTX 570.



see the results at different resolutions and with high AA settings.

AMD Radeon HD 6970 and HD 6950 Review - Page 24


----------



## ico (Dec 28, 2010)

I'm firm on what I said. Both are more or less equal.


----------



## mohiuddin (Dec 28, 2010)

How could u be so confident? from bench sites? they look quite different from each other. but i sum up that at high resolution and high image quality the 6970 is superior.


----------



## ico (Dec 28, 2010)

mohiuddin said:


> How could u be so confident? from bench sites? they look quite different from each other. but i sum up that at high resolution and high image quality the 6970 is superior.


so, you have a 2560x1600 monitor??

Half the games show the GTX 570 leading and half show the HD 6970 leading. And results on different websites always vary slightly. They are more or less equal. Image quality?? I bet you can hardly differentiate those minor things with your naked eye (if they even exist).


----------



## mohiuddin (Dec 28, 2010)

Am i going to buy a 6970 or 570?!

"I summed up" means after watching those benches on the internet.
I was not talking about what a game looks like at different image settings... a person buying a 6970 or 570 will surely wanna play at high res, AA and quality. at the highest image quality it looks different. why u couldn't find it i don't know, but people did. it is not much different, but still different. anyone buying a >16k card will surely want the highest quality... isn't it?


----------



## Ishu Gupta (Dec 28, 2010)

I agree with ico. 570=6970.

I would just get 6950 and unlock it to 6970 or buy 570 if I want CUDA.


----------



## ico (Dec 28, 2010)

mohiuddin said:


> why u couldn't find it i don't know, but people did. it is not much different, but still different. anyone buying a >16k card will surely want the highest quality... isn't it?


well, I'm someone who believes in real-world things and doesn't get much influenced by what these companies say. nVidia made a claim of superior image quality, saying that AMD has done tweaks in their driver. It didn't even matter to me; know why?

*img52.imageshack.us/img52/2161/65301431.th.jpg

*img714.imageshack.us/img714/53/76505020.th.jpg

Try finding a difference. 

(what I'm doing might be controversial )


----------



## Ishu Gupta (Dec 28, 2010)

^
Marketing crap.


----------



## aby geek (Dec 28, 2010)

now that the 6950 can be unlocked, can we conclude that the gtx 560 is dead before it's born?


----------



## mohiuddin (Dec 28, 2010)

Can u remember the crap things nvidia did with their double-cone anisotropic filtering, making image quality total trash with their ~190 drivers? amd also did something like this, but slightly noticeable. @Ico, i meant the in-game image settings... it matters to me (and also to some other people).

Yea, considering the 6950, both the 6970 and 570 are crap, and the 560 is also gonna get kicked hard... but for how long?! only these few new cards are in that category.


----------



## ico (Dec 28, 2010)

ok then. I apologize.


----------



## mohiuddin (Dec 28, 2010)

The fact is that at high res and AA the 6970 rules... but personally, if i were rich enough to juice out above 16k for a gpu, i would go for the 570, coz it runs cooler and quieter.


----------



## aby geek (Dec 29, 2010)

and also i was wondering, will there be any difference between an unlocked 6950 CF and a 6970 CF?


----------



## topgear (Dec 29, 2010)

aby geek said:


> now that the 6950 can be unlocked, can we conclude that the gtx 560 is dead before it's born?



I don't think so.

I think only the first few batches of HD6950 can be unlocked into a HD6970. AMD disabled some shaders on the HD6950, and I think they have some reason behind it.

It's like those amd cpus which can be unlocked into dual/quad cores: many users who bought earlier batches succeeded in unlocking, but with later batches many users were unable to unlock or faced issues when they unlocked.

So a few batches from now I don't think the HD6950 will be unlockable into a HD6970 anymore, and the GTX 560 will give the HD6950 hard competition for sure. Even the EVGA GTX 460 AMP is neck and neck with the GTX 470 when OCed, so an OCed GTX 560 will give performance close to the GTX 570 and HD6970 for sure.



aby geek said:


> and also i was wondering, will there be any difference between an unlocked 6950 CF and a 6970 CF?



I think the performance level would be the same.


----------



## vickybat (Dec 29, 2010)

mohiuddin said:


> Can u remember the crap things nvidia did with their double-cone anisotropic filtering, making image quality total trash with their ~190 drivers? amd also did something like this, but slightly noticeable. @Ico, i meant the in-game image settings... it matters to me (and also to some other people).
> 
> Yea, considering the 6950, both the 6970 and 570 are crap, and the 560 is also gonna get kicked hard... but for how long?! only these few new cards are in that category.



You are drawing conclusions too soon, my friend. I gave those examples because that's how future games will be rendered, and currently the fermi architecture is making strong inroads; nobody will deny this fact except you, of course.

Wait till Crysis 2 and then we shall talk about benchmarks, because it will be the pinnacle of game design and provide a lesson or two to other game developers. Just Cause 2 is a heavily optimised game and uses a large framebuffer to render scenes. But games like Metro and Lost Planet 2 are resource-heavy games that use tessellation and some tricks of dx11.
*Topgear* is right: the gtx 560 has the capability to dethrone the 6950, and when it arrives we shall have more discussion about it. And about your query, most people will choose the gtx 570 over the 6970 because it's more futureproof, and even reviewers admit this fact.


----------



## ico (Dec 29, 2010)

lol, the GTX 570 might be future-proof in the sense that nVidia plans to re-brand it for the next two-three generations like they did with the 8800, their last creamy GPU. You'll never feel outdated.


----------



## tkin (Dec 29, 2010)

aby geek said:


> and also i was wondering if there wil be any difference between an unlocked 6950 cf and
> 6970 cf?


The memory ICs won't be able to keep up, so there will be some performance loss, but less than 10% at most.


----------



## vickybat (Dec 29, 2010)

ico said:


> lol, the GTX 570 might be future-proof in the sense that nVidia plans to re-brand it for the next two-three generations like they did with the 8800, their last creamy GPU. You'll never feel outdated.




Hehe, something like that. The G80 was definitely a breakthrough gpu in its time and the first to employ a unified shader architecture. Then ati followed suit with its R600.

The 570 will last a while, until *KEPLER* comes up as a successor to *FERMI*.

Nothing will be futureproof; technology will continue to outdo itself. That's how it always was and always will be.


----------



## aby geek (Dec 30, 2010)

@topgear but then why would amd go all the way to add a switch to their cards and offer dual BIOS if it was temporary?

but you may be right about the initial unlock; i think the unlock will stay until the other Islands family is ready.

there's nothing bigger than the unlock which the 6950 has against the 560.

by the way, do u guys know a trustworthy vendor who could repair my 8800 GTS 320MB?


----------



## topgear (Dec 30, 2010)

aby geek said:


> @topgear but then why would amd go all the way to add a switch to their cards and offer dual BIOS if it was temporary?
> 
> but you may be right about the initial unlock; i think the unlock will stay until the other Islands family is ready.
> 
> there's nothing bigger than the unlock which the 6950 has against the 560.



I think they added a dual BIOS because OCing a gpu through software is a great pain for some, and enthusiast users always flash their gfx card once they find the maximum stable OC. Now with dual BIOS, if a user faces some kind of issue with OC/increased voltage etc., they can always revert safely to the factory default BIOS without much trouble, and it will save manufacturers RMA costs due to incorrect BIOS flashing. So from both the customer's and the manufacturer's perspective it's complete peace of mind.

Yep, unlocking is a great feature of the HD6950, but we'll have to wait to see what the GTX 560 has up its sleeve. The GTX 460 AMP (a factory OCed card) is neck and neck with a stock GTX 470 and HD6870, and yet it can be OCed further.
So the GTX 560 is going to compete with the HD6950/GTX570/HD6970, and with a lower price tag than the HD6950 for sure.



vickybat said:


> Topgear is right: the gtx 560 has the capability to dethrone the 6950, and when it arrives we shall have more discussion about it. And about your query, most people will choose the gtx 570 over the 6970 because it's more futureproof, and even reviewers admit this fact.



Yep, I'm also getting excited to see what kind of performance the GTX560 packs under its hood and how well it competes against the GTX 570/HD6970, and of course the HD6950, in both single and dual card configs.


----------



## clear_lot (Dec 30, 2010)

the gtx 560 would need some serious SLI scaling to compete with CF 6950/6970.
a bigger memory buffer would definitely help.
i think that with the increasing popularity of 1080p and 4xAA, bigger buffers are needed for good scaling and/or single-card performance.


----------



## mohiuddin (Dec 30, 2010)

When is the gtx 560 gonna release?


----------



## topgear (Dec 31, 2010)

^^ Jan 2011

actually, two performance beasts are going to be unleashed in Jan 2011: Sandy Bridge and the GTX560.

But I strongly believe AMD will not sit idle and will definitely come up with some new 6xxx series parts or reduce the prices of the HD6950/HD6970.


----------



## clear_lot (Dec 31, 2010)

> actually 2 performance beast is going to unleash in jan 2011 *Sandybridge* and GTX560.



you must be talking about the anandtech SB review.
well, it reviewed the i5 2500 with HT, which is not available in the commercial version.
the actual product is without HT, albeit with turbo boost.
so the review shows better performance than the actual products (as concluded by anand himself), since he had engineering samples, not finished products.

the actual core i7 SB will be released much later (q3 2011?), probably with a new socket


----------



## aby geek (Dec 31, 2010)

^^ socket 2011 is ivy bridge, not sandy bridge.

gtx 560 is jan 15, i think. and its graphics core announcement said it will be competing with the 6870 at rs 15k.

so as of now i believe nvidia is giving us monster babies at high prices.

what i love most about this launch is that the hd 5850 will take the 6870's current price, and the 6870 will see a drop


----------



## topgear (Jan 1, 2011)

clear_lot said:


> you must be talking about the anandtech sb reiew.
> well, it reviewed the i5 2500 with HT, which is not available in commercial versions.
> the actual product is without HT, albeit with turbo boost.
> so the review has better performance than actual products, (concluded by anand himself), though he had engineering samples, not finished products.
> ...



No buddy I'm not talking about that review.

I was talking about the Sandy Bridge cpus which are going to be released in Jan.

For example, the Core i7 2600K is a Sandy Bridge cpu based on LGA 1155, and it's going to be released in Jan this year.



aby geek said:


> ^^ socket 2011 is ivy bridge not sandybridge.
> 
> gtx 560 is jan 15 i think.and its graphic core announcement said it will be competing with 6870 at rs 15k.
> 
> ...



Socket 2011 is based on 22nm if I'm not wrong, so it will get a die shrink too!
BTW, let's not discuss this cpu thing any further as it's offtopic.

Now I've got two questions: as the HD6950 can be unlocked to a HD6970, is there any chance of a price reduction on the HD6970? And is there any rumor about something like HD66xx and HD67xx series gfx cards?


----------



## clear_lot (Jan 1, 2011)

> Now I've two questions - as* HD6950 can be unlocked to a HD6970*



if you are going for a 6950 and thinking of upgrading, better hurry. amd is revising the 69xx boards:
AMD Revising Radeon HD 6900 Series PCB | techPowerUp




> and is there any rumor about something like HD66xx and HD67xx series gfx cards ?



umm not exactly what you were thinking, but..
AMD Mobility Radeon HD 7000 Series Reality by Q4-2011 | techPowerUp


----------



## ico (Jan 1, 2011)

haha, Radeon 7000 series again.  [HD]


----------



## topgear (Jan 2, 2011)

clear_lot said:


> if you are for a *6950 and thinking of upgrading, better hurry.amd is revising the 69xx boards*
> AMD Revising Radeon HD 6900 Series PCB | techPowerUp
> 
> umm not exactly what you were thinking, but..
> AMD Mobility Radeon HD 7000 Series Reality by Q4-2011 | techPowerUp



No, I won't buy a HD6950 anytime soon, so there's no rush.

This was expected, as I already wrote in earlier posts that amd would not continue this. But since amd is making a new pcb design, I think the newer HD6950 will consume even less power and run a bit cooler than the previous version.


----------



## mohiuddin (Jan 2, 2011)

clear_lot said:


> if you are for a 6950 and thinking of upgrading, better hurry.amd is revising the 69xx boards
> AMD Revising Radeon HD 6900 Series PCB | techPowerUp
> 
> 
> ...



what, the 6900 isn't finished yet... amd, nvidia, take a break... relax... it's the start of a new year... u two go on some vacations...

and why the mobility parts first?


----------



## clear_lot (Jan 2, 2011)

> *u two go to some vacations...*




man...
you think we are nerds who are spending the new year hunting the net for more info...




EDIT: errrmm, you are actually right.


----------



## Liverpool_fan (Jan 2, 2011)

> The lineup begins with "*Wimbledon*", an ultra high-end mGPU. It has a 256-bit wide high-speed GDDR5 memory interface, 2-4 GB of dedicated memory, and 65W TDP. The DirectX 11 GPU will be about 25% faster than "*Blackcomb*", the Mobility HD 6000 series flagship. This is slated for Q2-2012. Next up is the high-end "Heathrow" mGPU, which has a 192-bit or 128-bit (selectable between variants) GDDR5 memory interface, 1-3 GB of dedicated memory, up to 45W TDP, and 30% higher performance compared to "*Chelsea*". This is slated for Q4-2011 (this should tell you that Radeon HD 7000 series will be in existence towards the end of 2011).
> 
> Going down, there's "*Chelsea*" itself, with its 128-bit GDDR5 memory interface, 1-2 GB memory, 20-30W TDP, performance 30% higher than *"Whistler*", production starting in Q4-2011. Lastly, there's "Thames". This mainstream mGPU will have 128-bit GDDR5 with option of GDDR3, 1 GB memory, 15-20W TDP, and 100% higher performance than "Seymour", Radeon HD 6000 series' mainstream mGPU. Production for this starts in Q4-2011, as well.


lol that's weird. What's next? Millwall and Longhorn?


----------



## aby geek (Jan 6, 2011)

gtx 560 may come feb 3


----------



## clear_lot (Jan 6, 2011)

^and your source?


----------



## aby geek (Jan 6, 2011)

just googled and many links are talking about feb or 2012.


----------



## vickybat (Jan 7, 2011)

It will be this year and not 2012.


----------



## aby geek (Jan 7, 2011)

and socket 2011/ivy bridge is a die shrink of sandy bridge.

the true successor of sandy bridge is haswell, which comes around 2013. and haswell's die shrink will be rockwell.


here's an edit: socket 2011 is sandy bridge-e; its 22nm die shrink is ivy bridge. and haswell will be the true 22nm architecture, and its 16nm die shrink will be rockwell.

and some news on gtx560/gtx475 :

*www.fudzilla.com/graphics/item/21488-geforce-gtx-560-comes-in-late-january

amd radeon 6990:

*www.fudzilla.com/graphics/item/21487-amd-to-have-hd-6990-dual-cards-in-february


----------



## aby geek (Jan 11, 2011)

woot amd

Cayman to get cheaper 1GB version

finally 560 dies


----------



## vickybat (Jan 11, 2011)

They won't scale in multi-GPU setups like their 2GB counterparts. And the GTX560 is far from defeated. It will have better tricks up its sleeve. *Maybe amd sensed this*.


----------



## bilallucky (Jan 11, 2011)

The 6950 is taking a new place, with no one to compete!!! as the gtx 5xx are revised versions of the gtx 4xx (their ALU/SIMD/SP layout is almost the same), i guess driver updates will not help them as much as the 6900 driver updates will, becoz the 6900s are totally different in their VLIW pattern... amd has a long way to go with their driver updates.....


----------



## vickybat (Jan 11, 2011)

The gtx 5xx and 4xx are not the same; there are significant differences. FP16 texture filtering happens in one clock cycle, just as it does on the GF104 found in the GeForce GTX 460, and not the two cycles endured by GF100. Nvidia also made improvements to GF110's Z-culling efficiency. This means the GPU is smarter about discarding pixels that don't need to be rendered in a scene (because they're behind other objects), conserving memory bandwidth and compute muscle.

In the Radeons' case, VLIW4 in the 69xx series is a special-function-unit cutoff, not a breakthrough architectural improvement. It's not as if the 69xx series has a long way to go in driver updates.

The fermi architecture has the upper hand technically, and the upcoming gtx 560 will pose a serious threat to the 68xx series and maybe even the 6950.


----------



## Joker (Jan 11, 2011)

^^ err...

The GTX 5xx is just an improved/optimized Fermi gtx 4xx.

Radeon HD 6xxx is a new architecture, just like Fermi gtx 4xx was. still not optimized... it will get better with HD 7xxx.

i don't care about technical mumbo-jumbo.


vickybat said:


> In the Radeons' case, VLIW4 in the 69xx series is a special-function-unit cutoff, not a breakthrough architectural improvement.





vickybat said:


> The gtx 5xx and 4xx are not the same; there are significant differences. FP16 texture filtering happens in one clock cycle, just as it does on the GF104 found in the GeForce GTX 460, and not the two cycles endured by GF100.


and u mean this is an architectural improvement? stop posting like a fanboy.



vickybat said:


> *They won't scale in multi-GPU setups like their 2GB counterparts.* And the GTX560 is far from defeated. It will have better tricks up its sleeve. *Maybe amd sensed this*.


FUD.


----------



## vickybat (Jan 11, 2011)

@ *joker*

If radeon 7xxx will get better, so will the upcoming *kepler* from nvidia. Of course the points I stated above are improvements over the 4xx series. You being an *AMD FANBOY* will always deny this, no matter what. If you don't care about technical mumbo-jumbo then please don't argue. Post something relevant to prove your points.

In case you don't know, for the same gpu a bigger framebuffer means higher FPS at high resolutions. In this case 6950 2GB > 6950 1GB, and this will show up even more in a multi-gpu setup, because the graphics core otherwise has less memory to play with while rendering at higher resolutions. That's what I meant, and it doesn't make me a fanboy or something.

About your *FUD* remark, we will see once the 560 releases. Counting out Fermi proves you're the silly fanboy, not me.


----------



## Joker (Jan 12, 2011)

vickybat said:


> You being an *AMD FANBOY* will always deny this no matter what.


*img151.imageshack.us/img151/8529/46306145.th.png
*img811.imageshack.us/img811/7369/72608869.th.png

fanboi me? naah



vickybat said:


> If you don't care about technical mumbojumbo then don't argue please. Post something relevent to prove your points.


my whole point is about cooking things up. u cooked up CUDA without realizing what it is...

The GTX 5xx is a refined Fermi... fact is, you are bigging up gtx 5xx (yup, they rock), but when AMD took a different route with radeon hd 6xxx and their VLIW4 architecture, u think it is not an architectural breakthrough, which in fact gtx 5xx is not either. nvidia got it horribly wrong with gtx 4xx except the gtx 460, and improved it in gtx 5xx. AMD got it wrong with hd 6xxx and will improve VLIW4 in the next generation.


----------



## aby geek (Jan 12, 2011)

Well, Fudzilla likes to call the GTX 560 a "GTX 475" instead, so vicky may be right about the 560 being faster than the 6950, as it will sit between the 470 and 570.

But vicky, everyone is trying to tell you that the GTX 5 series is an improved 4 series, nothing more, whereas AMD is working hard to build something new from scratch.

The AMD 6 series you see wasn't meant to come so early; it was preponed because the other family of their graphics cards had to be postponed, as GlobalFoundries moved to a new fab process and the plants had to be upgraded for it.

We are just applauding AMD for coming in early, and doing very well, with a preponed product.


----------



## Joker (Jan 12, 2011)

aby geek said:


> Well, Fudzilla likes to call the GTX 560 a "GTX 475" instead, so vicky may be right about the 560 being faster than the 6950, as it will sit between the 470 and 570.


I actually want THIS to be right. The GTX 560 should be faster.

But my whole point is him bigging up GTX 5xx as some new architectural improvement over Fermi while belittling AMD's new VLIW4 architecture, calling it a mere unit cut-down, when he hardly has any idea about it.

He was bigging up CUDA before, and it just got beaten by Stream and Quick Sync.


----------



## ico (Jan 12, 2011)

Joker banned for 4 days. Should have made his point politely.


----------



## topgear (Jan 12, 2011)

Look what I've found about the GTX 560:



> According to the grapevine, the chip is based on the same GF104 chip used in its predecessor, the GeForce GTX 460, but using the full allocation of 384 stream processors, rather than the 336 found in the GTX 460.
> 
> The clock speeds also represent a significant boost over its predecessor, and could help the GPU hit the performance levels it needs to seriously compete with the Radeon HD 6950.



Read More


----------



## vickybat (Jan 12, 2011)

Joker said:


> I actually want THIS to be right. The GTX 560 should be faster.
> 
> But my whole point is him bigging up GTX 5xx as some new architectural improvement over Fermi while belittling AMD's new VLIW4 architecture, calling it a mere unit cut-down, when he hardly has any idea about it.
> 
> He was bigging up CUDA before, and it just got beaten by Stream and Quick Sync.



Fermi was an architectural improvement but was plagued by heat issues. These were sorted out in the 5xx series, and along with them, the compute efficiency improved as I mentioned earlier.

In AMD's case, they realised their folly with VLIW5: the 5th SP was rarely used as workloads became narrower. Actually, in a unified shader architecture a VLIW design is unnecessary; a more scalar architecture that exploits TLP (thread-level parallelism) is a better fit. AMD relies more on ILP (instruction-level parallelism):

"AMD’s architecture is heavily invested in Instruction Level Parallelism, that is having instructions in a single thread that have no dependencies on each other that can be executed in parallel. With VLIW5 the best case scenario is that 5 instructions can be scheduled together on every SPU every clock, a scenario that rarely happens. We’ve already touched on how in games AMD is seeing an average of 3.4, which is actually pretty good but still is under 80% efficient. Ultimately extracting ILP from a workload is hard, leading to a wide delta between the best and worst case scenarios." ( source within quotes-ANANDTECH).


*i56.tinypic.com/1zf323l.png

So AMD discarded the 5th SP, which handled the special scalar operations, and instead has three of the remaining four SPs gang together to handle them, alongside the usual vertex and pixel shading work.

Now, I wouldn't call it an architectural breakthrough, but rather a fine-tune to utilise resources properly; the reclaimed die space can be used to add more SIMDs. This is what makes Cayman faster than Cypress.
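The efficiency argument in the AnandTech quote above comes down to simple slot arithmetic. Here is a minimal sketch (the 3.4 average is taken from the quote; carrying that same average over to the 4-wide case is an assumption made purely for illustration):

```python
def slot_utilization(avg_filled: float, slot_width: int) -> float:
    """Fraction of a VLIW bundle's ALU slots doing useful work per clock."""
    return avg_filled / slot_width

# Games average ~3.4 independent instructions per scheduled bundle:
print(f"VLIW5: {slot_utilization(3.4, 5):.0%}")  # ~68%: the 5th slot mostly idles
print(f"VLIW4: {slot_utilization(3.4, 4):.0%}")  # ~85%: a narrower machine wastes less
```

The delta between those two numbers is exactly the point being debated: the same extracted ILP yields better hardware utilization on a narrower design, which frees die area for more SIMDs rather than changing how shaders fundamentally execute.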

In their next design it remains to be seen whether AMD keeps VLIW4, takes a more contemporary scalar path like Nvidia, or has some new tricks up their sleeve.

Now, about CUDA: I never cooked anything up; I was backed by facts. CUDA is still great, and transcoding is not the only thing it is used for, unlike Quick Sync. There are lots of other apps accelerated by CUDA, and Stream, though getting better, is nowhere near CUDA yet.

CUDA is good and everybody will agree, but in case you don't remember, I was backing PhysX, and I still consider it good. But as other members pointed out, it's not a make-or-break deal when purchasing a card. Performance matters most.


----------



## mohiuddin (Jan 12, 2011)

@vickybat, you may not be a fan of Nvidia, but anyone reading this thread can tell that your posts are Nvidia-biased.
Tell me, thinking twice, bro: isn't the GTX 500 series a revision of the GTX 400 series?
Bro, answer as a neutral person: would you call adding Z-cull, FP16 filtering and a bit of tessellation improvement a major architectural change, enough to justify a new-generation naming scheme?
And about AMD:
they planned VLIW4 for their intended 28nm fab. As that schedule slipped, they were forced to stick with 40nm, and it is a totally different architecture to work with. They even wanted to clock the 6970 at ~930 MHz, which would have been possible on 28nm, but they couldn't manage it on 40nm due to yield problems. That is why AMD lagged with the 6900 series.

--
Nvidia is doing great with the GTX 5xx, though that was expected after the GTX 4xx.
The GTX 560 and AMD HD 6950 1GB are going to fight it out, which is good only for us, the end users. No need to take sides.


----------



## vickybat (Jan 12, 2011)

Well buddy, I have answered as a neutral person. Just as you say the 5xx series is an optimised 4xx series, the same can be said for AMD, only their optimisation was done at the chip level. I said AMD realised their folly and are now experimenting with the VLIW4 design to achieve better scalar performance.

Maybe the next AMD series won't use VLIW4 at all if they can't reap the benefits. Their engineers must be toiling as we speak to make something new, something that can achieve *REALISM*. Both camps will do that, and we will continue to see architectural breakthroughs.

But currently I don't see any breakthrough from AMD's side.


----------



## marksteyn (Jan 12, 2011)

Hello i want to get the more on the AMD hard disk and the latest version.


----------



## tkin (Jan 12, 2011)

marksteyn said:


> Hello i want to get the more on the AMD hard disk and the latest version.


Are you for real?? An AMD hard disk, and that too in your first post?? The HD6970 is the name of a graphics card, not a hard disk.

But wait, if you are a troll/spammer then you win the grand prize for
*images3.wikia.nocookie.net/__cb20100722074142/crysis/images/c/c2/MAXIMUM_Trolling.jpg



vickybat said:


> Fermi was an architectural improvement but was plagued by heat issues. These were sorted out in the 5xx series, and along with them, the compute efficiency improved as I mentioned earlier.
> 
> In AMD's case, they realised their folly with VLIW5: the 5th SP was rarely used as workloads became narrower. Actually, in a unified shader architecture a VLIW design is unnecessary; a more scalar architecture that exploits TLP (thread-level parallelism) is a better fit. AMD relies more on ILP (instruction-level parallelism):
> 
> ...


AMD is experimenting with VLIW4; if it isn't successful they will abandon it in their next GPUs. They also need to redesign their drivers and compilers for this architecture, so buying a 69xx is a serious gamble at this point.

CUDA is used mostly for transcoding and in some processing software. In terms of quality, both CUDA and Stream suck: when I use MediaEspresso to convert video using Stream, the end result is disgustingly horrible and pixelated, and the video also hitches. In terms of quality, x86 rules.


----------



## vickybat (Jan 12, 2011)

@ *Tkin*

Yes buddy, I totally agree with you. Stream is a bit better than CUDA in terms of output image quality, but CUDA transcodes faster. Both are slower than Quick Sync, and the latter also produces better image quality.


----------



## tkin (Jan 13, 2011)

vickybat said:


> @ *Tkin*
> 
> Yes buddy, I totally agree with you. Stream is a bit better than CUDA in terms of output image quality, but CUDA transcodes faster. Both are slower than Quick Sync, and the latter also produces better image quality.


You can't use Quick Sync unless you hook a monitor up to the integrated graphics. Intel sucks.


----------



## vickybat (Jan 13, 2011)

@ tkin

Yeah, you got that right again. Let's see what Z68 brings to the table.

This is off-topic, though. We should continue it in the "Intel Sandy Bridge released" thread.


----------



## clear_lot (Jan 17, 2011)

> CUDA is used mostly for transcoding and in some processing software. In terms of quality, both CUDA and Stream suck: when I use MediaEspresso to convert video using Stream, the end result is disgustingly horrible and pixelated, and the video also hitches. In terms of quality, x86 rules.



+1
I've been using MediaEspresso for a few days, and using CUDA to transcode video makes the output *JERKY*. I don't know why.
Most of the review sites that do this testing never mentioned it. Even AnandTech, which compared the video quality of CUDA, Stream, x86 and Quick Sync, never said this.


----------



## rchi84 (Jan 26, 2011)

To me, the arguments for and against the 6950 come down to this.

Comparisons to the 570 are unfair because that card belongs in a higher price segment. The 6970 and 570 trade blows in a couple of games, and the 570 dominates in "TWIMTBP" titles, not surprisingly. But for a lot of people, the money saved by opting for the 6950 instead of the 6970 can be better spent on things like cooling, for example.

But at the current 17K price point, the 6950 2GB model is a phenomenal package, simply because I think the frame buffer advantage will come into play at 1080p as newer games are released (Skyrim, wink wink). Even if you don't game at 1600p, the added frame buffer helps with AA and AF. And even if the 6950-to-6970 mod isn't stable or doable on your card, it has great performance on its own.

Tessellation is something Nvidia is much better at than AMD, and that's not going to change this generation, though with the 6xxx series AMD isn't completely lost. If tessellation is a game changer for you, there is no choice but Nvidia.

CUDA vs Stream has never been an issue for me, as I only look at gaming. Transcoding isn't an issue either, as I always leave my PC on all night for such purposes. So whether a job takes half an hour or two minutes doesn't affect me when I'm asleep, lol.

Now for PhysX: from my understanding, when you enable it, your fps takes a massive hit on anything except the highest-end cards like the 480 or 580. The recommended way to enjoy PhysX is a dedicated Nvidia card in addition to your main GPU. That means additional investment in a card, a bigger PSU, and an SLI or CrossFire motherboard, besides the potential for more heat inside the cabinet.

And besides, there is no game out there that won't run with PhysX disabled, simply because the installed user base isn't large enough. Every current game can be run without PhysX with the assurance that core gameplay won't be affected by its absence. Current PhysX implementations are mostly cosmetic, because no developer will take the risk of introducing mechanics that can only be implemented with PhysX. So if you decide not to go the PhysX route, you haven't lost much, besides watching debris, dust, cloth and paper fly around, which, to be honest, I give no more than a cursory glance while playing.

At the moment, if you've got the money for it, the 6950, even more than the 6970, is unmatched for the performance you get in its price segment.

Of course, I could be wrong on a lot of things..


----------



## tkin (Jan 26, 2011)

rchi84 said:


> To me, the arguments for and against the 6950 come down to this.
> 
> Comparisons to the 570 are unfair because that card belongs in a higher price segment. The 6970 and 570 trade blows in a couple of games, and the 570 dominates in "TWIMTBP" titles, not surprisingly. But for a lot of people, the money saved by opting for the 6950 instead of the 6970 can be better spent on things like cooling, for example.
> 
> ...


I have only one issue with AMD, and that is drivers. The latest 10.12 driver (11.1a is the hotfix) BSODs when the monitor goes to sleep, as reported by HD6xxx, HD5xxx and even HD4xxx users, so it seems to be a general bug. It corrupted my torrent downloads (uTorrent) and I had to go through a lot of hassle just to get things working again. The first 11.1a driver also broke OpenGL tessellation, so AMD's driver team is absolutely crap right now. I have been badly burned by the HD5850 (just like the X1900XTX burned me and made me jump ship before); my next card will be from Nvidia.

PS: The 570 is mostly faster than the 6970, and you also get PhysX with it (and with the 570's massive processing power you can actually use it in many games, except maybe Metro). Also, currently the only H.264 codec I know of that can use GPU acceleration is CoreAVC, and it does not support AMD. I like to play videos in WMP (for its minimalistic look), and CoreAVC lets me use the Nvidia GPU for that; there is no AMD equivalent. AMD's own codecs (Avivo) get stuck on many formats (and third-party tools like MediaEspresso also get stuck, since they use Avivo instead of their own codecs), whereas Nvidia doesn't provide codecs but opens up their interface, so MediaEspresso runs flawlessly.


----------



## Cilus (Jan 26, 2011)

Tkin, CoreAVC is not the only H.264 decoder to use GPU acceleration; there are others. Examples are the FFDShow DXVA decoder and Media Player Classic Home Cinema's internal video decoder filters, which support ATI Stream along with CUDA.
I use MPC-HC with lots of post-processing shaders and it plays such content very smoothly without any hassle. I've also checked the GPU and CPU usage while playing it using Task Manager and GPU-Z: CPU usage never goes over 15-20% and GPU usage is around 30 to 40%.


----------



## tkin (Jan 29, 2011)

Cilus said:


> Tkin, CoreAVC is not the only H.264 decoder to use GPU acceleration; there are others. Examples are the FFDShow DXVA decoder and Media Player Classic Home Cinema's internal video decoder filters, which support ATI Stream along with CUDA.
> I use MPC-HC with lots of post-processing shaders and it plays such content very smoothly without any hassle. I've also checked the GPU and CPU usage while playing it using Task Manager and GPU-Z: CPU usage never goes over 15-20% and GPU usage is around 30 to 40%.


Ffdshow dxva has issues with some video files(1080p), coreavc is by far the most stable ever, also coreavc does not get stuck for a few sec when seeking 1080p video like ffdshow dxva does.


----------



## vickybat (Jan 29, 2011)

@ tkin 

I sort of agree with you. I'm currently using the DXVA decoder in Media Player Classic Home Cinema and having issues playing 1080p content, like the video freezing up at times. Even seeking is a problem. My video card is an Asus Radeon 5750.


----------



## Cilus (Jan 29, 2011)

I think you are using the Haali Media Splitter for splitting MP4 and MKV files. If not, use it along with the latest FFDShow build. Also, the latest Media Player Classic Home Cinema's built-in filters are very good. I have tried it with an 11 GB 1080p rip of Avatar. No problem.


----------



## tkin (Jan 29, 2011)

Cilus said:


> I think you are using the Haali Media Splitter for splitting MP4 and MKV files. If not, use it along with the latest FFDShow build. Also, the latest Media Player Classic Home Cinema's built-in filters are very good. I have tried it with an 11 GB 1080p rip of Avatar. No problem.


Yes, I use the Haali splitter and decode the H.264 stream with FFDShow (not the latest; a few months old, since updating codecs is a total pain). The filters in MPC-HC are nice, but if I change filters a lot (say 5-6 times) the graphics driver crashes (VPU recovery error), so I just stick to plain DXVA in MPC-HC with no filters; the hassle is way too much.

Again, I blame AMD drivers for the crashes; using Nvidia I never experienced a VPU crash with MPC-HC and filters.


----------



## rchi84 (Feb 9, 2011)

Interesting link reviewing the PowerColor PCS++ 6950.

PowerColor Radeon PCS++ 6950

The best thing is that PowerColor has included two BIOSes on this card. The first one runs reference clocks, and the second one increases the GPU clock to 880 MHz AND unlocks the additional shaders.

That's right: PowerColor has released a card that comes close to 6970 specs straight out of the factory. The only thing separating the two is that the 6950's memory comes clocked slower, but a little OCing takes care of that.

The card's MSRP is around $310, which is just slightly more than the regular 2GB model.

This has to be the best 6950 model to get!!


----------



## max_snyper (Feb 9, 2011)

Now that both camps have launched their upper-mid-segment cards, what's next in their lineups? Are they building GPUs from scratch, or staying on the same manufacturing process and just optimising the current architecture?
I've seen tons of reviews of the GTX 560 Ti and HD 6950. IMO the 6950 (1GB version) is just a filler in the ATI lineup, nothing else, though it performs well, if not best; Nvidia did the same with the 560, filling and optimising a gap in their lineup.

I read that AMD is working on Antilles (a dual-GPU solution), with a stated release after Chinese New Year, and that Southern Islands is the one really coming later. If so, when?
And what about Nvidia? Are they working on a new GPU or a new lineup of GPUs?

I read the heated debate on CUDA, Stream, PhysX, etc. IMO, if a person wants a graphics card for animation, rendering, image processing or whatever, he will go and buy a professional card from either camp; why would he look at the gaming series?
1. The professional series tends to give good rendering results (though the cards are costly).
2. Who will use a gaming card expecting it to beat the professional series at rendering?
3. And why would a gamer make CUDA, Stream or PhysX the main criterion for selection? He will just read the reviews, see how the card performs in games, and buy as per his need.
All the technology the companies brag about is just a marketing gimmick; for the end user (the gamer) it's just an add-on feature to brag about.
And seriously, in all my exposure to gaming I haven't read a single review that put forward these "add-ons" as the unique selling point of any card. They just go on game performance, power consumption and efficiency against the last generation, that's it. Nor have I spotted any application that has proved beneficial or been boosted by these add-ons.
Performance in games, reliability of the card to support upcoming games (at least for a year or two) and power consumption are what matter when choosing a graphics card.
Think about it.
Just bragging about add-ons doesn't work; what works is how efficiently these add-ons do their job.
Well, that's just my opinion about the graphics card manufacturers.


----------



## vickybat (Feb 10, 2011)

^^ Well, when performance is more or less the same, add-ons come into play. I hope you get it.


----------



## topgear (Feb 10, 2011)

Gigabyte HD6950 1GB (GV-R695OC-1GD) clocked at 870 MHz! Probably the fastest, and longest, HD6950 1GB card.



> First of all, the cooling system. The card is equipped with WINDFORCE™ 3X, the latest cooling technology that differentiates the brand's graphics cards from the rest. WINDFORCE™ 3X features 3 ultra-quiet PWM fans. The special inclined triple-fan design effectively minimizes the flow of turbulence between the three fans. With a unique vapor chamber, WINDFORCE™ 3X is able to transfer heat from the hot spot to the cool spot as thermal energy evaporates to the surrounding air. By capillary action, the condensed liquid droplets circulate back to the base of the chamber. The cycles of evaporation and condensation enhance heat dissipation for greater cooling efficiency. Moreover, WINDFORCE™ 3X is equipped with three copper heat pipes to increase the speed of heat dissipation.
> 
> Next, the specification. The card has 1GB of GDDR5 on a 256-bit memory bus. The default core clock has been increased by 70 MHz, to 870 MHz, while the memory has not been changed compared to the reference card's specs. Another important thing to note is that the card belongs to the Ultra Durable line-up. Ultra Durable VGA series cards provide a dramatic cooling effect, lowering both GPU and memory temperature. By adopting a 2 oz PCB, Japanese solid capacitors, ferrite-core chokes and low-RDS(on) MOSFETs, Ultra Durable VGA turns the PCB itself into a big heat sink. According to Gigabyte's testing results, Ultra Durable VGA graphics accelerators can lower GPU temperature by 5% to 10% and memory temperature by 10% to 40%. These cards also feature reduced voltage ripple in normal and transient states, which effectively lowers power noise and ensures higher overclocking capability. The rest of the specs and capabilities do not differ from its 2GB brother.



*hw-lab.com/uploads/hardware/videocards/amd-cayman/gigabyte-hd6950-1gb/gigavyte-hd6950-1gb_1_x500.jpg

image courtesy of hw-lab.com

Gigabyte shows 1GB version of HD 6950 | hw-lab.com


----------



## vickybat (Feb 10, 2011)

^^ Should be the most expensive 6950 as well. That cooler looks really nice though.


----------



## Jaskanwar Singh (Feb 10, 2011)

max_snyper said:


> I read the heated debate on CUDA, Stream, PhysX, etc. IMO, if a person wants a graphics card for animation, rendering, image processing or whatever, he will go and buy a professional card from either camp; why would he look at the gaming series?
> 1. The professional series tends to give good rendering results (though the cards are costly).
> 2. Who will use a gaming card expecting it to beat the professional series at rendering?
> 3. And why would a gamer make CUDA, Stream or PhysX the main criterion for selection? He will just read the reviews, see how the card performs in games, and buy as per his need.
> ...



+1. Very well said. And PhysX is only good in a few games, like Mafia II and Metro. But just think what will happen to the 560 on enabling PhysX in Metro: fps is already 23 at 1920x1080, and turning PhysX on will drop it to 12-13.


----------



## vickybat (Feb 10, 2011)

^^ Well then, gaming at 1600x900 will give more fps with PhysX on. At least it's an option, and a good one to have. Review sites tested Metro at the Very High preset in DX11.

My card gives 30-40 fps in Metro at High settings (not Very High) at 1600x900 (still more than 720p). A 560 at these settings can easily give more than 60 fps, so enabling PhysX would bring fps down to my current levels, which are very much playable.


----------



## Jaskanwar Singh (Feb 10, 2011)

^^ You can say that, but people with these cards mostly have full-HD monitors, and when they pay 16k for a GPU they tend to game at full HD.

Add-ons are welcome, though.


----------



## max_snyper (Feb 10, 2011)

Hey, I also like add-ons; who doesn't?
The thing is, any technology that is implemented and marketed should work as it's supposed to.
The trend over the last couple of years is that these GPU manufacturers bombard us with new jargon and terms whose use the customers don't fully understand, keeping the main purpose aside (see what happened with the generation before last, the DX10 GPUs; they were totally killed by games rendered in DX10).
They just know how to make money on the jargon they publish; the end user doesn't even know what to do with these features. That is not a problem on this forum, but it is on a world scale: we know all the technical know-how of these technologies, but very few of us actually use them in our work.
There is still no game fully optimised for DX11, let alone for tessellation, ray tracing and all the other stuff present in DX11.
(And this goes for all the GPU giants, programmers, game developers, etc.)
I just hope they don't make DX11 a failure the way DX10 was; though DX10 was tough to code and render for, it was still a breakthrough to new rendering techniques.
DX10 was a failure itself, IMO (an uncooked meal from Microsoft).


----------



## topgear (Feb 11, 2011)

vickybat said:


> ^^ Should be the most expensive 6950 as well. That cooler looks really nice though.



After a hell of a lot of searching, I found the price:

NZ$480 (New Zealand dollars)
Video Cards - PCI Express - Gigabyte GV-R695OC-1GD Video Card, Radeon HD 6950, 1024MB, DDR5, PCIe-16, DVI, HDMI, CrossFire
Converted: about US$368

Wait a little; once Newegg and the other major online shops get this card in stock, the price will come down for sure.


----------



## rchi84 (Feb 11, 2011)

With such close performance, I think we are due for some price cuts this month. The success of the 6950 BIOS flash makes it impossible to consider a 6970 anymore; it's definitely not worth the $50 price difference.


----------



## rchi84 (Feb 12, 2011)

W00t! Another PowerColor 6950 PCS++ review.

PowerColor HD 6950 PCS++ 2 GB Review - Page 1/28 | techPowerUp

At $305 (around 14K; expect another 3K on top of that here), this looks to be an awesome bargain, especially if the newer batches of the 6950 have started hitting shelves without the additional shaders.

The interesting thing is that they managed to OC the memory up to 1600 MHz!

I think this will be released in limited numbers, and I'm not hopeful of seeing this card reach Indian shores anytime soon.


----------

