# 7950 vs 660ti??



## bunnycool (Jan 16, 2013)

Which would be more future-friendly?


----------



## Rajat Giri (Jan 16, 2013)

The HD 7950 is faster and more future-proof. It can max out any current game and upcoming big titles like Crysis 3 and GTA V.
The Sapphire HD 7950 Vapor-X OC 3GB is the way to go...


----------



## vickybat (Jan 16, 2013)

bunnycool said:


> which would be more future friendly..



Get a 660 Ti if you want to game at 1080p. It's almost as fast as a 7950 at 1080p.
It also has TXAA support, which has been implemented in a number of titles like AC3 and COD: Black Ops 2 with very positive results.
However, Crysis 3 is the most notable and awaited game for testing TXAA's advantages, as it will support it too.

TXAA is reported to have a much smaller performance hit than MSAA (being far less dependent on the memory bus) while offering much better AA.
Crysis 3 will shed more light on all this.

There are also reports of the 79xx series having frame-time lag or latency issues, which cause minor lag in single-GPU mode and noticeable stuttering in CrossFire.
There are ways to fix these, but at the cost of an FPS drop.

Considering all this, I think the 660 Ti is the better option if gaming is all you do, especially at 1080p.
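The frame-time point is worth illustrating, because average FPS can completely hide it. Here is a minimal Python sketch (the two frame-time traces are made up, purely illustrative): both runs average roughly 60 FPS, yet one has occasional long frames of the kind those reports describe, which the 99th-percentile frame time exposes.

```python
# Two hypothetical one-second frame-time traces, in milliseconds.
smooth = [16.7] * 60                      # steady ~60 FPS
spiky = [12.0] * 54 + [58.7] * 6          # same total time, but with long frames

def avg_fps(frame_times_ms):
    """Average FPS: frames rendered divided by total elapsed time."""
    return len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

def pct99_frame_time(frame_times_ms):
    """99th-percentile frame time: 99% of frames finish faster than this."""
    ordered = sorted(frame_times_ms)
    return ordered[int(0.99 * len(ordered)) - 1]

print(avg_fps(smooth), avg_fps(spiky))                    # both ~60 FPS
print(pct99_frame_time(smooth), pct99_frame_time(spiky))  # 16.7 vs 58.7 ms
```

Nearly identical averages, but a worst case over three times longer: that gap is exactly the stutter the frame-latency reviews are measuring.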


----------



## vkl (Jan 16, 2013)

Get the HD 7950. The HD 7950 is better than the GTX 660 Ti.
Many reviews even show the GTX 660 Ti being beaten by the Tahiti LE (HD 7870 XT) with the new drivers from both camps.


----------



## hitman4 (Jan 16, 2013)

I think the GTX 660 Ti provides smoother graphics and a better gameplay experience.
The GTX 660 Ti is 19k and the HD 7950 is 22k,
so the GTX 660 Ti is the better choice IMO.



vkl said:


> Get the HD 7950. The HD 7950 is better than the GTX 660 Ti.
> Many reviews even show the GTX 660 Ti being beaten by the Tahiti LE (HD 7870 XT) with the new drivers from both camps.



Please give links to those reviews.


----------



## ghost_z (Jan 16, 2013)

With the new drivers and the GHz Edition, the HD 7950 is a better investment. Moreover, I have read about those frame-latency issues; AMD has released an official statement recognizing the problem, and a driver update is in the works to resolve it!
As of now the HD 7950 is a bit ahead of the GTX 660 Ti, but due to the frame-latency issues the gameplay experience is slightly smoother on the GTX 660 Ti. Since the solution is already in the works, it wouldn't hurt to wait and watch before deciding!


----------



## vkl (Jan 16, 2013)

hitman4 said:


> I think the GTX 660 Ti provides smoother graphics and a better gameplay experience.
> The GTX 660 Ti is 19k and the HD 7950 is 22k,
> so the GTX 660 Ti is the better choice IMO.
> 
> ...



- PowerColor Radeon HD 7870 MYST Edition review: HardOCP
- Sapphire HD 7870 XT review: HardwareHeaven
- PowerColor PCS+ HD 7870 MYST Edition review: HardwareHeaven
- PowerColor PCS+ HD 7870 MYST Edition


----------



## Myth (Jan 16, 2013)

I think those reviews haven't been fair to Nvidia. It looks like custom cards for AMD versus reference cards for Nvidia, unless I missed something.
The 660 Ti is a direct competitor to the 7950.
The 7870 XT is basically a scaled-down version of the 7950; it is not expected to compete with the 660 Ti.


----------



## vkl (Jan 16, 2013)

The HD 7870 XT (Tahiti LE) has a default clock of 925MHz and a boost clock of 975MHz, and the tests were done at those stock specs without any overclock, so the comparison is fine.
Also, Tahiti LE's performance is not far off the HD 7950's. It does its job well at its price point.


----------



## vickybat (Jan 16, 2013)

*@ Myth*

Yup, AMD hasn't released a reference card for the 7870 XT; it's all up to the board partners to come up with their own versions.
This generation, frame rates are not an outright measure of performance as things stand, especially with AMD. The way they have managed to raise performance with the new drivers definitely has a catch.
It's not very clear yet, but AMD admitting to that frame-time issue is really something.

Even in this review, a 7970 CFX setup giving consistently higher frame rates than a 680 SLI was not praised for its performance.
Somehow those extra frames were not evident in the real gaming experience, which was inferior in most cases too.

HARDOCP - Introduction - ASUS GeForce GTX 670 DirectCU II 4GB SLI Review

Check especially this:

Screenshots | [H]ard|OCP









> While Radeon HD 7970 GHz Edition CrossFire has a huge performance advantage, it was a very choppy and stuttery experience, which is not reflected in the framerate. GTX 680 SLI is faster than ASUS GTX 670 4GB SLI.






So I have some serious doubts.

Our member "The Incinerator" will post some AA visual tests conducted on a GTX 680; according to him, AA performance has been more consistent and smooth since the 310.90 drivers.
He might post some screenshots as well.

I don't want to derail the OP here, just putting in my points. Maybe for single GPUs these issues aren't that evident, but one reviewer has posted some interesting results even for a single GPU.

Anyway, it's up to the OP to decide.


----------



## havoknation (Jan 16, 2013)

Radeons always suffer from driver problems: whenever a new game is released, AMD launches a performance fix for that particular game 2-3 weeks later, which spoils our gaming experience. Nvidia is trusted for better image quality and better driver support. I think you should go for the 660 Ti without a second thought.

And yes, the Zotac GTX 660 Ti is retailing for 19k. You can save a straight 3k if you go for it.


----------



## bunnycool (Jan 16, 2013)

The 7950 is a very good card, and yes, with the latest drivers it outperforms most of its competitors. But the thing is, if a new game comes out, I would have to wait for new drivers to get the best performance in it...




havoknation said:


> Radeons always suffer from driver problems: whenever a new game is released, AMD launches a performance fix for that particular game 2-3 weeks later, which spoils our gaming experience. Nvidia is trusted for better image quality and better driver support. I think you should go for the 660 Ti without a second thought.
> 
> And yes, the Zotac GTX 660 Ti is retailing for 19k. You can save a straight 3k if you go for it.


I can even get an MSI 660 Ti at 19k.


----------



## theproffesor (Jan 16, 2013)

Flip a coin!
Get whichever you want; both cards will be great for 1080p.
If you don't overclock, get the GTX 660 Ti, but if you overclock and want more performance, then the HD 7950 is the way to go. And guys, don't talk rubbish about AMD drivers; I haven't had any problems with them!


----------



## techdabangg (Jan 17, 2013)

@vickybat - 
Agreed, Nvidia has TXAA, but *what about the GCN architecture* and the loads of other things in AMD's favour? Also, *Nvidia has not optimized Kepler's scheduler for compute performance, which will eventually make Kepler suffer*. In my opinion the GCN architecture is superior to Kepler.

Please *don't make TXAA out to be such a magic word*: saying "TXAA offers much better AA" is very subjective and varies from person to person. There are people out there who prefer MSAA and don't care for TXAA, since it actually makes graphics look plasticky instead of realistic. The amount of blur TXAA applies is roughly equal to that used in film CGI. The problem is that that blur is part of the "experience" and image quality of film running at 24 FPS, so it looks really out of place in a game that is neither a film nor running at 24 FPS.

Now, regarding micro-stutter in current-gen games: in my own hands-on experience the HD 7950 suffers only in Far Cry 3 (*don't take me for an AMD fanboy; I have tested both the HD 7950 and the 660 Ti, and both are good cards*), but the *HD 7950 is a hell of an overclocker*, which significantly increases its value for money. Micro-stutter is a slight cause for concern only with multi-GPU setups, where even Nvidia cards suffer from it in many titles. All the other games, like AC III, Hitman: Absolution, Max Payne 3, Battlefield 3, Crysis 2 and Metro 2033, run absolutely without any issues on the HD 7950. It all depends on how well games support GCN or Kepler. See, in Hitman: Absolution the HD 7950 performs far better than the 660 Ti - source: hardocp.com

*www.hardocp.com/images/articles/1355966357333LikoxjM_4_3.jpg


Even in Far Cry 3 the HD 7950 is slightly faster than the 660 Ti - source: hardocp.com

*www.hardocp.com/images/articles/1355517972SmtzmJYEeY_6_3.gif

In fact, here is what they say in their conclusion for Hitman: Absolution:

> "The GeForce GTX 680 and Radeon HD 7950 with Boost video cards seemed to be on par in this game. We found 1080p playable with 2X MSAA and Enhanced Alpha to Coverage. With the GeForce GTX 660 Ti we had to turn off Alpha to Coverage, but that allowed 2X MSAA to be playable. Whereas with the Radeon HD 7870 GHz Edition we had to turn off AA altogether. It is clear the GTX 660 Ti is sitting pretty between the 7870 GHz Edition and 7950 Boost in this game."

Here is what they say in their conclusion for Far Cry 3:

> "If you want the best from Far Cry 3 we would probably skip the Radeon HD 7870 GHz Edition, and would even be wary of the GTX 660 Ti. The GTX 660 Ti performed better than the 7870 GHz Edition, but we still couldn't enable Alpha to Coverage unless you have a very highly clocked card. I would probably set my sights on the Radeon HD 7950 Boost or GeForce GTX 670 as the lowest end card for Far Cry 3 to get the best experience at 1080p, and then I'd overclock both."

And one more thing to add: people have been saying that 3GB of memory on a graphics card is overkill for 1080p, but in reality the games due for release this year will make use of it, and that is where the 384-bit bus of the AMD cards will pay off.

So basically, I really don't see any advantage of the 660 Ti over the HD 7950.

*@havoknation* - AMD driver issues only show up with multi-GPU setups, and even Nvidia takes 2-3 weeks to release its optimized drivers. In my opinion the hierarchy goes HD7870 --> GTX660Ti --> HD7950 --> GTX670 --> HD7970 --> GTX680; hence the 3k price difference between the two cards under discussion. But then the HD 7950 overclocks to HD 7970 levels and totally justifies its 22k price.


----------



## Chaitanya (Jan 17, 2013)

GeForce GTX 660 Ti Review: Nvidia's Trickle-Down Keplernomics : The Kepler Trickle-Down Continues


After browsing through benchmarks, what I can conclude is that both cards are good performers in their own fields (e.g. at image processing the lowest-end AMD card outperforms the biggest Kepler, but when it comes to video processing the lowest-end Kepler easily beats the highest-end AMD card).

If I were you I would wait a bit and look at how the cards perform in C3 and GTA V; it's only about 2-3 weeks away, and then the deal is done.


Yeah, I forgot to add: with Nvidia you get 3D Vision and with AMD you get HD3D. So if you want 3D, Nvidia is the way to go.


----------



## ghost_z (Jan 17, 2013)

One last point from my side: do take DirectCompute performance into the equation, as it's going to be used in many upcoming games and AMD is much better at it!


----------



## amjath (Jan 17, 2013)

But will the narrower 192-bit memory bus hold back the 660 Ti in future games in any way?


----------



## Cilus (Jan 17, 2013)

amjath said:


> But will the narrower 192-bit memory bus hold back the 660 Ti in future games in any way?


Yes, it hurts badly if you use high levels of conventional AA and AF, unless games use special post-process AA like FXAA or SMAA. TXAA is not magic and is not at all proven to date. If future games use a high level of AA, the 384-bit bus will always come in handy, and that is something you cannot overcome with software-level optimization on a 192-bit memory bus.
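A quick sketch of the arithmetic behind this (reference-card figures quoted from memory, so treat them as approximate; partner cards ship with different memory clocks): peak memory bandwidth is just the bus width in bytes times the effective memory transfer rate.

```python
def peak_bandwidth_gbs(bus_width_bits, effective_mem_clock_mts):
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (MT/s) / 1000."""
    return (bus_width_bits / 8) * effective_mem_clock_mts / 1000.0

# Approximate reference-card figures:
gtx_660_ti = peak_bandwidth_gbs(192, 6008)  # ~144 GB/s
hd_7950 = peak_bandwidth_gbs(384, 5000)     # ~240 GB/s
print(gtx_660_ti, hd_7950)
```

So the 7950 has roughly two-thirds more raw bandwidth to soak up the extra memory traffic that conventional MSAA generates.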


----------



## N@m@n (Jan 17, 2013)

vickybat said:


> Get a 660 Ti if you want to game at 1080p. It's almost as fast as a 7950 at 1080p.
> It also has TXAA support, which has been implemented in a number of titles like AC3 and COD: Black Ops 2 with very positive results.
> However, Crysis 3 is the most notable and awaited game for testing TXAA's advantages, as it will support it too.
> 
> ...





hitman4 said:


> I think the GTX 660 Ti provides smoother graphics and a better gameplay experience.
> The GTX 660 Ti is 19k and the HD 7950 is 22k,
> so the GTX 660 Ti is the better choice IMO.
> 
> ...





bharadghost said:


> With the new drivers and the GHz Edition, the HD 7950 is a better investment. Moreover, I have read about those frame-latency issues; AMD has released an official statement recognizing the problem, and a driver update is in the works to resolve it!
> As of now the HD 7950 is a bit ahead of the GTX 660 Ti, but due to the frame-latency issues the gameplay experience is slightly smoother on the GTX 660 Ti. Since the solution is already in the works, it wouldn't hurt to wait and watch before deciding!





havoknation said:


> Radeons always suffer from driver problems: whenever a new game is released, AMD launches a performance fix for that particular game 2-3 weeks later, which spoils our gaming experience. Nvidia is trusted for better image quality and better driver support. I think you should go for the 660 Ti without a second thought.
> 
> And yes, the Zotac GTX 660 Ti is retailing for 19k. You can save a straight 3k if you go for it.


Check this out: A driver update to reduce Radeon frame times - The Tech Report - Page 1... *AMD has fixed the frame latency in the upcoming 13.2 beta driver*


----------



## vickybat (Jan 17, 2013)

^^ Thanks a lot for the link, N@m@n. Finally the issue has been addressed, thanks to the 13.2 drivers.
AMD's driver team has made some serious improvements this time, I think.

I take back my previous words; now we can safely recommend Radeon products without second thoughts. I guess CrossFire will be smooth from now on too.
The OP can now choose the 7950 with all issues resolved.

Also, thanks to *The Tech Report* for bringing this issue out into the open and forcing AMD to take serious action.


----------



## Myth (Jan 17, 2013)

Time to buy a 7950


----------



## ico (Jan 17, 2013)

HD 7950 > GTX 660 Ti.



techdabangg said:


> Please *don't make TXAA out to be such a magic word*: saying "TXAA offers much better AA" is very subjective and varies from person to person. There are people out there who prefer MSAA and don't care for TXAA, since it actually makes graphics look plasticky instead of realistic. The amount of blur TXAA applies is roughly equal to that used in film CGI. The problem is that that blur is part of the "experience" and image quality of film running at 24 FPS, so it looks really out of place in a game that is neither a film nor running at 24 FPS.


Finally someone who tried TXAA out. +1.


----------



## hitman4 (Jan 17, 2013)

N@m@n said:


> Check this out: A driver update to reduce Radeon frame times - The Tech Report - Page 1... *AMD has fixed the frame latency in the upcoming 13.2 beta driver*


Nice one!
Now I would advise the OP to go for the 7950.


----------



## amjath (Jan 18, 2013)

> Our report on this driver was delayed by a couple of factors, including our attendance at CES and an apparent incompatibility between this beta driver and our Sapphire 7950 card.
> 
> We still haven't figured out the problem with the Sapphire card, but we ultimately switched to a different 7950, the MSI R7950 OC, which allowed us to test the new driver.


A driver update to reduce Radeon frame times - The Tech Report - Page 1

There seems to be something wrong between Sapphire cards and this beta driver.


----------



## hitman4 (Jan 18, 2013)

amjath said:


> A driver update to reduce Radeon frame times - The Tech Report - Page 1
> 
> There is something wrong with sapphire cards and this beta driver


Not everyone is facing those problems; it's only them.
Maybe they were sent a faulty card.


----------



## S.S gadgets (Jan 18, 2013)

Cilus said:


> Yes, it affects badly if you use high conventional AA and AF and unless games are using special post Processing AA like FXAA or SMAA. TXAA is not any magic and not at all proven till date. In future games if you use high level of AA then the 384 bit bus will always come handy with it and this a thing you cannot overcome by using software level optimization for a 192 bit Bus Memory BUS.



But why are AMD cards slow at video processing?

Actually, most guys depend on the features of Nvidia cards, like PhysX effects, CUDA and all that stuff...


----------



## ico (Jan 18, 2013)

S.S gadgets said:


> But why are AMD cards slow at video processing?
> 
> Actually, most guys depend on the features of Nvidia cards, like PhysX effects, CUDA and all that stuff...


1) PhysX is dead.
2) AMD cards are way, way faster at compute. Nvidia is very slow at it.
3) Nvidia cards are faster at video transcoding because they include a better fixed-function video transcoder. But using it (and even Quick Sync) doesn't give you much control; most people use Handbrake on their CPU.
4) Nvidia cards are painfully slow at handling 3ds Max viewports. AMD is again much faster at it.

The things mentioned in this review are just selected cases, except Luxmark, which gives the right picture overall, i.e. Kepler being slower than Fermi:
OpenCL: GPGPU Benchmarks : GeForce GTX 660 Ti Review: Nvidia's Trickle-Down Keplernomics
*www.tomshardware.com/reviews/geforce-gtx-660-ti-benchmark-review,3279-13.html
OpenCL: Image Processing (Basemark CL) : GeForce GTX 660 Ti Review: Nvidia's Trickle-Down Keplernomics
OpenCL: Video Processing (Basemark CL) : GeForce GTX 660 Ti Review: Nvidia's Trickle-Down Keplernomics


----------



## S.S gadgets (Jan 18, 2013)

Thanks ico


----------



## vickybat (Jan 18, 2013)

Newer games supporting PhysX:

*1. Metro: Last Light (upcoming)
2. Hawken (free to play)
3. Lost Planet 3 (upcoming)
4. Borderlands 2
5. Arma 3 (upcoming)*

Most new games that will run on the upcoming *Unreal Engine 4* have a high chance of using PhysX, including the next Batman game.
So it's not as dead as people think; it's very much alive and well.
Calling it a deal maker or deal breaker is a different matter, but calling it obsolete is far from the truth.

As for GPGPU, applications using CUDA are targeted towards supercomputers. The most advanced supercomputers in the world, including *"Titan"* at Oak Ridge and the Cray systems, use CUDA-based applications for weather monitoring, protein folding and a lot of other work. The CUDA libraries are extremely vast, which is overkill for general-purpose use.

OpenCL finds itself in consumer apps as it's easier to develop for. Nvidia supports both, but its Kepler hardware lacks the fundamental units for double-precision vector processing; the GCN architecture thus performs great in OpenCL apps. However, the chips used in the above supercomputers (K20X) are built on GK110, Nvidia's newest iteration, which does support compute; there are reports of them being faster at compute than GCN-based workstation cards.
The next Kepler family of GPUs will support compute too.

Before debating compute performance, users have to ask themselves whether they are actually going to use apps that harness the GPU's compute ability via the OpenCL codepath. In games it does not matter at all, because both GPUs have enough resources to get DirectCompute-based jobs done.


----------



## techdabangg (Jan 19, 2013)

Hmmm.... some Nvidia fanboys here

1. We are talking about current-gen GPUs, not next-gen, so "the next Kepler family of GPUs will support compute" isn't a valid argument here. Apart from that, it's not that Kepler does not support DirectCompute; it's just that Nvidia didn't optimize the scheduler for compute performance, which they'll do for the upcoming Kepler refresh.

2. Agreed that PhysX is not dead, but the real question is: is it really necessary? I mean, I've played the Batman series on an HD 7970 GHz (without PhysX) and a GTX 680 (with PhysX). The visual oomph which PhysX gives isn't all that great. In fact, in the Batman series it's just some smoke in some corridors and newspaper cuttings flying helter-skelter. It's just a marketing ploy by Nvidia for PCs. Compare Batman: AC between the Xbox 360, which uses ATI's Xenos (no way it supports the PhysX libraries), and the PC (with an Nvidia 660 Ti). You'll see that the smoke/paper PhysX effects are present in the Xbox version without PhysX support. For PC they've just been omitted for non-Nvidia cards. Why and how? It's a developer-Nvidia nexus.
   Nvidia has always had the upper hand over AMD/ATI in marketing tricks.

3. DirectCompute is already being used in many games, so it does matter in games as well, e.g. Civilization V -

    *images.anandtech.com/graphs/graph6025/47486.png



@vickybat - Please don't take me the wrong way. I'm just putting down here what I've experienced myself; I test products extensively for a third-party multinational technology firm.


----------



## Cilus (Jan 19, 2013)

First of all, please don't bring supercomputer designs in here to prove Nvidia's supremacy. Is the OP going to buy a Tesla-based GPU? No, so there is no point discussing how they perform. The dedicated double-precision modules are missing from Kepler gaming cards because they generate a lot of heat in a card that needs to do a huge amount of real-time processing, not because Nvidia wanted AMD to win this time by providing an inferior chip.

And GCN delivers better compute performance of any kind than any gaming card from Nvidia. So, supercomputers aside, it is a very good buy.


Regarding CUDA and OpenCL, vickybat, you are getting the wrong picture. It is not that CUDA is only for supercomputers and OpenCL only for general use. CUDA is not just a programming language; it is the name of Nvidia's GPU stream-processor architecture and computing platform. OpenCL is also used in supercomputing applications; CUDA is not the only GPU coding methodology. And just bringing up Titan doesn't prove that Nvidia is superior at supercomputing. Supercomputer performance mainly depends on the number of devices connected in the chain, not on the architecture of each component. Titan currently has the highest number of workstation GPUs attached, and that's why it is the fastest, not because it uses only Nvidia hardware.

And where did you get the idea that CUDA is just for supercomputing and too vast for normal usage? CUDA, OpenCL and DirectCompute share more than 60% of their library functions but handle hardware differently. CUDA is restricted to running only on Nvidia's unified stream architecture, while OpenCL can target any SIMD (Single Instruction, Multiple Data) based architecture, irrespective of its nature and vendor. It can even target an APU or a multi-core processor.






> OpenCL finds itself in consumer apps as it's easier to develop for. Nvidia supports both, but its Kepler hardware lacks the fundamental units for double-precision vector processing; the GCN architecture thus performs great in OpenCL apps.



OpenCL is not at all just a consumer-app development API; it is even more useful than CUDA for designing supercomputer apps. Don't write whatever you think without getting the facts right. Buddy, OpenCL offers far better portability than CUDA, and if coded properly it performs as well as CUDA, with the added benefit of running on any SIMD design. Just giving the example of Titan doesn't prove that Nvidia is better at supercomputing; they have simply been in that business for a long time, that's all.


----------



## vickybat (Jan 19, 2013)

techdabangg said:


> Hmmm.... some Nvidia fanboys here



Nope, no fanboys here.



techdabangg said:


> 1. We are talking about current gen GPUs and not next gen. So "The next Kepler family of gpu's will support compute" isn't valid argument here. Apart from this its not like Kepler does not support Direct Compute, its just that nVidia forgot to optimize scheduler for compute performance, which they'll do for upcoming Kepler Refresh.



Well, we are talking about current GPUs here. GK110 is already out in the form of the K20X; it's just limited to a specific usage set for now, and consumer cards are a while away from launch.
As for optimizing the scheduler, you are partially correct there.

HPC, or high-performance computing, is the key to what we term compute performance or GPGPU computing. The logical units involved here have to be capable of working on double-precision arithmetic (integer or float). A GPU's highly parallel architecture makes it the best fit for incorporating multiple units that compute double-precision arithmetic. GCN does this, and so do Intel's Knights Corner based co-processors.

Now, Kepler GK104 (the GTX 680 and its derivatives) did not have dedicated units to help with DP arithmetic; DP work was handled by the conventional streaming units (CUDA cores), with instructions scheduled by the warp schedulers. Each SMX has 4 warp schedulers. A warp is simply a group of 32 threads scheduled together for dispatch and execution, so each cycle an SMX can dispatch instructions from 4 warps (4 x 32 threads) across its execution units.

*Now the key in GK110 is that double-precision instructions are grouped and dispatched alongside single-precision instructions*, so that the CUDA cores work on single-precision instructions while the dedicated DP units handle double-precision only. Its compute performance is thus much, much higher, because it has physical units that assist compute; GK104 did not have these.
The full GK110 has 64 DP units per SMX and a total of 15 SMX. A consumer version might have fewer DP units per SMX, say 32. Kepler GK104 has zero DP units.
So it's not only the scheduler; the actual physical execution units that assist compute matter too.
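A back-of-the-envelope check of those unit counts (assuming, as is conventional, that each DP unit retires one fused multiply-add, i.e. 2 FLOPs, per cycle; the 14-SMX configuration and ~732 MHz clock are the published K20X figures):

```python
def peak_dp_gflops(smx_count, dp_units_per_smx, clock_mhz):
    """Peak double-precision GFLOPS: one FMA (2 FLOPs) per DP unit per cycle."""
    return smx_count * dp_units_per_smx * 2 * clock_mhz / 1000.0

# The K20X ships with 14 of GK110's 15 SMX enabled:
k20x = peak_dp_gflops(14, 64, 732)  # ~1312 GFLOPS, in line with the ~1.31 TFLOPS spec
# GK104 has zero dedicated DP units, which is why its compute numbers collapse.
print(k20x)
```

The same formula with `dp_units_per_smx = 0` makes GK104's weakness obvious: without dedicated units, DP work falls back on the CUDA cores at a fraction of the rate.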

Refer to the diagram below, since it shows both GK110 and GK104:






*i.imgur.com/JmzcEEz.png
*i.imgur.com/CLmmCan.png








techdabangg said:


> 2. Agreed about PhysX being not dead, but the real question is - is it really necessary? I mean, I've played batman series on HD7970GHz (without PhysX) and GTX680 (with PhysX). The visual oomphs which PhysX gives isn't that much great. In fact, in Batman series its just some smoke in some corridors or new-papers cuttings going helter-scalter. Its just a marketing ploy by nvidia for PCs. Compare Batman AC between XBOX360, which uses ATI xenos (no-way it supports PhysX libraries) and PC (with nvidia 660Ti). You'll see that the smoke/papers etc, physX effects are present in Xbox version without PhysX support. For PC they've just been omitted for non-nvidia cards. Why and How? Its a developer-nvidia nexus.
> Nvidia has had upper hand in marketing tricks over AMD/ATI always.



I would say yes. Physics is necessary, and PhysX is simply a GPU implementation of it. The engine is similar to CPU-based physics engines like Havok and Bullet; in fact PhysX's library set is quite similar to Havok's, but it runs on the GPU instead. As for visual oomph, you don't expect physics code to do ray tracing, ambient occlusion or HDR, do you? It is only meant to make the behaviour of objects in the game world comply with the laws of physics in the real world as closely as possible. Flying newspapers, scattering ash particles, ricocheting bullets, explosions, cloth movement etc. are the main things a physics engine takes care of, and all of these definitely add to the visual oomph.

Besides, Batman: Arkham City is by far one of the best games to implement PhysX, and it looks way, way better than on the Xbox 360. The physics engines used on consoles are different; they have engines they are able to use, and thus both the PS3 and Xbox 360 versions show physics effects. But the PC version employs Nvidia's PhysX engine, which has been locked away from AMD cards.

But Arkham City's PC graphics and physics effects are a lot better than its console counterparts'; the first scene itself will tell you. The console version has a much toned-down version of the ash-scattering effects throughout the city, along with the hail. I've compared in-game console videos with my friend's 6870 CF + GT240 based system; Arkham City just looks amazing on PC with PhysX effects. It's not that AMD cards cannot do these things; it has been deliberately shut off for them, and we know the reason.

One thing I'll agree on is that the current PhysX implementations in games don't really justify running physics code on a GPU, since to date there have been no effects that couldn't be done by other physics engines running on the CPU. So it's still a work in progress, and maybe in future we'll see multiple physics engines developed to run on the GPU, with huge library sets that can harness the GPU's highly parallel computing abilities.



techdabangg said:


> 3. Direct Compute is being used in many games as of now as well, so it does matter in games as well. e.g. Civillization V -
> 
> 
> 
> ...



I agree that DirectCompute is used in many games. What I meant is that the in-game implementations of DirectCompute do not harness a GPU's absolute compute potential.
Basically, DirectCompute is used in games to implement ambient occlusion. Far Cry 3, Hitman: Absolution and Sleeping Dogs are some great titles that use DirectCompute.
But we haven't seen GCN cards hold an edge over Kepler in AO performance in these games. Kepler performs toe to toe with GCN and can apply the highest AO setting (HDAO) while delivering similar frame rates.

The CUDA cores of Kepler have enough horsepower to utilize DirectCompute, and it isn't getting crippled at all. That was my point.
The absolute compute power of GCN is much higher than Kepler's, for the reasons given above, but it has yet to show this advantage in games.






techdabangg said:


> @vickybat - Please don't take me in any wrong way. I'm just putting in here what I've experienced myself. I do test products extensively for a 3rd party multi-national technology firm.



No mate, I haven't taken you the wrong way at all. We need members like you, who can contribute facts based on experience. Everybody is free to make their points as they wish.



Cilus said:


> Spoiler
> 
> 
> 
> ...



Read this:

*streamcomputing.eu/blog/2011-06-22/opencl-vs-cuda-misconceptions/

Although it's an old article, it gives more insight. It will throw light on why I spoke about the CUDA model and its libraries,
and also on why CUDA is used in supercomputer applications more than OpenCL. OpenCL has a lot of catching up to do as far as libraries are concerned; CUDA's math libraries are vast, and thus find use in supercomputers.


----------



## Cilus (Jan 19, 2013)

Guys, the name of the thread is GTX 660 Ti vs HD 7950, not CUDA vs OpenCL or Tesla vs AMD FirePro, so try not to go too far off topic.

I'll create a thread about supercomputers where you can discuss these things.


----------

