Pureoverclock: PC Hardware reviews and news for overclockers!


Posts Tagged ‘nvidia’

ASUS ROG Strix GTX 1080: GPU Fan Headers Say What!?!

The GTX 1080 has been making waves ever since its release last week. FinFET is finally here, and the wait has been entirely too long! NVIDIA's own Founders Edition left most of us thinking we'd just wait until the manufacturers bring out their own variants, but sometimes it feels like we already know what we're going to see. However, ASUS surprised me with a feature that sounds like the perfect addition for a high-end GPU: a fan header right on the board. You're probably thinking I've lost it at this point. "You're seriously getting excited about a fan header!?!" Just bear with me for a sec! Admittedly, this isn't the kind of feature the majority of users should be particularly concerned about, and the ROG Strix has plenty of other things going for it anyway, so I'm sure it's going to be a much-coveted card. Where this excites me is that I've been an SLI/CrossFire user for many years now. I understand all the impracticality of it, but what will never change is that I think a computer looks 500 times better with two cards. Unfortunately, these configurations also generate a lot more heat. Under an intensive load, the GPU fans can rev up pretty high, and even though your computer can stay quiet at idle, loads can bring more fan noise than desired. Most of the time, my case fan speeds are set in stone at a level that doesn't bot...
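
To put the appeal in concrete terms, here is a minimal sketch of the difference a GPU-board fan header makes: case fans can follow GPU temperature instead of sitting at one fixed speed. This is purely my own illustration in Python; the curve points and function names are assumptions, not ASUS's actual fan logic.

    # Hypothetical fan curve: case fans slaved to GPU temperature via the
    # board header, instead of a fixed motherboard-controlled speed.
    # All temperature/duty points below are made-up illustration values.

    def case_fan_duty(gpu_temp_c: float) -> int:
        """Map GPU temperature (C) to a case-fan duty cycle (percent)."""
        curve = [(40, 30), (60, 45), (75, 70), (85, 100)]  # (temp C, duty %)
        if gpu_temp_c <= curve[0][0]:
            return curve[0][1]
        for (t1, d1), (t2, d2) in zip(curve, curve[1:]):
            if gpu_temp_c <= t2:
                # Linear interpolation between neighboring curve points
                return int(d1 + (d2 - d1) * (gpu_temp_c - t1) / (t2 - t1))
        return curve[-1][1]

    print(case_fan_duty(38))   # idle GPU -> 30 (quiet)
    print(case_fan_duty(78))   # SLI under load -> 79 (airflow ramps with the cards)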




NVIDIA Keeps Saying Async Support…

"You keep using that term. I do not think it means what you think it means." - Inigo Montoya. Before anyone gets upset about my rant, I want to mention that AMD has been just as guilty of marketing hype. That's what Pascal's async support is looking like it amounts to, and I'm getting tired of companies borrowing our politicians' tactic of repeating a lie to convince "dumb" consumers to believe it. A.) We're smarter than that. B.) The GTX 1080 is such a great release that it doesn't need false hype to brag it up! Admittedly, while on a technicality the 1080 could be said to have async support, it's beginning to look like NVIDIA is using some marketing tricks to smokescreen what's really going on. A benchmark came out showcasing various tests with Ashes of the Singularity and a GTX 1080. While Pascal fared better than Maxwell, the results aren't particularly good. In some cases, we see performance drop below DirectX 11 again, and the gains that were made were practically unnoticeable. The key quote from WCCF for understanding how Pascal supports async is this: "Dynamic load balancing and improved pre-emption both improve the performance of async compute code considerably on Pascal compared to Maxwell. Although principally this is not exactly the same as Asynchronous Shading or Computing. Because Pascal still can't execute async code concurrently without pre-emption. ...
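
For anyone wondering why the distinction matters, here is a toy model of the two scheduling approaches. It is my own simplification in Python, not NVIDIA's or AMD's actual scheduler: true async shading lets a compute job hide inside a graphics job, while a preemption-based approach still pays for the work serially, plus context switches.

    # Toy numbers only; real frame and compute times are assumptions here.
    GRAPHICS_MS, COMPUTE_MS, CTX_SWITCH_MS = 10.0, 4.0, 0.1

    def preemption_model() -> float:
        # Compute preempts graphics, runs, then graphics resumes:
        # the work is still serialized, plus two context switches.
        return GRAPHICS_MS + COMPUTE_MS + 2 * CTX_SWITCH_MS

    def concurrent_async_model() -> float:
        # Concurrent async shading fills idle shader units with compute,
        # so the shorter job (ideally) disappears into the longer one.
        return max(GRAPHICS_MS, COMPUTE_MS)

    print(f"preemption:       {preemption_model():.1f} ms per frame")        # 14.2
    print(f"concurrent async: {concurrent_async_model():.1f} ms per frame")  # 10.0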




NVIDIA GTX 1080 Review Round Up

You might be wondering why there was so little news on the site about the upcoming Pascal GPUs from NVIDIA. That's mostly because the info was pretty iffy most of the time. The great thing about the hardware industry is that new releases can be very exciting, but they can also be over-hyped. I sat back for some time trying to decide whether and what I should cover about this release, ultimately deciding to wait until the reviews came in. Well, they're in, and the GTX 1080 is a monster of a GPU! Overall, NVIDIA looks like they really outdid themselves on this one, which is impressive considering how well they outdid themselves with Maxwell. The 1080 achieves performance that's almost impossible to believe and is certainly a worthy successor to the previous 900 series architecture. The only head-scratcher that has some reviewers puzzled is the high price of the Founders Edition. Outside of that, NVIDIA has knocked it out of the park. Here's a list of reviews that have been released already so that you can browse through them at your leisure.

GTX 1080 Reviews:
http://www.pcgamer.com/gtx-1080-review/
http://www.techspot.com/review/1174-nvidia-geforce-gtx-1080/
http://www.guru3d.com/articles-pages/nvidia-geforce-gtx-1080-review,1.html
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1080-pascal,4572.html
http://arstechnica.com/gadgets/2016/05/nvidia-gtx-1080-review/
https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080...




Is the New AMD Right Around the Corner?

Rumors are flying everywhere about Polaris, Pascal, Zen and even Kaby Lake. While I love following the rumors, not much is being leaked that gives a concrete idea of performance, price, etc. What is catching my eye is what AMD is doing right now. Last week brought the release of the 16.3 drivers. The release schedule on these drivers is certainly improving, but what really caught my eye was some of the fixes. The bug list is getting smaller, and that's the kind of improvement enthusiasts like to see in drivers. We have to wait for Polaris and Zen to know whether AMD's financial future will look brighter in the years to come, but the things I'm seeing now indicate that they are already turning things around for the better. I already touched on the 16.3 Crimson driver, but I will elaborate further on that. The bug fix I've been closely watching is one that involved AMD GPUs losing their clock speed settings during use. This is kind of a big deal, and I was fairly certain it was affecting those who were overclocking their video cards. March's driver fixed that issue and it's off the list of known issues. In fact, the known issues seem pretty minor now, with most of them related to new game releases. There is the issue of the Gaming Evolved app causing games to crash that I'm hoping gets resolved soon, but that's mostly because it keeps crashing my WoW. At least it's easy to close as a temporary fix, but for those who use...



Hitman Taking a Contract on Asynchronous Shaders

AMD has been talking up its Asynchronous Compute Engines pretty much since DirectX 12 was announced. In short, these are hardware components in AMD GPUs that can hopefully be leveraged to add significant performance in games. We've been waiting for the final word for some time, and while certain beta releases have shown some promise, only official releases will prove the benefit of asynchronous shaders and help determine whether DirectX 12 really is the next big thing for gaming. AMD just shared some info that the Hitman team has been working specifically with them to take advantage of asynchronous shaders, and it looks like we have about a month before the official release date. Whether or not Hitman is your kind of game, this will certainly be a big moment in the PC gaming industry. So keep your eyes peeled, because March 11th is the official release day for Hitman, and I'm sure tech sites will be looking into the performance with DirectX 12 and various AMD and NVIDIA GPUs. Below is the full statement from AMD: AMD is once again partnering with IO Interactive to bring an incredible Hitman gaming experience to the PC. As the newest member to the AMD Gaming Evolved program, Hitman will feature top-flight effects and performance optimizations for PC gamers. Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs—called asynchronous compute engines—to handle heavier workloads and better image quality with...




AMD Has a Response to GameWorks with GPUOpen

We all want better graphics in games, but we'd like those improvements not to utterly destroy our framerates. We're beginning a new era of PC gaming that is bringing better performance from the software side, rather than completely relying on GPU manufacturers to make beefier chips. NVIDIA GameWorks is a development toolkit released in 2014 that was supposed to not only give developers better hardware control but also offer some rich features for better in-game graphics. Unfortunately, the results led to performance hits in various situations, causing varied opinions of its benefit. Now AMD is finally responding with their own developer tool, called GPUOpen. AMD seems to be stepping up their game big time, and I think GPUOpen will end up being a great thing for PC gamers. As the name implies, GPUOpen is open source, which means there will be a lot more minds trying to optimize the features for performance, as well as the ability to share the code without repercussion. This is a good way to bring developers on board with AMD, and it should help with some of the performance disparity we've seen between NVIDIA and AMD GPUs in certain game titles. The other interesting thing about GPUOpen is what it could mean for console ports. Radeon technology is in just about every console right now, most notably the Xbox One and PS4. If the developer tool makes porting to PC that much easier, we may see more console exclusives make their way to PC, since development costs can be a major detractor. S...




What Does FinFET Mean for Gamers?

There are a ton of leaks and rumors circulating about the AMD Arctic Islands and NVIDIA Pascal GPUs slated to release next year. Right now, it's hard to get too excited about anything. Don't get me wrong, I believe the new releases are going to be phenomenal, but we're so far out from the release of these products that any actual performance numbers are still a long way off. However, there are still a couple of pieces of leaked information that are intriguing and, quite frankly, should be getting gamers very excited about the games of 2017 and beyond. FinFET is a new transistor design that helps shrink process sizes down to 14nm or 16nm. Both TSMC and GlobalFoundries have managed to get their processes mature enough to begin mass production shortly, but we still have to see how good the yields are, which will help determine what the new GPUs cost. Since graphics cards have been stuck on the 28nm process for quite some time now, it makes sense that efficiency and performance are going to improve significantly. While AMD is claiming double the performance per watt, that can mean little until actual gaming performance is measured. Many times, that double-per-watt slogan translates into something quite a bit less than what it sounds like. However, the other factor is the massive number of transistors that can be squeezed into one die as a result of the shrink. The flagship chip should contain up to 18 billion transistors, over do...
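
To see why "double the performance per watt" can translate into less than double the performance, here is the arithmetic with made-up numbers (mine, not leaked specs): if the new card also cuts its power budget, part of the efficiency gain buys lower power draw rather than extra speed.

    # Illustrative arithmetic only; wattages and perf units are assumptions.
    old_perf, old_power_w = 100.0, 250.0
    perf_per_watt = old_perf / old_power_w          # 0.4 units per watt
    new_perf_per_watt = 2.0 * perf_per_watt         # "double perf per watt"

    print(new_perf_per_watt * 250.0)  # same 250 W budget -> 200.0 (a true 2x)
    print(new_perf_per_watt * 150.0)  # 150 W budget      -> 120.0 (only 1.2x)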




DirectX 12 is a Great Thing for AMD

And everyone for that matter! But let's not harsh on my sacrifice of proper grammar for stylistic writing when we can focus on the good things about DirectX 12. I'm a firm believer that by the end of next year, DirectX 12 and Vulkan are going to be taking the gaming world by storm. I got a chance to play around with Fable Legends, and the graphics were downright amazing! Techspot recently did a comparison of DirectX 11 and 12 to show the FPS gains from a couple of different configurations in Ashes of the Singularity. While the improvements were nice overall, there were some particular gains with FX CPUs that I've been waiting to see for quite some time now. Let's start with the bad news. DirectX 12 is not Bulldozer's salvation. If anything, DirectX 12 is the final nail in that coffin, in the sense that new architecture is long overdue. Shifting away from single-thread performance was never a good move. I still believe AMD was on the right track in that we needed to move to better utilization of multi-threading, especially in gaming, but that shouldn't have come at the sacrifice of single-thread performance. Zen is looking to solve these issues, but these initial results show an 8350 struggling to keep up with an i3. Even though the i3 is a later generation, we're still talking about a budget CPU beating out an enthusiast one for gaming. Now that we've got that out of the way, let's get on to the good news for AMD. We still have only o...
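
The mechanical reason DirectX 12 helps a CPU like the FX-8350 is that draw-call submission is no longer funneled through one driver thread: each core can record its own command list in parallel. Here is a toy model of that idea in Python; the call counts, costs, and names are assumptions of mine, not the Direct3D 12 API or Techspot's methodology.

    from concurrent.futures import ThreadPoolExecutor
    import time

    DRAW_CALLS = 8000
    COST_PER_CALL_S = 0.000002   # pretend CPU cost to record one draw call

    def record_command_list(calls: int) -> int:
        time.sleep(calls * COST_PER_CALL_S)  # simulate CPU-side recording work
        return calls

    def submit_frame(threads: int) -> float:
        chunk = DRAW_CALLS // threads
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=threads) as pool:
            list(pool.map(record_command_list, [chunk] * threads))
        return (time.perf_counter() - start) * 1000

    print(f"DX11-style, one thread:    {submit_frame(1):.1f} ms")  # ~16 ms
    print(f"DX12-style, eight threads: {submit_frame(8):.1f} ms")  # ~2 ms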




What if I told you an R9 290X was competing with a GTX 980 Ti?

So, I'm browsing the internet trying to find out whether I should save some cash and go with the i5-6600K, or go all out with an i7-6700K. I already know that there won't be much of a discernible difference in current games, but DirectX 12 games are on the horizon (I just got a beta invite to one such game), and there could be some performance to gain from the hyper-threading. I didn't find the info I wanted, but I did find something astonishing. The GTX 980 Ti seems almost untouchable, but you can imagine my shock when I saw an R9 290X tying, and even beating, the Maxwell behemoth in several benchmarks. Yesterday, I did a pretty heavy write-up on some Ashes of the Singularity benchmarks that surfaced about a week ago. It turns out those weren't the only ones done. Ars Technica decided to do a very comprehensive set of tests comparing an old-school R9 290X with a state-of-the-art GTX 980 Ti, and they pretty much showed the card matching the NVIDIA flagship at every turn. The 980 Ti still destroys AMD's part in DirectX 11, but once we get to DirectX 12, we see a super competitive landscape. Here are a couple of patterns I noticed. The GTX card benefits slightly more from 6 cores and hyper-threading than the R9 card does at higher resolutions. The NVIDIA card also has a slight advantage in average framerates during the heavy scenes. One interesting thing happened once hyper-threading was disabled and the CPU was reduce...




Ashes of the Singularity Scaling: The AMD Crossroads

Last week, the new game "Ashes of the Singularity" received a pretty comprehensive scaling review covering both DirectX 11 and DirectX 12. The results were interesting, to say the least. Multiple CPUs were used to test both the R9 390X and the GTX 980. While AMD enjoyed some impressive gains, NVIDIA had some fairly lackluster results that even prompted the company to release statements for damage control. It would be very easy to say that AMD is making a comeback and NVIDIA is gonna be in trouble, but that would be too easy. How can we come to the proper conclusions about these results? Let me start off by saying that this is great news for AMD. It's long been claimed that Radeon GPUs would be much better if the drivers could just utilize them properly. It seems this is almost true, but rather than drivers, it's APIs that needed to take advantage of that hardware. However, I'm seeing some massive problems here that, if AMD doesn't solve them quickly, could mean goodbye to competition for a long time to come. I want to show you three conclusions that I drew from these results, and why I think there could be more bad news here than good if AMD doesn't make some dramatic changes in the near future. Let's begin with the first big implication these AotS results are showing us. AMD needs to refocus their software development. This seems like something that is already in progress, but when we see what Direc...



The GTX 950 Review that Matters

I've been a long-time League of Legends player. I'm one of those silver scrubs who likes to play competitively, tries to get to gold, but ultimately just uses LoL as an excuse to hang with his friends. League has had its moments for me, but ultimately the game is too time-consuming and frustrating for me to actually get anywhere. Then Heroes of the Storm happened. I found such a perfect blend of competitiveness and a schedule-friendly match system that I haven't touched League for a good month or two now. So when I heard the GTX 950 was the go-to card for MOBA games, imagine my surprise when nobody was measuring frame rates in MOBA games. (Especially since they're free!) Thankfully, I finally found a review that focused on MOBAs, and I have to say, the GTX 950 is looking like a nice little card. Hardware Heaven posted a review of several GTX 950 cards and, personally, outside of skipping the overclocking section, I think they nailed it. First off, they highlighted the pipeline advantage. Basically, NVIDIA optimized the render path so that the delay is cut nearly in half. This should lead to a smoother overall gaming experience, but it should also help reduce latency, and that's important in competitive games. Whether or not that drop in latency is actually noticeable, the frame rates in the various MOBAs are looking good. The only game where the GTX 950 lagged behind an R7 370 is DOTA 2: Reborn. The other MOBAs on the list gave...
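
As a rough illustration of why that shorter render path matters in a MOBA, input-to-display delay scales with how many frames sit in the pipeline ahead of the screen. The numbers below are my own assumptions, not Hardware Heaven's measurements.

    # Back-of-the-envelope sketch; frame counts and fps are assumed values.
    def pipeline_latency_ms(frames_in_flight: int, fps: float) -> float:
        return frames_in_flight * 1000.0 / fps

    print(pipeline_latency_ms(4, 120))  # deeper queue -> ~33 ms of delay
    print(pipeline_latency_ms(2, 120))  # queue halved -> ~17 ms of delay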




[UPDATE] Possible NVIDIA GTX 950 Ti on the Horizon

UPDATE: It looks like NVIDIA is just releasing the GTX 950 in 2 GB and 4 GB variants. Each will still have a 128-bit memory interface and will likely be priced to compete in the $100-150 range. http://videocardz.com/57015/confirmed-nvidia-to-launch-geforce-gtx-950 It looks like NVIDIA isn't willing to let the sub-$150 market go without a little more competition. Even though they've had two Maxwell cards in this bracket for a while now with the GTX 750 Ti and 750, the new GTX 950 Ti and 950 will offer a slightly updated GPU that I'm sure will bump the performance up as well. While neither NVIDIA nor AMD is willing to design much more on the 28nm manufacturing process with 16nm around the corner, this should make team Green a little more competitive in the budget segment of the market. The GPU core will reportedly be based on a cut-down GM206 die, the same chip currently used by the GTX 960. The power ratings will be just under 100W for the 950 Ti, with the 950 coming in at a meager 64W. Those are some impressive ratings, but we've come to expect that from Maxwell. Honestly, there isn't much to see here, but hopefully we may yet see a 960 Ti that bumps the memory interface up from the 960 while staying in the mid-$200 price range. That seems to be the sweet spot for performance to cost, but even the 950 Ti and 950 aren't official yet. Time will tell soon enough. http://wccftech.com/nvidia-readying-geforce-gtx-950-ti-geforce-gtx-950-graphics-car...




Getting up to Speed with Fiji and Maxwell

If you have no idea what's happening with AMD and NVIDIA, then you either live under a rock or you just don't care that much about computer components. Assuming you're here because you don't fit into either category, let me get you up to speed on what's been happening in the GPU world, especially since the anticipated release of stacked memory is right around the corner. The stage is set with NVIDIA dominating the GPU market. Maxwell was impressive when it released and has managed to become one of the most notable GPU architectures to date. AMD, on the other hand, is telling us they aren't out yet. With a slew of refreshes, as well as two high-end GPUs that are the first ever to feature stacked memory, the Green team might be facing some stiff competition in the upcoming weeks. DirectX 12 could also have some new implications on the gaming front, so let's throw all of this together and take a stab at what the future holds for us. We now have a fully unlocked GM200-400 die in the form of the Titan X and a slightly cut-down GM200-310 die in the GTX 980 Ti. The Titan X is the graphics card most of us loftily dream about having some day but never seriously imagine owning. The GTX 980 Ti is the card we might actually sell a kidney for. Many people were shocked to see a $649 starting price tag for what would be considered NVIDIA's go-to enthusiast card. When you factor in that the gaming performance is right up there with t...




DirectX 12 Looking Better with an End to Mantle

News just arrived that AMD is ending the Mantle API and directing developers to start working towards DirectX 12 instead. I'm sure the thought that comes to everyone's mind is, "What on earth is going on!?!" When I first saw the end of Mantle, even I was slightly disappointed, but as that segued into the appeal to start using DirectX 12, I felt the rush of excitement again. It may not seem like it, but I have a feeling this is really good news for what's in store for the future of gaming. Let's review what's been happening in the API world a bit. A couple of years back, Mantle started claiming it could boost performance in AAA titles, and as results started flowing in, the potential for the gaming industry looked promising, even if more work was needed. Fast forward to GDC 2014, and Microsoft announces DirectX 12 coming to the next version of Windows. What everyone was scratching their heads at was the inclusion of AMD, along with NVIDIA, Intel and Qualcomm, as one of the major supporters of the new Microsoft API. Why would AMD support something that was seemingly in direct competition with Mantle? Now, it looks like AMD saw potential to get in on the ground floor, which not only allowed them to make sure performance was going to reach their standards, but also let them determine whether they needed to spend resources keeping Mantle in the mix if their GPUs could benefit just as well from DirectX 12. So is this good news? At first, it ...




The Hardware Hound Episode 2

After a crazy 2014, the Hound is back taking care of business and boy did he have some catching up to do. On this episode, he covers a nuclear reactor disguised as a CPU cooler, some important features on Windows 10, and most importantly, new GPUs from both NVIDIA and AMD. So sit back, enjoy the show and get excited because great hardware news is always good news!





