Pureoverclock: PC Hardware reviews and news for overclockers!


Posts Tagged ‘nvidia’
PureOC Monthly News

October News Round Up: SWIFTECH Apogee XL2, GAMER STORM Genome II, GTX 1050 Ti and More!

Welcome back to our second roundup of the month’s hardware news. The CPU and GPU side of things may have slowed down a tad, but a lot happened with cooling and cases. Let’s dive in and get excited about what’s coming down the road!

CPUs

Let’s start with an update on Kaby Lake. Geekbench turned up some impressive early results, and the story made it look like INTEL has merely been holding back for a while. Recently, however, an i5-7600K sample was benchmarked and showed only about a 10% improvement over Skylake. That’s actually pretty disappointing, especially considering that the 4790K is still a top-tier single-thread performer. On the plus side, it looks like all 100-series motherboards will be able to support Kaby Lake with a BIOS update, though I doubt many will see the need to upgrade from Skylake on such a short timetable. For the time being, ASUS is the only brand this is confirmed for, but that’s a good indication that other manufacturers will follow suit. We’re still a ways out, so expect the performance picture to keep shifting until closer to release.

http://wccftech.com/intel-core-i5-7600k-kaby-lake-cpu-benchmarks/
http://wccftech.com/kaby-lake-support-asus-100-series-motherboards-confirmed/

GPUs

The big news this month was the release of the GTX 1050 Ti and the 1050. Both GPUs offer a great budget option for 1080p gaming while being extremely energy efficient. Rather than reinvent ...



PureOC Monthly News

September News Round Up: Zen, Kaby Lake, GTX 1050 and More

How does a freelance writer keep up with everything happening in the industry when he has a full-time job, a family, reviews to write and a puppy in training? It’s worth pointing out that a lot has happened this September: AMD has been making announcements about Zen, INTEL has been sharing details about Kaby Lake and NVIDIA is talking about the GTX 1050. Rather than try to keep up with every piece of news as it releases, we’re going to try something new: a once-a-month update to catch you up on what’s happening in the industry. Welcome to the PureOC Monthly News! On the last Friday of each month, we’ll pull together all the news that feels relevant into one stop for your info. So without further ado, let’s get into what’s happened this month.

CPUs, Motherboards and Chipsets

AMD Zen, Grey Hawk, X370 Chipset and A12-9800

We’ll start with AMD this time around, since much of the anticipation in the CPU sector rests on them. We got some more detailed shots of the new Zen chip, and the 1331-pin design is confirmed. Beyond that, we didn’t get much new information this month. The A12-9800, on the other hand, had some very interesting leaks. Performance scores showed up, and even though the new AM4 chip is still using Excavator cores, it shows some reasonable improvements. What was really interesting was an overclocking result. So...




The New GPUs are Great, but the Opinions are Terrible

There’s been so little time and so much going on that it’s killing me not to cover all the great things happening in the GPU market lately. The GTX 1080 easily conquered the most-powerful-graphics-card spot, the GTX 1070 fits perfectly into the affordable high-performance spot, and now the RX 480 is a great mid-range mainstream card. I want to focus on what NVIDIA and AMD have done for the market, but I also want to bring up how the fanboy wars seem to be destroying the PC building communities as we know them. The GTX 1070 is easily the unsung hero of the 2016 release season so far. While it’s ideally positioned as a 1440p card, 4K isn’t a stretch with some settings tweaks. The Pascal architecture not only destroys DirectX 11 titles with ease, but, much like Maxwell, it overclocks extremely well. The close release after the 1080 overshadowed the card, although it will likely be more popular than the 1080 in the long run. The only serious drawback is that, because supply is limited for the time being, retailers are selling the cards well above the suggested retail price. Asynchronous compute support has come up a few times, but Async isn’t widely enough used to be an issue for now. My hope is that if Async becomes a boon for game development, NVIDIA doesn’t keep muddying the waters with their “support.” Of course, this week was the big week for the RX 480 release. I think many of us in th...




ASUS ROG Strix GTX 1080: GPU Fan Headers Say What!?!

The GTX 1080 has been making waves ever since its release last week. FinFET is finally here and the wait has been entirely too long! NVIDIA released their own Founders Edition that left most of us thinking we’d just wait until the manufacturers bring out their own variants, but sometimes it can feel like we already know what we’re going to see. However, ASUS surprised me with a feature that sounds like the perfect complement to a high-end GPU: a fan header right on the board. You’re probably thinking I’ve lost it at this point. “You’re seriously getting excited about a fan header!?!” Just bear with me for a sec! Admittedly, this isn’t the kind of feature the majority of users should be particularly concerned about. The ROG Strix has plenty of other things going for it anyway, so I’m sure it’s going to be a much-coveted card. Where this excites me is that I’ve been an SLI/Crossfire user for many years now. I understand all the impracticality of it, but what will never change is the fact that I think a computer looks 500 times better with two cards. Unfortunately, these configurations also generate a lot more heat. Under an intensive load, the GPU fans can rev up pretty high, and even though your computer can stay quiet at idle, loads can bring more fan noise than desired. Most of the time, my case fan speeds are set in stone at a level that doesn’t bot...




NVIDIA Keeps Saying Async Support…

“You keep using that term. I do not think it means what you think it means.” – Inigo Montoya. Before anyone gets upset about my rant, I want to mention that AMD has been just as guilty of marketing hype. That’s what Pascal’s Async support is looking like it amounts to, and I’m getting tired of companies using the politician’s tactic of repeating a lie to convince “dumb” consumers to believe it. A.) We’re smarter than that. B.) The GTX 1080 is such a great release that it doesn’t need false hype to brag it up! Admittedly, on a technicality the 1080 could be said to have Async support, but it’s beginning to look like NVIDIA is using some marketing tricks to smokescreen what’s really going on. A benchmark came out showcasing various tests with Ashes of the Singularity and a GTX 1080. While Pascal fared better than Maxwell, the results aren’t particularly good. In some cases, we see performance drop again from DirectX 11, and the gains that were made were practically unnoticeable. The key quote from WCCF for understanding how Pascal supports Async is here: “Dynamic load balancing and improved pre-emption both improve the performance of async compute code considerably on Pascal compared to Maxwell. Although principally this is not exactly the same as Asynchronous Shading or Computing. Because Pascal still can’t execute async code concurrently without pre-emption. ...
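For anyone wondering what “async compute” even means at the API level, here is a minimal D3D12 sketch, assuming a valid device already exists; CreateQueues is my own illustrative helper, not code from NVIDIA, AMD, or the benchmark. The API only lets an application submit work to a separate compute queue; whether the GPU executes the two queues concurrently (as GCN does) or falls back to pre-emption is exactly the hardware question being argued over here.

    // Illustrative sketch: the two D3D12 command queues involved in
    // "async compute". Device creation and error handling omitted.
    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& graphicsQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        // Direct (graphics) queue: accepts draw, compute and copy work.
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

        // The "async" part: an independent compute queue. D3D12 promises
        // nothing about overlap; concurrent execution versus time-sliced
        // pre-emption is decided by the hardware and driver.
        D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
        cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));
    }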




NVIDIA GTX 1080 Review Round Up

You might be wondering why there was so little news on the site about the upcoming Pascal GPUs from NVIDIA. That’s mostly because the info was pretty iffy most of the time. The great thing about the hardware industry is that new releases can be very exciting, but they can also be over-hyped. I sat back for some time trying to decide whether and what I should be covering about this release, ultimately deciding to wait until the reviews came in. Well, they’re in, and the GTX 1080 is a monster of a GPU! Overall, NVIDIA looks like they really outdid themselves on this one, which is impressive considering how well they outdid themselves with Maxwell. The 1080 achieves performance that’s almost impossible to believe and is certainly a worthy successor to the 900 series. The only head-scratcher that has some reviewers questioning it is the high price of the Founders Edition. Outside of that, NVIDIA has knocked it out of the park. Here’s a list of reviews that have been released already so you can browse through them at your leisure.

GTX 1080 Reviews

http://www.pcgamer.com/gtx-1080-review/
http://www.techspot.com/review/1174-nvidia-geforce-gtx-1080/
http://www.guru3d.com/articles-pages/nvidia-geforce-gtx-1080-review,1.html
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1080-pascal,4572.html
http://arstechnica.com/gadgets/2016/05/nvidia-gtx-1080-review/
https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080...




Is the New AMD Right Around the Corner?

Rumors are flying everywhere about Polaris, Pascal, Zen and even Kaby Lake. While I love following the rumors, not much is being leaked that gives a concrete idea of performance, price, etc. What is catching my eye is what AMD is doing right now. Last week gave us the release of the 16.3 drivers. The release schedule on these drivers is certainly improving, but what really caught my eye were some of the fixes. The bug list is getting smaller, and that’s the kind of improvement enthusiasts like to see in drivers. We have to wait for Polaris and Zen to know if AMD’s financial future will look brighter in the years to come, but what I’m seeing now indicates they are already turning things around for the better. I already touched on the 16.3 Crimson driver, but I’ll elaborate further. The bug fix I’ve been watching closely is one that involved AMD GPUs losing their clock speed settings during use. This is kind of a big deal, and I was fairly certain it was affecting those who were overclocking their video cards. March’s driver fixed that issue and it’s off the list of known issues. In fact, the known issues seem pretty minor now, with most related to new game releases. There is the issue of the Gaming Evolved app causing games to crash that I’m hoping gets resolved soon, but that’s mostly because it keeps crashing my WoW. At least it’s easy to close as a temporary fix, but for those who use...



Hitman Taking a Contract on Asynchronous Shaders

AMD has been talking up their Asynchronous Compute Engines pretty much since DirectX 12 was announced. In short, these are hardware components in AMD GPUs that can hopefully be leveraged to add significant performance in games. We’ve been waiting for the final verdict for some time, and while certain beta releases have shown promise, only official releases will prove the benefit of Asynchronous Shaders and help determine whether DirectX 12 really is the next big thing for gaming. AMD just shared that Hitman’s developers have been working specifically with them to take advantage of Asynchronous Shaders, and it looks like we have about a month before the official release. Whether or not Hitman is your kind of game, this will certainly be a big moment in the PC gaming industry. So keep your eyes peeled, because March 11th is the official release day for Hitman, and I’m sure tech sites will be looking into the performance with DirectX 12 and various AMD and NVIDIA GPUs. Below is the full statement from AMD. AMD is once again partnering with IO Interactive to bring an incredible Hitman gaming experience to the PC. As the newest member to the AMD Gaming Evolved program, Hitman will feature top-flight effects and performance optimizations for PC gamers. Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs—called asynchronous compute engines—to handle heavier workloads and better image quality with...




AMD Has a Response to GameWorks with GPUOpen

We all want better graphics in games, but we’d like those improvements not to utterly destroy our framerates. We’re beginning a new era of PC gaming that is bringing better performance from the software side, rather than relying entirely on GPU manufacturers to make beefier chips. NVIDIA GameWorks is a development toolkit released in 2014 that was not only supposed to give developers better hardware control, but also offer rich features for better in-game graphics. Unfortunately, the results led to performance hits in various situations, causing varied opinions of its benefit. Now AMD is finally responding with their own developer toolkit, called GPUOpen. AMD seems to be stepping up their game big time, and I think GPUOpen will end up being a great thing for PC gamers. As the name implies, GPUOpen is open source, which means there will be a lot more minds trying to optimize the features for performance, as well as the ability to share the code without repercussion. This is a good way to bring developers on board with AMD, and should help with some of the disparity we’ve seen in performance between NVIDIA and AMD GPUs in certain game titles. The other interesting thing about GPUOpen is what it could mean for console ports. Radeon tech is in just about every console right now, most notably the Xbox One and PS4. If the developer toolkit makes porting to PC that much easier, we may see more console exclusives make their way to PC, since development costs can be a major detractor. S...




What Does FinFET Mean for Gamers?

There are a ton of leaks and rumors circulating about the AMD Arctic Islands and NVIDIA Pascal GPUs slated to release next year. Right now, it’s hard to get too excited about anything. Don’t get me wrong, I believe the new releases are going to be phenomenal, but we’re so far ahead of the release of these products that any actual performance numbers are still a long ways off. However, there are still a couple of pieces of leaked information that are intriguing and, quite frankly, should be getting gamers very excited about the games of 2017 and beyond. FinFET is a new transistor design that lets foundries shrink to a 14nm or 16nm process. Both TSMC and GlobalFoundries have managed to get their processes mature enough to begin mass production shortly, but we still have to see how good the yields are to determine what the new GPUs will cost. Since graphics cards have been stuck on the 28nm process for quite some time now, it makes sense that efficiency and performance are going to improve significantly. While AMD is claiming double the performance per watt, that can mean little until actual gaming performance is measured. Many times, that double-per-watt slogan translates into quite a bit less than it sounds like. However, the other factor is the massive number of transistors that can be squeezed into one die as a result of the shrink. The flagship chip should contain up to 18 billion transistors, over do...
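A quick worked example, with hypothetical numbers of my own (not AMD’s), shows why. Say a 28nm flagship delivers some performance P at 250W, i.e. P/250 per watt. Double that efficiency and ship the new chip in a 150W card, and you get 2 × (P/250) × 150 = 1.2P, only a 20% real-world gain. Only if the new card keeps the full 250W budget does “double the performance per watt” become double the performance. The slogan is true either way; the power target the vendor picks decides how much of it gamers actually see.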




DirectX 12 is a Great Thing for AMD

And everyone for that matter! But let’s not harsh on my sacrifice of proper grammar for stylistic writing when we can focus on the good things coming from DirectX 12. I’m a firm believer that by the end of next year, DirectX 12 and Vulkan are going to be taking the gaming world by storm. I got a chance to play around with Fable Legends and the graphics were downright amazing! Techspot recently did a comparison of DirectX 11 and 12 to show the FPS gain from a couple of different configurations in Ashes of the Singularity. While the improvements were nice overall, there were some particular gains with FX CPUs that I’ve been waiting to see for quite some time now. Let’s start with the bad news. DirectX 12 is not Bulldozer’s salvation. If anything, DirectX 12 is the final nail in that coffin, in the sense that new architecture is long overdue. Shifting away from single-thread performance was never a good move. I still believe AMD was on the right track in that we needed to move toward more utilization of multi-threading, especially in gaming, but that shouldn’t have come at the sacrifice of single-thread performance. Zen looks set to solve these issues, but these initial results show an 8350 struggling to keep up with an i3. Even though the i3 is a later generation, we’re still talking about a budget CPU beating out an enthusiast one for gaming. Now that we’ve got that out of the way, let’s get to the good news for AMD. We still have only o...
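Why do weak-per-core, many-core chips like the FX-8350 gain so much under DX12? A big reason is parallel command recording: DX11’s immediate context was effectively single-threaded, while DX12 lets every CPU core record GPU work at once. Below is a minimal illustrative sketch of my own, assuming the command lists and queue are already created; this is not Techspot’s test code.

    // Illustrative sketch: parallel command-list recording, the DX12
    // feature that spreads CPU-side rendering work across many cores.
    #include <d3d12.h>
    #include <thread>
    #include <vector>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    // Each worker thread records its slice of the scene into its own list.
    // (Pipeline state, resource bindings and error checks omitted.)
    void RecordChunk(ID3D12GraphicsCommandList* list)
    {
        // ... SetPipelineState / DrawIndexedInstanced calls for this chunk ...
        list->Close(); // finish recording on this thread
    }

    void SubmitFrame(ID3D12CommandQueue* queue,
                     std::vector<ComPtr<ID3D12GraphicsCommandList>>& lists)
    {
        std::vector<std::thread> workers;
        for (auto& l : lists)                 // one recording thread per list
            workers.emplace_back(RecordChunk, l.Get());
        for (auto& w : workers) w.join();     // all CPU-side recording done

        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists) raw.push_back(l.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }

An eight-core FX chip can keep eight of those recording threads busy, which is exactly the kind of workload DX11 could never hand it.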




What if I told you an R9 290X was competing with a GTX 980 Ti?

So, I’m browsing the internet trying to figure out whether I should save some cash and go with the i5-6600K, or go all out with an i7-6700K. I already know there won’t be much of a discernible difference in current games, but DirectX 12 games are on the horizon, I just got a beta invite to one such game, and there could be some performance to gain from the hyper-threading. I didn’t find the info I wanted, but I did find something astonishing. The GTX 980 Ti seems almost untouchable, so you can imagine my shock when I saw an R9 290X tying, and even beating, the Maxwell behemoth in several benchmarks. Yesterday, I did a pretty heavy write-up on some Ashes of the Singularity benchmarks that surfaced about a week ago. It turns out those weren’t the only ones done. Ars Technica did a very comprehensive set of tests comparing an old-school R9 290X with a state-of-the-art GTX 980 Ti, and they pretty much showed the card matching the NVIDIA flagship at every turn. The 980 Ti still destroys AMD’s part in DirectX 11, but once we get to 12, we see a super competitive landscape. Here are a couple of patterns I noticed. The GTX card benefits slightly more from 6 cores and hyper-threading than the R9 card does at higher resolutions. The NVIDIA card also has a slight advantage in average framerates during the heavy scenes. An interesting thing that happened was that once hyper-threading was disabled and the CPU was reduce...




Ashes of the Singularity Scaling: The AMD Crossroads

Last week, the new game Ashes of the Singularity had a pretty comprehensive scaling review performed with both DirectX 11 and DirectX 12. The results were interesting, to say the least. Multiple CPUs were used to test both the R9 390X and the GTX 980. While AMD enjoyed some impressive gains, NVIDIA had some fairly lackluster results that even prompted the company to release statements for damage control. It would be very easy to say that AMD is making a comeback and NVIDIA is going to be in trouble, but that would be too easy. How can we come to the proper conclusions about these results? Let me start off by saying that this is great news for AMD. It’s long been claimed that Radeon GPUs would be much better if the drivers could just utilize them properly. It seems this is almost true, but rather than drivers, it’s APIs that needed to take advantage of that hardware. However, I’m seeing some massive problems here that, if AMD doesn’t solve them quickly, could mean saying goodbye to competition for a long time to come. I want to show you three conclusions I drew from these results, and why I think there could be more bad news here than good if AMD doesn’t make some dramatic changes in the near future. Let’s begin with the first big implication these AotS results are showing us: AMD needs to refocus their software development. This seems like something that is already in progress, but when we see what Direc...



The GTX 950 Review that Matters

I’ve been a long-time League of Legends player. I’m one of those silver scrubs who likes to play competitively, tries to get to gold, but ultimately just uses LoL as an excuse to hang with his friends. League has had its moments for me, but ultimately, the game is too time-consuming and frustrating for me to actually get anywhere. Then Heroes of the Storm happened. I found such a perfect blend of competitiveness and a schedule-friendly match system that I haven’t touched League for a good month or two now. So when I heard the GTX 950 was the go-to card for MOBA games, imagine my surprise when nobody was measuring frame rates in MOBA games. (Especially since they’re free!) Thankfully, I finally found a review that focused on MOBAs, and I have to say, the GTX 950 is looking like a nice little card. Hardware Heaven posted a review of several GTX 950 cards, and personally, aside from their skipping the overclocking section, I think they nailed it. First off, they highlighted the pipeline advantage. Basically, NVIDIA optimized the render path so that input delay is cut nearly in half. This should lead to a smoother overall gaming experience and should also help reduce latency, which is important in competitive games. Whether or not that drop in latency is actually noticeable, the frame rates in the various MOBAs are looking good. The only game where the GTX 950 lagged behind an R7 370 is DOTA 2: Reborn. The other MOBAs on the list gave...




[UPDATE] Possible NVIDIA GTX 950 Ti on the Horizon

UPDATE: It looks like NVIDIA is just releasing the GTX 950 in 2 GB and 4 GB variants. Each will still have a 128-bit interface and will likely be priced to compete in the $100-150 range. http://videocardz.com/57015/confirmed-nvidia-to-launch-geforce-gtx-950 It looks like NVIDIA isn’t willing to let the sub-$150 market go without a little more competition. Even though they’ve had two Maxwell cards in this bracket for a while now with the GTX 750 Ti and 750, the new GTX 950 Ti and 950 will offer a slightly updated GPU that I’m sure will bump performance up as well. While neither NVIDIA nor AMD is willing to design much more on the 28nm manufacturing process with 16nm around the corner, this will make team Green a little more competitive in the budget segment of the market. It looks like the GPU core will be based on a cut-down GM206 die, currently in use by the GTX 960. The power ratings will be just under 100W for the 950 Ti, with the 950 coming in at a meager 64W. Those are some impressive ratings, but we’ve come to expect that from Maxwell. Honestly, there isn’t much to see here, but hopefully we may yet see a 960 Ti that bumps the interface up from the 960 while staying in the mid-$200 price range. That seems to be the sweet spot for performance to cost, but even the 950 Ti and 950 aren’t official yet. Time will tell soon enough. http://wccftech.com/nvidia-readying-geforce-gtx-950-ti-geforce-gtx-950-graphics-car...





