Pureoverclock: PC Hardware reviews and news for overclockers!


Posts Tagged ‘nvidia’

Is the New AMD Right Around the Corner?

Rumors are flying everywhere about Polaris, Pascal, Zen and even Kaby Lake. While I love following the rumors, not much is being leaked that gives a concrete idea of performance, price, etc. What is catching my eye is what AMD is doing right now. Last week gave us the release of the 16.3 drivers. The release schedule on these drivers is certainly improving, but what really caught my eye was some of the fixes. The bug list is getting smaller and that’s the kind of improvement enthusiasts like to see in drivers. We have to wait for Polaris and Zen to know if AMD’s financial future will look brighter in the years to come, but the things I’m seeing now indicate that they are already turning things around for the better. I already touched on the 16.3 Crimson driver, but I will elaborate further on that. The bug fix I’ve been closely watching is one that involved AMD GPUs losing their clock speed settings during use. This is kind of a big deal and I was fairly certain this was affecting those who were overclocking their video cards. March’s driver fixed that issue and it’s off the list of known issues. In fact, the known issues seem pretty minor now, with most issues related to new game releases. There is the issue of the Gaming Evolved app causing games to crash that I’m hoping gets resolved soon, but that’s mostly because it keeps crashing my WoW. At least it’s easy to close for a temporary fix, but for those who use...



Hitman Taking a Contract on Asynchronous Shaders

AMD has been talking up the Asynchronous Compute Engines pretty much since DirectX 12 was announced. In short, these are hardware components in AMD GPUs that can hopefully be leveraged to add significant performance in games. We’ve been waiting for the final say for some time, and while certain beta releases have shown some promise, only official releases will prove the benefit of Asynchronous Shaders and help determine whether DirectX 12 really is the next big thing for gaming. AMD just shared some info that the Hitman developers have been working specifically with them to take advantage of Asynchronous Shaders, and it looks like we have about a month before the official release date. Whether or not Hitman is your kind of game, this will certainly be a big moment in the PC gaming industry. So keep your eyes peeled, because March 11th is the official release day for Hitman and I’m sure tech sites will be looking into the performance with DirectX 12 and various AMD and NVIDIA GPUs. Below is the full statement from AMD. AMD is once again partnering with IO Interactive to bring an incredible Hitman gaming experience to the PC. As the newest member to the AMD Gaming Evolved program, Hitman will feature top-flight effects and performance optimizations for PC gamers. Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs—called asynchronous compute engines—to handle heavier workloads and better image quality with...
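To make the scheduling idea concrete, here is a minimal CPU-side analogy in Python. This is not the actual GPU mechanism or any graphics API, and the timings are made up; it only shows why overlapping two independent workloads finishes sooner than running them back to back, which is the kind of win asynchronous compute aims for by filling otherwise idle GPU resources.

```python
# CPU-side analogy only: two independent workloads, run serially vs. overlapped.
# Real asynchronous shaders schedule graphics and compute work on the GPU itself;
# this sketch just illustrates the scheduling arithmetic with made-up timings.
import time
from concurrent.futures import ThreadPoolExecutor

def graphics_pass():
    time.sleep(0.5)  # stand-in for a frame's rendering work

def compute_pass():
    time.sleep(0.3)  # stand-in for an independent compute job (e.g., post-processing)

# Serial: one workload after the other, total ~0.8 s.
start = time.perf_counter()
graphics_pass()
compute_pass()
print(f"serial:     {time.perf_counter() - start:.2f}s")

# Overlapped: both submitted at once, total is bounded by the longer one, ~0.5 s.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    for future in [pool.submit(graphics_pass), pool.submit(compute_pass)]:
        future.result()
print(f"overlapped: {time.perf_counter() - start:.2f}s")
```

The saved time only appears when the two workloads are genuinely independent, which is why real-world gains depend so heavily on how the game engine submits its work.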




AMD has a Response to GameWorks with GPUOpen

We all want better graphics in games, but we like those improvements not to utterly destroy our framerates. We’re beginning a new era of PC gaming that is bringing better performance from the software side, rather than completely relying on GPU manufacturers to make beefier chips. NVIDIA GameWorks is a set of developer libraries released in 2014 that was not only supposed to give developers better hardware control, but also offer some rich features for better in-game graphics. Unfortunately, the results led to performance hits in various situations, causing varied opinions of the benefit. Now AMD is finally responding with their own developer toolkit, GPUOpen. AMD seems to be stepping up their game big time and I think GPUOpen will end up being a great thing for PC gamers. As the name implies, GPUOpen is open source, which means there will be a lot more minds trying to optimize the features for performance, as well as the ability to share the code without repercussion. This is a good way to bring developers on board with AMD, and should help with some of the disparity we’ve seen in performance between NVIDIA and AMD GPUs in certain game titles. The other interesting thing about GPUOpen is what it could mean for console ports. Radeon tech is in just about every console right now, particularly the Xbox One and PS4. If the developer tools make porting to PC that much easier, we may see more console exclusives make their way to PC, since development costs can be a major detractor. S...




What does FinFET mean for Gamers?

There are a ton of leaks and rumors circulating about the AMD Arctic Islands and NVIDIA Pascal GPUs slated to release next year. Right now, it’s hard to get too excited about anything. Don’t get me wrong, I believe the new releases are going to be phenomenal, but we’re so far out from the release of these products that any actual performance numbers are still a long way off. However, there are still a couple of pieces of leaked information that are intriguing and, quite frankly, should be getting gamers very excited about the games of 2017 and beyond. FinFET is a new transistor structure that arrives alongside the shrink to the 14nm and 16nm nodes. Both TSMC and GlobalFoundries have managed to get their processes mature enough that they can begin mass production shortly, but we still have to see how good the yields are to determine what the cost of the new GPUs will be. Since graphics cards have been stuck on the 28nm process for quite some time now, it makes sense that efficiency and performance are going to improve significantly. While AMD is claiming double the performance per watt, that can mean little until actual gaming performance is measured. Many times, that double-per-watt slogan can translate into something quite a bit less than what it sounds like. However, the other factor is the massive number of transistors that can be squeezed into one die as a result of the shrink. The flagship chip should contain up to 18 billion transistors, over do...
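As a rough illustration of why a “double the performance per watt” claim does not automatically mean double the frame rate, here is a minimal sketch. The numbers are purely hypothetical, not leaked specs: if part of the efficiency gain goes into a lower power budget instead of higher performance, the absolute speedup shrinks accordingly.

```python
# Hypothetical numbers only; they just show the performance-per-watt arithmetic.
old_perf = 100.0                     # arbitrary performance index (e.g., average FPS)
old_power = 250.0                    # watts
perf_per_watt = old_perf / old_power

claimed_perf_per_watt = 2 * perf_per_watt   # the "double perf/watt" marketing claim

# Case 1: new flagship keeps the same 250 W budget -> a true 2x (index 200).
print(claimed_perf_per_watt * 250.0)

# Case 2: new flagship targets a leaner 180 W budget -> only ~1.44x (index 144).
print(claimed_perf_per_watt * 180.0)
```

In other words, the same slogan can cover anything from a modest bump to a genuine doubling, depending on where the power budget lands.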




DirectX 12 is a Great Thing for AMD

And everyone for that matter! But let’s not harsh on my sacrifice of proper grammar for stylistic writing when we can focus on good things from DirectX 12. I’m a firm believer that by the end of next year, DirectX 12 and Vulkan are going to be taking the gaming world by storm. I got a chance to play around with Fable Legends and the graphics were downright amazing! Techspot recently did a comparison of DirectX 11 and 12 to show the FPS gain from a couple of different configurations in Ashes of the Singularity. While the improvements were nice overall, there were some particular gains with FX CPUs that I’ve been waiting to see for quite some time now. Let’s start with the bad news. DirectX 12 is not Bulldozer’s salvation. If anything, DirectX 12 is the final nail in that coffin, in the sense that a new architecture is long overdue. Shifting away from single-thread performance was never a good move. I still believe AMD was on the right track in that we needed to move to better utilization of multi-threading, especially in gaming, but that shouldn’t have come at the sacrifice of single-thread performance. Zen is looking to solve these issues, but these initial results show an FX-8350 struggling to keep up with an i3. Even though the i3 is a later generation, we’re still talking about a budget CPU beating out an enthusiast one for gaming. Now that we’ve got that out of the way, though, let’s get on to the good news for AMD. We still have only o...




What if I told you an R9 290X was competing with a GTX 980 Ti?

So, I’m browsing the internet trying to find out whether I should save some cash and go with the i5-6600K, or go all out with an i7-6700K. I already know that there won’t be much of a discernible difference with current games, but DirectX 12 games are on the horizon, I just got a beta invite to one such game, and there could be some performance to gain from the hyper-threading. I didn’t find the info I wanted, but I did find something astonishing. The GTX 980 Ti seems almost untouchable, but you can imagine my shock when I saw an R9 290X tying, and even beating, the Maxwell behemoth in several benchmarks. Yesterday, I did a pretty heavy write-up on some Ashes of the Singularity benchmarks that surfaced about a week ago. It turns out those weren’t the only ones done. Ars Technica decided to do a very comprehensive set of tests comparing an old-school R9 290X with a very state-of-the-art GTX 980 Ti, and pretty much showed the card matching the NVIDIA flagship at every turn. The 980 Ti still destroys AMD’s part in DirectX 11, but once we get to 12 we see a super competitive landscape. Here are a couple of patterns I noticed. The GTX card benefits slightly more from 6 cores and hyper-threading than the R9 card does at higher resolutions. The NVIDIA card also has a slight advantage in average framerates during the heavy scenes. An interesting thing that was happening was that once hyper-threading was disabled and the CPU was reduce...




Ashes of the Singularity Scaling: The AMD Crossroads

Last week, the new game “Ashes of the Singularity” had a pretty comprehensive scaling review performed with both DirectX 11 and DirectX 12. The results were interesting, to say the least. Multiple CPUs were used to test both the R9 390X as well as the GTX 980. While AMD enjoyed some impressive gains, NVIDIA had some fairly lackluster results that even prompted the company to release statements for damage control. It would be very easy to say that AMD is making a comeback and NVIDIA is going to be in trouble, but that would be too easy. How can we come to the proper conclusions about these results? Let me start off by saying that this is great news for AMD. It’s long been claimed that Radeon GPUs would be much better if the drivers could just utilize them properly. It seems this is almost true, but rather than drivers, it’s APIs that needed to take advantage of that hardware. However, I’m seeing some massive problems here, and if AMD doesn’t solve them quickly, we can say goodbye to competition for a long time to come. I want to show you three conclusions that I drew from these results, and why I think there could be more bad news here than good if AMD doesn’t make some dramatic changes in the near future. Let’s begin with the first big implication these AotS results are showing us. AMD needs to refocus their software development. This seems like something that is already in progress, but when we see what Direc...



The GTX 950 Review that Matters

I’ve been a long-time League of Legends player. I’m one of those silver scrubs who likes to play competitively, tries to get to gold, but ultimately just uses LoL as an excuse to hang with his friends. League has had its moments for me, but ultimately, the game is too time-consuming and frustrating for me to actually get anywhere. Then Heroes of the Storm happened. I found such a perfect blend of competitiveness and a schedule-friendly match system that I haven’t touched League for a good month or two now. So when I heard the GTX 950 was the go-to card for MOBA games, imagine my surprise when nobody was measuring frame rates in MOBA games. (Especially since they’re free!) Thankfully, I finally found a review that focused on MOBAs and I have to say, the GTX 950 is looking like a nice little card. Hardware Heaven posted a review on several GTX 950 cards and personally, outside of skipping the overclock section, I think they nailed it. First off, they highlighted the pipeline advantage. Basically, NVIDIA optimized the render path so that the delay is cut nearly in half. This should lead to a smoother overall gaming experience and should also help reduce latency, which is important in competitive games. Whether or not that drop in latency is actually noticeable, the frame rates in the various MOBAs are looking good. The only game in which the GTX 950 lagged behind an R7 370 was DOTA 2: Reborn. The other MOBAs on the list gave...




[UPDATE] Possible NVIDIA GTX 950 Ti on the Horizon

UPDATE: It looks like NVIDIA is just releasing the GTX 950 in 2 GB and 4 GB variants. Each will still have a 128-bit interface and will likely be priced to compete in the $100-150 range. http://videocardz.com/57015/confirmed-nvidia-to-launch-geforce-gtx-950 It looks like NVIDIA isn’t willing to let the sub-$150 market go without a little more competition. Even though they’ve had two Maxwell cards in this bracket for a while now with the GTX 750 Ti and 750, the new GTX 950 Ti and 950 will offer a slightly updated GPU that I’m sure will bump the performance up as well. While neither NVIDIA nor AMD is willing to design much more on the 28nm manufacturing process with 16nm around the corner, this will let team Green bring a little more competition to the budget segment of the market. It looks like the GPU core will be based on a cut-down GM206 die, which is currently in use by the GTX 960. The power ratings will be just under 100W for the 950 Ti, with the 950 coming in at a meager 64W. These are some impressive ratings, but we’ve come to expect that from Maxwell. Honestly, there isn’t much to see here, but hopefully we may yet see a 960 Ti that bumps the interface up from the 960 while staying in the mid-$200 price range. That seems to be the sweet spot for performance to cost, but even the 950 Ti and 950 aren’t official yet. Time will tell soon enough. http://wccftech.com/nvidia-readying-geforce-gtx-950-ti-geforce-gtx-950-graphics-car...




Getting up to Speed with Fiji and Maxwell

If you have no idea what’s happening with AMD and NVIDIA, then you either live under a rock, or you just don’t care about computer components that much. Assuming you’re here because you don’t fit into either category, let me get you up to speed on what’s been happening in the GPU world, especially since the anticipated release of stacked memory is right around the corner. The stage is set with NVIDIA dominating the GPU market. Maxwell was impressive when it launched, and has managed to become one of the most notable GPU architectures to date. AMD, on the other hand, is telling us they aren’t out yet. With a slew of refreshes, as well as two high-end GPUs that are the first ever to feature stacked memory, the Green team might be facing some stiff competition in the upcoming weeks. DirectX 12 could also have some new implications on the gaming front, so let’s throw all of this together and take a stab at what the future holds for us. We now have a fully unlocked GM200-400 die in the form of the Titan X and a slightly cut-down GM200-310 die in the GTX 980 Ti. The Titan X is the graphics card most of us loftily dream about having some day, but never seriously imagine owning. The GTX 980 Ti is the card that we might actually sell a kidney for. Many people were shocked to see a $649 starting price tag for what would be considered NVIDIA’s go-to enthusiast card. When you factor in that the gaming performance is right up there with t...




DirectX 12 Looking Better with an End to Mantle

News just arrived that AMD is ending the Mantle API and directing developers to start working towards DirectX 12 instead. I’m sure the thought that comes to everyone’s mind is, “What on earth is going on!?!” When I first saw the end of Mantle, even I was slightly disappointed, but as that segued into the appeal to start using DirectX 12, I felt the rush of excitement again. It may not seem like it, but I have a feeling this is really good news for what’s in store for the future of gaming. Let’s review what’s been happening in the API world a bit. A couple of years back, Mantle started claiming it could boost performance in AAA titles, and as results started flowing in, the potential for the gaming industry looked promising, even if more work was needed. Fast forward to GDC 2014 and Microsoft announces DirectX 12 coming to the next version of Windows. What had everyone scratching their heads was the inclusion of AMD, along with NVIDIA, Intel and Qualcomm, as one of the major supporters of the new Microsoft API. Why would AMD support something that was seemingly in direct competition with Mantle? Now, it looks like AMD saw potential to get in on the ground floor, which not only allowed them to make sure performance was going to reach their standards, but also allowed them to determine whether they needed to spend resources keeping Mantle in the mix if their GPUs could benefit just as well from DirectX 12. So is this good news? At first, it ...




The Hardware Hound Episode 2

After a crazy 2014, the Hound is back taking care of business and boy did he have some catching up to do. On this episode, he covers a nuclear reactor disguised as a CPU cooler, some important features on Windows 10, and most importantly, new GPUs from both NVIDIA and AMD. So sit back, enjoy the show and get excited because great hardware news is always good news!




GPU Wars Heating Up with New Leaks

2014 has almost passed and 2015 is quickly coming upon us. I used to be more excited about Christmas while dreading the new year, in light of the stark reminder that gift anticipation was over and school was about to begin. Now I’m an adult, Christmas means only one day off, and the upcoming GPU news makes 2015 a much more exciting prospect than that sock-shaped present under the tree. In recent days, some performance numbers have leaked for the new AMD flagships, and NVIDIA has some mentions of their upcoming monster graphics cards as well, which give us some gaming and power performance comparisons. Spoilers and speculation ahead! The first, and probably most enticing, info that leaked was a set of charts showing the performance of the new GPUs against not only their older counterparts, but also against the other manufacturer’s parts. The first tests involved a compilation of 20 games, followed by a single comparison of Battlefield 4. The next tests picked out several individual titles at 4K resolution. Without going into too much detail, the end result looks to have the AMD flagship card (Bermuda XT / R9 390X) taking the lead, with the NVIDIA flagship (GM200 full-fat) not far behind. This is amazing news, because the green side has managed to hold on to the single-GPU performance lead for some time. Red has countered with some impressive pricing structures, but with this kind of tight competition, we are bound to see some GPU wars that wil...




AMD R9 390X Possible Specs Spotted!

Maxwell did an amazing job of bringing the graphics card industry to a whole new level. Honestly, the architecture involved is quite amazing and we haven’t even seen the “Big Daddy” Maxwell card yet. (The 980 is by no means the top dog coming from NVIDIA.) This put AMD under a lot of pressure to deliver something that could compare. The good news is that not only does AMD have a response, but if the leaked specs combined with HBM are true, the R9 390X could be as big of a game changer as Maxwell was a month ago. First, let’s take a quick peek at some of the more prevalent specs. The 390X is slated to sport 4096 GCN cores. Compared to the 2816 cores of the 290X, that leap seems pretty massive. It looks like we’re still showing 4GB of memory with a 1GHz boost clock, but the next interesting fact about the Fiji architecture is the manufacturing process. These will be the first chips to feature the TSMC 20nm node, which should help with efficiency and performance. The tiny taste of Tonga last month showed that AMD was heading down the path of better efficiency with better performance. However, outside of major architectural advantages that are entirely possible but have not been demonstrated yet, the improvements here may not seem out of this world. That is, until we take a closer look at the memory being used. Some of you may have heard of HBM before, but others may be scratching their heads. HBM stands for High Bandwidth Memory and sounds very much...
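To see why the “High Bandwidth” part of the name matters, here is a minimal sketch of the standard peak-bandwidth arithmetic. The GDDR5 figures match the R9 290X’s published 512-bit, 5 Gbps configuration; the HBM figures (four 1024-bit stacks at 1 Gbps per pin) are only illustrative of first-generation stacked memory, not confirmed Fiji specs.

```python
# Peak memory bandwidth = (bus width in bits / 8) * effective data rate per pin (Gbps).
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# R9 290X-class GDDR5: 512-bit bus at 5 Gbps per pin -> 320 GB/s.
print(bandwidth_gb_s(512, 5.0))

# Illustrative first-gen HBM: 4 stacks x 1024 bits at 1 Gbps per pin -> 512 GB/s.
print(bandwidth_gb_s(4096, 1.0))
```

Even at a much lower per-pin data rate, the sheer width of a stacked interface is what pushes total bandwidth up, which is the whole premise behind pairing a GPU like Fiji with HBM.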



Nvidia quarterly revenue tops Wall Street expectations

(Reuters) – Nvidia (NVDA.O) on Thursday posted higher fiscal third-quarter revenue that was above Wall Street’s expectations, fueled by the company’s latest graphics chips for personal computers as well as processors for data centers and cars. Revenue in the fiscal third-quarter ended Oct. 26 was $1.225 billion, up 16 percent from the year-ago quarter, compared with analysts’ average estimate of $1.202 billion. For the current fourth quarter, Nvidia said it expects revenue of $1.20 billion, plus or minus 2 percent. Analysts on average expected fourth-quarter revenue of $1.198 billion, according to Thomson Reuters I/B/E/S. Third-quarter net income was $173 million, or 31 cents a share, compared to $119 million, or 20 cents a share, in the year-ago quarter. Non-GAAP earnings per share were 39 cents. After struggling to compete against larger chipmakers like Qualcomm (QCOM.O) in smartphones and tablets, Nvidia has increased its focus on using its Tegra chips to power entertainment and advanced navigation systems in cars made by companies including Volkswagen’s Audi, BMW and Tesla (TSLA.O). In the third quarter, revenue from Tegra chips for automobiles and mobile devices jumped 51 percent to $168 million. Nvidia’s much larger PC graphics chip business expanded 13 percent to $991 million. Shares of Nvidia rose 1.34 percent in extended trade, after closing up 0.45 percent at $20.22 on Nasdaq. Source (Reporting by Noel Randewich; Editing by Ch...





