Member Since 20 Nov 2009

Topics I've Started

Absolver Gets New Trailer, Pre-order and Collector's Edition Details

18 July 2017 - 03:41 PM

With its release coming at the end of August, Sloclap and Devolver Digital have revealed the pre-order bonuses for Absolver, the studio's online melee action game, along with the contents of the Collector's Edition and a new trailer showing weapons and powers gameplay. Players will be able to equip Tension Shards that fill during the course of a battle, enabling powers to be used and weapons to be formed. You will need to be careful with these weapons though, as they can break, or you can be disarmed and see them turned against you.

For those considering pre-ordering the game, you will enjoy 10% off the base price of $29.99, the Labyrinth Prospect mask, and Uring Priest gear. The limited batch of physical Absolver Collector's Editions from Special Reserve Games includes a 52-page The Art of Absolver book, a sticker pack, and an ultra-premium, wearable Prospect mask with display stand, in addition to the PS4 disc or a Steam key for the PC version. The Collector's Edition will cost $74.99, though.

Absolver launches on August 29 for PC and PlayStation 4.



Source: Press Release


AMD Reveals Some Threadripper and R3 Information

13 July 2017 - 03:59 PM

At long last, AMD has released some specs for the upcoming Ryzen Threadripper and Ryzen R3 CPUs, which complete the Ryzen lineup at the top and bottom, respectively. The two R3 processors are 4-core/4-thread CPUs, with the R3 1200 sporting a base clock of 3.1 GHz and a boost of 3.4 GHz, while the R3 1300X will have a base of 3.5 GHz and a boost of 3.7 GHz. The R3 CPUs will be launching later this month, on July 27, and as they use the AM4 socket they can be installed in the already available A320, B350, and X370 motherboards.

For those looking for high end desktop (HEDT) CPUs, we not only have some clock speed information on the Threadripper processors but even prices and a release window. Starting at the top, the Ryzen Threadripper 1950X, with its 16 cores and 32 threads, will have a base clock of 3.4 GHz and a boost of 4.0 GHz. Its suggested retail price is just $999, matching Intel's 10-core/20-thread i9-7900X CPU (3.3 GHz base with 4.3 GHz boost under Turbo Boost 2.0 or 4.5 GHz with Turbo Boost 3.0). The 12-core/24-thread Threadripper 1920X has a base clock of 3.5 GHz and a boost of 4.0 GHz and will be priced at $799. The video embedded below also shares some performance numbers for these CPUs from Cinebench R15.

These two Threadripper CPUs will be available in early August, and we will be getting more details on them following the R3 launch later this month. Do not forget that at SIGGRAPH, also at the end of this month, more information on Vega GPUs will be released as well.



Source: AMD


System Developed for Optimizing Use of Caches

10 July 2017 - 01:25 PM

A fundamental part of every processor is its caches, where the cores keep the information they need for the operations assigned to them. As these caches are on-chip, accessing them is very fast, but that does not mean there are not better ways to allocate the cache space among the cores. Researchers at MIT have developed a system called Jenga that finds the optimal distribution of not only the local caches but also DRAM for a CPU.

Two advantages to having caches built into a chip are that accessing the data in them is very fast and takes little energy, compared to off-chip memory. This makes optimizing the use of a chip's cache rather valuable, but modern chips have to be designed as a compromise between the capacity and latency needs of various programs. What Jenga does is measure the latency between each processor core and each cache it can access, and then use that information to build a cache hierarchy. The hierarchy accounts for the differences between levels of cache, like the on-chip L1, L2, and L3 caches, and Jenga also makes measurements for using DRAM. Once the optimal cache level is determined, algorithms from an older system called Jigsaw are used to optimally allocate the caches for the entire chip. Jenga builds on Jigsaw by considering cache level in its algorithms, but the Jigsaw algorithms for optimizing along the latency-capacity curve are still valid once the level is determined.
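To get a feel for the idea, here is a minimal sketch of the measure-then-build step. Everything in it is illustrative rather than taken from the paper: the bank names, latency numbers, and the simple "pick the closest pools" rule stand in for Jenga's actual measurements and optimization.

```python
# Illustrative sketch of Jenga-style hierarchy building (names and
# numbers are made up): each core measures its access latency to every
# on-chip cache bank and to DRAM, then builds a per-core virtual
# hierarchy from the lowest-latency storage pools.

# Measured access latencies in cycles (illustrative values).
latency = {
    "core0": {"bank0": 4, "bank1": 12, "bank2": 20, "dram": 120},
    "core1": {"bank0": 12, "bank1": 4, "bank2": 18, "dram": 120},
}

def build_hierarchy(core, levels=2):
    """Pick the `levels` closest storage pools for a core, ordered by
    latency; DRAM is considered alongside the on-chip banks."""
    pools = sorted(latency[core].items(), key=lambda kv: kv[1])
    return [name for name, _ in pools[:levels]]

for core in latency:
    print(core, "->", build_hierarchy(core))
# core0 -> ['bank0', 'bank1']
# core1 -> ['bank1', 'bank0']
```

Note how each core ends up with a different hierarchy built from the same physical banks; in the real system, the Jigsaw algorithms would then divide each bank's capacity among the cores along the latency-capacity curve.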

The researchers tested Jenga by simulating a system with 36 cores and found it could increase processing speed by 20-30% while reducing energy consumption by 30-85%. That is a fairly significant improvement for something that only changes how the hardware is used, and as more and more cores are added to CPUs, this could become an even more valuable system.

Source: MIT


3D RRAM Chip Combines Data Storage and Computing

06 July 2017 - 10:10 AM

The rate of innovation is always increasing as new creations lead to more new projects, but this progress is not uniform. In some instances the ability to produce something comes to outpace the ability to use it, and this is currently happening with data. Right now we can generate more data than many systems can efficiently handle, but many researchers are working to change that, including some at MIT and Stanford University, where a very advanced chip has been created that combines data storage and processing.

This new chip combines two technologies that can still be considered futuristic for computers: carbon nanotubes and a 3D architecture. Modern computer chips have a 2D design, though some have a 2.5D design with layers stacked and connected to each other. The benefit of a full 3D design is that the multiple parts of the chip can communicate with each other much more quickly and efficiently than is currently possible. Carbon nanotubes take this to a new level, as their small size and electrical properties allow the chips to be made denser. In this case the chip is a form of resistive random-access memory (RRAM), a kind of nonvolatile memory, and has some one million RRAM cells and two million carbon nanotube field-effect transistors. This combination of memory and computing removes the bandwidth bottleneck between data and processing that is an issue with today's largest datasets.

To prove the capabilities of this design, the researchers also added over one million nanotube-based sensors for detecting and classifying gases. The measurements from the sensors were all processed in parallel and written directly to memory, thanks to this integrated design of emerging nanotechnologies. What makes the accomplishment even more impressive is that the chip is compatible with CMOS, so such an RRAM chip could be combined with current silicon chips, and there is a fair chance there will be many more applications for this design in the future.

Source: MIT


NVIDIA Investigated Multi-Chip-Module GPUs in Research Paper

05 July 2017 - 10:00 AM

Necessity is the mother of invention, and the semiconductor industry is quickly approaching the limits of current technologies. Consumers and professionals keep demanding more and more performance, but there are physical limitations in the way. A solution needs to be found for this demand to be met, and NVIDIA, Arizona State University, the University of Texas at Austin, and the Barcelona Supercomputing Center recently worked together to research one of these approaches: multi-chip-modules.

To continue increasing the amount of computing power, more transistors are needed, and traditionally this has been achieved by improving monolithic designs or, in the case of GPUs, by using a multi-GPU system. The monolithic approach has worked for a long time now, but it is approaching the limits of silicon feature size and the aperture size of the lithography systems used to create the processors. A multi-GPU system removes the size concern but presents latency issues, because of how long it takes to transmit information between two different processors, as well as load-balancing concerns. A multi-chip-module (MCM) GPU, which stitches together multiple processors on a single substrate, works around the size limitations with far less latency than a multi-GPU system.

According to the research paper, an MCM-GPU appears to be a very desirable solution, based on the simulations run as part of the work. With an optimized design to address bandwidth, latency, and load-imbalance concerns between GPU modules, an MCM-GPU with 256 streaming multiprocessors (SMs) was 45.5% faster than the largest possible monolithic GPU with 128 SMs and 26.8% better than a multi-GPU system using two such GPUs. When compared to a hypothetical 256-SM monolithic GPU, the MCM-GPU came within 10% of its performance, so while it is not perfect, it is very close.
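Putting those reported percentages on a common scale makes the comparison easier to see. The arithmetic below normalizes everything to the 128-SM monolithic GPU; the 256-SM monolithic value is only a rough estimate, since "within 10%" gives a bound rather than an exact figure.

```python
# Rough arithmetic from the paper's reported figures, normalized to the
# 128-SM monolithic GPU = 1.0. The 256-SM monolithic number is an
# illustrative estimate derived from the "within 10%" statement.
baseline_128sm = 1.0
mcm_256sm = baseline_128sm * 1.455   # MCM-GPU: 45.5% faster than baseline
multi_gpu = mcm_256sm / 1.268        # MCM-GPU is 26.8% better than dual-GPU
mono_256sm_est = mcm_256sm / 0.90    # MCM-GPU within ~10% of this

print(f"MCM-GPU (256 SMs):       {mcm_256sm:.2f}x")
print(f"Dual-GPU (2 x 128 SMs):  {multi_gpu:.2f}x")
print(f"256-SM monolithic:       ~{mono_256sm_est:.2f}x (estimate)")
```

So on this scale the dual-GPU system lands around 1.15x the baseline, while the ideal (but unbuildable) 256-SM monolithic part would sit somewhere near 1.6x.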

AMD is also working on MCM designs, with the recently released EPYC CPUs being MCM processors that use Infinity Fabric to connect four Zeppelin dies, and the Navi GPU architecture is likely to use Infinity Fabric in a similar manner. When NVIDIA might actually create an MCM-GPU is hard to guess, but with both companies at least considering this direction, it will be interesting to see what comes.

Source: NVIDIA
