• Breaking News

    Friday, March 1, 2019

    GPU Mining CN-GPU: a new path to ASIC resistance


    CN-GPU: a new path to ASIC resistance

    Posted: 01 Mar 2019 04:12 AM PST

    Disclaimer: This post is aimed at people who see value in POW consensus; it isn't meant to start a discussion on the disadvantages of POW or the advantages of other consensus mechanisms. Nor does it intend to start a debate on the futility of ASIC resistance, or on the advantages of higher hashrate at the cost of centralisation among ASIC manufacturers. If you hold a strong opinion on any of the above, I ask you to skip this post.

    TLDR: Here is the source code. CN-GPU uses 32-bit floating point operations to beat ASICs. Hashrate is directly proportional to the number of FP cores, and more FP cores mean more heat generation and power consumption.

    Vive la résistance!

    CN History

    Before we delve into CN-GPU, let me first introduce you to the current situation in the Cryptonote space and the fight against ASICs. ASICs first showed up at the end of the 2017 bull run. Coins based on Cryptonote use the CryptoNight POW, and until then it was assumed that developing ASICs for CN was cost-prohibitive. However, with abnormal hashrate increases across different CN-based coins, particularly Monero, people began to question the ASIC resistance of the CN POW. Given the situation at the time, the abnormal hashrate increase was attributed to mining interest from the masses; a few even claimed the hashrate explosion came from GPU farms powered by solar power. When the enthusiasm died down in later months, however, the presence of ASICs became more apparent. Due to the secretive nature of ASIC manufacturing and development, their presence was only confirmed later. To a GPU miner the results should be terrifying: ASICs could perform 64x better than GPUs on CNv8.

    The common theme among every attempt at ASIC resistance was making a minor change to the CN POW. While such an approach bricked the current generation of ASICs, given a few months they would come back. It seems ASIC manufacturers have been bitten by the crypto bug: even in a bear market, fabricating a new batch of ASICs and putting them to work appears to be a profitable business venture.

    Let us look at the most well-known example of ASIC resistance: Monero. The developers first changed their POW in April to brick the ASICs; by the time of Monero's next hardfork later that year, the network was again riddled with ASICs/FPGAs. Monero then changed its algo to CNv8 (the current version), and looking at the present situation, it would seem the ASICs have come back with a vengeance. Monero's next hardfork is on March 9; while the current batch of ASICs will be bricked, some hardware specialists have expressed doubts about the next POW. Whether such doubts are justified, only time will tell.

    CN GPU

    Coming to the topic at hand: CN-GPU is developed by u/fireice_uk and u/psychocrypt. The developers previously made CN-Heavy, which was later adopted by nearly 10 coins. In spite of the low market cap (555K) and ranking (888) of the CN-Heavy-based RYO, ASICs/FPGAs showed up on its network in late December, after a period of nearly 8 months. The devs could have claimed the large miner was a GPU farm powered by solar. Instead, they informed the RYO community about the possibility of FPGAs/ASICs and started developing a new POW.

    CN-GPU is a complete rewrite of the original CN POW and is based on 32-bit floating point operations: addition, subtraction, multiplication and division. It doesn't rely on any fancy hardware feature, instead anchoring its performance to the physical number of FP cores. The algorithm is a very thorough test of a device's IEEE 754 implementation and has been tested on Intel SSE2, Intel AVX, ARMv7, ARMv8, NVIDIA and AMD.
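    What makes float maths usable inside a consensus-critical hash is that IEEE 754 binary32 arithmetic, with a fixed rounding mode, is bit-exact across conforming hardware. A minimal sketch of that property (my own illustration, not the CN-GPU kernel itself):

```python
import struct

def f32(x):
    """Round a Python double to IEEE 754 binary32, as a binary32 kernel would."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

def f32_bits(x):
    """Bit pattern of x encoded as a binary32 word."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

# The four operations CN-GPU relies on, rounded to binary32 after each step.
a, b = f32(1.0 / 3.0), f32(0.1)
add, sub, mul, div = f32(a + b), f32(a - b), f32(a * b), f32(a / b)

# Under round-to-nearest-even, every conforming implementation (SSE2, AVX,
# ARMv7/v8, NVIDIA, AMD) must produce these exact bit patterns, so two miners
# on different hardware always agree on the hash.
print(hex(f32_bits(add)), hex(f32_bits(mul)))
```

    Any device that rounds even one of these operations differently would produce a different hash, which is why the algorithm doubles as a stress test of the IEEE 754 implementation.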

    Some might argue that floating point maths can be done using AND/OR/XOR gates, which an ASIC manufacturer could pipeline. While this is technically true, it should be noted that any computer operation can be reduced to a series of NAND gates. Just as you can solve a Bitcoin hash with pen and paper, you could also mine CN-GPU using NAND gates; both methods are inefficient. The best way to do floating point operations is through FP cores.

    Ah ha! An ASIC can be made with only FP cores. Before you rush to your nearest ASIC manufacturer with this idea, you should know there is already an ASIC for floating point operations: the GPU. Yes, a GPU can be called a floating point ASIC, because it is filled with FP cores. Suppose you don't believe me and have already reached the ASIC manufacturer: since GPUs are filled with FP cores, you propose a die twice the size of a normal GPU, stripped of everything except FP cores. The first problem you would face is power consumption and heat dissipation. Since most of a GPU's power is consumed by its FP cores, stripping everything else out and adding more FP cores increases power draw and heat generation. The more FP cores you add, the more watts and heat you have to deal with; you won't be able to add 60x the FP cores. Suppose you go ahead anyway: great, you now have a machine that can mine CN-GPU. Since the only way to get more hashrate is through more FP cores, getting 50x-125x the hashrate would require 50x-125x the FP cores of a normal GPU, which isn't feasible. It is, however, possible to make an ASIC 2-3x the size of a GPU, with 2-3x the FP cores. You now have a machine that is 2-3x faster than a GPU. But at what price will you sell it? Will a person buy a $6,000 ASIC that is only 2-3x faster than a GPU? As of now there is no CN-based coin in the top 10.
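    The scaling argument above can be put into a toy model. Assuming, as the post argues, that both hashrate and power draw scale roughly linearly with FP-core count (all baseline numbers below are illustrative assumptions, not measurements):

```python
# Toy model of the argument: hashrate and power both scale roughly linearly
# with FP-core count, so an FP-only ASIC buys speed but no efficiency.
GPU_HASHRATE = 1000.0   # hashes/s at baseline (assumed)
GPU_WATTS = 200.0       # watts at baseline (assumed)

def scale(core_multiple):
    """Hashrate and power for a die with core_multiple times the FP cores."""
    return GPU_HASHRATE * core_multiple, GPU_WATTS * core_multiple

for mult in (1, 2, 3):
    h, w = scale(mult)
    print(f"{mult}x cores: {h:.0f} H/s at {w:.0f} W -> {h / w:.1f} H/J")
```

    The hashes-per-joule figure stays constant at every multiple: a 2-3x ASIC is faster, but it is no cheaper to run per hash, and its die and cooling costs grow with it.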

    What about FPGAs? Well, by now it shouldn't come as a surprise that FPGAs can mine CN-GPU. In fact, all FPGAs advertise TFLOPS (a measure of floating point throughput). Bitmain recently jumped into the market with an FPGA that can do 2 TFLOPS. And if you have $10K lying around and buy the top-of-the-line FPGA, made by Intel, you would get 9.2 TFLOPS. Sounds good, right? Well, a Vega 64 gives you 12.5 TFLOPS. This is why deep/machine learning farms use GPUs instead of FPGAs; the deepfakes some of you view for science are made possible by the floating point ASIC aka the GPU. And the reluctance of companies such as Google or Apple to make their own FP ASICs for deep learning should speak volumes.
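    Putting the quoted figures side by side makes the point starker. The TFLOPS numbers and the $10K Intel FPGA price come from above; the Vega 64 price is my own ballpark assumption for illustration:

```python
# Rough cost-per-TFLOPS comparison using the figures quoted above.
# Only the Vega 64 price ($400) is an assumption; the rest is from the post.
devices = {
    "Bitmain FPGA": {"price_usd": None,   "tflops": 2.0},   # price not given
    "Intel FPGA":   {"price_usd": 10_000, "tflops": 9.2},
    "Vega 64 GPU":  {"price_usd": 400,    "tflops": 12.5},  # price assumed
}

for name, d in devices.items():
    if d["price_usd"] is None:
        continue  # skip devices with no quoted price
    print(f"{name}: ${d['price_usd'] / d['tflops']:,.0f} per TFLOPS")
```

    Under these assumptions the FPGA costs on the order of thirty times more per TFLOPS than the GPU, which is the whole economic case against FPGA mining of an FP-bound POW.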

    You might be curious about the performance of CPUs on this POW. CPU performance is limited. The devs have acknowledged it and expressed their desire to close the asymmetry in the next iteration of the POW (the present 2MB L3 scratchpads will become 3GB scratchpads). Taking a detour to express my personal opinion: CPU mining, in spite of many years, has failed to develop dedicated mining communities like the ones we have for GPU mining. The lack of profitability is the major reason we won't see any large-scale CPU mining communities. This isn't to say CPU-only mining is not viable; if some coin manages to keep away botnets and server farms and ensures the profitability of solo miners with an ever-decreasing supply, I would be delighted to see it succeed. Another valid criticism of CN-GPU is the lack of documentation. Given the little time the devs had (2 months), it would be fair to give them some time before demanding documentation. The devs are quite busy playing bingo; once they have free time, documentation will be served on a silver platter.

    This brings us to the end of the post. CN-GPU doesn't have any magic tricks up its sleeve; while it is possible to have ASICs on the network that are 2-3x faster than GPUs, the cost should prevent any large-scale operation. This is not a perfect solution, but it is better than allowing ASICs to mine one's network while waiting for the perfect one. The POW provides a new path for projects that want ASIC resistance and are tired of playing cat and mouse with ASICs.

    submitted by /u/Raj9224

    GTX1660ti RVN/GRIN/ETH results

    Posted: 28 Feb 2019 03:58 PM PST

    I did some quick run-throughs of my favorite algos with my GTX 1660 Ti. Results/pictures below.

    Ethereum

    I noticed there's a bug in the miners or software, because something is up: topping out at +1500 memory does nothing. Basically, anything past 6800 MHz on memory results in decreased hashrate. The best I was able to get was 29.5 MH/s @ ~65W; however, Claymore's miner was the only one working properly, and it had crazy hashrate bounces. Miner devs need to tune or add GTX 1660 Ti kernels to their miners.

    Edit: getting about ~30 MH/s stably in PhoenixMiner 4.1c @ ~60W.

    RVN

    RVN was probably the best result of all of them. I'm talking ~16 MH/s for a mere 75W! That's GTX 1070 Ti-level performance for almost half the wattage. Extremely impressive.

    GRIN

    Only ~3.3 h/s for GRIN. I assume miner devs need to optimize here as well.

    Zhash

    Another bad result: only ~36 Sol/s at stock, which is GTX 1060 level. Again, I assume that as time goes on the GTX 1660 Ti will get some more love and show its true power.

    My take: the GTX 1660 Ti was "Ampere", the unreleased mining GPU from Nvidia they teased last year. It lacks RT/Tensor cores, which lowers power consumption, yet brings all of Turing's excellent feature set. Truly, the GTX 1660 Ti will be the next "RX 480" for mining, if you will.

    https://i.redd.it/sh6r7ympgej21.jpg

    [Screenshots: RVN; Ethereum, 30 MH/s @ 60W; Zhash; GRIN Cuckaroo29]

    submitted by /u/Xazax310

    Riser Video cards Not detecting in 16x slots

    Posted: 01 Mar 2019 04:53 AM PST

    I'm very new to this, so I'm sorry if this is easy to solve, I did search for solutions, but nothing has worked so far.

    I should say this build is for rendering, not mining, and it also serves as my work/gaming PC, so I don't want to cripple its performance if possible.

    The problem is that Windows 10 isn't detecting any video card plugged into my PCIe x16 slots via the PCIe risers. The 1x slot is not a problem; it works fine, though I only have one on my board.

    I've tried a few things to get it to work:

    1) Tried different video cards in different risers; they all work, but only in the 1x slot. No combination works in the 16x slots.
    2) In the BIOS, I enabled 4G Decoding and changed all the PCIe link speeds to Gen1. But nothing changed.

    Motherboard is an ASRock X99 Extreme4. Link to a top-down photo

    I have a frame for mounting GPUs Externally. And a 750w HP PSU with a breakout.

    The risers I'm using are VER008S

    My GPUs are:

    • 1x Founders 1080ti
    • 1x ASUS 1080ti
    • 1x Founders 1080
    • 1x ASUS 970

    Thanks for your time

    submitted by /u/phunkaeg

    GTX 1660 Ti Mining Performance

    Posted: 28 Feb 2019 04:34 PM PST

    Disclaimer: I am still testing, and I am sharing this information so the community can work together toward the final results. Allow me time to fine-tune, and thank you for your patience.

    The video card I am using for testing is the MSI GTX 1660 Ti Ventus with 6GB of GDDR6. I had some issues tuning the card for certain algos but will continue to tune as needed. I will update this post just as I did in the past with the data I shared for the RTX 2080 and RX 590. If you want me to test anything (certain algos, miners, or command-line arguments/parameters you believe I should use), please share. This is an open setting; let's help each other. My intention is to help the community make an informed decision on whether buying this GPU is worth the investment. I have jumped on the grenade to help people save money in the past, so if you could take the time to like and subscribe to my channel, despite the quality not being top-notch, I would greatly appreciate it. Linked below are the video and the data table for you to use.

    https://youtu.be/oAmps8dweZM

    Thank you for your time, take care.

    Key:

    *** = testing is still in progress.

    Algo (Miner) / Hashrate / OC settings / Powerlimit / Intensity / Powerdraw

    BCD (Trex 0.9.2) *** 20 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 18 90 Watts
    Bitcore (Trex 0.9.2) *** 20 Mhs Core +80 / Mem +0 Powerlimit 75 Intensity 18 90 Watts
    C11 (Trex 0.9.2) *** 24.1 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 18 90 Watts
    CNHeavy (XMRig 2.13.1) *** 521 Hs Core +0 / Mem +0 Powerlimit 75 86 Watts
    CNV8 (XMRStak 2.8.3) 575 Hs Core +40 / Mem +1000 Powerlimit 75 84 Watts
    Cuckaroo29 (Bminer v14.3) 3.43 G/s (Hs) Core +60 / Mem +400 Powerlimit 75 90 Watts
    ETH Pre-Fork (Claymore 12) 29.5 Mhs Core +80 / Mem + 1100 Powerlimit 75 90 Watts
    Equihash 144_5 (EWBF 0.6) 37 Sol/s Core +60 / Mem +200 Powerlimit 75 90 Watts
    Equihash 150.5 (EWBF 0.6) 13.5 kH/s Core +60 / Mem +200 Powerlimit 75 90 Watts
    Equihash 96.5 (EWBF 0.6) 18.27 kH/s Core +60 / Mem +200 Powerlimit 75 90 Watts
    Equihash 192,7 (EWBF 0.6) 17 Sol/s (Hs) +140 Core / Mem +200 Powerlimit 75 90 Watts
    Hex (Z-enemy 1.28) *** 12.3 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 20 90 Watts
    Hmq1725 (CryptoDredge 0.17) *** 9.2 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 6 90 Watts
    Lyra2REv3 (CryptoDredge 0.17) *** 40.8 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 6 90 Watts
    Lyra2z (CryptoDredge 0.17) *** 2.6 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 6 90 Watts
    MTP-Zcoin (CryptoDredge 0.17) 1.84 Mhs Core +80 / Mem +600 Powerlimit 75 Intensity 6 90 Watts
    Neoscrypt (CryptoDredge 0.17) *** TBA TBA Powerlimit 75 Intensity 6 90 Watts
    Phi (CryptoDredge 0.17) *** 24.6 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 6 90 Watts
    Phi2 (CryptoDredge 0.17) 9.2 Mhs Core +80 / Mem +200 Powerlimit 75 Intensity 6 90 Watts
    ProgPOW (ethminer0.18 alpha) 13.4 Mhs Core +0 / Mem +0 Powerlimit 100 120 Watts
    ProgPOW (ethminer0.18 alpha) *** 11.9 Mhs Core +60 / Mem +200 Powerlimit 75 90 Watts
    ProgPOW (BCI Miner 0.16) 11.67 Mhs Core +60 / Mem +200 Powerlimit 75 90 Watts
    Skunkhash (CryptoDredge 0.17) *** 36.2 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 6 90 Watts
    Sonoa (Trex 0.9.2) *** 2.55 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 18 90 Watts
    Timetravel (Trex 0.9.2) *** 36 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 18 90 Watts
    Tribus (Trex 0.9.2) *** 81 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 18 90 Watts
    x16r (Z-Enemy.1-28) *** 17.8 Mhs Core +100 / Mem +0 Powerlimit 75 Intensity 20 * 90 Watts
    x16r (Trex 0.9.2) *** 19 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 18 98 Watts
    x16rt (Trex 0.9.2) *** 20 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 18 86 Watts
    x16rt (CryptoDredge 0.17) 19 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 6 89 Watts
    x17 (Trex 0.9.2) *** 16.5 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 21 88 Watts
    x21s (Trex 0.9.2) 11.8 Mhs Core +80 / Mem +200 Powerlimit 75 Intensity 20 88 Watts
    x22i (Trex 0.9.2) 10.2 Mhs Core +80 / Mem +400 Powerlimit 75 Intensity 18 89 Watts
    Xevan (Z-Enemy.1-28) *** 3.6 Mhs Core +0 / Mem +0 Powerlimit 75 Intensity 20 88 Watts

    At stock settings the 1660 Ti mines at the same levels as a 1070, but at around 118 Watts.

    Mining on different algos, adjusting the core and memory settings didn't do anything to increase hashrate, but lowering the TDP reduces the power draw to 90 Watts, measured from the wall. Average temps varied between algos from 63C to 67C. When mining Equihash 144_5 it seemed better to leave the core and memory clocks alone and just reduce the TDP to 75%. GMiner performs better for me than the EWBF miner. As driver and miner devs release newer revisions, we should see more stability and better performance when mining with the GTX 1660 Ti.

    Pushing the memory OC past +1100 while mining ETH seems to reduce the hashrate, so +1100 is the sweet spot for my card; anything beyond that will crash. Trying +1400 on memory caused Nvidia Inspector to crash and become unusable, so I had to reinstall drivers. ***Note: just because you can push the memory really high like on the 20-series cards doesn't mean you should push it too hard.

    Vega - SerpentXSF

    https://i.redd.it/sgl9nahfeej21.jpg

    submitted by /u/cmvjax

    Gigabyte 1070 $300 @ Newegg + Fortnight whatev

    Posted: 28 Feb 2019 03:22 PM PST
