
GTX1070 Linux installation and mining clue (Goodbye AMD, Welcome NVidia for miners)

Hi all,

Yeah, you read that correctly: there is a card from NVidia that outperforms the AMD cards in the same class. Yes, AMD has new cards coming, but they aren't here yet, are they (September perhaps, maybe even Q1/17, so...). Compared with an R9 390X it's less power, less heat and less stress on the PSUs (cheaper PSUs needed), so the AMD isn't really that much cheaper up front, and after a year (for me at least) the AMD is a lot more expensive. I am not the only one who has done the math on this.

As you might have read, I got 18 GTX 1070s and posted some benchmark information earlier (http://forum.ethereum.org/discussion/comment/42663). @vaulter asked for some details (@vaulter, perhaps you can add your GTX 1080 findings and settings?). In this topic I want to write up the more technical notes on how you can get this working. The short summary: yes, I managed to get 6x GTX 1070 running at 218.11MH/s with heavy tuning/overclocking, but I have no idea how that would hold up long term. Currently I keep them at 192.88MH/s (x3 rigs), which seems to be the 'safe overclocking default' to me. Who knows how things will progress with updates from @Genoil, and whether it will run stable and fast under a Windows driver. Safe to say, with less power consumption AND more MH/s than a card like the R9 390X, the GTX 1070 at its price is a very nice card to have (especially if you run apps that only run well on NVidia cards).

It took me quite a while to get this working, and this document contains only 5% of my notes; it's the minimum to get you started, and you will need to do some tuning of your own to max out your cards. Some things aren't as good as I'd like yet (e.g. headless VNC access without the use of a monitor), but it works and, more importantly, it's stable. Thanks go out to @Genoil for his clue and his work on ethminer. This document is not entirely meant as a walk-through, as some knowledge of mining, Linux, overclocking and common sense is still required... So here goes.

I took the time to write & test this, so consider donations to me or @Genoil (for his work keeping the ethminer project alive) :smile: I am not really a miner; I use the GPUs for other projects as well and mining is a means to an end. Earning back the hardware helps my project, because I can get better hardware.

Me: ETH: 0xbF2d2D40caDf23f799B808D4A7Db72863f854c34
Genoil: ETH: 0xeb9310b185455f863f526dab3d245809f6854b4d

Disclaimer: This might not work for you, and you will have to do some thinking of your own to get it working. I have the Founders Edition, and things might be very different on a vendor's modified PCB, so overclocking with nvidia-settings might need some tuning if you have a different card. Even two FE cards perform differently (silicon lottery... google it); my guide aims for a stable machine overall, so the lowest stable card pulls down the higher-rated ones. You can tune that out per card.

Connecting hardware & BIOS

If you buy PCI-e risers, buy them from China and/or check the version/quality... some French dude on eBay sold me badly performing ones, and some were broken. A fair share of my stability issues (and loads of time) went into tracking down this problem.

Protip on assembly: the open air frames I got can be connected to each other... it's a great way of racking, but my 3 rigs weigh over 60KG now... not really handy to move, so keep them separate and you only have to move 20KG three times.

Make sure your primary GPU in the BIOS is set to PCI-e and disable your onboard GPU, as it confuses X11. Connect your monitor to the primary PCI-e graphics slot (the x16 one) during installation / BIOS setup (others won't work), but after installation you need to stick it in the FIRST GPU (PCI slot number 1) to get headless mode working correctly.

Get the basics going

Install Ubuntu 14.04 SERVER (do not try 16.04 or any desktop version unless you have found and fixed a bunch of problems)

apt-get install -y opencl-headers build-essential protobuf-compiler libprotoc-dev libboost-all-dev libleveldb-dev hdf5-tools libhdf5-serial-dev libopencv-core-dev libopencv-highgui-dev libsnappy-dev libsnappy1 libatlas-base-dev cmake libstdc++6-4.8-dbg libgoogle-glog0 libgoogle-glog-dev libgflags-dev liblmdb-dev git python-pip gfortran python-twisted

Download Cuda8 from NVidia and install it:

dpkg -i cuda-repo-ubuntu1404-8-0-rc_8.0.27-1_amd64.deb
apt-get update
apt-get install -y cuda

REBOOT FIRST!

Next, download the latest driver *367.27* and install that too:

bash NVIDIA-Linux-x86_64-367.27.run (say yes to everything, ignore the loading error)

REBOOT AGAIN

Now you should have the driver running; check it with nvidia-smi. If you see all your cards AND the driver version you installed, you're good.
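For a quick scripted check, nvidia-smi can also print just the fields you care about (standard query flags, shown here as a minimal sketch):

nvidia-smi --query-gpu=index,name,driver_version --format=csv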


Fix the hardcoded Cuda8 361 driver

We're going for Cudaminer from Genoil (1.1.5). However, somewhere in the CUDA framework there is a hardcoded detection of loaded drivers which isn't working (for us). There is an easy fix for this, and you want to apply it, because OpenCL mining with the GTX 1070 is not very stable (I never got it running longer than 4 hours without the kernel dying on me). It's the brute-force solution, but it works for me:

# park the old 361 modules, then symlink the installed driver's modules
# under the 361 names that the CUDA toolkit looks for
cd /lib/modules/4.2.0-38-generic/updates/dkms/
mkdir old
mv nvidia_361* old/
ln -s nvidia.ko nvidia_361.ko
ln -s nvidia-modeset.ko nvidia_361_modeset.ko
ln -s nvidia-modeset.ko nvidia_361-modeset.ko
ln -s nvidia-drm.ko nvidia_361_drm.ko
ln -s nvidia-uvm.ko nvidia_361_uvm.ko
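One addition from my side that wasn't in my original notes: after creating the symlinks it can't hurt to refresh the module dependency list and confirm the driver still loads:

depmod -a
lsmod | grep nvidia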

Building the miner

Now, you can run 1.0.8 from Genoil, but I found 1.1.4 (and now 1.1.5) to be a lot better, especially with the initial DAG loading (3-4 minutes before vs seconds now), so let's install that:

git clone https://github.com/Genoil/cpp-ethereum/
cd cpp-ethereum/
git checkout 110
mkdir build
cd build
cmake -DBUNDLE=cudaminer -DCOMPUTE=61 ..   # warns that libOpenCL might be unsafe/hidden; I just ignored that and it works fine for me
make -j32
make install

The libs are now installed in /usr/local/lib. I am lazy (there is probably a cmake argument to fix this), but they should be in /usr/lib, so I just move them:

cd /usr/local/lib/
mv lib* /usr/lib/
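Alternatively, if moving libraries around bothers you: on a stock Ubuntu install /usr/local/lib is normally already in the loader's search path, so just refreshing the linker cache may be enough (my suggestion, I haven't run my rigs this way):

ldconfig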

Now check your version:

ethminer --version

If it says 0.9.41-genoil-1.x.x TWICE (on the first and last lines), you're good; otherwise you'll get it ONCE on the first line and perhaps an error.

Then go and list your devices:

root@miner:/usr/local/lib# ethminer -U --list-devices
ethminer: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by ethminer)
ethminer: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by /usr/lib/libethcore.so)
ethminer: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by /usr/lib/libethash-cl.so)
ethminer: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by /usr/lib/libethash-cl.so)
Genoil's ethminer 0.9.41-genoil-1.1.5
=====================================================================
Forked from github.com/ethereum/cpp-ethereum
CUDA kernel ported from Tim Hughes' OpenCL kernel
With contributions from nicehash, nerdralph, RoBiK and sp_
Please consider a donation to:
ETH: 0xeb9310b185455f863f526dab3d245809f6854b4d
[CUDA]: Listing CUDA devices.
FORMAT: [deviceID] deviceName
[0] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507162624
[1] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507555840
[2] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507555840
[3] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507555840
[4] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507555840
[5] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507555840
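If the device list looks good, you can start mining on the CUDA kernel in farm mode. The pool URL and wallet below are placeholders, substitute your own; -U (use CUDA) you've seen above, and -F is the farm-mode flag in Genoil's fork:

ethminer -U -F http://your.pool.example:8080/0xYourWalletAddress

You can also do a standalone benchmark first with ethminer -U -M before pointing it at a pool.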

Overclocking to the max

Unless anyone can find me a working tool to change clock speeds without a GUI, I (and thus you) am limited to nvidia-settings. The tool is great, but it's graphical, so you need a GUI. Running it in a virtual X with VNC will not work, as the NVidia driver is not loaded there, so you will need to connect a monitor to your first GPU. In addition, we need to make X believe each GPU has a monitor connected, or you can't control them. Funnily enough, from within an xterm you can pass CLI commands to nvidia-settings, so you can script things, as long as you run them from inside X.

apt-get install vnc4server gnome gnome-session gnome-session-flashback

Next, download edid.bin and place it in /etc/X11/ (google it and you'll find it... I did).
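If you'd rather roll your own: with a real monitor attached, you can usually dump its EDID straight from sysfs; the connector name below is an example and will differ per board:

cat /sys/class/drm/card0-HDMI-A-1/edid > /etc/X11/edid.bin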

Now create /etc/X11/xorg.conf with the following contents:

# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 367.27 (buildmeister@swio-display-x64-rhel04-12) Thu Jun 9 19:24:36 PDT 2016

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    Screen      1  "Screen1" 0 0
    Screen      2  "Screen2" 0 0
    Screen      3  "Screen3" 0 0
    Screen      4  "Screen4" 0 0
    Screen      5  "Screen5" 0 0
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:1:0:0"
    Option         "Coolbits" "31"
EndSection

Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    Option         "Coolbits" "31"
    BusID          "PCI:2:0:0"
    Option         "ConnectedMonitor" "DFP-0"
    Option         "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection

Section "Device"
    Identifier     "Device2"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    Option         "Coolbits" "31"
    BusID          "PCI:3:0:0"
    Option         "ConnectedMonitor" "DFP-0"
    Option         "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection

Section "Device"
    Identifier     "Device3"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    Option         "Coolbits" "31"
    BusID          "PCI:4:0:0"
    Option         "ConnectedMonitor" "DFP-0"
    Option         "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection

Section "Device"
    Identifier     "Device4"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    Option         "Coolbits" "31"
    BusID          "PCI:5:0:0"
    Option         "ConnectedMonitor" "DFP-0"
    Option         "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection

Section "Device"
    Identifier     "Device5"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    Option         "Coolbits" "31"
    BusID          "PCI:6:0:0"
    Option         "ConnectedMonitor" "DFP-0"
    Option         "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "Coolbits" "31"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "Device1"
    Option         "Coolbits" "31"
    Option         "UseDisplayDevice" "none"
EndSection

Section "Screen"
    Identifier     "Screen2"
    Device         "Device2"
    Option         "Coolbits" "31"
    Option         "UseDisplayDevice" "none"
EndSection

Section "Screen"
    Identifier     "Screen3"
    Device         "Device3"
    Option         "Coolbits" "31"
    Option         "UseDisplayDevice" "none"
EndSection

Section "Screen"
    Identifier     "Screen4"
    Device         "Device4"
    Option         "Coolbits" "31"
    Option         "UseDisplayDevice" "none"
EndSection

Section "Screen"
    Identifier     "Screen5"
    Device         "Device5"
    Option         "Coolbits" "31"
    Option         "UseDisplayDevice" "none"
EndSection

The EDID options fake a monitor on every GPU except the one your real monitor is on. Furthermore, Coolbits 31 enables overclocking, fan control, etc. on all of your GPUs.

After doing so, reboot. You can enable auto-login and auto-start VNC to save yourself some time.

Run 'vnc4server'; it will ask for a password. Set it, then kill the server again with vnc4server -kill :1

Now you can run:

x0vncserver -display :0 -passwordfile /home/miner/.vnc/passwd

Now you have VNC access and you can disconnect your monitor. However, if you reboot, you will need to connect a monitor until X11 has started / logged you in and VNC is running; otherwise you will not have GUI access or the ability to change your GPUs' settings.
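To skip the manual start after every reboot, you can drop an autostart entry into the desktop session; a minimal sketch, assuming the user 'miner' from the paths above:

mkdir -p /home/miner/.config/autostart
cat > /home/miner/.config/autostart/x0vncserver.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=x0vncserver
Exec=x0vncserver -display :0 -passwordfile /home/miner/.vnc/passwd
EOF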


Overclocking

So, to summarize:

No overclocking gets you about 22MH/s
Overclocking with SMI (which does not need all the X11 crap) gets you up to 26MH/s
Overclocking with nvidia-settings gets you over 32MH/s easily, while still leaving 30 Watts of headroom for tuning/optimizing
Overclocking with nvidia-settings gets you over 37MH/s running Cuda8

Those numbers are benchmarks with a SINGLE card (less work on the PCI bus); running 6 cards with limited overclocking, I am currently getting 192.88MH/s on CUDA. I did manage to run at 203.39MH/s for a while, but with OpenCL it would stop after 6/7 hours: everything drops to 0MH/s, yet the process keeps running and needs a restart. I don't know whether this is a driver or an ethminer (tuning) issue, but if you fancy an ugly daemon wrapper guarding ethminer against this, you can get an extra 10MH/s out of your rig (see the sketch below). With Cuda I was able to run over 220MH/s for 12 hours, until I managed to break the driver by starting nvidia-smi while the cards were maxed out (it doesn't seem to like that). If you tune it fully and shut down graphics afterwards, you might also get 218.11MH/s (to be exact); I am still tuning for stability, but I am nearly there.
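To give an idea of what I mean by an ugly wrapper, something along these lines would do it; this is a sketch of the idea, not the script from my rigs (the pool URL is a placeholder):

#!/bin/bash
# blunt guard: restart ethminer whenever it exits, and kill it every 6 hours
# so the 0MH/s hang described above never lasts long
while true
do
    timeout 21600 ethminer -U -F http://your.pool.example:8080/0xYourWalletAddress
    echo "ethminer exited or was killed, restarting in 10s"
    sleep 10
done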

But how do you do this?

The nvidia-smi tool is well documented; just find your max clock and set it. If you're lazy, you can copy/paste this:

nvidia-smi -ac 4004,1911

and you're set.
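Should you want to go back to the stock clocks later, nvidia-smi can reset the application clocks again:

nvidia-smi -rac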

If you're going for the nvidia-settings clocks, I've created & use this script:

#!/bin/bash
CLOCK=200
MEM=1500
CMD='/usr/bin/nvidia-settings'

# pin the CPU governor and minimum frequency
echo "performance" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo 2800000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
echo 2800000 > /sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq

for i in {0..5}
do
    nvidia-smi -i ${i} -pm 0                      # persistence mode off
    nvidia-smi -i ${i} -pl 170                    # power limit 170W
    # nvidia-smi -i ${i} -ac 4004,1911
    ${CMD} -a [gpu:${i}]/GPUPowerMizerMode=1      # prefer maximum performance
    ${CMD} -a [gpu:${i}]/GPUFanControlState=1     # manual fan control
    ${CMD} -a [fan:${i}]/GPUTargetFanSpeed=80     # 80% fan speed
    for x in {3..3}                               # performance level 3 only
    do
        ${CMD} -a [gpu:${i}]/GPUGraphicsClockOffset[${x}]=${CLOCK}
        ${CMD} -a [gpu:${i}]/GPUMemoryTransferRateOffset[${x}]=${MEM}
    done
done

If you can read, you can figure this out. I am setting the fans to 80% here, as that is stable cooling-wise and nice to the ears. If you don't mind the noise, I'd suggest setting them to max and you will be able to overclock more.

As you can see, here I am doing +200 on the clock and +1500 on the memory. Some notes you might find handy:

single card: +285 & +1800 is the max; a single card pushes you to 37.19MH/s with Cuda or 32.66MH/s with OpenCL
multi card: +275 & +1775 is the max, but not stable (read: crashes every 6 hours) and pushes an extra 4MH/s per card
multi card: +200 & +1600 is semi-stable; needs more testing, but this might be better and still pushes another 3MH/s per card

Again, these settings might not work for you, but they get you in the ballpark.
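As mentioned in the disclaimer, you can also tune per card instead of letting the weakest card set the pace. A sketch of the idea with the same nvidia-settings calls; the per-card offsets are made-up example values, you have to find your own:

CMD='/usr/bin/nvidia-settings'
CLOCKS=(285 200 250 225 200 275)     # example graphics clock offsets, one per GPU
MEMS=(1800 1500 1700 1600 1550 1775) # example memory transfer rate offsets, one per GPU
for i in {0..5}
do
    ${CMD} -a [gpu:${i}]/GPUGraphicsClockOffset[3]=${CLOCKS[$i]}
    ${CMD} -a [gpu:${i}]/GPUMemoryTransferRateOffset[3]=${MEMS[$i]}
done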
