I just got into ETH mining about a week ago and I am a complete noob to mining in general. However, I do know my way around computer hardware and I needed an excuse to work on a new project. Before I started, I read up as much as I could over about 72 hours (not enough time...) before I bought my hardware. Of course, I somehow missed the 3-4 threads with people complaining about the Skylake chipset and how it is basically trash for mining with 4+ GPUs. My heart sank when I found that out, but I have been determined to make it work...
First, here's my build:
MSI Z170A SLI PLUS
i3 6100
Corsair DDR4 LPX 2x4GB
EVGA 220-P2-1200-X1 (1200w Platinum rated)
4x R7 370 (3 XFX, 1 Asus)
1x Sapphire Nitro R9 380 (had it laying around and have another on the way)
X16 --> X1 Powered Risers (USB type)
AMD Driver: 16.6.1 (latest)
Windows 10
Claymore's Dual Miner (solo Eth mode)
The rig started with 3x 370s and 1x 380. It was essentially plug and play, other than the fact that I went ahead and changed PCIe to Gen2 in BIOS beforehand. The only issue I ran into was crashes caused by a faulty riser; swapping the riser out solved the problem immediately. As far as where I plugged the cards in, I just populated the main x16 slot (the one closest to the CPU) with the 380, then filled the 3 slots below it with the other 3 cards.
When I received my 5th GPU the other night, I ran into issues. First, it wasn't being detected at all; lowering PCIe to Gen1 in BIOS fixed that. Next came "GPU FAIL" errors in Claymore: 1 GPU would drop out after ~5 minutes, every time. I set clocks back to defaults, switched drivers around (15.12, 15.11, etc.), and tried a couple of Claymore parameters (-ethi, -gser) to no avail — the GPU would continue to drop out. I tried rotating the GPUs into different slots, still the same problem. I ran out of ideas, so I set -wd to 0 and called it a night. The closest any of those changes got me was maybe an extra 15 minutes of run time with all 5 GPUs up... and it was random, so I considered it a fluke.
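For anyone else fighting the same dropouts, here's roughly what my start.bat looked like while testing those flags. The pool address, wallet, and flag values are just placeholders/examples, not recommendations — check Claymore's readme for what each flag does on your version:

```shell
:: start.bat -- example only; tune values for your own rig
:: -mode 1   = ETH-only mining (no dual coin)
:: -ethi     = mining intensity (lower = less GPU load, can help stability)
:: -gser     = serialize GPU init (0/1/2); worth trying on multi-GPU rigs
:: -wd 0     = disable the watchdog so a single "GPU FAIL" doesn't restart the miner
EthDcrMiner64.exe -epool http://127.0.0.1:8545 -mode 1 -ethi 8 -gser 2 -wd 0
```

Note that -wd 0 doesn't fix anything — it just stops the miner from restarting when a card drops, so the other GPUs keep hashing while you troubleshoot.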
Finally, last night I decided to check BIOS again. The MSI Z170 SLI Plus BIOS is VERY limited imo... but there was the PCI Latency Timer setting. It was at its default of 32. I set it to 64 and haven't had a GPU drop in nearly 24 hours now.
TL;DR: Fiddle with the PCI Latency Timer setting. Maybe it's something that only works with this particular board, but with that simple change from 32 --> 64, she's been stable for nearly 24 hours. I even cranked the clocks up mid-day today while it was still running. Zero issues!
I look forward to continuing this experiment in the coming weeks/months. I will have a 6th GPU arriving Tuesday, at which time I will update with progress if anyone cares. I am only sharing this because it seems most have dismissed the Z170A chipset as not viable. Perhaps the cost is still prohibitive, but I feel pretty good about being able to stick the board into a decent gaming rig or sell it down the line, especially compared to something like the H81 BTC boards.