Message boards : Number crunching : Trial Factoring
You can trick the server by saying you have X amount of GPUs or CPUs. You are too naive. If we had 2700 GPUs, how many WUs would we all be doing per day? Do the calculations. I advise you to read this: https://boinc.berkeley.edu/wiki/Client_configuration and stop where you are.
ID: 6543
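For readers following the link above: the client-count spoofing being described is done through the BOINC client's cc_config.xml. A minimal sketch, assuming you want the scheduler to see 64 CPUs (the value is purely illustrative):

```xml
<!-- cc_config.xml, placed in the BOINC data directory.
     Restart the client (or use "Read config files") after editing. -->
<cc_config>
  <options>
    <!-- Report this many CPUs to schedulers instead of the real count -->
    <ncpus>64</ncpus>
  </options>
</cc_config>
```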
You can trick the server by saying you have X amount of GPUs or CPUs. You are too naive. If we had 2700 GPUs, how many WUs would we all be doing per day? Do the calculations. While I don't personally do it (I keep all of my computers visible and what you see there is 100% accurate), some crunchers do this in order to have more work in the queue in case of a site outage. Another reason is that some have limited internet connection times, so they connect when they are able and get as much work as they need to continue crunching until the next time they can connect. Those who participate in the various team competitions (Formula BOINC, BOINC Pentathlon, etc.) do it for bunkering purposes. I'm sure there are other reasons as well. In all honesty, you really should consider upping the limit from 20 to a higher value, because those of us with fast GPUs burn through 20 WUs very quickly. My 2080s and 2080 Supers, running at a 135 W power limit, are currently completing the latest WUs in ~31 seconds while running 2 per GPU. That means I finish 20 WUs in just over 5 minutes. Maybe up the limit to 100, at least?
ID: 6544
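As a sanity check on the figures quoted above (~31 s per WU, 2 WUs running per GPU, a 20-WU queue limit), the drain time works out as follows; a quick back-of-the-envelope sketch:

```python
# Assumed figures from the post above: ~31 s per WU with 2 WUs
# running concurrently on one GPU, and a 20-WU server-side limit.
seconds_per_wu = 31
concurrent = 2
queue_limit = 20

# Two WUs finish every ~31 s, so the effective rate is 2/31 WU/s.
effective_rate = concurrent / seconds_per_wu   # WUs per second
time_to_drain = queue_limit / effective_rate   # seconds
print(f"A queue of {queue_limit} WUs drains in ~{time_to_drain / 60:.1f} minutes")
```

That gives roughly 310 seconds, consistent with the "just over 5 minutes" above.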
I agree with this request 100%. I would even set the limit at 1000 rather than 100! At least as long as we're working on 70-72 bits... After that, when the bit level increases to 73-75, you could lower the limit to 100 again. Honestly, everyone would want at least 6 to 10 hours of work in reserve! That's a minimum on BOINC... That way, while the server is under maintenance, we can continue the calculations. Otherwise, we have to download WUs from another project in the meantime, so as not to leave the GPU idle.
ID: 6545
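As an aside, the usual way to keep 6-10 hours of work in reserve is the client's work buffer setting rather than a higher server-side limit alone; it can be set in the computing preferences or via global_prefs_override.xml in the BOINC data directory. A sketch, assuming roughly an 8-hour minimum buffer and all other preferences left at their defaults:

```xml
<!-- global_prefs_override.xml: keep at least ~8 hours (0.33 days)
     of work queued, plus a small additional buffer. -->
<global_preferences>
  <work_buf_min_days>0.33</work_buf_min_days>
  <work_buf_additional_days>0.1</work_buf_additional_days>
</global_preferences>
```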
The limit is now 50; that should be enough to cover a short downtime for a db backup.
ID: 6546
The limit is now 50, this could be enough for a short downtime of doing a db backup. Thank you!
ID: 6547
While I don't personally do it (I keep all of my computers visible and what you see there is 100% accurate), some crunchers do this in order to have more work in the queue in case of a site outage. Another reason is that some have limited internet connection times, so they connect when they are able and get as much work as they need to continue crunching until the next time they can connect. Those who participate in the various team competitions (Formula BOINC, BOINC Pentathlon, etc.) do it for bunkering purposes. I'm sure there are other reasons as well. Okay, thank you for your answer :) ... It was a bit of a bummer that we did not have the resources I thought we had, but I do see that there are many good reasons for that behaviour. How did you manage to run 2 WUs per GPU? Have you tested that you are not inefficient compared to running 1 WU per GPU?
ID: 6548
Thank you very much for the idea !

<app_config>
    <app>
        <name>TF</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>0.5</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
ID: 6549
Thank you very much for the idea ! Cool :) I reckon this applies to other GPU models too. Maybe even my old noisy AMD :) Hope this goes into the FAQ :)
ID: 6550
Thank you very much for the idea ! You don't even need to allocate that much CPU to the WUs, at least when using an Nvidia GPU. My 2080s under both Windows and Linux use virtually zero CPU cycles when running these WUs. I allocate only 0.25 CPU per WU, but you could go as low as 0.1 if you wanted to. I have tried it as a test (I always test multiple configs when running a new project to find the best settings - I am an engineer by trade) and noticed no difference in completion times. It seems these WUs (again, at least on Nvidia GPUs) do all of their computations on the GPU.
ID: 6551
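Combining the post above with the earlier app_config.xml, the lower CPU reservation would look like this (assuming the same app name, TF, and that your hardware behaves like the Nvidia cards described):

```xml
<app_config>
    <app>
        <name>TF</name>
        <gpu_versions>
            <!-- two WUs per GPU, a quarter of a CPU core reserved per WU -->
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>0.25</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
```

The file goes in the project's directory under the BOINC data directory; re-read config files or restart the client for it to take effect.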
It seems these WUs (again, at least on Nvidia GPUs) do all of their computations on the GPU. They certainly should. mfaktc didn't always use 0% CPU, but eventually someone upgraded the mfaktc app and it went from using both CPU and GPU to using only the GPU. It appears, though, that some CPU is still used, judging by the completion data on finished workunits. Those running more than 1 WU per GPU may want to re-test each time we jump a bit level. Runtime, as well as computation speed, may change when a bit level is jumped. My old GPU was very efficient up to 75 bits; after that it slowed down dramatically, and it also slowed down when running more than 1 test at a time on the GPU.
ID: 6552
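The slowdown per bit level mentioned above follows from the size of the candidate ranges: the range [2^b, 2^(b+1)) contains as many candidate factors as all lower levels combined, so if cost is roughly proportional to the number of candidates tried, each bit level takes about twice as long as the previous one. A rough sketch of that scaling:

```python
def relative_work(bit_level, baseline=70):
    """Work for the [2^b, 2^(b+1)) range, relative to the 70-71 level.

    Assumes cost is proportional to the number of candidate factors,
    which ignores sieving and per-class overheads in mfaktc.
    """
    return 2 ** (bit_level - baseline)

for b in (70, 71, 72, 75):
    print(f"{b}-{b + 1} bit: {relative_work(b)}x the 70-71 workload")
```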
If we can maintain this firepower, going all the way to 74 bits by the end of the year is indeed feasible, and going all the way to 76 bits next year is feasible as well. With that kind of work being completed at the moment, it is best to go as Rebirther prefers, breadth first - that will give those with slow GPUs a chance to adapt to the near future, where everything with a factor below 75 bits is cleared. Don't forget that hardware improves and better GPUs will come. ;)
ID: 6554
Don't forget that hardware improves and better GPUs will come. ;) It sure does :) No doubt, once the RTX 2080 Ti becomes standard for low-cost GPUs, then we start talking :) Let's hope the 3xxx cards are indeed powerhouses by themselves :)
ID: 6556
I'll assume points are bouncing around from 190 to 180 and now 170 because of runtimes?
ID: 6566
I'll assume points are bouncing around from 190 to 180 and now 170 because of runtimes? Correct.
ID: 6567
The last batch, with around 122k WUs, is running now (795M max); no more work is left in the 70-71 range. There could be some tests left - I will check this later. The next 71-72 range is planned for later today.
ID: 6574
The last batch, with around 122k WUs, is running now (795M max); no more work is left in the 70-71 range. Quick question: for 71 to 72 bits, are we going from the lowest n range to the highest, or the other way around?
ID: 6578
Quick question: for 71 to 72 bits, are we going from the lowest n range to the highest, or the other way around? Lowest to highest, but I can't influence it when work is requested.
ID: 6579
The next range, 71-72 bit, has started.
ID: 6580
Does it work yet - running one task per GPU, with multiple GPUs, on Windows? Back in April I think it was still being worked on, but I haven't heard any update since then.
ID: 6677
Does it work yet - running one task per GPU, with multiple GPUs, on Windows? I have sent a PM to the dev. At the moment, no.
ID: 6678