Trial Factoring

Message boards : Number crunching : Trial Factoring

Gigacruncher [TSBTs Pirate]
Send message
Joined: 28 Mar 20
Posts: 51
Credit: 8,419,360
RAC: 0
Message 6543 - Posted: 31 May 2020, 20:16:22 UTC - in response to Message 6542.

You can trick the server by telling it you have any number of GPUs or CPUs. You are too naive - if we really had 2,700 GPUs, how many WUs would we all be completing per day? Do the calculations.


And exactly what is the purpose of doing so? Why not just accept the amount that the server sends out?

Doing the latter would actually make it possible to calculate the real resources we have - right now, instead of a very high production, we just have a very large queue of work waiting to be processed.

Maybe a setting on the server could/should be made to render that possibility IMPOSSIBLE - is that possible Reb?

A note on my third question: the server has proven reliable enough to keep users supplied with enough work even if they have only 20 workunits queued per host.


I advise you to read this: https://boinc.berkeley.edu/wiki/Client_configuration and stop where you are.

Profile RFGuy_KCCO
Send message
Joined: 2 Dec 14
Posts: 10
Credit: 1,252,818,983
RAC: 0
Message 6544 - Posted: 1 Jun 2020, 2:53:53 UTC - in response to Message 6542.

You can trick the server by telling it you have any number of GPUs or CPUs. You are too naive - if we really had 2,700 GPUs, how many WUs would we all be completing per day? Do the calculations.


And exactly what is the purpose of doing so? Why not just accept the amount that the server sends out?

Doing the latter would actually make it possible to calculate the real resources we have - right now, instead of a very high production, we just have a very large queue of work waiting to be processed.

Maybe a setting on the server could/should be made to render that possibility IMPOSSIBLE - is that possible Reb?

A note on my third question: the server has proven reliable enough to keep users supplied with enough work even if they have only 20 workunits queued per host.


While I don't personally do it (I keep all of my computers visible, and what you see there is 100% accurate), some crunchers do this in order to have more work in the queue in case of a site outage. Another reason some do it is that they have limited internet connection times, so they connect when they are able and get as much work as they need to keep crunching until the next time they can connect. Those who participate in the various team competitions (Formula BOINC, BOINC Pentathlon, etc.) do it for bunkering purposes. I'm sure there are other reasons as well.

In all honesty, you really should consider upping the limit from 20 to a higher value, because those of us with fast GPUs burn through 20 WUs very quickly. My 2080s and 2080 Supers running at a 135 W power limit are currently completing the latest WUs in ~31 seconds while running 2 per GPU. That means I finish 20 WUs in just over 5 minutes. Maybe up the limit to 100, at least?
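
In other words, assuming both tasks on a GPU finish at roughly the same ~31-second pace, each GPU turns over 2 WUs every 31 seconds, so a cache of 20 WUs keeps a single GPU busy for only about 20 / 2 × 31 s = 310 s, i.e. a little over 5 minutes.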
____________

[AF>Amis des Lapins] Jean-Luc
Avatar
Send message
Joined: 12 Mar 18
Posts: 21
Credit: 1,686,681,846
RAC: 8,330,681
Message 6545 - Posted: 1 Jun 2020, 8:13:23 UTC - in response to Message 6544.


In all honesty, you really should consider upping the limit from 20 to a higher value, because those of us with fast GPUs burn through 20 WUs very quickly. My 2080s and 2080 Supers running at a 135 W power limit are currently completing the latest WUs in ~31 seconds while running 2 per GPU. That means I finish 20 WUs in just over 5 minutes. Maybe up the limit to 100, at least?


I agree with this request 100%.
I would even set the limit at 1000, rather than 100!
At least as long as we're working on the 70-72 bit range...
After that, when the number of bits increases to 73-75, you could lower the limit to 100 again.
Honestly, everyone would want at least 6 to 10 hours of work in reserve! That's a minimum on BOINC...
That way, while the server is under maintenance, we can continue the calculations.
Otherwise, we have to download WUs from another project in the meantime, so as not to leave the GPU idle.

Profile rebirther
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 2 Jan 13
Posts: 7479
Credit: 43,589,571
RAC: 35,521
Message 6546 - Posted: 1 Jun 2020, 8:20:12 UTC

The limit is now 50; this could be enough for a short downtime while doing a db backup.

Profile RFGuy_KCCO
Send message
Joined: 2 Dec 14
Posts: 10
Credit: 1,252,818,983
RAC: 0
Message 6547 - Posted: 1 Jun 2020, 13:10:24 UTC - in response to Message 6546.

The limit is now 50; this could be enough for a short downtime while doing a db backup.



Thank you!
____________

KEP
Volunteer tester
Send message
Joined: 28 Nov 14
Posts: 92
Credit: 1,102,770
RAC: 0
Message 6548 - Posted: 1 Jun 2020, 13:42:06 UTC - in response to Message 6544.

While I don't personally do it (I keep all of my computers visible, and what you see there is 100% accurate), some crunchers do this in order to have more work in the queue in case of a site outage. Another reason some do it is that they have limited internet connection times, so they connect when they are able and get as much work as they need to keep crunching until the next time they can connect. Those who participate in the various team competitions (Formula BOINC, BOINC Pentathlon, etc.) do it for bunkering purposes. I'm sure there are other reasons as well.

In all honesty, you really should consider upping the limit from 20 to a higher value, because those of us with fast GPUs burn through 20 WUs very quickly. My 2080s and 2080 Supers running at a 135 W power limit are currently completing the latest WUs in ~31 seconds while running 2 per GPU. That means I finish 20 WUs in just over 5 minutes. Maybe up the limit to 100, at least?


Okay, thank you for your answer :) ... It was a bit of a bummer that we did not have the resources I thought we had, but I do see that there are a lot of good reasons for that behaviour. How did you manage to run 2 WUs per GPU? Have you tested that it is not less efficient than running 1 WU per GPU?

[AF>Amis des Lapins] Jean-Luc
Avatar
Send message
Joined: 12 Mar 18
Posts: 21
Credit: 1,686,681,846
RAC: 8,330,681
Message 6549 - Posted: 1 Jun 2020, 17:50:29 UTC - in response to Message 6548.

Thank you very much for the idea !
I just gave it a try.
- 1 WU per GPU: 12-13 seconds, GPU Load: 95%.
- 2 WUs per GPU: 18-19 seconds, GPU Load: 100%.
I'm going to leave it running to see whether it produces any errors...
So the RTX 2080 Ti GPU can produce even more!

To do this, you have to enter these parameters in an app_config.xml:

<app_config>
<app>
<name>TF</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>0.5</cpu_usage>
</gpu_versions>
</app>
</app_config>
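
Going by the numbers above, that is roughly 1 / 12.5 ≈ 0.08 WU/s per GPU with one task versus 2 / 18.5 ≈ 0.11 WU/s with two, so about a 35% gain in throughput.

For anyone who hasn't used an app_config.xml before, here is the same file with a few explanatory comments (the values and the app name TF are the ones from the post above; with gpu_usage set to 0.5, BOINC counts each task as half a GPU and therefore runs two tasks per GPU at once):

<app_config>
  <app>
    <!-- application name this project uses for the trial factoring tasks -->
    <name>TF</name>
    <gpu_versions>
      <!-- each task claims half a GPU, so BOINC schedules 2 tasks per GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- fraction of a CPU core BOINC budgets for each task -->
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

The file goes in the project's folder inside the BOINC data directory; use "Options / Read config files" in the BOINC Manager (or restart the client) for it to take effect.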

KEP
Volunteer tester
Send message
Joined: 28 Nov 14
Posts: 92
Credit: 1,102,770
RAC: 0
Message 6550 - Posted: 1 Jun 2020, 18:34:02 UTC - in response to Message 6549.

Thank you very much for the idea !
I just gave it a try.
- 1 WU per GPU: 12-13 seconds, GPU Load: 95%.
- 2 WUs per GPU: 18-19 seconds, GPU Load: 100%.
I'm going to leave it running to see whether it produces any errors...
So the RTX 2080 Ti GPU can produce even more!

To do this, you have to enter these parameters in an app_config.xml:

<app_config>
<app>
<name>TF</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>0.5</cpu_usage>
</gpu_versions>
</app>
</app_config>


Cool :)

I reckon this applies to other GPU models too. Maybe even my OLD noisy AMD :)

Hope this goes to the FAQ :)

Profile RFGuy_KCCO
Send message
Joined: 2 Dec 14
Posts: 10
Credit: 1,252,818,983
RAC: 0
Message 6551 - Posted: 1 Jun 2020, 20:39:07 UTC - in response to Message 6549.
Last modified: 1 Jun 2020, 20:45:08 UTC

Thank you very much for the idea !
I just gave it a try.
- 1 WU per GPU: 12-13 seconds, GPU Load: 95%.
- 2 WUs per GPU: 18-19 seconds, GPU Load: 100%.
I'm going to leave it running to see whether it produces any errors...
So the RTX 2080 Ti GPU can produce even more!

To do this, you have to enter these parameters in an app_config.xml:

<app_config>
<app>
<name>TF</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>0.5</cpu_usage>
</gpu_versions>
</app>
</app_config>


You don't even need to allocate that much CPU to the WUs, at least when using an Nvidia GPU. My 2080s under both Windows and Linux use virtually zero CPU cycles when running these WUs. I allocate only 0.25 CPU per WU, but you could go as low as 0.1 if you wanted to. I have tried that as a test (I always test multiple configs when running a new project to find the best settings - I am an engineer by trade) and noticed no difference in completion times. It seems these WUs (again, at least on Nvidia GPUs) do all of their computations on the GPU.
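
If you want to try the lower CPU reservation, here is a minimal variation of the app_config.xml posted above (same TF app name; only the cpu_usage value changes, and 0.25 is simply the value I use, not a project recommendation):

<app_config>
  <app>
    <name>TF</name>
    <gpu_versions>
      <!-- still run 2 tasks per GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- budget only a quarter of a CPU core per task -->
      <cpu_usage>0.25</cpu_usage>
    </gpu_versions>
  </app>
</app_config>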
____________

KEP
Volunteer tester
Send message
Joined: 28 Nov 14
Posts: 92
Credit: 1,102,770
RAC: 0
Message 6552 - Posted: 1 Jun 2020, 20:53:02 UTC - in response to Message 6551.

It seems these WUs (again, at least on Nvidia GPUs) do all of their computations on the GPU.


They certainly should. Mfaktc didn't always use 0% CPU, but at some point someone upgraded the mfaktc app and it went from using both CPU and GPU to only using the GPU. It appears, though, that some CPU is still used, judging by the completion data on finished workunits.

Those running more than 1 WU per GPU may want to re-test each time we jump a bit level. Runtime, as well as speed of computation, may change when a bit level is jumped. My old GPU was very efficient up to 75 bits; after that it slowed down dramatically, and it also slowed down when running more than 1 test at a time on the GPU.
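
(Roughly speaking, trial factoring to bit level b means testing candidate factors up to 2^b, and the range from 2^b to 2^(b+1) holds about as many candidates as all the lower levels combined, since 2^(b+1) - 2^b = 2^b. So each extra bit level approximately doubles the work per exponent, which is why runtimes and the best per-GPU settings can shift when a bit level is jumped.)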

Luigi R.
Avatar
Send message
Joined: 17 Jul 18
Posts: 21
Credit: 35,672,299
RAC: 13
Message 6554 - Posted: 4 Jun 2020, 12:04:41 UTC - in response to Message 6529.

If we can maintain this firepower, going all the way to 74 bits by the end of the year is indeed feasible, and going all the way to 76 bits next year is too. With that kind of work being completed at the moment, it is best to go breadth first, as Rebirther prefers - that will give those with slow GPUs a chance to adapt to the near future, where everything with a factor below 75 bits is cleared.

Don't forget that hardware improves and better GPUs will come here. ;)

KEP
Volunteer tester
Send message
Joined: 28 Nov 14
Posts: 92
Credit: 1,102,770
RAC: 0
Message 6556 - Posted: 4 Jun 2020, 14:48:40 UTC - in response to Message 6554.

Don't forget that hardware improves and better GPUs will come here. ;)


It sure does :) No doubt, once the RTX 2080 Ti becomes standard for low-cost GPUs, then we start talking :) Let's hope the 3xxx cards are indeed powerhouses by themselves :)

bluestang
Send message
Joined: 6 Jun 19
Posts: 60
Credit: 2,244,690,070
RAC: 263,351
Message 6566 - Posted: 7 Jun 2020, 15:56:53 UTC

I'll assume points are bouncing around from 190 to 180 and now 170 because of runtimes?

Profile rebirther
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 2 Jan 13
Posts: 7479
Credit: 43,589,571
RAC: 35,521
Message 6567 - Posted: 7 Jun 2020, 15:57:58 UTC - in response to Message 6566.

I'll assume points are bouncing around from 190 to 180 and now 170 because of runtimes?


correct

Profile rebirther
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 2 Jan 13
Posts: 7479
Credit: 43,589,571
RAC: 35,521
Message 6574 - Posted: 9 Jun 2020, 10:52:25 UTC

The last batch, with around 122k WUs, is running now (795M max); there is no more work left in the 70-71 bit range. There could be some tests left - I will check this later. The next range, 71-72 bits, is planned for later today.

Gigacruncher [TSBTs Pirate]
Send message
Joined: 28 Mar 20
Posts: 51
Credit: 8,419,360
RAC: 0
Message 6578 - Posted: 9 Jun 2020, 17:57:31 UTC - in response to Message 6574.

The last batch, with around 122k WUs, is running now (795M max); there is no more work left in the 70-71 bit range. There could be some tests left - I will check this later. The next range, 71-72 bits, is planned for later today.


Quick question: for 71 to 72 bits, are we going from the lowest n range to the highest, or the other way around?

Profile rebirther
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 2 Jan 13
Posts: 7479
Credit: 43,589,571
RAC: 35,521
Message 6579 - Posted: 9 Jun 2020, 18:03:28 UTC - in response to Message 6578.

The last batch, with around 122k WUs, is running now (795M max); there is no more work left in the 70-71 bit range. There could be some tests left - I will check this later. The next range, 71-72 bits, is planned for later today.


Quick question: for 71 to 72 bits, are we going from the lowest n range to the highest, or the other way around?


Lowest to highest, but I can't influence it when work is requested.

Profile rebirther
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 2 Jan 13
Posts: 7479
Credit: 43,589,571
RAC: 35,521
Message 6580 - Posted: 9 Jun 2020, 19:09:04 UTC

The next range, 71-72 bits, has started.

zombie67 [MM]
Avatar
Send message
Joined: 4 Dec 14
Posts: 31
Credit: 1,166,007,944
RAC: 66,791
Message 6677 - Posted: 5 Aug 2020, 3:31:01 UTC

Does it work yet, running one task per GPU, with multiple GPUs on Windows? Back in April, I think it was still being worked on. But I haven't heard any update since then.
____________
Reno, NV
Team: SETI.USA

Profile rebirther
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 2 Jan 13
Posts: 7479
Credit: 43,589,571
RAC: 35,521
Message 6678 - Posted: 5 Aug 2020, 4:51:26 UTC - in response to Message 6677.

Does it work yet, running one task per GPU, with multiple GPUs on Windows? Back in April, I think it was still being worked on. But I haven't heard any update since then.


I have sent a PM to the dev. At the moment, no.
