Posts by KEP
21) Message boards : Number crunching : Trial Factoring (Message 6542)
Posted 31 May 2020 by KEP
You can trick the server by saying you have x amount of GPUs or CPUs. You are too naive. If we had 2,700 GPUs, how many WUs would we all be doing per day? Do the calculations.


And exactly what is the purpose of doing so? Why not just accept the amount that the server sends out?

Accepting what the server sends out would actually make it possible to calculate the real resources we have - right now, instead of a very large production, we only have a very large queue of work waiting to be processed.

Maybe a setting on the server could/should be made to render that possibility IMPOSSIBLE - is that possible, Reb?

A note on my third question is that the server has proven reliable enough to keep the users supplied with enough work even if they have only 20 workunits queued per host.
22) Message boards : Number crunching : Trial Factoring (Message 6529)
Posted 31 May 2020 by KEP
Thanks everyone for your feedback :)

I can see that we now have more than 2,700 GPUs at work, and a good deal of them belong to high-end users. If we can maintain this firepower, going all the way to 74 bit by the end of the year is indeed feasible, and going all the way to 76 bit next year is feasible too. With that kind of work being completed at the moment, it is best to do as Rebirther prefers and go breadth first - that will give those with slow GPUs a chance to adapt to the near future, where everything with a factor below 75 bit has been cleared.

Personally I'm amazed at the firepower that a 2080 has, especially the 2080 Ti. By the end of this bit level, some users are going to have a completion time of approximately 10 seconds per test :)

Thank you everyone for contributing; it sure does help GIMPS a lot, and by going breadth first, we actually contribute to every user, and not just those users searching at the wavefront :)
23) Message boards : Number crunching : Trial Factoring (Message 6518)
Posted 29 May 2020 by KEP
In my opinion we should test the lower n range from 72 bits to 73 bits to get a feeling for the timings needed; we have almost 470k units in there. This will help the GIMPS project in the short term, but I do understand we should have fast WUs available to the community, like the ones we are doing now, which will only be useful to GIMPS in 20-30 years' time. It's a threshold.

Extrapolating my timings, I would get 40-45 mins per WU at n<=199M from 72 bits to 73 bits.


If you use 42.5 minutes on average at n=200M to go from 72 to 73 bit, it means that at n=120M (the likely starting point) you would spend ~70.8 minutes going from 72 to 73 bit. That could be too much for someone. An RTX 2080 would spend, on the same n/bit range (if 50x faster), ~1 minute 25 seconds. It looks like Rebirther's approach is correct and that running breadth first is best suited for all users. Last time, when we jumped to 77-78 bits, a lot of the slow GPUs vanished. Going breadth first may actually be the proper way of keeping our momentum.
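
To make the extrapolation explicit: for a fixed bit range the amount of work, and hence the runtime, scales roughly with 1/n. A minimal Python sketch of that estimate (the 50x speed ratio is the assumption from above, not a measured figure):

```python
# Rough runtime extrapolation for a fixed bit range: time scales with ~1/n.
# Numbers from the post: 42.5 min measured at n=200M; the "50x faster" GPU
# ratio is an assumption, not a measurement.
measured_n, measured_minutes = 200_000_000, 42.5
target_n = 120_000_000

slow_gpu_minutes = measured_minutes * measured_n / target_n  # ~70.8 min
fast_gpu_seconds = slow_gpu_minutes * 60 / 50                # ~85 s (~1m25s)

print(f"slow GPU at n=120M: ~{slow_gpu_minutes:.1f} min")
print(f"50x faster GPU:     ~{fast_gpu_seconds:.0f} s")
```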

Thanks for your feedback everyone; now let's see if we can actually clear all exponents to 72 bit before the end of the year (maybe 73 bit if enough high-end users come to our aid) :)
24) Message boards : Number crunching : Trial Factoring (Message 6515)
Posted 28 May 2020 by KEP
Maybe for the higher ranges, instead of going from 70 to 71 bits, just go straight to 72 or 73. You will have to run some trials for timings.


How would people here in general feel about doing lower n, breadth first to the max bit, i.e. n>96.83M to n<=120M up to 78 bit? I know it is eventually a huge increase in testing time, but does it really make a difference for you as a user when it comes to supporting this project or not?

I'm asking because, if we really want to benefit and boost this project, running that suggested n range to 78 bit is the way to move forward. I know it will steal some of the low-hanging fruit. What does each user prefer in terms of short or long running tasks?

I reckon Rebirther is the one who makes the final decision, but it is seriously as little as changing one setting in the .ini file, and then everything from work creation to validation will work the same, even if we decide to run multiple bits per workunit.

The offer, Rebirther, that I gave in a private message still stands when it comes to creating the next bit range, if you won't go all remaining bits up to 78 but instead go breadth first and increase by 1 bit per workunit - all we have to do is agree with George that we can reserve and keep everything in that range until we reach 78 bit.

Has the new way of showing progress made it possible to have a mix of bits?

I understand the need to go breadth first, but we also have to use our resources to best help PrimeNet, and as mentioned in the private message, going breadth first but completing the range n>96.83M to n<=120M to 78 bit, one bit at a time, won't be difficult, and I will no doubt help you do it flawlessly :)

Everyone with an opinion or insight, please let me hear what you think, preferably from both slow and fast GPU users :)
25) Message boards : Number crunching : Trial Factoring (Message 6497)
Posted 23 May 2020 by KEP
And ECM factoring methods are not efficient for factoring such large numbers?


Someone else has to elaborate on this, but I have read somewhere that for n>20M there is not much benefit to (or possibility of) doing ECM, due to (as I recall) memory use. Of course, there comes a point where Trial Factoring and P-1 exhaust their efficiency and possibilities, and at that point, unless something limits us, ECM might become the most efficient :)
26) Message boards : Number crunching : Trial Factoring (Message 6495)
Posted 23 May 2020 by KEP
And why are some of the cells yellow?


The yellow cells are the CPU-optimal (formerly GPU-optimal) Trial Factoring depth. Today, everyone with an average resource pool like BOINC, or single users with a modern GPU, should go 2 bits above the yellow line. So yes, at the point where the yellow line is, some have to change to P-1 and other users have to keep Trial Factoring, if they want to use their resources optimally. Since BOINC moves breadth first, then unless the yellow bar moves to a higher bit depth sometime in the future, it should always make sense for BOINC to go 2 bits higher than the yellow cell shows.
27) Message boards : News : New work for TF added (Message 6385)
Posted 10 May 2020 by KEP
I hope I will soon be able to do the calculations with both GPUs together!


I sure hope so. What comforts me is that great minds are working on this. Yesterday I did my own TF work on a noisy ancient GPU using mfakto. It sure was nice to see that the progress bar worked as it was supposed to :)

A big thank you to all of you who have taken up the challenge of modernizing mfakt(o)(c) and getting it to work flawlessly on BOINC, on single-GPU as well as (eventually) multi-GPU systems.
28) Message boards : News : New work for TF added (Message 6383)
Posted 9 May 2020 by KEP
Thank you very much for these valuable explanations.
My understanding is getting better and better and I will almost be able to answer all the questions about TF for SRBase on the Alliance francophone forum.

I still have one last question, if it's not too complicated to answer here:
I don't understand why the task calculation time is shorter for larger exponents.
For example:
n=250M testing time is 40 seconds
n=500M testing time is 20 seconds
n=1000M testing time is 10 seconds


Great question :)

A factor of a Mersenne candidate is always of the form 2 × k × p + 1, where p is the prime exponent (that is important to remember in the explanation below).

Let's answer your question with some 70 bit to 71 bit arithmetic:

at n=250M kmin=2,361,183,241,434 and kmax=4,722,366,482,869 (2,361,183,241,434 k to sieve or test for factor)
at n=500M kmin=1,180,591,620,717 and kmax=2,361,183,241,434 (1,180,591,620,717 k to sieve or test for factor)
at n=1000M kmin=590,295,810,358 and kmax=1,180,591,620,717 (590,295,810,358 k to sieve or test for factor)

So as you can see above, the higher n gets, the shorter the range of possible factor candidates for the bit level becomes. Therefore it is almost certain that the previous candidate you tried to factor took longer than the current candidate you are trying to factor.
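
To see where those k ranges come from: a candidate factor q = 2·k·p + 1 must lie between 2^70 and 2^71, so k runs from roughly 2^70/(2p) up to 2^71/(2p). Here is a minimal Python sketch of that arithmetic (just an illustration, not the actual mfaktc/mfakto code):

```python
# Candidate factors of 2^p - 1 have the form q = 2*k*p + 1.
# For a bit level b, q lies in [2^b, 2^(b+1)), so k lies roughly in
# [2^b / (2p), 2^(b+1) / (2p)).
def k_range(p, bit):
    kmin = 2**bit // (2 * p)
    kmax = 2**(bit + 1) // (2 * p)
    return kmin, kmax

for p in (250_000_000, 500_000_000, 1_000_000_000):
    kmin, kmax = k_range(p, 70)
    print(f"n={p:>13,}: kmin={kmin:,} kmax={kmax:,} ({kmax - kmin:,} k to test)")
```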

One nice feature is that the expected fraction of candidates being factored remains the same despite having fewer factor candidates to test - so with less work you remove the same percentage of candidates and eliminate them from further testing :)

Hope this helped :)
29) Message boards : News : New work for TF added (Message 6379)
Posted 9 May 2020 by KEP
Thank you for your answer!


Even though it is not possible to tell who found a factor or which factor was found, to give you an idea of how much you have contributed, the math looks like this:

Percentage of tasks resulting in a found factor: ~1.436%

You currently have 9,502 valid tasks. Using the statistical average, 136 of those have found a factor. In other words, your contribution has saved, on an i5-4670, more than 204 real-time (816 CPU-core) months of first-time primality testing - so keep up the good work; as you can see, your work is very valuable :)
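
A rough sketch of where that estimate comes from (the ~1.436% rate and the task count are the figures above; the months-saved figure depends on the specific CPU and exponents, so it is not reproduced here):

```python
# Expected number of factors found, using the project's per-task factor rate.
valid_tasks = 9_502
factor_rate = 0.01436   # ~1.436% of tasks at this bit level find a factor

expected_factors = valid_tasks * factor_rate
print(f"expected factors found: ~{expected_factors:.0f}")  # ~136
```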
30) Message boards : News : New work for TF added (Message 6366)
Posted 28 Apr 2020 by KEP
The calculation time of a WU is 22-23 seconds.


You have seen nothing yet :)

What takes you 22-23 seconds now will take ~11.5 seconds at n=460M, and as we exhaust the remaining 70 to 71 bit candidates around n~800M, you will literally see a workunit complete in just 5 seconds.

When we go to bit 71 to 72, your RTX 2080 will start at ~80 sec per candidate, and that will then be cut in half at each doubling of n. The reduction in runtime will manifest itself smoothly, compared to what it was when you started crunching at a given bit level.

For example, for a test at n=125M running 80 seconds for bit 71 to 72, the testing time will scale approximately to the following values:

n=250M testing time is 40 seconds
n=500M testing time is 20 seconds
n=1000M testing time is 10 seconds

If the testing times are as above for 71 to 72 bit, then you will more or less have these runtimes at these levels:

72 to 73 bit (n=250M=80 seconds) (n=500M=40 seconds) (n=1000M=20 seconds)
73 to 74 bit (n=250M=160 seconds) (n=500M=80 seconds) (n=1000M=40 seconds)
74 to 75 bit (n=250M=320 seconds) (n=500M=160 seconds) (n=1000M=80 seconds)
75 to 76 bit (n=250M=640 seconds) (n=500M=320 seconds) (n=1000M=160 seconds)
76 to 77 bit (n=250M=1280 seconds) (n=500M=640 seconds) (n=1000M=320 seconds)
...

It is only possible to scale once you know the test time at a given bit level, since the higher the bit level, depending on your hardware, the more of a slowdown in productivity there is :)
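
Here is a minimal sketch of that scaling rule, assuming (as described above) that runtime halves when the exponent doubles and doubles for each extra bit level, with the 80-second baseline at n=125M taken from the example; as noted, real hardware slows down somewhat at higher bit levels, so these are only approximations:

```python
# Estimate a TF runtime from one measured baseline:
#   time ~ baseline * 2^(bit - base_bit) * (base_n / n)
# Baseline from the post: 80 s at n=125M for the 71->72 bit level.
def estimate_seconds(n, bit, base_n=125_000_000, base_bit=71, base_seconds=80):
    return base_seconds * 2 ** (bit - base_bit) * (base_n / n)

for bit in range(71, 77):
    row = ", ".join(f"n={n // 1_000_000}M: {estimate_seconds(n, bit):.0f}s"
                    for n in (250_000_000, 500_000_000, 1_000_000_000))
    print(f"{bit}-{bit + 1} bit -> {row}")
```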
31) Message boards : Number crunching : Trial Factoring (Message 6318)
Posted 20 Apr 2020 by KEP
I'm not sure if this is worth anything, but now that I think about it, Rieselsieve also had a lot of errors when they upgraded from one version of an app to a newer one. If I recall correctly, it may have something to do with a workunit, once created, being tied to a certain app. Maybe v13 can be implemented, but only for the new work. As mentioned, I'm not sure if this is in fact the problem or if I recall correctly; it is after all more than 10 years back in time :)

Good job Reb :)
32) Message boards : Number crunching : Trial Factoring (Message 6309)
Posted 20 Apr 2020 by KEP
Collatz is the worst example. Handing out credit like Collatz will attract all the point whores and cheaters.


That is just sad :( ... To be honest, I've always been in this for the science. A good measure of how much science has been done is in fact the pure, uninflated/undeflated cobblestone. Now, because of Collatz and most likely other projects too, it is virtually impossible to tell whether we actually computed more science one year compared to the years before. Well, this is not for the credit hunters. Of course we should all work together on getting credit right, such that people get what they should have, without being too far off to either side.
33) Message boards : Number crunching : Trial Factoring (Message 6295)
Posted 19 Apr 2020 by KEP
Collatz is inflated.

by a factor of 1.527 - this is roughly what I'd expect, perhaps as high as 1.6 to match PPS Sieve and Moo credits (based on what I remember!)


Just as expected :)

Well, 1.6 is probably what we should shoot for, Reb - so for 77-78 bit we need credit of around 165,000 x 1.6 = 264,000. That way we will align with PrimeGrid and Moo.

To be honest, I don't think that is fair of Collatz :( They are seriously distorting the true amount of computation done by the BOINC community :(

Thanks for your useful and enlightening feedback :)
34) Message boards : Number crunching : Trial Factoring (Message 6289)
Posted 19 Apr 2020 by KEP
Okay, it appears that credit needs to be additionally 6.56 times higher.

May I ask, is the credit given by Collatz Conjecture inflated, or does it reflect the actual amount of computation done to complete a single task?

I'm asking because 11,969,000 for an RTX 2080 Ti sounds like a lot, and at least more than what PG hands out for PPS Sieve. The original intention of the cobblestone is to reflect the actual calculations done, and not to be used as a means to attract users - by, for instance, inflating the credit given per workunit.

I just did a comparison with PPS Sieve at PrimeGrid, and it appears that an RTX 2080 Ti produces 2,722,003 cobblestones each day. Even taking that into account, we are still off by a factor of 1.527. I must stress that, just as with money, credit should not be inflated. It kind of makes credit lose its potential, and after all we might end up, in the best case, with a damaging competition between various projects, but in the worst case credit might have to devolve back to simply counting the workunits completed. Both scenarios are unwanted, so let's do our best to keep it real :)
35) Message boards : Number crunching : Trial Factoring (Message 6284)
Posted 19 Apr 2020 by KEP
The Collatz project credits 30k per 3-4 minute run... you think about it.


Could you please reveal the type of GPU that does 30K credit per 3-4 minute runs?

I must say, even though you probably already know this, that not all GPUs produce 30K credits' worth of calculations every 3-4 minutes, but if credit is too low and unreasonable it should of course be adjusted. How does Collatz credit compare to PrimeGrid credit on a GPU?
36) Message boards : Number crunching : Trial Factoring (Message 6272)
Posted 19 Apr 2020 by KEP
Which tiny ones are you talking about, 72 to 73 bits? If so, looking forward to it.


We will be starting with 70 to 71 bit. Due to the very short runtime, it may be best for the server if they are actually 70-72 bit. It is as simple as a search and replace before Rebirther creates new work. That small search-and-replace task will in itself increase the chance of a factor being found from ~1.436% to ~2.872%, and it will increase the test time by a factor of 3 (see the rough sketch below). Let's see what Rebirther decides to do, but it is very simple to run from 70 bit to 72 without messing anything up.
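
A small sketch of that trade-off, using the common rule of thumb that the chance of a factor between 2^b and 2^(b+1) is roughly 1/b, and that each extra bit level doubles the work; the exact percentages above are project statistics, so the numbers below only approximate them:

```python
# Rough comparison of a single-bit WU (70->71) versus a combined WU (70->72),
# using the rule of thumb that a factor between 2^b and 2^(b+1) has ~1/b chance
# and that each extra bit level costs twice as much work as the previous one.
def factor_chance(start_bit, end_bit):
    return sum(1.0 / b for b in range(start_bit, end_bit))

def relative_effort(start_bit, end_bit):
    return sum(2 ** (b - start_bit) for b in range(start_bit, end_bit))

for end in (71, 72):
    print(f"70->{end} bit: factor chance ~{factor_chance(70, end):.2%}, "
          f"effort x{relative_effort(70, end)}")
```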
37) Message boards : Number crunching : Trial Factoring (Message 6266)
Posted 19 Apr 2020 by KEP
It's hard to ignore when we can only have 1 task per host and it's minutes between tasks.


Yes, it is.

Unfortunately, when Rebirther was doing some testing locally to remove the last traces of the project name we were not allowed to use, something happened that affected the prepared work, and for now this unfortunately gives us a higher download error rate, because the work that was prepared was deleted and the system has to recreate the workunits once more. If we increase the max from 1 to 2 or 3, it might result in weeks without new work for the high-end GPU users.

We are sorry for the inconvenience and hope it all corrects itself soon.

On a positive note, this will not happen again once we start getting work from Mersenne.org, which we hopefully soon will :)
38) Message boards : Number crunching : Trial Factoring (Message 6246)
Posted 16 Apr 2020 by KEP
SRBase is now 10th in overall production for Trial Factoring :)

When will we be no. 1? :)

Follow this link and let's see how fast we can move to no. 1 in overall Trial Factoring: https://www.mersenne.org/report_top_500_custom/?team_flag=0&type=1001&rank_lo=1&rank_hi=15&start_date=1995-01-01&end_date=
39) Message boards : Number crunching : Trial Factoring (Message 6240)
Posted 16 Apr 2020 by KEP
Hi, once the "batch1" tasks are depleted, when will the next batch be uploaded? Just want to know the waiting time between "batches" of work units. Thx.


This hopefully won't be long. The time between having no work to send and being able to upload new work depends on quite a few factors. The plan for new work looks (as far as I understand) like this:

1. Complete all 73-74 bit and 77-78 bit tests and wait for the return of the last one. (This can be a long haul: if we are extremely unlucky and a user running the same old GPU as mine only runs it for 12 hours a day, each 77-78 bit run will take a whopping 7-9 days to complete.)
2. Once all work is complete, the last traces of the project name we were not allowed to use have to be removed, and that requires some changes in the database.
3. Once the changes in step 2 are complete, work from mersenne.org will be loaded, and from that point on there will most likely never again be a time when we drop to 0 work available/remaining. In other words, as soon as we reach step 3, there will always be work available :)
40) Message boards : Number crunching : Trial Factoring (Message 6188)
Posted 14 Apr 2020 by KEP
Okay, just worrying about the May 7th lawyers thing :/


I was too, but at least now we have George backing us, and in the future the Trial Factoring effort is going to be (if needed) coordinated with George. What would really help would be if those abandoning tasks just aborted them, and if a few more people with high-end GPUs came to our assistance. If we get too close to May 7th, there is still a not-so-nice option to fall back on, but let's hope that we have all current work completed by the end of the month so we can get going with mersenne.org :)

