Posts by KEP
21) Message boards : Number crunching : Trial Factoring (Message 6497)
Posted 23 May 2020 by KEP
And ECM factoring methods are not efficient for factoring such large numbers?


Someone else will have to elaborate on this, but I have read somewhere that for n>20M there is not much benefit to, or possibility of, doing ECM, due to (as I recall) memory use. Of course, there comes a time where Trial Factoring and P-1 exhaust their efficiency and possibilities, and at that point, unless something limits us, ECM might become the most efficient :)
22) Message boards : Number crunching : Trial Factoring (Message 6495)
Posted 23 May 2020 by KEP
And why are some of the cells yellow?


The yellow cells are the CPU-optimal (formerly GPU-optimal) Trial Factoring depth. Today, everyone with an average resource pool like BOINC, or single users with a modern GPU, should go 2 bits above the yellow line. So yes, at the point where the yellow line is, some have to change to P-1 and other users have to keep Trial Factoring, if they want to use their resources optimally. Since BOINC moves breadth first, then unless the yellow bar moves to a higher bit depth sometime in the future, it should always make sense for BOINC to go 2 bits higher than the yellow cell shows.
23) Message boards : News : New work for TF added (Message 6385)
Posted 10 May 2020 by KEP
I hope I will soon be able to do the calculations with both GPUs together!


I sure hope so. What comforts me is that great minds are working on this. Yesterday I did my own TF work on a noisy ancient GPU using mfakto. It was sure nice to see that the progress bar worked as it was supposed to :)

A big thank you to all those of you who have taken up the challenge of modernizing mfakt(o)(c) and getting it to work flawlessly on BOINC, on both single-GPU and (eventually) multi-GPU systems.
24) Message boards : News : New work for TF added (Message 6383)
Posted 9 May 2020 by KEP
Thank you very much for these valuable explanations.
My understanding is getting better and better and I will almost be able to answer all the questions about TF for SRBase on the Alliance francophone forum.

I still have one last question, if it's not too complicated to answer here:
I don't understand why the task calculation time is shorter for larger exponents?
For example:
n=250M testingtime is 40 seconds
n=500M testingtime is 20 seconds
n=1000M testingtime is 10 seconds


Great question :)

A factor of a Mersenne candidate always has the form 2 x k x prime_exponent_n + 1 (that is important to remember in the explanation below)

Let's answer your question, using 70 bit to 71 bit arithmetic:

at n=250M kmin=2,361,183,241,434 and kmax=4,722,366,482,869 (2,361,183,241,434 k to sieve or test for factor)
at n=500M kmin=1,180,591,620,717 and kmax=2,361,183,241,434 (1,180,591,620,717 k to sieve or test for factor)
at n=1000M kmin=590,295,810,358 and kmax=1,180,591,620,717 (590,295,810,358 k to sieve or test for factor)

So as you can see above, the higher n gets, the shorter the range of possible factor candidates for the bit level becomes. Therefore it is almost certain that the previous candidate you tried to factor took longer than the current candidate you are trying to factor.
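The k ranges above can be sketched in a few lines of Python (a minimal illustration of the arithmetic in this post, not part of any official tooling; the function name is my own):

```python
# Sketch: k range for trial factoring 2^p - 1 between bit levels b and b+1,
# using the factor form q = 2*k*p + 1, so k is roughly q / (2*p).
def k_range(p, bits):
    kmin = 2**bits // (2 * p)
    kmax = 2**(bits + 1) // (2 * p)
    return kmin, kmax

for p in (250_000_000, 500_000_000, 1_000_000_000):
    kmin, kmax = k_range(p, 70)
    print(f"n={p:,}: kmin={kmin:,} kmax={kmax:,}")
```

Doubling the exponent p halves the k range, which is exactly why the tasks get faster as n grows.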

One nice feature is that the expected number of candidates being factored remains the same, despite having fewer k to test - so with less work you remove the same percentage of candidates and eliminate them from further testing :)

Hope this helped :)
25) Message boards : News : New work for TF added (Message 6379)
Posted 9 May 2020 by KEP
Thank you for your answer !


Even though it is not possible to tell who found a factor and which factor was found, to give you an idea of how much you have contributed, the equation looks like this:

Percentage of tasks resulting in a factor found: ~1.436%

You currently have 9,502 valid tasks. On statistical average, 136 of those have found a factor. In other words, your contribution has saved more than 204 real-time (816 CPU) months of first-time primality computation on an i5-4670 - so keep up the good work; as you can see, your work is very valuable :)
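The estimate above is simple arithmetic; here it is spelled out (the ~1.436% rate and the task count are the figures from this post):

```python
valid_tasks = 9502
factor_rate = 0.01436             # ~1.436% of tasks find a factor
expected_factors = valid_tasks * factor_rate
print(round(expected_factors))    # -> 136
```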
26) Message boards : News : New work for TF added (Message 6366)
Posted 28 Apr 2020 by KEP
The calculation time of a WU is 22-23 seconds.


You have seen nothing yet :)

What takes you 22-23 seconds now will take ~11.5 seconds at n=460M, and as we exhaust the remaining 70 to 71 bit candidates at n~800M, you will literally see a workunit complete in just 5 seconds.

When we go to bit 71 to 72, your RTX 2080 will start at ~80 seconds per candidate, and that will then be cut in half at each doubling of n. The reduction in runtime will manifest itself smoothly, compared to what it was when you started crunching at a given bit level.

For example, given a test at n=125M running 80 seconds for bit 71 to 72, the testing time will scale approximately to the following values:

n=250M: testing time is 40 seconds
n=500M: testing time is 20 seconds
n=1000M: testing time is 10 seconds

If the testing times for 71 to 72 bit are as above, then you will more or less have these runtimes at these levels:

72 to 73 bit (n=250M=80 seconds) (n=500M=40 seconds) (n=1000M=20 seconds)
73 to 74 bit (n=250M=160 seconds) (n=500M=80 seconds) (n=1000M=40 seconds)
74 to 75 bit (n=250M=320 seconds) (n=500M=160 seconds) (n=1000M=80 seconds)
75 to 76 bit (n=250M=640 seconds) (n=500M=320 seconds) (n=1000M=160 seconds)
76 to 77 bit (n=250M=1280 seconds) (n=500M=640 seconds) (n=1000M=320 seconds)
...
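The pattern in that table (time doubles per bit level, halves per doubling of n) can be written as a small estimator. This is just a sketch of the scaling rule described here, with a hypothetical function name, not a benchmark:

```python
def estimated_runtime(base_seconds, base_n, base_bits, n, bits):
    # Runtime doubles for each extra bit level (twice the k range)
    # and is inversely proportional to the exponent n.
    return base_seconds * 2**(bits - base_bits) * (base_n / n)

# Anchored at 80 s for n=125M, bit level 71->72:
print(estimated_runtime(80, 125e6, 71, 250e6, 71))   # 40.0  (71->72 bit)
print(estimated_runtime(80, 125e6, 71, 250e6, 72))   # 80.0  (72->73 bit)
print(estimated_runtime(80, 125e6, 71, 1000e6, 76))  # 320.0 (76->77 bit)
```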

It is only possible to scale once you know the test time at a given bit level, since at higher bit levels there is, depending on your hardware, a slowdown in productivity :)
27) Message boards : Number crunching : Trial Factoring (Message 6318)
Posted 20 Apr 2020 by KEP
I'm not sure if this is worth anything, but now that I think about it, Rieselsieve also had a lot of errors when they upgraded from one version of an app to a newer one. If I recall correctly, it may have something to do with a workunit, once created, being tied to a certain app. Maybe v13 can be implemented, but only for new work. As mentioned, I'm not sure if this is in fact the problem, or if I recall correctly - after all, it is more than 10 years back in time :)

Good job Reb :)
28) Message boards : Number crunching : Trial Factoring (Message 6309)
Posted 20 Apr 2020 by KEP
Collatz is the worst example. Making credit like collatz will attract all the point whores and cheaters.


That is just sad :( ... To be honest, I've always been in this for the science. A good measurement of how much science has been done is in fact the pure uninflated/undeflated cobblestone. Now, because of Collatz and most likely also other projects, it is virtually impossible to tell whether we actually computed more science one year back, compared to the years before that. Well, this is not for the credit hunters. Of course we should all work together on getting credit right, such that people get what they should have, without being too far off to either side.
29) Message boards : Number crunching : Trial Factoring (Message 6295)
Posted 19 Apr 2020 by KEP
Collatz is inflated.

by a factor of 1.527 - this is roughly what I'd expect, perhaps as high as 1.6 to match PPS Sieve and Moo credits (based on what I remember !)


Just as expected :)

Well, 1.6 is probably what we should shoot for, Reb - so for 77-78 bit we need credit of 165000 x 1.6 = 264000. That way we will align with PrimeGrid and Moo.

To be honest, I don't think that is fair of Collatz :( They are seriously distorting the true amount of computation done by the BOINC community :(

Thanks for your useful and enlightening feedback :)
30) Message boards : Number crunching : Trial Factoring (Message 6289)
Posted 19 Apr 2020 by KEP
Okay, it appears that credit needs to be additionally 6.56 times higher.

May I ask: is the credit given by Collatz Conjecture inflated, or does it reflect the actual amount of computation done to complete a single task?

I'm asking because 11,969,000 for a GTX 2080 TI sounds like a lot, and at least more than what PG hands out for PPS Sieve. The original intention of the cobblestone is to reflect the actual calculations done, and not to be used as a means to attract users - for instance by inflating the credit given per workunit.

I just did a comparison with PPS Sieve at PrimeGrid, and it appears that a GTX 2080 TI produces 2,722,003 cobblestones each day. Even taking that into account, we are still off by a factor of 1.527. I must stress that, just as with money, credit should not be inflated. It makes credit lose its potential, and we might end up, in the best case, with a damaging competition between various projects, but in the worst case, credit might have to devolve back to simply counting the workunits completed. Both scenarios are unwanted, so let's do our best to keep it real :)
31) Message boards : Number crunching : Trial Factoring (Message 6284)
Posted 19 Apr 2020 by KEP
Collatz projects credits 30k per 3-4 mins run...you think about it.


Could you please reveal the type of GPU that does 30K credit per 3-4 minute runs?

I must say, even though you probably already know this, that not all GPUs produce 30K credits worth of calculations each 3-4 minutes - but if credit is too low and unreasonable, it should of course be adjusted. How does Collatz credit compare to PrimeGrid credit on a GPU?
32) Message boards : Number crunching : Trial Factoring (Message 6272)
Posted 19 Apr 2020 by KEP
Which tiny ones are you talking about, 72 to 73 bits? If so looking forward to it.


We will be starting with 70 to 71 bit. Due to the very short runtime, it may be best for the server if the tasks are actually 70-72 bit. It is as simple as a search and replace before Rebirther creates new work. That small search and replace will in itself increase the chance of a factor found from ~1.436% to ~2.872%, and it will increase the test time by a factor of 3. Let's see what Rebirther decides to do, but it is very simple to run from 70 bit to 72 bit without messing anything up.
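Why a combined 70-72 bit run roughly triples the test time can be seen from the k counts per bit level (a rough sketch of the reasoning above, with a hypothetical helper name; the factor-chance figures are the ones quoted in this thread):

```python
def k_count(p, bits):
    # Number of k in the bit level [2^bits, 2^(bits+1)) for q = 2*k*p + 1
    return (2**(bits + 1) - 2**bits) // (2 * p)

p = 250_000_000
work_70_71 = k_count(p, 70)
work_71_72 = k_count(p, 71)
# The 71-72 range alone holds twice as many k as 70-71,
# so the combined range is ~3x the work of 70-71 alone.
print((work_70_71 + work_71_72) / work_70_71)  # ~3.0
```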
33) Message boards : Number crunching : Trial Factoring (Message 6266)
Posted 19 Apr 2020 by KEP
It's hard to ignore when we can only have 1 task per host and its minutes between tasks.


Yes, it is.

Unfortunately, when Rebirther was doing some local testing to remove the last traces of the project name we were not allowed to use, something happened that affected the prepared work. For now that unfortunately gives us a higher download error rate, because the work that was prepared was deleted and the system has to recreate the workunits once more. If we increase the max from 1 to 2 or 3, it might result in weeks without new work for the high-end GPU users.

We are sorry for the inconvenience and hope it all soon corrects itself.

On a positive note, this will not happen again once we start getting work from Mersenne.org, which we hopefully soon will :)
34) Message boards : Number crunching : Trial Factoring (Message 6246)
Posted 16 Apr 2020 by KEP
SRBase is now 10th in overall production for Trial Factoring :)

When will we be no. 1? :)

Follow this link and let's see how fast we can move to no. 1 in overall Trial Factoring: https://www.mersenne.org/report_top_500_custom/?team_flag=0&type=1001&rank_lo=1&rank_hi=15&start_date=1995-01-01&end_date=
35) Message boards : Number crunching : Trial Factoring (Message 6240)
Posted 16 Apr 2020 by KEP
Hi,once the "batch1" tasks are depleted, when will the next batch be uploaded? Just want to know the waiting time between "batches" of work units. Thx.


Hopefully this won't be long. Quite a few factors depend on the time between having no work to send and being able to upload new work. The plan for new work looks (as far as I understand) like this:

1. Complete all 73-74 bit and 77-78 bit tests and wait for the return of the last one. (This can be a long haul: if we are extremely unlucky, and a user running the same old GPU as I have only runs it for 12 hours a day, it will take a whopping 7-9 days to complete each 77-78 bit run.)
2. After all work is complete, the last traces of the project name we were not allowed to use have to be removed, and that requires some changes in the database.
3. Once the changes in step 2 are complete, work from mersenne.org will be loaded, and from that point there will most likely never again be a time when we go to 0 work available/remaining. In other words, as soon as we reach step 3, there will always be work available :)
36) Message boards : Number crunching : Trial Factoring (Message 6188)
Posted 14 Apr 2020 by KEP
Okay, just worrying about the May.7th lawyers thing :/


I was too, but at least now we have George backing us, and in the future the Trial Factoring effort is going to be (if needed) coordinated with George. What would really help would be if those abandoning the tasks just aborted them, and if a few more people with high-end GPUs came to our assistance. If we get too close to May 7th, there is still a not-so-nice possibility to fall back on, but let's hope that we have all current work completed by the end of the month so we can get going with mersenne.org :)
37) Message boards : Number crunching : Trial Factoring (Message 6186)
Posted 14 Apr 2020 by KEP
Hello Reb, i think the "GPU72" on the account page and the server status page aren't updated yet.


It will be changed as soon as we finish our GPU72 reservation. Unfortunately it is not possible to make the changes before our current reservation is complete, without risk of losing work currently in progress. Our ambition is to leave and wrap up our GPU72 reservation cleanly. A consequence of that may be that we also very temporarily run out of work while going from GPU72 to mersenne.org work. It is a shame that it had to end like that, but at least the support for GIMPS, and thereby the support for the research to find new Mersenne Primes, is intact. Thanks for chiming in.

Take care :)
38) Message boards : Number crunching : Trial Factoring tests (Message 6133)
Posted 12 Apr 2020 by KEP
yes, do this as long as you can, my grandmother died on Friday but no Corona.


I'm sorry to hear that; I really hope that you made the most of your time with her :)
39) Message boards : News : Trial Factoring - new subproject (beta) (Message 6121)
Posted 11 Apr 2020 by KEP
Also, are there checkpoints? If I have a power issue or have to reboot, do I lose my current work up to that time, or is there a checkpoint saved for it to go back to?


Good question; I haven't checked the FAQ. Checkpointing is done every 5 minutes, so in the worst case (and then you have to be really unlucky) you will lose 4m59s of work. If I recall correctly, you will only lose any real progress if you have a crash instead of a clean shutdown of BOINC and the computer in general :)
40) Message boards : News : Trial Factoring - new subproject (beta) (Message 6094)
Posted 10 Apr 2020 by KEP
Maybe that should be in the intro post describing Trial Factoring, or in the FAQ. Judging by the slow movement in the number of unsent tasks, it appears that quite a few users have cancelled their work because they did not think they were actually progressing. Would such a news post or FAQ entry have made a difference for you, STEVE? Would it have made a difference for you, Bluestang?





Copyright © 2014-2024 BOINC Confederation / rebirther