New work for TF added


Message boards : News : New work for TF added


There is now new work from mersenne.org, starting from the 72-73 bit range. All necessary changes have been made.

ID: 6298

The maximum number of GPU WUs in progress is now 20.

ID: 6303

If you have a cc_config and app_config still running with the old GPU72 name (before the rename), please change the entries to TF to avoid any problems.
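For reference, a renamed entry could look like the sketch below. This is a generic BOINC app_config.xml fragment, assuming the project's new application name is simply TF; check the exact name in the project's applications list, since the entry is silently ignored if the name does not match:

```xml
<app_config>
  <app>
    <!-- must match the project's application name exactly (assumed: TF) -->
    <name>TF</name>
    <gpu_versions>
      <!-- one task per GPU, reserving half a CPU core for feeding it -->
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After editing, re-read the config files from the BOINC Manager (Options > Read config files) so the change takes effect without restarting the client.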

ID: 6328

For GPU72, it was not recommended to compute two WUs together on two GPUs installed on the same computer.

ID: 6360

For GPU72, it was not recommended to compute two WUs together on two GPUs installed on the same computer.

It's the same; there was only a name change. Maybe in the future, after some changes to the program code, we will be able to use both.

ID: 6361

OK !

ID: 6363

OK !

It will get harder; check the FAQ. Once this range is done, the next one has double the runtime, but for your 2080 it's only a snack :)

ID: 6364

The calculation time of a WU is 22-23 seconds.

You have seen nothing yet :) What takes you 22-23 seconds now will take ~11.5 seconds at n=460M, and as we exhaust the candidates remaining at n~800M for bit 70 to 71, you will literally see a workunit complete in just 5 seconds. When we go to bit 71 to 72, your RTX 2080 will start at ~80 sec per candidate, and that will then be cut in half at each doubling of n. The reduction in runtime manifests smoothly compared to what it was when you started crunching at a given bit level.

For example, if a test at n=125M runs 80 seconds for bit 71 to 72, then the testing time scales approximately as follows:

n=250M: testing time is 40 seconds
n=500M: testing time is 20 seconds
n=1000M: testing time is 10 seconds

If the testing times for 71 to 72 bit are as above, then you will more or less have these runtimes at these levels:

72 to 73 bit: n=250M = 80 seconds, n=500M = 40 seconds, n=1000M = 20 seconds
73 to 74 bit: n=250M = 160 seconds, n=500M = 80 seconds, n=1000M = 40 seconds
74 to 75 bit: n=250M = 320 seconds, n=500M = 160 seconds, n=1000M = 80 seconds
75 to 76 bit: n=250M = 640 seconds, n=500M = 320 seconds, n=1000M = 160 seconds
76 to 77 bit: n=250M = 1280 seconds, n=500M = 640 seconds, n=1000M = 320 seconds
...

It is only possible to scale once you know the test time at a given bit level, since at higher bit levels, depending on your hardware, there is a slowdown in productivity :)
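The scaling rule above (runtime halves at each doubling of n and doubles per bit level) can be sketched as a quick model; this is only an illustration of the arithmetic in the post, not project code, and the 80-second baseline at n=125M is the post's example figure:

```python
def tf_runtime(base_seconds, base_n, base_bits, n, bits):
    # Model from the post: within a bit level, runtime is proportional
    # to 1/n; each additional bit level doubles the runtime.
    return base_seconds * (base_n / n) * 2 ** (bits - base_bits)

# Baseline: 80 s at n=125M for the 71-to-72-bit level.
print(tf_runtime(80, 125e6, 71, 250e6, 71))   # 40.0  (71-72 bit, n=250M)
print(tf_runtime(80, 125e6, 71, 250e6, 72))   # 80.0  (72-73 bit, n=250M)
print(tf_runtime(80, 125e6, 71, 1000e6, 72))  # 20.0  (72-73 bit, n=1000M)
```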

ID: 6366

OK, thanks for the detailed explanations !

ID: 6369

Can we know if we have found prime factors and, if so, how many ?

ID: 6373

You can find here the number of attempts versus successes.

ID: 6374

All right, thank you !

ID: 6375

All right, thank you !

No, individual results are only available via Primenet.

ID: 6376

Thank you for your answer !

ID: 6377

Thank you for your answer !

Even though it is not possible to tell who found a factor or which factor was found, to give you an idea of how much you have contributed, the calculation looks like this:

Percentage of tasks resulting in a factor found: ~1.436 %
You currently have 9,502 valid tasks.
136 of those, using the statistical average, have found a factor.

In other words, your contribution has saved more than 204 real-time months (816 CPU months) of first-time primality computation on an i5-4670, so keep up the good work; as you can see, your work is very valuable :)
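The 136 figure above is a statistical expectation, not a confirmed count; a quick check of the arithmetic (rate and task count taken from the post):

```python
factor_rate = 0.01436   # ~1.436 % of TF tasks find a factor (from the post)
valid_tasks = 9502
expected_factors = round(valid_tasks * factor_rate)
print(expected_factors)  # 136
```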

ID: 6379

Thank you very much for these valuable explanations.

ID: 6382

Thank you very much for these valuable explanations.

Great question :) A factor of a Mersenne candidate always has the form:

2 x k x prime_exponent_n + 1

(that is important to remember in the explanation below). Let's answer your question with 70 bit to 71 bit arithmetic:

at n=250M: kmin=2,361,183,241,434 and kmax=4,722,366,482,869 (2,361,183,241,434 values of k to sieve or test for a factor)
at n=500M: kmin=1,180,591,620,717 and kmax=2,361,183,241,434 (1,180,591,620,717 values of k to sieve or test for a factor)
at n=1000M: kmin=590,295,810,358 and kmax=1,180,591,620,717 (590,295,810,358 values of k to sieve or test for a factor)

So as you can see above, the higher n gets, the shorter the range of possible factor candidates for the bit level becomes. Therefore it is almost certain that the previous candidate you tried to factor took longer than the current candidate. One nice feature is that the expected number of candidates being factored remains the same despite having fewer pairs to test, so with less work you remove the same percentage of candidates and eliminate them from further testing :) Hope this helped :)
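The k ranges quoted above follow directly from the factor form 2 x k x n + 1: a factor q in [2^b, 2^(b+1)) corresponds to k in roughly [2^b / (2n), 2^(b+1) / (2n)). A minimal sketch of that bound calculation (an illustration, not the sieve's exact code):

```python
def k_range(exponent, bit_level):
    # A factor q of 2^exponent - 1 has the form q = 2*k*exponent + 1,
    # so q in [2^b, 2^(b+1)) maps to k between 2^b/(2n) and 2^(b+1)/(2n).
    kmin = (1 << bit_level) // (2 * exponent)
    kmax = (1 << (bit_level + 1)) // (2 * exponent)
    return kmin, kmax

# 70-to-71-bit level at n=250M, matching the figures in the post:
print(k_range(250_000_000, 70))  # (2361183241434, 4722366482869)
```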

ID: 6383

Thank you very much, I understand it much better now !

ID: 6384

I hope I will soon be able to do the calculations with both GPUs together !

I sure hope so. What comforts me is that great minds are working on this. Yesterday I did my own TF work on a noisy, ancient GPU using mfakto. It was sure nice to see that the progress bar worked as it is supposed to :) A big thank you to all those of you who have taken up the challenge of modernizing mfakt(o)(c) and getting it to work flawlessly on BOINC, on single as well as (eventually) multi-GPU systems.

ID: 6385



Copyright © 2014-2020 BOINC Confederation / rebirther