Message boards : Number crunching : Trial Factoring
And ECM factoring methods are not efficient for factoring such large numbers? Someone else will have to elaborate on this, but I have read somewhere that for n > 20M there is not much benefit to (or possibility of) doing ECM, due to (as I recall) memory use. Of course, there comes a time when trial factoring and P-1 exhaust their efficiency and possibilities, and at that point, unless something limits us, ECM might become the most efficient. :)
ID: 6497
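For readers who want to see what a trial-factoring task actually checks, here is a minimal sketch (plain Python, purely illustrative, and not the GPU application this project actually runs). It relies on two standard facts about Mersenne numbers: every factor q of M(p) = 2^p - 1 with p an odd prime has the form q = 2kp + 1 with q ≡ 1 or 7 (mod 8), and q divides M(p) exactly when 2^p ≡ 1 (mod q). The function name and the unsieved brute-force loop are assumptions made for the example.

```python
def trial_factor(p: int, bits_lo: int, bits_hi: int):
    """Look for a factor q of M(p) = 2^p - 1 with 2**bits_lo < q <= 2**bits_hi.

    Factors of M(p) (p an odd prime) have the form q = 2*k*p + 1 with
    q % 8 in (1, 7), and q divides M(p) exactly when pow(2, p, q) == 1,
    so we only have to step through k for the chosen bit range.
    """
    k = (1 << bits_lo) // (2 * p) + 1   # first k with q above the lower bound
    q = 2 * k * p + 1
    while q <= (1 << bits_hi):
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q                    # factor found -> exponent needs no primality test
        q += 2 * p                      # next candidate, k -> k + 1
    return None                         # no factor in this bit range

# Example: M(11) = 2047 = 23 * 89, and 23 = 2*1*11 + 1 is found immediately.
print(trial_factor(11, 1, 12))          # -> 23
```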
Thanks for everything!
ID: 6498
For better understanding, the next batch will have a better name showing the current range max. The current batch has reached 438M of the 1000M max.
ID: 6500
For better understanding, the next batch will have a better name showing the current range max. The current batch has reached 438M of the 1000M max. Nope, for the n range being taken from 70 to 71 bits, the available range only goes up to 799M. I believe the 1000M to 1099M range is only available for CPUs at the moment: high exponent, low bit level.
ID: 6501
For all who have noticed only 250 credits: the higher range now has half the runtime it had before, so the credits were adjusted accordingly.
ID: 6502
I'm not seeing much of a decrease in run time, nowhere near a 50% decrease.
ID: 6504
I've only seen a 12% decrease in run time.
ID: 6505
I'm not seeing much of a decrease in run time, nowhere near a 50% decrease. I will recheck.
ID: 6506
The upcoming batch will have 300 credits. The runtime went down from 1m40s to 1m30s (on an RX 5500 XT), so it is nearly half of what the 200M+ range took.
ID: 6507
For better understanding, the next batch will have a better name showing the current range max. The current batch has reached 438M of the 1000M max. Thanks for the "better name with current range max"! That way we know where we are! ;-)
ID: 6508
Can you queue more than 500k WUs onto the server, or is the limitation on the GIMPS side in providing a bigger batch of WUs? Just be aware that we will reach a point where we start processing more than 200-300k WUs per day, which would be awesome, but also time-demanding on your side to keep feeding the server on a daily basis. Really looking forward to seeing all 4M WUs for 70-71 bits done.
ID: 6511
Can you queue more than 500k WUs onto the server, or is the limitation on the GIMPS side in providing a bigger batch of WUs? Just be aware that we will reach a point where we start processing more than 200-300k WUs per day, which would be awesome, but also time-demanding on your side to keep feeding the server on a daily basis. Really looking forward to seeing all 4M WUs for 70-71 bits done. Yes, it's planned, but it will grow the database very fast; I am trying to purge things before I back up the database. The main issue is the assignment queue: the more I have in the pipeline, the longer it takes to report results.
ID: 6512
Can you queue more than 500k WUs onto the server, or is the limitation on the GIMPS side in providing a bigger batch of WUs? Just be aware that we will reach a point where we start processing more than 200-300k WUs per day, which would be awesome, but also time-demanding on your side to keep feeding the server on a daily basis. Really looking forward to seeing all 4M WUs for 70-71 bits done. Maybe for the higher ranges, instead of going from 70 to 71 bits, just go straight to 72 or 73 instead. You will have to run some trials for timings.
ID: 6513
Here's an example on my GPU.
ID: 6514
Maybe for the higher ranges, instead of going from 70 to 71 bits, just go straight to 72 or 73 instead. You will have to run some trials for timings. How would people here in general feel about doing lower n, breadth-first, to the max bit level, i.e. n > 96.83M to n <= 120M up to 78 bits? I know it eventually is a huge increase in testing time, but does it really make a difference for you as a user when it comes to supporting this project or not? I'm asking because, if we really want to benefit and boost this project, running that suggested n range to 78 bits is the way to move forward. I know it will steal some of the low-hanging fruit. What does each user prefer in terms of short or long running tasks?

I reckon Rebirther is the one who takes the final decision, but it is seriously as little as changing one setting in the .ini file, and then everything from work creation to validation will work the same, even if we decide to run multiple bit levels per workunit. Rebirther, the offer I gave you in a private message still stands when it comes to creating the next bit range, if you won't go all remaining bits up to 78 but breadth-first, increasing by 1 bit per workunit - all we have to do is agree with George that we can reserve and keep everything in that range until we reach 78 bits. Has the new way of showing progress made it possible to have a mix of bit levels?

I understand the need to go breadth-first, but we also have to use our resources to best help Primenet, and as mentioned in the private message, going breadth-first but completing the range n > 96.83M to n <= 120M up to 78 bits, one bit at a time, won't be difficult, and I will no doubt help you do it flawlessly :) Everyone with an opinion or some enlightenment, please let me hear what you think, preferably from both slow and fast GPU users :)
ID: 6515
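To put the "huge increase in testing time" into rough numbers: for a fixed exponent p, the candidate factors q = 2kp + 1 between 2^b and 2^(b+1) number roughly 2^b / (2p), so every additional bit level is about twice the work of the previous one. Here is a back-of-envelope sketch of the relative cost (my own illustration, not project figures):

```python
def relative_work(bit_from: int, bit_to: int, base_bit: int = 70) -> int:
    """Work to TF one exponent from 2^bit_from to 2^bit_to, in units of the 70->71 step."""
    return sum(2 ** (b - base_bit) for b in range(bit_from, bit_to))

print(relative_work(70, 71))   # 1    (one current 70->71 step)
print(relative_work(77, 78))   # 128  (a single 77->78 step)
print(relative_work(70, 78))   # 255  (going all the way to 78 bits at once)
```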
In my opinion we should test the lower n range from 72 bits to 73 bits to get a feel for the timings needed; we have almost 470k units in there. This will help the GIMPS project in the short term, but I do understand we should keep fast WUs available to the community, like the ones we are doing now, which will only be useful to GIMPS in 20-30 years' time. It's a trade-off.
ID: 6516
Maybe for the higher ranges, instead of going from 70 to 71 bits, just go straight to 72 or 73 instead. You will have to run some trials for timings. I suggest we do something between 73 and 77 bits, since it would be a compromise for both fast and slow GPU users. I only have laptop GPUs, so I'd prefer something small that would still fast-forward Primenet in the short term.
ID: 6517
In my opinion we should test the lower n range from 72 bits to 73 bits to get a feel for the timings needed; we have almost 470k units in there. This will help the GIMPS project in the short term, but I do understand we should keep fast WUs available to the community, like the ones we are doing now, which will only be useful to GIMPS in 20-30 years' time. It's a trade-off. If at n = 200M you spend 42.5 minutes on average to go from 72 to 73 bits, it means that at n = 120M (the likely starting point) you would spend ~70.8 minutes going from 72 to 73 bits. That could be too much for some. An RTX 2080 doing the same n/bit range (if 50x faster) would spend ~1m25s. It looks like Rebirther's approach is correct and that running breadth-first is best suited for all users. Last time we jumped to 77-78 bits, a lot of the slow GPUs vanished. Going breadth-first may actually be the proper way of keeping our momentum. Thanks for your feedback; now let's see if we can actually clear all exponents to 72 bits before the end of the year (maybe 73 bits if enough high-end users come to our aid) :)
ID: 6518
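The ~70.8-minute estimate above comes from the same candidate count: for a fixed bit range there are roughly 2^bits / (2p) candidates q = 2kp + 1, so the time per exponent scales about as 1/n. A quick sketch of that scaling, reusing only the 42.5-minute and 50x figures quoted in the post (the helper name and everything else are illustrative):

```python
def estimated_minutes(n_target: float, n_ref: float = 200e6, t_ref: float = 42.5) -> float:
    """Scale a measured TF time at exponent n_ref to exponent n_target (same bit range)."""
    return t_ref * (n_ref / n_target)

print(round(estimated_minutes(120e6), 1))        # ~70.8 minutes on the same slow GPU
print(round(estimated_minutes(120e6) / 50, 2))   # ~1.42 minutes (~1m25s) on a 50x faster GPU
```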
Personally, I don't mind having long WUs to compute, because I have a very powerful GPU.
ID: 6519
I'm fine with whatever.
ID: 6520