Trial Factoring
Message boards : Number crunching : Trial Factoring

KEP
Volunteer tester
Joined: 28 Nov 14
Posts: 92
Credit: 1,102,770
RAC: 0
Message 6497 - Posted: 23 May 2020, 14:52:13 UTC - in response to Message 6496.

And ECM factoring methods are not efficient for factoring such large numbers?


Someone else will have to elaborate on this, but I have read somewhere that for n>20M there is not much benefit or possibility in doing ECM, due (as I recall) to memory use. Of course, there comes a time when Trial Factoring and P-1 exhaust their efficiency and possibilities, and at that point, unless something limits us, ECM might become the most efficient :)

[AF>Amis des Lapins] Jean-Luc
Joined: 12 Mar 18
Posts: 21
Credit: 1,757,370,846
RAC: 8,282,163
Message 6498 - Posted: 23 May 2020, 16:01:49 UTC - in response to Message 6497.

Thanks for everything!
I've literally become passionate about this project!

;-)

rebirther
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 2 Jan 13
Posts: 7490
Credit: 43,876,295
RAC: 29,389
Message 6500 - Posted: 25 May 2020, 17:21:42 UTC

For better understanding, the next batch will have a better name that includes the current range max. The current batch has reached 438M of the 1000M max.

Gigacruncher [TSBTs Pirate]
Joined: 28 Mar 20
Posts: 51
Credit: 8,419,360
RAC: 0
Message 6501 - Posted: 25 May 2020, 17:39:38 UTC - in response to Message 6500.

For better understanding, the next batch will have a better name that includes the current range max. The current batch has reached 438M of the 1000M max.


Nope, for the n range taken from 70 to 71 bits, the available range only goes up to 799M. I believe the range 1000M to 1099M is only available to CPUs at the moment: high exponent, low bit level.

rebirther
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 2 Jan 13
Posts: 7490
Credit: 43,876,295
RAC: 29,389
Message 6502 - Posted: 26 May 2020, 15:31:15 UTC

For all who have noticed only 250 credits: the higher range now has half the runtime it had before, so the credits were adjusted accordingly.

Sphynx
Joined: 23 Dec 14
Posts: 1
Credit: 363,833,344
RAC: 3,345
Message 6504 - Posted: 27 May 2020, 2:41:36 UTC - in response to Message 6502.

I'm not seeing much of a decrease in run time, not near a 50% decrease.

Bryan
Joined: 4 Dec 14
Posts: 1
Credit: 100,191,175
RAC: 0
Message 6505 - Posted: 27 May 2020, 5:09:21 UTC

I've only seen a 12% decrease in run time.

rebirther
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 2 Jan 13
Posts: 7490
Credit: 43,876,295
RAC: 29,389
Message 6506 - Posted: 27 May 2020, 5:24:01 UTC - in response to Message 6504.

I'm not seeing much of a decrease in run time, not near a 50% decrease.


I will recheck.

rebirther
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 2 Jan 13
Posts: 7490
Credit: 43,876,295
RAC: 29,389
Message 6507 - Posted: 27 May 2020, 17:51:33 UTC

The upcoming batch will have 300 credits; the runtime went down from 1m40s to 1m30s (on an RX 5500 XT), so it is nearly half the runtime of the 200M+ range.

[AF>Amis des Lapins] Jean-Luc
Joined: 12 Mar 18
Posts: 21
Credit: 1,757,370,846
RAC: 8,282,163
Message 6508 - Posted: 28 May 2020, 12:46:49 UTC - in response to Message 6500.

For better understanding, the next batch will have a better name that includes the current range max. The current batch has reached 438M of the 1000M max.


Thanks for the "better name with current range max"!
That way we know where we are!

;-)

Gigacruncher [TSBTs Pirate]
Joined: 28 Mar 20
Posts: 51
Credit: 8,419,360
RAC: 0
Message 6511 - Posted: 28 May 2020, 17:33:56 UTC

Can you queue more than 500k WUs onto the server, or is the limitation on the GIMPS side in providing a bigger batch of WUs? I'm just conscious that we will reach a time when we start processing more than 200-300k WUs per day, which would be awesome, and therefore time-demanding on your side to keep feeding the server on a daily basis. Really looking forward to seeing all 4M WUs done for 70-71 bits.

rebirther
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 2 Jan 13
Posts: 7490
Credit: 43,876,295
RAC: 29,389
Message 6512 - Posted: 28 May 2020, 17:41:43 UTC - in response to Message 6511.

Can you queue more than 500k WUs onto the server, or is the limitation on the GIMPS side in providing a bigger batch of WUs? I'm just conscious that we will reach a time when we start processing more than 200-300k WUs per day, which would be awesome, and therefore time-demanding on your side to keep feeding the server on a daily basis. Really looking forward to seeing all 4M WUs done for 70-71 bits.


Yes, it's planned, but it will grow the database very fast; I am trying to purge things before I back up the database. The main issue is the assignment queue: the more I have in the pipeline, the longer it takes to report results.

Gigacruncher [TSBTs Pirate]
Joined: 28 Mar 20
Posts: 51
Credit: 8,419,360
RAC: 0
Message 6513 - Posted: 28 May 2020, 18:04:47 UTC - in response to Message 6512.

Can you queue more than 500k WUs onto the server, or is the limitation on the GIMPS side in providing a bigger batch of WUs? I'm just conscious that we will reach a time when we start processing more than 200-300k WUs per day, which would be awesome, and therefore time-demanding on your side to keep feeding the server on a daily basis. Really looking forward to seeing all 4M WUs done for 70-71 bits.


Yes, it's planned, but it will grow the database very fast; I am trying to purge things before I back up the database. The main issue is the assignment queue: the more I have in the pipeline, the longer it takes to report results.


Maybe for higher ranges, instead of going from 70 to 71 bits, just go straight to 72 or 73 instead. You will have to run some trials for timings.

Gigacruncher [TSBTs Pirate]
Joined: 28 Mar 20
Posts: 51
Credit: 8,419,360
RAC: 0
Message 6514 - Posted: 28 May 2020, 18:18:07 UTC - in response to Message 6513.

Here's an example on my GPU.

For n=794M it takes 10 mins from 70 bits to 71 bits
For n=794M it takes 20 mins from 71 bits to 72 bits
For n=794M it takes 30 mins from 70 bits to 72 bits

An NVIDIA RTX 2080 is about 50x faster than my laptop GPU.
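The timings above show why each bit level roughly doubles the work: candidate factors of 2^n-1 have the form q = 2kn+1, so there are about twice as many candidates between 2^b and 2^(b+1) as below 2^b. A minimal sketch of that scaling (in Python; the 10-minute figure from this post is the assumed baseline, and `tf_minutes` is a hypothetical helper, not part of any TF software):

```python
# Trial-factoring effort roughly doubles with each bit level:
# candidates q = 2*k*n + 1 in [2^b, 2^(b+1)) are about twice as
# numerous as those in [2^(b-1), 2^b).

def tf_minutes(base_minutes, base_bits, from_bits, to_bits):
    """Estimate minutes to trial-factor from `from_bits` to `to_bits`,
    given `base_minutes` for the single level base_bits -> base_bits+1."""
    total = 0.0
    for b in range(from_bits, to_bits):
        total += base_minutes * 2 ** (b - base_bits)
    return total

# Using the 10 min figure for 70->71 bits at n=794M from this post:
print(tf_minutes(10, 70, 70, 71))  # 10.0  (70->71)
print(tf_minutes(10, 70, 71, 72))  # 20.0  (71->72)
print(tf_minutes(10, 70, 70, 72))  # 30.0  (70->72), matching the timings above
```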

KEP
Volunteer tester
Joined: 28 Nov 14
Posts: 92
Credit: 1,102,770
RAC: 0
Message 6515 - Posted: 28 May 2020, 20:08:05 UTC - in response to Message 6513.

Maybe for higher ranges, instead of going from 70 to 71 bits, just go straight to 72 or 73 instead. You will have to run some trials for timings.


How would people here in general feel about doing lower n, breadth first, to the max bit level, i.e. n>96.83M to n<=120M to 78 bits? I know it is eventually a huge increase in testing time, but does it really make a difference for you as a user when it comes to supporting this project or not?

I'm asking because, if we really want to benefit and boost this project, running that suggested n range to 78 bits is the way to move forward. I know it will steal some of the low-hanging fruit. What does each user prefer in terms of short or long running tasks?

I reckon Rebirther is the one who makes the final decision, but it is seriously as little as changing one setting in the .ini file, and then everything from work creation to validation will work the same, even if we decide to run multiple bits per workunit.

The offer, Rebirther, that I made in a private message still stands when it comes to creating the next bit range, if you won't go all remaining bits up to 78 but breadth first, increasing by 1 bit per workunit - all we have to do is agree with George that we can reserve and keep everything in that range until we reach 78 bits.

Has the new way of showing progress made it possible to have a mix of bits?

I understand the need to go breadth first, but we also have to use our resources to best help Primenet, and as mentioned in the private message, going breadth first but completing the range n>96.83M to n<=120M to 78 bits, one bit at a time, won't be difficult, and I will no doubt help you do it flawlessly :)

Everyone with an opinion or enlightenment, please let me hear what you think, preferably from both slow and fast GPU users :)

Gigacruncher [TSBTs Pirate]
Joined: 28 Mar 20
Posts: 51
Credit: 8,419,360
RAC: 0
Message 6516 - Posted: 28 May 2020, 20:25:12 UTC
Last modified: 28 May 2020, 20:32:56 UTC

In my opinion we should test the lower n range from 72 to 73 bits to get a feel for the timings needed; we have almost 470k units in there. This will help GIMPS in the short term, but I do understand we should have fast WUs available to the community, like the ones we are doing now, which will only be useful to GIMPS in 20-30 years' time. It's a threshold.

Extrapolating my timings, I would get 40-45 mins per WU at n<=199M from 72 to 73 bits.

Since I can run the standalone client, and it is not feasible to run these bigger WUs on slow GPUs anyway, whatever we decide I'm perfectly fine with it.

PS: my card is an AMD Radeon HD 7670.

dannyridel
Joined: 21 Jul 19
Posts: 63
Credit: 16,001,619
RAC: 93,795
Message 6517 - Posted: 29 May 2020, 4:47:25 UTC - in response to Message 6515.

Maybe for higher ranges, instead of going from 70 to 71 bits, just go straight to 72 or 73 instead. You will have to run some trials for timings.


How would people here in general feel about doing lower n, breadth first, to the max bit level, i.e. n>96.83M to n<=120M to 78 bits? I know it is eventually a huge increase in testing time, but does it really make a difference for you as a user when it comes to supporting this project or not?

I'm asking because, if we really want to benefit and boost this project, running that suggested n range to 78 bits is the way to move forward. I know it will steal some of the low-hanging fruit. What does each user prefer in terms of short or long running tasks?

I reckon Rebirther is the one who makes the final decision, but it is seriously as little as changing one setting in the .ini file, and then everything from work creation to validation will work the same, even if we decide to run multiple bits per workunit.

The offer, Rebirther, that I made in a private message still stands when it comes to creating the next bit range, if you won't go all remaining bits up to 78 but breadth first, increasing by 1 bit per workunit - all we have to do is agree with George that we can reserve and keep everything in that range until we reach 78 bits.

Has the new way of showing progress made it possible to have a mix of bits?

I understand the need to go breadth first, but we also have to use our resources to best help Primenet, and as mentioned in the private message, going breadth first but completing the range n>96.83M to n<=120M to 78 bits, one bit at a time, won't be difficult, and I will no doubt help you do it flawlessly :)

Everyone with an opinion or enlightenment, please let me hear what you think, preferably from both slow and fast GPU users :)


I suggest we do something between 73 and 77, since it would be a compromise for both fast and slow GPU users. I only have laptop GPUs, so I'd prefer something small that will still fast-forward Primenet in the short term.

KEP
Volunteer tester
Joined: 28 Nov 14
Posts: 92
Credit: 1,102,770
RAC: 0
Message 6518 - Posted: 29 May 2020, 8:09:16 UTC - in response to Message 6516.

In my opinion we should test the lower n range from 72 to 73 bits to get a feel for the timings needed; we have almost 470k units in there. This will help GIMPS in the short term, but I do understand we should have fast WUs available to the community, like the ones we are doing now, which will only be useful to GIMPS in 20-30 years' time. It's a threshold.

Extrapolating my timings, I would get 40-45 mins per WU at n<=199M from 72 to 73 bits.


If, at n=200M, you use 42.5 minutes on average to go from 72 to 73 bits, it means that at n=120M (the likely starting point) you would spend ~70.8 minutes going from 72 to 73 bits. That could be too much for some. An RTX 2080 (if 50x faster) would spend ~1m25s on the same n/bit range. It looks like Rebirther's approach is correct and that running breadth first is best suited for all users. Last time we jumped to 77-78 bits, a lot of the slow GPUs vanished. Going breadth first may actually be the proper way to keep our momentum.
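The extrapolation above follows from the factor form q = 2kn+1: for a fixed bit range, the number of candidates k, and hence the runtime, scales roughly as 1/n. A small sketch under that assumption, using the 42.5-minute figure from this post (`scale_runtime` is a hypothetical helper for illustration):

```python
# Trial-factoring runtime for a fixed bit range scales roughly as 1/n:
# candidates are q = 2*k*n + 1, so a smaller exponent n leaves more
# values of k below a given bit limit.

def scale_runtime(minutes, n_from, n_to):
    """Scale a measured per-WU runtime from exponent n_from to n_to."""
    return minutes * n_from / n_to

# 42.5 min at n=200M, extrapolated down to n=120M:
est = scale_runtime(42.5, 200e6, 120e6)
print(round(est, 1))           # 70.8 minutes, matching the post

# A GPU ~50x faster would need est/50 minutes, i.e. about 1m25s:
print(round(est / 50 * 60))    # 85 seconds
```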

Thanks for your feedback. Now let's see if we can actually clear all exponents to 72 bits before the end of the year (maybe 73 bits, if enough high-end users come to our aid) :)

[AF>Amis des Lapins] Jean-Luc
Joined: 12 Mar 18
Posts: 21
Credit: 1,757,370,846
RAC: 8,282,163
Message 6519 - Posted: 29 May 2020, 13:43:11 UTC - in response to Message 6518.

Personally, I don't mind having long WUs to compute, because I have a very powerful GPU.
Short WUs also suit me.
But if you need to test at 78 or 80 bits, I'm willing!
Otherwise, I take what comes...
It all depends on what GIMPS needs most.

MAGPIE
Joined: 3 Aug 16
Posts: 7
Credit: 421,307,264
RAC: 0
Message 6520 - Posted: 30 May 2020, 4:20:43 UTC

I'm fine with whatever.

Wish I could afford a 2080 Ti... sadly I guess this will never happen... tsk






Copyright © 2014-2024 BOINC Confederation / rebirther