Posts by PDW
1) Message boards : Number crunching : Trial Factoring tests (Message 6723)
Posted 18 days ago by Profile PDW

I did the CUDA thing: 1804 is gone and 2004 is in its place, so I tried that and of course it failed, as I'm using Linux Mint 19.3. I will work on just the Lib 10 files


https://mrprajesh.blogspot.com/2018/11/install-cuda-10-on-linux-mint-19-or.html


Thank you, but it fails at the part where it says wget...cuda-repo-ubuntu... and reports "no such directory"

There is no wget command on that web page!
2) Message boards : Number crunching : Gpu app not working linux (Message 6643)
Posted 20 Jul 2020 by Profile PDW
You need to install the Cuda toolkit 10.1, see the FAQ here for linux...
http://srbase.my-firewall.org/sr5/forum_thread.php?id=6
3) Message boards : Number crunching : Work available?? (Message 6606)
Posted 6 Jul 2020 by Profile PDW
There have been 10 batches in preparation for the last 22 hours; it takes time.
You can see this in the Science sub-forum http://srbase.my-firewall.org/sr5/forum_forum.php?id=3
4) Message boards : News : server outage / db crash again (Message 6467)
Posted 21 May 2020 by Profile PDW
Thanks reb :)
5) Message boards : News : server outage / db crash again (Message 6397)
Posted 17 May 2020 by Profile PDW
Hi reb, Windows 10, say no more :(

I stopped when I saw my account was back in olden times!

Fortunately I was only doing TF and SR Average WUs.
Based on the last Free-DC stats, my highest BOINC client figure (which was updating roughly every 30 seconds, so should be quite close), plus the completed work on my machines waiting to report, can you amend these 3 values please:

TF = 274,795,836
S/R Base Average = 3,035,424
Total = 338,843,676

Completed work not uploaded was 90k for TF and 13,580 for Average.

Thanks

Edit: As you have started new work I have noted that the uplift should be as follows please:

TF plus 24,906,500
S/R Base Average plus 50,080
Total plus 24,956,580
6) Message boards : Number crunching : Trial Factoring (Message 6319)
Posted 20 Apr 2020 by Profile PDW

../../projects/srbase.my-firewall.org_sr5/wrapper_26016-lt3_x86_64-pc-linux-gnu[0x40548a]


Was this on linux or windows?

It was on only one of my linux machines, the others all had failed to download.
7) Message boards : Number crunching : Trial Factoring (Message 6315)
Posted 20 Apr 2020 by Profile PDW
Was surprised to see that some did manage to download but they didn't last long when they ran...

process exited with code 193 (0xc1, -63)
<stderr_txt>
18:00:10 (97312): wrapper (7.16.26016): starting
18:00:10 (97312): wrapper (7.16.26016): starting
18:00:10 (97312): wrapper: running ./mfaktc.exe ()
SIGSEGV: segmentation violation
Stack trace (8 frames):
../../projects/srbase.my-firewall.org_sr5/wrapper_26016-lt3_x86_64-pc-linux-gnu[0x42ad40]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x12890)[0x7f36f6ceb890]
/lib/x86_64-linux-gnu/libc.so.6(+0xb1e55)[0x7f36f6999e55]
../../projects/srbase.my-firewall.org_sr5/wrapper_26016-lt3_x86_64-pc-linux-gnu[0x409392]
../../projects/srbase.my-firewall.org_sr5/wrapper_26016-lt3_x86_64-pc-linux-gnu[0x40970e]
../../projects/srbase.my-firewall.org_sr5/wrapper_26016-lt3_x86_64-pc-linux-gnu[0x4088e3]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0x7f36f6909b97]
../../projects/srbase.my-firewall.org_sr5/wrapper_26016-lt3_x86_64-pc-linux-gnu[0x40548a]
Exiting...
8) Message boards : Number crunching : Trial Factoring (Message 6292)
Posted 19 Apr 2020 by Profile PDW
Okay, it appears that credit needs to be a further 6.56 times higher.

May I ask, is the credit given by the Collatz Conjecture inflated, or does it reflect the actual amount of computation done to complete a single task?

I'm asking because 11,969,000 for a GTX 2080 Ti sounds like a lot, and at least more than what PG hands out for PPS Sieve. The original intention of the cobblestone is to reflect the actual calculations done, not to be used as a means to attract users, for instance by inflating the credit given per workunit.

I just did a comparison with PPS Sieve at PrimeGrid, and it appears that a GTX 2080 Ti produces 2,722,003 cobblestones each day. Even taking that into account, we are still off by a factor of 1.527. I must stress that, just as with money, credit should not be inflated. Inflation makes credit lose its meaning, and we might end up, in the best case, with a damaging competition between various projects; in the worst case, credit might have to devolve to simply counting the workunits completed. Both scenarios are unwanted, so let's do our best to keep it real :)

Collatz is inflated.

"by a factor of 1.527" - this is roughly what I'd expect; perhaps as high as 1.6 to match PPS Sieve and Moo credits (based on what I remember!)
9) Message boards : Number crunching : Trial Factoring (Message 6285)
Posted 19 Apr 2020 by Profile PDW
It still says GPU72 on a user's home page...
10) Message boards : Number crunching : Trial Factoring (Message 6279)
Posted 19 Apr 2020 by Profile PDW
Pls stop/abort all the current work for GPU72.


You had clients preparing themselves for a BOINC challenge, and you said you were only going to make server changes when the server ran dry. You should accept previous WUs when returned by clients.

Too late: having realised he had removed them from the server, I went and found them still running on the client, so I aborted them there as well.
11) Message boards : Number crunching : Trial Factoring (Message 6276)
Posted 19 Apr 2020 by Profile PDW
Pls stop/abort all the current work for GPU72.

Can't, someone already killed them!
12) Message boards : Number crunching : Trial Factoring tests (Message 6253)
Posted 17 Apr 2020 by Profile PDW
I started running this task yesterday evening on the second device, but it was going to device 0 by default even though it said device 1.
I carried on running it overnight outside of BOINC until GPU 0 was available, completed it on device 0 and reported it through BOINC.

<core_client_version>7.9.3</core_client_version>
<![CDATA[
<stderr_txt>
18:53:44 (21480): wrapper (7.2.26012): starting
18:53:44 (21480): wrapper: running ./mfaktc.exe ( --device 1)
19:01:46 (21530): wrapper (7.2.26012): starting
19:01:46 (21530): wrapper: running ./mfaktc.exe ( --device 1)
19:04:28 (21551): wrapper (7.2.26012): starting
19:04:28 (21551): wrapper: running ./mfaktc.exe ( --device 1)
19:23:31 (21671): wrapper (7.2.26012): starting
19:23:31 (21671): wrapper: running ./mfaktc.exe ( --device 1)
20:56:42 (22038): wrapper (7.2.26012): starting
20:56:42 (22038): wrapper: running ./mfaktc.exe ( --device 0)
10:31:40 (126157): wrapper (7.2.26012): starting
10:31:40 (126157): wrapper: running ./mfaktc.exe ( --device 0)
18:56:42 (126157): ./mfaktc.exe exited; CPU time 24.265708
18:56:42 (126157): called boinc_finish
</stderr_txt>
]]>

Do you have any plans to work on getting '--device' changed to '-d' for mfaktc so it can work as it should?
It is easy enough for me just to run a second client and put a GPU in each but most won't want, or know how, to do that.
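For anyone who does not want a second client, a common alternative is to exclude a GPU per project in the client's cc_config.xml so each device only runs certain work. This is only a sketch (the URL and device number are illustrative), and it only controls which device BOINC schedules on; it does not fix mfaktc's flag parsing:

```xml
<!-- cc_config.xml fragment: stop this client using device 1 for SRBase,
     leaving that GPU free for another project or client instance. -->
<cc_config>
  <options>
    <exclude_gpu>
      <url>http://srbase.my-firewall.org/sr5/</url>
      <device_num>1</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```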
13) Message boards : Number crunching : Trial Factoring tests (Message 6234)
Posted 15 Apr 2020 by Profile PDW
So currently only one mapping, --device 0 to d 00, is possible.

I don't think you are mapping anything!

On mfaktc everything goes to the first device because the program does not recognise any of the command-line options telling it which GPU device to use, so it defaults to the first one. The wrapper seems to know the right device number to use, but it is ignored because it is not formatted correctly.
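One way to bridge that gap, purely as a sketch, would be a tiny shell shim sitting in front of the real mfaktc binary, rewriting the wrapper's "--device N" into the "-d N" flag mfaktc understands. The function name and the idea of renaming the real binary are mine, not part of the project:

```shell
#!/bin/sh
# Hypothetical shim: translate "--device N" into "-d N" and pass all
# other arguments through unchanged.
translate_args() {
  out=""
  while [ $# -gt 0 ]; do
    if [ "$1" = "--device" ]; then
      shift                      # consume "--device"; $1 is now the number
      out="$out -d $1"
    else
      out="$out $1"
    fi
    shift
  done
  # A real shim would exec the renamed binary with $out;
  # here we just print the command line it would run.
  printf '%s\n' "./mfaktc.exe$out"
}

translate_args --device 1        # prints "./mfaktc.exe -d 1"
```

The same idea could instead be folded into the wrapper's job description, but a shim needs no wrapper changes.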
14) Message boards : Number crunching : Trial Factoring tests (Message 6228)
Posted 15 Apr 2020 by Profile PDW
I have only been looking at mfaktc on Linux for NVidia; I don't have any working AMD cards. I only have the one dual-GPU box that I put together this afternoon to try.

The output in stderr.txt says:
wrapper: running ./mfaktc.exe ( --device 1)
are you actually passing "-d 1" to the mfaktc program?
This is a second task, the first task going to first GPU says ( --device 0).

In the paused BOINC slot for the second task (which started on the first GPU) I can type "sudo ./mfaktc.exe -d 1" and it will run on the second GPU. It needs sudo to create the checkpoint file. Any attempt other than "-d 1" makes it run on the first GPU again.

If the wrapper has managed to work out the correct device number to pass to mfaktc (which it looks like it has on my dual-GPU system), then I don't understand why it wouldn't run on the second GPU?


Theoretically both applications can support more than one (different) GPU.
But: BOINC enumerates the GPUs as 0, 1, 2, ....
In OpenCL you have platforms, e.g. Intel=0, AMD=1, NVidia=2, and for each platform 1..n GPU devices.

A mapping from 0, 1, 2 to 00, 10, 11 is different for each computer with more than one graphics device.

So currently only one mapping, --device 0 to d 00, is possible.
15) Message boards : Number crunching : Trial Factoring tests (Message 6223)
Posted 15 Apr 2020 by Profile PDW
Sorry to interject in your development process.

Why are you using "--device x" for mfakto and mfaktc ?
Both their guides say to use "-d x"

Q: Does mfakto support multiple GPUs?
A: No, but you can use the -d option to tell an instance to run on a specific
device. Please also read the next question.

Q: Does mfaktc support multiple GPUs?
A: Yes, with the exception that a single instance of mfaktc can only use one
GPU. For each GPU you want to run mfaktc on you need (at least) one
instance of mfaktc. For each instance of mfaktc you can use the
commandline option "-d <GPU number>" to specify which GPU to use for each
specific mfaktc instance. Please read the next question, too.

I can run a second task on a second GPU but only if I specify "-d x", as soon as I pass "--device x" (or "-device x") on the command line it defaults to the first GPU.

Can you use "-d x" for the Linux wrapper at least, please?
16) Message boards : Number crunching : Trial Factoring (Message 6197)
Posted 14 Apr 2020 by Profile PDW
from my assignment in total left:

73-74 = 227 tasks
77-78 = 4546 tasks

That's only about 10 more days based on yesterday's consumption, according to a very rough calculation, isn't it?
If more join in even less, though some may wander off to bunker for the Pentathlon starting early next month...
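Writing that rough calculation out: the daily consumption rate below is my own assumption (yesterday's actual figure is not given in the thread), picked so the result lands near the 10 days mentioned above:

```shell
# Tasks left, from the assignment figures quoted above
remaining=$((227 + 4546))        # = 4773
# Assumed consumption of ~480 tasks/day (hypothetical figure)
rate=480
echo $((remaining / rate))       # whole days of work left; prints 9
```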
17) Message boards : Number crunching : Trial Factoring (Message 6065)
Posted 9 Apr 2020 by Profile PDW
Well, if it could be changed, I think the name of the subproject could be..um..more artistic? Though I'm pure brain and next to no art, I think TF78 would even be better ;)

Maybe SRB72, or Reb72, or even UPU72 :D
Why limit yourself to just 72, think bigger :)
18) Message boards : Number crunching : Trial Factoring tests (Message 5969)
Posted 30 Mar 2020 by Profile PDW
<app_config>
  <app>
    <name>GPU72</name>
    <project_max_concurrent>1</project_max_concurrent>
    <gpu_versions>
      <gpu_usage>1</gpu_usage>
      <cpu_usage>0.01</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

You should change the line:
<project_max_concurrent>1</project_max_concurrent>
to be...
<max_concurrent>1</max_concurrent>

This will restrict GPU72 to only running 1 task but other SRBase CPU tasks can be run at the same time.

The <project_max_concurrent> x </project_max_concurrent> tags don't go within the <app> </app> section.
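Put together, the corrected file would look like this (same values as the quoted config, with the tag renamed and kept inside <app>):

```xml
<app_config>
  <app>
    <name>GPU72</name>
    <!-- per-app limit: at most one GPU72 task at a time -->
    <max_concurrent>1</max_concurrent>
    <gpu_versions>
      <gpu_usage>1</gpu_usage>
      <cpu_usage>0.01</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```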
19) Message boards : Number crunching : Trial Factoring tests (Message 5964)
Posted 30 Mar 2020 by Profile PDW
In case it helps now, or in the future !

I believe BOINC numbers GPUs in the order it receives information about them from CAL, CUDA or OpenCL drivers. (This from the BOINC forum a couple of years ago.)

I do not know whether mfakto/mfaktc, when given a device number, uses the BOINC GPU numbering or a number assigned by the OS when it enumerates the GPUs.

Just putting it out there :)
20) Message boards : Number crunching : Trial Factoring (Message 5892)
Posted 28 Mar 2020 by Profile PDW
Funny thing is that the system with the A10-5700 has three HD6670 cards, and its own IGP, and crunches four of the GPU72 WUs at the same time.

Those four WUs will have been running on just 1 of the GPUs.





Copyright © 2014-2020 BOINC Confederation / rebirther