Posts by bluestang
21) Message boards : News : provider / network outage (Message 7073)
Posted 2 Dec 2020 by bluestang
Can we skip this year ^^


I'm game for that...right to mid/late 2021 :)
22) Message boards : Number crunching : Download Failed error (Message 6983)
Posted 19 Nov 2020 by bluestang
Ok, thanks.
23) Message boards : Number crunching : Download Failed error (Message 6981)
Posted 19 Nov 2020 by bluestang
I thought it was only my one machine, but it's happening on all of them.
24) Message boards : Number crunching : Download Failed error (Message 6980)
Posted 18 Nov 2020 by bluestang
I'm getting:

"11/18/2020 5:58:55 PM | SRBase | Giving up on download of worktodo13a83_0055604.txt: permanent HTTP error"

This machine http://srbase.my-firewall.org/sr5/show_host_detail.php?hostid=205296


EDIT:
Stderr output
<core_client_version>7.16.11</core_client_version>
<![CDATA[
<message>
WU download error: couldn't get input files:
<file_xfer_error>
<file_name>worktodo13a83_0055232.txt</file_name>
<error_code>-224 (permanent HTTP error)</error_code>
<error_message>permanent HTTP error</error_message>
</file_xfer_error>
</message>
]]>


Any ideas?

Thanks,
blue
25) Message boards : Number crunching : SRBase has been chosen for Formula Boinc Sprint (Message 6838)
Posted 22 Oct 2020 by bluestang
What a quick and awesome response by the Project Admin. Wish all projects were on top of their game like this one.

Thanks rebirther!
26) Message boards : Number crunching : Distribution of Credits (Message 6795)
Posted 17 Oct 2020 by bluestang
How much lower is the GPU credit going to be reduced to?

Impossible to work out the average credit for any GPU because it gets reduced every couple of days :(

Only 300 per WU now.


The credits for TF are around 10k per hour. The last bunch of WUs for the 71-72 range were set up so the next batch would be longer, as we're starting the 72-73 range soon; a news post will follow.


I understand the theory behind reducing credits as the GPU tasks get shorter, but it does not work out to getting 10k per hour all the time. If that were the case, the PPD of a GPU would stay the same every day...but it doesn't. Just saying :)
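
A quick back-of-envelope check using the figures quoted above (my own arithmetic, not the admin's numbers):

10,000 credits/hour ÷ 300 credits/WU ≈ 33 WUs/hour
3,600 s/hour ÷ 33 WUs/hour ≈ 108 s/WU

So the ~10k/hour rate only holds for a GPU averaging roughly 108 seconds per WU; anything faster or slower lands at a different PPD, which matches the day-to-day swings.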
27) Message boards : News : New work for TF added (Message 6569)
Posted 7 Jun 2020 by bluestang
Ok, thanks for the information! Not worth the effort for him to even consider it. I was just curious about the results.

Thanks, you guys are really on top of things here, and that goes a long way with us crunchers.
28) Message boards : Number crunching : Trial Factoring (Message 6566)
Posted 7 Jun 2020 by bluestang
I'll assume points are bouncing around from 190 to 180 and now 170 because of runtimes?
29) Message boards : News : New work for TF added (Message 6565)
Posted 7 Jun 2020 by bluestang
For your information: in 8-9 days, my RTX 2080 Ti calculated about 30,000 tasks (and I discovered 430 factors!), and the calculation time of a task went from 23-24 seconds to 20-21 seconds for 70-71 bits!


Where do you find your discovered factors?
30) Message boards : Number crunching : Trial Factoring (Message 6522)
Posted 30 May 2020 by bluestang
Is it possible to have it all? I mean, let users choose in Project Preferences which WUs to run based on their GPU? That way slow GPUs can pick the short WUs and fast GPUs can pick the long WUs. Or is that too much of a pain to set up on the server, to feed the proper WUs based on that selection?
31) Message boards : News : server outage / db crash again (Message 6414)
Posted 17 May 2020 by bluestang
Make sure you have some cold beer for yourself at the ready!
32) Message boards : News : server outage / db crash again (Message 6406)
Posted 17 May 2020 by bluestang
Last Free-DC stat: 44,319,701
Currently on account: 44,225,701

What should we do with work that's waiting to upload but can't?
33) Message boards : Number crunching : Trial Factoring tests (Message 6307)
Posted 19 Apr 2020 by bluestang
I've patched mfakto, not mfaktc.
So the mapping of --device 0 to -d 00 is only implemented for AMD, not NVIDIA.
(and only on Linux)


This is ridiculous. You're screwing us on Windows...please fix/implement this so it works properly there too.
34) Message boards : Number crunching : Trial Factoring tests (Message 6306)
Posted 19 Apr 2020 by bluestang
Started running this task yesterday evening on the second device, but it was going to device 0 by default even though it said device 1.
Carried on running it overnight outside of BOINC until GPU 0 was available, to complete it on device 0 and report it through BOINC.

<core_client_version>7.9.3</core_client_version>
<![CDATA[
<stderr_txt>
18:53:44 (21480): wrapper (7.2.26012): starting
18:53:44 (21480): wrapper: running ./mfaktc.exe ( --device 1)
19:01:46 (21530): wrapper (7.2.26012): starting
19:01:46 (21530): wrapper: running ./mfaktc.exe ( --device 1)
19:04:28 (21551): wrapper (7.2.26012): starting
19:04:28 (21551): wrapper: running ./mfaktc.exe ( --device 1)
19:23:31 (21671): wrapper (7.2.26012): starting
19:23:31 (21671): wrapper: running ./mfaktc.exe ( --device 1)
20:56:42 (22038): wrapper (7.2.26012): starting
20:56:42 (22038): wrapper: running ./mfaktc.exe ( --device 0)
10:31:40 (126157): wrapper (7.2.26012): starting
10:31:40 (126157): wrapper: running ./mfaktc.exe ( --device 0)
18:56:42 (126157): ./mfaktc.exe exited; CPU time 24.265708
18:56:42 (126157): called boinc_finish
</stderr_txt>
]]>

Do you have any plans to work on getting '--device' changed to '-d' for mfaktc so it can work as it should?
It is easy enough for me just to run a second client and put a GPU in each, but most won't want, or know how, to do that.



What is your cc_config and app_config setup? I was able to run 2 instances to get both GPUs working on 1 WU each, but now with the changes (and who knows what was changed, since apparently no one does) it will only run on my 1st GPU no matter what I've tried. I'm on Windows 10 with 2x GTX 1660 Ti. My old setup looked roughly like the sketch below.
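
For reference, the two-instance approach looked roughly like this (a sketch from memory; the path and RPC port are placeholders, while --allow_multiple_clients, --dir, and --gui_rpc_port are standard BOINC client options):

boinc.exe --allow_multiple_clients --dir "C:\BOINC2" --gui_rpc_port 31418

Then each instance's cc_config.xml hides the other card, e.g. so the second instance only sees GPU 1:

<cc_config>
  <options>
    <ignore_nvidia_dev>0</ignore_nvidia_dev>
  </options>
</cc_config>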
35) Message boards : Number crunching : Trial Factoring (Message 6305)
Posted 19 Apr 2020 by bluestang
<cmdline>-d 0</cmdline> (or -d 1) no longer works like it used to for telling it which GPU to use. This is on Windows 10 with NVIDIA GPUs.

EDIT: Had to reset the project as it still had the job_GPU72_x63c_00002.xml file in the folder. After a reset, everything is working like it was before the change. Let's hope it stays that way.
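
For anyone trying the same thing, that <cmdline> line sits in an app_config.xml roughly like this (a minimal sketch; the GPU72 app name matches the <app> tag used in the exclude_gpu examples further down this page, and whether the wrapper forwards it as -d or --device is exactly the open question):

<app_config>
  <app_version>
    <app_name>GPU72</app_name>
    <cmdline>-d 0</cmdline>
  </app_version>
</app_config>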
36) Message boards : Number crunching : Trial Factoring (Message 6296)
Posted 19 Apr 2020 by bluestang
This project just turned into a clusterf*ck, with the WUs being aborted, and even the ones some people finished will get no credit now...WTF!

Not trying to be a dick, but think about WTF you're doing before you do it, instead of acting without any advance notice. You know you can send notices to the BOINC Manager, right? lol
37) Message boards : News : Trial Factoring - new subproject (beta) (Message 6097)
Posted 10 Apr 2020 by bluestang
It's always been like that from the start. The only difference now is the amount of time it takes to actually complete after it hits 100%. It's by far worse on AMD than on NVIDIA. I think NVIDIA is pretty accurate, as it looks like it tracks time properly with the progress bar.

I think it's a good idea to put that info out there so people don't abort. I knew it was still doing work, as I look at my GPU utilization before doing anything drastic like aborting tasks that have run for so long.

Also, are there checkpoints? If I have a power issue or have to reboot, do I lose my current work up to that point, or is there a checkpoint saved for it to go back to?
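
For what it's worth, standalone mfaktc controls checkpointing from mfaktc.ini with lines like the ones below (option names are from stock mfaktc; whether the BOINC wrapper here ships the same settings, and what the default delay is, I'm only guessing):

# write checkpoint files so work survives a crash or reboot (0 = off, 1 = on)
Checkpoints=1
# seconds between checkpoint writes (value shown is a guess, not a confirmed default)
CheckpointDelay=300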
38) Message boards : News : Trial Factoring - new subproject (beta) (Message 6093)
Posted 10 Apr 2020 by bluestang
The progress bar is nowhere near accurate. It gets to 100% fairly quickly compared to the runtime of the WU, but then sits at 100% for a long time, still computing, until it is actually finished.
39) Message boards : Number crunching : Trial Factoring tests (Message 6000)
Posted 3 Apr 2020 by bluestang
I'm beginning to think AMD needs the "-d xx" format and NVIDIA is fine with the "-d x" format? Or is it because it's Linux?
40) Message boards : Number crunching : Trial Factoring tests (Message 5981)
Posted 31 Mar 2020 by bluestang
On one of my systems I have a HD7950 (Dev 0) and a Vega 64 (Dev 1). If I include the following in the options section of my cc_config.xml file:

<exclude_gpu>
  <url>http://srbase.my-firewall.org/sr5/</url>
  <type>ATI</type>
  <device_num>0</device_num>
  <app>GPU72</app>
</exclude_gpu>


That should tell it to ignore my 7950 (Dev 0) and run on my Vega 64. However, that is not the case. It still runs on the 1st device no matter what. The status column in BOINC Manager says it's running on Device 1, like the cc_config.xml tells it to, but according to GPU utilization in both GPU-Z and HWiNFO it is still running on the 1st device.

EDIT: So by the default nature of the app, it seems it will always use the 1st device no matter what it is told in cc_config or app_config? At least on AMD GPUs. NVIDIA is a different story, I think?


I'm successfully using this in my cc_config to exclude the second GPU in all of my two-GPU systems, so that one doesn't go to waste when running GPU72. I am running another project on the second GPU in those rigs.


Yes, on NVIDIA. Not on AMD.
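
Based on that, the NVIDIA equivalent of the exclude above should just swap the type and device number (a sketch built from the AMD example; NVIDIA is a valid <type> value in cc_config's <exclude_gpu>):

<exclude_gpu>
  <url>http://srbase.my-firewall.org/sr5/</url>
  <type>NVIDIA</type>
  <device_num>1</device_num>
  <app>GPU72</app>
</exclude_gpu>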

