Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC (2024)

Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)

High priority Framework issues:

  • Fix dropping lifetime::timeframe for good: Still pending: problem with CCDB objects getting lost by DPL, leading to "Dropping lifetime::timeframe"; saw at least one occurrence during SW validation.
    • Recently this seems to happen more often in QC, which hopefully simplifies the debugging, since we can reproduce it more or less reliably in staging (still not locally).
  • Start / Stop / Start: 2 problems on O2 side left:
      • All processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints at FMQ. https://its.cern.ch/jira/browse/O2-4639
      • TPC ITS matching QC crashing accessing CCDB objects. Not clear if same problem as above, or a problem in the task itself:
        • Giulio proposed a test on staging with export MALLOC_CHECK_=3; could you follow this up? (A minimal illustration of what this catches is at the end of this list.)
  • Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308: Status?
  • Fix problem with ccdb-populator: no idea yet:
    • Ole will try to create a reproducer. Not sure if he will still find time. Status?
  • Memory leak in DPL internal-ccdb-backend: Are we sure that all leaks are fixed now?
    • There was also a leak reported in the TRD calib task. Is that understood? My understanding is that it is TRD-internal?
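  • A minimal illustration of what the MALLOC_CHECK_=3 test above would catch (a sketch assuming glibc; on glibc >= 2.34 the libc_malloc_debug.so preload is additionally required):

      // Run as: MALLOC_CHECK_=3 ./a.out
      // MALLOC_CHECK_=3 makes the glibc allocator validate heap state and
      // abort on corruption, so the process dies at the faulty free()
      // instead of at a later, seemingly unrelated point.
      #include <cstdlib>

      int main()
      {
        char* buf = static_cast<char*>(std::malloc(8));
        std::free(buf);
        std::free(buf); // double free: with MALLOC_CHECK_=3, glibc aborts here
        return 0;
      }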

Other framework tickets:

  • We need to make progress with these tickets at some point...
  • https://github.com/AliceO2Group/AliceO2/pull/12976 : Better DPL backpressure reporting: Was merged and tested at P2 but does not work, since something was missing. 2 new PRs for O2 and QC are open. I think it would be good to double-check our changes locally before asking RC to test them at P2; this saves time and is also better for our credibility.
  • TOF problem with receiving condition in tof-compressor: https://alice.its.cern.ch/jira/browse/O2-3681
  • Grafana metrics: Might want to introduce additional rate metrics that subtract the header overhead to have the pure payload: low priority.
  • Backpressure reporting when there is only 1 input channel: no progress: https://alice.its.cern.ch/jira/browse/O2-4237
  • Stop entire workflow if one process segfaults / exits unexpectedly. Tested again in January, still not working despite some fixes. https://alice.its.cern.ch/jira/browse/O2-2710
  • https://alice.its.cern.ch/jira/browse/O2-1900 : FIX in PR, but has side effects which must also be fixed.
  • https://alice.its.cern.ch/jira/browse/O2-2213 : Cannot override debug severity for tpc-tracker
  • https://alice.its.cern.ch/jira/browse/O2-2209 : Improve DebugGUI information
  • https://alice.its.cern.ch/jira/browse/O2-2140 : Better error message (or a message at all) when input missing
  • https://alice.its.cern.ch/jira/browse/O2-2361 : Problem with 2 devices of the same name
  • https://alice.its.cern.ch/jira/browse/O2-2300 : Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
  • https://its.cern.ch/jira/browse/O2-4759: Run getting stuck when too many TFs are in flight.
  • https://its.cern.ch/jira/browse/O2-4234: Reduce obsolete DPL metrics
  • https://its.cern.ch/jira/browse/O2-4860: Do not use string comparisons to derive the processor type, since DeviceSpec.name is user-defined (a sketch of an explicit type tag is at the end of this list).
  • Found a reproducible crash (while fixing the memory leak) in the TOF compressed-decoder at workflow termination, if the wrong topology is running. Not critical, since it only occurs at termination, and fixing the topology avoids it in any case. But we should still understand and fix the crash itself. A reproducer is available.
  • Support in DPL GUI to send individual START and STOP commands.
  • The problem I mentioned last time with non-critical QC tasks and the DPL CCDB fetcher is real. It will need some extra work to solve; otherwise non-critical QC tasks will stall the DPL chain when they fail.
  • DPL sending SHM metrics for all devices, not only input proxy: https://alice.its.cern.ch/jira/browse/O2-4234
  • Some improvements to ease debugging: https://alice.its.cern.ch/jira/browse/O2-4196 https://alice.its.cern.ch/jira/browse/O2-4195 https://alice.its.cern.ch/jira/browse/O2-4166
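  • A hypothetical sketch for the O2-4860 item above (ProcessorType and the explicit type field are illustrative, not the actual DeviceSpec API): attach a type tag to each processor at creation instead of inferring it from the user-defined name.

      #include <string>

      enum class ProcessorType { Source, Processor, Sink, Unknown };

      struct DeviceSpecLike {
        std::string name;                            // user-defined, unreliable
        ProcessorType type = ProcessorType::Unknown; // explicit, set at creation
      };

      inline bool isSink(const DeviceSpecLike& spec)
      {
        // Instead of e.g.: spec.name.find("writer") != std::string::npos
        return spec.type == ProcessorType::Sink;
      }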

Global calibration topics:

  • TPC IDC and SAC workflow issues to be reevaluated with new O2 at restart of data taking. Cannot reproduce the problems any more.

Async reconstruction

  • Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes. Checking 2 things:
    • does the situation get better without GPU monitoring? --> Inconclusive
    • We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
  • MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.

EPN major topics:

  • Fast movement of nodes between async / online without EPN expert intervention.
    • 2 goals I would like to set for the final solution:
      • It should not be necessary to stop the SLURM schedulers when moving nodes; there should be no limitation for ongoing runs at P2 or ongoing async jobs.
      • We must not lose track of which nodes are marked as bad while moving.
  • Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
    • Lubos to provide an interface to query the current EPN SHM settings - ETA July 2023. Status?
  • Improve DataDistribution file replay performance: currently it cannot go faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
  • DataDistribution distributes data round-robin in absence of backpressure, but it would be better to do it based on buffer utilization and give more data to the MI100 nodes. Currently we drive the MI50 nodes at 100% capacity with backpressure, and only backpressured TFs go to the MI100 nodes. This increases the memory pressure on the MI50 nodes, which is a critical point anyway (a sketch of utilization-based target selection is at the end of this list). https://alice.its.cern.ch/jira/browse/EPN-397
  • TfBuilders should stop in ERROR when they lose connection.
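  • A sketch of the utilization-based target selection mentioned for EPN-397 (TfTarget and pickTarget are illustrative names, not the DataDistribution API): pick the node with the lowest buffer utilization, weighted by a per-node capacity factor so MI100 nodes receive proportionally more TFs than MI50 nodes.

      #include <algorithm>
      #include <cstddef>
      #include <vector>

      struct TfTarget {
        double bufferUsedFraction; // 0.0 .. 1.0, e.g. from SHM monitoring
        double capacityWeight;     // larger for MI100 than for MI50
      };

      // Returns the index of the node that should receive the next TF
      // (assumes a non-empty node list).
      std::size_t pickTarget(const std::vector<TfTarget>& nodes)
      {
        auto score = [](const TfTarget& n) {
          return n.bufferUsedFraction / n.capacityWeight; // lower is better
        };
        return std::distance(
            nodes.begin(),
            std::min_element(nodes.begin(), nodes.end(),
                             [&](const TfTarget& a, const TfTarget& b) {
                               return score(a) < score(b);
                             }));
      }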

Other EPN topics:

  • Check NUMA balancing after SHM allocation, sometimes nodes are unbalanced and slow: https://alice.its.cern.ch/jira/browse/EPN-245
  • Fix problem with SetProperties string > 1024/1536 bytes: https://alice.its.cern.ch/jira/browse/EPN-134 and https://github.com/FairRootGroup/DDS/issues/440
  • After software installation, check whether it succeeded on all online nodes (https://alice.its.cern.ch/jira/browse/EPN-155) and consolidate software deployment scripts in general.
  • Improve InfoLogger messages when environment creation fails due to too few EPNs / calib nodes available, ideally report a proper error directly in the ECS GUI: https://alice.its.cern.ch/jira/browse/EPN-65
  • Create user for epn2eos experts for debugging: https://alice.its.cern.ch/jira/browse/EPN-383
  • EPNs sometimes get in a bad state, with CPU stuck, probably due to AMD driver. To be investigated and reported to AMD.

Raw decoding checks:

  • Add an additional check on the DPL level to make sure the firstOrbit received from all detectors is identical when creating the TimeFrame first orbit (see the sketch below).
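  • A sketch of the intended check (DetectorInput and the field names are assumptions, not the actual DPL types): collect the firstOrbit from every detector and fail loudly on any mismatch before using it as the TimeFrame first orbit.

      #include <cstdint>
      #include <optional>
      #include <stdexcept>
      #include <string>
      #include <vector>

      struct DetectorInput {
        std::string name;
        uint32_t firstOrbit;
      };

      uint32_t checkedTimeFrameFirstOrbit(const std::vector<DetectorInput>& inputs)
      {
        std::optional<uint32_t> ref;
        for (const auto& in : inputs) {
          if (!ref) {
            ref = in.firstOrbit; // first detector sets the reference
          } else if (in.firstOrbit != *ref) {
            throw std::runtime_error("firstOrbit mismatch for detector " + in.name);
          }
        }
        if (!ref) {
          throw std::runtime_error("no detector inputs to derive firstOrbit from");
        }
        return *ref;
      }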

Full system test issues:

  • New FST datasets generated for Pb-Pb 50 kHz, pp 650 kHz, pp 1.3 MHz

Topology generation:

  • Should test deploying the topology with the DPL driver, to have the remote GUI available.
    • The DPL driver needs to implement the FMQ state machine. Postponed until the YETS issues are solved.

QC / Monitoring / InfoLogger updates:

  • CTF/RAW Size monitoring: status?

AliECS related topics:

  • Extra env var field still not multi-line by default.

GPU ROCm / compiler topics:

  • Found new HIP internal compiler error when compiling without optimization: -O0 makes the compilation fail with an unsupported LLVM intrinsic. Disappeared with ROCm 6.x.
  • Found a new miscompilation with -ffast-math enabled in looper following; for now disabled -ffast-math. Seems fixed with ROCm >= 5.5.
  • Must create a new minimal reproducer for the compile error that appears when we enable the LOG(...) functionality in the HIP code. Verified that this is not a bug in our code but an internal compiler problem. AMD has a minimal reproducer.
  • Another compiler problem with template treatment, found by Ruben. Same problem as the previous one.
  • While debugging the calibration, debug output triggered another internal compiler error in the HIP compiler. Same problem as the previous one.
  • Most likely update of ROCm / OS to a stable RPM release only possible after Pb-Pb 2024.
  • List of open issues with AMD:
    • Application crashing with ROCm >= 6.x.
    • Bug report about non-working synchronization of kernel call and DMA transfer on MI100.
    • GPUs stalling (long stalls up to 24h in async, generally short stalls up to 1 min also in sync.)
    • Want to use an official RPM version, not a custom patched version.
    • Still waiting for a proper fix for the register-spilling problem, instead of using a workaround.
    • Need a proper fix for internal compiler error from the template code example.
  • ROCm 6.1 released, have a test node available.
    • Default ROCm 6.1 crashes in the same way as 6.0.
    • There is a new compiler behavior such that we can no longer compile our code with -O0. Filed a bug report to AMD.
    • Received from AMD instructions to build a custom compiler for 6.1, which fixes the known compiler issues.
      • With this, the standalone benchmark reproducer we gave them for the latest issues seems to be fixed, but it still crashes in the FST with a newer data set.
        • So far, I did not manage to reproduce this crash in any standalone test, so we did not yet file a proper bug report.
    • 6.1 fixes the performance regression we saw with 6.0, and is even ~1% faster than our current setup.

TPC GPU Processing

  • Bug in TPC QC with MC embedding: TPC QC does not respect the sourceID of MC labels, so it confuses tracks of signal and background events (see the sketch at the end of this list).
  • New problem with bogus values in TPC fast transformation map still pending. Sergey is investigating, but waiting for input from Alex.
  • Status of cluster error parameterizations
    • No progress yet on newly requested debug streamers.
    • Implemented usage of the new cluster errors in most parts of track seeding (still missing in the extension to adjacent sectors for short track segments, and in the refit before sector track merging). Already now it yields a 6% increase in total TPC processing time, while I do not see a significant performance change in MC. Asked TPC experts to check whether the results are good; I would not want to waste 6% of total time for nothing. Now that we have RTC, we can optionally enable/disable it with RTC.
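  • An illustration of the missing sourceID check in the TPC QC item above (MCLabel is a stand-in struct, not the actual o2::MCCompLabel API): under embedding, a signal track and a background track can share the same event and track IDs, so label comparison must include the sourceID.

      #include <cstdint>

      struct MCLabel {
        int32_t trackID;
        int32_t eventID;
        int32_t sourceID; // distinguishes signal from embedded background
      };

      inline bool sameMCParticle(const MCLabel& a, const MCLabel& b)
      {
        return a.trackID == b.trackID && a.eventID == b.eventID &&
               a.sourceID == b.sourceID; // the comparison that was missing
      }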

TPC processing performance regression:

  • O2/dev:
    • Total time 4.695s, Track Fit Time 1.147s, Seeding Time 1.241s
  • O2/dev with the commit from 4.3. reverted:
    • Total time 4.351s, Track Fit Time 1.089s, Seeding Time 1.008s
  • For reference, before introduction of the V-Shape map:
    • Total time 3.8421s (didn't measure individual times)
  • O2/dev with scaling factors hard-coded to 0 (essentially using one single transformation map without any scaling):
    • Total time 3.093s, Track Fit Time 0.682s, Seeding Time 0.429s
  • Proposed 3 ideas to speed up the map access (a sketch of idea 2 follows this list):
    1. We merge the maps on the fly into one combined map, and query only one map.
    2. We could add #ifdefs in the code, to make sure that for online purposes all the code for the non-static map is not compiled in.
    3. We could try to optimize the code to make it easier for the compiler.
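  • A sketch of idea 2 (all names are illustrative, not the actual O2 symbols): guard every code path that touches the non-static correction maps behind a compile-time switch, so the online build only ever sees the single static map and the compiler can fully optimize the hot query.

      #include <cmath>

      struct FlatMap { /* flat, static transformation map (placeholder) */ };

      inline float queryFlat(const FlatMap&, float pad, float time)
      {
        return pad + 0.1f * time; // stand-in for the real interpolation
      }

      inline float queryScaledCorrections(float pad, float time)
      {
        return 0.01f * std::sin(pad) * time; // stand-in for the M/V-shape maps
      }

      inline float queryTransform(const FlatMap& map, float pad, float time)
      {
      #ifdef GPUCA_ONLINE_BUILD // hypothetical switch, set for the sync build
        // Online: the non-static map code is not even compiled in.
        return queryFlat(map, pad, time);
      #else
        // Offline: static map plus the scaled correction maps.
        return queryFlat(map, pad, time) + queryScaledCorrections(pad, time);
      #endif
      }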
  • Outcome of meeting with TPC:
    • Sergey will implement a new fully flat map, and multiple existing maps will be merged into the new one.
      • This will give optimal performance for both sync and async.
      • It will also remove the memory overhead of having multiple maps in memory per process, since then we can use the flat maps from the SHM.
      • Timescale is ~September.
    • Meanwhile, I will add the possibility to remove the M/V-shape code with run time compilation, which should restore the performance from last November at P2 if we use RTC.

General GPU Processing

  • Porting all CUDA features to HIP is finished. Per-kernel compilation is now also available with HIP.
  • Added caching with a file lock to compile the RTC GPU code at P2 only once (a minimal sketch of the locking scheme is at the end of this section).
    • Currently several problems which are understood but must still be fixed:
      • /tmp is inside the slurm container and is wiped afterwards; will use /var/tmp to make the cache file persist across environment creations.
      • RTC is started from one of the GPU processes, which has a NUMA pinning to one NUMA domain, thus it uses only half of the CPU cores. Need to extend the CPU pinning for the compilation subprocesses.
      • RTC compiles for the architectures of the original build, which is currently MI50/MI100, i.e. all nodes compile twice, which takes extra time. Need to add an option to select an architecture, and the topology generation must put in the setting for the MI50 / MI100 architectures.
      • AMD compiler leaves stale temp folders, need to check how to prevent this or clean up.
      • RTC compilation fails in an online run since headers (e.g. <cmath>) are not found.
      • Found one more problem: due to a bug in the current AMD compiler, the RTC spams the DDS folder with bogus temp folders. Found a workaround to create these temporary files in the /tmp folder of the job container, so they are cleaned up.
    • Deployed to P2 yesterday, to be enabled optionally via an extra env variable for tests under real conditions. Pippo did some tests on staging today, but they failed due to the header issue.
  • O2 GPU code was again 1% slower than in the standalone test, due to CXX compile flags overriding the extra HIP compile flags in the O2 CMake. Should now also be fixed.
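  • A rough sketch of the file-lock caching scheme above (paths and helper names are assumptions): the first process to take the exclusive lock compiles the RTC binary into a persistent cache directory; all others block on the lock and then reuse the cached file. /var/tmp survives environment creations, unlike the per-job /tmp inside the slurm container.

      #include <fcntl.h>
      #include <sys/file.h>
      #include <unistd.h>
      #include <filesystem>
      #include <string>

      namespace fs = std::filesystem;

      bool ensureRtcBinary(const std::string& cacheKey)
      {
        const fs::path cacheDir = "/var/tmp/o2_rtc_cache"; // persists across jobs
        fs::create_directories(cacheDir);
        const fs::path binary = cacheDir / (cacheKey + ".bin");
        const fs::path lockFile = cacheDir / (cacheKey + ".lock");

        int fd = open(lockFile.c_str(), O_CREAT | O_RDWR, 0666);
        if (fd < 0) {
          return false;
        }
        flock(fd, LOCK_EX); // blocks while another process is compiling
        bool ok = fs::exists(binary);
        if (!ok) {
          // ok = compileRtc(binary); // placeholder for the actual RTC build step
        }
        flock(fd, LOCK_UN);
        close(fd);
        return ok;
      }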