<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://hpc.tu-berlin.de/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="https://hpc.tu-berlin.de/feed.php">
        <title>HPC-Cluster-Dokumentation hpc:scheduling</title>
        <description></description>
        <link>https://hpc.tu-berlin.de/</link>
        <image rdf:resource="https://hpc.tu-berlin.de/lib/tpl/bootstrap3/images/favicon.ico" />
        <dc:date>2026-04-28T16:07:28+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:access&amp;rev=1742810892&amp;do=diff"/>
                <rdf:li rdf:resource="https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:fairshare&amp;rev=1737037878&amp;do=diff"/>
                <rdf:li rdf:resource="https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:profiling_and_resource_monitoring&amp;rev=1738340219&amp;do=diff"/>
                <rdf:li rdf:resource="https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:slurm_commands&amp;rev=1729861435&amp;do=diff"/>
                <rdf:li rdf:resource="https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:start&amp;rev=1737037972&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="https://hpc.tu-berlin.de/lib/tpl/bootstrap3/images/favicon.ico">
        <title>HPC-Cluster-Dokumentation</title>
        <link>https://hpc.tu-berlin.de/</link>
        <url>https://hpc.tu-berlin.de/lib/tpl/bootstrap3/images/favicon.ico</url>
    </image>
    <item rdf:about="https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:access&amp;rev=1742810892&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-03-24T11:08:12+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Access to Cluster</title>
        <link>https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:access&amp;rev=1742810892&amp;do=diff</link>
        <description>Access to Cluster

The following groups are eligible for scientific computing on the HPC cluster:

	*  Scientists at the TU Berlin
	*  Students writing their master's/bachelor's thesis at the TU Berlin

The HPC cluster is not intended for personal projects or educational purposes (e.g. exercises, labs).
Use of the cluster is therefore linked to a chair or research project at the TU Berlin.</description>
    </item>
    <item rdf:about="https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:fairshare&amp;rev=1737037878&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-01-16T15:31:18+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Fairshare</title>
        <link>https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:fairshare&amp;rev=1737037878&amp;do=diff</link>
        <description>Fairshare

The Slurm fairshare policy is a feature of the Slurm Workload Manager that distributes computing resources fairly among users and groups based on their fairshare score. The fairshare score is calculated from factors such as past resource usage, job priorities, and user/group weights.
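
You can check your current fairshare standing with sshare, a standard Slurm command (illustrative; the columns shown depend on the cluster's accounting configuration):

	sshare -U   # fairshare summary for the current user
	sshare -l   # long listing with additional usage columns</description>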
    </item>
    <item rdf:about="https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:profiling_and_resource_monitoring&amp;rev=1738340219&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-01-31T17:16:59+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Profiling and Resource Monitoring</title>
        <link>https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:profiling_and_resource_monitoring&amp;rev=1738340219&amp;do=diff</link>
        <description>Profiling and Resource Monitoring

Introduction

This documentation provides a guide to monitoring and managing resources on our cluster. It covers profiling tools, strategies for monitoring memory, and best practices for resource allocation to optimize job scheduling and performance. Following these guidelines helps users improve their workflow efficiency and overall experience on the cluster.
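
As a starting point, Slurm's accounting tools report per-job resource usage (illustrative; replace JOBID with your job ID; the fields available depend on the cluster's accounting setup):

	sstat -j JOBID --format=JobID,MaxRSS,AveCPU          # resource usage of a running job
	sacct -j JOBID --format=JobID,MaxRSS,Elapsed,State   # accounting data of a completed job</description>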
    </item>
    <item rdf:about="https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:slurm_commands&amp;rev=1729861435&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-10-25T15:03:55+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Slurm Commands</title>
        <link>https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:slurm_commands&amp;rev=1729861435&amp;do=diff</link>
        <description>Slurm Commands

Start Jobs

srun

This is the simplest way to run a job on the cluster.
It initiates parallel job steps within a job or starts an interactive job (with --pty).

salloc

Request interactive jobs/allocations.
When the job starts, a shell (or another program specified on the command line) is started on the submission host (frontend).
From this shell you should use srun to interactively start parallel applications.
The allocation is released when the user exits the shell.
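
Typical invocations (illustrative; partition names, node counts, and time limits depend on the cluster's configuration):

	srun --pty bash                    # start an interactive shell on a compute node
	salloc --nodes=1 --time=00:30:00   # request an allocation, then use srun inside it</description>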
    </item>
    <item rdf:about="https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:start&amp;rev=1737037972&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-01-16T15:32:52+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Scheduling</title>
        <link>https://hpc.tu-berlin.de/doku.php?id=hpc:scheduling:start&amp;rev=1737037972&amp;do=diff</link>
        <description>Scheduling

Introduction

Job scheduling is done by the Slurm Workload Manager.
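
Basic commands for inspecting the scheduler state (standard Slurm tools):

	sinfo             # list partitions and node states
	squeue -u $USER   # show your pending and running jobs</description>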
    </item>
</rdf:RDF>
