CC compute cluster

The University of Manitoba’s Information Services & Technology (IST) department manages several research computing environments. The CC Compute Cluster is available to anyone who has a UofM computer account.

This compute environment presently consists of 15 Dell R815 servers arranged as follows:

  • 4 login servers: Mars/Jupiter/
    These servers are also known collectively as (DNS round-robin)
  • 13 compute servers: cc01 through (note: nodes cc07 and cc09 have been removed from service)

We ask users not to run compute-intensive jobs on the login servers, and we reserve the right to kill any such jobs with little or no notice. Please see the Computationally-Intensive Processes policy for more information.

Each server has 256 GB RAM and 4 AMD Interlagos processors, totalling 64 “virtual” cores (4 CPU/machine x 8 physical cores/CPU x 2 threads/core).
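As a quick sanity check, the per-node core count quoted above works out as follows (a trivial shell sketch, using only the figures given in this document):

```shell
# Virtual cores per node: 4 CPUs/machine x 8 physical cores/CPU x 2 threads/core
cpus=4
cores_per_cpu=8
threads_per_core=2
echo $(( cpus * cores_per_cpu * threads_per_core ))   # prints 64
```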

There is limited local disk on each server; storage is provided via NFS. Users receive a small default storage allocation, but we will work directly with individuals or research groups to try to accommodate their storage needs. Send an email to with a request for the Unix team.


The compute service is available to all employees and students of the UofM, but it requires a valid UMNetID with the Unix/Linux entitlement claimed. If you already have your UMNetID, you can check whether you have the entitlement by trying to log in to one of the login servers.

If you require help getting your account, or getting it set up for this access, you can send an email to explaining that you would like your account to have the Unix/Linux entitlement set. The Service Desk also has information about sponsoring accounts, should that be necessary.

Refer to Accessing CC Unix/Linux for information on the various ways of accessing the general CCL environment.

In summary, there are typically 3 ways to access the compute resources:

  1. SSH
    • Using an SSH client (e.g. PuTTY), connect to
    • If you have access to a Unix/Linux shell, use “ssh
    Once you are logged in, you can use a similar approach to access the rest of the compute cluster.
  2. ThinLinc desktop client
    The ThinLinc client gives you access to a full, graphical Linux desktop environment.
    From this desktop you can open terminals and manage jobs on multiple compute nodes using ssh (e.g. “ssh”). Since the storage is provided via NFS, you will see the same home directory on both the login nodes and the compute nodes.
    More information on ThinLinc, including download links:
  3. ThinLinc web client
    ThinLinc is also available through a web interface:
    It has fewer features, but it is quick, easy, and supported on mobile devices.
    It is a good way to verify your access to the compute environment.
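If you connect over SSH frequently, an entry in your ~/.ssh/config can save retyping. The values below are placeholders, not the real server address — substitute the CC login server address from option 1 above and your own UMNetID:

```
# ~/.ssh/config
# "ccl" is an arbitrary shortcut name chosen for this example;
# <login-server-address> and <your-UMNetID> are placeholders.
Host ccl
    HostName <login-server-address>
    User <your-UMNetID>
```

With this in place, “ssh ccl” connects directly.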

Batch Job Management

For researchers familiar with HPC resources: we do not presently have a batch/job scheduler. You need to log in to the compute nodes, as described above, to launch your jobs.

You can use the “supcc” command to show the current load of the available cluster nodes, and the “sshcc” command to ssh to the most lightly loaded compute node.
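The selection that sshcc performs amounts to picking the node with the lowest load. The sketch below illustrates that idea only — the node names and load figures are made up, and the actual output format of supcc may differ:

```shell
# Hypothetical (node, 1-minute load average) pairs — NOT real supcc output.
# Sorting numerically on the load column and taking the first line selects
# the most lightly loaded node, which is roughly what sshcc automates.
printf 'cc01 3.52\ncc02 0.41\ncc03 1.87\n' \
  | sort -k2 -n | head -n1 | awk '{print $1}'   # prints cc02
```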

We do hope to eventually add a job scheduler to our compute cluster to handle resource contention, which we currently handle according to the Computationally-Intensive Processes policy.

More Information

If you have additional questions or feedback, please don’t hesitate to ask: create an IT ticket, or send an email to with an info request for the Unix team.

Other Research Computing resources available:

