Compute servers

Many department employees have computational tasks too heavy for their regular workplace machine(s). The department provides access to servers specifically intended for running such tasks: they offer more CPU power and memory than a workstation, and they keep running overnight.

Computing facilities at SURFsara

SURFsara provides the Lisa, Cartesius, and HPC Cloud remote computing facilities.

In October 2019, the Math & CS department bought credits for these systems, for use by its employees, as part of the university's Computing Infrastructure initiative.

The departmental HPC cluster

The department operates HPC cluster nodes for use by its employees.

Employees can request an account by sending an email to Research IT (either Kimberley Thissen or Thom Castermans). Please use the mcs.default.q and mcs.gpu.q queues; check for specs here. Every user's home directory has a quota of 1 TB. Because roughly 95 TB is available in total, there is not enough room for everyone to fill up their quota completely. The general advice thus remains: do not use the cluster as a place to store a lot of data. Every compute node does have slightly over 2 TB of scratch storage available under /local, which does not fall under the quota (but is deleted once a job finishes).
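Because /local is wiped once a job finishes, jobs typically compute in the node-local scratch space and copy only the results back to home storage. A minimal sketch of this pattern (paths and file names are illustrative; SCRATCH_ROOT would be /local on a compute node):

```shell
#!/bin/sh
# Sketch of a job that computes in node-local scratch space and copies
# only its results back to (quota-limited) home storage afterwards.
# SCRATCH_ROOT would be /local on a compute node; defaults to /tmp here.
SCRATCH_ROOT=${SCRATCH_ROOT:-/tmp}
WORKDIR=$(mktemp -d "$SCRATCH_ROOT/job.XXXXXX")
cd "$WORKDIR"

# ... the actual computation writes its output here ...
echo "result" > output.txt

# Copy only what you need to keep back to home storage.
cp output.txt "$HOME/output.txt"

# /local is deleted once the job finishes, but cleaning up is good practice.
cd /
rm -rf "$WORKDIR"
```

This keeps large intermediate files out of your home quota entirely.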

All communication concerning this cluster is provided on a mailing list. Subscribe to be kept up to date about technical changes and workshops/hands-on sessions.

Further support can be found on the HPC wiki; for all other questions, contact Research IT.

Other computing facilities at the department

Some general points

Disk space

Disk space is available on /home and /scratch, as usual on Linux systems.

Please use /home only for files you need to keep, and /scratch for temporary files you can afford to lose. We do not provide backups for files on /scratch.

When using a considerable amount of /scratch space, please also plan a date to clean it up. Your colleagues will thank you.
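A periodic clean-up can be scripted. A sketch, with an illustrative path and a 30-day age threshold, shown as a dry run first:

```shell
# List your files under /scratch that have not been modified for 30 days.
# SCRATCH_DIR and the age threshold are illustrative; adjust to your situation.
SCRATCH_DIR=${SCRATCH_DIR:-/scratch/$USER}
if [ -d "$SCRATCH_DIR" ]; then
    # Dry run: show candidates for deletion.
    find "$SCRATCH_DIR" -type f -mtime +30 -print
    # Once the listing looks right, delete them:
    # find "$SCRATCH_DIR" -type f -mtime +30 -delete
fi
```

Running the `-print` form before uncommenting `-delete` avoids accidentally removing files you still need.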

If you need to store compute results or other precious data that are too big for /home, consider keeping a secondary copy on the storage server.

We used to impose disk quotas, but no longer do; please be considerate of other users.

Running GUI programs

To run GUI programs on these servers, you need to log in with ssh -Y; otherwise they will not have sufficient access to your display and will appear as a grey window, without any warning.
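For example, a typical session looks like this (the hostname is a placeholder; substitute the actual server name):

```shell
# Log in with X11 forwarding enabled (placeholder hostname):
#   ssh -Y yourlogin@server.win.tue.nl
#
# After logging in, DISPLAY should be set (e.g. to 'localhost:10.0');
# if it is empty, X forwarding is not active and GUI programs will fail.
echo "DISPLAY is: ${DISPLAY:-<not set>}"
```

Checking DISPLAY first is a quick way to tell a forwarding problem apart from a problem with the GUI program itself.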

To run GUI programs from Windows systems, you need to install and set up an X Window System server, such as Exceed.

Limited access

These systems can only be accessed from the campus network, that is, from computers on campus or connected through TU/e's VPN service.

Available servers

Remote Linux desktops

Our remote Linux desktops are available for simple Linux tasks.

Few employees use them nowadays; we may decommission them.

mastodont was designed to support computations requiring a large amount of RAM. It offers 3 TB of RAM, while its CPU power is relatively limited.

As an employee, you are welcome to use it for memory-bound tasks.

You can log in to mastodont with your TUE account, once we grant you access.

For details, see how to use mastodont.

mammoth a.k.a. mammoet was mastodont's predecessor.

It has been out of maintenance for years, and will soon be taken offline.

You can log in to mammoth with a special department Linux account.

The system offers 56 cores at 2 GHz, 935 GB of memory (aggregated), 2.8 TB of local disk, and runs 64-bit Fedora 12.
It is composed of 7 servers with 144 GB of memory each, aggregated by the vSMP software to appear as one system. All of the memory appears as one contiguous block to the OS and to programs. Performance depends on your memory access profile, which is a complex matter; in general, vSMP gets the most out of the hardware. A few typical cases follow below:

  • Accessing all of the memory, fully randomly: InfiniBand latency
  • Accessing all of the memory, sequentially: InfiniBand bandwidth
  • Clustered access, within a server's memory size: normal server performance

You can use numactl or taskset to have your process run on specific cores. The available nodes (physical CPUs) and their free memory are reported by numactl -H.
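For example (./my_program is a placeholder; the numactl invocation is guarded so the snippet degrades gracefully where numactl is not installed):

```shell
# Show the NUMA topology: nodes (physical CPUs), their cores, and the
# free memory per node.
if command -v numactl >/dev/null 2>&1; then
    numactl -H
fi

# Bind a process and its memory allocations to NUMA node 0, keeping all
# accesses within one physical server's memory (./my_program is a placeholder):
#   numactl --cpunodebind=0 --membind=0 ./my_program

# Or pin a process to specific cores with taskset:
#   taskset -c 0-7 ./my_program
```

Keeping a job's memory on the same node as its cores corresponds to the "clustered access" case above, and avoids paying InfiniBand latency on every access.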

Until 2018, ngrid01 to ngrid31 were available as remotely accessible Linux machines. They will not be back.

The department intends to invest in a local high performance computation cluster.

Temporary systems

If the previous systems are too busy, or do not meet your requirements, you can ask us to set up a new (desktop) PC for private usage. Such requests will be granted without formalities but only for a limited period. PCs of this nature are usually set up in a day or two.

If the previous proposals do not work for you, request another system through your IT Committee representative.
