Prospective users

ACCORD partners and users can get started today!

ACCORD resources are available to researchers at public and not-for-profit research institutions. Due to limited availability, priority will be given to projects addressing the COVID-19 pandemic.

Become an ACCORD user

How to become an ACCORD user:

Researchers wishing to use ACCORD resources must first become members of the ACCORD consortium: the researcher's home institution enters into a membership agreement with the University of Virginia. The membership agreement establishes shared responsibilities and risk management between the institutions. Note that the researcher's home institution, not the researcher, owns the data and is responsible for its management.

Once the membership agreement is executed, all researchers at that institution can request accounts and access to the system without additional agreements.

Once an account and allocation are created, users go through an onboarding process that includes both technical training and support for setting up their research environment.

To get started, contact the ACCORD program for a consultation.




ACCORD is housed in UVA's Tier 2+ data center (shown in the background photo), guaranteeing at least 99.749% service reliability.

ACCORD is connected to Internet2 via the Mid-Atlantic Research Infrastructure Alliance (MARIA) network at 100Gbps.

Standard security cluster

  • A traditional high-performance cluster with a job scheduler, a large file system, environment modules, and MPI processing.

  • Consists of 5 interactive nodes and 548 heterogeneous cluster nodes, for a total of 20,358 cores.

  • 1.8PB of Scratch / 6PB of Value storage / 6PB of project storage
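
The bullets above describe a classic scheduler/modules/MPI workflow. As a minimal sketch — assuming the scheduler is SLURM (the page only says "job scheduler") and using a hypothetical partition name and module names — a batch job looks like this:

```shell
# Write a minimal MPI batch script. SLURM is assumed; the partition and
# module names below are hypothetical, not confirmed ACCORD values.
cat > mpi_job.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=mpi-hello
#SBATCH --partition=standard        # hypothetical partition name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:10:00

# Load a compiler and MPI stack via environment modules (names are assumptions).
module load gcc openmpi

srun ./hello_mpi                    # launch the MPI executable across all tasks
EOF

# Submit and monitor (run these on the cluster's login node):
# sbatch mpi_job.slurm
# squeue -u "$USER"
```

With a real partition name (listed by `sinfo` on the cluster), `sbatch` returns a job ID that `squeue` tracks until the scheduler dispatches the tasks.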

High-security computing environment

  • A multi-platform, HIPAA-compliant system for secure data.

  • Includes dedicated virtual machines (Linux and Windows), JupyterLab Notebooks, and Apache Spark.

  • Approved for HIPAA, CUI, FISMA, and ITAR compliance.

  • Managed by OpenStack control planes for storage, compute, and virtual private cloud security.

  • 93 physical nodes for a total of ~1400 cores and ~29TB of memory.

  • 4 GPU nodes with 4 V100s each.

  • Virtual machines are secured by UVA identity providers and a high-security VPN using multi-factor authentication and rigorous user policies.

  • Data ingress/egress is facilitated by Globus GridFTP.
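
Globus transfers are typically driven from the Globus CLI. A minimal sketch follows; the endpoint UUIDs and paths are placeholders (the real ACCORD endpoint IDs are not listed on this page):

```shell
# Sketch of a Globus CLI transfer into a secure environment. The endpoint
# UUIDs and paths below are placeholders, not real ACCORD endpoints.
SRC_ENDPOINT="aaaaaaaa-1111-2222-3333-444444444444"   # hypothetical source
DST_ENDPOINT="bbbbbbbb-5555-6666-7777-888888888888"   # hypothetical destination

# Build the command; with real endpoint IDs you would run it directly after
# authenticating with `globus login`.
CMD="globus transfer ${SRC_ENDPOINT}:/data/input ${DST_ENDPOINT}:/project/incoming --recursive --label accord-ingest"
echo "$CMD"
```

`globus transfer` queues the copy server-side, so the transfer continues even after the submitting machine disconnects.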

Container cluster

  • Runs the Distributed Cloud Operating System, an enterprise product built on the Apache Mesos and Marathon projects for container orchestration.

  • 17 physical nodes for a total of 1152 cores and 6.5TB of memory.

  • 320TB of local cluster storage, and attached to 2PB of other research storage via NFS.

  • Deploys built-in internal and external-facing load balancers, as well as a Shibboleth-protected SAML proxy for authenticated user sessions.

  • Implements service-driven templates, health checks for automated scaling and failover, and encrypted secrets.
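
Marathon applications are described as JSON and deployed through a REST API. A minimal sketch of an app definition that exercises the health-check-driven scaling and failover described above — the app ID, container image, and Marathon URL are illustrative, not actual ACCORD values:

```shell
# Write a minimal Marathon app definition (illustrative values throughout).
cat > webapp.json <<'EOF'
{
  "id": "/demo/webapp",
  "cpus": 0.5,
  "mem": 256,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx:stable" }
  },
  "healthChecks": [
    { "protocol": "HTTP", "path": "/", "gracePeriodSeconds": 30, "intervalSeconds": 10 }
  ]
}
EOF

# Deploy by POSTing the definition to Marathon's REST API (placeholder URL):
# curl -X POST -H "Content-Type: application/json" \
#      -d @webapp.json http://marathon.example.internal:8080/v2/apps
```

Marathon restarts any instance whose health check fails, which provides the automated failover the bullets above describe.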

