Configuring student applications

When working with student applications on Nuvolos, it is important to consider the following performance characteristics:

Pre-starting student applications

If a class is large (e.g. above ~50 users) or individual applications have been customized to request more resources, and students are expected to launch their applications at the same time, resource allocation can become slower (e.g. application launch can take around 5 minutes instead of the usual 30-60 seconds).

For a good user experience, we recommend that instructors pre-launch the required application(s) for all users before students are expected to start working with them. Nuvolos supports both automatic and manual application prestart.

In both automatic and manual mode the progress and outcome of a pre-launch is visible in the task view.

For optimal resource allocation, only the very first prestart starts a given application for all users in the space. Subsequent prestarts first check which users actually used the application around the last prestart time, and the application is started automatically only for those users.

Scheduled startup of student applications

Using automatic application prestart, instructors can create scheduled prestarts ahead of time. This can be done from the 'Applications' view in the sidebar by clicking the three dots beside the application name, selecting the 'Schedule for start' option, and setting the date and time by which all apps in the space should be up and running.

The following limitations apply:

  • The scheduled time must be at least 30 minutes in the future, as it takes time to start all applications.

  • In a space, up to twenty scheduled prestarts can exist at the same time.

  • Setting a prestart date more than 6 months into the future is not allowed.

  • Setting a prestart date for archived courses is not allowed.

Scheduled prestarts can be viewed, edited, and deleted below the list of applications. If you wish to create a new schedule for the same time one week later, you can do so by clicking 'add to next week' under the Actions column.

The next scheduled startup can also be viewed from the space overview page.

Scheduled startup with GPU

Nuvolos supports courses with GPU access. This means it is possible to schedule application startup for students on machines that have a GPU, enabling a virtual lab session on Nuvolos. In such a scenario, every student gets their own GPU, so they do not need to compete for GPU runtime on shared machines. This makes it possible for students to, for example, write exams on Nuvolos, with everybody having access to the exact same hardware setup in a completely isolated way.

By default, scheduled startup with GPUs is available only with 1/6 A10 GPU slices, for classes of up to 60 attendants. For larger classes, please contact support first.

To use scheduled startup with GPU, you first need to enable credit-based sizes in your space and have enough AC credits to cover the runtime costs for all students. As a space administrator, you can change the size of applications in the Master instance to the 1/6 A10 GPU machine size at any time to test your code. Since GPU machines consume credits, students are not allowed to request them on demand; instead, you as the space administrator should configure a scheduled startup for them.

To configure startup on machines with GPU, turn on the Scale resources toggle, select the GPU size, and configure the Stop after selected minutes field.

Since machines with GPUs consume credits to run, scheduled startups with GPUs need to define the length of each session in minutes. After the set number of minutes relative to the prestart schedule, every machine with a GPU in the space is automatically shut down (including the machine(s) of the instructor(s)). Example: if the scheduled start is at 10:05 and Stop after selected minutes is 120, all prestarted apps (including the instructor's app) will be shut down at 12:05.
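The timing rule above can be sketched as follows. This is an illustration only; the function name is ours, not a Nuvolos API, and the 30-360 minute bounds mirror the limits listed below.

```python
from datetime import datetime, timedelta

def gpu_shutdown_time(scheduled_start: datetime, stop_after_minutes: int) -> datetime:
    """Return when all GPU machines in the space are shut down.

    The stop time is relative to the scheduled start, not to when a
    student actually opened their application.
    """
    if not 30 <= stop_after_minutes <= 360:
        raise ValueError("Stop after selected minutes must be between 30 and 360")
    return scheduled_start + timedelta(minutes=stop_after_minutes)

# Example from the text: start at 10:05, stop after 120 minutes.
start = datetime(2024, 5, 6, 10, 5)
print(gpu_shutdown_time(start, 120).strftime("%H:%M"))  # 12:05
```

Note that the shutdown applies to every GPU machine in the space, so an instructor who needs to keep working afterwards should plan the session length accordingly.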

Limitations of scheduled startup with GPUs

  • Currently, only up to 60 concurrent students on 1/6 A10 GPUs are supported. Please reach out to support to clear larger GPU sizes or attendant lists.

  • Stop after selected minutes can only be set between 30 and 360 minutes.

  • The total cost of a session with N students will be around N × [session length in hours] × [hourly price of GPU machine] + a warmup premium. The warmup premium arises because applications are started 10-30 minutes ahead of time, to allow for longer machine provisioning times caused by the higher machine checkout frequency around course start time.

  • Scheduled startups using GPU machines will not consider past user activity and will start up a GPU machine for every user in the course space.

  • Any running applications started by students will be restarted at the scheduled startup time and moved to GPU machines automatically.
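The cost formula above can be sketched as a back-of-the-envelope estimate. The function name, the hourly price, and the assumption that the warmup window is billed at the same hourly rate are ours; check your organisation's actual GPU pricing before budgeting credits.

```python
def estimate_gpu_session_cost(n_students: int, session_minutes: int,
                              hourly_gpu_price: float,
                              warmup_minutes: int = 30) -> float:
    """Rough cost in credits for one scheduled GPU session.

    Applications are started 10-30 minutes early (the "warmup premium");
    here we conservatively bill a 30-minute warmup at the session rate.
    """
    billed_hours = (session_minutes + warmup_minutes) / 60
    return n_students * billed_hours * hourly_gpu_price

# 60 students, 120-minute session, hypothetical price of 1.5 credits/hour:
print(estimate_gpu_session_cost(60, 120, 1.5))  # 225.0
```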

Manual startup of student applications

Pre-launching can be performed from the 'Applications' view on the sidebar, by clicking the three dots beside the application name and selecting the 'Start for all users' option:

Student applications are stopped automatically after 1 hour of inactivity, so there is no point in performing the pre-launch more than an hour before the planned start time.

Pre-launching will start the application for all users in the space: for students, their respective applications will be started; for space administrators, the application in the master instance will be started. If a space administrator is also an editor in a student instance, that application will also be started for the administrator.

Configuring applications

Each application can be configured by a space administrator. The following aspects may be customized:

  1. Application inactivity timeout

  2. Application resources

  3. Shared access

All of these items can be found by clicking on Configure in the Applications view of an Instance.

Configuring application inactivity timeout

Please refer to our documentation on inactivity for details on when an application is considered to have breached the inactivity limit. The slider sets the amount of time (in hours) after which the application is shut down if it has been inactive for that period. If you distribute an application, the setting at the time of distribution is also enforced on the target application.

Increasing the inactivity limit may result in higher-than-desired resource utilisation for your organisation.

Configuring application resources

Resource availability for applications running in non-exclusive environments is measured in Nuvolos Compute Units (NCUs). For a detailed description of NCUs, you may refer to their documentation. You may scale the NCU allocation of an application up or down depending on its expected workload. Applications can be started with 1, 2, 4, 8, 12 or 16 NCUs. If you distribute an application, the setting at the time of distribution is also enforced on the target application.
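Because only the discrete sizes above are configurable, sizing an application means rounding an estimated requirement up to the next available step. A minimal sketch (the helper name is ours, not a Nuvolos API):

```python
# Configurable NCU sizes, as listed in the documentation above.
ALLOWED_NCUS = (1, 2, 4, 8, 12, 16)

def next_allowed_ncu(required: float) -> int:
    """Return the smallest configurable NCU size covering the requirement."""
    for size in ALLOWED_NCUS:
        if size >= required:
            return size
    raise ValueError(f"Requirement {required} exceeds the largest size, 16 NCUs")

print(next_allowed_ncu(3))   # 4
print(next_allowed_ncu(9))   # 12
```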

Increasing the NCU allocation of an application may result in higher-than-desired resource utilisation for your organisation.

Configuring shared access

Applications can be configured for shared access when collaborative group work is expected from users of an instance. Please refer to our detailed guide here.

Performance sensitive code

On Nuvolos, each student runs code with the same application configuration as the instructor.

Nevertheless, it is important to consider that when many students concurrently execute computationally intensive code, application performance might be inferior to what the instructor experienced during material development, when the load from other users was potentially lower.

Whilst usually this is within a reasonable factor, we recommend that during interactive sessions with a large number of students, either:

a) Code examples should be adjusted so that they execute within about a minute at most.

b) The space should be configured to have larger per-student resources to provide adequate compute performance. For the most performance sensitive cases, we suggest dedicated compute nodes for each application - please reach out to our support team to discuss such an option.

For out-of-class work when concurrency is lower, these considerations can be appropriately relaxed.
