What is OVITO?

The Open Visualization Tool (OVITO) is a powerful three-dimensional visualization software. As stated on their website:

OVITO is a scientific visualization and analysis software for atomistic and particle simulation data. It helps scientists gain better insights into materials phenomena and physical processes. …  It has served in a growing number of computational simulation studies as a useful tool to analyze, understand, and illustrate simulation results.

Visit their website for more information or to download the software for free.

OVITO has a number of powerful features, one of which is a built-in Python scripting interface. A Python script is a Python program that can give commands to OVITO. For example, you can write a script that loads a file instead of clicking the “load file” button in OVITO. Many of the actions you can perform through the user interface can be executed with Python scripts, and with much more detail and precision. The creators have developed a number of Python libraries and written detailed documentation on the features of Python scripting in OVITO, which can be found here. In this tutorial, we will build upon the documentation through a series of examples that you can copy and build from. All of the code examples are available here.

Python scripting is especially useful when rendering images and animations. OVITO, without Python scripting, only allows you to make images or animations from a fixed point of view. However, Python scripting allows you to take control of the camera in order to create much more interesting and visually informative animations, like the one below. Click on the video below to see an example that uses many of the scripting features; later in the tutorial, we will show examples of each of the individual actions.

Creating a Basic Python Script

First, open OVITO and import your dataset in the normal way. Next, you’ll need to set up your script file. Start by opening your favorite text editor and creating a new file with a “.py” file extension. (The first example below is named “render.py”.) The first line of the script must be “import ovito”. This gives you access to the libraries that the creators of OVITO developed. The next line, “from ovito.vis import *”, allows you to manipulate the camera, e.g., change the point of view. Other actions that ovito.vis gives you access to include:

  • Control the camera with a Viewport object
  • Render images
  • Add a text label overlay
  • Manipulate the visual appearance of the particles, vectors, bonds, etc.
  • And more! (Click here for the documentation)

The next line, “from ovito import dataset” will give the script access to the dataset you already loaded into OVITO. Here is what that code might look like:
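Since the original screenshot of that code is not reproduced here, below is a minimal sketch of what those first lines might look like, assuming the OVITO 2.x scripting API that this tutorial describes:

```python
# render.py -- run this from within OVITO (Scripting > Run Script File).
# A sketch assuming the OVITO 2.x scripting API; the dataset is assumed
# to be already loaded into the OVITO session.
import ovito               # core OVITO scripting library
from ovito.vis import *    # Viewport, RenderSettings, renderers, overlays, ...
from ovito import dataset  # handle to the dataset currently loaded in OVITO
```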


Our next step is to create an instance of the Viewport class. This is what allows us to manipulate the camera and render images. So, what is a viewport? In the user interface of OVITO, the screen is usually split into four viewports displaying your dataset, and you can use these viewports to control how your data is displayed. In Python, the Viewport class has a number of useful features, most notably the controls for the camera. We will name our Viewport instance “vp” and control the camera directly through it. You have access to a number of controls through “vp”, but we will focus on three main ones – camera position, camera direction, and field of view.

To set up your Viewport object in your script file, you’ll need the view position and view direction information for the camera. To find the correct information, switch back from your text editor to the OVITO interface. Once you’re there, orient your dataset in one of the viewports to a position that you like. Then, click the small title in the top left corner of the viewport (e.g., “Top,” “Left,” “Ortho,” etc.). A small drop-down menu will appear – select the last option called “Adjust View” (shown in the image below). This will open a dialogue box with all the view information that you need for your Python script. Go back to your editor now.


Set the values of the “vp” attributes to match the dialogue box. This will set up the script to render what you see in OVITO. You can also optionally choose from a number of preset camera set-ups through the Viewport.Type attribute. More info about the Viewport class can be found here. Below is an example of translating the information from the “Adjust View” dialogue box into a Python script. (Note: “fov” stands for field of view.)
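A sketch of what that translation might look like, assuming the OVITO 2.x API; the numbers below are placeholders, so copy the values from your own “Adjust View” dialogue box (for a perspective viewport, the field of view is an angle, which the API expects in radians):

```python
import math
from ovito.vis import Viewport

# Placeholder values -- replace each with the numbers shown in your own
# "Adjust View" dialogue box in OVITO.
vp = Viewport()
vp.type = Viewport.Type.PERSPECTIVE       # or another preset Viewport.Type
vp.camera_pos = (-100.0, -150.0, 150.0)   # "View position" from the dialogue
vp.camera_dir = (2.0, 3.0, -3.0)          # "View direction" from the dialogue
vp.fov = math.radians(60.0)               # "fov" = field of view, degrees -> radians
```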


Now that the viewport is set up to display your intended perspective, let’s render a series of images to make an animation. Take a look at the following code sample:
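The original code sample is a screenshot; a sketch of it, assuming the OVITO 2.x API, with “vp” set up as above and “location” a placeholder output path, might be:

```python
from ovito import dataset
from ovito.vis import RenderSettings, TachyonRenderer

# Render 50 images, one per frame. Replace "location" with a FULL path.
for frame in range(50):
    dataset.anim.current_frame = frame    # advance the time-step (multi-frame data)
    settings = RenderSettings(
        size = (800, 600),                                 # image resolution
        filename = "location/image" + str(frame) + ".png", # naming convention
        renderer = TachyonRenderer())                      # higher-quality renderer
    vp.render(settings)
```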


The code above creates and increments a variable called “frame” from 0 to 49, rendering an image at each iteration. If your dataset has multiple time-steps, you can progress through them as you render each image by setting the dataset’s current animation frame inside the loop. If your dataset has only one time-step, that line does nothing and is not necessary. Next, an instance of the RenderSettings class, which was also imported from ovito.vis, is created. This class tells “vp” how to save the images of your animation. The constructor for this class can take a number of different arguments, depending on which settings you would like to choose. We specified a resolution with “size,” a location and naming convention with “filename,” and we chose the “TachyonRenderer” since it renders at higher quality. A full list of options can be found here, as well as the defaults used if you do not specify them.

Note: you need to be very specific when setting the filename. In your Python script, you must replace “location” with a FULL file path (ex: C:/users/… etc).

Let’s recap: First, we opened OVITO to import and set up our dataset. Next, we made a Python script file and wrote code to import our dataset and necessary libraries. Then, we used the information in the “Adjust View” dialogue box to set the values of the “vp” attributes. Finally, we wrote code to loop through the time-steps of our dataset (i.e. the frames of our animation) and rendered an image at each step.

Great! Now you’re ready to run your script! With OVITO open and your dataset loaded, click on the “Scripting” button on the top menu bar, click “Run Script File,” and select your script file from the file explorer.


Note: The script file that you’ve written will only execute if run FROM WITHIN OVITO. If you attempt to run it from, say, the command line or a Jupyter notebook, the script will not be able to locate the proper OVITO libraries or your dataset, resulting in an error.

Once you have run your script, you will have a collection of images in the folder you specified in “filename.” Your next step is to compile those images into an animation. There are various software tools available to do this, but we’ll be using ffmpeg.

  • To download ffmpeg on Windows, follow the installation instructions on the ffmpeg website.
  • To download ffmpeg on Mac, use brew: “brew install ffmpeg”

Once you have it installed, open the command line and navigate to the directory with all your images. Then, execute the following command:

ffmpeg -framerate 10 -i imageName%01d.png -c:v mjpeg -qscale:v 0 videoName.avi

Note: Replace “imageName” with the image naming convention from your Python script. The extension on “videoName” does not need to be .avi; it can be any file extension that supports video.

Now that you know how to render an animation, you’re ready to learn how to take control of the camera to make more creative animations, like the one you saw at the beginning!

In the next sections, we will cover three ways to manipulate the camera:
  • Change field of view (zoom in and out)
  • Change camera position
  • Change camera direction

The code for the examples shown can be found here.

Field of View

In your Python script, as you loop through the frames and render each image, you can also manipulate the camera properties between rendering each image. In this example, we use the “field of view” camera attribute to zoom in on our dataset, as shown in the video below.

Okay, but where do we start? First, follow the steps from above to set up your dataset in OVITO and “vp” in your Python script. Then, within your loop that renders the images, increment the “vp.fov” attribute. “fov” refers to “field of view,” which changes how large your dataset appears on the screen.
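A sketch of that loop, under the same OVITO 2.x assumptions as before (“vp” set up earlier, “location” a placeholder path):

```python
from ovito import dataset
from ovito.vis import RenderSettings, TachyonRenderer

# For the first 30 frames, shrink the field of view (zoom in);
# after that, advance the time-steps instead.
for frame in range(50):
    if frame < 30:
        vp.fov = vp.fov - 0.001          # zoom in a little each frame
    else:
        dataset.anim.current_frame += 1  # then play the animation
    settings = RenderSettings(
        size = (800, 600),
        filename = "location/image" + str(frame) + ".png",
        renderer = TachyonRenderer())
    vp.render(settings)
```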


The code above would zoom in for the first 30 frames, and then begin to increment the time-steps in the dataset (i.e. play the animation).

It is interesting to note that, in this example, the camera’s field of view changes at a linear rate. However, this does not have to be the case. For example, you may want the camera to move quickly at first and then slow down as you near the particles.

It is also important to note that manipulating the field of view only changes how large or small your dataset appears on screen; it does not move the camera. In order to do that, you must manipulate the camera_pos property, as shown in the following example.

Camera Position

In this example, we use the camera_pos attribute to move the camera along the z-axis through the center of the ring of particles, as shown in the video below.

The camera position is given by x, y, z coordinates, so it is important to be aware of how your viewport is oriented. This information is shown by the set of xyz axes in the bottom left corner of the OVITO viewport. For the viewport below, adjusting the z coordinate of the camera position would move the camera in and out, adjusting x would move it left and right, and adjusting y would move it up and down.


So, moving the camera through the middle of the ring of particles could be done as shown in this code:
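A sketch of that code, again assuming the OVITO 2.x API, with “vp” set up earlier and “location” a placeholder path:

```python
from ovito import dataset
from ovito.vis import RenderSettings, TachyonRenderer

# vp.camera_pos holds the starting position from the "Adjust View" dialogue.
x0, y0, z0 = vp.camera_pos
for frame in range(50):
    dataset.anim.current_frame = frame
    # move the camera along the z-axis, 0.0003 units per frame
    vp.camera_pos = (x0, y0, z0 - frame * 0.0003)
    settings = RenderSettings(
        size = (800, 600),
        filename = "location/image" + str(frame) + ".png",
        renderer = TachyonRenderer())
    vp.render(settings)
```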


First, open your dataset in OVITO and set up “vp” at your desired starting position. Then, create a loop that renders images as before. The difference is that, inside the loop, the z component of the camera position is decreased by 0.0003 units each frame, starting from the original position taken from the “Adjust View” dialogue box.

The choice of the z direction can easily be modified to move in any direction. And again, this example shows the camera position changing at a linear rate, but this does not have to be the case.

Camera Direction

In this example, we use the camera_dir attribute to move the camera in a circle around the particles, thereby rotating the image, as shown in this video:

According to the OVITO Python reference manual, camera_dir controls “the viewing direction vector of the viewport’s camera. This can be an arbitrary vector with non-zero length.” 

The camera always points toward the origin (usually the center of your particles), unless you change the camera position. camera_dir will override camera_pos to make sure the camera_dir vector points toward the origin.

Below are a few examples to help you understand what camera_dir does. As you can see, (0, 0, -1) views the particles along the z-axis, (1, 0, 0) views along the x-axis, and (1, 1, 1) looks down at a 45-degree angle.


We can use this information to rotate the camera around the particles. camera_dir is a direction vector, so if we want to move in a circle around the particles, we can use a parametric vector equation for a circle:

v = <cos(t), sin(t), f(t)>

It is important to note that the order in which you place the functions (cos(t), sin(t), and f(t)) determines the direction of rotation and the axis around which you rotate. The coordinate that you set as f(t) will be the axis of rotation, and the ordering of cos versus sin determines the direction. f(t) also determines the height at which you rotate around the particles. For example, if you set camera_dir = (cos(t), sin(t), 0), then the camera will move counterclockwise in a circle in the z = 0 plane.
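As a quick standalone illustration of the parametric equation (plain Python, no OVITO required; “circle_dir” is a hypothetical helper name):

```python
import math

def circle_dir(t, height=0.0):
    # direction vector on a circle in the z = height plane;
    # swapping cos and sin would reverse the direction of rotation
    return (math.cos(t), math.sin(t), height)

print(circle_dir(0.0))           # looks along the +x axis
print(circle_dir(math.pi / 2))   # a quarter turn later: along the +y axis
```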

It is also important to note that you must “import math” from the Python standard library in order to access sin, cos, pi, etc. Below is an example of implementing these concepts. You can see that the proper libraries are imported at the top, and then “vp” is set to the starting position. Then, in the loop that renders the images, the time-step is incremented. Next, a theta value, the angle from which you will view the data, is calculated with respect to “frame.” The camera direction vector is then updated using the parametric equations from above with “theta” as t.
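Putting it together, a sketch of such a script might look like the following (OVITO 2.x API; the starting values and the “location” output path are placeholders):

```python
import math
import ovito
from ovito import dataset
from ovito.vis import *

# Starting set-up -- placeholder values; copy yours from "Adjust View".
vp = Viewport()
vp.type = Viewport.Type.PERSPECTIVE
vp.fov = math.radians(60.0)

frames = 50
for frame in range(frames):
    dataset.anim.current_frame = frame           # advance the time-step
    theta = 2 * math.pi * frame / frames         # angle for this frame
    # rotate counterclockwise in the z = 0 plane
    vp.camera_dir = (math.cos(theta), math.sin(theta), 0)
    settings = RenderSettings(
        size = (800, 600),
        filename = "location/image" + str(frame) + ".png",
        renderer = TachyonRenderer())
    vp.render(settings)
```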