Trident Documentation
Trident is a Python package for creating synthetic absorption-line spectra from astrophysical hydrodynamics simulations. It utilizes the yt package to read in simulation datasets and extends it to provide realistic synthetic observations appropriate for studies of the interstellar, circumgalactic, and intergalactic media.
To avoid confusion, make sure you are viewing the correct documentation for the version of Trident you are using: stable vs. development. For more information, see Versions of Trident.
Installation
Follow these steps to successfully install Trident and its dependencies.
Versions of Trident
There are currently two versions of Trident: a stable version and a development version. Make sure you are reading the correct docs for the version you are using!
The stable version is tried and tested and easy to install with pip. The development version is actively being updated with new features including superior support for particle-based datasets (previously known as the demeshening). Note that the stable version of trident requires the stable version of yt, and the development version of trident requires the development version of yt, due to some backwards-incompatible changes regarding particle-support in yt/trident.
Thus, the installation steps are slightly different for stable and development, so pay attention in the steps below. Don’t worry if you want to change later, you can always switch between the two versions easily enough by following the directions in Uninstallation or Switching Code Versions.
Trident’s Major Dependency: yt
yt is a Python-based software package for the analysis and visualization of different types of numerical datasets, including astrophysical hydrodynamical data. yt is the primary dependency of Trident, so you must install it before Trident will work. There are several methods for installing yt, which are all discussed in detail in the yt installation documentation. Use the one that is appropriate for you. We find that using conda is the most streamlined and reliable.
Installing the Stable Version of yt and Trident
Installation of the stable versions of yt and Trident is quite simple:
$ pip install yt
$ pip install trident
Now run Trident for the first time; it will download some additional files. See Step 3: Get Ionization Table and Verify Installation for more information:
$ python
>>> import trident
Follow the instructions to download the ion_balance table and then verify that everything is working correctly. You should now be ready to do some Step 4: Science!
Installing the Development Version of yt and Trident
Step 0: Ensure Conda is Installed
Conda is a package manager providing a clean, stand-alone installation of python that is self-contained in its installation directory. yt & trident require a modern installation of python to work. conda provides that installation.
You can see if conda is already installed by running:
$ conda -h
If conda is installed, move to the next step. Otherwise, install Miniconda.
Use the appropriate conda install script for your architecture. We recommend getting the latest version of conda for Python3 for your architecture here: https://repo.continuum.io/miniconda/
For modern Macs:
$ curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
$ bash Miniconda3-latest-MacOSX-x86_64.sh
For modern linux machines:
$ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
$ bash Miniconda3-latest-Linux-x86_64.sh
At the end of the installation step, allow conda to add its installation directory to the $PATH.
Step 1: Install yt
First you need yt’s major dependencies:
$ conda install numpy cython mpi4py git
Now you pull directly from the yt github repository to access the up-to-date version of the source code and build it:
$ git clone https://github.com/yt-project/yt.git yt
$ cd yt
$ pip install -e .
$ cd ..
Note, you’ll also need a separate library, yt_astro_analysis, to get some of the functionality necessary for Trident to work correctly:
$ git clone https://github.com/yt-project/yt_astro_analysis.git yt_astro_analysis
$ cd yt_astro_analysis
$ pip install -e .
$ cd ..
Step 2: Install Trident
Like yt, in order to get the development version of Trident, you must clone and build the up-to-date source code from its repository:
$ git clone https://github.com/trident-project/trident.git trident
$ cd trident
$ pip install -e .
$ cd ..
Step 3: Get Ionization Table and Verify Installation
In order to calculate the ionization fractions for various ions from density, temperature, and metallicity fields, you will need an ionization table datafile and a configuration file. Because this datafile can be large, it is not packaged with the main source code. The first time you try to do anything that requires it, Trident will attempt to automatically set this all up for you with a series of interactive prompts. This step requires an internet connection the first time you run it.
In addition, Trident provides a simple test function to verify that your install is functioning correctly. This function not only tries to set up your configuration and download your ion table file, but it will create a simple one-zone dataset, generate a ray through it, and create a spectrum from that ray. This should execute very quickly, and if it succeeds it demonstrates that your installation has been totally successful:
$ python
>>> import trident
>>> trident.verify()
...Series of Interactive Prompts...
If you cannot directly access the internet on this computer, or you lack write access to your $HOME directory, or this step fails for any reason, please follow our documentation on Manually Installing your Ionization Table.
Step 4: Science!
Congratulations, you’re now ready to use Trident! Please refer to the documentation for how to use it with your data or with one of our sample datasets. A good place to start is the annotated example, and the example scripts found in the source code.
Please join our mailing list or slack channel for announcements and updates when new features are added to the code.
Manually Installing your Ionization Table
If for some reason you are unable to install the config file and ionization table data automatically, you must set it up manually. When Trident runs, it looks for a configuration file called config.tri in the $HOME/.trident directory or alternatively in the current working directory (for users lacking write access to their $HOME directories). This configuration file is simple in that it tells Trident a few things about your install including the location and filename of your desired ionization table. Manually create a text file called config.tri with contents following the form:
[Trident]
ion_table_dir = ~/.trident
ion_table_file = hm2012_hr.h5
To manually obtain an ion table datafile, download and gunzip one from http://trident-project.org/data/ion_table . While the config.tri file needs to exist in your $HOME/.trident directory or in the working directory when you import trident, the ion_table datafile can exist anywhere on the file system. Just ensure that the config file points to the proper location and filename of the ion table datafile.
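For example, the following sketch shows one way to do this from the command line; the exact gzipped filename on the data server (hm2012_hr.h5.gz here, chosen to match the sample config above) is an assumption, so check the directory listing at the URL above first:
$ mkdir -p ~/.trident
$ cd ~/.trident
$ wget http://trident-project.org/data/ion_table/hm2012_hr.h5.gz
$ gunzip hm2012_hr.h5.gz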
Now, to confirm everything is working properly, verify your installation following Step 3: Get Ionization Table and Verify Installation. If this fails or you have additional problems, please contact our mailing list.
Uninstallation or Switching Code Versions
Uninstallation of the Trident source code is easy. If you installed the stable version of the code via pip, just run:
$ pip uninstall trident
If you installed the dev version of Trident, you’ll have to delete the source as well:
$ pip uninstall trident
$ rm -rf <YOUR_PATH_TO_TRIDENT_REPO>
If you want to switch between the stable and development versions, just uninstall your version of the code as above, and then install the desired version as described in Step 2: Install Trident.
To fully remove the code from your system, remember to remove any ion table datafiles you may have downloaded in your $HOME/.trident directory, and follow the instructions for how to uninstall yt.
Updating to the Latest Version
If you want more recent features, you should periodically update your Trident codebase.
Updating to the Latest Stable Release
If you installed the “stable” version of the code using pip, then you can easily update your trident and yt installations:
$ pip install -U trident
$ yt update
Updating to the Latest Development Version
If you installed the “development” version of the code, it’s slightly more involved:
$ cd <YOUR_PATH_TO_TRIDENT_REPO>
$ git pull origin main
$ pip install -e .
$ yt update
For more information on updating your yt installation, see the yt update instructions.
Annotated Example
The best way to get a feel for what Trident can do is to go through an annotated example of its use. This section will walk you through the steps necessary to produce a synthetic spectrum based on simulation data and to view its path through the parent dataset. The following example, available in the source code itself, can be applied to datasets from any of the different simulation codes that Trident and yt support, although it may require some tweaking of parameters for optimal performance. If you want to recreate the following analysis with the exact dataset used, it can be downloaded here.
The basic process for generating a spectrum and overplotting a sightline’s trajectory through the dataset has three steps, described in the subsections below: generating a LightRay through the dataset, overplotting the ray’s trajectory on a projection of the dataset, and generating a spectrum from the ray.
Simple LightRay Generation
A LightRay is a 1D object representing the path a ray of light takes through a simulation volume on its way from some bright background object to the observer. It records all of the gas fields it intersects along the way for use in many tasks, including construction of a spectrum.
In order to generate a LightRay from your data, you need to first make sure that you’ve imported both the yt and Trident packages, and specify the filename of the dataset from which to extract the light ray:
import yt
import trident
fn = 'enzo_cosmology_plus/RD0009/RD0009'
We need to decide the trajectory that the LightRay will take through our simulation volume. This arbitrary trajectory is specified with coordinates in code length units (e.g. [x_start, y_start, z_start] to [x_end, y_end, z_end]). Probably the simplest trajectory is cutting diagonally from the origin of the simulation volume to its outermost corner using the yt domain_left_edge and domain_right_edge attributes. Here we load the dataset into yt to get access to these attributes:
ds = yt.load(fn)
ray_start = ds.domain_left_edge
ray_end = ds.domain_right_edge
Let’s define what lines or species we want to be added to our final spectrum. In this case, we want to deposit all hydrogen, carbon, nitrogen, oxygen, and magnesium lines to the resulting spectrum from the dataset:
line_list = ['H', 'C', 'N', 'O', 'Mg']
We can now generate the light ray using the make_simple_ray() function by passing the dataset and the trajectory endpoints to it as well as telling trident to save the resulting ray dataset to an HDF5 file. We explicitly instruct trident to pull all necessary fields from the dataset in order to be able to add the lines from our line_list:
ray = trident.make_simple_ray(ds,
                              start_position=ray_start,
                              end_position=ray_end,
                              data_filename="ray.h5",
                              lines=line_list)
The resulting ray is a LightRay object, consisting of a series of arrays representing the different fields it probes in the original dataset along its length. Each element in the arrays represents a different resolution element along the path of the ray. The ray also possesses some special fields not originally present in the original dataset:
('gas', 'l')
Location along the LightRay length from 0 to 1.
('gas', 'dl')
Pathlength of resolution element (not a true pathlength for particle-based codes)
('gas', 'redshift')
Cosmological redshift of resolution element
('gas', 'redshift_dopp')
Doppler redshift of resolution element
('gas', 'redshift_eff')
Effective redshift (combined cosmological and Doppler)
Like any dataset, you can see what fields are present on the ray by examining its derived_field_list (e.g., print(ds.derived_field_list)). If you want more ions present on this ray than are currently shown, you can add them with add_ion_fields (see: Adding Ion Fields).
This ray object is also saved to disk as an HDF5 file, which can later be loaded into yt as a stand-alone dataset (e.g., ds = yt.load('ray.h5')).
Overplotting a LightRay’s Trajectory on a Projection
Here we create a projection of the density field along the x axis of the dataset, and then overplot the path the LightRay takes through the simulation, before saving it to disk. The annotate_ray() operation should work for any volumetric plot, including slices and off-axis plots:
p = yt.ProjectionPlot(ds, 'x', 'density')
p.annotate_ray(ray, arrow=True)
p.save('projection.png')

Calculating Column Densities
Perhaps we wish to know the total column density of a particular ion present along this LightRay. This can easily be done by multiplying the desired ion number density field by the pathlength field, dl, to yield an array of column densities for each resolution element, and then summing them together:
column_density_HI = ray.r[('gas', 'H_p0_number_density')] * ray.r[('gas', 'dl')]
print('HI Column Density = %g' % column_density_HI.sum())
Spectrum Generation
Now that we have our LightRay, we can use it to generate a spectrum. To create a spectrum, we need to make a SpectrumGenerator object defining our desired wavelength range and bin size. You can do this by manually setting these features, or just using one of the presets for an instrument. Currently, we have three pre-defined instruments, the G130M, G160M, and G140L observing modes for the Cosmic Origins Spectrograph aboard the Hubble Space Telescope: COS-G130M, COS-G160M, and COS-G140L. Notably, the instrument COS aliases to COS-G130M.
We then use this SpectrumGenerator object to make a raw spectrum according to the intersecting fields it encountered in the corresponding LightRay. We save this spectrum to disk, and plot it:
sg = trident.SpectrumGenerator('COS-G130M')
sg.make_spectrum(ray, lines=line_list)
sg.save_spectrum('spec_raw.txt')
sg.plot_spectrum('spec_raw.png')

From here we can do some post-processing to the spectrum to include additional features that would be present in an actual observed spectrum. We add a background quasar spectrum, a Milky Way foreground, apply the COS line spread function, and add gaussian noise with SNR=30:
sg.add_qso_spectrum()
sg.add_milky_way_foreground()
sg.apply_lsf()
sg.add_gaussian_noise(30)
Finally, we plot and save the resulting spectrum to disk:
sg.save_spectrum('spec_final.txt')
sg.plot_spectrum('spec_final.png')
which produces:

To create more complex or ion-specific spectra, refer to Advanced Spectrum Generation.
Compound LightRays
In some cases (e.g. studying redshift evolution of the IGM), it may be desirable to create a LightRay that covers a range in redshift that is larger than the domain width of a single simulation snapshot. Rather than simply sampling the same dataset repeatedly, which is inherently unphysical since large scale structure evolves with cosmic time, Trident allows the user to create a ray that samples multiple datasets from different redshifts to produce a much longer ray that is continuous in redshift space. This is done by using the make_compound_ray() function. This function is similar to the previously mentioned make_simple_ray() function, but instead of accepting an individual dataset, it takes a simulation parameter file, the associated simulation type, and the desired range in redshift to be probed by the ray, while still allowing the user to specify the same sort of line list as before:
fn = 'enzo_cosmology_plus/AMRCosmology.enzo'
ray = trident.make_compound_ray(fn, simulation_type='Enzo',
                                near_redshift=0.0, far_redshift=0.1,
                                lines=line_list)
In this example, we’ve created a ray from an Enzo simulation (the same one used above) that goes from z = 0 to z = 0.1. This ray can now be used to generate spectra in the exact same ways as before.
Obviously, there need to be sufficient simulation outputs over the desired redshift range of the compound ray in order to have continuous sampling. To ensure adequate simulation output frequency for this, one can use yt’s plan_cosmology_splice() function. See an example of its usage in the yt_astro_analysis documentation.
We encourage you to look at the detailed documentation for make_compound_ray() in the API Reference section to understand how to control how the ray itself is constructed from the available data.
Note
The compound ray functionality has only been implemented for the Enzo and Gadget simulation codes (and Gadget’s derivatives including Gizmo and AREPO). If you would like to help us implement this functionality for your simulation code, please contact us about this on the mailing list.
Adding Ion Fields
In addition to being able to create absorption spectra, Trident can be used to postprocess datasets to add fields for ions not explicitly tracked in the simulation. These can later be analyzed using the standard yt analysis packages. This page provides some examples as to how these fields can be generated and analyzed.
How does it work?
When you installed Trident, you were forced to download an ion table, a data table consisting of dimensions in density, temperature, and redshift. This ion table was constructed by running many independent Cloudy instances to approximate the ionization states of all ionic species of the first 30 elements. The ionic species were calculated assuming collisional ionization equilibrium based on different density and temperature values and photoionization from a metagalactic ultraviolet background unique to each ion table. The currently preferred ion table uses the Haardt-Madau 2012 model. You can change your default ionization model by changing your config file (see: Manually Installing your Ionization Table), or by specifying it directly with the ionization_table keyword of the functions described below.
By following the process below, you will add different ion fields to your dataset based on the above assumptions using the dataset’s redshift, and the values of density, temperature, and metallicity found for each gas parcel in your dataset.
Generating species fields
As always, we first need to import yt and Trident and then we load up a dataset:
import yt
import trident
fn = 'enzo_cosmology_plus/RD0009/RD0009'
ds = yt.load(fn)
To add ion fields we use the add_ion_fields function. This will add fields for whatever ions we specify, in the form of:
Ion fraction field, e.g. Mg_p1_ion_fraction
Number density field, e.g. Mg_p1_number_density
Density field, e.g. Mg_p1_density
Mass field, e.g. Mg_p1_mass
Note
Trident follows yt’s naming convention for atomic, molecular, and ionic species fields. In short, the ionic prefix consists of the element and the number of times ionized it is: e.g. H I = H_p0, Mg II = Mg_p1, O VI = O_p5 (p is for plus).
Let’s add fields for O VI (five-times-ionized oxygen):
trident.add_ion_fields(ds, ions=['O VI'])
To show how one can use this newly generated field, we’ll make a projection of the O VI number density field to show its column density map:
proj = yt.ProjectionPlot(ds, "z", "O_p5_number_density")
proj.save()
which produces:

We can similarly create a phase plot to show where the O VI mass lives as a function of density and temperature:
# we need to create a data object from the dataset to make a phase plot
ad = ds.all_data()
phase = yt.PhasePlot(ad, "density", "temperature", ["O_p5_mass"],
                     weight_field="O_p5_mass", fractional=True)
phase.save()
resulting in:

Light Ray Generator
The LightRay is the one-dimensional object representing the pencil beam of light traveling from the source to the observer. Light rays can stack multiple datasets together to span a redshift interval larger than the simulation box.

The preferred manner for generating rays is to use make_simple_ray() for LightRays spanning a single dataset and make_compound_ray() for LightRays spanning multiple datasets.
Simple Rays
For a simple ray, you specify the dataset to use, the start and end coordinates of your 1D line, and then optionally any additional fields you want stored on the LightRay or optionally any ionic species you will want to use with this ray:
import yt
import trident
ds = yt.load('FIRE_M12i_ref11/snapshot_600.hdf5')
ray = trident.make_simple_ray(ds,
                              start_position=[0, 0, 0],
                              end_position=[60000, 60000, 60000],
                              lines=['H', 'Mg', 'O'],
                              fields=[('gas', 'temperature'), ('gas', 'metallicity')])
Compound Rays
For a compound ray, you specify the simulation parameter filename, the simulation code, the start and end redshifts of the LightRay, and optionally any additional fields you want stored or any ionic species you will want to use with this ray:
import trident
fn = 'enzo_cosmology_plus/AMRCosmology.enzo'
ray = trident.make_compound_ray(fn,
                                simulation_type='Enzo',
                                near_redshift=0.0,
                                far_redshift=0.1,
                                lines=['H', 'Mg', 'O'],
                                fields=[('gas', 'temperature'), ('gas', 'metallicity')])
Ray Fields
The resulting ray is a LightRay object, consisting of a series of arrays representing the different fields it probes in the original dataset along its length. Each element in the arrays represents a different resolution element along the path of the ray. The ray also possesses some special fields not originally present in the original dataset:
('gas', 'l')
Location along the LightRay length from 0 to 1.
('gas', 'dl')
Pathlength of resolution element (not a true pathlength for particle-based codes)
('gas', 'redshift')
Cosmological redshift of resolution element
('gas', 'redshift_dopp')
Doppler redshift of resolution element
('gas', 'redshift_eff')
Effective redshift (combined cosmological and Doppler)
Like any dataset, you can see what fields are present on the ray by examining its derived_field_list (e.g., print(ds.derived_field_list)). If you want more ions present on this ray than are currently available, you can add them with add_ion_fields (see: Adding Ion Fields).
This ray object is also saved to disk as an HDF5 file, which can later be loaded into yt as a stand-alone dataset. By default it is saved as ray.h5, but you can specify other filenames when you create it. To later access this file and load it into yt, load it like any other dataset: ds = yt.load('ray.h5').
Calculating Column Densities
Perhaps we wish to know the total column density of a particular ion present along this LightRay. This can easily be done by multiplying the desired ion number density field by the pathlength field, dl, to yield an array of column densities for each resolution element, and then summing them together:
column_density_HI = ray.r[('gas', 'H_p0_number_density')] * ray.r[('gas', 'dl')]
print('HI Column Density = %g' % column_density_HI.sum())
Examining LightRay Solution Data
When a LightRay is created, it saves the source information from the dataset that produced it in a dictionary, including its filename, its start and end points in the original dataset, etc. This is all accessible when you load up the LightRay again through the light_ray_solution:
import yt
ds = yt.load('ray.h5')
print(ds.light_ray_solution)
[{'end': unyt_array([1., 1., 1.], 'unitary'),
'filename': 'snapshot_600.hdf5',
'redshift': 0.05,
'start': unyt_array([0.48810148, 0.51748806, 0.54316002], 'unitary'),
'traversal_box_fraction': unyt_quantity(0.83878521, 'unitary'),
'unique_identifier': '1436307563512020127'}]
Useful Tips for Making Compound LightRays
Below are some tips that may come in handy for creating proper LightRays. For full use of these, you may have to create the LightRay by hand instead of using the make_compound_ray() helper function.
How many snapshots do I need for a compound ray?
The number of snapshots required to traverse some redshift interval depends on the simulation box size and cosmological parameters. Before running an expensive simulation only to find out that you don’t have enough outputs to span the redshift interval you want, have a look at the guide Planning Simulations for LightCones or LightRays. The functionality described there will allow you to calculate the precise number of snapshots and specific redshifts at which they should be written.
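As a rough sketch of that planning step, one can use the CosmologySplice class from yt_astro_analysis; note that the import path and the example parameter filename below are assumptions, so consult the yt_astro_analysis documentation for the authoritative usage:
from yt.extensions.astro_analysis.cosmological_observation.api import CosmologySplice

# Plan which outputs an Enzo run would need to continuously cover z = 0 to 0.1
# (parameter file reused from the compound ray example in these docs).
splice = CosmologySplice('enzo_cosmology_plus/AMRCosmology.enzo', 'Enzo')
plan = splice.plan_cosmology_splice(0.0, 0.1, filename='output_plan.txt')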
My snapshots are too far apart!
The max_box_fraction keyword, provided when creating the LightRay, allows the user to control how long a ray segment can be for an individual dataset. By default, the LightRay generator will try to make segments no longer than the size of the box to avoid sampling the same structures more than once. However, this can be increased in the case that the redshift interval between datasets is longer than the box size. Increasing this value should be done with caution, as longer ray segments run a greater risk of coming back to somewhere near their original position.
What if I have a zoom-in simulation?
A zoom-in simulation has a high resolution region embedded within a larger, low resolution volume. In this type of simulation, it is likely that you will want the ray segments to stay within the high resolution region. To do this, you must first specify the size of the high resolution region when creating the LightRay using the max_box_fraction keyword. This will make sure that the calculation of the spacing of the segment datasets only takes into account the high resolution region and not the full box size. If your high resolution region is not a perfect cube, specify the smallest side. Then, in the call to make_light_ray(), use the left_edge and right_edge keyword arguments to specify the precise location of the high resolution region.
Technically speaking, the ray segments should no longer be periodic since the high resolution region is only a sub-volume within the larger domain. To make the ray segments non-periodic, set the periodic keyword to False. The LightRay generator will continue to generate randomly oriented segments until it finds one that fits entirely within the high resolution region. If you have a high resolution region that can move and change shape slightly as structure forms, use the min_level keyword to mandate that the ray segment only pass through cells that are refined to at least some minimum level.
If the size of the high resolution region is not large enough to span the required redshift interval, the LightRay generator can be configured to treat the high resolution region as if it were periodic simply by setting the periodic keyword to True. This option should be used with caution as it will lead to the creation of disconnected ray segments within a single dataset.
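Putting these pieces together, below is a minimal sketch of building a compound ray by hand for a zoom-in run. It assumes the LightRay class and its make_light_ray() method accept the keywords discussed above; the parameter filename, edge coordinates, and refinement level are placeholders for your own simulation:
import trident

# Hypothetical zoom-in run whose high resolution region spans 0.25-0.75 of the
# domain in each dimension (i.e., half the box, so max_box_fraction=0.5).
lr = trident.LightRay('sim_parameter_file.par', simulation_type='Gadget',
                      near_redshift=0.0, far_redshift=0.1,
                      max_box_fraction=0.5)
lr.make_light_ray(seed=12345,
                  left_edge=[0.25, 0.25, 0.25],
                  right_edge=[0.75, 0.75, 0.75],
                  periodic=False,   # keep segments inside the zoom region
                  min_level=3,      # only accept cells refined to at least this level
                  fields=[('gas', 'temperature'), ('gas', 'metallicity')],
                  data_filename='zoom_ray.h5')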
I want a continuous trajectory over the entire ray.
Set the minimum_coherent_box_fraction keyword argument to a very large number, like infinity (numpy.inf).
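For instance, assuming the minimum_coherent_box_fraction keyword is passed to the LightRay constructor when building a compound ray by hand (a sketch with placeholder filenames):
import numpy as np
import trident

# Never re-randomize the trajectory between segments: one coherent path.
lr = trident.LightRay('sim_parameter_file.par', simulation_type='Enzo',
                      near_redshift=0.0, far_redshift=0.1,
                      minimum_coherent_box_fraction=np.inf)
lr.make_light_ray(seed=12345,
                  fields=[('gas', 'temperature')],
                  data_filename='coherent_ray.h5')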
Advanced Spectrum Generation
In addition to generating a basic spectrum as demonstrated in the annotated example, the user can also customize the generated spectrum in a variety of ways: one can choose which spectral lines to deposit, change the characteristics of the spectrograph, and more. The following code goes through the process of setting these properties and shows what impact it has on resulting spectra.
For this demonstration, we’ll be using a light ray passing through a very dense disk of gas, taken from the initial output from the AGORA isolated box simulation using ART-II in Kim et al. (2016). If you’d like to try to reproduce the spectra included below, you can get the LightRay file from the Trident sample data using the command:
$ wget http://trident-project.org/data/sample_data/ART-II_ray.h5
Now, we’ll load up the ray using yt:
import yt
import trident
ray = yt.load('ART-II_ray.h5')
Setting the spectrograph
Let’s set the characteristics of the spectrograph we will use to create this spectrum. We can either choose the wavelength range and resolution and line spread function explicitly, or we can choose one of the preset instruments that come with Trident. To list the presets and their respective values, use this command:
print(trident.valid_instruments)
Currently, we have three settings for the Cosmic Origins Spectrograph available: COS-G130M, COS-G140L, and COS-G160M, but we plan to add more instruments soon. To use one of them, we just use the name string in the SpectrumGenerator class:
sg = trident.SpectrumGenerator('COS-G130M')
But instead, let’s just set our wavelength range manually from 1150 angstroms to 1250 angstroms with a resolution of 0.01 angstroms:
sg = trident.SpectrumGenerator(lambda_min=1150, lambda_max=1250, dlambda=0.01)
From here, we can pass the ray to the SpectrumGenerator object to use in the construction of a spectrum.
Choosing what absorption features to include
There is a LineDatabase class that controls which spectral lines you can add to your spectrum. Trident provides you with a default LineDatabase with 213 spectral lines commonly used in CGM and IGM studies, but you can create your own LineDatabase with different lines. To see a list of all the lines included in the default line list:
ldb = trident.LineDatabase('lines.txt')
print(ldb)
which reads lines from the ‘lines.txt’ file present in the data directory (see: Where is Trident installed?). We can specify any subset of these spectral lines to use when creating the spectrum from our master line list. So if you’re interested in just looking at neutral hydrogen lines in your spectrum, you can see what lines will be included with the command:
print(ldb.parse_subset('H I'))
As a first pass, we’ll create a spectrum that just include lines produced by hydrogen:
sg.make_spectrum(ray, lines=['H'])
sg.plot_spectrum('spec_H.png')
The resulting spectrum contains a nice, big Lyman-alpha feature.

If, instead, we want to show the lines that would be in our spectral range due to carbon, nitrogen, and oxygen, we can do the following:
sg.make_spectrum(ray, lines=['C', 'N', 'O'])
sg.plot_spectrum('spec_CNO.png')
And now we have:

We can see how these two spectra combine when we include all of the same lines:
sg.make_spectrum(ray, lines=['H', 'C', 'N', 'O'])
sg.plot_spectrum('spec_HCNO.png')
which gives:

We can get even more specific, by generating a spectrum that only contains lines due to a single ion species. For example, we might just want the lines from four-times-ionized nitrogen, N V:
sg.make_spectrum(ray, lines=['N V'])
sg.plot_spectrum('spec_NV.png')
This spectrum only shows a couple of small lines on the right hand side.

But if that level of specificity isn’t enough, we can request individual lines:
sg.make_spectrum(ray, lines=['C I 1193', 'C I 1194'])
sg.plot_spectrum('spec_CI_1193_1194.png')
And we end up with:

Or we can just include all of the available lines in our LineDatabase with:
sg.make_spectrum(ray, lines='all')
sg.plot_spectrum('spec_all.png')
Giving us:

To understand how to further customize your spectra, look at the documentation for the SpectrumGenerator and LineDatabase classes and other API documentation.
Setting Wavelength Bounds Automatically
If you are interested in creating a spectrum that contains all possible absorption features for a given set of lines, the SpectrumGenerator can be configured to automatically enlarge the wavelength window until all absorption is captured. This is done by setting the lambda_min and lambda_max keywords to 'auto' and specifying a bin size with the dlambda keyword:
sg = trident.SpectrumGenerator(lambda_min='auto', lambda_max='auto',
                               dlambda=0.01)
sg.make_spectrum("ray.h5", lines=['H I 1216'])
sg.plot_spectrum('spec_auto.png')

Note, the above example is for a different ray than is used in the previous examples. The resulting spectrum will minimally contain all absorption present in the ray. This should be used with care when depositing multiple lines as this can lead to an extremely large spectrum.
Making Spectra in Velocity Space
Trident can be configured to create spectra in velocity space instead of wavelength space, where velocity corresponds to the velocity offset from the rest wavelength of a given line. This can be done by providing the keyword bin_space='velocity' to the SpectrumGenerator:
sg = trident.SpectrumGenerator(lambda_min='auto', lambda_max='auto',
                               dlambda=1., bin_space='velocity')
sg.make_spectrum("ray.h5", lines=['H I 1216'])
sg.plot_spectrum('spec_velocity.png')

When working in velocity space, limits and bin sizes should be provided in km/s. If more than one transition is added to the spectrum (e.g., Ly-a and Ly-b), the zero point will correspond to the rest wavelength of the first transition added.
Making Spectra from a Subset of a Ray
The situation may arise where you want to see the spectrum that is generated by only a portion of the gas along a line of sight. For example, you may want to see the spectrum of only the cold gas. This can be done by creating a YTCutRegion from a loaded ray dataset:
import trident
import yt
ds = yt.load('ray.h5')
all_data = ds.all_data()
cold_gas = ds.cut_region(all_data, 'obj["gas", "temperature"] < 10000')
sg = trident.SpectrumGenerator(lambda_min=1200, lambda_max=1225,
                               dlambda=0.01)
# spectrum of entire ray
sg.make_spectrum(all_data, lines=['H I 1216'])
all_spectrum = sg.flux_field[:]
# spectrum of cold gas
sg.make_spectrum(cold_gas, lines=['H I 1216'])
cold_spectrum = sg.flux_field[:]
trident.plot_spectrum(sg.lambda_field, [all_spectrum, cold_spectrum],
                      label=['all gas', 'cold gas'], stagger=None)

Fitting Absorption Spectra
This tool can be used to fit absorption spectra, particularly those generated using Trident. For more details on its uses and implementation, please see Egan et al. (2013). If you find this tool useful, we encourage you to cite it accordingly.
Loading an Absorption Spectrum
To load an absorption spectrum created by SpectrumGenerator, specify the output file name. It is advisable to use either an .h5 or .fits file, rather than an ascii file, to save the spectrum, as rounding errors produced in saving to an ascii file will negatively impact fit quality.
import h5py

f = h5py.File('spectrum.h5', 'r')
wavelength = f["wavelength"][:]
flux = f['flux'][:]
f.close()
Specifying Species Properties
Before fitting a spectrum, you must specify the properties of all the species included when generating the spectrum.
The physical properties needed for each species are the rest wavelength, f-value, gamma value, and atomic mass. These will be the same values as used to generate the initial absorption spectrum. These values are given in list form as some species generate multiple lines (as in the OVI doublet). The number of lines is also specified on its own.
To fine tune the fitting procedure and give results in a minimal number of optimizing steps, we specify expected maximum and minimum values for the column density, doppler parameter, and redshift. These values can be well outside the range of expected values for a typical line and are mostly to prevent the algorithm from fitting to negative values or becoming numerically unstable.
Common initial guesses for doppler parameter and column density should also be given. These values will not affect the specific values generated by the fitting algorithm, provided they are in a reasonably appropriate range (i.e., within the range given by the max and min values for the parameter).
For a spectrum containing both the H Lyman-alpha line and the OVI doublet, we set up a fit as shown below.
HI_parameters = {'name': 'HI',
                 'f': [.4164],
                 'Gamma': [6.265E8],
                 'wavelength': [1215.67],
                 'numLines': 1,
                 'maxN': 1E22, 'minN': 1E11,
                 'maxb': 300, 'minb': 1,
                 'maxz': 6, 'minz': 0,
                 'init_b': 30,
                 'init_N': 1E14}
OVI_parameters = {'name': 'OVI',
                  'f': [.1325, .06580],
                  'Gamma': [4.148E8, 4.076E8],
                  'wavelength': [1031.9261, 1037.6167],
                  'numLines': 2,
                  'maxN': 1E17, 'minN': 1E11,
                  'maxb': 300, 'minb': 1,
                  'maxz': 6, 'minz': 0,
                  'init_b': 20,
                  'init_N': 1E12}
speciesDicts = {'HI': HI_parameters, 'OVI': OVI_parameters}
Generating Fit of Spectrum
After loading a spectrum and specifying the properties of the species used to generate the spectrum, an appropriate fit can be generated.
from trident.absorption_spectrum.absorption_spectrum_fit import generate_total_fit
orderFits = ['OVI','HI']
fitted_lines, fitted_flux = generate_total_fit(wavelength, flux,
                                               orderFits, speciesDicts)
The orderFits variable is used to determine in what order the species should be fitted. This may affect the results of the fit, as lines may be fit as an incorrect species. For best results, it is recommended to fit species that generate multiple lines first, since a fit will only be accepted if all of the lines are fit appropriately using a single set of parameters. At the moment, no cross correlation between lines of different species is performed.
The parameters of the lines that are needed to fit the spectrum are contained in the fitted_lines variable. Each species given in orderFits will be a key in the fitted_lines dictionary. The entry for each species key will be another dictionary containing entries for 'N', 'b', 'z', and 'group#', which are the column density, doppler parameter, redshift, and associated line complex respectively. The i-th line of a given species is then given by the parameters N[i], b[i], and z[i], and is part of the same complex (and was fitted at the same time) as all lines with the same group number as group#[i].
The fitted_flux is an ndarray of the same size as flux and wavelength that contains the cumulative absorption spectrum generated by the lines contained in fitted_lines.
Saving a Spectrum Fit
Saving the results of a fitted spectrum for further analysis is accomplished automatically using the h5 file format. A group is made for each species that is fit, and each species group has a group for the corresponding N, b, z, and group# values.
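As an illustration of that layout, the following sketch reads such a file back with h5py; the filename 'spectrum_fit.h5' is a placeholder for whatever output file your fitting script actually wrote:
import h5py

# One group per fitted species, each containing the 'N', 'b', 'z',
# and 'group#' values for its fitted lines.
with h5py.File('spectrum_fit.h5', 'r') as f:
    for species in f:
        print(species, list(f[species]))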
Procedure for Generating Fits
To generate a fit for a spectrum, generate_total_fit() is called. This function controls the identification of line complexes, the fit of a series of absorption lines for each appropriate species, checks of those fits, and returns the results of the fits.
Finding Line Complexes
Line complexes are found using the _find_complexes function. The process by which line complexes are found involves walking through the array of flux in order from minimum to maximum wavelength, and finding series of spatially contiguous cells whose flux is less than some limit. These regions are then checked in terms of an additional flux limit and size. The bounds of all the passing regions are then listed and returned. Those bounds that cover an exceptionally large region of wavelength space will be broken up if a suitable cut point is found. This method is only appropriate for noiseless spectra.
The optional parameter complexLim (default = 0.999) controls the limit that triggers the identification of a spatially contiguous region of flux that could be a line complex. This number should be very close to 1 but not exactly equal. It should also be at least an order of magnitude closer to 1 than the later discussed fitLim parameter, because a line complex where the flux of the trough is very close to the flux of the edge can be incredibly unstable when optimizing.
The fitLim parameter controls the maximum flux that the trough of the region can have and still be considered a line complex. This effectively controls the sensitivity to very low column absorbers. The default value is fitLim = 0.99. If a region is identified where the flux of the trough is greater than this value, the region is simply ignored.
The minLength parameter controls the minimum number of array elements that an identified region must have. This value must be greater than or equal to 3, as there are a minimum of 3 free parameters that must be fit. Default is minLength = 3.
The maxLength parameter controls the maximum number of array elements that an identified region can have before it is split into separate regions. Default is maxLength = 1000. This should be adjusted based on the resolution of the spectrum to remain appropriate. The default value corresponds to a wavelength range of roughly 50 angstroms.
The splitLim parameter controls how exceptionally large regions are split. When such a region is identified by having more array elements than maxLength, the point of maximum flux (or minimum absorption) in the middle two quartiles is identified. If that point has a flux greater than or equal to splitLim, then two separate complexes are created: one from the lower wavelength edge to the minimum absorption point and the other from the minimum absorption point to the higher wavelength edge. The default value is splitLim = 0.99, but it should not drastically affect results, so long as the value is reasonably close to 1.
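To make the procedure above concrete, here is a toy numpy sketch of the region-finding idea using the complexLim, fitLim, and minLength parameters; it is not the actual _find_complexes implementation and omits the maxLength/splitLim splitting step:
import numpy as np

def find_complexes_sketch(flux, complexLim=0.999, fitLim=0.99, minLength=3):
    """Return (start, end) index bounds of candidate line complexes."""
    below = flux < complexLim                 # cells dipping below the trigger limit
    edges = np.diff(below.astype(int))
    starts = np.where(edges == 1)[0] + 1      # rising edges of contiguous runs
    ends = np.where(edges == -1)[0] + 1       # falling edges
    if below[0]:
        starts = np.insert(starts, 0, 0)
    if below[-1]:
        ends = np.append(ends, len(flux))
    regions = []
    for lo, hi in zip(starts, ends):
        # keep regions that are long enough and whose trough is deep enough
        if hi - lo >= minLength and flux[lo:hi].min() <= fitLim:
            regions.append((lo, hi))
    return regions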
Fitting a Line Complex
After a complex is identified, it is fitted by iteratively adding and optimizing a set of Voigt Profiles for a particular species until the region is considered successfully fit. The optimizing is accomplished using scipy’s least squares optimizer. This requires an initial estimate of the parameters to be fit (column density, b-value, redshift) for each line.
Each time a line is added, the guess of the parameters is based on the difference between the line complex and the fit so far. For the first line, this just means the initial guess is based solely on the flux of the line complex. The column density is given by the initial column density given in the species parameters dictionary. If the line is saturated (some portion of the flux has a value less than .1), then the larger initial column density guess is chosen. If the flux is relatively high (all values > .9), then the smaller initial guess is given. These values are chosen to make optimization faster and more stable by being closer to the actual value, but the final results of fitting should not depend on them as they merely provide a starting point.
After the parameters for a line are optimized for the first time, the optimized parameters are then used for the initial guess on subsequent iterations with more lines.
The complex is considered successfully fit when the sum of the squares of the difference between the flux generated from the fit and the desired flux profile is less than errBound. errBound is related to the optional generate_total_fit() parameter maxAvgError by the number of array elements in the region, such that errBound = (number of elements) * maxAvgError.
There are several other conditions under which the cycle of adding and optimizing lines will halt. If the error of the optimized fit from adding a line is an order of magnitude worse than the error of the fit without that line, then it is assumed that the fitting has become unstable and the latest line is removed. Lines are also prevented from being added if the total number of lines is greater than the number of elements in the flux array being fit divided by 3. This is because there must not be more free parameters in a fit than the number of points to constrain them.
Checking Fit Results
After an acceptable fit for a region is determined, there are several steps the algorithm must go through to validate the fits.
First, the parameters must be in a reasonable range. This is a check to make sure that the optimization did not become unstable and generate a fit that diverges wildly outside the region where the fit was performed. This way, even if a particular complex cannot be fit, the rest of the spectrum fitting still behaves as expected. The range of acceptability for each parameter is given in the species parameter dictionary. These are merely broad limits that will prevent numerical instability rather than physical limits.
In cases where a single species generates multiple lines (as in the OVI doublet), the fits are then checked for higher wavelength lines. Originally the fits are generated only considering the lowest wavelength fit to a region. This is because we perform the fitting of complexes in order from the lowest wavelength to the highest, so any contribution to a complex being fit must come from the lower wavelength as the higher wavelength contributions would already have been subtracted out after fitting the lower wavelength.
Saturated Lyman Alpha Fitting Tools
In cases where a large or saturated line (there exists a point in the complex where the flux is less than .1) fails to be fit properly at first pass, a more robust set of fitting tools is used to try and remedy the situation. The basic approach is to simply try a much wider range of initial parameter guesses in order to find the true optimization minimum, rather than getting stuck in a local minimum. A set of hard-coded initial parameter guesses for Lyman alpha lines is given by the _get_test_lines function. Also included in these parameter guesses is an initial guess of a high column cool line overlapping a lower column warm line, indicative of a broad Lyman alpha (BLA) absorber.
Internals and Extensions
These internal classes and related extensions used to be part of yt but are now contained within Trident. The internal classes are documented below and can be used independently, but the primary Trident interfaces outlined in the main documentation are recommended.
Internal Classes
AbsorptionSpectrum
For documentation on the main interface to spectrum creation in Trident, see Spectrum Generation.
The AbsorptionSpectrum is the internal class for creating absorption spectra in Trident from LightRay objects. The AbsorptionSpectrum and its workhorse method make_spectrum() return two arrays, one with wavelengths, the other with the normalized flux values at each of the wavelength values. It can also output a text file listing all important lines.
Method for Creating Absorption Spectra
Once a LightRay has been created traversing a dataset using the Light Ray Generator, a series of arrays store the various fields of the gas parcels (represented as cells) intersected along the ray. AbsorptionSpectrum steps through each element of the LightRay’s arrays and calculates the column density for the desired ion by multiplying its number density with the path length through the cell. Using these column densities along with temperatures to calculate thermal broadening, Voigt profiles are deposited onto a featureless background spectrum. By default, the peculiar velocity of the gas is included as a doppler redshift in addition to any cosmological redshift of the data dump itself.
Subgrid Deposition
For features not resolved (i.e. possessing narrower width than the spectral resolution), AbsorptionSpectrum performs subgrid deposition. The subgrid deposition algorithm creates a number of smaller virtual bins; by default, the width of the virtual bins is 1/10th the width of the spectral feature. The Voigt profile is then deposited into these virtual bins where it is resolved, and then these virtual bins are numerically integrated back to the resolution of the original spectral bin size, yielding accurate equivalent width values. AbsorptionSpectrum informs the user how many spectral features are deposited in this fashion.
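As a toy illustration of this idea (not Trident's actual deposition code), the sketch below deposits a narrow Gaussian absorption feature, standing in for a Voigt profile, onto a 10x finer grid of virtual bins and then averages back down to the coarse spectral bins:
import numpy as np

n_coarse = 100                 # coarse spectral bins
n_sub = 10                     # virtual bins per spectral bin
lam = np.linspace(1215.0, 1216.0, n_coarse * n_sub)            # fine wavelength grid (Angstroms)
tau = 5.0 * np.exp(-((lam - 1215.67) / 0.005)**2)              # narrow absorption feature
flux_fine = np.exp(-tau)                                       # resolved flux on virtual bins
flux_coarse = flux_fine.reshape(n_coarse, n_sub).mean(axis=1)  # integrate back to coarse bins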
Creating an Absorption Spectrum
Initialization
To instantiate an AbsorptionSpectrum object, the arguments required are the minimum and maximum wavelengths (assumed to be in Angstroms), and the number of wavelength bins to span this range (including the endpoints):
from trident.absorption_spectrum.absorption_spectrum import AbsorptionSpectrum
sp = AbsorptionSpectrum(900.0, 1800.0, 10001)
Adding Features to the Spectrum
Absorption lines and continuum features can then be added to the spectrum. To add a line, you must know some properties of the line: the rest wavelength, f-value, gamma value, and the atomic mass in amu of the atom. That line must be tied in some way to a field in the dataset you are loading, and this field must be added to the LightRay object when it is created. Below, we will add the H Lyman-alpha line, which is tied to the neutral hydrogen field (‘H_p0_number_density’).
my_label = 'HI Lya'
field = 'H_p0_number_density'
wavelength = 1215.6700 # Angstroms
f_value = 4.164E-01
gamma = 6.265e+08
mass = 1.00794
sp.add_line(my_label, field, wavelength, f_value, gamma, mass, label_threshold=1.e10)
In the call to add_line(), the field argument tells the spectrum generator which field from the ray data to use to calculate the column density. The label_threshold keyword tells the spectrum generator to add all lines above a column density of 10^10 cm^-2 to the text line list output at the end. If None is provided, as is the default, no lines of this type will be added to the text list.
Continuum features with optical depths that follow a power law can be added with the add_continuum() function. Like adding lines, you must specify details like the wavelength and the field in the dataset and LightRay that is tied to this feature. The wavelength refers to the location at which the continuum begins to be applied to the dataset, and as it moves to lower wavelength values, the optical depth value decreases according to the defined power law. The normalization value is the column density of the linked field which results in an optical depth of 1 at the defined wavelength. Below, we add the hydrogen Lyman continuum.
my_label = 'HI Lya'
field = 'H_p0_number_density'
wavelength = 912.323660 # Angstroms
normalization = 1.6e17
index = 3.0
sp.add_continuum(my_label, field, wavelength, normalization, index)
Making the Spectrum
Once all the lines and continuua are added, the spectrum is made with the
make_spectrum()
function.
wavelength, flux = sp.make_spectrum('lightray.h5',
                                    output_file='spectrum.fits',
                                    line_list_file='lines.txt')
A spectrum will be made using the specified ray data, and the wavelength and flux arrays will also be returned. If you set the optional use_peculiar_velocity keyword to False, the lines will not incorporate doppler redshifts to shift the deposition of the line features.
Three output file formats are supported for writing out the spectrum: fits, hdf5, and ascii. The file format used is based on the extension provided in the output_file keyword: .fits for a fits file, .h5 for an hdf5 file, and anything else for an ascii file.
Note
To write out a fits file, you must install the astropy python library in order to access the astropy.io.fits module. You can usually do this by simply running pip install astropy at the command line.
Generating Spectra in Parallel
Spectrum generation is parallelized using a multi-level strategy where each absorption line is deposited by a different processor. If the number of available processors is greater than the number of lines, then the deposition of individual lines will be divided over multiple processors.
Absorption spectrum creation can be run in parallel simply by adding the following to the top of the script and running with mpirun:
import yt
yt.enable_parallelism()
For more information on parallelism in yt, see Parallel Computation With yt.
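For example, assuming your spectrum-generation script is named make_spectrum.py (a placeholder), you would launch it on four processors with:
$ mpirun -np 4 python make_spectrum.py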
Extensions
Testing
We maintain a series of tests in Trident to make sure the code gives consistent results and to catch accidental breakages in our source code and dependencies. These tests are run by Travis automatically and regularly to assure consistency in functionality, but you can run them locally too (see below). The tests consist of a mix of unit tests (tests to assure Trident functions don’t actively fail) and answer tests (tests comparing newly generated results against some old established results to assure consistency).
Running the Test Suite
Running the test suite requires a version of Trident installed from source (see Step 2: Install Trident).
The tests are run using the pytest Python module. This can be installed with conda or pip:
$ conda install pytest
The test suite requires a number of datasets for testing functionality. Trident comes with a helper script that will download all the datasets and untar them. Before running this, make sure you have the answer_test_data_dir variable set in your config file (see Step 3: Get Ionization Table and Verify Installation). This variable should point to a directory where these datasets will be stored.
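For example, assuming the variable lives alongside the ion table settings in the [Trident] section of config.tri (the path below is a placeholder), the config file might look like:
[Trident]
ion_table_dir = ~/.trident
ion_table_file = hm2012_hr.h5
answer_test_data_dir = /path/to/answer_test_data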
The helper script is located in the tests directory of the Trident source:
$ cd tests
$ python download_test_data.py
If this is your first time running the tests, then you need to generate a “gold standard” for the answer tests. Follow Generating Gold Standard Answer Test Results for Comparison before continuing with running the tests, otherwise your answer tests will fail.
Make sure you’re on the desired version of yt and trident that you want to
test and use (usually the tip of the development branch i.e., main
).
$ export TRIDENT_GENERATE_TEST_RESULTS=0
$ cd /path/to/yt/
$ git checkout main
$ pip install -e .
$ cd /path/to/trident
$ git checkout main
$ pip install -e .
The test suite is run by calling py.test from within the tests directory:
$ cd tests
$ py.test
============================= test session starts ==============================
platform darwin -- Python 3.6.0, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: /Users/britton/Documents/work/yt/extensions/trident/trident, inifile:
collected 52 items
test_absorption_spectrum.py ..........
test_download.py .
test_generate.py .
test_instrument.py .
test_ion_balance.py ............
test_light_ray.py .....
test_line_database.py .......
test_lsf.py ....
test_pipelines.py ...
test_plotting.py .
test_ray_generator.py .
test_spectrum_generator.py ......
========================= 52 passed in 117.32 seconds ==========================
If a test fails for some reason, you will be given a detailed traceback and reason for it failing. You can use this to identify what is wrong with your source or perhaps a change in the code of your dependencies. The tests should take around ten minutes to run.
Generating Gold Standard Answer Test Results for Comparison
In order to assure the Trident codebase gives consistent results over time, we compare the outputs of tests of new versions of Trident against an older, vetted version of the code we think gives accurate results. To create this “gold standard” result from the older version of the code, you must roll the Trident and yt source back to the older “trusted” versions of the code. You can find the tags for the most recent trusted versions of the code by running gold_standard_versions.py and then rebuilding yt and Trident with these versions of the code. Lastly, set the TRIDENT_GENERATE_TEST_RESULTS environment variable to 1 and run the tests:
$ cd tests
$ python gold_standard_versions.py
Latest Gold Standard Commit Tags
yt = 953248239966
Trident = test-standard-v2
To update to them, `git checkout <tag>` in appropriate repository
$ cd /path/to/yt
$ git checkout 953248239966
$ pip install -e .
$ cd /path/to/trident
$ git checkout test-standard-v2
$ pip install -e .
$ export TRIDENT_GENERATE_TEST_RESULTS=1
$ cd tests
$ py.test
The test results should now be stored in the answer_test_data_dir that you specified in your Trident configuration file. You may now run the actual tests (see Running the Test Suite) with your current version of yt and Trident, comparing against these gold standard results.
The Tests Failed – What Do I Do?
If the tests have failed (either locally, or through the automatically generated test from Travis), you want to figure out what caused the breakage. It was either a change in trident or a change in one of Trident’s dependencies (e.g., yt). So first examine the output from py.test to see if you can deduce what went wrong.
Sometimes it isn’t obvious what caused the break, in which case you may need to use git bisect to track down the change, either in Trident or in yt. First, start with the tip of yt, and bisect the changes in Trident since its gold standard version (see below). If that doesn’t ID the bad changeset, then do the same with yt back to its gold standard version. Once you have identified the specific commit that caused the tests to break, you have to identify if it was a good or bad change. If the unit tests failed and some functionality no longer works, then it was a bad, and you’ll want to change the code that caused the break. On the other hand, if the answer tests changed, and they did so because of an improvement to the code, then you’ll simply want to go about Updating the Testing Gold Standard.
Updating the Testing Gold Standard
Periodically, the gold standard for our answer tests must be updated as bugs are caught or new, more accurate behavior is enabled that causes the answer tests to fail. The first thing to do is to identify the most accurate version of the code (e.g., changesets for yt and trident that give the desired behavior). Tag the Trident changeset with the next gold standard iteration. You can see the current iteration by looking in the .travis.yml file at the TRIDENT_GOLD entry; increment this and tag the changeset. Update the .travis.yml file so that the YT_GOLD and TRIDENT_GOLD entries point to your desired changeset and tag. You have to explicitly push the new tag (hereafter test-standard-v3) to your repository (here: origin). Issue a pull request.
$ git tag test-standard-v3 <trident-changeset>
$ ... edit .travis.yml files to update YT_GOLD=<yt changeset>
$ ... and TRIDENT_GOLD=test-standard-v3
$ git add .travis.yml
$ git commit
$ git push origin test-standard-v3
$ <MAKE PULL REQUEST>
Once the pull request has been accepted, someone with admin access to the main Trident repository (here: upstream) will have to push the gold standard tag.
$ git push upstream test-standard-v3
Lastly, that person will also have to clear Travis' cache so that it regenerates the answer test results. This can be done manually here: https://travis-ci.org/trident-project/trident/caches
Frequently Asked Questions
Why don’t I have any absorption features in my spectrum?
There are many reasons you might not have any absorption features in your spectrum, but we’ll cover a few of the basic explanations here.
Your absorbers are in a different part of the spectrum than you're plotting. Make sure you are plotting the wavelength range where you expect to see the absorption by taking into account the rest wavelength of your absorption features coupled with the redshift of your dataset (see the short example below). To see the wavelengths of specific ionic transitions, see the line list in:
/trident/trident/data/line_lists/lines.txt
Your sightlines do not have sufficient column densities of the desired ions to actually make an absorption feature. Look at the total column density of your desired ions along your sightline by multiplying the number density by the path length and summing it all up. Here is an example showing the total O VI column density in a ray:
import trident
# <generate/load your ray object>
trident.add_ion_fields(ray, ['O VI'])
ad = ray.all_data()
print((ad[('gas', 'O_p5_number_density')] * ad[('gas', 'dl')]).sum())
Depending on the ion, you usually need a total column density above some minimum threshold to have any appreciable absorption. Try sending a sightline through a denser region of your simulation that might have more of that ion.
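Returning to the first point, here is a minimal sketch of checking where a feature should land in wavelength; the redshift and line are illustrative, and the rest wavelength should be taken from the line list above:
# Observed wavelength = rest wavelength shifted by the dataset redshift.
z = 0.5                 # redshift of your dataset (illustrative)
lambda_rest = 1215.67   # rest wavelength in Angstroms (e.g., H I Lyman-alpha)
lambda_obs = lambda_rest * (1.0 + z)
print(lambda_obs)       # ~1823.5 Angstroms: plot a window around this wavelength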
I don't have a metallicity field in my dataset; what can I do?
In order to estimate the density of ions throughout your dataset, Trident needs a metallicity field, but some datasets do not have one generated by default. We highly recommend re-running the simulation with metals present, as this will lead to the best ion estimates from Trident. If you just want to create a "dummy" metallicity field, include the following code at the top of your script to automatically add a uniform metallicity field (in this case, 0.3 solar) to any loaded dataset that lacks one. For more information on creating derived fields like this one, see the yt documentation on derived fields.
import yt
import numpy as np
def _metallicity(field, data):
factor = 0.3 # 0.3 solar metallicity
return (
data.ds.arr(np.ones_like(data["gas", "density"]), 'Zsun') * factor
)
yt.add_field(
("gas", "metallicity"),
function=_metallicity,
sampling_type="local",
units="Zsun",
)
What functions are available and what is their syntax?
Go see the full documentation for all of our available classes and functions in the API Documentation.
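You can also inspect the docstring of any class or function from an interactive Python session; for example, for the SpectrumGenerator class mentioned elsewhere in these docs:
$ python
>>> import trident
>>> help(trident.SpectrumGenerator)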
What version of Trident am I running?
To learn what version of Trident you’re running, type:
$ python
>>> import trident
>>> print(trident.__version__)
If you have a version ending in dev, it means you’re on the development branch and you should also figure out which particular changeset you’re running. You can do this by:
$ cd <path/to/trident>
$ git log --pretty=format:'%h' -n 1
To figure out what version of yt you’re running, type:
$ yt version
If you’re writing to the mailing list with a problem, be sure to include all of the above with your bug report or question.
Where is Trident installed? Where are its data files?
One can easily identify where Trident is installed:
$ python
>>> import trident
>>> print(trident.path)
The data files are located in that path, in the appended /data directory.
How do I join the mailing list?
You can join our mailing list for announcements, bug reports, and changes at:
https://groups.google.com/forum/#!forum/trident-project-users
How do I learn more about the algorithms used in Trident?
We have a full description of all the methods used in Trident, including citations to previous related works, in our Trident method paper.
How do I cite Trident in my research?
Check out our citation page.
How do I get an invite to the Trident slack channel?
Click on this link.
API Reference
Generating Rays
Create a yt LightRay object for a single dataset (eg CGM).
Create a yt LightRay object for multiple consecutive datasets (eg IGM).
A 1D object representing the path of a light ray passing through a simulation.
Generating Spectra
Preferred class for generating, storing, and plotting absorption-line spectra.
Base class for generating absorption spectra.
An instrument class for specifying a spectrograph/telescope pair.
A class representing a spectrograph's line spread function.
A class representing an individual atomic transition.
Class for storing and selecting collections of spectral lines.
Plotting Spectra
Load a previously saved spectrum from disk.
Plot a spectrum or a collection of spectra and save to disk.
Adding Ion Fields
Preferred method for adding ion fields to a yt dataset.
Add ion fraction field to a yt dataset for the desired ion.
Add ion number density field to a yt dataset for the desired ion.
Add ion mass density field to a yt dataset for the desired ion.
Add ion mass field to a yt dataset for the desired ion.
Miscellaneous Utilities
Create a one-zone hydro dataset for use as test data.
Create a one-zone ray object for use as test data.
Convert an integer to a Roman numeral.
Convert a Roman numeral to an integer.
Return the path where the trident source is installed.
Print a Trident ASCII logo to the screen.
Verify that the bulk of Trident's functionality is working.
Fit an absorption-line spectrum into line profiles.
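As a rough, illustrative sketch of how these pieces fit together (the dataset path, instrument name, and exact keyword arguments are assumptions; check the entries above and the full API Documentation for the precise signatures):
import yt
import trident

# Load a simulation output and trace a sightline across the domain.
ds = yt.load("/path/to/dataset")  # illustrative path
ray = trident.make_simple_ray(ds,
                              start_position=ds.domain_left_edge,
                              end_position=ds.domain_right_edge,
                              lines=['H', 'O VI'])

# Generate a spectrum with a built-in instrument preset, then save and plot it.
sg = trident.SpectrumGenerator('COS-G130M')
sg.make_spectrum(ray, lines=['H', 'O VI'])
sg.save_spectrum('spectrum.h5')
sg.plot_spectrum('spectrum.png')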
Citation
If you use Trident for a research application, please cite the Trident method paper in your work with the bibtex entry below:
@ARTICLE{2017ApJ...847...59H,
author = {{Hummels}, C.~B. and {Smith}, B.~D. and {Silvia}, D.~W.},
title = "{Trident: A Universal Tool for Generating Synthetic Absorption Spectra from Astrophysical Simulations}",
journal = {\apj},
archivePrefix = "arXiv",
eprint = {1612.03935},
primaryClass = "astro-ph.IM",
keywords = {cosmology: theory, methods: data analysis, methods: numerical, radiative transfer },
year = 2017,
month = sep,
volume = 847,
eid = {59},
pages = {59},
doi = {10.3847/1538-4357/aa7e2d},
adsurl = {http://adsabs.harvard.edu/abs/2017ApJ...847...59H},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
Changelog
This document summarizes changes to the codebase from different releases.
Contributors
The CREDITS file has an updated list of contributors to the codebase.
Version 1.3 (August 22, 2022)
This is a bug fix release that updates Trident to use yt-4, which brings a variety of improvements for Trident, including full support for particle-based datasets. See yt 4.
Bug Fixes
Version 1.2.3 (March 18, 2020)
This is a bug fix release.
Enhancements
Move testing to circleci. (PR 109)
Bug Fixes
Version 1.2.2 (November 14, 2019)
This is a bug fix release.
Bug Fixes
Shift wavelength of velocity center to redshift from light ray solution (PR 102)
Version 1.2.1 (October 1, 2019)
This is a bug fix release.
Bug Fixes
Logging info doesn’t use correct units (PR 99)
Version 1.2 (September 19, 2019)
New Features
Bug Fixes
off by one error in subgrid index calculation (PR 85)
fixing error in _ion_mass (PR 81)
H_p0_number_density to default, H_number_density to alias (PR 78)
Implements a better way of calculating the redshift on the light ray (PR 71)
Assures that ion_fraction field reflects on-disk fields (PR 64)
Fix atomic data for Si II 1260 according to Morton 2003 (PR 43)
A check to avoid failure when no continuum absorbers were found in the ray (PR 39)
Auto-addition of H_nuclei_density to a LightRay when present in the base dataset (PR 39)
Adding max_box_fraction as a kwarg to make_compound_ray (PR 37)
Updated trident_path() to be OS Independent (PR 36)
simplify setting up ion fields using the “local” field type (PR 30)
split and join filenames using os.sep instead of assuming unix (PR 29)
updated oscillator strengths and gammas for Si III 1206 and Si II 1260 (PR 25)
Minor Enhancements
Calculating LOS velocity with relative velocities to account for bulk motion (PR 93)
Enabling use of output_absorbers_file kwarg in SpectrumGenerator (PR 58)
Switching imports from yt.analysis_modules to yt_astro_analysis (PR 55)
Enable passing in-memory LineDatabase to SpectrumGenerator (PR 42)
Added equivalent width calculation to line_observables_dict (PR 40)
Numerous documentation updates
Updates and fixes to testing
Version 1.1 (November 18, 2017)
Trident development has changed from mercurial to git, and the source has moved from bitbucket to github. This was done in recognition that more people interact with git/github than do with hg/bitbucket, as well as to follow our major dependency yt in making the same transition. All previous repository history (e.g., commits, versions, tags, etc.) is retained under this transition. For users operating on the development branch of Trident, you must re-install Trident in order to continue to get updates. The installation instructions were updated accordingly.
We totally rebuilt the testing interface to Trident, which includes more coverage in unit tests and answer tests over both grid-based and particle-based datasets. We now have continuous integration through Travis that tests the code daily and with each new pull request to assure consistent code results and to minimize bugs. For more information, see Testing.
Much of the original Trident codebase was developed in yt as the base classes AbsorptionSpectrum and LightRay. We have now stripped these classes out of yt and moved them entirely into Trident for more flexibility, stability, and autonomy moving forward. This should not affect the user, as these changes were behind the scenes.
Added a store_observables keyword to make_spectrum() to store a dictionary of observable properties (e.g., tau, column density, and thermal_b) for each cell along a line of sight for use in post-processing. See the source of SpectrumGenerator for more information.
Added an approximate flux_error field to output spectra, since many observational tools require its presence. See error_func() for more details.
Made min_tau a keyword to make_spectrum() to enable higher precision (although more time-intensive) absorption-line deposition.
Added the ability to specify an arbitrary noise vector with add_noise_vector().
A bugfix was made in yt to the temperature field for Gadget-based code outputs. The internal energy field was mistakenly being read in comoving instead of physical units, which led to gas temperatures being low by a factor of (1+z). This is now resolved in yt dev, and thus we recommend Trident users use yt dev until yt 3.5 stable is released.
Another bugfix was made in the Trident dependency astropy to the convolve function, which is used in apply_lsf(). This may cause slight backwards-incompatible changes when applying line spread functions to post-process spectra.
Replaced internal instances of particle_type with sampling_type to match the similar yt conversion.
Version 1.0 (November 16, 2017)
Initial release. See our method paper for details.
Create absorption-line spectra for any trajectory through a simulated data set mimicking both background quasar and down-the-barrel configurations.
Reproduce the spectral characteristics of common instruments like the Cosmic Origins Spectrograph.
Operate across the ultraviolet, optical, and infrared using customizable absorption-line lists.
Trace simulated physical structures directly to spectral features.
Approximate the presence of ion species absent from the simulation outputs.
Generate column density maps for any ion.
Provide support for all major astrophysical hydrodynamical codes.
Help
If you run into problems with any aspect of Trident, please follow the steps below. Don’t worry, we’ll help you get it sorted out.
Update the Code
The documentation is built for the latest version of Trident. Try Updating to the Latest Version to ensure your code matches what the documentation describes. Remember to update to the latest version of yt too.
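For the stable version this is typically just a pip upgrade; for the development version, pull the latest source and reinstall. A sketch (the path and branch name are illustrative):
$ pip install --upgrade yt trident       # stable versions
$ cd /path/to/trident                    # development version
$ git pull origin main
$ pip install -e .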
Search Documentation and Mailing List Archives
Most use cases for Trident can be found in our documentation and method paper. Try searching through the documentation using the search window in the upper left part of the screen.
If that doesn’t work, try looking at specific problems we might have addressed in our Frequently Asked Questions.
Lastly, try searching the archives of our mailing list. Chances are that someone else may have encountered the problem that you have and already wrote to the list. You can search the list here.
Contact our Mailing List
Compose a message to our low-volume mailing list. Remember to include details like the operating system you’re using, the type of dataset you’re trying to reduce, the version of Trident and yt you’re using (find it out here), and of course, a description of the problem you’re having with any relevant traceback errors. Our mailing list is located here:
https://groups.google.com/forum/#!forum/trident-project-users
Join our Slack Channel
We have a slack channel for help and discussions amongst the users and developers of Trident. You can generate an invite for yourself by clicking on this link and following the instructions.