GlueX Case Study

This is derived from the original web document "Getting started with HDGeant"

D. Lawrence 11/11/04
D. Lawrence 4/6/06 Updated
D. Lawrence 12/5/06 Converted to Wiki
D. Lawrence 1/18/07 Finished Converting to Wiki
D. Lawrence 2/4/08 Tweaked for calibDB among others
D. Lawrence 2/17/09 Overhauled to reflect current state of code
M. Ito 1/5/10 Added HDDS to list of 3rd party software
D. Lawrence 9/30/14 Replace "make" with "scons"


This document gives the bare minimum needed to get you up and running with the GlueX GEANT-based simulation program hdgeant and with looking at its output. More details are given in other HOWTOs (to be written).

The basic steps to running and analyzing a GlueX simulation are:

  1. Getting and compiling the source code
  2. Configuring and running the hdgeant program
  3. Running one of the Hall-D DANA-based reconstruction programs (This could involve customizing the C++ program for your needs or using one of the pre-built plugins to create ROOT TTrees)

Getting and compiling the source code

Required 3rd Party Packages

The source relies on several software packages that are not kept in the Hall-D repository (with the exception of HDDS, which is kept there). These must be downloaded and installed prior to compiling the Hall-D source. Here is a list of what you will need.


Make sure you set your JANA_HOME environment variable to point to the directory containing the bin, lib, and include directories after building JANA.
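In tcsh that might look like the following (the install location is a placeholder, not the real path on any particular system):

```shell
# Hypothetical install location -- point this at your own JANA build,
# i.e. the directory that contains bin/, lib/, and include/.
setenv JANA_HOME /usr/local/jana
```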

(Note that the source for JANA is maintained in an svn repository, which also holds the tagged versions.)


Make sure you set your CERN and CERN_LEVEL environment variables and that the cernlib script is in your path as it will be used by the makefiles.
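A minimal tcsh sketch, assuming the conventional CERNLIB layout where the cernlib script lives under $CERN/$CERN_LEVEL/bin (the path and level below are placeholders):

```shell
# Illustrative values -- adjust to your own CERNLIB installation.
setenv CERN /usr/local/cernlib
setenv CERN_LEVEL 2006
# Put the cernlib script in your path for the makefiles to find:
setenv PATH ${CERN}/${CERN_LEVEL}/bin:${PATH}
```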

Check here for possible issues with CERNLIB installation.

If you don't already have it installed, you will also need Motif. You can download OpenMotif from here for many platforms (including Mac OS X).


Make sure your ROOTSYS environment variable is set and that the root-config script is in your path. Also, make sure the ROOT shared libraries directory (usually $ROOTSYS/lib) is in your LD_LIBRARY_PATH (DYLD_LIBRARY_PATH on OS X) environment variable.
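For example, in tcsh (the ROOT location is a placeholder; on OS X substitute DYLD_LIBRARY_PATH, and note the last line assumes LD_LIBRARY_PATH is already set):

```shell
# Illustrative tcsh settings for ROOT; /usr/local/root is a stand-in path.
setenv ROOTSYS /usr/local/root
setenv PATH ${ROOTSYS}/bin:${PATH}                          # makes root-config findable
setenv LD_LIBRARY_PATH ${ROOTSYS}/lib:${LD_LIBRARY_PATH}    # DYLD_LIBRARY_PATH on OS X
```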


(Note that binaries exist on the JLab CUE for certain platforms.)

Note: You only need the XERCES Perl module if you're going to be modifying the data model.

XERCES Perl module

Note: You only need the XALAN package if you're going to be modifying the data model AND you want to convert between hddm template and schema files. You can install either the java version or the C version. See the source for the hddm-schema and schema-hddm perl scripts for how to select which to use.

XALAN-Java (xalan.jar)


The HDDS package contains all of the geometry information for GlueX. See the wiki page "HOWTO use the stand-alone HDDS system" for instructions on how to build it.

Getting the Hall-D Source

The Hall-D specific source code is kept in a Subversion repository on the JLab CUE. The repository can be accessed via anonymous svn checkout (even from offsite) but you must have a CUE account and you must belong to the "halld" unix group in order to commit changes back into it. (Contact the JLab computer center for help on getting a CUE account.)

First, you will need to create a directory where your files will be kept. (I use a directory called HallD in my home directory.)

Next, you can check out the code:


1. Check that subversion is installed on your system. Many Linux distributions now install subversion with the development tools so it may already be there (check for an executable named "svn").

2. Check out the source:

You will probably want to get the most recent stable release. These are tagged in the repository in the top level "tags" directory. To get a list of available tagged releases, issue the following subversion command:

svn ls

You should see a list of directories, some with names like: sim-recon-YYYY-MM-DD where YYYY-MM-DD are the year, month and day the release was made.

For example, if you wanted to check out the sim-recon-2010-03-29 release you would enter the following:

svn co
svn co

where the second line obtains a copy of the calibration constants from the repository where they are being kept temporarily until an actual database is developed.

Note that if you check out a tagged release, you should not commit any changes to that source back into the repository (in subversion, you can commit changes to tagged code just as easily as to the main trunk.)

If you want to develop code to contribute back into the repository for others to use, you should checkout from the trunk of the tree:

svn co

Setting up your Environment

Several environment variables are needed to compile and run the Hall-D software. The following is a list of environment variables related to 3rd party packages:

  • CERN

Environment variables are also needed to identify the locations of Hall-D files:

  • HALLD_MY (optional)


Suppose you checked out the sim-recon from the trunk into a directory called /home/Bob/GlueX. Then you would use the following environment variables:

setenv HALLD_HOME /home/Bob/GlueX/sim-recon
setenv JANA_CALIB_URL file:///home/Bob/GlueX/calib
setenv JANA_GEOMETRY_URL xmlfile://${HALLD_HOME}/src/programs/Simulation/hdds/main_HDDS.xml

If you wish to edit programs and then link against libraries in a common area you can do so through use of the HALLD_MY environment variable whose use is explained here.

In addition, you'll probably want to set your PATH environment variable to include the $HALLD_HOME/bin/$BMS_OSNAME directory to avoid having to type the full path every time you run a program.
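A sketch in Bourne-shell syntax (tcsh users would use setenv instead; the checkout location and the $BMS_OSNAME value are invented for illustration -- the build system defines the real one):

```shell
#!/bin/sh
# Prepend the platform-specific bin directory to PATH so programs can be
# run without typing the full path.
HALLD_HOME=/home/Bob/GlueX/sim-recon          # illustrative checkout location
BMS_OSNAME=Linux_CentOS7-x86_64-gcc4.8.5      # assumed value for demonstration
PATH="$HALLD_HOME/bin/$BMS_OSNAME:$PATH"
export PATH
# Show the directory that will now be searched first:
echo "$PATH" | cut -d: -f1
```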


If you have checked out a fresh and complete set of source code (e.g. a tagged release) then you should run "scons install" in the src directory to do a top-level build. Building the source directories in the right sequence is important in order for all dependencies to be satisfied. The SConstruct and SConscript files are set up to build in the proper sequence when invoked from the top-level directory src. When doing this you should make sure that you have not defined the HALLD_MY environment variable.

scons -j32 install

After performing a top-level build one can modify source and invoke "scons -u install" only in the specific sub-directory without having to do a top-level build every time.

scons -u install

There are a set of scripts that give examples of how to install the various GlueX software packages. In many cases, the scripts can be run to do the installation automatically, as long as some of the choices made for directory location are acceptable to you.

Modifying a Program

Programs and plugins generally are kept in the sim-recon/src/programs directory tree. To modify a program one can either edit the code in place and re-run "scons -u", or copy the whole directory and modify the copy.

Example: to build a private version of the hd_ana program called hd_ana_bob do the following:

cd $HALLD_HOME/src/programs/Analysis
cp -r hd_ana hd_ana_bob

... edit SConscript file to include hd_ana_bob ...

cd hd_ana_bob
mv <old main() file> <new main() file>
scons -u install -c
.... edit files ...
scons -u install

Notice that the file containing main() was renamed to match the new program name. This is because the generic build system uses the basename of the main()-containing file as the name of the executable. You also *must* edit the SConscript file in the parent directory of the directory containing your source to include your source directory. See more details here.

The scons -u install -c step is needed to clear the object and dependency files for the file that no longer exists under its old name.
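The copy-and-rename step above can be rehearsed in a scratch directory. The file names hd_ana.cc and hd_ana_bob.cc below are stand-ins for the real main()-containing file (check the actual directory for its name); in real use you would work under $HALLD_HOME/src/programs/Analysis and run scons instead of ls:

```shell
#!/bin/sh
# Rehearse the copy-and-rename workflow with stand-in files in /tmp.
set -e
rm -rf /tmp/analysis_demo
mkdir -p /tmp/analysis_demo/hd_ana
touch /tmp/analysis_demo/hd_ana/hd_ana.cc            # stand-in for the file containing main()
cd /tmp/analysis_demo
cp -r hd_ana hd_ana_bob
mv hd_ana_bob/hd_ana.cc hd_ana_bob/hd_ana_bob.cc     # executable name follows this basename
ls hd_ana_bob
```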

Modifying a Library

Code useful to more than one program should be kept in a library. Library code is stored in the sim-recon/src/libraries directory tree. Among other things, this is where all of the reconstruction code is kept. To modify code in a library, just go to the appropriate directory, edit the code and invoke "scons -u install".

Example: to change the way the Forward Calorimeter reconstruction works, do the following:

cd $HALLD_HOME/src/libraries/FCAL
.... edit files ....
scons -u install
cd $HALLD_HOME/src/programs/Analysis

Modifying the Geometry

If you do not need to modify the geometry, then you can skip this item and go straight to "RUNNING HDGEANT".

The geometry for the simulation is kept in a set of XML files in the copy of HDDS you are using (see Required 3rd Party Packages above).

If you are using your own version of HDDS, you can modify the appropriate XML file(s) and then do a "scons install" in your $HDDS_HOME directory. This will build the hdds-geant executable and run it on main_HDDS.xml to produce a file named hddsGeant3.F. The hddsGeant3.F file is FORTRAN source containing all of the geometry definitions for the entire GlueX detector. It will then be compiled and put into a library in a subdirectory of $HDDS_HOME/lib.

Note that if you make changes to the geometry that you feel should be propagated back into the repository, you should contact Richard Jones, who is the acting gatekeeper for the geometry and simulation, just to make sure he's aware of the changes and that they are implemented in a consistent way.


Running HDGeant

The simulation program comes in two forms: hdgeant for batch use and hdgeant++ for interactive use. These instructions focus on the batch version, hdgeant.

I'll just give a couple of hints here:

  1. When running hdgeant, make sure the control file exists in the current
    working directory. An example can be found in the
    sim-recon/src/programs/Simulation/HDGeant directory.
    The file is in ASCII format and has lots of comments to help you configure it to suit your needs.
  2. Make sure your JANA_CALIB_URL environment variable is set
    and points to the location of the CCDB version you wish to use. Some
    instructions on this can be found here.
    (If offsite, you should consider using an SQLite file)
    setenv JANA_CALIB_URL sqlite:///home/jdoe/HallD/ccdb.sqlite

The control file should be edited to suit your specific simulation. Most importantly, you need to define the source of events. There are three main options, which are documented in the example file. They are:

  • coherent bremsstrahlung photon generator (built-in)
  • single particle gun (built-in)
  • read events from an external file in HDDM (Hall-D Data Model) format

An HDDM file can be generated using any of the following programs:

  • genr8 (sim-recon/src/programs/Simulation/genr8) generator for specific reaction via isobar model
  • bggen (sim-recon/src/programs/Simulation/bggen) generator for full hadronic photoproduction spectrum
  • genphoton (sim-recon/src/programs/Simulation/genphoton) external particle gun for single photons
  • genpi (sim-recon/src/programs/Simulation/genpi) external particle gun for single pions

The program genr8_2_hddm (sim-recon/src/programs/Simulation/genr8_2_hddm) can be used to convert the output of genr8, genphoton, and genpi into HDDM format.

After setting up your control file and creating any generated-events file (if you're not using a built-in generator), simply invoke hdgeant with no arguments. This will produce an HDDM-formatted output file with the name specified in the control file.

The simulation produces several output files. The default name of the one with the detector responses that you want is hdgeant.hddm (unless you specified something else in the control file).

The hdgeant.hddm file still contains some "pristine" hits. Some smearing of these is required to accurately model detector responses. In general, the calorimeter values are smeared inside the reconstruction program, while drift chamber hits are smeared using an intermediate program called mcsmear.
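The two-step chain can be sketched as below. The snippet is guarded so it does nothing on a machine without the GlueX binaries, and the mcsmear output-file convention mentioned in the comment is an assumption, not documented behavior:

```shell
#!/bin/sh
# Run the simulation, then smear the pristine drift-chamber hits with mcsmear.
status=skipped
if command -v hdgeant >/dev/null 2>&1 && command -v mcsmear >/dev/null 2>&1
then
    hdgeant                # reads the control file in the current working directory
    mcsmear hdgeant.hddm   # assumed to write a smeared copy under mcsmear's default name
    status=ran
fi
echo "$status" | tee /tmp/hdgeant_chain_status
```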

Reading in Simulated Data

In addition to the hddm tools (sim-recon/src/programs/Utilities) that can be used to scan data in an hddm file, there are several programs in the Hall-D arsenal that can read these files as well. All of them rely on the libHDDM.a library, so new programs can access the data too with minimal effort.

Several tools exist that can be used to look at the simulated data file, all of which have the full reconstruction software built-in. Below are descriptions of the most useful ones.


hd_dump

A simple utility program is available called "hd_dump" which is based on the JANA framework.

Running hd_dump with no arguments will print a usage message. Essentially though, you just give it the name of the hdgeant.hddm file when you run it. It will list the objects available event by event (pausing for user input after each event). If you give it any "-Dobjectname" options on the command line, it will attempt to print the contents of all objects of type objectname as well.

Example of using hd_dump:

>hd_dump -DDMCThrown hdgeant.hddm

Registered factories: (56 total)

  Name:                 nrows:  tag:
  ----------------      ------  --------------
  DBCALMCResponse           15
  DBCALGeometry              1
  DBCALShower                2
  DBCALTruthShower           6
  DBCALPhoton                2
  DHDDMBCALHit              11
  DCDCHit                   10
  DCDCTrackHit              10
  DFDCHit                   51
  DFDCPseudo                 5
  DFDCCathodeCluster        12
  DFDCSegment                1
  DFDCIntersection           4
  DFDCPseudo                 8  "WIRESONLY"
  DFDCPseudo                 5  "CORRECTED"
  DFCALGeometry              1
  DSCTruthHit                1
  DSCHit                     1
  DTOFMCResponse             2
  DTOFHit                    2
  DTOFGeometry               1
  DHDDMTOFHit                2
  DTOFHit                    2  "MC"
  DTrack                     1  "ALT2"
  DTrack                     1  "ALT3"
  DTrack                     1
  DTrackCandidate            1
  DTrackCandidate            1  "CDC"
  DTrackCandidate            1  "FDC"
  DTrackCandidate            1  "FDCCathodes"
  DTrackCandidate            1  "FDCpseudo"
  DTrackCandidate            1  "THROWN"
  DMCTrackHit               24
  DMCThrown                  1
  DMCTrajectoryPoint         2
  DTrack                     1  "THROWN"
  DTrackFitter               1
  DTrackFitter               1  "ALT1"
  DTrackHitSelector          1
  DTrackHitSelector          1  "ALT1"
  DPhoton                    2
  DParticle                  1
  DParticle                  1  "THROWN"

DMCThrown:
   q:  x(cm):  y(cm):  z(cm):  E(GeV):    t(ns):  p(GeV/c):  theta(deg):  phi(deg):  type:  pdgtype:  myid:  parentid:  mech:
----------------------------------------------------------------------------------------------------------------------------
  +1     0.0     0.0    65.0    1.694  -999.000      1.688       16.329     -7.040      8         0      1          0      0


hdview2

The hdview2 program is a simple event viewer written in ROOT. It can be used to help visualize the event using 2-D projections. Not all features work on it and it is geared a little toward charged particle tracking at the moment. Start it by just passing it the name of an hddm file on the command line.

Example of using hdview2:

>hdview2 hdgeant.hddm


hd_root

The hd_root program can be used to make histograms and trees in a ROOT file. The program itself only creates an empty ROOT file and then cycles through events. The usefulness comes in through plugins that make and fill the histos and trees. The idea is that one can specify any number of plugins on the command line and the histograms and trees created by all of them will go into a single ROOT file. It is important to note that reconstruction algorithms are all compiled into hd_root, but it is left to the plugins to actually activate them.

Example: Using hd_root to make ROOT file with TTree containing information on CDC hits.

hd_root --plugin=cdc_hists --auto_activate=DCDCTrackHit hdgeant.hddm

janaroot plugin

In the most general case, one can use the janaroot plugin to create a separate ROOT file with trees filled with the objects of interest. Since this plugin creates its own ROOT file, it can be used with any of the Hall-D reconstruction programs including hd_root and hd_ana.

Example: Using the janaroot plugin to create TTrees with generated particles and reconstructed photons

hd_ana --plugin=janaroot --auto_activate=DMCThrown --auto_activate=DPhoton hdgeant.hddm

Making your own custom program

If you are not familiar or comfortable with using plugins then you may just want to create your own custom program. It is probably easiest to just copy the source for hd_root and modify it to create and fill the histograms/trees that you want.

The main idea behind JANA is that it passes around C++ objects that contain the data of interest. In order to access the data you want, you must first figure out what object(s) contain it. You can browse or search for the objects you need in the doxygen-generated online documentation.

Using plugins

All DANA executables can have plugins attached at run time to extend the functionality of the program. For those who are unfamiliar with the term, a plugin is just a dynamically linked object that contains routines that can be accessed by a running executable.

Several plugins have been written that can add histograms/trees to a ROOT file. For example, the "mcthrown_hists" plugin will create histograms and fill them with the "thrown" values from Monte Carlo data. There is also a cdc_hists plugin that will produce histograms related specifically to the CDC. In general, these plugins will create a separate directory in the ROOT file to place their histograms/trees. This allows multiple plugins to be attached to the same executable without risk of conflict between histogram names. Source code for the plugins resides in the sim-recon/src/programs/Analysis/plugins directory.

Although a plugin can be attached to any DANA program, a generic program hd_root exists that serves as a shell specifically written for this task (see above). For example, to create a root file with the thrown values histograms and the histograms used to study the acceptance of the GlueX detector, type this:

hd_root --plugin=mcthrown_hists --plugin=acceptance_hists hdgeant.hddm

This will create a ROOT file called hd_root.root. Inside the file would be two directories called "THROWN" and "ACCEPTANCE" containing the histograms produced by the respective plugins.

You can create your own plugin using one of the existing plugins as a template.

GlueX Offline FAQ

This is the GlueX Offline Frequently-Asked Questions list. If you cannot find an answer to your question, please send it to the Offline Software Coordinator (currently Mark Ito).


Where do I find information about GlueX software?

The best place is this wiki. The highest-level page dealing with software is the Offline Software page, which is linked from the front page of the wiki.

GlueX Administrative Tasks

How do I get an account on the wiki?

All members of the group at JLab can log into the wiki. Use your JLab CUE username and password.

How do I get privilege to check stuff into the Subversion repository?

Become a member of the halld unix group.

How do I get emails when files are checked into the Subversion repository? Stop the emails from coming?

If you look in the directory /group/halld/Repositories/svnroot/hooks there is a file called post-commit, with several backups. That is the file you need to change; if you look at it, it is pretty clear what to do. People in the halld group should have privilege to modify it.

Note that you can "subscribe" to only a subset of check-ins as well, selected by directory.

How do I add a new author to the DocDB?

Any GlueXer with the writer privilege can add a new author via the DocDB authors page. (Thanks Zisis!)

Building GlueX Software

How do I do a complete build of GlueX Software, starting from scratch?

See the instructions in "The gluex_install System" section of the Version Management System document.

How do I upgrade my $GLUEX_TOP?

New versions of the packages come out all of the time. Sometimes you have to upgrade.

1. Identify a version set to use for the upgrade.

Version set XML files have names like version_2.22_jlab.xml, for example. If you select a "_jlab.xml" file then you will likely not have any inconsistencies in the build. Copy the file into your $GLUEX_TOP directory. For definiteness let's call the file version_upgrade.xml.


2. Create an environment set-up file for the upgraded system

For definiteness, let us assume you are using tcsh as your shell. The file $GLUEX_TOP/setup.csh should look like:

setenv GLUEX_TOP /your/location/for/gluex_top
setenv BUILD_SCRIPTS $GLUEX_TOP/build_scripts
source $BUILD_SCRIPTS/gluex_env_version.csh /your/location/for/gluex_top/version.xml

Make a copy in $GLUEX_TOP, for definiteness call it setup_upgrade.csh. Edit the last line to be

source $BUILD_SCRIPTS/gluex_env_version.csh /your/location/for/gluex_top/version_upgrade.xml

Now the command

source /your/location/for/gluex_top/setup_upgrade.csh

will set up the environment for the upgraded version set (not yet built).

3. Do the build

After setting up the environment as described in the previous step, build the updates:

cd $GLUEX_TOP
make -f $BUILD_SCRIPTS/Makefile_all gluex


How do I get a computer account at JLab?

You need to register as a JLab user. The registration form has a box for "I want an account" (or something like that). In any case, registration must be completed before getting an account. You will also need a sponsor, who must be a member of the JLab staff. The process starts with a link on this page.

Click on "online" in the first sentence of the first paragraph.

How do I install a certificate in my browser so that I can view secure web sites at JLab (without creating a security exception)?

Go to the certificate page, select the browser you are using, and follow the instructions.

I cannot log into the ifarm machines. How do I get access?

You have to be member of the "halld" Unix group (or an analogous Unix group associated with one of the other experimental halls).

How do I become a member of the "halld" Unix group?

Ask the Software Coordinator to add you.

How do I subscribe to a GlueX email list?

There is a wiki page listing email lists of interest to GlueX collaborators. To subscribe to any of them, click on the appropriate "information page".

How do I get personal web space at JLab?

The standard /home/username/public_html has been deprecated at the Lab as of August 2011. Another system has been set up on the /userweb partition. There is a HOWTO that explains the procedure for getting space there.

What is the Volatile disk?

We now have access to up to 40 TB of disk space on the "volatile" file system.

  • storage is temporary; there are scripts which run to delete old files
  • a quota system is enforced by the operation of these cleaning scripts
  • the system has two disk usage level triggers, called "quota" and "reservation"; the former is a global maximum per group, the latter comes into play when the disk as a whole runs out of space (the sum of all quotas exceeds the available space, by design)
  • the system is explained at the bottom of the page referenced in Sandy's message

The /volatile partition is only available from the farm nodes at the Lab. This space is independent of our "work" space (/work/halld). Work space is not subject to deletion by scripts.

Two possible use cases for this space are (1) for staging files into and out of farm nodes and (2) for storage of data that are permanently stored elsewhere (e. g., on tape) for repeated analysis at the Lab.

There is a directory, /volatile/halld/home, for users to put their personal directories.

See the linked page for details of who has what space and how the space is managed.

What is the difference between /work/halld and /work/halld2?

See the email announcing /work/halld2 as well as the description of the issues with the Lustre partition.

How do I check how much space Hall D is using on the Lustre disks?

From a recent CCPR:

You can see the current lustre quota, and usage, for Hall D with the following:

> lfs quota -hg halld /lustre
Disk quotas for group halld (gid 267):
     Filesystem    used   quota   limit   grace    files   quota   limit   grace
        /lustre  103.6T    115T    120T       -  7666476       0       0       -

Note that this shows the Hall D quota for all of the Lustre-based disks; the quota has no knowledge of cache/volatile/work.

Why does sudo not work for me?

At JLab, /apps/bin/sudo is configured for use by Computer Center personnel. It does not reference your local sudoers file. To use sudo locally, make sure you are using /usr/bin/sudo (i. e., check your PATH variable).

How do I kill all of my farm jobs?

A single command will kill all farm jobs submitted by the logged-in user, independent of the state of the jobs.

How to access RCDB with python (e.g. plot accumulated triggers)

  1. On most machines at JLab:
  2. Source /group/halld/Software/build_scripts/gluex_env_jlab.csh
  3. Use /apps/anaconda/PRO/bin/python2
  4. Example: > python2 <script.py> # run a script that plots accumulated triggers over the run

How do I get a Scientific Computing Certificate?

/site/bin/jcert -create

For more details see SciComp's network certificate page.

How do I find out what share of the Farm is allocated to Hall D?

The computer center maintains a page on that; you need to scroll down to see the table.


How do I resolve problem jobs so I can start the workflow going again?

When you have hit the error limit, SWIF will stop submitting jobs from your workflow. By default, there is no limit set. If you have set a limit and have reached the limit, you can use 'modify-jobs' if you want to retry them with altered cores/time/ram/disk. If you simply want to try again, use 'retry-jobs'. To just fail them, use 'abandon-jobs'.

If you want to put off thinking about it, you can up the error threshold to, say, 1000 by using 'swif run -workflow <> -errorlimit 1000'.

Suppose a successful job, upon scrutiny later, looks suspicious and I want to re-run it. Is there a way to do this?

You can use the undocumented '-resurrect' option of 'swif retry-jobs' and then specify which jobs to resurrect, either by name or id. You can also specify job names and ids by pattern, but you need to make sure the pattern isn't too broad, because this command will search all jobs, not just ones that were successful.


I have a build of sim-recon that I want to use. How do I set up my environment?

Source either the setenv.csh or its Bourne-shell counterpart that was created when sim-recon was built. For the details, see the setenv section of the wiki page Setting Up the GlueX Environment.

How do I get debug symbols in my sim-recon binary?

Debug symbols are put in by default by the SBMS system. You don't have to do anything.

How do I create EVIO data from an HDDM file produced by HDGeant?

When unstable particles appear in the simulation, which ones are decayed by the event generator versus hdgeant/hdgeant4?

In general, the choice is up to you. If you are writing an event generator and you want to produce an unstable particle like a τ- lepton or an η meson, you can either convert it immediately into the daughter particles and pass those to hdgeant/hdgeant4 for tracking, or you can pass the unstable particle directly to hdgeant/hdgeant4 and let it decay in the simulation. In order for hdgeant or hdgeant4 to handle the decay, the mother particle needs to be included in the list of known particles to the underlying Geant library. The known particle lists for the two Geant versions are similar, but not identical. See the following references for details.

  1. Geant3 Reference Manual, see Table CONS-301 on pages 58-59.
  2. Geant4 User's Guide, separate tables for leptons, mesons, baryons, ions, and others.

Unstable particles that appear in these lists are those that the Geant/Geant4 library has some knowledge of, and it can generate the major decay modes with the correct branching ratios. The fact that these libraries can simulate the in-flight decays of these particles does not mean that it should be done that way. There are limitations to the decay models employed in these simulation libraries, particularly the fact that the daughter particle momenta are distributed uniformly in phase space. For example, if you inject a bunch of ω(783) mesons into hdgeant or hdgeant4 and then generate a Dalitz plot of their 3-pion decays, you will see that the interior of the Dalitz ellipse is uniformly populated rather than weighted by the physical amplitude for isoscalar vector → 3π decays.

One way to address this would be to generate the decays instead within the event generator, using the proper physics amplitude. Then hdgeant/hdgeant4 would only see the 3 pions from the omega decay and would propagate/decay those according to their lifetimes and branching fractions. The more standard approach to this problem in GlueX analyses would be to generate all of the hadronic decays using a uniform phase-space distribution, and then apply the physical decay distribution later on in the analysis as a factor in the PWA amplitude. However, that is beyond the scope of this FAQ.

For the present purposes, you can either embrace phase-space decay distributions and let hdgeant/hdgeant4 perform the decays, or else take control of the decays directly in the event generator and make the decay distributions as realistic as you wish. In general, I recommend the following rule of thumb as a best practice for GlueX simulations.

If the unstable particle has a sufficiently long lifetime that it can travel a measurable distance (>0.1 mm) in the detector before decaying, the decay should be handled by hdgeant/hdgeant4. If on the other hand, the unstable particle lifetime is too short to allow its decay vertex to be distinguished from the primary event vertex then the decays should be handled by the event generator. This second rule gives the experimenter more flexibility in terms of which decay branches are selected for simulation, and saves time that otherwise would be spent in the simulation following products of decay branches that are not the focus of the study.
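The 0.1 mm rule of thumb can be checked with the decay length formula λ = βγcτ = (p/m)cτ. A quick sketch using awk (cτ values are PDG numbers; the 1 GeV/c momentum is just an example):

```shell
#!/bin/sh
# lambda = (p/m) * c*tau for a particle of momentum p (GeV/c) and mass m (GeV).
# K_S (m = 0.4976 GeV, c*tau = 2.68 cm): measurable flight path, so let
# hdgeant/hdgeant4 handle the decay.
awk 'BEGIN { p = 1.0; m = 0.4976; ctau = 2.68; printf "K_S:   %.2f cm\n", (p/m)*ctau }'
# omega(783) (m = 0.7827 GeV, c*tau ~ 2.3e-12 cm): decays at the vertex,
# so decay it in the event generator.
awk 'BEGIN { p = 1.0; m = 0.7827; ctau = 2.3e-12; printf "omega: %.1e cm\n", (p/m)*ctau }'
```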

"But," someone asks, "I don't want to write my own generator. I want to use bggen (pythia) or genr8, or some other channel-specific generator that I found in the sim-recon codebase. What happens then?"

The answer depends on the specific event generator tool you are using, but in general they tend to conform to the best-practice rule given above. In particular, bggen only generates the following unstable objects as final state particles.

  1. AntiLambda
  2. AntiNeutron
  3. AntiSigma+
  4. AntiSigma-
  5. AntiSigma0
  6. AntiXi+
  7. AntiXi0
  8. Eta
  9. K+
  10. K-
  11. KLong
  12. KShort
  13. Lambda
  14. Muon+
  15. Muon-
  16. Neutron
  17. Pi+
  18. Pi-
  19. Pi0
  20. Sigma+
  21. Sigma-
  22. Sigma0
  23. Xi-
  24. Xi0

You can verify that all of the unstable particles in this list are known particle types to both hdgeant and hdgeant4. Any unstable objects not in this list, that appear in bggen as products from the primary interaction vertex, are tagged in the generated output file as intermediate resonances, not final state particles. When reading from the bggen event source, the hdgeant/hdgeant4 simulation tracks only those that are tagged as final state particles, ignoring the intermediate nodes in the decay tree. This way there is no double-counting of parent and daughter particles in the simulation, yet the record of which final-state particles conceptually belong to which parent resonances is still available in the simulation output file in case the user wants to check it.


How do I use a CCDB SQLite file?

There are instructions on this wiki page.

I am getting a disk I/O error reading an SQLite version of the CCDB. What is wrong?

If your error looks like

> ccdb ls
CCDB provider unable to connect to sqlite:////work/halld/user/ccdb.sqlite. Aborting command.
Exception details: (sqlite3.OperationalError) disk I/O error
[SQL: u'SELECT "schemaVersions"."schemaVersion" AS "schemaVersions_schemaVersion", "schemaVersions".id AS "schemaVersions_id" \nFROM "schemaVersions"\n LIMIT ? OFFSET ?']
[parameters: (1, 0)]

it is likely you are trying to use a SQLite file from a Lustre-based disk system. SQLite is incompatible with Lustre. You must move your file to a different, non-Lustre file system.
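A sketch of the workaround (all paths are illustrative; the touch stands in for the real copy so the sketch runs anywhere):

```shell
#!/bin/sh
# Copy the SQLite file from the Lustre-backed /work area to a non-Lustre
# disk, then point JANA_CALIB_URL at the copy.
mkdir -p /tmp/ccdb_local
# cp /work/halld/user/ccdb.sqlite /tmp/ccdb_local/ccdb.sqlite   # the real copy step
touch /tmp/ccdb_local/ccdb.sqlite                               # stand-in file
echo "setenv JANA_CALIB_URL sqlite:////tmp/ccdb_local/ccdb.sqlite"
```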


Where can I get help with Git?

See Git Help Resources on this wiki.

What are the basic git commands that I need to know?

  • git help: get a help message (e. g., git help clone)
  • git clone: get a complete copy of a repository
  • git status: show the status of your files (changed, added to index, etc.)
  • git checkout: change to another branch
  • git add: add modified files to the index
  • git commit: commit files in the index to your repository
  • git branch: show, create, delete branches
  • git tag: show, create, delete tags
  • git fetch: get remote repository information, do not change files
  • git pull: get changed files from remote repository
  • git push: write changed files to remote repository
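A minimal session exercising most of the commands above, using a throwaway repository under /tmp (the paths, file names, and user identity are illustrative only):

```shell
rm -rf /tmp/git-demo && git init -q /tmp/git-demo && cd /tmp/git-demo
git config user.email you@example.com && git config user.name "Demo User"
echo "hello" > README
git status                   # README shows up as untracked
git add README               # stage it in the index
git commit -qm "Add README"  # record the staged change
git branch feature1          # create a branch (stays on the current one)
git checkout -q feature1     # switch to the new branch
git tag v0.1                 # tag the current commit
git branch                   # lists both branches, feature1 marked current
```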

What happens when I do a git clone?

When you clone a repository to your local disk, you get not only the main line of code (in SVN: the trunk; in Git: the master branch), but also all branches, all tags, and the history of all of the above. The directory that is created contains the actual working files, consistent with the master branch. The top-level directory contains a hidden directory, .git, that holds all of the versioning information; there are no .git directories in the subdirectories. You are meant to work with the files you see. You do not check out the code to a separate directory as you do with SVN or CVS. The repository and the working files co-exist in the same tree.
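A quick way to see this for yourself, cloning a small local repository (all paths here are throwaway examples):

```shell
rm -rf /tmp/src-repo /tmp/work-copy
git init -q /tmp/src-repo && cd /tmp/src-repo
git config user.email you@example.com && git config user.name "Demo User"
mkdir sub && echo "x" > sub/file.txt
git add . && git commit -qm "initial"
cd /tmp && git clone -q /tmp/src-repo work-copy
ls -d work-copy/.git             # all versioning information lives here
ls work-copy/sub                 # subdirectories contain only working files, no .git
git -C work-copy log --oneline   # the clone brought the full history along
```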

What is a remote repository?

A remote repository is any repository, outside of your own, that your repository is aware of. A network of remote repositories will share a common ancestor. When you clone a repository, your new repository is automatically aware of one remote repository: the repository from which it was cloned, which is called "origin" by default. Remote repositories can be added and removed, and they can be pushed to or pulled from. To see your remote repositories, do a "git remote".
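For example, after a clone the "origin" remote is already there, and further remotes can be added by hand (the repository paths and the remote name "collaborator" are illustrative):

```shell
rm -rf /tmp/upstream /tmp/local
git init -q /tmp/upstream && cd /tmp/upstream
git config user.email you@example.com && git config user.name "Demo User"
git commit -q --allow-empty -m "initial"
git clone -q /tmp/upstream /tmp/local && cd /tmp/local
git remote                                 # shows "origin"
git remote -v                              # shows the fetch and push URLs
git remote add collaborator /tmp/upstream  # add a second remote (name is arbitrary)
git remote                                 # now shows both remotes
```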

What is a tracking branch?

If you're on a tracking branch and type git push, Git automatically knows which server and branch to push to. Also, running git pull while on one of these branches fetches all the remote references and then automatically merges in the corresponding remote branch.


How do I create a tracking branch?

When a branch is created using the --track option, it is set up linked to the specified remote branch. For example, to create a new branch feature1 based on the master branch of the origin remote, set up so that it pulls from that remote branch automatically:

$ git branch --track feature1 origin/master

Branch feature1 set up to track remote branch refs/remotes/origin/master.


How do I make an existing branch track a remote branch?

As of Git 1.8.0:

git branch -u upstream/foo

Or, if local branch foo is not the current branch:

git branch -u upstream/foo foo

Or, if you like to type longer commands, these are equivalent to the above two:

git branch --set-upstream-to=upstream/foo

git branch --set-upstream-to=upstream/foo foo

As of Git 1.7.0:

git branch --set-upstream foo upstream/foo

All of the above commands will cause local branch foo to track remote branch foo from remote upstream. The old (1.7.x) syntax is deprecated in favor of the new (1.8+) syntax. The new syntax is intended to be more intuitive and easier to remember.
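You can check the result with git branch -vv, which lists the upstream of each local branch in square brackets. A self-contained sketch (throwaway paths; the -b option to git init needs git 2.28 or later):

```shell
rm -rf /tmp/remote-repo /tmp/clone-repo
git init -q -b master /tmp/remote-repo && cd /tmp/remote-repo
git config user.email you@example.com && git config user.name "Demo User"
git commit -q --allow-empty -m "initial"
git clone -q /tmp/remote-repo /tmp/clone-repo && cd /tmp/clone-repo
git branch foo                   # local branch with no upstream yet
git branch -u origin/master foo  # 1.8+ syntax to set the upstream
git branch -vv                   # foo now shows [origin/master]
```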


What is a topic branch?

A topic branch is a branch that is created, and possibly shared, to work on some particular issue. The name of the branch should be descriptive of the issue, of course. The concept emphasizes separation of work on one issue from work on others. One may have several topic branches going at the same time. The word "topic" is descriptive and has no operational significance.
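A typical topic-branch cycle, sketched in a throwaway repository (the branch name, file, and commit messages are illustrative):

```shell
rm -rf /tmp/topic-demo && git init -q /tmp/topic-demo && cd /tmp/topic-demo
git config user.email you@example.com && git config user.name "Demo User"
git commit -q --allow-empty -m "initial"
git checkout -q -b fix-track-finding  # name describes the issue being worked on
echo "bug fix" > tracking.txt
git add tracking.txt && git commit -qm "Fix track-finding seed cut"
git checkout -q -                     # back to the main branch
git merge -q fix-track-finding        # incorporate the finished work
git branch -d fix-track-finding       # topic branch is no longer needed
```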

What is the difference between Git and GitHub?

Git is the underlying software package that performs the direct operations on Git repositories. It is open source. As a software project, it is completely independent of GitHub. One can profitably run a project under Git without ever using a web browser.

GitHub is a commercial site that is in the business of hosting Git repositories and providing web tools for working with them. Jefferson Lab has a contract with GitHub to host repositories for doing Lab-related work. Public repositories are free; private repositories can potentially incur charges.

How do I get privilege to create pull requests on GitHub?

Find the instructions here.

Why is there no "git pull-request" command?

Generically, a pull request is any communication from one developer to another asking that changes on the first developer's branch be incorporated into the other's branch via a "git pull". As your question implies, there is no native Git command for doing this.

On GitHub, there is a site-specific set of tools (web pages and web forms) for issuing pull requests which includes a conversation between developers on the proposed change. These tools work only for the case when the source and the target branch are both hosted on GitHub. The repositories need not be owned by the same account.

How are we notified of pull requests?

Those interested in acting on pull requests should "watch" the appropriate repository on GitHub. Go to settings/notification to configure how you are notified (email, website, or both). Then go to the repository page to mark the repository as being watched/not-watched/ignored by you.

What is the difference between a clone and a fork?

A fork is a clone from a GitHub-hosted repository to another GitHub-hosted repository. It is a GitHub-specific term.

What happens to forks when a repository is deleted or changes visibility?

When you delete a public repository, one of the existing public forks is chosen to be the new parent repository. All other repositories are forked off of this new parent and subsequent pull requests go to this new parent.


What happens when I checkout another branch when there are uncommitted changes on the current branch?

If the committed version of a file you have changed differs between the current branch and the branch you are switching to, Git will not allow you to switch branches until the change is committed or stashed. If the committed version of the file is the same on both branches, you can switch freely and the uncommitted change carries over.
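The git stash command covers the blocked case: it shelves the uncommitted change so the switch is allowed, and restores it afterwards. A sketch in a throwaway repository (paths and file contents are illustrative):

```shell
rm -rf /tmp/stash-demo && git init -q /tmp/stash-demo && cd /tmp/stash-demo
git config user.email you@example.com && git config user.name "Demo User"
echo "v1" > notes.txt && git add notes.txt && git commit -qm "initial"
git checkout -q -b other              # second branch with a different version
echo "other" > notes.txt && git commit -qam "change notes on other"
git checkout -q -                     # back to the original branch
echo "local edit" > notes.txt         # uncommitted change to the same file
git checkout other || true            # refused: would overwrite the local edit
git stash                             # shelve the change; the tree is clean again
git checkout -q other                 # switching is now allowed
git checkout -q - && git stash pop    # return and restore the edit
```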


I cannot clone a GitHub repository on the Gluon Cluster. What is the trick?

For csh/tcsh:

setenv HTTPS_PROXY https://jprox:8082

For bash:

export HTTPS_PROXY=https://jprox:8082
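Alternatively, the proxy can be recorded in git's own configuration via the http.proxy key, so it applies regardless of the shell environment. This sketch sets it per-repository on a throwaway repository rather than globally:

```shell
rm -rf /tmp/proxy-demo && git init -q /tmp/proxy-demo
git -C /tmp/proxy-demo config http.proxy https://jprox:8082
git -C /tmp/proxy-demo config --get http.proxy   # verify the setting
```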

I am getting a "403 Forbidden" error when trying to push to a GitHub repository from JLab. What is wrong?

You might be using an old version of git. On the JLab CUE or on the Gluon Cluster, use the version of git from /apps/bin and not the one in /usr/bin. (Version 1.7.1 has given problems; a newer version seems OK.) In addition, the version of /apps/bin/git on the CentOS 6.2 farm nodes (e. g., ifarm62) gives error messages. It is not known whether commands are executing correctly or not.

After a branch is deleted on GitHub, I still see it when I do a git branch -a on my local repository, even after a git fetch. How do I get rid of it?

git remote prune origin

will remove all such stale remote-tracking branches.


How do I get to the compare view on GitHub?

To get to the compare view, append /compare to your repository's path.

Tags have changed on a remote repository. How do I update them on my local repository?

On the local clone:

git fetch --prune --tags

I am getting SSL errors when trying to access GitHub from JLab. What is wrong?

For example, you might see the error:

Peer certificate cannot be authenticated with known CA certificates.


Peer's certificate issuer has been marked as not trusted by the user.

In late 2016 through early 2017, after JLab enacted stricter security measures for encrypted web traffic (e.g. HTTPS), users had to reference a JLab security certificate in their git configuration when authenticating to GitHub. After that period, the reference is not only unnecessary, it actually prevents git from working with GitHub repositories over https.

To get rid of the reference remove the "sslCAInfo" line from your ~/.gitconfig file.

How do I use secure shell (ssh) to access GitHub?

You can clone a repository, for example sim-recon, by typing

git clone ssh://

at the command line. Note that you have to specify user "git" in the URL. This will not work unless you have previously added your public ssh key to your GitHub account. To add your key:

  1. Log into GitHub
  2. Navigate to "settings" (click on far upper right icon, select "settings")
  3. Click on "SSH and GPG keys"
  4. Click on "New SSH key" (near the upper right)
  5. Enter a title (reminder about where the key is from) and your public key.
  6. Click on "Add SSH key"

If you run an ssh-agent that supplies your passphrase, you can now operate password-free.
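A sketch of generating a key pair to paste into the GitHub form above and loading it into an agent. The key path and comment are illustrative, and the empty passphrase (-N "") is for demonstration only; a real key should be protected by a passphrase:

```shell
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ed25519 -C "you@jlab.org" -f /tmp/demo_key -N ""
cat /tmp/demo_key.pub        # this is the public key to paste into GitHub (step 5)
eval "$(ssh-agent -s)"       # start an agent for this shell
ssh-add /tmp/demo_key        # cache the key with the agent
```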

What if after this I get the "X11 forwarding request failed on channel 0" message?

That message does not affect the success of your command. To get rid of it, create a file ~/.ssh/config, with only user read/write permissions, that contains

Host github.com
    ForwardX11 no

How do I restart the automatic pull-request test?

From Sean Dobbs:

From the main GitHub page:

Settings -> Webhooks -> [select the script, it's the only one]

Scroll down to "Recent Deliveries", and if you click on the hashes, it gives you details about the messages that were recently sent to the script. If you look at those, and pick the one corresponding to the pull request you are interested in, with an "action" of "opened", you can click the "Redeliver" button to trigger the action again.

