Difference: IanConnellyTutorials (1 vs. 42)

Revision 42 - 06 Mar 2017 - CallumKilby

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 553 to 553
  xrdcp:
Changed:
<
<
xrdcp -r myFile root://eosatlas//eos/user/c/ckilby
>
>
xrdcp -r myFile root://eosuser.cern.ch//eos/user/c/ckilby
 

-- CallumKilby - 23 Feb 2017

Revision 41 - 01 Mar 2017 - LewisWilkins

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Revision 40 - 23 Feb 2017 - CallumKilby

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 530 to 530
  -- CallumKilby - 19 Jan 2016
Added:
>
>

UPDATE 4 Feb 2017 (correct address to access CERNbox)

If you want to access your CERNbox eos user area (e.g. /eos/user/c/ckilby), the address used is different to your ATLAS eos user area.

For ATLAS eos user area, use:

root://eosatlas.cern.ch/

For CERNbox eos user area, use:

root://eosuser.cern.ch/

Example usage:

eos ls:

eos root://eosuser.cern.ch ls /eos/user/c/ckilby

xrdcp:

xrdcp -r myFile root://eosatlas//eos/user/c/ckilby
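The two endpoints above can be sketched as a dry run (the path /eos/user/c/ckilby is the example from this page; the commands are only assembled and echoed here, since actually running them needs CERN credentials):

```shell
# The two eos endpoints described above; which one you need depends on
# whether the area is the ATLAS eos user area or the CERNbox one.
ATLAS_EOS="root://eosatlas.cern.ch"     # ATLAS eos user area
CERNBOX_EOS="root://eosuser.cern.ch"    # CERNbox eos user area

USER_PATH="/eos/user/c/ckilby"          # example CERNbox path from above

# Listing a CERNbox directory:
echo "eos $CERNBOX_EOS ls $USER_PATH"

# Copying a file into CERNbox (note the double slash before /eos):
echo "xrdcp -r myFile $CERNBOX_EOS/$USER_PATH"
```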

-- CallumKilby - 23 Feb 2017

 

Job Transformations

- Want to use Reco_tf.py script
- Information about options
- Search for pathena to see an example to submit to the grid a Reco_tf.py job
asetup 17.8.0.9,AtlasProduction
Reco_tf.py -h
--inputEVNTFile INPUTEVNTFILE
--outputNTUP_TRUTHFile NTUP_TRUTH
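Putting the options above together, a job might be composed like this (the file names are hypothetical, and Reco_tf.py itself needs the asetup environment, so the command is only assembled and echoed as a sketch):

```shell
# Hypothetical input/output file names; the real option list comes
# from Reco_tf.py -h inside an asetup'd session.
INPUT="mc.EVNT.pool.root"
OUTPUT="mc.NTUP_TRUTH.root"

CMD="Reco_tf.py --inputEVNTFile $INPUT --outputNTUP_TRUTHFile $OUTPUT"
echo "$CMD"
```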

Revision 39 - 07 Mar 2016 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 557 to 557
 Note in SLC6 cluster the files have moved to:
less /var/lib/torque/spool/[PBS_ID].OU for output or .ER for error
Changed:
<
<

Webpage Monitoring

>
>

Webpage Monitoring (inc Hadoop)

  For the SLC5 Faraday cluster (accessible using qsub, qstat on linappserv0)
Line: 566 to 566
 For the SLC6 Faraday cluster (accessible using qsub, qstat on linappserv5)

https://server6.pp.rhul.ac.uk/cgi-bin/pbswebmon.py

Added:
>
>
For the Hadoop file system, information can be accessed from the following url (when connected to the pp private network; Stella does not appear to be sufficient, so connect via an ssh tunnel or a firefox window from linappserv1)

http://192.168.101.253:50070/
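One way to reach that address from outside the pp private network is an ssh tunnel through linappserv1 (the username placeholder and full hostname are illustrative; the command is only echoed here, not run):

```shell
# Forward local port 50070 to the Hadoop namenode via linappserv1.
# <user> is a placeholder; hostname assumed from the pp.rhul.ac.uk domain.
TUNNEL="ssh -N -L 50070:192.168.101.253:50070 <user>@linappserv1.pp.rhul.ac.uk"
echo "$TUNNEL"
# While the tunnel is open, browse to http://localhost:50070/
```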

Help! My jobs queued and died and I did not get a log file

This normally happens if one of the nodes has a problem. If you have been fortunate enough to run a group of jobs, look at the nodes they have been sent to (qstat -tn1); if you see a pattern of numbers which then stops (ie node28, node29, node30) and then no more jobs, the likelihood is that node31 has a problem. The system sees the queue is empty and passes jobs to it. They then fail to run without any error report back to the system, so the next queued job is also sent to that node. In this way, a whole range of jobs can disappear down a black hole. If you suspect this, check the webpage monitoring system to see if there are any other nodes without any jobs running on them and then send an email to sysadmin. You can sometimes check by attempting to ssh onto the node. If you cannot, there is a problem. If you can, see if you can access cvmfs. In some cases, automount will have failed, which causes new jobs to fail when sent to the node.
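The "pattern of numbers which then stops" can be spotted mechanically by counting jobs per node. Here is a sketch that parses qstat -tn1-style output (the sample lines are made up, since the real command needs the cluster; with real output you would pipe qstat -tn1 into the same awk):

```shell
# Fake qstat -tn1 output; the node assignment sits in the last field.
sample_output() {
cat <<'EOF'
1234[1].master user batch job1 -- 1 1 -- 24:00 R node28/0
1234[2].master user batch job2 -- 1 1 -- 24:00 R node29/0
1234[3].master user batch job3 -- 1 1 -- 24:00 R node29/1
1234[4].master user batch job4 -- 1 1 -- 24:00 R node30/0
EOF
}

# Count jobs per node; a node that should be running jobs but shows
# none here (e.g. node31) is a candidate black hole.
counts=$(sample_output | awk '{split($NF, a, "/"); n[a[1]]++}
                              END {for (k in n) print k, n[k]}' | sort)
echo "$counts"
```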

 

Deleting all your jobs quickly

You can do this with qselect and xargs:

Revision 38 - 19 Jan 2016 - CallumKilby

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 522 to 522
 
Added:
>
>

UPDATE 3 Jan 2016 (correct address to access files)

When trying to access files on eos through RHUL machines, it is likely that you will need to use eosatlas.cern.ch in the file path, as opposed to the plain eosatlas which worked in the past.

e.g. root://eosatlas.cern.ch//eos/atlas/user/c/ckilby/mc15_13TeV.361108.PowhegPythia8EvtGen_AZNLOCTEQ6L1_Ztautau.recon.RDO.e3601_s2757_r7245

-- CallumKilby - 19 Jan 2016

 

Job Transformations

- Want to use Reco_tf.py script
- Information about options
- Search for pathena to see an example to submit to the grid a Reco_tf.py job
asetup 17.8.0.9,AtlasProduction
Reco_tf.py -h
--inputEVNTFile INPUTEVNTFILE
--outputNTUP_TRUTHFile NTUP_TRUTH
Line: 604 to 612
  It is also possible to add user access to your own personal area on this webpage.
Changed:
<
<
To give institute access go to -> Institutes
>
>
To give institute access go to -> Institutes
  To give access to personal area -> Personal Repository

Revision 37 - 21 Jul 2015 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 596 to 596
  I haven't tried to do this yet, but the event picking/ event interface should be working with these files, simplifying the issue slightly.
Added:
>
>

Adding RHUL Users to Institute SVN Area at CERN

Any current user with access rights to the RHUL area on SVN can add any other CERN user (it does not need to be a RHUL user).

One needs to use the following website : https://atlas-svnadmin.cern.ch/

It is also possible to add user access to your own personal area on this webpage.

To give institute access go to -> Institutes

To give access to personal area -> Personal Repository

A tip is to right click the folder and select "Update From SVN" if you want to access folders which are contained within the root repository. You can give access rights to any level within the SVN tree.

To add a user, search or enter their CERN username into the input box and then click "Grant Rights". You can do this for multiple users one after another, but make sure you then click "Commit Changes into SVN" to propagate the changes to the SVN area.

 
META FILEATTACHMENT attachment="findNumberOfFreeCPUs.py.txt" attr="" comment="Python script to return the number of available cpus on the cluster and list the nodes with problem statuses" date="1406728594" name="findNumberOfFreeCPUs.py.txt" path="findNumberOfFreeCPUs.py.txt" size="1568" user="pwap009" version="5"
META FILEATTACHMENT attachment="extractCMD.py.txt" attr="" comment="Python script to extract event from RAW" date="1437048140" name="extractCMD.py.txt" path="extractCMD.py.txt" size="1335" user="pwap009" version="1"
META FILEATTACHMENT attachment="JiveXML_jobOptions_ESDRecEx.py.txt" attr="" comment="Jive XML job option production" date="1437048718" name="JiveXML_jobOptions_ESDRecEx.py.txt" path="JiveXML_jobOptions_ESDRecEx.py.txt" size="2296" user="pwap009" version="1"

Revision 36 - 16 Jul 2015 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 564 to 564
  qselect -u $USER | xargs qdel
Changed:
<
<

>
>

Event Displays with ATLANTIS

8 TeV Event Displays

Recently, the instructions on the main Atlantis webpage do not work. Normally you run a Grid job with the event and run number, and there are scripts which extract the RAW event from the correct dataset and process it to produce the JiveXML.

Here are some instructions on how to get around this and produce 8 TeV event displays hopefully even once people have forgotten all about Run 1.

1/ You need to find the lumiblock your interesting event is in. The best way is to look in a D3PD containing that event and check the lbnumber branch.

2/ You need to find the RAW dataset which contains your event. You can use rucio list-files to list the files in a dataset. RAW datasets are stored one per runnumber, and then multiple files per lumiblock (and multiple lumiblocks per run).

3/ You can do something like rucio list-files data12_8TeV.<runnumber>*RAW* | grep lbXXXX to list the files. You may need trial and error to get the correct naming of the dataset. The lb number has 4 values. If lumi block is 14, then you grep for lb0014, if it is 124 then you grep for lb0124 etc.

4/ Request a transfer of the dataset (DaTRi) to a RHUL or CERN scratchdisk (you will need to find the proper naming eg. UKI-LT2-RHUL_SCRATCHDISK). Use this webpage and list all the files you wish to transfer https://rucio-ui.cern.ch/request_rule.

5/ Wait and then download the RAW files locally.

6/ You now need to find and extract the one event, which could be in any of the lumiblock files. Fortunately there is a tool to do this. You need to setup athena (setupATLAS; asetup 20.1.5.8,gcc48,opt,here,AtlasProduction;) in a working directory. The program you will use is AtlCopyBSEvent.exe; however, I have a script which is a quicker way to test all files ( extractCMD.py.txt), run as python extractCMD.py <runnumber> <eventnumber>. It will need to be updated to point to the directory containing all the RAW files you're interested in.

7/ You should now have some files which have the format runXXXXXXXX_eventYYYYYYYY_RAW.pool.root. This contains a single RAW data event that you've been searching for. It then needs to be processed to produce a JiveXML file which Atlantis can use.

8/ You need to create a symlink in your directory : ln -s <file> test.pool.root and then run athena JiveXML_jobOptions_ESDRecEx.py.txt (I have attached this job option to the twiki too)

9/ This will produce (finally) a JiveXML file which you can then just open with atlantis (open using the GUI).

10/ You can follow the instructions and play around with your event in atlantis : https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/Atlantis
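The zero-padding in step 3 and the file naming in step 7 can be sketched as follows (the run, event and lumiblock numbers here are invented for illustration; the dataset pattern keeps the <runnumber> placeholder from above):

```shell
# Step 3: the lumiblock grep pattern is the lb number zero-padded to 4 digits.
LB=14
LB_PATTERN=$(printf "lb%04d" "$LB")
echo "$LB_PATTERN"    # lb0014
# e.g. rucio list-files data12_8TeV.<runnumber>*RAW* | grep "$LB_PATTERN"

# Step 7: the extracted single-event file follows this naming scheme
# (8-digit zero-padded run number and event number).
RUN=212967
EVENT=86505548
printf "run%08d_event%08d_RAW.pool.root\n" "$RUN" "$EVENT"
```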

13 TeV

You should be able to follow the instructions here (https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/Atlantis) or listed here (/afs/cern.ch/atlas/project/Atlantis/Tutorial/data15_xml*.txt)

I haven't tried to do this yet, but the event picking/ event interface should be working with these files, simplifying the issue slightly.

 
META FILEATTACHMENT attachment="findNumberOfFreeCPUs.py.txt" attr="" comment="Python script to return the number of available cpus on the cluster and list the nodes with problem statuses" date="1406728594" name="findNumberOfFreeCPUs.py.txt" path="findNumberOfFreeCPUs.py.txt" size="1568" user="pwap009" version="5"
Added:
>
>
META FILEATTACHMENT attachment="extractCMD.py.txt" attr="" comment="Python script to extract event from RAW" date="1437048140" name="extractCMD.py.txt" path="extractCMD.py.txt" size="1335" user="pwap009" version="1"
META FILEATTACHMENT attachment="JiveXML_jobOptions_ESDRecEx.py.txt" attr="" comment="Jive XML job option production" date="1437048718" name="JiveXML_jobOptions_ESDRecEx.py.txt" path="JiveXML_jobOptions_ESDRecEx.py.txt" size="2296" user="pwap009" version="1"

Revision 35 - 02 Feb 2015 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 505 to 505
 

Note the special way of opening the TFile. TFile::Open ensures the correct access method is used (which would otherwise need to be supplied as an option to the TFile constructor), so it is transparent whether the file is stored at a URL or locally.

Added:
>
>

UPDATE 2 Feb 2015

This method has become obsolete. Now we should use the Emi package which includes certificate authentication.

If you want to debug anything relating to xrootd connections in root itself, use:

export XrdSecDEBUG="1" # 1-3 for more debugging info
setupATLAS
localSetupROOT
localSetupEmi
# Alternatively this line will just provide the authentication path
# export X509_CERT_DIR="/cvmfs/grid.cern.ch/etc/grid-security/certificates"
voms-proxy-init -voms atlas
root -l
TFile* g = TFile::Open("root://xrootd.esc.qmul.ac.uk//atlas/atlaslocalgroupdisk/rucio/user/morrisj/06/15/user.morrisj.4259793._000016.smtD3PD.root")
 

Job Transformations

- Want to use Reco_tf.py script
- Information about options
- Search for pathena to see an example to submit to the grid a Reco_tf.py job
asetup 17.8.0.9,AtlasProduction
Reco_tf.py -h
--inputEVNTFile INPUTEVNTFILE
--outputNTUP_TRUTHFile NTUP_TRUTH

Revision 34 - 18 Dec 2014 - TomCraneAdmin

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 291 to 291
 You are able to access every single parton which is created in the hard interaction and also in the soft hadronisation afterwards using something like this (for HZ->bbvv):
Changed:
<
<
int main() {
>
>
int main() {
  //Set up generation

Revision 33 - 30 Jul 2014 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 548 to 548
 
Changed:
<
<
META FILEATTACHMENT attachment="findNumberOfFreeCPUs.py.txt" attr="" comment="Python script to return the number of available cpus on the cluster." date="1406641534" name="findNumberOfFreeCPUs.py.txt" path="findNumberOfFreeCPUs.py.txt" size="1045" user="pwap009" version="4"
>
>
META FILEATTACHMENT attachment="findNumberOfFreeCPUs.py.txt" attr="" comment="Python script to return the number of available cpus on the cluster and list the nodes with problem statuses" date="1406728594" name="findNumberOfFreeCPUs.py.txt" path="findNumberOfFreeCPUs.py.txt" size="1568" user="pwap009" version="5"

Revision 32 - 29 Jul 2014 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 545 to 545
 You can do this with qselect and xargs:

qselect -u $USER | xargs qdel \ No newline at end of file

Added:
>
>

META FILEATTACHMENT attachment="findNumberOfFreeCPUs.py.txt" attr="" comment="Python script to return the number of available cpus on the cluster." date="1406641534" name="findNumberOfFreeCPUs.py.txt" path="findNumberOfFreeCPUs.py.txt" size="1045" user="pwap009" version="4"

Revision 31 - 26 Jun 2014 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 504 to 504
 

Note the special way of opening the TFile. TFile::Open ensures the correct access method is used (which would otherwise need to be supplied as an option to the TFile constructor), so it is transparent whether the file is stored at a URL or locally.

Added:
>
>

Job Transformations

- Want to use Reco_tf.py script
- Information about options
- Search for pathena to see an example to submit to the grid a Reco_tf.py job
asetup 17.8.0.9,AtlasProduction
Reco_tf.py -h
--inputEVNTFile INPUTEVNTFILE
--outputNTUP_TRUTHFile NTUP_TRUTH
 

Faraday Cluster

Checking the status of jobs as they run

Revision 30 - 19 Jun 2014 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 483 to 483
 
source /afs/cern.ch/project/eos/installation/atlas/etc/setup.sh
export EOS_MGM_URL=root://eosatlas.cern.ch
kinit iconnell # Sometimes need kerberos authentication
Changed:
<
<
eos cp -r /eos/atlas/atlascerngroupdisk/phys-higgs/HSG8/MiniML/ZllPythia/ ./ [For Example]
>
>
Then eos cp -r /eos/atlas/atlascerngroupdisk/phys-higgs/HSG8/MiniML/ZllPythia/ ./ OR TFile* f = TFile::Open("/eos/atlas/atlascerngroupdisk/phys-higgs/HSG8/MiniML/ttbar/mc12_8TeV.117050.PowhegPythia_P2011C_ttbar.merge.NTUP_TOP.e1727_a188_a205_r4540_p1569/Run_117050_ML_7B.root")
 

Revision 29 - 18 Jun 2014 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 481 to 481
  Note that without the export of EOS_MGM_URL, one needs to append: root://eosatlas/eos/atlas/blahblah when using URLs like the one in the example below.
source /afs/cern.ch/project/eos/installation/atlas/etc/setup.sh
Changed:
<
<
xport EOS_MGM_URL=root://eosatlas.cern.ch eos cp -r /eos/atlas/atlascerngroupdisk/phys-higgs/HSG8/MiniML/ZllPythia/ ./ [For Example]
>
>
export EOS_MGM_URL=root://eosatlas.cern.ch kinit iconnell # Sometimes need kerberos authentication eos cp -r /eos/atlas/atlascerngroupdisk/phys-higgs/HSG8/MiniML/ZllPythia/ ./ [For Example]
 
Added:
>
>
There is also a way of accessing files stored in the xrootd filesystem outside of CERN (ie at QMUL). Make sure you use linappserv5 for this, as it seems to work best with the SLC6 setup of GCC and ROOT. You also need to have a Grid Certificate located in the ~/.globus directory. The VOMS proxy setup shown here is not supported much anymore, but seems to work better in some cases where the setupATLAS options do not work so well.

source /afs/cern.ch/sw/lcg/external/gcc/4.6.2/x86_64-slc6-gcc46-opt/setup.sh
source /afs/cern.ch/sw/lcg/app/releases/ROOT/5.34.09/x86_64-slc6-gcc46-opt/root/bin/thisroot.sh
source /afs/cern.ch/project/gd/LCG-share/current_3.2/etc/profile.d/grid_env.sh
voms-proxy-init --voms atlas
root -l
TFile* f = TFile::Open("root://xrootd.esc.qmul.ac.uk//atlas/atlaslocalgroupdisk/rucio/user/morrisj/b6/51/user.morrisj.034509._00001.merge.smtD3PD.root")


Note the special way of opening the TFile. TFile::Open ensures the correct access method is used (which would otherwise need to be supplied as an option to the TFile constructor), so it is transparent whether the file is stored at a URL or locally.

 

Faraday Cluster

Checking the status of jobs as they run

Revision 28 - 14 May 2014 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 508 to 509
 Note in SLC6 cluster the files have moved to:
less /var/lib/torque/spool/[PBS_ID].OU for output or .ER for error
Added:
>
>

Webpage Monitoring

For the SLC5 Faraday cluster (accessible using qsub, qstat on linappserv0)

http://gfm02.pp.rhul.ac.uk/cgi-bin/pbswebmon.py

For the SLC6 Faraday cluster (accessible using qsub, qstat on linappserv5)

https://server6.pp.rhul.ac.uk/cgi-bin/pbswebmon.py

 

Deleting all your jobs quickly

You can do this with qselect and xargs:

Revision 27 - 12 May 2014 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 471 to 471
  Calling this, and typing in your CERN password, will allow access through afs as if you are logged in to lxplus (when using the private lxplus areas).
Added:
>
>

Using EOS on linappserv

This took a while to dig out and work properly ( webpage).

EOS is like CASTOR (apparently) and is a file system at CERN. It is not backed up but is often used to store data. One can load files directly into ROOT using TFile::Open("root://eosatlas/eos/...") and TFile will use the correct implementation to open the file (else you need to specify it's a WEB location).

To access EOS locations on linappserv, use:

Note that without the export of EOS_MGM_URL, one needs to append: root://eosatlas/eos/atlas/blahblah when using URLs like the one in the example below.

source /afs/cern.ch/project/eos/installation/atlas/etc/setup.sh
xport EOS_MGM_URL=root://eosatlas.cern.ch
eos cp -r /eos/atlas/atlascerngroupdisk/phys-higgs/HSG8/MiniML/ZllPythia/ ./ [For Example]

 

Faraday Cluster

Checking the status of jobs as they run

Revision 26 - 08 May 2014 - BenjaminSowden

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 491 to 491
 less /var/spool/pbs/spool/[PBS_ID].OU for output or .ER for error
Added:
>
>
Note in SLC6 cluster the files have moved to:
less /var/lib/torque/spool/[PBS_ID].OU for output or .ER for error
 

Deleting all your jobs quickly

You can do this with qselect and xargs:

Revision 25 - 03 Mar 2014 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 10 to 10
  My short tutorial on configuring ROOT on Windows.
Added:
>
>

Mac Mavericks Issues

Debugging

gdb is not really supported any more in Mavericks. I have tried to use MacPorts to set up gdb and given it a signed certificate to control the running code. However, it has not been successful. For some reason it seems to be unable to read any ROOT debug objects (giving just ?? for the objects). However, an alternative debugger is available: lldb. This is packaged up within the Xcode command line tools (xcode-select --install), which is able to see the ROOT debug information and does not need any signing with certificates. One should use this on OSX 10.9.

A comparison between gdb and lldb is available here.

Compiling ROOT

It is recommended to compile ROOT from the 5.34-patches branch taken from Git for Mavericks. The latest tags also appear to be compatible.


git clone http://root.cern.ch/git/root.git
cd root
git tag -l
git checkout -b v5-34-17 v5-34-17 or git checkout -t origin/v5-34-00-patches
 

ROOT

Using log scales and stacking in ROOT

Line: 410 to 427
  will display the libraries that are needed to run.
Changed:
<
<

ATLAS Specfics

>
>

ATLAS Specifics

 

Checking if a grid site is scheduled to be offline

Revision 24 - 04 Sep 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 441 to 441
  Following which one can use the information here to get AMI information without using the browser.
Changed:
<
<

ATLAS Notes - Compiling on lxplus

>
>

ATLAS CONF/INT Notes - Compiling on lxplus

  There is good information available here on note writing. In particular, there are instructions about how to source a newer TeX version, as the Atlas style file has some clashes with the older hyperref package.
Added:
>
>

Accessing private afs areas (on lxplus) through linappserv

The RHUL servers are setup with afs access, which means areas on afs (most usefully /afs/cern.ch) are available through the file system.

Typically only public areas are accessible, but it is possible to access private areas related to your normal lxplus login. To achieve this, one needs to authenticate themselves using kerberos.

kinit <CERN Username>

Calling this, and typing in your CERN password, will allow access through afs as if you are logged in to lxplus (when using the private lxplus areas).

 

Faraday Cluster

Checking the status of jobs as they run

Revision 23 - 13 Aug 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 86 to 86
 

Then to compile with this library you use the advice given above.

Added:
>
>

Changing histogram axis scale without altering numbers

The TGaxis class enables you to define the number of significant figures which are printed in the axis scale. I was not aware of this functionality until recently.

For removing a factor of 1000 and adding 10^3 into the axis title...

  #include <TGaxis.h>

   TGaxis::SetMaxDigits(3)
 

BASH Shell (Scripting and tips)

Revision 22 - 18 Jul 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 411 to 411
  Furthermore there is a Twiki page (thanks Simon) which can be used to check whether the problem has been reported by others (check email archives) and how to report the problem if necessary (ie no information listed anywhere about downtime). https://twiki.cern.ch/twiki/bin/viewauth/Atlas/AtlasDAST#Users_Section_Users_Please_Read
Added:
>
>
Also check here: http://atlas-agis-dev.cern.ch/agis/
 

Increasing the lifetime of the VOMS proxy

Revision 21 - 07 Jun 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 428 to 429
  Following which one can use the information here to get AMI information without using the browser.
Added:
>
>

ATLAS Notes - Compiling on lxplus

There is good information available here on note writing. In particular, there are instructions about how to source a newer TeX version, as the Atlas style file has some clashes with the older hyperref package.

 

Faraday Cluster

Checking the status of jobs as they run

Revision 20 - 06 May 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 124 to 124
 *.ps
Added:
>
>

Converting .eps files into .jpg files and making a webpage

gs -sDEVICE=jpeg -dJPEGQ=100 -dNOPAUSE -dBATCH -dSAFER -r300 -sOutputFile=myfile.jpg myfile.eps
mogrify -trim -resize 800x600 myfile.jpg

<html>
<head>
</head>
<body>
<iframe name="myframe" src=""></iframe><br/>
<a href="./location_of_file.jpg" target="myframe">Click to put in iframe</a><br/>
</body>
</html>
With this code setup, one can have a webpage where you click a link and the .jpg image is made to appear in your iframe. One can make a script which would read in the location of the files and then construct the syntax for the html in a relatively simple way, using a bash for loop and reading the file line by line.

Note that web browsers cannot display .eps files, and also if you do not resize the .jpg made from the .eps then you will have an extremely large .jpg file, as .eps resolutions are very large.
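The bash loop described above might look like this (the file names are invented; the snippet just prints the generated html, which you would redirect into the body of the page):

```shell
# Emit an iframe-target link for each .jpg passed in, matching the html
# skeleton above. In practice the list could come from `ls *.jpg` or be
# read line by line from a text file.
generate_links() {
  for f in "$@"; do
    echo "<a href=\"./$f\" target=\"myframe\">$f</a><br/>"
  done
}

generate_links plot1.jpg plot2.jpg
```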

 

Error checking bash script commands

As per the link here, commands in terminal will have a success/fail result in the special character $?. For example:
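A minimal sketch of checking $? (using true/false as stand-ins for real commands):

```shell
# $? holds the exit status of the most recent command: 0 means success,
# anything non-zero means failure.
true
echo "after true: $?"

false
status=$?
echo "after false: $status"

# Typical use: bail out of a script when a step fails.
if [ "$status" -ne 0 ]; then
  echo "previous command failed"
fi
```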

Line: 438 to 453
 You can do this with qselect and xargs:

qselect -u $USER | xargs qdel

Deleted:
<
<

Revision 19 - 03 Apr 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 18 to 18
  THStack* stack = new THStack("Name","Title");
Changed:
<
<
stack->Add(histogram_name);
>
>
stack->Add(histogram_name);
  Setting up a log scale is easy enough in ROOT. Assuming you have a TCanvas you apply the following:

TCanvas* canvas = new TCanvas("Name","Title",10,10,900,900)

Changed:
<
<
canvas->SetLogy(); //You can also do SetLogx();
>
>
canvas->SetLogy(); //You can also do SetLogx();
 
Changed:
<
<
stack->SetMinimum(1);
>
>
stack->SetMinimum(1);
  This final line is important because if you are stacking some histograms, the canvas will typically default to showing 3 or 4 orders of magnitude down from the largest maximum. It is a bit strange but it means that if you have a large background plot and a tiny signal plot, it may not appear straight away from just stacking and setting the scale to log. Setting the minimum to 1 forces the scale to go down to zero (ie log(1) = 0) and the maximum will remain the largest value from the histogram. This was particularly useful for me when I was plotting signal and background scaled number of events passing an MVA cut because to begin with the background events dwarf the amount of signal and only when the cuts progress towards the end of the x scale did the background get cut to the point where there were similar amounts of signal and background events.
Line: 135 to 137
 

Emacs - How to make backspace key delete backwards

Changed:
<
<
Whether it be an issue with my emacs setup on lxplus, it seems that my default backspace behaviour in
emacs -nw
is to be the same as the delete character. To change this in each session do:
>
>
Whether it be an issue with my emacs setup on lxplus, it seems that my default backspace behaviour in
emacs -nw

is to be the same as the delete character. To change this in each session do:

 
M-x (ie Esc-x)
normal-erase-is-backspace-mode
Line: 181 to 184
 

Using Pythia

Changed:
<
<
Pythia is an event generator which simulates particle collisions and the uses the Lund string model to propagate the hadronisation of the interaction. Pythia 8 is currently the latest incarnation and is written in C++. It is possible to set up a local installation of Pythia on your own laptop and assuming you have ROOT installed also, you can use both sets of libraries to carry out a parton level analysis of anything you might want to model. Obviously Pythia does not have any detector level effects (as it is parton level) so you are basically working at a "even-better-than-best-case-scenario" with your results. That said, the distributions found should be similar to the proper MC and data sets. It is worth pointing out that currently pile-up is not well modelled in Pythia 8, not that it has been something I have had to worry about so far, but that was the reason from switching from Pythia 8 back to Pythia 6 in the generation of MC11b and MC11c.
>
>
Pythia is an event generator which simulates particle collisions and then uses the Lund string model to propagate the hadronisation of the interaction. Pythia 8 is currently the latest incarnation and is written in C++. It is possible to set up a local installation of Pythia on your own laptop and, assuming you have ROOT installed also, you can use both sets of libraries to carry out a parton level analysis of anything you might want to model. Obviously Pythia does not have any detector level effects (as it is parton level) so you are basically working at a "even-better-than-best-case-scenario" with your results. That said, the distributions found should be similar to the proper MC and data sets. It is worth pointing out that currently pile-up is not well modelled in Pythia 8; not that it has been something I have had to worry about so far, but that was the reason for switching from Pythia 8 back to Pythia 6 in the generation of MC11b and MC11c.
  You can get Pythia from here.
Line: 241 to 242
 
Changed:
<
<
The example workbook which is available on the main site is very good at introducing the syntax that Pythia uses. The standard procedure is: set up the collision->decide what parton modelling is required (ie hadronisation? multiple interactions?)->run event loop->include analysis in event loop->fin.
>
>
The example workbook which is available on the main site is very good at introducing the syntax that Pythia uses. The standard procedure is: set up the collision->decide what parton modelling is required (ie hadronisation? multiple interactions?)->run event loop->include analysis in event loop->fin.
 
Changed:
<
<
You are able to access every single parton which is created in the hard interaction and also in the soft hadronisation afterwards using something like this (for HZ->bbvv):
>
>
You are able to access every single parton which is created in the hard interaction and also in the soft hadronisation afterwards using something like this (for HZ->bbvv):
 
int main() {
Line: 289 to 290
 

Extracting the luminosity of data files

Changed:
<
<
This is going here as I struggled far more than I should have to get the correct options. You use atlas-lumicalc to extract the luminosity associated with a particular good runs list. The GRL is used as a mask to only use data which has been taken at a time when all the detector functions were working correctly.
>
>
This is going here as I struggled far more than I should have to get the correct options. You use atlas-lumicalc to extract the luminosity associated with a particular good runs list. The GRL is used as a mask to only use data which has been taken at a time when all the detector functions were working correctly.
  At the moment I am using dataset containers which have been made by the Top group to collect together the runs by period of data taking. This means on the face of it, the runs which are inside it are not explicitly clear. There are however DQ2 commands to extract these out. The following is a script I wrote which (when VOMS and DQ2 is set up) will list the datasets and extract the run number and write it to the terminal screen. This uses bash string manipulation which is a bit cumbersome but does the job (# means after the first thing listed, % means before the thing listed).
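The # and % operators mentioned above work like this (the dataset name is a stand-in modeled on the examples elsewhere on this page):

```shell
# ${var#pattern}  strips the shortest front match of pattern,
# ${var##pattern} the longest; ${var%pattern} / ${var%%pattern} do the
# same from the back.
DS="data12_8TeV.00212967.physics_Egamma.merge.RAW"

# Strip everything up to and including the first dot, then strip the
# longest tail starting at a dot -> the run number field is left.
RUN_AND_REST=${DS#*.}        # 00212967.physics_Egamma.merge.RAW
RUN=${RUN_AND_REST%%.*}      # 00212967
echo "$RUN"

# %% from the front field: everything before the first dot.
PROJECT=${DS%%.*}            # data12_8TeV
echo "$PROJECT"
```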
Line: 350 to 351
  The issue with this is two fold:
  1. ) When compiling and linking, the makefile/gcc needs to know where libraries reside in order to include them.
Changed:
<
<
  1. ) At run time, these libraries also need to be identified.
>
>
  1. ) At run time, these libraries also need to be identified.
 This means there are two things to include in the makefile:
-Wl, -rpath $(abspath./directory)
Line: 366 to 366
  This is used to link the library at compilation so it knows to include it.
Changed:
<
<
Sidenote: Ideally, a library needs to be called
 lib<Name>.so
in order to include it as
-l<Name>
>
>
Sidenote: Ideally, a library needs to be called
 lib<Name>.so

in order to include it as

-l<Name>
  Both of these terms (which one could set to a makefile variable) need to be listed as library inclusions in the building phase of the compilation and linking phase.

Diagnostics

Changed:
<
<
 readelf -d <Executable> 
will list the shared objects and the rpath.
 ldd <Executable> 
will display the libraries that are needed to run.
>
>
 readelf -d <Executable> 

will list the shared objects and the rpath.

 ldd <Executable> 

will display the libraries that are needed to run.

 


ATLAS Specifics

Line: 385 to 393
  One can also check here (http://bourricot.cern.ch/blacklisted_production.html) for a summary of site status.
Changed:
<
<
Furthermore there is a Twiki page (thanks Simon) which can be used to check whether the problem has been reported by others (check email archives) and how to report the problem if necessary (ie no information listed anywhere about downtime). https://twiki.cern.ch/twiki/bin/viewauth/Atlas/AtlasDAST#Users_Section_Users_Please_Read
>
>
Furthermore there is a Twiki page (thanks Simon) which can be used to check whether the problem has been reported by others (check email archives) and how to report the problem if necessary (ie no information listed anywhere about downtime). https://twiki.cern.ch/twiki/bin/viewauth/Atlas/AtlasDAST#Users_Section_Users_Please_Read
 

Increasing the lifetime of the VOMS proxy

Line: 394 to 401
 voms-proxy-init --voms atlas -valid 192:00
Changed:
<
<
You can put a larger number there. Anything too large will be capped to the maximum allowed by the proxy. In my case I have an alias called VOMS which sources the script for the proxy and calls voms-proxy-init --voms atlas, so I can just do VOMS -valid 192:00 to increase using that alias
>
>
You can put a larger number there. Anything too large will be capped to the maximum allowed by the proxy. In my case I have an alias called VOMS which sources the script for the proxy and calls voms-proxy-init --voms atlas, so I can just do VOMS -valid 192:00 to increase using that alias
 

Using pyAMI to access dataset information

Line: 426 to 432
 ... less /var/spool/pbs/spool/[PBS_ID].OU for output or .ER for error \ No newline at end of file
Added:
>
>

Deleting all your jobs quickly

You can do this with qselect and xargs:

qselect -u $USER | xargs qdel
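A risk-free way to check what the pipe will do is to put echo in front of qdel, which prints the command instead of running it. The printf here just stands in for qselect so the sketch can be tried anywhere:

```shell
# printf fakes three job IDs on separate lines, as qselect would print them
printf '101\n102\n103\n' | xargs echo qdel   # prints: qdel 101 102 103
```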

Revision 18 - 14 Mar 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 406 to 406
 

Following which one can use the information here to get AMI information without using the browser. \ No newline at end of file

Added:
>
>

Faraday Cluster

Checking the status of jobs as they run

If you want to see your program output when it is running on a cluster node, do:

qstat -n

This shows the node it is running on.

Then ssh to that node:

ssh pbs1
...
ssh node[NUM]
...
less /var/spool/pbs/spool/[PBS_ID].OU for output or .ER for error 
 \ No newline at end of file

Revision 17 - 11 Mar 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 165 to 165
 This makes use of file handlers (here assigned to 5 and 6) to loop through the file (just make sure each item is on a new line in the files) and voila!
Added:
>
>

Using coloured text in C++ output

See this link.

Basic premise for using red text then switching back to console default:

    std::string redtext = "\033[0;31m";
    std::string blacktext = "\033[0m";
    std::cout << redtext << std::endl;
    std::cout << "************************************************" << std::endl;
    std::cout << "====> LOADING CONFIGURATION FILE OPTIONS " << std::endl;
    std::cout << blacktext << std::endl;
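The same escape codes can be tried straight from the shell before hardcoding them in C++, since printf interprets \033 as the escape character:

```shell
printf '\033[0;31mThis is red\033[0m and this is the console default\n'
```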
 

Using Pythia

Pythia is an event generator which simulates particle collisions and then uses the Lund string model to describe the hadronisation of the interaction. Pythia 8 is currently the latest incarnation and is written in C++. It is possible to set up a local installation of Pythia on your own laptop and, assuming you also have ROOT installed, you can use both sets of libraries to carry out a parton-level analysis of anything you might want to model.

Revision 16 - 08 Mar 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 382 to 382
  You can put a larger number there. Anything too large will be capped to the maximum allowed by the proxy. In my case I have an alias called VOMS which sources the script for the proxy and calls voms-proxy-init --voms atlas, so I can just do VOMS -valid 192:00 to increase using that alias \ No newline at end of file
Added:
>
>

Using pyAMI to access dataset information

Following the setup here, one can setup pyAMI with the commands

setupATLAS
localSetupPyAMI

Following which one can use the information here to get AMI information without using the browser.

 \ No newline at end of file

Revision 15 - 05 Mar 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 367 to 367
  I've had the problem a few times that Grid jobs have been sent to a site, only for retrieval to be impossible at a later date because the site is blacklisted/ offline. This link provides a list of scheduled and unscheduled maintenance which may explain why the site is down and for how long : https://twiki.cern.ch/twiki/bin/view/Atlas/AtlasGridDowntime
Added:
>
>

Further information for GRID sites

One can also check here (http://bourricot.cern.ch/blacklisted_production.html) for a summary of site status.

Furthermore there is a Twiki page (thanks Simon) which can be used to check whether the problem has been reported by others (check email archives) and how to report the problem if necessary (ie no information listed anywhere about downtime). https://twiki.cern.ch/twiki/bin/viewauth/Atlas/AtlasDAST#Users_Section_Users_Please_Read

 

Increasing the lifetime of the VOMS proxy

Revision 14 - 05 Mar 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 366 to 366
 

Checking if a grid site is scheduled to be offline

I've had the problem a few times that Grid jobs have been sent to a site, only for retrieval to be impossible at a later date because the site is blacklisted/ offline. This link provides a list of scheduled and unscheduled maintenance which may explain why the site is down and for how long : https://twiki.cern.ch/twiki/bin/view/Atlas/AtlasGridDowntime \ No newline at end of file

Added:
>
>

Increasing the lifetime of the VOMS proxy

voms-proxy-init --voms atlas -valid 192:00

You can put a larger number there. Anything too large will be capped to the maximum allowed by the proxy. In my case I have an alias called VOMS which sources the script for the proxy and calls voms-proxy-init --voms atlas, so I can just do VOMS -valid 192:00 to increase using that alias

 \ No newline at end of file

Revision 13 - 25 Feb 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 360 to 360
 
 readelf -d <Executable> 
will list the shared objects and the rpath.
 ldd <Executable> 
will display the libraries that are needed to run. \ No newline at end of file
Added:
>
>


ATLAS Specifics

Checking if a grid site is scheduled to be offline

I've had the problem a few times that Grid jobs have been sent to a site, only for retrieval to be impossible at a later date because the site is blacklisted/ offline. This link provides a list of scheduled and unscheduled maintenance which may explain why the site is down and for how long : https://twiki.cern.ch/twiki/bin/view/Atlas/AtlasGridDowntime

 \ No newline at end of file

Revision 12 - 23 Jan 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 313 to 313
  This syntax indicates to take all runs between the ones hyphenated, but the equals sign seems to work in this case but not the one above.
Added:
>
>
https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/GoodRunListsForAnalysis
 

Interesting thing with C dynamic libraries

Revision 11 - 11 Jan 2013 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 10 to 10
  My short tutorial on configuring ROOT on Windows.
Added:
>
>

ROOT

 
Changed:
<
<

Using log scales and stacking in ROOT

>
>

Using log scales and stacking in ROOT

  To create a stack of histograms you typically use the THStack class. To create a stack of TGraphs you use the TMultiGraph class, which works in a similar way.
Line: 29 to 30
  This final line is important because if you are stacking some histograms, the canvas will typically default to showing 3 or 4 orders of magnitude down from the largest maximum. It is a bit strange but it means that if you have a large background plot and a tiny signal plot, it may not appear straight away from just stacking and setting the scale to log. Setting the minimum to 1 forces the scale to go down to zero (ie log(1) = 0) and the maximum will remain the largest value from the histogram. This was particularly useful for me when I was plotting signal and background scaled number of events passing an MVA cut because to begin with the background events dwarf the amount of signal and only when the cuts progress towards the end of the x scale did the background get cut to the point where there were similar amounts of signal and background events.
Changed:
<
<

Macro for stacking TGraphs

>
>

Macro for stacking TGraphs

  This is a simple little macro but it took a little bit of time to work out all the options to draw the lines properly. The alternative would be to use Glen's script to read out a text file of numbers which plot multiple TGraphs on the same canvas which is slightly different to this. My macro takes two already made TGraphs and will scale them to whatever required luminosity and plot them on the same canvas in the same way THStack does.
Line: 59 to 60
 return 0; }
Changed:
<
<

Other macros

>
>

Other macros

  See my webpage for additional scripts and macros I have written as it is generally more convenient to keep them there.
Added:
>
>

Merging ROOT files

This isn't something I have used properly yet, but I have just come across it. In addition to the TFileMerger class available in ROOT, when you set up your environment, you also get a program called hadd which uses the syntax:

hadd <targetfile.root> <input files.root>
The program will add the histograms and TTree entries together. Whilst this can be used to combine, say, all your background samples into one single sample, I realise that it could be used as a simple alternative to PROOF and parallel processing.

My particular analysis program accepts as input the directory of a sample in which are a list of ROOT files which get added to a TChain and then my program, created using MakeClass, will run over the TChain. If I have a large number of ROOT files though, I can now split the same sample into a number of dataset directories, and run the analysis over smaller directories and then merge the results once all processing has been achieved.

In the long run it will probably be better to understand and use PROOF as, from what I have read, that dynamically alters the work load on available nodes to optimise the processing and then merges them together at the end. I think you can also use PROOF-lite to take advantage of multicore processors in your laptop/desktop if you are processing it locally which you wouldn't be able to do with hadd.

Using with Root dictionaries

CINT is able to produce libraries for specific ROOT objects. The main example is wanting to save a vector of TLorentzVectors to a TTree. By default this will not work. However, you can generate libraries to allow this to work. They take the command (in ROOT CINT):

>> root
>> gInterpreter->GenerateDictionary("vector<TLorentzVector>","TLorentzVector.h,vector")

Then to compile with this library you use the advice given above.

BASH Shell (Scripting and tips)

Ctrl-z in terminal - Pausing a process

If you have a process running, typing ctrl+z will pause that process allowing you to use the command line prompt.

If you then type bg it is equivalent to having originally set your program running with an & at the end - ie it tells it to resume running in the background.

You can also type fg to bring it back to running in the foreground where you can see output, but cannot access the command line.

Apparently typing jobs will tell you what is running in the background and allow you to call them to the foreground if need be.
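In a (bash) script the same job control shows up through &, $! and wait; a minimal sketch:

```shell
sleep 1 &       # '&' starts the command in the background
BGPID=$!        # $! holds the PID of the most recent background job
jobs            # lists background jobs, as in an interactive shell
wait $BGPID     # blocks until the job finishes (roughly what fg does)
```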

Getting the full path of multiple files

In terminal you can use:

find . 
to retrieve the list of all files and folders in the current directory, much like ls.

However, this is useful if you want to get hold of the full path directory of a file, for example:

find `pwd` -type f
will return the list of all files in the current directory with their full path (as using pwd instead of . gives the absolute path to the files).

This can save some time if you need to list them in a file to run over with a script.
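A self-contained check, using a scratch directory from mktemp so nothing real is touched:

```shell
DEMODIR=$(mktemp -d)                 # throwaway directory for the demo
touch "$DEMODIR/a.root" "$DEMODIR/b.root"
PATHS=$(find "$DEMODIR" -type f)     # every entry carries the full path
echo "$PATHS"
rm -r "$DEMODIR"
```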

Merging Multiple .ps files into one .pdf

gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=merged.pdf \
*.ps

Error checking bash script commands

As per the link here, commands in terminal will have a success/fail result in the special character $?. For example:

ls *.root
RESULT=$?
if [ $RESULT -ne 0 ]; then
    echo "ERROR in the command"
fi
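The same check can be collapsed with the shell's || operator, which runs its right-hand side only when the command before it fails:

```shell
# The echo only fires if ls exits with a non-zero status
ls /nonexistent/*.root 2> /dev/null || echo "ERROR in the command"
```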

Emacs - How to make backspace key delete backwards

It may just be an issue with my emacs setup on lxplus, but the default backspace behaviour in

emacs -nw
is to be the same as the delete character. To change this in each session do:
M-x (ie Esc-x)
normal-erase-is-backspace-mode

BASH Scripting : Looping through two arrays simultaneously

I wanted to have two arrays, one with a filename and the other with an output file name. I needed to use shell scripting to manipulate the file names because of some systematics which needed renaming. I originally set up a loop over a list of filenames, which can be performed as:

FILES=(file1.root file2.root textfile.txt)
for F in ${FILES[@]}; do
    echo $F
done

However you cannot quite use this as you would in C++ as you are not using a counter, per se, to control it.

An alternative method taken from here explains how to simultaneously loop through two distinct lists:

# First have your lists in two files (listA.txt, listB.txt here)
while read -u 5 aVAR && read -u 6 bVAR; do
    echo $aVAR "   " $bVAR
done 5<listA.txt 6<listB.txt

This makes use of file handlers (here assigned to 5 and 6) to loop through the file (just make sure each item is on a new line in the files) and voila!
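If the two lists live in bash arrays rather than files, a counter-style loop over the array indices does the same job (the array names here are just for the example):

```shell
INFILES=(file1.root file2.root)
OUTFILES=(out1.root out2.root)
# "${!INFILES[@]}" expands to the indices 0 1 ..., giving a shared counter
for i in "${!INFILES[@]}"; do
    echo "${INFILES[$i]} -> ${OUTFILES[$i]}"
done
```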

 

Using Pythia

Pythia is an event generator which simulates particle collisions and then uses the Lund string model to describe the hadronisation of the interaction. Pythia 8 is currently the latest incarnation and is written in C++. It is possible to set up a local installation of Pythia on your own laptop and, assuming you also have ROOT installed, you can use both sets of libraries to carry out a parton-level analysis of anything you might want to model.

Line: 169 to 271
  The status codes (and more information about the particle classes and extra things) are available on the html document. This is installed locally or can be accessed online at the Lund hosted version here.
Changed:
<
<

Ctrl-z in terminal - Pausing a process

If you have a process running, typing ctrl+z will pause that process allowing you to use the command line prompt.

If you then type bg it is equivalent to having originally set your program running with an & at the end - ie it tells it to resume running in the background.

You can also type fg to bring it back to running in the foreground where you can see output, but cannot access the command line.

Apparently typing jobs will tell you what is running in the background and allow you to call them to the foreground if need be.

Merging ROOT files

This isn't something I have used properly yet, but I have just come across it. In addition to the TFileMerger class available in ROOT, when you set up your environment, you also get a program called hadd which uses the syntax:

hadd <targetfile.root> <input files.root>
The program will add the histograms and TTree entries together. Whilst this can be used to combine, say, all your background samples into one single sample, I realise that it could be used as a simple alternative to PROOF and parallel processing.

My particular analysis program accepts as input the directory of a sample in which are a list of ROOT files which get added to a TChain and then my program, created using MakeClass, will run over the TChain. If I have a large number of ROOT files though, I can now split the same sample into a number of dataset directories, and run the analysis over smaller directories and then merge the results once all processing has been achieved.

In the long run it will probably be better to understand and use PROOF as, from what I have read, that dynamically alters the work load on available nodes to optimise the processing and then merges them together at the end. I think you can also use PROOF-lite to take advantage of multicore processors in your laptop/desktop if you are processing it locally which you wouldn't be able to do with hadd.

Getting the full path of multiple files

>
>

Analysis Specific

 
Changed:
<
<
In terminal you can use:
find . 
to retrieve the list of all files and folders in the current directory, much like ls.

However, this is useful if you want to get hold of the full path directory of a file, for example:

find `pwd` -type f
will return the list of all files in the current directory with their full path (as using pwd instead of . gives the static/global path to the files).

This can save some time if you need to list them in a file to run over with a script.

Merging Multiple .ps files into one .pdf

gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=merged.pdf \
*.ps

Extracting the luminosity of data files

>
>

Extracting the luminosity of data files

  This is going here as I struggled far more than I should have to get the correct options. You use atlas-lumicalc to extract the luminosity associated with a particular good runs list. The GRL is used as a mask to only use data which has been taken at a time when all the detector functions were working correctly.
Line: 254 to 313
  This syntax indicates to take all runs between the ones hyphenated, but the equals sign seems to work in this case but not the one above.
Deleted:
<
<

Error checking bash script commands

As per the link here, commands in terminal will have a success/fail result in the special character $?. For example:

ls *.root
RESULT=$?
if [ $RESULT -ne 0 ]; then
    echo "ERROR in the command"
fi

Emacs - How to make backspace key delete backwards

Whether it be an issue with my emacs setup on lxplus, it seems that my default backspace behaviour in

emacs -nw
is to be the same as the delete character. To change this in each session do:
M-x (ie Esc-x)
normal-erase-is-backspace-mode
 

Interesting thing with C dynamic libraries

Line: 319 to 358
 
 readelf -d <Executable> 
will list the shared objects and the rpath.
 ldd <Executable> 
will display the libraries that are needed to run.
Deleted:
<
<

Using with Root dictionaries

CINT is able to produce libraries for specific ROOT objects. The main example is wanting to save a vector of TLorentzVectors to a TTree. By default this will not work. However, you can generate libraries to allow this to work. They take the command (in ROOT CINT):

>> root
>> gInterpreter->GenerateDictionary("vector<TLorentzVector>","TLorentzVector.h,vector")

Then to compile with this library you use the advice given above.

Revision 10 - 13 Dec 2012 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 273 to 273
 M-x (ie Esc-x) normal-erase-is-backspace-mode \ No newline at end of file
Added:
>
>

Interesting thing with C dynamic libraries

    // Weird code to load the dynamic library and extract function
    // This would work if it was a C library but it is not
    // C++ does name mangling which stops you from just using
    // the name of the function to get it from the library
    // This is because C++ allows fn overload but C does not.
    // Try using ROOT to load instead.
    /*
    void* mv1cLib = dlopen("../libMV1c_cxx.so", RTLD_LAZY);
    float (*mv1cFcn)(float,float,float,float,float,float,float);
    *(void **)(&mv1cFcn) = dlsym(mv1cLib, "mv1cEval");
    */
    //gROOT->LoadMacro("MV1c_cxx.so");

Some notes on compiling against dynamic libraries/ libraries which are not listed in LD_LIBRARY_PATH

The issue with this is twofold:

  1. ) When compiling and linking, the makefile/gcc needs to know where libraries reside in order to include them.
  2. ) At run time, these libraries also need to be identified.

This means there are two things to include in the makefile:

-Wl,-rpath,$(abspath ./directory)

This adds a hardcoded part to the executable which says at runtime "check this path for libraries" (nb -Wl is gcc saying "pass this option to the linker").

Then one needs:

-L$(abspath ./directory) -lMyLibrary

This is used to link the library at compilation so it knows to include it.

Sidenote: Ideally, a library needs to be called

 lib<Name>.so
in order to include it as
-l<Name>

Both of these terms (which one could set to a makefile variable) need to be listed as library inclusions in the building phase of the compilation and linking phase.

Diagnostics

 readelf -d <Executable> 
will list the shared objects and the rpath.
 ldd <Executable> 
will display the libraries that are needed to run.

Using with Root dictionaries

CINT is able to produce libraries for specific ROOT objects. The main example is wanting to save a vector of TLorentzVectors to a TTree. By default this will not work. However, you can generate libraries to allow this to work. They take the command (in ROOT CINT):

>> root
>> gInterpreter->GenerateDictionary("vector<TLorentzVector>","TLorentzVector.h,vector")

Then to compile with this library you use the advice given above.

Revision 9 - 05 Dec 2012 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 264 to 264
  echo "ERROR in the command" fi \ No newline at end of file
Added:
>
>

Emacs - How to make backspace key delete backwards

It may just be an issue with my emacs setup on lxplus, but the default backspace behaviour in

emacs -nw
is to be the same as the delete character. To change this in each session do:
M-x (ie Esc-x)
normal-erase-is-backspace-mode
 \ No newline at end of file

Revision 8 - 22 Nov 2012 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 253 to 253
 -r="datarun-datarun,datarun,datarun-datarun..." This syntax indicates to take all runs between the ones hyphenated, but the equals sign seems to work in this case but not the one above. \ No newline at end of file
Added:
>
>

Error checking bash script commands

As per the link here, commands in terminal will have a success/fail result in the special character $?. For example:

ls *.root
RESULT=$?
if [ $RESULT -ne 0 ]; then
    echo "ERROR in the command"
fi
 \ No newline at end of file

Revision 7 - 20 Nov 2012 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 213 to 213
 gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=merged.pdf *.ps \ No newline at end of file
Added:
>
>

Extracting the luminosity of data files

This is going here as I struggled far more than I should have to get the correct options. You use atlas-lumicalc to extract the luminosity associated with a particular good runs list. The GRL is used as a mask to only use data which has been taken at a time when all the detector functions were working correctly.

At the moment I am using dataset containers made by the Top group, which collect the runs together by period of data taking. On the face of it, the runs inside a container are therefore not explicitly listed, but there are DQ2 commands to extract them. The following is a script I wrote which (when VOMS and DQ2 are set up) will list the datasets, extract the run numbers and write them to the terminal screen. It uses bash string manipulation, which is a bit cumbersome but does the job (the # operator strips a matching prefix, the % operator strips a matching suffix).

# To list the datasets inside a container
# @param DATASET=list.txt // Put in list.txt the data containers you want to check
# From https://twiki.cern.ch/twiki/bin/viewauth/Atlas/DQ2ClientsHowTo#DatasetsContainerCommands
# Ian Connelly 20 Nov 2012

DATASET=list.txt

while IFS= read -r DATASET; do
  echo $DATASET
  for LINE in `dq2-list-datasets-container $DATASET`; do
    #data12_8TeV.00207589.physics_Muons.merge.NTUP_TOPMU.f467_m1191_p1104_p1141_tid00929888_00
    RUN=${LINE#data12_8TeV.*}
    RUN=${RUN%*.*.*.*.*}
    RUN=${RUN#00*}
    echo -n "$RUN,"
  done  
  echo ""
  echo ""
done < $DATASET

Once you run this, the terminal will show each data container name followed by the list of runs inside it. If you go to atlas-lumicalc, upload your GRL and in the extra options line put:

-r "comma-separated-string-of-runs"

Make sure there are no spaces between runs and commas. You can also use (it seems):

-r="datarun-datarun,datarun,datarun-datarun..."
This syntax indicates to take all runs between the ones hyphenated, but the equals sign seems to work in this case but not the one above.
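As a sanity check, the three trimming steps from the script can be run on one hardcoded dataset name (the name from the comment in the script) without needing DQ2 at all:

```shell
LINE="data12_8TeV.00207589.physics_Muons.merge.NTUP_TOPMU.f467_m1191_p1104_p1141_tid00929888_00"
RUN=${LINE#data12_8TeV.*}   # strip the project prefix
RUN=${RUN%*.*.*.*.*}        # strip everything after the run number field
RUN=${RUN#00*}              # strip the leading zero padding
echo "$RUN"                 # prints 207589
```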
 \ No newline at end of file

Revision 6 - 08 Oct 2012 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 207 to 207
  This can save some time if you need to list them in file to run over with a script.

\ No newline at end of file

Added:
>
>

Merging Multiple .ps files into one .pdf

gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=merged.pdf \
*.ps
 \ No newline at end of file

Revision 5 - 25 Jul 2012 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 191 to 191
  In the long run it will probably be better to understand and use PROOF as, from what I have read, that dynamically alters the work load on available nodes to optimise the processing and then merges them together at the end. I think you can also use PROOF-lite to take advantage of multicore processors in your laptop/desktop if you are processing it locally which you wouldn't be able to do with hadd.

Added:
>
>

Getting the full path of multiple files

In terminal you can use:

find . 
to retrieve the list of all files and folders in the current directory, much like ls.

However, this is useful if you want to get hold of the full path directory of a file, for example:

find `pwd` -type f
will return the list of all files in the current directory with their full path (as using pwd instead of . gives the absolute path to the files).

This can save some time if you need to list them in a file to run over with a script.

 \ No newline at end of file

Revision 4 - 24 Mar 2012 - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"
Changed:
<
<

Tutorials and tips that may come in useful

>
>

Tutorials and tips that may come in useful

  This is still a massive work in progress. At some point I will copy out some of the notes I have made which have been useful and put them here.
Changed:
<
<

Setting up ROOT on Windows

>
>
<-- Defines index at the top of the page -->

Setting up ROOT on Windows

  My short tutorial on configuring ROOT on Windows.
Changed:
<
<

Using log scales and stacking in ROOT

>
>

Using log scales and stacking in ROOT

  To create a stack of histograms you typically use the THStack class. To create a stack of TGraphs you use the TMultiGraph class, which works in a similar way.
Line: 27 to 29
  This final line is important because if you are stacking some histograms, the canvas will typically default to showing 3 or 4 orders of magnitude down from the largest maximum. It is a bit strange but it means that if you have a large background plot and a tiny signal plot, it may not appear straight away from just stacking and setting the scale to log. Setting the minimum to 1 forces the scale to go down to zero (ie log(1) = 0) and the maximum will remain the largest value from the histogram. This was particularly useful for me when I was plotting signal and background scaled number of events passing an MVA cut because to begin with the background events dwarf the amount of signal and only when the cuts progress towards the end of the x scale did the background get cut to the point where there were similar amounts of signal and background events.
Changed:
<
<

Macro for stacking TGraphs

>
>

Macro for stacking TGraphs

  This is a simple little macro but it took a little bit of time to work out all the options to draw the lines properly. The alternative would be to use Glen's script to read out a text file of numbers which plot multiple TGraphs on the same canvas which is slightly different to this. My macro takes two already made TGraphs and will scale them to whatever required luminosity and plot them on the same canvas in the same way THStack does.
Line: 57 to 59
 return 0; }
Changed:
<
<

Other macros

>
>

Other macros

  See my webpage for additional scripts and macros I have written as it is generally more convenient to keep them there.
Changed:
<
<

Using Pythia

>
>

Using Pythia

  Pythia is an event generator which simulates particle collisions and then uses the Lund string model to describe the hadronisation of the interaction. Pythia 8 is currently the latest incarnation and is written in C++. It is possible to set up a local installation of Pythia on your own laptop and, assuming you also have ROOT installed, you can use both sets of libraries to carry out a parton-level analysis of anything you might want to model. Obviously Pythia does not include any detector-level effects (as it is parton level), so you are basically working in an "even-better-than-best-case" scenario with your results. That said, the distributions found should be similar to the proper MC and data sets.
Line: 167 to 169
  The status codes (and more information about the particle classes and extra things) are available on the html document. This is installed locally or can be accessed online at the Lund hosted version here.
Changed:
<
<

Ctrl-z in terminal - Pausing a process

>
>

Ctrl-z in terminal - Pausing a process

  If you have a process running, typing ctrl+z will pause that process allowing you to use the command line prompt.
Line: 176 to 178
 You can also type fg to bring it back to running in the foreground where you can see output, but cannot access the command line.

Apparently typing jobs will tell you what is running in the background and allow you to call them to the foreground if need be.

Added:
>
>

Merging ROOT files

This isn't something I have used properly yet, but I have just come across it. In addition to the TFileMerger class available in ROOT, when you set up your environment, you also get a program called hadd which uses the syntax:

hadd <targetfile.root> <input files.root>
The program will add the histograms and TTree entries together. Whilst this can be used to combine, say, all your background samples into one single sample, I realise that it could be used as a simple alternative to PROOF and parallel processing.

My particular analysis program accepts as input the directory of a sample in which are a list of ROOT files which get added to a TChain and then my program, created using MakeClass, will run over the TChain. If I have a large number of ROOT files though, I can now split the same sample into a number of dataset directories, and run the analysis over smaller directories and then merge the results once all processing has been achieved.

In the long run it will probably be better to understand and use PROOF as, from what I have read, that dynamically alters the work load on available nodes to optimise the processing and then merges them together at the end. I think you can also use PROOF-lite to take advantage of multicore processors in your laptop/desktop if you are processing it locally which you wouldn't be able to do with hadd.
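As a sketch, the split-then-merge workflow might look like the script below. The analysis executable, directory layout and file names are all hypothetical (only the hadd syntax itself comes from ROOT), so the commands are just echoed here rather than run:

```shell
#!/bin/bash
# Hypothetical split-then-merge workflow. './analysis', 'samples/partN'
# and 'results_partN.root' are made-up names for illustration.
for part in part1 part2 part3; do
    # each sub-directory would be processed independently (or in parallel)
    echo "would run: ./analysis samples/${part} -o results_${part}.root"
done
# hadd then sums the histograms and concatenates the TTree entries:
echo "would run: hadd -f results_merged.root results_part*.root"
```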

Revision 3 (13 Mar 2012) - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

Line: 166 to 166
 The status codes allow you to find partons which are final state or intermediate state partons. In addition, there is another method called pythia.process[i] which apparently only accesses the hard partons whereas .event[i] accesses all of them. Note though that if you turn off too many effects so you only have the hard interaction, .process[i] stops functioning (from what I have heard) so you can only access through .event[i].

The status codes (and more information about the particle classes and extra things) are available on the html document. This is installed locally or can be accessed online at the Lund hosted version here.

Added:
>
>

Ctrl-z in terminal - Pausing a process

If you have a process running, typing ctrl+z will pause that process allowing you to use the command line prompt.

If you then type bg it is equivalent to having originally set your program running with an & at the end - ie it tells it to resume running in the background.

You can also type fg to bring it back to running in the foreground where you can see output, but cannot access the command line.

Apparently typing jobs will tell you what is running in the background and allow you to call them to the foreground if need be.

Revision 2 (13 Mar 2012) - IanConnelly

Line: 1 to 1
 
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

This is still a massive work in progress. At some point I will copy out some of the notes I have made which have been useful and put them here.

Added:
>
>

Setting up ROOT on Windows

 My short tutorial on configuring ROOT on Windows.
Added:
>
>

Using log scales and stacking in ROOT

To create a stack of histograms you typically use the THStack class. To create a stack of TGraphs you use the TMultiGraph class, which works in a similar way.

THStack* stack = new THStack("Name","Title");

stack->Add(histogram_name);

Setting up a log scale is easy enough in ROOT. Assuming you have a TCanvas you apply the following:

TCanvas* canvas = new TCanvas("Name","Title",10,10,900,900)

canvas->SetLogy(); //You can also do SetLogx();

stack->SetMinimum(1);

This final line is important: when stacking histograms on a log scale, the canvas typically defaults to showing only 3 or 4 orders of magnitude below the largest maximum. It is a bit strange, but it means that if you have a large background plot and a tiny signal plot, the signal may not appear at all after stacking and setting the scale to log. Setting the minimum to 1 forces the axis down to zero on the log scale (i.e. log(1) = 0), while the maximum remains the largest value from the histogram. This was particularly useful for me when plotting the scaled number of signal and background events passing an MVA cut: to begin with the background events dwarf the signal, and only towards the end of the x scale does the background get cut to the point where there are similar amounts of signal and background events.

Macro for stacking TGraphs

This is a simple little macro but it took a little bit of time to work out all the options to draw the lines properly. The alternative would be to use Glen's script to read out a text file of numbers which plot multiple TGraphs on the same canvas which is slightly different to this. My macro takes two already made TGraphs and will scale them to whatever required luminosity and plot them on the same canvas in the same way THStack does.

int tgraphstack() {

    TFile* f = new TFile("f_hist.root", "READ");

    TGraph* signal = (TGraph*)f->Get("scaled signal;1");
    TGraph* background = (TGraph*)f->Get("scaled background;1");
    TMultiGraph* multiG = new TMultiGraph("multiG", "Evolution of number of events with BDT cut scaled to 20fb^{-1}");

    TCanvas* c = new TCanvas("c", "c", 2000, 2000);
    c->SetLogy();
    c->RangeAxis(-1, 1, 0, 1000000);
    multiG->Add(background);
    multiG->Add(signal);
    // Interesting behaviour where TMultiGraph just sets the minimum y axis to
    // 3 orders of magnitude less than the maximum point being plotted.
    // Using SetMinimum forces the minimum point to be fixed.
    // Here use 1 because log(1) = 0.
    multiG->SetMinimum(1);
    multiG->Draw("AL");  // "AL" draws the axes and plots the graphs as lines
    // Axis titles have to be set after the call to Draw
    multiG->GetXaxis()->SetTitle("t_{BDT} cut");
    multiG->GetYaxis()->SetTitle("Log(scaled number of events)");

    return 0;
}

Other macros

See my webpage for additional scripts and macros I have written, as it is generally more convenient to keep them there.

Using Pythia

Pythia is an event generator which simulates particle collisions and then uses the Lund string model to model the hadronisation of the interaction. Pythia 8 is currently the latest incarnation and is written in C++. It is possible to set up a local installation of Pythia on your own laptop and, assuming you also have ROOT installed, you can use both sets of libraries to carry out a parton level analysis of anything you might want to model. Obviously Pythia does not include any detector level effects (as it works at parton level), so you are basically working in an "even-better-than-best-case-scenario" with your results. That said, the distributions found should be similar to the proper MC and data sets. It is worth pointing out that currently pile-up is not well modelled in Pythia 8; not that it has been something I have had to worry about so far, but that was the reason for switching from Pythia 8 back to Pythia 6 in the generation of MC11b and MC11c.

You can get Pythia from here.

Once installed, you can compile against the Pythia and ROOT libraries using the following Makefile. In addition, you will need to edit a line in one of the xml files. I forget which, but on the first compile an error will come up which should point you in the direction of it: the file uses a relative path (i.e. ../folder) instead of an absolute one. This means that if you try to compile outside of the folder in which Pythia generates the example code, you will get this error, so bear this in mind.

# Glen Cowan, RHUL Physics, February 2012
# Edited for use by Ian Connelly, Feb 2012

PROGNAME      = signal
SOURCES       = signal.cc
INCLUDES      = 
OBJECTS       = $(patsubst %.cc, %.o, $(SOURCES))
ROOTCFLAGS   := $(shell root-config --cflags)
ROOTLIBS     := $(shell root-config --libs)
ROOTGLIBS    := $(shell root-config --glibs)
ROOTLIBS     := $(shell root-config --nonew --libs)
ROOTINCDIR   := $(shell root-config --incdir)
CFLAGS       += $(ROOTCFLAGS)
LIBS         += $(ROOTLIBS)
#  Not sure why Minuit isn't being included -- put in by hand
#
LIBS         += -lMinuit
LDFLAGS       = -O

#  Now the Pythia stuff
#

# Location of Pythia8 directories 
# (default is parent directory, change as necessary)
PY8DIR=/Users/Ian/Documents/pythia8160
PY8INCDIR=$(PY8DIR)/include
PY8LIBDIR=$(PY8DIR)/lib
PY8LIBDIRARCH=$(PY8DIR)/lib/archive


# Include Pythia and Pythia examples config files
-include $(PY8DIR)/config.mk
-include $(PY8DIR)/examples/config.mk

LIBS += -lpythia8
LIBS += -llhapdfdummy

$(PROGNAME):  $(OBJECTS) 
      g++ -o $@ $(OBJECTS) $(LDFLAGS) -L$(PY8LIBDIRARCH) $(LIBS)

%.o : %.cc $(INCLUDES) 
   g++  ${CFLAGS} -c -I$(ROOTINCDIR) -I$(PY8INCDIR) -g -o $@ $<

test:
   @echo $(PY8LIBDIRARCH)

clean:   
   -rm -f ${PROGNAME} ${OBJECTS}

The example workbook available on the main site is very good at introducing the syntax that Pythia uses. The standard procedure is: set up the collision -> decide what parton modelling is required (i.e. hadronisation? multiple interactions?) -> run the event loop -> include your analysis in the event loop -> done.

You are able to access every single parton which is created in the hard interaction and also in the soft hadronisation afterwards using something like this (for HZ->bbvv):

#include "Pythia.h"  // Pythia 8 header; the include path is set via PY8INCDIR in the Makefile above

using namespace Pythia8;

int main() {
    
    //Set up generation
    
    Pythia pythia;
    pythia.readString("HiggsSM:ffbar2HZ = on"); //Higgsstrahlung production
    pythia.readString("Beams:eCM = 8000."); //eCM = 8TeV
    pythia.readString("25:m0 = 125"); //Higgs mass 125
    pythia.readString("25:onMode = off"); //Turn off all Higgs decay modes
    pythia.readString("25:onIfAny = 5 -5"); //Allow H->bb
    pythia.readString("23:onMode = off"); //Turn off all Z decay modes
    pythia.readString("23:onIfAny = 12 14 16 -12 -14 -16"); //Allow Z->vv
    pythia.readString("PartonLevel:MI = off ! no multiple interactions");
    pythia.readString("PartonLevel:ISR = off ! no initial-state radiation");
    pythia.readString("PartonLevel:FSR = off ! no final-state radiation");
    pythia.readString("HadronLevel:Hadronize = off");
    
    //Initialise any TFiles, TTrees, TH1s, vectors before event loop
    pythia.init();

    //Event loop
    for(int iEvent = 0; iEvent < 30000; ++iEvent){
    pythia.next();
        for(int i = 0; i < pythia.event.size(); ++i){
            //Then these are the annihilated hardest incoming partons
            if(pythia.event[i].status() == -21){ 
                
            }

        }
    }
return 0;
}

The status codes allow you to find partons which are final state or intermediate state partons. In addition, there is another method called pythia.process[i] which apparently only accesses the hard partons whereas .event[i] accesses all of them. Note though that if you turn off too many effects so you only have the hard interaction, .process[i] stops functioning (from what I have heard) so you can only access through .event[i].

The status codes (and more information about the particle classes and extra things) are available on the html document. This is installed locally or can be accessed online at the Lund hosted version here.

Revision 1 (14 Jan 2012) - IanConnelly

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="IanConnelly"

Tutorials and tips that may come in useful

This is still a massive work in progress. At some point I will copy out some of the notes I have made which have been useful and put them here.

My short tutorial on configuring ROOT on Windows.

 