Troubleshooting Guide

1. Introduction

This document walks you through the steps needed to troubleshoot your glideinWMS installation and get your jobs running. It does not cover every possible error; instead, it guides you through the steps required to find potential problems, shows where the log files are, and explains where to look for debugging information. This manual assumes that you have installed and started the required glideinWMS services; if you have not, please refer to the glideinWMS installation manual. Wherever applicable, this document makes references to the installation manual.

1.1 Prerequisites

We will refer to the glideinWMS diagram above while describing the troubleshooting steps. This guide walks through that diagram, following the life cycle of a job and showing how the different glideinWMS services interact with each other until the job runs to completion.

  1. A job is submitted to the user pool
  2. The frontend maps the job to an entry point
  3. The factory submits a glidein to the entry point
  4. The glidein starts on the remote machine
  5. The new resource is registered with the user pool
  6. The user pool assigns the job to the resource
  7. The job runs on the resource
  8. The glidein completes

In order to complete this guide, you will need the above components of glideinWMS installed, as well as the software products below. Please verify that all products are compatible with your environment:

Software Products    Version          Comments
glideinWMS           v2.x             Should work for other versions with minor differences.
Condor               Latest version
VDT                  Latest version

You will also need a job to run. Create a shell script for a simple hello world job:

#!/usr/bin/env bash

echo hello world

Then, create a job file for it (e.g. myjob.job):
requirements = (Memory >= 1 && OpSys == "LINUX" ) && (Arch == "INTEL" || Arch == "X86_64")
universe = vanilla
executable = <DIRECTORY>/
arguments = 70
notification = Error
input =
output = <DIRECTORY>/test_job.output.$(Process)
error = <DIRECTORY>/test_job.error.$(Process)
log = <DIRECTORY>/test_job.log.$(Process)
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
queue 1
Replace <DIRECTORY> with the directory in which you created the shell script.
The following command submits the job to the user pool collector:
condor_submit myjob.job

1.2 Definitions

Variables used throughout this guide are explained below. The values of these variables may change based on your installation.

Variable Comments Default
GLIDEINWMS_HOME Directory where glideinWMS binaries will be installed /home/gfactory/glideinWMS
GLIDECONDOR_HOME Directory where condor is installed /home/gfactory/glidecondor
GLIDEINWMS_INSTALLABLE_DIR Directory where you have extracted the glideinWMS installable /home/gfactory/glideinWMS
GLIDEINWMS_POOLCOLLECTOR_HOME Directory where glideinWMS pool collector is installed /home/gfactory/glidecondor
GLIDEINWMS_USERSCHEDD_HOME Directory where glideinWMS user schedd is installed
GLIDEINWMS_WMSCOLLECTOR_HOME Directory where glideinWMS collector is installed /home/gfactory/glidecondor
GLIDEINWMS_GFACTORY_HOME Directory where the glideinWMS GFactory is installed /home/gfactory/glideinsubmit/glidein_v1_0
GLIDEINWMS_VOFRONTEND_HOME Directory where the glideinWMS VO front end is installed /home/frontend/frontstage
GLIDEINWMS_GLIDEINSUBMITDIR Directory containing configuration for gfactory and glideins

2. General Issues

This section contains tips and issues relevant to all phases of a job's execution.

2.1 Authentication Issues

Many glideinWMS issues are caused by authentication problems. Make sure that your proxy and certificate are valid and correct. Each service needs a proxy/certificate that is owned by the user running it.
Also, make sure that this certificate is authorized to run a job by running a command such as (all on one line):

X509_USER_CERT=/tmp/x509up_u<UID> globus-job-run -a -r <gatekeeper in factory config>
Note that /tmp/x509up_u<UID> is the typical location for grid proxy certificates; use the proper location if your proxy is stored elsewhere.
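As a quick sanity check, the following sketch (assuming the conventional /tmp/x509up_u<UID> location mentioned above) prints the expected proxy path for the current user and reports whether a file is actually there:

```shell
#!/usr/bin/env bash
# The per-user proxy is conventionally /tmp/x509up_u<UID>;
# print that path and check whether a proxy file exists there.
PROXY=/tmp/x509up_u$(id -u)
if [ -e "$PROXY" ]; then
    ls -l "$PROXY"
else
    echo "no proxy found at $PROXY"
fi
```

If the file exists but authentication still fails, check its ownership and expiration (e.g. with voms-proxy-info) before digging further.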

2.2 Wrong environment sourced

Always source the correct environment setup script before running any commands. Many problems are caused by using the wrong path/environment (for instance, sourcing the user pool setup and then running WMS collector commands). Run "which condor_q" to check whether your path is correct.
Note: If you are using VDT and source its setup script (e.g. for voms-proxy-init), this may change your path/environment, and you may need to source the condor setup script again.
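The "which condor_q" check can be scripted; this is a sketch only, and the default path below is just the example value from the table in section 1.2:

```shell
#!/usr/bin/env bash
# Warn when the condor_q on PATH does not come from the install tree
# you intend to use (override EXPECTED_HOME for your layout).
EXPECTED_HOME=${EXPECTED_HOME:-/home/gfactory/glidecondor}
found=$(command -v condor_q || echo not-found)
case "$found" in
    "$EXPECTED_HOME"/*) echo "OK: condor_q is $found" ;;
    *) echo "WARNING: condor_q resolves to '$found', expected under $EXPECTED_HOME" ;;
esac
```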

3. Problems submitting your job

Symptoms: Error submitting user job
Useful files: GLIDEINWMS_USERSCHEDD_HOME/condor_local/logs/SchedLog
Debugging Steps:

If you encounter errors submitting your job using condor_submit, the error messages printed on the screen will help identify potential problems. Occasionally, you can find additional information in the condor schedd logs.

Always make sure that you have sourced the environment setup script under $GLIDEINWMS_USERSCHEDD_HOME and that your path and environment are correct. Depending on the actual condor scheduler, you can find the scheduler log file, SchedLog, in one of the subdirectories of the directory listed by "condor_config_val local_dir".

If you are installing all services on one machine (not recommended, but sometimes useful for testing), make sure that the user collector and WMS collector are on two different ports (such as 9618 and 8618). You can run "ps -ef" to see whether the processes have started (there should be multiple condor_master, condor_schedd and condor_procd processes on each machine). Make sure they are running as the proper users (the user schedd should probably run as root; the WMS collector should run as root if you want privilege separation).
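A small sketch of the "ps -ef" check (the daemon names are standard Condor process names; a count of zero means the service was never started or died at startup):

```shell
#!/usr/bin/env bash
# Count the running condor daemons by exact process name.
for d in condor_master condor_schedd condor_collector condor_procd; do
    n=$(pgrep -c -x "$d" || true)
    echo "$d: ${n:-0} running"
done
```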

Also refer to the Collector installation instructions for verification steps.

4. User Jobs Stay Idle

Symptoms:User job stays idle and there are no glideins submitted that correspond to your job.

This step involves the interaction of the VO frontend and the WMS factory, so there are two separate places to look to see why no glideins are being created.

4.1 Frontend unable to map your job to any entry point

Symptoms: User job stays idle and there is no information in the frontend logs about glideins required to run your job.
Debugging Steps:

Check if the VO frontend is running. If it is not, start it.

The VO frontend processes periodically query the user schedd for user jobs. Once you have submitted the job, the VO frontend should notice it during its next query cycle. Once the frontend identifies potential entry points that can run your job, it reflects this information in the glideclient classad in the WMS collector for the corresponding entry point. You can find this information by running "condor_status -any -pool <wms collector fqdn>".

Check for error messages in the logs located in GLIDEINWMS_VOFRONTEND_HOME/log. Assuming that you have named the frontend main group "main", check the log files in GLIDEINWMS_VOFRONTEND_HOME/group_main/log.

[2009-12-07T15:16:25-05:00 12398] For Idle 19 (effective 19 old 19) Running 0 (max 10000)
[2009-12-07T15:16:25-05:00 12398] Glideins for Total 0 Idle 0 Running 0
[2009-12-07T15:16:25-05:00 12398] Advertize Request idle 11 max_run 22
You should notice something like the above in the logs corresponding to your job. If the frontend does not identify any entry that can run your job, then either the desired entry is not configured in the glidein factory or the requirements you have expressed in your job are not correct.
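As an illustration, the number of idle glideins the frontend requested can be pulled out of the "Advertize Request" line with a one-line awk filter (the log lines below are copied from the sample above; in practice you would read the real group log file):

```shell
#!/usr/bin/env bash
# Extract the requested-idle count (field 6, after "idle")
# from the Advertize Request summary line.
requested=$(awk '/Advertize Request/ {print $6}' <<'EOF'
[2009-12-07T15:16:25-05:00 12398] For Idle 19 (effective 19 old 19) Running 0 (max 10000)
[2009-12-07T15:16:25-05:00 12398] Glideins for Total 0 Idle 0 Running 0
[2009-12-07T15:16:25-05:00 12398] Advertize Request idle 11 max_run 22
EOF
)
echo "frontend requested $requested idle glideins"
```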

Also, check the security classad to make sure the proxy/cert for the frontend is correct. It should be chmod 600 and owned by the frontend user.
If you are using voms, query the proxy information to verify:

X509_USER_CERT=<vofrontend_proxy_location> voms-proxy-info
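The mode/ownership requirement can also be checked mechanically. A minimal sketch follows; the check_proxy helper is hypothetical, and the stat flags assume GNU stat on Linux:

```shell
#!/usr/bin/env bash
# Verify that a proxy file is mode 600 and owned by the invoking user.
check_proxy() {
    local p=$1 mode owner
    mode=$(stat -c '%a' "$p") || { echo "cannot stat $p"; return 1; }
    owner=$(stat -c '%U' "$p")
    if [ "$mode" = "600" ] && [ "$owner" = "$(id -un)" ]; then
        echo "proxy OK: $p"
    else
        echo "proxy BAD: $p (mode=$mode owner=$owner, want 600/$(id -un))"
    fi
}
```

For example, run check_proxy against the frontend proxy location to confirm the permissions before restarting the frontend.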

4.2 Factory does not submit glideins corresponding to your job

Symptoms:User job stays idle and there are no glideins submitted to the glidein queue that correspond to your job.
However, the VO frontend does detect the job and attempts to advertise to the factory
Useful Files: GLIDEINWMS_GFACTORY_HOME/<entry>/log
Debugging Steps:

Once the frontend identifies potential entry points that can run your job, it reflects this information in the glideclient classad in the WMS collector for the corresponding entry point. You can find this information by running "condor_status -any -pool <wms collector>". The glidein factory looks up the glideclient classad, queries the WMS collector to find the distribution of existing glideins in the glidein queues, and submits additional glideins as required. Once the factory has submitted the required glideins, you can see them by querying the glidein queue with "condor_q -g -pool <wms collector>".

If you do not see any glideins corresponding to your job, check for error messages in the factory entry logs listed under Useful Files above.

5. Glideins Stay Idle

Symptoms: Glideins stay idle and do not start running.
Useful Files:
Debugging Steps:

Once the glideins are submitted, they should start running on the remote sites. The time taken for them to enter the running state can vary based on the site, how busy the site is, and the priority your glideins have at the site.

If the glideins stay idle for quite some time, check their status in the glidein queue ("condor_q -g -pool <wms collector>") to see whether the site has started scheduling them.

6. Resource is not registered in user collector.

Symptoms: Glideins start running but "condor_status -pool <user collector>" does not show any new resource.
Useful Files:
GLIDEINWMS_GFACTORY_HOME/<entry>/log/<glidein jobid>.out
GLIDEINWMS_GFACTORY_HOME/<entry>/log/<glidein jobid>.err
Debugging Steps:

Once the glidein starts running, the glidein startup script downloads condor files and other relevant files from the factory's web area. It then performs the required checks, generates condor configuration files and starts the condor_startd daemon. This condor_startd reports to the user collector as a resource on which the user job is supposed to run. If the glidein job exits and you never see a resource in the user collector, the problem is generally related to bootstrapping the processes on the worker nodes.

If the glidein job has completed, you can look for the output and error logs for the glidein job in the directory GLIDEINWMS_GFACTORY_HOME/<entry>/log. The files are named job.<glidein jobid>.out and job.<glidein jobid>.err. The most common cause of failures is a mismatch between the architecture of the condor binaries used and that of the worker nodes. Starting in glideinWMS 2.2, you can configure entry points to use different condor binaries. In case the condor daemons are crashing, you can browse their logs using the tools available in /glideinWMS/factory/tools.

One possible error that can appear at this point is a problem due to the version of GLIBC:

Starting monitoring condor at Fri Jun 18 10:11:27 CDT 2010 (1276873887)
/usr/local/osg-ce/OSG.DIRS/wn_tmp/glide_rP2945/main/condor/sbin/condor_master: /lib/tls/i686/nosegneg/ version `GLIBC_2.4' not found (required by /usr/local/osg-ce/OSG.DIRS/wn_tmp/glide_rP2945/main/condor/sbin/condor_master)
In this case, the version of glibc on the worker node is older than the glibc that condor requires. For instance, this can happen if the factory is on SL5 but the worker node is SL4. Condor provides special binaries for glibc 2.3, so you can re-install/re-compile using those binaries. Advanced users can configure multiple tarballs for various architectures in the factory config.

7. User Job does not start on the registered resource

Symptoms: Your job does not start running on the resource created by a running glidein job.
Useful Files:
Debugging Steps:

On some versions of Condor, there is a problem with swap accounting. Make sure that GLIDEINWMS_USERSCHEDD_HOME/etc/condor_config.local contains RESERVED_SWAP=0, and verify with:

condor_config_val reserved_swap
The above should return 0.

Once the glidein starts running on the worker node and successfully starts the required condor daemons, condor_startd registers as a resource in the user pool collector. If your job does not start running on the resource, check that the requirements expressed by the user job can be satisfied by the resource. If not, determine which constraints are not satisfied and tweak the requirements.
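For instance, if the analysis shows that machines are rejected by the job's own requirements, relaxing the requirements expression in the submit file (shown here for the sample job from section 1.1) is often enough; which clause to drop depends on what the resources actually advertise:

```
requirements = (Memory >= 1 && OpSys == "LINUX")
```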

You can get further information on this by running:

condor_q -g -analyze
2.000: Run analysis summary. Of 2 machines,
1 are rejected by your job's requirements
1 reject your job because of their own requirements
0 match but are serving users with a better priority in the pool
0 match but reject the job for unknown reasons
0 match but will not currently preempt their existing job
0 are available to run your job
There will be one "machine" that acts as the monitor and rejects the job due to its own requirements (it is the OWNER). If one machine is rejected by your job's requirements, check GLIDEINWMS_USERSCHEDD_HOME/condor_local/log/ShadowLog for errors.
You can also run the following to get more information about the classads:
condor_q -l

If the job is held, make sure the user schedd is running as root (if you are getting permission denied errors). Run "condor_q -analyze" to see what is holding the job.

8. Miscellaneous Issues

This section contains a collection of issues experienced only in specific environments and configurations.

glideinWMS support: