datalad-crawl

SYNOPSIS

datalad crawl [--version] [-h] [-l LEVEL] [-p {condor}] [--is-pipeline] [-t]
[-r] [-C CHDIR]
[file]

DESCRIPTION

Crawl online resource to create or update a dataset.

Examples:

$ datalad crawl # within a dataset having .datalad/crawl/crawl.cfg
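
Further hypothetical invocations (sketches only; the configuration path shown is a placeholder):

$ datalad crawl .datalad/crawl/crawl.cfg # crawl using an explicitly given configuration file
$ datalad crawl -r # also crawl subdatasets (serially)
$ datalad crawl -C /path/to/dataset # change into the dataset directory before crawling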

OPTIONS

file configuration (or pipeline if --is-pipeline) file
defining crawling, or a directory of a dataset on
which to perform crawling using its standard crawling
specification. Constraints: value must be a string
[Default: None]

--version show the program’s version and license information
-h, --help, --help-np
show this help message. --help-np forcefully disables
the use of a pager for displaying the help message
-l LEVEL, --log-level LEVEL
set logging verbosity level. Choose among critical,
error, warning, info, debug. An integer below 10 can
also be given for even more detailed debug output
-p {condor}, --pbs-runner {condor}
execute the command by scheduling it via the available
PBS; the config file is consulted for settings
--is-pipeline
flag indicating that the provided file is a Python
script defining pipeline(); see the example below.
[Default: False]
-t, --is-template
flag indicating that the provided value is the name of
the template to use; see the example below.
[Default: False]
-r, --recursive
flag to crawl subdatasets as well (for now serially).
[Default: False]
-C CHDIR, --chdir CHDIR
directory to chdir to for crawling. Constraints: value
must be a string [Default: None]
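
For example, a pipeline script or a named template could be supplied as follows (both names are hypothetical placeholders):

$ datalad crawl --is-pipeline my_pipeline.py # the file must define pipeline()
$ datalad crawl -t mytemplate # interpret the value as a template name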

AUTHORS

datalad is developed by The DataLad Team and Contributors <team AT datalad DOT org>.
