Spatial ecology in collaboration with the University of Basilicata - DiCEM (Dipartimento delle culture europee e del mediterraneo).
Spatio-Temporal Data Analyses Using Free and Open Source Software
Matera, Italy, 6th-10th June 2016
The open source spatio-temporal data analysis and processing summer school is an immersive five-day experience that opens new horizons on the vast potential of the Linux environment and the command-line approach to data processing. We will guide newcomers who have never used a command-line terminal to a stage at which they can understand and use very advanced open source data processing routines. Our focus is on fostering self-learning, allowing participants to keep progressing and updating their skills in a continuously evolving technological environment.
Dr. Francesco Lovergine (CNR Bari, Italy)
Dr. Salvatore Manfreda (University of Basilicata Matera, Italy)
Dr. Raffaele Albano (University of Basilicata Matera, Italy)
Dr. Alba Mininni (University of Basilicata Matera, Italy)
The summer school is aimed at students at the master's or doctoral level, as well as researchers and professionals with a common interest in spatio-temporal data analysis and modelling; undergraduate students are also welcome. Participants should have basic computer skills and a strong desire to learn command-line tools for data processing. We expect students to have a particular interest in geographical data analysis; previous experience with Geographic Information Systems would be helpful. Students need to bring their own laptops with a minimum of 4 GB RAM and 30 GB of free disk space.
Registration is on a first come, first served basis and will close when 25 participants have signed up, so we encourage participants to register as soon as possible. A waiting list will be established if the limit is exceeded.
The summer school provides students with the opportunity to develop the key skills required for advanced spatial data processing. Throughout the training, students will focus on developing independent learning skills, which are fundamental for continuously mastering advanced data processing as ever more complex data become available and technology keeps evolving. Within the course, many different, complementary and sometimes overlapping tools will be presented to give an overview of the existing open source software for spatial data processing. We will discuss their strengths, weaknesses and suitability for different data processing objectives (e.g. modelling, data filtering, querying, GIS analysis, graphics or reporting) and data types. In particular, we will guide students in practising different types of software and tools, with the objective of helping them overcome the steep learning curve generally experienced when first analysing data in a command-line programming environment. Broadly, our training focuses on helping students develop the independent learning skills needed to find online help, devise solutions and strategies to fix bugs, and make independent progress on complex data processing problems.
The Academic Programme is divided into the following areas of study and interactions:
Lectures (15 min to 1 h each): Students will take part in a series of lectures introducing basic functions of the tools, theoretical aspects and the background information needed for a better understanding of the deeper concepts subsequently applied in data processing.
Hands-on tutorials: Students will be guided through hands-on sessions in which trainers perform data analyses on real case-study datasets while the students follow the example procedures on their own laptops. During tutorial sessions students are supported by two trainers: one gives the demonstration while the other supervises the students' work and provides individual guidance on coding.
Hands-on exercises: In addition to tutorials and lectures, students are encouraged to undertake independent study during the exercise sessions. Specific tasks will be set, allowing students to reinforce the newly learned data processing skills presented in lectures and practised during the tutorial sessions. These exercise sessions equip students with the confidence and skills to become independent learners and to engage effectively with the demands of advanced spatial data processing.
Depending on the number of participants and their previous programming knowledge, more or fewer topics can be addressed according to the students' needs. The exercises and examples will be cross-disciplinary: forestry, landscape planning, predictive modelling and species distribution, mapping, nature conservation, computational social science and other spatially related fields of study. Nevertheless, these case studies are template procedures and can be applied to any thematic application or discipline.
Round-table discussions: these sessions mainly focus on exchanging experiences, needs and points of view. We aim to clarify specific students' needs and challenges, and to focus on how to help and how to find solutions while problem solving.
Our summer school will enable students to further develop and enhance their spatio-temporal data processing skills. Most importantly, it will allow them to start using a fully functional open source operating system and software professionally. With continuous practice during the week, students will become increasingly familiar with the command line and will focus on developing their specific areas of interest.
Summer school certification: At the end of the summer school, attendees will receive a certificate upon successful completion of the course, although it is up to each participant's university to recognize it as official course credit.
Time table: (7h teaching/day)
Working on students' needs
Session 1. Getting started: Knowing each other and LINUX OS (Amatulli, Casalegno)
This session introduces the overall course programme and the Linux operating system. We also learn how to install and run the operating system in a virtual machine.
Session 2. Jump start LINUX Bash programming (Amatulli)
During this session we explore and practise the basics of the Bash terminal command line. The acquired skills will be used in all the following sessions.
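As a small taste of what this session covers, a minimal Bash sketch (the file name and its contents are purely illustrative):

```shell
# Create a small text file, then inspect it with a pipeline of core Unix tools.
printf "matera\nbari\nmatera\n" > towns.txt   # illustrative data
wc -l towns.txt                               # count its lines
sort towns.txt | uniq -c                      # count occurrences of each town
```

Chaining simple commands with pipes, as in the last line, is the working style practised throughout the week.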
Session 3. Discovering the power of AWK programming language (Amatulli).
This session is fundamental for data filtering and preparation, bulk data download, text-file manipulation, descriptive statistics and basic mathematical operations on large files. Students will access, query, understand and clean up data, performing data filtering from the Bash command line. We use AWK, an extremely versatile and powerful programming language for working on text files, performing data extraction and reporting, or reducing data before importing them into R or other software.
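A one-liner of the kind covered in this session (the file name and columns are illustrative) computes a conditional column mean:

```shell
# Mean of column 2 for rows whose first column is "a".
printf "a 10\nb 4\na 20\n" > data.txt         # illustrative data
awk '$1 == "a" { sum += $2; n++ } END { print sum / n }' data.txt   # prints 15
```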
Session 4. Getting started with GNUPLOT (Amatulli).
This session introduces the command-line-driven graphing utility Gnuplot. Although it offers very sophisticated graphical options for final layouts, Gnuplot is above all a very powerful tool for rapid and effective preliminary data visualization. Because it integrates smoothly with the bash/AWK workflow, we can combine data filtering, random data extraction and many other operations with quick visualisation of data sets for preliminary analyses.
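A typical bash-to-Gnuplot round trip might look like this sketch (the `dumb` terminal gives a quick ASCII preview; the plotting step is skipped when Gnuplot is not installed):

```shell
# Generate x, x^2 pairs with AWK, then preview them with Gnuplot if available.
awk 'BEGIN { for (i = 1; i <= 10; i++) print i, i * i }' > curve.dat
if command -v gnuplot >/dev/null 2>&1; then
  gnuplot -e "set terminal dumb; plot 'curve.dat' with lines"
fi
```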
Session 5. Exploring and understanding geographical data: introduction to GDAL/OGR libraries and Pktools (Amatulli).
This session introduces geospatial data manipulation on the command line using the GDAL and OGR libraries.
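As a sketch of what these tools look like in practice (the file names are hypothetical, and the commands only run if GDAL is installed and the files exist):

```shell
# Inspect raster and vector metadata from the command line.
if command -v gdalinfo >/dev/null 2>&1 && [ -f elevation.tif ]; then
  gdalinfo elevation.tif     # raster size, projection, band information
fi
if command -v ogrinfo >/dev/null 2>&1 && [ -f rivers.shp ]; then
  ogrinfo -so rivers.shp     # summary of the vector layers
fi
```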
Starting from day 3, we offer two parallel sessions that require different levels of programming experience. You can select one based on your programming background or thematic focus.
This session will focus on DEM-based approaches in hydrological applications. A DEM contains simple information, an array of elevation data, that can represent an extraordinary source of knowledge in several fields: hydrology, ecology, agronomy and engineering. The session will introduce the use of DEMs to predict surface water routing, estimate expected solar radiation, describe the structure of a river network, characterize basin morphology and connectivity, and identify flood-prone areas.
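A flavour of this workflow using the standard `gdaldem` utility (the DEM file name is hypothetical; the block only runs if GDAL is installed and the file exists):

```shell
# Derive basic terrain layers from a DEM.
if command -v gdaldem >/dev/null 2>&1 && [ -f dem.tif ]; then
  gdaldem slope     dem.tif slope.tif   # slope in degrees
  gdaldem aspect    dem.tif aspect.tif  # aspect, degrees clockwise from north
  gdaldem hillshade dem.tif shade.tif   # shaded relief for quick inspection
fi
```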
This session will focus on the use of QGIS and GRASS for geospatial analysis and map visualization in the field of hydrology and flood risk. We do not expect any previous knowledge of QGIS or GRASS, but basic knowledge of geospatial data and reference systems will be helpful. The session will present the use of QGIS and its external algorithms for the hydrological characterization of a watershed and the assessment of the direct costs of a flood event.
Parallel session 6a. Getting started with the R environment for statistical computing, modelling and graphics (Mininni).
This session introduces the use of R for statistical computing. No previous knowledge of R is expected. Rather than concentrating on ready-made scripting routines, we will focus on the different R data structures and on how to open, query and plot data easily. Students already familiar with R may skip this session and either receive supervision on their own data processing needs or follow an advanced R hands-on exercise.
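As a first glimpse, and assuming R is installed, a tiny data-frame query can even be launched from the shell via `Rscript`:

```shell
# Build a small data frame in R and filter its rows (skipped if R is absent).
if command -v Rscript >/dev/null 2>&1; then
  Rscript -e 'd <- data.frame(x = 1:5, y = (1:5)^2); print(d[d$x > 3, ])'
fi
```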
Parallel session 6b. Spatial and temporal data analyses in R. (Casalegno)
This session requires previous knowledge of R.
Parallel session 7a. Command line GIS - Getting started with GRASS and Qgis (Casalegno).
This session will introduce the use of the GRASS geographic information system through its command-line interface for spatial data processing, and the use of QGIS as a map visualization and overlay tool. We do not expect any previous knowledge of GRASS or QGIS, but we will use the basic Bash and GDAL command-line skills acquired in the previous days. This session is of interest to people who deal with general GIS analysis, hydrological modelling, DEMs, vector/raster data integration, etc., and is fundamental for the following day's topics.
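A sketch of non-interactive GRASS use from the command line (requires GRASS 7.2 or later; the location, mapset and map names are hypothetical, and the block runs only if they exist):

```shell
# Compute slope and aspect from an elevation raster inside an existing mapset.
if command -v grass >/dev/null 2>&1 && [ -d "$HOME/grassdata/demo/PERMANENT" ]; then
  grass "$HOME/grassdata/demo/PERMANENT" --exec \
    r.slope.aspect elevation=dem slope=slope aspect=aspect
fi
```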
Parallel session 7b. Batch processing data using GRASS (Amatulli).
This parallel session assumes a good understanding and reasonable command of the Bash command line and full familiarity with GRASS locations and mapsets. It is mainly intended for people who are already moving towards large-scale data processing and wish to explore batch job processing and multi-core data manipulation for geo-computation. (This session can be dropped if no participant has used GRASS.)
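The core batching idea can be sketched with plain Bash job control; here a dummy `process_tile` function stands in for a real GRASS or GDAL command:

```shell
# Run one job per tile in the background, then wait for all of them.
: > log.txt                           # start with an empty log
process_tile() { echo "tile $1 done" >> log.txt; }
for tile in 01 02 03 04; do
  process_tile "$tile" &              # launch in the background (one core each)
done
wait                                  # block until every job has finished
sort log.txt                          # collect the results in order
```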
Session 10. Advanced hydrological modelling with GRASS (Amatulli) (all students)
This session demonstrates the power of GRASS for hydrological modelling using a multi-core add-on. A brief talk is followed by a small case study.
Parallel session 11a. Species distribution modelling (Casalegno) (2h)
This session is proposed as complementary to session 9a and will focus on one or two of the proposed case studies.
Parallel session 11b. Remote sensing applications (Amatulli) (2h)
Session 11. Geospatial data processing: find the right tool for the job (all students, Amatulli, Casalegno, Manfreda and Albano)
This session summarizes and reviews the tools presented so far, explores further open source libraries, and aims to clarify any issues or questions that have arisen.
Session 12. Jump start with Python language (Lovergine) (All students)
Session 13. Geospatial Python (Lovergine) (All students)
Session 14. Spatial DataBase Management System - Geo-database: vector data manipulation in SpatiaLite
(Depending on available time, this may become a parallel session or be dropped; to be confirmed.)
CONCLUSION – Focus on the students' projects needs and how to get going with the use of free and open source tools for advanced spatial data processing.
Round table, question and answer session on specific data processing needs and on how to follow up the summer school using open source software as a daily working toolbox.
Social dinner: Enjoy southern Italian local products in a wonderful atmosphere in the ancient “sassi”.
We will conclude our full immersion week with a dinner in a local restaurant, so be ready to process excellent food & wine rather than data.