Changes from all commits
92 commits
5c77c9e
Update README.md
lexcomber Aug 1, 2024
eef5f8c
update vignettes v0.0.1.3
lexcomber Aug 5, 2024
310ec8c
minir checks cols4all update
lexcomber Oct 10, 2024
7ea19f3
monor corrections / typos for v0.0.1.3
lexcomber Jan 3, 2025
307f782
minor typo corrections to v0.0.1.3
lexcomber Jan 3, 2025
d99d746
minor typo corrections to v0.0.1.3
lexcomber Jan 3, 2025
7ffe0e3
update to v1.0.0
lexcomber Apr 7, 2025
20a03fc
Update README.md
lexcomber Apr 7, 2025
7a29aa2
Update README.md
lexcomber Apr 7, 2025
9ac3968
Update README.md
lexcomber Apr 7, 2025
409433a
Update to v1.0.0
lexcomber Apr 7, 2025
f077618
some minor typos
lexcomber Apr 7, 2025
3121310
update to readme
lexcomber Apr 7, 2025
4e0af7c
update from proj
lexcomber Apr 7, 2025
26265a1
update from proj
lexcomber Apr 7, 2025
4a733ad
added background links
lexcomber Apr 8, 2025
4abe40a
update background links
lexcomber Apr 8, 2025
b8ecaf8
update background links
lexcomber Apr 8, 2025
a034ae8
update background links
lexcomber Apr 8, 2025
b5f6ada
update to vignette
lexcomber Apr 10, 2025
4cae8d9
big revision
lexcomber Apr 14, 2025
27159aa
stgam v1.0.0 creation
lexcomber Apr 30, 2025
33f3b2f
Add files via upload
lexcomber Apr 30, 2025
a5f44da
Add files via upload
lexcomber Apr 30, 2025
c652ddb
Delete vignettes/space-time-gam.Rmd
lexcomber Apr 30, 2025
e44a29a
Delete vignettes/space-time-gam.Rmd
lexcomber Apr 30, 2025
a2aa001
update to urls
lexcomber Apr 30, 2025
5216766
update doi's
lexcomber Apr 30, 2025
0defd7a
Add files via upload
lexcomber Apr 30, 2025
9f81df4
Delete vignettes/space-time-gam-intro_rev.Rmd
lexcomber Apr 30, 2025
0397dfa
Add files via upload
lexcomber Apr 30, 2025
42e3b8e
updates for build
lexcomber Apr 30, 2025
4c357c4
TVC in evaluate_models()
lexcomber May 1, 2025
032b19a
update urls
lexcomber May 1, 2025
1d709d3
update urls
lexcomber May 1, 2025
9c1e36d
Merge branch 'dev'
lexcomber May 1, 2025
f873e1e
edit to readme
lexcomber May 1, 2025
c1bd921
vignette tidy!
lexcomber May 1, 2025
976727f
edits to get through CRAN checks
lexcomber May 2, 2025
ee977b5
small vignette edit
lexcomber May 2, 2025
b09e10a
edits for CRAN
lexcomber May 3, 2025
95dbd16
typos and syntax
lexcomber May 8, 2025
13edb71
minor edit to vignette
lexcomber May 8, 2025
39d001c
minor update to vignette
lexcomber May 8, 2025
96148fe
minor edits
lexcomber May 8, 2025
51797d5
Imports edit
lexcomber May 8, 2025
af9946e
precompiled data for vignette CRAN checks
lexcomber May 12, 2025
5e04acb
vignette update (precompiled data)
lexcomber May 12, 2025
7ed7bdc
update to data
lexcomber Jun 4, 2025
ab0dca0
vignette tidy
lexcomber Jun 5, 2025
982e810
v1.0.1
lexcomber Jun 7, 2025
27b5007
v1.0.1
lexcomber Jun 7, 2025
3c7c0db
updates to v1.0.2
lexcomber Jun 12, 2025
0007f09
updates to v1.0.2
lexcomber Jun 12, 2025
9c82ad7
Typo - Update space-time-gam-intro_rev.Rmd
nickbearman Jul 22, 2025
707d223
Merge pull request #6 from nickbearman/patch-1
lexcomber Jul 23, 2025
ad28035
Update README.md
lexcomber Jul 23, 2025
0edf61e
updates to v 1.0.1
lexcomber Aug 7, 2025
5533ccf
stgam 1.2.0
lexcomber Jan 26, 2026
2ca994d
updated description
lexcomber Jan 26, 2026
845d10a
updates to description
lexcomber Jan 26, 2026
285a872
update to Description
lexcomber Jan 26, 2026
1f63364
Update README.md
lexcomber Jan 26, 2026
c376ec3
update to 1.2.0
lexcomber Jan 26, 2026
afb9e3e
git fix
lexcomber Jan 26, 2026
5eb544a
stagm 1.2.0 vignettes
lexcomber Jan 26, 2026
8953d90
update 1.2.0
lexcomber Jan 26, 2026
f82733e
update 1.2.0
lexcomber Jan 26, 2026
108cdd9
Update README.md
lexcomber Jan 26, 2026
c496308
update 1.2.0
lexcomber Jan 26, 2026
2e96c57
update 1.2.0
lexcomber Jan 26, 2026
e310082
update
lexcomber Jan 26, 2026
e62794f
update 1.2.0
lexcomber Jan 27, 2026
0fce44f
update 1.2.0
lexcomber Jan 27, 2026
bee227d
update Description 1.2.0
lexcomber Jan 27, 2026
232f269
update description
lexcomber Jan 27, 2026
53c7aac
update description
lexcomber Jan 28, 2026
dce71e0
update to 1.2.0
lexcomber Jan 28, 2026
6422daa
Update R-CMD-check workflow to r-lib actions v2
lexcomber Jan 28, 2026
f5f57fd
yaml edit
lexcomber Jan 28, 2026
edb0fe4
vignette tidy for 1.2.0 to CRAN
lexcomber Jan 29, 2026
40a0e46
final vignette refinement 1.2.0
lexcomber Jan 29, 2026
44953a8
refine evalaute_models()
lexcomber Feb 5, 2026
e0a0f10
update to gam_model_rank
lexcomber Feb 27, 2026
6abc4c9
update to evaluate_models()
lexcomber Mar 9, 2026
8e70505
update to evaluate_models()
lexcomber Mar 9, 2026
a7c7534
typo
lexcomber Mar 23, 2026
2f9b1fa
", (" to " ("
cadam00 Mar 31, 2026
221790d
family to model_family
cadam00 Apr 1, 2026
0d0ea93
stopCluster inside on.exit for safety
cadam00 Apr 1, 2026
526d9c5
Merge pull request #8 from cadam00/master
lexcomber Apr 13, 2026
ffbc74f
small typo
lexcomber Apr 13, 2026
8 changes: 4 additions & 4 deletions .github/workflows/R-CMD-check.yaml
Original file line number Diff line number Diff line change
@@ -1,10 +1,10 @@
# Workflow derived from https://github.com/r-lib/actions/tree/v2/examples
# Need help debugging build failures? Start at https://github.com/r-lib/actions#where-to-find-help

on:
push:
branches: [main, master]
pull_request:
branches: [main, master]

name: R-CMD-check

@@ -22,9 +22,9 @@ jobs:
config:
- {os: macos-latest, r: 'release'}
- {os: windows-latest, r: 'release'}
- {os: ubuntu-latest, r: 'devel', http-user-agent: 'release'}
- {os: ubuntu-latest, r: 'release'}
- {os: ubuntu-latest, r: 'oldrel-1'}
- {os: ubuntu-latest, r: 'devel', http-user-agent: 'release'}
- {os: ubuntu-latest, r: 'release'}
- {os: ubuntu-latest, r: 'oldrel-1'}

env:
GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
37 changes: 20 additions & 17 deletions DESCRIPTION
@@ -1,45 +1,48 @@
Package: stgam
Title: Spatially and Temporally Varying Coefficient Models Using Generalized Additive Models
Version: 0.0.1.3
Version: 1.2.0
Authors@R: c(
person("Lex", "Comber", email = "a.comber@leeds.ac.uk", role = c("aut", "cre")),
person("Paul", "Harris", email = "paul.harris@rothamsted.ac.uk", role = c("ctb")),
person("Gonzalo", "Irisarri", email = "jirisarr@uwyo.edu", role = c("ctb")),
person("Chris", "Brunsdon", email = "christopher.brunsdon@mu.ie", role = c("ctb"))
)
Author: Lex Comber [aut, cre],
Paul Harris [ctb],
Gonzalo Irisarri [ctb],
Chris Brunsdon [ctb]
Maintainer: Lex Comber <a.comber@leeds.ac.uk>
Description: A framework for specifying spatially, temporally and spatially-and-temporally varying coefficient models using Generalized Additive Models with Gaussian Process smooths. The smooths are parameterised with location and / or time attributes. Importantly the framework supports the investigation of the presence and nature of any space-time dependencies in the data, allows the user to evaluate different model forms (specifications) and to pick the most probable model or to combine multiple varying coefficient models using Bayesian Model Averaging. For more details see: Brunsdon et al (2023) <doi:10.4230/LIPIcs.GIScience.2023.17>, Comber et al (2023) <doi:10.4230/LIPIcs.GIScience.2023.22> and Comber et al (2024) <doi:10.1080/13658816.2023.2270285>.
Description: A framework for undertaking space and time varying coefficient models (varying parameter models) using a Generalized Additive Model (GAM) with smooths approach. The framework suggests the need to investigate the presence and nature of any space-time dependencies in the data. It proposes a workflow that creates and refines an initial space-time GAM and includes tools to create and evaluate multiple model forms. The workflow sequence is to: i) Prepare the data by lengthening it to long format, with location and time variables for each observation. ii) Create all possible space and/or time models in which each predictor is specified in different ways in smooths. iii) Evaluate each model via its AIC value and pick the best one. iv) Create the final model. v) Calculate the varying coefficient estimates to quantify how the relationships between the target and predictor variables vary over space, time or space-time. vi) Create maps, time series plots etc. The number of knots used in each smooth can be specified directly or iteratively increased. This is illustrated with a climate point dataset of the dry rainforest in South America. This builds on work in Comber et al (2024) <doi:10.1080/13658816.2023.2270285> and Comber et al (2024) <doi:10.3390/ijgi13120459>.
License: MIT + file LICENSE
Encoding: UTF-8
Roxygen: list(markdown = TRUE)
RoxygenNote: 7.2.3
RoxygenNote: 7.3.2
Suggests:
cols4all,
knitr,
purrr,
knitr,
ggplot2,
cowplot,
rmarkdown,
sf,
testthat (>= 3.0.0),
tidyr
tidyr,
lubridate,
sf,
gratia,
kableExtra
Config/testthat/edition: 3
URL: https://github.com/lexcomber/stgam
BugReports: https://github.com/lexcomber/stgam/issues
Depends:
R (>= 2.10),
R (>= 4.1.0),
mgcv (>= 1.9-1),
glue
LazyData: true
Imports:
cowplot,
foreach,
doParallel,
parallel,
dplyr,
foreach,
ggplot2,
glue,
grDevices,
magrittr,
metR,
mgcv (>= 1.9-1),
parallel,
tidyselect
purrr,
stringr
VignetteBuilder: knitr
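The six-step workflow listed in the Description above can be sketched end to end. This is illustrative only: the `evaluate_models()` and `gam_model_rank()` calls are left as comments because their exact arguments are not shown in this diff (see the package help pages), while the `gam()` and `calculate_vcs()` calls follow the documented example in `R/calculate_vcs.R`:

```r
library(stgam)   # attaches mgcv via Depends
library(dplyr)
library(ggplot2)

# i) prepare: long-format data with location (X, Y) and time columns,
#    plus an addressable Intercept term
data("chaco")
input_data <- chaco |> mutate(Intercept = 1)

# ii)-iii) enumerate candidate space/time model forms and rank them by AIC
# (arguments omitted here; see ?evaluate_models and ?gam_model_rank)
# mods  <- evaluate_models(...)
# ranks <- gam_model_rank(mods)

# iv) fit the chosen form, here an SVC specification
gam.m <- gam(ndvi ~ 0 + s(X, Y, by = Intercept) +
               s(X, Y, by = tmax) + s(X, Y, by = pr), data = input_data)

# v) extract the spatially varying coefficient estimates
vcs <- calculate_vcs(input_data, gam.m, terms = c("Intercept", "tmax", "pr"))

# vi) map a coefficient surface
ggplot(vcs) + geom_point(aes(X, Y, colour = b_tmax)) + coord_equal()
```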
38 changes: 15 additions & 23 deletions NAMESPACE
@@ -1,45 +1,37 @@
# Generated by roxygen2: do not edit by hand

export(calculate_vcs)
export(do_bma)
export(effect_size)
export(evaluate_models)
export(gam_model_probs)
export(plot_1d_smooth)
export(plot_2d_smooth)
importFrom(cowplot,plot_grid)
export(gam_model_rank)
importFrom(doParallel,registerDoParallel)
importFrom(dplyr,across)
importFrom(dplyr,arrange)
importFrom(dplyr,mutate)
importFrom(dplyr,relocate)
importFrom(dplyr,rename)
importFrom(dplyr,select)
importFrom(dplyr,slice_head)
importFrom(dplyr,tibble)
importFrom(foreach,"%dopar%")
importFrom(foreach,foreach)
importFrom(ggplot2,aes)
importFrom(ggplot2,coord_sf)
importFrom(ggplot2,geom_contour)
importFrom(ggplot2,geom_contour_filled)
importFrom(ggplot2,geom_line)
importFrom(ggplot2,geom_ribbon)
importFrom(ggplot2,geom_sf)
importFrom(ggplot2,ggplot)
importFrom(ggplot2,theme_bw)
importFrom(ggplot2,xlab)
importFrom(ggplot2,ylab)
importFrom(ggplot2,ylim)
importFrom(glue,glue)
importFrom(grDevices,dev.off)
importFrom(grDevices,pdf)
importFrom(magrittr,"%>%")
importFrom(mgcv,gam)
importFrom(parallel,detectCores)
importFrom(mgcv,k.check)
importFrom(mgcv,predict.gam)
importFrom(mgcv,s)
importFrom(mgcv,te)
importFrom(parallel,makeCluster)
importFrom(parallel,stopCluster)
importFrom(stats,BIC)
importFrom(purrr,map2_chr)
importFrom(stats,as.formula)
importFrom(stats,family)
importFrom(stats,formula)
importFrom(stats,predict)
importFrom(tidyselect,all_of)
importFrom(stats,reformulate)
importFrom(stats,sd)
importFrom(stringr,str_detect)
importFrom(stringr,str_replace)
importFrom(stringr,str_split)
importFrom(utils,head)
importFrom(utils,installed.packages)
27 changes: 27 additions & 0 deletions NEWS.md
@@ -20,3 +20,30 @@

* expanded the output of `do_bma` to include averaged $\hat{y}$ and working residuals
* returns weighted varying coefficient estimates appended to input data

# stgam 1.0.0

* space-time GAMs are reformulated to include tensor product smooths for combined space-time terms
* model averaging is removed

# stgam 1.0.1

* London borough data (`lb`) corrected
* typos in vignette corrected

# stgam 1.0.2

* fix to vignette plot

# stgam 1.1.0

* updates to functions and vignette to use `te()` tensor product smooths for space-time smooths, replacing `t2()`
* inclusion of t-values in coefficient estimates

# stgam 1.2.0

* updates to main function (`evaluate_models()`) for user specification of `k` or to increase `k` automatically
* new function for quantifying the effect size of each model term (`effect_size()`)
* updates to `gam_model_rank` to evaluate models by AIC and to report `k` for each smooth
* new vignettes
* new case study dataset
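The `k` behaviour noted in the 1.2.0 entry can be illustrated with plain `mgcv`, whose `k.check()` (imported in the NAMESPACE below) reports whether a smooth's basis dimension looks too small. The iterative doubling rule here is a sketch of the general idea, not the package's actual implementation:

```r
library(mgcv)

# simulated test data shipped with mgcv
set.seed(1)
dat <- gamSim(1, n = 400, verbose = FALSE)

# deliberately small basis dimension
m <- gam(y ~ s(x2, k = 4), data = dat)

# k.check() reports k' (maximum df), the estimated edf and a p-value per smooth;
# an edf close to k' with a small p-value suggests k should be increased
print(k.check(m))

# a simple automatic-increase rule (illustrative only)
k <- 4
while (k < 32 &&
       k.check(gam(y ~ s(x2, k = k), data = dat))[1, "p-value"] < 0.05) {
  k <- 2 * k
}
k   # first basis dimension that passes the check (or the cap)
```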
Empty file added R/.Rapp.history
Empty file.
51 changes: 0 additions & 51 deletions R/calculate_vc.R

This file was deleted.

91 changes: 91 additions & 0 deletions R/calculate_vcs.R
@@ -0,0 +1,91 @@
#' Extracts varying coefficient estimates (for SVC, TVC and STVC models).
#'
#' @param input_data the data used to create the GAM model in `data.frame`, `tibble` or `sf` format. This can be the original data used to create the model or another surface with location and time attributes.
#' @param mgcv_model a GAM model with smooths created using the `mgcv` package
#' @param terms a vector of names starting with "Intercept" plus the names of the covariates used in the GAM model (these are the names of the variables in the `input_data` used to construct the model).
#'
#' @return A `data.frame` of the input data with the coefficient estimates, standard errors and t-values for each covariate. It can be used to generate coefficient estimates for specific time slices and over gridded surfaces as described in the package vignette.
#' @importFrom dplyr mutate
#' @importFrom stats predict
#'
#' @examples
#' require(dplyr)
#' # define input data
#' data("chaco")
#' input_data <-
#' chaco |>
#' # create Intercept as an addressable term
#' mutate(Intercept = 1)
#' # create a model for example as result of running `evaluate_models`
#' gam.m = gam(ndvi ~ 0 + s(X, Y, by = Intercept) +
#' s(X, Y, by = tmax) + s(X, Y, by = pr), data = input_data)
#' # calculate the Varying Coefficients
#' terms = c("Intercept", "tmax", "pr")
#' vcs = calculate_vcs(input_data, gam.m, terms)
#' vcs |> select(ndvi, X, Y, starts_with(c("b_", "se_", "t_")), yhat)
#'
#' @export
calculate_vcs <- function(input_data, mgcv_model, terms = NULL) {
# --- Input validation ---
if (!inherits(mgcv_model, "gam")) {
stop("Error: 'mgcv_model' must be a GAM object from mgcv::gam().")
}
if (!is.data.frame(input_data)) {
stop("Error: 'input_data' must be a data.frame.")
}
if (is.null(terms)) {
# default to all parametric terms in the model
terms <- attr(mgcv_model$terms, "term.labels")
}
if (!all(terms %in% names(input_data))) {
missing_terms <- setdiff(terms, names(input_data))
stop(paste("Error: The following terms are missing from input_data:",
paste(missing_terms, collapse = ", ")))
}
n_t <- length(terms)
n_t <- length(terms)

# copy the input data to hold the output columns
output_data <- input_data

# template whose predictor columns are overwritten for each term
input_data_copy <- input_data

# identity matrix: row i sets term i to 1 and all other terms to 0, so a
# single predict() call returns the coefficient surface for term i alone
term_mats <- diag(n_t)
colnames(term_mats) <- terms

# loop over terms
for (i in seq_len(n_t)) {
# assign the 0/1 indicators for this term
for (t in seq_along(terms)) {
input_data_copy[[terms[t]]] <- term_mats[i, t]
}

# single predict() call returning both fit and SE
pred <- predict(mgcv_model, newdata = input_data_copy, se.fit = TRUE)
b.j <- pred$fit
se.j <- pred$se.fit
t.j <- b.j / se.j

# append results
term_i <- terms[i]
output_data[[paste0("b_", term_i)]] <- b.j
output_data[[paste0("se_", term_i)]] <- se.j
output_data[[paste0("t_", term_i)]] <- t.j
}

# predicted response from the unmodified data (terms were validated above)
output_data$yhat <- predict(mgcv_model, newdata = input_data)

return(output_data)
}
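
The indicator-matrix device in the loop above — setting one term to 1 and the rest to 0 so that `predict()` isolates a single smooth's contribution — can be verified with a small self-contained `mgcv` example (all variable names here are made up for illustration):

```r
library(mgcv)

# simulate a spatially varying relationship: y = f0(X, Y) + (2 * Y) * x1
set.seed(1)
n  <- 400
X  <- runif(n); Y <- runif(n)
x1 <- rnorm(n)
y  <- sin(pi * X) + (2 * Y) * x1 + rnorm(n, sd = 0.1)
df <- data.frame(y, X, Y, x1, Intercept = 1)

# SVC-style model: each coefficient is a spatial smooth via `by =`
m <- gam(y ~ 0 + s(X, Y, by = Intercept) + s(X, Y, by = x1), data = df)

# Intercept = 0, x1 = 1: only the s(X, Y, by = x1) smooth contributes,
# so predict() returns the estimated coefficient surface b1(X, Y)
nd <- transform(df, Intercept = 0, x1 = 1)
b1 <- predict(m, newdata = nd)

# the recovered surface should closely track the true coefficient 2 * Y
cor(b1, 2 * Y)
```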



56 changes: 19 additions & 37 deletions R/data.R
@@ -1,43 +1,25 @@
#' US States Economic Productivity Data (1970-1985)
#' Chaco dry rainforest data (2012-2022)
#'
#' A dataset of annual economic productivity data for the 48 contiguous US states (with Washington DC merged into Maryland), from 1970 to 1985 (17 years) in long format. The data productivity data table was extracted from the `plm` package.
#' A point dataset of NDVI and climate data. The data are a sample of 2,000 observations of the Normalised Difference Vegetation Index (NDVI) (2012-2022) for the Chaco dry rainforest in South America, with accompanying climate data. These were obtained via Google Earth Engine (Gorelick et al., 2017). The NDVI data are sourced from the PKU GIMMS NDVI v1.2 dataset, which provides NDVI observations at 1/12° spatial resolution at bi-monthly intervals from 1982 to 2022 (Li et al., 2023). The climate data were derived from the TerraClimate dataset (IDAHO_EPSCOR/TERRACLIMATE). Maximum temperature (`tmax`) and precipitation (`pr`) were selected and means calculated for each monthly image across all pixels.
#'
#' @format A tibble with 816 rows and 14 columns.
#' @format A `sf` POINT dataset with 2000 observations and 12 fields.
#' \describe{
#' \item{state}{The name of the state}
#' \item{GEOID}{The state code}
#' \item{region}{The region}
#' \item{pubC}{Public capital which is composed of highways and streets (hwy) water and sewer facilities (water) and other public buildings and structures (util)}
#' \item{hwy}{Highway and streets assets}
#' \item{water}{Water utility assets}
#' \item{util}{Other public buildings and structures}
#' \item{privC}{Private captial stock}
#' \item{gsp}{Gross state product}
#' \item{emp}{Labour input measured by the employment in non-agricultural payrolls}
#' \item{unemp}{State unemployment rate capture elements of the business cycle}
#' \item{X}{Easting in metres from USA Contiguous Equidistant Conic projection (ESRI:102005)}
#' \item{Y}{Northing in metres from USA Contiguous Equidistant Conic projection (ESRI:102005)}
#' \item{id}{An observation identifier}
#' \item{ndvi}{Normalised Difference Vegetation Index (NDVI)}
#' \item{tmax}{Maximum temperature (°C)}
#' \item{pr}{Precipitation}
#' \item{month}{A consecutive integer month index from 1 to 120}
#' \item{year}{The year of observation}
#' \item{lon}{Longitude in degrees (WGS84)}
#' \item{lat}{Latitude in degrees (WGS84)}
#' \item{X}{Easting in metres from the SIRGAS 2000 / Brazil Mercator projection (EPSG:5641)}
#' \item{Y}{Northing in metres from the SIRGAS 2000 / Brazil Mercator projection (EPSG:5641)}
#' \item{geometry}{The spatial geometry of the observation in the SIRGAS 2000 / Brazil Mercator projection (EPSG:5641)}
#' }
#' @source Croissant, Yves, Giovanni Millo, and Kevin Tappe. 2022. Plm: Linear Models for Panel Data
#' @source Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D., & Moore, R. (2017). Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment, 202, 18–27. https://doi.org/10.1016/J.RSE.2017.06.031
#' @source Li, M., Cao, S., Zhu, Z., Wang, Z., Myneni, R. B., & Piao, S. (2023). Spatiotemporally consistent global dataset of the GIMMS Normalized Difference Vegetation Index (PKU GIMMS NDVI) from 1982 to 2022. Earth System Science Data, 15(9), 4181–4203. https://doi.org/10.5194/ESSD-15-4181-2023
#'
#' @examples
#' data(productivity)
"productivity"

#' US States boundaries
#'
#' A dataset of of the boundaries of 48 contiguous US states (with Washington DC merged into Maryland), extracted from the `spData` package.
#'
#' @format A `sf` polygon dataset with 48 rows and 6 fields.
#' \describe{
#' \item{GEOID}{The state code}
#' \item{NAME}{The name of the state}
#' \item{REGION}{The region}
#' \item{total_pop_10}{Population in 2010}
#' \item{total_pop_15}{Population in 2015}
#' }
#' @source Bivand, Roger, Jakub Nowosad, and Robin Lovelace. 2019. spData: Datasets for Spatial Analysis. R package
#'
#' @examples
#' data(us_data)
"us_data"
#' library(sf)
#' data("chaco")
"chaco"