Setup

Package Installation

Note that postmastr is not available on CRAN, so it must be installed from GitHub:

remotes::install_github("slu-openGIS/postmastr")

p <- c( "tidyverse","stringi","tidycensus",
        "textclean","tidygeocoder" )
install.packages(p)
library( postmastr )
library( tidyverse )
library( stringi )
library( tidycensus )
library( textclean )
library( tidygeocoder )

Geocoding Addresses

Data Cleaning with the postmastr Package

Basic Organization of postmastr

The postmastr functions are organized around a grammar of street addresses and can be grouped in two ways. All functions begin with the prefix pm_ in order to take advantage of RStudio’s auto-complete functionality.

First, we have major groups of functions based on their associated grammatical element:

  • house - house number
  • houseAlpha - alphanumeric house number
  • houseFrac - fractional house number
  • street - street name
  • streetDir - street prefix and suffix direction
  • streetSuf - street suffix
  • unit - unit name and number
  • city - city
  • state - state
  • postal - postal code

For each group of functions, there is a similar menu of options that describe the verb (action) the function implements. For the state family of functions, for instance:

  • pm_state_detect() - does a given street address contain a state name or abbreviation?
  • pm_state_any() - does any street address contain a state name or abbreviation?
  • pm_state_all() - do all street addresses contain a state name or abbreviation?
  • pm_state_none() - returns a tibble of street addresses that do not contain a state name or abbreviation
  • pm_state_parse() - parses street addresses that do contain a state name or abbreviation
  • pm_state_std() - standardizes the parsed state data to return upper-case abbreviations
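Conceptually, the detect functions test whether a dictionary entry appears in (here, at the end of) the address string. A minimal base-R illustration of the idea, not postmastr itself:

```r
# Toy version of the idea behind pm_state_detect(): does each address
# end with an entry from a state dictionary? (Illustration only.)
addresses  <- c( "1 WEST 54TH ST, NEW YORK, NY", "PO BOX 840, ARLINGTON" )
state_dict <- c( "NY", "MN", "TX" )
pattern    <- paste0( "\\b(", paste( state_dict, collapse = "|" ), ")$" )
grepl( pattern, addresses )
#> [1]  TRUE FALSE
```

postmastr wraps this kind of matching in tidy, data-frame-aware functions and adds the parsing and standardization steps on top.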

Building Dictionaries

Thanks to Akila Forde for contributions to this section.

The postmastr package utilizes a collection of “dictionaries” that are used for standardization and disambiguation purposes.

x.dir <- pm_dictionary( type = "directional", 
                        locale = "us",
                        filter = c("N", "S", "E", "W", "NE", "NW", "SW", "SE"))

head( x.dir, 10 )
x.state <- pm_dictionary( type = "state", 
                          case = c("upper"), 
                          locale = "us")
head( x.state, 10 )
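Dictionaries can also be extended with custom entries. The sketch below uses postmastr’s pm_append() as described in the package documentation; the nonstandard spellings are hypothetical examples, and running it requires the postmastr package:

```r
# Hypothetical entries: map two nonstandard state spellings to "DC".
# pm_append() builds an appendix that pm_dictionary() merges in
# via its append argument.
x.extra <- pm_append( type = "state",
                      input  = c( "D.C.", "Wash DC" ),
                      output = c( "DC", "DC" ) )

x.state2 <- pm_dictionary( type = "state",
                           append = x.extra,
                           case = "upper",
                           locale = "us" )
```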

NOTE: The city dictionary functionality is powered by the get_acs function from the tidycensus package. This requires a Census Bureau API key, which can be obtained at http://api.census.gov/data/key_signup.html. Once you have a key, use the census_api_key function from tidycensus to set up the key before creating any city dictionary objects. (Because the dictionary is built from the 50 states, it excludes the Armed Forces “states” such as Armed Forces Americas and Armed Forces Europe/Middle East.)

tidycensus::census_api_key( "yourkeyhere" )
af_pm_dictionary_us_cities <- function(append, states){

  out <- af_pm_get_tidycensus(states = states)

  # optionally append
  if (missing(append) == FALSE){

    # bind rows
    out <- dplyr::bind_rows(out, append)

    # re-order observations
    out <- out[order(out$city.input),]

  }

  # return output
  return(out)

}

# us cities via tidycensus
af_pm_get_tidycensus <- function(states){

  # global bindings
  state.abb = NAME = NULL

  # download data
  states %>%
    base::split(states) %>%
    purrr::map_df(~ suppressMessages(
      tidycensus::get_acs(year = 2020, state = .x, geography = "place", variable = "B01003_001"))) -> data

  # create state dictionary
  dict <- data.frame(
    state.name = c(datasets::state.name),
    state.abb = c(datasets::state.abb),
    stringsAsFactors = FALSE
  )

  dict <- dplyr::filter(dict, state.abb %in% states)
  dict <- dict$state.name

  # parse state names
  data %>%
    dplyr::select(NAME) %>%
    dplyr::mutate(NAME = stringr::str_replace_all(NAME, pattern = ",", replacement = "")) %>%
    pm_parse_place(dictionary = dict) %>%
    dplyr::mutate(NAME = stringr::str_trim(NAME, side = "right")) -> data

  # create dictionary of Census place types
  dict <- c("city", "town", "village", "CDP")

  # parse place types
  data %>%
    pm_parse_place(dictionary = dict) -> data

  # clean-up output
  data %>%
    dplyr::mutate(NAME = stringr::str_trim(NAME, side = "right")) %>%
    dplyr::distinct(NAME, .keep_all = TRUE) %>%
    dplyr::rename(city.input = NAME) -> data

  # re-order observations
  data <- data[order(data$city.input),]

  # return output
  return(data)

}

pm_parse_place <- function(.data, dictionary){

  # global bindings
  NAME = pm.place = NULL

  # iterate over observations
  .data %>%
    dplyr::mutate(pm.place = purrr::map(NAME, ~ pm_extract_pattern(.x, dictionary = dictionary, end = TRUE))) -> .data

  # clean address data
  .data %>%
    tidyr::unnest(pm.place) %>%
    dplyr::filter(is.na(pm.place) == FALSE) %>%
    dplyr::mutate(pm.place = as.character(pm.place)) %>%
    dplyr::mutate(NAME = stringr::str_replace(NAME,
                                              pattern = stringr::str_c("\\b", pm.place, "\\b$"),
                                              replacement = "")) %>%
    dplyr::select(-pm.place) -> .data

  return(.data)

}


# iterate over dictionary items per observations
pm_extract_pattern <- function(x, dictionary, end = TRUE){

  # create pattern vector
  patternVector <- dictionary

  patternVector %>%
    base::split(patternVector) %>%
    purrr::map( ~ stringr::str_extract(x, pattern = ifelse (end == TRUE,
                                                            stringr::str_c("\\b", .x, "\\b$"),
                                                            stringr::str_c("\\b", .x, "\\b")))) -> out

  return(out)

}
# takes a bit to process
x.city <- 
  af_pm_dictionary_us_cities( states = state.abb ) %>% 
  mutate_if( is.character, str_to_upper )
head( x.city )

Zip Code Cleanup

address <- 
 c(
    "1 WEST 54TH ST NO 38, NEW YORK, NY  10019",
    "202 FLANDERS DRAKESTOWN ROAD, FLANDERS, NJ  7836",
    "PO BOX 840, ARLINGTON, MN  55307",
    "7211 HAVEN AVENUE SUITE E-565, RANCHO CUCAMONGA, CA  91701",
    "18810 U S HWY 41, MASARYKTOWN, FL  34604",
    "1200 W INTERNATIONAL SPEEDWAY BLVD, DAYTONA BEACH, FL  32114", 
    "16105 SWINGLEY RIDGE ROAD UNIT 773, CHESTERFIELD, MO  63017",
    "C/O EVAN RUSSO 57 TAYLOR BROOK RD, HANCOCK, VT  5748",
    "2917 E 47TH ST APT B, INDIANAPOLIS, IN  46205",
    "3280 N 960 E, LEHI, UT  84043",
    "702 SECOND AVE E, OSKALOOSA, IA  52577",
    "3875 POWDER SPRINGS RD STE C, POWDER SPRINGS, GA  30127",
    "1730 GIDDINGS AVENUE SOUTHEAST, GRAND RAPIDS, MI  49507", 
    "PO BOX 74, TAHOLAH, WA  98587",
    "BOX 444, INKSTER, MI  48141",
    "1990 NE 163RD STREET - SUITE 233, NORTH MIAMI BEACH, FL  33162",
    "ONE BEAR PLACE 98006, WACO, TX  76798", 
    "3023 S UNIVERSITY DRIVE 103, FORT WORTH, TX  76109",
    "POST OFFICE BOX 6037, ALAMEDA, CA  94501"
 )

Missing leading zeros in ZIP codes:

address[c(2,8)]
## [1] "202 FLANDERS DRAKESTOWN ROAD, FLANDERS, NJ  7836"    
## [2] "C/O EVAN RUSSO 57 TAYLOR BROOK RD, HANCOCK, VT  5748"
# Assumes the ZIP code is always the last word in the address;
# only four-character last words are padded below
x.last_words <- stri_extract_last_words( address )


## Loop to prepend a zero to any shortened ZIP code, but only when the
## last word of the address is exactly four characters long
for ( i in seq_along( x.last_words ) ){
  if ( nchar( x.last_words[i]) == 4 ) 
    {
       address[i] <- 
         paste( word( address[i], 1,-2 ),
         paste0( "0", x.last_words[i] ) ) 
    }
}

address[c(2,8)]
## [1] "202 FLANDERS DRAKESTOWN ROAD, FLANDERS, NJ  07836"    
## [2] "C/O EVAN RUSSO 57 TAYLOR BROOK RD, HANCOCK, VT  05748"
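The same fix can be vectorized with a regular expression that matches exactly four trailing digits; a five-digit ZIP has no word boundary before its last four digits, so it is left untouched. A sketch:

```r
# Vectorized alternative to the loop above: prepend "0" when the
# address ends in exactly four digits; five-digit ZIPs are unchanged.
addr <- c( "202 FLANDERS DRAKESTOWN ROAD, FLANDERS, NJ  7836",
           "1 WEST 54TH ST NO 38, NEW YORK, NY  10019" )
sub( "\\b(\\d{4})$", "0\\1", addr )
#> [1] "202 FLANDERS DRAKESTOWN ROAD, FLANDERS, NJ  07836"
#> [2] "1 WEST 54TH ST NO 38, NEW YORK, NY  10019"
```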

Parsing Addresses

# requires a data frame
id <- paste0( "id-", 1:length(address) )
d.address <- data.frame( id, address )
head( d.address )
## Ensure each address has a unique identifier
d.address <- pm_identify( d.address, var="address" )
head( d.address )
## Parse the data into city, state, and address components
d.parsed <- 
  d.address %>% 
  pm_parse( input='full', 
            address="address", 
            output = "short", 
            keep_parsed = "limited",
            dir_dict=x.dir, 
            state_dict=x.state, 
            city_dict=x.city )

d.parsed <- d.parsed %>%  mutate_if( is.character, str_to_upper )
d.parsed 



Geocoding with tidygeocoder

https://jessecambon.github.io/tidygeocoder/

Demo

library(tibble)
library(dplyr)
library(tidygeocoder)

address_single <- tibble(singlelineaddress = c(
  "11 Wall St, NY, NY",
  "600 Peachtree Street NE, Atlanta, Georgia"
))

address_components <- tribble(
  ~street, ~cty, ~st,
  "11 Wall St", "NY", "NY",
  "600 Peachtree Street NE", "Atlanta", "GA"
)
census_s1 <- address_single %>%
  geocode(address = singlelineaddress, method = "census", verbose = TRUE)
#> 
#> Number of Unique Addresses: 2
#> Executing batch geocoding...
#> Batch limit: 10,000
#> Passing 2 addresses to the US Census batch geocoder
#> Querying API URL: https://geocoding.geo.census.gov/geocoder/locations/addressbatch
#> Passing the following parameters to the API:
#> format : "json"
#> benchmark : "Public_AR_Current"
#> vintage : "Current_Current"
#> Query completed in: 1 seconds
osm_s1 <- geo(
  address = address_single$singlelineaddress, method = "osm",
  lat = latitude, long = longitude
)
#> Passing 2 addresses to the Nominatim single address geocoder
#> Query completed in: 2 seconds

Geocoding API Options

geo( ..., method="osm" )

method = the geocoding service used in the call.

  • “osm”: Open Street Map (default)
  • “census”: US Census. Geographic coverage is limited to the United States. Batch geocoding is supported.
  • “arcgis”: ArcGIS.
  • “geocodio”: Geocodio. Geographic coverage is limited to the United States and Canada. An API key must be stored in the environmental variable “GEOCODIO_API_KEY”. Batch geocoding is supported.
  • “iq”: Location IQ. An API key must be stored in the environmental variable “LOCATIONIQ_API_KEY”.
  • “google”: Google. An API key must be stored in the environmental variable “GOOGLEGEOCODE_API_KEY”.
  • “opencage”: OpenCage. An API key must be stored in the environmental variable “OPENCAGE_KEY”.
  • “mapbox”: Mapbox. An API key must be stored in the environmental variable “MAPBOX_API_KEY”.
  • “here”: HERE. An API key must be stored in the environmental variable “HERE_API_KEY”. Batch geocoding is supported, but must be explicitly called with mode = “batch”.
  • “tomtom”: TomTom. An API key must be stored in the environmental variable “TOMTOM_API_KEY”. Batch geocoding is supported.
  • “mapquest”: MapQuest. An API key must be stored in the environmental variable “MAPQUEST_API_KEY”. Batch geocoding is supported.
  • “bing”: Bing. An API key must be stored in the environmental variable “BINGMAPS_API_KEY”. Batch geocoding is supported, but must be explicitly called with mode = “batch”.
  • “geoapify”: Geoapify. An API key must be stored in the environmental variable “GEOAPIFY_KEY”.
  • “cascade” [Deprecated] use geocode_combine or geo_combine instead.

Cascade Option:

The “cascade” method first uses one geocoding service and then uses a second geocoding service if the first service didn’t return results. The services and their order are specified by the cascade_order argument. Note that this is not compatible with full_results = TRUE because geocoding services return different columns.
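Since “cascade” is deprecated, the same fallback behavior can be sketched with geocode_combine(). Argument names follow the tidygeocoder documentation, and the call queries live APIs, so it needs a network connection; address_single is the tibble from the demo above.

```r
library( tidygeocoder )

# Try the free Census geocoder first; fall back to OSM (Nominatim)
# for any rows the Census service fails to match.
combined <- address_single %>%
  geocode_combine(
    queries = list( list( method = "census" ),
                    list( method = "osm" ) ),
    global_params = list( address = "singlelineaddress" ) )
```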

Primary Considerations:

  • Do they allow batch processing?
  • Do they use machine learning to match addresses?
    • Census is pretty literal
    • Google Maps API is ‘smart’
  • Are there limits?
  • What is the price?

API Keys


Getting Keys

It is different for every service, but for some it is a simple request:

https://api.census.gov/data/key_signup.html




Installing Keys

API keys are loaded from R environmental variables.

usethis::edit_r_environ()
# add: 
GEOCODIO_API_KEY="YourAPIKeyHere"

Run usethis::edit_r_environ() to open your .Renviron file and add the API key as an environmental variable, one per line as shown above. Restart R afterwards so the new variable is loaded.
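.Renviron is read only when an R session starts, so after restarting R you can confirm the key is visible to the session:

```r
# TRUE once the key has loaded from .Renviron; an unset
# environmental variable is returned as the empty string "".
nzchar( Sys.getenv( "GEOCODIO_API_KEY" ) )
```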

Project Example

https://nonprofit-open-data-collective.github.io/open-1023-ez-dataset/

  1. Clean up addresses.
  2. Geocode first with the free Census tool.
  3. Send unsuccessful cases to Google Maps API.
  4. Geocode PO Boxes by post office.
  5. Geocode incomplete / incorrect addresses by ZIP.