5 Processing

VisionEval is designed to be easy to install and run, and its results easy to summarize, even when comparing scores of scenarios. It produces consistent and detailed performance metrics. The user can modify the metrics the model produces or define their own from data exported from the model. There are also several ways to approach validation within the VisionEval framework. These topics are explored in this chapter.

5.1 Running VisionEval

VisionEval is implemented entirely in the R statistical language and operates on recent versions of Microsoft Windows. All development work is done there, although macOS and Linux versions are usually distributed as well. A fully self-contained installer for the most recent production release of VisionEval can be found on the download page. It permits installation of the full VisionEval platform, including example data, even behind firewalls that prevent access to the R Project and GitHub repositories.

Once installed, the user assembles input data into a standard directory structure. After the model run script is customized, it is typically run from a command prompt (a minimal run script is sketched below). Running it in this manner allows several different scenarios to be run at the same time with minimal user interaction. The results can then be mined or visualized using a variety of VisionEval and third-party products. Some users employ R Shiny or similar interactive environments for summarizing and visualizing VisionEval output. Such an environment is especially useful when comparing key metrics from a large number of scenarios.
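
The sketch below illustrates the classic run_model.R pattern, assuming the visioneval framework's initializeModel() and runModule() functions; the two module calls shown are illustrative excerpts, not a complete model.

```r
# run_model.R -- a minimal sketch of a VisionEval run script.
library(visioneval)

# Read the run parameters, geography, and model parameters from "defs/"
initializeModel(
  ParamDir = "defs",
  RunParamFile = "run_parameters.json",
  GeoFile = "geo.csv",
  ModelParamFile = "model_parameters.json",
  LoadDatastore = FALSE,
  SaveDatastore = TRUE
)

# Step through the model's modules for each model year
for (Year in getYears()) {
  runModule("CreateHouseholds", "VESimHouseholds", RunFor = "AllYears", RunYear = Year)
  runModule("PredictWorkers", "VESimHouseholds", RunFor = "AllYears", RunYear = Year)
  # ... remaining modules in execution order ...
}
```

The script can then be launched from a command prompt with `Rscript run_model.R`; launching several scenario directories this way allows them to run concurrently.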

5.2 Typical outputs

VisionEval generates a large set of performance metrics at varying summary levels. Several predefined metrics covering mobility, economy, land use, environment, and energy are compiled in each model run. They can be tabulated for individual scenarios or compared across scenarios, as well as visualized using a variety of tools.

The intermediate data generated during the various VisionEval module steps can be compiled as performance metrics, in both absolute and per-capita terms and at various levels of geography. Traditional transportation network metrics such as VMT, vehicle and person hours of travel, and total delay are easily compiled for the model as a whole or for focused areas within it. Likewise, emission estimates and fuel consumption are tabulated. These can be viewed in standard reports or in VEScenarioManager files, especially when comparing such values between scenarios. A sketch of compiling two such metrics from exported data follows the list below.

One set of region-wide performance metrics used by the Oregon DOT includes:

  • Mobility
    • Daily per capita VMT
    • Annual walk trips per capita
    • Daily bike trips per capita
  • Economy
    • Annual all vehicle delay per capita (hours)
    • Daily household parking costs
    • Annual HH vehicle operating cost (fuel, taxes, parking)
    • Annual HH ownership costs (depreciation, vehicle maintenance, tires, finance charge, insurance, registration)
  • Land Use
    • Residents living in mixed-use areas
    • Housing type (SF vs. MF)
  • Environmental
    • Annual GHG emissions per capita
    • HH vehicle GHG/mile
    • Commercial vehicle GHG/mile
    • Transit vehicle GHG/mile
  • Energy
    • Annual all vehicle fuel consumption per capita (gallons)
    • Average all vehicle fuel efficiency (net miles per gallon)
    • Annual external social costs per household (total/% paid)
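
As a minimal sketch of how two of the metrics listed above might be compiled, the following assumes a household table has been exported to CSV; the file name and field names (Dvmt, WalkTrips, HhSize, Marea) are assumptions based on common VERSPM outputs and may differ by model version.

```r
# Compile per-capita metrics from an exported household table
hh <- read.csv("output/Household.csv")

pop <- sum(hh$HhSize)

# Daily per capita VMT, region-wide and by metropolitan area (Marea)
dvmt_per_capita <- sum(hh$Dvmt) / pop
dvmt_by_marea <- tapply(hh$Dvmt, hh$Marea, sum) /
                 tapply(hh$HhSize, hh$Marea, sum)

# Annual walk trips per capita (WalkTrips assumed to be an annual total)
walk_per_capita <- sum(hh$WalkTrips) / pop

round(c(DailyVmtPerCap = dvmt_per_capita, AnnWalkPerCap = walk_per_capita), 2)
```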

5.3 Exporting data

Most of the data generated during a VisionEval model run can be exported (using exporter.R) for further analysis. The user can then mine and visualize the data using a variety of open-source and proprietary tools, which provides considerable flexibility for creating more detailed statistics than those provided by the program. These outputs can also serve as inputs to other models (e.g., emissions models, economic impact models) and visualization tools, and support compilation of additional performance metrics, as illustrated in the sketch below.
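
A hedged sketch of this hand-off, assuming exporter.R has written the Datastore tables to CSV and that a Vehicle.csv file with Marea, Type, and Dvmt fields exists (these names are assumptions and may differ by model version):

```r
# Post-process exported results into an input table for a downstream
# tool -- here, a simple VMT summary for an external emissions model
veh <- read.csv("output/Vehicle.csv")

# Aggregate daily VMT by metropolitan area and vehicle type
vmt_input <- aggregate(Dvmt ~ Marea + Type, data = veh, FUN = sum)

write.csv(vmt_input, "emissions_model_input.csv", row.names = FALSE)
```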

5.4 Validation

Setting up the model includes the steps required to apply it to a given study. Setup is somewhat related to validation, both in clarifying which types of studies VisionEval is appropriately sensitive to and in interpreting the results. See the Getting Started page on the wiki for an overview.

Validation is the assessment of a model’s suitability for its intended purpose, often informed by comparisons against information not used in its original development. In traditional transportation planning models the comparison of observed versus modeled link flows is often a key component of validation. VisionEval is a data-driven model in that most of its input values are exogenously defined rather than emerging from complex mathematical equations. Its aggregate representation of travel demand dictates that it be validated at the same level, with an emphasis on a wider range of comparisons than many traditional models.

The metrics used in validation can range from relatively few, such as per-capita mobility estimates (e.g., VMT and VHT by mode), to a large number of more detailed targets. Examples of the latter include comparisons to external sources (e.g., HPMS data, DMV data), sensitivity tests of key variables, and comparisons to similar communities. Detailed validation criteria used by the Oregon DOT illustrate such targets.
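
A minimal sketch of such a comparison, using illustrative placeholder values rather than real data:

```r
# Compare modeled daily VMT per capita against observed targets
# (e.g., derived from HPMS); values below are placeholders
modeled  <- c(Marea1 = 19.4, Marea2 = 21.7)   # from model output
observed <- c(Marea1 = 18.9, Marea2 = 23.0)   # observed targets

pct_diff <- (modeled - observed) / observed * 100
round(pct_diff, 1)

# Flag geographies deviating more than, say, 10 percent for closer review
names(pct_diff)[abs(pct_diff) > 10]
```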

There are several options for making adjustments in order to calibrate and validate the models. These adjustments vary in difficulty, and the most appropriate approach varies by module. From easiest to most difficult, the options are:

  • Self-calibration: Several of the modules are self-calibrating in that they automatically adjust their calculations to match input values without intervention by the user. (The selected values should still be checked to confirm the calculations are done correctly.)
  • Adjustment of model inputs: Some modules allow the user to optionally enter data that can be used to adjust the models to improve their match to observed conditions.
  • Model estimation data: Several modules, such as household synthesis, are estimated from data specific to the region where the model is deployed. Functions within each module generate the required cross-tabulations from these data. Census PUMS data from Oregon were used to develop the original models, and should be replaced with PUMS data for the modeled area (see the sketch after this list).
  • Model estimation scripts: An advanced user or developer can adjust the model code itself to better match observed local behavior or patterns. This is, of course, the most difficult option and opens up the potential for significant errors, but it is possible for users who know what they are doing.
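
The following sketch shows how local PUMS extracts might be swapped in before rebuilding an estimation-based package such as VESimHouseholds. The file locations and field lists here are assumptions; check the package's estimation documentation for the actual specification.

```r
# Replace the default (Oregon) PUMS estimation data with local extracts
pums_hh  <- read.csv("local_pums_households_raw.csv")
pums_per <- read.csv("local_pums_persons_raw.csv")

# Keep only the fields the package's estimation scripts expect
# (assumed field names -- verify against the package documentation)
hh_fields  <- c("SERIALNO", "PUMA", "HWEIGHT", "UNITTYPE", "PERSONS", "BLDGSZ", "HINC")
per_fields <- c("SERIALNO", "AGE", "WRKLYR", "INCTOT")

write.csv(pums_hh[, hh_fields], "VESimHouseholds/inst/extdata/pums_households.csv",
          row.names = FALSE)
write.csv(pums_per[, per_fields], "VESimHouseholds/inst/extdata/pums_persons.csv",
          row.names = FALSE)

# Rebuilding and reinstalling the package then re-estimates its models
# from the local data
```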

The main validation targets have historically included household income, vehicle ownership, vehicle miles of travel, and fuel consumption. The number of workers and drivers within each geography has recently become more widely used. These statistical comparisons can be made for the modeled area as a whole or for large geographies nested within it (e.g., Azones, Mareas). Sensitivity tests should be performed to evaluate the reasonableness (e.g., correct direction and magnitude) of the VisionEval output estimates. Applications of VisionEval in comparable communities may also provide a reasonableness check that the model is functioning appropriately.

Note that the HPMS definition of VMT differs from the one used in VisionEval. VisionEval reports all household travel regardless of where it occurs, and adds commercial vehicle and heavy-duty truck and bus travel on MPO roads. HPMS reports travel by vehicles of all types on roads within the MPO boundary, as the sketch below makes explicit.
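
The difference can be made explicit before comparing the two totals, as in this illustrative sketch (the argument names are placeholders, not actual VisionEval outputs):

```r
# Reconcile the two VMT definitions before computing a percent difference
compare_vmt <- function(hh_vmt_all,        # all household VMT, wherever it occurs
                        com_veh_vmt_mpo,   # commercial service VMT on MPO roads
                        truck_bus_vmt_mpo, # heavy truck and bus VMT on MPO roads
                        hpms_vmt_mpo) {    # HPMS: all vehicles on MPO-boundary roads
  ve_vmt <- hh_vmt_all + com_veh_vmt_mpo + truck_bus_vmt_mpo
  (ve_vmt - hpms_vmt_mpo) / hpms_vmt_mpo * 100  # percent difference
}
```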

Additional detail on validation can be found in this validation document.