Building: Windsor Hotel
Room: Oak Room
Date: 2015-09-28 04:15 PM – 04:30 PM
Last modified: 2015-08-29
Abstract
It is almost paradoxical that, as more biodiversity data become widely available and interoperable, the call for missing, critical information grows louder. The maturity of biodiversity data standards, coupled with increasingly sophisticated data-mining techniques and ongoing literature markup and semantic indexing, allows ever easier detection and quantification of potential gaps during meta-analyses, which might account for the perceived volume of data gaps.
A straightforward consequence of identifying emerging gaps is the need to fill them, following the realization that conservation policies built on biodiversity databases missing expected, adequate data may be flawed. One widely acknowledged avenue for gap filling is the digitization of assets, whether through direct data acquisition or, equally importantly, through the standardization and databasing of already-digital assets such as legacy files.
Yet the capture of such data is often what permits further gaps to be discovered. Sufficient data volumes, together with adequate data-quality and fitness-for-use assessments, may allow (a) the detection of patterns in the data hinting at gaps and (b) cross-referencing against other sources, revealing discrepancies between expected and realized data. Thus, the hunt for biodiversity data produces gaps, and when these are hunted down with more data, new gaps emerge.
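The cross-referencing step above can be sketched minimally: compare an expected checklist against the occurrence records actually held, and flag both absences and discrepancies. All names and data below are hypothetical, illustrative placeholders rather than any specific database or standard.

```python
# Minimal sketch (hypothetical data): cross-reference an expected species
# checklist against realized occurrence records to surface potential gaps.

# Hypothetical regional checklist: species expected to occur in the area.
expected_species = {"Quercus robur", "Fagus sylvatica", "Betula pendula"}

# Hypothetical occurrence records actually present in the database.
occurrence_records = [
    {"species": "Quercus robur", "year": 2010},
    {"species": "Quercus robur", "year": 2012},
    {"species": "Betula pendula", "year": 2008},
]

realized_species = {rec["species"] for rec in occurrence_records}

# Species expected but absent from the data: candidate gaps.
missing = sorted(expected_species - realized_species)

# Species recorded but not on the checklist: discrepancies worth review.
unexpected = sorted(realized_species - expected_species)

print("Potential gaps:", missing)       # prints ['Fagus sylvatica']
print("Discrepancies:", unexpected)     # prints []
```

In practice, such comparisons would run against standardized occurrence data and authoritative checklists, but the principle is the same: the difference between expected and realized data is itself a dataset of gaps.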
We will show a few examples of this cycle, drawn both from the literature and from current research, and will provide general guidelines based on a forthcoming data gap analysis overview.