More and more attention is being placed on the need to gather, analyze, and present data in the nonprofit and philanthropic sectors. The fascination with data and how to present it has entered a hyper-intensive period. Online knowledge services such as Philanthropedia, mapping tools such as Philanthropy In/Sight, and the growth in other data visualization tools (see Lucy Bernholz’s post) continue to push the demand for data access and analysis.
A recent sidebar conversation with Lucy and two seemingly innocent comments on a recent post raise a question, though, about the data quest. What if the data we’re using, and seeking to obtain more of, is flawed or incorrect? Can we operate with a level of uncertainty about the quality of the data and trust that we’re not trafficking in meaningless or wrong data?
From my work experiences at several grantmakers, the challenge of generating correct data starts with the need to maintain fidelity and uniformity in classifying grants. However, many terms used to classify grants – such as youth development, advocacy, or policy change – do not have consistent definitions.
Instead, how foundation personnel, whether a grants manager or program officer, define and apply those terms may vary even when using the National Taxonomy of Exempt Entities (NTEE) definitions as a base. Other less-defined terms, such as at-risk youth, will generate even more variation as funders adopt different interpretations of both ‘at-risk’ and ‘youth’. Ultimately, the variations in interpretation result in different coding approaches and inputs at the base level that ripple outward as the data becomes publicly available and analyzed.
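To make the ripple effect concrete, here is a minimal sketch (all grant records, labels, and the crosswalk mapping are hypothetical) of how two funders coding the same program under different terms distorts an aggregate count, and how a shared crosswalk to a common taxonomy collapses the duplication:

```python
from collections import Counter

# Hypothetical records: the same grantee program coded differently by two funders.
grants = [
    {"funder": "Foundation A", "grantee": "City Youth Center", "code": "Youth Development"},
    {"funder": "Foundation B", "grantee": "City Youth Center", "code": "At-Risk Youth Services"},
]

# Counting the raw codes makes one program look like two distinct categories.
raw_counts = Counter(g["code"] for g in grants)

# An illustrative crosswalk mapping each funder's label to one shared
# NTEE-style bucket (the "O50" code here is only a placeholder).
crosswalk = {
    "Youth Development": "O50",
    "At-Risk Youth Services": "O50",
}
normalized_counts = Counter(crosswalk[g["code"]] for g in grants)

print(len(raw_counts))         # 2 categories before normalization
print(len(normalized_counts))  # 1 category after normalization
```

The sketch only shows the mechanism; in practice the hard part is agreeing on the crosswalk itself, which is exactly where interpretations of terms like ‘at-risk’ diverge.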
So, what’s our true threshold for inaccurate or incomplete data? And how do we ensure that our data is correct and not useless?
Tags: Philanthropy, Data, Nonprofit, Grants Management