951 results for Spatial Data Quality


Relevance:

90.00%

Publisher:

Abstract:

Precision horticulture and spatial analysis applied to orchards are a growing and evolving part of precision agriculture technology. The aim of this discipline is to reduce production costs by monitoring and analysing orchard-derived information to improve crop performance in an environmentally sound manner. Georeferencing and geostatistical analysis coupled with point-specific data mining make it possible to devise and implement management decisions tailored to the individual orchard. Potential applications include the opportunity to verify, in real time throughout the season, the effectiveness of cultural practices in achieving production targets in terms of fruit size, number, yield and, in the near future, fruit quality traits. These data will impact not only the pre-harvest stage but also the post-harvest sector of the fruit chain. Chapter 1 provides an updated overview of precision horticulture, while Chapter 2 presents a preliminary spatial statistical analysis of the variability in apple orchards before and after manual thinning; an interpretation of this variability, and of how it can be managed to maximize orchard performance, is offered. Chapter 3 then undertakes a stratification of spatial data into management classes to interpret and manage spatial variation in the orchard. An inverse model approach is also applied to verify whether crop production explains environmental variation. Chapter 4 presents an integration of the techniques adopted before and offers a new key for reading the information gathered within the field. The overall goal of this Dissertation was to probe the feasibility, desirability and effectiveness of a precision approach to fruit growing, following the lines of other areas of agriculture that already adopt this management tool. As existing applications of precision horticulture have already shown, crop specificity is an important factor to account for. This work focused on apple because of its importance in the area where the work was carried out, and worldwide.
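The georeferencing-plus-geostatistics workflow described above can be sketched with a minimal inverse-distance-weighting (IDW) interpolation over georeferenced per-tree fruit counts. The data and function below are purely illustrative, not the dissertation's actual tooling:

```python
import math

def idw(points, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (px, py, value) points."""
    num = den = 0.0
    for px, py, v in points:
        d = math.hypot(x - px, y - py)
        if d == 0.0:
            return v  # exactly on a sampled tree: return its observed value
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Hypothetical georeferenced fruit counts per tree: (x, y, count)
trees = [(0, 0, 120), (10, 0, 80), (0, 10, 100), (10, 10, 60)]
print(idw(trees, 5, 5))  # estimate at an unsampled grid location
```

In practice such interpolated surfaces, rather than a four-point toy grid, are what support zone-level thinning or harvest decisions.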

Relevance:

90.00%

Publisher:

Abstract:

In the last couple of decades we have witnessed a reappraisal of design-based techniques for spatial data. Traditionally, information on the spatial location of the individuals in a population has been used to develop efficient sampling designs. This thesis offers a new technique for inference on both individual values and global population values that exploits, at the estimation stage, the spatial information available before sampling, by rewriting a deterministic interpolator within a design-based framework. The resulting point estimator of individual values is treated both for finite spatial populations and for continuous spatial domains, while the theory on the estimator of the global population value covers the finite population case only. A fairly broad simulation study compares the point estimator with the simple random sampling without replacement estimator in predictive form and with kriging, the benchmark technique for inference on spatial data. The Monte Carlo experiment is carried out on populations generated according to different superpopulation models in order to control different aspects of the spatial structure. The simulation outcomes show that the proposed point estimator behaves almost the same as the kriging predictor regardless of the parameters adopted for generating the populations, especially for low sampling fractions. Moreover, the use of spatial information substantially improves design-based spatial inference on individual values.
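A toy design-based sketch of the idea: draw a simple random sample without replacement from a hypothetical spatial population, predict unsampled units with a deterministic interpolator (IDW stands in here for the thesis's estimator), and compare against the sample-mean predictor. Population, sample size and surface are all invented for illustration:

```python
import math
import random

def idw(sample, x, y, power=2.0):
    """Inverse-distance-weighted prediction at (x, y) from sampled (px, py, v)."""
    num = den = 0.0
    for px, py, v in sample:
        d = math.hypot(x - px, y - py)
        if d == 0.0:
            return v
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Hypothetical finite spatial population on a 10x10 grid with a smooth surface
population = [(x, y, 2.0 * x + y) for x in range(10) for y in range(10)]

random.seed(42)
sample = random.sample(population, 20)  # SRSWOR of n = 20 units
sampled = {(x, y) for x, y, _ in sample}
mean = sum(v for _, _, v in sample) / len(sample)

# Predict each unsampled unit two ways and compare root-mean-square errors
err_idw = err_mean = 0.0
m = 0
for x, y, v in population:
    if (x, y) in sampled:
        continue
    err_idw += (idw(sample, x, y) - v) ** 2
    err_mean += (mean - v) ** 2
    m += 1
rmse_idw = math.sqrt(err_idw / m)
rmse_mean = math.sqrt(err_mean / m)
print(rmse_idw < rmse_mean)  # interpolator exploits the spatial structure
```

On a spatially smooth population the interpolator-based predictor beats the constant sample-mean predictor, which is the intuition behind using spatial information at the estimation stage.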

Relevance:

90.00%

Publisher:

Abstract:

Aimee Guidera, Director of the National Data Quality Campaign, delivered the second annual Lee Gurel '48 Lecture in Education, "From Dartboards to Dashboards: The Imperative of Using Data to Improve Student Outcomes." Aimee Rogstad Guidera is the Founding Executive Director of the Data Quality Campaign. She manages a growing partnership among national organizations collaborating to improve the quality, accessibility and use of education data to improve student achievement. Working with 10 Founding Partners, Aimee launched the DQC in 2005 with the goal of every state having a robust longitudinal data system in place by 2009. The Campaign is now in the midst of its second phase, focusing on State Actions to ensure effective data use. Aimee joined the National Center for Educational Accountability as Director of the Washington, DC office in 2003. During her eight previous years in various roles at the National Alliance of Business, Aimee supported the corporate community's efforts to increase achievement at all levels of learning. As NAB Vice President of Programs, she managed the Business Coalition Network, comprising over 1,000 business-led coalitions focused on improving education in communities across the country. Prior to joining the Alliance, Aimee focused on school readiness, academic standards, education goals and accountability systems while at the Center for Best Practices at the National Governors Association. Immediately after receiving her AB from Princeton University's Woodrow Wilson School of Public & International Affairs, she taught for the Japanese Ministry of Education in five Hiroshima high schools, where she interviewed educators and studied the Japanese education system. Aimee also holds a master's degree in public policy from Harvard's John F. Kennedy School of Government.

Relevance:

90.00%

Publisher:

Abstract:

We present a state-of-the-art application of smoothing for dependent bivariate binomial spatial data to Loa loa prevalence mapping in West Africa. This application is special because it starts with the non-spatial calibration of survey instruments, continues with spatial model building and assessment, and ends with robust, tested software that will be used by the field scientists of the World Health Organization for online prevalence map updating. From a statistical perspective, several important methodological issues were addressed: (a) building spatial models that are complex enough to capture the structure of the data but remain computationally usable; (b) reducing the computational burden in the handling of very large covariate data sets; (c) devising methods for comparing spatial prediction methods for a given exceedance policy threshold.
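For a given policy threshold, the quantity mapped at each location is the probability that true prevalence exceeds it. A minimal normal-approximation sketch for a single binomial survey sample is shown below; the counts are hypothetical, and the paper's actual models add spatial dependence and instrument calibration:

```python
import math

def exceedance_prob(positives, n, threshold):
    """P(true prevalence > threshold), normal approximation to the binomial."""
    p = positives / n
    se = math.sqrt(p * (1 - p) / n)
    z = (threshold - p) / se
    # 1 - Phi(z), written via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical village survey: 42 positives among 200 sampled individuals.
# Is prevalence likely above a 20% policy threshold?
print(exceedance_prob(42, 200, 0.20))
```

Decisions (e.g. whether mass drug administration is safe) then hinge on whether this probability clears an agreed cut-off, which is why methods are compared at the policy threshold itself.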

Relevance:

90.00%

Publisher:

Abstract:

We investigate changes in extreme European winter (December–February) precipitation back to 1700 and show, for various European regions, that the return periods of extremely wet and dry winters are subject to significant changes both before and after the onset of anthropogenic influences. Generally, winter precipitation has become more extreme. We also examine the spatial pattern of the changes in the extremes over the last 300 years where data quality is sufficient. Over central and eastern Europe, dry winters occurred more frequently during the 18th century and the second part of the 19th century relative to 1951–2000. Dry winters were less frequent during both the 18th and 19th centuries over the British Isles and the Mediterranean. Wet winters have been less abundant during the last three centuries compared to 1951–2000, except during the early 18th century in central Europe. Although winter precipitation extremes are affected by climate change, no obvious connection was found between these changes and solar, volcanic or anthropogenic forcing. However, a physically meaningful interpretation in terms of atmospheric circulation changes was possible.
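Return periods of wet and dry winters can be estimated empirically by ranking seasonal totals. A minimal sketch using the Weibull plotting position on hypothetical data follows; the study itself works with long reconstructed series, not a nine-winter toy record:

```python
def return_periods(values):
    """Empirical return period for each value via the Weibull plotting
    position T = (n + 1) / rank, with rank 1 for the largest (wettest)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    periods = [0.0] * n
    for rank, i in enumerate(order, start=1):
        periods[i] = (n + 1) / rank
    return periods

# Hypothetical winter precipitation totals (mm) for 9 winters
precip = [210, 150, 305, 180, 260, 120, 330, 200, 170]
T = return_periods(precip)
print(T[6])  # wettest winter (330 mm): T = (9 + 1) / 1 = 10.0
```

A change in climate shows up as the same precipitation total carrying a different return period in, say, the 18th century than in 1951–2000.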

Relevance:

90.00%

Publisher:

Abstract:

Geospatial information systems are used to analyze spatial data to provide decision makers with relevant, up-to-date information. The processing time required to produce this information is a critical component of response time. Despite advances in algorithms and processing power, many "human-in-the-loop" factors remain. Given the limited number of geospatial professionals, it is very important that analysts use their time effectively. Automating common tasks and enabling faster human-computer interaction, without disrupting analysts' workflow or attention, is therefore highly desirable. The following research describes a novel approach to increasing productivity with a wireless, wearable electroencephalograph (EEG) headset within the geospatial workflow.

Relevance:

90.00%

Publisher:

Abstract:

Riparian zones are dynamic, transitional ecosystems between aquatic and terrestrial ecosystems, with well-defined vegetation and soil characteristics. Because of their high variability, developing an all-encompassing definition for riparian ecotones is challenging. However, all riparian ecotones depend on two primary factors: the watercourse and its associated floodplain. Previous approaches to riparian boundary delineation have used fixed-width buffers, but this methodology has proven inadequate, as it takes only the watercourse into consideration and ignores critical geomorphology and the associated vegetation and soil characteristics. Our approach offers advantages over previously used methods by utilizing: the geospatial modeling capabilities of ArcMap GIS; a better sampling technique along the watercourse that can distinguish the 50-year floodplain, which is the optimal hydrologic descriptor of riparian ecotones; the Soil Survey Geographic (SSURGO) and National Wetland Inventory (NWI) databases to distinguish contiguous areas beyond the 50-year floodplain; and land use/cover characteristics associated with the delineated riparian zones. The model utilizes spatial data readily available from federal and state agencies and geospatial clearinghouses. An accuracy assessment was performed to assess the impact of varying the 50-year flood height, changing the DEM spatial resolution (1, 3, 5 and 10 m), and positional inaccuracies in the National Hydrography Dataset (NHD) streams layer on the boundary placement of the delineated variable-width riparian ecotone area. The result of this study is a robust, automated GIS-based model, attached to ESRI ArcMap software, that delineates and classifies variable-width riparian ecotones.
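The core flood-height test behind such a delineation can be sketched as a simple threshold on a DEM. The elevations below are hypothetical, and the actual model additionally draws on SSURGO/NWI data and hydrologically aware processing inside ArcMap; this sketch only shows the water-level comparison itself:

```python
def flood_extent(dem, stream_elev, flood_height):
    """Mark grid cells inundated when water rises flood_height above the stream."""
    water_level = stream_elev + flood_height
    return [[cell <= water_level for cell in row] for row in dem]

# Hypothetical 3x4 DEM (metres); stream surface at 100 m, 50-year flood +2.5 m
dem = [[101.0, 102.8, 104.0, 106.1],
       [100.5, 101.9, 103.2, 105.0],
       [100.2, 102.4, 102.6, 104.7]]
inundated = flood_extent(dem, 100.0, 2.5)
print(sum(cell for row in inundated for cell in row))  # flooded cells -> 5
```

Varying `flood_height` and the DEM resolution, as in the accuracy assessment above, directly changes which cells fall inside the delineated ecotone.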

Relevance:

90.00%

Publisher:

Abstract:

Increasing amounts of clinical research data are collected by manual data entry into electronic source systems and directly from research subjects. For such manually entered source data, common data cleaning methods such as post-entry identification and resolution of discrepancies and double data entry are not feasible. However, the data accuracy rates achieved without these mechanisms may be lower than desired for a particular research use. We evaluated a heuristic usability method for its utility as a tool to independently and prospectively identify data collection form questions associated with data errors. The method had a promising sensitivity of 64% and a specificity of 67%. It was used as described in the usability literature, with no further adaptation or specialization for predicting data errors. We conclude that usability evaluation methodology should be further investigated for use in data quality assurance.
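The reported sensitivity and specificity follow from the standard confusion-matrix definitions. A minimal sketch with hypothetical counts chosen to roughly reproduce the 64%/67% figures (the study's actual counts are not given in this abstract):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: form questions flagged by the usability method
# (positive) vs. questions actually associated with data errors (condition).
sensitivity, specificity = sens_spec(tp=16, fn=9, tn=33, fp=16)
print(round(sensitivity, 2), round(specificity, 2))  # -> 0.64 0.67
```

Here sensitivity measures how many error-prone questions the heuristic method catches, and specificity how many error-free questions it correctly leaves unflagged.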

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND: We describe the setup of a neonatal quality improvement tool and list which peer-reviewed requirements it fulfils and which it does not. We report on the effects observed so far, on how units can identify quality improvement potential, and on how they can measure the effect of changes made to improve quality. METHODS: Application of a prospective longitudinal national cohort data collection that uses algorithms to ensure high data quality (i.e., checks for completeness, plausibility and reliability) and to perform data visualization (Plsek's p-charts and standardized mortality or morbidity ratio (SMR) charts). The collected data allow monitoring of a study population of very low birth-weight (VLBW) infants born from 2009 to 2011 by applying a quality cycle following the steps 'guideline - perform - falsify - reform'. RESULTS: 2025 VLBW live births from 2009 to 2011, representing 96.1% of all VLBW live births in Switzerland, display a similar mortality rate but better morbidity rates when compared to other networks. Data quality in general is high but subject to improvement in some units. Seven measurements display quality improvement potential in individual units. The methods used fulfil several international recommendations. CONCLUSIONS: The Quality Cycle of the Swiss Neonatal Network is a helpful instrument to monitor and gradually help improve the quality of care in a region with high quality standards and low statistical discrimination capacity.
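The SMR charts and p-charts named in METHODS rest on two simple calculations, sketched below with hypothetical unit-level counts (the network's actual algorithms are of course richer, with completeness and plausibility checks feeding them):

```python
import math

def smr(observed, expected):
    """Standardized mortality/morbidity ratio: observed over expected events."""
    return observed / expected

def p_chart_limits(pbar, n):
    """3-sigma control limits for a proportion p-chart with subgroup size n."""
    sigma = math.sqrt(pbar * (1 - pbar) / n)
    return max(0.0, pbar - 3 * sigma), min(1.0, pbar + 3 * sigma)

# Hypothetical unit: 14 deaths observed vs. 10.5 expected from network rates
print(smr(14, 10.5))
# Network-wide mortality proportion 0.12; a unit caring for 60 VLBW infants
lo, hi = p_chart_limits(0.12, 60)
print(lo, hi)
```

A unit whose observed proportion falls outside `(lo, hi)`, or whose SMR confidence interval excludes 1, is flagged as having quality improvement potential rather than common-cause variation.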

Relevance:

90.00%

Publisher:

Abstract:

In land systems, equitably managing trade-offs between planetary boundaries and human development needs represents a grand challenge for sustainability-oriented initiatives. Informing such initiatives requires knowledge about the nexus between land use, poverty and environment. This paper presents results from Lao PDR, where we combined nationwide spatial data on land use types and the environmental state of landscapes with village-level poverty indicators. Our analysis reveals two general but contrasting trends. First, landscapes with paddy or permanent agriculture allow a greater number of people to live in less poverty, but at the price of a decrease in natural vegetation cover. Second, people practising extensive swidden agriculture and living in intact environments are often better off than people in degraded paddy or permanent agriculture. As poverty rates vary more within landscape types than between them, we cannot stipulate a land use–poverty–environment nexus. However, the distinct spatial patterns and configurations of these rates point to other important factors at play. Drawing on ethnicity as a proximate factor for endogenous development potentials and on accessibility as a proximate factor for external influences, we further explore these linkages. Ethnicity is strongly related to poverty in all land use types, almost independently of accessibility, implying that social distance outweighs geographic or physical distance. In turn, accessibility, almost a precondition for poverty alleviation, is mainly beneficial to ethnic majority groups and to people living in paddy or permanent agriculture; these groups are able to translate improved accessibility into poverty alleviation. Our results show that the concurrence of external influences with local, highly contextual development potentials is key to shaping outcomes of the land use–poverty–environment nexus. By addressing such leverage points, these findings can help guide more effective development interventions. At the same time, they point to the need in land change science to better integrate the understanding of place-based land indicators with process-based drivers of land use change.

Relevance:

90.00%

Publisher:

Abstract:

A wide variety of spatial data collection efforts are ongoing throughout local, state and federal agencies, private firms and non-profit organizations. Each effort is established for a different purpose, but organizations and individuals often collect and maintain the same or similar information. The United States federal government has undertaken many initiatives, such as the National Spatial Data Infrastructure, the National Map and Geospatial One-Stop, to reduce duplicative spatial data collection and to promote the coordinated use, sharing and dissemination of spatial data nationwide. A key premise in most of these initiatives is that no national government will be able to gather and maintain more than a small percentage of the geographic data that users want and need. Thus, national initiatives typically depend on the cooperation of those already gathering spatial data, and of those using GIS to meet specific needs, to help construct and maintain these spatial data infrastructures and geo-libraries for their nations (Onsrud 2001). Some of the impediments to widespread spatial data sharing are well known from directly asking GIS data producers why they are not currently involved in creating datasets in common or compatible formats, documenting their datasets in a standardized metadata format, or making their datasets more readily available to others through data clearinghouses or geo-libraries. The research described in this thesis addresses the impediments to wide-scale spatial data sharing faced by GIS data producers and explores a new conceptual data-sharing approach, the Public Commons for Geospatial Data, that supports user-friendly metadata creation, open-access licenses, archival services and documentation of the parent lineage of the contributors and value-adders of digital spatial data sets.

Relevance:

90.00%

Publisher:

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis

We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to data processing method. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 to 5019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70–5019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis

Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This current lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractor's Perceptions of Factors Impacting the Accuracy of Abstracted Data

Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors not found in the literature and differed from the literature on 5 factors in the top 25%. The Delphi results refuted seven factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms

Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and exceedingly so for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping or calculation. The representational analysis used here can be applied to identify data elements with high cognitive demands.
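The pooled error rates above are normalized to errors per 10,000 fields so that studies of different sizes can be compared. A minimal sketch of that conversion, with hypothetical counts:

```python
def errors_per_10k(n_errors, n_fields):
    """Normalize an observed error count to errors per 10,000 fields."""
    return 10_000 * n_errors / n_fields

# Hypothetical study: 347 discrepant values found in 18,500 abstracted fields
rate = errors_per_10k(347, 18_500)
print(round(rate))  # -> 188
```

On this scale the pooled range quoted above spans more than three orders of magnitude, from 2 (careful double entry) to 5019 (worst-case medical record abstraction).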