875 results for information quality
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015
Abstract:
This paper presents a framework for considering quality control of volunteered geographic information (VGI). Different issues need to be considered during the conception, acquisition and post-acquisition phases of VGI creation, including collecting metadata on the volunteer, providing suitable training, giving corrective feedback during the mapping process and using control data, among others. Two examples of VGI data collection are then considered with respect to this quality control framework: collection by National Mapping Agencies and by the most recent Geo-Wiki tool, a game called Cropland Capture. Although good practices are beginning to emerge, there is still a need for the development and sharing of best practices, especially if VGI is to be integrated with authoritative map products or used for calibration and/or validation of land cover in the future.
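As a concrete illustration of one element of such a framework, the minimal sketch below (our construction, not the paper's; the Contribution class and the use of a simple agreement rate are assumptions) scores a volunteer's classifications against control data so that corrective feedback can be triggered during mapping, in the spirit of Cropland Capture:

```python
# Sketch: scoring a volunteer against known control points to drive feedback.
from dataclasses import dataclass

@dataclass
class Contribution:
    location_id: str
    label: str          # e.g. "cropland" / "non-cropland", as in Cropland Capture

def score_against_controls(contributions, controls):
    """Return the volunteer's agreement rate on locations with known answers."""
    scored = [c for c in contributions if c.location_id in controls]
    if not scored:
        return None  # no control points seen yet; defer feedback
    hits = sum(1 for c in scored if c.label == controls[c.location_id])
    return hits / len(scored)

controls = {"p1": "cropland", "p2": "non-cropland"}
contribs = [Contribution("p1", "cropland"), Contribution("p2", "cropland")]
rate = score_against_controls(contribs, controls)
print(f"agreement with control data: {rate:.0%}")  # 50% -> trigger corrective feedback
```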
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2016
Abstract:
Citizens are increasingly becoming an important source of geographic information, sometimes entering domains that had until recently been the exclusive realm of authoritative agencies. This activity has a very diverse character as it can, amongst other things, be active or passive, involve spatial or aspatial data and the data provided can be variable in terms of key attributes such as format, description and quality. Unsurprisingly, therefore, there are a variety of terms used to describe data arising from citizens. In this article, the expressions used to describe citizen sensing of geographic information are reviewed and their use over time explored, prior to categorizing them and highlighting key issues in the current state of the subject. The latter involved a review of 100 Internet sites with particular focus on their thematic topic, the nature of the data and issues such as incentives for contributors. This review suggests that most sites involve active rather than passive contribution, with citizens typically motivated by the desire to aid a worthy cause, often receiving little training. As such, this article provides a snapshot of the role of citizens in crowdsourcing geographic information and a guide to the current status of this rapidly emerging and evolving subject.
Abstract:
In this work we deal with video streams over TCP networks and propose an alternative to the widely used and accepted peak signal-to-noise ratio (PSNR), owing to the limitations of this metric in the presence of temporal errors. A test-bed was created to simulate buffer under-run in scalable video streams, and the pauses produced as a result of the buffer under-run were inserted into the video before it was used in subjective testing. The pause intensity metric proposed in [1] was compared with the subjective results, and it was shown that, in spite of reductions in frame rate and resolution, a correlation with pause intensity still exists. Given these conclusions, the metric may be employed for layer selection in scalable video streams. © 2011 IEEE.
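The limitation at issue is easy to see in code: PSNR compares frames spatially and is blind to playback stalls. The sketch below computes frame-wise PSNR from its standard definition alongside a simplified stand-in for pause intensity (the actual metric in [1] combines pause duration and frequency; the proxy here is our assumption, not the paper's formula):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Standard PSNR = 10 * log10(peak^2 / MSE); purely spatial."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

def pause_intensity_proxy(pause_durations, session_duration):
    """Crude proxy: fraction of session spent paused, weighted by pause count."""
    return (sum(pause_durations) / session_duration) * len(pause_durations)

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(psnr(frame, frame))                         # inf: frames identical, yet ...
print(pause_intensity_proxy([1.5, 2.0], 60.0))    # ... playback stalled twice
```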
Abstract:
Sensing technology is a key enabler of the Internet of Things (IoT) and produces huge volumes of data that feed the Big Data paradigm. Modelling of sensing information is an important and challenging topic, one that fundamentally influences the quality of smart city systems. In this paper, the author discusses the relevant technologies and information modelling in the context of smart cities and, in particular, reports an investigation of how to model sensing and location information in order to support smart city development.
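To make the modelling question concrete, here is one plausible minimal representation of a sensing observation that treats location and quality as first-class attributes; the field names are illustrative assumptions, not the paper's schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Observation:
    sensor_id: str
    phenomenon: str            # what is measured, e.g. "PM2.5"
    value: float
    unit: str
    observed_at: datetime
    lat: float                 # location modelled explicitly, not as an afterthought
    lon: float
    quality_flag: str = "unchecked"   # hook for downstream quality assessment

obs = Observation("air-042", "PM2.5", 18.3, "ug/m3",
                  datetime.now(timezone.utc), 51.5074, -0.1278)
print(obs)
```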
Abstract:
It is often assumed (for analytical convenience, but also in accordance with common intuition) that consumer preferences are convex. In this paper, we consider circumstances under which such preferences are (or are not) optimal. In particular, we investigate a setting in which goods possess some hidden quality with known distribution, and the consumer chooses a bundle of goods that maximizes the probability that he receives some threshold level of this quality. We show that if the threshold is small relative to consumption levels, preferences will tend to be convex; whereas the opposite holds if the threshold is large. Our theory helps explain a broad spectrum of economic behavior (including, in particular, certain common commercial advertising strategies), suggesting that sensitivity to information about thresholds is deeply rooted in human psychology.
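The paper's comparative static can be checked with a small Monte Carlo sketch (our construction, not the paper's model): with i.i.d. normally distributed hidden qualities, a diversified bundle maximizes the probability of clearing a low threshold, while concentrating on a single good wins when the threshold is high:

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws, mu, sigma = 200_000, 1.0, 1.0

def hit_prob(weights, t):
    """P(sum_i w_i * Q_i >= t) with hidden qualities Q_i ~ N(mu, sigma^2) i.i.d."""
    q = rng.normal(mu, sigma, size=(n_draws, len(weights)))
    return np.mean(q @ np.asarray(weights) >= t)

diversified = [0.5, 0.5]    # the bundle convex preferences would pick
concentrated = [1.0, 0.0]

for t in (0.5, 1.5):        # threshold below vs above expected total quality (1.0)
    print(t, hit_prob(diversified, t), hit_prob(concentrated, t))
# low t: diversified wins (lower variance); high t: concentrated wins (higher variance)
```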
Abstract:
We have limited knowledge of how the role of trust in information use may differ between intra- and inter-organizational relationships. This study addresses this question by investigating how trust leads to information use. Data from 338 intra-organizational and a sub-sample of 158 inter-organizational dyadic information-exchange relationships showed that trust is an important driver of the utilization of market information in both cases. Trust has no direct relationship with information use; instead, it has a strong indirect effect through a mediator, the perceived quality of the information. The effects of trust on the use of information obtained through intra- and inter-organizational dyadic relationships proved to be similar.
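The mediation pattern reported here can be sketched on simulated data (not the authors' dataset; the effect sizes are invented) using the standard regression-based decomposition: the direct path from trust to use is near zero while the indirect path through perceived quality is substantial:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 338
trust = rng.normal(size=n)
quality = 0.6 * trust + rng.normal(scale=0.8, size=n)     # a-path: trust -> quality
use = 0.7 * quality + rng.normal(scale=0.8, size=n)       # b-path only: no direct effect

# OLS of use on trust and quality together: the trust coefficient is the direct effect
X = np.column_stack([np.ones(n), trust, quality])
coef, *_ = np.linalg.lstsq(X, use, rcond=None)
a = np.polyfit(trust, quality, 1)[0]                      # a-path estimate
print(f"direct effect of trust:  {coef[1]:+.2f}")         # ~ 0
print(f"indirect effect (a * b): {a * coef[2]:+.2f}")     # ~ 0.6 * 0.7
```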
Abstract:
Over the last couple of years there has been an ongoing debate on how sales managers contribute to organizational value. Direct measurement of the relationship between sales-marketing interface quality and company performance is compromised, as company performance is influenced by a plethora of other factors. We advocate that the use of sales information is the missing link between sales-marketing relationship quality and organizational outcomes. We propose and empirically test a model of how sales-marketing interface quality affects managerial use of sales information, which in turn leads to enhanced organizational performance. We found that marketing managers rely on sales information if they consider their sales counterparts trustworthy. Integration between the sales and marketing functions contributes to a trust-based relationship.
Abstract:
In the future, competitors will have more and more opportunities to buy the same information; a company's competitiveness will therefore depend not primarily on how much information it possesses, but on how well it can "translate" that information into its own language. This study aims to examine the factors that have the most significant impact on the degree to which market studies are utilised by companies. Most of the work in this area has studied the use of information in strategic decisions a priori, leaving its role in everyday decision-making and knowledge creation less explored. This paper, while reflecting on the findings of research on organisational theories of information processing, aims to bridge this gap. It proposes and tests a new conceptual framework that examines the use of managerial market research information in decision-making and knowledge creation within one single model. Survey data collected from the top-income business enterprises in Hungary indicate that market research findings are efficiently incorporated into the marketing information system only if the marketing manager trusts the researcher and believes that the market study is of high quality. Decision-makers are more likely to learn from market studies that facilitate the resolution of a specific problem than from descriptive studies of a more general nature.
Abstract:
This thesis chronicles the design and implementation of an Internet/Intranet- and database-based application for the quality control of hurricane surface wind observations. A quality control session consists of selecting the desired observation types to be viewed and determining a storm-track-based time window for viewing the data. All observations of the selected types are then plotted in a storm-relative view for the chosen time window, and geography is positioned for the storm-center time about which an objective analysis can be performed. Users then make decisions about data validity through visual nearest-neighbor comparison and inspection. The project employed an object-oriented, iterative development method from beginning to end, and its implementation primarily features the Java programming language.
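The storm-relative plotting step can be sketched as follows (our reconstruction for illustration; the thesis implementation was in Java and its actual interfaces are not shown here): interpolate the storm center at each observation time and express the observation position as an offset from it:

```python
import numpy as np

def storm_center(track_times, track_lats, track_lons, t):
    """Linearly interpolate the storm-track position at time t (hours)."""
    return (np.interp(t, track_times, track_lats),
            np.interp(t, track_times, track_lons))

def storm_relative(obs_lat, obs_lon, obs_t, track_times, track_lats, track_lons):
    c_lat, c_lon = storm_center(track_times, track_lats, track_lons, obs_t)
    # crude equirectangular offset in km; adequate for a plotting view
    dy = (obs_lat - c_lat) * 111.0
    dx = (obs_lon - c_lon) * 111.0 * np.cos(np.radians(c_lat))
    return dx, dy

track_t, track_lat, track_lon = [0, 6, 12], [24.0, 25.0, 26.2], [-80.0, -81.0, -82.5]
print(storm_relative(25.3, -81.2, 7.0, track_t, track_lat, track_lon))
```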
Abstract:
The population of older adults is rapidly increasing, creating a need for community services that assist vulnerable older adults in maintaining independence and quality of life. Recent evidence confirms the importance of food and nutrition in reaching this objective. The Elderly Nutrition Program (ENP) is part of a system of federally funded community-based programs, authorized through the Older Americans Act. ENP services include the home-delivered meals program, which targets frail homebound older adults at nutritional risk. Traditionally, ENP services provide a noon meal 5 days/week. This study evaluated the impact of expanding the home-delivered meals service to include breakfast + lunch on the nutritional status, quality of life and health care utilization of program participants. This cross-sectional study compared 2 groups. The Breakfast group (n = 167) received a home-delivered breakfast + lunch, 5 days/week. The Comparison group (n = 214) received lunch 5 days/week. Participants, recruited from 5 ENP programs, formed a geographically, racially/ethnically diverse sample. Participants ranged in age from 60–100 years; they were functionally limited, at high nutritional risk, low income, lived alone and had difficulty shopping or preparing food. Participant data were collected through in-home interviews and program records. A 24-hour food recall and information on participant demographics, malnutrition risk, functional status, health care use, and applicable quality of life factors were obtained. Service and cost data were collected from program administrators. Breakfast group participants had greater energy/nutrient intakes (p < .05), fewer health care contacts (p < .05), and greater quality of life measured as food security (p < .05) and fewer depressive symptoms (p < .05) than comparison group participants. These benefits were achieved for $1.30/person/day. The study identified links from improvements in nutritional status to enhanced quality of life to diminished health care utilization and expenditures. A model of health, loneliness, food enjoyment, food insecurity, and depression as factors contributing to quality of life for this population was proposed and tested (p < .01). The breakfast service is an inexpensive addition to traditional home-delivered meals services and can improve the lives of frail homebound older adults. Agencies should be encouraged to expand meals programs to include a breakfast service.
Abstract:
This research presents several components encompassing the scope of data partitioning and replication management in a distributed GIS database. Modern Geographic Information Systems (GIS) databases are often large and complicated, so data partitioning and replication management problems need to be addressed in developing an efficient and scalable solution. Part of the research is to study the patterns of geographical raster data processing and to propose algorithms to improve the availability of such data. These algorithms and approaches target the granularity of geographic data objects as well as data partitioning in geographic databases, to achieve high data availability and Quality of Service (QoS) in distributed data delivery and processing. To achieve this goal, a dynamic, real-time approach for mosaicking digital images of different temporal and spatial characteristics into tiles is proposed. This dynamic approach reuses digital images on demand and generates mosaicked tiles only for the required region, according to the user's requirements such as resolution, temporal range, and target bands, thereby reducing redundancy in storage and utilizing available computing and storage resources more efficiently. Another part of the research pursued methods for efficiently acquiring GIS data from external heterogeneous databases and Web services, as well as end-user GIS data delivery enhancements, automation, and 3D virtual reality presentation. Vast numbers of computing, network, and storage resources on the Internet sit idle or underutilized. The proposed "Crawling Distributed Operating System" (CDOS) approach employs such resources and creates benefits for the hosts that lend their CPU, network, and storage resources for use in the GIS database context. The results of this dissertation demonstrate effective ways to develop a highly scalable GIS database. The approach developed in this dissertation has resulted in the creation of the TerraFly GIS database, which is used by the US government, researchers, and the general public to facilitate Web access to remotely sensed imagery and GIS vector information.
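The on-demand mosaicking idea can be sketched as a selection step over an image catalog (the SourceImage structure and field names below are our illustrative assumptions, not TerraFly internals): only sources intersecting the requested region, time range, and resolution bound are pulled in and composited into the tile:

```python
from dataclasses import dataclass

@dataclass
class SourceImage:
    bbox: tuple          # (min_x, min_y, max_x, max_y)
    acquired: int        # e.g. acquisition year
    resolution_m: float

def intersects(a, b):
    """True when two bounding boxes overlap."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def select_sources(images, region, t_range, max_res):
    """Pick only the sources needed for this tile request."""
    return [im for im in images
            if intersects(im.bbox, region)
            and t_range[0] <= im.acquired <= t_range[1]
            and im.resolution_m <= max_res]

catalog = [SourceImage((0, 0, 10, 10), 2004, 1.0),
           SourceImage((8, 0, 20, 10), 2006, 1.0),
           SourceImage((0, 0, 20, 20), 1999, 30.0)]
print(select_sources(catalog, region=(5, 5, 15, 8), t_range=(2000, 2010), max_res=5.0))
# -> the two 1 m images; the selected set would then be composited into the tile
```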
Abstract:
The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data to encourage its global adoption. Current techniques to store semistructured documents either map them to relational databases, or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree-structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed down the large-scale adoption of XML into actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that undermine the massive adoption of XML. Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge to leverage semistructured data is to perform effective information discovery on such data. Previous works have addressed this problem in a generic (i.e. domain independent) way, but this process can be improved if knowledge about the specific domain is taken into consideration. This dissertation had two general goals: The first goal was to devise novel techniques to efficiently store and process semistructured documents. This goal had two specific aims: We proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives. We developed a Double-Lazy Parser for semistructured documents which introduces lazy behavior in both the pre-parsing and progressive parsing phases of the standard Document Object Model's parsing mechanism. The second goal was to construct a user-friendly and efficient engine for performing Information Discovery over domain-specific semistructured documents. This goal also had two aims: We presented a framework that exploits the domain-specific knowledge to improve the quality of the information discovery process by incorporating domain ontologies. We also proposed meaningful evaluation metrics to compare the results of search systems over semistructured documents.
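The lazy-parsing direction the dissertation extends can be illustrated with a minimal stream-parsing sketch (this is not the proposed Double-Lazy Parser, just the underlying idea): subtrees are materialized only when needed and discarded immediately, rather than building the full DOM up front:

```python
import io
import xml.etree.ElementTree as ET

doc = io.StringIO(
    "<lib><book id='1'><title>A</title></book>"
    "<book id='2'><title>B</title></book></lib>")

for event, elem in ET.iterparse(doc, events=("end",)):
    if elem.tag == "book":
        if elem.get("id") == "2":              # touch only the subtree we need
            print(elem.find("title").text)     # -> B
        elem.clear()                           # drop the subtree; memory stays flat
```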
Abstract:
An iterative travel time forecasting scheme, named the Advanced Multilane Prediction based Real-time Fastest Path (AMPRFP) algorithm, is presented in this dissertation. This scheme is derived from the conventional kernel estimator based prediction model by associating real-time nonlinear impacts caused by neighboring arcs' traffic patterns with the historical traffic behaviors. The AMPRFP algorithm is evaluated by predicting the travel time of congested arcs in the urban area of Jacksonville City. Experiment results illustrate that the proposed scheme significantly reduces both the relative mean error (RME) and the root-mean-squared error (RMSE) of the predicted travel time. To obtain high quality real-time traffic information, which is essential to the performance of the AMPRFP algorithm, a data clean scheme enhanced empirical learning (DCSEEL) algorithm is also introduced. This novel method investigates the correlation between distance and direction in the geometrical map, which is not considered in existing fingerprint localization methods. Specifically, empirical learning methods are applied to minimize the error in the estimated distance, and a direction filter is developed to remove joints that negatively influence localization accuracy. Synthetic experiments in urban, suburban and rural environments are designed to evaluate the performance of the DCSEEL algorithm in determining the cellular probe's position. The results show that the cellular probe's localization accuracy can be notably improved by the DCSEEL algorithm. Additionally, a new fast correlation technique is developed to overcome the time-efficiency problem of the existing correlation-based floating car data (FCD) technique. The matching process is transformed into a 1-dimensional (1-D) curve matching problem, and the Fast Normalized Cross-Correlation (FNCC) algorithm is introduced to supersede the Pearson product-moment correlation coefficient (PMCC) algorithm in order to meet the real-time requirement of the FCD method. The fast correlation technique shows a significant improvement in reducing computational cost without affecting the accuracy of the matching process.
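The substitution of FNCC for PMCC can be sketched as follows; this follows the standard fast normalized cross-correlation construction (an FFT numerator plus running window sums), which we assume is the flavor used here, so the details are illustrative rather than the dissertation's exact algorithm:

```python
import numpy as np

def fncc(ref, tpl):
    """Normalized cross-correlation of tpl against every window of ref.
    The FFT computes the numerator in O(n log n); cumulative sums give the
    per-window means and energies, avoiding a direct PMCC at every lag."""
    ref, tpl = np.asarray(ref, float), np.asarray(tpl, float)
    n, m = len(ref), len(tpl)
    tz = tpl - tpl.mean()                       # zero-mean template
    size = n + m - 1
    corr = np.fft.irfft(np.fft.rfft(ref, size) * np.fft.rfft(tz[::-1], size), size)
    num = corr[m - 1:n]                         # numerator at lags 0 .. n-m
    c1 = np.concatenate(([0.0], np.cumsum(ref)))
    c2 = np.concatenate(([0.0], np.cumsum(ref ** 2)))
    win_sum = c1[m:] - c1[:-m]                  # window sums via cumsum
    win_sq = c2[m:] - c2[:-m]
    denom = np.sqrt(np.maximum(win_sq - win_sum ** 2 / m, 1e-12) * (tz @ tz))
    return num / denom

rng = np.random.default_rng(2)
ref = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.normal(size=400)
tpl = ref[150:180]                              # probe curve cut from the reference
print(int(np.argmax(fncc(ref, tpl))))           # -> 150: best-matching lag, found fast
```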