9 results for Data handling

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

70.00%

Abstract:

To date, the processing of wildlife location data has relied on a diversity of software and file formats. Data management and the subsequent spatial and statistical analyses were undertaken in multiple steps, involving many time-consuming importing/exporting phases. Recent technological advancements in tracking systems have made large, continuous, high-frequency datasets of wildlife behavioural data available, such as those derived from the global positioning system (GPS) and other animal-attached sensor devices. These data can be further complemented by a wide range of other information about the animals' environment. Management of these large and diverse datasets for modelling animal behaviour and ecology can prove challenging, slowing down analysis and increasing the probability of mistakes in data handling. We address these issues by critically evaluating the requirements for good management of GPS data for wildlife biology. We highlight that dedicated data management tools and expertise are needed. We explore current research in wildlife data management. We suggest a general direction of development, based on a modular software architecture with a spatial database at its core, where interoperability, data model design and integration with remote-sensing data sources play an important role in successful GPS data handling.
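A minimal sketch of the database-centred workflow suggested above, assuming a PostgreSQL/PostGIS back end; the table layout, column names and input file are illustrative only and are not taken from the paper:

```python
# Minimal sketch: a spatial database as the single core store of a GPS-tracking
# workflow, avoiding repeated import/export steps between separate tools.
# Assumes a PostgreSQL server with the PostGIS extension; all names are
# illustrative, not the paper's data model.
import csv
import psycopg2

conn = psycopg2.connect("dbname=wildlife user=tracker")  # hypothetical connection
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS gps_fix (
        fix_id    serial PRIMARY KEY,
        animal_id text        NOT NULL,
        acq_time  timestamptz NOT NULL,
        geom      geometry(Point, 4326)   -- WGS84 longitude/latitude
    );
""")

with open("collar_download.csv") as fh:            # hypothetical raw collar export
    for row in csv.DictReader(fh):
        cur.execute(
            """INSERT INTO gps_fix (animal_id, acq_time, geom)
               VALUES (%s, %s, ST_SetSRID(ST_MakePoint(%s, %s), 4326))""",
            (row["animal_id"], row["timestamp"],
             float(row["longitude"]), float(row["latitude"])),
        )

conn.commit()
cur.close()
conn.close()
```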

Relevance:

60.00%

Abstract:

The performance of the surface zone of concrete is acknowledged as a major factor governing the rate of deterioration of reinforced concrete structures, as it provides the only barrier to the ingress of water containing dissolved ionic species such as chlorides which, ultimately, initiate corrosion of the reinforcement. In-situ monitoring of cover-zone concrete is therefore critical in attempting to make realistic predictions as to the in-service performance of the structure. To this end, this paper presents developments in a remote interrogation system to allow continuous, real-time monitoring of the cover-zone concrete from an office setting. Use is made of a multi-electrode array embedded within cover-zone concrete to acquire discretized electrical resistivity and temperature measurements, with both parameters monitored spatially and temporally. On-site instrumentation, which allows remote interrogation of concrete samples placed at a marine exposure site, is detailed, together with data handling and processing procedures. Site measurements highlight the influence of temperature on electrical resistivity, and an Arrhenius-based temperature correction protocol is developed using on-site measurements to standardize resistivity data to a reference temperature; this is an advancement over the use of laboratory-based procedures. The testing methodology and interrogation system represent a robust, low-cost and high-value technique which could be deployed for intelligent monitoring of reinforced concrete structures.
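The Arrhenius-based standardization described above can be sketched in a few lines; the apparent activation energy and reference temperature below are placeholder values, not the site-calibrated constants developed in the paper:

```python
# Minimal sketch of an Arrhenius-type temperature correction that standardises
# a concrete resistivity reading to a reference temperature.  The activation
# energy and reference temperature are illustrative placeholders only.
import math

GAS_CONSTANT = 8.314   # J/(mol*K)
E_A = 20_000.0         # J/mol, assumed apparent activation energy
T_REF = 293.15         # K (20 degC reference temperature)

def resistivity_at_reference(rho_measured, temp_c):
    """Correct a resistivity reading (ohm*m) taken at temp_c (degC) to T_REF."""
    temp_k = temp_c + 273.15
    return rho_measured * math.exp((E_A / GAS_CONSTANT) * (1.0 / T_REF - 1.0 / temp_k))

# Example: a reading of 85 ohm*m logged by the embedded array at 28 degC
print(round(resistivity_at_reference(85.0, 28.0), 1))
```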

Relevance:

60.00%

Abstract:

Free-roaming dogs (FRD) represent a potential threat to the quality of life in cities from an ecological, social and public health point of view. One of the most urgent concerns is the role of uncontrolled dogs as reservoirs of infectious diseases transmittable to humans and, above all, rabies. An estimate of the FRD population size and characteristics in a given area is the first step for any relevant intervention programme. Direct count methods are still prominent because of their non-invasive approach; information technologies can support such methods, facilitating data collection and allowing for more efficient data handling. This paper presents a new framework for data collection using a topological algorithm implemented as an ArcScript in ESRI® ArcGIS software, which allows for a random selection of the sampling areas. It also supplies a mobile phone application for Android® operating system devices which integrates the Global Positioning System (GPS) and Google Maps™. The potential of such a framework was tested in two Italian regions. Coupling technological and innovative solutions with common counting methods facilitates data collection and transcription. It also paves the way to future applications, which could support dog population management systems.
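The random selection of sampling areas can be illustrated independently of the ArcScript implementation; the grid-based stand-in below (cell size, extent and sampling fraction are arbitrary) simply draws a random subset of cells covering a study area:

```python
# Minimal sketch of randomly selecting sampling areas for a street count of
# free-roaming dogs.  A regular grid stands in for the topological selection
# performed by the ArcScript tool described in the paper; all values are
# illustrative assumptions.
import random

def make_grid(xmin, ymin, xmax, ymax, cell):
    """Return the lower-left corners of the grid cells covering the extent."""
    cells = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            cells.append((x, y))
            x += cell
        y += cell
    return cells

random.seed(42)                                           # reproducible draw
grid = make_grid(0.0, 0.0, 10_000.0, 8_000.0, 500.0)      # 500 m cells, made-up extent
sampled = random.sample(grid, k=max(1, len(grid) // 10))  # roughly 10% of cells
print(f"{len(sampled)} of {len(grid)} cells selected for field counts")
```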

Relevance:

30.00%

Abstract:

This paper is concerned with handling uncertainty as part of the analysis of data from a medical study. The study is investigating connections between the birth weight of babies and the dietary intake of their mothers. Bayesian belief networks were used in the analysis. Their perceived benefits include (i) an ability to represent the evidence emerging from the evolving study, dealing effectively with the inherent uncertainty involved; (ii) providing a way of representing evidence graphically to facilitate analysis and communication with clinicians; (iii) helping in the exploration of the data to reveal undiscovered knowledge; and (iv) providing a means of developing an expert system application.
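A toy, hand-rolled illustration of the kind of probabilistic query a belief network supports; the variables, states and probabilities are invented for illustration and are not the study's network:

```python
# Toy two-node belief network: maternal dietary intake -> birth weight.
# All structure and probabilities are invented; they do not reproduce the
# network built for the study.

p_diet = {"adequate": 0.7, "poor": 0.3}                  # P(diet)
p_bw_given_diet = {                                      # P(birth_weight | diet)
    "adequate": {"normal": 0.90, "low": 0.10},
    "poor":     {"normal": 0.75, "low": 0.25},
}

def posterior_diet_given_bw(bw):
    """P(diet | birth_weight = bw) by direct enumeration (Bayes' rule)."""
    joint = {d: p_diet[d] * p_bw_given_diet[d][bw] for d in p_diet}
    total = sum(joint.values())
    return {d: round(v / total, 3) for d, v in joint.items()}

# What does observing a low birth weight suggest about maternal diet?
print(posterior_diet_given_bw("low"))
```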

Relevance:

30.00%

Abstract:

In studies of radiation-induced DNA fragmentation and repair, analytical models may provide rapid and easy-to-use methods to test simple hypotheses regarding the breakage and rejoining mechanisms involved. The random breakage model, according to which lesions are distributed uniformly and independently of each other along the DNA, has been the model most used to describe spatial distribution of radiation-induced DNA damage. Recently several mechanistic approaches have been proposed that model clustered damage to DNA. In general, such approaches focus on the study of initial radiation-induced DNA damage and repair, without considering the effects of additional (unwanted and unavoidable) fragmentation that may take place during the experimental procedures. While most approaches, including measurement of total DNA mass below a specified value, allow for the occurrence of background experimental damage by means of simple subtractive procedures, a more detailed analysis of DNA fragmentation necessitates a more accurate treatment. We have developed a new, relatively simple model of DNA breakage and the resulting rejoining kinetics of broken fragments. Initial radiation-induced DNA damage is simulated using a clustered breakage approach, with three free parameters: the number of independently located clusters, each containing several DNA double-strand breaks (DSBs), the average number of DSBs within a cluster (multiplicity of the cluster), and the maximum allowed radius within which DSBs belonging to the same cluster are distributed. Random breakage is simulated as a special case of the DSB clustering procedure. When the model is applied to the analysis of DNA fragmentation as measured with pulsed-field gel electrophoresis (PFGE), the hypothesis that DSBs in proximity rejoin at a different rate from that of sparse isolated breaks can be tested, since the kinetics of rejoining of fragments of varying size may be followed by means of computer simulations. The problem of how to account for background damage from experimental handling is also carefully considered. We have shown that the conventional procedure of subtracting the background damage from the experimental data may lead to erroneous conclusions during the analysis of both initial fragmentation and DSB rejoining. Despite its relative simplicity, the method presented allows both the quantitative and qualitative description of radiation-induced DNA fragmentation and subsequent rejoining of double-stranded DNA fragments. (C) 2004 by Radiation Research Society.
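A minimal simulation of the clustered-breakage idea; the genome length, parameter values and uniform placement of breaks within a cluster are assumptions made for illustration, and the fitting to PFGE data described in the paper is not reproduced:

```python
# Minimal sketch of clustered DNA double-strand break (DSB) placement using the
# three free parameters described above: number of clusters, mean DSBs per
# cluster, and maximum cluster radius.  Values and placement rules are
# illustrative assumptions only.
import numpy as np

GENOME_LENGTH = 100_000_000   # bp, assumed for illustration

def simulate_dsbs(n_clusters, multiplicity, max_radius, rng):
    """Return sorted DSB positions.  Random breakage is recovered as the
    special case multiplicity == 1 and max_radius == 0."""
    positions = []
    for centre in rng.uniform(0, GENOME_LENGTH, size=n_clusters):
        # One break at the cluster centre plus extra breaks placed uniformly
        # within the allowed radius (an assumed placement rule).
        n_extra = rng.poisson(max(multiplicity - 1, 0))
        offsets = rng.uniform(-max_radius, max_radius, size=n_extra)
        breaks = np.clip(np.concatenate(([centre], centre + offsets)), 0, GENOME_LENGTH)
        positions.extend(breaks.tolist())
    return sorted(positions)

rng = np.random.default_rng(1)
clustered = simulate_dsbs(n_clusters=50, multiplicity=3, max_radius=2_000_000, rng=rng)
uniform = simulate_dsbs(n_clusters=150, multiplicity=1, max_radius=0, rng=rng)
fragment_lengths = np.diff([0.0] + clustered + [GENOME_LENGTH])
print(len(clustered), "clustered DSBs,", len(fragment_lengths), "fragments")
```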

Relevance:

30.00%

Abstract:

The member states of the European Union are faced with the challenges of handling “big data” as well as with a growing impact of the supranational level. Given that the success of efforts at European level strongly depends on corresponding national and local activities, i.e., the quality of implementation and the degree of consistency, this chapter centers upon the coherence of European strategies and national implementations concerning the reuse of public sector information. Taking the City of Vienna’s open data activities as an illustrative example, we seek an answer to the question whether and to what extent developments at European level and other factors have an effect on local efforts towards open data. We find that the European Commission’s ambitions are driven by a strong economic argumentation, while the efforts of the City of Vienna have only very little to do with the European orientation and are rather dominated by lifestyle and administrative reform arguments. Hence, we observe a decoupling of supranational strategies and national implementation activities. The very reluctant attitude at Austrian federal level might be one reason for this, nationally induced barriers—such as the administrative culture—might be another. In order to enhance the correspondence between the strategies of the supranational level and those of the implementers at national and regional levels, the strengthening of soft law measures could be promising.

Relevance:

30.00%

Abstract:

Retrospective clinical datasets are often characterized by a relatively small sample size and many missing data. In this case, a common way of handling the missingness consists in discarding from the analysis patients with missing covariates, further reducing the sample size. Alternatively, if the mechanism that generated the missing data allows, incomplete data can be imputed on the basis of the observed data, avoiding the reduction of the sample size and allowing complete-data methods to be applied later on. Moreover, methodologies for data imputation might depend on the particular purpose and might achieve better results by considering specific characteristics of the domain. The problem of missing data treatment is studied in the context of survival tree analysis for the estimation of a prognostic patient stratification. Survival tree methods usually address this problem by using surrogate splits, that is, splitting rules that use other variables yielding similar results to the original ones. Instead, our methodology consists in modeling the dependencies among the clinical variables with a Bayesian network, which is then used to perform data imputation, thus allowing the survival tree to be applied on the completed dataset. The Bayesian network is directly learned from the incomplete data using a structural expectation–maximization (EM) procedure in which the maximization step is performed with an exact anytime method, so that the only source of approximation is due to the EM formulation itself. On both simulated and real data, our proposed methodology usually outperformed several existing methods for data imputation, and the imputation so obtained improved the stratification estimated by the survival tree (especially with respect to using surrogate splits).
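A toy sketch of the imputation step only; the two-variable network, states and probabilities are invented, and the structural EM learning and exact anytime maximization used in the paper are not reproduced here:

```python
# Toy sketch: fill a missing covariate from a (given) Bayesian network before a
# downstream analysis such as a survival tree.  The two-node network and all
# probabilities are invented for illustration; the paper instead learns the
# network from the incomplete data with structural EM.

# P(biomarker | stage) for two hypothetical binary clinical variables
p_marker_given_stage = {
    "early":    {"negative": 0.8, "positive": 0.2},
    "advanced": {"negative": 0.3, "positive": 0.7},
}

def impute_marker(record):
    """Fill a missing 'biomarker' with its most probable state given the
    observed 'stage' (conditional mode); observed values are left untouched."""
    if record.get("biomarker") is None:
        cpt = p_marker_given_stage[record["stage"]]
        record["biomarker"] = max(cpt, key=cpt.get)
    return record

patients = [
    {"stage": "advanced", "biomarker": None},       # missing covariate
    {"stage": "early",    "biomarker": "positive"},  # fully observed
]
print([impute_marker(p) for p in patients])
```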

Relevance:

30.00%

Abstract:

Perfect information is seldom available to man or machines due to uncertainties inherent in real world problems. Uncertainties in geographic information systems (GIS) stem from either vague/ambiguous or imprecise/inaccurate/incomplete information and it is necessary for GIS to develop tools and techniques to manage these uncertainties. There is a widespread agreement in the GIS community that although GIS has the potential to support a wide range of spatial data analysis problems, this potential is often hindered by the lack of consistency and uniformity. Uncertainties come in many shapes and forms, and processing uncertain spatial data requires a practical taxonomy to aid decision makers in choosing the most suitable data modeling and analysis method. In this paper, we: (1) review important developments in handling uncertainties when working with spatial data and GIS applications; (2) propose a taxonomy of models for dealing with uncertainties in GIS; and (3) identify current challenges and future research directions in spatial data analysis and GIS for managing uncertainties.

Relevance:

30.00%

Abstract:

This paper demonstrates the unparalleled value of full scale data which has been acquired from ocean trials of Aquamarine Power’s Oyster 800 Wave Energy Converter (WEC) at the European Marine Energy Centre (EMEC), Orkney, Scotland.
High quality prototype and wave data were simultaneously recorded in over 750 distinct sea states (comprising different wave height, wave period and tidal height combinations) and include periods of operation where the hydraulic Power Take-Off (PTO) system was both pressurised (damped operation) and de-pressurised (undamped operation).
A detailed model-prototype correlation procedure is presented where the full scale prototype behaviour is compared to predictions from both experimental and numerical modelling techniques via a high temporal resolution wave-by-wave reconstruction. This provides a definitive verification of the capabilities of such research techniques and allows a robust and meaningful uncertainty analysis to be performed on their outputs.
The importance of a good data capture methodology, in terms of both handling and accuracy, is also presented. The techniques and procedures implemented by Aquamarine Power for real-time data management are discussed, including lessons learned on the instrumentation and infrastructure required to collect high-value data.
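A minimal sketch of a wave-by-wave comparison between a measured and a predicted time series, using zero-up-crossing segmentation on synthetic signals; the sampling rate, signals and peak-error metric are assumptions and do not represent Aquamarine Power's actual reconstruction procedure:

```python
# Minimal sketch of a wave-by-wave model-prototype comparison: segment the
# measured surface elevation at zero up-crossings and compare per-wave peaks
# of a predicted signal against the measured ones.  The synthetic data, 2 Hz
# sampling rate and peak-error metric are illustrative assumptions.
import numpy as np

FS = 2.0                              # Hz, assumed sampling rate
t = np.arange(0, 600, 1 / FS)         # ten minutes of synthetic record
measured = np.sin(2 * np.pi * t / 9.0) + 0.2 * np.sin(2 * np.pi * t / 4.5)
predicted = 0.95 * measured + 0.05 * np.random.default_rng(0).normal(size=t.size)

def zero_upcrossings(signal):
    """Indices where the signal crosses zero going upwards."""
    return np.flatnonzero((signal[:-1] < 0) & (signal[1:] >= 0))

crossings = zero_upcrossings(measured)
peak_errors = []
for start, end in zip(crossings[:-1], crossings[1:]):
    peak_errors.append(predicted[start:end].max() - measured[start:end].max())

print(f"{len(peak_errors)} waves compared, "
      f"mean peak error {np.mean(peak_errors):.3f}, "
      f"std {np.std(peak_errors):.3f}")
```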