26 results for pacs: data handling techniques

at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 100.00%

Abstract:

To date, the processing of wildlife location data has relied on a diversity of software and file formats. Data management and the subsequent spatial and statistical analyses were undertaken in multiple steps, involving many time-consuming importing/exporting phases. Recent technological advancements in tracking systems have made available large, continuous, high-frequency datasets of wildlife behavioural data, such as those derived from the global positioning system (GPS) and other animal-attached sensor devices. These data can be further complemented by a wide range of other information about the animals' environment. Management of these large and diverse datasets for modelling animal behaviour and ecology can prove challenging, slowing down analysis and increasing the probability of mistakes in data handling. We address these issues by critically evaluating the requirements for good management of GPS data for wildlife biology. We highlight that dedicated data management tools and expertise are needed. We explore current research in wildlife data management and suggest a general direction of development, based on a modular software architecture with a spatial database at its core, where interoperability, data model design and integration with remote-sensing data sources play an important role in successful GPS data handling.
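
A minimal sketch of the "spatial database at its core" idea, assuming an illustrative schema (table and column names are not the authors' design): every GPS fix goes into one canonical store that all downstream analyses query, instead of passing through ad hoc file formats.

```python
# Minimal sketch; the schema below is an illustrative assumption.
import sqlite3

conn = sqlite3.connect("wildlife.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS gps_fix (
    fix_id    INTEGER PRIMARY KEY,
    animal_id TEXT NOT NULL,
    acquired  TEXT NOT NULL,   -- ISO 8601 timestamp
    lon       REAL,
    lat       REAL,
    dop       REAL             -- dilution of precision, for quality screening
);
CREATE INDEX IF NOT EXISTS idx_fix ON gps_fix (animal_id, acquired);
""")

-- all spatial and statistical analyses then draw from the same store,
-- e.g. an ordered, quality-filtered track for one animal:
track = conn.execute(
    "SELECT acquired, lon, lat FROM gps_fix "
    "WHERE animal_id = ? AND dop < ? ORDER BY acquired",
    ("animal_01", 5.0),
).fetchall()
```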

Relevance: 100.00%

Abstract:

Data flow techniques have been around since the early '70s, when they were used in compilers for sequential languages. Shortly after their introduction they were also considered as a possible model for parallel computing, although the impact here was limited. Recently, however, data flow has been identified as a candidate for efficient implementation of various programming models on multi-core architectures. In most cases, though, the burden of determining the data flow "macro" instructions is left to the programmer, while the compiler/run-time system manages only the efficient scheduling of these instructions. We discuss a structured parallel programming approach supporting automatic compilation of programs to macro data flow, and we show experimental results demonstrating the feasibility of the approach and the efficiency of the resulting "object" code on different classes of state-of-the-art multi-core architectures. The experiments use different base mechanisms to implement the macro data flow run-time support, from plain pthreads with condition variables to more modern and effective lock- and fence-free parallel frameworks. Experimental results comparing the efficiency of the proposed approach with that achieved using other, more classical, parallel frameworks are also presented. © 2012 IEEE.
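
The core scheduling idea can be sketched in a few lines, assuming a toy task graph (this is not the paper's compiler, which derives the graph automatically from a structured parallel program): a macro instruction fires as soon as all of its input tokens are available, and a worker pool maps fireable instructions onto cores.

```python
# Toy macro data-flow interpreter; the graph and functions are illustrative.
from concurrent.futures import ThreadPoolExecutor

# node -> (function, input nodes); a small map/reduce-style graph
graph = {
    "a": (lambda: list(range(8)), []),
    "b": (lambda xs: [x * x for x in xs], ["a"]),
    "c": (lambda xs: [x + 1 for x in xs], ["a"]),
    "d": (lambda xs, ys: sum(xs) + sum(ys), ["b", "c"]),
}

results = {}
pending = dict(graph)
with ThreadPoolExecutor() as pool:
    while pending:
        # an instruction is fireable once every input has been produced
        fireable = [n for n, (_, deps) in pending.items()
                    if all(d in results for d in deps)]
        # independent instructions ("b" and "c") run concurrently
        futures = {n: pool.submit(pending[n][0],
                                  *(results[d] for d in pending[n][1]))
                   for n in fireable}
        for n, fut in futures.items():
            results[n] = fut.result()
            del pending[n]

print(results["d"])  # 176
```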

Relevance: 100.00%

Abstract:

The application of chemometrics in food science has revolutionized the field by allowing the creation of models able to automate a broad range of applications, such as food authenticity testing and food fraud detection. To create effective and general models able to address the complexity of real-life problems, a large and varied set of training samples is required: the training dataset has to cover all possible types of sample and instrument variability. However, acquiring such varied samples is a time-consuming and costly process, and collecting samples representative of real-world variation is not always possible, especially in some application fields. To address this problem, a novel framework for applying data augmentation techniques to spectroscopic data has been designed and implemented. It is a carefully designed pipeline of four complementary and independent blocks, each of which can be finely tuned depending on the variance desired for enhancing the model's robustness: a) blending spectra, b) changing the baseline, c) shifting along the x axis, and d) adding random noise.
This data augmentation solution has been tested with the aim of obtaining a highly efficient, generalised classification model based on spectroscopic data. Fourier transform mid-infrared (FT-IR) spectroscopic data of eleven pure vegetable oils (106 admixtures), used for the rapid identification of vegetable oil species in mixtures of oils, served as a case study to demonstrate the influence of this approach in chemometrics, yielding a 10% improvement in classification accuracy, which is crucial in some food adulteration applications.
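
The four blocks lend themselves to a compact sketch; the parameter ranges below are illustrative assumptions, not the values tuned in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def blend(s1, s2):
    """a) blend two spectra of the same class in a random proportion"""
    alpha = rng.uniform(0.3, 0.7)
    return alpha * s1 + (1 - alpha) * s2

def change_baseline(s, slope=1e-4, offset=1e-2):
    """b) add a random linear baseline drift"""
    x = np.arange(len(s))
    return s + rng.uniform(-slope, slope) * x + rng.uniform(-offset, offset)

def shift_x(s, max_shift=3):
    """c) shift the spectrum a few points along the x axis"""
    return np.roll(s, int(rng.integers(-max_shift, max_shift + 1)))

def add_noise(s, sigma=1e-3):
    """d) add random Gaussian noise"""
    return s + rng.normal(0.0, sigma, size=len(s))

# the blocks are independent, so they can be chained and tuned separately:
spectrum = np.sin(np.linspace(0, 10, 500))   # stand-in for an FT-IR spectrum
augmented = add_noise(shift_x(change_baseline(spectrum)))
```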


Relevance: 100.00%

Abstract:

Nurse rostering is a difficult search problem with many constraints. In the literature, a number of approaches have been investigated, including penalty function methods, to tackle these constraints within genetic algorithm frameworks. In this paper, we investigate an extension of a previously proposed stochastic ranking method, which has demonstrated superior performance to other constraint handling techniques when tested against a set of constrained optimisation benchmark problems. An initial experiment on nurse rostering problems demonstrates that the stochastic ranking method is better at finding feasible solutions but fails to obtain good results with regard to the objective function. To improve the performance of the algorithm, we hybridise it with a recently proposed simulated annealing hyper-heuristic within a local search and genetic algorithm framework. The hybrid algorithm shows significant improvement over both the genetic algorithm with stochastic ranking and the simulated annealing hyper-heuristic alone. The hybrid algorithm also considerably outperforms the methods in the literature that previously held the best-known results.
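
A sketch of the stochastic ranking comparison itself (the roster representation, objective f and violation measure phi are placeholders; the paper embeds this ranking inside a genetic algorithm): with probability p_f, individuals are compared by objective value even when infeasible, which is what lets the method trade feasibility against solution quality.

```python
import random

def stochastic_rank(pop, f, phi, p_f=0.45):
    """Bubble-sweep stochastic ranking: pop is a list of rosters, f the
    objective (lower is better), phi total constraint violation (0 = feasible)."""
    pop = pop[:]
    for _ in range(len(pop)):
        swapped = False
        for i in range(len(pop) - 1):
            a, b = pop[i], pop[i + 1]
            if (phi(a) == 0 and phi(b) == 0) or random.random() < p_f:
                worse = f(a) > f(b)        # compare by objective
            else:
                worse = phi(a) > phi(b)    # compare by constraint violation
            if worse:
                pop[i], pop[i + 1] = b, a
                swapped = True
        if not swapped:
            break
    return pop  # best-ranked first
```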

Relevance: 100.00%

Abstract:

Many of the challenges faced in health care delivery can be informed through building models. In particular, Discrete Conditional Survival (DCS) models, recently under development, can provide policymakers with a flexible tool to assess time-to-event data. The DCS model is capable of modelling the survival curve based on various underlying distribution types, and of clustering or grouping observations (based on other covariate information) externally to the distribution fits. The flexibility of the model comes from the choice of data-mining techniques available for ascertaining the different subsets, and from the choice of distribution types available for modelling these informed subsets. This paper presents an illustrated example of the Discrete Conditional Survival model deployed to represent ambulance response times through a fully parameterised model. This model is contrasted with a parametric accelerated failure-time model, illustrating the strength and usefulness of Discrete Conditional Survival models.
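
The two DCS ingredients can be illustrated with stand-ins (KMeans and a Weibull fit here are assumptions for illustration; the model admits any mining technique and distribution family):

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
covariates = rng.normal(size=(500, 3))   # e.g. incident priority, area, hour
times = rng.weibull(1.5, 500) * 8.0      # stand-in response times (minutes)

# 1) a data-mining step partitions observations by covariate information ...
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(covariates)

# 2) ... then each informed subset gets its own parametric survival fit
fits = {}
for k in np.unique(labels):
    shape, loc, scale = stats.weibull_min.fit(times[labels == k], floc=0)
    fits[k] = (shape, scale)
```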

Relevance: 100.00%

Abstract:

The performance of the surface zone of concrete is acknowledged as a major factor governing the rate of deterioration of reinforced concrete structures, as it provides the only barrier to the ingress of water containing dissolved ionic species such as chlorides which, ultimately, initiate corrosion of the reinforcement. In-situ monitoring of cover-zone concrete is therefore critical in attempting to make realistic predictions of the in-service performance of the structure. To this end, this paper presents developments in a remote interrogation system that allows continuous, real-time monitoring of cover-zone concrete from an office setting. Use is made of a multi-electrode array embedded within the cover-zone concrete to acquire discretized electrical resistivity and temperature measurements, with both parameters monitored spatially and temporally. On-site instrumentation, which allows remote interrogation of concrete samples placed at a marine exposure site, is detailed, together with data handling and processing procedures. Site measurements highlight the influence of temperature on electrical resistivity, and an Arrhenius-based temperature correction protocol is developed using on-site measurements to standardize resistivity data to a reference temperature; this is an advancement over the use of laboratory-based procedures. The testing methodology and interrogation system represent a robust, low-cost and high-value technique which could be deployed for intelligent monitoring of reinforced concrete structures.
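
The temperature-correction step has the familiar Arrhenius form; a minimal sketch, assuming an illustrative activation energy (the paper derives its own coefficient from the on-site measurements):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def to_reference(rho, temp_c, e_a=20_000.0, ref_c=20.0):
    """Standardize resistivity rho measured at temp_c (deg C) to ref_c."""
    t, t_ref = temp_c + 273.15, ref_c + 273.15
    return rho * math.exp((e_a / R) * (1.0 / t_ref - 1.0 / t))

# e.g. a 120 ohm.m reading taken at 30 deg C, expressed at 20 deg C:
print(round(to_reference(120.0, 30.0), 1))  # higher, since resistivity
                                            # falls as temperature rises
```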

Relevance: 100.00%

Abstract:

Unlabelled single- and double-stranded DNA (ssDNA and dsDNA, respectively) has been detected at concentrations ≥ 10⁻⁹ M by surface-enhanced Raman spectroscopy. Under appropriate conditions the sequences spontaneously adsorbed to the surface of both Ag and Au colloids through their nucleobases; this allowed highly reproducible spectra with good signal-to-noise ratios to be recorded on completely unmodified samples. This eliminated the need to promote adsorption by introducing external linkers, such as thiols. The spectra of model ssDNA sequences contained bands of all the bases present and showed systematic changes when the overall base composition was altered. Initial tests also showed that small but reproducible changes could be detected between oligonucleotides with the same bases arranged in a different order. The spectra of five ssDNA sequences that correspond to different strains of the Escherichia coli bacterium were found to be sufficiently composition-dependent that they could be differentiated without the need for any advanced multivariate data analysis techniques.

Relevance: 100.00%

Abstract:

DDR-SDRAM based data lookup techniques are evolving into a core technology for packet lookup applications in data networks, benefitting from the high density, high bandwidth and low price of DDR memory products on the market. Our proposed DDR-SDRAM based lookup circuit is capable of performing IP header lookup at network line-rates of up to 10 Gbps, providing a high-performance and economical solution for packet header inspection. ©2008 IEEE.
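
The abstract describes a hardware circuit; purely as a software illustration of the general idea of trading dense, cheap memory for one or two wide reads per lookup, here is a DIR-24-8-style two-level table, a classic DRAM lookup layout and not necessarily the paper's design:

```python
import ipaddress

tbl24 = {}  # first 24 bits -> next hop (models a single wide memory read)
tbl8 = {}   # (24-bit index, final 8 bits) -> next hop, for longer prefixes

def insert(prefix, next_hop):
    net = ipaddress.ip_network(prefix)
    base = int(net.network_address)
    if net.prefixlen <= 24:
        for idx in range(base >> 8, (base + net.num_addresses) >> 8):
            tbl24[idx] = next_hop
    else:
        for addr in range(base, base + net.num_addresses):
            tbl8[(addr >> 8, addr & 0xFF)] = next_hop

def lookup(ip):
    addr = int(ipaddress.ip_address(ip))
    # try the longer-prefix table first, then fall back to the 24-bit table
    return tbl8.get((addr >> 8, addr & 0xFF)) or tbl24.get(addr >> 8)

insert("10.1.0.0/16", "port1")
insert("10.1.2.128/25", "port2")
print(lookup("10.1.2.200"), lookup("10.1.9.9"))  # port2 port1
```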

Relevance: 100.00%

Abstract:

Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required in order for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants, and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not any of a series of processing-related confounds. The time intervals, frequency bands and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing, and distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon. © 2010 Elsevier Inc.
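
The decoding pipeline can be sketched with synthetic data (the features below are random stand-ins for the study's time-frequency-channel patterns): classify single trials, then aggregate votes over all trials of a concept, which is how per-concept accuracy climbs far above single-trial accuracy.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_features = 200, 64 * 30          # e.g. 64 channels x 30 windows
y = rng.integers(0, 2, n_trials)             # 0 = mammal, 1 = tool
X = rng.normal(size=(n_trials, n_features)) + 0.15 * y[:, None]

clf = make_pipeline(StandardScaler(), LinearSVC())
single_trial = cross_val_predict(clf, X, y, cv=5)    # one label per trial
print("single-trial accuracy:", (single_trial == y).mean())

# aggregate across all of a concept's trials by majority vote:
vote_mammal = np.bincount(single_trial[y == 0]).argmax()
```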

Relevance: 100.00%

Abstract:

Context. Comet 67P/Churyumov-Gerasimenko is the target of the European Space Agency Rosetta spacecraft rendezvous mission. Detailed physical characterisation of the comet before arrival is important for mission planning, as well as providing a test bed for ground-based observing and data-analysis methods. Aims: To conduct a long-term observational programme to characterize the physical properties of the nucleus of the comet via ground-based optical photometry, and to combine our new data with all available nucleus data from the literature. Methods: We applied aperture photometry techniques to our imaging data and combined the extracted rotational lightcurves with data from the literature. Optical lightcurve inversion techniques were applied to constrain the spin state of the nucleus and its broad shape. We performed a detailed surface thermal analysis with the shape model and optical photometry by incorporating both into the new Advanced Thermophysical Model (ATPM), along with all available Spitzer 8-24 μm thermal-IR flux measurements from the literature. Results: A convex triangular-facet shape model was determined with axial ratios b/a = 1.239 and c/a = 0.819. These values can vary by as much as 7% in each axis and still result in a statistically significant fit to the observational data. Our best spin state solution has Psid = 12.76137 ± 0.00006 h, and a rotational pole orientated at Ecliptic coordinates λ = 78° (±10°), β = +58° (±10°). The nucleus phase darkening behaviour was measured and best characterized using the IAU HG system. Best-fit parameters are G = 0.11 ± 0.12 and HR(1,1,0) = 15.31 ± 0.07. Our shape model combined with the ATPM can satisfactorily reconcile all optical and thermal-IR data, with the fit to the Spitzer 24 μm data taken in February 2004 being exceptionally good. We derive a range of mutually consistent physical parameters for each thermal-IR data set, including effective radius, geometric albedo, surface thermal inertia and roughness fraction. Conclusions: The overall nucleus dimensions are well constrained and strongly imply a broad nucleus shape more akin to comet 9P/Tempel 1 than to the highly elongated or "bi-lobed" nuclei seen for comets 103P/Hartley 2 or 8P/Tuttle. The derived low thermal inertia of
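
The IAU H,G phase law quoted above has a standard two-exponential approximation (Bowell et al. 1989); a small sketch using the best-fit values from the abstract:

```python
import math

def hg_magnitude(alpha_deg, H=15.31, G=0.11):
    """Reduced R-band magnitude at solar phase angle alpha (degrees)."""
    half = math.radians(alpha_deg) / 2
    phi1 = math.exp(-3.33 * math.tan(half) ** 0.63)
    phi2 = math.exp(-1.87 * math.tan(half) ** 1.22)
    return H - 2.5 * math.log10((1 - G) * phi1 + G * phi2)

print(hg_magnitude(0.0))    # 15.31 at zero phase, by construction
print(hg_magnitude(10.0))   # fainter (numerically larger) at 10 degrees
```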

Relevance: 100.00%

Abstract:

In order to address road safety effectively, it is essential to understand all the factors that contribute to the occurrence of a road collision. This is achieved through road safety assessment measures, which are primarily based on historical crash data. Recent advances in uncertain-reasoning technology have led to the development of robust machine learning techniques suitable for investigating road traffic collision data, including supervised learning (e.g. SVM) and unsupervised learning (e.g. cluster analysis). This study extends previous work by Coll et al. [3], which proposed a non-linear aggregation framework for identifying temporal and spatial hotspots and identified the Lisburn area as the road safety hotspot in Northern Ireland. This study aims to use cluster analysis to investigate and highlight any hidden patterns associated with collisions that occurred in the Lisburn area, which in turn will provide more clarity about the causation factors so that appropriate countermeasures can be put in place.
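
A minimal sketch of the cluster-analysis step, assuming illustrative collision attributes (the study's own variables would replace these):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# stand-in records: [hour of day, speed limit, vehicles involved, casualties]
collisions = rng.normal(loc=[14, 40, 2, 1], scale=[5, 15, 1, 0.5], size=(300, 4))

X = StandardScaler().fit_transform(collisions)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# read each centroid back in original units to interpret a "hidden pattern",
# e.g. a cluster of late-night collisions on high-speed-limit roads
for k in range(4):
    print(k, collisions[labels == k].mean(axis=0).round(1))
```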

Relevance: 100.00%

Abstract:

Free-roaming dogs (FRD) represent a potential threat to the quality of life in cities from an ecological, social and public health point of view. One of the most urgent concerns is the role of uncontrolled dogs as reservoirs of infectious diseases transmittable to humans, above all rabies. An estimate of the FRD population size and characteristics in a given area is the first step for any relevant intervention programme. Direct count methods are still prominent because of their non-invasive approach, and information technologies can support such methods by facilitating data collection and allowing more efficient data handling. This paper presents a new framework for data collection using a topological algorithm implemented as an ArcScript in ESRI® ArcGIS software, which allows for a random selection of the sampling areas. It also supplies a mobile phone application for Android® operating system devices which integrates the Global Positioning System (GPS) and Google Maps™. The potential of this framework was tested in two Italian regions. Coupling technological and innovative solutions with common counting methods facilitates data collection and transcription. It also paves the way to future applications, which could support dog population management systems.
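
The random selection of sampling areas is implemented in the paper as an ArcScript inside ArcGIS; the underlying idea, sketched in plain Python with an illustrative grid size and sampling fraction:

```python
import random
import numpy as np

def sample_cells(xmin, ymin, xmax, ymax, cell=500.0, fraction=0.1, seed=1):
    """Partition a bounding box into square cells and draw a random subset."""
    xs = np.arange(xmin, xmax, cell)
    ys = np.arange(ymin, ymax, cell)
    cells = [(float(x), float(y)) for x in xs for y in ys]
    random.seed(seed)
    return random.sample(cells, max(1, int(fraction * len(cells))))

# e.g. 10% of the 500 m cells over a 5 km x 5 km urban extent:
picked = sample_cells(0, 0, 5000, 5000)
print(len(picked), picked[:3])
```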