26 results for Data processing and analysis

in University of Queensland eSpace - Australia


Relevance:

100.00%

Publisher:

Abstract:

The compound eyes of mantis shrimps, a group of tropical marine crustaceans, incorporate principles of serial and parallel processing of visual information that may be applicable to artificial imaging systems. Their eyes include numerous specializations for analysis of the spectral and polarizational properties of light, and incorporate more photoreceptor classes for the analysis of ultraviolet light, color, and polarization than occur in any other known visual system. This is possible because receptors in different regions of the eye are anatomically diverse and incorporate unusual structural features, such as spectral filters, not seen in other compound eyes. Unlike the eyes of most other animals, the eyes of mantis shrimps must move to acquire some types of visual information and to integrate color and polarization with spatial vision. Information leaving the retina appears to be processed into numerous parallel data streams leading into the central nervous system, greatly reducing the analytical requirements at higher levels. Many of these unusual features of mantis shrimp vision may inspire new sensor designs for machine vision.

Relevance:

100.00%

Publisher:

Abstract:

Infection of humans with the West Nile flavivirus principally occurs via tick and mosquito bites. Here, we document the expression of antigen processing and presentation molecules in West Nile virus (WNV)-infected human skin fibroblast (HFF) cells. Using a new Flavivirus-specific antibody, 4G4, we have analyzed cell surface human leukocyte antigen (HLA) expression on virus-infected cells at a single-cell level. Using this approach, we show that WNV infection alters surface HLA expression on both infected HFF and neighboring uninfected HFF cells. Interestingly, the increased surface HLA evident on infected HFF cultures is almost entirely due to virus-induced interferon (IFN)alpha/beta, because IFNalpha/beta-neutralizing antibodies completely prevent the increased surface HLA expression. In contrast, RT-PCR analysis indicates that WNV infection results in increased mRNAs for HLA-A, -B, and -C genes and the HLA-associated molecules low molecular weight polypeptide-2 (LMP-2) and transporter associated with antigen presentation-1 (TAP-1), but induction of these mRNAs is not diminished in HFF cells cultured with IFNalpha/beta-neutralizing antibodies. Taken together, these data support the idea that both cytokine-dependent and cytokine-independent mechanisms account for WNV-induced HLA expression in human skin fibroblasts. (C) 2004 Elsevier Inc. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work list scheduling to performers, the identification of time constraints, and more. The goal of this paper is to address an important issue in workflow modelling and specification: data flow, its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, mainly focussing on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflow specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment, may prevent the process from executing correctly, cause it to execute on inconsistent data, or even lead to process suspension. A discussion of the essential requirements of the workflow data model needed to support data validation is also given.
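One of the data flow problems of the kind the abstract describes, missing data, can be sketched in a few lines: flag any activity that reads a data item no earlier activity has written. This is a simplified illustration, not the paper's actual model; the ordered activity list stands in for real control flow, and all names are hypothetical.

```python
def find_missing_data(activities):
    """Flag reads of data items that no earlier activity has written.

    `activities` is an ordered list of (name, reads, writes) tuples;
    the list order stands in for the workflow's control flow.
    """
    written = set()
    problems = []
    for name, reads, writes in activities:
        for item in reads:
            if item not in written:
                # Data item consumed before any activity produced it.
                problems.append((name, item))
        written |= set(writes)
    return problems

# Hypothetical order-handling workflow: `credit_score` is read but
# never produced, a missing-data anomaly detectable before deployment.
workflow = [
    ("receive_order", [], ["order"]),
    ("check_credit", ["order", "credit_score"], ["approved"]),
    ("ship", ["order", "approved"], []),
]
issues = find_missing_data(workflow)
```

Running such a check statically, before the workflow is deployed, is exactly the kind of validation the abstract argues for.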

Relevance:

100.00%

Publisher:

Abstract:

Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured. Data can take textual, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. So far, a number of classification algorithms have been put into practice. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include associative classification (Liu et al., 1998) and ensemble classification (Tumer, 1996).
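Of the example-based methods listed above, k-nearest neighbors is simple enough to sketch from scratch: classify a query point by majority vote among its k closest training points. The data and names below are illustrative only.

```python
from collections import Counter
import math

def knn_classify(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points, using Euclidean distance."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train, labels)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy two-class problem: points near the origin vs. points near (5, 5).
train = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_classify(train, labels, (0.5, 0.5)))  # -> a
print(knn_classify(train, labels, (5.5, 5.5)))  # -> b
```

Being example-based, the method stores the training data itself rather than building an explicit model, which is the trade-off that distinguishes it from the decision tree and probabilistic families mentioned above.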

Relevance:

100.00%

Publisher:

Abstract:

The writers measured velocity, pressure and energy distributions, wavelengths, and wave amplitudes along undular jumps in a smooth rectangular channel 0.25 m wide. In each case the upstream flow was a fully developed shear flow. Analysis of the data shows that the jump has strong three-dimensional features and that the aspect ratio of the channel is an important parameter. Energy dissipation on the centerline is far from negligible and is largely constrained to the reach between the start of the lateral shock waves and the first wave crest of the jump, in which the boundary layer develops under a strong adverse pressure gradient. A Boussinesq-type solution of the free-surface profile, velocity, and energy and pressure distributions is developed and compared with the data. Limitations of the two-dimensional analysis are discussed.

Relevance:

100.00%

Publisher:

Abstract:

The integration of geo-information from multiple sources and of diverse nature in developing mineral favourability indexes (MFIs) is a well-known problem in mineral exploration and mineral resource assessment. Fuzzy set theory provides a convenient framework to combine and analyse qualitative and quantitative data independently of their source or characteristics. A novel, data-driven formulation for calculating MFIs based on fuzzy analysis is developed in this paper. Different geo-variables are considered as fuzzy sets and their appropriate membership functions are defined and modelled. A new weighted average-type aggregation operator is then introduced to generate a new fuzzy set representing mineral favourability. The membership grades of the new fuzzy set are considered as the MFI. The weights for the aggregation operation combine the individual membership functions of the geo-variables, and are derived using information from training areas and L1 regression. The technique is demonstrated in a case study of skarn tin deposits and is used to integrate geological, geochemical and magnetic data. The study area covers a total of 22.5 km² and is divided into 349 cells, which include nine control cells. Nine geo-variables are considered in this study. Depending on the nature of the various geo-variables, four different types of membership functions are used to model the fuzzy membership of the geo-variables involved. (C) 2002 Elsevier Science Ltd. All rights reserved.
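The core idea, mapping each geo-variable to a membership grade and combining the grades with a weighted average to obtain the MFI for a cell, can be sketched as follows. This is a minimal illustration assuming a piecewise-linear membership function and made-up values; it is not the paper's actual formulation, which derives the weights from training areas by regression.

```python
def linear_membership(x, lo, hi):
    """Piecewise-linear membership: 0 below lo, 1 above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def mineral_favourability(memberships, weights):
    """Weighted-average aggregation of per-variable membership
    grades; the resulting grade is taken as the cell's MFI."""
    total = sum(weights)
    return sum(m * w for m, w in zip(memberships, weights)) / total

# One cell with three hypothetical geo-variables (e.g. a geochemical
# anomaly value, a lithology score, a magnetic response).
values = [120.0, 0.4, 55.0]
ranges = [(50, 200), (0, 1), (20, 80)]
grades = [linear_membership(v, lo, hi) for v, (lo, hi) in zip(values, ranges)]
mfi = mineral_favourability(grades, weights=[0.5, 0.3, 0.2])
```

Because the aggregation is itself a membership grade, the MFI stays in [0, 1] and can be ranked across cells directly.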

Relevance:

100.00%

Publisher:

Abstract:

The enhanced biological phosphorus removal (EBPR) process is regularly used for the treatment of wastewater, but suffers from erratic performance. Successful EBPR relies on the growth of bacteria called polyphosphate-accumulating organisms (PAOs), which store phosphorus intracellularly as polyphosphate, thus removing it from wastewater. Metabolic models have been proposed which describe the measured chemical transformations; however, genetic evidence is lacking to confirm these hypotheses. The aim of this research was to generate a metagenomic library from biomass enriched in PAOs, as determined by phenotypic data and fluorescence in situ hybridisation (FISH) using probes specific for the only described PAO to date, Candidatus Accumulibacter phosphatis. DNA extraction methods were optimised and two fosmid libraries were constructed which contained 93 million base pairs of metagenomic data. Initial screening of the library for 16S rRNA genes revealed fosmids originating from a range of non-pure-cultured wastewater bacteria. The metagenomic libraries constructed will provide the ability to link phylogenetic and metabolic data for bacteria involved in nutrient removal from wastewater. Keywords: DNA extraction; EBPR; metagenomic library; 16S rRNA gene.

Relevance:

100.00%

Publisher:

Abstract:

Fuzzy data has grown to be an important factor in data mining. Whenever uncertainty exists, simulation can be used as a model. Simulation is very flexible, although it can involve significant levels of computation. This article discusses fuzzy decision-making using the grey related analysis method. Fuzzy models are expected to better reflect decision-making uncertainty, at some cost in accuracy relative to crisp models. Monte Carlo simulation is used to incorporate experimental levels of uncertainty into the data and to measure the impact of fuzzy decision tree models using categorical data. Results are compared with decision tree models based on crisp continuous data.
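The use of Monte Carlo simulation to inject uncertainty into otherwise crisp data, as the abstract describes, can be sketched simply: sample many noisy copies of a crisp value, then see how the noise spreads the value across categories. The noise model, names, and numbers below are illustrative assumptions, not the article's actual experimental design.

```python
import random

def fuzzify(value, spread, n=1000, rng=None):
    """Monte Carlo fuzzification: draw n perturbed copies of a crisp
    value, with uniform noise of half-width `spread` standing in for
    measurement uncertainty."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    return [value + rng.uniform(-spread, spread) for _ in range(n)]

def categorize(samples, cutpoints, labels):
    """Map each noisy sample to a categorical label via cutpoints and
    report the share of samples falling in each category."""
    counts = {lab: 0 for lab in labels}
    for s in samples:
        counts[labels[sum(s > c for c in cutpoints)]] += 1
    return {lab: c / len(samples) for lab, c in counts.items()}

# A crisp reading of 4.8 with +/-0.5 uncertainty straddles the 5.0
# cutpoint, so some fraction of samples lands in the "high" category.
samples = fuzzify(4.8, spread=0.5)
shares = categorize(samples, cutpoints=[5.0], labels=["low", "high"])
```

The resulting category shares quantify how often the simulated uncertainty flips the crisp classification, which is the trade-off between fuzzy and crisp models the abstract measures.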

Relevance:

100.00%

Publisher:

Abstract:

Background and purpose: Survey data quality is a combination of the representativeness of the sample, the accuracy and precision of measurements, and data processing and management, with several subcomponents in each. The purpose of this paper is to show how, in the final risk factor surveys of the WHO MONICA Project, information on data quality was obtained, quantified, and used in the analysis. Methods and results: In the WHO MONICA (Multinational MONItoring of trends and determinants in CArdiovascular disease) Project, the information about the data quality components was documented in retrospective quality assessment reports. On the basis of the documented information and the survey data, the quality of each data component was assessed and summarized using quality scores. The quality scores were used in sensitivity testing of the results, both by excluding populations with low quality scores and by weighting the data by its quality scores. Conclusions: Detailed documentation of all survey procedures with standardized protocols, training, and quality control are steps towards optimizing data quality. Quantifying data quality is a further step. Methods used in the WHO MONICA Project could be adopted to improve quality in other health surveys.
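The two sensitivity tests the abstract mentions, quality-weighting and exclusion of low-quality populations, can be sketched side by side. The prevalence figures and quality scores below are hypothetical, purely to show the mechanics; they are not MONICA data.

```python
def weighted_mean(values, quality):
    """Weight each population's estimate by its data-quality score."""
    return sum(v * q for v, q in zip(values, quality)) / sum(quality)

def exclude_low_quality(values, quality, threshold):
    """Sensitivity check: drop populations whose quality score falls
    below `threshold` and recompute the plain mean."""
    kept = [v for v, q in zip(values, quality) if q >= threshold]
    return sum(kept) / len(kept)

prevalence = [22.0, 30.0, 25.0, 40.0]  # hypothetical risk-factor estimates (%)
scores = [0.9, 0.8, 0.95, 0.3]         # hypothetical quality scores in [0, 1]

w = weighted_mean(prevalence, scores)
e = exclude_low_quality(prevalence, scores, threshold=0.5)
```

If both approaches move the summary estimate the same way relative to the unweighted mean, the low-quality population is driving the result, which is exactly what such sensitivity testing is meant to reveal.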

Relevance:

100.00%

Publisher:

Relevance:

100.00%

Publisher:

Abstract:

This study describes the pedagogical impact of real-world experimental projects undertaken as part of an advanced undergraduate Fluid Mechanics subject at an Australian university. The projects have been organised to complement traditional lectures and introduce students to the challenges of professional design, physical modelling, data collection and analysis. The physical model studies combine experimental, analytical and numerical work in order to develop students' abilities to tackle real-world problems. A first study illustrates the differences between ideal and real fluid flow force predictions based upon model tests of buildings in a large wind tunnel used for research and professional testing. A second study introduces the complexity arising from unsteady non-uniform wave loading on a sheltered pile. The teaching initiative is supported by feedback from undergraduate students. The pedagogy of the course and projects is discussed with reference to experiential, project-based and collaborative learning. The practical work complements traditional lectures and tutorials, and provides learning opportunities that cannot be gained in the classroom, real or virtual. Student feedback demonstrates a strong interest in the project phases of the course. This was associated with greater motivation for the course, leading in turn to lower failure rates. In terms of learning outcomes, the primary aim is to enable students to deliver a professional report as the final product, in which physical model data are compared to ideal-fluid flow calculations and real-fluid flow analyses. Thus the students are exposed to a professional design approach involving a high level of expertise in fluid mechanics, with sufficient academic guidance to achieve carefully defined learning goals, while retaining sufficient flexibility for students to construct their own learning goals.
The overall pedagogy is a blend of problem-based and project-based learning, which reflects academic research and professional practice. The assessment is a mix of peer-assessed oral presentations and written reports that aims to maximise student reflection and development. Student feedback indicated a strong motivation for courses that include a well-designed project component.

Relevance:

100.00%

Publisher:

Abstract:

Efficiency of presentation of a peptide epitope by an MHC class I molecule depends on two parameters: its binding to the MHC molecule and its generation by intracellular Ag processing. In contrast to the former parameter, the mechanisms underlying peptide selection in Ag processing are poorly understood. Peptide translocation by the TAP transporter is required for presentation of most epitopes and may modulate peptide supply to MHC class I molecules. To study the role of human TAP in peptide presentation by individual HLA class I molecules, we generated artificial neural networks capable of predicting the affinity of TAP for random-sequence 9-mer peptides. Using neural network-based predictions of TAP affinity, we found that peptides eluted from three different HLA class I molecules had higher TAP affinities than control peptides with equal binding affinities for the same HLA class I molecules, suggesting that human TAP may contribute to epitope selection. In simulated TAP binding experiments with 408 HLA class I binding peptides, HLA class I molecules differed significantly with respect to the TAP affinities of their ligands. As a result, some class I molecules, especially HLA-B27, may be particularly efficient in the presentation of cytosolic peptides present at low concentrations, while most class I molecules may predominantly present abundant cytosolic peptides.