16 results for Data fusion applications
at Universidade do Minho
Abstract:
This paper presents a methodology based on Bayesian data fusion techniques applied to non-destructive and destructive tests for the structural assessment of historical constructions. The aim of the methodology is to reduce the uncertainties of parameter estimation. The Young's modulus of granite stones was chosen as the example for the present paper. The methodology considers several levels of uncertainty, since the parameters of interest are treated as random variables with random moments. A new concept, the Trust Factor, was introduced to weight the uncertainty associated with each test's results, expressed by their standard deviation, according to the higher or lower reliability of each test in predicting a given parameter.
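To make the fusion idea concrete, the following minimal Python sketch (an illustration only, not the paper's exact formulation) pools Young's modulus estimates from two tests by inverse-variance weighting, with a hypothetical trust factor inflating the standard deviation of the less reliable test; all numbers are invented.

    def fuse_estimates(tests):
        """tests: list of (mean_GPa, std_GPa, trust_factor) tuples.
        A trust factor > 1 widens the std of a less reliable test."""
        precisions = [1.0 / (std * tf) ** 2 for _, std, tf in tests]
        total_precision = sum(precisions)
        fused_mean = sum(m * p for (m, _, _), p in zip(tests, precisions)) / total_precision
        fused_std = total_precision ** -0.5
        return fused_mean, fused_std

    # Example: a non-destructive sonic test (lower trust) fused with a
    # destructive compression test (higher trust); values are illustrative.
    tests = [(18.0, 4.0, 1.5), (22.0, 2.5, 1.0)]
    print(fuse_estimates(tests))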
Abstract:
The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto
Abstract:
Olive oil quality grading is traditionally assessed by human sensory evaluation of positive and negative attributes (olfactory, gustatory, and final olfactory-gustatory sensations). However, it is not guaranteed that trained panelists can correctly classify monovarietal extra-virgin olive oils according to olive cultivar. In this work, the potential application of human (sensory panelists) and artificial (electronic tongue) sensory evaluation of olive oils was studied, aiming to discriminate eight single-cultivar extra-virgin olive oils. Linear discriminant, partial least square discriminant, and sparse partial least square discriminant analyses were evaluated. The best predictive classification was obtained using linear discriminant analysis with a simulated annealing selection algorithm. A low-level data fusion approach (18 electronic tongue signals and nine sensory attributes) enabled 100% correct leave-one-out cross-validation classification, improving on the discrimination capability of the individual use of sensor profiles or sensory attributes (70% and 57% leave-one-out correct classifications, respectively). Thus, human sensory evaluation and electronic tongue analysis may be used as complementary tools, allowing successful monovarietal olive oil discrimination.
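As an illustration of the low-level fusion step described above, the Python sketch below concatenates simulated electronic-tongue signals and sensory attributes and scores a linear discriminant classifier with leave-one-out cross-validation; the simulated-annealing variable selection used in the paper is omitted, and all data, shapes and names are invented.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    n_samples = 64                                 # olive oil samples
    X_tongue = rng.normal(size=(n_samples, 18))    # 18 e-tongue sensor signals
    X_sensory = rng.normal(size=(n_samples, 9))    # 9 sensory panel attributes
    y = rng.integers(0, 8, size=n_samples)         # 8 single-cultivar classes

    X_fused = np.hstack([X_tongue, X_sensory])     # low-level (feature-level) fusion
    acc = cross_val_score(LinearDiscriminantAnalysis(), X_fused, y,
                          cv=LeaveOneOut()).mean()
    print(f"LOO-CV accuracy: {acc:.2f}")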
Abstract:
Master's dissertation in Informatics Engineering
Abstract:
During must fermentation by Saccharomyces cerevisiae strains, thousands of volatile aroma compounds are formed. The objective of the present work was to adapt computational approaches to analyze the pheno-metabolomic diversity of a S. cerevisiae strain collection with different origins. Phenotypic and genetic characterization, together with individual must fermentations, were performed, and metabolites relevant to the aromatic profiles were determined. Experimental results were projected onto a common coordinate system, revealing 17 statistically relevant multi-dimensional modules that combine sets of highly correlated features of noteworthy biological importance. As a breakthrough, the present method allowed genetic, phenotypic and metabolomic data to be combined, which had not been possible so far due to difficulties in comparing different types of data. The proposed computational approach therefore proved successful in shedding light on the holistic characterization of the S. cerevisiae pheno-metabolome under must fermentative conditions. This will allow the identification of combined relevant features with application in the selection of good winemaking strains.
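The paper's own module-extraction method is not reproduced here; the hedged Python sketch below shows one generic way to place heterogeneous data blocks in a common coordinate system (standardization followed by PCA) and to group correlated features into a fixed number of modules via hierarchical clustering, using simulated data throughout.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(1)
    n_strains = 40
    genetic = rng.normal(size=(n_strains, 30))
    phenotypic = rng.normal(size=(n_strains, 12))
    metabolomic = rng.normal(size=(n_strains, 25))

    # Standardize each block so heterogeneous scales become comparable, then fuse.
    blocks = [StandardScaler().fit_transform(b) for b in (genetic, phenotypic, metabolomic)]
    X = np.hstack(blocks)

    scores = PCA(n_components=5).fit_transform(X)   # common coordinates for strains

    # Group features into 17 modules by correlation-based hierarchical clustering.
    corr_dist = np.clip(1 - np.abs(np.corrcoef(X, rowvar=False)), 0, None)
    Z = linkage(squareform(corr_dist, checks=False), method="average")
    modules = fcluster(Z, t=17, criterion="maxclust")
    print(scores.shape, np.unique(modules).size)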
Abstract:
The development of organic materials displaying high two-photon absorption (TPA) has attracted much attention in recent years due to a variety of potential applications in photonics and optoelectronics, such as three-dimensional optical data storage, fluorescence imaging, two-photon microscopy, optical limiting, microfabrication, photodynamic therapy, upconverted lasing, etc. The most frequently employed structural motifs for TPA materials are donor–pi bridge–acceptor (D–pi–A) dipoles, donor–pi bridge–donor (D–pi–D) and acceptor–pi bridge–acceptor (A–pi–A) quadrupoles, octupoles, etc. In this work we present the synthesis and photophysical characterization of quadrupolar heterocyclic systems with potential applications in materials and biological sciences as TPA chromophores. Indole is a versatile building block for the synthesis of heterocyclic systems for several optoelectronic applications (chemosensors, nonlinear optics, OLEDs) due to its photophysical properties and electron-donating ability, while the 4H-pyran-4-ylidene fragment is frequently used for the synthesis of red light-emitting materials. On the other hand, 2-(2,6-dimethyl-4H-pyran-4-ylidene)malononitrile (1) and 1,3-diethyl-dihydro-5-(2,6-dimethyl-4H-pyran-4-ylidene)-2-thiobarbituric acid (2) units are usually used as strong acceptor moieties for the preparation of π-conjugated systems of the push-pull type. These building blocks were prepared by Knoevenagel condensation of the corresponding ketone precursor with malononitrile or 1,3-diethyl-dihydro-2-thiobarbituric acid. The new quadrupolar 4H-pyran-4-ylidene fluorophores (3) derived from indole were prepared through condensation of 5-methyl-1H-indole-3-carbaldehyde with the acceptor precursors 1 and 2, in the presence of a catalytic amount of piperidine. The new compounds were characterized by the usual spectroscopic techniques (UV-vis, FT-IR and multinuclear NMR: 1H, 13C).
Abstract:
Studies in Computational Intelligence, 616
Abstract:
Over the last two decades, mass spectrometry (MS) has been applied to analyze the chemical cellular components of microorganisms, providing rapid and discriminatory proteomic profiles for their species identification and, in some cases, subtyping. The application of MS to microbial diagnosis is currently well established. The remarkable reproducibility and objectivity of this method are based on the measurement of constantly expressed and highly abundant proteins, mainly important conserved ribosomal proteins, which are used as markers to generate a cellular fingerprint. Mass spectrometry based on the matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) technique has been an important tool for microbial diagnostics. However, some technical limitations concerning both MALDI-TOF and the protocols used for sample preparation have fostered research into new mass spectrometry systems (e.g. LC-MS/MS). LC-MS/MS is able to generate online mass spectra of specific ions with further online sequencing of those ions, which include both specific proteins and DNA fragments. In this work, a set of data for the diagnosis of yeasts and filamentous fungi, obtained through an international collaboration project involving partners from Argentina, Brazil, Chile and Portugal, will be presented and discussed.
Abstract:
Propolis is a chemically complex biomass produced by honeybees (Apis mellifera) from plant resins with the addition of salivary enzymes, beeswax, and pollen. The biological activities described for propolis have also been identified for the donor plants' resin, but a major challenge for the standardization of the chemical composition and biological effects of propolis remains a better understanding of the influence of seasonality on the chemical constituents of this raw material. Since propolis quality depends, among other variables, on the local flora, which is strongly influenced by (a)biotic factors over the seasons, unraveling the effect of the harvest season on the propolis chemical profile is an issue of recognized importance. For that purpose, fast, cheap, and robust analytical techniques seem to be the best choice for large-scale quality control processes in the most demanding markets, e.g., human health applications. Accordingly, UV-Visible (UV-Vis) scanning spectrophotometry of hydroalcoholic extracts (HE) of seventy-three propolis samples, collected over the seasons in 2014 (summer, spring, autumn, and winter) and 2015 (summer and autumn) in Southern Brazil, was adopted. Machine learning and chemometric techniques were then applied to the UV-Vis dataset, aiming to gain insights into the effect of seasonality on the claimed chemical heterogeneity of propolis samples, determined by changes in the flora of the geographic region under study. Descriptive and classification models were built following a chemometric approach, i.e. principal component analysis (PCA) and hierarchical clustering analysis (HCA), supported by scripts written in the R language. The UV-Vis profiles, associated with chemometric analysis, allowed the identification of a typical pattern in propolis samples collected in the summer. Importantly, the discrimination based on PCA could be improved by using the dataset of the fingerprint region of phenolic compounds (λ = 280-400 nm), suggesting that besides the biological activities of those secondary metabolites, they also play a relevant role in the discrimination and classification of that complex matrix through bioinformatics tools. Finally, a series of machine learning approaches, e.g., partial least squares-discriminant analysis (PLS-DA), k-Nearest Neighbors (kNN), and Decision Trees, proved to be complementary to PCA and HCA, allowing relevant information on sample discrimination to be obtained.
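The study used R scripts; the following Python sketch merely illustrates the equivalent chemometric step on simulated spectra: restrict the UV-Vis scan to the phenolic fingerprint window (280-400 nm), then apply PCA and hierarchical cluster analysis. Wavelength grid and spectra are invented.

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage

    rng = np.random.default_rng(2)
    wavelengths = np.arange(200, 801)                  # 200-800 nm scan, 1 nm step
    spectra = rng.random((73, wavelengths.size))       # 73 propolis extracts

    mask = (wavelengths >= 280) & (wavelengths <= 400) # phenolic fingerprint region
    X = spectra[:, mask]

    pca_scores = PCA(n_components=2).fit_transform(X)  # descriptive projection
    hca = linkage(X, method="ward")                    # HCA on the same window
    print(pca_scores[:3])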
Abstract:
A search for a charged Higgs boson, H±, decaying to a W± boson and a Z boson is presented. The search is based on 20.3 fb⁻¹ of proton-proton collision data at a center-of-mass energy of 8 TeV recorded with the ATLAS detector at the LHC. The H± boson is assumed to be produced via vector-boson fusion, and the decays W± → qq̄′ and Z → e⁺e⁻/μ⁺μ⁻ are considered. The search is performed in a range of charged Higgs boson masses from 200 to 1000 GeV. No evidence for the production of an H± boson is observed. Upper limits of 31–1020 fb at 95% CL are placed on the cross section for vector-boson fusion production of an H± boson times its branching fraction to W±Z. The limits are compared with predictions from the Georgi-Machacek Higgs Triplet Model.
Abstract:
Nitrogen dioxide is a primary pollutant, considered in the estimation of the air quality index, whose excessive presence may cause significant environmental and health problems. In the current work, we suggest characterizing the evolution of NO2 levels by using geostatistical approaches that deal with both the space and time coordinates. To develop our proposal, a first exploratory analysis was carried out on daily values of the target variable, measured in Portugal from 2004 to 2012, which led to the identification of three influential covariates (type of site, environment, and month of measurement). In a second step, appropriate geostatistical tools were applied to model the trend and the space-time variability, enabling us to use kriging techniques for prediction without requiring data from a dense monitoring network. This methodology has valuable applications, as it can provide accurate assessments of nitrogen dioxide concentrations at sites where either data have been lost or there is no monitoring station nearby.
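For illustration only, the sketch below implements plain ordinary kriging in space with an exponential variogram and invented station data; the paper's space-time trend modelling and covariates are not reproduced, and the sill/range parameters are hypothetical.

    import numpy as np

    def exp_variogram(h, sill=25.0, range_km=50.0):
        # Exponential variogram with illustrative sill and range parameters.
        return sill * (1.0 - np.exp(-h / range_km))

    def ordinary_kriging(coords, values, target):
        """coords: (n, 2) station positions in km, values: (n,) NO2 levels,
        target: (2,) location to predict."""
        n = len(values)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = exp_variogram(d)           # gamma(0) = 0 on the diagonal
        A[n, n] = 0.0                          # Lagrange-multiplier block
        b = np.ones(n + 1)
        b[:n] = exp_variogram(np.linalg.norm(coords - target, axis=1))
        w = np.linalg.solve(A, b)[:n]          # kriging weights (sum to 1)
        return float(w @ values)

    coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 15.0], [12.0, 9.0]])
    no2 = np.array([22.0, 30.0, 18.0, 27.0])   # ug/m3 at monitoring stations
    print(ordinary_kriging(coords, no2, np.array([5.0, 5.0])))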
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Abstract:
Distributed data aggregation is an important task, allowing the decentralized determination of meaningful global properties, which can then be used to direct the execution of other applications. The resulting values are obtained from the distributed computation of functions like count, sum and average. Application examples include determining the network size, total storage capacity, average load, majorities, and many others. In the last decade, many different approaches have been proposed, with different trade-offs in terms of accuracy, reliability, message and time complexity. Due to the considerable amount and variety of aggregation algorithms, it can be difficult and time consuming to determine which techniques are more appropriate to use in specific settings, justifying the existence of a survey to aid in this task. This work reviews the state of the art on distributed data aggregation algorithms, providing three main contributions. First, it formally defines the concept of aggregation, characterizing the different types of aggregation functions. Second, it succinctly describes the main aggregation techniques, organizing them in a taxonomy. Finally, it provides some guidelines toward the selection and use of the most relevant techniques, summarizing their principal characteristics.
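As a concrete instance of one technique family covered by such surveys, the sketch below simulates push-sum gossip averaging: every node keeps a (sum, weight) pair, repeatedly pushes half of it to a randomly chosen node, and each local ratio converges to the global average. The topology (uniform random targets) and values are invented for illustration.

    import random

    def push_sum_average(values, rounds=50, seed=0):
        random.seed(seed)
        n = len(values)
        s = list(values)          # running sums
        w = [1.0] * n             # running weights
        for _ in range(rounds):
            for i in range(n):
                j = random.randrange(n)      # random gossip target (self is a no-op)
                half_s, half_w = s[i] / 2, w[i] / 2
                s[i], w[i] = half_s, half_w  # keep one half locally
                s[j] += half_s               # push the other half
                w[j] += half_w
        return [s[i] / w[i] for i in range(n)]   # local estimates of the average

    loads = [3.0, 7.0, 10.0, 4.0, 6.0]
    print(push_sum_average(loads))               # each entry approaches 6.0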
Abstract:
We study the problem of privacy-preserving proofs on authenticated data, where a party receives data from a trusted source and is requested to prove computations over the data to third parties in a correct and private way, i.e., the third party learns no information on the data but is still assured that the claimed proof is valid. Our work particularly focuses on the challenging requirement that the third party should be able to verify the validity with respect to the specific data authenticated by the source — even without having access to that source. This problem is motivated by various scenarios emerging from several application areas such as wearable computing, smart metering, or general business-to-business interactions. Furthermore, these applications also demand any meaningful solution to satisfy additional properties related to usability and scalability. In this paper, we formalize the above three-party model, discuss concrete application scenarios, and then we design, build, and evaluate ADSNARK, a nearly practical system for proving arbitrary computations over authenticated data in a privacy-preserving manner. ADSNARK improves significantly over state-of-the-art solutions for this model. For instance, compared to corresponding solutions based on Pinocchio (Oakland’13), ADSNARK achieves up to 25× improvement in proof-computation time and a 20× reduction in prover storage space.
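The toy Python sketch below illustrates only the three-party message flow described above (trusted source, prover, verifier); it is not ADSNARK and not real cryptography. The hash-based "proof" is a placeholder commitment and provides none of the soundness or zero-knowledge guarantees that a signature-plus-zk-SNARK construction would.

    import hashlib, json

    def digest(obj):
        return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

    # Source: a trusted device publishes an authentication tag for its private
    # readings (a stand-in for a digital signature over the data).
    readings = [12, 7, 9, 14]                 # e.g. smart-meter data, held only by the prover
    auth_tag = digest(readings)

    # Prover: computes a function over the data and sends a claim plus a "proof".
    total = sum(readings)
    proof = digest({"auth": auth_tag, "claimed_total": total})

    # Verifier: sees only auth_tag, the claimed total and the proof, never the data.
    # NOTE: this check only confirms the prover committed to this (tag, total) pair;
    # a real system would use a zk-SNARK to also establish that the total is correct.
    def verify(auth_tag, claimed_total, proof):
        return proof == digest({"auth": auth_tag, "claimed_total": claimed_total})

    print(verify(auth_tag, total, proof))     # True for an honest prover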