934 results for methods: data analysis


Relevance: 100.00%

Abstract:

Vitis vinifera L., the most widely cultivated fruit crop in the world, was the starting point for the development of this PhD thesis. The subject was explored following two current trends: i) the development of rapid, simple, and highly sensitive methodologies with minimal sample handling; and ii) the valorisation of natural products as a source of compounds with potential health benefits. The target group of compounds under study was the volatile terpenoids (mono- and sesquiterpenoids) and C13 norisoprenoids, since they may have a biological impact, both from the sensorial point of view, with regard to wine aroma, and through their beneficial properties for human health. Two novel methodologies for the quantification of C13 norisoprenoids in wines were developed. The first, a rapid method, was based on headspace solid-phase microextraction combined with gas chromatography-quadrupole mass spectrometry operating in selected ion monitoring mode (HS-SPME/GC-qMS-SIM), using GC conditions that yielded a C13 norisoprenoid volatile signature. It requires no pre-treatment of the sample, and the C13 norisoprenoid composition of the wine was evaluated from the chromatographic profile and specific m/z fragments, without complete chromatographic separation of its components. The second methodology, used as the reference method, was also based on HS-SPME/GC-qMS-SIM, with GC conditions that provided adequate chromatographic resolution of the wine components. For quantification purposes, external calibration curves were constructed with β-ionone, with determination coefficients (r2) of 0.9968 (RSD 12.51%) and 0.9940 (RSD 1.08%) for the rapid and reference methods, respectively. Low detection limits (1.57 and 1.10 μg L-1) were observed. These methodologies were applied to seventeen white and red table wines. Two vitispirane isomers (158-1529 μg L-1) and 1,1,6-trimethyl-1,2-dihydronaphthalene (TDN) (6.42-39.45 μg L-1) were quantified.
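The external calibration step described above can be sketched as follows; the standard concentrations, peak areas, and blank standard deviation below are illustrative values, not the thesis data, and the 3σ/slope detection-limit rule is one common convention.

```python
# Sketch of external calibration: a linear fit of detector response vs.
# beta-ionone standard concentration, plus a detection limit estimated
# as 3 * (sd of blank) / slope. All numbers are invented.

def linear_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b, r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    return a, b, r2

conc = [5.0, 10.0, 25.0, 50.0, 100.0]    # standard levels, ug/L (illustrative)
area = [1040, 2110, 5080, 10150, 20300]  # illustrative peak areas

a, b, r2 = linear_fit(conc, area)
blank_sd = 70.0                 # assumed sd of the blank signal
lod = 3 * blank_sd / b          # detection limit, ug/L
unknown_area = 4000.0
c_unknown = (unknown_area - a) / b  # quantified concentration of an unknown
print(round(r2, 4), round(lod, 2), round(c_unknown, 1))
```

The same fit serves both reported figures of merit: the determination coefficient comes from the residuals of the regression, and the detection limit from the blank noise scaled by the calibration slope.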
The data obtained for the vitispirane isomers and TDN by the two methods were highly correlated (r2 of 0.9756 and 0.9630, respectively). A rapid methodology for establishing the varietal volatile profile of Vitis vinifera L. cv. 'Fernão-Pires' (FP) white wines by headspace solid-phase microextraction combined with comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry (HS-SPME/GCxGC-TOFMS) was also developed. Monovarietal wines from different harvests, Appellations, and producers were analysed. The study focused on the volatiles that appear significant to the varietal character, namely mono- and sesquiterpenic compounds and C13 norisoprenoids. Two-dimensional chromatographic spaces containing the varietal compounds, using the m/z fragments 93, 121, 161, 175 and 204, were established as follows: 1tR = 255-575 s, 2tR = 0.424-1.840 s for monoterpenoids; 1tR = 555-685 s, 2tR = 0.528-0.856 s for C13 norisoprenoids; and 1tR = 695-950 s, 2tR = 0.520-0.960 s for sesquiterpenic compounds. For the three chemical groups under study, 45 of a total of 170 compounds were determined in all wines, allowing the "varietal volatile profile" of FP wine to be defined. Among these compounds, 15 were detected for the first time in FP wines. This study proposes an HS-SPME/GCxGC-TOFMS based methodology, combined with a classification-reference sample, for rapid assessment of the varietal volatile profile of wines. This approach is very useful for eliminating the majority of the non-terpenic and non-C13 norisoprenoid compounds, allowing the definition of a two-dimensional chromatographic space containing the target compounds, simplifying the data relative to the original, and reducing the time of analysis. The presence of sesquiterpenic compounds in Vitis vinifera L.
related products, which have been assigned several biological properties, prompted us to investigate the antioxidant, antiproliferative and hepatoprotective activities of selected sesquiterpenic compounds. First, the antiradical capacity of trans,trans-farnesol, cis-nerolidol, α-humulene and guaiazulene was evaluated using chemical (DPPH• and hydroxyl radicals) and biological (Caco-2 cells) models. Guaiazulene (IC50 = 0.73 mM) was the sesquiterpene with the highest scavenging capacity against DPPH•, while trans,trans-farnesol (IC50 = 1.81 mM) and cis-nerolidol (IC50 = 1.48 mM) were more active towards hydroxyl radicals. All compounds except α-humulene, at non-cytotoxic levels (≤ 1 mM), were able to protect Caco-2 cells from oxidative stress induced by tert-butyl hydroperoxide. The compounds under study were also evaluated as antiproliferative agents. Guaiazulene and cis-nerolidol arrested the cell cycle in the S-phase more effectively than trans,trans-farnesol and α-humulene, the latter being almost inactive. The relative hepatoprotective effect of fifteen sesquiterpenic compounds, with different chemical structures and commonly found in plants and plant-derived foods and beverages, was then assessed. Endogenous lipid peroxidation and lipid peroxidation induced with tert-butyl hydroperoxide were evaluated in liver homogenates from Wistar rats. With the exception of α-humulene, all the sesquiterpenic compounds under study (1 mM) were effective in reducing malondialdehyde levels in both endogenous and induced lipid peroxidation, by up to 35% and 70%, respectively. The 3D-QSAR models developed, relating hepatoprotective activity to molecular properties, showed good fit (R2LOO > 0.819) and good predictive power (Q2 > 0.950 and SDEP < 2%) for both models.
A network of effects associated with structural and chemical features of sesquiterpenic compounds, such as shape, branching, symmetry, and the presence of electronegative fragments, can modulate the hepatoprotective activity observed for these compounds. In conclusion, this study allowed the development of rapid and in-depth methods for the assessment of varietal volatile compounds that might have a positive impact on sensorial and health attributes related to Vitis vinifera L. These approaches can be extended to the analysis of other related food matrices, including grapes and musts, among others. In addition, the results of the in vitro assays open promising perspectives for the use of sesquiterpenic compounds with chemical structures similar to those studied in the present work as antioxidant, hepatoprotective and antiproliferative agents, meeting current challenges related to diseases of modern civilization.

Abstract:

This document presents a tool that automatically gathers data provided by real energy markets and generates scenarios, capturing and improving market players' profiles and strategies through knowledge discovery in databases supported by artificial intelligence techniques, data mining algorithms and machine learning methods. It provides the means for generating scenarios with different dimensions and characteristics, ensuring the representation of real and adapted markets and their participating entities. The scenarios generator module enhances the MASCEM (Multi-Agent Simulator of Competitive Electricity Markets) simulator, providing a more effective tool for decision support. The implementation of the proposed module enables researchers and electricity market participants to analyze data, create realistic scenarios and experiment with them. On the other hand, applying knowledge discovery techniques to real data also allows the improvement of MASCEM agents' profiles and strategies, resulting in a better representation of real market players' behavior. This work aims to improve the comprehension of electricity markets and the interactions among the involved entities through adequate multi-agent simulation.

Abstract:

BACKGROUND: American College of Cardiology/American Heart Association guidelines for the diagnosis and management of heart failure recommend investigating exacerbating conditions such as thyroid dysfunction, but without specifying the impact of different thyroid-stimulating hormone (TSH) levels. Limited prospective data exist on the association between subclinical thyroid dysfunction and heart failure events. METHODS AND RESULTS: We performed a pooled analysis of individual participant data using all available prospective cohorts with thyroid function tests and subsequent follow-up of heart failure events. Individual data on 25 390 participants with 216 248 person-years of follow-up were supplied from 6 prospective cohorts in the United States and Europe. Euthyroidism was defined as TSH of 0.45 to 4.49 mIU/L, subclinical hypothyroidism as TSH of 4.5 to 19.9 mIU/L, and subclinical hyperthyroidism as TSH <0.45 mIU/L, the latter two with normal free thyroxine levels. Among 25 390 participants, 2068 (8.1%) had subclinical hypothyroidism and 648 (2.6%) had subclinical hyperthyroidism. In age- and sex-adjusted analyses, risks of heart failure events were increased with both higher and lower TSH levels (P for quadratic pattern <0.01); the hazard ratio was 1.01 (95% confidence interval, 0.81-1.26) for TSH of 4.5 to 6.9 mIU/L, 1.65 (95% confidence interval, 0.84-3.23) for TSH of 7.0 to 9.9 mIU/L, 1.86 (95% confidence interval, 1.27-2.72) for TSH of 10.0 to 19.9 mIU/L (P for trend <0.01) and 1.31 (95% confidence interval, 0.88-1.95) for TSH of 0.10 to 0.44 mIU/L and 1.94 (95% confidence interval, 1.01-3.72) for TSH <0.10 mIU/L (P for trend=0.047). Risks remained similar after adjustment for cardiovascular risk factors. CONCLUSION: Risks of heart failure events were increased with both higher and lower TSH levels, particularly for TSH ≥10 and <0.10 mIU/L.
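The TSH cut-offs used in the pooled analysis can be expressed as a simple classification rule; the function name and return labels below are illustrative, not from the study.

```python
# Sketch of the thyroid-status definitions above: euthyroidism for TSH
# 0.45-4.49 mIU/L, subclinical hypothyroidism for 4.5-19.9 mIU/L and
# subclinical hyperthyroidism for TSH < 0.45 mIU/L, the latter two
# requiring normal free thyroxine. Naming is illustrative.

def thyroid_status(tsh_miu_l, free_t4_normal=True):
    """Classify thyroid status from a TSH value in mIU/L."""
    if 0.45 <= tsh_miu_l <= 4.49:
        return "euthyroidism"
    if 4.5 <= tsh_miu_l <= 19.9 and free_t4_normal:
        return "subclinical hypothyroidism"
    if tsh_miu_l < 0.45 and free_t4_normal:
        return "subclinical hyperthyroidism"
    return "overt or unclassified dysfunction"

print(thyroid_status(2.0))    # euthyroidism
print(thyroid_status(12.0))   # subclinical hypothyroidism
```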

Abstract:

In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems which has been applied by using the management system TOSCANA in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin.
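The central object of Formal Concept Analysis, the formal concept, can be computed by brute force for a tiny context; the objects, attributes and incidence relation below are invented for illustration, not taken from the TOSCANA application.

```python
# Minimal Formal Concept Analysis sketch: a formal concept is a pair
# (extent, intent) such that the extent is exactly the set of objects
# sharing all attributes of the intent, and vice versa. Illustrative data.
from itertools import combinations

objects = ["o1", "o2", "o3"]
attributes = ["a", "b", "c"]
incidence = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o2", "c"), ("o3", "b")}

def intent(objs):
    """Attributes shared by all given objects."""
    return frozenset(m for m in attributes if all((g, m) in incidence for g in objs))

def extent(attrs):
    """Objects possessing all given attributes."""
    return frozenset(g for g in objects if all((g, m) in incidence for m in attrs))

concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        b = intent(objs)   # derive the common attributes
        a = extent(b)      # close: all objects having them
        concepts.add((a, b))

for a, b in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(a), sorted(b))
```

Enumerating all object subsets and closing each one yields every concept of the context; these concepts, ordered by extent inclusion, form the concept lattice that a conceptual information system navigates.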

Abstract:

Compositional data naturally arises from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis, to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes available very powerful statistical facilities at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little-used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.

Abstract:

Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop through standard methodology, such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
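The perturbation operation mentioned above, the natural "addition" on the simplex used to state hypotheses of perturbational change, can be sketched in a few lines; the compositions below are illustrative, not the limestone data.

```python
# Perturbation on the simplex: componentwise multiplication followed by
# closure back to a unit-sum composition. Illustrative values only.

def closure(x):
    s = sum(x)
    return [xi / s for xi in x]

def perturb(x, p):
    """Perturbation x (+) p: componentwise product, reclosed."""
    return closure([xi * pi for xi, pi in zip(x, p)])

comp = closure([0.6, 0.3, 0.1])
change = closure([1.0, 2.0, 1.0])  # doubles the relative weight of part 2
print([round(v, 4) for v in perturb(comp, change)])
```

A hypothesis of "specific perturbational change" asserts that two groups of compositions differ by a fixed perturbation of this kind.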

Abstract:

The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items are items that require the respondent to make between-scale comparisons within each item. The selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items every respondent is allotted the same total score, which each can distribute differently over the scales. This type of response format therefore yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data is also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline on when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year students in psychology according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of the second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation.

Abstract:

In any discipline where uncertainty and variability are present, it is important to have principles which are accepted as inviolate and which should therefore drive statistical modelling, statistical analysis of data and any inferences from such an analysis. Despite the fact that two such principles have existed over the last two decades, and from these a sensible, meaningful methodology has been developed for the statistical analysis of compositional data, the application of inappropriate and/or meaningless methods persists in many areas of application. This paper identifies at least ten common fallacies and confusions in compositional data analysis with illustrative examples, and provides readers with necessary, and hopefully sufficient, arguments to persuade the culprits why and how they should amend their ways.
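One of the oldest of the fallacies alluded to above is applying ordinary product-moment correlation to closed data: closing independent positive variables to a constant sum forces a negative bias into the correlations. A small simulation makes the point (the pseudo-random data is invented, not from the paper):

```python
# Demonstration of spurious correlation under closure: three independent
# uniform variables are closed to sum 1, after which two parts show a
# strong negative correlation that reflects the constraint, not the data.
import random

random.seed(1)
n = 2000
raw = [[random.uniform(1, 2) for _ in range(3)] for _ in range(n)]
closed = [[v / sum(row) for v in row] for row in raw]

def corr(a, b):
    """Pearson product-moment correlation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

x1 = [row[0] for row in closed]
x2 = [row[1] for row in closed]
print(round(corr(x1, x2), 3))  # clearly negative despite independent parts
```

For symmetric three-part closed data the induced correlation is around -0.5, which is why the log-ratio methodology rather than raw correlation is advocated.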

Abstract:

In an earlier investigation (Burger et al., 2000) five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and the evaluation of the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcanogenic, a hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used together with the cosine-theta coefficient as similarity measure. During the last decade considerable progress in compositional data analysis was made, and many case studies were published using new tools for the exploratory analysis of these data. It therefore makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors, and visual interpretation of the factor scores would lead to a revision of the earlier results and to answers to open questions. In this paper we follow the lines of a paper by R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transforming the components and visualizing the factor scores in a spatial context: the compositional factors will be plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process. Key words: compositional data analysis, biplot, deep sea sediments
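The ilr transformation applied above maps a D-part composition to D-1 unconstrained coordinates; for a 3-part composition one standard balance-type basis gives the sketch below (the sediment proportions are invented, not the core data):

```python
# Sketch of the ilr transformation for a 3-part composition, using one
# conventional balance basis: z1 contrasts parts 1 and 2, z2 contrasts
# their geometric mean with part 3. The sample values are illustrative.
import math

def ilr(x):
    """ilr coordinates of a 3-part composition (balance basis)."""
    z1 = math.sqrt(1.0 / 2.0) * math.log(x[0] / x[1])
    z2 = math.sqrt(2.0 / 3.0) * math.log(math.sqrt(x[0] * x[1]) / x[2])
    return (z1, z2)

sample = (0.70, 0.20, 0.10)  # e.g. three hypothetical sediment sources
print(tuple(round(z, 3) for z in ilr(sample)))
```

Because ilr coordinates are built from log-ratios, they are invariant under rescaling of the raw components, which is what permits ordinary factor plots versus depth afterwards.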

Abstract:

Class exercise to analyse qualitative data, based on a set of transcripts and augmented by videos from a web site. Discussion covers not only how the data is coded, but also interview bias and the dimensions of analysis. Designed as an introduction.

Abstract:

This article reflects on key methodological issues emerging from children and young people's involvement in data analysis processes. We outline a pragmatic framework illustrating different approaches to engaging children, using two case studies of children's experiences of participating in data analysis. The article highlights methods of engagement and important issues such as the balance of power between adults and children, training, support, ethical considerations, time and resources. We argue that involving children in data analysis processes can have several benefits, including enabling a greater understanding of children's perspectives and helping to prioritise children's agendas in policy and practice. (C) 2007 The Author(s). Journal compilation (C) 2007 National Children's Bureau.

Abstract:

Over the last decade, a number of new methods of population genetic analysis based on likelihood have been introduced. This review describes and explains the general statistical techniques that have recently been used, and discusses the underlying population genetic models. Experimental papers that use these methods to infer human demographic and phylogeographic history are reviewed. It appears that the use of likelihood has hitherto had little impact in the field of human population genetics, which is still primarily driven by more traditional approaches. However, with the current uncertainty about the effects of natural selection, population structure and ascertainment of single-nucleotide polymorphism markers, it is suggested that likelihood-based methods may have a greater impact in the future.
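The likelihood-based reasoning the review describes can be illustrated with the simplest population-genetic case: the binomial likelihood of an allele frequency given observed allele counts. The counts below are invented, and the grid search simply confirms the analytic maximum k/n.

```python
# Minimal likelihood example: log-likelihood of allele frequency p for
# an allele seen k times among n sampled chromosomes, maximised over a
# grid. The counts are illustrative.
import math

k, n = 37, 100  # allele observed k times in n sampled chromosomes

def log_lik(p):
    """Binomial log-likelihood up to a constant."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=log_lik)
print(p_hat)  # grid maximum, matching the analytic MLE k/n = 0.37
```

The methods reviewed extend this idea to likelihoods over genealogies and demographic parameters, where the maximisation must be done numerically or by Monte Carlo.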

Abstract:

The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are complex, with a large variability of components, requirements and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, the systems are developed using eXtensible Markup Language (XML) technology. Communication between clients and servers uses remote procedure calls based on XML (XML-RPC technology). The integration of the Java language, XML and XML-RPC technologies makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application provides simple graphical user interface (GUI) access. The TCABR tokamak team, in collaboration with the IPFN (Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa), is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on "Joint Research Using Small Tokamaks". (C) 2010 Elsevier B.V. All rights reserved.
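The paper's access layer is written in Java; as a language-agnostic sketch, Python's standard library shows what an XML-RPC call looks like on the wire, which is what makes such a layer independent of operating system and architecture. The method name and parameters are hypothetical, not TCABR's actual API.

```python
# XML-RPC encodes a method call and its parameters as an XML document,
# which any client or server language can produce and parse. The
# "getSignal" method and its arguments are invented for illustration.
import xmlrpc.client

# Encode a hypothetical data-retrieval call as an XML-RPC request body.
request = xmlrpc.client.dumps(("shot_4242", "density"), methodname="getSignal")
print(request)

# A server-side library decodes it back into parameters and method name.
params, method = xmlrpc.client.loads(request)
print(method, params)
```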

Abstract:

This work presents a novel approach for increasing the recognition power of Multiscale Fractal Dimension (MFD) techniques applied to image classification. The proposal uses Functional Data Analysis (FDA) to enhance the precision of the MFD technique, yielding a more representative descriptor vector capable of recognizing and characterizing objects in an image more precisely. FDA is applied to signatures extracted with the Bouligand-Minkowski MFD technique in order to generate a descriptor vector from them. To evaluate the improvement obtained, an experiment using two datasets of objects was carried out: a dataset of character shapes (26 characters of the Latin alphabet) carrying different levels of controlled noise, and a dataset of fish image contours. A comparison with the well-known Fourier and wavelet descriptor methods was performed to verify the performance of the FDA method. The descriptor vectors were submitted to the Linear Discriminant Analysis (LDA) classification method, and the correctness rates of the classification process were compared among the descriptor methods. The results demonstrate that FDA outperforms the literature methods (Fourier and wavelets) in processing the information extracted from the MFD signature. In this way, the proposed method can be considered an interesting choice for pattern recognition and image classification using fractal analysis.
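The Bouligand-Minkowski estimator itself requires distance transforms of the shape; as a simpler stand-in that illustrates the same underlying idea, a log-log slope used as a shape descriptor, the sketch below estimates a box-counting dimension of a discrete curve. The diagonal line is a toy shape, so the slope should come out near 1.

```python
# Box-counting dimension sketch: count occupied boxes at several scales
# and fit the slope of log N(s) versus log(1/s). The diagonal line is a
# toy input; real use would take a segmented object contour.
import math

size = 256
points = {(i, i) for i in range(size)}  # a diagonal line of pixels

def box_count(points, s):
    """Number of s x s boxes needed to cover the point set."""
    return len({(x // s, y // s) for x, y in points})

scales = [1, 2, 4, 8, 16]
xs = [math.log(1.0 / s) for s in scales]
ys = [math.log(box_count(points, s)) for s in scales]

# Least-squares slope of log N(s) vs log(1/s) = estimated dimension.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(round(slope, 3))
```

In the MFD setting, the multiscale signature (the whole log-log curve rather than a single slope) is what FDA then treats as a functional observation.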