960 results for Mètodes experimentals
Abstract:
In this work, a comparative evaluation was carried out of the results obtainable with the SpectraClassifier 1.0 (SC) software developed in our research group, comparing it with SPSS, a standard statistical software package, on a problem of classifying human brain tumours from proton magnetic resonance spectroscopy (1H-MRS) data. The interest of this comparative evaluation lies in documenting the results obtained with the two systems as regards the correctness of the classifications, as well as in weighing the versatility and usability of the two software packages for a specific application of interest to the work of the GABRMN. Data from two multicentre European projects (INTERPRET and eTumour) in which we participated were used. The tumour classes used (from a total of 217 patients) were those that are most common from an epidemiological point of view: glioblastoma multiforme, metastasis, grade II astrocytomas, grade II oligodendrogliomas, grade II oligoastrocytomas and low-grade meningiomas. With the data from these patients, classifiers based on linear discriminant analysis (LDA) were designed, evaluated with different mathematical methods and tested with independent data. The results were satisfactory, with SC yielding more robust results on independent data than the classification performed with SPSS.
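The classifiers described above rest on Fisher's linear discriminant. A minimal two-class sketch on synthetic data (purely illustrative; this is not the SpectraClassifier or SPSS implementation, and the data are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic two-class data standing in for spectral features
# (illustrative only; not the INTERPRET/eTumour data).
n, d = 100, 5
class0 = rng.normal(0.0, 1.0, size=(n, d))
class1 = rng.normal(1.5, 1.0, size=(n, d))

# Fisher's LDA: project onto w = pooled_cov^-1 (mu1 - mu0) and
# threshold at the projected midpoint of the class means.
mu0, mu1 = class0.mean(axis=0), class1.mean(axis=0)
pooled = 0.5 * (np.cov(class0, rowvar=False) + np.cov(class1, rowvar=False))
w = np.linalg.solve(pooled, mu1 - mu0)
threshold = 0.5 * (mu0 + mu1) @ w

def predict(x):
    return (x @ w > threshold).astype(int)

# Balanced accuracy on the training sample.
acc = 0.5 * ((predict(class0) == 0).mean() + (predict(class1) == 1).mean())
print(acc)
```

In practice the evaluation would use the independent test data the abstract mentions, not the training sample.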
Abstract:
A graphical processing unit (GPU) is a hardware device normally used to manipulate computer memory for the display of images. GPU computing is the practice of using a GPU device for scientific or general purpose computations that are not necessarily related to the display of images. Many problems in econometrics have a structure that allows for successful use of GPU computing. We explore two examples. The first is simple: repeated evaluation of a likelihood function at different parameter values. The second is a more complicated estimator that involves simulation and nonparametric fitting. We find speedups from 1.5 up to 55.4 times, compared to computations done on a single CPU core. These speedups can be obtained with very little expense, energy consumption, and time dedicated to system maintenance, compared to equivalent performance solutions using CPUs. Code for the examples is provided.
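The first example, repeated evaluation of a likelihood at many parameter values, is embarrassingly parallel. A minimal CPU sketch of that pattern in NumPy (illustrative only; the paper's GPU code is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=1000)  # simulated sample

# Grid of candidate means; each row of the broadcast below is one
# independent likelihood evaluation, the unit of work a GPU would
# distribute across threads.
mus = np.linspace(0.0, 4.0, 401)

# Gaussian log-likelihood (sigma fixed at 1) evaluated at all mus at once.
ll = -0.5 * ((data[None, :] - mus[:, None]) ** 2).sum(axis=1) \
     - 0.5 * len(data) * np.log(2 * np.pi)

best_mu = mus[np.argmax(ll)]
print(best_mu)  # the grid point nearest the sample mean
```

On a GPU the same broadcast structure maps naturally onto thousands of threads, which is what produces the reported speedups.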
Abstract:
Geochemical data derived from the whole or partial analysis of various geologic materials represent a composition of mineralogies or solute species. Minerals are composed of structured relationships between cations and anions which, through atomic and molecular forces, keep the elements bound in specific configurations. The chemical compositions of minerals have specific relationships that are governed by these molecular controls. In the case of olivine, there is a well-defined relationship between Mn-Fe-Mg and Si. Balances between the principal elements defining olivine composition and other significant constituents of the composition (Al, Ti) have been defined, resulting in a near-linear relationship between the logarithmic relative proportion of Si versus (Mg, Mn, Fe) and of Mg versus (Mn, Fe), which is typically described but poorly illustrated in the simplex. The present contribution corresponds to ongoing research, which attempts to relate stoichiometry and geochemical data using compositional geometry. We describe here the approach by which stoichiometric relationships based on mineralogical constraints can be accounted for in the space of simplicial coordinates, using olivines as an example. Further examples for other mineral types (plagioclases and more complex minerals such as clays) are needed. Issues that remain to be dealt with include the reduction of a bulk chemical composition of a rock comprised of several minerals, from which appropriate balances can be used to describe the composition in a realistic mineralogical framework. The overall objective of our research is to answer the question: in the cases where the mineralogy is unknown, are there suitable proxies that can be substituted?
Key words: Aitchison geometry, balances, mineral composition, oxides
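The Si-versus-(Mg, Mn, Fe) and Mg-versus-(Mn, Fe) balances mentioned above can be written as log-contrasts. A minimal sketch, assuming hypothetical molar proportions (invented numbers, not data from this work):

```python
import numpy as np

# Hypothetical olivine composition in molar proportions (illustrative):
# Si, Mg, Fe, Mn, closed to 1.
comp = np.array([0.33, 0.55, 0.10, 0.02])
si, mg, fe, mn = comp

# Balance contrasting Si against the (Mg, Mn, Fe) cation group: the log
# of the ratio between Si and the geometric mean of the group, with the
# usual isometric log-ratio normalising constant.
b_si = np.sqrt(3.0 / 4.0) * np.log(si / (mg * fe * mn) ** (1.0 / 3.0))

# Second balance: Mg against (Mn, Fe).
b_mg = np.sqrt(2.0 / 3.0) * np.log(mg / (fe * mn) ** 0.5)

print(b_si, b_mg)
```

For stoichiometric olivine, (Mg, Fe, Mn)2SiO4, the cation proportions are constrained, which is what produces the near-linear behaviour of these balances described in the abstract.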
Abstract:
Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as "functional data" and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all, we adapt the well-known spline smoothing techniques in order to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory; clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
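The pipeline above (log-ratio representation, smoothing, clustering) can be sketched on simulated data. This is an illustrative toy, with a polynomial fit standing in for the paper's spline smoothing and an off-the-shelf hierarchical algorithm:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)

def simulate(level, n):
    """Noisy 2-part compositional trajectories around a smooth trend."""
    trend = level + 0.5 * np.sin(2 * np.pi * t)      # log-ratio scale
    x = trend + 0.2 * rng.normal(size=(n, t.size))   # measurement noise
    p1 = 1.0 / (1.0 + np.exp(-x))                    # back to the simplex
    return np.stack([p1, 1.0 - p1], axis=-1)

trajs = np.concatenate([simulate(-1.0, 10), simulate(+1.0, 10)])

# Step 1: express each trajectory in log-ratio (ilr) coordinates.
coords = np.log(trajs[..., 0] / trajs[..., 1]) / np.sqrt(2.0)

# Step 2: smooth each coordinate curve (cubic polynomial fit as a
# simple stand-in for spline smoothing).
smooth = np.array([np.polyval(np.polyfit(t, c, 3), t) for c in coords])

# Step 3: cluster the smoothed curves; plain L2 distance between curves
# captures differences in both shape and level.
labels = fcluster(linkage(pdist(smooth), method="ward"), 2,
                  criterion="maxclust")
print(labels)
```

A shape-only metric, as proposed in the abstract, could be obtained by centring each smoothed curve before computing distances.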
Abstract:
How could the international community commit itself, in a globalized world, to the resolution of conflicts? In the twenty-first century, this necessarily involves questioning the methods traditionally employed for conflict resolution and security (new scenarios call for new strategies). These take shape in the doctrines of prevention, transformation, conflict resolution, crisis management, and multidimensional/collective security. Translated to Europe, the implementation of common policies in conflict zones grows more urgent every day. There is no collective external action when a crisis breaks out, because in the end the decisions of the most powerful states always prevail. It is this very decision-making process, anchored in realist positions, that blocks or delays any attempt at a common reaction. Meanwhile, the violence continues and we witness, powerless, war scenarios and escalations under the trapped gaze of the West. The EU faces an ever more pressing challenge: achieving a globalizing action on human rights, because in the face of economic globalization there arises the need to counteract its effects by also globalizing human rights. The European responses and capacities in the face of the outbreak of a crisis should be reviewed.
Abstract:
In CoDaWork'05, we presented an application of discriminant function analysis (DFA) to four different compositional datasets and modelled the first canonical variable using a segmented regression model, solely based on an observation about the scatter plots. In this paper, multiple linear regressions are applied to different datasets to confirm the validity of our proposed model. In addition to dating the unknown tephras by calibration, as discussed previously, another method is proposed for mapping the unknown tephras onto samples of the reference set or onto missing samples in between consecutive reference samples. The application of these methodologies is demonstrated with both simulated and real datasets. This newly proposed methodology provides an alternative, more acceptable approach for geologists, as their focus is on mapping the unknown tephra to relevant eruptive events rather than estimating the age of the unknown tephra.
Key words: tephrochronology; segmented regression
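A segmented (two-piece linear) regression of the kind mentioned above can be fitted by grid search over the breakpoint. A minimal sketch on simulated data (invented numbers, not the tephra measurements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data with a slope change at x = 5 (illustrative only).
x = np.linspace(0.0, 10.0, 200)
y = np.where(x < 5.0, x, 5.0 + 3.0 * (x - 5.0)) + rng.normal(0, 0.3, x.size)

def fit_segmented(x, y, bp):
    """Least-squares fit of a continuous two-piece linear model with a
    breakpoint at bp; returns the residual sum of squares."""
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - bp, 0.0)])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

# Grid search over candidate breakpoints; keep the one with lowest RSS.
candidates = np.linspace(1.0, 9.0, 161)
best_bp = min(candidates, key=lambda bp: fit_segmented(x, y, bp))
print(best_bp)
```

The hinge term `max(x - bp, 0)` keeps the fitted line continuous at the breakpoint, which is the usual parameterisation of segmented regression.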
Abstract:
In March of 2004, the Observatory of European Foreign Policy published a special monograph about Spain in Europe (1996-2004) in digital format. The objective of the monograph was to analyse Spain’s foreign policy agenda and strategy during the period of José María Aznar’s presidency. As the title suggests, one of the initial suppositions of the analysis is the Europeanization of Spanish foreign activities. Is that how it was? Did Aznar’s Spain see the world and relate to it through Brussels? The publication was well received, considering the number of visits received and above all the institutions which asked to link the publication to their web pages. Among these, the EUobserver published the introduction to the piece in English titled Aznar: thinking locally, acting in Europe (described by the EUobserver as a paper of utmost importance). The fact that the elections were held three days after the tragic events of the 11th of March dramatically increased interest in Spain and the implications for Europe. This publication is the second of its type, in this case analysing the period of the Zapatero government (2004-2008). Once again the starting premise (the Europeanization of the agenda and the methods employed) has been considered by the analysts. And once again the articles collected in this publication serve to “triangulate” the analysis. Spain and Europe are two vertices (more or less distant, in essence and in form) which the authors handle in their analysis of the case (third vertex).
Abstract:
The first discussion of compositional data analysis is attributable to Karl Pearson, in 1897. However, notwithstanding the recent developments on the algebraic structure of the simplex, more than twenty years after Aitchison's idea of log-transformations of closed data, the scientific literature is again full of statistical treatments of this type of data using traditional methodologies. This is particularly true in environmental geochemistry where, besides the problem of closure, the spatial structure (dependence) of the data has to be considered. In this work we propose the use of log-contrast values, obtained by a simplicial principal component analysis, as indicators of given environmental conditions. The investigation of the log-contrast frequency distributions allows pointing out the statistical laws able to generate the values and to govern their variability. The changes, if compared, for example, with the mean values of the random variables assumed as models, or with other reference parameters, allow defining monitors to be used to assess the extent of possible environmental contamination. A case study on running and ground waters from the Chiavenna Valley (Northern Italy), using Na+, K+, Ca2+, Mg2+, HCO3−, SO42− and Cl− concentrations, will be illustrated.
Abstract:
The identification of compositional changes in fumarolic gases of active and quiescent volcanoes is one of the most important targets in monitoring programs. From a general point of view, many systematic (often cyclic) and random processes control the chemistry of gas discharges, making it difficult to produce a convincing mathematical-statistical modelling. Changes in the chemical composition of volcanic gases sampled at Vulcano Island (Aeolian Arc, Sicily, Italy) from eight different fumaroles located in the northern sector of the summit crater (La Fossa) have been analysed by considering their dependence on time in the period 2000-2007. Each intermediate chemical composition has been considered as potentially derived from the contribution of the two temporal extremes represented by the 2000 and 2007 samples, respectively, by using inverse modelling methodologies for compositional data. Data pertaining to fumaroles F5 and F27, located on the rim and in the inner part of La Fossa crater, respectively, have been used to achieve the proposed aim. The statistical approach has allowed us to highlight the presence of random and non-random fluctuations, features useful to understand how the volcanic system works, opening new perspectives in sampling strategies and in the evaluation of the natural risk related to a quiescent volcano.
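The idea of expressing an intermediate sample as a mixture of the two temporal end-members can be illustrated with a least-squares estimate of the mixing proportion. A toy sketch with invented three-part compositions (not the Vulcano data, and a strong simplification of the inverse-modelling methodology used in the work):

```python
import numpy as np

A = np.array([0.70, 0.20, 0.10])   # hypothetical "2000" end-member
B = np.array([0.40, 0.45, 0.15])   # hypothetical "2007" end-member
obs = 0.3 * A + 0.7 * B            # synthetic intermediate sample

# Least-squares estimate of alpha in the mixing model
# obs = alpha * A + (1 - alpha) * B.
diff = A - B
alpha = (obs - B) @ diff / (diff @ diff)
print(alpha)  # recovers the mixing proportion 0.3
```

Real gas analyses carry noise, so the estimated proportion would scatter around the mixing line, and it is that scatter (random versus systematic) that the abstract's statistical analysis addresses.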
Abstract:
Pounamu (NZ jade), or nephrite, is a protected mineral in its natural form following the transfer of ownership back to Ngai Tahu under the Ngai Tahu (Pounamu Vesting) Act 1997. Any theft of nephrite is prosecutable under the Crimes Act 1961. Scientific evidence is essential in cases where origin is disputed. A robust method for discrimination of this material through the use of elemental analysis and compositional data analysis is required. Initial studies have characterised the variability within a given nephrite source. This has included investigation of both in situ outcrops and alluvial material. Methods for the discrimination of two geographically close nephrite sources are being developed.
Key words: forensic, jade, nephrite, laser ablation, inductively coupled plasma mass spectrometry, multivariate analysis, elemental analysis, compositional data analysis
Abstract:
Hydrogeological research usually includes some statistical studies devised to elucidate the mean background state, characterise relationships among different hydrochemical parameters, and show the influence of human activities. These goals are achieved either by means of a statistical approach or by mixing models between end-members. Compositional data analysis has proved to be effective with the first approach, but there is no commonly accepted solution to the end-member problem in a compositional framework. We present here a possible solution based on factor analysis of compositions, illustrated with a case study. We find two factors on the compositional biplot by fitting two non-centred orthogonal axes to the most representative variables. Each of these axes defines a subcomposition, grouping those variables that lie nearest to it. With each subcomposition a log-contrast is computed and rewritten as an equilibrium equation. These two factors can be interpreted as the isometric log-ratio (ilr) coordinates of three hidden components, which can be plotted in a ternary diagram. These hidden components might be interpreted as end-members.
We have analysed 14 molarities at 31 sampling stations all along the Llobregat River and its tributaries, with a monthly measure during two years. We have obtained a biplot with 57% of the total variance explained, from which we have extracted two factors: factor G, reflecting geological background enhanced by potash mining; and factor A, essentially controlled by urban and/or farming wastewater. Graphical representation of these two factors allows us to identify three extreme samples, corresponding to pristine waters, potash mining influence and urban sewage influence. To confirm this, we have available analyses of diffuse and widespread point sources identified in the area: springs, potash mining lixiviates, sewage, and fertilisers. Each of these sources shows a clear link with one of the extreme samples, except fertilisers, due to the heterogeneity of their composition. This approach is a useful tool to distinguish end-members and characterise them, an issue generally difficult to solve. It is worth noting that the end-member composition cannot be fully estimated but only characterised through log-ratio relationships among components. Moreover, the influence of each end-member in a given sample must be evaluated relative to the other samples. These limitations are intrinsic to the relative nature of compositional data.
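The reading of two factors as ilr coordinates of three hidden components can be illustrated with the standard three-part ilr transform and its inverse. A minimal sketch (the basis choice and the numbers are illustrative, not the Llobregat factors):

```python
import numpy as np

def ilr_3part(p):
    """Two ilr coordinates of a 3-part composition (one standard basis)."""
    p = np.asarray(p, dtype=float)
    z1 = np.log(p[0] / p[1]) / np.sqrt(2.0)
    z2 = np.sqrt(2.0 / 3.0) * np.log(np.sqrt(p[0] * p[1]) / p[2])
    return np.array([z1, z2])

def ilr_inv_3part(z):
    """Map two ilr coordinates back to a 3-part composition."""
    # Invert via the clr representation of the same orthonormal basis.
    c1 = z[0] / np.sqrt(2.0) + z[1] / np.sqrt(6.0)
    c2 = -z[0] / np.sqrt(2.0) + z[1] / np.sqrt(6.0)
    c3 = -2.0 * z[1] / np.sqrt(6.0)
    e = np.exp([c1, c2, c3])
    return e / e.sum()            # close to the simplex

comp = np.array([0.6, 0.3, 0.1])  # a hypothetical 3-part composition
z = ilr_3part(comp)
print(ilr_inv_3part(z))           # recovers [0.6, 0.3, 0.1]
```

Because the map is invertible, any point in the plane of the two factors corresponds to exactly one point in the ternary diagram of the three hidden components, which is what allows the end-member interpretation.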
Abstract:
This project covers the preliminary analysis, requirements analysis and technical design stages of a support system for curricular diversification at a secondary school (IES), following the methods and techniques described in the waterfall life cycle, the classic software life cycle. Since the work goes as far as the design specification of the application, this documentation will serve as a basis for the software developer to implement the application according to the indications and needs specified.
Abstract:
Globalization involves several facility location problems that need to be handled at large scale. Location Allocation (LA) is a combinatorial problem in which the distances among points in the data space matter. Precisely, taking advantage of the distance property of the domain, we exploit the capability of clustering techniques to partition the data space in order to convert an initial large LA problem into several simpler LA problems. In particular, our motivating problem involves a huge geographical area that can be partitioned under overall conditions. We present different types of clustering techniques and then perform a cluster analysis over our dataset in order to partition it. After that, we solve the LA problem by applying a simulated annealing algorithm to the clustered and non-clustered data in order to work out how profitable the clustering is and which of the presented methods is the most suitable.
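The simulated-annealing step can be sketched on a synthetic p-median instance (the clustering pre-partition is omitted; all data and parameters are illustrative, not the authors' geographical dataset):

```python
import numpy as np

rng = np.random.default_rng(3)
points = rng.uniform(0, 100, size=(200, 2))   # synthetic demand points

def total_cost(points, facility_idx):
    """Sum over demand points of the distance to the nearest open facility."""
    d = np.linalg.norm(points[:, None, :] - points[facility_idx][None, :, :],
                       axis=2)
    return d.min(axis=1).sum()

def anneal(points, k, steps=2000, t0=50.0):
    """Simulated annealing for the p-median problem: open k facilities at
    demand-point locations, swapping one open site per move."""
    open_idx = list(rng.choice(len(points), size=k, replace=False))
    cost = total_cost(points, open_idx)
    best_idx, best_cost = open_idx, cost
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9     # linear cooling
        cand = open_idx.copy()
        cand[int(rng.integers(k))] = int(rng.integers(len(points)))
        c = total_cost(points, cand)
        # Accept improvements always; worse moves with Metropolis probability.
        if c < cost or rng.random() < np.exp((cost - c) / temp):
            open_idx, cost = cand, c
            if c < best_cost:
                best_idx, best_cost = cand, c
    return best_idx, best_cost

sites, cost = anneal(points, k=5)
print(cost)
```

In the clustered variant described in the abstract, the same annealing routine would be run independently on each cluster's points, shrinking the search space of every subproblem.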
Abstract:
Brown spot of pear (caused by Stemphylium vesicarium) is a fungal disease of great economic importance in central and southern Europe. It is comparable to apple scab and can cause up to 90% losses under high disease pressure. Fungicide treatments have been shown to provide limited control efficacy, and sanitary measures and biological control yield results that we propose as an additional tool to be used in the integration of control methods. The objective of this work is to determine the efficacy of different strains of Bacillus subtilis in controlling Stemphylium vesicarium in trials under controlled conditions.
Abstract:
The aim of the project is to investigate the capabilities of the computational fluid dynamics software FLUENT for simulating transient combustion processes when solids are burned. Since FLUENT does not include any module for the combustion of thin solids, suitable user-defined functions will have to be written to incorporate the equations and boundary conditions relevant to this type of problem. The resulting model will be validated against experimental data for the combustion of cellulose sheets in two-dimensional flames. A sensitivity analysis of the solution will also be carried out by varying the model parameters. Depending on the validation results, the model will be extended to three-dimensional situations.