32 results for Chromatographic columns

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

20.00%

Publisher:

Abstract:

The separation of enantiomers (optical isomers) is very important in many different fields, such as chiral synthesis, biology and, especially, pharmacology. Fast, reliable and sensitive analytical techniques and methods for the separation of enantiomers are therefore needed. This thesis falls within the field of enantiomer separation, specifically the preparation of chiral stationary phases for use in liquid chromatography. In this context, chiral polymeric molecules derived from the amino acid L-proline have been synthesized and characterized; incorporated into silica gel matrices, they can constitute chiral columns for the separation of enantiomers by liquid chromatography. The enantioselective characteristics of these new materials in the separation of chiral molecules have been studied, and they were found to be satisfactorily enantioselective. The interest in obtaining enantiomers on a large scale directs this research towards materials with high loading capacity, that is, materials able to separate large amounts of enantiomers. To this end, loading-capacity assays have been carried out, which have shown the possible application of these materials to the preparative separation of enantiomers. Special attention has also been paid to the study of the characteristics of the silica gel matrix, testing other, more porous silica materials that allow work at higher flow rates, thereby reducing analysis time and the costs associated with the preparative separation of enantiomers. A conformational study of these new selectors has also been undertaken in order to explain the specific enantioselectivity observed in certain organic solvents in which the separation of the enantiomers is carried out.

Relevance:

20.00%

Publisher:

Abstract:

By using suitable parameters, we present a unified approach for describing four methods for representing categorical data in a contingency table. These methods include: correspondence analysis (CA), the alternative approach using the Hellinger distance (HD), the log-ratio (LR) alternative, which is appropriate for compositional data, and the so-called non-symmetrical correspondence analysis (NSCA). We then make an appropriate comparison among these four methods and some illustrative examples are given. Some approaches based on cumulative frequencies are also linked and studied using matrices. Key words: correspondence analysis, Hellinger distance, non-symmetrical correspondence analysis, log-ratio analysis, Taguchi inertia
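As a minimal numerical sketch of two of these representations (not the unified parametrization developed in the paper, and using a made-up 3x4 table of counts): correspondence analysis works with the SVD of the chi-square standardized residuals, while the Hellinger-distance alternative works with square roots of the row profiles; centring the HD version at the square roots of the column masses, as done below, is one common variant and is an assumption here.

import numpy as np

# Hypothetical 3x4 table of counts.
N = np.array([[12.,  5.,  8.,  3.],
              [ 4., 15.,  6.,  7.],
              [ 9.,  2., 11., 10.]])

P = N / N.sum()                 # correspondence matrix
r = P.sum(axis=1)               # row masses
c = P.sum(axis=0)               # column masses

# Correspondence analysis (CA): SVD of the chi-square standardized residuals.
S_ca = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, s, Vt = np.linalg.svd(S_ca, full_matrices=False)
rows_ca = (U * s) / np.sqrt(r)[:, None]        # principal row coordinates

# Hellinger-distance (HD) alternative: square roots of row profiles,
# weighted by row masses and centred at the square roots of the column masses.
Q = np.sqrt(P / r[:, None])
S_hd = np.sqrt(r)[:, None] * (Q - np.sqrt(c))
U2, s2, Vt2 = np.linalg.svd(S_hd, full_matrices=False)
rows_hd = (U2 * s2) / np.sqrt(r)[:, None]

print(np.round(rows_ca[:, :2], 3))
print(np.round(rows_hd[:, :2], 3))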

Relevance:

20.00%

Publisher:

Abstract:

We performed a comprehensive study to assess the fit for purpose of four chromatographic conditions for the determination of six groups of marine lipophilic toxins (okadaic acid and dinophysistoxins, pectenotoxins, azaspiracids, yessotoxins, gymnodimine and spirolides) by LC-MS/MS, in order to select the most suitable conditions as stated by the European Union Reference Laboratory for Marine Biotoxins (EURLMB). In every case, the elution gradient was optimized to achieve a total run-time cycle of 12 min. We performed a single-laboratory validation for the analysis of three relevant matrices for the seafood aquaculture industry (mussels, Pacific oysters and clams), and for sea urchins, for which no data about lipophilic toxins had been reported before. Moreover, we compared the method performance under alkaline conditions using two quantification strategies: external standard calibration (EXS) and matrix-matched standard calibration (MMS). Alkaline conditions were the only scenario that allowed detection windows with polarity switching in a 3200 QTrap mass spectrometer, so the analysis of all toxins can be accomplished in a single run, increasing sample throughput. The limits of quantification under alkaline conditions met the validation requirements established by the EURLMB for all toxins and matrices, while the remaining conditions failed in some cases. The accuracy of the method and the matrix effects were generally dependent on the mobile phases and the seafood species. The MMS had a moderate positive impact on method accuracy for crude extracts, but it showed poor trueness for seafood species other than mussels when analyzing hydrolyzed extracts. Alkaline conditions with EXS and recovery correction for OA were selected as the most suitable conditions in the context of our laboratory. This comparative study can help other laboratories choose the best conditions for the implementation of LC-MS/MS according to their own needs.
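The two quantification strategies compared here can be illustrated with a minimal sketch using made-up calibration data and a hypothetical recovery value (this is not the validated method itself): an external standard calibration (EXS) line built from standards in solvent and corrected by a recovery factor, versus a matrix-matched (MMS) line built from standards spiked into blank extract.

import numpy as np

# Made-up calibration data: concentrations in ug/kg, responses in arbitrary peak-area units.
conc = np.array([10., 25., 50., 100., 200.])
area_solvent = np.array([ 980., 2450., 4900.,  9800., 19600.])   # standards in solvent (EXS)
area_matrix  = np.array([ 820., 2050., 4100.,  8200., 16400.])   # standards in blank extract (MMS)

def calibration_line(x, y):
    """Ordinary least-squares calibration line y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    return a, b

a_exs, b_exs = calibration_line(conc, area_solvent)
a_mms, b_mms = calibration_line(conc, area_matrix)

sample_area = 6150.0           # response measured in a sample extract
recovery = 0.85                # hypothetical extraction recovery

c_exs = (sample_area - b_exs) / a_exs / recovery   # EXS plus recovery correction
c_mms = (sample_area - b_mms) / a_mms              # MMS (matrix effects built into the line)

print(f"EXS + recovery correction: {c_exs:.1f} ug/kg")
print(f"MMS:                       {c_mms:.1f} ug/kg")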

Relevance:

10.00%

Publisher:

Abstract:

The bioaccumulation of persistent organic pollutants was studied in the muscle of a fish species at two sites on the Catalan coast: the Port of Barcelona and the coast off Blanes. Citharus linguatula was chosen because of its habitat characteristics (as a benthic species it is more exposed to contamination). The methodology consists of homogenization with sodium sulfate followed by microwave-assisted extraction with n-hexane–acetone (1:1 v/v) for 20 minutes. The extracts are cleaned up and fractionated on an alumina chromatographic column, which separates them into two fractions: one containing most of the organochlorine compounds (hexachlorobenzene, DDTs, chlorinated cyclodienes and polychlorinated biphenyls) and the other containing the hexachlorocyclohexane isomers and the PAHs. These two fractions are then analyzed by GC-MS. The high presence of PCBs in Barcelona was confirmed, as was the fact that species at this sampling site are more exposed to organochlorine contamination. The presence of DDTs was identified at both study sites. Regarding PAHs, a higher presence was also observed in Barcelona. It should be noted that the concentrations obtained for these compounds cannot be taken as valid, owing to evidence of experimental or injection errors.

Relevance:

10.00%

Publisher:

Abstract:

It is well known that the only stages of the software life cycle that necessarily have to be specified are requirements gathering and software analysis or specification. The remaining stages (design, implementation and testing) can be generated more or less automatically from the analysis. In this final-year project we studied the feasibility of automatically building SQL code from UML analysis class diagrams. The UML modelling tool Poseidon has been extended with a plug-in so that, through a very simple interface, the basic schema of a database can be obtained very quickly, including the tables, their columns, primary and foreign keys, as well as the triggers (and unique keys) needed to guarantee the cardinality constraints of the associations.
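A hypothetical sketch of the kind of mapping such a connector performs (this is not the Poseidon plug-in itself, and all names are illustrative): each analysis class becomes a table, each attribute a column, and a one-to-many association becomes a foreign key on the "many" side.

# Each UML class -> table; each attribute -> column; 1..* association -> foreign key.
def class_to_sql(name, attributes, fk_to=None):
    cols = [f"  {name.lower()}_id INTEGER PRIMARY KEY"]
    cols += [f"  {attr} {sqltype}" for attr, sqltype in attributes]
    if fk_to:
        cols.append(f"  {fk_to.lower()}_id INTEGER NOT NULL "
                    f"REFERENCES {fk_to}({fk_to.lower()}_id)")
    return f"CREATE TABLE {name} (\n" + ",\n".join(cols) + "\n);"

# Toy model: one Department has many Employees.
print(class_to_sql("Department", [("dept_name", "VARCHAR(80)")]))
print(class_to_sql("Employee",
                   [("full_name", "VARCHAR(120)"), ("salary", "NUMERIC(10,2)")],
                   fk_to="Department"))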

Relevance:

10.00%

Publisher:

Abstract:

Factor analysis, as a frequent technique for multivariate data inspection, is widely used also for compositional data analysis. The usual way is to use a centered logratio (clr) transformation to obtain the random vector y of dimension D. The factor model is then

y = Λf + e    (1)

with the factors f of dimension k ≪ D, the error term e, and the loadings matrix Λ. Using the usual model assumptions (see, e.g., Basilevsky, 1994), the factor analysis model (1) can be written as

Cov(y) = ΛΛ^T + ψ    (2)

where ψ = Cov(e) has a diagonal form. The diagonal elements of ψ, as well as the loadings matrix Λ, are estimated from an estimate of Cov(y). Given observed clr transformed data Y as realizations of the random vector y, outliers or deviations from the idealized model assumptions of factor analysis can severely affect the parameter estimation. As a way out, robust estimation of the covariance matrix of Y will lead to robust estimates of Λ and ψ in (2), see Pison et al. (2003). Well-known robust covariance estimators with good statistical properties, like the MCD or the S-estimators (see, e.g., Maronna et al., 2006), rely on a full-rank data matrix Y, which is not the case for clr transformed data (see, e.g., Aitchison, 1986). The isometric logratio (ilr) transformation (Egozcue et al., 2003) solves this singularity problem. The data matrix Y is transformed to a matrix Z by using an orthonormal basis of lower dimension. Using the ilr transformed data, a robust covariance matrix C(Z) can be estimated. The result can be back-transformed to the clr space by

C(Y) = V C(Z) V^T

where the matrix V with orthonormal columns comes from the relation between the clr and the ilr transformation. Now the parameters in model (2) can be estimated (Basilevsky, 1994) and the results have a direct interpretation, since the links to the original variables are still preserved. The above procedure will be applied to data from geochemistry. Our special interest is in comparing the results with those of Reimann et al. (2002) for the Kola project data.
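A minimal sketch of this procedure, under assumptions of my own (simulated 5-part compositions, a pivot-balance ilr basis, and scikit-learn's MCD estimator standing in for the robust covariance estimation):

import numpy as np
from sklearn.covariance import MinCovDet

def ilr_basis(D):
    """Columns of V (D x (D-1)) form an orthonormal clr-space basis (pivot balances)."""
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        V[:j, j - 1] = 1.0 / j
        V[j, j - 1] = -1.0
        V[:, j - 1] *= np.sqrt(j / (j + 1.0))
    return V

rng = np.random.default_rng(0)
X = rng.lognormal(size=(200, 5))
X = X / X.sum(axis=1, keepdims=True)        # 200 simulated 5-part compositions

D = X.shape[1]
V = ilr_basis(D)
clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)   # centered logratio data
Z = clr @ V                                                # ilr coordinates (full rank)

C_Z = MinCovDet(random_state=0).fit(Z).covariance_         # robust covariance in ilr space
C_Y = V @ C_Z @ V.T                                        # back-transform: C(Y) = V C(Z) V^T
print(C_Y.shape)   # (5, 5), rank D-1, interpretable in terms of the clr variables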

Relevance:

10.00%

Publisher:

Abstract:

The statistical analysis of compositional data should be carried out using logratios of parts, which are difficult to use correctly in standard statistical packages. For this reason a freeware package, named CoDaPack, was created. This software implements most of the basic statistical methods suitable for compositional data. In this paper we describe the new version of the package, now called CoDaPack3D. It is developed in Visual Basic for Applications (associated with Excel©), Visual Basic and OpenGL, and it is oriented towards users with a minimum knowledge of computers, with the aim of being simple and easy to use. This new version includes new graphical output in 2D and 3D. These outputs can be zoomed and, in 3D, rotated. A customization menu is also included, and outputs can be saved in jpeg format. This new version also includes an interactive help, and all dialog windows have been improved in order to facilitate their use. To use CoDaPack one has to access Excel© and introduce the data in a standard spreadsheet. These should be organized as a matrix where Excel© rows correspond to the observations and columns to the parts. The user executes macros that return numerical or graphical results. There are two kinds of numerical results: new variables and descriptive statistics, and both appear on the same sheet. Graphical output appears in independent windows. In the present version there are 8 menus, with a total of 38 submenus which, after some dialogue, directly call the corresponding macro. The dialogues ask the user to input the variables and any further parameters needed, as well as where to put the results. The web site http://ima.udg.es/CoDaPack contains this freeware package, and only Microsoft Excel© under Microsoft Windows© is required to run the software. Key words: compositional data analysis, software

Relevance:

10.00%

Publisher:

Abstract:

A joint distribution of two discrete random variables with finite support can be displayed as a two-way table of probabilities adding to one. Assume that this table has n rows and m columns and all probabilities are non-null. This kind of table can be seen as an element in the simplex of n · m parts. In this context, the marginals are identified as compositional amalgams, conditionals (rows or columns) as subcompositions. Also, simplicial perturbation appears as Bayes' theorem. However, the Euclidean elements of the Aitchison geometry of the simplex can also be translated into the table of probabilities: subspaces, orthogonal projections, distances. Two important questions are addressed: a) given a table of probabilities, which is the nearest independent table to the initial one? b) which is the largest orthogonal projection of a row onto a column? or, equivalently, which is the information in a row explained by a column, thus explaining the interaction? To answer these questions three orthogonal decompositions are presented: (1) by columns and a row-wise geometric marginal, (2) by rows and a column-wise geometric marginal, (3) by independent two-way tables and fully dependent tables representing row-column interaction. An important result is that the nearest independent table is the product of the two (row- and column-wise) geometric marginal tables. A corollary is that, in an independent table, the geometric marginals conform with the traditional (arithmetic) marginals. These decompositions can be compared with standard log-linear models. Key words: balance, compositional data, simplex, Aitchison geometry, composition, orthonormal basis, arithmetic and geometric marginals, amalgam, dependence measure, contingency table
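A small numerical illustration of the stated result, under my reading that "nearest" is meant in the Aitchison geometry and that the product of the geometric marginals is re-closed to sum to one (the table used is made up):

import numpy as np

P = np.array([[0.10, 0.05, 0.15],
              [0.20, 0.10, 0.05],
              [0.05, 0.20, 0.10]])      # a 3x3 table of probabilities (sums to 1)

g_rows = np.exp(np.log(P).mean(axis=1))   # row-wise geometric marginal
g_cols = np.exp(np.log(P).mean(axis=0))   # column-wise geometric marginal

Q = np.outer(g_rows, g_cols)
Q = Q / Q.sum()                           # closure: candidate nearest independent table

# Independence check: every 2x2 log odds-ratio of Q is (numerically) zero.
lor = np.log(Q[0, 0]) - np.log(Q[0, 1]) - np.log(Q[1, 0]) + np.log(Q[1, 1])
print(np.round(Q, 4), round(lor, 12))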

Relevance:

10.00%

Publisher:

Abstract:

This project consists of applying nonlinear analysis to the numerical volumetric modelling of the structure of the discharge system of a column of the cloister of Girona Cathedral using the finite element method. Several studies of the cloister of Girona Cathedral have been carried out at the Universitat de Girona, but always assuming linear behaviour of the material properties. The program used is the academic version of ANSYS available at the EMCI Department, and the element used is SOLID65. This element allows nonlinear characteristics to be introduced into the models and is suitable for nonlinear analysis of elements such as Girona stone.

Relevance:

10.00%

Publisher:

Abstract:

A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S0, S1, S2, ..., SR. S0, of dimension d0, contains all configurations K with attributes (energy contributions, etc.) above thresholds T0 = {T0^egy, T0^etc}; the CI coefficients in S0 remain always free to vary. S1 accommodates Ks with attributes above T1 ≤ T0. An eigenproblem of dimension d0 + d1 for S0 + S1 is solved first, after which the last d1 rows and columns are contracted into a single row and column, thus freezing the last d1 CI coefficients hereinafter. The process is repeated with successive Sj (j ≥ 2) chosen so that the corresponding CI matrices fit random access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited one) is always above the corresponding exact eigenvalue in S. Threshold values {Tj; j = 0, 1, 2, ..., R} regulate accuracy; for large-dimensional S, high accuracy requires S0 + S1 to be solved outside RAM. From there on, however, usually a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate determining. One μhartree accuracy is achieved for an eigenproblem of order 24 × 10^6, involving 1.2 × 10^12 nonzero matrix elements and 8.4 × 10^9 Slater determinants.
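A toy sketch of the contraction step on a small dense symmetric matrix (the actual method targets huge sparse CI matrices and uses Davidson's eigensolver; the matrix and block sizes below are illustrative): the lowest eigenvector of the S0 + S1 block is computed, the S1 part is frozen into a single contracted basis vector, and the enlarged problem including the remaining states is solved in the reduced basis, which keeps the result a variational upper bound.

import numpy as np

rng = np.random.default_rng(1)
n, d0, d1 = 12, 4, 4                     # total size and the first two block sizes
A = rng.standard_normal((n, n))
H = (A + A.T) / 2 - 5.0 * np.eye(n)      # symmetric stand-in "Hamiltonian"

# Step 1: solve the (d0 + d1)-dimensional eigenproblem for S0 + S1.
idx01 = np.arange(d0 + d1)
w, v = np.linalg.eigh(H[np.ix_(idx01, idx01)])
c = v[:, 0]                              # lowest eigenvector on S0 + S1

# Step 2: contract the last d1 rows/columns into one frozen direction.
frozen = np.zeros(n)
frozen[d0:d0 + d1] = c[d0:] / np.linalg.norm(c[d0:])
B = np.column_stack([np.eye(n)[:, :d0], frozen, np.eye(n)[:, d0 + d1:]])

# Step 3: solve the contracted eigenproblem including the remaining states.
w2, _ = np.linalg.eigh(B.T @ H @ B)
w_exact = np.linalg.eigvalsh(H)
print(round(w2[0], 6), ">=", round(w_exact[0], 6))   # variational upper bound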

Relevance:

10.00%

Publisher:

Abstract:

This project deals with carrying out load tests on the cloister of Girona Cathedral in order to determine the load borne by the stone columns of the cloister. Once this load is known, it will be possible to decide whether or not the columns are suitable for the planned change of use of the upper floor, and a definitive answer can be given to one of the two hypotheses put forward: whether or not a discharge arch forms over the columns of the cloister. In addition, a treatment proposal for the cloister columns will be made, consisting of selecting which ones should be repaired and which ones should be replaced, based on a series of numerical calculations.

Relevance:

10.00%

Publisher:

Abstract:

Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
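As a generic illustration only (this is not the paper's standard-biplot scaling), biplot coordinates can be read off the singular-value decomposition of a centred and standardized data matrix, with one set of points in principal coordinates and the squared entries of the other giving per-axis contributions:

import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 5)) * np.array([1.0, 2.0, 0.5, 4.0, 1.5])   # toy data

Xc = (X - X.mean(axis=0)) / X.std(axis=0, ddof=0)   # centre and standardize columns
U, s, Vt = np.linalg.svd(Xc / np.sqrt(X.shape[0]), full_matrices=False)

rows = np.sqrt(X.shape[0]) * U[:, :2] * s[:2]  # row (case) points in principal coordinates
cols = Vt.T[:, :2]                             # column points in standard coordinates
contrib = cols**2                              # squared loadings: contributions per axis

print(np.round(rows[:3], 3))
print(np.round(cols, 3), np.round(contrib.sum(axis=0), 3))   # contributions sum to 1 per axis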

Relevance:

10.00%

Publisher:

Abstract:

Models incorporating more realistic models of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach, called SDCP, to solving CDLP based on segments and their consideration sets. SDCP is a relaxation of CDLP and hence forms a looser upper bound on the dynamic program, but coincides with CDLP for the case of non-overlapping segments. If the number of elements in a consideration set for a segment is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound by (i) simulations, called the randomized concave programming (RCP) method, and (ii) adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). This latter approach turns out to be very effective, essentially obtaining the CDLP value, and gives excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets in the literature.
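For concreteness, a minimal sketch of the discrete-choice ingredient underlying these formulations: multinomial-logit (MNL) purchase probabilities for one segment, restricted to the offered part of its consideration set (the preference weights and the no-purchase weight below are made up):

def mnl_choice_probs(offer_set, consideration_set, v, v0=1.0):
    """P(segment buys product j) for each j that is both offered and considered."""
    offered = [j for j in offer_set if j in consideration_set]
    denom = v0 + sum(v[j] for j in offered)          # v0 is the no-purchase weight
    return {j: v[j] / denom for j in offered}

v = {"A": 2.0, "B": 1.0, "C": 0.5}                   # hypothetical preference weights
print(mnl_choice_probs(offer_set={"A", "C"}, consideration_set={"A", "B"}, v=v))
# Only product A is both offered and considered, so P(A) = 2.0 / (1 + 2.0).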

Relevance:

10.00%

Publisher:

Abstract:

Models incorporating more realistic models of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. When there are products that are being considered for purchase by more than one customer segment, CDLP is difficult to solve, since column generation is known to be NP-hard. However, recent research indicates that a formulation based on segments with cuts imposing consistency (SDCP+) is tractable and approximates the CDLP value very closely. In this paper we investigate the structure of the consideration sets that makes the two formulations exactly equal. We show that if the segment consideration sets follow a tree structure, CDLP = SDCP+. We give a counterexample to show that cycles can induce a gap between the CDLP and the SDCP+ relaxation. We derive two classes of valid inequalities, called flow and synchronization inequalities, to further improve SDCP+, based on cycles in the consideration set structure. We give a numerical study showing the performance of these cycle-based cuts.

Relevance:

10.00%

Publisher:

Abstract:

In order to interpret the biplot it is necessary to know which points, usually the variables, are the important contributors to the solution, and this information is available separately as part of the biplot's numerical results. We propose a new scaling of the display, called the contribution biplot, which incorporates this diagnostic directly into the graphical display, showing visually the important contributors and thus facilitating the biplot interpretation and often simplifying the graphical representation considerably. The contribution biplot can be applied to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. In the contribution biplot one set of points, usually the rows of the data matrix, optimally represents the spatial positions of the cases or sample units, according to some distance measure that usually incorporates some form of standardization, unless all data are comparable in scale. The other set of points, usually the columns, is represented by vectors that are related to their contributions to the low-dimensional solution. A fringe benefit is that usually only one common scale for row and column points is needed on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot legible. Furthermore, this version of the biplot also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important, when they are in fact contributing minimally to the solution.