32 results for Chromatographic columns
Abstract:
We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
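As a concrete illustration, here is a minimal numpy sketch of the weighted log-ratio analysis described above, under the assumption that the row and column weights are the margins of the table, as in correspondence analysis; the function name and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def weighted_lra(N):
    """Weighted log-ratio analysis (spectral map) of a positive matrix N,
    with row and column weights taken from the margins of the table."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)           # row and column masses (weights)
    L = np.log(N)
    # weighted double-centering of the log matrix
    Z = L - (r @ L)[None, :] - (L @ c)[:, None] + r @ L @ c
    # weighted SVD
    S = np.sqrt(r)[:, None] * Z * np.sqrt(c)[None, :]
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U / np.sqrt(r)[:, None] * s            # row principal coordinates
    cols = Vt.T / np.sqrt(c)[:, None] * s         # column principal coordinates
    return rows, cols, s

# toy example on a small positive table
N = np.array([[10., 20., 30.], [5., 25., 10.], [8., 12., 40.]])
rows, cols, s = weighted_lra(N)
```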
Abstract:
This paper establishes a general framework for metric scaling of any distance measure between individuals based on a rectangular individuals-by-variables data matrix. The method allows visualization of both individuals and variables while preserving all the good properties of principal axis methods such as principal components and correspondence analysis, based on the singular-value decomposition, including the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions. The idea is inspired by the chi-square distance in correspondence analysis, which weights each coordinate by an amount calculated from the margins of the data table. In weighted metric multidimensional scaling (WMDS) we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the original distances. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing a matrix and displaying its rows and columns in biplots.
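The weight-estimation step can be sketched very simply: since the squared weighted Euclidean distances are linear in the unknown nonnegative weights, they can be fitted to the target squared distances by non-negative least squares. The following is only an illustration of that principle with made-up data; the estimation procedure used in the paper may differ.

```python
import numpy as np
from scipy.optimize import nnls

def fit_column_weights(X, D):
    """Estimate nonnegative column weights w so that the weighted Euclidean
    distances d_ij^2 = sum_k w_k (x_ik - x_jk)^2 best match the targets in D."""
    n = X.shape[0]
    coeffs, targets = [], []
    for i in range(n):
        for j in range(i + 1, n):
            coeffs.append((X[i] - X[j]) ** 2)     # linear coefficients of w
            targets.append(D[i, j] ** 2)          # target squared distance
    w, _ = nnls(np.array(coeffs), np.array(targets))
    return w

# illustrative check: recover the weights used to generate the distances
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
w_true = np.array([1.0, 0.2, 2.5])
D = np.sqrt((((X[:, None, :] - X[None, :, :]) ** 2) * w_true).sum(axis=-1))
print(fit_column_weights(X, D))                   # close to w_true
```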
Abstract:
We consider the joint visualization of two matrices which have common rows and columns, for example multivariate data observed at two time points or split according to a dichotomous variable. Methods of interest include principal components analysis for interval-scaled data, or correspondence analysis for frequency data or ratio-scaled variables on commensurate scales. A simple result in matrix algebra shows that by setting up the matrices in a particular block format, matrix sum and difference components can be visualized. The case when we have more than two matrices is also discussed, and the methodology is applied to data from the International Social Survey Program.
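The matrix-algebra result can be illustrated as follows: arranging the two matrices A and B in the block format [[A, B], [B, A]] and applying an orthogonal change of basis on each side block-diagonalizes the matrix into the sum A + B and the difference A - B, so a single decomposition displays both components. A small numpy check of this identity (illustrative, not necessarily the exact construction used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))

# block format with the two matrices on and off the diagonal
M = np.block([[A, B], [B, A]])

# orthogonal transforms that split the rows/columns into sum and difference parts
P = np.kron(np.array([[1, 1], [1, -1]]) / np.sqrt(2), np.eye(4))
Q = np.kron(np.array([[1, 1], [1, -1]]) / np.sqrt(2), np.eye(3))

block_diag = P @ M @ Q.T
print(np.allclose(block_diag[:4, :3], A + B))     # top-left block is the sum
print(np.allclose(block_diag[4:, 3:], A - B))     # bottom-right block is the difference
print(np.allclose(block_diag[:4, 3:], 0), np.allclose(block_diag[4:, :3], 0))
```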
Abstract:
The network choice revenue management problem models customers as choosing from an offer-set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints that project onto subsets of intersections. In addition, we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.
Abstract:
The case of two transition tables is considered, that is, two square asymmetric matrices of frequencies where the rows and columns of the matrices are the same objects observed at three different time points. Different ways of visualizing the tables, either separately or jointly, are examined. We generalize an existing idea, where a square matrix is decomposed into symmetric and skew-symmetric parts, to two matrices, leading to a decomposition into four components: (1) average symmetric, (2) average skew-symmetric, (3) symmetric difference from average, and (4) skew-symmetric difference from average. The method is illustrated with an artificial example and an example using real data from a study of changing values over three generations.
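A direct numpy rendering of the four-component decomposition listed above, assuming it is applied to two transition tables T1 and T2 through their average and their half-difference; the exact scaling used in the paper may differ.

```python
import numpy as np

def four_components(T1, T2):
    """Split two square tables into (1) average symmetric, (2) average
    skew-symmetric, (3) symmetric difference from average and
    (4) skew-symmetric difference from average."""
    avg = (T1 + T2) / 2
    dif = (T1 - T2) / 2                 # difference of each table from the average
    return ((avg + avg.T) / 2,          # (1)
            (avg - avg.T) / 2,          # (2)
            (dif + dif.T) / 2,          # (3)
            (dif - dif.T) / 2)          # (4)

# the four parts add back to T1, and with the last two negated, to T2
T1 = np.array([[0., 3., 1.], [2., 0., 5.], [4., 1., 0.]])
T2 = np.array([[0., 1., 2.], [6., 0., 2.], [3., 2., 0.]])
s, k, sd, kd = four_components(T1, T2)
assert np.allclose(s + k + sd + kd, T1)
assert np.allclose(s + k - sd - kd, T2)
```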
Abstract:
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
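Once the nonnegative variable weights have been estimated, the display step follows the classical SVD path. Below is a minimal sketch of that step, assuming column-centred data and one common choice of coordinate scaling (both assumptions, not taken from the paper).

```python
import numpy as np

def weighted_euclidean_biplot(X, w, ndim=2):
    """Row and column coordinates for a biplot of the cases-by-variables
    matrix X under nonnegative variable weights w, via the SVD of the
    weighted, column-centred matrix."""
    Xc = X - X.mean(axis=0)                 # centre each variable
    S = Xc * np.sqrt(w)                     # differential weighting of the variables
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    F = U * s                               # case (row) principal coordinates
    G = Vt.T                                # variable (column) standard coordinates
    explained = s**2 / (s**2).sum()         # decomposition of variance along axes
    return F[:, :ndim], G[:, :ndim], explained[:ndim]

# usage, with w coming from the weight-estimation step:
# F, G, var = weighted_euclidean_biplot(X, w)
```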
Abstract:
This case study deals with rock face monitoring in urban areas using a Terrestrial Laser Scanner (TLS). The pilot study area is an almost vertical, fifty-meter-high cliff, on top of which the village of Castellfollit de la Roca is located. Rockfall activity is currently causing a retreat of the rock face, which may endanger the houses located at its edge. The TLS datasets consist of high-density 3-D point clouds acquired from five stations, nine times in a time span of 22 months (from March 2006 to January 2008). Change detection, i.e. the detection of rockfalls, was performed through a sequential comparison of datasets. Two types of mass movement were detected in the monitoring period: (a) detachment of single basaltic columns, with magnitudes below 1.5 m³, and (b) detachment of groups of columns, with magnitudes of 1.5 to 150 m³. Furthermore, the historical record revealed (c) the occurrence of slab failures with magnitudes higher than 150 m³. Displacements of a likely slab failure were measured, suggesting an apparently stationary stage. Even though failures are clearly episodic, our results, together with the study of the historical record, enabled us to estimate a mean detachment of material of 46 to 91.5 m³ year⁻¹. The application of TLS considerably improved our understanding of rockfall phenomena in the study area.
Abstract:
Background: Experimental evidence demonstrates that vegetable-derived extracts inhibit cholesterol absorption in the gastrointestinal tract. To further explore the underlying mechanisms, we modeled duodenal contents with several vegetable extracts. Results: Employing a widely used cholesterol quantification method based on a cholesterol oxidase-peroxidase coupled reaction, we analyzed the effects on cholesterol partition. The interferences observed were analyzed by studying specific and unspecific inhibitors of the cholesterol oxidase-peroxidase coupled reaction. Cholesterol was also quantified by LC/MS. We found a significant interference of diverse (cocoa- and tea-derived) extracts with this method. The interference was strongly dependent on the model matrix: in phosphate-buffered saline, the development of unspecific fluorescence could be inhibited by catalase (but not by heat denaturation), suggesting vegetable-extract-derived H2O2 production, whereas in bile-containing model systems this interference also comprised cholesterol oxidase inhibition. Several strategies, such as cholesterol standard addition and the use of suitable blanks containing vegetable extracts, were tested. When those failed, the use of a mass-spectrometry-based chromatographic assay allowed quantification of cholesterol in models of duodenal contents in the presence of vegetable extracts. Conclusions: We propose that the use of cholesterol oxidase- and/or peroxidase-based systems for cholesterol analyses in foodstuffs should be carefully monitored, as important interferences with all the components of the enzymatic chain were evident. The use of adequate controls, standard addition and, finally, chromatographic analyses solves these issues.
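For reference, the standard-addition strategy mentioned above amounts to spiking the sample with known amounts of analyte, fitting a straight line to the response, and reading the original concentration off the intercept. A generic sketch with illustrative numbers, not data from the study:

```python
import numpy as np

added  = np.array([0.0, 5.0, 10.0, 20.0])    # added cholesterol, e.g. in ug/mL
signal = np.array([0.21, 0.31, 0.42, 0.63])  # assay response at each addition
slope, intercept = np.polyfit(added, signal, 1)
original_concentration = intercept / slope   # extrapolation to zero response
print(original_concentration)                # roughly 10 ug/mL in this toy example
```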
Abstract:
From a methanolic extract of Melia azedarach seeds, the compounds responsible for its biological activity as feeding inhibitors and development regulators on Sesamia nonagrioides have been isolated. The biological activity assays were carried out by incorporating each fraction into the diet, either maize leaves or a synthetic diet, of second-instar larvae of the insect. Semi-preparative high-performance liquid chromatography, using a reversed-phase cartridge, made it possible to separate two components in sufficient quantity and purity to be subsequently analysed by spectroscopic techniques.
Abstract:
The impact of applying compost at different rates on nitrogen and phosphorus dynamics was investigated in soils developed on schist in new terraced vineyards (NTV) and in undisturbed areas (NC). Repacked soil columns amended with 0 (control), 50 t ha⁻¹ (T1) and 100 t ha⁻¹ (T2) of compost were studied under laboratory conditions simulating both situations. The columns were maintained for 1 year, during which time a total of 300 mm of simulated rainfall was applied in ten 30 mm applications. Soil organic matter (OM), nitrogen and phosphorus contents were analysed at the end of the study period and leachates were analysed after each simulated rainfall event. Significant differences in nitrate leaching were observed between the control and the treated soils, and these differences were greater in the NC (control = 1.368 g, T1 = 1.526 g and T2 = 1.686 g) than in the NTV soils (control = 0.61 g, T1 = 1.068 g and T2 = 1.283 g). The relative effect was greater in the NTV soils (T1/control = 1.11 vs. 1.75 and T2/control = 1.23 vs. 2.1 for NC and NTV, respectively). The nitrate concentration in the leached water reached up to 400 mg L⁻¹, which implied a risk of groundwater pollution. Phosphorus losses through leaching were very low, with concentrations of < 0.15 mg L⁻¹ and no significant differences between treatments. The phosphorus concentrations in the surface horizon increased by 50.8% in T1 and by 66.8% in T2 in the NC soils, compared with increases of 20.3% and 38%, respectively, in the NTV soils. Due to the high infiltration capacity of the studied soils, leaching effects must be considered in order to prevent groundwater pollution.
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity, but for the last few years these problems have gone from an entertainment to an interesting research area, and in fact one that is interesting in two respects. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are being actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems and, thanks to their characteristics and behaviour, they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their highly structured nature, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modelling and solving Sudoku problems, namely Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the level of balance they guarantee among the constraints of the problem, by finely controlling how the holes are distributed over the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all the solutions of an instance) and the hardness of GSP.
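To make the CSP view of the Generalized Sudoku Problem concrete, here is a compact backtracking solver for a GSP of order N = m*n with block regions of m rows and n columns. It only illustrates the constraint structure (all-different rows, columns and blocks); it is not one of the solvers studied in the paper, and the toy instance is made up.

```python
def solve_gsp(grid, m, n):
    """Fill the N x N grid (N = m*n, empty cells are 0) in place, respecting
    row, column and m-by-n block constraints; return True if a solution exists."""
    N = m * n

    def ok(r, c, v):
        if any(grid[r][j] == v for j in range(N)):   # row constraint
            return False
        if any(grid[i][c] == v for i in range(N)):   # column constraint
            return False
        br, bc = (r // m) * m, (c // n) * n          # block constraint
        return all(grid[i][j] != v
                   for i in range(br, br + m) for j in range(bc, bc + n))

    for r in range(N):
        for c in range(N):
            if grid[r][c] == 0:
                for v in range(1, N + 1):
                    if ok(r, c, v):
                        grid[r][c] = v
                        if solve_gsp(grid, m, n):
                            return True
                        grid[r][c] = 0
                return False        # no value fits this hole
    return True                    # no holes left: solved

# toy 6x6 instance with 2x3 block regions (holes marked 0)
puzzle = [[1, 0, 3, 0, 5, 6],
          [4, 5, 0, 1, 0, 3],
          [0, 3, 4, 5, 6, 0],
          [5, 0, 1, 0, 3, 4],
          [3, 4, 0, 6, 1, 0],
          [0, 1, 2, 0, 4, 5]]
print(solve_gsp(puzzle, 2, 3))     # True: this instance has a solution
```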
Abstract:
Hydrogenated nanocrystalline silicon (nc-Si:H) obtained by hot-wire chemical vapour deposition (HWCVD) at low substrate temperature (150 °C) has been incorporated as the active layer in bottom-gate thin-film transistors (TFTs). These devices were electrically characterised by measuring the output and transfer characteristics in vacuum at different temperatures. The field-effect mobility showed a thermally activated behaviour, which could be attributed to carrier trapping at the band tails, as in hydrogenated amorphous silicon (a-Si:H), and to potential barriers for electronic transport. Trapped charge at the interfaces of the columns, which are typical in nc-Si:H, would account for these barriers. By using the Levinson technique, the quality of the material at the column boundaries could be studied. Finally, these results were interpreted according to the particular microstructure of nc-Si:H.
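The thermally activated behaviour of the field-effect mobility mentioned above is conventionally summarized by an Arrhenius-type expression; a standard generic form, not quoted from the paper, is

```latex
\mu_{\mathrm{FE}}(T) \simeq \mu_{0}\,\exp\!\left(-\frac{E_{a}}{k_{B}T}\right)
```

where E_a is the activation energy associated with band-tail trapping and with the potential barriers at the column boundaries, and k_B is Boltzmann's constant.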
Abstract:
This work describes the formation of transformation products (TPs) through the enzymatic degradation, at laboratory scale, of two highly consumed antibiotics: tetracycline (Tc) and erythromycin (ERY). The analysis of the samples was carried out by a fast and simple method based on a novel configuration of an on-line turbulent flow system coupled to a hybrid linear ion trap/high-resolution mass spectrometer. The method was optimized and validated for the complete analysis of ERY, Tc and their transformation products within 10 min without any other sample manipulation. Furthermore, the applicability of the on-line procedure was evaluated for 25 additional antibiotics, covering a wide range of chemical classes, in different environmental waters with satisfactory quality parameters. Degradation rates obtained for Tc by laccase enzyme and for ERY by EreB esterase enzyme without the presence of mediators were ∼78% and ∼50%, respectively. Concerning the identification of TPs, three suspected compounds for Tc and five for ERY have been proposed. In the case of Tc, tentative molecular formulas, with mass errors within 2 ppm, have been proposed based on the hypothesis of dehydroxylation, (bi)demethylation and oxidation of rings A and C as the major reactions. In contrast, the major TP detected for ERY has been identified as the "dehydration ERY-A", with the same molecular formula as its parent compound. In addition, the evaluation of the antibiotic activity of the samples along the enzymatic treatments showed a decrease of around 100% in both cases.
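The 2 ppm figure quoted above is the usual relative mass-error criterion of high-resolution mass spectrometry; as a reminder of the arithmetic, with a generic formula and illustrative numbers rather than values from the paper:

```python
def mass_error_ppm(measured_mz, theoretical_mz):
    """Relative mass error in parts per million (ppm)."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# an error of 0.0009 Da on an ion at m/z 445.16 is about 2 ppm (illustrative numbers)
print(round(mass_error_ppm(445.1609, 445.1600), 2))
```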
Abstract:
This work presents a comparison between three analytical methods developed for the simultaneous determination of eight quinolones regulated by the European Union (marbofloxacin, ciprofloxacin, danofloxacin, enrofloxacin, difloxacin, sarafloxacin, oxolinic acid and flumequine) in pig muscle, using liquid chromatography with fluorescence detection (LC-FD), liquid chromatography-mass spectrometry (LC-MS) and liquid chromatography-tandem mass spectrometry (LC-MS/MS). The procedures involve an extraction of the quinolones from the tissues, a clean-up and preconcentration step for the analytes by solid-phase extraction and a subsequent liquid chromatographic analysis. The limits of detection of the methods ranged from 0.1 to 2.1 ng g−1 using LC-FD, from 0.3 to 1.8 ng g−1 using LC-MS and from 0.2 to 0.3 ng g−1 using LC-MS/MS, while inter- and intra-day variability was under 15% in all cases. Most of these values are notably lower than the maximum residue limits established by the European Union for quinolones in pig tissues. The methods have been applied to the determination of quinolones in six different commercial pig muscle samples purchased in different supermarkets located in the city of Granada (south-east Spain).