15 results for Frontal Analysis Continuous Capillary Electrophoresis
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
A capillary microtrap thermal desorption module is developed for near real-time analysis of volatile organic compounds (VOCs) at sub-ppbv levels in air samples. The device allows the direct injection of the thermally desorbed VOCs into a chromatographic column. It does not use a second cryotrap to focus the adsorbed compounds before they enter the separation column, thus reducing the formation of artifacts. Coupling the microtrap to a GC–MS allows the quantitative determination of VOCs in less than 40 min, with detection limits between 5 and 10 pptv (25 °C and 760 mmHg), corresponding to 19–43 ng m⁻³, using sampling volumes of 775 cm³. The microtrap is applied to the analysis of environmental air contamination in different laboratories of our faculty. The results indicate that most volatile compounds diffuse easily through the air and may contaminate surrounding areas even when the usual safety precautions (e.g., working under fume hoods) are followed during the handling of solvents. The application of the microtrap to the analysis of VOCs in breath samples suggests that 2,5-dimethylfuran may be a strong indicator of a person's smoking status.
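For reference, the quoted pptv-to-mass-concentration conversion follows from the ideal-gas molar volume at 25 °C and 760 mmHg. A minimal sketch in Python; the molar masses (toluene, o-xylene) are illustrative assumptions, since the abstract does not name the compounds:

```python
# pptv -> ng m^-3 at 25 degC and 760 mmHg, using the ideal-gas molar volume.
# Compound choices are illustrative, not taken from the abstract.
V_m = 24.465          # molar volume, L/mol, at 298.15 K and 1 atm

for name, M, pptv in [("toluene", 92.14, 5), ("o-xylene", 106.17, 10)]:
    # mole fraction (pptv * 1e-12) -> g per m^3 of air -> ng per m^3
    ng_per_m3 = pptv * 1e-12 * M / (V_m * 1e-3) * 1e9
    print(f"{pptv} pptv of {name} ~ {ng_per_m3:.0f} ng m^-3")
```

Run as written, this reproduces the 19–43 ng m⁻³ range quoted above for 5–10 pptv of compounds with molecular masses near 92–106 g/mol.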
Abstract:
First: A continuous-time version of Kyle's model (Kyle 1985), known as Back's model (Back 1992), of asset pricing with asymmetric information is studied. A larger class of price processes and of noise-trader processes is considered. As in Kyle's model, the price process is allowed to depend on the path of the market order. The noise traders' process is an inhomogeneous Lévy process. Solutions are found via the Hamilton-Jacobi-Bellman equations. When the insider is risk-neutral, the price pressure is constant and there is no equilibrium in the presence of jumps; when the insider is risk-averse, there is no equilibrium in the presence of either jumps or drifts. The case in which the release time of the information is unknown is also analysed, and a general relation is established between the problem of finding an equilibrium and the enlargement of filtrations. The case of a random announcement time is also considered. In that case the market is not fully efficient, and an equilibrium exists if the sensitivity of prices with respect to the global demand decreases in time in accordance with the distribution of the random time. Second: Power variations. The asymptotic behaviour of the power variation of processes of the form ∫_0^t u(s-) dS(s) is considered, where S is an α-stable process with stability index 0 < α < 2 and the integral is an Itô integral. Stable convergence of the corresponding fluctuations is established. These results provide statistical tools to infer the process u from discrete observations. Third: A bond market is studied in which the short rate r(t) evolves as an integral of g(t-s)σ(s) with respect to the Wiener measure W(ds), where g and σ are deterministic. Processes of this type are particular cases of ambit processes and are in general not semimartingales.
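A minimal simulation sketch of the power-variation setting in the second part, assuming SciPy's α-stable sampler and an arbitrary deterministic integrand u (both illustrative choices, not from the thesis):

```python
import numpy as np
from scipy.stats import levy_stable

# Simulate an alpha-stable Levy process S on [0, 1] from i.i.d. increments, a deterministic
# left-continuous integrand u, the Ito integral Y_t = int_0^t u(s-) dS(s) as a forward
# (non-anticipating) Riemann sum, and the p-th power variation of Y on the grid.
alpha, n, p = 1.5, 10_000, 1.0                      # stability index in (0, 2), grid size, power (assumed)
dt = 1.0 / n
# increments of a standard alpha-stable Levy motion over dt have scale dt**(1/alpha)
dS = levy_stable.rvs(alpha, 0.0, scale=dt**(1 / alpha), size=n, random_state=42)
t_left = np.arange(n) * dt                          # left endpoints of the grid cells
u = 1.0 + 0.5 * np.sin(2 * np.pi * t_left)          # illustrative integrand, evaluated at s-
dY = u * dS                                         # increments of Y over the grid
power_variation = np.sum(np.abs(dY) ** p)
print(f"V_n^{p}(Y) = {power_variation:.4f}")
```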
Abstract:
We present a study of the continuous-time equations governing the dynamics of a susceptible-infected-susceptible model on heterogeneous metapopulations. These equations have recently been proposed as an alternative formulation for the spread of infectious diseases in metapopulations in a continuous-time framework. Individual-based Monte Carlo simulations of epidemic spread in uncorrelated networks are also performed, revealing good agreement with analytical predictions under the assumption of simultaneous transmission or recovery and migration processes.
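A hedged individual-based Monte Carlo sketch of SIS dynamics with migration on a heterogeneous patch network, in the spirit of the simulations mentioned above; the network model, rates, and migration probability are illustrative assumptions, and reaction and migration are applied within the same time step as a crude discretisation rather than the paper's exact update rule:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(200, 3, seed=0)     # degree-heterogeneous network of patches (assumed)
n = G.number_of_nodes()
beta, mu, p_mig, steps = 0.1, 0.05, 0.2, 500     # infection, recovery, migration prob. per step (assumed)

S = np.full(n, 50)                               # susceptible residents per patch
I = np.zeros(n, dtype=int)
I[rng.choice(n, 5, replace=False)] = 5           # seed a few infected individuals

for _ in range(steps):
    # reaction: frequency-dependent infection and recovery inside each patch
    N = np.maximum(S + I, 1)
    new_inf = rng.binomial(S, 1 - np.exp(-beta * I / N))
    new_rec = rng.binomial(I, 1 - np.exp(-mu))
    S += new_rec - new_inf
    I += new_inf - new_rec
    # migration: each individual hops to a uniformly chosen neighbouring patch with prob. p_mig
    S_out, I_out = rng.binomial(S, p_mig), rng.binomial(I, p_mig)
    S -= S_out
    I -= I_out
    for v in G.nodes():
        neigh = np.array(list(G[v]))
        share = np.full(len(neigh), 1.0 / len(neigh))
        S[neigh] += rng.multinomial(S_out[v], share)
        I[neigh] += rng.multinomial(I_out[v], share)

print("final prevalence:", I.sum() / (S.sum() + I.sum()))
```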
Abstract:
Background: Cells have the ability to respond and adapt to environmental changes through activation of stress-activated protein kinases (SAPKs). Although p38 SAPK signalling is known to participate in the regulation of gene expression, little is known about the molecular mechanisms used by this SAPK to regulate stress-responsive genes or about the overall set of genes regulated by p38 in response to different stimuli. Results: Here, we report a whole-genome expression analysis of mouse embryonic fibroblasts (MEFs) treated with three different p38 SAPK-activating stimuli, namely osmostress, the cytokine TNFα and the protein synthesis inhibitor anisomycin. We have found that the activation kinetics of p38α SAPK in response to these insults differ and lead to a complex gene-expression response that is specific to a given stress, with a restricted set of overlapping genes. In addition, we have analysed the contribution of p38α, the major p38 family member present in MEFs, to the overall stress-induced transcriptional response by using both a chemical inhibitor (SB203580) and p38α-deficient (p38α-/-) MEFs. We show here that p38 SAPK dependency ranged between 60% and 88% depending on the treatment, and that there is very good overlap between the inhibitor treatment and the knockout cells. Furthermore, we have found that the p38 SAPK dependency varies with the time the cells are subjected to osmostress. Conclusions: Our genome-wide transcriptional analysis shows a selective response to specific stimuli and a restricted common response of up to 20% of the stress-up-regulated early genes, involving an important set of transcription factors that might be critical for either cell adaptation or preparation for continuous extracellular changes. Interestingly, up to 85% of the up-regulated genes are under the transcriptional control of p38 SAPK. Thus, activation of p38 SAPK is critical to elicit the early gene expression program required for cell adaptation to stress.
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix - the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
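A small sketch of the two coding schemes, assuming a single continuous variable coded into three categories with triangular ("hinge") membership functions; the breakpoints and data values are illustrative:

```python
import numpy as np

def crisp_code(x, edges):
    """Indicator (dummy) coding into len(edges)+1 ordered categories."""
    cat = np.digitize(x, edges)
    Z = np.zeros((len(x), len(edges) + 1))
    Z[np.arange(len(x)), cat] = 1.0
    return Z

def fuzzy_code(x, low, mid, high):
    """Triangular membership functions for three categories; each row sums to 1."""
    x = np.asarray(x, dtype=float)
    m_low = np.clip((mid - x) / (mid - low), 0, 1)
    m_high = np.clip((x - mid) / (high - mid), 0, 1)
    m_mid = 1.0 - m_low - m_high
    return np.column_stack([m_low, m_mid, m_high])

temp = np.array([2.0, 9.5, 14.0, 21.0])                 # e.g. a meteorological variable (deg C)
print(crisp_code(temp, edges=[8.0, 17.0]))              # 0/1 dummies
print(fuzzy_code(temp, low=0.0, mid=12.0, high=25.0))   # degrees of membership in [0, 1]
```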
Abstract:
We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known: the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-deutsche mark futures exchange, finding good agreement between theory and the observed data.
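A minimal continuous-time random walk sketch, assuming an exponential pausing-time density and a Gaussian jump-size density purely for illustration (the paper estimates both densities from the futures tick data):

```python
import numpy as np

rng = np.random.default_rng(2)
n_jumps = 200_000
waits = rng.exponential(scale=30.0, size=n_jumps)        # seconds between price changes (assumed)
jumps = rng.normal(loc=0.0, scale=1e-4, size=n_jumps)    # log-price jump sizes (assumed)

t = np.cumsum(waits)                                     # jump times
x = np.cumsum(jumps)                                     # log-price path at jump times

# Empirical distribution of price changes over a fixed horizon tau: sample the walk on a grid.
tau = 3600.0
grid = np.arange(tau, t[-1], tau)
pos = np.searchsorted(t, grid, side="right") - 1         # last jump at or before each grid time
walk = x[np.clip(pos, 0, None)]
increments = np.diff(walk)                               # price changes over the horizon tau
print(increments.mean(), increments.std())
```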
Abstract:
The kinetics and microstructure of solid-phase crystallization under continuous heating conditions and a random distribution of nuclei are analyzed. An Arrhenius temperature dependence is assumed for both nucleation and growth rates. Under these circumstances, the system has a scaling law such that the behavior of the scaled system is independent of the heating rate. Hence, the kinetics and microstructure obtained at different heating rates differ only in time and length scaling factors. Concerning the kinetics, it is shown that the extended volume evolves with time according to α_ex = [exp(κ_C t′)]^(m+1), where t′ is the dimensionless time. This scaled solution not only represents a significant simplification of the system description, but also provides new tools for its analysis. For instance, it has been possible to find an analytical dependence of the final average grain size on the kinetic parameters. Concerning the microstructure, the existence of a length scaling factor has allowed the grain-size distribution to be calculated numerically as a function of the kinetic parameters.
Abstract:
The continuous wavelet transform is obtained as a maximum-entropy solution of the corresponding inverse problem. It is well known that although a signal can be reconstructed from its wavelet transform, the expansion is not unique due to the redundancy of continuous wavelets. Hence, the inverse problem has no unique solution. If we want to recognize one solution as "optimal", then an appropriate decision criterion has to be adopted. We show here that the continuous wavelet transform is an "optimal" solution in a maximum entropy sense.
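A short illustration of the redundancy that makes the inverse problem non-unique, using PyWavelets (an assumed tool choice) to compute a CWT that has many more coefficients than the signal has samples:

```python
import numpy as np
import pywt

# A test signal: one tone throughout plus a second tone switched on halfway.
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 16 * t) + (t > 0.5) * np.sin(2 * np.pi * 64 * t)

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=t[1] - t[0])
print(coeffs.shape)   # (127, 1024): 127 x 1024 coefficients for only 1024 samples
```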
Abstract:
The Wigner higher-order moment spectra (WHOS) are defined as extensions of the Wigner-Ville distribution (WD) to higher-order moment spectra domains. A general class of time-frequency higher-order moment spectra is also defined in terms of arbitrary higher-order moments of the signal as generalizations of Cohen's general class of time-frequency representations. The properties of the general class of time-frequency higher-order moment spectra can be related to the properties of WHOS, which are, in fact, extensions of the properties of the WD. Discrete time and frequency Wigner higher-order moment spectra (DTF-WHOS) distributions are introduced for signal processing applications and are shown to be implemented with two FFT-based algorithms. One application is presented where the Wigner bispectrum (WB), which is a WHOS in the third-order moment domain, is utilized for the detection of transient signals embedded in noise. The WB is compared with the WD in terms of simulation examples and analysis of real sonar data. It is shown that better detection schemes can be derived, in low signal-to-noise ratio, when the WB is applied.
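A compact sketch of an FFT-based evaluation of the ordinary discrete Wigner-Ville distribution, i.e. the second-order case; the higher-order WHOS and the Wigner bispectrum follow the same pattern with higher-order moment kernels, which are not reproduced here:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a (preferably analytic) signal x.
    Returns an (N, N) real array: rows index time, columns index frequency bins."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)            # largest lag keeping n +/- m inside the record
        r = np.zeros(N, dtype=complex)      # instantaneous autocorrelation r[m] = x[n+m] conj(x[n-m])
        for m in range(-mmax, mmax + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n, :] = np.fft.fft(r).real        # FFT over the lag variable gives the frequency slice
    return W

# Example: a linear chirp traces a straight line in the time-frequency plane.
t = np.arange(256) / 256.0
chirp = np.exp(1j * 2 * np.pi * (20 * t + 40 * t ** 2))
W = wigner_ville(chirp)
```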
Abstract:
The use of iodine as a catalyst and either acetic or trifluoroacetic acid as a derivatizing reagent for determining the enantiomeric composition of acyclic and cyclic aliphatic chiral alcohols was investigated. Optimal conditions were selected according to the molar ratio of alcohol to acid, the reaction time, and the reaction temperature. Afterwards, the chiral stability of the chiral carbons was studied. Although no isomerization was observed when acetic acid was used, partial isomerization was detected with trifluoroacetic acid. A series of chiral alcohols of widely varying structural types were then derivatized with acetic acid under the optimal conditions. The resolution of the enantiomeric esters and the free chiral alcohols was measured using a capillary gas chromatograph equipped with a CP Chirasil-DEX CB column. The best resolutions were obtained with 2-pentyl acetates (α = 3.00) and 2-hexyl acetates (α = 1.95). This method provides a very simple and efficient experimental workup procedure for analyzing chiral alcohols by chiral-phase GC.
Abstract:
The GS-distribution is a family of distributions that provides an accurate representation of any unimodal univariate continuous distribution. In this contribution we explore the utility of this family as a general model in survival analysis. We show that the survival function based on the GS-distribution is able to provide a model for univariate survival data and that appropriate estimates can be obtained. We develop some hypothesis tests that can be used for checking the underlying survival model and for comparing the survival of different groups.
Abstract:
Formation of nanosized droplets/bubbles from a metastable bulk phase is connected to many unresolved scientific questions. We analyze the properties and stability of multicomponent droplets and bubbles in the canonical ensemble, and compare with single-component systems. The bubbles/droplets are described on the mesoscopic level by square gradient theory. Furthermore, we compare the results to a capillary model which gives a macroscopic description. Remarkably, the solutions of the square gradient model, representing bubbles and droplets, are accurately reproduced by the capillary model except in the vicinity of the spinodals. The solutions of the square gradient model form closed loops, which shows the inherent symmetry and connected nature of bubbles and droplets. A thermodynamic stability analysis is carried out, where the second variation of the square gradient description is compared to the eigenvalues of the Hessian matrix in the capillary description. The analysis shows that it is impossible to stabilize arbitrarily small bubbles or droplets in closed systems and gives insight into metastable regions close to the minimum bubble/droplet radii. Despite the large difference in complexity, the square gradient and the capillary model predict the same finite threshold sizes and very similar stability limits for bubbles and droplets, both for single-component and two-component systems.
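For orientation, a small sketch of the macroscopic capillary description of a single droplet (classical work of formation and critical radius); the surface tension and pressure difference are illustrative values, not taken from the paper, and the closed-system stability analysis itself is not reproduced:

```python
import numpy as np

# Capillary (classical nucleation theory) picture of one droplet in a metastable vapour:
# work of formation W(r) = 4*pi*gamma*r^2 - (4/3)*pi*r^3*dp, with critical (Young-Laplace)
# radius r* = 2*gamma/dp at the top of the barrier.
gamma = 0.072        # surface tension, N/m (water-like, assumed)
dp = 2.0e6           # pressure difference liquid - vapour, Pa (assumed supersaturation)

r = np.linspace(1e-10, 2e-7, 2000)
W = 4 * np.pi * gamma * r ** 2 - (4.0 / 3.0) * np.pi * r ** 3 * dp
r_star = 2 * gamma / dp

print(f"critical radius r* = {r_star * 1e9:.1f} nm, barrier height = {W.max():.3e} J")
```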
Mueller matrix microscope with a dual continuous rotating compensator setup and digital demodulation
Abstract:
In this paper we describe a new Mueller matrix (MM) microscope that generalizes, and makes quantitative, the polarized light microscopy technique. In this instrument all the elements of the MM are simultaneously determined from the analysis in the frequency domain of the time-dependent intensity of the light beam at every pixel of the camera. The variations in intensity are created by the two compensators continuously rotating at different angular frequencies. A typical measurement is completed in a little over one minute and it can be applied at any visible wavelength. Some examples are presented to demonstrate the capabilities of the instrument.
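A schematic sketch of the frequency-domain demodulation step, assuming a synthetic per-pixel intensity trace and illustrative compensator frequencies; the mapping from Fourier coefficients to the 16 MM elements depends on the retardances and frequency ratio and is not reproduced here:

```python
import numpy as np

fs, T = 200.0, 60.0                       # sampling rate (Hz) and acquisition time (s), assumed
f1, f2 = 1.0, 5.0 / 3.0                   # compensator rotation frequencies, assumed 3:5 ratio
t = np.arange(0, T, 1 / fs)

# Synthetic detector signal: a few harmonics plus noise stand in for a real pixel trace.
intensity = (1.0
             + 0.3 * np.cos(2 * np.pi * 2 * f1 * t + 0.4)
             + 0.2 * np.cos(2 * np.pi * (2 * f2 - 2 * f1) * t - 1.1)
             + 0.01 * np.random.default_rng(0).normal(size=t.size))

spectrum = np.fft.rfft(intensity) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def coefficient(f):
    """Complex Fourier coefficient at frequency f (nearest FFT bin)."""
    return 2 * spectrum[np.argmin(np.abs(freqs - f))]

# Amplitudes recovered at two of the modulation harmonics (~0.3 and ~0.2 here).
print(abs(coefficient(2 * f1)), abs(coefficient(2 * f2 - 2 * f1)))
```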
Abstract:
Introduction. One of the most widely used paradigms in the study of attention is the Continuous Performance Test (CPT). The identical-pairs version (CPT-IP) has been used extensively to assess attention deficits in neurodevelopmental, neurological and psychiatric disorders. However, the localization of brain activation in the attentional networks varies significantly depending on the functional magnetic resonance imaging (fMRI) design used. Aim. To design a task for assessing sustained attention and working memory with fMRI, in order to provide research data on the localization and role of these functions. Subjects and methods. Forty students, all right-handed, took part in the study (50% women; range: 18-25 years). The CPT-IP task was designed as a block task in which CPT-IP periods alternated with rest periods. Results. The CPT-IP task used activates a network formed by frontal, parietal and occipital regions, which are related to executive and attentional functions. Conclusions. The CPT-IP task used in our work provides normative data in healthy adults for the study of the neural substrate of sustained attention and working memory. These data could be useful for assessing disorders that involve deficits in working memory and sustained attention.
Abstract:
We aim to use EEG signals to develop systems for detecting Alzheimer's disease. The available database is raw, so the first step must be to clean the signals properly. We propose a new way of ICA cleaning on a database recorded from patients with Alzheimer's disease (mild AD, early stage). Two researchers visually inspected all the signals (EEG channels), and each recording's least corrupted (artefact-clean) continuous 20 s interval was chosen for the analysis. Each trial was then decomposed using ICA. Sources were ordered using a kurtosis measure, and the researchers removed up to seven sources per trial corresponding to artefacts (eye movements, EMG corruption, EKG, etc.), using three criteria: (i) isolated source on the scalp (only a few electrodes contribute to the source); (ii) abnormal wave shape (drifts, eye blinks, sharp waves, etc.); (iii) source of abnormally high amplitude (above 100 μV). We then evaluated the outcome of this cleaning by classifying the patients with multilayer perceptron neural networks. The results are very satisfactory: performance increased from 50.9% to 73.1% correctly classified data when the ICA cleaning procedure was used.
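A hedged sketch of the cleaning pipeline on synthetic data, using scikit-learn's FastICA and a kurtosis ranking as a crude stand-in for the visual artefact criteria described above (channel count, sampling rate, and the number of rejected sources are illustrative):

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

# Synthetic (channels x samples) EEG-like segment with one spiky, blink-like artefact channel.
rng = np.random.default_rng(3)
n_channels, n_samples = 19, 20 * 128              # e.g. 20 s at 128 Hz (assumed)
eeg = rng.normal(size=(n_channels, n_samples))
eeg[3] += 8 * (rng.random(n_samples) < 0.01)      # inject a high-kurtosis artefact

# Decompose with ICA, rank sources by kurtosis, zero the most peaked ones, back-project.
ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(eeg.T)                # (samples, components)
k = kurtosis(sources, axis=0)
bad = np.argsort(k)[::-1][:2]                     # candidate artefact sources (top 2 by kurtosis)
sources[:, bad] = 0.0
cleaned = ica.inverse_transform(sources).T        # back to (channels, samples)
```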