954 results for Time-resolved spectroscopies


Relevance: 30.00%

Abstract:

Background: In order to provide insight into the complex biochemical processes inside a cell, modelling approaches must strike a balance between adequately representing the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly acute when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive. Results: We present a method that incorporates spatial information by means of tailored, probability-distributed time delays. These distributions can be obtained directly from a single in silico experiment or from a suitable set of in vitro experiments, and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational cost and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDEs) that can be used in scenarios of high molecular concentration and low noise propagation. Conclusions: Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational cost. This is of particular importance given that the time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity over time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs. delay-temporal, as indicated by the corresponding Master Equations and presented elsewhere.
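The core of a delay stochastic simulation algorithm can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a single reaction A → B whose product appears only after a randomly sampled delay, here drawn from an exponential distribution of our own choosing.

```python
import heapq
import random

def delay_ssa(a0, rate, delay_sampler, t_end):
    """Minimal delay SSA for A -> B: the reactant is consumed immediately,
    but the product appears only after a sampled delay."""
    t, A, B = 0.0, a0, 0
    pending = []  # min-heap of completion times for delayed products
    while t < t_end:
        prop = rate * A
        # time to the next reaction initiation (infinite if no A remains)
        dt = random.expovariate(prop) if prop > 0 else float("inf")
        # if a delayed product materialises before the next initiation, handle it first
        if pending and pending[0] <= t + dt:
            t = heapq.heappop(pending)
            B += 1
            continue
        if dt == float("inf"):
            break  # nothing left to react and no pending products
        t += dt
        A -= 1                                         # reactant consumed now ...
        heapq.heappush(pending, t + delay_sampler())   # ... product appears later

    return A, B

random.seed(1)
A, B = delay_ssa(a0=100, rate=0.5,
                 delay_sampler=lambda: random.expovariate(2.0), t_end=50.0)
```

In the paper's method, `delay_sampler` would instead draw from the tailored distribution obtained from the in silico or in vitro experiments.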

Relevance: 30.00%

Abstract:

Background: Ureaplasma species in amniotic fluid at the time of second-trimester amniocentesis increase the risk of preterm birth, but most affected pregnancies continue to term (Gerber et al. J Infect Dis 2003). We aimed to model intra-amniotic (IA) ureaplasma infection in spiny mice, a species with a relatively long gestation (39 days) that allows investigation of the disposition and possible clearance of ureaplasmas in the feto-placental compartment. Method: Pregnant spiny mice received IA injections of U. parvum serovar 6 (10 µL, 1×10^4 colony-forming units in PBS) or 10B media (10 µL; control) at 20 days (d) of gestation (term = 39 d). At 37 d, fetuses (n=3 ureaplasma, n=4 control) were surgically delivered and tissues were collected for bacterial culture, ureaplasma mba and urease gene expression by PCR, tissue WBC counts, and indirect fluorescent antibody (IFA) staining using anti-ureaplasma serovar 6 (rabbit) antiserum. Maternal and fetal plasma IgG was measured by Western blot. Results: Ureaplasmas were not detected by culture or PCR in fetal or maternal tissues but were visualized by IFA within placental and fetal lung tissues, in association with inflammatory changes and elevated WBC counts (p<0.0001). Anti-ureaplasma IgG was detected in maternal (2/2 tested) and fetal (1/2 tested) plasma but not in controls (0/3). Conclusions: IA injection of ureaplasmas in mid-gestation spiny mice caused persistent fetal lung and placental infection even though ureaplasmas were undetectable by standard culture or PCR techniques. This is consistent with resolution of IA infection, which may occur in human pregnancies that continue to term despite detection of ureaplasmas in mid-gestation.

Relevance: 30.00%

Abstract:

Most studies examining the temperature–mortality association in a city have used temperatures from one site or the average from a network of sites. This may cause measurement error, as temperature varies across a city due to effects such as urban heat islands. We examined whether spatiotemporal models using spatially resolved temperatures produced different associations between temperature and mortality compared with time series models that used non-spatial temperatures. We obtained daily mortality data for 163 areas across Brisbane city, Australia, from 2000 to 2004. We used ordinary kriging to interpolate spatial temperature variation across the city based on 19 monitoring sites. We used a spatiotemporal model to examine the impact of spatially resolved temperatures on mortality, and a time series model to examine non-spatial temperatures from a single site and the average temperature from three sites. We used squared Pearson scaled residuals to compare model fit. We found that kriged temperatures were consistent with observed temperatures. Spatiotemporal models using kriged temperature data yielded slightly better model fit than time series models using a single site or the average of three sites' data. Despite this better fit, spatiotemporal and time series models produced similar associations between temperature and mortality. In conclusion, time series models using non-spatial temperatures were as good as spatiotemporal models at estimating the city-wide association between temperature and mortality.
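Ordinary kriging of the kind used here interpolates by solving a small linear system whose weights are constrained to sum to one. A minimal generic sketch (our own illustration with a linear variogram, not the study's code):

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_targets, variogram=lambda h: h):
    """Predict z at target points from scattered observations by solving the
    ordinary-kriging system: a semivariogram matrix bordered by a Lagrange
    row that forces the weights to sum to one."""
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    out = []
    for tgt in np.atleast_2d(xy_targets):
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(xy_obs - tgt, axis=1))
        w = np.linalg.solve(A, b)
        out.append(w[:n] @ z_obs)
    return np.array(out)

# e.g. monitoring sites -> temperature at an unmonitored grid point
sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
temps = np.array([20.0, 22.0, 21.0, 23.0])
pred = ordinary_kriging(sites, temps, np.array([[0.5, 0.5]]))
```

Because the variogram vanishes at zero lag, the predictor is an exact interpolator: querying an observation site returns the observed value.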

Relevance: 30.00%

Abstract:

This paper examines the global policy convergence toward high-stakes testing in schools and the use of test results to ‘steer at a distance’, particularly as it applies to policy-makers’ promise to improve teacher quality. Using Deleuze’s three syntheses of time in the context of the Australian policy blueprint Quality Education, this paper argues that using test scores to discipline teaching repeats the past habit of policy-making as continuing the problem of the unaccountable teacher. This results in local policy-making enfolding test scores in a pure past where the teacher-as-problem is resolved through the use of data from testing to deliver accountability and transparency. This use of the database returns a digitised form of inspection that is a repetition of the habit of teacher-as-problem. While dystopian possibilities are available through the database, in what Deleuze refers to as a control society, for us the challenge is to consider policy-making as a step into an unknown future: to engage with producing policy that is grounded not in the unconscious interiority of solving the teacher problem, but in imagining new ways of conceiving the relationship between policy-making and teaching.

Relevance: 30.00%

Abstract:

There is a need for better understanding of the processes, and for new ideas to improve traditional pharmaceutical powder manufacturing procedures. Process analytical technology (PAT) has been developed to improve understanding of processes and to establish methods for monitoring and controlling them. The interest is in maintaining and even improving the whole manufacturing process and the final products in real time. Process understanding can be a foundation for innovation and continuous improvement in pharmaceutical development and manufacturing. New methods are needed to increase the quality and safety of the final products faster and more efficiently than ever before. Real-time process monitoring demands tools which enable fast, noninvasive measurements with sufficient accuracy. Traditional quality control methods have been laborious and time-consuming, and they are performed off-line, i.e. the analysis is removed from the process area. Vibrational spectroscopic methods are responding to this challenge, and their use has increased considerably during the past few years. In addition, other methods such as colour analysis can be utilised in noninvasive real-time process monitoring. In this study three pharmaceutical processes were investigated: drying, mixing and tabletting; tablet properties were also evaluated. Real-time monitoring was performed with NIR and Raman spectroscopies, colour analysis and particle size analysis, and compression data acquired during tabletting were evaluated using mathematical modelling. These methods proved suitable for real-time monitoring of pharmaceutical unit operations and increase knowledge of the critical process parameters and of the phenomena occurring during operations. They can improve our process understanding and therefore, ultimately, enhance the quality of the final products.
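One common way such spectroscopic measurements are turned into a real-time monitoring signal is multivariate statistical process control, for example a Hotelling T² statistic computed against a PCA model of in-control spectra. This particular statistic is our illustrative choice, not necessarily the method used in the thesis:

```python
import numpy as np

def hotelling_t2(reference, new_spectra, n_pc=2):
    """Hotelling T^2 of new spectra against a PCA model built from a block
    of in-control reference spectra (rows = samples, columns = wavelengths)."""
    mu = reference.mean(axis=0)
    Xc = reference - mu
    # principal axes and per-component variances from the SVD of the reference block
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = (s[:n_pc] ** 2) / (len(reference) - 1)
    scores = (new_spectra - mu) @ Vt[:n_pc].T
    return np.sum(scores**2 / var, axis=1)

# synthetic stand-in for in-control NIR spectra: 200 scans x 50 wavelengths
rng = np.random.default_rng(0)
ref = rng.normal(size=(200, 50))
t2 = hotelling_t2(ref, ref, n_pc=2)
```

In use, a new scan whose T² exceeds a control limit would flag the batch for inspection.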

Relevance: 30.00%

Abstract:

BACKGROUND: In order to rapidly and efficiently screen potential biofuel feedstock candidates for desirable traits, robust high-throughput analytical techniques must be developed and honed. The traditional methods of measuring lignin syringyl/guaiacyl (S/G) ratio can be laborious, involve hazardous reagents, and/or be destructive. Vibrational spectroscopy can furnish high-throughput instrumentation without the limitations of the traditional techniques. Spectral data from mid-infrared, near-infrared, and Raman spectroscopies were combined with S/G ratios, obtained using pyrolysis molecular beam mass spectrometry, from 245 different eucalypt and Acacia trees across 17 species. Iterations of spectral processing allowed the assembly of robust predictive models using partial least squares (PLS). RESULTS: The PLS models were rigorously evaluated using three different randomly generated calibration and validation sets for each spectral processing approach. Root-mean-square errors of prediction for the validation sets were lowest for models built from Raman (0.13 to 0.16) and mid-infrared (0.13 to 0.15) spectral data, while near-infrared spectroscopy led to more erroneous predictions (0.18 to 0.21). Correlation coefficients (r) for the validation sets followed a similar pattern: Raman (0.89 to 0.91), mid-infrared (0.87 to 0.91), and near-infrared (0.79 to 0.82). These statistics signify that Raman and mid-infrared spectroscopy led to the most accurate predictions of S/G ratio in a diverse collection of feedstocks. CONCLUSION: Eucalypts present an attractive option for biofuel and biochemical production. Given the assortment of over 900 different species of Eucalyptus and Corymbia, in addition to various species of Acacia, it is necessary to isolate those possessing ideal biofuel traits. This research has demonstrated the validity of vibrational spectroscopy as a means to efficiently partition different potential biofuel feedstocks according to lignin S/G ratio, significantly reducing experiment and analysis time and expense while providing non-destructive, accurate, global predictive models encompassing a diverse array of feedstocks.
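The PLS calibration step can be illustrated with a tiny NIPALS-style PLS1 routine mapping spectra to a scalar property such as the S/G ratio. This is a generic sketch of the algorithm, not the authors' modelling pipeline:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """PLS1 via NIPALS: extract latent components from (spectra X, property y)
    and return the regression vector plus the centering constants."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    W, P, Q = [], [], []
    Xr, yr = Xc.copy(), yc.copy()
    for _ in range(n_comp):
        w = Xr.T @ yr                  # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xr @ w                     # scores
        p = Xr.T @ t / (t @ t)         # X loadings
        q = (yr @ t) / (t @ t)         # y loading
        Xr -= np.outer(t, p)           # deflate
        yr -= q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # collapse components to one coefficient vector
    return B, X.mean(axis=0), y.mean()

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean

# rank-one synthetic "spectra" whose single latent variable determines y exactly
rng = np.random.default_rng(3)
p_vec = rng.normal(size=12)
t_lat = np.arange(10.0)
X = np.outer(t_lat, p_vec)
y = 3.0 * t_lat + 5.0
B, xm, ym = pls1_fit(X, y, n_comp=1)
pred = pls1_predict(X, B, xm, ym)
```

On this noiseless rank-one example a single latent component reproduces y exactly; real spectra need several components chosen by cross-validation.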

Relevance: 30.00%

Abstract:

We present a detailed pulse-phase-resolved spectral analysis of the persistent high-mass X-ray binary pulsar Vela X-1 observed with Suzaku during June 2008. The pulse profiles exhibit both intensity and energy dependence, with multiple peaks at low energies and double peaks at higher energies. The source shows some spectral evolution over the duration of the observation, and care has been taken to average over data with minimum spectral variability for the analysis. Excluding the time ranges having variable hardness ratio and intensity dependence, we model the continuum with a phenomenological partial-covering high-energy cutoff model and a more physical partial-covering thermal Comptonization model (CompTT). For both models, we detect a cyclotron resonant scattering feature (CRSF) and its harmonic at ~25 keV and ~50 keV, respectively. Both the fundamental and harmonic CRSF parameters are strongly variable over the pulse phase, with the ratio of the two line energies deviating from the classical value of 2. The continuum parameters also show significant variation over the pulse phase, giving some idea of the changing physical conditions seen as the viewing angle varies with pulse phase and as the accretion stream obscures the source at some pulse phases.
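Pulse-phase-resolved analysis starts by folding the time series at the pulsar period so each sample is assigned a phase. A minimal folding sketch; the ~283 s period below is the approximate Vela X-1 spin period, and the event times are synthetic:

```python
import numpy as np

def fold_profile(times, period, n_bins=16):
    """Epoch-fold event arrival times into a pulse-phase histogram."""
    phases = (times % period) / period
    counts, edges = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return counts, edges

period = 283.0                                    # s, approximate Vela X-1 spin period
events = np.arange(100) * period + 0.3 * period   # synthetic events, all at phase 0.3
counts, edges = fold_profile(events, period)
```

Spectra are then extracted separately for each phase bin, which is how the phase dependence of the CRSF parameters is obtained.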

Relevance: 30.00%

Abstract:

The effects of the unresolved subgrid-scale (SGS) motions on the energy balance of the resolved scales in large eddy simulation (LES) have been investigated actively, because modeling the energy transfer between the resolved and unresolved scales is crucial to constructing accurate SGS models. But the subgrid scales not only modify the energy balance; they also contribute to the temporal decorrelation of the resolved scales. The importance of this effect in applications, including the predictability problem and the evaluation of sound radiation by turbulent flows, motivates the present study of the effect of SGS modeling on turbulent time correlations. This paper compares the two-point, two-time Eulerian velocity correlation in isotropic homogeneous turbulence evaluated by direct numerical simulation (DNS) with the correlations evaluated by LES using a standard spectral eddy viscosity. It proves convenient to express the two-point correlations in terms of a spatial Fourier decomposition of the velocity field. The LES fields are more coherent than the DNS fields: their time correlations decay more slowly at all resolved scales of motion, and both their integral scales and microscales are larger than those of the DNS field. Filtering alone is not responsible for this effect: in the Fourier representation, the time correlations of the filtered DNS field are identical to those of the DNS field itself. The possibility of modeling the decorrelating effects of the unresolved scales of motion by including a random force in the model is briefly discussed. The results could have applications to the problem of computing sound sources in isotropic homogeneous turbulence by LES.
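In the Fourier representation, the two-time correlation is simply the normalized lagged covariance of each velocity mode. A minimal estimator, exercised here on synthetic Ornstein-Uhlenbeck mode histories (our stand-in for actual DNS/LES data):

```python
import numpy as np

def mode_time_correlation(u, max_lag):
    """Normalized two-time correlation R(k, tau) for complex mode histories
    u[k, t]; R(k, 0) = 1 by construction."""
    n_modes, n_t = u.shape
    den = np.mean(np.abs(u) ** 2, axis=1)
    R = np.empty((n_modes, max_lag + 1))
    for tau in range(max_lag + 1):
        R[:, tau] = np.mean(u[:, : n_t - tau] * np.conj(u[:, tau:]), axis=1).real / den
    return R

# synthetic mode histories: an AR(1)/Ornstein-Uhlenbeck surrogate per mode
rng = np.random.default_rng(0)
n_modes, n_t, a = 4, 4000, 0.9
u = np.zeros((n_modes, n_t), dtype=complex)
u[:, 0] = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
for t in range(1, n_t):
    noise = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
    u[:, t] = a * u[:, t - 1] + np.sqrt(1 - a**2) * noise
R = mode_time_correlation(u, max_lag=10)
```

The paper's comparison amounts to computing such R(k, τ) curves from DNS, filtered DNS, and LES fields and examining how quickly each decays.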

Relevance: 30.00%

Abstract:

The nature of the subducted lithospheric slab is investigated seismologically by tomographic inversions of ISC residual travel times. The slab, in which nearly all deep earthquakes occur, is fast in the seismic images because it is much cooler than the ambient mantle. High resolution three-dimensional P and S wave models in the NW Pacific are obtained using regional data, while inversion for the SW Pacific slabs includes teleseismic arrivals. Resolution and noise estimations show the models are generally well-resolved.

The slab anomalies in these models, as inferred from the seismicity, are generally coherent in the upper mantle and become contorted and decrease in amplitude with depth. Fast slabs are surrounded by slow regions shallower than 350 km depth. Slab fingering, including segmentation and spreading, is indicated near the bottom of the upper mantle. The fast anomalies associated with the Japan, Izu-Bonin, Mariana and Kermadec subduction zones tend to flatten to sub-horizontal at depth, while downward spreading may occur under parts of the Mariana and Kuril arcs. The Tonga slab appears to end around 550 km depth, but is underlain by a fast band at 750-1000 km depths.

The NW Pacific model combined with the Clayton-Comer mantle model predicts many observed residual sphere patterns. The predictions indicate that the near-source anomalies affect the residual spheres less than the teleseismic contributions. The teleseismic contributions may be removed either by using a mantle model, or by using teleseismic station averages of residuals from only regional events. The slab-like fast bands in the corrected residual spheres are consistent with seismicity trends under the Mariana, Izu-Bonin and Japan trenches, but are inconsistent for the Kuril events.

The comparison of the tomographic models with earthquake focal mechanisms shows that deep compression axes and fast velocity slab anomalies are in consistent alignment, even when the slab is contorted or flattened. Abnormal stress patterns are seen at major junctions of the arcs. The depth boundary between tension and compression in the central parts of these arcs appears to depend on the dip and topology of the slab.
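Travel-time tomography of this sort is commonly posed as a damped least-squares inversion of residuals for slowness perturbations. A toy sketch of that step; the ray-length matrix, block count, and damping value are all illustrative:

```python
import numpy as np

def invert_residuals(G, d, damping):
    """Damped least squares: minimize |G m - d|^2 + damping * |m|^2, where
    G holds ray lengths per model block, d the travel-time residuals, and
    m the slowness perturbations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + damping * np.eye(n), G.T @ d)

# toy problem: 20 rays crossing 5 blocks; d holds ISC-style travel-time residuals
rng = np.random.default_rng(1)
G = rng.uniform(0.0, 50.0, size=(20, 5))               # ray length in each block (km)
m_true = np.array([0.01, -0.02, 0.0, 0.015, -0.005])   # slowness perturbations (s/km)
d = G @ m_true
m_est = invert_residuals(G, d, damping=1e-8)
```

The damping term trades resolution against noise amplification; resolution and noise estimates like those cited above quantify that trade-off.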

Relevance: 30.00%

Abstract:

The objective of this thesis is to develop a framework for velocity-resolved, scalar-modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step toward future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc >> 1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in direct numerical simulation (DNS) datasets; the results suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport at high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be large enough to wipe out any meaningful contribution from subfilter models, providing strong incentive to develop more effective numerical schemes for high-Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the transported scalar is enforced by derivative-limiting techniques, while physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm yields significant performance gains in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence by examining its orientation in the strain-rate eigenframe. The results indicate no discernible dependence on the modeled scalar field, and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity-resolved, scalar-modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and they suggest that the tensor-diffusivity model is well suited for use in the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing the Schmidt-number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc >> 1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is effective in predicting large-scale transport characteristics of high-Schmidt-number scalars; however, prediction of subfilter quantities would entail additional modeling intended specifically for that purpose. The VR-SM simulations presented in this thesis provide an opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
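The derivative-limiting idea can be illustrated in one dimension: clip the endpoint derivatives of a cubic Hermite segment against the cell slope. The [0, 3] Fritsch-Carlson-type bound below is a standard sufficient condition for monotonicity, used here as an illustrative stand-in for the thesis's algorithm:

```python
import numpy as np

def limited_hermite(x0, x1, f0, f1, d0, d1, xq):
    """Cubic Hermite interpolation on [x0, x1] with Fritsch-Carlson-style
    derivative limiting: endpoint slopes are clipped to [0, 3] times the
    cell slope (and flattened if wrong-signed), which keeps the segment
    monotone and hence bounded by its endpoint values."""
    h = x1 - x0
    delta = (f1 - f0) / h

    def limit(d):
        if delta == 0.0:
            return 0.0
        if d * delta < 0.0:              # wrong-signed derivative: flatten
            return 0.0
        return np.sign(d) * min(abs(d), 3.0 * abs(delta))

    d0, d1 = limit(d0), limit(d1)
    t = (xq - x0) / h
    h00 = 2 * t**3 - 3 * t**2 + 1        # standard Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h * h10 * d0 + h01 * f1 + h * h11 * d1

xq = np.linspace(0.0, 1.0, 101)
vals = limited_hermite(0.0, 1.0, 0.0, 1.0, 10.0, 10.0, xq)  # overshooting slopes get clipped
```

Unlike this strict monotone limiter, the thesis's scheme additionally admits physically plausible single sub-cell extrema to further reduce numerical dissipation.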

Relevance: 30.00%

Abstract:

Understanding the origin of life on Earth has long fascinated the minds of the global community, and has been a driving factor in interdisciplinary research for centuries. Beyond the pioneering work of Darwin, perhaps the most widely known study in the last century is that of Miller and Urey, who examined the possibility of the formation of prebiotic chemical precursors on the primordial Earth [1]. More recent studies have shown that amino acids, the chemical building blocks of the biopolymers that comprise life as we know it on Earth, are present in meteoritic samples, and that the molecules extracted from the meteorites display isotopic signatures indicative of an extraterrestrial origin [2]. The most recent major discovery in this area has been the detection of glycine (NH2CH2COOH), the simplest amino acid, in pristine cometary samples returned by the NASA STARDUST mission [3]. Indeed, the open questions left by these discoveries, both in the public and scientific communities, hold such fascination that NASA has designated the understanding of our "Cosmic Origins" as a key mission priority.

Despite these exciting discoveries, our understanding of the chemical and physical pathways to the formation of prebiotic molecules is woefully incomplete. This is largely because we do not yet fully understand how the interplay between grain-surface and sub-surface ice reactions and the gas-phase affects astrophysical chemical evolution, and our knowledge of chemical inventories in these regions is incomplete. The research presented here aims to directly address both these issues, so that future work to understand the formation of prebiotic molecules has a solid foundation from which to work.

From an observational standpoint, a dedicated campaign was undertaken to identify gas-phase hydroxylamine (NH2OH), potentially a direct precursor to glycine. No trace of NH2OH was found. These observations motivated a refinement of the chemical models of glycine formation, and have largely ruled out a gas-phase route to the synthesis of the simplest amino acid in the ISM. The mystery of the carrier of a series of transitions was resolved using observational data toward a large number of sources, confirming the identity of this important carbon-chemistry intermediate, B11244, as l-C3H+ and identifying it in at least two new environments. Finally, the doubly-nitrogenated molecule carbodiimide (HNCNH) was identified in the ISM for the first time through maser emission features in the centimeter-wavelength regime.

In the laboratory, a terahertz time-domain spectrometer was constructed to obtain the experimental spectra necessary to search for solid-phase species in the THz region of the spectrum. These investigations have shown a striking dependence of the THz spectra of the ices on their large-scale, long-range (i.e. lattice) structure. A database of molecular spectra has been started, and both the simplest and most abundant ice species, which have already been identified, as well as a number of more complex species, have been studied. The exquisite sensitivity of the THz spectra to both the structure and the thermal history of these ices may lead to better probes of complex chemical and dynamical evolution in interstellar environments.
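In time-domain spectroscopy, the frequency-domain spectrum is recovered from the sampled electric-field pulse by Fourier transform. A minimal sketch on a synthetic few-cycle pulse (all numbers illustrative, not instrument parameters):

```python
import numpy as np

def tds_spectrum(signal, dt):
    """Amplitude spectrum and frequency axis of a sampled time-domain trace."""
    amp = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return freqs, amp

# synthetic few-cycle pulse: 1 THz carrier under a Gaussian envelope
dt = 10e-15                                # 10 fs sampling step
t = np.arange(4096) * dt
pulse = np.exp(-(((t - 5e-12) / 0.5e-12) ** 2)) * np.cos(2 * np.pi * 1e12 * t)
freqs, amp = tds_spectrum(pulse, dt)
peak_freq = freqs[np.argmax(amp)]          # should land near the 1 THz carrier
```

Absorption features of an ice sample appear as dips when such a sample spectrum is ratioed against a reference scan without the sample.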

Relevance: 30.00%

Abstract:

This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) a novel linear-cost implicit solver based on higher-order backward differentiation formulae (BDF) and the alternating direction implicit (ADI) approach; 2) a fast explicit solver; 3) dispersionless spectral spatial discretizations; and 4) a domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy; previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary-layer effects at a Reynolds number of one million and a Mach number of 0.85 (with a well-resolved boundary layer, run up to a time long enough that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth of the domain length) was successfully tackled in a relatively short, approximately thirty-hour, single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. As demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations further exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
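The ADI mechanism can be illustrated on a much simpler model problem: the 2-D heat equation with a classical Peaceman-Rachford splitting, which replaces one 2-D implicit solve per step with two 1-D solves. This classical splitting is our illustrative stand-in; the thesis's solver combines BDF formulae of orders two to six with ADI for the compressible Navier-Stokes equations:

```python
import numpy as np

def adi_heat_step(u, nu, dt, h):
    """One Peaceman-Rachford ADI step for u_t = nu*(u_xx + u_yy) on a square
    grid with homogeneous Dirichlet boundaries (u = 0 on the edges)."""
    n = u.shape[0]
    r = nu * dt / (2.0 * h * h)
    # 1-D second-difference operator on the interior points
    D = (np.diag(-2.0 * np.ones(n - 2))
         + np.diag(np.ones(n - 3), 1) + np.diag(np.ones(n - 3), -1))
    A = np.eye(n - 2) - r * D   # a real solver would use a tridiagonal (Thomas) solve
    # half-step 1: implicit in x, explicit in y
    rhs = u[1:-1, 1:-1] + r * (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2])
    u_half = np.zeros_like(u)
    u_half[1:-1, 1:-1] = np.linalg.solve(A, rhs)
    # half-step 2: implicit in y, explicit in x
    rhs = u_half[1:-1, 1:-1] + r * (u_half[2:, 1:-1] - 2.0 * u_half[1:-1, 1:-1]
                                    + u_half[:-2, 1:-1])
    u_new = np.zeros_like(u)
    u_new[1:-1, 1:-1] = np.linalg.solve(A, rhs.T).T
    return u_new

# decay of the fundamental eigenmode on the unit square
n = 21
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.sin(np.pi * X) * np.sin(np.pi * Y)
u1 = adi_heat_step(u0, nu=1.0, dt=0.01, h=x[1] - x[0])
```

Because each half-step is implicit in only one direction, the per-step cost scales linearly with the number of unknowns when banded solvers are used, which is the "linear-cost implicit solver" property the thesis exploits.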

Relevance: 30.00%

Abstract:

The mass-resolved multiphoton ionization (MPI) spectra of methyl iodide were obtained in the 430-490 nm region using a time-of-flight (TOF) mass spectrometer. The fragment spectra share the same vibrational structure, indicating that, in the wavelength region under study, the fragment species arise from photodissociation of multiphoton-ionized molecular parent ions. Some features in the spectra are identified as three-photon excitations to the 6p and 7s Rydberg states of methyl iodide, and two new vibrational structures of some Rydberg states are observed. The mechanism of ionization and dissociation is also discussed.
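In TOF mass spectrometry, ions accelerated through a fixed potential separate by mass-to-charge ratio because flight time scales with the square root of m/z. A two-point calibration sketch of the form t = a + b·sqrt(m/z); the flight times below are invented for illustration, not measured values:

```python
import math

def tof_calibration(t1, mz1, t2, mz2):
    """Fit t = a + b*sqrt(m/z) through two known peaks; return a function
    mapping an arbitrary flight time back to m/z."""
    b = (t2 - t1) / (math.sqrt(mz2) - math.sqrt(mz1))
    a = t1 - b * math.sqrt(mz1)
    return lambda t: ((t - a) / b) ** 2

# calibrate on CH3+ (m/z 15) and I+ (m/z 127); hypothetical flight times in us
mz_of = tof_calibration(t1=5.00, mz1=15.0, t2=14.55, mz2=127.0)
```

With the calibration in hand, each peak in the flight-time record can be labeled with its fragment mass, which is how the fragment assignments above are made.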

Relevance: 30.00%

Abstract:

A quantitative analysis of the individual compounds in tobacco essential oils was performed by comprehensive two-dimensional gas chromatography (GC x GC) with flame ionization detection (FID). A time-of-flight mass spectrometer (TOF/MS) was coupled to the GC x GC for identification of the resolved peaks. The FID response to different compound classes was calibrated using multiple internal standards. In total, 172 compounds were identified with good spectral matches, and the 61 compounds with high match probability were reliably quantified. For comparison, the essential oil sample was also quantified by one-dimensional gas chromatography-mass spectrometry (GC/MS) with the multiple-internal-standards method. The results showed close agreement between the two analysis methods whenever the peak purity and match quality in one-dimensional GC/MS were high enough.
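Quantification against an internal standard with FID relative response factors reduces to a simple proportionality. An illustrative sketch; the names, areas, and response factor are ours, not values from the study:

```python
def quantify(peak_area, is_area, is_amount, rrf):
    """Amount of analyte from its peak area, the internal-standard peak area
    and spiked amount, and the analyte's relative response factor (RRF)
    versus that internal standard."""
    return peak_area / (is_area * rrf) * is_amount

# hypothetical: analyte peak 5200 counts, internal standard 8000 counts for
# a 10 ug spike, RRF of 1.3 relative to the standard
amount = quantify(peak_area=5200.0, is_area=8000.0, is_amount=10.0, rrf=1.3)  # ~5.0 ug
```

Calibrating the RRF per compound class, as done above for the FID, corrects for the detector responding differently to, say, terpenes than to alcohols.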

Relevance: 30.00%

Abstract:

Various concurrency control algorithms differ in the time when conflicts are detected and in the way they are resolved. In that respect, the Pessimistic and Optimistic Concurrency Control (PCC and OCC) alternatives represent two extremes. PCC locking protocols detect conflicts as soon as they occur and resolve them using blocking. OCC protocols detect conflicts at transaction commit time and resolve them using rollbacks (restarts). For real-time databases, blockages and rollbacks are hazards that increase the likelihood of transactions missing their deadlines. We propose a Speculative Concurrency Control (SCC) technique that minimizes the impact of blockages and rollbacks. SCC relies on the use of added system resources to speculate on potential serialization orders and to ensure that if such serialization orders materialize, the hazards of blockages and rollbacks are minimized. We present a number of SCC-based algorithms that differ in the level of speculation they introduce and in the amount of system resources (mainly memory) they require. We show the performance gains (in terms of the number of satisfied timing constraints) to be expected when a representative SCC algorithm (SCC-2S) is adopted.
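The OCC extreme that the paper contrasts against can be sketched as commit-time backward validation over read versions; SCC's refinement is to maintain speculative shadow executions so that, when a conflict does materialize, a ready alternative replaces the costly restart. The toy sketch below shows only the plain OCC validation step, with all names of our choosing:

```python
class Store:
    """Toy optimistic concurrency control: transactions buffer writes and
    record the version of every item they read; commit succeeds only if
    none of those items has changed in the meantime."""

    def __init__(self):
        self.data, self.version = {}, {}

    def begin(self):
        return {"reads": {}, "writes": {}}

    def read(self, tx, key):
        tx["reads"][key] = self.version.get(key, 0)   # remember version seen
        return tx["writes"].get(key, self.data.get(key))

    def write(self, tx, key, value):
        tx["writes"][key] = value                     # buffered until commit

    def commit(self, tx):
        # backward validation: abort if any item read has since changed
        for key, seen in tx["reads"].items():
            if self.version.get(key, 0) != seen:
                return False                          # conflict -> rollback (restart)
        for key, value in tx["writes"].items():
            self.data[key] = value
            self.version[key] = self.version.get(key, 0) + 1
        return True
```

Under SCC, the moment another transaction's write invalidates a read, a speculative shadow of the reader would be forked on spare resources instead of waiting for this commit-time `False`.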