918 results for Multi-phase experiments


Relevance:

30.00%

Publisher:

Abstract:

Tricyclo-DNA (tcDNA) is a sugar-modified analogue of DNA currently being tested in an antisense approach for the treatment of Duchenne muscular dystrophy. Tandem mass spectrometry plays a key role in modern medical diagnostics and has become a widespread technique for the structure elucidation and quantification of antisense oligonucleotides. Herein, mechanistic aspects of the fragmentation of tcDNA are discussed, which lay the basis for reliable sequencing and quantification of the antisense oligonucleotide. Excellent selectivity of tcDNA for complementary RNA is demonstrated in direct competition experiments. Moreover, the kinetic stability and fragmentation patterns of matched and mismatched tcDNA heteroduplexes were investigated and compared with those of non-modified DNA and RNA duplexes. Although separation of the constituent strands is the entropy-favored fragmentation pathway of all nucleic acid duplexes, it was found to be only a minor pathway for tcDNA duplexes. The modified hybrid duplexes preferentially undergo neutral base loss and backbone cleavage. This difference is due to the low activation entropy for strand dissociation of the modified duplexes, which arises from the conformational constraint of the tc-sugar moiety. The low activation entropy results in a relatively high free activation enthalpy for dissociation, comparable to that of the alternative reaction pathway, the release of a nucleobase. The gas-phase behavior of tcDNA duplexes illustrates the impact of activation entropy on fragmentation kinetics and suggests that tandem mass spectrometric experiments are not suited to determining the relative stability of different types of nucleic acid duplexes.
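The entropy argument can be made explicit with the standard transition-state relation (a textbook identity, not a result reported in the abstract):

\[ \Delta G^{\ddagger} = \Delta H^{\ddagger} - T\,\Delta S^{\ddagger} \]

Because the conformationally constrained duplex gains little entropy on the way to strand separation, the small \(\Delta S^{\ddagger}\) barely lowers \(\Delta G^{\ddagger}\), raising the dissociation barrier to the level of the competing base-loss pathway.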

Relevance:

30.00%

Publisher:

Abstract:

An experiment was conceived in which we monitored the degradation of GlcDGD. Independent of the fate of the [14C]glucosyl headgroup after hydrolysis from the glycerol backbone, the 14C enters the aqueous or gas phase, whereas the intact lipid is insoluble and remains in the sediment phase. Total degradation of GlcDGD is then obtained by combining the increase of radioactivity in the aqueous and gaseous phases. We chose two different sediments for this experiment. One is microbially active surface sediment sampled in February 2010 from the upper tidal flat of the German Wadden Sea near Wremen (53° 38' 0N, 8° 29' 30E). The other is deep subsurface sediment recovered from the northern Cascadia Margin during Integrated Ocean Drilling Program Expedition 311 [site U1326, 138.2 meters below seafloor (mbsf), in situ temperature 20 °C, water depth 1,828 m]. We performed both live and killed-control experiments for comparison. Surface and subsurface sediment slurries were incubated in the dark at their in situ temperatures, 4 °C and 20 °C respectively, for 300 d. The sterilized slurry was stored at 20 °C. All incubations were carried out under an N2 headspace to ensure anaerobic conditions. The sampling frequency was high during the first half-month, i.e., after 1, 2, 7, and 14 d; thereafter, the sediment slurry was sampled every 2 months. At each time point, samples were taken in triplicate for radioactivity measurements. After 300 d of incubation, no significant changes in radioactivity in the aqueous phase were detected. This may be the result of either rapid turnover of the released [14C]glucose or the relatively high limit of detection caused by the slight solubility (equivalent to 2% of initial radioactivity) of GlcDGD in water. Therefore, total degradation of GlcDGD in the dataset was calculated by combining the radioactivity of DIC, CH4, and CO2, leading to a minimum estimate.
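The bookkeeping described above reduces to simple arithmetic; the following sketch illustrates it in Python (all variable names and counts are hypothetical placeholders, not data from the study):

def total_degradation_fraction(dic_dpm, ch4_dpm, co2_dpm, initial_dpm):
    """Minimum estimate of GlcDGD degradation: 14C recovered as DIC, CH4
    and CO2, expressed as a fraction of the initially added radioactivity."""
    return (dic_dpm + ch4_dpm + co2_dpm) / initial_dpm

# Triplicate counts at one time point (hypothetical values, dpm)
replicates = [(120.0, 15.0, 40.0), (110.0, 18.0, 38.0), (125.0, 14.0, 42.0)]
initial = 100_000.0  # initially added [14C]GlcDGD, dpm

fractions = [total_degradation_fraction(d, m, c, initial) for d, m, c in replicates]
print(f"minimum degradation estimate: {sum(fractions) / len(fractions):.4%}")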

Relevance:

30.00%

Publisher:

Abstract:

The location of the seaward tip of a subduction thrust controls material transfer at convergent plate margins, and hence global mass balances. At approximately half of these margins, the material of the subducting plate is completely underthrust, so that no accretion or even subduction erosion takes place. Along the remaining margins, material is scraped off the subducting plate and added to the upper plate by frontal accretion. Here we examine the physical properties of subducting sediments off Costa Rica and Nankai, type examples of an erosional and an accretionary margin, to investigate which parameters control the level at which the frontal thrust cuts into the incoming sediment pile. A series of rotary-shear experiments was carried out to measure the frictional strength of the various lithologies entering the two subduction zones. Results include the following findings: (1) At Costa Rica, clay-rich strata at the top of the incoming succession have the lowest strength (µres = 0.19), while the underlying calcareous ooze, chalk and diatomite are strong (up to µres = 0.43; µpeak = 0.56); hence the entire sediment package is underthrust. (2) Off Japan, clay-rich deposits within the lower Shikoku Basin inventory are weakest (µres = 0.13-0.19) and favour migration of the frontal proto-thrust into one particular horizon between sandy, competent turbidites below and ash-bearing mud above. (3) Taking in situ data and earlier geotechnical testing into account, it is suggested that mineralogical composition rather than pore pressure defines the position of the frontal thrust, which localizes in the weakest, clay-mineral-rich (up to 85 wt.%) materials. (4) Smectite, the dominant clay mineral phase at either margin, shows rate strengthening and stable sliding at conditions representative of the frontal 50 km of the subduction thrust (0.0001-0.1 mm/s, 0.5-25 MPa effective normal stress). (5) Progressive illitization of smectite cannot explain seismogenesis, because illite-rich samples also show velocity strengthening at the conditions tested.
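The rate strengthening in point (4) refers to the standard rate-and-state friction framework (textbook form, not reproduced from the paper):

\[ \mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c} \]

At steady state, friction increases with sliding velocity \(V\) when \(a - b > 0\) (velocity strengthening), which promotes stable sliding rather than the nucleation of seismic slip.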

Relevance:

30.00%

Publisher:

Abstract:

Federal Highway Administration, Washington, D.C.

Relevance:

30.00%

Publisher:

Abstract:

Cybercrime and related malicious activity in our increasingly digital world have become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges, the most prominent of which are the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not aided by the fact that digital forensics today still involves manual, time-consuming tasks within the processes of identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, industry-standard tools are largely evidence-oriented, have limited support for evidence integration and only automate certain precursory tasks, such as indexing and text searching.

In this study, efficiency, in the form of reduced time and human labour, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed to further automate it by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several digital investigation processes enabled by the architecture by harnessing the properties of various peer-to-peer overlays.

Remote evidence acquisition improves the efficiency (time and effort involved) of digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition, and that a multi-TCP-connection paradigm is required. The automated integration, correlation and reasoning over multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation. Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication, thereby enabling automation and reducing the need for manual human intervention.
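The scalability finding suggests a chunked, multi-connection transfer scheme. The sketch below illustrates that paradigm in Python; the wire protocol (a fixed offset/length request header) and all names are assumptions for illustration, not the implementation from the study:

import socket
import struct
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024  # bytes requested per connection

def fetch_chunk(host, port, offset, length):
    """Fetch one byte range of the remote evidence image over its own TCP connection."""
    with socket.create_connection((host, port), timeout=30) as sock:
        sock.sendall(struct.pack("!QQ", offset, length))  # hypothetical request header
        buf = bytearray()
        while len(buf) < length:
            data = sock.recv(min(65536, length - len(buf)))
            if not data:
                raise ConnectionError("server closed connection mid-chunk")
            buf.extend(data)
        return bytes(buf)

def acquire(host, port, total_size, connections=8):
    """Acquire a full image by spreading byte-range requests over several TCP connections."""
    offsets = range(0, total_size, CHUNK)
    with ThreadPoolExecutor(max_workers=connections) as pool:
        chunks = pool.map(
            lambda off: fetch_chunk(host, port, off, min(CHUNK, total_size - off)),
            offsets)
        return b"".join(chunks)

Besides spreading load, independent connections isolate failures: a dropped connection costs one chunk retry rather than the whole acquisition.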

Relevance:

30.00%

Publisher:

Abstract:

Experimental studies on phase equilibria in the multi-component system PbO-ZnO-CaO-SiO2-FeO-Fe2O3 in air have been conducted to characterize the phase relations of a complex slag system used in the oxidation smelting of lead and in typical lead blast furnace sinters. Liquidus surfaces have been constructed in two pseudoternary sections ZnO-Fe2O3-(PbO + CaO + SiO2): one with a CaO/SiO2 weight ratio of 0.1 and a PbO/(CaO + SiO2) weight ratio of 6.2, the other with a CaO/SiO2 weight ratio of 0.6 and a PbO/(CaO + SiO2) weight ratio of 4.3.
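For orientation, the two fixed weight ratios defining each section translate directly into weight fractions of the (PbO + CaO + SiO2) pseudo-component; a minimal Python sketch of this arithmetic (ours, not from the paper):

def pseudo_component_fractions(cao_sio2, pbo_casi):
    """Weight fractions of PbO, CaO and SiO2, normalised so PbO + CaO + SiO2 = 1."""
    s = 1.0 / (1.0 + pbo_casi)       # combined CaO + SiO2 fraction
    sio2 = s / (1.0 + cao_sio2)
    return {"PbO": pbo_casi * s, "CaO": cao_sio2 * sio2, "SiO2": sio2}

print(pseudo_component_fractions(0.1, 6.2))  # first section
print(pseudo_component_fractions(0.6, 4.3))  # second section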

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the impacts of operating conditions and liquid properties on the hydrodynamics and volumetric mass transfer coefficient in activated sludge air-lift reactors. Experiments were conducted in internal and external air-lift reactors. The activated sludge liquid displayed non-Newtonian rheological behavior. With an increase in the superficial gas velocity, the liquid circulation velocity, gas holdup and mass transfer coefficient increased, and the gas residence time decreased. The liquid circulation velocity, gas holdup and mass transfer coefficient decreased as the sludge loading increased. The flow regime in the activated sludge air-lift reactors had a significant effect on the liquid circulation velocity and the gas holdup, but appeared to have little impact on the mass transfer coefficient. The experimental results in this study were best described by empirical models in which the reactor geometry, superficial gas velocity and/or power consumption, and solid and fluid properties were employed. (c) 2006 Elsevier B.V. All rights reserved.
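Empirical correlations of the kind referred to are commonly power laws such as kLa = C·Ug^n (a generic form; the paper's actual models are not reproduced here). A minimal fitting sketch in Python with hypothetical data:

import math

ug = [0.01, 0.02, 0.04, 0.08]       # superficial gas velocity, m/s (hypothetical)
kla = [0.004, 0.007, 0.012, 0.021]  # measured kLa, 1/s (hypothetical)

# Linearise ln(kLa) = ln(C) + n ln(Ug) and solve by ordinary least squares.
x = [math.log(u) for u in ug]
y = [math.log(k) for k in kla]
xbar, ybar = sum(x) / len(x), sum(y) / len(y)
n = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sum((xi - xbar) ** 2 for xi in x)
c = math.exp(ybar - n * xbar)
print(f"kLa = {c:.3g} * Ug^{n:.2f}")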

Relevance:

30.00%

Publisher:

Abstract:

We describe the creation process of the Minimum Information Specification for In Situ Hybridization and Immunohistochemistry Experiments (MISFISHIE). Modeled after the existing minimum information specification for microarray data, we created a new specification for gene expression localization experiments, initially to facilitate data sharing within a consortium. After successful use within the consortium, the specification was circulated to members of the wider biomedical research community for comment and refinement. After a period in which many new requirements were suggested, a final phase was needed to exclude those requirements deemed inappropriate as minimum requirements for all experiments. The full specification will soon be published as a version 1.0 proposal to the community, upon which a fuller discussion must take place so that the final specification may be achieved with the involvement of the whole community. This paper is part of the special issue of OMICS on data standards.

Relevance:

30.00%

Publisher:

Abstract:

By using a complex field with a symmetric combination of electric and magnetic fields, a first-order covariant Lagrangian for Maxwell's equations is obtained, similar to the Lagrangian for the Dirac equation. This leads to a dual-symmetric quantum electrodynamic theory with an infinite set of local conservation laws. The dual symmetry is shown to correspond to a helical phase, conjugate to the conserved helicity. There is also a scaling symmetry, conjugate to the conserved entanglement. The results include a novel form of the photonic wavefunction, with a well-defined helicity number operator conjugate to the chiral phase, related to the fundamental dual symmetry. Interactions with charged particles can also be included. Transformations from minimal coupling to multi-polar or more general forms of coupling are particularly straightforward using this technique. The dual-symmetric version of quantum electrodynamics derived here has potential applications to nonlinear quantum optics and cavity quantum electrodynamics.
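The complex field in question corresponds to the Riemann-Silberstein construction (a standard identification, stated here for orientation rather than quoted from the paper):

\[ \mathbf{F} = \frac{1}{\sqrt{2}}\,(\mathbf{E} + i c\,\mathbf{B}), \qquad \mathbf{F} \mapsto e^{-i\theta}\,\mathbf{F} \]

The phase rotation mixes electric and magnetic fields (\(\mathbf{E} \mapsto \mathbf{E}\cos\theta + c\mathbf{B}\sin\theta\), \(c\mathbf{B} \mapsto c\mathbf{B}\cos\theta - \mathbf{E}\sin\theta\)); the free Maxwell equations are invariant under it, and the associated conserved quantity is the helicity, as the abstract states.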

Relevance:

30.00%

Publisher:

Abstract:

We optimized the emission efficiency of microcavity OLEDs consisting of widely used organic materials: N,N'-di(naphthalen-1-yl)-N,N'-diphenylbenzidine (NPB) as the hole transport layer and tris(8-hydroxyquinoline)aluminium (Alq3) as the emitting and electron-transporting layer. LiF/Al served as the cathode, while a metallic Ag anode was used. TiO2 and Al2O3 layers were stacked on top of the cathode to alter the properties of the top mirror. The electroluminescence emission spectra, electric field distribution inside the device, carrier density, recombination rate and exciton density were calculated as a function of the position of the emission layer. The results show that for certain TiO2 and Al2O3 layer thicknesses, light output is enhanced as a result of the increase in both the reflectance and transmittance of the top mirror. Once the optimum structure has been determined, microcavity OLED devices can be fabricated and characterized, and comparisons between experiment and theory can be made.
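For reference, the cavity resonance underlying such optimization follows the standard Fabry-Perot condition (a textbook relation whose sign conventions vary between authors; not quoted from the paper):

\[ \sum_i \frac{4\pi n_i d_i}{\lambda} - \varphi_{\mathrm{top}} - \varphi_{\mathrm{bottom}} = 2\pi m, \quad m \in \mathbb{Z} \]

where \(n_i\) and \(d_i\) are the refractive indices and thicknesses of the layers between the mirrors and \(\varphi\) are the mirror reflection phase shifts; the TiO2/Al2O3 stack tunes \(\varphi_{\mathrm{top}}\) together with the top-mirror reflectance and transmittance.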

Relevance:

30.00%

Publisher:

Abstract:

Background: The optimisation and scale-up of process conditions leading to high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences. Typical experiments rely on varying selected parameters through repeated rounds of trial-and-error optimisation. To rationalise this, several groups have recently adopted the 'design of experiments' (DoE) approach frequently used in industry. Studies have focused on parameters such as medium composition, nutrient feed rates and induction of expression in shake flasks or bioreactors, as well as oxygen transfer rates in micro-well plates. In this study we wanted to generate a predictive model that described small-scale screens and to test its scalability to bioreactors. Results: Here we demonstrate how the use of a DoE approach in a multi-well mini-bioreactor permitted the rapid establishment of high-yielding production-phase conditions that could be transferred to a 7 L bioreactor. Using green fluorescent protein secreted from Pichia pastoris, we derived a predictive model of protein yield as a function of the three most commonly varied process parameters: temperature, pH and the percentage of dissolved oxygen in the culture medium. Importantly, when yield was normalised to culture volume and density, the model was scalable from mL to L working volumes. By increasing pre-induction biomass accumulation, model-predicted yields were further improved. Yield improvement was most significant, however, on varying the fed-batch induction regime to minimise methanol accumulation, so that the productivity of the culture increased throughout the whole induction period. These findings suggest the importance of matching the rate of protein production to the host metabolism. Conclusion: We demonstrate how a rational, stepwise approach to recombinant protein production screens can reduce process development time.
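A DoE screen over the three named factors typically pairs a coded factorial design with a quadratic response-surface fit. The sketch below is a generic Python illustration (design type, coded levels and simulated yields are our assumptions, not the study's data):

import itertools
import numpy as np

# Coded levels -1/0/+1 for temperature, pH and dissolved oxygen: 27 runs
design = np.array(list(itertools.product([-1, 0, 1], repeat=3)), dtype=float)

def features(run):
    t, p, d = run
    # Full quadratic response-surface model: intercept, linear, interaction, square terms
    return [1, t, p, d, t * p, t * d, p * d, t * t, p * p, d * d]

X = np.array([features(run) for run in design])
rng = np.random.default_rng(0)
true_beta = np.array([10, 1.5, -0.8, 0.5, 0.3, 0.0, -0.2, -1.0, -0.6, -0.4])
y = X @ true_beta + rng.normal(0, 0.1, len(X))  # simulated yields, arbitrary units

beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # fit the response surface
best = design[np.argmax(X @ beta)]
print("best coded settings (T, pH, DO):", best)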

Relevance:

30.00%

Publisher:

Abstract:

Eukaryotic membrane proteins cannot be produced in a reliable manner for structural analysis. Consequently, researchers still rely on trial-and-error approaches, which most often yield insufficient amounts. This means that membrane protein production is recognized by biologists as the primary bottleneck in contemporary structural genomics programs. Here, we describe a study to examine the reasons for successes and failures in recombinant membrane protein production in yeast, at the level of the host cell, by systematically quantifying cultures in high-performance bioreactors under tightly defined growth regimes. Our data show that the most rapid growth conditions of those chosen are not the optimal production conditions. Furthermore, the growth phase at which the cells are harvested is critical: we show that it is crucial to grow cells under tightly controlled conditions and to harvest them prior to glucose exhaustion, just before the diauxic shift. The differences in membrane protein yields that we observe under different culture conditions are not reflected in corresponding changes in the mRNA levels of FPS1, but rather can be related to the differential expression of genes involved in membrane protein secretion and yeast cellular physiology. Copyright © 2005 The Protein Society.

Relevance:

30.00%

Publisher:

Abstract:

A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
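The pipeline described (Gaussian 1st derivative, half-wave rectification, two further differentiations, peak search over space and scale) can be sketched directly in Python; the filter scales, scale-normalisation exponent and test stimulus below are our assumptions for illustration, not the paper's exact parameter values:

import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

def strongest_edge(luminance, scales):
    """Return (position, scale) of the strongest edge via the rectified-3rd-derivative response."""
    best_pos, best_scale, best_val = None, None, -np.inf
    for s in scales:
        d1 = gaussian_filter1d(luminance, sigma=s, order=1)  # 1st derivative
        r = np.maximum(d1, 0.0)                              # half-wave rectify
        d3 = gaussian_filter1d(r, sigma=s, order=2)          # differentiate twice more
        response = -d3 * s ** 2                              # invert + scale-normalise (exponent assumed)
        i = int(np.argmax(response))
        if response[i] > best_val:
            best_pos, best_scale, best_val = i, s, response[i]
    return best_pos, best_scale

# Test stimulus: a blurred step edge with a Gaussian integral profile (blur sigma = 8)
x = np.arange(512, dtype=float)
stimulus = 0.5 * (1.0 + erf((x - 256.0) / (8.0 * np.sqrt(2.0))))
pos, scale = strongest_edge(stimulus, scales=np.arange(2.0, 20.0))
print(f"edge at x = {pos}, estimated blur scale ~ {scale}")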