920 results for Successive Overrelaxation method with 1 parameter
Abstract:
This study describes the development and validation of a gas chromatography-mass spectrometry (GC-MS) method to identify and quantitate phenytoin in brain microdialysate, saliva and blood from human samples. A solid-phase extraction (SPE) was performed with a nonpolar C8-SCX column. The eluate was evaporated under nitrogen (50°C) and derivatized with trimethylsulfonium hydroxide before GC-MS analysis. 5-(p-Methylphenyl)-5-phenylhydantoin was used as the internal standard. The MS was run in scan mode and identification was based on three ion fragment masses. All peaks were identified with MassLib. Spiked phenytoin samples showed recovery after SPE of ≥94%. The calibration curve (phenytoin 50 to 1,200 ng/mL, n = 6, at six concentration levels) showed good linearity and correlation (r² > 0.998). The limit of detection was 15 ng/mL; the limit of quantification was 50 ng/mL. Dried extracted samples were stable within a 15% deviation range for ≥4 weeks at room temperature. The method met International Organization for Standardization standards and was able to detect and quantify phenytoin in different biological matrices and patient samples. The GC-MS method with SPE is specific, sensitive, robust and reproducible, and is therefore an appropriate candidate for the pharmacokinetic assessment of phenytoin concentrations in different human biological samples.
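The linearity check quoted above (r² > 0.998 over six concentration levels) is an ordinary least-squares fit of response against concentration. A minimal sketch follows; the peak-area ratios are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical six-level calibration spanning the 50-1200 ng/mL range
# described in the abstract; analyte/IS peak-area ratios are illustrative.
conc = np.array([50, 200, 400, 600, 900, 1200], dtype=float)   # ng/mL
ratio = np.array([0.051, 0.205, 0.408, 0.611, 0.912, 1.218])   # peak-area ratio

# Least-squares line and coefficient of determination (r^2)
slope, intercept = np.polyfit(conc, ratio, 1)
pred = slope * conc + intercept
ss_res = np.sum((ratio - pred) ** 2)
ss_tot = np.sum((ratio - ratio.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"slope={slope:.5f}, intercept={intercept:.4f}, r2={r2:.5f}")
```

With well-behaved calibration data such as these, r² comfortably exceeds the 0.998 acceptance criterion.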
Abstract:
Background Agroforestry is a sustainable land use method with a long tradition in the Bolivian Andes. A better understanding of people’s knowledge and valuation of woody species can help to adjust actor-oriented agroforestry systems. In this case study, carried out in a peasant community of the Bolivian Andes, we aimed at calculating the cultural importance of selected agroforestry species, and at analysing the intracultural variation in the cultural importance and knowledge of plants according to peasants’ sex, age, and migration. Methods Data collection was based on semi-structured interviews and freelisting exercises. Two ethnobotanical indices (Composite Salience, Cultural Importance) were used for calculating the cultural importance of plants. Intracultural variation in the cultural importance and knowledge of plants was detected by using linear and generalised linear (mixed) models. Results and discussion The culturally most important woody species were mainly trees and exotic species (e.g. Schinus molle, Prosopis laevigata, Eucalyptus globulus). We found that knowledge and valuation of plants increased with age but that they were lower for migrants; sex, by contrast, played a minor role. The age effects possibly result from decreasing ecological apparency of valuable native species, and their substitution by exotic marketable trees, loss of traditional plant uses or the use of other materials (e.g. plastic) instead of wood. Decreasing dedication to traditional farming may have led to successive abandonment of traditional tool uses, and the overall transformation of woody plant use is possibly related to diminishing medicinal knowledge. Conclusions Age and migration affect how people value woody species and what they know about their uses. 
For this reason, we recommend paying particular attention to the potential of native species, which could open promising perspectives especially for the young migrating peasant generation and draw their interest in agroforestry. These native species should be ecologically sound and selected on their potential to provide subsistence and promising commercial uses. In addition to offering socio-economic and environmental services, agroforestry initiatives using native trees and shrubs can play a crucial role in recovering elements of the lost ancient landscape that still forms part of local people’s collective identity.
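As a rough illustration of the freelist side of such indices: Smith's salience, a common ingredient of composite salience measures, weights each mention by how early it appears in an informant's list and averages over all informants. The species lists below are invented for illustration, not the study's data:

```python
from collections import defaultdict

# Hypothetical freelists from three informants (species names for flavor only).
freelists = [
    ["Schinus molle", "Prosopis laevigata", "Eucalyptus globulus"],
    ["Eucalyptus globulus", "Schinus molle"],
    ["Prosopis laevigata", "Schinus molle", "Eucalyptus globulus", "Baccharis sp."],
]

totals = defaultdict(float)
for lst in freelists:
    L = len(lst)
    for rank, species in enumerate(lst, start=1):
        totals[species] += (L - rank + 1) / L  # earlier mentions weigh more

# Smith's salience: average the per-list weights over all informants
# (species absent from a list contribute 0 for that informant).
salience = {sp: s / len(freelists) for sp, s in totals.items()}
for sp, s in sorted(salience.items(), key=lambda kv: -kv[1]):
    print(f"{sp}: {s:.3f}")
```

In this toy data Schinus molle comes out most salient, mirroring the kind of ranking the study reports.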
Abstract:
PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into another image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows determining a Transconvolution function to convert one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function, which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating ⁶⁸Ge/⁶⁸Ga-filled spheres was developed.
To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization properties suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The highest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume. Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs. CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
Abstract:
Type II collagen is a major chondrocyte-specific component of the cartilage extracellular matrix and represents a typical differentiation marker of mature chondrocytes. In order to delineate cis-acting elements of the mouse proα1(II) collagen gene that control chondrocyte-specific expression in intact mouse embryos, we generated transgenic mice harboring chimeric constructs in which varying lengths of the promoter and intron 1 sequences were linked to a β-galactosidase reporter gene. A construct containing a 3000-bp promoter and a 3020-bp intron 1 fragment directed high levels of β-galactosidase expression specifically to chondrocytes. Successive deletions of intron 1 delineated a 48-bp fragment which targeted β-galactosidase expression to chondrocytes with the same specificity as the larger intron 1 fragment. When the Col2a1 promoter was replaced with a minimal β-globin promoter, the 48-bp intron 1 sequence was still able to target expression of the transgene specifically to chondrocytes. Therefore, a 48-bp intron 1 DNA segment of the mouse Col2a1 gene contains the information necessary to confer high-level, temporally correct chondrocyte expression on a reporter gene in intact mouse embryos, and Col2a1 promoter sequences are dispensable for chondrocyte expression. Nuclear proteins present selectively in mouse primary chondrocytes and rat chondrosarcoma cells bind to the three putative HMG (High-Mobility-Group) domain protein binding sites in this 48-bp sequence, and these chondrocyte-specific proteins likely bind the DNA through the minor groove. Together, my results indicate that a 48-bp sequence in Col2a1 intron 1 controls chondrocyte-specific expression in vivo and suggest that chondrocytes contain specific nuclear proteins involved in enhancer activity.
Abstract:
Many studies in biostatistics deal with binary data. Some of these studies involve correlated observations, which can complicate the analysis of the resulting data. Studies of this kind typically arise when a high degree of commonality exists between test subjects. If there exists a natural hierarchy in the data, multilevel analysis is an appropriate tool for the analysis. Two examples are measurements on identical twins, or the study of symmetrical organs or appendages, as in the case of ophthalmic studies. Although this type of matching appears ideal for the purposes of comparison, analysis of the resulting data while ignoring the effect of intra-cluster correlation has been shown to produce biased results. This paper will explore the use of multilevel modeling of simulated binary data with predetermined levels of correlation. Data will be generated using the beta-binomial method with varying degrees of correlation between the lower-level observations. The data will be analyzed using the multilevel software package MlwiN (Woodhouse et al., 1995). Comparisons between the specified intra-cluster correlation of these data and the correlations estimated using multilevel analysis will be used to examine the accuracy of this technique in analyzing this type of data.
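The beta-binomial generation step can be sketched as follows: a cluster-level success probability is drawn from a Beta(a, b) distribution whose shape parameters encode the desired mean and intra-cluster correlation (for the beta-binomial, ICC = 1/(a + b + 1)), and the binary observations within the cluster are then drawn independently given that probability. The parameter values are illustrative:

```python
import random

def beta_binomial_cluster(n_obs, mu, rho, rng):
    """Draw one cluster: p ~ Beta(a, b) with mean mu and ICC rho,
    then n_obs Bernoulli(p) observations."""
    a = mu * (1 - rho) / rho
    b = (1 - mu) * (1 - rho) / rho
    p = rng.betavariate(a, b)
    return [1 if rng.random() < p else 0 for _ in range(n_obs)]

rng = random.Random(42)
# 5000 clusters of size 2 (e.g. twin pairs or paired eyes), prevalence 0.3, ICC 0.2
clusters = [beta_binomial_cluster(2, mu=0.3, rho=0.2, rng=rng) for _ in range(5000)]
overall = sum(sum(c) for c in clusters) / (2 * 5000)
print(f"empirical prevalence ~ {overall:.3f}")  # should be close to mu = 0.3
```

With mu = 0.3 and rho = 0.2 this gives a = 1.2 and b = 2.8, so the Beta mean is a/(a+b) = 0.3 and the ICC is 1/(a+b+1) = 0.2, as specified.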
Abstract:
The main goal of the AEḡIS experiment is to measure the local gravitational acceleration ḡ of antihydrogen and thus perform a direct test of the weak equivalence principle with antimatter. In the first phase of the experiment the aim is to measure ḡ with 1% relative precision. This paper presents the antihydrogen production method and a description of some components of the experiment which are necessary for the gravity measurement. The current status of the AEḡIS experimental apparatus is presented and recent commissioning results with antiprotons are outlined. In conclusion, we discuss the short-term goals of the AEḡIS collaboration that will pave the way for the first gravity measurement in the near future.
Abstract:
Results of a search for supersymmetry via direct production of third-generation squarks are reported, using 20.3 fb⁻¹ of proton-proton collision data at √s = 8 TeV recorded by the ATLAS experiment at the LHC in 2012. Two different analysis strategies based on monojet-like and c-tagged event selections are carried out to optimize the sensitivity for direct top squark-pair production in the decay channel to a charm quark and the lightest neutralino (t̃₁ → c + χ̃₁⁰) across the top squark–neutralino mass parameter space. No excess above the Standard Model background expectation is observed. The results are interpreted in the context of direct pair production of top squarks and presented in terms of exclusion limits in the (m_t̃₁, m_χ̃₁⁰) parameter space. A top squark of mass up to about 240 GeV is excluded at 95% confidence level for arbitrary neutralino masses, within the kinematic boundaries. Top squark masses up to 270 GeV are excluded for a neutralino mass of 200 GeV. In a scenario where the top squark and the lightest neutralino are nearly degenerate in mass, top squark masses up to 260 GeV are excluded. The results from the monojet-like analysis are also interpreted in terms of compressed scenarios for top squark-pair production in the decay channel t̃₁ → b + ff′ + χ̃₁⁰ and sbottom pair production with b̃₁ → b + χ̃₁⁰, leading to a similar exclusion for nearly mass-degenerate third-generation squarks and the lightest neutralino. The results in this paper significantly extend previous results at colliders.
Abstract:
Measurements of charged-particle fragmentation functions of jets produced in ultra-relativistic nuclear collisions can provide insight into the modification of parton showers in the hot, dense medium created in the collisions. ATLAS has measured jets in √s_NN = 2.76 TeV Pb+Pb collisions at the LHC using a data set recorded in 2011 with an integrated luminosity of 0.14 nb⁻¹. Jets were reconstructed using the anti-k_t algorithm with distance parameter values R = 0.2, 0.3, and 0.4. Distributions of charged-particle transverse momentum and longitudinal momentum fraction are reported for seven bins in collision centrality for R = 0.4 jets with p_T^jet > 100 GeV. Commensurate minimum p_T values are used for the other radii. Ratios of fragment distributions in each centrality bin to those measured in the most peripheral bin are presented. These ratios show a reduction of fragment yield in central collisions relative to peripheral collisions at intermediate z values, 0.04 ≲ z ≲ 0.2, and an enhancement in fragment yield for z ≲ 0.04. A smaller, less significant enhancement is observed at large z and large p_T in central collisions.
Abstract:
The S0 ↔ S1 spectra of the mild charge-transfer (CT) complexes perylene·tetrachloroethene (P·4ClE) and perylene·(tetrachloroethene)2 (P·(4ClE)2) are investigated by two-color resonant two-photon ionization (2C-R2PI) and dispersed fluorescence spectroscopy in supersonic jets. The S0 → S1 vibrationless transitions of P·4ClE and P·(4ClE)2 are shifted by δν = −451 and −858 cm⁻¹ relative to perylene, translating to excited-state dissociation energy increases of 5.4 and 10.3 kJ/mol, respectively. The red shift is ∼30% larger than that of perylene·trans-1,2-dichloroethene; therefore, the increase in chlorination increases the excited-state stabilization and CT character of the interaction, but the electronic excitation remains largely confined to the perylene moiety. The 2C-R2PI and fluorescence spectra of P·4ClE exhibit strong progressions in the perylene intramolecular twist (1au) vibration (42 cm⁻¹ in S0 and 55 cm⁻¹ in S1), signaling that perylene deforms along its twist coordinate upon electronic excitation. The intermolecular stretching (Tz) and internal rotation (Rc) vibrations are weak; therefore, the P·4ClE intermolecular potential energy surface (IPES) changes little during the S0 ↔ S1 transition. The minimum-energy structures and inter- and intramolecular vibrational frequencies of P·4ClE and P·(4ClE)2 are calculated with the dispersion-corrected density functional theory (DFT) methods B97-D3, ωB97X-D, M06, and M06-2X and the spin-component-scaled (SCS) variant of the approximate second-order coupled-cluster method, SCS-CC2. All methods predict the global minima to be π-stacked centered coplanar structures with the long axis of tetrachloroethene rotated by τ ≈ 60° relative to the perylene long axis. The calculated binding energies are in the range of −D0 = 28–35 kJ/mol. A second minimum is predicted with τ ≈ 25°, with ∼1 kJ/mol smaller binding energy. Although both monomers are achiral, both the P·4ClE and P·(4ClE)2 complexes are chiral.
The best agreement for adiabatic excitation energies and vibrational frequencies is observed for the ωB97X-D and M06-2X DFT methods.
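The conversion from the measured red shifts to the quoted dissociation-energy increases is the standard spectroscopic one: 1 cm⁻¹ corresponds to h·c·N_A ≈ 11.96 J/mol. A quick check of the numbers above:

```python
# Convert spectroscopic red shifts (cm^-1) to molar energies (kJ/mol)
# using CODATA values of the fundamental constants.
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e10    # speed of light, cm/s (note: cm, to match cm^-1)
N_A = 6.02214076e23  # Avogadro constant, 1/mol

per_cm_in_kj_mol = h * c * N_A / 1000.0  # ~0.011963 kJ/mol per cm^-1

for shift_cm in (451, 858):
    print(f"{shift_cm} cm^-1 -> {shift_cm * per_cm_in_kj_mol:.1f} kJ/mol")
```

This reproduces the 5.4 and 10.3 kJ/mol figures quoted in the abstract.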
Abstract:
Ordinal outcomes are frequently employed in diagnosis and clinical trials. Clinical trials of Alzheimer's disease (AD) treatments are a case in point, using the status of mild, moderate or severe disease as outcome measures. As in many other outcome-oriented studies, the disease status may be misclassified. This study estimates the extent of misclassification in an ordinal outcome such as disease status. It also estimates the extent of misclassification of a predictor variable such as genotype status. An ordinal logistic regression model is commonly used to model the relationship between disease status, the effect of treatment, and other predictive factors. A simulation study was done. First, data were created based on a set of hypothetical parameters and hypothetical rates of misclassification. Next, the maximum likelihood method was employed to generate likelihood equations accounting for misclassification. The Nelder-Mead simplex method was used to solve for the misclassification and model parameters. Finally, this method was applied to an AD dataset to detect the amount of misclassification present. The estimates of the ordinal regression model parameters were close to the hypothetical parameters: β1 was hypothesized at 0.50 and the mean estimate was 0.488; β2 was hypothesized at 0.04 and the mean of the estimates was 0.04. Although the estimates for the rates of misclassification of X1 were not as close as those for β1 and β2, they validate this method. X1 0-1 misclassification was hypothesized as 2.98% and the mean of the simulated estimates was 1.54%; in the best case, the misclassification of k from high to medium was hypothesized at 4.87% and had a sample mean of 3.62%. In the AD dataset, the estimate for the odds ratio for X1 of having both copies of the APOE 4 allele changed from 1.377 to 1.418, demonstrating that estimates of the odds ratio change when the analysis adjusts for misclassification.
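A toy version of the data-generation step, assuming a proportional-odds (cumulative logit) model with the hypothesized slopes β1 = 0.50 and β2 = 0.04; the cutpoints and the misclassification rate are invented for illustration, not the study's values:

```python
import math
import random

def ordinal_outcome(x1, x2, rng, cut1=-0.5, cut2=1.0, b1=0.50, b2=0.04):
    """Draw a 3-level ordinal outcome (0=mild, 1=moderate, 2=severe)
    from a proportional-odds model with cutpoints cut1 < cut2."""
    eta = b1 * x1 + b2 * x2
    p_le_0 = 1 / (1 + math.exp(-(cut1 - eta)))  # P(Y <= mild)
    p_le_1 = 1 / (1 + math.exp(-(cut2 - eta)))  # P(Y <= moderate)
    u = rng.random()
    return 0 if u < p_le_0 else (1 if u < p_le_1 else 2)

def misclassify(y, rng, rate=0.05):
    """With probability `rate`, record an adjacent category instead of y."""
    if rng.random() < rate:
        return y + 1 if y == 0 else y - 1
    return y

rng = random.Random(1)
data = [(rng.randint(0, 1), rng.gauss(0, 1)) for _ in range(1000)]
ys = [misclassify(ordinal_outcome(x1, x2, rng), rng) for x1, x2 in data]
print("category counts:", [ys.count(k) for k in (0, 1, 2)])
```

The estimation side (likelihood equations incorporating the misclassification rates, solved via Nelder-Mead) would then attempt to recover both the βs and `rate` from data like `ys`.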
Abstract:
Net primary production was measured using the ¹⁴C uptake method with minor modifications. Melt pond samples were spiked with 0.1 µCi ml⁻¹ of ¹⁴C-labelled sodium bicarbonate (Moravek Biochemicals, Brea, USA) and distributed in 10 clear bottles (20 ml each). Subsequently they were incubated for 12 h at -1.3°C under different scalar irradiances (0-420 µmol photons m⁻² s⁻¹) measured with a spherical sensor (Spherical Micro Quantum Sensor US-SQS/L, Heinz Walz, Effeltrich, Germany). At the end of the incubation, samples were filtered onto 0.2 µm nitrocellulose filters and the particulate radioactive carbon uptake was determined by liquid scintillation counting using Filter count scintillation cocktail (Perkin Elmer, Waltham, USA). The carbon uptake values in the dark were subtracted from those measured in the light incubations. Dissolved inorganic carbon (DIC) was measured for each sample using a flow injection system (Hall and Aller, 1992). The DIC concentration was taken into account to calculate the amount of labelled bicarbonate incorporated into the cells. Carbon fixation rates were normalized volumetrically and by chlorophyll a. Photosynthesis-irradiance (PI) curves were fitted using MATLAB® according to the equation proposed by Platt et al. (1980), including a photoinhibition parameter (beta) and providing the main photosynthetic parameters: the maximum chlorophyll a-normalized carbon fixation rate in the absence of photoinhibition (Pb) and the initial slope of the saturation curve (alpha). The derived parameters, namely the light intensity at which photosynthesis is maximal (Im), the carbon fixation rate at that maximal irradiance (Pbm) and the adaptation parameter or photoacclimation index (Ik), were calculated according to Platt et al. (1982).
Abstract:
Net primary production was measured using the ¹⁴C uptake method with minor modifications. Seawater samples were spiked with 0.1 µCi ml⁻¹ of ¹⁴C-labelled sodium bicarbonate (Moravek Biochemicals, Brea, USA) and distributed in 10 clear bottles (20 ml each). Subsequently they were incubated for 12 h at -1.3°C under different scalar irradiances (0-420 µmol photons m⁻² s⁻¹) measured with a spherical sensor (Spherical Micro Quantum Sensor US-SQS/L, Heinz Walz, Effeltrich, Germany). At the end of the incubation, samples were filtered onto 0.2 µm nitrocellulose filters and the particulate radioactive carbon uptake was determined by liquid scintillation counting using Filter count scintillation cocktail (Perkin Elmer, Waltham, USA). The carbon uptake values in the dark were subtracted from those measured in the light incubations. Dissolved inorganic carbon (DIC) was measured for each sample using a flow injection system (Hall and Aller, 1992). The DIC concentration was taken into account to calculate the amount of labelled bicarbonate incorporated into the cells. Carbon fixation rates were normalized volumetrically and by chlorophyll a. Photosynthesis-irradiance (PI) curves were fitted using MATLAB® according to the equation proposed by Platt et al. (1980), including a photoinhibition parameter (beta) and providing the main photosynthetic parameters: the maximum chlorophyll a-normalized carbon fixation rate in the absence of photoinhibition (Pb) and the initial slope of the saturation curve (alpha). The derived parameters, namely the light intensity at which photosynthesis is maximal (Im), the carbon fixation rate at that maximal irradiance (Pbm) and the adaptation parameter or photoacclimation index (Ik), were calculated according to Platt et al. (1982).
Abstract:
Net primary production was measured using the ¹⁴C uptake method with minor modifications. Melted sea ice samples were spiked with 0.1 µCi ml⁻¹ of ¹⁴C-labelled sodium bicarbonate (Moravek Biochemicals, Brea, USA) and distributed in 10 clear bottles (20 ml each). Subsequently they were incubated for 12 h at -1.3°C under different scalar irradiances (0-420 µmol photons m⁻² s⁻¹) measured with a spherical sensor (Spherical Micro Quantum Sensor US-SQS/L, Heinz Walz, Effeltrich, Germany). At the end of the incubation, samples were filtered onto 0.2 µm nitrocellulose filters and the particulate radioactive carbon uptake was determined by liquid scintillation counting using Filter count scintillation cocktail (Perkin Elmer, Waltham, USA). The carbon uptake values in the dark were subtracted from those measured in the light incubations. Dissolved inorganic carbon (DIC) was measured for each sample using a flow injection system (Hall and Aller, 1992). The DIC concentration was taken into account to calculate the amount of labelled bicarbonate incorporated into the cells. Carbon fixation rates were normalized volumetrically and by chlorophyll a. Photosynthesis-irradiance (PI) curves were fitted using MATLAB® according to the equation proposed by Platt et al. (1980), including a photoinhibition parameter (beta) and providing the main photosynthetic parameters: the maximum chlorophyll a-normalized carbon fixation rate in the absence of photoinhibition (Pb) and the initial slope of the saturation curve (alpha). The derived parameters, namely the light intensity at which photosynthesis is maximal (Im), the carbon fixation rate at that maximal irradiance (Pbm) and the adaptation parameter or photoacclimation index (Ik), were calculated according to Platt et al. (1982).
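The Platt et al. (1980) PI model with photoinhibition, and the closed-form derived parameters referenced in these method descriptions, can be sketched as follows; the parameter values (Ps, alpha, beta) are illustrative, not from the datasets:

```python
import numpy as np

def platt_pi(I, Ps, alpha, beta):
    """Platt et al. (1980): Chl a-normalized carbon fixation at irradiance I,
    with initial slope alpha and photoinhibition parameter beta."""
    return Ps * (1 - np.exp(-alpha * I / Ps)) * np.exp(-beta * I / Ps)

# Illustrative parameters, e.g. mg C (mg Chl a)^-1 h^-1 and
# mg C (mg Chl a)^-1 h^-1 (umol photons m^-2 s^-1)^-1
Ps, alpha, beta = 2.0, 0.05, 0.004

# Derived parameters in the sense of Platt et al. (1982):
Im = (Ps / alpha) * np.log((alpha + beta) / beta)  # irradiance of maximal photosynthesis
Pbm = Ps * (alpha / (alpha + beta)) * (beta / (alpha + beta)) ** (beta / alpha)
Ik = Pbm / alpha                                   # photoacclimation index

# Evaluate over the 0-420 umol photons m^-2 s^-1 range used in the incubations
I = np.linspace(0, 420, 200)
P = platt_pi(I, Ps, alpha, beta)
print(f"Im={Im:.1f}, Pbm={Pbm:.3f}, Ik={Ik:.1f}")
```

The closed forms follow from setting dP/dI = 0 in the model, so `platt_pi(Im, ...)` equals `Pbm` exactly; in practice Ps, alpha and beta would come from a nonlinear least-squares fit to the measured PI data.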
Abstract:
Studies on the impact of historical, current and future global change require very high-resolution climate data (≤1 km) as a basis for modelled responses, meaning that data from digital climate models generally require substantial rescaling. Another shortcoming of available datasets on past climate is that the effects of sea level rise and fall are not considered. Without such information, studies of glacial refugia or of early Holocene plant and animal migration are incomplete if not impossible. Sea level at the last glacial maximum (LGM) was approximately 125 m lower, creating substantial additional terrestrial area for which no current baseline data exist. Here, we introduce a novel gridded climate dataset for the LGM that is both very high resolution (1 km) and extends to the LGM sea and land mask. We developed two methods to extend current terrestrial precipitation and temperature data to areas between the current and LGM coastlines. The absolute interpolation error is less than 1°C for 98.9% and less than 0.5°C for 87.8% of all pixels within the first two 1-arc-degree distance zones. We use the change factor method with these newly assembled baseline data to downscale five global circulation models of LGM climate to a resolution of 1 km for Europe. As additional variables we calculate 19 'bioclimatic' variables, which are often used in climate change impact studies on biological diversity. The new LGM climate maps are well suited for analysing refugia and migration during the Holocene warming that followed the LGM.
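The change factor method mentioned above can be sketched in a few lines: the coarse GCM anomaly (LGM minus present day) is resampled onto the high-resolution grid and added to the high-resolution present-day baseline. The toy grids and nearest-neighbour resampling below are illustrative only, not the study's data or interpolation scheme:

```python
import numpy as np

# High-resolution present-day baseline temperature (deg C), toy 2x4 grid
baseline_1km = np.array([[10.0, 10.5, 11.0, 11.5],
                         [ 9.5, 10.0, 10.5, 11.0]])

# Coarse GCM cells covering the same area (1x2 grid)
gcm_present = np.array([[10.2, 11.1]])  # GCM, present day
gcm_lgm     = np.array([[ 4.2,  5.6]])  # GCM, last glacial maximum

# Change factor: the GCM's own simulated climate anomaly per coarse cell
anomaly_coarse = gcm_lgm - gcm_present

# Nearest-neighbour resampling of the 1x2 anomaly onto the 2x4 fine grid
anomaly_1km = np.repeat(np.repeat(anomaly_coarse, 2, axis=0), 2, axis=1)

# Downscaled LGM field: high-resolution baseline plus coarse anomaly
lgm_1km = baseline_1km + anomaly_1km
print(lgm_1km)
```

The key property is that fine-scale spatial structure comes from the observed baseline, while the GCM contributes only the smooth climate-change signal, which is why the method needs a baseline extended to the exposed LGM shelf areas.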
Abstract:
Infiltration was characterized in a soil packed into a methacrylate soil column with a hexagonal base of 1 m diagonal and 0.6 m height, at a bulk density of 1.5 g/cm³. The procedure used was fiber optics and the so-called Active Heating pulse method with Fiber Optic temperature sensing (AHFO method), which consists of emitting a laser light pulse and measuring over time the low-intensity reflected signal at different points along the fiber-optic cable. Of the reflected light spectrum, only a specific frequency range, determined by frequency analysis, correlates with temperature. The measurement precision is ±0.1°C over a distance of ±12.5 cm. Inside the column, the fiber-optic cable was arranged in three concentric helices at 20 cm, 40 cm and 60 cm from the center. The soil surface was covered with an average water depth that ranged from 1.5 to 2.5 cm over the 140 min that the cable-heating process lasted. The temperature increase before and after heating was used to determine the instantaneous infiltration from the expression of Perzlmaeir et al. (2004) and the dimensionless Nusselt and Prandtl numbers. Considering the errors inherent to the calculation procedure, the results show that the AHFO method is a useful tool for studying the spatial variability of soil infiltration and also makes it possible to determine its value. It likewise shows potential for including such estimates in the calibration of models related to water resources management.