975 results for "direct measurements"
Abstract:
About 100 parallel determinations of hydrogen sulfide by the volumetric and photometric methods were made in the layer where oxygen coexists with hydrogen sulfide (the C layer). Thiosulfates were determined simultaneously. Regardless of station location, determinations by the two methods coincided over the entire range of depths at which the upper boundary of the C layer occurs. Within the C layer, the hydrogen sulfide readings obtained by these two independent methods agreed; thiosulfates were not found by direct measurements. A difference between the readings appears at the lower boundary of the C layer and below it, accompanied by the appearance of thiosulfates. It is therefore concluded that it is correct to determine the upper boundary of the C layer by the iodometric method and to use the hydrogen sulfide concentrations obtained by this method within the C layer to calculate the rate of chemical oxidation of hydrogen sulfide in quasi-stationary processes.
Abstract:
The stable carbon isotopic composition of the planktonic foraminifera Globigerinoides sacculifer and G. ruber (white) and of sedimentary organic matter from the northern Gulf of Aqaba has been investigated to estimate changes in delta13C of dissolved inorganic carbon (delta13CDIC) in surface waters during the last 1,000 years. The high sedimentation rates at the core sites (about 54 cm/kyr) provide high temporal resolution (~10 years). Recent sediments at the top of the cores reflect conditions postdating 1950. The delta13C records of the planktonic foraminifera from three multicores display similar trends, showing a uniform and consistent pattern before the 1750s and a gradual decrease of approximately 0.63 per mil over the last two centuries. This decrease appears to track the decrease of delta13CDIC in surface waters, which is mainly caused by the increasing anthropogenic input of 13C-depleted CO2 into the atmosphere. Similarly, a trend towards lighter values of the carbon isotopic composition of sedimentary organic matter (delta13Corg) during the last 200 years supports the interpretation obtained from the planktonic foraminiferal delta13C. Furthermore, direct measurements of seawater show that delta13C of the dissolved inorganic carbon (DIC) in the northern Gulf of Aqaba decreased by about 0.44 per mil during the period 1979-2000. The average annual decrease is 0.021 per mil, similar to that observed globally. The delta13C values of planktonic foraminifera, combined with organic matter delta13C from marine sediments, are good indicators for reconstructing past changes in atmospheric CO2 concentrations from the northern Gulf of Aqaba.
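The quoted annual rate follows directly from the measured total decrease; as a quick check:

```python
# Quick arithmetic check of the reported annual decline in delta13C of DIC.
total_decrease = 0.44   # per mil, measured over 1979-2000
years = 2000 - 1979     # 21 years of observation

annual_rate = total_decrease / years
print(round(annual_rate, 3))  # -> 0.021 per mil per year, as stated above
```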
Abstract:
ABSTRACT Over the last 40 years, support for species selection in vegetation restoration in Spain has been based mainly on species distribution models, also called ecological niche models, which estimate the probability of occurrence of a species as a function of physical environmental conditions (climate, soil, etc.). This thesis seeks to improve the predictive performance of these models through methodological proposals adapted to the data currently available in Spain and focused on the use of the models for species selection. Data at a spatial resolution appropriate to the scale of vegetation restoration projects are not always available; however, low-spatial-resolution data exist for almost all plant species present in Spain. A recalibration method is proposed that updates a low-resolution logistic regression model with a new high-resolution sample. The method yields predictions of acceptable quality with relatively small samples (25 presences of the species), in contrast to the much larger samples (more than 100 presences) required by a conventional modelling strategy that does not use the prior model. The choice of statistical method can decisively affect the predictive performance of the models, which is why method comparisons have received much attention in the last decade. Previous studies considered logistic regression inferior to more modern techniques such as maximum entropy. The results of this thesis show that the observed difference arises because maximum entropy models include regularization techniques and the version of logistic regression used in the comparisons did not.
Once regularization is incorporated into logistic regression through penalization, the differences in predictive performance disappear. Penalized logistic regression is therefore a further alternative for fitting species distribution models, on a par with the modern methods with the best predictive performance, such as maximum entropy. Species distribution models often omit soil-related variables because direct measurements of the physical or chemical properties of the soil are rarely available. Incorporating low-resolution data from national or continental soil maps could be an alternative. The results of this thesis suggest that high-spatial-resolution species distribution models improve their predictive performance slightly but statistically significantly when soil variables derived from low-resolution maps are included. Validation is one of the fundamental stages in the development of any empirical model, including species distribution models. Models are usually validated by evaluating their predictive performance species by species, that is, by comparing the observed presence or absence of the species with the model predictions over a set of localities. This type of evaluation does not answer a key question in vegetation restoration: which are the n most suitable species for the site to be restored? An evaluation method adapted to this question is proposed, which estimates the ability of a set of models to discriminate between the species present at and absent from a specific site. The method has been successfully applied to the validation of 188 distribution models of woody species aimed at species selection for vegetation restoration in Spain.
The proposed methodological improvements increase the predictive performance of species distribution models applied to species selection in vegetation restoration, and they extend the number of species for which a decision-support model can be provided. SUMMARY During the last 40 years, decision support tools for plant species selection in ecological restoration in Spain have been based on species distribution models (also called ecological niche models), which estimate the probability of occurrence of a species as a function of environmental predictors (e.g., climate, soil). In this thesis, some methodological improvements are proposed to contribute to a better predictive performance of such models, given the data currently available in Spain and focusing on the application of the models to species selection for ecological restoration. Fine-grained species distribution data are required to train models used at the scale of ecological restoration projects, but such data are not always available for every species. On the other hand, coarse-grained data are available for almost every species in Spain. A recalibration method is proposed that updates a coarse-grained logistic regression model using a new fine-grained updating sample. The method achieves acceptable predictive performance with a reasonably small updating sample (25 occurrences of the species), in contrast with the much larger samples (more than 100 occurrences) required by a conventional modelling approach that discards the coarse-grained data. The choice of statistical method may have a dramatic effect on model performance, and comparisons of methods have therefore received much interest in the last decade. Previous studies have reported poorer performance for logistic regression than for novel methods such as maximum entropy models.
The results of this thesis show that the observed difference is caused by the fact that maximum entropy models include regularization techniques while the versions of logistic regression compared against them do not. Once regularization is added to logistic regression through a penalization procedure, the differences in model performance disappear. Penalized logistic regression may therefore be considered one of the best-performing methods for modelling species distributions. Species distribution models usually do not consider soil-related predictors because direct measurements of the chemical or physical soil properties are often lacking. The inclusion of coarse-grained soil data from national or continental soil maps could be a reasonable alternative. The results of this thesis suggest that model performance increases slightly after including soil predictors from coarse-grained soil maps. Model validation is a key stage in the development of empirical models such as species distribution models. The usual way of validating is to evaluate model performance for each species separately, i.e., comparing observed species presence or absence to predicted probabilities in a set of sites. This kind of evaluation is not informative for a common question in ecological restoration projects: which n species are the most suitable for the environment of the site to be restored? A method has been proposed to address this question by estimating the ability of a set of models to discriminate between present and absent species at an evaluation site. The method has been successfully applied to the validation of 188 species distribution models used to support decisions on species selection for ecological restoration in Spain. The proposed methodological approaches improve the predictive performance of the models applied to species selection in ecological restoration and increase the number of species for which a decision-support model can be fitted.
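The central methodological point above, that logistic regression matches maximum-entropy methods once a regularization penalty is added, can be sketched in a few lines. The scikit-learn call and the synthetic presence/absence data below are illustrative assumptions; the thesis does not prescribe a particular implementation:

```python
# Sketch of a penalized (regularized) logistic regression species distribution
# model. The synthetic data stand in for real presence/absence records with
# environmental predictors; scikit-learn is an illustrative choice of library.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
# Three environmental predictors (e.g. temperature, rainfall, soil pH).
X = rng.normal(size=(n, 3))
# Synthetic presence/absence driven by the first two predictors only.
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# penalty="l1" adds the regularization that the older logistic-regression
# comparisons lacked; C controls its strength (smaller C = more shrinkage).
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(round(auc, 2))  # discrimination ability of the penalized model
```

The penalty shrinks coefficients of uninformative predictors towards zero, which is what keeps the model competitive with regularized maximum-entropy methods.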
Abstract:
One evolution of the finite difference method has been the development of the generalized finite difference method (GFDM), which can be applied to irregular grids or clouds of points. The method combines a Taylor series expansion with a moving least squares (MLS) approximation, so that explicit difference formulae for irregular clouds of points can be obtained easily using the Cholesky method. The MLS-GFDM is a meshless method that uses only points. One contribution of this thesis is the application of the MLS-GFDM to the modelling of elliptic anisotropic electrical conductivity problems, including the case of real tissues in which the fibre direction is not fixed but varies throughout the tissue. This thesis also extends the generalized finite difference method to the explicit solution of anisotropic parabolic equations. The explicit method includes an easily computable stability limit for the case of irregular clouds of nodes. In addition, a new analytical solution for an anisotropic parabolic equation is presented, and the explicit MLS-GFDM is applied to anisotropic parabolic electrical conductivity problems. The evident difficulty of making direct measurements in electrocardiology has motivated great interest in the numerical simulation of cardiac models. The most important contribution of this thesis is the application of an explicit MLS-GFDM scheme to the monodomain modelling of electrical conductivity problems. We present a highly efficient, accurate and conditionally stable algorithm for solving the monodomain model, which describes the electrical activity of the heart.
The model consists of an anisotropic parabolic partial differential equation (PDE) coupled with a system of ordinary differential equations (ODEs) describing the electrochemical reactions in cardiac cells. The resulting system is difficult to solve numerically because of its complexity. We propose a method based on operator splitting and a meshless method for solving the PDE, together with a Runge-Kutta method for solving the system of ODEs for the membrane and ionic currents. ABSTRACT An evolution of the finite difference method has been the development of the generalized finite difference (GFD) method, which can be applied to irregular grids or clouds of points. In this method a Taylor series expansion is used together with a moving least squares (MLS) approximation; the explicit difference formulae for irregular clouds of points can then be obtained easily using a simple Cholesky method. The MLS-GFDM is a meshless method using only points. A contribution of this thesis is the application of the MLS-GFDM to the modelling of elliptic anisotropic electrical conductivity problems, including the case of real tissues when the fibre direction is not fixed but varies throughout the tissue. This thesis also extends the generalized finite difference method to the explicit solution of parabolic anisotropic equations. The explicit method includes a stability limit, formulated for the case of irregular clouds of nodes, that can be easily calculated. A new analytical solution for a homogeneous parabolic anisotropic equation is also presented, and the explicit MLS-GFDM has been applied to parabolic anisotropic electrical conductivity problems. The obvious difficulty of performing direct measurements in electrocardiology has motivated wide interest in the numerical simulation of cardiac models.
The main contribution of this thesis is the application of an explicit scheme based on the MLS-GFDM to the modelling of monodomain electrical conductivity problems using operator splitting, including the case of anisotropic real tissues. We present a highly efficient, accurate and conditionally stable algorithm to solve the monodomain model, which describes the electrical activity of the heart. The model consists of a parabolic anisotropic partial differential equation (PDE) coupled to systems of ordinary differential equations (ODEs) describing electrochemical reactions in the cardiac cells. The resulting system is challenging to solve numerically because of its complexity. We propose a method based on operator splitting and a meshless method for solving the PDE, together with a Runge-Kutta method for solving the system of ODEs for the membrane and ionic currents.
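The splitting strategy described above can be sketched as follows. As flagged in the comments, a regular 1-D grid and FitzHugh-Nagumo reaction terms are stand-ins (assumptions) for the meshless MLS-GFDM discretization and the full cardiac ionic model; only the operator-splitting structure is the point here:

```python
# Operator-splitting sketch for a monodomain-style reaction-diffusion system:
# an explicit diffusion step for the PDE part, then a Runge-Kutta step for the
# cell-model ODEs. A regular 1-D finite-difference grid and FitzHugh-Nagumo
# reaction terms are simplifying stand-ins for the thesis's meshless scheme
# and cardiac ionic model.
import numpy as np

nx, dx, dt, steps = 100, 0.1, 0.01, 200
D = 0.1                      # scalar diffusivity (real tissue is anisotropic)
v = np.zeros(nx)             # transmembrane potential
w = np.zeros(nx)             # recovery variable
v[:10] = 1.0                 # initial stimulus at one end

def ionic_rhs(v, w):
    # FitzHugh-Nagumo reaction terms (stand-in for the cardiac cell model).
    a, b, eps = 0.1, 0.5, 0.01
    dv = v * (v - a) * (1.0 - v) - w
    dw = eps * (v - b * w)
    return dv, dw

for _ in range(steps):
    # Step 1: explicit diffusion (the parabolic PDE part). Stability requires
    # dt <= dx**2 / (2*D), the kind of limit the thesis formulates for
    # irregular node clouds.
    lap = (np.roll(v, 1) - 2.0 * v + np.roll(v, -1)) / dx**2
    lap[0] = lap[-1] = 0.0   # crude no-flux ends, sufficient for the sketch
    v = v + dt * D * lap
    # Step 2: classical 4th-order Runge-Kutta for the ODE (reaction) part.
    k1v, k1w = ionic_rhs(v, w)
    k2v, k2w = ionic_rhs(v + 0.5 * dt * k1v, w + 0.5 * dt * k1w)
    k3v, k3w = ionic_rhs(v + 0.5 * dt * k2v, w + 0.5 * dt * k2w)
    k4v, k4w = ionic_rhs(v + dt * k3v, w + dt * k3w)
    v = v + dt / 6.0 * (k1v + 2.0 * k2v + 2.0 * k3v + k4v)
    w = w + dt / 6.0 * (k1w + 2.0 * k2w + 2.0 * k3w + k4w)

print(bool(np.isfinite(v).all()))  # solution stays bounded under the dt limit
```

Splitting decouples the stiffness sources: the diffusion step's time-step limit can be computed once from the node geometry, while the reaction step only involves pointwise ODEs.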
Abstract:
The association of the TATA binding protein (TBP) to eukaryotic promoters is a possible rate-limiting step in gene expression. Slow promoter binding might be related to TBP’s ability to occlude its DNA binding domain through dimerization. Using a “pull-down” based assay, we find that TBP dimers dissociate slowly (t½ = 6–10 min), and thus present a formidable kinetic barrier to TATA binding. At 10 nM, TBP appears to exist as a mixed population of monomers and dimers. In this state, TATA binding displays burst kinetics that appears to reflect rapid binding of monomers and slow dissociation of dimers. The kinetics of the slow phase is in excellent agreement with direct measurements of the kinetics of dimer dissociation.
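For orientation, the measured dimer half-life converts to a first-order dissociation rate constant via k = ln(2)/t1/2:

```python
# Convert the measured dimer half-life into a first-order dissociation
# rate constant: for first-order decay, k = ln(2) / t_half.
import math

for t_half in (6.0, 10.0):           # minutes, the range reported above
    k_off = math.log(2) / t_half     # per minute
    print(round(k_off, 3))           # ~0.116 and ~0.069 per minute
```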
Abstract:
Fluid and macromolecule secretion by submucosal glands in mammalian airways is believed to be important in normal airway physiology and in the pathophysiology of cystic fibrosis (CF). An in situ fluorescence method was applied to measure the ionic composition and viscosity of freshly secreted fluid from airway glands. Fragments of human large airways obtained at the time of lung transplantation were mounted in a humidified perfusion chamber and the mucosal surface was covered by a thin layer of oil. Individual droplets of secreted fluid were microinjected with fluorescent indicators for measurement of [Na+], [Cl−], and pH by ratio imaging fluorescence microscopy and viscosity by fluorescence recovery after photobleaching. After carbachol stimulation, 0.1–0.5 μl of fluid accumulated in spherical droplets at gland orifices in ≈3–5 min. In gland fluid from normal human airways, [Na+] was 94 ± 8 mM, [Cl−] was 92 ± 12 mM, and pH was 6.97 ± 0.06 (SE, n = 7 humans, more than five glands studied per sample). Apparent fluid viscosity was 2.7 ± 0.3-fold greater than that of saline. Neither [Na+] nor pH differed in gland fluid from CF airways, but viscosity was significantly elevated by ≈2-fold compared to normal airways. These results represent the first direct measurements of ionic composition and viscosity in uncontaminated human gland secretions and indicate similar [Na+], [Cl−], and pH to that in the airway surface liquid. The elevated gland fluid viscosity in CF may be an important factor promoting bacterial colonization and airway disease.
Abstract:
Successful cryopreservation of most multicompartmental biological systems has not been achieved. One prerequisite for success is quantitative information on cryoprotectant permeation into and amongst the compartments. This report describes direct measurements of cryoprotectant permeation into a multicompartmental system using chemical shift selective magnetic resonance (MR) microscopy and MR spectroscopy. We used the developing zebrafish embryo as a model for studying these complex systems because these embryos are composed of two membrane-limited compartments: (i) a large yolk (surrounded by the yolk syncytial layer) and (ii) differentiating blastoderm cells (each surrounded by a plasma membrane). MR images of the spatial distribution of three cryoprotectants (dimethyl sulfoxide, propylene glycol, and methanol) demonstrated that methanol permeated the entire embryo within 15 min. In contrast, the other cryoprotectants exhibited little or no permeation over 2.5 h. MR spectroscopy and microinjections of cryoprotectants into the yolk indicated that the yolk syncytial layer plays a critical role in limiting the permeation of some cryoprotectants throughout the embryo. This study demonstrates the power of MR technology combined with micromanipulation for elucidating key physiological factors in cryobiology.
Abstract:
We present the first direct measurements of bidirectional motions in an extragalactic radio jet. The radio source 1946+708 is a compact symmetric object with striking S-symmetry, identified with a galaxy at a redshift of 0.101. From observations 2 years apart we have determined the velocities of four compact components in the jet, the fastest of which has an apparent velocity of 1.09 h^-1 c. By pairing up the components, assuming they were simultaneously ejected in opposite directions, we derive a 1-sigma lower limit on the Hubble constant, H0 > 42 km s^-1 Mpc^-1.
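A minimal sketch of the geometry behind such a limit, assuming simultaneous, oppositely directed ejection. The proper motions used below are hypothetical placeholders, not the measured values for 1946+708, and v ≈ cz is used for the recession velocity:

```python
# For components ejected at speed beta*c at angle theta to the line of sight,
# the approaching/receding proper motions satisfy
#   mu_a * mu_r = (beta*sin(theta))**2 / (1 - beta**2 * cos(theta)**2)
#                 * (c / (D*(1+z)))**2,
# and beta <= 1 forces D <= c / ((1+z) * sqrt(mu_a*mu_r)),
# hence H0 = v/D >= c*z / D_max (using v ~ c*z).
# The proper motions passed in below are hypothetical, not the 1946+708 data.
import math

C_KM_S = 299792.458
MAS_YR_TO_RAD_S = (math.pi / (180.0 * 3600.0e3)) / (365.25 * 86400.0)
KM_PER_MPC = 3.0857e19

def h0_lower_limit(mu_a, mu_r, z):
    """mu_a, mu_r: proper motions in mas/yr; returns an H0 limit in km/s/Mpc."""
    mu_geom = math.sqrt(mu_a * mu_r) * MAS_YR_TO_RAD_S  # geometric mean, rad/s
    d_max_mpc = C_KM_S / ((1.0 + z) * mu_geom) / KM_PER_MPC
    return C_KM_S * z / d_max_mpc

print(round(h0_lower_limit(0.1, 0.05, 0.101), 1))  # hypothetical inputs
```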
Abstract:
A main objective of our atmospheric-electric measurements over the Atlantic Ocean in 1973 was the investigation of the air-earth current density above the sea. In addition to direct measurements at the water surface with a floating net, we calculated the air-earth current density from the electric field and the air conductivity measured simultaneously on board the ship and during dedicated ascents into the free atmosphere. During all five ascents the air-earth current density did not change with altitude. For purely maritime air conditions, the mean air-earth current density was found to be 2.9 pA/m^2. The mean hourly air-earth current density over the Atlantic shows nearly the same 24-hour pattern as measured by Cobb (1977) at the South Pole at the same time. When dust-laden air masses of African origin reached the ship, as well as under continental influence, the mean air-earth current density was reduced to 2.1 pA/m^2, and the global 24-hour pattern was modified by these continental influences. Finally, it is shown that the values of air conductivity measured on board R. V. "Meteor" during our earlier expeditions were influenced by the ship's exhaust and must therefore be corrected. With this correction, our new mean values of the air-earth current density over the Atlantic are 2.6 pA/m^2 in 1965 and 2.0 pA/m^2 in 1969. From all measurements, the global air-earth current is estimated to be about 1250 A.
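The closing global estimate can be checked by integrating a mean current density over the Earth's surface; the representative density used below (midway between the reported means) is an assumption for illustration:

```python
# Global air-earth current = mean current density x Earth's surface area.
import math

R_EARTH = 6.371e6                    # mean Earth radius, m
area = 4.0 * math.pi * R_EARTH**2    # ~5.1e14 m^2
j_mean = 2.45e-12                    # A/m^2, midway between the reported means

global_current = j_mean * area       # amperes
print(round(global_current))         # -> 1250, matching the ~1250 A above
```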
Abstract:
Oil shale processing produces an aqueous wastewater stream known as retort water. The fate of the organic content of retort water from the Stuart oil shale project (Gladstone, Queensland) is examined in a proposed packed bed treatment system consisting of a 1:1 mixture of residual shale from the retorting process and mining overburden. The retort water had a neutral pH and an average unfiltered TOC of 2,900 mg l^-1. The inorganic composition of the retort water was dominated by NH4+. Only 40% of the total organic carbon (TOC) in the retort water was identifiable, and this was dominated by carboxylic acids. In addition to monitoring influent and effluent TOC concentrations, CO2 evolution was monitored online by continuous measurements of headspace concentrations and air flow rates. The column was run for 64 days before it blocked and was dismantled for analysis. Over 98% of the TOC was removed from the retort water. Respirometry measurements were confounded by CO2 production from inorganic sources. Based on predictions with the chemical equilibrium package PHREEQE, approximately 15% of the total CO2 production arose from the reaction of NH4+ with carbonates. The balance of the CO2 production accounted for at least 80% of the carbon removed from the retort water. Direct measurements of solid organic carbon showed that approximately 20% of the influent carbon was held up in the top 20 cm of the column. Less than 20% of this held-up carbon was present as either biomass or as adsorbed species. Therefore, the column was ultimately blocked by either extracellular polymeric substances or by a sludge that had precipitated out of the retort water.
Abstract:
We thank Hilberts and Troch [2006] for their comment on our paper [Cartwright et al., 2005]. Before proceeding with our specific replies to the comments, we would first like to clarify the definitions and meanings of equations (1)-(3) as presented by Hilberts and Troch [2006]. First, equation (1) is the fundamental definition of the (complex) effective porosity as derived by Nielsen and Perrochet [2000]. Equations (2) and (3), however, represent the linear frequency response function of the water table in the sand column responding to simple harmonic forcing. This function, which was validated by Nielsen and Perrochet [2000], provides an alternative method for estimating the complex effective porosity from the experimental sand column data in the absence of direct measurements of h_(tot) (which are required if equation (1) is to be used).
Abstract:
Direct measurements of the absorbed energy during femtosecond laser inscription in a range of materials are performed. Key absorption parameters are characterized by fitting numerical modelling to the measurements.
Abstract:
The literature relating to evaporation from single droplets of pure liquids, and to the drying of droplets containing solids and of droplet sprays, has been reviewed. The heat and mass transfer rates for a single droplet suspended from a nozzle were studied within a 42 mm I.D. horizontal wind tunnel designed to supply hot dry air, to simulate conditions encountered in a practical spray dryer. A novel rotating glass nozzle was developed to facilitate direct measurements of droplet weight and core temperature; this design minimised heat conduction through the nozzle. Revised correlations were obtained for heat and mass transfer coefficients for evaporation from pure water droplets suspended from a rotating nozzle: Nu = 2.0 + 0.27 (l/B)^0.18 Re^0.5 Pr^0.83 and Sh = 2.0 + 0.575 ((T0 - Ts)/Tamb)^-0.04 Re^0.5 Sc^0.33. Experimental drying studies were carried out on single droplets of different types of skin-forming materials, namely custard, gelatin, skim milk and fructose, at air temperatures ranging from 19°C to 198°C. Dried crusts were recovered and examined by scanning electron microscopy. Skin-forming materials were classified into three types according to the mechanism of skin formation. In the first type (typified by droplets of custard and starch), a skin formed owing to gelatinisation at high temperatures; increasing the drying temperature resulted in increased crust resistance to mass transfer due to increased granule swelling, and the crust resistance was completely transferred to a skin resistance at drying temperatures > 150°C. In the second type (e.g. gelatin droplets), a skin formed as soon as drying commenced, at any drying temperature; at drying temperatures > 60°C a more resistant skin was formed. In the third type (typified by droplets of skim milk and fructose), the skin appeared on the droplet surface at a certain stage of the drying process under any drying conditions, and as the drying temperature was increased the resistance of the skin to mass transfer increased.
The drying rate history of any material depended upon the nature of the skin formed which, in turn, depended upon the drying conditions. A mathematical model was proposed for the drying of the first type of skin-forming material, based on the assumption that, once all the granules had gelatinised at the gelatinisation temperature, a skin appeared instantaneously on the droplet surface. The experimentally observed times at which the skin appeared on the droplet surfaces were in excellent agreement with those predicted by the model. The work should assist in understanding the fundamentals of particulate drying processes, particularly when skin formation occurs and may be a crucial factor in volatiles retention.
Abstract:
Satellite-borne scatterometers are used to measure backscattered microwave radiation from the ocean surface. These data may be used to infer surface wind vectors where no direct measurements exist. Inherent in the data are outliers owing to aberrations on the water surface and measurement errors within the equipment. We present two techniques for identifying outliers using neural networks; the outliers may then be removed to improve models derived from the data. First, the generative topographic mapping (GTM) is used to create a probability density model; data with low probability under the model may be classed as outliers. In the second part of the paper, a sensor model with input-dependent noise is used, and outliers are identified based on their probability under this model. GTM was successfully modified to incorporate prior knowledge of the shape of the observation manifold; however, GTM could not learn the double-skinned nature of the manifold. Learning this double-skinned manifold necessitated the use of a sensor model that imposes strong constraints on the mapping. The results using GTM with a fixed noise level suggested that the noise level may vary as a function of wind speed. This was confirmed by experiments using a sensor model with input-dependent noise, where the variation in noise is most sensitive to the wind speed input. Both models successfully identified gross outliers, with the largest differences between models occurring at low wind speeds.
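The first technique reduces to a simple recipe: fit a density model to the observations, then flag the lowest-likelihood points. In the sketch below, a single Gaussian stands in for the GTM (an assumption; the paper's GTM and input-dependent-noise sensor model are more specialized) and the data are synthetic:

```python
# Density-based outlier flagging: fit a probability density model to the
# observations and flag points with low likelihood under it. A single
# Gaussian density stands in here for the paper's GTM model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
inliers = rng.normal(0.0, 1.0, size=(300, 2))    # bulk of the observations
outliers = rng.uniform(6.0, 8.0, size=(5, 2))    # gross aberrations, far away
X = np.vstack([inliers, outliers])

gm = GaussianMixture(n_components=1, random_state=0).fit(X)
log_density = gm.score_samples(X)                # log p(x) for each point

# Flag the lowest-density 2% of points as outlier candidates.
threshold = np.quantile(log_density, 0.02)
flags = log_density <= threshold
print(int(flags.sum()))                          # a handful of flagged points
```

The fixed 2% cut is a simplification; the paper's point is precisely that a fixed noise level (hence a fixed cut) is inadequate when the noise varies with wind speed.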