77 results for Oxygen -- Measurement
Abstract:
Background: The activation of hepatic stellate cells (HSCs) plays a pivotal role during liver injury because the resulting myofibroblasts (MFBs) are mainly responsible for connective tissue re-assembly. MFBs therefore represent cellular targets for anti-fibrotic therapy. In this study, we employed activated HSCs, termed M1-4HSCs, whose transdifferentiation to myofibroblastoid cells (named M-HTs) depends on transforming growth factor (TGF)-β. We analyzed the oxidative stress induced by TGF-β and examined cellular defense mechanisms upon transdifferentiation of HSCs to M-HTs. Results: We found reactive oxygen species (ROS) significantly upregulated in M1-4HSCs within 72 hours of TGF-β administration. In contrast, M-HTs harbored lower intracellular ROS content than M1-4HSCs, despite elevated NADPH oxidase activity. These observations indicated an upregulation of cellular defense mechanisms that protect cells from the harmful consequences of oxidative stress. In line with this hypothesis, superoxide dismutase activation provided resistance to the augmented radical production in M-HTs, and glutathione rather than catalase was responsible for intracellular hydrogen peroxide removal. Finally, TGF-β/NADPH oxidase-mediated ROS production correlated with the upregulation of AP-1 as well as platelet-derived growth factor receptor subunits, which points to important contributions in establishing the antioxidant defense. Conclusion: The data provide evidence that TGF-β induces NADPH oxidase activity, which causes radical production upon the transdifferentiation of activated HSCs to M-HTs. Myofibroblastoid cells are equipped with high levels of superoxide dismutase activity as well as glutathione to counterbalance NADPH oxidase-dependent oxidative stress and to avoid cellular damage.
Abstract:
The optical and electrical recovery processes of the metastable state of the EL2 defect artificially created in n-type GaAs by boron or oxygen implantation are analyzed at 80 K using optical isothermal transient spectroscopy. In both cases, we have found an inhibition of the electrical recovery and the existence of an optical recovery in the range 1.1-1.4 eV, competing with the photoquenching effect. The similar results obtained with both elements, and the different behavior observed in comparison with the native EL2 defect, have been related to the lattice damage produced by the implantation process. From the different dependence on the technological process, it can be deduced that the electrical and optical anomalies have different origins. The electrical inhibition is due to the existence of an interaction between the EL2 defect and other implantation-created defects. However, the optical recovery seems to be related to a change in the microscopic metastable state configuration involving the presence of vacancies.
Abstract:
A network of twenty stakes was set up on Johnsons Glacier in order to determine its dynamics. During the austral summers from 1994-95 to 1997-98, we estimated surface velocities, mass balances and ice thickness variations. Horizontal velocity increased downstream from 1 m a⁻¹ near the ice divides to 40 m a⁻¹ near the ice terminus. The accumulation zone showed low accumulation rates (maximum of 0.6 m a⁻¹ (ice)), whereas in the lower part of the glacier, ablation rates were 4.3 m a⁻¹ (ice). Over the 3-year study period, both in the accumulation and ablation zones, we detected a reduction in the ice surface level ranging from 2 to 10 m. From the annual vertical velocities and ice-thinning data, the mass balance was obtained and compared with the mass balance field values, resulting in similar estimates. Flux values were calculated using cross-section data and horizontal velocities, and compared with the results obtained by means of mass balance and ice-thinning data using the continuity equation. The two methods gave similar results.
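The flux cross-check described above is simple bookkeeping once the stake measurements are in hand. Below is a minimal Python sketch of the two estimates being compared; all function names and numerical values are hypothetical illustrations, not data from the study.

```python
# Two independent estimates of ice flux through a downstream cross-section.

def flux_from_kinematics(cross_section_area_m2, mean_velocity_m_per_a):
    """Kinematic estimate: Q = A * u (m^3 a^-1, ice equivalent)."""
    return cross_section_area_m2 * mean_velocity_m_per_a

def flux_from_continuity(upstream_flux_m3_per_a, mass_balance_m_per_a,
                         thinning_rate_m_per_a, surface_area_m2):
    """Continuity estimate: dH/dt = b - dQ/dA, hence
    Q_down = Q_up + (b - dH/dt) * A, with b the mass balance (ice eq.)
    and dH/dt the observed surface-elevation change (negative = thinning)."""
    return upstream_flux_m3_per_a + (mass_balance_m_per_a - thinning_rate_m_per_a) * surface_area_m2

# Hypothetical numbers, loosely in the ranges quoted in the abstract.
q_kin = flux_from_kinematics(cross_section_area_m2=5.0e4, mean_velocity_m_per_a=20.0)
q_con = flux_from_continuity(upstream_flux_m3_per_a=8.0e5, mass_balance_m_per_a=-1.5,
                             thinning_rate_m_per_a=-2.0, surface_area_m2=4.0e5)
print(q_kin, q_con)  # the study found the two estimates to agree closely
```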
Abstract:
Abstract Purpose: Several well-known managerial accounting performance measurement models rely on causal assumptions. Whilst users of the models express satisfaction and link them with improved organizational performance, academic research on real-world applications shows few reliable statistical associations. This paper provides a discussion on the "problematic" of causality in a performance measurement setting. Design/methodology/approach: This is a conceptual study based on an analysis and synthesis of the literature from managerial accounting, organizational theory, strategic management and social scientific causal modelling. Findings: The analysis indicates that dynamic, complex and uncertain environments may challenge any reliance upon valid causal models. Due to cognitive limitations and judgmental biases, managers may fail to trace a correct cause-and-effect understanding of the value creation in their organizations. However, even lacking this validity, causal models can support strategic learning and perform as organizational guides if they are able to mobilize managerial action. Research limitations/implications: Future research should highlight the characteristics necessary for the elaboration of convincing and appealing causal models and the social process of their construction. Practical implications: Managers of organizations using causal models should be clear on the purposes of their particular models and their limitations. In particular, difficulties are observed in specifying detailed cause-and-effect relations and their potential for communicating and directing attention. They should therefore construct their models to suit the particular purpose envisaged. Originality/value: This paper provides an interdisciplinary and holistic view on the issue of causality in managerial accounting models.
Abstract:
We present experiments in which the laterally confined flow of a surfactant film driven by controlled surface tension gradients causes the subtended liquid layer to self-organize into an inner upstream microduct surrounded by the downstream flow. The anomalous interfacial flow profiles and the concomitant backflow are a result of the feedback between two-dimensional and three-dimensional microfluidics realized during flow in open microchannels. Bulk and surface particle image velocimetry data combined with an interfacial hydrodynamics model explain the dependence of the observed phenomena on channel geometry.
Abstract:
Background In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices with normally distributed measurements, and describe its utility to evaluate inter- and intra-rater agreement if more than one reading per subject is available for each device. Methods We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices, and thereafter we derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds of the coverage probability. Results The approach is illustrated in a real case example where the agreement between two instruments, a handheld mercury sphygmomanometer device and an OMRON 711 automatic device, is assessed in a sample of 384 subjects where measures of systolic blood pressure were taken twice by each device. A simulation study procedure is implemented to evaluate and compare the accuracy of the approach to two already established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level. Conclusions The method proposed is straightforward since the TDI estimate is derived directly from a probability interval of a normally distributed variable in its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TI based on normal populations are implemented in most standard statistical packages, thus making it simpler for any practitioner to implement our proposal to assess agreement.
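The tolerance-interval route to the TDI described above is compact enough to sketch. The following Python sketch assumes normally distributed paired differences and uses Howe's classical approximation for the two-sided normal tolerance factor; the function name and the simulated data are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import stats

def tdi_tolerance_bound(diff, p=0.90, conf=0.95):
    """TDI_p point estimate and upper confidence bound from paired differences.

    Point estimate: the central probability interval mean +/- z*sd covers a
    proportion p of normally distributed differences; TDI_p is approximated
    by its larger absolute endpoint.  For inference, z is replaced by a
    two-sided tolerance factor k (Howe's approximation), giving an interval
    that covers p of the population with confidence `conf`.
    """
    d = np.asarray(diff, dtype=float)
    n, m, s = d.size, d.mean(), d.std(ddof=1)
    z = stats.norm.ppf((1 + p) / 2)
    tdi_hat = max(abs(m - z * s), abs(m + z * s))
    k = z * np.sqrt((n - 1) * (1 + 1 / n) / stats.chi2.ppf(1 - conf, n - 1))
    tdi_upper = max(abs(m - k * s), abs(m + k * s))
    return tdi_hat, tdi_upper

# Illustrative use: simulated paired systolic blood pressure differences (mmHg).
rng = np.random.default_rng(0)
d = rng.normal(1.0, 6.0, size=384)
print(tdi_tolerance_bound(d, p=0.90, conf=0.95))
```

As the abstract notes, the appeal of this construction is that normal-population tolerance intervals are implemented in most standard statistical packages, so no transformation of the TDI estimate is needed.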
Abstract:
Biogeochemical cycles and sedimentary records in lakes are related to climate controls on hydrology and catchment processes. Changes in the isotopic composition of the diatom frustules (δ18O_diatom and δ13C_diatom) in lacustrine sediments can be used to reconstruct palaeoclimatic and palaeoenvironmental changes. The Lago Chungará (Andean Altiplano, 18°15′S, 69°10′W, 4520 m a.s.l.) diatomaceous laminated sediments are made up of white and green multiannual rhythmites. White laminae were formed during short-term diatom super-blooms and are composed almost exclusively of large-sized Cyclostephanos andinus. These diatoms bloom during mixing events, when recycled nutrients from the bottom waters are brought to the surface, and/or when nutrients are introduced from the catchment during periods of strong runoff. Conversely, the green laminae are thought to have been deposited over several years and are composed of a mixture of diatoms (mainly smaller valves of C. andinus and Discostella stelligera) and organic matter. These green laminae reflect the lake's hydrological recovery from a status favouring the diatom super-blooms (white laminae) towards baseline conditions. δ18O_diatom and δ13C_diatom from 11,990 to 11,530 cal years BP allow us to reconstruct shifts in the precipitation/evaporation ratio and changes in the lake water dissolved carbon concentration, respectively. δ18O_diatom values indicate that white laminae formation occurred mainly during low lake-level stages, whereas green laminae formation generally occurred during high lake-level stages. The isotope and chronostratigraphical data together suggest that white laminae deposition is caused by extraordinary environmental events. El Niño-Southern Oscillation and changes in solar activity are the most likely climate forcing mechanisms that could trigger such events, favouring hydrological changes at interannual-to-decadal scale. This study demonstrates the potential for laminated lake sediments to document extreme pluriannual events.
Abstract:
This paper reviews the concept of presence in immersive virtual environments, the sense of being there signalled by people acting and responding realistically to virtual situations and events. We argue that presence is a unique phenomenon that must be distinguished from the degree of engagement, that is, involvement in the portrayed environment. We argue that there are three necessary conditions for presence: (a) a consistent low-latency sensorimotor loop between sensory data and proprioception; (b) statistical plausibility: images must be statistically plausible in relation to the probability distribution of images over natural scenes, a constraint on this plausibility being the level of immersion; (c) behaviour-response correlations: presence may be enhanced and maintained over time by appropriate correlations between the state and behaviour of participants and responses within the environment, correlations that show appropriate responses to the activity of the participants. We conclude with a discussion of methods for assessing whether presence occurs, and in particular recommend the approach of comparison with ground truth, and give some examples of this.
Abstract:
Oxygen vacancies in metal oxides are known to determine their chemistry and physics. The properties of neutral oxygen vacancies in metal oxides of increasing complexity (MgO, CaO, α-Al2O3, and ZnO) have been studied using density functional theory. Vacancy formation energies, vacancy-vacancy interaction, and the barriers for vacancy migration are determined and rationalized in terms of the ionicity, the Madelung potential, and lattice relaxation. It is found that the Madelung potential controls the oxygen vacancy properties of highly ionic oxides whereas a more complex picture arises for covalent ZnO.
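For reference, the vacancy formation energies discussed above reduce to a total-energy difference between DFT calculations, with the removed oxygen referenced to half an O2 molecule. A minimal Python sketch follows; all numerical values are hypothetical placeholders, not results from the study.

```python
# Neutral oxygen vacancy formation energy from DFT total energies (eV):
#   E_f = E(defective supercell) + 1/2 * E(O2 molecule) - E(pristine supercell)

def vacancy_formation_energy(e_defective, e_pristine, e_o2):
    """Formation energy of a neutral O vacancy, O referenced to 1/2 O2."""
    return e_defective + 0.5 * e_o2 - e_pristine

# Hypothetical total energies, chosen only to make the arithmetic concrete.
e_f = vacancy_formation_energy(e_defective=-1234.56, e_pristine=-1243.21, e_o2=-9.86)
print(f"E_f = {e_f:.2f} eV")  # -1234.56 - 4.93 + 1243.21 = 3.72 eV
```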
Abstract:
Through an interplay between scanning tunneling microscopy experiments and density functional theory calculations, we determine unambiguously the active surface site responsible for the dissociation of water molecules adsorbed on rutile TiO2(110). Oxygen vacancies in the surface layer are shown to dissociate H2O through the transfer of one proton to a nearby oxygen atom, forming two hydroxyl groups for every vacancy. The amount of water dissociation is limited exclusively by the density of oxygen vacancies present on the clean surface. The dissociation process sets in as soon as molecular water is able to diffuse to the active site.
Abstract:
Through an interplay between scanning tunneling microscopy (STM) and density functional theory (DFT) calculations, we show that bridging oxygen vacancies are the active nucleation sites for Au clusters on the rutile TiO2(110) surface. We find that a direct correlation exists between a decrease in density of vacancies and the amount of Au deposited. From the DFT calculations we find that the oxygen vacancy is indeed the strongest Au binding site. We show both experimentally and theoretically that a single oxygen vacancy can bind 3 Au atoms on average. In view of the presented results, a new growth model for the TiO2(110) system involving vacancy-cluster complex diffusion is presented.
Abstract:
Prompt production of charmonium χc0, χc1 and χc2 mesons is studied using proton-proton collisions at the LHC at a centre-of-mass energy of √s = 7 TeV. The χc mesons are identified through their decay to J/ψγ, with J/ψ → μ⁺μ⁻, using photons that converted in the detector. A data sample, corresponding to an integrated luminosity of 1.0 fb⁻¹ collected by the LHCb detector, is used to measure the relative prompt production rate of χc1 and χc2 in the rapidity range 2.0 < y < 4.5 as a function of the J/ψ transverse momentum from 3 to 20 GeV/c. First evidence for χc0 meson production at a high-energy hadron collider is also presented.
Abstract:
In this thesis (TFG), the results of the comparison of three assays for the measurement of AhR ligand activity are presented. This study was part of a collaborative project aiming at the characterization of the AhR signaling activities of known naturally occurring compounds, to explore the potential of using non-toxic compounds to treat inflammatory diseases via oral administration. The first goal of this project was to find an assay able to measure AhR activity, so different assays were compared in order to find the most convenient one according to efficiency, sensitivity and precision. Moreover, other elements of an operational nature, such as price, toxicity of components or ease of use, have been considered. Using compounds known from the literature to be AhR ligands, three assays have been tested: (1) the P450-Glo™ CYP1A2 Induction/Inhibition assay, (2) quantitative Polymerase Chain Reaction (qPCR) and (3) the DR. CALUX® Bioassay. Moreover, a different experiment using the last assay was performed for the in vivo study of the transport of the compounds tested. The results of the TFG suggested the DR. CALUX® Bioassay as the most promising assay for the screening of samples as AhR ligands, because it is quicker, easier to handle and less expensive than qPCR, and more reproducible than the CYP1A2 Induction/Inhibition assay. Moreover, the use of this assay provided a first indication of which compounds are taken up by the epithelial barrier and in which direction the transport happens.
Abstract:
The objective of this study was to evaluate the methodological characteristics of cost-effectiveness evaluations carried out in Spain since 1990 that include life-years gained (LYG) as an outcome to measure the incremental cost-effectiveness ratio. METHODS: A systematic review of published studies was conducted, describing their characteristics and methodological quality. We analyse the cost-per-LYG results in relation to a commonly accepted Spanish cost-effectiveness threshold and their possible relation to the cost per quality-adjusted life year (QALY) gained when both were calculated for the same economic evaluation. RESULTS: A total of 62 economic evaluations fulfilled the selection criteria, 24 of them also including the cost per QALY gained. The methodological quality of the studies was good (55%) or very good (26%). A total of 124 cost-per-LYG results were obtained, with a mean ratio of 49,529
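The incremental cost-effectiveness ratio that these studies report is a simple quotient compared against a threshold. A minimal Python sketch follows; the costs, effects, and the 30,000 EUR/LYG threshold are assumed illustrative values, not figures from this review.

```python
# Incremental cost-effectiveness ratio (ICER), effectiveness in life-years gained (LYG):
#   ICER = (C_new - C_old) / (E_new - E_old)

def icer(cost_new, cost_old, effect_new, effect_old):
    """Extra cost per extra life-year gained of the new intervention."""
    return (cost_new - cost_old) / (effect_new - effect_old)

THRESHOLD_EUR_PER_LYG = 30_000  # assumed illustrative threshold, not from the review

ratio = icer(cost_new=12_000.0, cost_old=4_000.0, effect_new=4.2, effect_old=3.8)
print(f"ICER = {ratio:,.0f} EUR/LYG, cost-effective: {ratio <= THRESHOLD_EUR_PER_LYG}")
```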
Abstract:
The most suitable method for estimation of size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches for the estimation of the pdf are compared: parametric methods, assuming that data come from a determinate family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using estimated parameters from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives an accurate estimation of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. The division of data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained when using original sizes or log-transformed data, and size measurements with different dimensionality (lengths, areas, volumes or biomasses) may be immediately compared with the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after data standardization by division by the sample geometric mean, arises as the most reliable and generalizable method of size diversity evaluation.
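The recommended estimator (geometric-mean standardization, then kernel density estimation, then the continuous Shannon integral) is straightforward to sketch. The following Python sketch is a minimal illustration under those assumptions; the function name, grid choices, and simulated sample are ours, not the paper's.

```python
import numpy as np
from scipy.stats import gmean, gaussian_kde
from scipy.integrate import trapezoid

def size_diversity(sizes):
    """Continuous Shannon size diversity: H = -integral p(x) ln p(x) dx."""
    x = np.asarray(sizes, dtype=float)
    x = x / gmean(x)                       # standardize by the sample geometric mean
    kde = gaussian_kde(x)                  # nonparametric (kernel) pdf estimate
    grid = np.linspace(x.min() / 2, x.max() * 2, 2048)
    p = np.clip(kde(grid), 1e-300, None)   # avoid log(0) in empty tails
    return -trapezoid(p * np.log(p), grid)

# Illustrative use: simulated body lengths (mm).
rng = np.random.default_rng(1)
lengths_mm = rng.lognormal(mean=1.0, sigma=0.5, size=500)
print(size_diversity(lengths_mm))
```

With the geometric-mean standardization, the sample mean of ln x is zero, which is why (as the abstract notes) the same diversity value is obtained from original or log-transformed sizes, up to estimation error.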