945 results for "Estimator standard error and efficiency"
Abstract:
The effects of light and elevated pCO2 on the growth and photochemical efficiency of the critically endangered staghorn coral, Acropora cervicornis, were examined experimentally. Corals were subjected to high and low treatments of CO2 and light in a fully crossed design and monitored using 3D scanning and buoyant-weight methodologies. Calcification rates, linear extension, and colony surface area and volume of A. cervicornis were highly dependent on light intensity. At pCO2 levels projected to occur by the end of the century from ocean acidification (OA), A. cervicornis exhibited depressed calcification but no change in linear extension. Photochemical efficiency (Fv/Fm) was higher at low light but unaffected by CO2. Amelioration of OA-depressed calcification under high-light treatments was not observed, and we suggest that the high light intensity necessary to reach saturation of photosynthesis and calcification in A. cervicornis may limit the effectiveness of this potentially protective mechanism in this species. High CO2 depressed skeletal density but not linear extension, illustrating that measurement of extension alone is inadequate to detect CO2 impacts. The skeletal integrity of A. cervicornis will be impaired by OA, which may further reduce the resilience of the already diminished populations of this endangered species.
Abstract:
The paper examines how far foreign manufacturing investment in UK industries, together with the spatial agglomeration of those industries, affects technical efficiency. The paper links research on the estimation of technical efficiency with the literatures demonstrating the economies associated with foreign direct investment and spatial agglomeration. The methodology involves estimation of a stochastic production frontier with random components associated with industry technical inefficiency and with statistical error. The paper also explores whether the degree of foreign involvement has a greater impact on technical efficiency where the domestic industry sector is characterized by comparatively high productivity and spatial agglomeration. The policy implications of the analysis are discussed.
Abstract:
The hepatic disposition and metabolite kinetics of a homologous series of diflunisal O-acyl esters (acetyl, butanoyl, pentanoyl, and hexanoyl) were determined using a single-pass perfused in situ rat liver preparation. The experiments were conducted using 2% BSA Krebs-Henseleit buffer (pH 7.4), and perfusions were performed at 30 mL/min in each liver. O-Acyl esters of diflunisal and pregenerated diflunisal were injected separately into the portal vein. The venous outflow samples containing the esters and the metabolite diflunisal were analyzed by high-performance liquid chromatography (HPLC). The normalized outflow concentration-time profiles for each parent ester and the formed metabolite, diflunisal, were analyzed using statistical moments analysis and the two-compartment dispersion model. Data (presented as mean +/- standard error for triplicate experiments) were compared using repeated-measures ANOVA, significance level P < 0.05. The hepatic availability (AUC'), the fraction of the injected dose recovered in the outflowing perfusate, for O-acetyldiflunisal (C2D = 0.21 +/- 0.03) was significantly lower than that of the other esters (0.34-0.38). However, R_N/f_u, the removal efficiency number R_N divided by the unbound fraction in perfusate f_u, which represents the removal efficiency of unbound ester by the liver, was significantly higher for the most lipophilic ester (O-hexanoyldiflunisal, C6D = 16.50 +/- 0.22) than for the other members of the series (9.57 to 11.17). The most lipophilic ester, C6D, had the largest permeability-surface area (PS) product (94.52 +/- 38.20 mL min-1 g-1 liver) and tissue distribution value V_T (35.62 +/- 11.33 mL g-1 liver) in the series. The MTTs of these O-acyl esters of diflunisal were not significantly different from one another. However, the metabolite diflunisal MTTs tended to increase with increasing parent-ester lipophilicity (11.41 +/- 2.19 s for C2D to 38.63 +/- 9.81 s for C6D).
The two-compartment dispersion model equations adequately described the outflow profiles for the parent esters and the metabolite diflunisal formed from the O-acyl esters of diflunisal in the liver.
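As background on the statistical moments analysis mentioned above, a minimal sketch of how the zeroth moment (AUC) and the mean transit time (MTT) are computed from a normalized outflow concentration-time profile. The data values here are hypothetical, for illustration only, and are not taken from the study:

```python
import numpy as np

# Hypothetical outflow profile (illustrative values only, not study data)
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 30.0, 45.0, 60.0])  # time, s
c = np.array([0.0, 0.8, 1.5, 1.2, 0.7, 0.3, 0.1, 0.0])        # normalized conc.

def trapz(y, x):
    """Trapezoidal integration of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

auc = trapz(c, t)            # zeroth moment: area under the curve
mtt = trapz(t * c, t) / auc  # first normalized moment: mean transit time, s
```

The MTT is a concentration-weighted average of time, so it always falls within the sampling window.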
Abstract:
For the last two decades, the primary instruments for UK regional policy have been discretionary subsidies. Such aid is targeted at “additional” projects - projects that would not have been implemented without the subsidy - and the subsidy should be the minimum necessary for the project to proceed. Discretionary subsidies are thought to be more efficient than automatic subsidies, where many of the aided projects are non-additional and all projects receive the same subsidy rate. The present paper builds on Swales (1995) and Wren (2007a) to compare three subsidy schemes: an automatic scheme and two types of discretionary scheme, one with accurate appraisal and the other with appraisal error. These schemes are assessed on their expected welfare impacts. The particular focus is the reduction in welfare gain imposed by the interaction of appraisal error and the requirements for accountability. This is substantial and difficult to detect with conventional evaluation techniques.
Abstract:
This paper uses data on the world's copper mining industry to measure the impact on efficiency of the adoption of the ISO 14001 environmental standard. Anecdotal and case-study literature suggests that firms are motivated to adopt this standard so as to achieve greater efficiency through changes in operating procedures and processes. Using plant-level panel data from 1992-2007 on most of the world's industrial copper mines, the study applies stochastic frontier methods to investigate the effects of ISO adoption. Across the variety of models used, adoption either tends to improve efficiency or has no impact; no evidence is found that ISO adoption decreases efficiency.
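The stochastic frontier approach mentioned above can be sketched as a maximum-likelihood fit of a frontier with a composed error: symmetric noise v plus one-sided half-normal inefficiency u. The simulated data and parameter values below are illustrative assumptions, not the paper's copper-mine panel or its specification:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Simulated plant-level data (illustrative; not the copper-mine panel)
rng = np.random.default_rng(42)
n = 2000
x = rng.uniform(0.0, 2.0, n)             # log input
v = rng.normal(0.0, 0.1, n)              # symmetric noise
u = np.abs(rng.normal(0.0, 0.3, n))      # half-normal inefficiency
y = 1.0 + 0.5 * x + v - u                # log output below the frontier

def nll(params):
    b0, b1, log_sv, log_su = params
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.hypot(sv, su)             # sqrt(sv^2 + su^2)
    lam = su / sv
    eps = y - b0 - b1 * x                # composed error v - u
    # Density of eps: (2/sigma) * phi(eps/sigma) * Phi(-eps*lam/sigma)
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -np.sum(ll)

fit = minimize(nll, x0=[1.0, 0.5, np.log(0.1), np.log(0.3)],
               method="Nelder-Mead")
b0_hat, b1_hat = fit.x[:2]
```

Unlike OLS, which absorbs the mean of u into a biased intercept, the likelihood above separates noise from inefficiency and recovers the frontier intercept itself.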
Abstract:
Weather radar observations are currently the most reliable method for remote sensing of precipitation. However, a number of factors affect the quality of radar observations and may seriously limit automated quantitative applications of radar precipitation estimates, such as those required in Numerical Weather Prediction (NWP) data assimilation or in hydrological models. In this paper, a technique to correct two different problems typically present in radar data is presented and evaluated. The aspects dealt with are non-precipitating echoes - caused either by permanent ground clutter or by anomalous propagation of the radar beam (anaprop echoes) - and topographical beam blockage. The correction technique is based on the computation of realistic beam propagation trajectories derived from recent radiosonde observations instead of assuming standard radio propagation conditions. The correction consists of three different steps: 1) calculation of a Dynamic Elevation Map, which provides the minimum clutter-free antenna elevation for each pixel within the radar coverage; 2) correction for residual anaprop, checking the vertical reflectivity gradients within the radar volume; and 3) topographical beam blockage estimation and correction using a geometric optics approach. The technique is evaluated with four case studies in the region of the Po Valley (N Italy) using a C-band Doppler radar and a network of raingauges providing hourly precipitation measurements. The case studies cover different seasons, different radio propagation conditions, and both stratiform and convective precipitation events. After applying the proposed correction, a comparison of the radar precipitation estimates with raingauges indicates a general reduction in both the root mean squared error and the fractional error variance, indicating the efficiency and robustness of the procedure.
Moreover, the technique is not computationally expensive, so it seems well suited to implementation in an operational environment.
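Step 3 above, the geometric-optics beam blockage correction, is commonly implemented by computing the fraction of a circular beam cross-section occluded by terrain and converting it to a reflectivity adjustment. The sketch below uses the standard circular-segment geometry as an illustration of the general approach, not code from the paper:

```python
import numpy as np

def blocked_fraction(y, a):
    """Fraction of a circular beam cross-section (radius a) occluded by
    terrain whose top reaches height y relative to the beam axis."""
    if y <= -a:
        return 0.0   # terrain entirely below the beam
    if y >= a:
        return 1.0   # beam fully blocked
    # Area of the circular segment below height y, normalized by the disc area
    seg = (y * np.sqrt(a * a - y * y)
           + a * a * np.arcsin(y / a)
           + np.pi * a * a / 2.0)
    return seg / (np.pi * a * a)

def blockage_correction_db(pbb):
    """Reflectivity correction (dB) compensating a partial beam blockage pbb."""
    return 10.0 * np.log10(1.0 / (1.0 - pbb))
```

Terrain reaching the beam axis (y = 0) blocks exactly half the beam, which corresponds to a 3 dB reflectivity correction.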
Abstract:
The research problem in the thesis deals with improving the responsiveness and efficiency of logistics service processes between a supplier and its customers. The improvement can be sought by customizing the services and increasing the coordination of activities between the different parties in the supply chain. It is argued that to achieve coordination the parties have to have connections on several levels. In the framework employed in this research, the linkages can be planned in three contexts: 1) the service policy context, 2) the process coordination context, and 3) the relationship management context. The service policy context consists of the planning methods by which a supplier analyzes its customers' logistics requirements and matches them with its own operational environment and efficiency requirements. The main conclusion related to the service policy context is that it is important to have a balanced selection of both customer-related and supplier-related factors in the analysis. This way, while operational efficiency is planned, a sufficient level of service for the most important customers is assured. This kind of policy planning involves taking multiple variables into the analysis, and there is a need to develop better tools for this purpose. Some new approaches to deal with this are presented in the thesis. The process coordination context and the relationship management context deal with the issues of how the implementation of the planned service policies can be facilitated in an inter-organizational environment. Process coordination typically includes such mechanisms as control rules, standard procedures, and programs, but in highly demanding circumstances more integrative coordination mechanisms may be necessary. In the thesis, the coordination problems in a third-party logistics relationship are used as an example of such an environment.
Relationship management deals with issues of how separate companies organize their relationships to improve the coordination of their common processes. The main implication for logistics planning is that by integrating further at the relationship level, companies can facilitate the use of the most efficient coordination mechanisms and thereby improve the implementation of the selected logistics service policies. In the thesis, a case of a logistics outsourcing relationship is used to demonstrate the need to address the relationship issues between the service provider and the service buyer before the outsourcing can be done. The dissertation consists of eight research articles and a summarizing report. The principal emphasis in the articles is on the service policy planning context, which is the main theme of six articles. Coordination and relationship issues are specifically addressed in two of the papers.
Abstract:
We investigated the reactivity and expression of basal lamina collagen by Schwann cells (SCs) cultivated on a supraorganized bovine-derived collagen substrate. SC cultures were obtained from sciatic nerves of neonatal Sprague-Dawley rats and seeded on 24-well culture plates containing collagen substrate. The homogeneity of the cultures was evaluated with an SC marker antibody (anti-S-100). After 1 week, the cultures were fixed and processed for immunocytochemistry by using antibodies against type IV collagen, S-100 and p75NTR (pan neurotrophin receptor) and for scanning electron microscopy (SEM). Positive labeling with antibodies to the cited molecules was observed, indicating that the collagen substrate stimulates SC alignment and adhesion (collagen IV labeling - organized collagen substrate: 706.33 ± 370.86, non-organized collagen substrate: 744.00 ± 262.09; S-100 labeling - organized collagen: 3809.00 ± 120.28, non-organized collagen: 3026.00 ± 144.63, P < 0.05) and reactivity (p75NTR labeling - organized collagen: 2156.33 ± 561.78, non-organized collagen: 1424.00 ± 405.90, P < 0.05; means ± standard error of the mean in absorbance units). Cell alignment and adhesion to the substrate were confirmed by SEM analysis. The present results indicate that the collagen substrate with an aligned suprastructure, as seen by polarized light microscopy, provides an adequate scaffold for SCs, which in turn may increase the efficiency of the nerve regenerative process after in vivo repair.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The need for standardizing the measured blow count number N-spt to a normalized reference energy value is now fully recognized. The present paper extends the existing theoretical approach using wave propagation theory as a framework and introduces an analysis for large displacements, enabling the influence of rod length on the measured N-spt values to be quantified. The study is based on both calibration chamber and field tests. Energy measurements are monitored in two different positions: below the anvil and above the sampler. Both experimental and numerical results demonstrate that whereas the energy delivered into the rod stem is expressed as a ratio of the theoretical free-fall energy of the hammer, the effective sampler energy is a function of the hammer height of fall, sampler permanent penetration, and the weight of both hammer and rods. The influence of rod length is twofold and produces opposite effects: wave energy losses increase with increasing rod length, while in a long rod composition the gain in potential energy from rod weight is significant and may partially compensate measured energy losses. Based on this revised approach, an analytical solution is proposed to calculate the energy delivered to the sampler, and efficiency coefficients are suggested to account for losses during the energy transfer process.
Abstract:
Objective: This study determined the effects of adding monosodium glutamate (MSG) to a standard diet and a fiber-enriched diet on glucose metabolism, lipid profile, and oxidative stress in rats. Methods: Male Wistar rats (65 ± 5 g, n = 8) were fed a standard diet (control), a standard diet supplemented with 100 g of MSG per kilogram of rat body weight, a diet rich in fiber, or a diet rich in fiber supplemented with 100 g of MSG per kilogram of body weight. After 45 d of treatment, sera were analyzed for concentrations of insulin, leptin, glucose, triacylglycerol, lipid hydroperoxide, and total antioxidant substances. A homeostasis model assessment index was estimated to characterize insulin resistance. Results: Voluntary food intake was higher and feed efficiency was lower in animals fed the standard diet supplemented with MSG than in those fed the control, fiber-enriched, or fiber- and MSG-enriched diet. The MSG group had metabolic dysfunction characterized by increased levels of glucose, triacylglycerol, insulin, leptin, and homeostasis model assessment index. The adverse effects of MSG were related to an imbalance between the oxidant and antioxidant systems. The MSG group had increased levels of lipid hydroperoxide and decreased levels of total antioxidant substances. Levels of triacylglycerol and lipid hydroperoxide were decreased in rats fed the fiber-enriched and fiber- and MSG-enriched diets, whereas levels of total antioxidant substances were increased in these animals. Conclusions: MSG added to a standard diet increased food intake. Overfeeding induced metabolic disorders associated with oxidative stress in the absence of obesity. The fiber-enriched diet prevented changes in glucose, insulin, leptin, and triacylglycerol levels that were seen in the MSG group. 
Because the deleterious effects of MSG, i.e., induced overfeeding, were not seen in the animals fed the fiber-enriched diets, it can be concluded that fiber supplementation is beneficial in discouraging overfeeding and in attenuating the oxidative stress induced by an MSG diet.
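The homeostasis model assessment (HOMA) index used above to characterize insulin resistance is a standard calculation from fasting glucose and insulin; a minimal sketch, with hypothetical example values and the conventional units assumed:

```python
def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """HOMA insulin-resistance index: fasting glucose (mmol/L)
    times fasting insulin (uU/mL), divided by 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

# Hypothetical example: glucose 5.0 mmol/L, insulin 10 uU/mL
index = homa_ir(5.0, 10.0)  # higher values indicate greater insulin resistance
```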
Abstract:
Graduate program in Agronomy (Energy in Agriculture) - FCA
Abstract:
In general, adaptive mesh refinement allows the efficiency of numerical simulations to be increased without significantly degrading the accuracy of the result. However, it has not yet been investigated in which regions of the computational domain the spatial resolution can actually be coarsened without significantly affecting the accuracy of the result. This question is examined here for a concrete example of dry atmospheric convection, namely the simulation of warm air bubbles. For this purpose, a novel numerical model is developed that is tailored to this specific application. The compressible Euler equations are solved with a discontinuous Galerkin method. Time integration is performed with a semi-implicit method, and the dynamic adaptivity uses space-filling curves via the function library AMATOS. The numerical model is validated with a convergence study and five standard test cases. A method for comparing the accuracy of simulations with different refinement regions is introduced that does not require an exact solution. Essentially, this is done by comparing properties of the solution that depend strongly on the spatial resolution used. In the case of a rising warm air bubble, the additional numerical error introduced by adaptivity is smaller than 1% of the total numerical error when the adaptive simulation uses more than 50% of the elements of a uniform high-resolution simulation. Correspondingly, the adaptive simulation is almost twice as fast as the uniform simulation.
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on the same range as a distribution of scaled concentrations, [0 <= x <= 1], parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed either normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal ordered statistics and regression on lognormal ordered statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared.
For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, the performance of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
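The general idea described above, fitting a beta distribution while letting each non-detect contribute through the probability mass below the detection limit, can be sketched as a censored maximum-likelihood fit. The simulated data, detection limit, and optimizer choice below are illustrative assumptions, not the study's implementation:

```python
import numpy as np
from scipy.stats import beta
from scipy.optimize import minimize

# Simulated concentrations scaled to [0, 1] (illustrative parameters)
x = beta.rvs(2.0, 8.0, size=500, random_state=42)

dl = 0.05                     # detection limit; values below are censored
detected = x[x >= dl]
n_cens = int((x < dl).sum())  # number of non-detects

def nll(params):
    a, b = params
    if a <= 0.0 or b <= 0.0:
        return np.inf
    # Detected values contribute the density; each non-detect contributes
    # the probability mass below the detection limit, CDF(dl).
    return -(beta.logpdf(detected, a, b).sum()
             + n_cens * beta.logcdf(dl, a, b))

fit = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = fit.x
```

Because the beta density is zero outside [0, 1], the fitted distribution can never place probability on negative concentrations, which is the bounds advantage the abstract describes.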
Abstract:
Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström’s sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St–Co, Co–St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St–Co than for Co–St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.