948 results for Successive Overrelaxation method with 1 parameter
Abstract:
This paper generalizes the model of Salant et al. (1983; Quarterly Journal of Economics, Vol. 98, pp. 185–199) to a successive oligopoly model with product differentiation. Upstream firms produce differentiated goods, retailers compete in quantities, and supply contracts are linear. We show that if retailers buy from all producers, downstream mergers do not affect wholesale prices. Our result replicates that of Salant et al.: mergers are not profitable unless the merged firm comprises more than 80 per cent of the industry. This result is robust to the type of competition.
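A minimal numerical sketch of the Salant-style profitability comparison referenced above, assuming the textbook symmetric Cournot setting with linear demand and constant marginal cost (not the paper's differentiated successive oligopoly): merging m of n firms is profitable only when the insiders' post-merger profit exceeds their combined pre-merger profits.

```python
def cournot_profit(a, c, k):
    """Per-firm equilibrium profit with k symmetric Cournot firms (P = a - Q)."""
    return ((a - c) / (k + 1)) ** 2

def merger_gain(n, m, a=1.0, c=0.0):
    """Insiders' profit change when m of n symmetric firms merge into one."""
    pre = m * cournot_profit(a, c, n)
    post = cournot_profit(a, c, n - m + 1)   # merged entity behaves as a single firm
    return post - pre

if __name__ == "__main__":
    for n in (5, 10):
        for m in range(2, n + 1):
            status = "profitable" if merger_gain(n, m) > 0 else "not profitable"
            print(f"n={n:2d}, merge m={m:2d} ({m/n:4.0%} of industry): {status}")
```

In this toy setting the breakeven share for n = 5 is exactly 80 per cent, the threshold cited in the abstract.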
Abstract:
Cefaclor is not reducible at a mercury electrode, but it can be determined polarographically and by cathodic stripping voltammetry as its initial alkaline degradation product, which is obtained in high yield by hydrolysis of cefaclor in Britton-Robinson (B-R) buffer pH 10 at 50 °C for 30 min (reduction peak at pH 10, -0.70 V). Differential pulse polarographic calibration graphs are linear up to at least 1 × 10⁻⁴ mol l⁻¹. Recoveries of 93% of the cefaclor (n = 3) were obtained from urine spiked with 38.6 µg ml⁻¹ using this polarographic method, with 1 ml urine made up to 10 ml with pH 10 buffer. Using cathodic stripping voltammetry and accumulating at a hanging mercury drop electrode at -0.2 V for 30 s, linear calibration graphs were obtained from 0.35 to 40 µg ml⁻¹ cefaclor in B-R buffer pH 10. A relative standard deviation of 4.2% (n = 5) was obtained, and the limit of detection was calculated to be 2.9 ng ml⁻¹. Direct determination of cefaclor in human urine (1 ml of urine made up to 10 ml with pH 10 buffer) spiked to 0.39 µg ml⁻¹ was achieved (recovery 98.6%). © 1999 Elsevier B.V. All rights reserved.
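The calibration-and-detection-limit arithmetic below follows the usual differential-pulse workflow rather than the paper's exact data handling; the concentrations, peak currents, and blank noise are hypothetical placeholders.

```python
import numpy as np

conc = np.array([0.35, 1.0, 5.0, 10.0, 20.0, 40.0])       # µg/mL, within the linear range
current = np.array([0.9, 2.6, 12.8, 25.3, 50.9, 101.8])    # nA, illustrative only

slope, intercept = np.polyfit(conc, current, 1)            # least-squares calibration line
blank_sd = 0.0025                                          # nA, hypothetical blank standard deviation

lod = 3 * blank_sd / slope                                 # detection limit, µg/mL

def recovery(found, spiked):
    """Percentage recovery of a spiked amount."""
    return 100.0 * found / spiked

print(f"slope = {slope:.3f} nA per µg/mL, intercept = {intercept:.3f} nA")
print(f"estimated LOD = {lod * 1000:.2f} ng/mL")
print(f"example recovery: {recovery(0.385, 0.39):.1f} %")
```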
Abstract:
A simple procedure for recovering potassium dichromate (K2Cr2O7) from residual sulphochromic solution by means of cooling crystallization is proposed in the present work. Lowering the temperature favored the crystallization of K2Cr2O7 owing to the decrease in solubility. A volume of 5.0 L of sulphochromic waste containing 48.08 g L⁻¹ of Cr(VI) was treated, and the crystallization was performed in three steps until no further crystals formed. In each step the total Cr content was determined by flame atomic absorption spectrometry and Cr(VI) by the colorimetric method with 1,5-diphenylcarbazide, resulting in the removal of 91% and 92% of total Cr and Cr(VI), respectively. After the last step, the Cr(VI) remaining in solution was reduced to Cr(III) by the addition of NaHSO3, recovering approximately 36.13 g of Cr(OH)3 by precipitation at pH 8. The final supernatant was discarded, since the chromium content was below the maximum limits established by the Brazilian legislation for effluent discharge, which correspond to 0.10 and 1.0 mg L⁻¹ of Cr(VI) and Cr(III), respectively. In total, 628.4 g of K2Cr2O7 were recovered, and the salt was characterized by X-ray diffraction and differential thermal analysis. Its applicability in determining soil organic matter was compared with that of standard K2Cr2O7; no significant difference was found, indicating that the recovered compound may be incorporated into routine analyses. This recovery process allowed the reuse of K2Cr2O7, reducing both the cost of acquiring new reagents and the environmental impact caused by the inadequate disposal of sulphochromic solutions.
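As a consistency check on the figures quoted above (a rough mass balance, not the authors' procedure), the chromium contained in the recovered K2Cr2O7 can be compared with the Cr(VI) initially present in the 5.0 L of waste.

```python
# Back-of-the-envelope mass balance using the abstract's figures.
M_K2Cr2O7 = 294.18   # g/mol
M_Cr = 52.00         # g/mol

initial_cr = 5.0 * 48.08                       # g of Cr(VI) in 5.0 L at 48.08 g/L
recovered_salt = 628.4                         # g of K2Cr2O7 crystallized
cr_in_salt = recovered_salt * 2 * M_Cr / M_K2Cr2O7

removal = 100 * cr_in_salt / initial_cr
print(f"Cr removed as K2Cr2O7: {cr_in_salt:.1f} g of {initial_cr:.1f} g "
      f"({removal:.0f} %), consistent with the ~92 % reported.")
```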
Abstract:
This paper presents a first attempt to estimate mixing parameters from sea level observations using a particle method based on importance sampling. The method is applied to an ensemble of 128 members of model simulations with a global ocean general circulation model of high complexity. Idealized twin experiments demonstrate that the method is able to accurately reconstruct mixing parameters from an observed mean sea level field when mixing is assumed to be spatially homogeneous. An experiment with inhomogeneous eddy coefficients fails because of the limited ensemble size. This is overcome by the introduction of local weighting, which is able to capture spatial variations in mixing qualitatively. Because the sensitivity of sea level to variations in mixing is higher for low values of the mixing coefficients, the method works relatively well in regions of low eddy activity.
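A toy illustration of the importance-sampling step described above, with a scalar stand-in forward model in place of the ocean general circulation model; the "mixing coefficient", observation error, and ensemble bounds are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(kappa):
    """Stand-in forward model: 'sea level' response to a mixing coefficient."""
    return 1.0 / (1.0 + kappa)

true_kappa = 0.8
obs_error = 0.01
observation = forward(true_kappa) + rng.normal(0.0, obs_error)

ensemble = rng.uniform(0.1, 2.0, size=128)        # 128 members, as in the paper
misfit = observation - forward(ensemble)
weights = np.exp(-0.5 * (misfit / obs_error) ** 2)  # Gaussian likelihood weights
weights /= weights.sum()

estimate = np.sum(weights * ensemble)               # weighted posterior mean
print(f"true kappa = {true_kappa}, importance-sampling estimate = {estimate:.3f}")
```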
Abstract:
Grass reference evapotranspiration (ETo) is an important agrometeorological parameter for climatological and hydrological studies, as well as for irrigation planning and management. There are several methods to estimate ETo, but their performance in different environments is diverse, since all of them have some empirical background. The FAO Penman-Monteith (FAO PM) method has been considered a universal standard for estimating ETo for more than a decade. This method considers many parameters related to the evapotranspiration process: net radiation (Rn), air temperature (T), vapor pressure deficit (Δe), and wind speed (U); and has presented very good results when compared to data from lysimeters populated with short grass or alfalfa. In some conditions, the use of the FAO PM method is restricted by the lack of input variables. In these cases, when data are missing, the option is to calculate ETo by the FAO PM method using estimated input variables, as recommended by FAO Irrigation and Drainage Paper 56. Based on that, the objective of this study was to evaluate the performance of the FAO PM method to estimate ETo when Rn, Δe, and U data are missing, in Southern Ontario, Canada. Other alternative methods were also tested for the region: Priestley-Taylor, Hargreaves, and Thornthwaite. Data from 12 locations across Southern Ontario, Canada, were used to compare ETo estimated by the FAO PM method with a complete data set and with missing data. The alternative ETo equations were also tested and calibrated for each location. When relative humidity (RH) and U data were missing, the FAO PM method was still a very good option for estimating ETo for Southern Ontario, with RMSE smaller than 0.53 mm day⁻¹. For these cases, U data were replaced by the normal values for the region and Δe was estimated from temperature data. The Priestley-Taylor method was also a good option for estimating ETo when U and Δe data were missing, mainly when calibrated locally (RMSE = 0.40 mm day⁻¹). When Rn was missing, the FAO PM method was not good enough for estimating ETo, with RMSE increasing to 0.79 mm day⁻¹. When only T data were available, the adjusted Hargreaves and modified Thornthwaite methods were better options for estimating ETo than the FAO PM method, since the RMSEs from these methods, respectively 0.79 and 0.83 mm day⁻¹, were significantly smaller than that obtained by FAO PM (RMSE = 1.12 mm day⁻¹). © 2009 Elsevier B.V. All rights reserved.
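For the temperature-only case discussed above, a short sketch of the Hargreaves-Samani estimate and an RMSE comparison against full-data FAO PM values; the station temperatures, extraterrestrial radiation, and reference ETo series below are illustrative, not taken from the Ontario data set.

```python
import numpy as np

def eto_hargreaves(tmax, tmin, ra):
    """Hargreaves-Samani reference ET (mm/day) from daily temperature extremes."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

def rmse(estimated, reference):
    return np.sqrt(np.mean((np.asarray(estimated) - np.asarray(reference)) ** 2))

# Hypothetical week of data for one station
tmax = np.array([26.0, 27.5, 24.0, 22.0, 25.5, 28.0, 29.0])   # °C
tmin = np.array([14.0, 15.0, 12.5, 11.0, 13.0, 16.0, 17.0])   # °C
ra   = np.array([15.2, 15.3, 15.3, 15.4, 15.4, 15.5, 15.5])   # mm/day equivalent
eto_full_pm = np.array([4.1, 4.4, 3.6, 3.2, 3.9, 4.7, 4.9])   # FAO PM with full data (assumed)

eto_hs = eto_hargreaves(tmax, tmin, ra)
print("Hargreaves ETo:", np.round(eto_hs, 2))
print(f"RMSE vs. full-data FAO PM: {rmse(eto_hs, eto_full_pm):.2f} mm/day")
```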
Abstract:
INTRODUCTION: The goal was to develop an in-house serological method with high specificity and sensitivity for diagnosis and monitoring of Chagas disease morbidity. METHODS: With this purpose, the reactivities of anti-T. cruzi IgG and its subclasses were tested in successive serum dilutions of patients from Berilo municipality, Jequitinhonha Valley, Minas Gerais, Brazil. The performance of the in-house ELISA was also evaluated in samples from other relevant infectious diseases, including HIV, hepatitis C (HCV), syphilis (SYP), visceral leishmaniasis (VL), and American tegumentary leishmaniasis (ATL), and noninfected controls (NI). Further analysis was performed to evaluate the applicability of this in-house methodology for monitoring Chagas disease morbidity in three groups of patients: indeterminate (IND), cardiac (CARD), and digestive/mixed (DIG/Mix), based on their clinical status. RESULTS: The analysis of total IgG reactivity at serum dilution 1:40 was an excellent approach to Chagas disease diagnosis (100% sensitivity and specificity). The analysis of IgG subclasses showed cross-reactivity, mainly with NI, VL, and ATL, at all selected serum dilutions. Based on the data analysis, the IND group displayed higher IgG3 levels and the DIG/Mix group presented higher levels of total IgG than the IND and CARD groups. CONCLUSIONS: These findings demonstrate that the methodology has promising applicability in the analysis of anti-T. cruzi IgG reactivity for the differential diagnosis and evaluation of Chagas disease morbidity.
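A minimal sketch of how sensitivity and specificity at a fixed serum dilution and positivity cutoff could be computed; the optical densities, cutoff, and group labels below are hypothetical, not the study's data.

```python
import numpy as np

def sensitivity_specificity(reactivity, is_chagas, cutoff):
    """Sensitivity and specificity of a positivity cutoff on ELISA reactivities."""
    positive = reactivity >= cutoff
    tp = np.sum(positive & is_chagas)
    fn = np.sum(~positive & is_chagas)
    tn = np.sum(~positive & ~is_chagas)
    fp = np.sum(positive & ~is_chagas)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical optical densities at the 1:40 dilution
reactivity = np.array([1.9, 2.1, 1.7, 2.4, 0.3, 0.2, 0.4, 0.5])
is_chagas  = np.array([True, True, True, True, False, False, False, False])

sens, spec = sensitivity_specificity(reactivity, is_chagas, cutoff=1.0)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```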
Abstract:
The inclusive jet cross-section is measured in proton-proton collisions at a centre-of-mass energy of 7 TeV using a data set corresponding to an integrated luminosity of 4.5 fb⁻¹ collected with the ATLAS detector at the Large Hadron Collider in 2011. Jets are identified using the anti-kt algorithm with radius parameter values of 0.4 and 0.6. The double-differential cross-sections are presented as a function of the jet transverse momentum and the jet rapidity, covering jet transverse momenta from 100 GeV to 2 TeV. Next-to-leading-order QCD calculations corrected for non-perturbative and electroweak effects, as well as Monte Carlo simulations with next-to-leading-order matrix elements interfaced to parton showering, are compared to the measured cross-sections. A quantitative comparison of the measured cross-sections to the QCD calculations using several sets of parton distribution functions is performed.
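Schematically, the double-differential cross-section quoted above is the corrected jet yield divided by the integrated luminosity and the bin widths; the sketch below uses made-up counts and binning purely to show the arithmetic, not ATLAS data.

```python
# d2(sigma) / (dpT dy) = N / (L * delta_pT * delta_y), per pT-rapidity bin.
luminosity = 4.5e3          # pb^-1 (4.5 fb^-1)
delta_y = 1.0               # rapidity bin width
pt_bins = [(100, 116), (116, 134), (134, 152)]   # GeV, illustrative binning
unfolded_counts = [1.2e6, 7.5e5, 4.9e5]          # corrected jet yields (made up)

for (lo, hi), n in zip(pt_bins, unfolded_counts):
    xsec = n / (luminosity * (hi - lo) * delta_y)   # pb / GeV per unit rapidity
    print(f"{lo:>4}-{hi:<4} GeV: d2sigma/dpT dy = {xsec:.3g} pb/GeV")
```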
Abstract:
The development and tests of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e. an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as the conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
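A compact ML-EM iteration, the maximum-likelihood building block mentioned above (the entropy-prior FMAPE acceleration itself is not reproduced here); the system matrix and counts are tiny toy values, and the uniform starting image follows the recommendation in the abstract.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM reconstruction for Poisson data y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])              # uniform initial image
    sens = A.sum(axis=0)                 # per-pixel sensitivity
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12          # guard against division by zero
        x *= (A.T @ (y / proj)) / sens   # multiplicative EM update
    return x

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0],
              [0.5, 1.0, 0.5]])          # toy projection matrix
x_true = np.array([2.0, 1.0, 3.0])
rng = np.random.default_rng(1)
y = rng.poisson(A @ x_true).astype(float)

print("reconstruction:", np.round(mlem(A, y), 2), "true:", x_true)
```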
Abstract:
Iowa has been using low slump concrete for repair and surfacing of deteriorated bridge decks on a routine basis since the mid-1960s. More than 150 bridges have been resurfaced by this method with good results. A study was initiated in 1973 to evaluate 15 bridges resurfaced with low slump concrete and one bridge resurfaced with latex modified concrete. The evaluation includes an assessment of concrete physical properties, chloride penetration rates, concrete consolidation, and riding qualities of the finished bridge deck. Results indicate that the overall properties of these two types of concrete are quite similar and have resulted in a contractor option concerning which system shall be used on bridge deck repair/resurfacing projects.
Abstract:
In this study, the start-up procedures of an anaerobic treatment system with three horizontal anaerobic reactors (R1, R2 and R3), installed in series, each with a volume of 1.2 L, were evaluated. R1 contained a sludge blanket, while R2 and R3 contained support media of bamboo and coconut fiber, respectively. The influent was synthetic wastewater from mechanical pulping of the coffee fruit by the wet method, with a mean total chemical oxygen demand (CODtotal) of 16,003 mg L⁻¹. The hydraulic retention time (HRT) in each reactor was 30 h. The volumetric organic loading (VOL) applied in R1 varied from 8.9 to 25.0 g of CODtotal (L d)⁻¹. The mean removal efficiencies of CODtotal varied from 43 to 97% in the treatment system (R1+R2+R3), stabilizing above 80% after 30 days of operation. The mean methane content of the biogas was 70 to 76%, the mean volumetric production was 1.7 L CH4 (L reactor d)⁻¹ in the system, and the highest conversions were around 0.20 L CH4 (g CODremoved)⁻¹ in R1 and R2. The mean values of pH in the effluents ranged from 6.8 to 8.3, and the mean values of total volatile acids remained below 200 mg L⁻¹ in the effluent of R3. The concentrations of total phenols in the influent ranged from 45 to 278 mg L⁻¹, and the mean removal efficiency was 52%. The start-up of the anaerobic treatment system was achieved after 30 days of operation as a result of inoculation with anaerobic sludge containing an active microbiota.
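A short arithmetic sketch of the operating figures quoted above (hydraulic retention time, volumetric organic loading, and COD removal); the effluent COD used for the removal calculation is an assumed illustrative value.

```python
reactor_volume = 1.2     # L
hrt_hours = 30.0         # hydraulic retention time in each reactor
cod_in = 16_003.0        # mg/L, mean total COD of the influent

flow = reactor_volume / (hrt_hours / 24.0)        # L/day through the system
vol = flow * (cod_in / 1000.0) / reactor_volume   # g CODtotal per (L d) applied to R1
print(f"volumetric organic loading on R1: {vol:.1f} g COD (L d)^-1")

cod_out = 2_500.0        # mg/L, hypothetical effluent COD after R3
removal = 100.0 * (cod_in - cod_out) / cod_in
print(f"overall COD removal: {removal:.0f} %")    # ~84 %, within the 43-97 % range
```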
Abstract:
We present a technique for the rapid and reliable evaluation of linear-functional outputs of elliptic partial differential equations with affine parameter dependence. The essential components are (i) rapidly uniformly convergent reduced-basis approximations — Galerkin projection onto a space WN spanned by solutions of the governing partial differential equation at N (optimally) selected points in parameter space; (ii) a posteriori error estimation — relaxations of the residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs; and (iii) offline/online computational procedures — stratagems that exploit affine parameter dependence to decouple the generation and projection stages of the approximation process. The operation count for the online stage — in which, given a new parameter value, we calculate the output and associated error bound — depends only on N (typically small) and the parametric complexity of the problem. The method is thus ideally suited to the many-query and real-time contexts. In this paper, based on this technique we develop a robust inverse computational method for the very fast solution of inverse problems characterized by parametrized partial differential equations. The essential ideas are threefold: first, we apply the technique to the forward problem for the rapid certified evaluation of PDE input-output relations and associated rigorous error bounds; second, we incorporate the reduced-basis approximation and error bounds into the inverse problem formulation; and third, rather than regularize the goodness-of-fit objective, we may instead identify all (or almost all, in the probabilistic sense) system configurations consistent with the available experimental data — well-posedness is reflected in a bounded "possibility region" that furthermore shrinks as the experimental error is decreased.
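The offline/online split described in (iii) can be illustrated on a small affinely parametrized linear system standing in for the discretized PDE; the a posteriori error bounds of (ii) are not reproduced, and all matrices below are chosen arbitrarily.

```python
import numpy as np

n = 200
rng = np.random.default_rng(0)
A0 = np.diag(2.0 + rng.random(n))         # toy affine terms: A(mu) = A0 + mu * A1
A1 = np.diag(rng.random(n))
f = np.ones(n)                            # load vector
ell = rng.random(n)                       # output functional

# --- offline stage: snapshots and parameter-independent projections ---
mus_train = [0.1, 1.0, 5.0, 20.0]
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, f) for mu in mus_train])
V, _ = np.linalg.qr(snapshots)            # orthonormal reduced basis, N = 4
A0_N, A1_N = V.T @ A0 @ V, V.T @ A1 @ V
f_N, ell_N = V.T @ f, V.T @ ell

# --- online stage: cost depends only on N, not on n ---
def output(mu):
    u_N = np.linalg.solve(A0_N + mu * A1_N, f_N)
    return ell_N @ u_N

mu_test = 3.7
exact = ell @ np.linalg.solve(A0 + mu_test * A1, f)
print(f"reduced-basis output {output(mu_test):.6f} vs. truth {exact:.6f}")
```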
Abstract:
Genetic association analyses of family-based studies with ordered categorical phenotypes are often conducted using methods intended either for quantitative or for binary traits, which can lead to suboptimal analyses. Here we present an alternative likelihood-based method of analysis for single nucleotide polymorphism (SNP) genotypes and ordered categorical phenotypes in nuclear families of any size. Our approach, which extends our previous work for binary phenotypes, permits straightforward inclusion of covariate, gene-gene and gene-covariate interaction terms in the likelihood, incorporates a simple model for ascertainment and allows for family-specific effects in the hypothesis test. Additionally, our method produces interpretable parameter estimates and valid confidence intervals. We assess the proposed method using simulated data and apply it to a polymorphism in the C-reactive protein (CRP) gene typed in families collected to investigate human systemic lupus erythematosus. By including sex interactions in the analysis, we show that the polymorphism is associated with anti-nuclear autoantibody (ANA) production in females, while there appears to be no effect in males.
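As a simplified stand-in for the model above (it ignores family structure, ascertainment, and family-specific effects), the sketch below fits a proportional-odds regression with a SNP-by-sex interaction to simulated unrelated individuals by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(42)
n = 600
snp = rng.binomial(2, 0.3, n)             # genotype coded 0/1/2
female = rng.integers(0, 2, n)            # sex indicator
eta = 0.0 * snp + 0.8 * snp * female      # true effect present only in females
y = np.digitize(eta + rng.logistic(size=n), [-0.5, 1.0])   # 3 ordered categories

X = np.column_stack([snp, snp * female])
K = 3
p = X.shape[1]

def negloglik(params):
    """Negative log-likelihood of a proportional-odds (ordinal logistic) model."""
    beta, c0, log_gaps = params[:p], params[p], params[p + 1:]
    cuts = np.concatenate([[c0], c0 + np.cumsum(np.exp(log_gaps))])   # ordered cutpoints
    lin = X @ beta
    upper = np.where(y == K - 1, 1.0, expit(cuts[np.minimum(y, K - 2)] - lin))
    lower = np.where(y == 0, 0.0, expit(cuts[np.maximum(y - 1, 0)] - lin))
    return -np.sum(np.log(upper - lower + 1e-12))

fit = minimize(negloglik, np.zeros(p + K - 1), method="BFGS")
print("estimated [SNP main effect, SNP x female interaction]:", np.round(fit.x[:p], 2))
```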