993 results for "Combining method"
Abstract:
Ocular toxoplasmosis is the principal cause of posterior uveitis and a leading cause of blindness. Animal models are required to improve our understanding of the pathogenesis of this disease. The method currently used to detect retinal cysts in animals involves microscopic observation of all the sections from infected eyes; however, this method is time-consuming and lacks sensitivity. We have developed a rapid, sensitive method for observing retinal cysts in mice infected with Toxoplasma gondii. It combines flat-mounting of the retina - a compromise between macroscopic observation and global analysis of this tissue - with the use of an avirulent recombinant strain of T. gondii expressing the Escherichia coli beta-galactosidase gene, which is visually detectable at the submacroscopic level. A unilateral single-cyst infection was found in six of 17 mice killed within 28 days of infection, whereas a bilateral infection was found in only one mouse. There was no correlation between the number of brain cysts and ocular infection.
Abstract:
We introduce a new parameter to investigate replica symmetry breaking transitions using finite-size scaling methods. Based on exact equalities initially derived by F. Guerra, this parameter is a direct check of the self-averaging character of the spin-glass order parameter. The new parameter can be used to study models with time-reversal symmetry, but its greatest interest lies in models where this symmetry is absent. We apply the method to long-range and short-range Ising spin glasses, with and without a magnetic field, as well as to short-range multispin-interaction spin glasses.
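For orientation, a commonly used finite-size indicator of (non-)self-averaging of the overlap q, of the kind derived from Guerra's equalities, has the form below; this is the textbook construction, not necessarily the exact parameter defined in the paper.

```latex
% A standard non-self-averaging ratio for the spin-glass overlap q: bars denote
% disorder (sample) averages, angle brackets thermal averages. It vanishes in
% the thermodynamic limit whenever <q^2> is self-averaging.
\[
  G \;=\; \frac{\overline{\langle q^{2}\rangle^{2}} \;-\; \overline{\langle q^{2}\rangle}^{\,2}}
               {\overline{\langle q^{4}\rangle} \;-\; \overline{\langle q^{2}\rangle}^{\,2}}
\]
```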
Abstract:
Studies testing the High Energy Moisture Characteristic (HEMC) technique in tropical soils are still scarce. This method allows the effects of different management systems to be evaluated. This study investigated the aggregation state of an Oxisol under coffee with Brachiaria between crop rows and surface-applied gypsum rates, using HEMC. Soil from an experimental area in the Upper São Francisco region, Minas Gerais, was sampled in the coffee rows at depths of 0.05 and 0.20 m. The treatments consisted of agricultural gypsum applied at rates of 0, 7, and 28 Mg ha-1 to the soil surface of the coffee rows, between which Brachiaria was grown and periodically cut, and were compared with a treatment without Brachiaria between the coffee rows and with no gypsum application. To determine the aggregation state by the HEMC method, soil aggregates were placed in a Büchner funnel (500 mL) and wetted with a peristaltic pump fitted with a volumetric syringe. Wetting was applied at two pre-set rates: slow (2 mm h-1) and fast (100 mm h-1). Once saturated, the aggregates were exposed to a gradually increasing tension, imposed by the displacement of a water column (from 0 to 30 cm), to obtain the moisture retention curve [M = f(Ψ)], from which the stability parameters were calculated: modal suction, volume of drainable pores (VDP), stability index (slow and fast), VDP ratio, and stability ratio. The HEMC method was sensitive in quantifying the aggregate stability parameters and, regardless of gypsum use, the soil managed with Brachiaria between the coffee rows, with the regular cuttings deposited toward the crop rows, exhibited decreased susceptibility to disaggregation.
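As a rough illustration of how such parameters can be extracted from the measured M = f(Ψ) curve, the sketch below computes a specific water capacity peak, a drainable-pore volume, and a fast/slow stability ratio from synthetic slow- and fast-wetting curves. The exact definitions vary between HEMC studies, so the formulas here are assumptions rather than the paper's expressions.

```python
# Hedged sketch: deriving HEMC-style stability parameters from a moisture-
# tension curve M(psi). Formulas (capacity peak, VDP as area under the peak,
# stability index = VDP / modal suction) are illustrative assumptions.
import numpy as np

def hemc_parameters(psi_cm, moisture):
    """psi_cm: tensions (cm of water), moisture: water content M(psi)."""
    capacity = -np.gradient(moisture, psi_cm)        # specific water capacity dM/dpsi
    modal_suction = psi_cm[np.argmax(capacity)]      # tension at the capacity peak
    # volume of drainable pores: area under the capacity curve (trapezoid rule)
    vdp = float(np.sum(0.5 * (capacity[1:] + capacity[:-1]) * np.diff(psi_cm)))
    stability_index = vdp / modal_suction
    return modal_suction, vdp, stability_index

# Synthetic slow- and fast-wetting curves over the 0-30 cm tension range
psi = np.linspace(0.1, 30.0, 120)
m_slow = 0.45 - 0.20 / (1.0 + np.exp(-(psi - 12.0)))   # stable aggregates drain late
m_fast = 0.45 - 0.28 / (1.0 + np.exp(-(psi - 6.0)))    # fast wetting: earlier, larger drainage

_, vdp_s, si_s = hemc_parameters(psi, m_slow)
_, vdp_f, si_f = hemc_parameters(psi, m_fast)
print(f"VDP ratio (fast/slow):       {vdp_f / vdp_s:.2f}")
print(f"stability ratio (fast/slow): {si_f / si_s:.2f}")
```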
Abstract:
OBJECTIVE: To evaluate the quantitative antibiogram as an epidemiological tool for the prospective typing of methicillin-resistant Staphylococcus aureus (MRSA) and to compare it with ribotyping. METHODS: The method is based on multivariate analysis of the inhibition zone diameters of antibiotics in disk diffusion tests. Five antibiotics were used (erythromycin, clindamycin, cotrimoxazole, gentamicin, and ciprofloxacin). Ribotyping was performed using seven restriction enzymes (EcoRV, HindIII, KpnI, PstI, EcoRI, SfuI, and BamHI). SETTING: A 1,000-bed tertiary university medical center. RESULTS: During a 1-year period, 31 patients were found to be infected or colonized with MRSA. Cluster analysis of the antibiogram data showed nine distinct antibiotypes. Four antibiotypes were isolated from multiple patients (2, 4, 7, and 13 patients, respectively). Five additional antibiotypes were isolated from the remaining five patients. When analyzed with respect to the epidemiological data, the method was found to be equivalent to ribotyping. Among 206 staff members who were screened, six were carriers of MRSA. Both typing methods identified concordant MRSA types in staff members and in the patients under their care. CONCLUSIONS: The quantitative antibiogram was found to be equivalent to ribotyping as an epidemiological tool for typing MRSA in our setting. Thus, this simple, rapid, and readily available method appears to be suitable for the prospective surveillance and control of MRSA in hospitals that do not have molecular typing facilities and in which MRSA isolates are not uniformly resistant or susceptible to the antibiotics tested.
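The multivariate analysis referred to above can be pictured as hierarchical clustering of the zone-diameter profiles; the sketch below shows one such workflow with SciPy. The linkage method, the distance cut-off, and the zone diameters themselves are illustrative assumptions, not the study's actual settings or data.

```python
# Hedged sketch of clustering disk-diffusion inhibition-zone diameters (mm)
# for five antibiotics into antibiotypes.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

antibiotics = ["erythromycin", "clindamycin", "cotrimoxazole", "gentamicin", "ciprofloxacin"]
# Rows: isolates; columns: zone diameters in mm (synthetic values)
zones = np.array([
    [6, 6, 24, 10, 8],
    [6, 7, 23, 11, 9],
    [20, 19, 25, 22, 21],
    [21, 20, 26, 23, 22],
    [6, 6, 10, 9, 8],
])
Z = linkage(zones, method="ward")                       # agglomerative clustering
antibiotypes = fcluster(Z, t=5, criterion="distance")   # cut the dendrogram at distance 5
print(dict(zip(range(1, len(zones) + 1), antibiotypes)))
```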
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as past claims. There are two main sources of variability in the claims development process: the variability of the speed with which claims are settled and the variability in claim severity between accident years. Large changes in these processes generate distortions in the estimation of claims reserves. The main objective of this thesis is to provide an indicator that, first, identifies and quantifies these two influences and, second, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines a Dirichlet-Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of ultimate amounts) and Dirichlet-Multinomial (for the distribution of incremental claims payments). The second model makes it possible to express the variability in the speed of the reporting process and in the development of claim severity as a function of two parameters of these distributions: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and maximum likelihood. The results were tested on simulated data and then on real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data include different development patterns and specificities. The thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma distribution, the model exhibits positive correlation between past and future claims payments, which suggests that the Chain-Ladder method is appropriate for claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expected future payments and therefore high claims reserve estimates. The correlation is negative when the Dirichlet parameter is lower than the shape parameter of the Gamma distribution, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and few claims remain to be expected subsequently. The extreme case arises when all claims are reported at the same time, leading to expected future payments of zero or equal to the aggregated amount of the ultimate paid claims. In this latter case, the Chain-Ladder method is not recommended.
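One plausible way to write the hierarchical structure described above, in my own notation and under an assumed reading of the NBDM model (not necessarily the thesis's exact specification), is the following.

```latex
% Assumed structure: a Poisson-Gamma ultimate combined with a
% Dirichlet-Multinomial split over development periods.
\[
  \lambda \sim \mathrm{Gamma}(\alpha,\beta), \qquad
  N \mid \lambda \sim \mathrm{Poisson}(\lambda),
\]
\[
  (p_1,\dots,p_k) \sim \mathrm{Dirichlet}(\gamma_1,\dots,\gamma_k), \qquad
  (X_1,\dots,X_k) \mid N,\,p \sim \mathrm{Multinomial}(N,\,p),
\]
% N: ultimate claims of an accident year; X_j: incremental payment in
% development period j; alpha: shape parameter of the Gamma; the total
% concentration sum_j gamma_j plays the role of the "Dirichlet parameter",
% and its relation to alpha governs the sign of the correlation between
% past and future increments, as described in the abstract.
```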
Abstract:
Despite the considerable environmental importance of mercury (Hg), given its high toxicity and its ability to contaminate large areas via atmospheric deposition, little is known about its behavior in soils, especially tropical soils, compared with other heavy metals. This lack of information arises because analytical methods for the determination of Hg are more laborious and expensive than those for other heavy metals. The situation is even more precarious regarding the speciation of Hg in soils, since sequential extraction methods are also inefficient for this metal. The aim of this paper is to present thermal desorption associated with atomic absorption spectrometry (TDAAS) as an efficient tool for the quantitative determination of Hg in soils. The method consists of the release of Hg by heating, followed by its quantification by atomic absorption spectrometry. It was developed by constructing calibration curves in different soil samples spiked with increasing volumes of standard Hg2+ solutions. Performance parameters - accuracy, precision, and the limits of detection and quantification - were evaluated. No matrix interference was detected. Certified reference samples and comparison with a Direct Mercury Analyzer (DMA), another well-established technique, were used to validate the method, which proved to be accurate and precise.
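The calibration-curve construction and the detection/quantification limits mentioned above can be illustrated as follows; the data and the 3.3·s/slope and 10·s/slope criteria are common conventions assumed for the sketch, not values from the paper.

```python
# Hedged sketch: fitting a calibration curve of instrument response versus
# added Hg and estimating detection/quantification limits.
import numpy as np

hg_ng = np.array([0.0, 5.0, 10.0, 20.0, 40.0])            # added Hg2+ (ng), synthetic
signal = np.array([0.002, 0.051, 0.098, 0.201, 0.405])    # integrated absorbance, synthetic

slope, intercept = np.polyfit(hg_ng, signal, 1)
residuals = signal - (slope * hg_ng + intercept)
s_res = residuals.std(ddof=2)                              # standard error of the fit

lod = 3.3 * s_res / slope    # limit of detection (ng)
loq = 10.0 * s_res / slope   # limit of quantification (ng)
print(f"slope={slope:.4f} abs/ng, LOD={lod:.2f} ng, LOQ={loq:.2f} ng")
```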
Abstract:
The high cost and long time required to determine a retention curve by the conventional Richards chamber and Haines funnel methods limit their use; therefore, alternative methods that facilitate this routine are needed. The filter paper method for determining the soil water retention curve was evaluated and compared with the conventional method. Undisturbed samples were collected from five different soils. Using a Haines funnel and a Richards chamber, moisture content was obtained at tensions of 2, 4, 6, 8, 10, 33, 100, 300, 700, and 1,500 kPa. In the filter paper test, the soil matric potential was obtained from the filter-paper calibration equation, and moisture was subsequently determined from the gravimetric difference. The van Genuchten model was fitted to the observed data of soil matric potential versus moisture. Moisture values from the conventional and filter paper methods, estimated by the van Genuchten model, were compared. The filter paper method, with an R2 of 0.99, can be used to determine water retention curves of agricultural soils as an alternative to the conventional method.
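For reference, fitting the van Genuchten model to matric potential versus moisture data can be done as sketched below; the data points are synthetic stand-ins for the measured pairs, and the parameter bounds are illustrative assumptions.

```python
# Hedged sketch: fitting the van Genuchten (1980) retention model to
# tension/moisture pairs such as those described above.
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """h: tension (kPa); returns volumetric water content."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.array([2, 4, 6, 8, 10, 33, 100, 300, 700, 1500], dtype=float)
theta = np.array([0.46, 0.44, 0.42, 0.41, 0.40, 0.34, 0.29, 0.25, 0.23, 0.21])

popt, _ = curve_fit(van_genuchten, h, theta,
                    p0=[0.15, 0.47, 0.1, 1.5],
                    bounds=([0, 0.3, 1e-4, 1.01], [0.3, 0.6, 5.0, 4.0]))
theta_r, theta_s, alpha, n = popt
print(f"theta_r={theta_r:.3f}, theta_s={theta_s:.3f}, alpha={alpha:.3f} kPa^-1, n={n:.2f}")
```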
Abstract:
The combined incorporation of sewage sludge (SS) and oat straw (OS) into the soil can increase straw carbon mineralization and microbial nitrogen immobilization. This hypothesis was tested in two laboratory experiments in which SS was incorporated into the soil with and without OS. A treatment in which only straw was incorporated and a control with only soil were also evaluated. The release of CO2 and mineral N in the soil after incorporation of the organic materials was evaluated for 110 days. Cumulative C mineralization reached 30.1 % for SS and 54.7 % for OS. When these organic materials were incorporated together into the soil, straw C mineralization was not altered. About 60 % of the organic N in the SS was mineralized after 110 days; this N mineralization index is twice that defined by Resolution 375/2006 of the National Environmental Council. The combined incorporation of SS and OS into the soil caused microbial N immobilization of 5.9 kg Mg-1 of OS (mean of 3.5 kg Mg-1). The results of this study indicate that SS did not increase straw C mineralization, but the SS rate should be adjusted to compensate for the microbial N immobilization caused by the straw.
Abstract:
Particle density, gravimetric and volumetric water contents, and porosity are important basic concepts for characterizing porous systems such as soils. This paper presents an experimental method to measure these physical properties, applicable in experimental physics classes, using porous media samples consisting of spheres with the same diameter (monodisperse medium) and with different diameters (polydisperse medium). Soil samples are not used, given the difficulty of working with this porous medium in laboratories dedicated to teaching basic experimental physics. The paper describes the procedure to be followed and the results of two case studies, one in a monodisperse medium and the other in a polydisperse medium. The particle density results were very close to the theoretical values, with a relative deviation (RD) of -2.9 % for the lead spheres and +0.1 % for the iron spheres. The RD of porosity was also low, -3.6 % for the lead spheres and -1.2 % for the iron spheres, when comparing the two procedures - one using the particle and porous-medium densities, the other using the saturated volumetric water content - and the monodisperse and polydisperse media.
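The two porosity estimates being compared can be illustrated with a small numerical example; all the numbers below are made up for illustration and are not the paper's data.

```python
# Hedged sketch of the two porosity estimates: one from the particle and bulk
# (porous-medium) densities, the other from the saturated volumetric water content.
particle_density = 11.3   # g cm^-3, close to lead
sample_volume = 100.0     # cm^3 of the packed sphere bed
sphere_mass = 715.0       # g of spheres in the sample

bulk_density = sphere_mass / sample_volume
porosity_from_densities = 1.0 - bulk_density / particle_density

water_volume_at_saturation = 36.5  # cm^3 of water filling the pore space
porosity_from_saturation = water_volume_at_saturation / sample_volume

print(f"porosity (densities):  {porosity_from_densities:.3f}")
print(f"porosity (saturation): {porosity_from_saturation:.3f}")
```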
Abstract:
Accurate modeling of flow instabilities requires computational tools able to deal with several interacting scales, from the scale at which fingers are triggered up to the scale at which their effects need to be described. The Multiscale Finite Volume (MsFV) method offers a framework to couple fine- and coarse-scale features by solving a set of localized problems, which are used both to define a coarse-scale problem and to reconstruct the fine-scale details of the flow. The MsFV method can be seen as an upscaling-downscaling technique that is computationally more efficient than standard discretization schemes and more accurate than traditional upscaling techniques. We show that, although the method has proven accurate in modeling density-driven flow under stable conditions, its accuracy deteriorates for unstable flow, and an iterative scheme is required to control the localization error. To avoid the large computational overhead of the iterative scheme, we suggest several adaptive strategies for both flow and transport. In particular, the concentration gradient is used to identify a front region where instabilities are triggered and an accurate (iteratively improved) solution is required. Outside the front region the problem is upscaled, and both flow and transport are solved only at the coarse scale. This adaptive strategy leads to very accurate solutions at roughly the same computational cost as the non-iterative MsFV method. In many circumstances, however, an accurate description of flow instabilities requires a refinement of the computational grid rather than a coarsening. For these problems, we propose a modified iterative MsFV that can be used as a downscaling method (DMsFV). Compared to other grid refinement techniques, the DMsFV clearly separates the computational domain into refined and non-refined regions, which can be treated separately and matched later. This gives great flexibility to employ different physical descriptions in different regions, where different equations could be solved, offering an excellent framework for constructing hybrid methods.
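The gradient-based front detection described above can be sketched as follows; the threshold, grid, and synthetic concentration field are assumptions chosen only to illustrate the adaptive criterion.

```python
# Hedged sketch of the adaptive criterion: cells where the concentration
# gradient exceeds a threshold form the "front region" to be solved accurately
# (iteratively / at fine scale); the rest stays at the coarse scale.
import numpy as np

def flag_front_cells(concentration, threshold=0.05):
    """concentration: 2-D array of cell-centred values on the fine grid."""
    gx, gy = np.gradient(concentration)
    grad_norm = np.hypot(gx, gy)
    return grad_norm > threshold          # boolean mask of front cells

# Synthetic concentration field with a sharp, finger-like front
x = np.linspace(0.0, 1.0, 100)
y = np.linspace(0.0, 1.0, 100)
X, Y = np.meshgrid(x, y)
c = 0.5 * (1.0 - np.tanh((X - 0.4 - 0.05 * np.sin(8 * np.pi * Y)) / 0.02))

front = flag_front_cells(c)
print(f"front region: {front.mean():.1%} of the cells solved accurately")
```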
Abstract:
We present a heuristic method for learning error-correcting output code (ECOC) matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, optimal codeword separation is sacrificed in favor of maximum class discrimination in the partitions. The hierarchical partition set is created using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated using the UCI database and applied to a real problem, the classification of traffic sign images.
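To make the tree-based coding concrete, the sketch below builds an ECOC matrix from a recursive binary partition of the class set, where each internal node contributes one column; the splitting rule here is a balanced placeholder, whereas the method described above learns the partition by maximizing a discriminative criterion.

```python
# Hedged sketch of a tree-structured ECOC coding matrix: each internal node of
# a binary partition of the class set contributes one column (+1 / -1 for the
# two groups, 0 for classes not involved at that node).
import numpy as np

def tree_ecoc(classes):
    columns = []

    def split(node):
        if len(node) < 2:
            return
        half = len(node) // 2
        left, right = node[:half], node[half:]   # placeholder for the learned partition
        col = {c: 0 for c in classes}
        col.update({c: +1 for c in left})
        col.update({c: -1 for c in right})
        columns.append(col)
        split(left)
        split(right)

    split(list(classes))
    return np.array([[col[c] for col in columns] for c in classes])

M = tree_ecoc(range(5))   # 5 classes -> 4 columns (internal nodes of the tree)
print(M)
```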
Abstract:
We develop an abstract extrapolation theory for the real interpolation method that covers and improves the most recent versions of the celebrated theorems of Yano and Zygmund. As a consequence of our method, we give new endpoint estimates of the Sobolev embedding theorem for an arbitrary domain Omega.
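For context, the classical Yano-type extrapolation statement that results of this kind refine reads roughly as follows (textbook form on a finite measure space; the paper's abstract framework generalizes it).

```latex
% Classical Yano-type extrapolation for a sublinear operator T.
\[
  \|Tf\|_{L^{p}} \le \frac{C}{(p-1)^{\alpha}}\,\|f\|_{L^{p}}
  \quad\text{for all } 1 < p \le p_0
  \;\Longrightarrow\;
  T : L\,(\log L)^{\alpha} \longrightarrow L^{1}.
\]
```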
Abstract:
The slow-phase velocity of nystagmus is one of the most sensitive parameters of vestibular function and is currently the standard for evaluating the caloric test. However, assessing this parameter requires recording the response by nystagmography. The aim of this study was to evaluate whether the frequency and duration of caloric nystagmus, as measured in a clinical test with Frenzel glasses, could predict the result of the recorded test. A retrospective analysis of 222 caloric test results recorded by electronystagmography showed a good association among the three parameters for unilateral weakness. The asymmetry observed in the velocity can be predicted by a combination of frequency and duration. On the other hand, no relationship was observed between the parameters for directional preponderance. These results indicate that a clinical caloric test using frequency and duration as parameters can be used to predict the unilateral weakness that would be obtained with nystagmography. We propose an evaluation of the caloric test on the basis of diagrams combining the three response parameters.
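For reference, unilateral weakness is conventionally quantified with Jongkees' formula applied to the responses of the four irrigations, with slow-phase velocity, frequency, or duration serving as the response measure; this is the standard convention, not a result of this study.

```latex
% Jongkees' formula for unilateral weakness (UW); R_w, R_c, L_w, L_c are the
% responses (e.g., slow-phase velocity, frequency, or duration) to the right
% and left warm and cold irrigations.
\[
  \mathrm{UW} \;=\; \frac{(R_w + R_c) - (L_w + L_c)}{R_w + R_c + L_w + L_c} \times 100\,\%
\]
```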