948 results for best estimate method
Abstract:
In the present study, the validation of an enzyme-linked immunosorbent assay (ELISA) for the serodiagnosis of canine brucellosis is described. Two different antigenic extracts, obtained by heat or ultrasonic homogenization of microbial antigens from a wild isolate of Brucella canis bacteria, were compared by ELISA and Western blot (WB). A total of 145 canine sera were used to define the sensitivity, specificity and accuracy of the ELISA as follows: (1) sera from 34 animals with natural B. canis infection, confirmed by blood culture and PCR, as well as 51 serum samples from healthy dogs with negative results in the agar-gel immunodiffusion (AGID) test for canine brucellosis, were used as the control panel for B. canis infection; and (2) to scrutinize the possibility of cross-reactions with other common dog infections in the same geographical area in Brazil, 60 serum samples from dogs harboring known infections by Leptospira sp., Ehrlichia canis, canine distemper virus (CDV), Neospora caninum, Babesia canis and Leishmania chagasi (10 in each group) were included in the study. The ELISA using the heat-soluble bacterial extract (HE-antigen) as antigen showed the best values of sensitivity (91.18%), specificity (100%) and accuracy (96.47%). In the WB analyses, the HE-antigen showed no cross-reactivity with sera from dogs with different infections, while the B. canis sonicate had various protein bands identified by those sera. The performance of the ELISA standardized with the heat-soluble B. canis antigen indicates that this assay can be used as a reliable and practical method to confirm infection by this microorganism, as well as a tool for seroepidemiological studies. (C) 2010 Elsevier Ltd. All rights reserved.
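The reported figures follow directly from confusion counts implied by the panel sizes. A minimal sketch, assuming 31 of the 34 infected dogs tested ELISA-positive and all 51 healthy dogs tested negative (counts inferred from the stated percentages, not taken from the paper):

```python
# Diagnostic performance metrics from confusion counts. The counts below are
# inferred from the abstract's panel sizes and percentages (assumptions).

def diagnostic_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, accuracy) from confusion counts."""
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# 31/34 ELISA-positive among infected dogs, 51/51 ELISA-negative among healthy dogs
sens, spec, acc = diagnostic_metrics(tp=31, fn=3, tn=51, fp=0)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} accuracy={acc:.2%}")
# -> sensitivity=91.18% specificity=100.00% accuracy=96.47%
```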
An improved estimate of leaf area index based on the histogram analysis of hemispherical photographs
Abstract:
Leaf area index (LAI) is a key parameter that affects the surface fluxes of energy, mass, and momentum over vegetated lands, but observational measurements are scarce, especially in remote areas with complex canopy structure. In this paper we present an indirect method to calculate the LAI based on the analysis of histograms of hemispherical photographs. The optimal threshold value (OTV), the gray level required to separate the background (sky) and the foreground (leaves), was analytically calculated using the entropy crossover method (Sahoo, P.K., Slaaf, D.W., Albert, T.A., 1997. Threshold selection using a minimal histogram entropy difference. Optical Engineering 36(7), 1976-1981). The OTV was used to calculate the LAI using the well-known gap fraction method. This methodology was tested in two different ecosystems in Brazil, Amazon forest and pasturelands. In general, the error between observed and calculated LAI was approximately 6%. The methodology presented is suitable for the calculation of LAI since it is responsive to sky conditions, automatic, easy to implement, faster than commercially available software, and requires less data storage. (C) 2008 Elsevier B.V. All rights reserved.
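A minimal sketch of the two-stage idea, not the authors' implementation: a simplified minimum-entropy-difference threshold separates sky from leaves, and the resulting gap fraction is converted to LAI with the Beer-Lambert form of the gap-fraction method. The extinction coefficient k = 0.5 (spherical leaf angle distribution) is an assumption.

```python
# Simplified stand-in for entropy-based threshold selection plus gap-fraction
# conversion; the random image is a placeholder for a hemispherical photograph.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def optimal_threshold(gray, bins=256):
    """Gray level where background and foreground histogram entropies cross."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, bins), density=True)
    best_t, best_diff = 1, np.inf
    for t in range(1, bins - 1):
        lo, hi = hist[:t], hist[t:]
        if lo.sum() == 0 or hi.sum() == 0:
            continue
        diff = abs(entropy(lo / lo.sum()) - entropy(hi / hi.sum()))
        if diff < best_diff:
            best_t, best_diff = t, diff
    return best_t

def lai_from_gap_fraction(gap, k=0.5):
    """Beer-Lambert form of the gap-fraction method: LAI = -ln(P)/k (k assumed)."""
    return -np.log(gap) / k

gray = np.random.randint(0, 256, (480, 640))   # stand-in for a hemispherical photo
t = optimal_threshold(gray)
gap = np.mean(gray >= t)                       # pixels brighter than OTV = sky
print(f"OTV={t}, gap fraction={gap:.3f}, LAI={lai_from_gap_fraction(gap):.2f}")
```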
Abstract:
Non-linear methods for estimating variability in time series are currently in widespread use. Among such methods are approximate entropy (ApEn) and sample entropy (SampEn). The applicability of ApEn and SampEn in analyzing data is evident and their use is increasing. However, consistency is a point of concern in these tools, i.e., the classification of the temporal organization of a data set might indicate one series as relatively less ordered than another when the opposite is true. As highlighted by their proponents themselves, ApEn and SampEn might present incorrect results due to this lack of consistency. In this study, we present a method which gains consistency by applying ApEn repeatedly over a wide range of combinations of window lengths and matching error tolerances. The tool is called volumetric approximate entropy, vApEn. We analyze nine artificially generated prototypical time series with different degrees of temporal order (combinations of sine waves, logistic maps with different control parameter values, random noises). While ApEn/SampEn clearly fail to consistently identify the temporal order of the sequences, vApEn does so correctly. In order to validate the tool we performed shuffled and surrogate data analyses. Statistical analysis confirmed the consistency of the method. (C) 2008 Elsevier Ltd. All rights reserved.
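A minimal sketch assuming the standard ApEn definition; the (m, r) grid and the simple summation used to aggregate it into a "volumetric" score are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def apen(x, m, r):
    """Approximate entropy of series x with window length m and tolerance r."""
    x = np.asarray(x, dtype=float)
    def phi(mm):
        n = len(x) - mm + 1
        templates = np.array([x[i:i + mm] for i in range(n)])
        counts = np.array([
            np.mean(np.max(np.abs(templates - t), axis=1) <= r)  # Chebyshev matches
            for t in templates
        ])
        return np.mean(np.log(counts))
    return phi(m) - phi(m + 1)

def vapen(x, ms=range(1, 4), r_fracs=np.linspace(0.1, 0.5, 5)):
    """Volumetric ApEn: aggregate ApEn over a grid of (m, r) combinations."""
    sd = np.std(x)
    return sum(apen(x, m, f * sd) for m in ms for f in r_fracs)

ordered = np.sin(np.linspace(0, 10 * np.pi, 300))   # highly ordered series
noisy = np.random.randn(300)                        # unordered series
print(f"vApEn(sine)={vapen(ordered):.2f}  vApEn(noise)={vapen(noisy):.2f}")
```

Aggregating over many (m, r) pairs is what buys the consistency the abstract describes: a misleading ranking at one parameter combination is outvoted by the rest of the grid.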
Abstract:
Activities involving fauna monitoring are usually limited by the lack of resources; therefore, the choice of a proper and efficient methodology is fundamental to maximize the cost-benefit ratio. Both direct and indirect methods can be used to survey mammals, but the latter are preferred due to the difficulty of sighting and/or capturing the individuals, besides being cheaper. We compared the performance of two methods to survey medium- and large-sized mammals, track plot recording and camera trapping, and assessed their costs. At Jatai Ecological Station (21°31'15"S, 47°34'42"W, Brazil) we installed ten camera traps along a dirt road directly in front of ten track plots, and monitored them for 10 days. We cleaned the plots, adjusted the cameras, and noted down the recorded species daily. Records taken by both methods showed they sample the local richness in different ways (Wilcoxon, T=231; p<0.01). The track plot method performed better at registering individuals, whereas camera trapping provided records which permitted more accurate species identification. The type of infrared-sensor camera used showed a strong bias towards individual body mass (R^2=0.70; p=0.017), and the variable expenses of this method in a 10-day survey were estimated to be about 2.04 times higher than those of the track plot method; in the long run, however, camera trapping becomes cheaper than track plot recording. In conclusion, track plot recording is good enough for quick surveys under a limited budget, while camera trapping is best for precise species identification and the investigation of species details, performing better for large animals. When used together, these methods can be complementary.
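A minimal sketch of the paired comparison reported above, with made-up daily species counts (the study's data are not reproduced); scipy's Wilcoxon signed-rank test plays the role of the test cited in the abstract.

```python
# Hypothetical daily species counts per method at the same plots.
from scipy.stats import wilcoxon

track_plots  = [5, 7, 6, 8, 4, 9, 6, 7, 5, 8]
camera_traps = [3, 4, 5, 4, 2, 6, 3, 5, 4, 5]
stat, p = wilcoxon(track_plots, camera_traps)   # paired, non-parametric
print(f"Wilcoxon T={stat}, p={p:.4f}")
```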
Abstract:
The NMR spin coupling parameters, (1)J(N,H) and (2)J(H,H), and the chemical shielding, sigma((15)N), of liquid ammonia are studied using a combined and sequential QM/MM methodology. Monte Carlo simulations are performed to generate statistically uncorrelated configurations that are submitted to density functional theory calculations. Two different Lennard-Jones potentials are used in the liquid simulations. Electronic polarization is included in these two potentials via an iterative procedure with and without geometry relaxation, and the influence on the calculated properties is analyzed. B3LYP/aug-cc-pVTZ-J calculations were used to compute the (1)J(N,H) constants in the interval of -67.8 to -63.9 Hz, depending on the theoretical model used. These can be compared with the experimental result of -61.6 Hz. For the (2)J(H,H) coupling the theoretical results vary between -10.6 and -13.01 Hz. The indirect experimental result derived from a partially deuterated liquid is -11.1 Hz. Inclusion of explicit hydrogen-bonded molecules gives a small but important contribution. The vapor-to-liquid shifts are also considered. This shift is calculated to be negligible for (1)J(N,H), in agreement with experiment. This is rationalized as a cancellation of the geometry relaxation and pure solvent effects. For the chemical shielding, sigma((15)N), calculations at the B3LYP/aug-pcS-3 level show that the vapor-to-liquid chemical shift requires the explicit use of solvent molecules. Considering only one ammonia molecule in an electrostatic embedding gives the wrong sign for the chemical shift, which is corrected only with the use of explicit additional molecules. The best result calculated for the vapor-to-liquid chemical shift Delta sigma((15)N) is -25.2 ppm, in good agreement with the experimental value of -22.6 ppm.
Abstract:
The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads to a linear system of equations for the interface configuration at the future time when the interfacial force is linear. However, this linear system is large and dense and thus it is challenging to streamline its solution. Moreover, while the same linear system or one of similar structure could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix, for which we obtain a rigorous error estimate. This matrix is expeditiously computed by using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes. We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
Cobalt catalysts were prepared on supports of SiO(2) and gamma-Al(2)O(3) by the impregnation method, using a solution of the Co precursor in methanol. The samples were characterized by XRD, TPR, and Raman spectroscopy and tested in ethanol steam reforming. According to the XRD results, impregnation with the methanolic solution led to smaller metal crystallites than with the aqueous solution on the SiO(2) support. On gamma-Al(2)O(3), all the samples exhibited small crystallites with either solvent, due to a stronger Co-support interaction that inhibits the reduction of Co species. The TPR results were consistent with the XRD results, and the samples supported on gamma-Al(2)O(3) showed a lower degree of reduction. In the steam reforming of ethanol, catalysts supported on SiO(2) and prepared with the methanolic solution showed the best H(2), CO(2) and CO selectivities. Those supported on gamma-Al(2)O(3) showed lower H(2) selectivity. (C) 2011 Elsevier Ltd. All rights reserved.
Ethanol oxidation reaction on PtCeO(2)/C electrocatalysts prepared by the polymeric precursor method
Abstract:
This paper presents a study of the electrocatalysis of the ethanol oxidation reaction in an acidic medium on Pt-CeO(2)/C (20 wt.% of Pt-CeO(2) on carbon XC-72R), prepared in different mass ratios by the polymeric precursor method. The mass ratios between Pt and CeO(2) (3:1, 2:1, 1:1, 1:2, 1:3) were confirmed by Energy Dispersive X-ray Analysis (EDAX). X-ray diffraction (XRD) structural characterization data show that the Pt-CeO(2)/C catalysts are composed of nanosized polycrystalline non-alloyed deposits, from which reflections corresponding to the fcc (Pt) and fluorite (CeO(2)) structures were clearly observed. The mean crystallite sizes calculated from the XRD data revealed that, independent of the mass ratio, a value close to 3 nm was obtained for the CeO(2) particles. For Pt, the mean crystallite sizes were dependent on the ratio of this metal in the catalysts: low platinum ratios resulted in small crystallites, and high Pt proportions resulted in larger crystallites. The size distributions of the catalyst particles, determined by XRD, were confirmed by Transmission Electron Microscopy (TEM) imaging. Cyclic voltammetry and chronoamperometric experiments were used to evaluate the electrocatalytic performance of the different materials. In all cases except Pt-CeO(2)/C 1:1, the Pt-CeO(2)/C catalysts exhibited improved performance when compared with Pt/C. The best result was obtained for the Pt-CeO(2)/C 1:3 catalyst, which gave better results than the Pt-Ru/C (Etek) catalyst. (C) 2009 Elsevier B.V. All rights reserved.
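The abstract reports crystallite sizes "calculated from XRD data" without naming the formula; the Scherrer equation below is the standard route and is shown here as an assumption, with hypothetical peak parameters chosen to reproduce a ~3 nm size.

```python
# Scherrer equation: D = K*lambda / (beta * cos(theta)). Peak width and position
# are illustrative values, not the paper's measurements.
import math

def scherrer(wavelength_nm, fwhm_rad, theta_rad, K=0.9):
    """Mean crystallite size from an XRD peak (shape factor K assumed 0.9)."""
    return K * wavelength_nm / (fwhm_rad * math.cos(theta_rad))

wavelength = 0.15406                 # Cu K-alpha wavelength in nm
beta = math.radians(2.8)             # FWHM of the peak (2-theta), in radians
theta = math.radians(39.8 / 2)       # Bragg angle of the Pt(111) reflection
print(f"D = {scherrer(wavelength, beta, theta):.1f} nm")   # -> ~3.0 nm
```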
Abstract:
This thesis uses the zonal travel cost method (ZTCM) to estimate the consumer surplus of the Peace & Love festival in Borlänge, Sweden. The study defines counties as the zones of origin of the visitors. Visiting rates from each zone are estimated based on survey data. The study is novel in that the TCM has mostly been applied in the environmental and recreational sector, not to short-term events like the P&L festival. The analysis shows that travel cost has a significantly negative effect on visiting rate, as expected. Even though income has previously been shown to be significant in similar studies, it turns out to be insignificant in this study. A point estimate of the total consumer surplus of the P&L festival is 35.6 million Swedish kronor. However, this point estimate is associated with high uncertainty, since its 95% confidence interval is (17.9, 53.2). It is also important to note that the estimated value represents only one part of the festival's total economic value; the other components have not been estimated in this thesis.
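A minimal sketch of the zonal travel cost logic, with illustrative numbers rather than the thesis data: regress zonal visiting rates on travel cost, then integrate under the fitted demand curve; for a log-linear demand, the consumer surplus per visit is -1/slope.

```python
# All figures below (costs, rates, attendance) are made up for illustration.
import numpy as np

travel_cost = np.array([200.0, 400, 600, 900, 1300])   # SEK per visit, by zone
visit_rate  = np.array([40.0, 22, 12, 6, 2])           # visits per 1,000 capita

# Log-linear demand: log(rate) = a + b * cost  (b expected negative)
b, a = np.polyfit(travel_cost, np.log(visit_rate), 1)
cs_per_visit = -1.0 / b                                # consumer surplus per visit
total_visits = 25_000                                  # hypothetical attendance
print(f"CS per visit = {cs_per_visit:.0f} SEK; "
      f"total CS = {cs_per_visit * total_visits / 1e6:.1f} MSEK")
```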
Abstract:
Background: Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature), and then called macro-environmental, or unknown, and then called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate the bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of Akaike's information criterion based on h-likelihood to select the best fitting model. Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in the environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for the residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as a model selection criterion using the approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate the bias and precision of the estimated genetic parameters. Results: Designs with 100 sires, each with at least 100 offspring, are required to keep the standard deviations of estimated variances below 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters. Using Akaike's information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for both micro- and macro-environmental sensitivities exists. Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
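A minimal sketch of the model-selection step only: AIC = -2 log-likelihood + 2 (number of parameters), with the lowest value selecting the best-fitting model. The likelihood values below are illustrative; in the paper they are approximated h-likelihoods from the DHGLM fits.

```python
# Compare nested variance models by AIC; smaller is better.
def aic(loglik, n_params):
    return -2.0 * loglik + 2.0 * n_params

models = {
    "homogeneous residual variance":     aic(loglik=-1052.3, n_params=3),
    "macro-sensitivity (reaction norm)": aic(loglik=-1040.8, n_params=5),
    "macro + micro sensitivity (DHGLM)": aic(loglik=-1031.1, n_params=8),
}
best = min(models, key=models.get)
print({m: round(v, 1) for m, v in models.items()}, "->", best)
```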
Abstract:
We consider methods for estimating causal effects of treatment in the situation where the individuals in the treatment and the control group are self-selected, i.e., the selection mechanism is not randomized. In this case, a simple comparison of treated and control outcomes will not generally yield valid estimates of causal effects. The propensity score method is frequently used for the evaluation of treatment effects. However, this method is based on some strong assumptions which are not directly testable. In this paper, we present an alternative modeling approach to draw causal inference, using a shared random-effect model, together with the computational algorithm to draw likelihood-based inference with such a model. With small numerical studies and a real data analysis, we show that our approach not only gives more efficient estimates but is also less sensitive to the model misspecifications we consider than the existing methods.
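For orientation, a minimal sketch of the baseline being compared against (not the paper's shared random-effect model): propensity scores from a logistic regression, used for inverse-probability weighting on simulated self-selected data.

```python
# Simulated self-selection with a known treatment effect of 2.0; the IPW
# estimate should recover it approximately after adjusting for confounders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 2))                              # confounders
p_treat = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
t = rng.binomial(1, p_treat)                             # self-selected treatment
y = 2.0 * t + x[:, 0] + rng.normal(size=n)               # outcome, true effect 2.0

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
w = t / ps + (1 - t) / (1 - ps)                          # IPW weights
ate = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
print(f"IPW estimate of the treatment effect: {ate:.2f}")
```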
Abstract:
Drinking water distribution networks risk exposure to malicious or accidental contamination. Several levels of response are conceivable. One of them consists of installing a sensor network to monitor the system in real time. Once contamination has been detected, it is also important to take appropriate counter-measures. In the SMaRT-OnlineWDN project, this relies on modeling to predict both hydraulics and water quality. The use of an online model makes identification of the contaminant source and simulation of the contaminated area possible. The objective of this paper is to present the SMaRT-OnlineWDN experience and research results for hydraulic state estimation with a sampling frequency of a few minutes. A least squares problem with bound constraints is formulated to adjust demand class coefficients to best fit the observed values at a given time. The criterion is a Huber function, to limit the influence of outliers. A Tikhonov regularization is introduced to take prior information on the parameter vector into account. The Levenberg-Marquardt algorithm, which uses derivative information to limit the number of iterations, is then applied. Confidence intervals for the state prediction are also given. The results are presented and discussed for real networks in France and Germany.
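A minimal sketch of this estimation step under stated assumptions: predict() is a hypothetical stand-in for the hydraulic model, and scipy's trust-region reflective solver is used in place of a bound-constrained Levenberg-Marquardt (scipy's 'lm' method supports neither bounds nor robust losses). The Huber loss and a Tikhonov term pulling toward a prior c0 follow the abstract.

```python
import numpy as np
from scipy.optimize import least_squares

def predict(c):
    """Hypothetical stand-in for the hydraulic model's predicted measurements."""
    A = np.array([[1.0, 0.5], [0.3, 1.2], [0.8, 0.8], [1.1, 0.2]])
    return A @ c

observed = np.array([1.6, 1.9, 1.7, 1.4])   # sensor values at a given time
c0 = np.array([1.0, 1.0])                   # prior demand class coefficients
lam = 0.1                                   # Tikhonov regularization weight

def residuals(c):
    # Stack measurement residuals with the regularization term; the robust
    # Huber loss is applied by least_squares via loss="huber".
    return np.concatenate([predict(c) - observed, np.sqrt(lam) * (c - c0)])

sol = least_squares(residuals, c0, bounds=(0.0, 5.0), loss="huber", f_scale=0.5)
print("estimated demand class coefficients:", sol.x)
```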
Abstract:
When an accurate hydraulic network model is available, direct modeling techniques are very straightforward and reliable for on-line leakage detection and localization applied to a large class of water distribution networks. In general, this type of technique, based on analytical models, can be seen as an application of the well-known fault detection and isolation theory for complex industrial systems. Nonetheless, the assumption of single-leak scenarios is usually made, considering a certain leak size pattern that may not hold in real applications. Upgrading a leak detection and localization method based on a direct modeling approach to handle multiple-leak scenarios can be, on the one hand, quite straightforward but, on the other hand, highly computationally demanding for a large class of water distribution networks, given the huge number of potential water loss hotspots. This paper presents a leakage detection and localization method suitable for multiple-leak scenarios and a large class of water distribution networks. This method can be seen as an upgrade of the above-mentioned direct modeling approach, in which a global search method based on genetic algorithms has been integrated in order to estimate the network water loss hotspots and the sizes of the leaks. This is an inverse/direct modeling method that tries to benefit from both approaches: on the one hand, the exploration capability of genetic algorithms to estimate network water loss hotspots and leak sizes; on the other hand, the straightforwardness and reliability offered by an accurate hydraulic model to assess the network areas around the estimated hotspots. The application of the resulting method to a district metered area (DMA) of the Barcelona water distribution network is provided and discussed. The obtained results show that leakage detection and localization under multiple-leak scenarios may be performed efficiently following an easy procedure.
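A minimal sketch of the inverse step under stated assumptions: a genetic-algorithm-style search (selection plus mutation only, for brevity; no crossover) over candidate leak locations and sizes, where simulate() is a hypothetical stand-in for the hydraulic model's pressure-residual evaluation.

```python
import random

N_NODES, N_LEAKS = 50, 2

def simulate(leaks):
    """Hypothetical residual norm; smaller = candidate closer to the true leaks."""
    true_leaks = [(12, 2.0), (37, 1.0)]
    return sum(min(abs(n - tn) + abs(s - ts) for tn, ts in true_leaks)
               for n, s in leaks)

def random_candidate():
    return [(random.randrange(N_NODES), random.uniform(0.1, 3.0))
            for _ in range(N_LEAKS)]

def mutate(cand):
    cand = list(cand)
    i = random.randrange(N_LEAKS)
    n, s = cand[i]
    cand[i] = (max(0, min(N_NODES - 1, n + random.randint(-2, 2))),
               max(0.1, s + random.gauss(0, 0.2)))
    return cand

pop = [random_candidate() for _ in range(40)]
for _ in range(100):                               # generations
    pop.sort(key=simulate)                         # fitness = residual norm
    elite = pop[:10]                               # selection
    pop = elite + [mutate(random.choice(elite)) for _ in range(30)]

print("best (node, size) pairs:", min(pop, key=simulate))
```

In the actual method, each fitness evaluation would run the hydraulic model, and the direct modeling step then inspects the network areas around the estimated hotspots.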
Abstract:
Multi-factor models constitute a useful tool to explain the cross-sectional covariance of equity returns. In this paper we propose the use of irregularly spaced returns in multi-factor model estimation and provide an empirical example with the 389 most liquid equities in the Brazilian market. The market index shows itself significant in explaining equity returns, while the US$/Brazilian Real exchange rate and the Brazilian standard interest rate do not. This example shows the usefulness of the estimation method in further using the model to fill in missing values and to provide interval forecasts.
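A minimal sketch of the irregular-spacing idea on simulated data (an illustration, not the paper's estimator): each equity return is matched against the factor return accumulated over the same irregular holding interval, then a beta is estimated by OLS.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 250
factor_daily = rng.normal(0, 0.01, T)            # e.g. daily market returns

# Irregular observation times for an illiquid stock
obs_times = np.sort(rng.choice(np.arange(1, T), size=60, replace=False))
starts = np.concatenate([[0], obs_times[:-1]])
intervals = obs_times - starts

# Factor return accumulated over each irregular holding interval
f = np.array([factor_daily[s:e].sum() for s, e in zip(starts, obs_times)])
r = 1.2 * f + rng.normal(0, 0.005, len(f)) * np.sqrt(intervals)  # true beta = 1.2

X = np.column_stack([np.ones_like(f), f])        # OLS with intercept
alpha, beta = np.linalg.lstsq(X, r, rcond=None)[0]
print(f"estimated beta = {beta:.2f}")
```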
Abstract:
The present work aims to study the influence of macroeconomic factors on credit risk for installment auto loan operations. The study is based on 4,887 credit operations surveyed in the Credit Risk Information System (SCR) held by the Brazilian Central Bank. Using survival analysis applied to interval-censored data, we obtained a model to estimate the hazard function, and we propose a method for calculating the probability of default over a twelve-month period. Our results indicate a strong time dependence for the hazard function, captured by a polynomial approximation in all estimated models. The model with the best Akaike Information Criterion estimates a positive effect of 0.07% for males over the baseline hazard function and of 0.011% for an increase of ten basis points in the operation's annual interest rate; in turn, for each R$ 1,000.00 added to the installment, the hazard function suffers a negative effect of 0.28%, with an estimated elevation of 0.0069% for the same amount added to the contracted value of the operation. For the macroeconomic factors, we find statistically significant effects for the unemployment rate (-0.12%), for the one-period lag of the unemployment rate (0.12%), for the first difference of the industrial production index (-0.008%), for the one-period lag of the inflation rate (-0.13%) and for the exchange rate (-0.23%). We do not find statistically significant results for the other tested variables.
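A minimal sketch of the final step described above: converting a fitted hazard function into a twelve-month probability of default via PD(12) = 1 - exp(-cumulative hazard over [0, 12]). The polynomial hazard coefficients are illustrative, not the paper's estimates.

```python
import numpy as np

def hazard(t, coefs=(0.002, 0.0015, -0.00008)):
    """Hypothetical polynomial baseline hazard h(t), t in months."""
    return sum(c * t**k for k, c in enumerate(coefs))

t = np.linspace(0, 12, 1000)
dt = t[1] - t[0]
cum_hazard = hazard(t).sum() * dt        # numerical integral of h over [0, 12]
pd_12m = 1 - np.exp(-cum_hazard)         # PD(12) = 1 - exp(-cumulative hazard)
print(f"12-month probability of default = {pd_12m:.2%}")
```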