943 results for "Error"
Abstract:
We present a method of estimating HIV incidence rates in epidemic situations from data on age-specific prevalence and changes in the overall prevalence over time. The method is applied to women attending antenatal clinics in Hlabisa, a rural district of KwaZulu/Natal, South Africa, where transmission of HIV is overwhelmingly through heterosexual contact. A model which gives age-specific prevalence rates in the presence of a progressing epidemic is fitted to prevalence data for 1998 using maximum likelihood methods and used to derive the age-specific incidence. Error estimates are obtained using a Monte Carlo procedure. Although the method is quite general, some simplifying assumptions are made concerning the form of the risk function, and sensitivity analyses are performed to explore the importance of these assumptions. The analysis shows that in 1998 the annual incidence of infection per susceptible woman increased from 5.4 per cent (3.3-8.5 per cent; here and elsewhere ranges give 95 per cent confidence limits) at age 15 years to 24.5 per cent (20.6-29.1 per cent) at age 22 years and declined to 1.3 per cent (0.5-2.9 per cent) at age 50 years; standardized to a uniform age distribution, the overall incidence per susceptible woman aged 15 to 59 was 11.4 per cent (10.0-13.1 per cent); per woman in the population it was 8.4 per cent (7.3-9.5 per cent). Standardized to the age distribution of the female population, the average incidence per woman was 9.6 per cent (8.4-11.0 per cent); standardized to the age distribution of women attending antenatal clinics, it was 11.3 per cent (9.8-13.3 per cent). The estimated incidence depends on the values used for the epidemic growth rate and the AIDS-related mortality.
To ensure that, for this population, errors in these two parameters change the age-specific estimates of the annual incidence by less than the standard deviation of those estimates, the AIDS-related mortality should be known to within +/-50 per cent and the epidemic growth rate to within +/-25 per cent, both of which conditions are met. In the absence of cohort studies to measure the incidence of HIV infection directly, useful estimates of the age-specific incidence can be obtained from cross-sectional, age-specific prevalence data and repeat cross-sectional data on the overall prevalence of HIV infection. Several assumptions were made because of the lack of data, but sensitivity analyses show that they are unlikely to affect the overall estimates significantly. These estimates are important in assessing the magnitude of the public health problem, for designing vaccine trials and for evaluating the impact of interventions. Copyright (C) 2001 John Wiley & Sons, Ltd.
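The Monte Carlo error procedure described above can be illustrated with a minimal sketch: resample the observed prevalence counts under binomial sampling noise, re-apply the estimator, and read off percentile confidence limits. The estimator below is a purely hypothetical stand-in (incidence proxy taken as half the prevalence), not the authors' fitted model.

```python
import random

random.seed(0)

def mc_confidence_interval(estimator, n_pos, n_total, n_sims=2000):
    """Monte Carlo error estimate: perturb the observed prevalence count
    with binomial sampling noise, re-apply the estimator, and take the
    2.5th and 97.5th percentiles of the simulated estimates."""
    p_hat = n_pos / n_total
    sims = []
    for _ in range(n_sims):
        k = sum(1 for _ in range(n_total) if random.random() < p_hat)
        sims.append(estimator(k / n_total))
    sims.sort()
    return estimator(p_hat), sims[int(0.025 * n_sims)], sims[int(0.975 * n_sims)]

# hypothetical toy estimator: incidence proxy = half the prevalence
est, lo, hi = mc_confidence_interval(lambda p: 0.5 * p, 240, 1000)
```

The same scheme generalizes to any fitted quantity: whatever the estimator computes from the data, the simulated resamples give its sampling spread.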
Abstract:
Surge flow phenomena, e.g., as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain together with the discontinuities involved renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body together with a specific relationship for describing the cross-sectional geometry leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, and this underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon.
Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution, for generating initial values, and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.
Abstract:
The principle of using induction rules based on spatial environmental data to model a soil map has previously been demonstrated. Whilst the general pattern of classes of large spatial extent, and those with close association with geology, were delineated, small classes and the detailed spatial pattern of the map were less well rendered. Here we examine several strategies to improve the quality of the soil map models generated by rule induction. Terrain attributes that are better suited to landscape description at a resolution of 250 m are introduced as predictors of soil type. A map sampling strategy is developed. Classification error is reduced by using boosting rather than cross-validation to improve the model. Further, the benefit of incorporating the local spatial context for each environmental variable into the rule induction is examined. The best model was achieved by sampling in proportion to the spatial extent of the mapped classes, boosting the decision trees, and using spatial contextual information extracted from the environmental variables.
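The map sampling strategy mentioned above (sampling in proportion to the spatial extent of the mapped classes) can be sketched as a stratified sample whose per-class quotas mirror class frequency. This is a generic illustration of the idea, not the authors' exact scheme; the toy class shares are invented.

```python
import random
from collections import Counter

random.seed(1)

def proportional_sample(class_labels, n):
    """Stratified sample in proportion to class extent: allocate n across
    classes by their frequency, then draw without replacement within each
    class (every class keeps at least one sample)."""
    by_class = {}
    for idx, lab in enumerate(class_labels):
        by_class.setdefault(lab, []).append(idx)
    total = len(class_labels)
    sample = []
    for lab, members in by_class.items():
        quota = max(1, round(n * len(members) / total))
        sample.extend(random.sample(members, min(quota, len(members))))
    return sample

# toy "map": class A covers 70% of cells, B 25%, C 5%
labels = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
picked = proportional_sample(labels, 200)
shares = Counter(labels[i] for i in picked)
```

Compared with equal-size strata, proportional allocation keeps the training set representative of each class's spatial extent, which is what the abstract reports as the better strategy.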
Abstract:
This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as back-propagation and can also be used to provide insight into the learning process and the nature of the error surface.
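The idea of visualizing a learning trajectory with PCA can be sketched as follows: record the network's weight vector at each training step, then project the trajectory onto its top two principal components. The sketch below uses power iteration so it needs only the standard library, and a synthetic 10-dimensional "trajectory" decaying toward an optimum stands in for real back-propagation weights.

```python
import random

random.seed(2)

def pca_project_2d(X):
    """Project the rows of X (one weight vector per training step) onto
    their top-2 principal components via power iteration."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[row[j] - mean[j] for j in range(d)] for row in X]  # centered data
    C0 = [row[:] for row in C]                               # kept for projection

    def matvec(v):
        # apply C^T C to v without forming the d x d covariance matrix
        Cv = [sum(c * u for c, u in zip(row, v)) for row in C]
        return [sum(C[i][j] * Cv[i] for i in range(n)) for j in range(d)]

    def top_component(iters=300):
        v = [random.gauss(0, 1) for _ in range(d)]
        for _ in range(iters):
            w = matvec(v)
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        return v

    pc1 = top_component()
    for i in range(n):  # deflate: remove the pc1 direction, then repeat
        s = sum(a * b for a, b in zip(C[i], pc1))
        C[i] = [a - s * b for a, b in zip(C[i], pc1)]
    pc2 = top_component()
    proj = [(sum(a * b for a, b in zip(row, pc1)),
             sum(a * b for a, b in zip(row, pc2))) for row in C0]
    return proj, pc1, pc2

# toy "learning trajectory": 10-d weights decaying toward an optimum with noise
optimum = [1.0] * 10
w, traj = [0.0] * 10, []
for _ in range(60):
    w = [wi + 0.1 * (o - wi) + random.gauss(0, 0.02) for wi, o in zip(w, optimum)]
    traj.append(w)
proj, pc1, pc2 = pca_project_2d(traj)
```

Plotting the resulting 2-D points in order gives the kind of trajectory view the paper describes: most of the variance of a converging run lies along the first component, with oscillations visible on the second.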
Abstract:
We discuss quantum error correction for errors that occur at random times, as described by a conditional Poisson process. We show how a class of such errors, detected spontaneous emission, can be corrected by continuous closed-loop feedback.
Abstract:
The assumption, in analytical solutions for flow from surface and buried point sources, of an average water content θ̄ behind the wetting front is examined. Some recent work has shown that this assumption fitted some field data well. Here we calculated θ̄ using a steady state solution based on the work by Raats [1971] and an exponential dependence of the diffusivity upon the water content. This is compared with a constant value of θ̄ calculated from an assumption of a hydraulic conductivity at the wetting front of 1 mm day⁻¹ and the water content at saturation. This comparison was made for a wide range of soils. The constant θ̄ generally underestimated θ̄ at small wetted radii and overestimated θ̄ at large radii. The crossover point between under- and overestimation changed with both soil properties and flow rate. The largest variance occurred for coarser-textured soils at low flow rates. At high flow rates in finer-textured soils the use of a constant θ̄ results in underestimation of the time for the wetting front to reach a particular radius. The value of θ̄ is related to the time at which the wetting front reaches a given radius. In coarse-textured soils the use of a constant value of θ̄ can result in an error in the time at which the wetting front reaches a particular radius as large as 80% at low flow rates and large radii.
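The link between θ̄ and the arrival time of the wetting front follows from volume balance, and can be sketched for an idealized buried point source with a spherical front: the applied volume Q·t fills the pore space behind the front, so Q t = (4/3) π r³ θ̄. This is an illustrative geometry only, not the Raats-based steady state solution of the paper, and the numbers below are invented.

```python
import math

def wetting_time(radius, theta_bar, flow_rate):
    """Volume-balance travel time for a spherical wetting front from a
    buried point source: Q t = (4/3) pi r^3 theta_bar, so
    t = (4/3) pi r^3 theta_bar / Q (radius in m, Q in m^3/day)."""
    return (4.0 / 3.0) * math.pi * radius ** 3 * theta_bar / flow_rate

# because t is linear in theta_bar, the relative error in the arrival time
# equals the relative error in theta_bar
t_const = wetting_time(0.3, 0.20, 0.002)  # constant theta_bar assumption
t_true = wetting_time(0.3, 0.36, 0.002)   # hypothetical "true" theta_bar
rel_error = abs(t_const - t_true) / t_true
```

This makes the abstract's point concrete: where the constant θ̄ underestimates the true average water content, the predicted arrival time is short by exactly the same fraction, which is how errors of up to 80% can arise at large radii.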
Abstract:
One consistent functional imaging finding in patients with major depression has been abnormality of the anterior cingulate cortex (ACC). Hypoperfusion has been most commonly reported, but some studies suggest relative hyperperfusion is associated with response to somatic treatments. Despite these indications of the possible importance of the ACC in depression, there have been relatively few cognitive studies of ACC function in patients with major depression. The present study employed a series of reaction time (RT) tasks involving response selection with melancholic and nonmelancholic depressed patients, as well as age-matched controls. Fifteen patients with unipolar major depression (7 melancholic, 8 nonmelancholic) and 8 healthy age-matched controls performed a series of response selection tasks (choice RT, spatial Stroop, spatial stimulus-response compatibility (SRC), and a combined Stroop + SRC condition). Reaction time and error data were collected. Melancholic patients were significantly slower than controls on all tasks but were slower than nonmelancholic patients only on the Stroop and Stroop + SRC conditions. Nonmelancholic patients did not differ from the control group on any task. The Stroop task seems crucial in differentiating the two depressive groups, since they did not differ on the choice RT or SRC tasks. This may reflect differential task demands: the SRC involved symbolic manipulation that might engage the dorsal ACC and dorsolateral prefrontal cortex (DLPFC) to a greater extent than the primarily inhibitory Stroop task, which may engage the ventral ACC and orbitofrontal cortex (OFC). This might suggest that the melancholic group showed a greater ventral ACC-OFC deficit than the nonmelancholic group, while both groups showed a similar dorsal ACC-DLPFC deficit.
Abstract:
This article deals with the efficiency of fractional integration parameter estimators. This study was based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the range ]-1, 1[. The evaluated estimation methods were classified into two groups: heuristics and semiparametric/maximum likelihood (ML). The study revealed that the comparative efficiency of the estimators, measured by the smaller mean squared error, depends on the stationarity/non-stationarity and persistence/anti-persistence conditions of the series. The ML estimator was shown to be superior for stationary persistent processes; the wavelet spectrum-based estimators were better for non-stationary mean-reverting and invertible anti-persistent processes; the weighted periodogram-based estimator was shown to be superior for non-invertible anti-persistent processes.
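The Monte Carlo design behind such comparisons can be sketched with the standard library: simulate an ARFIMA(0, d, 0) series from the MA expansion of (1-B)^{-d}, apply an estimator of d, and score it by mean squared error. The lag-1 moment estimator below (using ρ(1) = d/(1-d) for fractional noise) is only a toy stand-in for illustration, not one of the paper's heuristic, wavelet, periodogram, or ML estimators.

```python
import random

random.seed(3)

def frac_noise(d, n):
    """ARFIMA(0, d, 0) sample via the MA expansion of (1-B)^{-d}:
    pi_0 = 1, pi_k = pi_{k-1} * (k - 1 + d) / k."""
    pi = [1.0]
    for k in range(1, n):
        pi.append(pi[-1] * (k - 1 + d) / k)
    eps = [random.gauss(0, 1) for _ in range(n)]
    return [sum(pi[k] * eps[t - k] for k in range(t + 1)) for t in range(n)]

def d_moment_estimator(x):
    """Toy estimator: for ARFIMA(0,d,0), rho(1) = d / (1 - d), hence
    d = r1 / (1 + r1) with r1 the sample lag-1 autocorrelation."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((xi - m) ** 2 for xi in x)
    r1 = num / den
    return r1 / (1 + r1)

true_d, sims = 0.3, 100
estimates = [d_moment_estimator(frac_noise(true_d, 200)) for _ in range(sims)]
mse = sum((e - true_d) ** 2 for e in estimates) / sims
```

Running the same loop for each candidate estimator and each (d, n) design cell, then ranking by MSE, reproduces the comparison framework the abstract describes.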
Abstract:
The purpose of this paper is to analyze the dynamics of the national saving-investment relationship in order to determine the degree of capital mobility in 12 Latin American countries. The analytically relevant correlation is the short-term one, defined as that between changes in saving and investment. Of special interest is the speed at which variables return to the long run equilibrium relationship, which is interpreted as being negatively related to the degree of capital mobility. The long run correlation, in turn, captures the coefficient implied by the solvency constraint. We find that heterogeneity and cross-section dependence completely change the estimation of the long run coefficient. Besides, we obtain a more precise short run coefficient estimate compared with the existing estimates in the literature. There is evidence of an intermediate degree of capital mobility, and the coefficients are extremely stable over time.
Abstract:
This paper addresses the investment decisions, in the presence of financial constraints, of 373 large Brazilian firms from 1997 to 2004, using panel data. A Bayesian econometric model with ridge regression was used to handle multicollinearity problems among the variables in the model. Prior distributions are assumed for the parameters, classifying the model into random or fixed effects. We used a Bayesian approach to estimate the parameters, considering normal and Student t distributions for the error, and assumed that the initial values of the lagged dependent variable are not fixed but generated by a random process. The recursive predictive density criterion was used for model comparisons. Twenty models were tested, and the results indicated that multicollinearity does influence the values of the estimated parameters. Controlling for capital intensity, financial constraints are found to be more important for capital-intensive firms, probably due to their lower profitability indexes, higher fixed costs and higher degree of property diversification.
Abstract:
We build a model that incorporates the effect of the innovative "flex" car, an automobile that is able to run on either gasoline or alcohol, on the dynamics of fuel prices in Brazil. Our model shows that differences in fuel prices will now depend on the proportions of alcohol, gasoline and flex cars in the total stock. Conversely, the demand for each type of car will also depend on the expected future prices of alcohol and gasoline (in addition to the car prices). The model reflects our findings that energy prices are tied in the long run and that causality runs more strongly from gasoline to alcohol. The estimated error correction parameter is stable, implying that the speed of adjustment towards equilibrium remains unchanged. The latter result is probably due to the still small fraction of flex cars in the total stock (approx. 5%), despite the fact that their sales nearly reached 100% in 2006. (C) 2009 Elsevier B.V. All rights reserved.
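An error correction parameter of the kind estimated above can be sketched with the two-step Engle-Granger procedure on synthetic data: first regress one price on the other to get the long-run relation, then regress the price change on the lagged disequilibrium. All series, coefficients, and the tiny OLS solver below are illustrative, not the paper's data or specification.

```python
import random

random.seed(4)

def ols(y, X):
    """Least squares via the normal equations, solved with Gaussian
    elimination (adequate for a handful of regressors)."""
    k, m = len(X[0]), len(X)
    A = [[sum(X[i][p] * X[i][q] for i in range(m)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(m)) for p in range(k)]
    for p in range(k):                      # forward elimination
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for q in range(p, k):
                A[r][q] -= f * A[p][q]
            b[r] -= f * b[p]
    beta = [0.0] * k
    for p in range(k - 1, -1, -1):          # back substitution
        beta[p] = (b[p] - sum(A[p][q] * beta[q] for q in range(p + 1, k))) / A[p][p]
    return beta

# synthetic cointegrated pair: "gasoline" a random walk, "alcohol" tied to it
n, u = 400, 0.0
gas, alc = [0.0], [0.0]
for _ in range(n - 1):
    gas.append(gas[-1] + random.gauss(0, 1))
    u = 0.5 * u + random.gauss(0, 0.5)      # stationary deviation from equilibrium
    alc.append(0.7 * gas[-1] + u)

# step 1 (Engle-Granger): long-run relation alc = beta * gas, residual z
beta = ols(alc, [[g] for g in gas])[0]
z = [a - beta * g for a, g in zip(alc, gas)]

# step 2: d_alc_t = gamma * d_gas_t + lam * z_{t-1}; lam < 0 means adjustment
dy = [alc[t] - alc[t - 1] for t in range(1, n)]
X = [[gas[t] - gas[t - 1], z[t - 1]] for t in range(1, n)]
gamma, lam = ols(dy, X)
```

The estimated `lam` is the error correction parameter: the more negative it is, the faster prices return to the long-run relation, which is the quantity whose stability the abstract reports.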
Abstract:
This paper examines the hysteresis hypothesis in Brazilian industrialized exports using time series analysis. This hypothesis finds an empirical representation in the nonlinear adjustment of the exported quantity to relative price changes. Thus, the threshold cointegration analysis proposed by Balke and Fomby [Balke, N.S. and Fomby, T.B. Threshold cointegration. International Economic Review 1997; 38: 627-645] was used for estimating models with asymmetric adjustment of the error correction term. Amongst the sixteen industrial sectors selected, there was evidence of nonlinearities in the residuals of long-run relationships of supply or demand for exports in nine of them. These nonlinearities represent asymmetric and/or discontinuous responses of exports to different representative measures of real exchange rates, in addition to other components of long-run demand or supply equations. (C) 2007 Elsevier B.V. All rights reserved.
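Asymmetric adjustment of an error correction term means the disequilibrium decays at a different rate depending on which side of the threshold it lies. A toy version of this Balke-Fomby-style setup (threshold at zero, invented persistence parameters) can be simulated and recovered by splitting the sample on the sign of the lagged deviation:

```python
import random

random.seed(5)

# asymmetric adjustment: positive deviations decay slowly (rho = 0.9),
# negative deviations decay fast (rho = 0.3) -- illustrative values only
rho_above, rho_below = 0.9, 0.3
z = [0.0]
for _ in range(4000):
    rho = rho_above if z[-1] >= 0 else rho_below
    z.append(rho * z[-1] + random.gauss(0, 1))

def regime_rho(pairs):
    """Regression through the origin of z_t on z_{t-1} within one regime."""
    num = sum(prev * cur for prev, cur in pairs)
    den = sum(prev * prev for prev, _ in pairs)
    return num / den

pairs = [(z[t - 1], z[t]) for t in range(1, len(z))]
rho_hat_above = regime_rho([p for p in pairs if p[0] >= 0])
rho_hat_below = regime_rho([p for p in pairs if p[0] < 0])
```

Recovering distinct persistence estimates on the two sides of the threshold is the kind of evidence of asymmetric adjustment the abstract reports for nine of the sixteen sectors.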
Abstract:
Background: The importance of serum triglyceride levels as a risk factor for cardiovascular diseases is uncertain. Methods and Results: We performed an individual participant data meta-analysis of prospective studies conducted in the Asia-Pacific region. Cox models were applied to the combined data from 26 studies to estimate the overall and region-, sex-, and age-specific hazard ratios for major cardiovascular diseases by fifths of triglyceride values. During 796 671 person-years of follow-up among 96 224 individuals, 670 and 667 deaths as a result of coronary heart disease (CHD) and stroke, respectively, were recorded. After adjustment for major cardiovascular risk factors, participants in the highest fifth of triglyceride levels had a 70% (95% CI, 47% to 96%) greater risk of CHD death, an 80% (95% CI, 49% to 119%) higher risk of fatal or nonfatal CHD, and a 50% (95% CI, 29% to 76%) increased risk of fatal or nonfatal stroke compared with those in the lowest fifth. The association between triglycerides and CHD death was similar across subgroups defined by ethnicity, age, and sex. Conclusions: Serum triglycerides are an important and independent predictor of CHD and stroke risk in the Asia-Pacific region. These results may have clinical implications for cardiovascular risk prediction and the use of lipid-lowering therapy.