933 results for Non-linear error correction models


Relevance: 100.00%

Abstract:

Research project carried out during a stay at the University of Groningen, the Netherlands, between 2007 and 2009. Direct numerical simulation of turbulence (DNS) is a key tool in computational fluid dynamics. On the one hand it improves our understanding of the physics of turbulence, and on the other the results obtained are essential for the development of turbulence models. However, DNS is not a viable technique for the vast majority of industrial applications because of its high computational cost, so some degree of turbulence modelling is necessary. In this context, important improvements have been introduced based on modelling the convective (non-linear) term using symmetry-preserving regularizations. The idea is to modify the convective term appropriately so as to reduce the production of ever smaller scales (vortex stretching) while keeping all the invariants of the original equations. So far, these models have been used successfully at relatively high Rayleigh numbers (Ra). At this point, DNS results for more complex configurations and higher Ra are essential. In this context, DNS simulations of a differentially heated cavity with Ra = 1e11 and Pr = 0.71 were carried out on the MareNostrum supercomputer during the first of the project's two years. In addition, the code was adapted to simulate the flow around a wall-mounted cube at Re = 10000. These DNS simulations are the largest performed to date for these configurations, and modelling them correctly is a major challenge because of the complexity of the flows. These new DNS simulations are providing new insight into the physics of turbulence and supplying results that are indispensable for the progress of symmetry-preserving regularization modelling.
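
The regularization idea can be illustrated on a toy problem. The sketch below, assuming a 1D periodic viscous Burgers equation and a simple three-point filter, applies a Leray-type regularization (the convecting velocity is filtered inside a skew-symmetric convective term); it only conveys the idea of damping the production of ever smaller scales while keeping an energy-conserving form, and it is not the specific symmetry-preserving scheme used in the project. All parameters are assumed.

```python
# Minimal 1D periodic Burgers sketch of a Leray-type regularization: the convective
# (non-linear) term uses a filtered convecting velocity, which damps the production
# of ever smaller scales. Illustrative parameters; not the project's actual scheme.
import numpy as np

N, L, nu, dt, steps = 256, 2 * np.pi, 0.05, 1e-3, 2000
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
u = np.sin(x) + 0.1 * np.sin(5 * x)              # illustrative initial condition

def ddx(f):                                       # central difference, periodic
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def d2dx2(f):
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def top_hat(f):                                   # simple 3-point filter (assumed)
    return 0.25 * np.roll(f, 1) + 0.5 * f + 0.25 * np.roll(f, -1)

for _ in range(steps):
    ub = top_hat(u)                               # filtered convecting velocity
    conv = 0.5 * (ub * ddx(u) + ddx(ub * u))      # skew-symmetric (energy-conserving) form
    u = u + dt * (-conv + nu * d2dx2(u))          # explicit Euler step

print("kinetic energy after", steps * dt, "time units:", 0.5 * np.mean(u**2))
```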

Relevance: 100.00%

Abstract:

Different urban structures might affect the life history parameters of Aedes aegypti and, consequently, dengue transmission. Container productivity, probability of daily survival (PDS) and dispersal rates were estimated for mosquito populations in a high-income neighbourhood of Rio de Janeiro. Results were contrasted with those previously found in a suburban district, as well as those recorded in a slum. After inspecting 1,041 premises, domestic drains and discarded plastic pots were identified as the most productive containers, collectively holding up to 80% of the total pupae. In addition, three cohorts of dust-marked Ae. aegypti females were released and recaptured daily using BGS-Traps, sticky ovitraps and backpack aspirators in 50 randomly selected houses; recapture rates ranged from 5% to 12.2% across cohorts. PDS was estimated with two models, ranging from 0.607 to 0.704 (exponential model) and from 0.659 to 0.721 (non-linear model). Mean distance travelled varied from 57 to 122 m, with a maximum dispersal of 263 m. Overall, lower infestation indices and adult female survival were observed in the high-income neighbourhood, suggesting a lower dengue transmission risk in comparison to the suburban area and the slum. Since results show that urban structure can influence mosquito biology, specific control strategies might be used in order to achieve cost-effective Ae. aegypti control.
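
For reference, the exponential-model PDS estimate mentioned above is commonly obtained by regressing the log of the daily recapture counts on the day of recapture and exponentiating the slope. The sketch below follows that standard procedure with invented counts; it is not the study's data, and the non-linear model is not reproduced.

```python
# Hedged sketch: exponential-model estimate of the probability of daily survival (PDS)
# from daily recaptures of marked females: PDS = exp(slope of ln(recaptures) vs day).
# The counts below are invented for illustration only.
import numpy as np

day = np.array([1, 2, 3, 4, 5, 6, 7])
recaptured = np.array([23, 17, 11, 8, 6, 4, 3])        # hypothetical daily recaptures

slope, intercept = np.polyfit(day, np.log(recaptured), 1)
pds_exponential = np.exp(slope)
life_expectancy = 1.0 / -np.log(pds_exponential)        # days, under exponential survival

print(f"PDS (exponential model): {pds_exponential:.3f}")
print(f"mean life expectancy: {life_expectancy:.1f} days")
```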

Relevance: 100.00%

Abstract:

Evolution of compositions in time, space, temperature or other covariates is frequent in practice. For instance, the radioactive decomposition of a sample changes its composition with time. Some of the involved isotopes decompose into other isotopes of the sample, thus producing a transfer of mass from some components to other ones, but preserving the total mass present in the system. This evolution is traditionally modelled as a system of ordinary differential equations of the mass of each component. However, this kind of evolution can be decomposed into a compositional change, expressed in terms of simplicial derivatives, and a mass evolution (constant in this example). A first result is that the simplicial system of differential equations is non-linear, despite some subcompositions behaving linearly. The goal is to study the characteristics of such simplicial systems of differential equations, such as linearity and stability. This is performed by extracting the compositional differential equations from the mass equations. Then, simplicial derivatives are expressed in coordinates of the simplex, thus reducing the problem to the standard theory of systems of differential equations, including stability. The characterisation of stability of these non-linear systems relies on the linearisation of the system of differential equations at the stationary point, if any. The eigenvalues of the linearised matrix and the associated behaviour of the orbits are the main tools. For a three-component system, these orbits can be plotted either in coordinates of the simplex or in a ternary diagram. A characterisation of processes with transfer of mass in closed systems in terms of stability is thus concluded. Two examples are presented for illustration; one of them is a radioactive decay.
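
As a hedged illustration of the kind of system discussed above, the sketch below integrates a closed three-component decay chain A -> B -> C (rates are assumed), checks that total mass is preserved, and obtains the compositional trajectory by closure; the eigenvalues of the mass matrix are the ingredients a linearised stability analysis would use. It is not the paper's simplicial-coordinate treatment.

```python
# Hedged sketch: a closed three-component decay chain A -> B -> C (C stable).
# The mass system is linear, but the induced system for the composition
# (masses divided by total mass) is non-linear in the simplex. Rates are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.5, 0.2                        # decay rates (assumed values)
A = np.array([[-k1, 0.0, 0.0],           # dm/dt = A m (mass transfer, total preserved)
              [ k1, -k2, 0.0],
              [0.0,  k2, 0.0]])

def mass_ode(t, m):
    return A @ m

sol = solve_ivp(mass_ode, (0.0, 30.0), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 30.0, 200)
m = sol.sol(t)
composition = m / m.sum(axis=0)          # closure: the compositional trajectory

print("total mass at t=30:", round(float(m.sum(axis=0)[-1]), 4))
print("composition at t=30:", composition[:, -1].round(3))
print("eigenvalues of the mass system:", np.linalg.eigvals(A).round(3))
```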

Relevance: 100.00%

Abstract:

BACKGROUND In previous meta-analyses, tea consumption has been associated with lower incidence of type 2 diabetes. It is unclear, however, whether tea is associated inversely over the entire range of intake. Therefore, we investigated the association between tea consumption and incidence of type 2 diabetes in a European population. METHODOLOGY/PRINCIPAL FINDINGS The EPIC-InterAct case-cohort study was conducted in 26 centers in 8 European countries and consists of a total of 12,403 incident type 2 diabetes cases and a stratified subcohort of 16,835 individuals from a total cohort of 340,234 participants with 3.99 million person-years of follow-up. Country-specific Hazard Ratios (HR) for incidence of type 2 diabetes were obtained after adjustment for lifestyle and dietary factors using a Cox regression adapted for a case-cohort design. Subsequently, country-specific HR were combined using a random effects meta-analysis. Tea consumption was studied as a categorical variable (0, >0-<1, 1-<4, ≥ 4 cups/day). The dose-response of the association was further explored by restricted cubic spline regression. Country-specific medians of tea consumption ranged from 0 cups/day in Spain to 4 cups/day in the United Kingdom. Tea consumption was associated inversely with incidence of type 2 diabetes; the HR was 0.84 [95%CI 0.71, 1.00] when participants who drank ≥ 4 cups of tea per day were compared with non-drinkers (p(linear trend) = 0.04). Incidence of type 2 diabetes already tended to be lower with tea consumption of 1-<4 cups/day (HR = 0.93 [95%CI 0.81, 1.05]). Spline regression did not suggest a non-linear association (p(non-linearity) = 0.20). CONCLUSIONS/SIGNIFICANCE A linear inverse association was observed between tea consumption and incidence of type 2 diabetes. People who drink at least 4 cups of tea per day may have a 16% lower risk of developing type 2 diabetes than non-tea drinkers.
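
To make the pooling step concrete, the sketch below applies a standard DerSimonian-Laird random-effects combination of country-specific log hazard ratios; the HRs and confidence limits are invented placeholders, not EPIC-InterAct estimates, and the spline analysis is not reproduced.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of country-specific
# log hazard ratios. The HRs and CI limits below are invented placeholders.
import numpy as np

hr = np.array([0.75, 0.92, 0.88, 0.70, 1.05])        # hypothetical country HRs
ci_upper = np.array([1.10, 1.30, 1.15, 1.05, 1.60])  # hypothetical upper 95% CI limits

y = np.log(hr)                                        # log-HR per country
se = (np.log(ci_upper) - y) / 1.96                    # SE recovered from the CI width
w = 1.0 / se**2                                       # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of the between-country variance tau^2
q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (se**2 + tau2)                           # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
pooled_se = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled HR: {np.exp(pooled):.2f} "
      f"[{np.exp(pooled - 1.96 * pooled_se):.2f}, {np.exp(pooled + 1.96 * pooled_se):.2f}]")
```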

Relevance: 100.00%

Abstract:

Raman spectroscopy has become an attractive tool for the analysis of pharmaceutical solid dosage forms. In the present study it is used to verify the identity of tablets. The two main applications of the method are release of final products in quality control and detection of counterfeits. Twenty-five product families of tablets were included in the spectral library and a non-linear classification method, Support Vector Machines (SVMs), was employed. Two calibrations were developed in cascade: the first identifies the product family while the second specifies the formulation. A product family comprises different formulations that contain the same active pharmaceutical ingredient (API) in different amounts. Once the tablets have been classified by the SVM model, API peak detection and correlation are applied in order to make the identification specific and, in the future, to allow counterfeits to be discriminated from genuine products. This calibration strategy enables the identification of the 25 product families without error and in the absence of prior information about the sample. Raman spectroscopy coupled with chemometrics is therefore a fast and accurate tool for the identification of pharmaceutical tablets.
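
A minimal sketch of the cascade idea, assuming preprocessed spectra in X with family and formulation labels (placeholders, not the 25-family library), could look as follows with scikit-learn.

```python
# Hedged sketch of the two-stage (cascade) SVM idea: one classifier assigns the
# product family, then a per-family classifier assigns the formulation.
# X, family and formulation are placeholders, not the study's spectral library.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_cascade(X, family, formulation):
    family_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X, family)
    formulation_clfs = {}
    for fam in np.unique(family):
        idx = family == fam
        if len(np.unique(formulation[idx])) > 1:         # family with several dose strengths
            formulation_clfs[fam] = make_pipeline(
                StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X[idx], formulation[idx])
    return family_clf, formulation_clfs

def predict_cascade(family_clf, formulation_clfs, x):
    fam = family_clf.predict(x.reshape(1, -1))[0]
    clf = formulation_clfs.get(fam)
    form = clf.predict(x.reshape(1, -1))[0] if clf is not None else None
    return fam, form

# Tiny synthetic usage example (random "spectra"; all labels are placeholders).
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 300))
family = np.repeat(np.array(["A", "B", "C"]), 20)
formulation = np.repeat(np.array(["A-50mg", "A-100mg", "B-200mg",
                                  "B-400mg", "C-10mg", "C-10mg"]), 10)
fam_clf, form_clfs = train_cascade(X, family, formulation)
print(predict_cascade(fam_clf, form_clfs, X[0]))
```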

Relevance: 100.00%

Abstract:

This paper deals with the problem of spatial data mapping. A new method based on wavelet interpolation and geostatistical prediction (kriging) is proposed. The method - wavelet analysis residual kriging (WARK) - is developed in order to address the problems arising with highly variable data in the presence of spatial trends. In these cases stationary prediction models have very limited application. Wavelet analysis is used to model large-scale structures, and kriging of the remaining residuals focuses on small-scale peculiarities. WARK is able to model spatial patterns which feature multiscale structure. In the present work WARK is applied to rainfall data and the results of validation are compared with the ones obtained from neural network residual kriging (NNRK). NNRK is also a residual-based method, which uses an artificial neural network to model large-scale non-linear trends. The comparison of the results demonstrates the high-quality performance of WARK in predicting hot spots, reproducing global statistical characteristics of the distribution and the spatial correlation structure.
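
A hedged 1D sketch of the residual idea is given below: a wavelet approximation (PyWavelets) captures the large-scale trend and a Gaussian-process regressor stands in for kriging of the residuals. The data, wavelet and kernel choices are illustrative, not those of the rainfall case study.

```python
# Hedged 1D sketch of the residual idea behind WARK: a coarse wavelet approximation
# models the large-scale trend; a Gaussian-process regressor (a stand-in for kriging)
# models the remaining small-scale residuals. Data are synthetic.
import numpy as np
import pywt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)
z = np.sin(2 * np.pi * x) + 0.3 * np.sin(20 * np.pi * x) + 0.1 * rng.standard_normal(256)

# Large-scale trend: keep only the coarse wavelet approximation coefficients.
coeffs = pywt.wavedec(z, "db4", level=4)
coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(coeffs, "db4")[: len(z)]

# Small-scale residuals: GP regression as a stand-in for residual kriging.
residual = z - trend
gp = GaussianProcessRegressor(kernel=RBF(0.05) + WhiteKernel(0.01), normalize_y=True)
gp.fit(x.reshape(-1, 1), residual)

x_new = np.array([[0.123], [0.5], [0.987]])
z_hat = np.interp(x_new.ravel(), x, trend) + gp.predict(x_new)
print("predictions:", z_hat.round(3))
```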

Relevance: 100.00%

Abstract:

The vast territories that were radioactively contaminated during the 1986 Chernobyl accident provide a substantial data set of radioactive monitoring data, which can be used for the verification and testing of the different spatial estimation (prediction) methods involved in risk assessment studies. Using the Chernobyl data set for such a purpose is motivated by its heterogeneous spatial structure (the data are characterized by large-scale correlations, short-scale variability, spotty features, etc.). The present work is concerned with the application of the Bayesian Maximum Entropy (BME) method to estimate the extent and the magnitude of the radioactive soil contamination by 137Cs due to the Chernobyl fallout. The powerful BME method allows rigorous incorporation of a wide variety of knowledge bases into the spatial estimation procedure, leading to informative contamination maps. Exact measurements ("hard" data) are combined with secondary information on local uncertainties (treated as "soft" data) to generate a science-based uncertainty assessment of soil contamination estimates at unsampled locations. BME describes uncertainty in terms of the posterior probability distributions generated across space, whereas no assumption about the underlying distribution is made and non-linear estimators are automatically incorporated. Traditional estimation variances based on the assumption of an underlying Gaussian distribution (analogous, e.g., to the kriging variance) can be derived as a special case of the BME uncertainty analysis. The BME estimates obtained using hard and soft data are compared with the BME estimates obtained using only hard data. The comparison involves both the accuracy of the estimation maps using the exact data and the assessment of the associated uncertainty using repeated measurements. Furthermore, a comparison of the spatial estimation accuracy obtained by the two methods was carried out using a validation data set of hard data. Finally, a separate uncertainty analysis was conducted to evaluate the ability of the posterior probabilities to reproduce the distribution of the raw repeated measurements available at certain populated sites. The analysis provides an illustration of the improvement in mapping accuracy obtained by adding soft data to the existing hard data and, in general, demonstrates that the BME method performs well both in terms of estimation accuracy and in terms of estimation error assessment, which are both useful features for the Chernobyl fallout study.

Relevance: 100.00%

Abstract:

This paper provides a method to estimate time-varying coefficient structural VARs which are non-recursive and potentially overidentified. The procedure allows for linear and non-linear restrictions on the parameters, maintains the multi-move structure of standard algorithms and can be used to estimate structural models with different identification restrictions. We study the transmission of monetary policy shocks and compare the results with those obtained with traditional methods.
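
The core building block of such estimators is a state-space regression with random-walk coefficients. The sketch below implements only the Kalman filtering recursion for a univariate regression with time-varying coefficients (simulated data, assumed variances); it is not the paper's multi-move algorithm for overidentified structural VARs.

```python
# Hedged sketch of the building block behind time-varying-coefficient models:
# a Kalman filter for y_t = x_t' beta_t + e_t, with beta_t = beta_{t-1} + v_t.
import numpy as np

def tvp_kalman_filter(y, X, sigma_e2, Q, beta0, P0):
    """Filtered estimates of random-walk coefficients beta_t."""
    T, k = X.shape
    beta, P = beta0.copy(), P0.copy()
    betas = np.zeros((T, k))
    for t in range(T):
        x = X[t]
        P = P + Q                                  # predict: beta_{t|t-1} = beta_{t-1|t-1}
        f = x @ P @ x + sigma_e2                   # forecast-error variance
        K = P @ x / f                              # Kalman gain
        beta = beta + K * (y[t] - x @ beta)        # update with the forecast error
        P = P - np.outer(K, x @ P)
        betas[t] = beta
    return betas

# Tiny usage example with simulated data (all values illustrative).
rng = np.random.default_rng(1)
T = 200
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
true_beta = np.cumsum(0.05 * rng.standard_normal((T, 2)), axis=0) + np.array([1.0, 0.5])
y = np.sum(X * true_beta, axis=1) + 0.3 * rng.standard_normal(T)
betas = tvp_kalman_filter(y, X, sigma_e2=0.09, Q=0.0025 * np.eye(2),
                          beta0=np.zeros(2), P0=np.eye(2))
print("final filtered coefficients:", betas[-1].round(2))
```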

Relevance: 100.00%

Abstract:

Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviate from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis and do not require the computation of sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.
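
A small numeric sketch of the distinction is given below: the distribution-free estimate of $\Gamma$ is the sample covariance of the centred cross-products (and thus involves fourth-order moments), while the normal-theory matrix with elements $\sigma_{ik}\sigma_{jl} + \sigma_{il}\sigma_{jk}$ is a function of the cross-product moments only. The latter is shown purely to illustrate the point about avoiding fourth-order moments; it is not necessarily the paper's $\Omega$.

```python
# Hedged sketch: distribution-free estimate of Gamma (needs fourth-order moments)
# versus a normal-theory matrix built only from the cross-product moments S.
# Data are simulated; the normal-theory matrix is shown for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 3
X = rng.standard_normal((n, p)) @ np.array([[1.0, 0.5, 0.2],
                                            [0.0, 1.0, 0.3],
                                            [0.0, 0.0, 1.0]])
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / n                                   # sample covariance (cross-product moments)

pairs = [(i, j) for j in range(p) for i in range(j, p)]   # vech ordering

# Distribution-free Gamma: covariance of d_i = vech((x_i - xbar)(x_i - xbar)')
D = np.column_stack([Xc[:, i] * Xc[:, j] for (i, j) in pairs])
gamma_adf = np.cov(D, rowvar=False)

# Normal-theory counterpart, a function of S only (no fourth-order moments needed)
gamma_nt = np.array([[S[i, k] * S[j, l] + S[i, l] * S[j, k]
                      for (k, l) in pairs] for (i, j) in pairs])

print("max |ADF - normal-theory| element:", np.abs(gamma_adf - gamma_nt).max().round(3))
```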

Relevance: 100.00%

Abstract:

In order to have references for discussing mathematical menus in political science, I review the most common types of mathematical formulae used in physics and chemistry, as well as some mathematical advances in economics. Several issues appear relevant: variables should be well defined and measurable; the relationships between variables may be non-linear; the direction of causality should be clearly identified and not assumed on a priori grounds. On these bases, theoretically-driven equations on political matters can be validated by empirical tests and can predict observable phenomena.

Relevance: 100.00%

Abstract:

This study explored the links between having older siblings who get drunk, satisfaction with the parent-adolescent relationship, parental monitoring, and adolescents' risky drinking. Regression models were conducted based on a nationally representative sample of 3725 8th to 10th graders in Switzerland (mean age 15.0, SD = .93) who indicated having older siblings. Results showed that both parental factors and older siblings' drinking behaviour shape younger siblings' frequency of risky drinking. Parental monitoring showed a linear dose-response relationship, and siblings' influence had an additive effect. There was a non-linear interaction effect between the parent-adolescent relationship and older siblings' drunkenness. The findings suggest that, apart from avoiding an increasingly unsatisfactory relationship with their children, parental monitoring appears to be important in preventing risky drinking by their younger children, even when an older sibling gets drunk. However, a satisfying relationship with parents does not seem to be sufficient to counterbalance older siblings' influence.

Relevance: 100.00%

Abstract:

This paper illustrates the philosophy which forms the basis of calibration exercises in general equilibrium macroeconomic models and the details of the procedure, the advantages and the disadvantages of the approach, with particular reference to the issue of testing "false" economic models. We provide an overview of the most recent simulation-based approaches to the testing problem and compare them to standard econometric methods used to test the fit of non-linear dynamic general equilibrium models. We illustrate how simulation-based techniques can be used to formally evaluate the fit of a calibrated model to the data and to obtain ideas on how to improve the model design, using a standard problem in the international real business cycle literature, i.e. whether a model with complete financial markets and no restrictions on capital mobility is able to reproduce the second-order properties of aggregate saving and aggregate investment in an open economy.

Relevance: 100.00%

Abstract:

Context: There are no evidence syntheses available to guide clinicians on when to titrate antihypertensive medication after initiation. Objective: To model the blood pressure (BP) response after initiating antihypertensive medication. Data sources: Electronic databases including Medline, Embase, the Cochrane Register and reference lists up to December 2009. Study selection: Trials that initiated antihypertensive medication as single therapy in hypertensive patients who were either drug naive or had a placebo washout from previous drugs. Data extraction: Office BP measurements at a minimum of two weekly intervals for a minimum of 4 weeks. An asymptotic approach model of BP response was assumed and non-linear mixed-effects modelling was used to calculate model parameters. Results: Eighteen trials that recruited 4168 patients met the inclusion criteria. The time to reach 50% of the maximum estimated BP-lowering effect was 1 week (systolic 0.91 weeks, 95% CI 0.74 to 1.10; diastolic 0.95, 0.75 to 1.15). Models incorporating drug class as a source of variability did not improve the fit of the data. Incorporating the presence of a titration schedule improved model fit for both systolic and diastolic pressure. Titration increased both the predicted maximum effect and the time taken to reach 50% of the maximum (systolic 1.2 vs 0.7 weeks; diastolic 1.4 vs 0.7 weeks). Conclusions: Estimates of the maximum efficacy of antihypertensive agents can be made early after starting therapy. This knowledge will guide clinicians in deciding when a newly started antihypertensive agent is likely to be effective or not at controlling BP.
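
As a hedged illustration of the asymptotic-approach idea, the sketch below fits delta_BP(t) = Emax * (1 - 0.5^(t/t50)), in which t50 is the time to reach 50% of the maximum effect, to invented weekly readings with scipy; the review's exact parameterisation and its non-linear mixed-effects machinery are not reproduced.

```python
# Hedged sketch: fit an asymptotic-approach model of the BP response,
# delta_bp(t) = emax * (1 - 0.5**(t / t50)), where t50 is the time to reach
# half of the maximum effect. The parameterisation and the weekly readings
# are illustrative; they are not the meta-analysis data or model.
import numpy as np
from scipy.optimize import curve_fit

def asymptotic_response(t_weeks, emax, t50):
    return emax * (1.0 - 0.5 ** (t_weeks / t50))

weeks = np.array([0, 2, 4, 6, 8, 12])
delta_sbp = np.array([0.0, -7.5, -10.2, -11.5, -12.1, -12.6])   # hypothetical mmHg changes

(emax, t50), _ = curve_fit(asymptotic_response, weeks, delta_sbp, p0=(-12.0, 1.0))
print(f"estimated maximum effect: {emax:.1f} mmHg, time to 50% of effect: {t50:.2f} weeks")
```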

Relevance: 100.00%

Abstract:

Site-specific regression coefficient values are essential for erosion prediction with empirical models. With the objective of investigating the surface-soil consolidation factor, Cf, linked to the RUSLE's prior-land-use subfactor, PLU, an erosion experiment using simulated rainfall on a 0.075 m m-1 slope, sandy loam Paleudult soil, was conducted at the Agriculture Experimental Station of the Federal University of Rio Grande do Sul (EEA/UFRGS), in Eldorado do Sul, State of Rio Grande do Sul, Brazil. First, a row-cropped area was excluded from cultivation (March 1995), the existing crop residue was removed from the field, and the soil was kept clean-tilled for the rest of the year (to obtain a degraded soil condition for the intended purpose of this research). The soil was then conventionally tilled for the last time (except for a standard plot which was kept continuously clean-tilled for comparison purposes) in January 1996, and the following treatments were established and evaluated for soil reconsolidation and soil erosion until May 1998, on duplicated 3.5 x 11.0 m erosion plots: (a) fresh-tilled soil, continuously in clean-tilled fallow (unit plot); (b) reconsolidating soil without cultivation; and (c) reconsolidating soil with cultivation (a crop sequence of three corn and two black oat cycles, continuously in no-till, removing the crop residues after each harvest for rainfall application and redistributing them on the site afterwards). Simulated rainfall was applied with a Swanson-type, rotating-boom rainfall simulator, at 63.5 mm h-1 intensity and 90 min duration, six times during the two-and-a-half-year experimental period (at the beginning of the study and after each crop harvest, with the soil in the unit plot being retilled before each rainfall test). The soil-surface-consolidation factor, Cf, was calculated by dividing soil loss values from the reconsolidating soil treatments by the average value from the fresh-tilled soil treatment (unit plot). Non-linear regression was used to fit the model Cf = e^(b·t) to the calculated Cf data, where t is time in days since the last tillage. Values for b were -0.0020 for the reconsolidating soil without cultivation and -0.0031 for the one with cultivation, yielding Cf values equal to 0.16 and 0.06, respectively, after two and a half years of tillage discontinuation, compared to 1.0 for fresh-tilled soil. These estimated Cf values correspond, respectively, to soil loss reductions of 84 and 94% relative to the soil loss from the fresh-tilled soil, showing that the soil surface reconsolidated more intensely with cultivation than without it. Two distinct treatment-inherent soil surface conditions probably influenced the rapid decay rate of the Cf values in this study, but, as a matter of fact, they were part of the real environmental field conditions. The Cf-factor curves presented in this paper are therefore useful for predicting erosion with RUSLE, but their application is restricted to situations where both the soil type and the particular soil surface condition are similar to the ones investigated in this study.
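
For reference, the fitted model and the end-of-study values quoted above can be reproduced with a short non-linear fit; in the sketch below only the model form Cf = e^(b·t) and the reported b values come from the abstract, while the sampled Cf points are placeholders.

```python
# Hedged sketch of the non-linear fit Cf = exp(b * t) (t in days since last tillage)
# and of the end-of-study Cf values. The six sampled Cf points are placeholders;
# only the model form and the reported b values come from the abstract.
import numpy as np
from scipy.optimize import curve_fit

def cf_model(t_days, b):
    return np.exp(b * t_days)

t = np.array([0, 150, 320, 500, 700, 912])                # days since last tillage (illustrative)
cf_obs = np.array([1.00, 0.72, 0.52, 0.36, 0.25, 0.16])   # hypothetical Cf ratios

(b,), _ = curve_fit(cf_model, t, cf_obs, p0=(-0.002,))
print(f"fitted b: {b:.4f} per day")

# Cf after two and a half years for the two reported coefficients
for b_reported in (-0.0020, -0.0031):
    print(f"b = {b_reported}: Cf(912 d) = {np.exp(b_reported * 912):.2f}")
```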

Relevance: 100.00%

Abstract:

In this paper we examine the effect of tax policy on the relationship between inequality and growth in a two-sector non-scale model. In non-scale models, the long-run equilibrium growth rate is determined by technological parameters and is independent of macroeconomic policy instruments. However, this fact does not imply that fiscal policy is unimportant for long-run economic performance. It indeed has important effects on the levels of key economic variables such as the per capita stock of capital and output. Hence, although the economy grows at the same rate across steady states, the bases for economic growth may differ. The model has three essential features. First, we explicitly model skill accumulation; second, we introduce government finance into the production function; and third, we introduce an income tax to mirror the fiscal events of the 1980s and 1990s in the US. The fact that the non-scale model is associated with higher-order dynamics enables it to replicate the distinctly non-linear nature of inequality in the US with relative ease. The results derived in this paper draw attention to the fact that the non-scale growth model not only fits the US data well for the long run (Jones, 1995b) but also possesses unique abilities in explaining short-term fluctuations of the economy. It is shown that during the transition the response of the simulated relative wage to changes in the tax code is rather non-monotonic, quite in accordance with the US inequality pattern of the 1980s and early 1990s. More specifically, we have analyzed in detail the dynamics following the simulation of an isolated tax decrease and an isolated tax increase. After a tax decrease, the skill premium follows a lower trajectory than the one it would follow without the tax decrease; hence inequality is reduced for several periods after the fiscal shock. On the contrary, following a tax increase, the skill premium remains above the trajectory it would follow without the tax increase. Consequently, a tax increase implies a higher level of inequality in the economy.