933 results for Non-linear error correction models
Abstract:
In this paper we examine the effect of tax policy on the relationship between inequality and growth in a two-sector non-scale model. In non-scale models, the long-run equilibrium growth rate is determined by technological parameters and is independent of macroeconomic policy instruments. This fact does not imply, however, that fiscal policy is unimportant for long-run economic performance: it has important effects on the levels of key economic variables such as the per capita stock of capital and output. Hence, although the economy grows at the same rate across steady states, the bases for economic growth may differ. The model has three essential features: first, we explicitly model skill accumulation; second, we introduce government finance into the production function; and third, we introduce an income tax to mirror the fiscal events of the 1980s and 1990s in the US. Because the non-scale model is associated with higher-order dynamics, it can replicate the distinctly non-linear pattern of inequality in the US with relative ease. The results derived in this paper draw attention to the fact that the non-scale growth model not only fits the US data well in the long run (Jones, 1995b) but also possesses unique abilities in explaining short-term fluctuations of the economy. It is shown that during the transition the response of the simulated relative wage to changes in the tax code is distinctly non-monotonic, in close accordance with the US inequality pattern of the 1980s and early 1990s. More specifically, we analyze in detail the dynamics following the simulation of an isolated tax decrease and an isolated tax increase. After a tax decrease, the skill premium follows a lower trajectory than it would have followed without the tax decrease; hence inequality is reduced for several periods after the fiscal shock. On the contrary, following a tax increase, the skill premium remains above the trajectory it would have followed with no tax increase. Consequently, a tax increase implies a higher level of inequality in the economy.
Abstract:
Soil moisture is the property that most greatly influences the soil dielectric constant, which is also influenced by soil mineralogy. The aim of this study was to determine mathematical models relating soil moisture and the dielectric constant (Ka) for a Hapludalf, two clayey Hapludox and a very clayey Hapludox, and to test the reliability of universal models, such as those proposed by Topp and Ledieu and their co-workers in the 1980s, and of specific models to estimate soil moisture with a TDR. Soil samples were collected from the 0 to 0.30 m layer, sieved through a mesh of 0.002 m diameter and packed in PVC cylinders with a 0.1 m diameter and 0.3 m height. Seven samples of each soil class were saturated by capillarity and a probe composed of two rods was inserted in each one of them. Moisture readings began with the saturated soil and concluded when the soil was near the permanent wilting point. In each step, the samples were weighed on a precision scale to calculate volumetric moisture. Linear and polynomial models between soil moisture and the dielectric constant were adjusted for each soil class and for all soils together. Accuracy of the models was evaluated by the coefficient of determination, the standard error of estimate and the 1:1 line. The models proposed by Topp and Ledieu and their co-workers were not adequate for estimating the moisture in the soil classes studied. The linear and polynomial models adjusted for the entire data set of the four soil classes did not have sufficient accuracy for estimating soil moisture. The greater the soil clay and Fe oxide content, the greater the dielectric constant of the medium for a given volumetric moisture. The specific models, θ = 0.40283 - 0.04231 Ka + 0.00194 Ka² - 0.000022 Ka³ (Hapludox), θ = 0.01971 + 0.02902 Ka - 0.00086 Ka² + 0.000012 Ka³ (Hapludox-PF), θ = 0.01692 - 0.00507 Ka (Hapludalf) and θ = 0.08471 + 0.01145 Ka (Hapludox-CA), showed greater accuracy and reliability for estimating soil moisture in the soil classes studied.
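These calibration equations can be applied directly to TDR readings. As a minimal sketch in Python (the coefficients are the ones quoted above; the example reading Ka = 25 is hypothetical):

    # Soil-specific TDR calibration models quoted in the abstract:
    # volumetric moisture (theta) as a function of the dielectric constant Ka.
    def theta_hapludox(ka):
        return 0.40283 - 0.04231*ka + 0.00194*ka**2 - 0.000022*ka**3

    def theta_hapludox_pf(ka):
        return 0.01971 + 0.02902*ka - 0.00086*ka**2 + 0.000012*ka**3

    def theta_hapludalf(ka):
        return 0.01692 - 0.00507*ka

    def theta_hapludox_ca(ka):
        return 0.08471 + 0.01145*ka

    # Example: estimate moisture for a hypothetical reading of Ka = 25.
    print(theta_hapludox(25.0))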
Abstract:
Soil penetration resistance is an important property that affects root growth and elongation and water movement in the soil. Since no-till systems tend to increase organic matter in the soil, the purpose of this study was to evaluate the efficiency with which soil penetration resistance is estimated by a proposed model based on moisture content, bulk density and organic matter content in an Oxisol containing 665, 221 and 114 g kg⁻¹ of clay, silt and sand, respectively, under annual no-till cropping, located in Londrina, Paraná State, Brazil. Penetration resistance was evaluated at random locations continually from May 2008 to February 2011, using an impact penetrometer to obtain a total of 960 replications. For the measurements, soil was sampled at depths of 0 to 20 cm to determine gravimetric moisture (G), bulk density (D) and organic matter content (M). The penetration resistance (PR) curve was fitted using two non-linear models (PR = a D^b G^c and PR' = a D^b G^c M^d), where a, b, c and d are coefficients of the fitted model. It was found that the model that included M was the more efficient of the two for estimating PR, explaining 91 % of PR variability, compared to 82 % for the other model.
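Fitting such a power-law model is straightforward to sketch; the data below are synthetic stand-ins, and scipy's curve_fit stands in for whatever estimation routine the authors used:

    import numpy as np
    from scipy.optimize import curve_fit

    # Penetration resistance model PR' = a * D^b * G^c * M^d, fitted on
    # synthetic observations of bulk density (D), gravimetric moisture (G)
    # and organic matter (M).
    def pr_model(x, a, b, c, d):
        D, G, M = x
        return a * D**b * G**c * M**d

    rng = np.random.default_rng(0)
    D = rng.uniform(1.0, 1.5, 200)        # Mg m-3
    G = rng.uniform(0.20, 0.40, 200)      # kg kg-1
    M = rng.uniform(30.0, 60.0, 200)      # g kg-1
    PR = 0.5 * D**3.0 * G**-1.2 * M**0.5 * rng.lognormal(0.0, 0.05, 200)

    (a, b, c, d), _ = curve_fit(pr_model, (D, G, M), PR, p0=(1.0, 1.0, -1.0, 0.1))
    pred = pr_model((D, G, M), a, b, c, d)
    r2 = 1.0 - np.sum((PR - pred)**2) / np.sum((PR - PR.mean())**2)
    print(f"a={a:.2f} b={b:.2f} c={c:.2f} d={d:.2f}  R^2={r2:.3f}")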
Abstract:
Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders.
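The computational trick can be sketched generically: precompute a dictionary of model responses and replace the non-linear fit by a fast non-negative linear one. This is an illustration with a synthetic dictionary, not the actual ActiveAx/NODDI kernels:

    import numpy as np
    from scipy.optimize import nnls

    # AMICO-style linearization sketch: precompute a dictionary of model
    # responses (one column per candidate microstructure configuration),
    # then fit each measured signal as a non-negative linear combination.
    rng = np.random.default_rng(1)
    n_meas, n_atoms = 90, 40
    Phi = np.abs(rng.normal(size=(n_meas, n_atoms)))   # synthetic dictionary
    x_true = np.zeros(n_atoms)
    x_true[[3, 17]] = [0.7, 0.3]                       # sparse ground truth
    y = Phi @ x_true + 0.01 * rng.normal(size=n_meas)  # noisy "signal"

    x_hat, residual = nnls(Phi, y)   # fast convex fit, voxel by voxel
    print("recovered atoms:", np.flatnonzero(x_hat > 0.05))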
Abstract:
The nutritional state of the pineapple plant has a large effect on plant growth, fruit production, and fruit quality. The aim of this study was to assess the uptake, accumulation, and export of nutrients by the irrigated 'Vitória' pineapple plant during and at the end of its development. A randomized block design with four replications was used. The treatments were defined by different times of plant collection: at 270, 330, 390, 450, 510, 570, 690, 750, and 810 days after planting (DAP). The collected plants were separated into the following components: leaves, stem, roots, fruit, and slips for determination of fresh and dry matter weight at 65 ºC. After drying, the plant components were ground for characterization of the composition and content of nutrients taken up and exported by the pineapple plant. The results were subjected to analysis of variance, and non-linear regression models were fitted where significant differences were identified by the F test (p<0.01). The leaves and the stem were the plant components that showed the greatest accumulation of nutrients. For production of 72 t ha⁻¹ of fruit, macronutrient accumulation in the 'Vitória' pineapple exhibited the following decreasing order: K > N > S > Ca > Mg > P, which corresponded to 898, 452, 134, 129, 126, and 107 kg ha⁻¹, respectively, of total accumulation. The export of macronutrients by the pineapple fruit followed the decreasing order K > N > S > Ca > P > Mg, equivalent to 18, 17, 11, 8, 8, and 5 %, respectively, of the total accumulated by the pineapple. The 'Vitória' pineapple plant exported 78 kg ha⁻¹ of N, 8 kg ha⁻¹ of P, 164 kg ha⁻¹ of K, 14 kg ha⁻¹ of S, 10 kg ha⁻¹ of Ca, and 6 kg ha⁻¹ of Mg in the fruit. The nutrients exported in the fruit represent an important component of nutrient extraction from the soil, which needs to be restored, while the nutrients contained in the leaves, stems and roots can be returned to the soil within a program of recycling of crop residues.
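As an illustration of the kind of non-linear regression used for nutrient accumulation over time, a logistic curve can be fitted to cumulative uptake against DAP (the functional form and the intermediate data points below are assumptions; only the 898 kg ha⁻¹ total for K comes from the abstract):

    import numpy as np
    from scipy.optimize import curve_fit

    # Logistic accumulation curve: cumulative K uptake (kg/ha) vs days
    # after planting (DAP). Intermediate data points are hypothetical.
    def logistic(t, A, k, t0):
        return A / (1.0 + np.exp(-k * (t - t0)))

    dap = np.array([270, 330, 390, 450, 510, 570, 690, 750, 810], float)
    k_uptake = np.array([60, 130, 260, 430, 600, 720, 840, 880, 898], float)

    (A, k, t0), _ = curve_fit(logistic, dap, k_uptake, p0=(900, 0.01, 450))
    print(f"asymptote={A:.0f} kg/ha, rate={k:.4f}/day, inflection={t0:.0f} DAP")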
Abstract:
Estimating the time since discharge of a spent cartridge or a firearm can be useful in criminal situations involving firearms. The analysis of volatile gunshot residue remaining after shooting using solid-phase microextraction (SPME) followed by gas chromatography (GC) was proposed to meet this objective. However, current interpretative models suffer from several conceptual drawbacks which render them inadequate to assess the evidential value of a given measurement. This paper aims to fill this gap by proposing a logical approach based on the assessment of likelihood ratios. A probabilistic model was thus developed and applied to a hypothetical scenario where alternative hypotheses about the discharge time of a spent cartridge found on a crime scene were forwarded. In order to estimate the parameters required to implement this solution, a non-linear regression model was proposed and applied to real published data. The proposed approach proved to be a valuable method for interpreting aging-related data.
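In outline, the likelihood-ratio evaluation combines the fitted aging curve with a measurement-error model. A schematic sketch under an assumed exponential decay with Gaussian error (all parameter values hypothetical):

    import numpy as np
    from scipy.stats import norm

    # Assumed aging model: the SPME-GC signal decays exponentially with
    # time since discharge, with Gaussian measurement error (all values
    # hypothetical, for illustration only).
    def expected_signal(t_hours, s0=100.0, decay=0.05):
        return s0 * np.exp(-decay * t_hours)

    def likelihood_ratio(measurement, t_prosecution, t_defense, sigma=8.0):
        # LR = p(evidence | H_p: discharged t_p hours ago)
        #    / p(evidence | H_d: discharged t_d hours ago)
        lp = norm.pdf(measurement, loc=expected_signal(t_prosecution), scale=sigma)
        ld = norm.pdf(measurement, loc=expected_signal(t_defense), scale=sigma)
        return lp / ld

    # Evidence: signal of 55 units; H_p says 12 h ago, H_d says 48 h ago.
    print(likelihood_ratio(55.0, t_prosecution=12.0, t_defense=48.0))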
Abstract:
We present a heuristic method for learning error correcting output codes matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, the optimal codeword separation is sacrificed in favor of a maximum class discrimination in the partitions. The creation of the hierarchical partition set is performed using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated using the UCI database and applied to a real problem, the classification of traffic sign images.
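A minimal sketch of how a binary tree over the class set yields such a compact coding matrix (the split here is a simple bisection stub; the method itself chooses the partition that maximizes a discriminative criterion):

    import numpy as np

    # Build an ECOC-style coding matrix from a recursive binary partition
    # of the class set. Each tree node contributes one column: classes in
    # the left part are coded +1, right part -1, classes not involved 0.
    def tree_ecoc(classes):
        columns = []

        def split(group):
            if len(group) < 2:
                return
            left, right = group[: len(group) // 2], group[len(group) // 2 :]
            col = np.zeros(len(classes), dtype=int)
            col[left] = 1
            col[right] = -1
            columns.append(col)
            split(left)
            split(right)

        split(list(range(len(classes))))
        return np.column_stack(columns)

    M = tree_ecoc(["A", "B", "C", "D", "E"])
    print(M)   # n_classes x (n_classes - 1) compact coding matrix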
Abstract:
Background: The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism, where it showed the best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, and result in improved sensitivity and specificity, as the examples with real data have revealed.
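Schematically, sample-specific thresholding can be illustrated with a plain normal tolerance interval (a loose stand-in: the paper derives its thresholds from mixed-model estimates of each sample's error variability):

    import numpy as np
    from scipy.stats import norm, chi2

    # Sample-specific threshold sketch: normalized MLPA ratios of the
    # reference (non-altered) probes in one test sample define that
    # sample's random-error variability; a tolerance interval flags outliers.
    def tolerance_interval(ratios, coverage=0.99, confidence=0.95):
        n = len(ratios)
        m, s = np.mean(ratios), np.std(ratios, ddof=1)
        # Two-sided normal tolerance factor (Howe approximation).
        z = norm.ppf((1 + coverage) / 2)
        k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2.ppf(1 - confidence, n - 1))
        return m - k * s, m + k * s

    rng = np.random.default_rng(2)
    reference_ratios = rng.normal(1.0, 0.05, 30)   # hypothetical sample
    lo, hi = tolerance_interval(reference_ratios)
    probe_ratio = 1.45                              # candidate duplication
    print("altered" if not lo <= probe_ratio <= hi else "normal", (lo, hi))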
Abstract:
This paper presents multiple kernel learning (MKL) regression as an exploratory spatial data analysis and modelling tool. The MKL approach is introduced as an extension of support vector regression, where MKL uses dedicated kernels to divide a given task into sub-problems and to treat them separately in an effective way. It provides better interpretability to non-linear robust kernel regression at the cost of a more complex numerical optimization. In particular, we investigate the use of MKL as a tool that allows us to avoid using ad hoc topographic indices as covariates in statistical models in complex terrain. Instead, MKL learns these relationships from the data in a non-parametric fashion. A study on data simulated from real terrain features confirms the ability of MKL to enhance the interpretability of data-driven models and to aid feature selection without degrading predictive performance. Here we examine the stability of the MKL algorithm with respect to the number of training data samples and to the presence of noise. The results of a real case study are also presented, where MKL is able to exploit a large set of terrain features computed at multiple spatial scales when predicting mean wind speed in an Alpine region.
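To give the flavour of the approach, a minimal multiple-kernel sketch combines two RBF kernels at different spatial scales and selects the convex mixing weight by validation; real MKL learns the kernel weights inside the optimization itself, and all data here are synthetic:

    import numpy as np

    # Multiple-kernel ridge regression sketch: combine two RBF kernels at
    # different length scales and pick the convex mixing weight on a grid.
    def rbf(X1, X2, ls):
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * ls**2))

    def fit_predict(Xtr, ytr, Xte, w, ls1=0.1, ls2=1.0, lam=1e-3):
        Ktr = w * rbf(Xtr, Xtr, ls1) + (1 - w) * rbf(Xtr, Xtr, ls2)
        Kte = w * rbf(Xte, Xtr, ls1) + (1 - w) * rbf(Xte, Xtr, ls2)
        alpha = np.linalg.solve(Ktr + lam * np.eye(len(ytr)), ytr)
        return Kte @ alpha

    rng = np.random.default_rng(3)
    X = rng.uniform(0, 1, (120, 2))                # e.g. terrain coordinates
    y = np.sin(6 * X[:, 0]) + 0.3 * X[:, 1] + 0.05 * rng.normal(size=120)
    Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]

    best = min(np.linspace(0, 1, 11),
               key=lambda w: np.mean((fit_predict(Xtr, ytr, Xva, w) - yva) ** 2))
    print("selected kernel weight:", best)   # the weight hints at the relevant scale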
Abstract:
AIM: Total imatinib concentrations are currently measured for the therapeutic drug monitoring of imatinib, whereas only the free drug equilibrates with cells for pharmacological action. Due to technical and cost limitations, routine measurement of free concentrations is generally not performed. In this study, free and total imatinib concentrations were measured to establish a model allowing the confident prediction of imatinib free concentrations based on total concentrations and plasma protein measurements. METHODS: One hundred and fifty total and free plasma concentrations of imatinib were measured in 49 patients with gastrointestinal stromal tumours. A population pharmacokinetic model was built to characterize mean total and free concentrations with inter-patient and intra-patient variability, while taking into account α1-acid glycoprotein (AGP) and human serum albumin (HSA) concentrations, in addition to other demographic and environmental covariates. RESULTS: A one-compartment model with first order absorption was used to characterize total and free imatinib concentrations. Only AGP influenced imatinib total clearance. Imatinib free concentrations were best predicted using a non-linear binding model to AGP, with a dissociation constant Kd of 319 ng ml⁻¹, assuming a 1:1 molar binding ratio. The addition of HSA to the equation did not improve the prediction of imatinib unbound concentrations. CONCLUSION: Although free concentration monitoring is probably more appropriate than total concentrations, it requires an additional ultrafiltration step and sensitive analytical technology, not always available in clinical laboratories. The model proposed might represent a convenient approach to estimating imatinib free concentrations. However, therapeutic ranges for free imatinib concentrations remain to be established before this enters routine practice.
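The 1:1 binding assumption implies a standard quadratic relation between total and free drug. A schematic sketch with the quoted Kd of 319 ng ml⁻¹ (the conversion of the measured AGP level into a drug-equivalent binding capacity is assumed done upstream, and the input values are hypothetical):

    import numpy as np

    # 1:1 binding of imatinib to AGP with Kd = 319 ng/ml (from the abstract).
    # Mass balance: total = free + Bmax * free / (Kd + free), where Bmax is
    # the AGP binding capacity expressed in drug-equivalent ng/ml (the molar
    # conversion from the measured AGP level is assumed done beforehand).
    def free_concentration(total, bmax, kd=319.0):
        # Solve free**2 + (kd + bmax - total)*free - kd*total = 0, free >= 0.
        b = kd + bmax - total
        return (-b + np.sqrt(b**2 + 4.0 * kd * total)) / 2.0

    total = 2500.0    # hypothetical total plasma concentration, ng/ml
    bmax = 40000.0    # hypothetical AGP binding capacity, ng/ml equivalents
    print(f"predicted free fraction: {free_concentration(total, bmax)/total:.2%}")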
Abstract:
This study aimed to use a plantar pressure insole for estimating the three-dimensional ground reaction force (GRF) as well as the frictional torque (TF) during walking. Eleven subjects, six healthy and five patients with ankle disease, participated in the study while wearing pressure insoles during several walking trials on a force-plate. The plantar pressure distribution was analyzed and 10 principal components of 24 regional pressure values, together with the stance time percentage (STP), were considered for GRF and TF estimation. Both linear and non-linear approximators were used for estimating the GRF and TF based on two learning strategies using intra-subject and inter-subject data. The RMS error and the correlation coefficient between the approximators and the actual patterns obtained from the force-plate were calculated. Our results showed better performance for non-linear approximation, especially when the STP was considered as an input. The smallest errors were observed for the vertical force (4%) and the anterior-posterior force (7.3%), while the medial-lateral force (11.3%) and the frictional torque (14.7%) had higher errors. The results obtained for the patients showed higher errors; nevertheless, when the data of the same patient were used for learning, the results improved and in general only slight differences from healthy subjects were observed. In conclusion, this study showed that an ambulatory pressure insole with data normalization, an optimal choice of inputs and a well-trained non-linear mapping function can efficiently estimate the three-dimensional ground reaction force and frictional torque over consecutive gait cycles without requiring a force-plate.
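The estimation pipeline (24 regional pressures reduced to 10 principal components, concatenated with STP, mapped to forces by a non-linear approximator) can be sketched with scikit-learn; the network size and all data below are placeholders:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Insole-to-force sketch: 24 regional pressure values per time sample
    # are reduced to 10 principal components, concatenated with stance-time
    # percentage (STP), and mapped to the 3D GRF by a small neural network.
    rng = np.random.default_rng(4)
    pressures = rng.uniform(0, 300, (2000, 24))   # hypothetical kPa values
    stp = rng.uniform(0, 100, (2000, 1))          # % of stance phase
    grf = rng.normal(size=(2000, 3))              # placeholder force targets

    pcs = PCA(n_components=10).fit_transform(pressures)
    X = np.hstack([pcs, stp])

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                       random_state=0))
    model.fit(X, grf)
    print(model.predict(X[:1]))   # estimated (Fx, Fy, Fz) for one sample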
Abstract:
Automatic environmental monitoring networks supported by wireless communication technologies nowadays provide large and ever-increasing volumes of data. The use of this information in natural hazard research is an important issue. Particularly useful for risk assessment and decision making are the spatial maps of hazard-related parameters produced from point observations and available auxiliary information. The purpose of this article is to present and explore appropriate tools to process large amounts of available data and produce predictions at fine spatial scales. These are the algorithms of machine learning, which are aimed at non-parametric robust modelling of non-linear dependencies from empirical data. The computational efficiency of the data-driven methods allows the prediction maps to be produced in real time, which makes them superior to physical models for operational use in risk assessment and mitigation. This situation is encountered in particular in the spatial prediction of climatic variables (topo-climatic mapping). In the complex topographies of mountainous regions, meteorological processes are highly influenced by the relief. The article shows how these relations, possibly regionalized and non-linear, can be modelled from data using the information from digital elevation models. The particular illustration of the developed methodology concerns the mapping of temperatures (including situations of Föhn and temperature inversion) given the measurements taken from the Swiss meteorological monitoring network. The range of the methods used in the study includes data-driven feature selection, support vector algorithms and artificial neural networks.
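As a schematic of such topo-climatic mapping with support vector algorithms (all features and measurements below are synthetic; a real run would use DEM-derived terrain attributes and the monitoring-network stations as training data):

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Topo-climatic sketch: predict temperature at unsampled locations from
    # DEM-derived features (elevation, slope, a curvature proxy).
    rng = np.random.default_rng(5)
    elev = rng.uniform(300, 3000, 400)
    slope = rng.uniform(0, 40, 400)
    curv = rng.normal(0, 1, 400)
    X = np.column_stack([elev, slope, curv])
    temp = 15.0 - 0.0065 * elev + 0.5 * curv + rng.normal(0, 0.5, 400)  # lapse-rate toy

    model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
    model.fit(X[:300], temp[:300])
    rmse = np.sqrt(np.mean((model.predict(X[300:]) - temp[300:]) ** 2))
    print(f"hold-out RMSE: {rmse:.2f} °C")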
Abstract:
Control and regrowth after reduced-rate nicosulfuron treatment of Johnsongrass (Sorghum halepense L. Pers.) populations from seven Argentinean locations were evaluated in pot experiments to assess whether differential performance could limit the design and implementation of integrated weed management programs. Populations from humid regions registered a higher sensitivity to reduced rates of nicosulfuron than populations from subhumid regions. This effect was visible in the values of the regression coefficients of the non-linear models (relating fresh weight to nicosulfuron rate), and in the time needed to obtain a 50% reduction of photosynthesis rate and stomatal conductance. The lower leaf CO2 exchange of the subhumid populations could result in lower foliar absorption and translocation of nicosulfuron, thus producing less control and increasing their ability to sprout and produce new aerial biomass. The three populations from subhumid regions, with lower sensitivity to nicosulfuron rates, presented substantial differences in fresh weight, total rhizome length and number of rhizome nodes when they were evaluated 20 weeks after treatment. In consequence, substantial Johnsongrass re-infestation could occur if rates below one-half of the nicosulfuron labeled rate were used to control Johnsongrass in subhumid regions.
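The regression coefficients mentioned above typically come from a dose-response fit. A sketch with a standard log-logistic model on hypothetical data (the abstract does not state the exact functional form used):

    import numpy as np
    from scipy.optimize import curve_fit

    # Log-logistic dose-response: fresh weight (% of untreated control) vs
    # nicosulfuron rate. The slope b is the kind of coefficient that
    # separates humid (steep) from subhumid (shallow) populations.
    def log_logistic(rate, b, ed50, lower=0.0, upper=100.0):
        return lower + (upper - lower) / (1.0 + (rate / ed50) ** b)

    rates = np.array([2.5, 5, 10, 20, 40, 60], float)   # g a.i./ha, hypothetical
    weight = np.array([92, 80, 55, 28, 12, 6], float)   # % of control

    (b, ed50), _ = curve_fit(log_logistic, rates, weight, p0=(1.5, 15.0))
    print(f"slope b = {b:.2f}, ED50 = {ed50:.1f} g a.i./ha")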
Abstract:
Polynomial constraint solving plays a prominent role in several areas of hardware and software analysis and verification, e.g., termination proving, program invariant generation and hybrid system verification, to name a few. In this paper we propose a new method for solving non-linear constraints based on encoding the problem into an SMT problem considering only linear arithmetic. Unlike other existing methods, our method focuses on proving satisfiability of the constraints rather than on proving unsatisfiability, which is more relevant in several applications, as we illustrate with examples. Nevertheless, we also present new techniques based on the analysis of unsatisfiable cores that allow one to efficiently prove unsatisfiability for a broad class of problems. The power of our approach is demonstrated by means of extensive experiments comparing our prototype with state-of-the-art tools on benchmarks taken both from the academic and the industrial world.
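To give the flavour of such an encoding (a deliberately simplified sketch, not the paper's actual algorithm): a non-linear monomial x*y becomes linear after case-splitting on a finite candidate domain for x, since under each case x == v the product reduces to v*y:

    from z3 import Int, Solver, And, Or, Implies, sat

    # Linearization sketch: to keep x*y linear, case-split on a finite
    # candidate domain for x; under "x == v" the product becomes v*y.
    x, y, p = Int('x'), Int('y'), Int('p')   # p stands for the product x*y
    domain = range(-10, 11)

    s = Solver()
    s.add(Or([x == v for v in domain]))                        # finite domain for x
    s.add(And([Implies(x == v, p == v * y) for v in domain]))  # p = x*y, linearly
    s.add(p == 12, x + y == 7)                                 # target constraints

    if s.check() == sat:
        m = s.model()
        print("sat:", m[x], m[y])   # e.g. x=3, y=4 or x=4, y=3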
Abstract:
The work presented here is part of a larger study to identify novel technologies and biomarkers for early Alzheimer disease (AD) detection, and it focuses on evaluating the suitability of a new approach for early AD diagnosis by non-invasive methods. The purpose is to examine, in a pilot study, the potential of applying intelligent algorithms to speech features obtained from suspected patients in order to contribute to the improvement of the diagnosis of AD and its degree of severity. In this sense, Artificial Neural Networks (ANN) have been used for the automatic classification of the two classes (AD and control subjects). Two human dimensions have been analyzed for feature selection: Spontaneous Speech and Emotional Response. Not only linear features but also non-linear ones, such as Fractal Dimension, have been explored. The approach is non-invasive, low-cost and without any side effects. The experimental results obtained were very satisfactory and promising for the early diagnosis and classification of AD patients.
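As an example of such a non-linear feature, one common waveform fractal-dimension estimator, Katz's (the abstract does not specify which variant was used), takes only a few lines:

    import numpy as np

    # Katz fractal dimension of a 1-D signal (one common estimator; the
    # paper only says "Fractal Dimension" without fixing the variant).
    def katz_fd(signal):
        signal = np.asarray(signal, dtype=float)
        dists = np.abs(np.diff(signal))
        L = dists.sum()                           # total path length of the curve
        d = np.abs(signal[1:] - signal[0]).max()  # max distance from first point
        n = len(dists)
        return np.log10(n) / (np.log10(n) + np.log10(d / L))

    # A noisy speech frame is "rougher", so it yields the larger FD.
    t = np.linspace(0, 1, 8000)
    rng = np.random.default_rng(6)
    print(katz_fd(np.sin(2 * np.pi * 440 * t)))
    print(katz_fd(np.sin(2 * np.pi * 440 * t) + 0.5 * rng.normal(size=8000)))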