Abstract:
This work is concerned with the existence of an optimal control strategy for the long-run average continuous control problem of piecewise-deterministic Markov processes (PDMPs). In Costa and Dufour (2008), sufficient conditions were derived to ensure the existence of an optimal control by using the vanishing discount approach. These conditions were mainly expressed in terms of the relative difference of the alpha-discount value functions. The main goal of this paper is to derive tractable conditions, directly related to the primitive data of the PDMP, that ensure the existence of an optimal control. The present work can be seen as a continuation of the results derived in Costa and Dufour (2008). Our main assumptions are written in terms of some integro-differential inequalities related to the so-called expected growth condition, and geometric convergence of the post-jump location kernel associated with the PDMP. An example based on the capacity expansion problem is presented, illustrating the possible applications of the results developed in the paper.
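For readers less familiar with the vanishing discount technique, the following is a schematic summary (not the paper's precise conditions) of the quantities involved. With \(V_\alpha\) the \(\alpha\)-discounted value function and \(x_0\) a fixed reference state,

\[
h_\alpha(x) = V_\alpha(x) - V_\alpha(x_0), \qquad \rho = \lim_{\alpha \downarrow 0} \alpha V_\alpha(x_0),
\]

and sufficient conditions typically amount to bounds on \(h_\alpha\), uniform in \(\alpha\), guaranteeing that a limiting pair \((\rho, h)\) satisfies an average-cost optimality inequality whose minimizing selector yields an optimal stationary policy.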
Abstract:
Recent developments in industrial processes have given rise to technologically complex systems. This development has created a need for research on mathematical techniques capable of dealing with design complexity and validation. Fuzzy models have been receiving particular attention in the area of nonlinear system identification and analysis due to their capacity to approximate nonlinear behavior and to deal with uncertainty. A fuzzy rule-based model suitable for the approximation of many systems and functions is the Takagi-Sugeno (TS) fuzzy model. TS fuzzy models are nonlinear systems described by a set of if-then rules that give local linear representations of an underlying system. Such models can approximate a wide class of nonlinear systems. In this paper, the performance of a TS fuzzy inference system for the calibration of electronic compass devices is analyzed. The contribution of the evaluated TS fuzzy inference system is to reduce the error obtained in data acquisition from a digital electronic compass. For reliable operation of the TS fuzzy inference system, adequate error measurements must be taken, and the measurement noise must be filtered before the TS fuzzy inference system is applied. In the tests considered, the proposed method reduced the total error by 57%.
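As a minimal, hedged illustration of standard first-order TS inference (not the calibration system evaluated in the paper; the rule parameters and membership functions below are hypothetical placeholders), a corrected compass heading can be computed as a firing-strength-weighted average of local linear models:

```python
import numpy as np

# Minimal sketch of first-order Takagi-Sugeno inference with Gaussian
# antecedents; all rule parameters are hypothetical placeholders.
centers = np.array([45.0, 135.0, 225.0, 315.0])   # antecedent centers (deg)
widths = np.array([60.0, 60.0, 60.0, 60.0])       # antecedent widths (deg)
a = np.array([1.00, 0.98, 1.02, 0.99])            # consequent slopes
b = np.array([-1.5, 2.0, -0.5, 1.0])              # consequent offsets (deg)

def ts_corrected_heading(x):
    """Weighted average of the local linear models y_i = a_i * x + b_i."""
    w = np.exp(-0.5 * ((x - centers) / widths) ** 2)  # firing strengths
    return float(np.sum(w * (a * x + b)) / np.sum(w))

print(ts_corrected_heading(90.0))  # corrected value for a raw reading of 90 deg
```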
Abstract:
Survival models involving frailties are commonly applied in studies where correlated event-time data arise due to natural or artificial clustering. In this paper we present an application of such models in the animal breeding field. Specifically, a mixed survival model with a multivariate correlated frailty term is proposed for the analysis of data from over 3611 Brazilian Nellore cattle. The primary aim is to evaluate parental genetic effects on the number of days their progeny need to achieve a commercially specified standard weight gain. This trait is not measured directly but can be estimated from growth data. Results point to the importance of genetic effects and suggest that these models constitute a valuable data-analysis tool for beef cattle breeding.
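As a schematic illustration only (the paper's exact specification may differ), a frailty survival model for animal j in family i can take the proportional-hazards form

\[
h_{ij}(t \mid z_i) = z_i \, h_0(t) \, \exp(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta}),
\]

where the frailties \(z_i\) capture the parental genetic effects; in a multivariate correlated frailty model the vector of log-frailties is given a covariance structure, e.g. \(\log \mathbf{z} \sim \mathrm{N}(\mathbf{0}, \sigma^{2}\mathbf{A})\) with \(\mathbf{A}\) a relationship matrix, as is common in animal breeding applications.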
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently assume constant variance, under-represent the variation in the data and may therefore lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models then underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The joint posterior density function was sampled using Markov chain Monte Carlo (MCMC) algorithms, allowing inferences on the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
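As a hedged illustration (not the authors' exact specification), a Bayesian double GLM for overdispersed proportions can be written with a mean submodel and a dispersion submodel, for example

\[
y_i \mid p_i \sim \mathrm{Binomial}(n_i, p_i), \qquad
\operatorname{logit}(p_i) = \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i, \qquad
u_i \sim \mathrm{N}(0, \sigma_i^{2}),
\]
\[
\log \sigma_i^{2} = \mathbf{z}_i^{\top}\boldsymbol{\gamma},
\]

with priors placed on \(\boldsymbol{\beta}\) and \(\boldsymbol{\gamma}\); the joint posterior is then explored with MCMC.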
Abstract:
Hydrological models featuring root water uptake usually do not include compensation mechanisms whereby reductions in uptake from dry layers are offset by an increase in uptake from wetter layers. We developed a physically based root water uptake model with an implicit compensation mechanism. Based on an expression for the matric flux potential (M) as a function of the distance to the root, and assuming a depth-independent value of M at the root surface, uptake per layer is shown to be a function of the layer bulk M, the root surface M, and a weighting factor that depends on root length density and root radius. Actual transpiration can be calculated as the sum of the layer uptake rates. The proposed reduction function (PRF) was built into the SWAP model, and predictions were compared to those made with the Feddes reduction function (FRF). Simulation results were tested against data from Canada (continuous spring wheat [Triticum aestivum L.]) and Germany (a spring wheat, winter barley [Hordeum vulgare L.], sugarbeet [Beta vulgaris L.], winter wheat rotation). For the Canadian data, the root mean square error of prediction (RMSEP) for water content in the upper soil layers was very similar for FRF and PRF; for the deeper layers, RMSEP was smaller for PRF. For the German data, RMSEP was lower for PRF in the upper layers and similar for both models in the deeper layers. In conclusion, and subject to the properties of the data sets available for testing, the incorporation of the new reduction function into SWAP was successful, providing new capabilities for simulating compensated root water uptake without increasing the number of input parameters or degrading model performance.
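A schematic sketch of the layer-uptake bookkeeping described above (the functional form, weights, and numbers are illustrative only, not the published SWAP/PRF equations): uptake from each layer is taken proportional to a root-distribution weight and to the difference between the layer bulk matric flux potential and the depth-independent value at the root surface, and actual transpiration is the sum over layers.

```python
import numpy as np

# Illustrative values only (hypothetical soil profile and rooting data).
M_bulk = np.array([12.0, 9.0, 4.0, 1.5])               # cm^2 d^-1, per layer
M_root = 1.0                                            # cm^2 d^-1, at root surface
root_length_density = np.array([2.0, 1.2, 0.6, 0.2])   # cm cm^-3
layer_thickness = np.array([10.0, 10.0, 20.0, 20.0])   # cm

weights = root_length_density * layer_thickness
weights = weights / weights.sum()                       # root-distribution weights

uptake = weights * np.maximum(M_bulk - M_root, 0.0)     # per-layer uptake rate
actual_transpiration = uptake.sum()                     # sum over layers

print(uptake, actual_transpiration)
```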
Abstract:
This paper applies hierarchical Bayesian models to price farm-level yield insurance contracts. The methodology considers temporal effects, spatial dependence and spatio-temporal models. One of the major advantages of this framework is that an estimate of the premium rate is obtained directly from the posterior distribution. These methods were applied to a farm-level data set of soybean yields in the State of Paraná (Brazil) for the period between 1994 and 2003. Model selection was based on a posterior predictive criterion. This study considerably improves the estimation of fair premium rates given the small number of observations.
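A hedged sketch of how a premium rate can be computed directly from posterior predictive yield draws, in the spirit of the framework described above; the indemnity rule (a simple yield guarantee at 70% of expected yield) and the stand-in posterior draws below are illustrative, not the paper's contract or model.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for MCMC posterior predictive yield draws (kg/ha), hypothetical.
posterior_yield = rng.gamma(shape=20.0, scale=150.0, size=10_000)

expected_yield = posterior_yield.mean()
guarantee = 0.70 * expected_yield                       # assumed coverage level

indemnity = np.maximum(guarantee - posterior_yield, 0.0)
premium_rate = indemnity.mean() / guarantee             # expected loss / liability

print(f"fair premium rate: {premium_rate:.3%}")
```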
Abstract:
A new laboratory method was proposed to establish an easily performed standard for the determination of mobile soil water under conditions close to those during the infiltration and redistribution of water in a soil. It consisted of applying a water volume containing a tracer ion on top of an undisturbed ring sample on a pressure plate under a known suction or pressure head. Afterwards, soil water mobility was determined by analyzing the tracer-ion concentration in the soil sample. Soil water mobility was shown to be a function of the applied water volume. No relation between soil water mobility and applied pressure head could be established with data from the present experiment. A simple one- or two-parameter equation can be fitted to the experimental data to parameterize soil water mobility as a function of applied solute volume. Sandy soils showed higher mobility than loamy soils at low applied solute volumes, and both sandy and loamy soils showed almost complete mobility at high applied solute volumes.
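A minimal sketch of fitting a simple two-parameter saturating curve to soil-water-mobility data as a function of applied solute volume; the functional form and the data points are illustrative assumptions, not the equation or measurements of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def mobility(v, a, b):
    """Mobility rising from about a at small volumes toward 1 at large volumes."""
    return 1.0 - (1.0 - a) * np.exp(-b * v)

applied_volume = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # mL, hypothetical
measured_mobility = np.array([0.45, 0.60, 0.78, 0.90, 0.97])    # hypothetical

params, _ = curve_fit(mobility, applied_volume, measured_mobility, p0=[0.4, 0.05])
print(params)   # fitted (a, b)
```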
Abstract:
Recently, we have built a classification model that is capable of assigning a given sesquiterpene lactone (STL) to exactly one tribe of the plant family Asteraceae from which the STL has been isolated. Although many plant species are able to biosynthesize a set of peculiar compounds, the occurrence of the same secondary metabolites in more than one tribe of Asteraceae is frequent. Building on our previous work, in this paper we explore the possibility of assigning an STL to more than one tribe (class) simultaneously. When an object may belong to more than one class simultaneously, it is called multilabeled. In this work, we present a general overview of the techniques available for examining multilabeled data, and we discuss the problem of evaluating the performance of a multilabeled classifier. Two particular multilabeled classification methods, cross-training with support vector machines (ct-SVM) and multilabeled k-nearest neighbors (ML-kNN), were applied to the classification of STLs into seven tribes of the plant family Asteraceae. The results are compared to a single-label classification and are analyzed from a chemotaxonomic point of view. The multilabeled approach allowed us to (1) model the reality as closely as possible, (2) improve our understanding of the relationship between the secondary metabolite profiles of different Asteraceae tribes, and (3) significantly decrease the number of plant sources to be considered when searching for a certain STL. The presented classification models are useful for the targeted collection of plants with the objective of finding plant sources of natural compounds that are biologically active or possess other specific properties of interest.
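A hedged sketch of multilabel classification in the spirit described above: binary-relevance SVMs (a simple stand-in for ct-SVM, not the authors' implementation) trained on molecular descriptors, with a label matrix whose seven columns play the role of Asteraceae tribes. Descriptors, labels, and the train/test split are synthetic placeholders.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.metrics import hamming_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))                  # 120 STLs, 20 descriptors (synthetic)
Y = (rng.random((120, 7)) < 0.3).astype(int)    # 7 tribes, multilabel indicator matrix

# One binary SVM per label column (binary relevance).
clf = OneVsRestClassifier(SVC(kernel="rbf", C=1.0))
clf.fit(X[:100], Y[:100])
Y_pred = clf.predict(X[100:])

print("Hamming loss:", hamming_loss(Y[100:], Y_pred))
```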
Abstract:
The stock market is subject to uncertain relations throughout the trading process, with different variables exerting direct and indirect influence on stock prices. This study focuses on the analysis of certain aspects that may influence the values offered by the capital market, based on the Brazil Index of the Sao Paulo Stock Exchange (Bovespa), which selects 100 stocks among the most traded on Bovespa in terms of number of trades and financial volume. The selected variables characterize the companies' activity area and business volume in the month of data collection, i.e. April 2007. This article proposes an analysis that combines the accounting view of the variables that may influence stock prices with multivariate qualitative data analysis. Data were explored through Correspondence Analysis (Anacor) and Homogeneity Analysis (Homals). According to the research, the selected variables are associated with the values presented by the stocks, and thus become an internal control instrument and a decision-making tool when it comes to choosing investments.
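A minimal sketch of simple correspondence analysis (Anacor) via the singular value decomposition of standardized residuals, applied to a hypothetical contingency table of activity sector versus trading-volume class; the table values are illustrative only, not the study's data.

```python
import numpy as np

# Hypothetical contingency table: rows = activity sectors, cols = volume classes.
N = np.array([[30.0, 10.0,  5.0],
              [12.0, 25.0,  8.0],
              [ 6.0, 14.0, 20.0]])
P = N / N.sum()
r = P.sum(axis=1)                       # row masses
c = P.sum(axis=0)                       # column masses

# SVD of the standardized residuals gives the principal coordinates.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, s, Vt = np.linalg.svd(S, full_matrices=False)

row_coords = (U * s) / np.sqrt(r)[:, None]      # principal row coordinates
col_coords = (Vt.T * s) / np.sqrt(c)[:, None]   # principal column coordinates

print(row_coords[:, :2])
print(col_coords[:, :2])
```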
Abstract:
Valuation of projects for the preservation of water resources provides important information to policy makers and funding institutions. Standard contingent valuation models rely on distributional assumptions to provide welfare measures. Deviations between the assumed and the actual distribution of benefits are important when designing policies in developing countries, where inequality is a concern. This article applies semiparametric methods to obtain estimates of the benefit from a project for the preservation of an important Brazilian river basin. These estimates differ significantly from those obtained using the standard parametric approach.
Abstract:
Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm based on the topology optimization method as an alternative. A sequence of linear programming problems, allowing for constraints, is solved using this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed, thus increasing the accuracy of the finite element results. The algorithm is tested using both numerically simulated and experimental data, and absolute resistivity values are obtained. The results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique for monitoring lung aeration, including the possibility of imaging a pneumothorax.
Abstract:
Background: Stroke mortality rates in Brazil are the highest in the Americas, and deaths from cerebrovascular disease surpass those from coronary heart disease. Aim: To verify stroke mortality rates and morbidity in an area of Sao Paulo, Brazil, using the World Health Organization Stepwise Approach to Stroke Surveillance. Methods: We used the World Health Organization Stepwise Approach to Stroke Surveillance structure of stroke surveillance. The hospital-based data comprised fatal and nonfatal stroke (Step 1). We gathered stroke-related mortality data in the community using World Health Organization questionnaires (Step 2). The questionnaire determining stroke prevalence was applied door to door in a family health programme neighbourhood (Step 3). Results: A total of 682 patients aged 18 years and above, including 472 incident cases, presented with cerebrovascular disease and were enrolled in Step 1 during April-May 2009. Cerebral infarction (84.3%) and first-ever stroke (85.2%) were the most frequent. In Step 2, 256 deaths from stroke were identified during 2006-2007. Forty-four per cent of deaths were classified as unspecified stroke, one-third as ischaemic stroke, and one-quarter as haemorrhagic subtype. In Step 3, 577 subjects over 35 years old were evaluated at home, and 244 cases of stroke survival were diagnosed via a questionnaire validated by a board-certified neurologist. The population demographic characteristics were similar in the three steps, except in terms of age and gender. Conclusion: By including data from all settings, World Health Organization stroke surveillance can provide data to help plan future resources that meet the needs of the public-health system.
Abstract:
Aim. The aim of this study was to understand the heart transplantation experience based on patients' descriptions. Background. To patients with heart failure, heart transplantation represents a possibility to survive and improve their quality of life. Studies have shown that better quality of life is related to patients' increasing awareness of and participation in the work of the healthcare team in the post-transplantation period. Deficient relationships between patients and healthcare providers result in lower compliance with the postoperative regimen. Method. A phenomenological approach was used to interview 26 patients who were heart transplant recipients. Patients were interviewed individually and asked this single question: What does the experience of being heart transplanted mean? Participants' descriptions were analysed using phenomenological reduction, analysis and interpretation. Results. Three categories emerged from data analysis: (i) the time lived by the heart recipient; (ii) donors, family and caregivers; and (iii) reflections on the experience lived. Living after heart transplant means living in a complex situation: recipients are confronted with lifelong immunosuppressive therapy associated with many side-effects. Some felt healthy whereas others reported persistence of complications as well as the onset of other pathologies. However, all participants celebrated an improvement in quality of life. Health caregivers and their social and family support had been essential for their struggle. Participants realised that life after heart transplantation was a continuing process demanding support and structured follow-up for the rest of their lives. Conclusion. The findings suggest that each individual has unique experiences of the heart transplantation process. To go on living, participants had to accept changes and adapt: to the new organ, to complications resulting from organ rejection, and to numerous medications and dietary restrictions. Relevance to clinical practice. Stimulating heart transplant patients' spontaneous expression of what they are experiencing, and granting them the status of the main character in their own story, is important to their care.
Abstract:
Background: There are few studies on HIV subtypes and primary and secondary antiretroviral drug resistance (ADR) in community-recruited samples in Brazil. We analyzed HIV clade diversity and the prevalence of mutations associated with ADR in men who have sex with men in all five regions of Brazil. Methods: Using respondent-driven sampling, we recruited 3515 men who have sex with men in nine cities: 299 (9.5%) were HIV-positive, and 143 subjects had adequate genotyping and epidemiologic data. Forty-four (30.8%) subjects were antiretroviral therapy-experienced (AE) and 99 (69.2%) antiretroviral therapy-naive (AN). We sequenced the reverse transcriptase and protease regions of the virus and analyzed them for drug-resistance mutations using World Health Organization guidelines. Results: The most common subtypes were B (81.8%), C (7.7%), and recombinant forms (6.9%). The overall prevalence of primary ADR was 21.4% (i.e. among the AN) and of secondary ADR was 35.8% (i.e. among the AE). The prevalence of resistance to protease inhibitors was 3.9% (AN) and 4.4% (AE); to nucleoside reverse transcriptase inhibitors, 15.0% (AN) and 31.0% (AE); and to nonnucleoside reverse transcriptase inhibitors, 5.5% (AN) and 13.2% (AE). The most common resistance mutation for nucleoside reverse transcriptase inhibitors was 184V (17 cases) and for nonnucleoside reverse transcriptase inhibitors 103N (16 cases). Conclusions: Our data suggest a high level of both primary and secondary ADR among men who have sex with men in Brazil. Additional studies are needed to identify the correlates and causes of antiretroviral therapy resistance, in order to limit the development of resistance among those in care and the transmission of resistant strains in the wider epidemic.
Abstract:
Neuromodulation is the branch of neurophysiology concerned with the therapeutic effects of electrical stimulation of the nervous system. There are currently different practical applications of neuromodulation techniques for the treatment of various neurological disorders, such as deep brain stimulation for Parkinson's disease and repetitive transcranial magnetic stimulation (rTMS) for major depression. An increasing number of studies have been devoted to the analgesic effects of rTMS in chronic pain patients. rTMS has been used either as a therapeutic tool per se, or as a preoperative test in patients undergoing epidural precentral gyrus stimulation. High-frequency rTMS (≥ 5 Hz) is considered to be excitatory, while low-frequency stimulation (≤ 1 Hz) is considered to exert an inhibitory effect over neuronal populations of the primary motor cortex. However, other stimulation parameters may play a central role in its clinical effects, such as the type of coil, its orientation over the scalp, and the total number of rTMS sessions performed. Experimental data from animals, healthy volunteers, and neuropathic pain patients have suggested that stimulation of the primary motor cortex by rTMS is able to activate brain regions implicated in the processing of the different aspects of chronic pain, and to influence brain regions involved in the endogenous opioid system. Over twenty prospective randomized sham-controlled trials have studied the analgesic effects of rTMS on chronic pain. Most of the patients included in these trials had central or peripheral neuropathic pain. Although most studies used a single session of stimulation, recent studies have shown that the analgesic effects of rTMS may outlast the stimulation period by many days when repetitive sessions are performed. This opens the possibility of using rTMS as a therapeutic tool in its own right in the armamentarium against neuropathic pain.