Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data comprising clinical EEG recordings obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems less important. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
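A minimal sketch of the core experimental loop described above, assuming scikit-learn and PyWavelets as stand-in libraries: a Gaussian (RBF) kernel SVM is cross-validated over wavelet-derived statistical features while the kernel parameter is swept. The data, feature set, and parameter grid are placeholders, not the study's actual configuration.

```python
# Sketch: RBF-kernel SVM over wavelet-based EEG features.
# `eeg_segments` and `labels` are placeholders for real recordings.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def wavelet_features(segment, wavelet="db4", level=4):
    """Statistical summary of discrete wavelet transform sub-bands."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)), np.std(c), np.max(np.abs(c))]
    return feats

eeg_segments = np.random.randn(100, 512)      # placeholder EEG windows
labels = np.random.randint(0, 2, size=100)    # placeholder classes

X = np.array([wavelet_features(s) for s in eeg_segments])

# Sweep the kernel parameter, as in the sensitivity analysis. Note:
# sklearn's "rbf" kernel is the Gaussian RBF; the exponential
# (Laplacian) RBF would require a custom kernel function.
for gamma in [0.001, 0.01, 0.1, 1.0]:
    clf = SVC(kernel="rbf", gamma=gamma)
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"gamma={gamma}: CV accuracy={acc:.3f}")
```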
Abstract:
The search for more realistic modeling of financial time series reveals several stylized facts of real markets. In this work we focus on the multifractal properties found in price and index signals. Although the usual minority game (MG) models do not exhibit multifractality, we study here one of its variants that does. We show that the nonsynchronous MG model in the nonergodic phase is multifractal and, in this sense, together with other stylized facts, constitutes a better modeling tool. Using the structure function (SF) approach, we detected the stationary and scaling ranges of the time series generated by the MG model and, from the linear (nonlinear) behavior of the SF, we identified the fractal (multifractal) regimes. Finally, using the wavelet transform modulus maxima (WTMM) technique, we obtained the multifractal spectrum width for different dynamical regimes.
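A sketch of the structure-function approach mentioned above, under its usual definition S_q(tau) = <|x(t+tau) - x(t)|^q> with scaling S_q(tau) ~ tau^zeta(q): a zeta(q) that is linear in q indicates a monofractal signal, while curvature indicates multifractality. The series and scaling range below are illustrative placeholders.

```python
# Structure-function (SF) approach: estimate zeta(q) as the slope of
# log S_q(tau) vs. log tau over an assumed scaling range.
import numpy as np

def structure_function(x, q, taus):
    return np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q)
                     for tau in taus])

def scaling_exponents(x, qs, taus):
    zetas = []
    for q in qs:
        sq = structure_function(x, q, taus)
        zeta, _ = np.polyfit(np.log(taus), np.log(sq), 1)  # slope
        zetas.append(zeta)
    return np.array(zetas)

x = np.cumsum(np.random.randn(10000))   # placeholder price-like series
taus = np.arange(1, 100)                # assumed scaling range
qs = np.arange(1, 6)
print(dict(zip(qs, scaling_exponents(x, qs, taus).round(3))))
```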
Abstract:
In this work we study an agent-based model to investigate the role of the degree of information asymmetry in market evolution. The model is quite simple and can be treated analytically, since each consumer evaluates the quality of a given good taking into account only the quality of the last good purchased plus her perceptive capacity beta. As a consequence, the system evolves according to a stationary Markov chain. The value of a good offered by the firms increases with quality according to an exponent alpha, which is a measure of the technology. It incorporates all the technological capacity of the production systems, such as education, scientific development, and techniques that change productivity rates. The technological level plays an important role in explaining how information asymmetry may affect market evolution in this model. We observe that, for high technological levels, the market can detect adverse selection. The model allows us to compute the maximum degree of information asymmetry before the market collapses. Below this critical point, the market evolves for a limited period of time and then dies out completely. When beta is closer to 1 (symmetric information), the market becomes more profitable for high-quality goods, although high- and low-quality markets coexist. The maximum level of information asymmetry is a consequence of an ergodicity breakdown in the quality evaluation process.
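A toy simulation in the spirit of the model described above. The exact update rule is not given in the abstract, so the sketch assumes, hypothetically, that perceived quality is a beta-weighted mixture of the true quality of the last purchase and noise, and that price scales as quality**alpha; all numbers are illustrative.

```python
# Toy sketch of the quality-evaluation dynamics (hypothetical rule,
# not the paper's): trade occurs when perceived quality justifies
# the price, which rises with quality via the exponent alpha.
import numpy as np

def simulate_market(alpha, beta, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    trades = 0
    for _ in range(steps):
        quality = rng.uniform(0.1, 1.0)   # quality of the good on offer
        # Perceived quality: beta-weighted view of the true quality,
        # blurred by noise when information is asymmetric.
        perceived = beta * quality + (1 - beta) * rng.uniform(0.1, 1.0)
        price = quality ** alpha          # value rises with quality
        if perceived >= price:            # trade if perception justifies price
            trades += 1
    return trades / steps

for beta in (0.2, 0.5, 0.9):
    rate = simulate_market(alpha=0.8, beta=beta)
    print(f"beta={beta}: trade rate={rate:.2f}")
```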
Abstract:
The Brazilian Atlantic Forest is one of the richest biodiversity hotspots in the world. Paleoclimatic models have predicted two large stability regions in its northern and central parts, whereas southern regions might have suffered strong instability during Pleistocene glaciations. Molecular phylogeographic and endemism studies nevertheless show contradictory results: although some validate these predictions, other data suggest that paleoclimatic models fail to predict stable rainforest areas in the south. Most studies, however, have surveyed species with relatively high dispersal rates, whereas taxa with lower dispersal capabilities should be better predictors of habitat stability. Here, we have used two land planarian species as model organisms to analyse the patterns and levels of nucleotide diversity at a locality within the Southern Atlantic Forest. We find that both species harbour high levels of genetic variability without exhibiting the molecular footprint of recent colonization or population expansion, suggesting a long-term stability scenario. The results therefore indicate that paleoclimatic models may fail to detect refugia in the Southern Atlantic Forest, and that model organisms with low dispersal capability can improve the resolution of these models.
Abstract:
The role of exercise training (ET) on the cardiac renin-angiotensin system (RAS) was investigated in 3- to 5-month-old mice lacking alpha(2A)- and alpha(2C)-adrenoceptors (alpha(2A)/alpha(2C)ARKO), which present heart failure (HF), and in wild-type controls (WT). ET consisted of 8 weeks of 60-min running sessions, 5 days/week. In addition, exercise tolerance and cardiac structural and functional analyses were performed. At 3 months, fractional shortening and exercise tolerance were similar between groups. At 5 months, alpha(2A)/alpha(2C)ARKO mice displayed ventricular dysfunction and fibrosis associated with increased cardiac angiotensin (Ang) II levels (2.9-fold) and increased local angiotensin-converting enzyme (ACE) activity (18%). ET decreased alpha(2A)/alpha(2C)ARKO cardiac Ang II levels and ACE activity to the levels of age-matched untrained WT mice, while increasing ACE2 expression, and prevented exercise intolerance and ventricular dysfunction with little impact on cardiac remodeling. Altogether, these data provide evidence that reduced cardiac RAS explains, at least in part, the beneficial effects of ET on cardiac function in a genetic model of HF.
Abstract:
Beta-blockers, as a class, improve cardiac function and survival in heart failure (HF). However, the molecular mechanisms underlying these beneficial effects remain elusive. In the present study, metoprolol and carvedilol were used in doses that produce comparable heart rate reduction to assess their beneficial effects in a genetic model of sympathetic hyperactivity-induced HF (alpha(2A)/alpha(2C)-ARKO mice). Five-month-old HF mice were randomly assigned to receive saline, metoprolol, or carvedilol for 8 weeks, and age-matched wild-type mice (WT) were used as controls. HF mice displayed baseline tachycardia, systolic dysfunction evaluated by echocardiography, a 50% mortality rate, increased cardiac myocyte width (50%), and ventricular fibrosis (3-fold) compared with WT. All these responses were significantly improved by both treatments. Cardiomyocytes from HF mice showed reduced peak [Ca(2+)](i) transients (13%) by confocal microscopy imaging. Interestingly, while metoprolol improved [Ca(2+)](i) transients, carvedilol had no effect on peak [Ca(2+)](i) transients but accelerated [Ca(2+)](i) transient decay. We then examined the influence of carvedilol on cardiac oxidative stress as an alternative target to explain its beneficial effects. Indeed, HF mice showed a 10-fold decrease in the cardiac reduced/oxidized glutathione ratio compared with WT, which was significantly improved only by carvedilol treatment. Taken together, we provide direct evidence that the beneficial effects of metoprolol were mainly associated with improved cardiac Ca(2+) transients and the net balance of cardiac Ca(2+) handling proteins, while carvedilol preferentially improved the cardiac redox state.
Abstract:
Sympathetic hyperactivity (SH) and renin-angiotensin system (RAS) activation are commonly associated with heart failure (HF), even though the relative contribution of these factors to the cardiac derangement is less understood. The role of SH on RAS components and its consequences for HF were investigated in alpha(2A)/alpha(2C) adrenoceptor knockout mice (alpha(2A)/alpha(2C)ARKO), which present SH with evidence of HF by 7 mo of age. Cardiac and systemic RAS components and plasma norepinephrine (PN) levels were evaluated in male adult mice at 3 and 7 mo of age. In addition, cardiac morphometric analysis, collagen content, exercise tolerance, and hemodynamic assessments were made. At 3 mo, alpha(2A)/alpha(2C)ARKO mice showed no signs of HF, while displaying elevated PN, activation of local and systemic RAS components, and increased cardiomyocyte width (16%) compared with wild-type mice (WT). In contrast, at 7 mo, alpha(2A)/alpha(2C)ARKO mice presented clear signs of HF accompanied only by cardiac activation of angiotensinogen, increased ANG II levels, and increased collagen content (twofold). Consistent with this local activation of RAS, 8 wk of treatment with an ANG II AT(1) receptor blocker restored cardiac structure and function to levels comparable to WT. Collectively, these data provide direct evidence that cardiac RAS activation plays a major role underlying the structural and functional abnormalities associated with genetic SH-induced HF in mice.
Abstract:
The aim of this study was to test whether the critical power model can be used to determine the critical rest interval (CRI) between vertical jumps. Ten males performed intermittent countermovement jumps on a force platform with different resting periods (4.1 +/- 0.3 s, 5.0 +/- 0.4 s, 5.9 +/- 0.6 s). Jump trials were interrupted when participants could no longer maintain 95% of their maximal jump height. After interruption, the number of jumps, total exercise duration, and total external work were computed. Time to exhaustion (s) and total external work (J) were used to solve the equation Work = a + b · time. The CRI (corresponding to the shortest resting interval that allowed jump height to be maintained for a long time without fatigue) was determined by dividing the average external work needed to jump at a fixed height (J) by the b parameter (J/s). In the final session, participants jumped at their calculated CRI. A high coefficient of determination (0.995 +/- 0.007) and a CRI of 7.5 +/- 1.6 s were obtained. In addition, the longer the resting period, the greater the number of jumps (44 +/- 13, 71 +/- 28, 105 +/- 30, 169 +/- 53 jumps; p<0.0001), time to exhaustion (179 +/- 50, 351 +/- 120, 610 +/- 141, 1,282 +/- 417 s; p<0.0001), and total external work (28.0 +/- 8.3, 45.0 +/- 16.6, 67.6 +/- 17.8, 111.9 +/- 34.6 kJ; p<0.0001). Therefore, the critical power model may be an alternative approach to determine the CRI during intermittent vertical jumps.
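A minimal numerical sketch of the fitting step described above, using the group-mean values quoted in the abstract as illustrative inputs; the average work per jump is a hypothetical figure, not reported here.

```python
# Critical power model fit: Work = a + b * time, with CRI given by
# (average external work per jump) / b.
import numpy as np

# Group-mean values from the abstract (four rest-interval conditions).
time_to_exhaustion = np.array([179.0, 351.0, 610.0, 1282.0])  # s
total_work = np.array([28.0e3, 45.0e3, 67.6e3, 111.9e3])      # J

b, a = np.polyfit(time_to_exhaustion, total_work, 1)  # slope b in J/s

work_per_jump = 560.0        # J, hypothetical average work per jump
cri = work_per_jump / b      # critical rest interval, s
print(f"b = {b:.1f} J/s, CRI = {cri:.1f} s")
```

With an assumed work per jump of about 560 J, the computed CRI falls near the 7.5 s group mean reported above.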
Abstract:
The principal aim of studies of enzyme-mediated reactions has been to provide comparative and quantitative information on enzyme-catalyzed reactions under distinct conditions. The classic Michaelis-Menten model (Biochem Zeit 49:333, 1913) for enzyme kinetics has been widely used to determine important parameters involved in enzyme catalysis, particularly the Michaelis-Menten constant (K(M)) and the maximum velocity of reaction (V(max)). Subsequently, a detailed treatment of the mechanisms of enzyme catalysis was undertaken by Briggs-Haldane (Biochem J 19:338, 1925). These authors proposed the steady-state treatment, whose applicability is constrained to that condition. The present work describes an extended solution of the Michaelis-Menten model that does not require the steady-state restriction. We provide the first analysis in which all of the individual reaction constants are calculated analytically. Using this approach, it is possible to accurately predict results under new experimental conditions and to characterize and optimize industrial processes in the fields of chemical and food engineering, pharmaceuticals, and biotechnology.
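For reference, the classic rate law that the work above extends; this is the standard textbook result under the Briggs-Haldane steady-state assumption, not the paper's extended solution.

```latex
% Michaelis-Menten rate law: for the mechanism E + S <-> ES -> E + P
% with rate constants k1, k-1, k2, the steady-state assumption
% d[ES]/dt = 0 yields
\[
  v = \frac{V_{\max}\,[S]}{K_M + [S]},
  \qquad
  K_M = \frac{k_{-1} + k_2}{k_1},
  \qquad
  V_{\max} = k_2\,[E]_0 .
\]
```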
Abstract:
This work presents a thermoeconomic optimization methodology for the analysis and design of energy systems. The methodology combines economic aspects with the exergy concept in order to develop a tool that assists in equipment selection and operating-mode choice, as well as in optimizing thermal plant design. It also reviews the concepts of exergy in a general scope and of thermoeconomics, which combines the principles of the thermal sciences (thermodynamics, heat transfer, and fluid mechanics) with engineering economics in order to rationalize investment, development, and operation decisions for energy systems. The paper then develops the thermoeconomic methodology through a simple mathematical model involving thermodynamic parameters and cost evaluation, defining the objective function as the exergetic production cost. The optimization problem is evaluated for two energy systems: first a vapor-compression refrigeration system, and then a cogeneration system using a backpressure steam turbine.
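A hedged sketch of the optimization idea described above: minimize a unit exergetic production cost over a design variable, trading off fuel exergy consumption against capital cost. The cost relations below are hypothetical stand-ins, not the paper's plant model.

```python
# Toy thermoeconomic trade-off: low exergetic efficiency wastes fuel
# exergy, while high efficiency demands more expensive components.
from scipy.optimize import minimize_scalar

def exergetic_production_cost(eta_ex):
    """Hypothetical unit cost of product exergy vs. exergetic efficiency."""
    fuel_cost = 1.0 / eta_ex               # more fuel exergy at low efficiency
    capital_cost = 0.05 / (1.0 - eta_ex)   # better components cost more
    return fuel_cost + capital_cost

res = minimize_scalar(exergetic_production_cost,
                      bounds=(0.3, 0.95), method="bounded")
print(f"optimal exergetic efficiency ~ {res.x:.3f}, "
      f"unit cost ~ {res.fun:.3f}")
```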
Abstract:
This paper deals with the H(infinity) recursive estimation problem for general rectangular time-variant descriptor systems in discrete time. Riccati-equation-based recursions for filtered and predicted estimates are developed based on a data-fitting approach and game theory. In this approach, nature determines a state sequence seeking to maximize the estimation cost, whereas the estimator tries to find an estimate that brings the estimation cost to a minimum. A solution exists for a specified gamma-level if the resulting cost is positive. To provide computational alternatives, the H(infinity) filters developed are also rewritten in information form, along with the respective array algorithms.
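For context, a generic statement of the game-theoretic H(infinity) criterion alluded to above; this is the standard state-space formulation, and it does not capture the rectangular descriptor structure treated in the paper.

```latex
% Generic H-infinity filtering criterion: for a prescribed level
% gamma > 0, find estimates \hat{z}_k such that
\[
  \sup_{x_0,\, w,\, v}
  \frac{\sum_k \lVert z_k - \hat{z}_k \rVert^2}
       {\lVert x_0 - \hat{x}_0 \rVert_{P_0^{-1}}^2
        + \sum_k \left( \lVert w_k \rVert^2 + \lVert v_k \rVert^2 \right)}
  < \gamma^2 ,
\]
% i.e., the estimator bounds the worst-case cost that "nature" can
% induce through the disturbances w, v and the initial state error.
```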
Abstract:
The advantages offered by the electronic component LED (Light Emitting Diode) have led to the rapid and widespread adoption of this device as a replacement for incandescent lamps. In this application, however, the relationship between the design variables and the desired result is very complex, making it difficult to model with conventional techniques. This paper develops a technique using artificial neural networks that makes it possible to obtain the luminous intensity values of brake lights built with SMD (Surface-Mounted Device) LEDs directly from design data. The technique can be utilized to design any automotive device that uses groups of SMD LEDs. Results from industrial applications using SMD LEDs are presented to validate the proposed technique.
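A minimal sketch of the idea, assuming a scikit-learn feedforward network as a stand-in; the design variables, data, and target relation below are hypothetical placeholders, not the paper's industrial data.

```python
# Learn luminous intensity from brake-light design parameters with a
# small feedforward neural network (placeholder data throughout).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical design variables: LED count, drive current (mA),
# LED spacing (mm), lens angle (deg).
X = np.column_stack([
    rng.integers(4, 40, n),
    rng.uniform(10, 60, n),
    rng.uniform(2, 15, n),
    rng.uniform(5, 45, n),
])
# Placeholder target: luminous intensity (cd) from an assumed relation.
y = 0.8 * X[:, 0] * X[:, 1] / 30 + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out designs: {model.score(X_te, y_te):.3f}")
```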
Abstract:
This paper discusses the integrated design of parallel manipulators, which exhibit varying dynamics that affect machine stability and performance. The design methodology consists of four main steps: (i) system modeling using the flexible multibody technique, (ii) synthesis of reduced-order models suitable for control design, (iii) systematic flexible-model-based input signal design, and (iv) evaluation of possible machine designs. The novelty of this methodology is that structural flexibilities are taken into consideration during input signal design, enhancing the standard design process, which mainly considers rigid-body dynamics. The potential of the proposed strategy is demonstrated through the design evaluation of a two-degree-of-freedom high-speed parallel manipulator, and the results are experimentally validated.
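As an illustration of flexibility-aware input signal design, the sketch below implements a generic two-impulse zero-vibration (ZV) input shaper, a common textbook technique for suppressing a flexible mode. The abstract does not specify the paper's actual signal design method, and the mode parameters are assumed.

```python
# Generic ZV input shaper: convolve a raw command with a two-impulse
# train tuned to a flexible mode so the residual vibration cancels.
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse ZV shaper for a mode with natural frequency wn
    (rad/s) and damping ratio zeta."""
    wd = wn * np.sqrt(1 - zeta**2)          # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)   # impulse amplitudes
    times = np.array([0.0, np.pi / wd])     # impulse times (half period)
    return amps, times

def shape(command, dt, amps, times):
    """Convolve a sampled command with the shaper's impulse train."""
    shaped = np.zeros(len(command) + int(times[-1] / dt) + 1)
    for a, t in zip(amps, times):
        i = int(round(t / dt))
        shaped[i:i + len(command)] += a * command
    return shaped

amps, times = zv_shaper(wn=2 * np.pi * 12.0, zeta=0.02)  # assumed mode
shaped_step = shape(np.ones(200), dt=1e-3, amps=amps, times=times)
```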
Abstract:
The purpose of this paper is to propose a multiobjective optimization approach for solving the manufacturing cell formation problem, explicitly considering the performance of the resulting manufacturing system. Cells are formed so as to simultaneously minimize three conflicting objectives, namely the level of work-in-process, intercell moves, and total machinery investment. A genetic algorithm performs a search in the design space in order to approximate the Pareto-optimal set. The objective values for each candidate solution in a population are assigned by running a discrete-event simulation, in which the model is automatically generated according to the number of machines and their distribution among cells implied by that solution. The potential of this approach is evaluated via its application to an illustrative example and a case from the literature. The results are analyzed, and it is concluded that the approach can generate a set of alternative manufacturing cell configurations while optimizing multiple performance measures, greatly improving the decision-making process involved in planning and designing cellular systems.
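A compact sketch of the simulation-in-the-loop evaluation described above: candidate cell assignments are scored on three objectives by a simulation (here a placeholder function), and the non-dominated solutions approximate the Pareto front. A full GA would add selection, crossover, and mutation around this loop.

```python
# Score machine-to-cell assignments on three objectives and keep the
# non-dominated set. The simulate() body is a hypothetical stand-in
# for the discrete-event simulation run.
import random

random.seed(0)
N_MACHINES, N_CELLS = 12, 3

def simulate(assignment):
    """Placeholder returning (work-in-process, intercell moves,
    machinery investment) for one configuration."""
    wip = 1.0 + random.random() * len(set(assignment))
    intercell = sum(a != b for a, b in zip(assignment, assignment[1:]))
    investment = 10.0 * len(set(assignment))
    return (wip, intercell, investment)

def dominates(a, b):
    """True if objective vector a is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

population = [[random.randrange(N_CELLS) for _ in range(N_MACHINES)]
              for _ in range(30)]
evaluated = [(ind, simulate(ind)) for ind in population]
pareto = [(ind, obj) for ind, obj in evaluated
          if not any(dominates(other, obj) for _, other in evaluated)]
print(f"{len(pareto)} non-dominated configurations")
```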
Abstract:
The literature presents a large number of simulations of gas-solid flows in risers based on two-fluid modeling. Despite this, the related issue of quantitative accuracy remains largely unaddressed. This state of affairs seems to be mainly a consequence of modeling shortcomings, notably the lack of realistic closures. In this article, predictions from a two-fluid model are compared to other published two-fluid model predictions applying the same closures, and to experimental data. A particular concern is whether or not the predictions are generated inside the statistical steady-state regime that characterizes riser flows. The present simulation was performed inside the statistical steady-state regime. Time-averaged results are presented for time-averaging intervals of 5, 10, 15, and 20 s inside the statistical steady-state regime. The independence of the averaged results with respect to the time-averaging interval is addressed, and the results averaged over the 10 and 20 s intervals are compared to both experiment and other two-fluid predictions. It is concluded that the two-fluid model used is still very crude and cannot provide quantitatively accurate results, at least for the particular case considered.
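A small sketch of the time-averaging independence check described above, using a synthetic stationary signal as a stand-in for a monitored flow quantity (e.g., a local solids volume fraction); the sampling step and windows are illustrative.

```python
# Compare time averages over windows of 5, 10, 15 and 20 s: inside a
# statistical steady state, they should agree to within sampling noise.
import numpy as np

dt = 0.01                                  # s, assumed sampling step
t = np.arange(0, 20, dt)
signal = (0.3 + 0.05 * np.sin(2 * np.pi * t)
          + 0.02 * np.random.randn(len(t)))  # synthetic stationary signal

for window in (5, 10, 15, 20):
    n = int(window / dt)
    print(f"{window:>2d} s average: {signal[:n].mean():.4f}")
```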