886 results for Optimization. Markov Chain. Genetic Algorithm. Fuzzy Controller


Relevance: 100.00%

Abstract:

The aim of this paper is to present an economical design of an X̄ chart for short-run production. The process mean starts equal to μ0 (in control, State I) and at a random time shifts to μ1 > μ0 (out of control, State II). The monitoring procedure consists of inspecting a single item for every m items produced. If the measurement of the quality characteristic does not fall within the control limits, the process is stopped and adjusted, and an additional (r - 1) items are inspected retrospectively. The probabilistic model was developed considering only shifts in the process mean. A direct search technique is applied to find the optimum parameters that minimize the expected cost function. Numerical examples illustrate the proposed procedure. (C) 2009 Elsevier B.V. All rights reserved.
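The direct-search idea can be sketched as an exhaustive scan over the design parameters. This is a minimal illustration, not the paper's cost model: the cost surrogate below (sampling, false-alarm, and out-of-control terms), the cost coefficients, and the parameter ranges are all assumptions.

```python
import math

def expected_cost(m, k, shift=1.0, c_sample=1.0, c_false=50.0, c_oos=10.0):
    # Illustrative cost surrogate (NOT the paper's exact model): sampling cost
    # falls as the sampling interval m grows, false alarms depend on the
    # control-limit width k, and out-of-control cost grows with detection delay.
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    alpha = 2.0 * (1.0 - phi(k))                 # false-alarm probability per sample
    beta = phi(k - shift) - phi(-k - shift)      # probability of missing the shift
    arl1 = 1.0 / (1.0 - beta)                    # average samples to detect the shift
    return c_sample / m + c_false * alpha + c_oos * m * arl1 / 100.0

def direct_search():
    # Grid-based direct search over (m, k) for the minimum expected cost.
    best = None
    for m in range(1, 51):                       # inspect one item every m produced
        for k10 in range(10, 41):                # limit width k in [1.0, 4.0]
            k = k10 / 10.0
            cost = expected_cost(m, k)
            if best is None or cost < best[0]:
                best = (cost, m, k)
    return best

cost, m_opt, k_opt = direct_search()
```

A finer grid or a pattern-search method (e.g. Hooke-Jeeves) would refine the optimum further.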

Relevance: 100.00%

Abstract:

Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently assume constant variance, under-represent the variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models then underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo algorithms, allowing inferences over the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
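The posterior sampling step can be illustrated with a tiny random-walk Metropolis sampler for a single proportion on the logit scale. This is a sketch only: the toy data, the N(0, 2²) prior, and the single-parameter model (no random effect or dispersion submodel) are assumptions standing in for the full Bayesian DGLM.

```python
import math, random

random.seed(1)

# Toy proportion data: successes out of n trials per experimental unit.
successes = [8, 9, 7, 10, 6]
trials = [12, 12, 12, 12, 12]

def log_posterior(theta):
    # theta is the logit of the success probability; N(0, 2^2) prior stands in
    # for the weak prior information mentioned in the abstract.
    p = 1.0 / (1.0 + math.exp(-theta))
    loglik = sum(y * math.log(p) + (n - y) * math.log(1.0 - p)
                 for y, n in zip(successes, trials))
    logprior = -theta * theta / (2.0 * 2.0 ** 2)
    return loglik + logprior

def metropolis(n_iter=5000, step=0.3):
    # Random-walk Metropolis: propose, then accept with probability
    # min(1, posterior ratio).
    theta = 0.0
    lp = log_posterior(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_posterior(prop)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

chain = metropolis()
burned = chain[1000:]  # discard burn-in
post_mean_p = sum(1.0 / (1.0 + math.exp(-t)) for t in burned) / len(burned)
```

The observed pooled proportion is 40/60 ≈ 0.67, so the posterior mean of p should land nearby.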

Relevance: 100.00%

Abstract:

Tuberculosis is an infection caused mainly by Mycobacterium tuberculosis. A first-line antimycobacterial drug is pyrazinamide (PZA), which acts partially as a prodrug activated by a pyrazinamidase that releases the active agent, pyrazinoic acid (POA). Because pyrazinoic acid has some difficulty crossing the mycobacterial cell wall, and pyrazinamide-resistant strains do not express the pyrazinamidase, a set of pyrazinoic acid esters has been evaluated as antimycobacterial agents. In this work, a QSAR approach was applied to a set of forty-three pyrazinoates against M. tuberculosis ATCC 27294, using a genetic algorithm function and partial least squares regression (WOLF 5.5 program). The independent variables selected were the Balaban index (I), the calculated n-octanol/water partition coefficient (ClogP), the van der Waals surface area, the dipole moment, and the stretching-energy contribution. The final QSAR model (N = 32, r² = 0.68, q² = 0.59, LOF = 0.25, and LSE = 0.19) was fully validated using leave-N-out cross-validation and y-scrambling techniques. The test set (N = 11) presented an external prediction power of 73%. In conclusion, the QSAR model generated can be used as a valuable tool to optimize the activity of future pyrazinoic acid esters in the design of new antituberculosis agents.
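Genetic-algorithm variable selection of the kind used here can be sketched with a bit-mask GA over synthetic descriptors. Everything below is illustrative: the synthetic data (activity driven only by descriptors 0 and 2, stand-ins for e.g. ClogP and a topological index), the correlation-based fitness, and the GA settings are assumptions, not the WOLF program's procedure.

```python
import random

random.seed(7)

n_samples, n_desc = 40, 6
X = [[random.gauss(0, 1) for _ in range(n_desc)] for _ in range(n_samples)]
# Synthetic "activity": depends only on descriptors 0 and 2, plus noise.
y = [row[0] + row[2] + random.gauss(0, 0.2) for row in X]

def corr2(a, b):
    # Squared Pearson correlation.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov * cov / (va * vb) if va and vb else 0.0

def fitness(mask):
    # Score a descriptor subset; the size penalty discourages bloated models.
    if not any(mask):
        return 0.0
    score = [sum(x for x, m in zip(row, mask) if m) for row in X]
    return corr2(score, y) - 0.01 * sum(mask)

def ga(pop_size=30, gens=40):
    pop = [[random.randint(0, 1) for _ in range(n_desc)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:6]                        # elitism: keep the best masks
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:15], 2)     # parents from the top half
            cut = random.randint(1, n_desc - 1)
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < 0.2:             # bit-flip mutation
                i = random.randrange(n_desc)
                child = child[:i] + [1 - child[i]] + child[i + 1:]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best_mask = ga()
```

In the real workflow the fitness would be a cross-validated PLS statistic (e.g. q²) rather than a raw correlation.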

Relevance: 100.00%

Abstract:

Histamine is an important biogenic amine that acts through a family of four G-protein-coupled receptors (GPCRs), namely the H1 to H4 (H1R to H4R) receptors. The actions of histamine at H4R are related to immunological and inflammatory processes, particularly in the pathophysiology of asthma, and H4R ligands with antagonistic properties could be helpful as anti-inflammatory agents. In this work, molecular modeling and QSAR studies of a set of 30 compounds, indole and benzimidazole derivatives, as H4R antagonists were performed. The QSAR models were built and optimized using a genetic algorithm function and partial least squares regression (WOLF 5.5 program). The best QSAR model, constructed with the training set (N = 25), presented the following statistical measures: r² = 0.76, q² = 0.62, LOF = 0.15, and LSE = 0.07, and was validated using the LNO and y-randomization techniques. Four of the five compounds in the test set were well predicted by the selected QSAR model, which presented an external prediction power of 80%. These findings can be quite useful in the design of new anti-H4 compounds with improved biological response.
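The y-randomization check mentioned here has a simple logic: refit the model after shuffling the responses, and a sound model's r² should collapse. A minimal sketch with a single synthetic descriptor (the data and the univariate model are assumptions, not the paper's 30-compound set):

```python
import random

random.seed(3)

n = 30
x = [random.gauss(0, 1) for _ in range(n)]
# Toy "activity" strongly driven by the single descriptor x.
y = [2.0 * xi + random.gauss(0, 0.5) for xi in x]

def r2(a, b):
    # Squared Pearson correlation, equal to r^2 of a univariate linear fit.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov * cov / (va * vb)

r2_real = r2(x, y)

# y-randomization: shuffle the activities and re-score; if the original model
# were a chance correlation, the scrambled r2 values would be comparable.
r2_scrambled = []
for _ in range(100):
    ys = y[:]
    random.shuffle(ys)
    r2_scrambled.append(r2(x, ys))
mean_scrambled = sum(r2_scrambled) / len(r2_scrambled)
```

A large gap between `r2_real` and the scrambled scores is the evidence that the model is not a chance artifact.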

Relevance: 100.00%

Abstract:

Chlorpheniramine maleate (CLOR) enantiomers were quantified by ultraviolet spectroscopy and partial least squares regression. The CLOR enantiomers were prepared as inclusion complexes with beta-cyclodextrin and 1-butanol, with mole fractions in the range from 50 to 100%. For the multivariate calibration, outliers were detected and excluded, and variable selection was performed by interval partial least squares and a genetic algorithm. Figures of merit showed accuracies of 3.63 and 2.83% (S)-CLOR for the root mean square errors of calibration and prediction, respectively. The elliptical confidence region included the point corresponding to a slope of 1 and an intercept of 0. Precision and analytical sensitivity were 0.57 and 0.50% (S)-CLOR, respectively. The sensitivity, selectivity, adjustment, and signal-to-noise ratio were also determined. The model was validated by a paired t-test against the results obtained by the high-performance liquid chromatography method proposed by the European Pharmacopoeia and by circular dichroism spectroscopy. The results showed no significant difference between the methods at the 95% confidence level, indicating that the proposed method can be used as an alternative to standard procedures for chiral analysis.
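The method-comparison step (a paired t-test at the 95% level) is easy to reproduce. The paired measurements below are hypothetical stand-ins for the UV/PLS and HPLC determinations, not the paper's data; the critical value 2.447 is t(0.975, df = 6).

```python
import math

# Hypothetical paired determinations of %(S)-CLOR on the same seven samples
# by the proposed UV/PLS method and the reference HPLC method.
uv   = [55.2, 61.8, 70.1, 75.4, 80.9, 90.3, 95.0]
hplc = [55.0, 62.1, 69.8, 75.9, 80.5, 90.6, 94.7]

d = [a - b for a, b in zip(uv, hplc)]          # paired differences
n = len(d)
mean_d = sum(d) / n
sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))

# Two-sided critical value t(0.975, df = 6) is about 2.447; |t| below it
# means no significant difference between methods at the 95% level.
significant = abs(t_stat) > 2.447
```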

Relevance: 100.00%

Abstract:

T cells recognize peptide epitopes bound to major histocompatibility complex molecules. Human T-cell epitopes have diagnostic and therapeutic applications in autoimmune diseases. However, their accurate definition within an autoantigen by T-cell bioassay, usually proliferation, involves many costly peptides and a large amount of blood. We have therefore developed a strategy to predict T-cell epitopes and applied it to tyrosine phosphatase IA-2, an autoantigen in IDDM, and to HLA-DR4(*0401). First, the binding of synthetic overlapping peptides encompassing IA-2 to purified DR4 was measured directly. Secondly, a large set of HLA-DR4 binding data was analysed by alignment using a genetic algorithm and used to train an artificial neural network to predict the affinity of binding. This bioinformatic prediction method was then validated experimentally and used to predict DR4-binding peptides in IA-2. The binding set encompassed 85% of experimentally determined T-cell epitopes. Both the experimental and bioinformatic methods had high negative predictive values, 92% and 95%, indicating that this strategy of combining experimental results with computer modelling should lead to a significant reduction in the amount of blood and the number of peptides required to define T-cell epitopes in humans.
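The headline figures here (85% coverage, 92-95% negative predictive value) come from confusion-matrix arithmetic. A minimal sketch with made-up labels (the 20 peptides and their calls below are illustrative, not the study's data):

```python
# Hypothetical epitope labels (1 = experimentally confirmed T-cell epitope)
# and binder predictions (1 = predicted DR4 binder).
actual    = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
predicted = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)

sensitivity = tp / (tp + fn)   # fraction of true epitopes the binding set captures
npv = tn / (tn + fn)           # confidence that a "non-binder" call is truly non-epitope
```

A high NPV is exactly what lets the strategy discard most peptides without bioassay: a peptide predicted not to bind is very unlikely to be an epitope.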

Relevance: 100.00%

Abstract:

A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated, namely a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme and a classical maximum-likelihood estimator. These two estimators are compared with an estimator suggested by Singh, Joarder & King (1996). Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
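The multiplicative scrambling mechanism can be illustrated with a simple simulation and a method-of-moments de-scrambler (much cruder than the paper's likelihood-based estimators, and with assumed distributions): since the scrambling variable s is independent of the response y, E[z] = E[y]·E[s], so mean(z)/E[s] recovers the mean response.

```python
import random

random.seed(11)

n = 2000
mu_s = 1.5   # known mean of the scrambling variable s ~ Uniform(1, 2)

# True sensitive responses (never seen by the interviewer in a real survey).
y = [random.gauss(10.0, 2.0) for _ in range(n)]

# Each respondent reports z = y * s, with s drawn privately.
z = [yi * random.uniform(1.0, 2.0) for yi in y]

# Method-of-moments de-scrambling: E[z] = E[y] * E[s].
est_mean_y = sum(z) / n / mu_s
```

The interviewer never observes any individual y, yet the population mean is recoverable; the paper's point is that a Bayesian MCMC estimator does this job more efficiently, especially under heavy scrambling.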

Relevance: 100.00%

Abstract:

Acceptance-probability-controlled simulated annealing with an adaptive move-generation procedure, an optimization technique derived from the simulated annealing algorithm, is presented. The adaptive move-generation procedure was compared against a random move-generation procedure on seven multiminima test functions, as well as on synthetic data resembling the optical constants of a metal. In all cases the algorithm showed faster convergence and superior escape from local minima. The algorithm was then applied to fit a model dielectric function to data for platinum and aluminum.
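A sketch of simulated annealing with an acceptance-rate-driven step adaptation, on a 1-D multiminima test function. The test function, cooling schedule, adaptation thresholds, and restart count are all assumptions for illustration; the paper's acceptance-probability control law is not reproduced here.

```python
import math, random

random.seed(5)

def multiminima(x):
    # 1-D Rastrigin-style test function; global minimum is 0 at x = 0.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def anneal(t0=10.0, cooling=0.99, n_iter=4000):
    x = random.uniform(-5, 5)
    fx = multiminima(x)
    step, t, accepted = 1.0, t0, 0
    for i in range(1, n_iter + 1):
        cand = x + random.gauss(0.0, step)
        fc = multiminima(cand)
        # Metropolis acceptance: always take improvements, sometimes take
        # uphill moves while the temperature t is high.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            accepted += 1
        if i % 100 == 0:
            rate = accepted / 100.0
            # Adaptive move generation: shrink the step when too few moves
            # are accepted, widen it when almost everything is accepted.
            if rate < 0.2:
                step *= 0.8
            elif rate > 0.6:
                step *= 1.2
            accepted = 0
        t *= cooling
    return x, fx

# A few independent restarts make this toy demo robust.
x_best, f_best = min((anneal() for _ in range(5)), key=lambda r: r[1])
```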

Relevance: 100.00%

Abstract:

Hepatitis C virus (HCV) is a frequent cause of acute and chronic hepatitis and a leading cause of cirrhosis of the liver and hepatocellular carcinoma. HCV is classified into six major genotypes and more than 70 subtypes. In Colombian blood banks, serum samples were tested for anti-HCV antibodies using a third-generation ELISA. The aim of this study was to characterize the viral sequences in plasma of 184 volunteer blood donors who attended the "Banco Nacional de Sangre de la Cruz Roja Colombiana," Bogota, Colombia. Three different HCV genomic regions were amplified by nested PCR. The first was a 180-bp segment of the 5'UTR, used to confirm the previous diagnosis by ELISA. From the samples positive for the 5'UTR, two further segments were amplified for genotyping and subtyping by phylogenetic analysis: a 380-bp segment from the NS5B region and a 391-bp segment from the E1 region. The distribution of HCV subtypes was: 1b (82.8%), 1a (5.7%), 2a (5.7%), 2b (2.8%), and 3a (2.8%). By applying Bayesian Markov chain Monte Carlo simulation, it was estimated that HCV-1b was introduced into Bogota around 1950. This subtype then spread at an exponential rate from about 1970 to about 1990, after which transmission of HCV was reduced by anti-HCV testing of this population. Among Colombian blood donors, HCV genotype 1b is the most frequent genotype, especially in large urban conglomerates such as Bogota, as is the case in other South American countries. J. Med. Virol. 82: 1889-1898, 2010. (C) 2010 Wiley-Liss, Inc.

Relevance: 100.00%

Abstract:

Molecular epidemiological data concerning the hepatitis B virus (HBV) in Chile are incomplete. Since HBV genotype F is the most prevalent in the country, the goal of this study was to obtain full HBV genome sequences from chronically infected patients in order to determine their subgenotypes and the occurrence of resistance-associated mutations. Twenty-one serum samples from antiviral drug-naive patients with chronic hepatitis B were subjected to full-length PCR amplification, and both strands of the whole genomes were fully sequenced. Phylogenetic analyses were performed along with reference sequences available from GenBank (n = 290). The sequences were aligned using Clustal X and edited in the SE-AL software. Bayesian phylogenetic analyses were conducted by Markov chain Monte Carlo (MCMC) simulations for 10 million generations in order to obtain the substitution tree using BEAST. The sequences were also analyzed for the presence of primary drug resistance mutations using CodonCode Aligner software. The phylogenetic analyses indicated that all sequences belonged to HBV subgenotype F1b, clustered into four different groups, suggesting that diverse lineages of this subgenotype may be circulating within this population of Chilean patients. J. Med. Virol. 83: 1530-1536, 2011. (C) 2011 Wiley-Liss, Inc.

Relevance: 100.00%

Abstract:

Background: At least for a subset of patients, the clinical diagnosis of mild cognitive impairment (MCI) may represent an intermediate stage between normal aging and dementia. Nevertheless, the patterns of transition between the cognitive states of normal aging and MCI, prior to dementia, are not well established. In this study we address the pattern of transitions between cognitive states in patients with MCI and healthy controls, prior to conversion to dementia. Methods: 139 subjects (78% women; mean age, 68.5 ± 6.1 years; mean educational level, 11.7 ± 5.4 years) were consecutively assessed in a memory clinic with a standardized clinical and neuropsychological protocol and classified as cognitively healthy (normal controls) or with MCI (including subtypes) at baseline. These subjects underwent annual reassessments (mean duration of follow-up: 2.7 ± 1.1 years), in which cognitive state was ascertained independently of prior diagnoses. The pattern of transitions between cognitive states was determined by Markov chain analysis. Results: The transitions from one cognitive state to another varied substantially between MCI subtypes. Single-domain MCI (amnestic and non-amnestic) more frequently returned to a normal cognitive state upon follow-up (22.5% and 21%, respectively). Among subjects who progressed to Alzheimer's disease (AD), the most common diagnosis immediately prior to conversion was multiple-domain MCI (85%). Conclusion: The clinical diagnosis of MCI and its subtypes yields groups of patients with heterogeneous patterns of transition from one given cognitive state to another. The presence of more severe and widespread cognitive deficits, as indicated by multiple-domain amnestic MCI, may be a better predictor of AD than single-domain amnestic or non-amnestic deficits. These higher-risk individuals are probably the best candidates for the development of preventive strategies and early treatment for the disease.
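The Markov chain analysis of annual reassessments amounts to counting observed state-to-state transitions and row-normalising them into maximum-likelihood transition probabilities. A minimal sketch with invented sequences (N = normal, M = MCI, D = dementia; the data below are illustrative, not the study's cohort):

```python
from collections import defaultdict

# Hypothetical annual cognitive-state sequences, one string per subject.
sequences = [
    "NNMMD", "NNNNN", "NMMMD", "NMNNM", "MMDDD",
    "MMMMM", "NMMND", "NNNMM", "MMMMD", "NNMNN",
]

# Count transitions between consecutive annual assessments.
counts = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

# Row-normalise counts into maximum-likelihood transition probabilities.
transition = {
    a: {b: c / sum(row.values()) for b, c in row.items()}
    for a, row in counts.items()
}
```

Reversion rates (e.g. the M-to-N entry) and conversion rates (M-to-D) can then be read straight off the matrix, which is how figures like "22.5% returned to normal" arise.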

Relevance: 100.00%

Abstract:

Applied econometricians often fail to impose economic regularity constraints in the exact form economic theory prescribes. We show how the Singular Value Decomposition (SVD) theorem and Markov chain Monte Carlo (MCMC) methods can be used to rigorously impose time- and firm-varying equality and inequality constraints. To illustrate the technique we estimate a system of translog input demand functions subject to all the constraints implied by economic theory, including observation-varying symmetry and concavity constraints. Results are presented in the form of characteristics of the estimated posterior distributions of functions of the parameters. Copyright (C) 2001 John Wiley & Sons, Ltd.
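One standard way MCMC imposes an inequality constraint is to give zero prior mass to the violating region, so proposals that break the constraint are simply never accepted. A toy sketch (the data, the sigma² = 0.25 likelihood, and the single non-negativity constraint on a slope are assumptions; the paper's translog system and SVD reparameterization are far richer):

```python
import math, random

random.seed(2)

# Toy regression data where theory says the slope must be non-negative.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.4, 1.1, 1.3, 2.2]

def log_post(beta):
    if beta < 0:                      # inequality constraint: zero prior mass
        return -math.inf
    sse = sum((y - beta * x) ** 2 for x, y in zip(xs, ys))
    return -sse / (2 * 0.25)          # Gaussian likelihood, flat prior on beta >= 0

beta, lp = 0.5, log_post(0.5)
draws = []
for _ in range(4000):
    cand = beta + random.gauss(0.0, 0.2)
    lp_cand = log_post(cand)
    # Constraint-violating candidates have lp_cand = -inf and are rejected.
    if lp_cand > -math.inf and math.log(random.random()) < lp_cand - lp:
        beta, lp = cand, lp_cand
    draws.append(beta)

post_mean = sum(draws[500:]) / len(draws[500:])
```

Every retained draw satisfies the constraint by construction, so any posterior summary (mean, quantiles, functions of the parameters) automatically respects it at every observation.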

Relevance: 100.00%

Abstract:

Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
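For background, the equilibrium behaviour referred to here is the stationary distribution π satisfying πQ = 0 for generator Q. A minimal sketch via uniformization and power iteration (the 3-state birth-death generator is an invented toy, not one of the paper's network models, and this computes the exact equilibrium rather than the paper's expected-rate approximation):

```python
# Generator matrix Q of a small continuous-time Markov chain
# (an illustrative 3-state birth-death queue).
Q = [
    [-1.0,  1.0,  0.0],
    [ 2.0, -3.0,  1.0],
    [ 0.0,  2.0, -2.0],
]

# Uniformization: P = I + Q/Lambda is a DTMC with the same stationary
# distribution as Q, provided Lambda exceeds every exit rate.
Lam = max(-Q[i][i] for i in range(3)) + 1.0
P = [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(3)]
     for i in range(3)]

# Power iteration: repeatedly apply pi <- pi P until it stabilises.
pi = [1 / 3, 1 / 3, 1 / 3]
for _ in range(500):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Residual of the stationary condition pi Q = 0.
residual = max(abs(sum(pi[i] * Q[i][j] for i in range(3))) for j in range(3))
```

For this birth-death chain the answer is available in closed form, π = (4/7, 2/7, 1/7), which makes the iteration easy to check; the paper's method is aimed at chains too large for any such direct computation.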

Relevance: 100.00%

Abstract:

Cropp and Gabric [Ecosystem adaptation: do ecosystems maximise resilience? Ecology, in press] used a simple phytoplankton-zooplankton-nutrient model and a genetic algorithm to determine the parameter values that would maximize the value of certain goal functions. These goal functions were to maximize biomass, maximize flux, maximize the flux-to-biomass ratio, and maximize resilience. It was found that maximizing the goal functions maximized resilience. The objective of this study was to investigate whether the Cropp and Gabric result was indicative of a general ecosystem principle, or peculiar to the model and parameter ranges used. This study successfully replicated the Cropp and Gabric experiment for a number of different model types; however, a different interpretation of the results is made. A new metric, concordance, was devised to describe the agreement between goal functions. It was found that resilience had the highest concordance of all goal functions trialled, for most model types. This implies that resilience offers a compromise between the established ecological goal functions. The parameter value range used was found to affect the parameter-versus-goal-function relationships. Local maxima and minima affected the relationships between parameters and goal functions, and between goal functions. (C) 2003 Elsevier B.V. All rights reserved.
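The abstract does not define the concordance metric, so as an illustrative stand-in, agreement between two goal functions over a set of candidate parameter vectors can be measured by Spearman rank correlation: if two goal functions rank the candidates similarly, they are concordant. The goal-function values below are invented.

```python
def rank(values):
    # Rank positions 1..n (no ties in this toy data).
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(a, b):
    # Spearman's rho via the rank-difference formula (tie-free case).
    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical goal-function values across eight candidate parameter sets.
biomass    = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
flux       = [1.1, 2.9, 2.2, 4.5, 4.9, 7.5, 6.2, 7.9]
resilience = [1.2, 1.9, 3.3, 3.8, 5.4, 6.1, 6.8, 8.2]

c_biomass_flux = spearman(biomass, flux)
c_biomass_res  = spearman(biomass, resilience)
```

A goal function whose average rank correlation with all the others is highest plays the "compromise" role the study attributes to resilience.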

Relevance: 100.00%

Abstract:

The increased integration of wind power into the electric grid, as currently occurs in Portugal, poses new challenges due to its intermittency and volatility. Wind power prediction plays a key role in tackling these challenges. The contribution of this paper is to propose a new hybrid approach, combining particle swarm optimization and an adaptive-network-based fuzzy inference system, for short-term wind power prediction in Portugal. Significant improvements in forecasting accuracy are attainable using the proposed approach, in comparison with the results obtained with five other approaches.
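The particle swarm optimization half of the hybrid can be sketched in a few lines. The sphere objective below is a stand-in: in the paper's setting the particles would encode ANFIS premise parameters and the objective would be the training error on historical wind-power data; the swarm size, inertia, and acceleration coefficients are common textbook defaults, not the paper's settings.

```python
import random

random.seed(4)

def sphere(pos):
    # Stand-in objective with minimum 0 at the origin.
    return sum(x * x for x in pos)

def pso(dim=3, n_particles=20, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [sphere(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = sphere(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best_pos, best_val = pso()
```

In the hybrid scheme, each evaluation of the objective would retrain or re-score the fuzzy inference system, so keeping the swarm small matters for run time.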