930 results for Local optimization algorithms
Abstract:
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse may be indicated by a relatively high number of individuals with large censored survival times. In this paper, the generalized log-gamma model is modified to allow for the possibility that long-term survivors are present in the data. The model attempts to estimate separately the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used as the regression model for the surviving fraction. Inference for the model parameters is carried out via maximum likelihood. Influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the generalized log-gamma mixture model. A residual analysis is performed in order to select an appropriate model.
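The mixture structure described above can be sketched in a few lines; the covariates, coefficients and latency distribution below are hypothetical stand-ins for illustration, not the paper's fitted model:

```python
import math

def surviving_fraction(x, beta):
    """Logistic regression model for the cured (long-term survivor) proportion p(x)."""
    eta = sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-eta))

def mixture_survival(t, x, beta, s0):
    """Population survival: the immune fraction never fails; the rest follows s0(t)."""
    p = surviving_fraction(x, beta)
    return p + (1.0 - p) * s0(t)

# Hypothetical example: exponential latency distribution for the susceptible group
s0 = lambda t: math.exp(-0.5 * t)
x = [1.0, 0.3]        # intercept term plus one covariate
beta = [-1.0, 2.0]    # hypothetical regression coefficients
```

As t grows, the population survival levels off at p(x) rather than at zero, which is what lets the surviving fraction be estimated separately from the latency distribution.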
Abstract:
When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers at a fixed distance of 3 cM between them. In all, 700 F(2) populations were randomly simulated with varying numbers of individuals (up to 400) and different combinations of dominant and co-dominant markers, as well as 10 and 20% missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criterion may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria investigated (except SALOD) may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of linkage in repulsion between markers increases and, in this case, use of the algorithms TRY and SER associated with RIPPLE under the LHMC criterion provides better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009
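As an illustration of one of the ordering criteria above, a minimal sketch of SARF with a hypothetical recombination-fraction matrix; exhaustive search is feasible only at this toy scale, which is precisely why the heuristic algorithms in the study (TRY, SER, RCD, RECORD, UG) exist:

```python
from itertools import permutations

def sarf(order, rf):
    """Sum of adjacent recombination fractions (SARF) for a candidate marker
    order; rf[i][j] is the pairwise recombination fraction between markers i and j."""
    return sum(rf[a][b] for a, b in zip(order, order[1:]))

# Toy 4-marker example: a hypothetical symmetric recombination-fraction matrix
rf = [
    [0.00, 0.05, 0.12, 0.20],
    [0.05, 0.00, 0.06, 0.14],
    [0.12, 0.06, 0.00, 0.07],
    [0.20, 0.14, 0.07, 0.00],
]
# The best order minimizes SARF (an order and its reverse are equivalent)
best = min(permutations(range(4)), key=lambda o: sarf(o, rf))
```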
Abstract:
The present investigation is the first part of an initiative to prepare a regional map of the natural abundance of selenium in various areas of Brazil, based on the analysis of bean and soil samples. Continuous-flow hydride generation electrothermal atomic absorption spectrometry (HG-ET AAS) with in situ trapping on an iridium-coated graphite tube was chosen because of its high sensitivity and relative simplicity. Microwave-assisted acid digestion of the bean and soil samples was tested for complete recovery of inorganic and organic selenium compounds (selenomethionine). The reduction of Se(VI) to Se(IV) was optimized in order to guarantee that there is no back-oxidation, which is important when digested samples are not analyzed immediately after the reduction step. The limits of detection and quantification of the method were 30 ng/L Se and 101 ng/L Se, respectively, corresponding to about 3 ng/g and 10 ng/g in the solid samples, considering a typical dilution factor of 100 for the digestion process. The results obtained for two certified food reference materials (CRM), soybean and rice, and for a soil and sediment CRM confirmed the validity of the investigated method. The selenium content found in a number of selected bean samples varied between 5.5 ± 0.4 ng/g and 1726 ± 55 ng/g, and that in soil samples varied between 113 ± 6.5 ng/g and 1692 ± 21 ng/g. (C) 2011 Elsevier B.V. All rights reserved.
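The stated solution-to-solid conversion is simple arithmetic; a sketch, assuming the dilution factor of 100 means 100 mL of final digest solution per gram of sample:

```python
def solution_to_solid(conc_ng_per_L, dilution_mL_per_g=100):
    """Convert a solution concentration (ng/L) into the equivalent
    solid-sample value (ng/g) for a given dilution factor
    (mL of final digest solution per g of sample)."""
    return conc_ng_per_L / 1000.0 * dilution_mL_per_g  # ng/L -> ng/mL -> ng/g
```

With the stated factor, 30 ng/L and 101 ng/L in solution correspond to about 3 ng/g and 10 ng/g in the solid, matching the abstract.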
Abstract:
Desserts made with soy cream, which are oil-in-water emulsions, are widely consumed by lactose-intolerant individuals in Brazil. This study therefore used response surface methodology (RSM) to optimize the sensory attributes of a soy-based emulsion over a range of pink guava juice (GJ: 22% to 32%) and soy protein (SP: 1% to 3%) contents. Water-holding capacity (WHC) and backscattering were analyzed after 72 h of storage at 7 °C. Furthermore, a rating test was performed to determine the degree of liking of color, taste, creaminess, appearance, and overall acceptability. The data showed that the samples were stable against gravity and storage. The models developed by RSM adequately described the creaminess, taste, and appearance of the emulsions. The response surface of the desirability function was used successfully in the optimization of the sensory properties of dairy-free emulsions, suggesting that a product with 30.35% GJ and 3% SP was the best combination of these components. The optimized sample presented suitable sensory properties, in addition to being a source of dietary fiber, iron, copper, and ascorbic acid.
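A minimal sketch of the desirability approach used in studies like this one (Derringer-Suich form); the bounds and weights below are illustrative, not those of the study:

```python
def d_maximize(y, low, high, weight=1.0):
    """Derringer-Suich one-sided desirability for a response to maximize:
    0 below `low`, 1 above `high`, and a power ramp in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities; the optimum
    formulation maximizes this combined score."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

Because the overall score is a geometric mean, any single response with zero desirability drives the whole formulation's score to zero, which is what makes the function useful for balancing several sensory attributes at once.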
Abstract:
Pothomorphe umbellata is a native plant widely employed in Brazilian popular medicine. This plant has been shown to exert a potent antioxidant activity on the skin and to delay the onset and reduce the incidence of UVB-induced skin damage and photoaging. The aim of this work was to optimize the appearance, centrifuge stability and skin permeation of emulsions containing P. umbellata (0.1% 4-nerolidylcatechol). Experimental design was used to study ternary mixture models with constraints, with graphical representation by phase diagrams. The constraints reduce the possible experimental domain and, for this reason, the methodology offers the maximum information while requiring the minimum investment. The results showed that appearance follows a linear model, with the aqueous phase as the principal factor affecting it; the centrifuge stability parameter followed a quadratic mathematical model, with the interactions between factors producing the most stable emulsions; and skin permeation was improved by the oil phase, following a linear model generated by data analysis. We propose the following optimized P. umbellata formulation: 68.4% aqueous phase, 26.6% oil phase and 5.0% self-emulsifying phase. This formulation displayed an acceptable compromise between the factors and responses investigated. (c) 2007 Elsevier B.V. All rights reserved.
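The ternary-mixture setting above can be illustrated with the Scheffé linear model; the coefficients below are hypothetical, and the reported optimized formulation is shown satisfying the mixture constraint:

```python
def scheffe_linear(b, x):
    """Scheffe linear mixture model: predicted response for a blend whose
    component proportions x must sum to 1 (the mixture constraint that
    restricts the experimental domain to a simplex)."""
    assert abs(sum(x) - 1.0) < 1e-9, "mixture proportions must sum to 1"
    return sum(bi * xi for bi, xi in zip(b, x))

# The optimized formulation reported above satisfies the constraint:
optimized = [0.684, 0.266, 0.050]  # aqueous, oil, self-emulsifying phases
```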
Abstract:
A simplex-lattice statistical design was employed to study an optimization method for the preservative system in an ophthalmic suspension of dexamethasone and polymyxin B. The assay matrix generated 17 formulas differentiated by their preservatives and EDTA (disodium ethylenediaminetetraacetate), with the independent variables: X1 = chlorhexidine digluconate (0.010% w/v); X2 = phenylethanol (0.500% w/v); X3 = EDTA (0.100% w/v). The dependent variable was the D-value obtained from the microbial challenge of the formulas, calculated by modeling the microbial killing process with an exponential function. The analysis of the dependent variable, performed using the software Design Expert/W, produced cubic equations with terms derived from the stepwise adjustment method for the challenge microorganisms: Pseudomonas aeruginosa, Burkholderia cepacia, Staphylococcus aureus, Candida albicans and Aspergillus niger. Besides the mathematical expressions, response surfaces and contour graphics were obtained for each assay. The contour graphs were overlaid in order to identify a region containing the most adequate formulas (graphic strategy), with representative points: X1 = 0.10 (0.001% w/v); X2 = 0.80 (0.400% w/v); X3 = 0.10 (0.010% w/v). Additionally, in order to minimize the responses (D-values), a numerical strategy based on the desirability function was used, which resulted in the following combination of independent variables: X1 = 0.25 (0.0025% w/v); X2 = 0.75 (0.375% w/v); X3 = 0. The formulas derived from the two strategies (graphic and numerical) were submitted to microbial challenge, and the experimental D-value obtained was compared to the theoretical D-value calculated from the cubic equation. The D-values were similar in all the assays except that related to Staphylococcus aureus.
This microorganism, as well as Pseudomonas aeruginosa, was highly susceptible to the formulas independently of the preservative and EDTA concentrations. The formulas derived from both the graphic and numerical strategies attained the criteria recommended by the official method. It was concluded that the proposed model allowed the optimization of the formulas with respect to their preservation.
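When the microbial kill curve is modeled as exponential (log-linear), the D-value reduces to a one-line formula; a sketch, assuming the standard decimal-reduction-time definition:

```python
import math

def d_value(t, n0, nt):
    """Decimal reduction time from a log-linear kill curve:
    N(t) = N0 * 10**(-t/D), hence D = t / log10(N0/Nt).
    Smaller D-values indicate faster killing and thus better preservation."""
    return t / math.log10(n0 / nt)
```

For example, a 2-log reduction (10**6 to 10**4 CFU) in 2 hours gives a D-value of 1 hour.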
Abstract:
Exposure to oxygen may impair the functionality of probiotic dairy foods because, during storage, it compromises the viability that the anaerobically metabolizing probiotic bacteria need in order to provide benefits to consumer health. Glucose oxidase is a potential alternative for increasing the survival of probiotic bacteria in yogurt because it consumes the oxygen permeating into the pot during storage, making it possible to avoid chemical additives. This research aimed to optimize the processing of probiotic yogurt supplemented with glucose oxidase using response surface methodology and to determine, via the desirability function, the levels of glucose and glucose oxidase that minimize the concentration of dissolved oxygen and maximize the Bifidobacterium longum count. The response surface methodology mathematical models adequately described the process, with adjusted determination coefficients of 83% for oxygen and 94% for B. longum. Linear and quadratic effects of glucose oxidase were observed in the oxygen model, whereas the B. longum count model showed a linear effect of glucose oxidase followed by quadratic effects of glucose and of glucose oxidase. The desirability function indicated that 62.32 ppm of glucose oxidase and 4.35 ppm of glucose was the best combination of these components for optimization of probiotic yogurt processing. An additional validation experiment showed acceptable error between the predicted and experimental results.
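A sketch of how a fitted second-order RSM model of the kind described here can be searched for its optimum factor levels; the polynomial coefficients below are hypothetical, not the yogurt models:

```python
def predict_quadratic(b, x1, x2):
    """Second-order response-surface polynomial in two coded factors:
    b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    b0, b1, b2, b11, b22, b12 = b
    return b0 + b1 * x1 + b2 * x2 + b11 * x1 * x1 + b22 * x2 * x2 + b12 * x1 * x2

def grid_optimum(b, lo=-1.0, hi=1.0, steps=41, maximize=True):
    """Brute-force search over a grid of coded levels for the model optimum
    (a simple stand-in for the analytical or desirability-based optimization)."""
    pts = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    cand = [(x1, x2) for x1 in pts for x2 in pts]
    key = lambda p: predict_quadratic(b, *p)
    return max(cand, key=key) if maximize else min(cand, key=key)
```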
Abstract:
The Topliss method was used to guide a synthetic path in support of drug discovery efforts toward the identification of potent antimycobacterial agents. Salicylic acid and its derivatives, p-chloro-, p-methoxy- and m-chlorosalicylic acid, exemplify a series of synthetic compounds whose minimum inhibitory concentrations for a strain of Mycobacterium were determined and compared to those of the reference drug, p-aminosalicylic acid. Several physicochemical descriptors (including Hammett's sigma constant, ionization constant, dipole moment, Hansch constant, calculated partition coefficient, Sterimol L and B4, and molecular volume) were considered to elucidate structure-activity relationships. Molecular electrostatic potential and molecular dipole moment maps were also calculated using the AM1 semi-empirical method. Among the new derivatives, m-chlorosalicylic acid showed the lowest minimum inhibitory concentration. The overall results suggest that both physicochemical properties and electronic features may influence the biological activity of this series of antimycobacterial agents and thus should be considered in designing new p-aminosalicylic acid analogs.
Abstract:
An experimental design optimization (Box-Behnken design, BBD) was used to develop a CE method for the simultaneous resolution of propranolol (Prop) and 4-hydroxypropranolol enantiomers and acetaminophen (internal standard). The method was optimized using an uncoated fused-silica capillary, carboxymethyl-beta-cyclodextrin (CM-beta-CD) as chiral selector and triethylamine/phosphoric acid buffer under alkaline conditions. A BBD for four factors was selected to observe the effects of buffer electrolyte concentration, pH, CM-beta-CD concentration and voltage on the separation responses. Each factor was studied at three levels (high, central and low), and three center points were added. The buffer electrolyte concentration ranged from 25 to 75 mM, the pH from 8 to 9, the CM-beta-CD concentration from 3.5 to 4.5% w/v, and the applied run voltage from 14 to 20 kV. The responses evaluated were resolution and migration time for the last peak. The responses were processed in Minitab® to evaluate the significance of the effects and to find the optimum analysis conditions. The best results were obtained using 4% w/v CM-beta-CD in 25 mM triethylamine/H3PO4 buffer at pH 9 as running electrolyte and a voltage of 17 kV. Resolution values of 1.98 and 1.95 were obtained for the Prop and 4-hydroxypropranolol enantiomers, respectively. The total analysis time was around 15 min. The BBD proved to be an adequate design for the development of a CE method, resulting in a rapid and efficient optimization of the pH and concentration of the buffer, the cyclodextrin concentration and the applied voltage.
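The BBD run list itself is easy to generate; a sketch assuming the standard construction (edge midpoints of the factor cube plus center points), which for four factors and three center points gives 27 runs:

```python
from itertools import combinations, product

def box_behnken(k, center_points=3):
    """Box-Behnken design in coded units: for every pair of factors, the
    four (+/-1, +/-1) combinations with all remaining factors held at 0,
    plus replicated center runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([0] * k for _ in range(center_points))
    return runs
```

Avoiding the cube's corners is what keeps all runs inside practical operating limits while still supporting a full second-order model.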
Abstract:
We investigate analytically the first and the second law characteristics of fully developed forced convection inside a porous-saturated duct of rectangular cross-section. The Darcy-Brinkman flow model is employed. Three different types of thermal boundary conditions are examined. Expressions for the Nusselt number, the Bejan number, and the dimensionless entropy generation rate are presented in terms of the system parameters. The conclusions of this analytical study will make it possible to compare, evaluate, and optimize alternative rectangular duct design options in terms of heat transfer, pressure drop, and entropy generation. (c) 2006 Elsevier Ltd. All rights reserved.
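Of the quantities reported, the Bejan number has a simple generic definition (the closed-form expressions in the system parameters derived in the paper are not reproduced here):

```python
def bejan_number(s_heat, s_fluid):
    """Bejan number: the share of total entropy generation due to
    heat-transfer irreversibility, Be = S_heat / (S_heat + S_fluid_friction).
    Be near 1 means heat transfer dominates; Be near 0 means fluid
    friction (pressure drop) dominates."""
    return s_heat / (s_heat + s_fluid)
```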
Abstract:
Few marine hybrid zones have been studied extensively, the major exception being the hybrid zone between the mussels Mytilus edulis and M. galloprovincialis in southwestern Europe. Here, we focus on two less studied hybrid zones that also involve Mytilus spp.; M. edulis and M. trossulus are sympatric and hybridize on both western and eastern coasts of the Atlantic Ocean. We review the dynamics of hybridization in these two hybrid zones and evaluate the role of local adaptation in maintaining species boundaries. In Scandinavia, hybridization and gene introgression are so extensive that no individuals with pure M. trossulus genotypes have been found. However, M. trossulus alleles are maintained at high frequencies in the extremely low salinity Baltic Sea for some allozyme genes. A synthesis of reciprocal transplantation experiments between different salinity regimes shows that unlinked Gpi and Pgm alleles change frequency following transplantation, such that post-transplantation allelic composition resembles native populations found in the same salinity. These experiments provide strong evidence for salinity adaptation at Gpi and Pgm (or genes linked to them). In the Canadian Maritimes, pure M. edulis and M. trossulus individuals are abundant, and limited data suggest that M. edulis predominates in low salinity and sheltered conditions, whereas M. trossulus is more abundant on the wave-exposed open coasts. We suggest that these conflicting patterns of species segregation are, in part, caused by local adaptation of Scandinavian M. trossulus to the extremely low salinity Baltic Sea environment.
Abstract:
Market-based transmission expansion planning gives investors information on the most cost-efficient places to invest and brings benefits to those who invest in the grid. However, both market issues and power system adequacy problems are system planners' concerns. In this paper, a hybrid probabilistic criterion, the Expected Economical Loss (EEL), is proposed as an index to evaluate a system's overall expected economical losses during operation in a competitive market. It stands on both the investors' and the planner's points of view and further improves on the traditional reliability cost. By applying EEL, system planners can obtain a clear idea of the transmission network's bottleneck and the amount of losses arising from this weak point. In turn, this enables planners to assess the worth of providing reliable services. The EEL also contains valuable information for investors to guide their investments. This index can truly reflect the random behavior of power systems and the uncertainties of the electricity market. The performance of the EEL index is enhanced by applying a Normalized Coefficient of Probability (NCP), so it can be utilized in large real power systems. A numerical example is carried out on the IEEE Reliability Test System (RTS), showing how the EEL can predict the current system bottleneck under future operational conditions and how EEL can be used as one of the planning objectives to determine future optimal plans. A well-known simulation method, Monte Carlo simulation, is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as a multi-objective optimization tool.
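A heavily simplified Monte Carlo sketch of an expected-loss index; the component outage probabilities and losses below are hypothetical, and the paper's EEL additionally folds in market uncertainty and the NCP scaling:

```python
import random

def expected_loss(outage_prob, loss_if_out, n_samples=1000, seed=1):
    """Crude Monte Carlo estimate of an expected economical loss index:
    sample independent component outage states and average the total
    economic loss over the sampled system states. Illustrative only --
    not the paper's full EEL formulation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += sum(l for p, l in zip(outage_prob, loss_if_out)
                     if rng.random() < p)
    return total / n_samples
```

With enough samples the estimate converges to the analytical expectation, the sum of probability times loss over components; the paper pairs such simulation with GAs to search over candidate expansion plans.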
Abstract:
We show that quantum mechanics predicts a contradiction with local hidden variable theories for photon number measurements which have limited resolving power, to the point of imposing an uncertainty in the photon number result which is macroscopic in absolute terms. We show how this can be interpreted as a failure of a new premise, macroscopic local realism.
Abstract:
Despite many successes of conventional DNA sequencing methods, some DNAs remain difficult or impossible to sequence. Unsequenceable regions occur in the genomes of many biologically important organisms, including the human genome. Such regions range in length from tens to millions of bases, and may contain valuable information such as the sequences of important genes. The authors have recently developed a technique that renders a wide range of problematic DNAs amenable to sequencing. The technique is known as sequence analysis via mutagenesis (SAM). This paper presents a number of algorithms for analysing and interpreting data generated by this technique.
Abstract:
The BR algorithm is a novel and efficient method to find all eigenvalues of upper Hessenberg matrices and had never been applied to eigenanalysis for power system small-signal stability. This paper analyzes the differences between the BR and QR algorithms, comparing their performance in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm utilizes accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing appropriate precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis tasks on 39-, 68-, 115-, 300-, and 600-bus systems. The experimental results suggest that the BR algorithm is the more efficient algorithm for large-scale power system small-signal stability eigenanalysis.
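For contrast with BR, the baseline QR iteration can be written compactly; this is a bare unshifted version for tiny matrices (production QR implementations use Hessenberg reduction and shifts, and BR itself is not available in standard numerical libraries):

```python
def qr_decompose(A):
    """Gram-Schmidt QR factorization of a small square matrix given as a
    list of rows; Q is returned column-wise."""
    n = len(A)
    cols = [[A[r][c] for r in range(n)] for c in range(n)]
    Q = []
    for v in cols:
        w = v[:]
        for q in Q:  # remove the components along previously built columns
            dot = sum(qi * vi for qi, vi in zip(q, v))
            w = [wi - dot * qi for wi, qi in zip(w, q)]
        norm = sum(x * x for x in w) ** 0.5
        Q.append([x / norm for x in w])
    # R = Q^T A, upper triangular by construction
    R = [[sum(Q[i][k] * cols[j][k] for k in range(n)) if j >= i else 0.0
          for j in range(n)] for i in range(n)]
    return Q, R

def qr_eigenvalues(A, iters=200):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k has the same eigenvalues
    as A, and its diagonal converges to them for real eigenvalues of
    distinct modulus."""
    n = len(A)
    for _ in range(iters):
        Q, R = qr_decompose(A)
        # A <- R * Q; Q is stored column-wise, so entry (k, j) is Q[j][k]
        A = [[sum(R[i][k] * Q[j][k] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return sorted(A[i][i] for i in range(n))
```

BR replaces the orthogonal similarity transformations of QR with cheaper banded ones, which is the source of the CPU-time and storage savings the paper reports for nearly tridiagonal Hessenberg matrices.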