934 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation


Relevance:

30.00%

Publisher:

Abstract:

The aim of solving the Optimal Power Flow problem is to determine the optimal state of an electric power transmission system, that is, the voltage magnitude and phase angles and the tap ratios of the transformers that optimize the performance of a given system, while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. This paper proposes a method for handling the discrete variables of the Optimal Power Flow problem. A penalty function is presented. Due to the inclusion of the penalty function into the objective function, a sequence of nonlinear programming problems with only continuous variables is obtained and the solutions of these problems converge to a solution of the mixed problem. The obtained nonlinear programming problems are solved by a Primal-Dual Logarithmic-Barrier Method. Numerical tests using the IEEE 14, 30, 118 and 300-Bus test systems indicate that the method is efficient. (C) 2012 Elsevier B.V. All rights reserved.
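As a hedged illustration of the sequential-penalty idea described above (the specific penalty function, objective and solver of the paper are not reproduced here; the toy tap grid, quadratic objective and use of BFGS below are assumptions standing in for the paper's primal-dual logarithmic-barrier solver), discrete variables such as transformer taps can be relaxed to continuous ones and driven back onto their grid by a penalty that vanishes on the discrete values:

import numpy as np
from scipy.optimize import minimize

step, lower = 0.05, 0.9                      # illustrative tap grid: 0.90, 0.95, 1.00, ...

def penalty(x):
    # zero exactly on the discrete grid, positive elsewhere
    return np.sum(np.sin(np.pi * (x - lower) / step) ** 2)

def objective(x):
    # toy smooth objective standing in for the OPF performance index
    return (x[0] - 0.97) ** 2 + (x[1] - 1.04) ** 2

x = np.array([1.0, 1.0])
for weight in [0.0, 0.1, 1.0, 10.0, 100.0]:
    # sequence of continuous NLPs with an increasing penalty weight
    res = minimize(lambda z: objective(z) + weight * penalty(z), x, method="BFGS")
    x = res.x                                # warm-start the next subproblem
print(np.round(x, 3))                        # ends up on grid points near the continuous optimum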

Relevance:

30.00%

Publisher:

Abstract:

From microscopic models, a Langevin equation can, in general, be derived only as an approximation. Two possible conditions to validate this approximation are studied. One is, for a linear Langevin equation, that the frequency of the Fourier transform should be close to the natural frequency of the system. The other is by the assumption of "slow" variables. We test this method by comparison with an exactly soluble model and point out its limitations. We base our discussion on two approaches. The first is a direct, elementary treatment of Senitzky. The second is via a generalized Langevin equation as an intermediate step.
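For orientation only, a standard linear Langevin equation for a damped oscillator has the textbook form below (a generic illustration assumed here; the paper's microscopic model and its exactly soluble comparison case are not reproduced):

    m\,\ddot{x}(t) + \gamma\,\dot{x}(t) + m\,\omega_{0}^{2}\,x(t) = \xi(t), \qquad
    \langle \xi(t) \rangle = 0, \qquad
    \langle \xi(t)\,\xi(t') \rangle = 2\gamma k_{B} T\,\delta(t - t'),

where \gamma is the friction coefficient, \xi(t) is a Gaussian white-noise force and \omega_{0} is the natural frequency referred to above.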

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To integrate data from two-dimensional echocardiography (2D ECHO), three-dimensional echocardiography (3D ECHO), and tissue Doppler imaging (TDI) for prediction of left ventricular (LV) reverse remodeling (LVRR) after cardiac resynchronization therapy (CRT). The evaluation of cardiac dyssynchrony by TDI and 3D ECHO was also compared. Methods: Twenty-four consecutive patients with heart failure, sinus rhythm, QRS ≥ 120 msec, functional class III or IV, and LV ejection fraction (LVEF) ≤ 0.35 underwent CRT. 2D ECHO, 3D ECHO with systolic dyssynchrony index (SDI) analysis, and TDI were performed before and 3 and 6 months after CRT. Cardiac dyssynchrony analyses by TDI and SDI were compared with Pearson's correlation test. Before CRT, a univariate analysis of baseline characteristics was performed for the construction of a logistic regression model to identify the best predictors of LVRR. Results: After 3 months of CRT, there was a moderate correlation between TDI and SDI (r = 0.52). At other time points, there was no strong correlation. Nine of twenty-four (38%) patients presented with LVRR 6 months after CRT. After logistic regression analysis, SDI (SDI > 11%) was the only independent predictor of LVRR 6 months after CRT (sensitivity = 0.89 and specificity = 0.73). After construction of receiver operating characteristic (ROC) curves, an equation was established to predict LVRR: LVRR = -0.4 LVDD (mm) + 0.5 LVEF (%) + 1.1 SDI (%), with responders presenting values > 0 (sensitivity = 0.67 and specificity = 0.87). Conclusions: In this study, there was no strong correlation between TDI and SDI. An equation is proposed for the prediction of LVRR after CRT. Although larger trials are needed to validate these findings, this equation may be useful in the evaluation of candidates for CRT. (Echocardiography 2012;29:678-687)
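The prediction equation reported above can be transcribed directly as a small scoring function; the input values below are illustrative only and carry no clinical meaning (a hedged sketch, not part of the study):

def lvrr_score(lvdd_mm, lvef_pct, sdi_pct):
    # LVRR score from the abstract: responders present values > 0
    # units: LVDD in mm, LVEF and SDI in percent
    return -0.4 * lvdd_mm + 0.5 * lvef_pct + 1.1 * sdi_pct

print(lvrr_score(lvdd_mm=60, lvef_pct=30, sdi_pct=15) > 0)   # True for these illustrative values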

Relevance:

30.00%

Publisher:

Abstract:

A long-standing problem when testing from a deterministic finite state machine is to guarantee full fault coverage even if the faults introduce extra states in the implementations. It is well known that such tests should include the sequences in a traversal set which contains all input sequences of length defined by the number of extra states. This paper suggests the SPY method, which helps reduce the length of tests by distributing sequences of the traversal set and reducing test branching. It is also demonstrated that an additional assumption about the implementation under test relaxes the requirement of the complete traversal set. The results of the experimental comparison of the proposed method with an existing method indicate that the resulting reduction can reach 40%. Experimental results suggest that the additional assumption about the implementation can help in further reducing the test suite length. Copyright (C) 2011 John Wiley & Sons, Ltd.
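As a hedged illustration of the traversal-set notion only (this is not the SPY method, and the alphabet and bound below are made up), the traversal set for an input alphabet I and a bound of m extra states contains every input sequence of length m, so its size grows as |I|**m; this exponential growth is what makes distributing the sequences and reducing test branching worthwhile:

from itertools import product

def traversal_set(inputs, extra_states):
    # all input sequences whose length equals the bound on extra states
    return [''.join(seq) for seq in product(inputs, repeat=extra_states)]

print(traversal_set(['a', 'b'], 3))   # 2**3 = 8 sequences of length 3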

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: The purpose of this study was to evaluate the following: 1) the effects of continuous exercise training and interval exercise training on the end-tidal carbon dioxide pressure (PETCO2) response during a graded exercise test in patients with coronary artery disease; and 2) the effects of exercise training modalities on the association between PETCO2 at the ventilatory anaerobic threshold (VAT) and indicators of ventilatory efficiency and cardiorespiratory fitness in patients with coronary artery disease. METHODS: Thirty-seven patients (59.7 +/- 1.7 years) with coronary artery disease were randomly divided into two groups: continuous exercise training (n = 20) and interval exercise training (n = 17). All patients performed a graded exercise test with respiratory gas analysis before and after three months of the exercise training program to determine the VAT, respiratory compensation point (RCP) and peak oxygen consumption. RESULTS: After the interventions, both groups exhibited increased cardiorespiratory fitness. Indeed, the continuous exercise and interval exercise training groups demonstrated increases in both ventilatory efficiency and PETCO2 values at the VAT, the RCP, and peak exercise. Significant associations were observed in both groups: 1) continuous exercise training (PETCO2 at VAT and cardiorespiratory fitness, r = 0.49; PETCO2 at VAT and ventilatory efficiency, r = -0.80) and 2) interval exercise training (PETCO2 at VAT and cardiorespiratory fitness, r = 0.39; PETCO2 at VAT and ventilatory efficiency, r = -0.45). CONCLUSIONS: Both exercise training modalities showed similar increases in PETCO2 levels during a graded exercise test in patients with coronary artery disease, which may be associated with an improvement in ventilatory efficiency and cardiorespiratory fitness.

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the m-machine no-wait flow shop problem in which the setup time of a job is separated from its processing time. The performance measure considered is the total flowtime. A new hybrid metaheuristic, Genetic Algorithm-Cluster Search, is proposed to solve the scheduling problem. The performance of the proposed method is evaluated and the results are compared with the best method reported in the literature. Experimental tests show the superiority of the new method on the test problem set in terms of solution quality. (c) 2012 Elsevier Ltd. All rights reserved.
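For concreteness, a minimal sketch of the total-flowtime objective in an m-machine no-wait flow shop follows; it ignores the separated setup times that the paper treats explicitly, and the function name and job data are illustrative assumptions, not from the paper:

def total_flowtime(perm, p):
    # perm: processing order of the jobs; p[j]: processing times of job j on machines 1..m
    m = len(p[perm[0]])
    flowtime = 0.0
    prev, prev_start = None, 0.0
    for j in perm:
        if prev is None:
            start = 0.0
        else:
            # smallest start-time gap so that job j never waits between machines
            delay = 0.0
            for k in range(1, m + 1):
                delay = max(delay, sum(p[prev][:k]) - sum(p[j][:k - 1]))
            start = prev_start + delay
        flowtime += start + sum(p[j])        # no-wait: processing is contiguous across machines
        prev, prev_start = j, start
    return flowtime

p = {0: [2, 3, 2], 1: [1, 4, 1], 2: [3, 1, 2]}   # 3 jobs x 3 machines, illustrative
print(total_flowtime([0, 1, 2], p))              # 29.0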

Relevance:

30.00%

Publisher:

Abstract:

Water pollution caused by toxic cyanobacteria is a worldwide problem that increases with eutrophication. Due to its biological significance, genotoxicity should be a focus for biomonitoring pollution, owing to the increasing complexity of the toxicological environment to which organisms are exposed. Cyanobacteria produce a large number of bioactive compounds, most of which lack toxicological data. Microcystins comprise a class of potent cyclic heptapeptide toxins produced mainly by Microcystis aeruginosa. Other natural products can also be synthesized by cyanobacteria, such as the protease inhibitor aeruginosin. The hepatotoxicity of microcystins has been well documented, but information on the genotoxic effects of aeruginosins is relatively scarce. In this study, the genotoxicity and ecotoxicity of methanolic extracts from two strains, M. aeruginosa NPLJ-4, containing high levels of microcystin, and M. aeruginosa NPCD-1, with high levels of aeruginosin, were evaluated. Four endpoints were assessed using plant assays in Allium cepa: rootlet growth inhibition, chromosomal aberrations, mitotic divisions, and micronucleus assays. The microcystin content of M. aeruginosa NPLJ-4 was confirmed through ELISA, while M. aeruginosa NPCD-1 did not produce microcystins. The extracts of M. aeruginosa NPLJ-4 were diluted to 0.01, 0.1, 1 and 10 ppb of microcystins; the same dilution procedure was applied to M. aeruginosa NPCD-1, used as a parameter for comparison, and water was used as the control. The results demonstrated that both strains inhibited root growth and induced rootlet abnormalities. The strain rich in aeruginosin was more genotoxic, altering the cell cycle, while microcystins were more mitogenic. These findings indicate the need for future research on non-microcystin-producing cyanobacterial strains. Understanding the genotoxicity of M. aeruginosa extracts can help determine a possible link between contamination by aquatic cyanobacteria and the high risk of primary liver cancer found in some areas, as well as help establish limits for levels in water of compounds not yet studied. (C) 2012 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Background: A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. The most common way to decide whether a given probability is sufficient is Bayesian binary classification, in which the probability of the model characterizing the sequence family of interest is compared to that of an alternative probability model; a null model can be used as the alternative model. This is the scoring technique used by sequence analysis tools such as HMMER, SAM and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution and the target sequence distribution. This paper presents a study evaluating the impact of the choice of null model on the final result of classifications. In particular, we are interested in minimizing the number of false predictions in a classification, a crucial issue for reducing the costs of biological validation. Results: In all tests with random sequences, the target null model presented the lowest number of false positives. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study was performed using randomly generated sequences. Previous studies were performed on amino acid sequences, used only one probabilistic model (HMM) and a specific benchmark, and therefore lack more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results. Conclusions: Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model presents a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases the target model is more dependable for biological validation due to its higher specificity.
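A minimal sketch of the scoring scheme discussed above: the "family model" here is just an i.i.d. composition standing in for an HMM, and the sequence and probabilities are made-up examples. It only illustrates how the log-odds score changes when the uniform null model is replaced by the target-sequence composition:

import math
from collections import Counter

def log_prob_iid(seq, probs):
    # log-probability of the sequence under a position-independent model
    return sum(math.log(probs[c]) for c in seq)

def log_odds(seq, model_probs, null_probs):
    return log_prob_iid(seq, model_probs) - log_prob_iid(seq, null_probs)

seq = "GCGCGCATGCGC"                                # GC-rich candidate sequence
model = {'A': 0.1, 'C': 0.4, 'G': 0.4, 'T': 0.1}    # GC-rich "family" composition
uniform = {b: 0.25 for b in "ACGT"}                 # uniform null model
counts = Counter(seq)
target = {b: counts[b] / len(seq) for b in "ACGT"}  # target-sequence null model

print(log_odds(seq, model, uniform))   # high: the uniform null rewards the GC bias
print(log_odds(seq, model, target))    # lower: the target null absorbs the bias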

Relevance:

30.00%

Publisher:

Abstract:

Rare variants are becoming the new candidates in the search for genetic variants that predispose individuals to a phenotype of interest. Their low prevalence in a population requires the development of dedicated detection and analytical methods. A family-based approach could greatly enhance their detection and interpretation because rare variants are nearly family specific. In this report, we test several distinct approaches for analyzing the information provided by rare and common variants and how they can be effectively used to pinpoint putative candidate genes for follow-up studies. The analyses were performed on the mini-exome data set provided by Genetic Analysis Workshop 17. Eight approaches were tested, four using the trait's heritability estimates and four using QTDT models. These methods had their sensitivity, specificity, and positive and negative predictive values compared in light of the simulation parameters. Our results highlight important limitations of current methods for dealing with rare and common variants: all methods presented reduced specificity and were consequently prone to false-positive associations. Methods analyzing common-variant information showed enhanced sensitivity compared with rare-variant methods. Furthermore, our limited knowledge of the use of biological databases for gene annotations, possibly for use as covariates in regression models, imposes a barrier to further research.
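For reference, the comparison metrics named above are written out below in their generic form; the counts in the example are arbitrary and are not from the Genetic Analysis Workshop 17 data:

def confusion_metrics(tp, fp, tn, fn):
    # standard definitions from a 2x2 confusion table
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

print(confusion_metrics(tp=12, fp=30, tn=150, fn=8))   # illustrative counts only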

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a general scheme for generating extra cuts during the execution of a Benders decomposition algorithm is presented. These cuts are based on feasible and infeasible master problem solutions generated by means of a heuristic. This article includes general guidelines and a case study with a fixed charge network design problem. Computational tests with instances of this problem show the efficiency of the strategy. The most important aspect of the proposed ideas is their generality, which allows them to be used in virtually any Benders decomposition implementation.
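For orientation, the generic Benders setup is sketched below in its standard textbook form (this is not the paper's fixed-charge network design formulation); every trial master solution \hat{y}, whether it comes from solving the relaxed master or, as proposed here, from a heuristic producing feasible or infeasible master solutions, is sent to the same subproblem and returns a cut of one of the two families:

    \min_{y \in Y,\ \eta} \; f^{\top} y + \eta
    \quad \text{s.t.} \quad
    \eta \;\ge\; u_p^{\top} (b - B y) \ \ \forall p \ \text{(optimality cuts)}, \qquad
    r_q^{\top} (b - B y) \;\le\; 0 \ \ \forall q \ \text{(feasibility cuts)},

where u_p and r_q are extreme points and extreme rays of the dual of the subproblem \min\{c^{\top} x : A x \ge b - B\hat{y},\ x \ge 0\}, for the full problem \min\{f^{\top} y + c^{\top} x : A x + B y \ge b,\ x \ge 0,\ y \in Y\}.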

Relevance:

30.00%

Publisher:

Abstract:

Doctoral program: Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería, Instituto Universitario (SIANI)

Relevance:

30.00%

Publisher:

Abstract:

Work carried out by: Packard, T. T., Osma, N., Fernández Urruzola, I., Gómez, M.

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, numerical methods are investigated for determining the eigenfunctions, their adjoints and the corresponding eigenvalues of the two-group neutron diffusion equations representing any heterogeneous system. First, the classical power iteration method is modified so that the calculation of modes higher than the fundamental mode is possible. Thereafter, the Explicitly-Restarted Arnoldi method, belonging to the class of Krylov subspace methods, is touched upon. Although the modified power iteration method is a computationally expensive algorithm, its main advantage is its robustness, i.e. the method always converges to the desired eigenfunctions without requiring the user to set any algorithm parameters. On the other hand, the Arnoldi method, which requires some parameters to be defined by the user, is a very efficient method for calculating eigenfunctions of large sparse systems of equations with minimal computational effort. These methods are thereafter used for off-line analysis of the stability of Boiling Water Reactors. Since several oscillation modes are usually excited (global and regional oscillations) when unstable conditions are encountered, characterizing the stability of the reactor using, for instance, the Decay Ratio as a stability indicator might be difficult if the contributions from the different modes are not separated from each other. Such a modal decomposition is applied to a stability test performed at the Swedish Ringhals-1 unit in September 2002, after using the Arnoldi method to pre-calculate the different eigenmodes of the neutron flux throughout the reactor. The modal decomposition clearly demonstrates the excitation of both the global and regional oscillations. Furthermore, such oscillations are found to be intermittent, with a time-varying phase shift between the first and second azimuthal modes.
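As a hedged illustration of the first idea only, the sketch below computes the dominant mode of a small symmetric matrix by power iteration and then a higher mode by projecting out the converged one; the thesis's modified power iteration for the two-group diffusion operator (a generalized eigenproblem that uses the adjoint modes for deflation) is more involved, and the toy matrix is an assumption. For the Krylov route, SciPy's scipy.sparse.linalg.eigs wraps ARPACK's implicitly restarted Arnoldi method, a relative of the explicitly restarted variant used in the thesis:

import numpy as np

def power_iteration(A, deflate=(), iters=2000, tol=1e-10):
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        for v in deflate:                      # project out already-converged modes
            x -= (v @ x) * v
        y = A @ x
        lam = np.linalg.norm(y)
        y /= lam
        if np.linalg.norm(y - x) < tol:
            break
        x = y
    return lam, y

A = np.diag([5.0, 3.0, 1.0])                   # toy symmetric operator, illustrative only
lam1, v1 = power_iteration(A)                  # fundamental (dominant) mode
lam2, v2 = power_iteration(A, deflate=[v1])    # first higher mode
print(round(lam1, 6), round(lam2, 6))          # 5.0 and 3.0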

Relevance:

30.00%

Publisher:

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For the two above-mentioned techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between waveforms of a pair of events at the same station, at the global scale, and on the similarity between waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we have applied both methodologies to real seismic events. The DDJHD technique has been applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At the beginning, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We have found that the significant geometrical spreading, noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference), has been considerably reduced by the application of our technique.
This is what we expected, since the methodology has been applied to a sequence of events for which we can suppose a real closeness among the hypocenters, belonging to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of the cross-correlation has not brought evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of the cross-correlation has not substantially improved the precision of the manual pickings; probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the limited contribution of the cross-correlation, it should be remarked that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we have performed a signal interpolation in order to improve the time resolution. The algorithm thus developed has been applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results point out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in poor SNR conditions). Another remarkable feature of our procedure is that it does not demand a long processing time, so the user can immediately check the results. During a field survey, this feature will make possible a quasi-real-time check, allowing the immediate optimization of the array geometry, if so suggested by the results at an early stage.
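A hedged sketch of the waveform cross-correlation step follows: it estimates the relative time shift between two digitized traces from the peak of their cross-correlation and refines it with a parabolic fit around the peak, a simple stand-in for the signal interpolation mentioned above (the synthetic wavelet, sampling rate and function name are illustrative assumptions, not the thesis's data or code):

import numpy as np

def cc_delay(a, b, dt):
    # time delay of trace b relative to trace a, in seconds (sampling interval dt)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    cc = np.correlate(b, a, mode="full")
    k = int(np.argmax(cc))
    if 0 < k < len(cc) - 1:
        # sub-sample refinement: parabola through the peak and its two neighbours
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return (k - (len(a) - 1)) * dt

dt = 0.01                                             # 100 samples per second, illustrative
t = np.arange(0, 2, dt)
wave = np.exp(-5 * (t - 0.5) ** 2) * np.sin(40 * t)   # synthetic wavelet
shifted = np.roll(wave, 13)                           # known shift of 0.13 s
print(round(cc_delay(wave, shifted, dt), 3))          # approximately 0.13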