974 results for Simulated annealing algorithm
Abstract:
The main objective of this work is to present an efficient method for phasor estimation based on a compact Genetic Algorithm (cGA) implemented in a Field Programmable Gate Array (FPGA). To validate the proposed method, an Electrical Power System (EPS) simulated with the Alternative Transients Program (ATP) provides the data used by the cGA; these data are as close as possible to actual EPS measurements. Real-life situations such as islanding, sudden load increases, and permanent faults were considered. The implementation exploits the inherent parallelism of Genetic Algorithms in a compact and optimized way, making them an attractive option for practical real-time phasor estimation in Phasor Measurement Units (PMUs).
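The compact GA replaces a full population with a single probability vector, which is what makes it attractive for FPGA implementation. The following is a minimal sketch of a cGA applied to phasor estimation, assuming an illustrative signal model, bit widths, and fitness function; the paper's actual design may differ.

```python
# Hypothetical cGA sketch: estimate amplitude and phase of a 60 Hz waveform.
import numpy as np

FS, F0, NBITS = 960.0, 60.0, 10          # sampling rate, nominal frequency, bits per parameter
t = np.arange(16) / FS                   # one observation window (assumed length)
samples = 1.3 * np.cos(2 * np.pi * F0 * t + 0.7)   # synthetic EPS waveform

def decode(bits):
    """Map two NBITS unsigned integers to (amplitude, phase)."""
    a = int("".join(map(str, bits[:NBITS])), 2) / (2**NBITS - 1) * 2.0        # 0..2 pu
    ph = int("".join(map(str, bits[NBITS:])), 2) / (2**NBITS - 1) * 2 * np.pi
    return a, ph

def fitness(bits):
    a, ph = decode(bits)
    return -np.sum((samples - a * np.cos(2 * np.pi * F0 * t + ph)) ** 2)

rng = np.random.default_rng(0)
p = np.full(2 * NBITS, 0.5)              # probability vector: the whole "population"
pop_size = 64                            # virtual population size -> update step 1/64
for _ in range(3000):
    x = (rng.random(p.size) < p).astype(int)   # sample two competing individuals
    y = (rng.random(p.size) < p).astype(int)
    winner, loser = (x, y) if fitness(x) >= fitness(y) else (y, x)
    p += (winner - loser) / pop_size     # nudge each differing bit toward the winner
    p = p.clip(1 / pop_size, 1 - 1 / pop_size)

print(decode((p > 0.5).astype(int)))     # estimated (amplitude, phase)
```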
Abstract:
Nitric oxide (NO) is important for several chemical processes in the atmosphere. Together with nitrogen dioxide (NO2), it is collectively known as nitrogen oxides (NOx). NOx is crucial for the production and destruction of ozone. In several reactions it catalyzes the oxidation of methane and volatile organic compounds (VOCs), and in this context it is involved in the cycling of the hydroxyl radical (OH). OH is a reactive radical, capable of oxidizing most organic species, and is therefore also called the "detergent" of the atmosphere. Nitric oxide originates from several sources: fossil fuel combustion, biomass burning, lightning, and soils. Fossil fuel combustion is the largest source; the others are, depending on the reviewed literature, generally comparable to each other. The individual sources differ in the temporal and spatial patterns of their emissions. Fossil fuel combustion dominates in densely populated places, where NO from other sources is less important. In contrast, NO emissions from soils (hereafter SNOx) or biomass burning are the dominant source of NOx in remote regions.
By applying an atmospheric chemistry global climate model (AC-GCM), I demonstrate that SNOx is responsible for a significant part of NOx in the atmosphere. Furthermore, it increases the O3 and OH mixing ratios substantially, leading to a ∼10% increase in the oxidizing efficiency of the atmosphere. Interestingly, through reduced O3 and OH mixing ratios in simulations without SNOx, the lifetime of NOx increases in regions with other dominant sources of NOx.
Abstract:
BACKGROUND Driving a car is a complex instrumental activity of daily living, and driving performance is very sensitive to cognitive impairment. The assessment of driving-relevant cognition in older drivers is challenging and requires reliable and valid tests with good sensitivity and specificity to predict safe driving. Driving simulators can be used to test fitness to drive, and several studies have found strong correlations between driving simulator performance and on-the-road driving. However, access to driving simulators is restricted to specialists, and simulators are too expensive, large, and complex to be easily accessible to older drivers or the physicians advising them. An easily accessible, Web-based cognitive screening test could offer a solution to this problem. The World Wide Web allows easy dissemination of the test software and implementation of the scoring algorithm on a central server, enabling a dynamically growing database of normative values and ensuring that all users have access to the same up-to-date normative values. OBJECTIVE In this pilot study, we present the novel Web-based Bern Cognitive Screening Test (wBCST) and investigate whether it can predict poor simulated driving performance in healthy and cognitively impaired participants. METHODS wBCST performance and simulated driving performance were analyzed in 26 healthy younger and 44 healthy older participants, as well as in 10 older participants with cognitive impairment. Correlations between the two tests were calculated. In addition, simulated driving performance was used to group the participants into good performers (n=70) and poor performers (n=10). A receiver operating characteristic (ROC) analysis was performed to determine the sensitivity and specificity of the wBCST in predicting simulated driving performance. RESULTS The mean wBCST score of the participants with poor simulated driving performance was reduced by 52% compared with participants with good simulated driving performance (P<.001). The area under the ROC curve was 0.80, with a 95% confidence interval of 0.68-0.92. CONCLUSIONS When selecting a 75% test score as the cutoff, the novel test has 83% sensitivity, 70% specificity, and 81% efficiency, which are good values for a screening test. Overall, in this pilot study the novel Web-based computer test appears to be a promising tool for supporting clinicians in fitness-to-drive assessments of older drivers. Web-based distribution and scoring on a central computer will facilitate further evaluation of the novel test setup. We expect that in the near future, Web-based computer tests will become a valid and reliable tool for clinicians, for example when assessing fitness to drive in older drivers.
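As a minimal illustration of the screening-test evaluation reported above, the sketch below computes sensitivity and specificity at a fixed cutoff plus the empirical ROC area from test scores and a binary outcome. The score distributions are random placeholders, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.85, 0.10, 70),   # good drivers (assumed distribution)
                         rng.normal(0.45, 0.15, 10)])  # poor drivers (assumed distribution)
poor = np.concatenate([np.zeros(70, bool), np.ones(10, bool)])

cutoff = 0.75                                          # flag scores below the cutoff
sensitivity = np.mean(scores[poor] < cutoff)           # poor drivers correctly flagged
specificity = np.mean(scores[~poor] >= cutoff)         # good drivers correctly passed

# Empirical AUC: probability that a random poor driver scores below a random good one.
auc = np.mean(scores[poor][:, None] < scores[~poor][None, :])
print(f"sens={sensitivity:.2f} spec={specificity:.2f} AUC={auc:.2f}")
```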
Abstract:
For Northern Hemisphere extra-tropical cyclone activity, the dependency of a potential anthropogenic climate change signal on the identification method applied is analysed. This study investigates the impact of the algorithm used on the change signal, not the robustness of the climate change signal itself. Using a single transient AOGCM simulation as standard input for eleven state-of-the-art identification methods, the patterns of the model-simulated present-day climatologies are found to be close to those computed from re-analysis, independent of the method applied. Although differences exist in the total number of cyclones identified, the climate change signals (IPCC SRES A1B) in the model run considered are largely similar between methods for all cyclones. Taking into account all tracks, decreasing numbers are found in the Mediterranean, the Arctic (in the Barents and Greenland Seas), the mid-latitude Pacific, and North America. The change patterns are even more similar if only the most severe systems are considered: the methods reveal a coherent, statistically significant increase in frequency over the eastern North Atlantic and North Pacific. We find that the differences between the methods considered are largely due to the different role of weaker systems in the specific methods.
Abstract:
Cataloging geocentric objects can be put in the framework of Multiple Target Tracking (MTT). Current work tends to focus on the S = 2 MTT problem because of its favorable computational complexity of O(n²). The MTT problem becomes NP-hard for dimensions S > 3. The challenge is to find an approximation to the solution within a reasonable computation time. To efficiently approximate this solution, a Genetic Algorithm is used. The algorithm is applied to a simulated test case. These results represent the first steps towards a method that can treat the S > 3 problem efficiently and with minimal manual intervention.
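A hypothetical sketch of a Genetic Algorithm for the S = 3 multi-scan association problem follows: observations from three scans are grouped into tracks so that the total association cost is minimized, with a chromosome encoded as two permutations. Costs, encoding, and GA settings are illustrative; the paper's actual formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                                       # targets per scan (toy size)
scan = [rng.random((n, 2)) + 0.05 * s for s in range(3)]    # simulated 2-D observations

def cost(p2, p3):
    """Total squared distance over each 3-scan triplet (i, p2[i], p3[i])."""
    d12 = np.sum((scan[0] - scan[1][p2]) ** 2, axis=1)
    d23 = np.sum((scan[1][p2] - scan[2][p3]) ** 2, axis=1)
    return float(np.sum(d12 + d23))

def ox(a, b):
    """Order crossover: keep a random slice of parent a, fill the rest from b."""
    i, j = sorted(rng.choice(len(a), 2, replace=False))
    child = [-1] * len(a)
    child[i:j + 1] = a[i:j + 1]
    fill = iter([g for g in b if g not in child[i:j + 1]])
    return [c if c != -1 else next(fill) for c in child]

# Chromosome = two permutations linking scan 1 -> scan 2 -> scan 3.
pop = [(list(rng.permutation(n)), list(rng.permutation(n))) for _ in range(40)]
for _ in range(200):
    pop.sort(key=lambda ch: cost(np.array(ch[0]), np.array(ch[1])))
    parents = pop[:10]                                      # truncation selection
    pop = parents[:]
    while len(pop) < 40:
        pa, pb = (parents[int(rng.integers(10))] for _ in range(2))
        child = (ox(pa[0], pb[0]), ox(pa[1], pb[1]))
        if rng.random() < 0.3:                              # swap mutation on one permutation
            perm = child[int(rng.integers(2))]
            i, j = rng.choice(n, 2, replace=False)
            perm[i], perm[j] = perm[j], perm[i]
        pop.append(child)

pop.sort(key=lambda ch: cost(np.array(ch[0]), np.array(ch[1])))
print("best association cost:", cost(np.array(pop[0][0]), np.array(pop[0][1])))
```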
Abstract:
Any image-processing object detection algorithm somehow tries to integrate the object light (recognition step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (decision step). There are various possibilities for how these two basic steps can be realized, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computational effort. In reality, a gain in sensitivity is usually only possible with a loss in decision accuracy and a higher computational effort, so automatic detection of faint streaks is still a challenge. This paper presents a detection algorithm using spatial filters that simulate the geometrical form of possible streaks on a CCD image, realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution responses are accepted or rejected according to an overall threshold given by the background statistics. As a first result, this approach yields a huge number of accepted responses due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics, and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show very promising sensitivity, reliability, and running speed for this detection method. Since all method parameters are based on statistics, the true-alarm as well as the false-alarm probability are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware or on the image acquisition process.
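The core matched-filter idea is sketched below: convolve the image with line-shaped kernels of varying orientation and keep responses that exceed a threshold derived from background statistics. The kernel construction and the 5-sigma threshold are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(3)
img = rng.normal(100.0, 5.0, (256, 256))             # background: mean 100, sigma 5
rr = np.arange(80, 140)                              # inject a faint diagonal streak
img[rr, rr + 30] += 4.0

def line_kernel(length, angle_deg):
    """Unit-sum kernel approximating a straight streak of given length and angle."""
    k = np.zeros((length, length))
    c = (length - 1) / 2
    t = np.linspace(-c, c, 4 * length)
    x = np.round(c + t * np.cos(np.radians(angle_deg))).astype(int)
    y = np.round(c + t * np.sin(np.radians(angle_deg))).astype(int)
    k[y, x] = 1.0
    return k / k.sum()

bg_mean, bg_sigma = np.median(img), np.std(img)      # background statistics
for angle in range(0, 180, 15):                      # sweep filter orientations
    kern = line_kernel(21, angle)
    resp = convolve(img, kern, mode="nearest")
    # Noise of the averaged response shrinks by sqrt(#pixels in the kernel).
    thresh = bg_mean + 5 * bg_sigma / np.sqrt(np.count_nonzero(kern))
    hits = np.argwhere(resp > thresh)
    if hits.size:
        print(f"angle {angle:3d}: {len(hits)} candidate pixels")
```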
Abstract:
When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and proceeds by adding, one by one, the studies determined to be closest to the fitted model of the existing set. As each study is added to the set, plots of estimated parameters and measures of fit are monitored, and outliers are identified by sharp changes in the forward plots. We apply the proposed outlier detection method to two real data sets: a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement, adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment to placebo for preventing dental caries in children. A simple simulated example is used to illustrate the steps of the proposed methodology, and a small-scale simulation study is conducted to evaluate the performance of the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
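A minimal sketch of the forward search follows, for a common-effect meta-analysis: start from a small subset of likely outlier-free studies, then repeatedly add the study closest to the current fit while monitoring how the pooled estimate evolves. The data and the initial-subset rule are placeholders, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0.3, 0.1, 20), [1.4, 1.6]])   # two planted outlier studies
v = np.full(y.size, 0.01)                                    # within-study variances

def pooled(idx):
    """Inverse-variance weighted common-effect estimate over studies in idx."""
    w = 1.0 / v[idx]
    return np.sum(w * y[idx]) / np.sum(w)

# Initial subset: the 5 studies closest to the overall median (assumed rule).
inset = list(np.argsort(np.abs(y - np.median(y)))[:5])
trace = []
while len(inset) < y.size:
    mu = pooled(inset)
    outside = [i for i in range(y.size) if i not in inset]
    nxt = min(outside, key=lambda i: (y[i] - mu) ** 2 / v[i])  # study closest to the fit
    inset.append(nxt)
    trace.append((len(inset), pooled(inset)))

for size, est in trace:
    print(size, round(est, 3))   # a sharp jump near the end flags the outliers
```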
Abstract:
Many attempts have already been made to detect exomoons around transiting exoplanets, but the first confirmed discovery is still pending. The experience gathered so far allows us to better optimize future space telescopes for this challenge already during the development phase. In this paper we focus on the forthcoming CHaracterising ExOPlanet Satellite (CHEOPS), describing an optimized decision algorithm with step-by-step evaluation and calculating the number of transits required for an exomoon detection for various planet-moon configurations observable by CHEOPS. We explore the most efficient way to conduct such observations so as to minimize the cost in observing time. Our study is based on photocentric transit timing variation (PTV) observations in simulated CHEOPS data, but the recipe does not depend on the actual detection method and can be substituted with, e.g., the photodynamical method in later applications. Using current state-of-the-art simulations of CHEOPS data, we analyzed transit observation sets for different star-planet-moon configurations and performed a bootstrap analysis to determine their detection statistics. We found that the detection limit is around an Earth-sized moon. In the case of favorable spatial configurations, for systems with at least a large moon and a Neptune-sized planet, an 80% detection chance requires at least 5-6 transit observations on average. There is also a nonzero chance for smaller moons, but the detection statistics deteriorate rapidly while the number of necessary transit measurements increases quickly. After the CoRoT and Kepler spacecraft, CHEOPS will be the next dedicated space telescope to observe exoplanetary transits and characterize systems with known Doppler planets. Although it has a smaller aperture than Kepler (the ratio of the mirror diameters is about 1/3) and is equipped with a CCD similar to Kepler's, it will observe brighter stars and operate at a higher sampling rate; therefore, the detection limit for an exomoon can be the same or better, which will make CHEOPS a competitive instrument in the quest for exomoons.
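The sketch below illustrates, under made-up numbers, the kind of bootstrap detection statistic described above: given simulated per-transit photocentric timing offsets (PTV), resample transit sets and count how often the moon signal is recovered at a given significance. The signal amplitude, scatter, and 3-sigma criterion are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
ptv_signal, ptv_noise = 20.0, 30.0            # seconds: assumed moon signal and per-transit scatter
pool = rng.normal(ptv_signal, ptv_noise, 500) # pool of simulated per-transit PTV values

for n_transits in (3, 5, 8, 12):
    detections = 0
    for _ in range(2000):                     # bootstrap resampling of transit sets
        s = rng.choice(pool, n_transits, replace=True)
        sem = s.std(ddof=1) / np.sqrt(n_transits)
        if abs(s.mean()) > 3 * sem:           # mean PTV offset significant at ~3 sigma
            detections += 1
    print(f"{n_transits} transits -> detection fraction {detections / 2000:.2f}")
```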
Abstract:
A Visual Basic application for Microsoft® Excel 2007 has been developed as a helpful tool to perform mass, energy, exergy, and thermoeconomic (MHBT) calculations during the systematic analysis of energy processes simulated with Aspen Plus®. The application reads an Excel workbook containing three sheets with the matter, work, and heat stream results of an Aspen Plus® simulation. The information required from the Aspen Plus® simulation and the algorithm and calculations of the application are described and applied to an Air Separation Unit (ASU). This application helps the designer when MHBT analyses are performed, as it increases knowledge of the process simulated with Aspen Plus®. It is a valuable tool not only because of the calculations performed, but also because it creates a new Excel workbook in which the results and the formulae written in the cells are fully visible and editable. The application is freely accessible and unprotected, allowing changes and improvements to be made.
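As a minimal sketch of the workbook-reading step, in Python rather than the tool's VBA: pull the matter, work, and heat stream tables exported from an Aspen Plus simulation and form a simple energy balance. The file, sheet, and column names are assumptions about the export layout, not the tool's actual format.

```python
import pandas as pd

book = pd.read_excel("asu_simulation.xlsx",               # hypothetical export file
                     sheet_name=["Matter", "Work", "Heat"])

streams = book["Matter"]                                   # one row per material stream
# Enthalpy flow per stream: specific enthalpy [kJ/kg] times mass flow [kg/s].
streams["H_flow_kW"] = streams["h_kJ_per_kg"] * streams["mass_flow_kg_s"]

# With consistent sign conventions (inflows positive), the totals should balance.
balance = (streams["H_flow_kW"].sum()
           + book["Work"]["power_kW"].sum()
           + book["Heat"]["duty_kW"].sum())
print(f"net energy imbalance: {balance:.1f} kW")           # ~0 for a closed balance
```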
Abstract:
The HELLO protocol, or neighborhood discovery, is essential in wireless ad hoc networks: it defines the rules by which nodes announce their presence and aliveness. In the presence of node mobility, no fixed optimal HELLO frequency and transmission range exist that maintain accurate neighborhood tables while reducing energy consumption and bandwidth occupation. Thus, a Turnover-based Frequency and transmission Power Adaptation algorithm (TFPA) is presented in this paper. The method enables nodes in mobile networks to dynamically adjust both their HELLO frequency and their transmission range depending on the relative speed. In TFPA, each node monitors its neighborhood table to count new neighbors and calculate the turnover ratio. The relationship between relative speed and turnover ratio is formulated, and the optimal transmission range is derived from a battery consumption model to minimize the overall transmission energy. Building on this theoretical analysis, the HELLO frequency is adapted dynamically in conjunction with the transmission range to maintain accurate neighborhood tables while allowing substantial energy savings. The algorithm is simulated and compared to other state-of-the-art algorithms. The experimental results demonstrate that TFPA achieves high neighborhood accuracy with a low HELLO frequency (at least an 11% average reduction) and the lowest energy consumption. Moreover, TFPA does not require any additional GPS-like device to estimate each node's relative speed, which reduces hardware cost.
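A hypothetical sketch of the turnover-based adaptation loop: each node counts new neighbors since the last HELLO round, forms a turnover ratio, and nudges its HELLO interval toward a target turnover (transmission power could be adapted the same way). The gains, target, and bounds are illustrative, not the paper's values.

```python
def adapt_hello_interval(prev_neighbors: set, curr_neighbors: set,
                         interval: float, target_turnover: float = 0.1,
                         gain: float = 0.5, lo: float = 0.25, hi: float = 10.0) -> float:
    """Return the next HELLO interval in seconds."""
    new = len(curr_neighbors - prev_neighbors)
    turnover = new / max(len(curr_neighbors), 1)   # fraction of neighbors that are new
    # High turnover -> fast-changing neighborhood -> send HELLOs more often.
    interval *= (1 - gain * (turnover - target_turnover))
    return min(max(interval, lo), hi)

# Usage: a node moving into a dense area sees high turnover and shortens its interval.
print(adapt_hello_interval({1, 2, 3}, {2, 3, 4, 5}, interval=2.0))   # -> 1.6
```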
Abstract:
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. Modal parameters (eigenfrequencies, damping ratios, and mode shapes) of the benchmark structure have been estimated by applying the proposed identification method to a set of 100 simulated cases. The numerical results show that the proposed method estimates all the modal parameters reasonably well, even in the presence of 30% measurement noise. Finally, advantages and disadvantages of the method are discussed.
Abstract:
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm, applied to the estimation of modal parameters from system input and output data. The effectiveness of this structural identification method is evaluated through numerical simulation. Modal parameters (eigenfrequencies, damping ratios, and mode shapes) of the simulated structure are estimated by applying the proposed identification method to a set of 100 simulated cases. The numerical results show that the proposed method estimates the modal parameters with precision, even in the presence of 20% measurement noise. Finally, advantages and disadvantages of the method are discussed.
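A minimal sketch of the modal-parameter extraction step common to the two identification studies above: once any identification method has produced a discrete-time state matrix A, eigenfrequencies and damping ratios follow from the eigenvalues of A. Here a toy two-mode system stands in for the identified model; in practice A would come out of the ML/EM identification itself.

```python
import numpy as np
from scipy.linalg import expm

dt = 0.01
# Toy 2-DOF system in continuous time, then discretized (assumed known here).
wn = np.array([2 * np.pi * 1.5, 2 * np.pi * 4.0])      # natural frequencies [rad/s]
zeta = np.array([0.02, 0.05])                          # damping ratios
Ac = np.block([[np.zeros((2, 2)), np.eye(2)],
               [-np.diag(wn**2), -np.diag(2 * zeta * wn)]])
A = expm(Ac * dt)                                      # discrete-time state matrix

mu = np.linalg.eigvals(A)                              # discrete eigenvalues (conjugate pairs)
lam = np.log(mu) / dt                                  # map back to continuous time
freq_hz = np.abs(lam) / (2 * np.pi)                    # eigenfrequencies
damping = -lam.real / np.abs(lam)                      # damping ratios

for f, z in sorted(set(zip(np.round(freq_hz, 3), np.round(damping, 4)))):
    print(f"f = {f} Hz, zeta = {z}")
```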
Abstract:
This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and the aim of this work was to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dose simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package widely used in medical physics applications. Once the accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output of the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, the threshold imposed to allow interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dose simulation.
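As a toy illustration of the Monte Carlo principle behind such a planning tool: track many particles through a 1-D water phantom, sampling a step length and an energy deposit per interaction, and histogram the deposited energy into depth bins. The cross-section and energy-loss numbers are made up; a clinical MC engine models full electron and photon physics.

```python
import numpy as np

rng = np.random.default_rng(6)
n_particles, mfp, e0 = 50_000, 1.2, 12.0    # count, mean free path [cm], initial energy [MeV]
bins = np.zeros(100)                        # 0.1 cm depth bins over 10 cm
edges = np.linspace(0.0, 10.0, 101)

for _ in range(n_particles):
    z, e = 0.0, e0
    while e > 0.0 and z < 10.0:
        z += rng.exponential(mfp / 10)      # sample distance to the next interaction
        de = min(e, rng.exponential(0.8))   # sample energy deposited there
        e -= de
        idx = np.searchsorted(edges, z) - 1
        if 0 <= idx < bins.size:
            bins[idx] += de

print("depth of maximum dose: %.2f cm" % (edges[np.argmax(bins)] + 0.05))
```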
Quality-optimization algorithm based on stochastic dynamic programming for MPEG DASH video streaming
Abstract:
In contrast to traditional push-based protocols, adaptive streaming techniques like Dynamic Adaptive Streaming over HTTP (DASH) place the adaptation logic at the client, which dynamically requests portions of the content at different qualities to cope with limited and variable bandwidth while aiming to maximize the quality perceived by the user. Since the DASH adaptation logic at the client is not covered by the standard, we propose a solution based on Stochastic Dynamic Programming (SDP) techniques to find the optimal request policies that guarantee the user's Quality of Experience (QoE). Our algorithm is evaluated in a simulated streaming session and compared with other adaptation approaches. The results show that our proposal outperforms them in terms of QoE, requesting higher qualities on average.
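A minimal sketch of an SDP formulation of this kind follows: states combine buffer level and a Markov bandwidth state, actions are the quality of the next segment, and the reward trades quality utility against rebuffering. The bitrates, transition matrix, and reward weights are illustrative, not the paper's model.

```python
import numpy as np

rates = np.array([1.0, 2.5, 5.0])                 # Mbps per quality level
bw_states = np.array([2.0, 4.0, 8.0])             # Mbps bandwidth states
P = np.array([[0.7, 0.3, 0.0],                    # bandwidth Markov chain
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
seg, max_buf = 2.0, 10                            # segment length [s], buffer cap [s]

def step(buf, bw_i, q):
    """Buffer update in a given bandwidth state; reward is a QoE proxy."""
    dl = seg * rates[q] / bw_states[bw_i]         # download time of the segment
    rebuf = max(0.0, dl - buf)                    # playback stalls if the buffer empties
    new_buf = min(max_buf, max(0.0, buf - dl) + seg)
    reward = np.log(1 + rates[q]) - 4.0 * rebuf   # quality utility minus stall penalty
    return int(round(new_buf)), reward

# Backward induction over a finite horizon of segments.
V = np.zeros((max_buf + 1, len(bw_states)))
policy = np.zeros_like(V, dtype=int)
for _ in range(50):
    V_new = np.empty_like(V)
    for buf in range(max_buf + 1):
        for b in range(len(bw_states)):
            vals = [r + P[b] @ V[nb]
                    for nb, r in (step(float(buf), b, q) for q in range(len(rates)))]
            V_new[buf, b] = max(vals)
            policy[buf, b] = int(np.argmax(vals))
    V = V_new

print(policy)   # rows: buffer seconds; columns: bandwidth state; entries: quality index
```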
Abstract:
The present paper describes the preliminary stages of the development of a new, comprehensive model conceived to simulate the evacuation of transport airplanes in certification studies. Two previous steps were devoted to implementing an efficient procedure to define the whole geometry of the cabin and to setting up an algorithm for assigning seats to available exits. Now, to clarify the role of the cabin arrangement in the evacuation process, the paper addresses the influence of several restrictions on the seat-to-exit assignment algorithm, maintaining a purely geometrical approach for consistency. Four situations are considered: first, an assignment method without limitations, which searches for the minimum of the total distance run by all passengers along their escape paths; second, a protocol that restricts the number of evacuees through each exit according to updated FAR 25 capacities; third, a procedure that tends towards the best proportional sharing among exits but obliges each passenger to egress through the nearest fore or rear exit; and fourth, a scenario that includes both restrictions. The four assignment strategies are applied to turboprops as well as narrow-body and wide-body jets. Seat-to-exit distance and the number of evacuees per exit are the main output variables. The results show the influence of airplane size and the impact of asymmetries and inappropriate matching between the size and longitudinal location of exits.
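A minimal sketch of the capacity-restricted assignment (in the spirit of the second strategy above): minimize the total seat-to-exit distance subject to a per-exit capacity by expanding each exit into capacity-many slots and solving the resulting rectangular assignment problem. The cabin geometry and capacities are toy values, not a certified layout.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

seats = np.array([[x, y] for x in range(1, 31) for y in (-1.0, 1.0)])  # 60 seats, 2 per row
exits = np.array([[0.0, 0.0], [15.0, 0.0], [31.0, 0.0]])               # 3 exits along the aisle
capacity = [25, 15, 25]                                                # per-exit limits (assumed)

# One column per exit slot; cost = Euclidean seat-to-exit distance.
slot_exit = np.repeat(np.arange(len(exits)), capacity)
cost = np.linalg.norm(seats[:, None, :] - exits[slot_exit][None, :, :], axis=2)

rows, cols = linear_sum_assignment(cost)          # optimal seat -> slot matching
assigned_exit = slot_exit[cols]
print("total distance:", cost[rows, cols].sum())
print("evacuees per exit:", np.bincount(assigned_exit, minlength=len(exits)))
```

Removing the capacity constraint (the first strategy) amounts to giving every exit as many slots as there are seats, after which each passenger simply receives the nearest exit.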