857 results for optimisation algorithms
Abstract:
This review aims to identify strategies to optimise radiography practice using digital technologies for full spine studies on paediatric patients, focusing particularly on methods used to diagnose and measure the severity of spinal curvatures. The literature search was performed on different databases (PubMed, Google Scholar and ScienceDirect) and relevant websites (e.g., the American College of Radiology and the International Commission on Radiological Protection) to identify guidelines and recent studies focused on dose optimisation in paediatrics using digital technologies. Plain radiography was identified as the most accurate method. The American College of Radiology (ACR) and the European Commission (EC) provided the two guidelines identified as most relevant to the subject. The ACR guidelines were updated in 2014; however, they do not provide detailed guidance on technical exposure parameters. The EC guidelines are more complete but are dedicated to screen-film systems. Other studies reviewed the exposure parameters that should be included in optimisation, such as tube current, tube voltage and source-to-image distance; however, each explored only a few of these parameters rather than all of them together. One publication explored all parameters together, but only for adults. Given the lack of literature on exposure parameters for paediatrics, more research is required to guide and harmonise practice.
Abstract:
Aim: To optimise a set of exposure factors, with the lowest effective dose, to delineate spinal curvature with the modified Cobb method in a full spine study using computed radiography (CR) for a 5-year-old paediatric anthropomorphic phantom. Methods: Images were acquired by varying a set of parameters: position (antero-posterior (AP), postero-anterior (PA) and lateral), kilovoltage peak (kVp) range (66-90), source-to-image distance (SID) (150-200 cm), broad focus and the use of a grid (grid in/out), to analyse the impact on effective dose (E) and image quality (IQ). IQ was analysed using two approaches: objective (contrast-to-noise ratio, CNR) and perceptual, using 5 observers. Monte Carlo modelling was used for dose estimation. Cohen's kappa coefficient was used to calculate inter-observer variability. The angle was measured using Cobb's method on lateral projections under different imaging conditions. Results: PA produced the lowest effective dose (0.013 mSv) compared to AP (0.048 mSv) and lateral (0.025 mSv). The exposure parameters that allowed the lowest dose were 200 cm SID, 90 kVp, broad focus and grid out for paediatrics using an Agfa CR system. Thirty-seven images were assessed for IQ and thirty-two were classified as adequate. Cobb angle measurements varied between 16° ± 2.9° and 19.9° ± 0.9°. Conclusion: Cobb angle measurements can be performed at the lowest dose, with a low contrast-to-noise ratio. The variation in measurements in this case was ±2.9°, which is within the range of acceptable clinical error and has no impact on clinical diagnosis. Further work is recommended to increase the sample size and to develop a more robust perceptual IQ assessment protocol for observers.
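As a rough illustration of the objective IQ measure, the sketch below computes a CNR of the common form |mean(signal) - mean(background)| / std(background) from two regions of interest; the image, the ROI coordinates and the exact CNR variant used in the study are assumptions, not details from the abstract.

```python
import numpy as np

def cnr(image, signal_roi, background_roi):
    """Contrast-to-noise ratio from two regions of interest.

    signal_roi and background_roi are (row_slice, col_slice) pairs;
    noise is estimated as the background standard deviation.
    """
    signal = image[signal_roi]
    background = image[background_roi]
    return abs(signal.mean() - background.mean()) / background.std()

# Hypothetical usage on a simulated 512x512 radiograph
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, (512, 512))
img[200:300, 200:300] += 20.0  # simulated high-contrast (spine) region
print(cnr(img,
          (slice(200, 300), slice(200, 300)),   # signal ROI
          (slice(0, 100), slice(0, 100))))      # background ROI
```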
Abstract:
Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary ideas of natural selection and genetics. The basic concept of GAs is to simulate the processes in natural systems necessary for evolution, specifically those that follow the principle of survival of the fittest first laid down by Charles Darwin. Particle swarm optimization (PSO), on the other hand, is a population-based stochastic optimization technique inspired by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as GAs: the system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GAs, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. PSO is attractive because there are few parameters to adjust. This paper presents a hybridization of a GA and a PSO algorithm (crossing the two algorithms). The resulting algorithm is applied to the synthesis of combinational logic circuits. With this combination it is possible to take advantage of the best features of each algorithm.
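A minimal sketch of the PSO mechanics described above, minimising a toy sphere function; the inertia weight w and acceleration coefficients c1, c2 are conventional illustrative choices, not values from the paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update for a swarm of particles.

    x, v, pbest: (n_particles, n_dims) arrays; gbest: (n_dims,) array.
    """
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Toy usage: minimise f(x) = sum(x**2) with 20 particles in 5 dimensions
n, d = 20, 5
x = np.random.uniform(-5, 5, (n, d))
v = np.zeros((n, d))
pbest, pbest_f = x.copy(), (x**2).sum(axis=1)
for _ in range(100):
    gbest = pbest[pbest_f.argmin()]          # swarm's best-known position
    x, v = pso_step(x, v, pbest, gbest)
    f = (x**2).sum(axis=1)
    improved = f < pbest_f                   # update personal bests
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
print(pbest_f.min())  # should approach 0
```

A GA/PSO hybrid of the kind the paper describes would interleave such swarm updates with GA-style crossover and mutation applied to the particle population.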
Abstract:
This paper addresses the calculation of derivatives of fractional order for non-smooth data. The noise is avoided by adopting an optimization formulation using genetic algorithms (GAs). Given the flexibility of evolutionary schemes, a hierarchical GA composed of a series of two GAs, each with a distinct fitness function, is established.
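For reference, one standard definition that discrete fractional-derivative schemes typically approximate is the Grünwald-Letnikov form (the abstract does not state which definition the paper adopts, so this is purely illustrative):

```latex
D^{\alpha} f(t) \;=\; \lim_{h \to 0}\; h^{-\alpha} \sum_{k=0}^{\lfloor t/h \rfloor} (-1)^{k} \binom{\alpha}{k}\, f(t - kh)
```

Truncating the sum at finite step size h gives the discrete approximation whose sensitivity to noise in f motivates the GA-based formulation.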
Abstract:
The trajectory planning of redundant robots is an important area of research, and efficient optimization algorithms are needed. This paper presents a new technique that combines the closed-loop pseudoinverse method with genetic algorithms. The results are compared with those of a genetic algorithm that adopts the direct kinematics. In both cases the trajectory planning is formulated as an optimization problem with constraints.
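A minimal sketch of the closed-loop pseudoinverse idea, dq = J+(q) (x_ref - f(q)), for a hypothetical 3-link planar arm; the kinematics, link lengths and gain below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fk(q, lengths):
    """Forward kinematics of a planar serial arm: joint angles -> end-effector."""
    angles = np.cumsum(q)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def jacobian(q, lengths):
    """Analytic Jacobian of fk for the same planar arm."""
    angles = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        J[0, i] = -np.sum(lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(angles[i:]))
    return J

# Closed-loop pseudoinverse iteration toward a Cartesian target
lengths = np.array([1.0, 1.0, 1.0])   # 3 links -> redundant in the plane
q = np.array([0.3, 0.2, 0.1])
target = np.array([1.5, 1.5])
for _ in range(200):
    err = target - fk(q, lengths)                       # closed-loop error
    q = q + np.linalg.pinv(jacobian(q, lengths)) @ (0.1 * err)
print(fk(q, lengths))  # should be close to the target
```

In the hybrid scheme the paper describes, a GA would then exploit the arm's redundancy (the pseudoinverse's null space) to optimise secondary criteria subject to the trajectory constraints.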
Abstract:
In this article we describe several methods for the discretization of the differintegral operator s^α, where α = u + jv is a complex value. The concept of the conjugated-order differintegral is also introduced, which enables the use of complex-order differintegrals while still producing real-valued time responses and transfer functions. The performance of the resulting approximations is analysed in both the time and frequency domains. Several results are presented that demonstrate the utility of the approach in control system design.
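Schematically, pairing an operator with its conjugate-order counterpart cancels the imaginary parts, which is why the combination yields real-valued responses; for real s > 0:

```latex
\tfrac{1}{2}\left(s^{\,u+jv} + s^{\,u-jv}\right) \;=\; s^{u}\cos(v \ln s)
```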
Abstract:
The development of fractional-order controllers is currently one of the most promising fields of research. However, most of the work in this area addresses the case of linear systems. This paper reports on the analysis of fractional-order control of nonlinear systems. The performance of discrete fractional-order PID controllers in the presence of several nonlinearities is discussed. Some results are provided that indicate the superior robustness of such algorithms.
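For context, the fractional-order PID (PI^λD^μ) generalises the classical controller in the Laplace domain; λ = μ = 1 recovers the ordinary PID:

```latex
C(s) \;=\; K_p \;+\; K_i\, s^{-\lambda} \;+\; K_d\, s^{\mu}, \qquad \lambda,\, \mu > 0
```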
Abstract:
Fuzzy logic controllers (FLCs) are intelligent systems, based on heuristic knowledge, that have been widely applied in numerous areas of everyday life. They can be used to describe a linear or nonlinear system and are suitable when a model of the real system is unknown or too difficult to obtain. FLCs provide a formal methodology for representing, manipulating and implementing human heuristic knowledge on how to control a system. These controllers can be seen as artificial decision makers that operate in a closed-loop system, in real time. The main aim of this work was to develop a single optimal fuzzy controller, easily adaptable to a wide range of systems (simple to complex, linear to nonlinear) and able to control all of them. Owing to their efficiency in searching for and finding optimal solutions to highly complex problems, GAs were used to tune the FLC by finding the parameters that yield the best responses. The work was performed using the MATLAB/SIMULINK software, a very useful tool that provides an easy way to test and analyse the FLC, the PID and the GAs in the same environment. A fuzzy PID controller (FL-PID) type, namely the fuzzy PD+I, was therefore proposed. The controller was compared with the classical PID controller tuned with the heuristic Ziegler-Nichols tuning method, the optimal Zhuang-Atherton tuning method and the GA method itself. The IAE, ISE, ITAE and ITSE criteria, used as the GA fitness functions, were applied to compare the performance of the controllers used in this work. Overall, and for most systems, the FL-PID results tuned with GAs were very satisfactory; in some cases the results were substantially better than for the other PID controllers. The best system responses were obtained with the IAE and ITAE criteria used to tune the FL-PID and PID controllers.
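The four integral criteria used as GA fitness functions have the standard definitions, with e(t) the closed-loop error:

```latex
\mathrm{IAE}  = \int_0^{\infty} \lvert e(t)\rvert\, dt, \qquad
\mathrm{ISE}  = \int_0^{\infty} e^{2}(t)\, dt, \qquad
\mathrm{ITAE} = \int_0^{\infty} t\,\lvert e(t)\rvert\, dt, \qquad
\mathrm{ITSE} = \int_0^{\infty} t\, e^{2}(t)\, dt
```

The time-weighted variants (ITAE, ITSE) penalise late errors more heavily, which is consistent with their producing the best responses here.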
Abstract:
Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation, to reduce operational costs and carbon footprints. Additionally, failures in highly networked computing systems can substantially degrade system performance, preventing the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users' jobs. Allied with an energy-optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work-per-Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Informatics Engineering.
Abstract:
Computerized scheduling methods and computerized scheduling systems according to exemplary embodiments. A computerized scheduling method may be stored in a memory and executed on one or more processors. The method may include defining a main multi-machine scheduling problem as a plurality of single machine scheduling problems; independently solving the plurality of single machine scheduling problems thereby calculating a plurality of near optimal single machine scheduling problem solutions; integrating the plurality of near optimal single machine scheduling problem solutions into a main multi-machine scheduling problem solution; and outputting the main multi-machine scheduling problem solution.
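A minimal sketch of the decompose-solve-integrate pattern described above; the data structures are hypothetical and the shortest-processing-time rule merely stands in for whatever near-optimal single-machine solver an actual embodiment would use.

```python
from typing import Dict, List, Tuple

Job = Tuple[str, float]  # (job name, processing time)

def solve_single_machine(jobs: List[Job]) -> List[str]:
    """Near-optimal single-machine schedule; here, shortest-processing-time order."""
    return [name for name, _ in sorted(jobs, key=lambda j: j[1])]

def schedule(problem: Dict[str, List[Job]]) -> Dict[str, List[str]]:
    """Decompose the multi-machine problem into per-machine subproblems,
    solve each independently, and integrate the solutions."""
    return {machine: solve_single_machine(jobs)
            for machine, jobs in problem.items()}

# Hypothetical two-machine instance
print(schedule({"m1": [("a", 3.0), ("b", 1.0)],
                "m2": [("c", 2.0), ("d", 4.0)]}))
```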
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Electrical and Computer Engineering.
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two low-degree polynomial time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then (i) SA is guaranteed to find such an assignment, where the same restriction on task migration applies, but given a platform in which processors are 1 + α/2 times faster, and (ii) SA-P succeeds in finding a task assignment where tasks are not allowed to migrate between processors (non-migrative), but given a platform in which processors are 1 + α times faster. The parameter 0 < α ≤ 1 is a property of the task set; it is the maximum of all the task utilizations that are no greater than 1. We evaluate the average-case performance of both algorithms by generating task sets randomly and measuring how much faster the processors need to be (upper bounded by 1 + α/2 for SA and 1 + α for SA-P) for the algorithms to output a feasible task assignment (intra-migrative for SA and non-migrative for SA-P). In our evaluations, for the vast majority of task sets, these algorithms require significantly smaller processor speedup than indicated by their theoretical bounds. Finally, we consider a special case in which no task utilization in the given task set can exceed one, and for this case we (re-)prove the performance guarantees of SA and SA-P. We show, for both algorithms, that changing the adversary from intra-migrative to a more powerful one, namely fully-migrative, in which tasks can migrate between processors of any type, does not deteriorate the performance guarantees. For this special case, we compare the average-case performance of SA-P and a state-of-the-art algorithm by generating task sets randomly. In our evaluations, SA-P outperforms the state-of-the-art by requiring much smaller processor speedup and by running orders of magnitude faster.
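In symbols, with u_i denoting the task utilizations, the parameter is

```latex
\alpha \;=\; \max\{\, u_i \;:\; u_i \le 1 \,\}, \qquad 0 < \alpha \le 1
```

so the speedup SA requires is at most 1 + α/2 ≤ 1.5, and the speedup SA-P requires is at most 1 + α ≤ 2.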
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising a constant number (denoted by t) of distinct types of processors; such a platform is referred to as a t-type platform. We present two algorithms, LPGIM and LPGNM, each providing the following guarantee. For a given t-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet their deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then: (i) LPGIM succeeds in finding such an assignment, where the same restriction on task migration applies (intra-migrative), but given a platform in which only one processor of each type is 1 + α × (t − 1)/t times faster, and (ii) LPGNM succeeds in finding a task assignment where tasks are not allowed to migrate between processors (non-migrative), but given a platform in which every processor is 1 + α times faster. The parameter α is a property of the task set; it is the maximum of all the task utilizations that are no greater than one. To the best of our knowledge, for t-type heterogeneous multiprocessors: (i) for the problem of intra-migrative task assignment, no previous algorithm exists with a proven bound, and hence our algorithm, LPGIM, is the first of its kind, and (ii) for the problem of non-migrative task assignment, our algorithm, LPGNM, has superior performance compared to the state-of-the-art.
Abstract:
Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Electrical and Computer Engineering