908 results for Algorithm Analysis and Problem Complexity
Abstract:
CONTEXT The necessity of specific intervention components for the successful treatment of patients with posttraumatic stress disorder is the subject of controversy. OBJECTIVE To investigate the complexity of clinical problems as a moderator of the relative effects of specific and nonspecific psychological interventions. METHODS We included 18 randomized controlled trials directly comparing specific and nonspecific psychological interventions. We conducted moderator analyses with the complexity of clinical problems as a predictor. RESULTS Our results confirmed the moderate superiority of specific over nonspecific psychological interventions; however, the superiority was small in studies with complex clinical problems and large in studies with noncomplex clinical problems. CONCLUSIONS For patients with complex clinical problems, our results suggest that certain nonspecific psychological interventions may be offered as an alternative to specific psychological interventions. In contrast, for patients with noncomplex clinical problems, specific psychological interventions are the best treatment option.
Abstract:
Purpose - The purpose of this paper is twofold: to analyze the computational complexity of the cogeneration design problem, and to present an expert system to solve the proposed problem, comparing such an approach with the traditional search methods available. Design/methodology/approach - The complexity of the cogeneration problem is analyzed through a transformation from the well-known knapsack problem. Both problems are formulated as decision problems, and it is proven that the cogeneration problem is NP-complete. Thus, several search approaches, such as population heuristics and dynamic programming, could be used to solve the problem. Alternatively, a knowledge-based approach is proposed by presenting an expert system and its knowledge representation scheme. Findings - The expert system was executed on two case studies. In the first, a cogeneration plant should meet power, steam, chilled water and hot water demands; the expert system presented two different solutions based on high-complexity thermodynamic cycles. In the second case study the plant should meet only power and steam demands; the system presented three different solutions, one of which had never been considered before by our consultant expert. Originality/value - The expert system approach is not a "blind" method, i.e. it generates solutions based on actual engineering knowledge instead of the search strategies of traditional methods. This means the system is able to explain its choices, making the design rationale for each solution available. This is the main advantage of the expert system approach over traditional search methods. On the other hand, the expert system quite likely does not provide an actual optimal solution; all it can provide is one or more acceptable solutions.
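As an illustration of the reduction mentioned in this abstract, the following is a minimal sketch (not from the paper) of the 0/1 knapsack decision problem solved by the textbook dynamic program; all item values and the target are hypothetical.

```python
# Minimal sketch: the 0/1 knapsack decision problem that the cogeneration
# design problem is reduced from, solved by classic dynamic programming.
def knapsack_decision(weights, values, capacity, target):
    """Return True if some subset of items fits within `capacity`
    with total value >= `target`."""
    best = [0] * (capacity + 1)  # best[c] = max value achievable with capacity c
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):  # backwards: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity] >= target

# Toy example: can we reach total value 9 within capacity 7?
print(knapsack_decision([3, 4, 5], [4, 5, 6], capacity=7, target=9))  # True
```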
Abstract:
An important problem in computational biology is finding the longest common subsequence (LCS) of two nucleotide sequences. This paper examines the correctness and performance of a recently proposed parallel LCS algorithm that uses successor tables and pruning rules to construct a list of sets from which an LCS can be easily reconstructed. Counterexamples are given for two pruning rules that were given with the original algorithm. Because of these errors, the performance measurements originally reported cannot be validated. The work presented here shows that speedup can be reliably achieved by an implementation in Unified Parallel C that runs on an InfiniBand cluster. This performance is partly facilitated by exploiting the software cache of the MuPC runtime system. In addition, this implementation achieved speedup without bulk memory copy operations and the associated programming complexity of message passing.
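For context, the serial baseline that parallel LCS algorithms build on is the classic dynamic program sketched below; this is a generic textbook sketch, not the successor-table algorithm examined in the paper.

```python
# Standard O(len(a) * len(b)) dynamic program for the longest common
# subsequence of two nucleotide strings, with traceback reconstruction.
def lcs(a: str, b: str) -> str:
    m, n = len(a), len(b)
    # dp[i][j] = length of an LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Reconstruct one LCS by walking back from dp[m][n]
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("GATTACA", "GCATGCU"))  # one LCS of length 4, e.g. "GATC"
```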
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem of partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics to reconstruct surfaces or scenes from range sensor information, industrial systems for quality control of manufactured objects, and even biology to study the structure and folding of proteins. One of the algorithm’s main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbour search. Despite decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems, among those described above, can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean one, which is the de facto standard in implementations of the algorithm. In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method, in order to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computation performed by the algorithm, any reduction in the cost of that operation is expected to improve the overall performance of the method significantly. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analysed and validated experimentally as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
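A minimal sketch of the idea investigated here: the closest-neighbour search inside ICP parametrised by the point-to-point metric, so that cheaper metrics (L1, Chebyshev) can stand in for the Euclidean distance. Brute force for clarity; the function names and data are illustrative, not from the thesis.

```python
# Closest-neighbour search parametrised by a distance metric, the most
# expensive phase of ICP. Real implementations use k-d trees or similar.
def l2(p, q):    # Euclidean: squares (sqrt omitted, order-preserving)
    return sum((a - b) ** 2 for a, b in zip(p, q))

def l1(p, q):    # Manhattan: only additions and absolute values
    return sum(abs(a - b) for a, b in zip(p, q))

def linf(p, q):  # Chebyshev: only comparisons and absolute values
    return max(abs(a - b) for a, b in zip(p, q))

def closest(point, cloud, metric):
    return min(cloud, key=lambda q: metric(point, q))

cloud = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.5, 0.5)]
p = (1.2, 0.9, 0.1)
for metric in (l2, l1, linf):
    print(metric.__name__, closest(p, cloud, metric))
```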
Abstract:
This report describes the analysis and development of novel tools for the global optimisation of relevant mission design problems. A taxonomy was created for mission design problems, and an empirical analysis of their optimisation complexity performed - it was demonstrated that the use of global optimisation was necessary on most classes, and this informed the selection of appropriate global algorithms. The selected algorithms were then applied to the different problem classes: Differential Evolution was found to be the most efficient. Considering the specific problem of multiple gravity assist trajectory design, a search space pruning algorithm was developed that displays both polynomial time and space complexity. Empirically, this was shown to typically achieve search space reductions of greater than six orders of magnitude, thus reducing significantly the complexity of the subsequent optimisation. The algorithm was fully implemented in a software package that allows simple visualisation of high-dimensional search spaces, and effective optimisation over the reduced search bounds.
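Since Differential Evolution is named as the most efficient algorithm, here is a generic DE/rand/1/bin sketch under textbook parameter settings (F, CR); it is not the report's implementation, and the objective below is a toy stand-in.

```python
# Generic Differential Evolution (DE/rand/1/bin) sketch.
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)  # ensure at least one mutated gene
            trial = [a[j] + F * (b[j] - c[j])
                     if (random.random() < CR or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):       # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

# Toy objective: sphere function, minimum at the origin.
print(differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 3))
```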
Abstract:
Intercontinental Ballistic Missiles (ICBMs) are capable of placing a nuclear warhead more than 5,000 km away from their launching base. With the lethal power of a nuclear warhead, a whole city could be wiped out by a single weapon, causing millions of deaths. This means that the threat posed to any country by a single ICBM captured by a terrorist group or launched by a 'rogue' state is huge, and this threat is increasing as more countries achieve nuclear and advanced launcher capabilities. In order to suppress, or at least reduce, this threat, the United States created the National Missile Defense System, which involved, among other systems, the development of long-range interceptors whose aim is to destroy incoming ballistic missiles in their midcourse phase. Ballistic Missile Defense is a high-profile topic that has recently been the focus of political controversy, when the U.S. decided to expand the system to Europe over the opposition of Russia; however, the technical characteristics of this system are mostly unknown to the general public. The interception of an ICBM using a long-range interceptor missile, as intended within the Ground-Based Missile Defense system of the American National Missile Defense (NMD), implies a series of problems of enormous complexity: - The incoming missile has to be detected almost immediately after launch. - The incoming missile has to be tracked along its trajectory with great accuracy. - The interceptor missile has to implement a fast and accurate guidance algorithm in order to reach the incoming missile as soon as possible. - The Kinetic Kill Vehicle deployed by the interceptor boost vehicle has to be able to detect the reentry vehicle once it has been deployed by the ICBM, when it offers a very low infrared signature, in order to perform a final rendezvous manoeuvre. - The Kinetic Kill Vehicle has to be able to discriminate the reentry vehicle from the surrounding debris and decoys. - The Kinetic Kill Vehicle has to implement an accurate guidance algorithm in order to perform a kinetic interception (direct collision) of the reentry vehicle at relative speeds of more than 10 km/s. All these problems are dealt with simultaneously by the Ground-Based Missile Defense system, which is developing very complex and expensive sensors, communications and control centers, and long-range interceptors (the Ground-Based Interceptor missile) including a Kinetic Kill Vehicle. Among all the technical challenges involved in this interception scenario, this thesis focuses on the algorithms required for the guidance of the interceptor missile and the Kinetic Kill Vehicle in order to perform a direct collision with the ICBM. These guidance algorithms are analysed in depth in part III, where conventional guidance strategies are reviewed and optimal guidance algorithms are developed for this interception problem. A realistic simulation of the interception scenario between an ICBM and a Ground-Based Interceptor designed to destroy it was considered necessary in order to compare different guidance strategies with meaningful results. As a consequence, a highly representative simulator of an ICBM and a Kill Vehicle has been implemented, as detailed in part II, and the generation of these simulators has also become one of the purposes of this thesis.
In summary, the main purposes of this thesis are: - To develop a highly representative simulator of an interception scenario between an ICBM and a Kill Vehicle launched from a Ground-Based Interceptor. - To analyse the main existing guidance algorithms both for the ascent phase and the terminal phase of the missiles; novel conclusions are obtained from these analyses. - To develop original optimal guidance algorithms for the interception problem. - To compare the results obtained using the different guidance strategies, assess the behaviour of the optimal guidance algorithms, and analyse the feasibility of the Ballistic Missile Defense system in terms of guidance (part IV). As a secondary objective, a general overview of the state of the art in ballistic missiles and anti-ballistic missile defence is provided (part I).
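As a taste of the conventional guidance strategies this kind of work reviews, the following is a hedged sketch of 2-D proportional navigation, in which the commanded lateral acceleration is proportional to the closing velocity times the line-of-sight rate; all numbers are illustrative and this is not one of the thesis's optimal algorithms.

```python
# Proportional navigation (2-D): a_cmd = N * V_closing * d(lambda)/dt.
import math

def pn_accel(r_target, v_target, r_missile, v_missile, N=4.0):
    rx, ry = r_target[0] - r_missile[0], r_target[1] - r_missile[1]
    vx, vy = v_target[0] - v_missile[0], v_target[1] - v_missile[1]
    r2 = rx * rx + ry * ry
    los_rate = (rx * vy - ry * vx) / r2            # line-of-sight rate
    v_closing = -(rx * vx + ry * vy) / math.sqrt(r2)
    return N * v_closing * los_rate                # lateral accel command

# Target crossing from the right, interceptor flying straight up:
print(pn_accel((10e3, 20e3), (-3e3, 0.0), (0.0, 0.0), (0.0, 7e3)))
```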
Abstract:
An algorithm for explicit integration of structural dynamics problems with multiple time steps is proposed that averages accelerations to obtain subcycle states at a nodal interface between regions integrated with different time steps. With integer time step ratios, the resulting subcycle updates at the interface sum to give the same effect as a central difference update over a major cycle. The algorithm is shown to have good accuracy, and stability properties in linear elastic analysis similar to those of constant velocity subcycling algorithms. The implementation of a generalised form of the algorithm with non-integer time step ratios is presented. (C) 1997 by John Wiley & Sons, Ltd.
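For reference, a sketch of the central difference update that the interface subcycle updates are said to sum to over a major cycle, written here in the equivalent half-step (velocity-Verlet) form for a single undamped degree of freedom; all values are arbitrary.

```python
# One central-difference step for M*a + K*x = f (single DOF for brevity).
def central_difference_step(x, v, a, dt, m, k, f):
    v_half = v + 0.5 * dt * a          # velocity at mid-step
    x_new = x + dt * v_half            # displacement update
    a_new = (f - k * x_new) / m        # acceleration from equilibrium
    v_new = v_half + 0.5 * dt * a_new  # complete the velocity update
    return x_new, v_new, a_new

x, v, m, k = 1.0, 0.0, 1.0, 4.0        # natural frequency 2 rad/s
a = -k * x / m
for _ in range(10):
    x, v, a = central_difference_step(x, v, a, dt=0.05, m=m, k=k, f=0.0)
print(x, v)  # tracks the exact solution cos(2t) closely for dt << 2/omega
```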
Abstract:
Objective To evaluate the influence of oral contraceptives (OCs) containing 20 μg ethinylestradiol (EE) and 150 μg gestodene (GEST) on the autonomic modulation of heart rate (HR) in women. Methods One hundred and fifty-five women aged 24 ± 2 years were divided into four groups according to their physical activity and the use or not of an OC: active-OC, active-non-OC (NOC), sedentary-OC, and sedentary-NOC. The heart rate was registered in real time from the electrocardiogram signal for 15 minutes in the supine position. The heart rate variability (HRV) was analysed using Shannon's entropy (SE), conditional entropy (complexity index [CInd] and normalised CInd [NCI]), and symbolic analysis (0V%, 1V%, 2LV%, and 2ULV%). For statistical analysis, the Kruskal-Wallis test with Dunn post hoc and the Wilcoxon test were applied (p < 0.05 was considered significant). Results Treatment with this OC caused no significant changes in SE, CInd, NCI, or symbolic analysis in either the active or sedentary groups. The active groups presented higher values for SE and 2ULV%, and lower values for 0V%, when compared to the sedentary groups (p < 0.05). Conclusion HRV patterns differed depending on lifestyle; the non-linear method applied was highly reliable for identifying these changes. The use of OCs containing 20 μg EE and 150 μg GEST does not influence HR autonomic modulation.
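A rough sketch of the Shannon entropy measure used in such HRV studies (not the authors' code): quantise the RR-interval series into a small alphabet and compute the entropy of the distribution of 3-beat patterns, the same patterns that symbolic analysis classifies into the 0V/1V/2LV/2ULV families; alphabet size and data below are illustrative.

```python
# Shannon entropy of 3-beat symbolic patterns from an RR-interval series.
import math
from collections import Counter

def shannon_entropy_hrv(rr, levels=6, length=3):
    lo, hi = min(rr), max(rr)
    symbols = [min(int((x - lo) / (hi - lo + 1e-12) * levels), levels - 1)
               for x in rr]                       # quantise each beat
    patterns = [tuple(symbols[i:i + length])      # overlapping 3-beat words
                for i in range(len(symbols) - length + 1)]
    counts, n = Counter(patterns), len(symbols) - length + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

rr = [0.80, 0.82, 0.79, 0.85, 0.81, 0.78, 0.83, 0.80, 0.84, 0.79]  # seconds
print(shannon_entropy_hrv(rr))  # higher value = greater pattern variety
```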
Abstract:
A previously developed model is used to numerically simulate real clinical cases of the surgical correction of scoliosis. This model consists of one-dimensional finite elements with spatial deformation in which (i) the column is represented by its axis; (ii) the vertebrae are assumed to be rigid; and (iii) the deformability of the column is concentrated in springs that connect the successive rigid elements. The metallic rods used for the surgical correction are modeled by beam elements with linear elastic behavior. To obtain the forces at the connections between the metallic rods and the vertebrae, geometrically non-linear finite element analyses are performed. The tightening sequence determines the magnitude of the forces applied to the patient's column, and it is desirable to keep those forces as small as possible. In this study, a Genetic Algorithm optimization is applied to this model in order to determine the sequence that minimizes the corrective forces applied during surgery. This amounts to finding the optimal permutation of the integers 1, ..., n, n being the number of vertebrae involved. As such, we are faced with a combinatorial optimization problem isomorphic to the Traveling Salesman Problem. The fitness evaluation requires one computationally intensive Finite Element Analysis per candidate solution and, thus, a parallel implementation of the Genetic Algorithm is developed.
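To make the setup concrete, here is a toy permutation genetic algorithm with order crossover and swap mutation; the expensive finite element fitness is replaced by a hypothetical stand-in, so this only sketches the encoding, not the paper's parallel solver.

```python
# Permutation GA: individuals are tightening orders (permutations of 0..n-1).
import random

def evolve(n, fitness, pop_size=30, gens=100, mut=0.2):
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            i, j = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[i:j] = p1[i:j]                  # order crossover (OX)
            rest = [g for g in p2 if g not in child]
            for k in range(n):
                if child[k] is None:
                    child[k] = rest.pop(0)
            if random.random() < mut:             # swap mutation
                a, b = random.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Stand-in fitness: prefer tightening neighbouring vertebrae consecutively.
print(evolve(8, lambda p: sum(abs(p[i] - p[i + 1]) for i in range(len(p) - 1))))
```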
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the requirements for the degree of Master in Informatics Engineering (Engenharia Informática).
Abstract:
Manipulator systems are rather complex and highly nonlinear, which makes their analysis and control difficult. Classic system theory is well known; however, it is inadequate in the presence of strong nonlinear dynamics. Nonlinear controllers produce good results [1], and work has been done e.g. relating the manipulator nonlinear dynamics with frequency response [2–5]. Nevertheless, given the complexity of the problem, systematic methods that permit drawing conclusions about stability, imperfect modelling effects, compensation requirements, etc. are still lacking. In section 2 we start by analysing the variation of the poles and zeros of the descriptive transfer functions of a robot manipulator, in order to motivate the development of more robust (and computationally efficient) control algorithms. Based on this analysis, a new multirate controller, which is an improvement of the well-known "computed torque controller" [6], is introduced in section 3. Some research in this area was done by Neuman [7,8], showing that better robustness is possible if the basic controller structure is modified. The present study stems from those ideas and attempts to give a systematic treatment, resulting in easy-to-use standard engineering tools. Finally, conclusions are presented in section 4.
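For reference, a hedged sketch of the classic computed torque law [6] that the proposed multirate controller builds on: feedback-linearise the manipulator dynamics and impose linear error dynamics through gains Kp and Kd. The dynamics terms in the example call are made up for illustration.

```python
# Computed torque control:
# tau = M(q) (qdd_des + Kd (qd_des - qd) + Kp (q_des - q)) + C(q, qd) + g(q)
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kd):
    """M, C, g are callables returning the inertia matrix, the
    Coriolis/centrifugal torque vector and the gravity torque vector."""
    e, ed = q_des - q, qd_des - qd
    v = qdd_des + Kd @ ed + Kp @ e       # outer-loop linear control
    return M(q) @ v + C(q, qd) + g(q)    # inner-loop inverse dynamics

# Toy 2-joint example with made-up dynamics terms:
M = lambda q: np.diag([2.0, 1.0])
C = lambda q, qd: 0.1 * qd
g = lambda q: np.array([9.8 * np.sin(q[0]), 0.0])
tau = computed_torque(np.zeros(2), np.zeros(2), np.array([0.5, -0.3]),
                      np.zeros(2), np.zeros(2), M, C, g,
                      Kp=np.diag([100.0, 100.0]), Kd=np.diag([20.0, 20.0]))
print(tau)
```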
Abstract:
The application of discriminant function analysis (DFA) is not a new idea in the study of tephrochronology. In this paper, DFA is applied to compositional datasets of two different types of tephras, from Mount Ruapehu in New Zealand and Mount Rainier in the USA. The canonical variables from the analysis are further investigated with a statistical methodology for change-point problems in order to gain a better understanding of the change in compositional pattern over time. Finally, a special case of segmented regression is proposed to model both the time of change and the change in pattern. This model can be used to estimate the age of unknown tephras using Bayesian statistical calibration.
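One minimal way to realise the segmented-regression idea (an illustrative sketch, not the paper's Bayesian model): scan candidate change points, fit a least-squares line on each side, and keep the split with the smallest total squared error.

```python
# Brute-force change-point estimate for two-piece segmented regression.
import numpy as np

def fit_segmented(x, y, min_pts=3):
    best = (np.inf, None)
    for k in range(min_pts, len(x) - min_pts):
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)              # line on each side
            sse += np.sum((np.polyval(coef, xs) - ys) ** 2)
        if sse < best[0]:
            best = (sse, k)
    return best[1]  # index of the estimated change point

x = np.arange(20.0)
y = np.where(x < 12, 0.5 * x, 6.0 + 2.0 * (x - 12))   # slope change at x = 12
print(fit_segmented(x, y))  # ~12
```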
Abstract:
An algorithm that optimizes and creates pairings for airline crews, subsequently implemented in Java.
Abstract:
Industry's growing need for higher productivity is placing new demands on mechanisms connected to electrical motors, because these can easily lead to vibration problems due to fast dynamics. Furthermore, the nonlinear effects caused by a motor frequently reduce servo stability, which diminishes the controller's ability to predict and maintain speed. Hence, the flexibility of a mechanism and its control has become an important area of research. The basic approach in control system engineering is to assume that the mechanism connected to a motor is rigid, so that vibrations in the tool mechanism, reel, gripper or any apparatus connected to the motor are not taken into account. This might reduce the ability of the machine system to carry out its assignment and shorten the lifetime of the equipment. Nonetheless, it is usually more important to know how the mechanism, or in other words the load on the motor, behaves. A nonlinear load control method for a permanent magnet linear synchronous motor is developed and implemented in the thesis. The purpose of the controller is to make the flexible load track the desired velocity reference as fast as possible and without unwanted oscillations. The control method is based on an adaptive backstepping algorithm, with its stability ensured by the Lyapunov stability theorem. As a reference controller for the backstepping method, a hybrid neural controller is introduced in which the linear motor itself is controlled by a conventional PI velocity controller and the vibration of the associated flexible mechanism is suppressed from an outer control loop using a compensation signal from a multilayer perceptron network. To avoid the local-minimum problem inherent in neural network training, the initial weights are searched for offline by means of a differential evolution algorithm. The states of the mechanical system needed by the controllers are estimated using a Kalman filter. The theoretical results obtained from the control design are validated with a lumped-mass model of the mechanism. Generalization of the mechanism allows the methods derived here to be widely implemented in machine automation. The control algorithms are first designed in a specially introduced nonlinear simulation model and then implemented in the physical linear motor using a DSP (Digital Signal Processor) application. The measurements prove that both controllers are capable of suppressing vibration, but that the backstepping method is superior due to its accuracy of response and stability properties.
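A minimal sketch of the backstepping recursion on a double integrator (x1' = x2, x2' = u), the basic pattern behind such a load controller; the gains are illustrative, and stability follows from a quadratic Lyapunov function V = (z1^2 + z2^2)/2, as in the Lyapunov-based design described above.

```python
# Integrator backstepping on x1' = x2, x2' = u.
def backstepping_u(x1, x2, x1_ref, x1_ref_dot, x1_ref_ddot, k1=2.0, k2=2.0):
    z1 = x1 - x1_ref                      # tracking error
    alpha = x1_ref_dot - k1 * z1          # virtual control for x2
    z2 = x2 - alpha                       # deviation from the virtual control
    alpha_dot = x1_ref_ddot - k1 * (x2 - x1_ref_dot)
    return alpha_dot - z1 - k2 * z2       # yields Vdot = -k1 z1^2 - k2 z2^2

# Simulate tracking a constant position reference from rest:
x1, x2, dt = 0.0, 0.0, 0.01
for _ in range(1000):
    u = backstepping_u(x1, x2, x1_ref=1.0, x1_ref_dot=0.0, x1_ref_ddot=0.0)
    x1, x2 = x1 + dt * x2, x2 + dt * u    # forward Euler integration
print(round(x1, 3))  # approaches 1.0
```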
Abstract:
In wireless communications the transmitted signals may be affected by noise. The receiver must decode the received message, which can be mathematically modelled as a search for the closest lattice point to a given vector. This problem is known to be NP-hard in general, but for communications applications there exist algorithms that, for a certain range of system parameters, offer polynomial expected complexity. The purpose of the thesis is to study the sphere decoding algorithm introduced in the article "On Maximum-Likelihood Detection and the Search for the Closest Lattice Point", published by M.O. Damen, H. El Gamal and G. Caire in 2003. We concentrate especially on its computational complexity when used in space–time coding. Computer simulations are used to study how different system parameters affect the computational complexity of the algorithm. The aim is to find ways to improve the algorithm from the complexity point of view. The main contribution of the thesis is the construction of two new modifications to the sphere decoding algorithm, which are shown to perform faster than the original algorithm within a range of system parameters.
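A hedged sketch of the sphere decoding idea: after a QR factorisation of the channel, search the symbol tree depth-first and prune any branch whose partial distance already exceeds the best complete solution found so far. The alphabet and channel below are toy values, and this is neither the article's exact algorithm nor one of the thesis's modifications.

```python
# Depth-first sphere decoder over an upper-triangular channel factor R.
import numpy as np

def sphere_decode(R, y, alphabet):
    """R: upper-triangular factor from QR of the channel; y = Q.T @ received.
    Returns the symbol vector s minimising ||y - R s||^2."""
    n = R.shape[0]
    best_dist, best_s, s = [np.inf], [None], [0] * n

    def search(level, dist):
        if dist >= best_dist[0]:          # prune: already worse than best
            return
        if level < 0:                     # full vector reached inside sphere
            best_dist[0], best_s[0] = dist, s.copy()
            return
        upper = sum(R[level, j] * s[j] for j in range(level + 1, n))
        for sym in alphabet:              # enumerate constellation symbols
            s[level] = sym
            inc = (y[level] - upper - R[level, level] * sym) ** 2
            search(level - 1, dist + inc)

    search(n - 1, 0.0)
    return best_s[0]

# Toy noiseless 2x2 example with a BPSK alphabet {-1, +1}:
H = np.array([[1.0, 0.3], [0.2, 1.0]])
r = H @ np.array([1, -1])
Q, R = np.linalg.qr(H)
print(sphere_decode(R, Q.T @ r, alphabet=(-1, 1)))  # [1, -1]
```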