795 results for Inverse Algorithm


Relevance: 20.00%

Publisher:

Abstract:

In this paper we present an algorithm for the numerical simulation of cavitation in the hydrodynamic lubrication of journal bearings. Although this physical process is usually modelled as a free boundary problem, we adopt the equivalent variational inequality formulation. We propose a two-level iterative algorithm, where the outer iteration is associated with the penalty method, used to transform the variational inequality into a variational equation, and the inner iteration is associated with the conjugate gradient method, used to solve the linear system generated by applying the finite element method to the variational equation. The inner part was implemented using the element-by-element strategy, which is easily parallelized. We analyse the behavior of two physical parameters and discuss some numerical results. We also analyse results related to the performance of a parallel implementation of the algorithm.
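
As a rough illustration of the two-level structure described above (an outer penalty iteration enforcing the non-negativity constraint of the variational inequality, and an inner conjugate gradient solve), here is a minimal sketch; the matrix K, the load f and the penalty update are toy placeholders, not the paper's finite element discretization or its element-by-element parallel implementation.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=500):
    """Plain conjugate gradient for a symmetric positive-definite system A x = b."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def penalized_pressure(K, f, eps=1e-3, outer_iters=30):
    """Outer penalty iteration enforcing p >= 0 (the cavitation constraint);
    each outer step solves a linear system with the inner CG iteration."""
    p = np.zeros(len(f))
    for _ in range(outer_iters):
        # Penalize nodes where the current iterate violates p >= 0.
        active = (p < 0).astype(float)
        A_pen = K + np.diag(active / eps)
        p = conjugate_gradient(A_pen, f, p)
    return np.maximum(p, 0.0)

# Toy 1-D discretization (illustrative only, not the paper's bearing mesh).
n = 50
h = 1.0 / (n + 1)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h**2
f = np.sin(np.linspace(0, 2 * np.pi, n))   # sign-changing load so the constraint activates
p = penalized_pressure(K, f)
```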

Relevance: 20.00%

Publisher:

Abstract:

The formal calibration procedure of a phase fraction meter is based on registering the outputs resulting from imposed phase fractions at known flow regimes. This can be done straightforwardly under laboratory conditions, but rarely under industrial conditions, particularly in on-site applications. Thus, there is a clear need for calibration methods that are less restrictive with respect to prior knowledge of the complete set of inlet conditions. A new procedure is proposed in this work for the on-site construction of the calibration curve from total flown mass values of the homogeneous dispersed phase. The solution is obtained by minimizing a convenient error functional, assembled with data from redundant tests to handle the intrinsically ill-conditioned nature of the problem. Numerical simulations performed for increasing error levels demonstrate that acceptable calibration curves can be reconstructed even when the total mass is measured only to within 2%. Consequently, the method can readily be applied, especially in on-site calibration problems in which classical procedures fail due to the impossibility of strictly controlling all the input/output parameters.
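
As a rough illustration of the kind of minimization described above, the sketch below fits a calibration curve from total-mass data of redundant tests; the polynomial curve form, the flow model, the parameter values and the ridge (Tikhonov) regularization are assumptions made for the example, not the paper's actual error functional or method.

```python
import numpy as np

# Hypothetical setup: the calibration curve is modelled as a polynomial
# alpha(s) = c0 + c1*s + c2*s^2 mapping the meter signal s to the phase fraction.
# For each redundant test j the total flown mass of the dispersed phase is
# M_j = rho * Q * dt * sum_k alpha(s_jk), which is linear in the coefficients c.

rng = np.random.default_rng(0)
rho, Q, dt = 1000.0, 0.01, 1.0          # density, volumetric flow, sampling step (illustrative)
c_true = np.array([0.05, 0.4, -0.1])    # "true" curve used only to synthesize data

def design_row(signals):
    """One row of the least-squares system, built from a single test's signals."""
    powers = np.vstack([np.ones_like(signals), signals, signals**2])
    return rho * Q * dt * powers.sum(axis=1)

tests = [rng.uniform(0.0, 1.0, size=200) for _ in range(12)]   # redundant test signals
A = np.array([design_row(s) for s in tests])
M = A @ c_true
M *= 1.0 + rng.normal(0.0, 0.02, size=M.shape)   # total mass known only to ~2 %

# Ridge (Tikhonov) regularization handles the ill-conditioned normal equations.
lam = 1e-3
c_est = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ M)
print(c_est)
```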

Relevance: 20.00%

Publisher:

Abstract:

The dissertation proposes two control strategies, covering trajectory planning and vibration suppression, for a kinematically redundant serial-parallel robot machine, with the aim of attaining satisfactory machining performance. For a given prescribed trajectory of the robot's end-effector in Cartesian space, a set of trajectories in the robot's joint space is generated based on the best stiffness performance of the robot along the prescribed trajectory. To construct the required system-wide analytical stiffness model for the serial-parallel robot machine, a variant of the virtual joint method (VJM) is proposed in the dissertation. The modified method is an evolution of Gosselin's lumped model that can account for the deformations of a flexible link in more directions. The effectiveness of this VJM variant is validated by comparing the computed stiffness results for a flexible link with those of a matrix structural analysis (MSA) method. The comparison shows that the numerical results from both methods on an individual flexible beam are almost identical, which, in some sense, provides mutual validation. The most prominent advantage of the presented VJM variant over the MSA method is that it can be applied to flexible structural systems with complicated kinematics formed by flexible serial links and joints. Moreover, by combining the VJM variant with the virtual work principle, a system-wide analytical stiffness model can be easily obtained for mechanisms with both serial and parallel kinematics. In the dissertation, a system-wide stiffness model of a kinematically redundant serial-parallel robot machine is constructed by integrating the VJM variant and the virtual work principle, and numerical results of its stiffness performance are reported. For a kinematically redundant robot, to generate a set of feasible joint trajectories for a prescribed end-effector trajectory, the system-wide stiffness performance is taken as the constraint in the joint trajectory planning. For a prescribed location of the end-effector, the robot permits an infinite number of inverse kinematic solutions, which consequently yield infinitely many stiffness profiles. Therefore, a differential evolution (DE) algorithm, in which the positions of the redundant joints in the kinematic chain are taken as input variables, is employed to search for the best stiffness performance of the robot. Numerical results of the generated joint trajectories are given for a kinematically redundant serial-parallel robot machine, the IWR (Intersector Welding/Cutting Robot), when a particular trajectory of its end-effector is prescribed. The numerical results show that the joint trajectories generated through the stiffness optimization are smooth enough to be realized in the control system. They also imply that the stiffness performance of the robot machine varies smoothly with the kinematic configuration in the neighbourhood of its best stiffness performance. To suppress the vibration of the robot machine caused by the varying cutting force during machining, the dissertation proposes a feedforward control strategy constructed from the derived inverse dynamics model of the target system. The effectiveness of this feedforward control for vibration suppression has been validated on a parallel manipulator in a software environment.
An experimental study of this feedforward control is also included in the dissertation. The difficulty of modelling the actual system, caused by unknown components in its dynamics, is noted. As a solution, a back-propagation (BP) neural network is proposed for identifying the unknown components of the dynamics model of the target system. To train such a BP neural network, a modified Levenberg-Marquardt algorithm that can utilize experimental input-output data of the entire dynamic system is introduced. The BP neural network and the modified Levenberg-Marquardt algorithm are validated by a sinusoidal output approximation, a second-order system parameter estimation, and a friction model estimation for a parallel manipulator, which represent three different application aspects of this method.
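
Differential evolution over the redundant joint positions is the search mechanism named above; the sketch below illustrates that search with a placeholder cost function, since the actual VJM-based stiffness model, joint bounds and DE settings of the dissertation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def stiffness_cost(q_redundant):
    """Placeholder objective standing in for the system-wide stiffness model;
    the real cost would evaluate the VJM-based stiffness at this configuration."""
    return np.sum((q_redundant - 0.3) ** 2) + 0.1 * np.sin(5 * q_redundant).sum()

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9, generations=200):
    """Classic DE/rand/1/bin loop over the redundant joint coordinates."""
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([cost(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # at least one gene from the mutant
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial < fitness[i]:                 # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = pop[np.argmin(fitness)]
    return best, fitness.min()

# Two redundant joint coordinates, each bounded to [-1, 1] (illustrative bounds).
best_q, best_cost = differential_evolution(stiffness_cost, [(-1, 1), (-1, 1)])
print(best_q, best_cost)
```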

Relevance: 20.00%

Publisher:

Abstract:

Introduction: Sepsis is a leading precipitant of Acute Kidney Injury (AKI) in intensive care unit (ICU) patients and is associated with a high mortality rate. Objective: We aimed to evaluate the risk factors for dialysis and mortality in a cohort of AKI patients of predominantly septic etiology. Methods: Adult patients from an ICU for whom nephrology consultation was requested were included. End-stage chronic renal failure and kidney transplant patients were excluded. Results: 114 patients were followed. Most had sepsis (84%), AKIN stage 3 (69%) and oliguria (62%) at first consultation. Dialysis was performed in 66% and overall mortality was 70%. Median serum creatinine in survivors and non-survivors was 3.95 mg/dl (2.63 - 5.28) and 2.75 mg/dl (1.81 - 3.69), respectively. In the multivariable models, oliguria and serum urea were positively associated with dialysis, whereas a lower serum creatinine at first consultation was independently associated with higher mortality. Conclusion: In a cohort of septic AKI, oliguria and serum urea were the main indications for dialysis. We also describe an inverse association between serum creatinine and mortality. Potential explanations for this finding include delay in diagnosis, fluid overload with hemodilution of serum creatinine, and poor nutritional status. This finding may also help to explain the low discriminative power of general severity scores - which assign higher risks to higher creatinine levels - in septic AKI patients.

Relevance: 20.00%

Publisher:

Abstract:

This work investigates theoretical properties of symmetric and anti-symmetric kernels. The first chapters give an overview of the theory of kernels used in supervised machine learning. The central focus is on the regularized least squares algorithm, which is motivated as a problem of function reconstruction through an abstract inverse problem. A brief review of reproducing kernel Hilbert spaces shows how kernels define an implicit hypothesis space with multiple equivalent characterizations and how this space may be modified by incorporating prior knowledge. Mathematical results on the abstract inverse problem, in particular spectral properties, the pseudoinverse and regularization, are recalled and then specialized to kernels. Symmetric and anti-symmetric kernels are applied to relation learning problems which incorporate the prior knowledge that the relation is symmetric or anti-symmetric, respectively. Theoretical properties of these kernels are proved in a draft on which this thesis is based and are comprehensively referenced here. These proofs show that such kernels are guaranteed to learn only symmetric or anti-symmetric relations, and that they can learn any relation representable by the original kernel restricted to its symmetric or anti-symmetric part. Further results establish spectral properties of these kernels, the central result being a simple inequality for the trace of the estimator, also called the effective dimension. This quantity is used in learning bounds to guarantee smaller variance.
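
A minimal sketch of how (anti-)symmetric pairwise kernels and regularized least squares fit together; the Kronecker-product construction, the Gaussian base kernel and the synthetic relation below are assumptions made for illustration, not necessarily the exact construction analysed in the thesis.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Base (Gaussian) kernel on individual objects."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def pairwise_kernel(pair1, pair2, sign=+1):
    """Symmetric (sign=+1) or anti-symmetric (sign=-1) pairwise kernel built from
    the product kernel k(a,c)k(b,d); a common construction, assumed here."""
    (a, b), (c, d) = pair1, pair2
    return 0.5 * (rbf(a, c) * rbf(b, d) + sign * rbf(a, d) * rbf(b, c))

def kernel_ridge_fit(pairs, y, lam, sign=+1):
    """Regularized least squares in the (anti-)symmetric hypothesis space."""
    n = len(pairs)
    K = np.array([[pairwise_kernel(pairs[i], pairs[j], sign) for j in range(n)] for i in range(n)])
    alpha = np.linalg.solve(K + lam * np.eye(n), y)
    return K, alpha

def predict(pairs_train, alpha, pair_new, sign=+1):
    k_vec = np.array([pairwise_kernel(p, pair_new, sign) for p in pairs_train])
    return k_vec @ alpha

# Tiny synthetic anti-symmetric relation y(a, b) = a1 - b1 (illustrative only).
rng = np.random.default_rng(2)
objects = rng.normal(size=(20, 2))
pairs = [(objects[i], objects[j]) for i in range(10) for j in range(10, 20)]
y = np.array([a[0] - b[0] for a, b in pairs])
lam = 1e-2
K, alpha = kernel_ridge_fit(pairs, y, lam, sign=-1)
print(predict(pairs, alpha, (objects[0], objects[1]), sign=-1))
# "Effective dimension": trace of the estimator K (K + lam I)^{-1}, as used in learning bounds.
print(np.trace(K @ np.linalg.inv(K + lam * np.eye(len(pairs)))))
```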

Relevance: 20.00%

Publisher:

Abstract:

This work presents a synopsis of efficient power management strategies for achieving the most economical power and energy consumption in multicore systems, FPGAs and NoC platforms. A practical approach was taken in order to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed in this thesis project. The system comprises an arithmetic and logic unit, up and down counters, an adder, a state machine and a multiplexer. The aims of the project were, first, to develop a system to be used for the power management study; second, to perform area and power analyses of the system on several scalable technology platforms (UMC 90 nm at 1.2 V, UMC 90 nm at 1.32 V and UMC 0.18 μm at 1.80 V) in order to examine the differences in the system's area and power consumption across platforms; and third, to explore strategies for reducing the system's power consumption and to propose an adaptive power management algorithm for that purpose. The strategies introduced in this work comprise dynamic voltage and frequency scaling (DVFS) and task parallelism. After development, the system was run on an FPGA board (essentially a NoC platform) and on the technology platforms listed above; synthesis was successfully accomplished, the simulation results show that the system meets all functional requirements, and the power consumption and area utilization were recorded and analyzed in chapter 7 of this work. The work also extensively reviews power management strategies from quantitative studies by researchers and companies; it is a mixture of literature analysis and experimental lab work, and it condenses and presents the basic concepts of power management strategy drawn from quality technical papers.
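
A minimal sketch of the kind of dynamic voltage and frequency scaling (DVFS) policy referred to above; the operating points, the utilization thresholds and the switching-power model are illustrative assumptions, not the thesis' APMA or its platform measurements.

```python
# Illustrative DVFS policy: step between (frequency, voltage) operating points
# based on measured utilization, and estimate the resulting dynamic power.

OPERATING_POINTS = [          # (frequency in MHz, voltage in V); assumed values
    (100, 0.90),
    (200, 1.00),
    (400, 1.20),
]

def select_operating_point(utilization, current_idx):
    """Raise the operating point when the workload is heavy, lower it when light."""
    if utilization > 0.8 and current_idx < len(OPERATING_POINTS) - 1:
        return current_idx + 1
    if utilization < 0.3 and current_idx > 0:
        return current_idx - 1
    return current_idx

def dynamic_power(freq_mhz, voltage, c_eff=1e-9):
    """Switching-power estimate P = C_eff * V^2 * f."""
    return c_eff * voltage ** 2 * freq_mhz * 1e6

idx = 2
for util in [0.9, 0.7, 0.2, 0.1, 0.85]:
    idx = select_operating_point(util, idx)
    f, v = OPERATING_POINTS[idx]
    print(f"util={util:.2f} -> {f} MHz @ {v} V, P~{dynamic_power(f, v):.3f} W")
```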

Relevance: 20.00%

Publisher:

Abstract:

Surface size analyses of Twenty and Sixteen Mile Creeks, the Grand and Genesee Rivers and Cazenovia Creek show three distinct types of bed-surface sediment: 1) a "continuous" armor coat which has a mean size of -6.5 phi and coarser, 2) a "discontinuous" armor coat which has a mean size of approximately -6.0 phi and 3) a bed with no armor coat which has a mean surface size of -5.0 phi and finer. The continuous armor coat completely covers and protects the subsurface from the flow. The discontinuous armor coat is composed of intermittently-spaced surface clasts, which provide the subsurface with only limited protection from the flow. The bed with no armor coat allows complete exposure of the subsurface to the flow. The subsurface beneath the continuous armor coats of Twenty and Sixteen Mile Creeks is possibly modified by a "vertical winnowing" process when the armor coat is penetrated. This process results in a well-developed inversely graded sediment sequence. Vertical winnowing is reduced beneath the discontinuous armor coats of the Grand and Genesee Rivers. The reduction of vertical winnowing results in a more poorly-developed inverse grading than that found in Twenty and Sixteen Mile Creeks. The streambed of Cazenovia Creek is normally not armored, resulting in a homogeneous subsurface which shows no modification by vertical winnowing. This streambed forms during waning or moderate flows, suggesting it does not represent the maximum competence of the stream. Each population of grains in the subsurface layers of Twenty and Sixteen Mile Creeks has been modified by vertical winnowing and does not represent a mode of transport. Each population in the subsurface layers beneath a discontinuous armor coat may partially reflect a transport mode. These layers are still inversely graded, suggesting that each population is affected to some degree by vertical winnowing. The populations for sediment beneath a surface which is not armored are probably indicative of transport modes because such sediment has not been modified by vertical winnowing. Bed photographs taken in each of the five streams before and after the 1982-83 snow-melt show that the probability of movement for the surface clasts is a function of grain size. The greatest probability of clast movement and the greatest scour depth in this study were recorded on Cazenovia Creek in areas where no armor coat is present. The scour depth in the armored beds of Twenty and Sixteen Mile Creeks is related to the probability of movement for a given mean surface size.

Relevance: 20.00%

Publisher:

Abstract:

Solid state nuclear magnetic resonance (NMR) spectroscopy is a powerful technique for studying structural and dynamical properties of disordered and partially ordered materials, such as glasses, polymers, liquid crystals, and biological materials. In particular, two-dimensional (2D) NMR methods such as 13C-13C correlation spectroscopy under magic-angle spinning (MAS) conditions have been used to measure structural constraints on the secondary structure of proteins and polypeptides. Amyloid fibrils implicated in a broad class of diseases such as Alzheimer's are known to contain a particular repeating structural motif, called a β-sheet. However, the details of such structures are poorly understood, primarily because the structural constraints extracted from the 2D NMR data in the form of the so-called Ramachandran (backbone torsion) angle distributions, g(φ,ψ), are strongly model-dependent. Inverse theory methods are used to extract Ramachandran angle distributions from a set of 2D MAS and constant-time double-quantum-filtered dipolar recoupling (CTDQFD) data. This is a vastly underdetermined problem, and the stability of the inverse mapping is problematic. Tikhonov regularization is a well-known method of improving the stability of the inverse; in this work it is extended to use a new regularization functional based on the Laplacian rather than on the norm of the function itself. In this way, one makes use of the inherently two-dimensional nature of the underlying Ramachandran maps. In addition, a modification of the existing numerical procedure is performed, as appropriate for an underdetermined inverse problem. Stability of the algorithm with respect to the signal-to-noise (S/N) ratio is examined using a simulated data set. The results show excellent convergence to the true angle distribution function g(φ,ψ) for S/N ratios above 100.
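
A minimal sketch of Tikhonov regularization with a Laplacian penalty for an underdetermined 2-D inverse problem of this kind; the forward matrix, grid size and noise model are synthetic placeholders, not the NMR/CTDQFD kernels used in the thesis.

```python
import numpy as np

# Recover a distribution g on a 2-D (phi, psi) grid from few measurements d = A g
# by solving  min ||A g - d||^2 + lam ||L g||^2  with L the discrete 2-D Laplacian.

def laplacian_2d(n):
    """Discrete 2-D Laplacian on an n x n grid (Dirichlet-style boundaries)."""
    I = np.eye(n)
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return np.kron(I, D) + np.kron(D, I)

n = 16                                   # (phi, psi) grid resolution
m = 40                                   # measurements << n*n  -> underdetermined
rng = np.random.default_rng(3)
A = rng.normal(size=(m, n * n))          # placeholder forward model
g_true = np.zeros((n, n))
g_true[5:9, 8:12] = 1.0                  # a single broad peak in the angle distribution
d = A @ g_true.ravel()
d += rng.normal(scale=np.abs(d).mean() / 100, size=m)   # S/N around 100

L = laplacian_2d(n)
lam = 1e-2
# Regularized normal equations:  (A^T A + lam * L^T L) g = A^T d
g_rec = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ d).reshape(n, n)
print(np.round(g_rec[5:9, 8:12], 2))
```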

Relevance: 20.00%

Publisher:

Abstract:

This thesis introduces the Salmon Algorithm, a search meta-heuristic which can be used for a variety of combinatorial optimization problems. This algorithm is loosely based on the path finding behaviour of salmon swimming upstream to spawn. There are a number of tunable parameters in the algorithm, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems - optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm produced improvements on the best known values for five of six of the test cases using edit codes. It matched the best known results on four out of seven of the Hamming codes as well as three out of three of the covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.

Relevance: 20.00%

Publisher:

Abstract:

Understanding the machinery of gene regulation in order to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several proposed objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequences, and report our results on a synthetic dataset and on some biological benchmarking suites. We conclude with a comparison of our algorithm with some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
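
The multi-objective bookkeeping at the heart of such a genetic algorithm can be illustrated with a Pareto-dominance check; the toy objective vectors below are purely illustrative, and the side effect machine encoding and motif scoring are not reproduced.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated_front(population):
    """Return the members of `population` (objective vectors) on the Pareto front."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Toy population scored on two objectives, e.g. motif information content vs. coverage.
scores = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4), (0.9, 0.1)]
print(non_dominated_front(scores))
```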

Relevance: 20.00%

Publisher:

Abstract:

DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding and contig orientation; each can be seen as a distinct research area. This thesis specifically focuses on creating contigs from the short reads and combining them with the outputs of other tools in order to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparative purposes in this thesis. The results show that this thesis' work produces contigs comparable to those of other assemblers, and that combining our contigs with the outputs of other tools produces the best results, outperforming all other investigated assemblers.
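
As an illustration of the contig-creation step discussed above, here is a minimal de Bruijn-graph sketch; the value of k, the toy reads and the greedy extension rule are assumptions made for the example, not the internals of the thesis' pipeline or of SOAPdenovo, Velvet or Meraculous.

```python
from collections import defaultdict

def build_de_bruijn(reads, k):
    """Map each (k-1)-mer prefix to the set of (k-1)-mer suffixes that follow it."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def extend_contig(graph, start):
    """Greedily extend a contig while the path is unambiguous (one successor)."""
    contig, node, seen = start, start, {start}
    while len(graph.get(node, set())) == 1:
        (nxt,) = graph[node]
        if nxt in seen:          # stop on a cycle
            break
        contig += nxt[-1]
        seen.add(nxt)
        node = nxt
    return contig

reads = ["ATGGCGT", "GGCGTGC", "CGTGCAA"]
g = build_de_bruijn(reads, k=4)
print(extend_contig(g, "ATG"))   # reassembles the overlapping reads into one contig
```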

Relevance: 20.00%

Publisher:

Abstract:

Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them, and many other types of problems are commonly reduced to ordered gene problems in order to exploit the many popular heuristics and metaheuristics available for them. Several ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics, using two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly on the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
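
Ordered gene (permutation) representations typically rely on order-preserving genetic operators; the sketch below shows a standard order crossover (OX), given as an illustration rather than the specific operator used inside the Recentering-Restarting Genetic Algorithm.

```python
import random

def order_crossover(parent1, parent2, rng=random):
    """Copy a slice from parent1, then fill the remaining positions in parent2's order,
    keeping the child a valid permutation (e.g. a TSP tour)."""
    n = len(parent1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent1[i:j + 1]
    fill = [g for g in parent2 if g not in child[i:j + 1]]
    positions = [p for p in range(n) if child[p] is None]
    for p, g in zip(positions, fill):
        child[p] = g
    return child

random.seed(0)
p1 = [0, 1, 2, 3, 4, 5, 6, 7]
p2 = [7, 6, 5, 4, 3, 2, 1, 0]
print(order_crossover(p1, p2))
```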

Relevance: 20.00%

Publisher:

Abstract:

Understanding the relationship between genetic diseases and the genes associated with them is an important problem for human health. The vast amount of data created from a large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in one or several biological modules. Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Starting from a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all the known disease genes. All the other genes inside the resulting PPI network are then considered candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes working strongly with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
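
The candidate-selection step described above can be illustrated with a small neighbourhood query on a PPI network; the toy edge list and gene names are placeholders, not the actual interaction data, and the genetic-algorithm community search itself is not reproduced.

```python
from collections import defaultdict

def neighbourhood_candidates(ppi_edges, known_disease_genes):
    """Return genes adjacent to any known disease gene, excluding the known set."""
    adjacency = defaultdict(set)
    for a, b in ppi_edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    candidates = set()
    for gene in known_disease_genes:
        candidates |= adjacency.get(gene, set())
    return candidates - set(known_disease_genes)

# Toy PPI network (illustrative edges only).
edges = [("BRCA1", "BARD1"), ("BRCA1", "TP53"), ("TP53", "MDM2"), ("MDM2", "ACTB")]
known = {"BRCA1", "TP53"}
print(neighbourhood_candidates(edges, known))   # candidate genes in the vicinity
```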

Relevance: 20.00%

Publisher:

Abstract:

Interior illumination is a complex problem involving numerous interacting factors. This research applies genetic programming towards problems in illumination design. The Radiance system is used for performing accurate illumination simulations. Radiance accounts for a number of important environmental factors, which we exploit during fitness evaluation. Illumination requirements include local illumination intensity from natural and artificial sources, colour, and uniformity. Evolved solutions incorporate design elements such as artificial lights, room materials, windows, and glass properties. A number of case studies are examined, including many-objective problems involving up to 7 illumination requirements, the design of a decorative wall of lights, and the creation of a stained-glass window for a large public space. Our results show the technical and creative possibilities of applying genetic programming to illumination design.