878 results for Observational techniques and algorithms
Abstract:
The article presents the development of a design principle for an automatic navigation management system (ANMS) for a small unmanned aerial vehicle (UAV). The principle consists in synthesizing integrated UAV control systems based not only on strapdown inertial navigation system algorithms but also on algorithms that combine the generation of a flight trajectory and its tracking by a redundant control and navigation system. The results of an analytical study and flight-test data for the developed ANMS UAV algorithms are presented. The system provides additional redundancy for the navigation algorithms and gives the UAV a new functional capability: reaching a specified point in space at a specified speed and at a specified time while accounting for atmospheric wind disturbances. A technique for real-time identification of atmospheric parameters, namely wind direction and wind speed W, is proposed and tested. Flight-test results for the obtained solution of the terminal navigation problem demonstrate robust operation of the synthesized control algorithms in diverse meteorological conditions.
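The abstract does not spell out the identification procedure; a minimal sketch of the standard wind-triangle estimate, assuming the autopilot exposes GPS ground velocity together with true airspeed and heading (all names below are illustrative, not the authors' method), is:

    import math

    def estimate_wind(vg_north, vg_east, airspeed, heading):
        """Wind-triangle estimate: wind = ground velocity - air velocity.

        vg_north, vg_east: GPS ground velocity components (m/s)
        airspeed: true airspeed (m/s); heading: radians from north
        """
        w_n = vg_north - airspeed * math.cos(heading)
        w_e = vg_east - airspeed * math.sin(heading)
        speed = math.hypot(w_n, w_e)
        direction = math.atan2(w_e, w_n)   # direction the wind blows toward
        return speed, direction

In practice the instantaneous estimate is noisy, so it would typically be filtered or averaged over a manoeuvre that sweeps several headings.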
Abstract:
These lecture notes describe the use and implementation of a framework in which mathematical as well as engineering optimisation problems can be analysed. The foundations of the framework and of the algorithms described, Hierarchical Asynchronous Parallel Evolutionary Algorithms (HAPEAs), lie in traditional evolution strategies and incorporate the concepts of multi-objective optimisation, hierarchical topology, asynchronous evaluation of candidate solutions, parallel computing and game strategies. In a step-by-step approach, the numerical implementation of EAs and HAPEAs for solving multi-criteria optimisation problems is presented, providing readers with the knowledge to reproduce this hands-on training in their own academic or industrial environment.
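The notes carry the full HAPEA implementation; purely as a stand-in, the sketch below shows only the asynchronous-evaluation idea on a steady-state evolution strategy with a toy objective (population size, mutation strength and budget are arbitrary assumptions):

    import random
    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

    def sphere(x):                               # toy objective to minimise
        return sum(v * v for v in x)

    def mutate(x, sigma=0.1):
        return [v + random.gauss(0.0, sigma) for v in x]

    def asynchronous_es(dim=5, pop_size=8, budget=200, workers=4):
        pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
        fit = [sphere(x) for x in pop]
        submitted = 0
        with ThreadPoolExecutor(max_workers=workers) as pool:
            pending = {}
            while submitted < workers:           # fill the evaluation pipeline
                child = mutate(random.choice(pop))
                pending[pool.submit(sphere, child)] = child
                submitted += 1
            while pending:
                done, _ = wait(pending, return_when=FIRST_COMPLETED)
                for fut in done:                 # integrate results as they arrive,
                    child = pending.pop(fut)     # never waiting for a full generation
                    f = fut.result()
                    worst = max(range(pop_size), key=fit.__getitem__)
                    if f < fit[worst]:           # steady-state replacement
                        pop[worst], fit[worst] = child, f
                    if submitted < budget:
                        new = mutate(random.choice(pop))
                        pending[pool.submit(sphere, new)] = new
                        submitted += 1
        return min(zip(fit, pop))

    print(asynchronous_es()[0])                  # best fitness found

The point of the asynchronous scheme is that slow evaluations never stall the fast ones, which matters when candidate solutions are scored by simulations of very different cost.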
Abstract:
Circular shortest paths represent a powerful methodology for image segmentation. The circularity condition ensures that the contour found by the algorithm is closed, a natural requirement for regular objects. Several implementations have been proposed in the past that either promise closure with high probability or ensure closure strictly, but with a mild computational efficiency handicap. Circularity can be viewed as a priori information that helps recover the correct object contour. Our "observation" is that circularity is only one among many possible constraints that can be imposed on shortest paths to guide them to a desirable solution. In this contribution, we illustrate this opportunity under a volume constraint but the concept is generally applicable. We also describe several adornments to the circular shortest path algorithm that proved useful in applications. © 2011 IEEE.
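As a concrete reference point (a brute-force baseline, not any of the published implementations), closure can be enforced exactly by running one dynamic program per candidate start row on a polar cost grid and keeping the cheapest path that returns to its start row:

    import numpy as np

    def circular_shortest_path(cost):
        """Brute-force exact circular shortest path, O(R^2 * A).

        cost[r, a] is the cost of placing the contour at radius index r and
        angle index a; the path advances one angle column at a time with
        |delta r| <= 1 and must end on its start row (closure).
        """
        R, A = cost.shape
        best_total, best_path = np.inf, None
        for r0 in range(R):
            dp = np.full((R, A), np.inf)
            back = np.zeros((R, A), dtype=int)
            dp[r0, 0] = cost[r0, 0]                  # pin the start row
            for a in range(1, A):
                for r in range(R):
                    for p in (r - 1, r, r + 1):
                        if 0 <= p < R and dp[p, a - 1] + cost[r, a] < dp[r, a]:
                            dp[r, a] = dp[p, a - 1] + cost[r, a]
                            back[r, a] = p
            if dp[r0, A - 1] < best_total:           # pin the end row = start row
                best_total = dp[r0, A - 1]
                path = [r0]
                for a in range(A - 1, 0, -1):
                    path.append(back[path[-1], a])
                best_path = path[::-1]
        return best_total, best_path

A constraint of the kind the paper proposes, such as a volume constraint, could in principle be added by extending the DP state with an accumulated-area coordinate.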
Abstract:
This thesis examines and compares imaging methods used during the radiotherapy treatment of prostate cancer. The studies found that radiation therapists were able to localise and target the prostate consistently with planar imaging techniques and that the use of small gold markers in the prostate reduced the variation in prostate localisation when using volumetric imaging. It was concluded that larger safety margins are required when using volumetric imaging without gold markers.
Abstract:
Organizations executing similar business processes need to understand the differences and similarities in activities performed across work environments. Presently, research interest is directed towards the potential of visualization for the display of process models, to support users in their analysis tasks. Although the recent literature on process mining and comparison provides several methods and algorithms for process and log comparison, few contributions explore novel visualization approaches. This paper analyses process comparison from a design perspective, providing practical visualization techniques as analysis solutions to support process analysis. The design of the visual comparison has been tackled from three different points of view, the general model, the projected model and the side-by-side comparison, in order to support the needs of business analysts. A case study is presented showing the application of process mining and visualization techniques to patient treatment across two Australian hospitals.
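The paper's contribution is the visual designs themselves; purely as an illustration of the log comparison such side-by-side views are built on, the sketch below diffs directly-follows relations between two invented event logs (hospital names and traces are hypothetical):

    from collections import Counter

    def directly_follows(log):
        """Count directly-follows activity pairs in a list of traces."""
        df = Counter()
        for trace in log:
            for a, b in zip(trace, trace[1:]):
                df[(a, b)] += 1
        return df

    hospital_a = [["admit", "triage", "treat", "discharge"],
                  ["admit", "triage", "test", "treat", "discharge"]]
    hospital_b = [["admit", "test", "triage", "treat", "discharge"]]

    df_a, df_b = directly_follows(hospital_a), directly_follows(hospital_b)
    print("A only:", sorted(set(df_a) - set(df_b)))   # behaviour unique to A
    print("B only:", sorted(set(df_b) - set(df_a)))   # behaviour unique to B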
Abstract:
Many complex aeronautical design problems can be formulated with efficient multi-objective evolutionary optimization methods and game strategies. This book describes the role of advanced innovative evolution tools in the solution, or the set of solutions, of single or multi-disciplinary optimization problems. These tools use the concepts of multi-population, asynchronous parallelization and hierarchical topology, which allow different models, precise, intermediate and approximate, with each node belonging to a different hierarchical layer handled by a different Evolutionary Algorithm. The efficiency of evolutionary algorithms for both single- and multi-objective optimization problems is significantly improved by coupling EAs with games, and in particular by a new dynamic methodology named “Hybridized Nash-Pareto games”. Multi-objective optimization techniques and robust design problems taking uncertainties into account are introduced and explained in detail. Several applications dealing with civil aircraft and UAV or UCAV systems are implemented numerically and discussed. Applications of increasing optimization complexity are presented, as well as two hands-on test-case problems. These examples focus on aeronautical applications and will be useful to practitioners in the laboratory or in industrial design environments. The evolutionary methods coupled with games presented in this volume can be applied to other areas, including surface and marine transport, structures, biomedical engineering, renewable energy and environmental problems.
Abstract:
Fluid bed granulation is a key pharmaceutical process which improves many of the powder properties required for tablet compression. The fluid bed granulation process comprises dry mixing, wetting and drying phases. Granules of high quality can be obtained by understanding and controlling the critical process parameters through timely measurements. Process analytical technologies (PAT) include physical process measurements and particle size data of a fluid bed granulator that are analysed in an integrated manner. Recent regulatory guidelines strongly encourage the pharmaceutical industry to apply scientific and risk management approaches to the development of a product and its manufacturing process. The aim of this study was to utilise PAT tools to increase the process understanding of fluid bed granulation and drying. Inlet air humidity levels and granulation liquid feed affect powder moisture during fluid bed granulation, and moisture influences many process, granule and tablet qualities. The approach in this thesis was to identify sources of variation that are mainly related to moisture. The aim was to determine correlations and relationships, and to utilise the PAT and design space concepts for fluid bed granulation and drying. Monitoring the material behaviour in a fluidised bed has traditionally relied on the observational ability and experience of an operator. There has been a lack of good criteria for characterising material behaviour during the spraying and drying phases, even though the entire performance of a process and the end product quality depend on it. The granules were produced in an instrumented bench-scale Glatt WSG5 fluid bed granulator. The effects of inlet air humidity and granulation liquid feed on the temperature measurements at different locations of the fluid bed granulator system were determined. This revealed dynamic changes in the measurements and enabled finding the best sites for process control. The moisture originating from the granulation liquid and the inlet air affected the temperature of the mass and the pressure difference over the granules. Moreover, the effects of inlet air humidity and granulation liquid feed rate on granule size were evaluated, and compensatory techniques were used to optimise particle size. Various end-point indication techniques for drying were compared. The ∆T method, which is based on thermodynamic principles, eliminated the effects of humidity variations and gave the most precise estimate of the drying end-point. The influence of fluidisation behaviour on drying end-point detection was determined. The feasibility of the ∆T method, and thus the similarity of end-point moisture contents, was found to depend on the variation in fluidisation between manufacturing batches. A novel parameter that describes the behaviour of material in a fluid bed was developed. The flow rate of the process air and the turbine fan speed were used to calculate this parameter, and it was compared with the fluidisation behaviour and the particle size results. Design space process trajectories for smooth fluidisation were determined on the basis of the fluidisation parameters. With this design space it is possible to avoid excessive fluidisation as well as improper fluidisation and bed collapse. Furthermore, various process phenomena and failure modes were observed with the in-line particle size analyser, and both rapid increases and decreases in granule size could be monitored in a timely manner. The fluidisation parameter and the pressure difference over the filters were also found to reflect particle size once the granules had formed. The various physical parameters evaluated in this thesis give valuable information on fluid bed process performance and increase process understanding.
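The thesis does not reduce the ∆T method to a one-liner, but the principle it rests on, evaporative cooling holding the bed temperature below the inlet air temperature until the moisture is exhausted, suggests an end-point detector of roughly this shape (the threshold and smoothing window below are illustrative assumptions, not the thesis's criterion):

    def dt_endpoint(t_inlet, t_bed, threshold=5.0, window=5):
        """Return the first sample index at which the smoothed inlet-to-bed
        temperature difference drops below `threshold`, i.e. evaporative
        cooling has essentially ceased; None if never reached."""
        dts = [ti - tb for ti, tb in zip(t_inlet, t_bed)]
        for i in range(window - 1, len(dts)):
            if sum(dts[i - window + 1:i + 1]) / window < threshold:
                return i
        return None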
Abstract:
Three new procedures for the extrapolation of series coefficients from a given power series expansion are proposed. They are based on (i) a novel resummation identity, (ii) a parametrised Euler transformation (pet) and (iii) a modified pet. Several examples taken from Ising model series expansions, ferrimagnetic systems, etc., are presented as illustrations. Apart from these applications, the higher-order virial coefficients for hard spheres and hard discs have also been evaluated using the new techniques, and these are compared with the estimates obtained by other methods. Satisfactory agreement between the two is found.
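For orientation, the classical (unparametrised) Euler transformation that procedures (ii) and (iii) build on fits in a few lines; the parametrised variant replaces the fixed weight 1/2 by a tunable parameter. A sketch using exact rational arithmetic:

    from fractions import Fraction

    def euler_transform(a):
        """Accelerate sum((-1)**n * a[n]) via the Euler transformation:
        S = sum_k (-1)**k * (forward difference^k a)(0) / 2**(k + 1)."""
        diffs = [Fraction(x) for x in a]
        s = Fraction(0)
        for k in range(len(a)):
            s += (-1) ** k * diffs[0] / 2 ** (k + 1)
            diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
            if not diffs:
                break
        return s

    # ln 2 = 1 - 1/2 + 1/3 - ...: twenty raw terms are good to ~1 digit,
    # while the transformed sum of the same terms is good to ~7 digits.
    coeffs = [Fraction(1, n + 1) for n in range(20)]
    print(float(euler_transform(coeffs)))   # 0.6931471..., i.e. ln 2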
Abstract:
The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them, and thus contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes by utilising the measured constraints on the relative abundances of different labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. There exist two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites. Thus, mathematically involved non-linear optimization methods that can get stuck in local optima can be avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes from the 13C labelling measurements as possible, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of the external nutrients and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites; the propagation is facilitated by a flow analysis of metabolite fragments in the network. New linear constraints on the fluxes are then derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
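As a minimal illustration of why the steady-state constraints alone leave alternative pathways undetermined (a toy example, not the thesis's framework): the admissible flux vectors form the null space of the stoichiometric matrix, and parallel routes contribute extra free directions to it.

    import numpy as np
    from scipy.linalg import null_space

    # Toy network: A --v1--> B, two parallel routes B --v2--> C and
    # B --v3--> C, and finally C --v4--> out. Rows are internal metabolites,
    # columns are fluxes; steady state means S @ v = 0.
    S = np.array([
        [1, -1, -1,  0],   # B: produced by v1, consumed by v2 and v3
        [0,  1,  1, -1],   # C: produced by v2 and v3, consumed by v4
    ])
    basis = null_space(S)   # basis of all steady-state flux vectors
    print(basis.shape)      # (4, 2): the v2/v3 split remains a free direction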
Abstract:
When a uniform flow of any nature is interrupted, the readjustment of the flow results in concentrations and rarefactions, so that the peak value of the flow parameter will be higher than an elementary computation would suggest. When stress flow in a structure is interrupted, there are stress concentrations. These are generally localized and often large in relation to the values indicated by simple equilibrium calculations. With the advent of the industrial revolution, dynamic and repeated loading of materials became commonplace in engine parts and fast-moving vehicles of locomotion. This led to serious fatigue failures arising from stress concentrations. Also, many metal forming processes, fabrication techniques and weak-link type safety systems benefit substantially from the intelligent use or avoidance, as appropriate, of stress concentrations. As a result, in the last 80 years, the study and evaluation of stress concentrations has been a primary objective in the study of solid mechanics. Exact mathematical analysis of stress concentrations in finite bodies presents considerable difficulty for all but a few problems of infinite fields, concentric annuli and the like, treated under the presumption of small-deformation, linear elasticity. A whole series of techniques has been developed to deal with different classes of shapes and domains, causes and sources of concentration, material behaviour, phenomenological formulation, etc. These include real and complex functions, conformal mapping, transform techniques, integral equations, finite differences and relaxation, and, more recently, the finite element methods. With the advent of large high-speed computers, the development of finite element concepts and a good understanding of functional analysis, it is now, in principle, possible to obtain satisfactory and economical solutions to a whole range of concentration problems by intelligently combining theory and computer application. An example is the hybridization of continuum concepts with computer-based finite element formulations. This new situation also makes possible a more direct approach to the problem of design, which is the primary purpose of most engineering analyses. The trend would appear to be clear: the computer will shape the theory, analysis and design.
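The canonical member of that small class of exactly solvable problems is Kirsch's solution for a circular hole in an infinite plate under uniaxial tension, whose hoop stress at the hole edge yields the textbook concentration factor of 3:

    import math

    def kirsch_hoop_stress(sigma, theta):
        """Hoop stress at the edge of a circular hole in an infinite plate
        under far-field uniaxial tension sigma (Kirsch solution):
        sigma_tt(theta) = sigma * (1 - 2 * cos(2 * theta)),
        with theta measured from the loading direction."""
        return sigma * (1.0 - 2.0 * math.cos(2.0 * theta))

    print(kirsch_hoop_stress(100.0, math.pi / 2))   # 300.0 -> Kt = 3 at the equator
    print(kirsch_hoop_stress(100.0, 0.0))           # -100.0 -> compression at the poles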
Abstract:
Four hybrid algorithms have been developed for the solution of the unit commitment problem. They use simulated annealing as one of the constituent techniques and produce lower-cost schedules; two of them have less overhead than other soft computing techniques. They are also more robust to the choice of parameters. A special technique avoids generating infeasible schedules and thus reduces computation time.
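The hybrid algorithms themselves are the paper's contribution; as a baseline illustration only, a plain simulated-annealing loop for a toy unit-commitment instance (all numbers invented) might look as follows. Note that it merely penalises infeasible schedules, whereas the paper reports a dedicated technique for avoiding them altogether:

    import math
    import random

    UNITS = [          # (capacity MW, fixed cost per committed hour, cost per MWh)
        (200, 1000, 20.0),
        (100,  400, 30.0),
        ( 60,  150, 40.0),
    ]
    DEMAND = [150, 250, 300, 180]      # MW demand per hour

    def cost(schedule):
        """Fixed plus dispatch cost; hours short of capacity cost infinity."""
        total = 0.0
        for hour, demand in enumerate(DEMAND):
            on = [u for u, flag in zip(UNITS, schedule[hour]) if flag]
            if sum(cap for cap, _, _ in on) < demand:
                return float("inf")
            total += sum(fixed for _, fixed, _ in on)
            rest = demand
            for cap, _, marginal in sorted(on, key=lambda u: u[2]):
                take = min(cap, rest)  # dispatch cheapest committed units first
                total += take * marginal
                rest -= take
        return total

    def anneal(steps=20000, t0=500.0, alpha=0.9995):
        sched = [[True] * len(UNITS) for _ in DEMAND]   # feasible start: all on
        cur = best = cost(sched)
        best_sched, t = [row[:] for row in sched], t0
        for _ in range(steps):
            h, u = random.randrange(len(DEMAND)), random.randrange(len(UNITS))
            sched[h][u] = not sched[h][u]               # flip one unit-hour
            new = cost(sched)
            if new < cur or random.random() < math.exp((cur - new) / t):
                cur = new
                if new < best:
                    best, best_sched = new, [row[:] for row in sched]
            else:
                sched[h][u] = not sched[h][u]           # undo rejected move
            t *= alpha                                  # geometric cooling
        return best, best_sched

    print(anneal()[0])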
Local numerical modelling of magnetoconvection and turbulence - implications for mean-field theories
Abstract:
During the last decades mean-field models, in which large-scale magnetic fields and differential rotation arise due to the interaction of rotation and small-scale turbulence, have been enormously successful in reproducing many of the observed features of the Sun. In the meantime, new observational techniques, most prominently helioseismology, have yielded invaluable information about the interior of the Sun. This new information, however, imposes strict conditions on mean-field models. Moreover, most of the present mean-field models depend on knowledge of the small-scale turbulent effects that give rise to the large-scale phenomena, and in many mean-field models these effects are prescribed in an ad hoc fashion for lack of this knowledge. With large enough computers it would be possible to solve the MHD equations numerically under stellar conditions; however, the problem is too large by several orders of magnitude for present-day and any foreseeable computers. In our view, a combination of mean-field modelling and local 3D calculations is a more fruitful approach. The large-scale structures are well described by global mean-field models, provided that the small-scale turbulent effects are adequately parameterized. The latter can be achieved by performing local calculations, which allow a much higher spatial resolution than can be achieved in direct global calculations. In the present dissertation three aspects of mean-field theories and models of stars are studied. Firstly, the basic assumptions of different mean-field theories are tested with calculations of isotropic turbulence and of hydrodynamic, as well as magnetohydrodynamic, convection. Secondly, even if mean-field theory is unable to give the required transport coefficients from first principles, it is in some cases possible to compute these coefficients from 3D numerical models in a parameter range that can be considered to describe the main physical effects in an adequately realistic manner. In the present study, the Reynolds stresses and turbulent heat transport, responsible for the generation of differential rotation, were determined along the mixing length relations describing convection in stellar structure models. Furthermore, the alpha-effect and magnetic pumping due to turbulent convection in the rapid rotation regime were studied. The third part of the present study applies the local results in mean-field models, a task we begin by applying the results concerning the alpha-effect and turbulent pumping in mean-field models describing the solar dynamo.
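As a pointer to the kind of quantity involved (the dissertation measures it directly from convection simulations rather than from this closure), the textbook first-order-smoothing estimate of the isotropic alpha-effect, alpha ~ -(tau_c / 3) <u . omega>, is straightforward to evaluate on sampled fields; the correlation time tau_c must be supplied by the user:

    import numpy as np

    def fosa_alpha(u, omega, tau_c):
        """First-order-smoothing (FOSA) estimate of the isotropic alpha
        effect from velocity u and vorticity omega, both arrays of shape
        (3, N) holding N sample points: alpha ~ -(tau_c / 3) * <u . omega>."""
        helicity = np.mean(np.sum(u * omega, axis=0))   # kinetic helicity
        return -(tau_c / 3.0) * helicity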
Abstract:
The study investigates whether there is an association between different combinations of emphasis on generic strategies (product differentiation and cost efficiency) and the perceived usefulness of management accounting techniques. Previous research has found that cost leadership is associated with traditional accounting techniques and product differentiation with a variety of modern management accounting approaches. The present study focuses on the possible existence of a strategy that mixes these generic strategies. The empirical results suggest that (a) there is no difference in attitudes towards the usefulness of traditional management accounting techniques between companies that adhere to a single strategy and those that adhere to a mixed strategy; (b) there is no difference in attitudes towards modern and traditional techniques between companies that adhere to a single strategy, whether this is product differentiation or cost efficiency; and (c) companies that favour a mixed strategy seem to have a more positive attitude towards modern techniques than companies adhering to a single strategy.
Abstract:
Gene mapping is a systematic search for genes that affect observable characteristics of an organism. In this thesis we offer computational tools to improve the efficiency of (disease) gene-mapping efforts. In the first part of the thesis we propose an efficient simulation procedure for generating realistic genetic data from isolated populations. Simulated data is useful for evaluating hypothesised gene-mapping study designs and computational analysis tools. As an example of such evaluation, we demonstrate how a population-based study design can be a powerful alternative to traditional family-based designs in association-based gene-mapping projects. In the second part of the thesis we consider the prioritisation of a (typically large) set of putative disease-associated genes acquired from an initial gene-mapping analysis. Prioritisation is necessary in order to focus on the most promising candidates. We show how to harness current biomedical knowledge for the prioritisation task by integrating various publicly available biological databases into a weighted biological graph. We then demonstrate how to find and evaluate connections between entities, such as genes and diseases, in this unified schema by graph mining techniques. Finally, in the last part of the thesis, we define the concept of a reliable subgraph and the corresponding subgraph extraction problem. Reliable subgraphs concisely describe strong and independent connections between two given vertices in a random graph, and hence they are especially useful for visualising such connections. We propose novel algorithms for extracting reliable subgraphs from large random graphs. The efficiency and scalability of the proposed graph mining methods are backed by extensive experiments on real data. While our application focus is on genetics, the concepts and algorithms can be applied to other domains as well. We demonstrate this generality by considering coauthor graphs in addition to biological graphs in the experiments.
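The extraction algorithms are the thesis's own; to make the underlying notion concrete, the sketch below estimates by Monte Carlo the two-terminal connection probability that a reliable subgraph is meant to preserve, on a tiny random graph with two independent paths:

    import random

    def connected(n, edges, s, t):
        """Iterative depth-first search over the surviving edges."""
        adj = {v: [] for v in range(n)}
        for a, b in edges:
            adj[a].append(b)
            adj[b].append(a)
        seen, stack = {s}, [s]
        while stack:
            v = stack.pop()
            if v == t:
                return True
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return False

    def two_terminal_reliability(n, weighted_edges, s, t, samples=10000):
        """Estimate P(s and t stay connected) when each edge (a, b, p)
        exists independently with probability p."""
        hits = 0
        for _ in range(samples):
            surviving = [(a, b) for a, b, p in weighted_edges if random.random() < p]
            hits += connected(n, surviving, s, t)
        return hits / samples

    # Two independent paths 0-1-3 and 0-2-3, each edge present with prob 0.9
    g = [(0, 1, 0.9), (1, 3, 0.9), (0, 2, 0.9), (2, 3, 0.9)]
    print(two_terminal_reliability(4, g, 0, 3))   # ~ 1 - (1 - 0.81)^2 = 0.9639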
Abstract:
The goal of speech enhancement algorithms is to provide an estimate of clean speech starting from noisy observations. The often-employed cost function is the mean square error (MSE). However, the MSE can never be computed in practice. Therefore, it becomes necessary to find practical alternatives to the MSE. In image denoising problems, the cost function (also referred to as risk) is often replaced by an unbiased estimator. Motivated by this approach, we reformulate the problem of speech enhancement from the perspective of risk minimization. Some recent contributions in risk estimation have employed Stein's unbiased risk estimator (SURE) together with a parametric denoising function, which is a linear expansion of threshold/bases (LET). We show that the first-order case of SURE-LET results in a Wiener-filter type solution if the denoising function is made frequency-dependent. We also provide enhancement results obtained with both techniques and characterize the improvement by means of local as well as global SNR calculations.
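To make the first-order claim concrete: for a per-frequency linear denoiser a_k * Y_k under additive white Gaussian noise, minimising the SURE expression, (a_k - 1)^2 |Y_k|^2 + 2 sigma^2 a_k plus a constant, over each a_k gives a_k = 1 - sigma^2 / |Y_k|^2, a Wiener-like gain (up to real/complex bookkeeping, and not the paper's exact formulation). A sketch, with the per-bin noise power assumed known:

    import numpy as np

    def sure_linear_gain(noisy_fft, noise_power):
        """Per-bin SURE-optimal linear shrinkage a_k = 1 - noise_power / |Y_k|^2,
        clipped to be non-negative; noise_power is E|N_k|^2 per FFT bin."""
        power = np.maximum(np.abs(noisy_fft) ** 2, 1e-12)
        gain = np.maximum(0.0, 1.0 - noise_power / power)
        return gain * noisy_fft

    # Usage sketch on one frame of noisy samples (E|N_k|^2 = n * sigma^2)
    rng = np.random.default_rng(0)
    n, sigma = 512, 0.3
    clean = np.sin(0.1 * np.arange(n))
    noisy = clean + sigma * rng.standard_normal(n)
    spec = np.fft.rfft(noisy)
    enhanced = np.fft.irfft(sure_linear_gain(spec, n * sigma ** 2), n=n)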