989 results for computational estimation
Abstract:
As the energy of particle and heavy-ion accelerators such as CERN or GSI, of fusion reactors such as JET or ITER, and of other scientific experiments increases, it is becoming ever more necessary to use remote handling techniques to interact with the radioactive environment. Until now, the dose rate at CERN could reach values of a few mSv for cooling times on the order of hours, which allowed human intervention for maintenance tasks. During the first plasma campaigns at JET, values close to 200 μSv were measured after a cooling time of four months, and remote handling techniques were already in widespread use. There is a clear tendency towards higher radiation levels in this type of facility in the future. A clear example is ITER, where values of 450 Sv/h are expected at the centre of the torus after 11 days of cooling; likewise, the new energy levels at CERN will demand a firm commitment to remote maintenance. This thesis is framed in these circumstances: it studies a bilateral control system based on force-position, avoiding the use of force/torque sensors, whose electronic content makes them especially vulnerable in these environments. The work focuses on the teleoperation of industrial robots, whose well-known reliability, ease of adaptation to these environments, low cost, and high availability make them an interesting alternative to expensive custom-made solutions for remote handling tasks. Firstly, the kinematic problem of teleoperating a master and slave with dissimilar kinematics is analysed, and a general method for solving it is developed, including the use of assistive forces to guide the human operator. Next, the experiments carried out with an ABB robot are explained in detail, showing the difficulties encountered and recommendations for overcoming them. The kinematic study concludes with a method for matching the workspaces of a dissimilar master and slave. The research then turns to dynamics, studying robot modelling with a view to obtaining a method for estimating the external forces acting on the robot. During the characterisation of the dynamic model, several tests are performed to find a compromise between computational complexity and estimation error. Key points for modelling and characterising robots with a parallelogram structure are also given, and the desired control architecture is presented.
Once a complete model of the slave is obtained, different alternatives for real-time external force estimation are reviewed, minimizing position differentiation in order to minimize estimation noise. The work starts from classic state observers and evolves towards a Luenberger-sliding observer whose implementation is relatively simple and whose results are convincing. The proposed observer is also analysed in a simulated bilateral control, comparing the force feedback obtained with classic techniques based on position error against a force-position control in which the force is estimated rather than measured. The proposed solution is shown to give results comparable to the classical architectures while providing an alternative for teleoperating industrial robots in radioactive environments where teleoperation would otherwise be impossible. Finally, the problems arising from the practical application of teleoperation in the aforementioned scenarios are analysed. Because the conditions are prohibitive for all electronic equipment, the control systems must be placed at a great distance from the manipulators, leading to cable lengths of hundreds of metres. Under these conditions, PWM-based drives generate overvoltages that can be destructive for the system formed by drive, wiring, and actuator, and must therefore be eliminated. This work proposes a solution based on a commercial LC filter and demonstrates extensively that its inclusion has no adverse effect on the control of the actuator.
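A minimal sketch of the sensorless force-estimation idea on a single joint: a Luenberger observer augmented with a sliding (signum) correction reconstructs the external torque from position measurements and the commanded motor torque. The inertia, friction, gains, and pole placement below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Assumed plant: J*qdd = tau - b*qd + f_ext, with only position q measured.
J, b, dt = 0.5, 0.2, 1e-3              # illustrative inertia, friction, step
L = np.array([60.0, 1176.0, 4000.0])   # gains placing the error poles near -20
k_s = 0.5                              # sliding-term gain (assumed)

def observer_step(xhat, q_meas, tau):
    """xhat = [q, qdot, f_ext]; external torque modelled as slowly varying."""
    q, qd, f = xhat
    e = q_meas - q                            # position innovation
    dq  = qd + L[0] * e + k_s * np.sign(e)    # sliding term speeds convergence
    dqd = (tau - b * qd + f) / J + L[1] * e   # model dynamics + correction
    df  = L[2] * e                            # force driven only by innovation
    return xhat + dt * np.array([dq, dqd, df])

# Simulated plant: a constant 5 N.m external torque appears at t = 0.5 s.
q = qd = 0.0
xhat = np.zeros(3)
for k in range(3000):
    f_ext = 5.0 if k * dt > 0.5 else 0.0
    qdd = (1.0 - b * qd + f_ext) / J          # constant 1 N.m motor torque
    q, qd = q + dt * qd, qd + dt * qdd
    xhat = observer_step(xhat, q, 1.0)
print(f"estimated external torque: {xhat[2]:.2f} (true 5.00)")
```

Because the external torque enters the observer only through the position innovation, no torque sensor and no numerical differentiation of the measured position are required.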
Abstract:
In this paper three p-adaptation strategies based on the minimization of the truncation error are presented for high-order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed and termed a posteriori, quasi-a priori, and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order; but while the first needs time-converged solutions, the last two rely on non-converged solutions, which leads to faster computations. In addition, the high-order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier–Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain speedups of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
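To make the τ-estimation idea concrete, here is a hedged one-dimensional finite-difference sketch (a stand-in for the paper's DG operators): a converged low-order solution is injected into an enriched discretization, and the residual there approximates the truncation error, flagging regions for adaptation.

```python
import numpy as np

def poisson_matrix(n, h):
    """Second-order FD operator for -u'' on n interior points (Dirichlet BCs)."""
    return ((np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2)

# Coarse problem: -u'' = f on (0,1) with u(0) = u(1) = 0 and u = sin(pi x).
n = 31
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
u_coarse = np.linalg.solve(poisson_matrix(n, h), np.pi**2 * np.sin(np.pi * x))

# Inject the converged coarse solution into the enriched ("fine") space.
nf = 2 * n + 1
hf = 1.0 / (nf + 1)
xf = np.linspace(hf, 1.0 - hf, nf)
xs = np.concatenate(([0.0], x, [1.0]))            # include boundary nodes
us = np.concatenate(([0.0], u_coarse, [0.0]))
u_inj = np.interp(xf, xs, us)   # linear prolongation for brevity; a
                                # higher-order interpolant is used in practice

# The fine-space residual of the injected solution estimates the truncation
# error of the coarse discretization, pointwise.
tau = poisson_matrix(nf, hf) @ u_inj - np.pi**2 * np.sin(np.pi * xf)
flagged = np.abs(tau) > 0.1 * np.abs(tau).max()   # regions needing adaptation
print(f"max |tau| = {np.abs(tau).max():.3e}, flagged points: {flagged.sum()}")
```

The quasi-a priori variants in the paper avoid even converging the coarse solution first; the sketch keeps the converged (a posteriori) case for simplicity.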
Abstract:
Nowadays, there is an increasing number of robotic applications that need to act in real three-dimensional (3D) scenarios. In this paper we present a new mobile robotics orientated 3D registration method that improves previous Iterative Closest Points based solutions both in speed and accuracy. As an initial step, we perform a low cost computational method to obtain descriptions for 3D scenes planar surfaces. Then, from these descriptions we apply a force system in order to compute accurately and efficiently a six degrees of freedom egomotion. We describe the basis of our approach and demonstrate its validity with several experiments using different kinds of 3D sensors and different 3D real environments.
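The paper's force-system computation is specific to its planar descriptions; as a hedged stand-in, the sketch below recovers a 6-DoF motion from matched plane parameters (unit normal n and offset d, with n·x = d) using the standard Kabsch/least-squares construction. The plane data in the demo are synthetic.

```python
import numpy as np

def egomotion_from_planes(normals_a, d_a, normals_b, d_b):
    """Rigid motion (R, t) mapping frame A to frame B from matched planes."""
    # Rotation: align the bundles of unit normals (Kabsch via SVD).
    H = normals_a.T @ normals_b
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    # Translation: each plane contributes one constraint  n_b . t = d_b - d_a.
    t, *_ = np.linalg.lstsq(normals_b, d_b - d_a, rcond=None)
    return R, t

# Demo with three synthetic planes and a known motion.
na = np.eye(3)                                   # plane normals in frame A
da = np.array([1.0, 2.0, 3.0])                   # plane offsets in frame A
c, s = np.cos(0.1), np.sin(0.1)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.2, 0.5])
nb = na @ R_true.T                               # n_b = R n_a
db = da + nb @ t_true                            # d_b = d_a + n_b . t
R, t = egomotion_from_planes(na, da, nb, db)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
```

Working with a handful of plane parameters rather than raw points is what makes this family of methods faster than classic point-to-point ICP.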
Abstract:
We propose the design of a real-time system to recognize and interpret hand gestures. The acquisition devices are low-cost 3D sensors. The 3D hand pose is segmented, characterized, and tracked using a growing neural gas (GNG) structure. The system's capacity to obtain information with a high number of degrees of freedom allows the encoding of many gestures and very accurate motion capture. The use of hand-pose models combined with the motion information provided by the GNG makes it possible to deal with the problem of representing hand motion. A natural interface applied to a virtual mirror-writing system and to a hand-pose estimation system is designed to demonstrate the validity of the approach.
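For reference, here is a hedged, minimal Growing Neural Gas sketch in the spirit of Fritzke's original algorithm, which is the structure the system uses for segmentation and tracking; the parameter values are common defaults, not the paper's, and node-removal housekeeping is omitted for brevity.

```python
import numpy as np

def gng(data, max_nodes=50, lam=100, eps_b=0.2, eps_n=0.006,
        a_max=50, alpha=0.5, decay=0.995, iters=5000):
    rng = np.random.default_rng(0)
    W = [data[rng.integers(len(data))].copy() for _ in range(2)]  # node weights
    E = [0.0, 0.0]                                                # node errors
    edges = {}                                                    # (i, j) -> age
    for step in range(1, iters + 1):
        x = data[rng.integers(len(data))]
        dists = [np.sum((w - x) ** 2) for w in W]
        s1, s2 = np.argsort(dists)[:2]                 # two nearest nodes
        E[s1] += dists[s1]
        W[s1] += eps_b * (x - W[s1])                   # move winner ...
        for (i, j) in list(edges):
            if s1 in (i, j):
                edges[(i, j)] += 1                     # age incident edges
                other = j if i == s1 else i
                W[other] += eps_n * (x - W[other])     # ... and its neighbours
        edges[tuple(sorted((s1, s2)))] = 0             # refresh winner edge
        edges = {e: a for e, a in edges.items() if a <= a_max}
        if step % lam == 0 and len(W) < max_nodes:     # periodic node insertion
            q = int(np.argmax(E))
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda m: E[m])
                W.append(0.5 * (W[q] + W[f]))          # split the worst edge
                E[q] *= alpha; E[f] *= alpha
                E.append(E[q])
                edges.pop(tuple(sorted((q, f))), None)
                r = len(W) - 1
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
        E = [e * decay for e in E]
    return np.array(W), edges
```

Fed with the 3D points of the segmented hand, the returned graph gives a compact, topology-preserving model whose nodes can be tracked from frame to frame.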
Abstract:
Spatial characterization of non-Gaussian attributes in earth sciences and engineering commonly requires the estimation of their conditional distribution. The indicator and probability kriging approaches of current nonparametric geostatistics provide approximations for estimating conditional distributions. They do not, however, provide results similar to those in the cumbersome implementation of simultaneous cokriging of indicators. This paper presents a new formulation termed successive cokriging of indicators that avoids the classic simultaneous solution and related computational problems, while obtaining equivalent results to the impractical simultaneous solution of cokriging of indicators. A successive minimization of the estimation variance of probability estimates is performed, as additional data are successively included into the estimation process. In addition, the approach leads to an efficient nonparametric simulation algorithm for non-Gaussian random functions based on residual probabilities.
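As background, here is a hedged sketch of the elementary building block, simple kriging of an indicator transform, which estimates the conditional probability P(Z(u0) ≤ z | data). The covariance model, marginal probability, and sample values are assumptions of the sketch, not the paper's successive cokriging formulation.

```python
import numpy as np

def cov(h, sill=1.0, rng=10.0):
    return sill * np.exp(-np.abs(h) / rng)   # exponential covariance model

locs = np.array([1.0, 4.0, 7.0])             # data locations (1-D for brevity)
z = np.array([2.3, 0.8, 1.6])                # attribute values
threshold = 1.5
ind = (z <= threshold).astype(float)         # indicator transform
p_marginal = 0.5                             # assumed marginal P(Z <= z)

u0 = 5.0                                     # estimation location
K = cov(locs[:, None] - locs[None, :])       # data-to-data covariances
k0 = cov(locs - u0)                          # data-to-target covariances
w = np.linalg.solve(K, k0)                   # simple-kriging weights

# Simple kriging with known mean: marginal plus weighted indicator residuals.
p_hat = p_marginal + w @ (ind - p_marginal)
print("estimated P(Z(u0) <= %.1f) = %.3f" % (threshold, np.clip(p_hat, 0, 1)))
```

The paper's contribution is to obtain the full cokriging-of-indicators result by successively minimizing the estimation variance as data are included, rather than by solving the large simultaneous system this sketch would grow into.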
Abstract:
In various signal-channel-estimation problems, the channel being estimated may be well approximated by a discrete finite impulse response (FIR) model with sparsely separated active or nonzero taps. A common approach to estimating such channels involves a discrete normalized least-mean-square (NLMS) adaptive FIR filter, every tap of which is adapted at each sample interval. Such an approach suffers from slow convergence rates and poor tracking when the required FIR filter is "long." Recently, NLMS-based algorithms have been proposed that employ least-squares-based structural detection techniques to exploit possible sparse channel structure and subsequently provide improved estimation performance. However, these algorithms perform poorly when there is a large dynamic range amongst the active taps. In this paper, we propose two modifications to the previous algorithms, which essentially remove this limitation. The modifications also significantly improve the applicability of the detection technique to structurally time varying channels. Importantly, for sparse channels, the computational cost of the newly proposed detection-guided NLMS estimator is only marginally greater than that of the standard NLMS estimator. Simulations demonstrate the favourable performance of the newly proposed algorithm. © 2006 IEEE.
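To fix ideas, here is a hedged sketch of the standard NLMS baseline together with a naive tap-activity gate; the magnitude-threshold detection below is a simplification standing in for the paper's least-squares structural detection.

```python
import numpy as np

def nlms(x, d, n_taps, mu=0.5, eps=1e-6, detect=False, thresh=0.05):
    """Adaptive FIR channel estimate from input x and observed output d."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    for n in range(len(x)):
        buf = np.roll(buf, 1); buf[0] = x[n]     # regressor, most recent first
        e = d[n] - w @ buf                       # a priori estimation error
        g = mu * e * buf / (buf @ buf + eps)     # normalized gradient step
        if detect:                               # gate adaptation to taps that
            active = np.abs(w) > thresh * (np.abs(w).max() + 1e-12)
            active[np.argmax(np.abs(g))] = True  # look active (naive detector)
            g = g * active
        w += g
    return w

# Demo: sparse 100-tap channel with two active taps of very different size.
rng = np.random.default_rng(1)
h = np.zeros(100); h[7] = 1.0; h[63] = 0.05
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(nlms(x, d, 100)[[7, 63]], 3))   # should approach [1.0, 0.05]
```

The large dynamic range between the two active taps (1.0 vs 0.05) is exactly the regime where the earlier detection-guided algorithms struggled and where the paper's modifications apply.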
Abstract:
The contributions of this dissertation are the development of two new interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation; and (2) a shift-invariant sub-decimation decomposition method that overcomes the deficiency of the decimation process in estimating motion, which stems from the shift-variant property of the wavelet transform.

The enormous data generated by digital video creates a pressing need for efficient video compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the inter-pixel redundancies inside and between the video frames by applying motion estimation and motion compensation (ME/MC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function reliably, hierarchical motion estimation with coarse-to-fine resolution refinement using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalable nature.

Because most of the energy is concentrated in the low-resolution subbands and decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It realizes the possible intrablocks in the subbands for lower-entropy coding while keeping the low computational load of level-refined motion estimation, thus achieving both temporal compression quality and computational simplicity.

Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetric-extended wavelet transform is designed for the finite-length frame signals, giving more accurate motion estimation without discontinuous boundary distortions.

Although wavelet-transformed coefficients still contain spatial-domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift-variant property of the decimation process of the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains motion consistency between the original frame and the decomposed subframes, thereby improving wavelet-domain video compression through shift-invariant motion estimation and compensation.
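The elementary operation inside such hierarchical motion estimation is block matching; below is a hedged, minimal full-search sketch (block size, search range, and the SAD criterion are illustrative). A coarse-to-fine scheme would run it on the low-resolution subband first and refine the vectors at finer levels.

```python
import numpy as np

def block_motion(prev, cur, block=8, search=4):
    """Per-block (dy, dx) minimizing the sum of absolute differences (SAD)."""
    prev = np.asarray(prev, dtype=np.float32)   # avoid uint8 wrap-around
    cur = np.asarray(cur, dtype=np.float32)
    H, W = cur.shape
    mv = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            target = cur[y:y + block, x:x + block]
            best = (np.inf, 0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= H and 0 <= xx and xx + block <= W:
                        cand = prev[yy:yy + block, xx:xx + block]
                        sad = np.abs(target - cand).sum()  # matching criterion
                        if sad < best[0]:
                            best = (sad, dy, dx)
            mv[by, bx] = best[1:]
    return mv
```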
Abstract:
Open Access funded by the Medical Research Council. Acknowledgment: The work reported here was funded by a grant from the Medical Research Council, UK (grant number MR/J013838/1).
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a responsibility of the community to optimize the radiation dose of CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.
With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
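As a hedged illustration of the convolution-based idea (not the validated model of this chapter), the sketch below smears a tube-current-modulation profile along z with an assumed scatter kernel and scales the organ-averaged field by an assumed organ dose coefficient; all numbers are placeholders.

```python
import numpy as np

z = np.arange(0.0, 40.0, 0.5)                    # slice positions (cm)
mA = 150 + 100 * np.exp(-((z - 20) / 8.0) ** 2)  # illustrative TCM profile

kernel_z = np.arange(-10.0, 10.5, 0.5)
kernel = np.exp(-np.abs(kernel_z) / 3.0)         # assumed scatter spread
kernel /= kernel.sum()

# Dose-field proxy along z: primary contribution smeared by scatter.
dose_profile = np.convolve(mA, kernel, mode="same")

# Organ dose ~ organ-specific coefficient times the mean field over the organ.
organ_extent = (z > 15) & (z < 25)               # hypothetical organ span
organ_dose = 0.012 * dose_profile[organ_extent].mean()   # coefficient assumed
print(f"estimated organ dose: {organ_dose:.2f} mGy")
```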
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, and so the patient's major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
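A hedged sketch of step (1), image-based noise addition: quantum noise variance scales roughly inversely with dose, so simulating a fraction r of the original dose adds zero-mean noise with sigma_add = sigma0 * sqrt(1/r - 1). The Gaussian, spatially uncorrelated noise model and the sigma0 value are simplifying assumptions; the thesis' technique also accounts for noise texture.

```python
import numpy as np

def simulate_low_dose(img_hu, sigma0, dose_fraction, rng=None):
    """Add noise to a full-dose CT image to emulate a reduced-dose scan.

    img_hu: 2-D image in HU; sigma0: measured full-dose noise (HU);
    dose_fraction: simulated dose as a fraction of the original (0 < r <= 1).
    """
    rng = rng or np.random.default_rng()
    sigma_add = sigma0 * np.sqrt(1.0 / dose_fraction - 1.0)
    return img_hu + rng.normal(0.0, sigma_add, img_hu.shape)

# e.g. a half-dose series from a full-dose image whose noise is 12 HU:
# low = simulate_low_dose(ct_image, sigma0=12.0, dose_fraction=0.5)
```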
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
This paper introduces the LiDAR compass, a bounded and extremely lightweight heading estimation technique that combines a two-dimensional laser scanner and axis maps, which represent the orientations of flat surfaces in the environment. Although suitable for a variety of indoor and outdoor environments, the LiDAR compass is especially useful for embedded and real-time applications requiring low computational overhead. For example, when combined with a sensor that can measure translation (e.g., wheel encoders) the LiDAR compass can be used to yield accurate, lightweight, and very easily implementable localization that requires no prior mapping phase. The utility of using the LiDAR compass as part of a localization algorithm was tested on a widely-available open-source data set, an indoor environment, and a larger-scale outdoor environment. In all cases, it was shown that the growth in heading error was bounded, which significantly reduced the position error to less than 1% of the distance travelled.
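A hedged sketch of the core idea: the orientations of scan chords cluster around the axis-map directions modulo 90 degrees, so the heading is the offset that best explains the cluster. Consecutive-point differencing stands in for proper line-segment extraction, and the 90-degree ambiguity is resolved by tracking in a real system.

```python
import numpy as np

def dominant_axis(points):
    """Dominant flat-surface orientation of a 2-D scan, modulo 90 degrees."""
    d = np.diff(np.asarray(points, dtype=float), axis=0)  # consecutive chords
    theta = np.arctan2(d[:, 1], d[:, 0])                  # chord orientations
    folded = np.mod(theta, np.pi / 2)                     # fold onto one axis
    # Circular mean with period pi/2 (angle scaled by 4) tolerates outliers.
    return np.mod(np.angle(np.exp(4j * folded).mean()) / 4.0, np.pi / 2)

# Heading relative to the axis map = scan's dominant axis minus the map's
# stored axis orientation (both modulo 90 deg; tracking resolves ambiguity):
# heading = np.mod(dominant_axis(scan_xy) - map_axis, np.pi / 2)
```

Because each scan is compared against a fixed axis map rather than the previous scan, the heading error does not accumulate, which is what bounds the drift.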
Abstract:
Studies of fluid-structure interactions associated with flexible structures such as flapping wings require the capture and quantification of large motions of bodies that may be opaque. Motion capture of a free-flying insect is considered by using three synchronized high-speed cameras. A solid finite element representation is used as a reference body, and successive snapshots in time of the displacement fields are reconstructed via an optimization procedure. An objective function is formulated, and various shape-difference definitions are considered. The proposed methodology is first studied for a synthetic case of a flexible cantilever structure undergoing large deformations, and then applied to a Manduca sexta (hawkmoth) in free flight. The three-dimensional motions of this flapping system are reconstructed from image data collected by the three cameras. The complete deformation geometry of this system is analyzed. Finally, a computational investigation is carried out to understand the flow physics and aerodynamic performance by prescribing the body and wing motions in a fluid-body code. This thesis contains one of the first sets of such motion visualization and deformation analyses carried out for a hawkmoth in free flight. The tools and procedures used in this work are widely applicable to studies of other flying animals with flexible wings as well as synthetic systems with flexible body elements.
Abstract:
Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
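A hedged sketch of the sample/select/re-estimate cycle described above, with one simplification: the Bayesian network is reduced to independent per-nurse categorical distributions so that the loop stays visible; fitness() is a placeholder for the schedule evaluation described in the paper.

```python
import numpy as np

def eda_nurse_rules(n_nurses, n_rules, fitness, pop=100, elite=30, gens=50):
    rng = np.random.default_rng(0)
    P = np.full((n_nurses, n_rules), 1.0 / n_rules)  # per-nurse rule model
    best, best_fit = None, -np.inf
    for _ in range(gens):
        # Sample a population of rule strings, one rule choice per nurse.
        strings = np.array([[rng.choice(n_rules, p=P[i])
                             for i in range(n_nurses)] for _ in range(pop)])
        fits = np.array([fitness(s) for s in strings])
        top = strings[np.argsort(fits)[-elite:]]     # promising solutions
        if fits.max() > best_fit:
            best_fit, best = fits.max(), strings[np.argmax(fits)]
        # Re-estimate the model from the promising set (add-one smoothing).
        for i in range(n_nurses):
            counts = np.bincount(top[:, i], minlength=n_rules) + 1.0
            P[i] = counts / counts.sum()
    return best, best_fit

# Toy usage: 10 nurses, 4 rules; this fitness simply rewards rule 2.
# best, f = eda_nurse_rules(10, 4, lambda s: np.sum(s == 2))
```

The full algorithm conditions each variable on its parents in the learned Bayesian network, which is what lets it mix building blocks rather than single rules.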
An Estimation of Distribution Algorithm with Intelligent Local Search for Rule-based Nurse Rostering
Abstract:
This paper proposes a new memetic evolutionary algorithm to achieve explicit learning in rule-based nurse rostering, which involves applying a set of heuristic rules for each nurse's assignment. The main framework of the algorithm is an estimation of distribution algorithm, in which an ant-miner methodology improves the individual solutions produced in each generation. Unlike our previous work (where learning is implicit), the learning in the memetic estimation of distribution algorithm is explicit, i.e. we are able to identify building blocks directly. The overall approach learns by building a probabilistic model, i.e. an estimation of the probability distribution of individual nurse-rule pairs that are used to construct schedules. The local search processor (i.e. the ant-miner) reinforces nurse-rule pairs that receive higher rewards. A challenging real world nurse rostering problem is used as the test problem. Computational results show that the proposed approach outperforms most existing approaches. It is suggested that the learning methodologies suggested in this paper may be applied to other scheduling problems where schedules are built systematically according to specific rules.
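As a hedged sketch of the local-search reinforcement (an evaporation-plus-reward update in the style of ant systems, with illustrative constants, not the paper's exact ant-miner), the step below nudges the probability model towards nurse-rule pairs used in well-rewarded schedules.

```python
import numpy as np

def reinforce(P, rule_string, reward, rho=0.1):
    """P: (n_nurses, n_rules) probability model; rule_string: rule per nurse."""
    P = (1.0 - rho) * P                           # evaporate all entries
    for nurse, rule in enumerate(rule_string):
        P[nurse, rule] += rho * reward            # reward the pairs just used
    return P / P.sum(axis=1, keepdims=True)       # renormalize each row
```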