952 results for Doubly robust estimation
Abstract:
In continuation of our previous work on the quintet transitions 1s2s2p^2 ^5P - 1s2s2p3d ^5P^0, ^5D^0, results on other n = 2 - n' = 3 quintet transitions for the elements N, O, and F are presented. Assignments have been established by comparison with Multi-Configuration Dirac-Fock calculations. High spectral resolution in beam-foil spectroscopy was essential for the identification of most of the lines. For some of the quintet lines, decay curves were measured, and the extracted lifetimes were found to be in reasonable agreement with MCDF calculations.
Abstract:
In continuation of our previous work on doubly-excited ions with three and four electrons, we present the first results on optical transitions in the term system of doubly-excited ions with five electrons. Transitions between such sextet states were identified in beam-foil spectra of nitrogen, oxygen, and fluorine ions. Assignments were first established by comparison with Multi-Configuration Dirac-Fock calculations. Later assignments were aided by Multi-Configuration Hartree-Fock calculations (see the contribution by G. Miecznik et al. in this issue). Decay curves were recorded for all six candidate lines. The lifetime results are compared to theoretical values, which qualitatively confirm most of the assignments.
Abstract:
Correlation energies for all isoelectronic sequences of 2 to 20 electrons and Z = 2 to 25 are obtained by taking differences between theoretical total energies from Dirac-Fock calculations and experimental total energies. These are purely relativistic correlation energies, because relativistic and QED effects are already accounted for. Both the theoretical and the experimental values are analysed critically in order to obtain values as accurate as possible. The correlation energies obtained show an essentially consistent behaviour from Z = 2 to 17. For Z > 17 inconsistencies occur, indicating errors in the experimental values, which become very large for Z > 25.
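The extraction described in this abstract amounts to a simple difference of total energies; a minimal sketch (the numerical values below are illustrative placeholders, not tabulated data from the paper):

```python
# Relativistic correlation energy extracted as the difference between
# the experimental total energy and the Dirac-Fock theoretical total
# energy, following the subtraction scheme the abstract describes.

def correlation_energy(e_experimental, e_dirac_fock):
    """E_corr = E_exp - E_DF, both in hartree."""
    return e_experimental - e_dirac_fock

# Hypothetical two-electron ion (illustrative numbers only):
e_exp = -2.9037  # experimental total energy (hartree)
e_df = -2.8618   # Dirac-Fock total energy (hartree)
print(f"E_corr = {correlation_energy(e_exp, e_df):.4f} hartree")
```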
Abstract:
Following an earlier observation in F VI, we identified the line pair 1s2s2p^2 {^5P} - 1s2s2p3d {^5P^0}, {^5D^0} for the elements N, O, Mg, and tentatively for Al and Si in beam-foil spectra. Assignment was established by comparison with Multi-Configuration Dirac-Fock calculations along the isoelectronic sequence. Using this method we also identified some quartet lines of lithium-like ions with Z > 10.
Abstract:
Brazil has been increasing its importance in agricultural markets. The reasons are well known: the relative abundance of land, the increasing use of technology in crops, and the development of the agribusiness sector, which allows for a fast response to price stimuli. The elasticity of acreage response to increases in expected return is estimated for soybeans in a dynamic (long-term) error correction model. Regarding yield patterns, a large variation in the yearly rates of growth in yield is observed, with climate probably being the main source of this variation, resulting in 'good' and 'bad' years. In South America, special attention should be given to the El Niño and La Niña phenomena, both said to have important effects on rainfall patterns and consequently on yield. The influence of El Niño and La Niña in historical data is examined, and some ways of estimating the impact of climate on yield in the soybean and corn markets are proposed. Possible implications of climate change may also apply.
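The acreage elasticity estimation mentioned in this abstract can be sketched with a standard two-step (Engle-Granger style) error correction regression. The data below are synthetic, the true long-run elasticity of 0.6 is made up for illustration, and the two-step OLS procedure is an assumption, not necessarily the specification used in the paper:

```python
import random

# Sketch: acreage (y) responding to expected returns (x), both in logs,
# so the long-run slope reads as an elasticity. Synthetic data.
random.seed(1)

n = 500
beta_true, alpha_true = 0.6, -0.4        # long-run elasticity, adjustment speed
x = [0.0]
for _ in range(n - 1):                   # random-walk regressor
    x.append(x[-1] + random.gauss(0, 1))
y = [beta_true * x[0]]
for t in range(1, n):                    # error-correction data-generating process
    ecm = y[-1] - beta_true * x[t - 1]
    y.append(y[-1] + alpha_true * ecm + random.gauss(0, 0.05))

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return num / sum((a - mx) ** 2 for a in xs)

# Step 1: long-run (cointegrating) regression gives the elasticity.
beta_hat = ols_slope(x, y)
# Step 2: speed of adjustment from regressing changes on lagged residuals.
resid = [b - beta_hat * a for a, b in zip(x, y)]
dy = [y[t] - y[t - 1] for t in range(1, n)]
alpha_hat = ols_slope(resid[:-1], dy)
print(f"elasticity ~ {beta_hat:.2f}, adjustment ~ {alpha_hat:.2f}")
```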
Abstract:
For large wind turbines, new pitch controllers such as individual blade pitch controllers or tower damping controllers are being developed. While these new pitch controllers reduce the loads on the components of the wind turbine, the pitch drive system is subjected to greater stress: the pitch drives must operate far more frequently and at higher amplitudes. Before the new pitch controllers can be used, the problem of material fatigue in the pitch drive systems must first be solved. Backlash within the gearboxes and between the pinions and the ring gear increases material fatigue in the pitch drive systems. In this study, two pitch drives per blade are proposed as a solution. The two pitch drives apply a tension (preload) to the pitch drive system and thereby compensate for the backlash. Torque peaks, which cause material fatigue, no longer occur in this system with two pitch motors. A single controller output is distributed to the two pitch drives via a torque distributor; several methods are compared and the best-performing torque distributor is selected. While the pitch drives are in motion, the tension on the gearboxes changes. The new pitch controllers adjust the pitch angle in a sinusoidal pattern. The profile generator currently used as a pitch angle controller can introduce a phase delay into the sinusoidal pitch angle. In addition, large wind turbines generate high loads that disturb the pitch motion. Changes in viscous friction and the nonlinearity of sliding (Coulomb) friction in the pitch control system further complicate the design of a pitch angle controller. Two robust controllers (H∞ and μ-synthesis) are presented and compared with two conventional controllers (PD and cascade control). A test rig is used to evaluate the pitch drive system and the pitch angle controller.
Since the ring gear is not equipped with a position sensor, a monitoring element is developed that reports the ring-gear position. In addition to the two pitch drives, two load motors are connected to the ring gear. The two load motors simulate the torque about the pitch axis of a wind turbine, which is composed of gravity, aerodynamic force, centrifugal load, friction due to the tilting moment, and the acceleration or deceleration of the rotor blade. The blade is modeled as a two-mass oscillator. Large wind turbines and the new pitch controllers designed for them require a new pitch drive system: two pitch drives as the hardware solution, with a robust controller as the software solution.
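One simple way to realize the opposing preload that the two drives are described as applying is to bias the split of the commanded torque. This is an illustrative assumption, not the torque distributor selected in the study:

```python
def distribute_torque(t_cmd, preload):
    """Split a commanded pitch torque between two drives while biasing
    them against each other so that the gear backlash stays closed.

    t_cmd:   total commanded torque about the pitch axis
    preload: bias torque (same units); drives push against each other
    """
    t1 = t_cmd / 2 + preload
    t2 = t_cmd / 2 - preload
    return t1, t2

# With a 10 unit command and 3 units of preload, the drives deliver
# 8 and 2 units; their sum is still the commanded torque.
print(distribute_torque(10.0, 3.0))
```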
Abstract:
We are currently at the cusp of a revolution in quantum technology that relies not just on the passive use of quantum effects, but on their active control. At the forefront of this revolution is the implementation of a quantum computer. Encoding information in quantum states as “qubits” makes it possible to use entanglement and quantum superposition to perform calculations that are infeasible on classical computers. The fundamental challenge in the realization of quantum computers is to avoid decoherence – the loss of quantum properties – due to unwanted interaction with the environment. This thesis addresses the problem of implementing entangling two-qubit quantum gates that are robust with respect to both decoherence and classical noise. It covers three aspects: the use of efficient numerical tools for the simulation and optimal control of open and closed quantum systems, the role of advanced optimization functionals in facilitating robustness, and the application of these techniques to two of the leading implementations of quantum computation, trapped atoms and superconducting circuits. After a review of the theoretical and numerical foundations, the central part of the thesis starts with the idea of using ensemble optimization to achieve robustness with respect to both classical fluctuations in the system parameters and decoherence. For the example of a controlled phase gate implemented with trapped Rydberg atoms, this approach is demonstrated to yield a gate that is at least one order of magnitude more robust than the best known analytic scheme. Moreover, this robustness is maintained even for gate durations significantly shorter than those obtained in the analytic scheme. Superconducting circuits are a particularly promising architecture for the implementation of a quantum computer. Their flexibility is demonstrated by performing optimizations for both diagonal and non-diagonal quantum gates. 
In order to achieve robustness with respect to decoherence, it is essential to implement quantum gates in the shortest possible amount of time. This may be facilitated by using an optimization functional that targets an arbitrary perfect entangler, based on a geometric theory of two-qubit gates. For the example of superconducting qubits, it is shown that this approach leads to significantly shorter gate durations, higher fidelities, and faster convergence than optimization towards specific two-qubit gates. Performing optimization in Liouville space in order to properly take decoherence into account poses significant numerical challenges, as the dimension of Liouville space scales quadratically with that of Hilbert space. However, it can be shown that for a unitary target the optimization only requires propagation of at most three states, instead of a full basis of Liouville space. Both for the example of trapped Rydberg atoms and for superconducting qubits, the successful optimization of quantum gates is demonstrated, at a numerical cost significantly lower than previously thought possible. Together, the results of this thesis point towards a comprehensive framework for the optimization of robust quantum gates, paving the way for the future realization of quantum computers.
Abstract:
A new formulation for recovering the structure and motion parameters of a moving patch using both motion and shading information is presented. It is based on a new differential constraint equation (FICE) that links the spatiotemporal gradients of irradiance to the motion and structure parameters and the temporal variations of the surface shading. The FICE separates the contributions of texture gradients from those of shading in the irradiance spatiotemporal gradients, allowing it to be used for both textured and textureless surfaces. The new approach, combining motion and shading information, leads directly to two different contributions: it can compensate for the effects of shading variations in recovering the shape and motion, and it can exploit the shading/illumination effects to recover motion and shape when they cannot be recovered otherwise. The FICE formulation is also extended to multiple frames.
Abstract:
This thesis addresses the problem of developing automatic grasping capabilities for robotic hands. Using 2-jointed and 4-jointed models of the hand, we establish the geometric conditions necessary for achieving form closure grasps of cylindrical objects. We then define and show how to construct the grasping pre-image for quasi-static (friction dominated) and zero-G (inertia dominated) motions for sensorless and sensor-driven grasps with and without arm motions. While the approach does not rely on detailed modeling, it is computationally inexpensive, reliable, and easy to implement. Example behaviors were successfully implemented on the Salisbury hand and on a planar 2-fingered, 4 degree-of-freedom hand.
Abstract:
This report examines how to estimate the parameters of a chaotic system given noisy observations of the state behavior of the system. Investigating parameter estimation for chaotic systems is interesting because of possible applications for high-precision measurement and for use in other signal processing, communication, and control applications involving chaotic systems. In this report, we examine theoretical issues regarding parameter estimation in chaotic systems and develop an efficient algorithm to perform parameter estimation. We discover two properties that are helpful for performing parameter estimation on non-structurally stable systems. First, it turns out that most data in a time series of state observations contribute very little information about the underlying parameters of a system, while a few sections of data may be extraordinarily sensitive to parameter changes. Second, for one-parameter families of systems, we demonstrate that there is often a preferred direction in parameter space governing how easily trajectories of one system can "shadow" trajectories of nearby systems. This asymmetry of shadowing behavior in parameter space is proved for certain families of maps of the interval. Numerical evidence indicates that similar results may be true for a wide variety of other systems. Using the two properties cited above, we devise an algorithm for performing parameter estimation. Standard parameter estimation techniques such as the extended Kalman filter perform poorly on chaotic systems because of divergence problems. The proposed algorithm achieves accuracies several orders of magnitude better than the Kalman filter and has good convergence properties for large data sets.
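As a toy illustration of fitting a chaotic system's parameter from noisy state observations, the sketch below does a plain grid search over the one-step prediction error on the logistic map. This is not the report's algorithm, which exploits the sensitivity and shadowing properties described above:

```python
import random

# Estimate the logistic-map parameter r from noisy observations by
# minimizing the mean one-step prediction error over a grid of r values.
random.seed(0)

def logistic(r, x):
    return r * x * (1 - x)

r_true = 3.9                                    # chaotic regime
xs = [0.3]
for _ in range(400):
    xs.append(logistic(r_true, xs[-1]))
obs = [x + random.gauss(0, 0.01) for x in xs]   # noisy observations

def prediction_error(r):
    return sum((obs[t + 1] - logistic(r, obs[t])) ** 2
               for t in range(len(obs) - 1)) / (len(obs) - 1)

grid = [3.5 + 0.001 * k for k in range(501)]    # candidate r in [3.5, 4.0]
r_hat = min(grid, key=prediction_error)
print(f"estimated r = {r_hat:.3f}")
```

One-step prediction keeps the fit well conditioned; naively matching long trajectories instead would diverge, which is the same pathology the report attributes to the extended Kalman filter.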
Abstract:
This thesis presents the development of hardware, theory, and experimental methods to enable a robotic manipulator arm to interact with soils and estimate soil properties from interaction forces. Unlike the majority of robotic systems interacting with soil, our objective is parameter estimation, not excavation. To this end, we design our manipulator with a flat plate for easy modeling of interactions. By using a flat plate, we take advantage of the wealth of research on the similar problem of earth pressure on retaining walls. There are a number of existing earth pressure models. These models typically provide estimates of force which are in uncertain relation to the true force. A recent technique, known as numerical limit analysis, provides upper and lower bounds on the true force. Predictions from the numerical limit analysis technique are shown to be in good agreement with other accepted models. Experimental methods for plate insertion, soil-tool interface friction estimation, and control of applied forces on the soil are presented. In addition, a novel graphical technique for inverting the soil models is developed, which is an improvement over standard nonlinear optimization. This graphical technique utilizes the uncertainties associated with each set of force measurements to obtain all possible parameters which could have produced the measured forces. The system is tested on three cohesionless soils, two in a loose state and one in a loose and dense state. The results are compared with friction angles obtained from direct shear tests. The results highlight a number of key points. Common assumptions are made in soil modeling, most notably the Mohr-Coulomb failure law and perfectly plastic behavior. In the direct shear tests, a marked dependence of friction angle on the normal stress at low stresses is found. This has ramifications for any study of friction done at low stresses. 
In addition, gradual failures are often observed for vertical tools and tools inclined away from the direction of motion. After accounting for the change in friction angle at low stresses, the results show good agreement with the direct shear values.
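The retaining-wall analogy invoked in this abstract can be made concrete with the classical Rankine earth pressure relation, a textbook formula rather than the numerical limit analysis the thesis develops:

```python
import math

def rankine_passive_force(gamma, h, phi_deg):
    """Rankine passive earth force per unit width on a smooth vertical
    wall: P_p = 0.5 * gamma * h^2 * Kp, with Kp = tan^2(45 + phi/2).

    gamma:   soil unit weight (N/m^3)
    h:       wall (or plate) depth (m)
    phi_deg: soil friction angle (degrees)
    """
    kp = math.tan(math.radians(45 + phi_deg / 2)) ** 2
    return 0.5 * gamma * h ** 2 * kp

# Illustrative numbers: a 10 cm plate in soil with phi = 30 degrees,
# where Kp = tan^2(60 deg) = 3.
print(rankine_passive_force(18000, 0.1, 30))
```

Inverting such a model (measuring the force and solving for phi) is the flat-plate estimation idea in miniature; the thesis replaces the single Rankine estimate with upper and lower bounds from numerical limit analysis.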
Abstract:
As exploration of our solar system and outer space moves into the future, spacecraft are being developed to venture on increasingly challenging missions with bold objectives. The spacecraft tasked with completing these missions are becoming progressively more complex. This increases the potential for mission failure due to hardware malfunctions and unexpected spacecraft behavior. A solution to this problem lies in the development of an advanced fault management system. Fault management enables a spacecraft to respond to failures and take repair actions so that it may continue its mission. The two main approaches developed for spacecraft fault management have been rule-based and model-based systems. Rules map sensor information to system behaviors, thus achieving fast response times and making the actions of the fault management system explicit. These rules are developed by having a human reason through the interactions between spacecraft components. This process is limited by the number of interactions a human can reason about correctly. In the model-based approach, the human provides component models, and the fault management system reasons automatically about system-wide interactions and complex fault combinations. This approach improves correctness and makes explicit the underlying system models, whereas these are implicit in the rule-based approach. We propose a fault detection engine, Compiled Mode Estimation (CME), that unifies the strengths of the rule-based and model-based approaches. CME uses a compiled model to determine spacecraft behavior more accurately. Reasoning related to fault detection is compiled in an off-line process into a set of concurrent, localized diagnostic rules. These are then combined on-line along with sensor information to reconstruct the diagnosis of the system. These rules enable a human to inspect the diagnostic consequences of CME. 
Additionally, CME is capable of reasoning through component interactions automatically while still providing fast and correct responses. The implementation of this engine has been tested against the NEAR spacecraft's advanced rule-based system, resulting in the detection of failures beyond those detected by the rules. This evolution in fault detection will enable future missions to explore the furthest reaches of the solar system without the burden of human intervention to repair failed components.
Abstract:
In this text, we present two stereo-based head tracking techniques along with a fast 3D model acquisition system. The first tracking technique is a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique. Our system can automatically segment and track a user's head under large rotation and illumination variations. Precision and usability of this approach are compared with previous tracking methods for cursor control and target selection in both desktop and interactive room environments. The second tracking technique is designed to improve the robustness of head pose tracking for fast movements. Our iterative hybrid tracker combines constraints from the ICP (Iterative Closest Point) algorithm and normal flow constraint. This new technique is more precise for small movements and noisy depth than ICP alone, and more robust for large movements than the normal flow constraint alone. We present experiments which test the accuracy of our approach on sequences of real and synthetic stereo images. The 3D model acquisition system we present quickly aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on our iterative hybrid tracker. We reconstruct the 3D model using a new Cubic Ray Projection merging algorithm which takes advantage of a novel data structure: the linked voxel space. We present experiments to test the accuracy of our approach on 3D face modelling using real-time stereo images.
Abstract:
This paper describes a trainable system capable of tracking faces and facial features like eyes and nostrils and estimating basic mouth features such as degree of openness and smile in real time. In developing this system, we have addressed the twin issues of image representation and algorithms for learning. We have used the invariance properties of image representations based on Haar wavelets to robustly capture various facial features. Similarly, unlike previous approaches this system is entirely trained using examples and does not rely on a priori (hand-crafted) models of facial features based on optical flow or facial musculature. The system works in several stages that begin with face detection, followed by localization of facial features and estimation of mouth parameters. Each of these stages is formulated as a problem in supervised learning from examples. We apply the new and robust technique of support vector machines (SVM) for classification in the stages of skin segmentation, face detection, and eye detection. Estimation of mouth parameters is modeled as a regression from a sparse subset of coefficients (basis functions) of an overcomplete dictionary of Haar wavelets.
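The Haar-wavelet image representation mentioned above can be illustrated with a one-level 2D Haar decomposition (an illustrative sketch, not the paper's implementation):

```python
# One-level Haar decomposition: pairwise averages (low-pass) followed
# by pairwise half-differences (high-pass). Applying it to the rows and
# then the columns of an image yields the four standard 2D subbands.

def haar_1d(v):
    s = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
    d = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
    return s + d

def haar_2d(img):
    rows = [haar_1d(r) for r in img]                # transform each row
    cols = [haar_1d(list(c)) for c in zip(*rows)]   # then each column
    return [list(r) for r in zip(*cols)]            # transpose back

print(haar_2d([[4, 2], [6, 8]]))
```

The low-pass coefficient carries the local average intensity while the difference coefficients respond to edges, which is what makes the representation stable across illumination changes.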
Abstract:
We formulate density estimation as an inverse operator problem. We then use convergence results of empirical distribution functions to true distribution functions to develop an algorithm for multivariate density estimation. The algorithm is based upon a Support Vector Machine (SVM) approach to solving inverse operator problems. The algorithm is implemented and tested on simulated data from different distributions and different dimensionalities, Gaussians and Laplacians in $R^2$ and $R^{12}$. A comparison in performance is made with Gaussian Mixture Models (GMMs). Our algorithm does as well as or better than the GMMs for the simulations tested and has the added advantage of being automated with respect to parameters.
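The convergence of empirical distribution functions that this abstract builds on can be illustrated directly. The sketch below measures the Kolmogorov-Smirnov distance between the empirical and true CDFs of a standard Gaussian; it demonstrates only the convergence result, not the SVM inverse-operator algorithm itself:

```python
import math
import random

# sup_x |F_n(x) - F(x)| between the empirical distribution function of
# n standard-normal samples and the true normal CDF; it shrinks as n grows.
random.seed(0)

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def ks_distance(n):
    sample = sorted(random.gauss(0, 1) for _ in range(n))
    # The EDF jumps from i/n to (i+1)/n at each sorted sample point,
    # so the supremum is attained at one side of some jump.
    return max(max(abs((i + 1) / n - normal_cdf(x)),
                   abs(i / n - normal_cdf(x)))
               for i, x in enumerate(sample))

for n in (100, 1000, 10000):
    print(n, round(ks_distance(n), 3))
```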