925 results for ill-posed inverse problem
Abstract:
The determination of the displacement and the space-dependent force acting on a vibrating structure from measured final or time-averaged displacement observations is thoroughly investigated. Several aspects related to the existence and uniqueness of a solution of the linear but ill-posed inverse force problem are highlighted. After that, in order to capture the solution, a variational formulation is proposed and the gradient of the least-squares functional that is minimized is rigorously and explicitly derived. Numerical results obtained using the Landweber method and the conjugate gradient method are presented and discussed, illustrating the convergence of the iterative procedures for exact input data. Furthermore, for noisy data the semi-convergence phenomenon appears, as expected, and stability is restored by stopping the iterations according to the discrepancy principle once the residual becomes comparable to the noise level. The present investigation will be significant to researchers concerned with wave propagation and the control of vibrating structures.
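As a minimal illustration of the iterative regularization strategy described in this abstract, the sketch below applies Landweber iteration to a generic discretized linear ill-posed system A x = b and stops the iterations with the discrepancy principle; the operator A, noise level delta and relaxation parameter omega are hypothetical placeholders rather than quantities from the paper.

```python
import numpy as np

def landweber(A, b, delta, tau=1.1, omega=None, max_iter=10000):
    """Landweber iteration for A x = b, stopped by the discrepancy principle.

    A     : forward operator (2D array), assumed ill-conditioned
    b     : noisy data with noise level delta (||b - b_exact|| <= delta)
    tau   : discrepancy-principle safety factor (> 1)
    omega : relaxation parameter, must satisfy 0 < omega < 2 / ||A||^2
    """
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        residual = A @ x - b
        # Stop once the residual norm reaches the noise level (guards against semi-convergence).
        if np.linalg.norm(residual) <= tau * delta:
            break
        x = x - omega * (A.T @ residual)
    return x, k
```

A conjugate gradient variant, as mentioned in the abstract, would replace the fixed-step update with conjugate search directions while using the same discrepancy-based stopping rule.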
Abstract:
Dynamics of biomolecules over various spatial and time scales are essential for biological functions such as molecular recognition, catalysis and signaling. However, reconstruction of biomolecular dynamics from experimental observables requires the determination of a conformational probability distribution. Unfortunately, these distributions cannot be fully constrained by the limited information available from experiments, making the problem ill-posed in the terminology of Hadamard. The ill-posedness stems from the lack of a unique solution: multiple, or even infinitely many, solutions may exist. To overcome this, the problem needs to be regularized by making assumptions, which inevitably introduce biases into the result.
Here, I present two continuous probability density function approaches to solve an important inverse problem called the RDC trigonometric moment problem. By focusing on interdomain orientations, we reduced the problem to the determination of a distribution on the 3D rotational space from residual dipolar couplings (RDCs). We derived an analytical equation that relates the alignment tensors of adjacent domains, which serves as the foundation of the two methods. In the first approach, the ill-posed nature of the problem was avoided by introducing a continuous distribution model, which builds in a smoothness assumption. To find the optimal solution for the distribution, we also designed an efficient branch-and-bound algorithm that exploits the mathematical structure of the analytical solutions. The algorithm is guaranteed to find the distribution that best satisfies the analytical relationship. We observed good performance of the method when tested under various levels of experimental noise and when applied to two protein systems. The second approach avoids the use of any model by employing maximum entropy principles. This 'model-free' approach delivers the least biased result consistent with our state of knowledge. In this approach, the solution is an exponential function of Lagrange multipliers. To determine the multipliers, a convex objective function is constructed; consequently, the maximum entropy solution can be found easily by gradient descent methods. Both algorithms can be applied to biomolecular RDC data in general, including data from RNA and DNA molecules.
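To make the maximum-entropy step concrete, here is a small sketch under simplifying assumptions: the orientation space is discretized on a grid, the RDC-derived observables enter as linear moment constraints, and the Lagrange multipliers are found by plain gradient descent on the convex dual. The function and variable names are illustrative, not taken from the dissertation.

```python
import numpy as np

def max_entropy_weights(F, m, lr=0.1, n_steps=5000):
    """Maximum-entropy distribution on a discretized orientation grid.

    F : (n_constraints, n_grid) array of basis functions f_i evaluated on the grid
    m : (n_constraints,) measured moments (e.g., RDC-derived averages)

    The solution has the form p_j proportional to exp(-sum_i lam_i * F[i, j]);
    the multipliers lam minimize the convex dual  log Z(lam) + lam . m,
    found here by plain gradient descent (a simple stand-in for the paper's optimizer).
    """
    lam = np.zeros(F.shape[0])
    for _ in range(n_steps):
        w = np.exp(-lam @ F)      # unnormalized weights on the grid
        p = w / w.sum()           # normalized distribution
        grad = m - p @ F.T        # gradient of the dual: m_i - <f_i>_p
        lam -= lr * grad
    return p, lam
```

At the optimum the model averages match the measured moments, and the exponential form guarantees the least biased (maximum-entropy) distribution consistent with them.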
Abstract:
Development of reliable methods for optimised energy storage and generation is one of the most imminent challenges in modern power systems. In this paper, an adaptive approach to the load leveling problem is proposed, using novel dynamic models based on Volterra integral equations of the first kind with piecewise continuous kernels. These integral equations efficiently solve the inverse problem, taking into account both the time-dependent efficiencies and the availability of generation/storage of each energy storage technology. In this analysis a direct numerical method is employed to find the least-cost dispatch of the available storages. The proposed collocation-type numerical method has second-order accuracy and enjoys self-regularization properties, which are associated with confidence levels of system demand. This adaptive approach is suitable for energy storage optimisation in real time. The efficiency of the proposed methodology is demonstrated on the Single Electricity Market of the Republic of Ireland and Northern Ireland.
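As a hedged sketch of how a first-kind Volterra model can be solved by a direct collocation-type quadrature, the snippet below applies a product midpoint rule to a generic equation integral_0^t K(t, s) x(s) ds = f(t); the kernel K and right-hand side f are placeholders, and the paper's piecewise-kernel storage model and dispatch details are not reproduced here.

```python
import numpy as np

def solve_volterra_first_kind(K, f, t):
    """Direct quadrature (midpoint product rule) for a first-kind Volterra equation
        integral_0^t K(t, s) x(s) ds = f(t).

    K : callable K(t, s), the (possibly piecewise continuous) kernel
    f : callable f(t), the right-hand side with f(0) = 0
    t : increasing grid of collocation points, t[0] = 0
    """
    n = len(t) - 1
    h = np.diff(t)
    mid = (t[:-1] + t[1:]) / 2.0      # midpoints where x is approximated
    x = np.zeros(n)
    for i in range(n):
        # Contribution of already-determined values on [0, t[i]]
        acc = sum(K(t[i + 1], mid[j]) * x[j] * h[j] for j in range(i))
        # Solve the collocation condition at t[i+1] for the newest value
        x[i] = (f(t[i + 1]) - acc) / (K(t[i + 1], mid[i]) * h[i])
    return mid, x
```

In such schemes the grid spacing itself plays the role of a regularization parameter for the ill-posed first-kind equation, which is the sense in which these direct methods are described as self-regularizing.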
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement’s structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The nondestructive FWD test drops weights on the pavement to simulate traffic loads and measures the resulting pavement deflection basins. Backcalculation of pavement geometry and layer properties from FWD deflections is a difficult inverse problem, and its solution with conventional mathematical methods is often challenging due to the ill-posed nature of the problem. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), as well as numerical analysis techniques to properly simulate the geomechanical system. A widely used layered pavement analysis program, ILLI-PAVE, was employed in the analyses of various flexible pavement types, including full-depth asphalt and conventional flexible pavements, built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate as transportation geomaterials were also considered. A computer program, the Soft Computing Based System Identifier or SOFTSYS, was developed. In SOFTSYS, ANNs were used as surrogate models to provide faster solutions of the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop the SOFTSYS models. The solution to the inverse problem for multi-layered pavements is computationally hard to achieve and is often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered SOFTSYS model performance. Still, SOFTSYS models were shown to work effectively with the synthetic data obtained from ILLI-PAVE finite element solutions. In general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For SOFTSYS validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the very promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations of the SOFTSYS data also produced meaningful results. The thickness data obtained from Ground Penetrating Radar testing matched reasonably well with the predictions from the SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection-data variability from the FWD tests.
The backcalculated asphalt concrete layer thickness results matched better in the case of full-depth asphalt flexible pavements built on lime stabilized soils compared to conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field constructed asphalt layer thicknesses.
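The following sketch illustrates, under stated assumptions, the kind of surrogate-assisted backcalculation loop the abstract describes: a hypothetical predict_deflections function stands in for the trained ANN surrogate of ILLI-PAVE, and a simple genetic algorithm searches the layer-parameter space for the best match to a measured FWD basin. Parameter ranges, operators and settings are illustrative and not taken from SOFTSYS.

```python
import numpy as np

def backcalculate_ga(measured, predict_deflections, bounds,
                     pop_size=60, n_gen=200, rng=np.random.default_rng(0)):
    """Genetic-algorithm search for pavement layer parameters.

    measured            : observed FWD deflection basin (1D array)
    predict_deflections : surrogate model (e.g., an ANN trained on ILLI-PAVE runs)
                          mapping a parameter vector to a deflection basin
    bounds              : (n_params, 2) array of lower/upper limits for each
                          layer modulus or thickness
    """
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))

    def fitness(p):
        # Root-mean-square mismatch between predicted and measured basins
        return np.sqrt(np.mean((predict_deflections(p) - measured) ** 2))

    for _ in range(n_gen):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]       # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(len(lo)) < 0.5, a, b)    # uniform crossover
            child += rng.normal(0, 0.02 * (hi - lo))             # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])

    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmin(scores)]
```

Because the surrogate is cheap to evaluate, the population search can afford the many forward evaluations that direct use of a nonlinear finite element program would make impractical.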
Abstract:
Scientific curiosity, exploration of georesources and environmental concerns are pushing the geoscientific research community toward subsurface investigations of ever-increasing complexity. This review explores various approaches to formulate and solve inverse problems in ways that effectively integrate geological concepts with geophysical and hydrogeological data. Modern geostatistical simulation algorithms can produce multiple subsurface realizations that are in agreement with conceptual geological models and statistical rock physics can be used to map these realizations into physical properties that are sensed by the geophysical or hydrogeological data. The inverse problem consists of finding one or an ensemble of such subsurface realizations that are in agreement with the data. The most general inversion frameworks are presently often computationally intractable when applied to large-scale problems and it is necessary to better understand the implications of simplifying (1) the conceptual geological model (e.g., using model compression); (2) the physical forward problem (e.g., using proxy models); and (3) the algorithm used to solve the inverse problem (e.g., Markov chain Monte Carlo or local optimization methods) to reach practical and robust solutions given today's computer resources and knowledge. We also highlight the need to not only use geophysical and hydrogeological data for parameter estimation purposes, but also to use them to falsify or corroborate alternative geological scenarios.
Abstract:
In this paper, we present a control strategy design technique for an autonomous underwater vehicle based on solutions to the motion planning problem derived from differential geometric methods. The motion planning problem is motivated by the practical application of surveying the hull of a ship, with implications for harbor and port security. In recent years, engineers and researchers have been collaborating on automating ship hull inspections by employing autonomous vehicles. Despite the progress made, human intervention is still necessary at this stage. To increase the functionality of these autonomous systems, we focus on developing model-based control strategies for survey missions around challenging regions, such as the bulbous bow region of a ship. Recent advances in differential geometry have given rise to the field of geometric control theory. This has proven to be an effective framework for control strategy design for mechanical systems, and has recently been extended to applications for underwater vehicles. Advantages of geometric control theory include the exploitation of symmetries and nonlinearities inherent to the system. Here, we examine the posed inspection problem from a path planning viewpoint, applying recently developed techniques from the field of differential geometric control theory to design the control strategies that steer the vehicle along the prescribed path. Three potential scenarios for surveying a ship's bulbous bow region are motivated for path planning applications. For each scenario, we compute the control strategy and implement it on a test-bed vehicle. Experimental results are analyzed and compared with theoretical predictions.
Abstract:
In transport networks, Origin-Destination matrices (ODM) are classically estimated from road traffic counts, whereas recent technologies also grant access to sample car trajectories. One example is the deployment in cities of Bluetooth scanners that measure the trajectories of Bluetooth-equipped cars. Exploiting such sample trajectory information, the classical ODM estimation problem is here extended into a link-dependent ODM (LODM) one. This much larger estimation problem is formulated here in variational form as an inverse problem. We develop a convex optimization resolution algorithm that incorporates network constraints. We study the results of the proposed algorithm on simulated network traffic.
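A minimal sketch of the variational formulation, assuming only link counts and a known assignment matrix: the link-dependent OD flows x are found by projected gradient descent on a least-squares data term with a nonnegativity constraint. The full method in the paper additionally incorporates trajectory-sample and network constraints, which are omitted here.

```python
import numpy as np

def estimate_lodm(A, counts, x0=None, step=None, n_iter=5000):
    """Projected-gradient sketch for a link-count-constrained OD estimation:
        minimize 0.5 * ||A x - counts||^2   subject to  x >= 0,
    where x stacks the link-dependent OD flows and A maps them to link counts.
    (A real LODM formulation would add trajectory-sample and network constraints.)
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step size for this quadratic
    for _ in range(n_iter):
        grad = A.T @ (A @ x - counts)
        x = np.maximum(x - step * grad, 0.0)       # projection onto x >= 0
    return x
```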
Abstract:
A graph is said to be k-variegated if its vertex set can be partitioned into k equal parts such that each vertex is adjacent to exactly one vertex from every other part not containing it. Bednarek and Sanders [1] posed the problem of characterizing k-variegated graphs. V.N. Bhat-Nayak, S.A. Choudum and R.N. Naik [2] gave the characterization of 2-variegated graphs. In this paper we characterize k-variegated graphs for k ≥ 3.
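For readers who want to test the definition computationally, here is a small illustrative checker (not part of the paper) that verifies the k-variegated property directly from an adjacency structure and a candidate partition.

```python
def is_k_variegated(adj, parts):
    """Check the defining property directly.

    adj   : dict mapping each vertex to the set of its neighbours
    parts : list of k disjoint, equal-sized vertex sets covering all vertices

    Every vertex must be adjacent to exactly one vertex in each part other
    than its own.
    """
    if len({len(p) for p in parts}) != 1:           # parts must have equal size
        return False
    for i, block in enumerate(parts):
        for v in block:
            for j, other in enumerate(parts):
                if i == j:
                    continue
                if len(adj[v] & other) != 1:        # exactly one neighbour per other part
                    return False
    return True
```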
Abstract:
Perceiving students, science students especially, as mere consumers of facts and information belies the importance of a need to engage them with the principles underlying those facts and is counter-intuitive to the facilitation of knowledge and understanding. Traditional didactic lecture approaches need a re-think if student classroom engagement and active learning are to be valued over fact memorisation and fact recall. In our undergraduate biomedical science programs across Years 1, 2 and 3 in the Faculty of Health at QUT, we have developed an authentic learning model with an embedded suite of pedagogical strategies that foster classroom engagement and allow for active learning in the sub-discipline area of medical bacteriology. The suite of pedagogical tools we have developed has been designed to enable their translation, with appropriate fine-tuning, to most biomedical and allied health discipline teaching and learning contexts. Indeed, aspects of the pedagogy have been successfully translated to the nursing microbiology study stream at QUT. The aims underpinning the pedagogy are for our students to: (1) Connect scientific theory with scientific practice in a more direct and authentic way, (2) Construct factual knowledge and facilitate a deeper understanding, and (3) Develop and refine their higher order flexible thinking and problem solving skills, both semi-independently and independently. The mindset and role of the teaching staff are critical to this approach since, for the strategy to be successful, tertiary teachers need to abandon traditional instructional modalities based on one-way information delivery. Face-to-face classroom interactions between students and lecturer enable realisation of pedagogical aims (1), (2) and (3). The strategy we have adopted encourages teachers to view themselves more as expert guides in what is very much a student-focused process of scientific exploration and learning. Specific pedagogical strategies embedded in the authentic learning model we have developed include: (i) interactive lecture-tutorial hybrids or lectorials featuring teacher role-plays as well as class-level question-and-answer sessions, (ii) inclusion of “dry” laboratory activities during lectorials to prepare students for the wet laboratory to follow, (iii) real-world problem-solving exercises conducted during both lectorials and wet laboratory sessions, and (iv) designing class activities and formative assessments that probe a student’s higher order flexible thinking skills. Flexible thinking in this context encompasses analytical, critical, deductive, scientific and professional thinking modes. The strategic approach outlined above is designed to provide multiple opportunities for students to apply principles flexibly according to a given situation or context, to adapt methods of inquiry strategically, to go beyond mechanical application of formulaic approaches, and to self-appraise, as much as possible, their own thinking and problem solving. The pedagogical tools have been developed within both workplace (real world) and theoretical frameworks. The philosophical core of the pedagogy is a coherent pathway of teaching and learning which we, and many of our students, believe is more conducive to student engagement and active learning in the classroom.
Qualitative and quantitative data derived from online and hardcopy evaluations, solicited and unsolicited student and graduate feedback, anecdotal evidence as well as peer review indicate that: (i) our students are engaging with the pedagogy, (ii) a constructivist, authentic-learning approach promotes active learning, and (iii) students are better prepared for workplace transition.
Abstract:
The discovery of magnetic superconductors has posed the problem of the coexistence of two kinds of order (magnetic and superconducting) in some temperature intervals in these systems. New microscopic mechanisms developed by us to explain the coexistence and reentrant behaviour are reported. The mechanism for antiferromagnetic superconductors, which shows enhancement of superconductivity below the magnetic transition, is found to be relevant for rare-earth systems having less than half-filled f-atomic shells. The theory will be compared with the experimental results for the SmRh4B4 system. A phenomenological treatment based on a generalized Ginzburg-Landau approach will also be presented to explain the anomalous behaviour of the second critical field in some antiferromagnetic superconductors. These magnetic superconductors provide two kinds of Bose fields, namely phonons and magnons, which interact with each other and also with the conduction electrons. Theoretical studies of the effects of the excitations of these modes on superconducting pairing and magnetic ordering in these systems will be discussed.
Abstract:
Theoretical approaches are of fundamental importance to predict the potential impact of waste disposal facilities on ground water contamination. Appropriate design parameters are generally estimated by fitting theoretical models to data gathered from field monitoring or laboratory experiments. Transient through-diffusion tests are generally conducted in the laboratory to estimate the mass transport parameters of the proposed barrier material. These parameters are usually estimated either by approximate eye-fitting calibration or by combining the solution of the direct problem with any available gradient-based techniques. In this work, an automated, gradient-free solver is developed to estimate the mass transport parameters of a transient through-diffusion model. The proposed inverse model uses a particle swarm optimization (PSO) algorithm that is based on the social behavior of animals searching for food sources. The finite difference numerical solution of the forward model is integrated with the PSO algorithm to solve the inverse problem of parameter estimation. The working principle of the new solver is demonstrated and the mass transport parameters are estimated from laboratory through-diffusion experimental data. An inverse model based on a standard gradient-based technique is formulated for comparison with the proposed solver. A detailed comparative study is carried out between the conventional methods and the proposed solver. The present automated technique is found to be very efficient and robust. The mass transport parameters are obtained with great precision.
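A compact sketch of the gradient-free estimation idea, assuming a generic misfit function that wraps the finite-difference through-diffusion forward model; the swarm settings and names below are illustrative rather than those used in the paper.

```python
import numpy as np

def pso_fit(misfit, bounds, n_particles=30, n_iter=300,
            w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(1)):
    """Particle swarm optimization of a misfit function over box-bounded parameters.

    misfit : objective, e.g. the sum of squared differences between the finite-difference
             through-diffusion model output and laboratory concentration data
    bounds : (n_params, 2) lower/upper limits for each transport parameter
    """
    lo, hi = bounds[:, 0], bounds[:, 1]
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([misfit(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Velocity update: inertia plus pull toward personal and global bests
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([misfit(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest
```

A hypothetical call would look like pso_fit(lambda p: np.sum((forward_model(p) - observed) ** 2), bounds), where forward_model and observed stand in for the finite-difference solver and the laboratory data.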
Abstract:
We discuss the inverse problem associated with the propagation of the field autocorrelation of light through a highly scattering object like tissue. In the first part of the work, we reconstructed the optical absorption coefficient mu(a) and the particle diffusion coefficient D_B from simulated measurements, which are integrals of a quantity computed from the measured intensity and intensity autocorrelation g(2)(tau) at the boundary. In the second part we recover the mean square displacement (MSD) distribution of particles in an inhomogeneous object from the sampled g(2)(tau) measured on the boundary. From the MSD, we compute the storage and loss moduli distributions in the object. We have devised computationally easy methods to construct the sensitivity matrices which are used in the iterative reconstruction algorithms for recovering these parameters from the measurements. The results of the reconstruction of mu(a), D_B, MSD and the viscoelastic parameters, which are presented, show reasonably good positional and quantitative accuracy.
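As a schematic of the sensitivity-matrix-based iterative reconstruction mentioned above, the sketch below performs damped Gauss-Newton updates; the forward model, Jacobian builder and damping value are placeholders, and the actual measurement operators for the g(2)(tau) data are not reproduced.

```python
import numpy as np

def iterative_reconstruction(forward, jacobian, y_meas, x0,
                             alpha=1e-2, n_iter=20):
    """Gauss-Newton-style reconstruction of parameter maps (e.g., mu_a, D_B or MSD)
    from boundary measurements, using a sensitivity (Jacobian) matrix.

    forward  : model mapping the parameter vector x to predicted boundary data
    jacobian : function returning the sensitivity matrix J at x
    y_meas   : measured boundary data
    alpha    : Tikhonov damping that stabilizes the ill-conditioned normal equations
    """
    x = x0.copy()
    for _ in range(n_iter):
        r = y_meas - forward(x)                    # data residual
        J = jacobian(x)                            # sensitivity matrix at current estimate
        # Damped normal equations: (J^T J + alpha I) dx = J^T r
        dx = np.linalg.solve(J.T @ J + alpha * np.eye(J.shape[1]), J.T @ r)
        x = x + dx
    return x
```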