917 results for Inverse Problem in Optics
Abstract:
The Analytic Hierarchy Process (AHP) is one of the most popular methods used in Multi-Attribute Decision Making. The Eigenvector Method (EM) and distance-minimizing methods such as the Least Squares Method (LSM) are among the possible tools for computing the priorities of the alternatives. A method for generating all solutions of the LSM problem for 3 × 3 and 4 × 4 matrices is discussed in the paper. Our algorithms are based on the theory of resultants.
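The Eigenvector Method mentioned in the abstract can be sketched briefly: the priority vector is the normalized principal eigenvector of the pairwise comparison matrix, obtainable by power iteration. The 3 × 3 matrix below is an illustrative example, not one from the paper.

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix (not from the paper).
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
])

def em_priorities(A, iters=100):
    """Eigenvector Method: normalized principal eigenvector via power iteration."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()   # keep the priorities summing to one
    return w

w = em_priorities(A)  # priority weights of the three alternatives
```

The LSM approach discussed in the paper instead minimizes the squared distances between the matrix entries and the ratios of the priorities, which requires the resultant-based machinery of the paper rather than a simple iteration.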
Abstract:
The critical approach has by now become dominant in international, and above all European, organization studies, yet references to it are still scarce in the Hungarian literature. In this paper the authors discuss how organizational practice can be criticized from the perspective of Critical Management Studies (CMS), and why mainstream management theories merit criticism. In the main part of the study they draw theoretical distinctions: on the one hand they separate the critical approach from mainstream organization theories, and on the other they distinguish, from several viewpoints, among the various critical approaches that all belong under the CMS umbrella. True to the critical spirit, however, their aim was not mere exposition of theory: this introductory, problem-raising study (and the article series planned to follow it) is also intended as an opening for debate. They hope to generate a genuine, substantive dialogue on the questions raised within Hungarian management scholarship (among researchers, educators and practitioners), one that may also affect organizational practice. _____ Critical Management Studies (CMS) as a field of organization studies (OS) has become central internationally, especially in Europe. Yet its appearance is still very rare in the Hungarian OS literature. In this study the authors first discuss how today's dominant organizational practices, along with mainstream management and organization theories, are to be criticized from a Critical Management perspective. In the main section, so as to define CMS, they make important theoretical distinctions, first between CMS and mainstream organization theories in general, and then among the different critical approaches that nevertheless all fall under the broad CMS umbrella.
But, in line with a truly critical attitude, they do not stop at theoretical discussion: the purpose of this introductory paper is also to address important problems in both the theory and the practice of organization and management. It could therefore serve to open an important debate or dialogue in the Hungarian academic community (researchers, educators and other professionals), a discussion that could have real influence on organizational practice too.
Abstract:
This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model in an MTO setting, where the production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and mostly lead to heuristic solutions. The first phase of the study focuses on a formal definition of the problem. Mathematical programming techniques are applied to model its objective, decision variables, and constraints. A commercial solver, CPLEX, is used to solve the resulting mixed-integer linear programming model on small instances to validate the mathematical formulation. The computational results show that a commercial solver is not practical for problems of industrial size. The second phase of the study focuses on developing an effective solution approach for large-scale instances. The proposed approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems. Using computer simulation, an experiment is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted most profit rule performs best. The shifting-bottleneck and earliest-operation-finish-time rules are the best scheduling rules. For lot sizing, the proposed minimum cost increase heuristic, based on the Dixon-Silver method, performs best when the demand-to-capacity ratio at the bottleneck machine is high; the proposed minimum cost heuristic, based on the Wagner-Whitin algorithm, is best for shops with a low demand-to-capacity ratio.
The proposed heuristic is applied to an industrial case to further evaluate its performance. The results show an average total-profit improvement of 16.62%. This research contributes to the production planning community a complete mathematical definition of the problem and an effective solution approach for solving it at industrial scale.
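As an illustration of the order-selection step, here is a minimal sketch of a greedy rule in the spirit of the "weighted most profit" rule; the data, the ranking by profit per unit of bottleneck capacity, and the function name are assumptions for illustration, not the thesis's exact definition.

```python
# Candidate orders: (order_id, profit, hours at the bottleneck machine).
# Hypothetical data; the field meanings are assumptions for this sketch.
orders = [
    ("O1", 900.0, 10.0),
    ("O2", 500.0, 4.0),
    ("O3", 700.0, 8.0),
    ("O4", 300.0, 5.0),
]
capacity = 18.0  # available bottleneck hours in the planning horizon

def select_orders(orders, capacity):
    """Greedily accept orders ranked by profit per bottleneck hour."""
    ranked = sorted(orders, key=lambda o: o[1] / o[2], reverse=True)
    accepted, used = [], 0.0
    for oid, profit, hours in ranked:
        if used + hours <= capacity:
            accepted.append(oid)
            used += hours
    return accepted
```

In the thesis this step is only one of three, iterated with lot sizing and lot scheduling until a consistent plan is found.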
Abstract:
This work investigates theoretical properties of symmetric and anti-symmetric kernels. The first chapters give an overview of the theory of kernels used in supervised machine learning. The central focus is the regularized least squares algorithm, which is motivated as a problem of function reconstruction through an abstract inverse problem. A brief review of reproducing kernel Hilbert spaces shows how kernels define an implicit hypothesis space with multiple equivalent characterizations and how this space may be modified by incorporating prior knowledge. Mathematical results on the abstract inverse problem, in particular spectral properties, pseudoinverses and regularization, are recalled and then specialized to kernels. Symmetric and anti-symmetric kernels are applied to relation-learning problems that incorporate the prior knowledge that the relation is symmetric or anti-symmetric, respectively. Theoretical properties of these kernels are proved in a draft on which this thesis is based and are comprehensively referenced here. The proofs show that these kernels are guaranteed to learn only symmetric or anti-symmetric relations, and that they can learn any such relation relative to the original kernel modified to learn only symmetric or anti-symmetric parts. Further results establish spectral properties of these kernels, the central result being a simple inequality for the trace of the estimator, also called the effective dimension. This quantity is used in learning bounds to guarantee smaller variance.
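A minimal sketch of how a base kernel on pairs can be symmetrized or anti-symmetrized; the construction below is the standard one, but the function names and the Gaussian base kernel are illustrative, not the thesis's notation.

```python
import numpy as np

def k(x, y, gamma=1.0):
    """Illustrative scalar Gaussian base kernel."""
    return np.exp(-gamma * (x - y) ** 2)

# For inputs that are pairs, K((a, b), (c, d)) = k(a, c) * k(b, d) is a base
# kernel on pairs; its symmetrized and anti-symmetrized versions are:
def K_sym(a, b, c, d):
    return 0.5 * (k(a, c) * k(b, d) + k(a, d) * k(b, c))

def K_anti(a, b, c, d):
    return 0.5 * (k(a, c) * k(b, d) - k(a, d) * k(b, c))

# Swapping one pair's arguments leaves K_sym unchanged and flips the sign of
# K_anti, so estimators built from them represent only symmetric
# (respectively anti-symmetric) relations.
```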
Abstract:
The present thesis is about the inverse problem in differential Galois theory. Given a differential field, the inverse problem asks which linear algebraic groups can be realized as differential Galois groups of Picard-Vessiot extensions of this field. In this thesis we concentrate on the realization of the classical groups as differential Galois groups. We introduce a method for a very general realization of these groups: for the classical groups of Lie rank $l$ we present explicit linear differential equations whose coefficients are differential polynomials in $l$ differential indeterminates over an algebraically closed field of constants $C$, i.e. our differential ground field is purely differentially transcendental over the constants. For the groups of type $A_l$, $B_l$, $C_l$, $D_l$ and $G_2$ we obtained these realizations at the same time in the spirit of Abhyankar's program 'Nice Equations for Nice Groups'. Here the choice of the defining matrix is important. We found that an educated choice of $l$ negative roots for the parametrization, together with the positive simple roots, leads to a nice differential equation and at the same time defines a sufficiently general element of the Lie algebra. Unfortunately, for the groups of type $F_4$ and $E_6$ the linear differential equations for such elements are of enormous length. In these cases we therefore keep the defining matrix differential equation, which also has a simple and pleasant shape. The basic idea of the realization is to apply an upper and a lower bound criterion for the differential Galois group to our parameter equations and to show that both bounds coincide. Upper and lower bound criteria can be found in the literature. Here we use only the upper bound, since the application of the known lower bound criterion requires an important condition to be satisfied.
If the differential ground field is $C_1$, e.g. $C(z)$ with the standard derivation, this condition is automatically satisfied. Since our differential ground field is purely differentially transcendental over $C$, we have no information on whether this condition holds. The main part of this thesis is the development of an alternative lower bound criterion and its application. We introduce the specialization bound, which states that the differential Galois group of a specialization of the parameter equation is contained in the differential Galois group of the parameter equation. For its application we thus need a differential equation over $C(z)$ with given differential Galois group. A modification of a result of Mitschi and Singer yields such an equation over $C(z)$ up to differential conjugation, i.e. up to transformation to the required shape. The transformation of their equation into a specialization of our parameter equation is carried out for each of the above groups in the respective transformation lemma.
Abstract:
The gravity inversion method is a mathematical procedure that can be used to estimate the basement relief of a sedimentary basin. However, the inverse problem in potential-field methods has neither a unique nor a stable solution, so additional information (other than gravity measurements) must be supplied by the interpreter to transform it into a well-posed problem. This dissertation presents the application of a gravity inversion method to estimate the basement relief of the onshore Potiguar Basin. The density contrast between sediments and basement is assumed to be known and constant. The proposed methodology discretizes the sedimentary layer into a grid of juxtaposed rectangular prisms whose thicknesses correspond to the depth to basement, the parameter to be estimated. To stabilize the inversion I introduce constraints in accordance with the known geologic information. The method minimizes an objective function that requires the model not only to be smooth and close to the seismic-derived model, used as a reference model, but also to honor well-log constraints. The latter are introduced through logarithmic barrier terms in the objective function. The inversion was applied so as to simulate different phases of the exploration development of a basin, in distinct scenarios: the first used only gravity data and a plain reference model; the second was divided into two cases, in which either borehole-log information or the seismic model was incorporated into the process. Finally, I incorporated the basement depth from seismic interpretation into the inversion as a reference model and imposed depth constraints from boreholes using the primal logarithmic barrier method.
As a result, the estimated basement relief in every scenario satisfactorily reproduced the basin framework, and the incorporation of the constraints improved the definition of basement depth. The joint use of surface gravity data, seismic imaging and borehole logging information makes the process more robust and improves the estimate, providing a result closer to the actual basement relief. In addition, it is worth remarking that the result obtained in the first scenario already provided a very coherent basement relief when compared to the known basin framework. This is significant, given the differences in cost and environmental impact between gravimetric surveys on the one hand and seismic surveys and well drilling on the other.
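The reference-model idea can be illustrated with a schematic Tikhonov-style step. The sensitivity matrix and data below are synthetic, well-conditioned stand-ins, and the dissertation's actual formulation also includes smoothness and logarithmic barrier terms, omitted here.

```python
import numpy as np

# Gravity data d are related to prism thicknesses m by a sensitivity
# matrix G (a synthetic stand-in, not a real gravity kernel). The estimate
# minimizes ||G m - d||^2 + lam * ||m - m_ref||^2, pulling the solution
# toward a reference model m_ref.

rng = np.random.default_rng(0)
n = 20
G = np.eye(n) + 0.05 * rng.normal(size=(n, n))  # stand-in sensitivity matrix
m_true = np.sin(np.linspace(0.0, np.pi, n))     # synthetic basement depths
d = G @ m_true                                  # noise-free synthetic data
m_ref = np.zeros(n)                             # plain reference model
lam = 1e-2                                      # regularization weight

# Normal equations of the regularized least-squares problem.
A = G.T @ G + lam * np.eye(n)
b = G.T @ d + lam * m_ref
m_est = np.linalg.solve(A, b)
```

Replacing `m_ref` with a seismic-derived depth model is what the second and third scenarios of the dissertation do; the barrier terms additionally force the estimate to honor well depths exactly.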
Abstract:
This work formulates a methodology for the automatic interpretation of magnetic-field data, making it possible to determine the boundaries and magnetization of each body. The methodology exploits the abrupt lateral magnetization variations of the bodies, which are represented by discontinuous polynomials known as Walsh polynomials. Several new concepts were developed in applying Walsh polynomials to the inversion of aeromagnetic data, among them: (i) an optimal algorithm to generate a set of "quasi-orthogonal" polynomials based on the Walsh magnetization distribution; (ii) the use of damped least squares to stabilize the inverse solution; (iii) an investigation of the non-invariance problems inherent in the use of Walsh polynomials; and (iv) an investigation of the choice of polynomial order, taking into account resolution limits and the behavior of the eigenvalues. Using these characteristics of the magnetized bodies it is possible to formulate the forward problem, in which the magnetization of the bodies follows a Walsh distribution, as well as the inverse problem, in which the magnetization generating the observed field follows a Walsh series. Before the method can be used, a first estimate of the location of the magnetic sources is required. The methodology developed by LOURES (1991) was chosen for this purpose; it is based on Euler's homogeneity equation and requires knowledge of the magnetic field and its derivatives. To test the methodology on real data, a region in the Alto Amazonas sedimentary basin was chosen, with data from an aeromagnetic survey carried out by PETROBRÁS.
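Walsh functions are piecewise-constant, ±1-valued functions and thus well suited to representing abrupt magnetization contrasts. A minimal sketch (illustrative only, not the thesis's algorithm) builds them in Hadamard ordering via Sylvester's construction:

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of size n (a power of two) via Sylvester's construction."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Rows of H are discrete Walsh functions (Hadamard ordering): +/-1-valued
# and mutually orthogonal, so H @ H.T = n * I.
H = hadamard(8)
```

A magnetization profile with sharp lateral jumps can then be expanded in a few of these rows, which is the representation the inversion estimates.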
Abstract:
This work sets out to evaluate the potential benefits and pitfalls of using a priori information to help solve the magnetoencephalographic (MEG) inverse problem. In chapter one the forward problem in MEG is introduced, together with a scheme that demonstrates how a priori information can be incorporated into the inverse problem. Chapter two contains a literature review of techniques currently used to solve the inverse problem. Emphasis is placed on the kind of a priori information used by each of these techniques and the ease with which additional constraints can be applied. The formalism of the FOCUSS algorithm is shown to allow the incorporation of a priori information in an insightful and straightforward manner. Chapter three describes how anatomical constraints, in the form of a realistically shaped source space, can be extracted from a subject's Magnetic Resonance Image (MRI). The use of such constraints relies on accurate co-registration of the MEG and MRI co-ordinate systems. Variations of the two main co-registration approaches, based on fiducial markers or on surface matching, are described, and the accuracy and robustness of a surface matching algorithm is evaluated. Figures of merit introduced in chapter four are shown to give insight into the limitations of a typical measurement set-up and the potential value of a priori information. Chapter five shows that constrained dipole fitting and FOCUSS outperform unconstrained dipole fitting when data with low SNR are used, although errors in the constraints can reduce this advantage. Finally, chapter six demonstrates that the results of different localisation techniques give corroborative evidence about the location and activation sequence of the human visual cortical areas underlying the first 125 ms of the visual magnetic evoked response recorded with a whole-head neuromagnetometer.
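The FOCUSS idea of reweighted minimum-norm estimation can be sketched on a synthetic underdetermined system; the lead field and data below are random stand-ins, not real MEG quantities.

```python
import numpy as np

# Synthetic underdetermined system b = L x: 5 "sensors", 20 "sources".
rng = np.random.default_rng(1)
m, n = 5, 20
L = rng.normal(size=(m, n))          # stand-in lead field
x_true = np.zeros(n)
x_true[3], x_true[15] = 2.0, -1.0    # two active sources
b = L @ x_true                       # noise-free measurements

# Unconstrained minimum-norm estimate, the usual starting point.
x_mn = L.T @ np.linalg.solve(L @ L.T, b)

# One FOCUSS step: reweight by the current estimate; repeating this step
# progressively concentrates the solution on the strongest sources.
W = np.diag(x_mn)
x1 = W @ np.linalg.pinv(L @ W) @ b
```

Anatomical a priori information enters naturally here: restricting the columns of `L` to a cortically constrained source space is exactly the kind of constraint chapter three extracts from the MRI.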
Abstract:
This work describes a study of the mathematical methodology for solving the forward and inverse problems in Electrical Impedance Tomography (EIT). The study was motivated by the need to understand the inverse problem and its use in EIT image formation, and it made it possible to identify, through equations and programs, the internal structures of a body. To do so, one must first know the electric potentials acquired at the body's boundary. These potentials are produced by applying an electric current and are obtained mathematically, in the forward problem, from Laplace's equation. The Finite Element Method, together with the equations of electromagnetism, is used to solve the forward problem. The EIDORS software, using the concepts of forward and inverse problems, reconstructs EIT images that allow different methods of solving the inverse problem for the reconstruction of internal structures to be visualized and compared. The Tikhonov, NOSER, Laplace, hyperparametric and Total Variation methods were used to obtain an approximate (regularized) solution to the identification problem. In EIT, with the pre-established current boundary conditions and defined regions, the hyperparametric method gave the most adequate approximate solution for image reconstruction.
Abstract:
In this work we present the solution of a class of linear inverse heat conduction problems for the estimation of unknown heat source terms, with no prior information on the functional forms of the timewise and spatial dependence of the source strength, using the conjugate gradient method with an adjoint problem. After describing the mathematical formulation of a general direct problem and the procedure for the solution of the inverse problem, we show applications to three transient heat transfer problems: a one-dimensional cylindrical problem; a two-dimensional cylindrical problem; and a one-dimensional problem with two plates.
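For a linear source-estimation problem, the conjugate gradient iteration can be sketched on the normal equations. Here the gradient is formed directly with the transpose of a synthetic sensitivity matrix, standing in for the adjoint-problem computation described in the abstract.

```python
import numpy as np

# Synthetic linear model T = A q: 40 temperature readings, 10 source terms.
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 10))   # stand-in sensitivity matrix
q_true = rng.normal(size=10)    # "true" source strengths
T = A @ q_true                  # simulated noise-free data

# Conjugate gradient on the normal equations A^T A q = A^T T.
q = np.zeros(10)
r = A.T @ (T - A @ q)           # negative gradient of 0.5 * ||A q - T||^2
d = r.copy()
for _ in range(20):
    Ad = A.T @ (A @ d)
    alpha = (r @ r) / (d @ Ad)  # exact line-search step
    q = q + alpha * d
    r_new = r - alpha * Ad
    beta = (r_new @ r_new) / (r @ r)
    d = r_new + beta * d        # new conjugate search direction
    r = r_new
```

In the functional setting of the paper, `A.T` applied to the residual is replaced by solving the adjoint problem, and the unknown is a function of space and time rather than a finite vector.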
Abstract:
The present work proposes an inverse method to estimate the heat sources in a transient two-dimensional heat conduction problem in a rectangular domain with convective boundaries. The nonhomogeneous partial differential equation (PDE) is solved using the Integral Transform Method. The test function for the heat generation term is obtained from the chip geometry and the thermomechanics of cutting. The heat generation term is then estimated by the conjugate gradient method (CGM) with an adjoint problem for parameter estimation. Experimental trials were organized into six different conditions to provide heat sources of different intensities. The method is compared with others in the literature and its advantages are discussed. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
The thesis consists of three independent parts. Part I: Polynomial amoebas. We study the amoeba of a polynomial, as defined by Gelfand, Kapranov and Zelevinsky. A central role in the treatment is played by a certain convex function which is linear in each complement component of the amoeba, which we call the Ronkin function. This function is used in two different ways. First, we use it to construct a polyhedral complex, which we call a spine, approximating the amoeba. Second, the Monge-Ampère measure of the Ronkin function has interesting properties which we explore. This measure can be used to derive an upper bound on the area of an amoeba in two dimensions. We also obtain results on the number of complement components of an amoeba, and consider possible extensions of the theory to varieties of codimension higher than 1. Part II: Differential equations in the complex plane. We consider polynomials in one complex variable arising as eigenfunctions of certain differential operators, and obtain results on the distribution of their zeros. We show that in the limit when the degree of the polynomial approaches infinity, its zeros are distributed according to a certain probability measure. This measure has its support on the union of finitely many curve segments, and can be characterized by a simple condition on its Cauchy transform. Part III: Radon transforms and tomography. This part is concerned with different weighted Radon transforms in two dimensions, in particular the problem of inverting such transforms. We obtain stability results for this inverse problem for rather general classes of weights, including weights of attenuation type with data acquisition limited to a 180 degree range of angles. We also derive an inversion formula for the exponential Radon transform, with the same restriction on the angle.
Abstract:
In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models in which the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical terms, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the spaces considered causes a problem of non-continuity of the solution and hence a problem of inconsistency of the posterior distribution from a frequentist point of view (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this ill-posedness. The first consists of adopting a Tikhonov regularization scheme in the construction of the posterior distribution, yielding a new object that I call the regularized posterior distribution and that I propose as a solution of the inverse problem. The second approach consists of specifying a g-prior-type distribution on the parameter of interest. I then identify a class of models for which the prior distribution is able to correct for the ill-posedness even in infinite-dimensional problems. I study asymptotic properties of these proposed solutions and prove that, under a regularity condition satisfied by the true value of the parameter of interest, they are consistent in a frequentist sense. Having set out the general theory, I apply my Bayesian nonparametric methodology to different estimation problems. First, I apply the estimator to deconvolution and to hazard rate, density and regression estimation.
Then, I consider the estimation of an instrumental regression, which is useful in micro-econometrics when dealing with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas's (1978) tree-type models.
Abstract:
Assuming that the heat capacity of a body is negligible outside certain inclusions, the heat equation degenerates to a parabolic-elliptic interface problem. In this work we aim to detect these interfaces from thermal measurements on the surface of the body. We deduce an equivalent variational formulation for the parabolic-elliptic problem and give a new proof of unique solvability based on Lions's projection lemma. For the case in which the heat conductivity is higher inside the inclusions, we develop an adaptation of the factorization method to this time-dependent problem. In particular, this shows that the locations of the interfaces are uniquely determined by boundary measurements. The method also yields a numerical algorithm to recover the inclusions and thus the interfaces. We demonstrate how measurement data can be simulated numerically by coupling a finite element method with a boundary element method, and finally we present some numerical results for the inverse problem.
Abstract:
Radio-frequency (RF) coils are designed to induce homogeneous magnetic fields within some region of interest inside a magnetic resonance imaging (MRI) scanner. Loading the scanner with a patient disrupts the homogeneity of these fields and can lead to considerable degradation of the quality of the acquired image. In this paper, an inverse method is presented for designing RF coils in which the presence of a load (patient) within the MRI scanner is accounted for in the model. To approximate the finite length of the coil, a Fourier series expansion is considered for the coil current density and for the induced fields. Regularization is used to solve this ill-conditioned inverse problem for the unknown Fourier coefficients: the error between the induced and homogeneous target fields is minimized along with an additional penalty, chosen in this paper to represent the curvature of the coil windings. Smooth winding patterns are obtained for both unloaded and loaded coils. RF fields with a high level of homogeneity are obtained in the unloaded case, and a limit to the attainable homogeneity is observed in the loaded case.
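The regularized least-squares step described above can be sketched schematically; the field matrix, the target vector and the second-difference curvature penalty below are illustrative stand-ins, not the paper's coil model.

```python
import numpy as np

# Stand-in model: A maps 12 Fourier coefficients to the field at 30 target
# points; b is the homogeneous target field. Both are synthetic.
rng = np.random.default_rng(3)
m, n = 30, 12
A = rng.normal(size=(m, n))
b = np.ones(m)

# Second-difference operator as a curvature-like penalty on the coefficients.
D2 = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))[1:-1]
lam = 1e-3  # regularization weight: larger -> smoother windings, worse fit

# Solve min ||A c - b||^2 + lam * ||D2 c||^2 via the normal equations.
c = np.linalg.solve(A.T @ A + lam * (D2.T @ D2), A.T @ b)
```

Sweeping `lam` traces the trade-off the paper describes: in the loaded case the attainable field homogeneity is limited regardless of how the weight is chosen.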