873 results for Inverse computational method
Abstract:
[EN]The meccano method is a novel and promising mesh generation method for simultaneously creating adaptive tetrahedral meshes and volume parametrizations of a complex solid. We highlight that the method requires minimal user intervention and has a low computational cost. The method builds a 3-D triangulation of the solid as a deformation of an appropriate tetrahedral mesh of the meccano. The new mesh generator combines an automatic parametrization of surface triangulations, a local refinement algorithm for 3-D nested triangulations and a simultaneous untangling and smoothing procedure. At present, the procedure is fully automatic for a genus-zero solid. In this case, the meccano can be a single cube. The efficiency of the proposed technique is shown with several applications...
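As a purely generic illustration of the node-repositioning idea behind such untangling and smoothing, the minimal Laplacian-smoothing sketch below moves interior nodes toward the centroid of their neighbours while boundary nodes stay fixed; it is not the authors' optimization procedure, and the function and variable names are hypothetical.

# Illustrative sketch only: plain Laplacian smoothing of the interior nodes of a
# triangulation with fixed boundary nodes (not the meccano untangling/smoothing
# functional; names are hypothetical).
import numpy as np

def laplacian_smooth(nodes, neighbours, is_boundary, iterations=50):
    """nodes: (n, 3) coordinates; neighbours: list of index lists per node;
    is_boundary: boolean mask of nodes that must not move."""
    nodes = nodes.copy()
    for _ in range(iterations):
        new_nodes = nodes.copy()
        for i, nbrs in enumerate(neighbours):
            if is_boundary[i] or not nbrs:
                continue  # boundary nodes stay where the surface mapping put them
            new_nodes[i] = nodes[nbrs].mean(axis=0)  # move to centroid of neighbours
        nodes = new_nodes
    return nodes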
Abstract:
[EN]In this paper we review the novel meccano method. We summarize the main stages (subdivision, mapping, optimization) of this automatic tetrahedral mesh generation technique and we focus the study on complex genus-zero solids. In this case, our procedure only requires a surface triangulation of the solid. A crucial consequence of our method is the volume parametrization of the solid to a cube. Using this result, we construct volume T-meshes for isogeometric analysis. The efficiency of the proposed technique is shown with several examples. A comparison between the meccano method and standard mesh generation techniques is introduced…
Abstract:
[EN]We have recently introduced a new strategy, based on the meccano method [1, 2], to construct a T-spline parameterization of 2D and 3D geometries for the application of isogeometric analysis [3, 4]. The proposed method only demands a boundary representation of the geometry as input data. The algorithm obtains, as a result, a high-quality parametric transformation between the object and the parametric domain, i.e. the meccano. The key of the method lies in defining an isomorphic transformation between the parametric and physical T-meshes, finding the optimal position of the interior nodes once the meccano boundary nodes are mapped to the boundary of the physical domain…
Abstract:
[EN]This work presents a novel approach to solve a two-dimensional problem using an adaptive finite element approach. The most common strategy to deal with nested adaptivity is to generate a mesh that correctly represents the geometry and the input parameters, and to refine this mesh locally to obtain the most accurate solution. As opposed to this approach, the authors propose a technique using independent meshes for the geometry, the input data and the unknowns. Each particular mesh is obtained by a local nested refinement of the same coarse mesh in the parametric space…
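To give a flavour of local nested refinement of a coarse parametric mesh, the generic sketch below splits the cells of a square parametric domain whenever an error indicator exceeds a tolerance; the indicator, tolerance and cell representation are hypothetical, and this is only a schematic illustration, not the authors' algorithm.

# Generic sketch of local nested refinement in parametric space: cells whose
# indicator exceeds a tolerance are split into four children (hypothetical names).
def refine(cells, indicator, tol, max_level=5):
    """cells: list of (x0, y0, size, level) squares in the parametric space."""
    refined = []
    for (x0, y0, size, level) in cells:
        if level < max_level and indicator(x0, y0, size) > tol:
            half = size / 2.0
            refined += [(x0, y0, half, level + 1), (x0 + half, y0, half, level + 1),
                        (x0, y0 + half, half, level + 1), (x0 + half, y0 + half, half, level + 1)]
        else:
            refined.append((x0, y0, size, level))
    return refined

# Example: start from the unit square and refine where the (hypothetical) indicator is large.
mesh = [(0.0, 0.0, 1.0, 0)]
for _ in range(3):
    mesh = refine(mesh, indicator=lambda x, y, s: s * (x + y + 1.0), tol=0.4)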
Abstract:
In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models where the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation, or, in more technical words, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the considered spaces causes a problem of non-continuity of the solution and hence a problem of inconsistency, from a frequentist point of view, of the posterior distribution (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this problem of ill-posedness. The first one consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, so that I end up with a new object that I call the regularized posterior distribution and that I propose as a solution of the inverse problem. The second approach consists in specifying a prior distribution of the g-prior type on the parameter of interest. I then identify a class of models for which the prior distribution is able to correct for the ill-posedness even in infinite-dimensional problems. I study the asymptotic properties of these proposed solutions and I prove that, under some regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a "frequentist" sense. Once I have set out the general theory, I apply my Bayesian nonparametric methodology to different estimation problems. First, I apply this estimator to deconvolution and to hazard rate, density and regression estimation. Then, I consider the estimation of an instrumental regression, which is useful in micro-econometrics when we have to deal with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.
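For reference, the textbook Tikhonov scheme for a linear ill-posed equation is recalled below in schematic form; the thesis applies the same regularization idea to the posterior distribution itself, and the symbols (operator K, data Y, parameter φ, regularization weight α) are generic notation, not taken from the thesis.

% Textbook Tikhonov regularization of a linear ill-posed equation K\varphi = Y
% (schematic reminder; generic notation).
\[
  \hat{\varphi}_\alpha \;=\; \arg\min_{\varphi}\; \|K\varphi - Y\|^{2} + \alpha \|\varphi\|^{2}
  \;=\; (\alpha I + K^{*}K)^{-1} K^{*} Y , \qquad \alpha > 0 ,
\]
where $K^{*}$ is the adjoint of $K$ and the regularization parameter $\alpha$ restores continuity of the solution with respect to the data $Y$.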
Abstract:
Porous materials are widely used in many fields of industrial application to meet noise-reduction requirements that nowadays derive from increasingly strict regulations. The modeling of porous materials for vibro-acoustic applications is still a complex issue. Numerical simulations are often problematic in the case of real, complex geometries, especially in terms of computational times and convergence. At the same time, analytical models, even if partly limited by restrictive simplifying hypotheses, represent a powerful instrument to quickly capture the physics of the problem and general trends. In this context, a recently developed numerical method, the Cell Method, is described, implemented in the case of Biot's theory of poroelasticity and applied to representative cases. The peculiarity of the Cell Method is that it allows for a direct algebraic and geometrical discretization of the field equations, without any reduction to a weak integral form. The second part of the thesis then presents the case of the interaction between two poroelastic materials in contact, in the context of double porosity. The idea of using periodically repeated inclusions of a second porous material within a layer made of an original porous material is described. In particular, the problem is addressed by studying the analytical method and its efficiency. An analytical procedure for the simulation of heterogeneous layers is described and validated considering both absorption and transmission conditions; a comparison with the available numerical methods is performed.
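For reference, when the absorption performance of such layers is evaluated at normal incidence, the absorption coefficient is commonly computed from the surface impedance via the standard relation below; this is a generic reminder, not a formula quoted from the thesis.

% Standard normal-incidence relation between surface impedance and absorption
% coefficient (generic reminder, not quoted from the thesis).
\[
  \alpha \;=\; 1 - \left| \frac{Z_s - \rho_0 c_0}{Z_s + \rho_0 c_0} \right|^{2},
\]
where $Z_s$ is the surface impedance of the porous layer and $\rho_0 c_0$ is the characteristic impedance of air.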
Abstract:
Some fundamental biological processes such as embryonic development have been preserved during evolution and are common to species belonging to different phylogenetic positions, but are nowadays still largely unknown. The understanding of the cell morphodynamics leading to the formation of organized spatial distributions of cells, such as tissues and organs, can be achieved through the reconstruction of cell shapes and positions during the development of a live animal embryo. In this work we design a chain of image processing methods to automatically segment and track cell nuclei and membranes during the development of a zebrafish embryo, which has been largely validated as a model organism for understanding vertebrate development, gene function and healing/repair mechanisms in vertebrates. The embryo is first labeled through the ubiquitous expression of fluorescent proteins targeted to cell nuclei and membranes, and temporal sequences of volumetric images are acquired with laser scanning microscopy. Cell positions are detected by processing the nuclei images either through the generalized form of the Hough transform or by identifying nuclei positions as local maxima after a smoothing preprocessing step. Membrane and nucleus shapes are reconstructed by using PDE-based variational techniques such as the Subjective Surfaces and the Chan-Vese methods. Cell tracking is performed by combining the previously detected information on cell shape and position with biological regularization constraints. Our results are manually validated and reconstruct the formation of the zebrafish brain at the 7-8 somite stage, with all the cells tracked starting from the late sphere stage with less than 2% error for at least 6 hours. Our reconstruction opens the way to a systematic investigation of cellular behaviors, of the clonal origin and clonal complexity of brain organs, as well as of the contribution of cell proliferation modes and cell movements to the formation of local patterns and morphogenetic fields.
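A minimal sketch of the second nucleus-detection route mentioned above (smoothing followed by local-maximum picking) can be written with standard scipy.ndimage filters, as below; the parameter values, threshold rule and the way the volume is obtained are hypothetical, and this is not the exact pipeline of the thesis.

# Minimal sketch of nucleus-centre detection by Gaussian smoothing followed by
# local-maximum picking (hypothetical parameters).
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_nuclei(volume, sigma=2.0, window=5, threshold=0.2):
    """volume: 3-D numpy array of nuclear fluorescence intensities.
    Returns an (n, 3) array of voxel coordinates of candidate nucleus centres."""
    smooth = gaussian_filter(volume.astype(float), sigma=sigma)   # suppress noise
    local_max = maximum_filter(smooth, size=window)               # neighbourhood maxima
    peaks = (smooth == local_max) & (smooth > threshold * smooth.max())
    return np.argwhere(peaks)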
Abstract:
In the present work, the factorization method for detecting conductivity inhomogeneities in electrical impedance tomography is investigated on unbounded domains, specifically the half-plane and the half-space. As solution spaces for the direct problem, i.e. the determination of the electric potential for a given conductivity and a given boundary current, we introduce weighted Sobolev spaces. In these spaces, the existence of weak solutions of the direct problem is shown, and the validity of an integral representation for the solution of the Laplace equation, obtained in the case of homogeneous conductivity, is proved. Using the factorization method, we give an explicit characterization of inclusions whose conductivity jumps to a higher or lower value than that of the background. This also establishes, for this class of conductivities, the unique reconstructability of the inclusions from knowledge of the local Neumann-Dirichlet map. We have implemented the characterization of the inclusions obtained via the factorization method in a numerical algorithm and tested it in both the two- and the three-dimensional case with simulated, partially perturbed data. In contrast to other known reconstruction methods, the one presented here requires no a priori information about the number and shape of the inclusions and, being a non-iterative method, has a comparatively low computational cost.
Abstract:
The subject of this thesis is in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible about a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. Electrical impedance tomography (EIT) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations. This method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements. A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung, and noninvasive monitoring of heart function and blood flow.
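A linearisation-based EIT reconstruction of the kind mentioned above typically reduces to a regularized least-squares update of the conductivity; the sketch below shows that generic step only. The sensitivity matrix, data vector and regularization weight are hypothetical, and this is not the specific algorithm of the thesis.

# Generic one-step linearized EIT update: solve the Tikhonov-regularized normal
# equations (J^T J + lam I) delta_sigma = J^T delta_v for a conductivity perturbation.
# J, delta_v and lam are hypothetical placeholders.
import numpy as np

def linearized_step(J, delta_v, lam=1e-3):
    """J: (m, n) sensitivity matrix; delta_v: (m,) measured-minus-predicted voltages."""
    n = J.shape[1]
    lhs = J.T @ J + lam * np.eye(n)       # regularized normal equations
    rhs = J.T @ delta_v
    return np.linalg.solve(lhs, rhs)      # conductivity update on the mesh elements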
Abstract:
In this work, an improved protocol for inverse size-exclusion chromatography (ISEC) was established to assess important pore-structural data of porous silicas used as stationary phases in packed chromatographic columns. After the validity of the values generated by ISEC was checked by comparison with data obtained from traditional methods such as nitrogen sorption at 77 K (Study A), the method could be successfully employed as a valuable tool in the development of bonded poly(methacrylate)-coated silicas, where traditional methods generate partially incorrect pore-structural information (Study B). Study A: Different mesoporous silicas were converted by a pseudomorphic transition into ordered MCM-41-type silica while maintaining particle size and shape. The essential parameters from ISEC, such as specific surface area, average pore diameter, specific pore volume and pore connectivity, remained nearly the same, which was reflected in the same course of the theoretical plate height vs. linear velocity curves. Study B: In the development of bonded poly(methacrylate)-coated silicas for the reversed-phase separation of biopolymers, ISEC was the only method that generated valid pore-structural information for the polymer-coated materials. Synthesis procedures were developed to reproducibly obtain covalently bonded poly(methacrylate) coatings with good thermal stability on different base materials, employing both particulate and monolithic materials.
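In ISEC, pore-structural quantities are deduced from how deeply probe molecules of known size penetrate the pore volume; the textbook size-exclusion distribution coefficient below is the usual starting point for such analyses. It is a generic relation, not a value or formula quoted from this work.

% Textbook size-exclusion distribution coefficient used in ISEC (generic relation).
\[
  K_{\mathrm{SEC}} \;=\; \frac{V_e - V_0}{V_p},
  \qquad 0 \le K_{\mathrm{SEC}} \le 1 ,
\]
where $V_e$ is the elution volume of a probe of known size, $V_0$ the interstitial (void) volume and $V_p$ the total pore volume; plotting $K_{\mathrm{SEC}}$ against probe size yields the accessible pore-size distribution.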
Abstract:
The Factorization Method localizes inclusions inside a body from measurements on its surface. Without a priori knowing the physical parameters inside the inclusions, the points belonging to them can be characterized using the range of an auxiliary operator. The method relies on a range characterization that relates the range of the auxiliary operator to the measurements and is only known for very particular applications. In this work we develop a general framework for the method by considering symmetric and coercive operators between abstract Hilbert spaces. We show that the important range characterization holds if the difference between the inclusions and the background medium satisfies a coerciveness condition which can immediately be translated into a condition on the coefficients of a given real elliptic problem. We demonstrate how several known applications of the Factorization Method are covered by our general results and deduce the range characterization for a new example in linear elasticity.
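In schematic form, the range characterization referred to above is usually stated as follows; the notation (measurement operators Λ and Λ₀, auxiliary operator A, middle operator F) is generic and this is a simplified reminder of the standard result, not the precise theorem proved in this work.

% Schematic statement of the factorization-method range characterization
% (simplified reminder; generic notation).
\[
  \Lambda - \Lambda_0 \;=\; A^{*} F A ,
  \qquad
  \mathcal{R}\!\left( (\Lambda - \Lambda_0)^{1/2} \right) \;=\; \mathcal{R}\!\left( A^{*} \right)
  \quad \text{if } F \text{ is coercive},
\]
so that a point can be tested for membership in the inclusions by checking whether an explicitly known test function lies in the range of $(\Lambda - \Lambda_0)^{1/2}$.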
Abstract:
In electrical impedance tomography, one tries to recover the conductivity inside a physical body from boundary measurements of current and voltage. In many practically important situations, the investigated object has known background conductivity but it is contaminated by inhomogeneities. The factorization method of Andreas Kirsch provides a tool for locating such inclusions. Earlier, it has been shown that under suitable regularity conditions positive (or negative) inhomogeneities can be characterized by the factorization technique if the conductivity or one of its higher normal derivatives jumps on the boundaries of the inclusions. In this work, we use a monotonicity argument to generalize these results: We show that the factorization method provides a characterization of an open inclusion (modulo its boundary) if each point inside the inhomogeneity has an open neighbourhood where the perturbation of the conductivity is strictly positive (or negative) definite. In particular, we do not assume any regularity of the inclusion boundary or set any conditions on the behaviour of the perturbed conductivity at the inclusion boundary. Our theoretical findings are verified by two-dimensional numerical experiments.
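Numerically, the range test behind the factorization method is often implemented through a Picard-type criterion on the eigendecomposition of the measured operator difference; the sketch below shows that generic test. The matrix, the test vector and the eigenvalue cut-off are hypothetical, and this is not the exact implementation used in the cited numerical experiments.

# Generic Picard-type range test for the factorization method: a sampling point z is
# flagged as "inside" when sum_j |<g_z, v_j>|^2 / lambda_j stays small, i.e. g_z is
# (numerically) in the range of |Lambda_diff|^{1/2}. Inputs are hypothetical.
import numpy as np

def inclusion_indicator(lam_diff, g_z, eps=1e-12):
    """lam_diff: symmetric (m, m) matrix of boundary-measurement differences;
    g_z: (m,) test vector associated with the sampling point z."""
    w, v = np.linalg.eigh(lam_diff)        # eigenvalues / eigenvectors
    w = np.abs(w)                          # work with |Lambda_diff|
    coeffs = v.T @ g_z                     # expansion coefficients of g_z
    keep = w > eps * w.max()               # drop numerically zero eigenvalues
    series = np.sum(coeffs[keep] ** 2 / w[keep])
    return 1.0 / series                    # large value -> z likely inside an inclusion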
Abstract:
We consider a simple (but fully three-dimensional) mathematical model for the electromagnetic exploration of buried, perfect electrically conducting objects within the soil underground. Moving an electric device parallel to the ground at constant height in order to generate a magnetic field, we measure the induced magnetic field within the device, and factor the underlying mathematics into a product of three operations which correspond to the primary excitation, some kind of reflection on the surface of the buried object(s) and the corresponding secondary excitation, respectively. Using this factorization we are able to give a justification of the so-called sampling method from inverse scattering theory for this particular set-up.
Abstract:
The proton-nucleus elastic scattering at intermediate energies is a well-established method for the investigation of the nuclear matter distribution in stable nuclei and was recently applied also for the investigation of radioactive nuclei using the method of inverse kinematics. In the current experiment, the differential cross sections for proton elastic scattering on the isotopes $^{7,9,10,11,12,14}$Be and $^8$B were measured. The experiment was performed using the fragment separator at GSI, Darmstadt to produce the radioactive beams. The main part of the experimental setup was the time projection ionization chamber IKAR which was simultaneously used as hydrogen target and a detector for the recoil protons. Auxiliary detectors for projectile tracking and isotope identification were also installed. As results from the experiment, the absolute differential cross sections d$\sigma$/d$t$ as a function of the four momentum transfer $t$ were obtained. In this work the differential cross sections for elastic p-$^{12}$Be, p-$^{14}$Be and p-$^{8}$B scattering at low $t$ ($t \leq$~0.05~(GeV/c)$^2$) are presented. The measured cross sections were analyzed within the Glauber multiple-scattering theory using different density parameterizations, and the nuclear matter density distributions and radii of the investigated isotopes were determined. The analysis of the differential cross section for the isotope $^{14}$Be shows that a good description of the experimental data is obtained when density distributions consisting of separate core and halo components are used. The determined {\it rms} matter radius is $3.11 \pm 0.04 \pm 0.13$~fm. In the case of the $^{12}$Be nucleus the results showed an extended matter distribution as well. For this nucleus a matter radius of $2.82 \pm 0.03 \pm 0.12$~fm was determined. An interesting result is that the free $^{12}$Be nucleus behaves differently from the core of $^{14}$Be and is much more extended than it. The data were also compared with theoretical densities calculated within the FMD and the few-body models. In the case of $^{14}$Be, the calculated cross sections describe the experimental data well while, in the case of $^{12}$Be there are discrepancies in the region of high momentum transfer. Preliminary experimental results for the isotope $^8$B are also presented. An extended matter distribution was obtained (though much more compact as compared to the neutron halos). A proton halo structure was observed for the first time with the proton elastic scattering method. The deduced matter radius is $2.60 \pm 0.02 \pm 0.26$~fm. The data were compared with microscopic calculations in the frame of the FMD model and reasonable agreement was observed. The results obtained in the present analysis are in most cases consistent with the previous experimental studies of the same isotopes with different experimental methods (total interaction and reaction cross section measurements, momentum distribution measurements). For future investigation of the structure of exotic nuclei a universal detector system EXL is being developed. It will be installed at the NESR at the future FAIR facility where higher intensity beams of radioactive ions are expected. The usage of storage ring techniques provides high luminosity and low background experimental conditions. Results from the feasibility studies of the EXL detector setup, performed at the present ESR storage ring, are presented.
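When the density is written as a sum of core and halo components, the total mean-square matter radius follows by linearity of the density as the nucleon-number-weighted average of the two components; the relation below is that generic consequence (with assumed notation $A_c$, $A_h$), not a formula quoted from the work.

% Mean-square matter radius of a core-plus-halo density, by linearity of the density
% (generic relation; assumed notation).
\[
  \langle r^{2} \rangle_{m} \;=\; \frac{A_{c}\,\langle r^{2} \rangle_{c}
  + A_{h}\,\langle r^{2} \rangle_{h}}{A_{c} + A_{h}},
\]
where $A_{c}$ and $A_{h}$ are the numbers of nucleons assigned to the core and halo components, respectively.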
Abstract:
Bioinformatics, in the last few decades, has played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome is obtained, the major problem of knowing as much as possible about its coding regions becomes crucial. Protein sequence annotation is challenging and, due to the size of the problem, only computational approaches can provide a feasible solution. As recently pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. The present thesis describes a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called “The Bologna Annotation Resource Plus” (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences, characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) inside clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer three-dimensional structure (when a template is available). This is possible by way of cluster-specific HMM profiles that can be used to calculate reliable template-to-target alignments even in the case of distantly related proteins (sequence identity < 30%). Other BAR+-based applications have been developed during my doctorate, including the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, BAR+ is freely available as a web server for functional and structural protein sequence annotation at http://bar.biocomp.unibo.it/bar2.0.
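To give a flavour of how a stringent all-against-all alignment can define non-hierarchical clusters, the sketch below links sequences whose pairwise alignment passes identity and coverage thresholds and takes the connected components via union-find; the threshold values and the input format are hypothetical illustrations, not the actual BAR+ metric or parameters.

# Illustrative sketch of non-hierarchical clustering from all-against-all alignments:
# sequences are linked when a pairwise hit passes identity and coverage thresholds,
# and clusters are the connected components (union-find). Thresholds and the 'hits'
# format are hypothetical.
def cluster_sequences(n_seqs, hits, min_identity=40.0, min_coverage=90.0):
    """hits: iterable of (i, j, identity_percent, coverage_percent) alignment records."""
    parent = list(range(n_seqs))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for i, j, identity, coverage in hits:
        if identity >= min_identity and coverage >= min_coverage:
            parent[find(i)] = find(j)       # merge the two clusters

    clusters = {}
    for s in range(n_seqs):
        clusters.setdefault(find(s), []).append(s)
    return list(clusters.values())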