872 results for Inverse integrating factor


Relevance:

20.00%

Publisher:

Abstract:

We study the power series ring R = K[[x1, x2, x3, ...]] in countably infinitely many variables over a field K, and two particular K-subalgebras of it: the ring S, which is isomorphic to an inverse limit of the polynomial rings in finitely many variables over K, and the ring R', which is the largest graded subalgebra of R. Of particular interest are the homogeneous, finitely generated ideals in R', among them the generic ideals. The definition of S as an inverse limit yields a set of truncation homomorphisms from S to K[x1, ..., xn] which restrict to R'. The truncation of a generic ideal I in R' is a generic ideal in K[x1, ..., xn]. It is shown in Initial ideals of Truncated Homogeneous Ideals that the initial ideals of these truncations converge to the initial ideal of the corresponding ideal in R'. This initial ideal need no longer be finitely generated, but it is always locally finitely generated: this is proved in Gröbner Bases in R'. We show in Reverse lexicographic initial ideals of generic ideals are finitely generated that the reverse lexicographic initial ideal of a generic ideal in R' is finitely generated. This contrasts with the lexicographic term order. If I in R' is a homogeneous, locally finitely generated ideal, and if we write the Hilbert series of the truncated algebras K[x1, ..., xn] modulo the truncation of I as qn(t)/(1-t)^n, then we show in Generalized Hilbert Numerators that the qn's converge to a power series in t which we call the generalized Hilbert numerator of the algebra R'/I. In Gröbner bases for non-homogeneous ideals in R' we show that the calculation of Gröbner bases and initial ideals in R' can also be carried out for some non-homogeneous ideals, namely those which have an associated homogeneous ideal that is locally finitely generated. The fact that S is an inverse limit of polynomial rings, which are naturally endowed with the discrete topology, provides S with a topology which makes it into a complete Hausdorff topological ring.
The ring R', with the subspace topology, is dense in R, and the latter ring is the Cauchy completion of the former. In Topological properties of R' we show that, with respect to this topology, locally finitely generated ideals in R' are closed.
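The truncation-and-term-order machinery above can be illustrated in a finite truncation using sympy. This is a toy example of my own (the ideal below is not from the thesis, and R' itself has infinitely many variables); it just shows a truncated ideal, Gröbner bases under two term orders, and an ideal-membership check by reduction:

```python
# Toy finite-truncation illustration (sympy, third-party); the example
# ideal in K[x1, x2, x3] is invented for demonstration.
from sympy import symbols, groebner

x1, x2, x3 = symbols('x1 x2 x3')

# A homogeneous ideal in the truncated polynomial ring K[x1, x2, x3]
gens = [x1**2 + x2**2 + x3**2, x1*x2 + x2*x3 + x1*x3]

# Initial ideals depend on the term order: compare grevlex with lex.
gb_revlex = groebner(gens, x1, x2, x3, order='grevlex')
gb_lex = groebner(gens, x1, x2, x3, order='lex')

# Membership test: any combination of the generators reduces to 0
# modulo the Groebner basis.
_, remainder = gb_revlex.reduce(x3 * gens[0] - x1 * gens[1])
print(remainder)  # 0
```

The two bases generally differ, which is the finite shadow of the lex/revlex contrast the abstract describes.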


The thesis consists of three independent parts. Part I: Polynomial amoebas. We study the amoeba of a polynomial, as defined by Gelfand, Kapranov and Zelevinsky. A central role in the treatment is played by a certain convex function which is linear in each complement component of the amoeba, which we call the Ronkin function. This function is used in two different ways. First, we use it to construct a polyhedral complex, which we call a spine, approximating the amoeba. Second, the Monge-Ampère measure of the Ronkin function has interesting properties which we explore. This measure can be used to derive an upper bound on the area of an amoeba in two dimensions. We also obtain results on the number of complement components of an amoeba, and consider possible extensions of the theory to varieties of codimension higher than 1. Part II: Differential equations in the complex plane. We consider polynomials in one complex variable arising as eigenfunctions of certain differential operators, and obtain results on the distribution of their zeros. We show that in the limit when the degree of the polynomial approaches infinity, its zeros are distributed according to a certain probability measure. This measure has its support on the union of finitely many curve segments, and can be characterized by a simple condition on its Cauchy transform. Part III: Radon transforms and tomography. This part is concerned with different weighted Radon transforms in two dimensions, in particular the problem of inverting such transforms. We obtain stability results for this inverse problem for rather general classes of weights, including weights of attenuation type with data acquisition limited to a 180-degree range of angles. We also derive an inversion formula for the exponential Radon transform, with the same restriction on the angle.
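The piecewise-linear behavior of the Ronkin function in Part I has a one-variable analogue that is easy to check numerically: by Jensen's formula, the slope of N_p(x) equals the number of zeros of p of modulus less than e^x. The sketch below (my own toy construction, not taken from the thesis) verifies this for an invented polynomial:

```python
import numpy as np

def ronkin_1d(p_coeffs, x, n_theta=4000):
    """One-variable Ronkin function: the average over theta of
    log|p(e^{x + i*theta})|."""
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    z = np.exp(x + 1j * theta)
    return np.mean(np.log(np.abs(np.polyval(p_coeffs, z))))

# p(z) = (z - 2)(z - 5): roots of modulus 2 and 5 (illustrative choice).
p = np.poly([2.0, 5.0])

# Jensen's formula: slope of N_p at x = number of roots with modulus < e^x,
# so 1 between log 2 and log 5, and 2 above log 5.
slope_mid = (ronkin_1d(p, 1.3) - ronkin_1d(p, 1.1)) / 0.2
slope_hi = (ronkin_1d(p, 2.6) - ronkin_1d(p, 2.4)) / 0.2
print(round(slope_mid), round(slope_hi))  # 1 2
```

In several variables the same linearity holds on each complement component of the amoeba, which is what makes the spine construction possible.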


[EN] We propose four algorithms for computing the inverse optical flow between two images. We assume that the forward optical flow has already been obtained and we need to estimate the flow in the backward direction. The forward and backward flows can be related through a warping formula, which allows us to propose very efficient algorithms. These are presented in increasing order of complexity. The proposed methods provide high accuracy with low memory requirements and low running times. In general, the processing reduces to one or two image passes. Typically, when objects move in a sequence, some regions may appear or disappear. Finding the inverse flows in these situations is difficult and, in some cases, it is not possible to obtain a correct solution. Our algorithms deal with occlusions easily and reliably. On the other hand, disocclusions have to be handled in a post-processing step. We propose three approaches for filling disocclusions. In the experimental results, we use standard synthetic sequences to study the performance of the proposed methods, and show that they yield very accurate solutions. We also analyze the performance of the filling strategies.
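The warping relation between the two flows can be sketched by forward "splatting": each pixel x with forward flow f(x) votes the value -f(x) at the target x + f(x). This is a simplified nearest-neighbour sketch of the general idea, not the paper's four algorithms; targets that receive no vote are the disocclusions that need post-processing:

```python
import numpy as np

def invert_flow(fwd):
    """Sketch of inverse optical flow by forward warping. fwd has shape
    (h, w, 2) with (row, col) flow components. Disoccluded targets stay
    NaN; at occlusions (several sources, one target) later votes simply
    overwrite earlier ones in this simplified version."""
    h, w, _ = fwd.shape
    bwd = np.full_like(fwd, np.nan)
    for y in range(h):
        for x in range(w):
            ty = int(round(y + fwd[y, x, 0]))
            tx = int(round(x + fwd[y, x, 1]))
            if 0 <= ty < h and 0 <= tx < w:
                bwd[ty, tx] = -fwd[y, x]
    return bwd

# Uniform translation by (1, 2): the inverse flow is (-1, -2) where defined.
f = np.zeros((5, 5, 2)); f[..., 0] = 1; f[..., 1] = 2
b = invert_flow(f)
print(b[3, 3])  # [-1. -2.]
```

Pixels near the border that no source maps to remain NaN, which is exactly the disocclusion set the three filling strategies address.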


[EN] A natural generalization of the classical Moore-Penrose inverse is presented. The so-called S-Moore-Penrose inverse of an m × n complex matrix A, denoted by A_S, is defined for any linear subspace S of the matrix vector space C^(n×m). The S-Moore-Penrose inverse A_S is characterized using either the singular value decomposition or (for the nonsingular square case) the orthogonal complements with respect to the Frobenius inner product. These results are applied to the preconditioning of linear systems based on Frobenius norm minimization and to the linearly constrained linear least squares problem.
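As background for the SVD characterization, here is a minimal sketch of the classical Moore-Penrose inverse computed from the SVD, which is the special case S = C^(n×m); the function name and test matrix are mine:

```python
import numpy as np

def pinv_svd(A, tol=1e-12):
    """Classical Moore-Penrose inverse from the SVD: A+ = V S+ U^H,
    inverting only the singular values above a relative tolerance."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # an m x n example, m=3, n=2
Ap = pinv_svd(A)
print(np.allclose(Ap, np.linalg.pinv(A)))  # True
```

The S-Moore-Penrose inverse restricts the same minimization to the chosen subspace S, which is what connects it to the sparsity-constrained preconditioners of the following abstract.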


[EN] The classical optimal (in the Frobenius sense) diagonal preconditioner for large sparse linear systems Ax = b is generalized and improved. The new proposed approximate inverse preconditioner N is based on the minimization of the Frobenius norm of the residual matrix AM − I, where M runs over a certain linear subspace of n × n real matrices defined by a prescribed sparsity pattern. The number of nonzero entries of the n × n preconditioning matrix N is less than or equal to 2n, and n of them are selected as the optimal positions in each of the n columns of matrix N. All theoretical results are justified in detail…
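For the classical special case where the sparsity pattern is the diagonal, the minimization of ||AM − I||_F decouples column by column and has the standard closed form m_k = a_kk / ||A e_k||², with a_kk the k-th diagonal entry and A e_k the k-th column. A sketch of that baseline (my derivation of the well-known result, not the paper's generalized preconditioner N):

```python
import numpy as np

def optimal_diagonal_preconditioner(A):
    """Diagonal M minimizing ||A M - I||_F. Each column decouples:
    minimize ||m_k * (A e_k) - e_k||_2, giving m_k = a_kk / ||A e_k||^2."""
    col_norms_sq = np.sum(A * A, axis=0)          # ||A e_k||^2
    return np.diag(np.diag(A) / col_norms_sq)

A = np.array([[4.0, 1.0], [1.0, 3.0]])            # illustrative matrix
M = optimal_diagonal_preconditioner(A)

# Sanity check: perturbing any diagonal entry cannot decrease the residual.
base = np.linalg.norm(A @ M - np.eye(2), 'fro')
for k in range(2):
    Mp = M.copy(); Mp[k, k] += 1e-3
    assert np.linalg.norm(A @ Mp - np.eye(2), 'fro') >= base
print(np.diag(M))
```

The paper's contribution is to allow richer patterns (up to 2n nonzeros with optimally chosen positions) while keeping this Frobenius-norm framework.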


In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models where the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical words, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the considered spaces causes a problem of non-continuity of the solution and hence a problem of inconsistency, from a frequentist point of view, of the posterior distribution (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this problem of ill-posedness. The first one consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, so that I end up with a new object, which I call the regularized posterior distribution, that I propose as a solution of the inverse problem. The second approach consists in specifying a prior distribution on the parameter of interest of the g-prior type. I then identify a class of models for which the prior distribution is able to correct for the ill-posedness also in infinite dimensional problems. I study asymptotic properties of these proposed solutions and I prove that, under some regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a "frequentist" sense. Once the general theory is set, I apply my Bayesian nonparametric methodology to different estimation problems. First, I apply this estimator to deconvolution and to hazard rate, density and regression estimation.
Then, I consider the estimation of an instrumental regression, which is useful in micro-econometrics when we have to deal with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in the Lucas (1978) tree-type models.
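The role Tikhonov regularization plays here can be seen in a finite-dimensional toy analogue of the thesis's function-space setting (everything below is illustrative: a made-up smoothing operator, invented noise level and regularization parameter). An ill-posed deconvolution y = Ax + noise is solved naively and with the regularized formula x_α = (AᵀA + αI)⁻¹Aᵀy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem: a Gaussian smoothing operator A (severely
# ill-conditioned), a smooth true signal, and a small amount of noise.
n = 50
A = np.array([[np.exp(-((i - j) / 5.0) ** 2) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + 1e-3 * rng.standard_normal(n)

def tikhonov(A, y, alpha):
    """x_alpha = argmin ||A x - y||^2 + alpha ||x||^2
              = (A^T A + alpha I)^{-1} A^T y."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(m), A.T @ y)

x_naive = np.linalg.solve(A, y)     # unregularized: the noise is amplified
x_reg = tikhonov(A, y, alpha=1e-4)
print(np.linalg.norm(x_naive - x_true) > np.linalg.norm(x_reg - x_true))  # True
```

In the thesis the same stabilization is applied to the posterior distribution itself rather than to a point estimate, which is what the term "regularized posterior distribution" refers to.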


The research activity carried out during the PhD course was focused on the development of mathematical models of some cognitive processes and on their validation against data present in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity was focused on two different projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to external world stimuli. This activity was realized in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, in order to succeed in grouping together the characteristics of the same object (binding problem) and in keeping segregated the properties belonging to different objects that are simultaneously present (segmentation problem).
Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition, based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words.
Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to the combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater if the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS).
If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models. The use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, like multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model was improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas that are devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons in conditions of cortical activation and deactivation.
The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional and deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
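The way a saturating nonlinearity produces inverse effectiveness can be shown with a toy single-neuron sketch (my own illustration with invented threshold and slope values, not the model of Chapters 2.1-2.2): weak paired inputs sit below the sigmoid's threshold individually but cross it together, while strong inputs saturate.

```python
import numpy as np

def sc_response(v, a, theta=6.0, slope=1.0):
    """Toy sigmoid SC neuron driven by summed visual (v) and auditory (a)
    inputs. theta and slope are illustrative, not fitted parameters."""
    u = v + a
    return 1.0 / (1.0 + np.exp(-(u - theta) / slope))

def enhancement(v, a):
    """Multisensory enhancement: percent gain of the combined response
    over the best unisensory response."""
    best_uni = max(sc_response(v, 0.0), sc_response(0.0, a))
    return 100.0 * (sc_response(v, a) - best_uni) / best_uni

# Inverse effectiveness: weaker paired stimuli yield proportionally
# larger enhancement than stronger ones.
print(enhancement(3.0, 3.0) > enhancement(6.0, 6.0))  # True
```

This is only the qualitative core; the thesis model adds the cortical/subcortical input streams and competitive interneuron circuits needed to reproduce deactivation experiments.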


Crew scheduling and crew rostering are similar and related problems which can be solved by similar procedures. So far, the existing solution methods usually create a model for each of these problems (scheduling and rostering), and when they are solved together an interaction between the models is in some cases considered in order to obtain a better solution. A single set covering model to solve both problems simultaneously is presented here, in which the total number of drivers needed is directly considered and optimized. This integration makes it possible to optimize all of the depots at the same time, whereas traditional approaches need to work depot by depot, and it also makes it possible to see and manage the relationship between scheduling and rostering, which was known to some degree but not easy to quantify in the way this model permits. Recent research in the area of crew scheduling and rostering has stated that one of the current challenges is to determine a schedule in which crew fatigue, which depends mainly on the quality of the rosters created, is reduced. In this approach rosters are constructed in such a way that stable working hours are used in every week of work, and a change to a different shift is made only with free days in between, to ease the adaptation to the new working hours. Computational results for real-world-based instances are presented. The instances are geographically diverse in order to test the performance of the procedures and the model in different scenarios.
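The set covering view can be sketched with a greedy heuristic: the universe is the set of trips to be worked, each candidate set is a feasible roster for one driver, and the objective is to cover all trips with as few rosters (drivers) as possible. This is purely illustrative (the data are invented, and the thesis solves the covering model exactly rather than greedily):

```python
def greedy_set_cover(tasks, rosters):
    """Greedy sketch of the set-covering model: repeatedly pick the
    feasible roster covering the most still-uncovered trips."""
    uncovered = set(tasks)
    chosen = []
    while uncovered:
        best = max(rosters, key=lambda r: len(uncovered & r))
        if not uncovered & best:
            raise ValueError('infeasible: some trip appears in no roster')
        chosen.append(best)
        uncovered -= best
    return chosen

trips = range(6)                                   # illustrative trips
feasible_rosters = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {1, 4}]
solution = greedy_set_cover(trips, feasible_rosters)
print(len(solution))  # 2 drivers suffice for this toy instance
```

In the integrated model, roster feasibility already encodes the stable-hours and free-day rules described above, so minimizing the cover size directly minimizes the driver count.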


This thesis is based on the integration of traditional and innovative approaches aimed at improving the seismogenic identification and characterization of normal faults, focusing mainly on the slip-rate estimate as a measure of fault activity. The causative fault of the L'Aquila Mw 6.3 earthquake of April 6, 2009, namely the Paganica - San Demetrio fault system (PSDFS), was used as a test site. We developed a multidisciplinary, scale-based strategy consisting of paleoseismological investigations, detailed geomorphological and geological field studies, shallow geophysical imaging, and an innovative application of physical property measurements. We produced a detailed geomorphological and geological map of the PSDFS, defining its tectonic style, arrangement, kinematics, extent, geometry and internal complexities. The PSDFS is a 19 km-long tectonic structure, characterized by a complex structural setting and arranged in two main sectors: the Paganica sector to the NW, characterized by a narrow deformation zone, and the San Demetrio sector to the SE, where the strain is accommodated by several tectonic structures exhuming and dissecting a wide Quaternary basin, suggesting the occurrence of strain migration through time. The integration of all the fault displacement data and age constraints (radiocarbon dating, optically stimulated luminescence (OSL) and tephrochronology) helped in calculating an average Quaternary slip rate for the PSDFS of 0.27 - 0.48 mm/yr. On the basis of its length (ca. 20 km) and slip per event (up to 0.8 m), we also estimated a maximum expected magnitude of 6.3-6.8 for this fault. All these topics have significant implications in terms of surface faulting hazard in the area, and may also contribute to the understanding of the seismic behavior of the PSDFS and of the local seismic hazard.
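The two quoted quantities follow from simple arithmetic, sketched below with invented offset/age numbers (the thesis derives them from dated displaced markers); the magnitude step uses an empirical rupture-length scaling of the Wells & Coppersmith (1994) type, with the commonly quoted all-slip-type coefficients used only as an illustration:

```python
import math

# Slip rate = cumulative offset / age of the offset marker.
offset_m = 12.0        # illustrative cumulative throw (m), not from the thesis
age_yr = 30_000.0      # illustrative marker age (yr)
slip_rate_mm_yr = offset_m * 1000.0 / age_yr
print(round(slip_rate_mm_yr, 2))  # 0.4, inside the quoted 0.27-0.48 mm/yr range

# Magnitude from surface rupture length, Mw = a + b * log10(L_km);
# a, b below are the widely cited all-slip-type regression coefficients.
a, b, L_km = 5.08, 1.16, 20.0
print(round(a + b * math.log10(L_km), 1))  # ~6.6, within the 6.3-6.8 estimate
```

Such scaling relations are exactly how a ca. 20 km fault length translates into the maximum expected magnitude range stated above.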


BTES (borehole thermal energy storage) systems exchange thermal energy by conduction with the surrounding ground through borehole materials. The spatial variability of the geological properties and the space-time variability of hydrogeological conditions affect the real power rate of heat exchangers and, consequently, the amount of energy extracted from / injected into the ground. For this reason, it is not an easy task to identify the underground thermal properties to use at the design stage. At the current state of technology, the Thermal Response Test (TRT) is the in situ test that characterizes ground thermal properties with the highest degree of accuracy, but it does not fully solve the problem of characterizing the thermal properties of a shallow geothermal reservoir, simply because it characterizes only the neighborhood of the heat exchanger at hand and only for the test duration. Different analytical and numerical models exist for the characterization of shallow geothermal reservoirs, but they are still inadequate and not exhaustive: more sophisticated models must be taken into account, and a geostatistical approach is needed to tackle natural variability and estimate uncertainty. The approach adopted for reservoir characterization is the "inverse problem", typical of oil & gas field analysis. Accordingly, we create different realizations of the thermal properties by direct sequential simulation and we find the one that best fits the real production data (fluid temperature over time). The software used to develop the heat production simulation is FEFLOW 5.4 (Finite Element subsurface FLOW system). A geostatistical reservoir model has been set up based on thermal property data from the literature and on spatial variability hypotheses, and a real TRT has been tested. We then analyzed and used two other codes (SA-Geotherm and FV-Geotherm), which are two implementations of the same numerical model as FEFLOW (the Al-Khoury model).
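The realization-selection loop described above can be sketched with a deliberately trivial forward model standing in for FEFLOW (everything here is illustrative: the candidate conductivities, the noise level, and the exponential temperature-recovery model are all invented for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(conductivity, t):
    """Toy stand-in for the heat-transport simulator: faster temperature
    recovery for a higher effective conductivity. Purely illustrative."""
    return 25.0 - 10.0 * np.exp(-conductivity * t)

# Synthetic "production data": fluid temperature over time plus noise.
t = np.linspace(0.0, 10.0, 50)
true_k = 0.8
observed = forward_model(true_k, t) + 0.05 * rng.standard_normal(t.size)

# Candidate realizations of the ground property; in the thesis these come
# from direct sequential simulation rather than a hand-written list.
realizations = [0.2, 0.5, 0.8, 1.1, 1.4]
rmse = [np.sqrt(np.mean((forward_model(k, t) - observed) ** 2)) for k in realizations]
best = realizations[int(np.argmin(rmse))]
print(best)  # 0.8
```

The essential structure is the same: simulate each geostatistical realization forward, score it against the measured fluid temperatures, and keep the best-fitting one.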


The subject of this thesis is in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible about a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data, and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. EIT (electrical impedance tomography) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations. This method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements.
A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems, such as accumulating fluid or a collapsed lung, and noninvasive monitoring of heart function and blood flow.


In this work, an improved protocol for inverse size-exclusion chromatography (ISEC) was established to assess important pore structural data of porous silicas used as stationary phases in packed chromatographic columns. After the validity of the values generated by ISEC was checked by comparison with data obtained from traditional methods, such as nitrogen sorption at 77 K (Study A), the method could be successfully employed as a valuable tool in the development of bonded poly(methacrylate)-coated silicas, for which traditional methods generate partially incorrect pore structural information (Study B). Study A: Different mesoporous silicas were converted by a pseudomorphic transition into ordered MCM-41-type silica while maintaining the particle size and shape. The essential parameters, like the specific surface area, average pore diameter, specific pore volume and pore connectivity from ISEC, remained nearly the same, which was reflected by the same course of the theoretical plate height vs. linear velocity curves. Study B: In the development of bonded poly(methacrylate)-coated silicas for the reversed-phase separation of biopolymers, ISEC was the only method that generated valid pore structural information on the polymer-coated materials. Synthesis procedures were developed to obtain reproducibly covalently bonded poly(methacrylate) coatings with good thermal stability on different base materials, employing both particulate and monolithic materials.
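The basic ISEC bookkeeping rests on the distribution coefficient of each probe polymer, K = (Ve − V0)/(Vt − V0), which locates the probe between total exclusion (K = 0) and total permeation (K = 1); pore structural data are then derived from K as a function of probe size. A minimal sketch with invented column volumes:

```python
def distribution_coefficient(Ve, V0, Vt):
    """K = (Ve - V0) / (Vt - V0): the fraction of the pore volume a probe
    of a given hydrodynamic size can enter. Ve is its elution volume, V0
    the interstitial (exclusion) volume, Vt the total permeation volume."""
    if not V0 <= Ve <= Vt:
        raise ValueError('elution volume must lie between V0 and Vt')
    return (Ve - V0) / (Vt - V0)

V0, Vt = 1.0, 2.5            # illustrative column volumes (mL)
for Ve in (1.0, 1.75, 2.5):  # excluded, intermediate, fully permeating probe
    print(round(distribution_coefficient(Ve, V0, Vt), 2))
# 0.0, 0.5, 1.0
```

Measuring K for a calibrated series of polymer standards of increasing size is what lets ISEC report pore volume, surface area and connectivity even for polymer-coated silicas, where gas sorption can mislead.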