973 results for Boundary value problems.
Resumo:
Griffiths proposed a pair of boundary conditions that define a point interaction in one-dimensional quantum mechanics. The conditions involve the nth derivative of the wave function, where n is a non-negative integer. We re-examine the interaction so defined and explicitly confirm that it is self-adjoint for any even value of n and for n = 1. The interaction is not self-adjoint for odd n > 1. We then propose a similar but different pair of boundary conditions involving the nth derivative of the wave function such that the ensuing point interaction is self-adjoint for any value of n.
Resumo:
In this work, we are interested in the dynamic behavior of a parabolic problem with nonlinear boundary conditions and delay in the boundary. We construct a reaction-diffusion problem with delay in the interior, where the reaction term is concentrated in a neighborhood of the boundary and this neighborhood shrinks to the boundary as a parameter epsilon goes to zero. We analyze the limit of the solutions of this concentrated problem and prove that these solutions converge in certain continuous function spaces to the unique solution of the parabolic problem with delay in the boundary. This convergence result allows us to approximate the solution of equations with delay acting on the boundary by solutions of equations with delay acting in the interior, and it may contribute to the analysis of the dynamic behavior of delay equations when the delay is at the boundary. (C) 2012 Elsevier Inc. All rights reserved.
Resumo:
This work deals with some classes of linear second-order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot–Carathéodory type. The Carnot–Carathéodory metric related to a family {Xj}j=1,...,m is the control distance obtained by minimizing the time needed to travel between two points along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling "X-ellipticity" and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a Maximum Principle for linear second-order differential operators for which we only assume a Sobolev-type inequality together with summability of the lower-order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields be left-invariant. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a convexity result for mean-value operators of L-subharmonic functions, where L is our differential operator.
In the third chapter we prove a necessary and sufficient condition for the regularity of boundary points for the Dirichlet problem on an open subset of R^N related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator and introduce the notion of "quasi-boundedness". Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of boundary points.
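The control distance mentioned above admits a standard formulation; the following is the usual textbook definition in generic notation (not necessarily the thesis's own):

```latex
d(x,y) \;=\; \inf\Bigl\{\, T>0 \;:\; \exists\,\gamma:[0,T]\to\mathbb{R}^N,\ \gamma(0)=x,\ \gamma(T)=y,\
\dot\gamma(t)=\sum_{j=1}^{m} a_j(t)\,X_j(\gamma(t)),\ \sum_{j=1}^{m} a_j(t)^2\le 1 \,\Bigr\}
```

Minimizing the travel time over such admissible (sub-unit) curves is what makes d a control distance adapted to the family {Xj}.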
Resumo:
In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models where the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical terms, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the considered spaces causes a problem of non-continuity of the solution and hence a problem of inconsistency, from a frequentist point of view, of the posterior distribution (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this problem of ill-posedness. The first consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, so that I end up with a new object that I call the regularized posterior distribution and that I propose as a solution of the inverse problem. The second approach consists in specifying a prior distribution on the parameter of interest of the g-prior type. I then identify a class of models for which the prior distribution is able to correct for the ill-posedness even in infinite-dimensional problems. I study asymptotic properties of these proposed solutions and prove that, under some regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a "frequentist" sense. Having set out the general theory, I apply my Bayesian nonparametric methodology to different estimation problems. First, I apply this estimator to deconvolution and to hazard rate, density, and regression estimation.
Then, I consider the estimation of an instrumental regression, which is useful in micro-econometrics when we have to deal with problems of endogeneity. Finally, I develop an application in finance: I derive the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.
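The Tikhonov idea above has a simple finite-dimensional analogue. The sketch below (all data synthetic, a stand-in for the functional case) shows how adding a regularization term alpha*I stabilizes an otherwise ill-posed linear inverse problem; under a Gaussian prior, the same formula is, up to scaling, the posterior mean:

```python
import numpy as np

# Ill-posed linear model y = K f + noise, with a smoothing (convolution-like)
# operator K whose singular values decay fast -- that decay is the ill-posedness.
rng = np.random.default_rng(0)
n = 50
grid = np.arange(n)
K = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 3.0) ** 2) / 3.0
f_true = np.sin(np.linspace(0.0, np.pi, n))
y = K @ f_true + 0.01 * rng.standard_normal(n)

def regularized_posterior_mean(K, y, alpha):
    """Tikhonov estimate (K^T K + alpha I)^{-1} K^T y; under a Gaussian
    prior this is, up to scaling, the posterior mean."""
    return np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ y)

f_naive = regularized_posterior_mean(K, y, 1e-12)  # (almost) no regularization
f_reg = regularized_posterior_mean(K, y, 1e-3)     # regularized estimate
```

With alpha near zero the noise is amplified along the small singular directions and the estimate is useless; a moderate alpha restores consistency in the frequentist sense discussed above.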
Resumo:
This thesis investigates two physical flow experiments on nonwoven fabrics, intended to identify unknown hydraulic parameters of the material, such as the diffusivity or conductivity function, from measured data. The physical and mathematical modelling of these experiments leads to a Cauchy-Dirichlet problem with free boundary for the degenerate parabolic Richards equation in the saturation formulation, the so-called direct problem. From knowledge of the free boundary of this problem, the nonlinear diffusivity coefficient of the differential equation is to be reconstructed. For this inverse problem we set up an output least-squares functional and minimize it using iterative regularization methods such as the Levenberg-Marquardt method and the IRGN method, based on a parametrization of the coefficient space by quadratic B-splines. For the direct problem we prove, among other things, existence and uniqueness of the solution of the Cauchy-Dirichlet problem as well as the existence of the free boundary. We then formally reduce the derivative of the free boundary with respect to the coefficient, which is needed for the numerical reconstruction method, to a linear degenerate parabolic boundary value problem. We describe the numerical realization and implementation of our reconstruction method and finally present reconstruction results for synthetic data.
Resumo:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the energy-transport models for semiconductors that are later simulated in 2D. In this class of models the flow of charged particles, namely negatively charged electrons and so-called holes, quasi-particles of positive charge, as well as their energy distributions, are described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. The continuity of the discrete normal fluxes is the most important property of this discretization from the user's perspective. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. In connection with this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators. At that stage a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements.
For a model problem we show how this method can deliver promising results even when standard error estimators completely fail to reduce the error in an iterative mesh refinement process.
Resumo:
In this thesis the impact of R&D expenditures on firm market value and stock returns is examined, in a sample of European listed firms for the period 2000-2009. I apply different linear and GMM econometric estimations for testing the impact of R&D on market prices and construct country portfolios based on firms' R&D-expenditure-to-market-capitalization ratio for studying the effect of R&D on stock returns. The results confirm that more innovative firms have a better market valuation: investors consider R&D an asset that produces long-term benefits for corporations. The impact of R&D on firm value differs across countries and is significantly modulated by the financial and legal environment in which firms operate. Other firm and industry characteristics also seem to play a determinant role in how investors value R&D. First, only larger firms with lower financial leverage that operate in highly innovative sectors decide to disclose their R&D investment. Second, the markets assign a premium to small firms operating in hi-tech sectors compared to larger enterprises in low-tech industries. On the other hand, I provide empirical evidence indicating that highly R&D-intensive firms may generally exacerbate mispricing problems related to firm valuation. As R&D contributes to the estimation of future stock returns, portfolios comprising highly R&D-intensive stocks may earn significant excess returns compared to less innovative ones after controlling for size and book-to-market risk. Further, the most innovative firms are generally riskier in terms of stock volatility, but not systematically riskier than low-tech firms. Firms operating in Continental Europe suffer more mispricing than their Anglo-Saxon peers, but the former are less volatile, other things being equal. The sectors in which firms operate are determinant even for the impact of R&D on stock returns; this effect is much stronger in hi-tech industries.
Resumo:
The subject of this thesis lies in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those in which a set of measured data is analysed in order to obtain as much information as possible about a model that is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, under rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. EIT is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations. This method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements.
A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung and noninvasive monitoring of heart function and blood flow.
Resumo:
Iodine chemistry plays an important role in tropospheric ozone depletion and new particle formation in the Marine Boundary Layer (MBL). The sources, reaction pathways, and sinks of iodine are investigated using lab experiments and field observations. The aims of this work are, firstly, to develop analytical methods for iodine measurements in marine aerosol samples, especially for speciation of the soluble iodine; and secondly, to apply these analytical methods to field-collected aerosol samples and to characterize aerosol iodine in the MBL. Inductively Coupled Plasma Mass Spectrometry (ICP-MS) was the technique used for iodine measurements. Offline methods using water extraction and tetra-methyl-ammonium-hydroxide (TMAH) extraction were applied to measure total soluble iodine (TSI) and total insoluble iodine (TII) in the marine aerosol samples. External standard calibration and isotope dilution analysis (IDA) were both conducted for iodine quantification, and the limits of detection (LODs) were both 0.1 μg L⁻¹ for TSI and TII measurements. Online couplings of Ion Chromatography (IC)-ICP-MS and gel electrophoresis (GE)-ICP-MS were both developed for soluble iodine speciation. Anion exchange columns were adopted for the IC-ICP-MS systems. Iodide, iodate, and unknown signal(s) were observed with these methods. Iodide and iodate were separated successfully and the LODs were 0.1 and 0.5 μg L⁻¹, respectively. The unknown signals were soluble organic iodine species (SOI), quantified against the iodide calibration curve but not yet clearly identified. These analytical methods were all applied to iodine measurements of marine aerosol samples from worldwide field campaigns.
The TSI and TII concentrations (medians) in PM2.5 were found to be 240.87 pmol m⁻³ and 105.37 pmol m⁻³ at Mace Head, on the west coast of Ireland, and 119.10 pmol m⁻³ and 97.88 pmol m⁻³ in the cruise campaign over the North Atlantic Ocean during June-July 2006. Inorganic iodine, namely iodide and iodate, was the minor iodine fraction in both campaigns, accounting for 7.3% (median) and 5.8% (median) of PM2.5 iodine at Mace Head and over the North Atlantic Ocean, respectively. Iodide concentrations were higher than iodate in most of the samples. By contrast, more than 90% of TSI was SOI, and the SOI concentration was correlated significantly with the iodide concentration. The correlation coefficients (R²) were both higher than 0.5 at Mace Head and in the first leg of the cruise. Size-fractionated aerosol samples collected by a five-stage Berner cascade impactor showed similar proportions of inorganic and organic iodine. Significant correlations between SOI and iodide were obtained in the particle size ranges of 0.25-0.71 μm and 0.71-2.0 μm, and better correlations were found on sunny days. TSI and iodide existed mainly in the fine particle size range (< 2.0 μm), while iodate resided in the coarse range (2.0-10 μm). Aerosol iodine was suggested to be related to primary iodine release in the tidal zone. Natural meteorological conditions such as solar radiation and rain were observed to influence the aerosol iodine. During the ship campaign over the North Atlantic Ocean (January-February 2007), the TSI concentrations (medians) ranged from 35.14 to 60.63 pmol m⁻³ among the five stages. Likewise, SOI was found to be the most abundant iodine fraction in TSI, with a median of 98.6%. Significant correlation was also present between SOI and iodide in the size range of 2.0-5.9 μm. Higher iodate concentrations were again found in the larger particle size range, similar to Mace Head.
Airmass transport from the biogenic bloom region and the Antarctic ice front sector was observed to play an important role in aerosol iodine enhancement. The TSI concentrations observed along the 30,000 km cruise round trip from East Asia to Antarctica during November 2005 - March 2006 were much lower than in the other campaigns, with a median of 6.51 pmol m⁻³. Approximately 70% of the TSI was SOI on average. The abundance of inorganic iodine, i.e. iodate and iodide, was less than 30% of TSI. The median value of iodide was 1.49 pmol m⁻³, more than fourfold higher than that of iodate (median, 0.28 pmol m⁻³). Spatial variation indicated that the highest aerosol iodine appeared in the tropical area. Iodine levels were considerably lower in coastal Antarctica, with a TSI median of 3.22 pmol m⁻³. However, airmass transport from the ice front sector was correlated with enhanced TSI levels, suggesting an as-yet-unidentified source of iodine in the polar region. In addition, a significant correlation between SOI and iodide was also found in this campaign. A global distribution of aerosol iodine emerged from the field campaigns in this work. SOI was shown to be globally ubiquitous, given its presence at the different sampling locations and its high proportion of TSI in marine aerosols. The correlations between SOI and iodide were obtained not only at different locations but also in different seasons, implying a possible mechanism of iodide production through SOI decomposition. Nevertheless, future studies are needed to improve the current understanding of iodine chemistry in the MBL (e.g. SOI identification and quantification, as well as updated modelling involving organic matter).
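The reported SOI-iodide relationships rest on a standard Pearson R² computation; a minimal sketch with made-up concentrations (the thesis's actual data are not reproduced here):

```python
import numpy as np

# Hypothetical paired concentrations in pmol m^-3, for illustration only.
soi = np.array([110.0, 95.0, 130.0, 80.0, 150.0, 120.0])
iodide = np.array([8.0, 6.5, 10.0, 5.0, 12.5, 9.0])

r = np.corrcoef(soi, iodide)[0, 1]  # Pearson correlation coefficient
r2 = r ** 2                         # coefficient of determination R^2
```

An R² above 0.5, as reported at Mace Head and on the first cruise leg, means more than half of the iodide variance is explained by a linear relationship with SOI.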
Resumo:
I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). It is a method that can probe hydrodynamic flows near solid surfaces on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high-NA oil-immersion objective. Due to the fast decay of the evanescent wave, fluorescence only occurs for tracers within ~100 nm of the surface, resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two laterally shifted (in the flow direction) observation volumes, created by two confocal pinholes, are independently measured and recorded. The cross-correlation of these signals provides important information about the tracers' motion and thus their flow velocity. Due to the high sensitivity of the method, fluorescent species of different sizes, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and use it to measure the shear rate and slip length of water flowing over hydrophilic and hydrophobic surfaces. However, in order to extract these parameters from the measured correlation curves, a quantitative data analysis is needed. This is not a straightforward task, because the complexity of the problem makes it impossible to derive analytical expressions for the correlation functions needed to fit the experimental data. Therefore, in order to process and interpret the experimental results, I also describe a new numerical method for analysing the acquired auto- and cross-correlation curves: Brownian Dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data.
I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions, in order to establish a fully quantitative method for retrieving the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed to fit the experiments; this provides the optimum parameter values together with their statistical error bars. The approach is well suited both for modern desktop PCs and for massively parallel computers; the latter allow the data analysis to be completed within short computing times. I applied this method to study the flow of aqueous electrolyte solution near smooth hydrophilic and hydrophobic surfaces. Generally, no slip is expected on a hydrophilic surface, while on a hydrophobic surface some slippage may exist. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10-15 nm or lower and, within the limitations of the experiments and the model, indistinguishable from zero.
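The core idea of extracting a transit time from two shifted observation volumes can be sketched with a synthetic signal pair. The peak of the cross-correlation recovers the lag between the two channels; this toy peak search stands in for the thesis's full Brownian-dynamics fit, and every number is made up:

```python
import numpy as np

# Two channels see the same tracer signal with a time lag set by the flow.
rng = np.random.default_rng(1)
n, true_lag = 2000, 37                      # samples; lag in sample units
base = rng.standard_normal(n)
sig_a = base + 0.2 * rng.standard_normal(n)                      # upstream
sig_b = np.roll(base, true_lag) + 0.2 * rng.standard_normal(n)   # downstream

# Cross-correlate the mean-subtracted signals; the argmax gives the lag.
xcorr = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode="full")
est_lag = int(np.argmax(xcorr)) - (n - 1)
# flow velocity = (observation-volume separation) / (est_lag * sampling step)
```

In the real experiment the correlation curves are broadened by diffusion and the excitation profile, which is why a simulation-based fit rather than a simple peak search is needed.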
Resumo:
The aim of this work is to present various aspects of the numerical simulation of particle and radiation transport for industrial and environmental protection applications, to enable the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with speeding up the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries has been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified against the C5G7 2D and 3D benchmarks, showing a considerable reduction in iterations and CPU time. The second part is devoted to the study of temperature-dependent elastic scattering of neutrons for heavy isotopes near the thermal zone. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers, varying the age of the layer of snow or ice, its thickness, the presence or absence of other underlying layers, and the amount of dust included in the snow, creating a framework able to decipher spectral signals collected by orbiting detectors.
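The "Legendre moments by direct quadrature" step is generic enough to sketch. The moments f_l = (2l+1)/2 ∫ f(mu) P_l(mu) dmu are evaluated with a Gauss-Legendre rule; the forward-peaked kernel below is invented for illustration and is not the thesis's Doppler-broadened elastic kernel:

```python
import numpy as np

def kernel(mu):
    # Hypothetical forward-peaked angular transfer kernel on mu in [-1, 1].
    return np.exp(3.0 * mu)

# 32-point Gauss-Legendre rule on [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(32)

def legendre_moment(l):
    # f_l = (2l+1)/2 * integral of kernel(mu) * P_l(mu) over [-1, 1]
    Pl = np.polynomial.legendre.Legendre.basis(l)(nodes)
    return 0.5 * (2 * l + 1) * np.sum(weights * kernel(nodes) * Pl)

moments = [legendre_moment(l) for l in range(4)]
```

Convergence can be checked, as in the thesis, by increasing the quadrature order and watching the moments stabilize.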
Resumo:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow; therefore they can deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management, due to the potentially catastrophic consequences. We propose a method that gives a certified answer as to whether a linear program is feasible or infeasible, or returns "unknown". The advantage of our method is that it is reasonably fast and rarely answers "unknown". It works by computing a safe solution that is in some sense the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches to this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We prove the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver.
Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in running time, especially on large instances.
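The kind of exact-arithmetic certificate check described above can be illustrated with a Farkas certificate of infeasibility: for a system A x <= b, a vector y >= 0 with y^T A = 0 and y^T b < 0 proves that no solution exists. The tiny system below is made up; the point is that the check uses rational arithmetic and is therefore immune to floating-point rounding:

```python
from fractions import Fraction as F

def certifies_infeasible(A, b, y):
    """Exact Farkas check: y >= 0, y^T A = 0 and y^T b < 0 imply that
    A x <= b has no solution."""
    if any(yi < 0 for yi in y):
        return False
    cols = len(A[0])
    yTA = [sum(y[i] * A[i][j] for i in range(len(A))) for j in range(cols)]
    yTb = sum(yi * bi for yi, bi in zip(y, b))
    return all(c == 0 for c in yTA) and yTb < 0

# Hypothetical system: x <= 1 and -x <= -2 (i.e. x >= 2) cannot both hold.
A = [[F(1)], [F(-1)]]
b = [F(1), F(-2)]
y = [F(1), F(1)]   # multipliers: adding the two rows gives 0 <= -1
ok = certifies_infeasible(A, b, y)
```

In the branch-and-bound setting above, such a certificate identifies the small subset of constraints responsible for the conflict, which is what the learned conflict clause encodes.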
Resumo:
Over the past twenty years, new technologies have required an increasing use of mathematical models in order to better understand structural behavior; the finite element method is the one most widely used. However, the reliability of this method has to be verified each time it is applied to a new situation. Since it is not possible to model reality completely, different hypotheses must be made: these are the main problems of FE modeling. The following work deals with this problem and tries to identify some of the unknown main parameters of a structure. The research focuses on a particular path of study and development, but the same concepts can be applied to other objects of research. The main purpose of this work is the identification of the unknown boundary conditions of a bridge pier using data acquired experimentally in field tests together with an FEM modal updating process. This work does not claim to be new or innovative: much work has been done on this problem in past years and many solutions have been presented and published. This thesis simply reworks some of the main aspects of the structural optimization process, using a real structure as a fitting model.
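The modal-updating idea can be reduced to its smallest possible instance: identify an unknown boundary stiffness so that the model's natural frequency matches a measured one. The single-DOF oscillator and every number below are hypothetical stand-ins for the pier model:

```python
import math

m = 2.0e5                      # modal mass in kg (assumed)
f_measured = 1.8               # measured natural frequency in Hz (assumed)

def model_frequency(k):
    """Natural frequency f = sqrt(k/m) / (2*pi) of the 1-DOF model with
    boundary spring stiffness k (N/m)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# The model frequency is monotone in k, so matching the measured frequency
# (the one-parameter "updating" step) reduces to a bisection.
lo, hi = 1.0, 1.0e12
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if model_frequency(mid) < f_measured:
        lo = mid
    else:
        hi = mid
k_identified = 0.5 * (lo + hi)
```

With several measured modes and several unknown boundary parameters, the same matching becomes a least-squares optimization over the FE model, which is the process the thesis reworks.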
Resumo:
The aim of our study was to analyze the neurophysiological monitoring method with regard to its potential problems during thoracic and thoracoabdominal aortic open or endovascular repair. Furthermore, preventive strategies to the main pitfalls with this method were developed.
Resumo:
Psychogenetic research has emphasised the influence of social factors on a child's intellectual development. In her work, Ms. Dumitrascu examines two such factors: family size and birth order. However, since these formal parameters tend to be unstable, other, more informal factors should be taken into consideration. Of these, perhaps the most interesting is the "style" of parental education, which Ms. Dumitrascu regards as an expression of national traditions at the family level. This educational style is culture-dependent. Only a comparative, cross-cultural study can reveal the real mechanism through which educational style influences the development of a child's intellect and personality. Ms. Dumitrascu conducted an experimental cross-cultural study aimed at examining the effects of the family environment on a child's intellectual development. Three distinct populations were involved in her investigation, each having quite a distinct status in its geographical area: Romanians, Romanies (Gypsies) from Romania, and Russians from the Republic of Moldova. She presented her research in the form of a series of articles written in English totalling 85 pages, and also on disc. A significant difference was revealed between the intelligence of a child living in a large family and that of a child with no brothers or sisters. In the case of Romany children, the gap is remarkably large. Ms. Dumitrascu concludes that the simultaneous action of several negative factors (low socio-economic status, large family size, socio-cultural isolation of a population) may delay child development. Subjected to such a precarious environment, Romany children do not seek self-realisation, but rather struggle to survive the hardship. Most of them remain outside mainstream society. Unfortunately, adult Romanies seldom express any concern regarding their children's successful social integration. School, as the main socialisation tool, has no value for most parents. Ms.
Dumitrascu argues the need for a major effort aimed at helping the Romanies' social integration. She hopes this project will be of some help to psychologists, social workers, teachers, and all those interested in the integration of minority groups into society.