952 results for Nonhomogeneous initial-boundary-value problems
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models for semiconductors that are later simulated in 2D. In this class of models, the flow of charged particles (negatively charged electrons and so-called holes, quasi-particles of positive charge) as well as their energy distributions are described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling and by the fact that local phenomena such as "hot electron effects" are only partially assessable from the given data. The primary variables used in the simulations are the particle density and the particle energy density. Users of these simulations are mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. From the user's perspective, the most important property of this discretization is the continuity of the discrete normal fluxes. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge-transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators, and a comparison of different estimators is performed. Additionally, a method to efficiently estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh-refinement process.
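As an illustration only (not taken from the thesis), a minimal LaTeX sketch of a linear convection-diffusion model problem written in the mixed form alluded to above, where the total flux is introduced as a separate unknown so that its normal component can be kept continuous across element edges, for instance with Raviart-Thomas trial functions:

\[
\mathbf{j} = -D\,\nabla u + \mathbf{b}\,u, \qquad
\nabla\cdot\mathbf{j} = f \quad \text{in } \Omega, \qquad
u = u_D \quad \text{on } \partial\Omega .
\]

Approximating the flux j in a Raviart-Thomas space is one standard way to obtain the continuity of the discrete normal fluxes mentioned above; whether this is exactly the element pair used in the thesis is an assumption here.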
Abstract:
In this thesis the impact of R&D expenditures on firm market value and stock returns is examined, using a sample of European listed firms for the period 2000-2009. I apply different linear and GMM econometric estimations to test the impact of R&D on market prices, and construct country portfolios based on firms' R&D expenditure to market capitalization ratio to study the effect of R&D on stock returns. The results confirm that more innovative firms have a better market valuation; investors consider R&D as an asset that produces long-term benefits for corporations. The impact of R&D on firm value differs across countries: it is significantly modulated by the financial and legal environment in which firms operate. Other firm and industry characteristics also seem to play a determining role in how investors value R&D. First, only larger firms with lower financial leverage that operate in highly innovative sectors decide to disclose their R&D investment. Second, markets assign a premium to small firms operating in high-tech sectors compared to larger enterprises in low-tech industries. On the other hand, I provide empirical evidence that highly R&D-intensive firms may in general exacerbate mispricing problems related to firm valuation. As R&D contributes to the estimation of future stock returns, portfolios that comprise high R&D-intensity stocks may earn significant excess returns compared to less innovative ones after controlling for size and book-to-market risk. Further, the most innovative firms are generally riskier in terms of stock volatility but not systematically riskier than low-tech firms. Firms that operate in Continental Europe suffer more mispricing than their Anglo-Saxon peers, but the former are less volatile, other things being equal. The sectors in which firms operate are determinant even for the impact of R&D on stock returns: this effect is much stronger in high-tech industries.
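As a hedged illustration of the kind of linear market-value specification typically estimated in this literature (the exact model used in the thesis is not reproduced here, and the variable names are assumptions):

\[
\ln V_{it} \;=\; \alpha \;+\; \beta_1 \ln A_{it} \;+\; \beta_2\,\frac{RD_{it}}{A_{it}} \;+\; \gamma' X_{it} \;+\; \varepsilon_{it},
\]

where V_it is the market value of firm i in year t, A_it its total assets, RD_it/A_it its R&D intensity, and X_it a vector of firm, industry and country controls; a positive and significant beta_2 is what "a better market valuation for more innovative firms" amounts to in such a setting.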
Abstract:
The subject of this thesis lies in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those in which a set of measured data is analysed in order to extract as much information as possible about a model that is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, under rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. Electrical impedance tomography (EIT) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations; this method uses only a single set of measurements for the reconstruction. The second is an algorithm based on linearisation which uses more than one set of measurements. A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung, and noninvasive monitoring of heart function and blood flow.
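For reference, the standard formulation of the EIT (inverse conductivity) forward problem discussed above, stated here in a common textbook form rather than quoted from the thesis: for a conductivity sigma in a domain Omega and an applied boundary current density j,

\[
\nabla\cdot\big(\sigma(x)\,\nabla u(x)\big) = 0 \quad \text{in } \Omega, \qquad
\sigma\,\frac{\partial u}{\partial \nu} = j \quad \text{on } \partial\Omega, \qquad
\int_{\partial\Omega} u \, \mathrm{d}s = 0 ,
\]

and the inverse problem is to recover sigma inside Omega from measured current-voltage pairs (j, u restricted to the boundary), i.e. from (part of) the current-to-voltage map.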
Abstract:
This thesis is a collection of essays on the topic of innovation in the service sector. This structure serves the purpose of singling out some of the relevant issues and tackling them, first reviewing the state of the literature and then proposing a way forward. Three relevant issues have therefore been selected: (i) the definition of innovation in the service sector and the connected question of how to measure innovation; (ii) the issue of productivity in services; (iii) the classification of innovative firms in the service sector. Facing the first issue, Chapter II shows how the breadth of the original Schumpeterian definition of innovation has been narrowed and then passed from manufacturing to the service sector in a reduced, technology-centred form. Chapter III tackles the issue of productivity in services, discussing the difficulties of measuring productivity in a context where the output is often immaterial. We reconstruct the dispute over Baumol's cost disease argument and propose two different ways forward for research on productivity in services: redefining the output along the lines of a characteristics approach, and redefining the inputs, in particular analysing which kinds of inputs are worth saving. Chapter IV derives an integrated taxonomy of innovative service and manufacturing firms, using data from the 2008 CIS survey for Italy. This taxonomy is based on the enlarged definition of "innovative firm" deriving from the Schumpeterian definition of innovation and classifies firms using cluster analysis techniques. The result is a four-cluster solution in which firms are differentiated by the breadth of the innovation activities in which they are involved. Chapter V reports the main conclusions of each of the previous chapters and the points worth pursuing in future research.
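A minimal sketch, with entirely hypothetical indicator names and synthetic data rather than the thesis' actual CIS microdata, of how a cluster-based taxonomy of innovative firms of the kind described in Chapter IV can be computed; k is set to four only because a four-cluster solution is mentioned above:

# Hypothetical sketch: clustering firms on CIS-style innovation indicators.
# The indicator names and the synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

indicators = ["product_innovation", "process_innovation",
              "organisational_innovation", "marketing_innovation",
              "rd_intensity"]

# Synthetic stand-in for firm-level survey indicators.
rng = np.random.default_rng(0)
firms = pd.DataFrame(rng.random((500, len(indicators))), columns=indicators)

X = StandardScaler().fit_transform(firms[indicators])   # standardise the indicators
kmeans = KMeans(n_clusters=4, n_init=20, random_state=0).fit(X)
firms["cluster"] = kmeans.labels_

# Profile each cluster by the average level of its innovation indicators.
print(firms.groupby("cluster")[indicators].mean())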
Abstract:
Iodine chemistry plays an important role in tropospheric ozone depletion and new particle formation in the Marine Boundary Layer (MBL). The sources, reaction pathways, and sinks of iodine are investigated using laboratory experiments and field observations. The aims of this work are, firstly, to develop analytical methods for iodine measurements in marine aerosol samples, especially for iodine speciation within the soluble iodine, and secondly, to apply these analytical methods to field-collected aerosol samples and to characterize aerosol iodine in the MBL. Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) was the technique used for iodine measurements. Offline methods using water extraction and tetra-methyl-ammonium-hydroxide (TMAH) extraction were applied to measure total soluble iodine (TSI) and total insoluble iodine (TII) in the marine aerosol samples. External standard calibration and isotope dilution analysis (IDA) were both used for iodine quantification (a sketch of the standard IDA blend equation follows this abstract), and the limits of detection (LODs) were 0.1 μg L-1 for both TSI and TII measurements. Online couplings of Ion Chromatography (IC)-ICP-MS and gel electrophoresis (GE)-ICP-MS were developed for soluble iodine speciation, with anion exchange columns adopted for the IC-ICP-MS systems. Iodide, iodate, and unknown signal(s) were observed with these methods. Iodide and iodate were separated successfully, with LODs of 0.1 and 0.5 μg L-1, respectively. The unknown signals were soluble organic iodine species (SOI); they were quantified using the iodide calibration curve but have not yet been clearly identified. These analytical methods were applied to iodine measurements of marine aerosol samples from worldwide field campaigns. The median TSI and TII concentrations in PM2.5 were 240.87 pmol m-3 and 105.37 pmol m-3 at Mace Head, on the west coast of Ireland, and 119.10 pmol m-3 and 97.88 pmol m-3 in the cruise campaign over the North Atlantic Ocean during June-July 2006. Inorganic iodine, namely iodide and iodate, was the minor iodine fraction in both campaigns, accounting for 7.3% (median) and 5.8% (median) of PM2.5 iodine at Mace Head and over the North Atlantic Ocean, respectively. Iodide concentrations were higher than iodate in most of the samples. In contrast, more than 90% of TSI was SOI, and the SOI concentration was correlated significantly with the iodide concentration; the correlation coefficients (R2) were higher than 0.5 both at Mace Head and in the first leg of the cruise. Size-fractionated aerosol samples collected with a five-stage Berner cascade impactor showed similar proportions of inorganic and organic iodine. Significant correlations between SOI and iodide were obtained in the particle size ranges of 0.25-0.71 μm and 0.71-2.0 μm, with better correlations on sunny days. TSI and iodide existed mainly in the fine particle size range (< 2.0 μm), whereas iodate resided in the coarse range (2.0-10 μm). Aerosol iodine is suggested to be related to primary iodine release in the tidal zone, and meteorological conditions such as solar radiation and rain were observed to influence aerosol iodine. During the ship campaign over the North Atlantic Ocean (January-February 2007), the median TSI concentrations ranged from 35.14 to 60.63 pmol m-3 among the five stages. Likewise, SOI was the most abundant iodine fraction in TSI, with a median of 98.6%, and a significant correlation between SOI and iodide was also present in the size range of 2.0-5.9 μm. Higher iodate concentrations were again found in the larger particle size range, similar to what was observed at Mace Head. Air-mass transport from the biogenic bloom region and the Antarctic ice front sector was observed to play an important role in aerosol iodine enhancement. The TSI concentrations observed along the 30,000 km round-trip cruise from East Asia to Antarctica during November 2005 - March 2006 were much lower than in the other campaigns, with a median of 6.51 pmol m-3. Approximately 70% of the TSI was SOI on average, and the abundance of inorganic iodine, i.e. iodate and iodide, was less than 30% of TSI. The median value of iodide was 1.49 pmol m-3, more than four-fold higher than that of iodate (median, 0.28 pmol m-3). The spatial variation indicated that the highest aerosol iodine appeared in the tropical area. Iodine levels were considerably lower in coastal Antarctica, with a TSI median of 3.22 pmol m-3; however, air-mass transport from the ice front sector was correlated with enhanced TSI levels, suggesting an as yet unidentified source of iodine in the polar region. In addition, a significant correlation between SOI and iodide was also observed in this campaign. A global picture of aerosol iodine emerged from the field campaigns in this work: SOI was shown to be globally ubiquitous, given its presence at the different sampling locations and its high proportion of TSI in marine aerosols. The correlations between SOI and iodide were obtained not only at different locations but also in different seasons, implying a possible mechanism of iodide production through SOI decomposition. Nevertheless, future studies are needed to improve the current understanding of iodine chemistry in the MBL (e.g. SOI identification and quantification, as well as updated modelling involving organic matter).
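For the isotope dilution quantification mentioned in the abstract above, the standard single-blend IDA relation (a general textbook form, assumed here rather than quoted from this work) expresses the amount of analyte N_x in the sample through the amount of spike N_y and the isotope ratio R_B measured in the blend:

\[
R_B \;=\; \frac{N_x A_x + N_y A_y}{N_x B_x + N_y B_y}
\qquad\Longrightarrow\qquad
N_x \;=\; N_y\,\frac{A_y - R_B\,B_y}{R_B\,B_x - A_x},
\]

where A and B are the relative abundances of the two monitored isotopes in the sample (subscript x) and in the spike (subscript y).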
Abstract:
In the last few years the resolution of numerical weather prediction (NWP) models has become higher and higher with progress in technology and knowledge. As a consequence, a large amount of initial data has become fundamental for a correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and their operational application is becoming more frequent. The fact that many NWP centres have recently put into operation convection-permitting forecast models, many of which assimilate radar data, emphasizes the need for an approach to providing quality information, which is needed in order to prevent radar errors from degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma scale; this scale can be modelled only with the highest-resolution NWP models, such as the COSMO-2 model. One of the problems in modelling extreme convective events is related to the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated into a high-resolution model is about 10 km, a value too coarse for a correct representation of the initial conditions of convection. Assimilation of radar data, with its resolution of about a kilometre every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. The convective capabilities of the COSMO-2 model are then investigated through some case studies. Finally, this work shows some preliminary experiments on coupling a high-resolution meteorological model with a hydrological one.
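As background for the latent heat nudging scheme referred to above, LHN is usually formulated (in a generic form that may differ in detail from the COSMO implementation) as a scaling of the model latent-heating profile by the ratio of observed to modelled surface precipitation rate:

\[
\Delta T_{\mathrm{LHN}}(z) \;=\; \left(\frac{RR_{\mathrm{obs}}}{RR_{\mathrm{mod}}} - 1\right)\Delta T_{\mathrm{LH}}(z),
\]

where Delta T_LH(z) is the model latent-heating increment at height z; since the radar-derived RR_obs enters this temperature increment directly, a quality description of the radar data, as proposed in the work above, is needed to limit the impact of radar errors.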
Abstract:
I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). It is a method that can probe hydrodynamic flows near solid surfaces on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high-NA oil-immersion objective. Due to the fast decay of the evanescent wave, fluorescence only occurs for tracers within roughly 100 nm of the surface, resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two observation volumes, laterally shifted in the flow direction and created by two confocal pinholes, are independently measured and recorded. The cross-correlation of these signals provides important information about the tracers' motion and thus their flow velocity. Due to the high sensitivity of the method, fluorescent species of different sizes, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and to use it to measure the shear rate and slip length of water flowing on hydrophilic and hydrophobic surfaces. However, in order to extract these parameters from the measured correlation curves, a quantitative data analysis is needed. This is not a straightforward task: the complexity of the problem makes it impossible to derive analytical expressions for the correlation functions needed to fit the experimental data. Therefore, in order to process and interpret the experimental results, I also describe a new numerical method for analysing the acquired auto- and cross-correlation curves: Brownian Dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data. I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions, in order to establish a fully quantitative method to retrieve the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed to fit the experiments; this provides the optimum parameter values together with their statistical error bars. The approach is well suited for both modern desktop PCs and massively parallel computers, the latter allowing the data analysis to be completed within short computing times. I applied this method to study the flow of an aqueous electrolyte solution near smooth hydrophilic and hydrophobic surfaces. Generally, no slip is expected on a hydrophilic surface, while some slippage may exist on a hydrophobic surface. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10-15 nm or lower and, within the limitations of the experiments and the model, indistinguishable from zero.
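For orientation, the evanescent illumination responsible for the roughly 100 nm normal resolution quoted above decays exponentially with the distance z from the interface; in standard total-internal-reflection optics (not specific to this particular setup),

\[
I(z) \;=\; I_0\, e^{-z/d}, \qquad
d \;=\; \frac{\lambda}{4\pi\sqrt{\,n_1^{2}\sin^{2}\theta - n_2^{2}\,}},
\]

where lambda is the vacuum wavelength, theta the angle of incidence beyond the critical angle, and n_1, n_2 the refractive indices of the glass and the liquid; for visible light the penetration depth d is typically on the order of 100 nm.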
Abstract:
The aim of this work is to present various aspects of the numerical simulation of particle and radiation transport for industrial and environmental protection applications, enabling the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with speeding up the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing a considerable reduction in iterations and CPU time. The second part is devoted to the study of the temperature-dependent elastic scattering of neutrons by heavy isotopes near the thermal energy range. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote-sensing applications of radiative transfer used to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers, varying the age of the snow or ice layer, its thickness, the presence or absence of other underlying layers, and the amount of dust included in the snow, creating a framework able to decipher spectral signals collected by orbiting detectors.
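As a reminder of the quantity computed in the second part, the Legendre moments of the (Doppler-broadened) transfer kernel are obtained by projecting it onto Legendre polynomials in the scattering cosine; with one common normalization convention (an assumption here),

\[
\sigma_\ell(E \to E') \;=\; 2\pi \int_{-1}^{1} \sigma(E \to E', \mu)\, P_\ell(\mu)\, \mathrm{d}\mu ,
\]

and in the work described above these integrals are evaluated by direct numerical quadrature after the gas-model Doppler convolution of the kernel.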
Abstract:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used directly to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example a guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow, and can therefore deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management, due to the potential catastrophic consequences. We propose a method that gives a certified answer as to whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is, in some sense, the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective-value bounds are in most cases very close to the known exact objective values. We demonstrate the usability of the method by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in running time, especially on the large instances.
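A minimal sketch, assuming a dense constraint system A x <= b and using Python's exact rational arithmetic; this illustrates only the flavour of an exact check at a critical place, not the certification algorithm developed in the thesis:

# Hypothetical sketch: certify feasibility of a candidate point for A x <= b
# with exact rational arithmetic. Not the thesis' actual method.
from fractions import Fraction

def certify_feasible(A, b, x):
    """Return 'strict', 'feasible' or 'unknown' for the candidate point x."""
    A_exact = [[Fraction(a) for a in row] for row in A]
    b_exact = [Fraction(v) for v in b]
    x_exact = [Fraction(v) for v in x]
    strict = True
    for row, rhs in zip(A_exact, b_exact):
        lhs = sum(a * v for a, v in zip(row, x_exact))
        if lhs > rhs:
            return "unknown"      # candidate violates this constraint exactly
        if lhs == rhs:
            strict = False        # candidate lies on this constraint's boundary
    return "strict" if strict else "feasible"

# Example: x1 + x2 <= 4, x1 >= 0, x2 >= 0, floating-point candidate (1.5, 1.5).
print(certify_feasible([[1, 1], [-1, 0], [0, -1]], [4, 0, 0], [1.5, 1.5]))

A point certified as 'strict' satisfies every checked inequality with room to spare, which is the flavour of the relative-interior certificate described above; confining exact arithmetic to such small checks is what keeps the overall approach computationally efficient.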
Abstract:
The thesis presents a probabilistic approach to the theory of semigroups of operators, with particular attention to Markov and Feller semigroups. The first goal of this work is the proof of the fundamental Feynman-Kac formula, which gives the solution of certain parabolic Cauchy problems in terms of the expected value of the initial condition evaluated at the associated stochastic diffusion process. The second goal is the characterization of the principal eigenvalue of the generator of a semigroup with Markov transition probability function, and of second-order elliptic operators with real coefficients that are not necessarily self-adjoint. The thesis is divided into three chapters. In the first chapter we study Brownian motion and some of its main properties, stochastic processes, the stochastic integral and the Itô formula, in order to finally arrive, in the last section, at the proof of the Feynman-Kac formula. The second chapter is devoted to the probabilistic approach to semigroup theory, and it is here that we introduce Markov and Feller semigroups. Special emphasis is given to the Feller semigroup associated with Brownian motion. The third and last chapter is divided into two sections. In the first we present the abstract characterization of the principal eigenvalue of the infinitesimal generator of a semigroup of operators acting on continuous functions over a compact metric space. In the second section this approach is used to study the principal eigenvalue of elliptic partial differential operators with real coefficients. Finally, in the appendix, we gather in more detail some of the technical results used in the thesis. Appendix A is devoted to the Sion minimax theorem, while in Appendix B we prove the Chernoff product formula for not necessarily self-adjoint operators.
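For reference, one standard statement of the Feynman-Kac formula proved in the first chapter (the sign and potential conventions below are a common textbook choice and not necessarily the ones adopted in the thesis): if u solves the terminal-value problem

\[
\partial_t u(t,x) + \mathcal{L}\,u(t,x) - V(x)\,u(t,x) = 0, \qquad u(T,x) = f(x),
\]

with \mathcal{L} the infinitesimal generator of the diffusion (X_s), then, under suitable regularity and growth conditions,

\[
u(t,x) \;=\; \mathbb{E}\!\left[\, e^{-\int_t^T V(X_s)\,\mathrm{d}s}\, f(X_T) \,\middle|\, X_t = x \right].
\]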
Abstract:
Over the past twenty years, new technologies have required an increasing use of mathematical models to better understand structural behavior; the finite element method is the one most widely used. However, the reliability of this method applied to different situations has to be verified each time. Since it is not possible to model reality completely, different hypotheses must be made: these are the main problems of FE modeling. The following work deals with this problem and tries to find a way to identify some of the main unknown parameters of a structure. This research focuses on a particular path of study and development, but the same concepts can be applied to other objects of research. The main purpose of this work is the identification of the unknown boundary conditions of a bridge pier using data acquired experimentally in field tests and a FEM modal updating process. This work does not claim to be new or innovative: a lot of work has been done in past years on this problem, and many solutions have been presented and published. This thesis simply reworks some of the main aspects of the structural optimization process, using a real structure as a fitting model.
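A minimal sketch, with entirely hypothetical functions and numbers, of the kind of model-updating loop implied above: unknown boundary-condition parameters of the pier model (for example rotational and translational stiffnesses at the base) are tuned until the FE model's natural frequencies match the experimentally identified ones:

# Hypothetical sketch of FE model updating. The surrogate fe_frequencies()
# stands in for a real FE modal analysis; it is an assumption, not the thesis' code.
import numpy as np
from scipy.optimize import minimize

f_measured = np.array([1.8, 5.6, 11.2])   # measured natural frequencies (Hz), illustrative

def fe_frequencies(theta):
    """Placeholder for an FE modal analysis returning the first natural
    frequencies of the pier model for boundary stiffness parameters theta."""
    k_rot, k_transl = theta
    base = np.array([1.5, 5.0, 10.0])
    return base * (1.0 + 0.05 * np.log1p(k_rot) + 0.02 * np.log1p(k_transl))

def misfit(theta):
    # Sum of squared relative differences between computed and measured frequencies.
    f_model = fe_frequencies(theta)
    return np.sum(((f_model - f_measured) / f_measured) ** 2)

result = minimize(misfit, x0=[1.0, 1.0], bounds=[(0.0, None), (0.0, None)])
print("identified boundary stiffness parameters:", result.x)

In practice fe_frequencies would call the actual FE solver on the updated model at each iteration; the surrogate above is only there so the sketch runs on its own.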
Abstract:
The prognosis of patients in whom pulmonary embolism (PE) is suspected but ruled out is poorly understood. We evaluated whether the initial assessment of clinical probability of PE could help to predict the prognosis for these patients.
Abstract:
The aim of our study was to analyze the neurophysiological monitoring method with regard to its potential problems during open or endovascular repair of the thoracic and thoracoabdominal aorta. Furthermore, preventive strategies for the main pitfalls of this method were developed.
Abstract:
The diagnostic performance of isolated high-grade prostatic intraepithelial neoplasia in prostatic biopsies has recently been questioned, and molecular analysis of high-grade prostatic intraepithelial neoplasia has been proposed for improved prediction of prostate cancer. Here, we retrospectively studied the value of isolated high-grade prostatic intraepithelial neoplasia and the immunohistochemical markers α-methylacyl coenzyme A racemase, Bcl-2, annexin II, and Ki-67 for better risk stratification of high-grade prostatic intraepithelial neoplasia in our local Swiss population. From an initial 165 diagnoses of isolated high-grade prostatic intraepithelial neoplasia, we refuted 61 (37%) after consensus expert review. We used 30 reviewed high-grade prostatic intraepithelial neoplasia cases with simultaneous prostate cancer on biopsy as positive controls. Rebiopsies were performed in 66 patients with isolated high-grade prostatic intraepithelial neoplasia, and the median time interval between initial and repeat biopsy was 3 months. Twenty (30%) of the rebiopsies were positive for prostate cancer, and 10 (15%) showed persistent isolated high-grade prostatic intraepithelial neoplasia. Another 2 (3%) of the 66 patients were diagnosed with prostate cancer in a second rebiopsy. Prostate-specific antigen serum levels did not significantly differ between the 22 patients with prostate cancer in rebiopsies, the 44 without prostate cancer in rebiopsies, and the 30 positive control patients (median values, 8.1, 7.7, and 8.8 ng/mL, respectively). None of the immunohistochemical markers, including α-methylacyl coenzyme A racemase, Bcl-2, annexin II, and Ki-67, showed a statistically significant association with the risk of prostate cancer in repeat biopsies. Taken together, the 33% risk of being diagnosed with prostate cancer after a diagnosis of high-grade prostatic intraepithelial neoplasia justifies rebiopsy, at least in our population, which is not systematically screened by prostate-specific antigen. There is not enough evidence that immunohistochemical markers can reproducibly stratify the risk of prostate cancer after a diagnosis of isolated high-grade prostatic intraepithelial neoplasia.
Abstract:
In order to improve the ability to link chemical exposure to toxicological and ecological effects, aquatic toxicology will have to move from observing which chemical concentrations induce adverse effects to more explanatory approaches, that is, concepts which build on knowledge of the biological processes and pathways leading from exposure to adverse effects, as well as on knowledge of stressor vulnerability as given by the genetic, physiological and ecological (e.g., life-history) traits of biota. Developing aquatic toxicology in this direction faces a number of challenges, including (i) taking into account species differences in toxicant responses on the basis of the evolutionarily developed diversity of phenotypic vulnerability to environmental stressors; (ii) utilizing diversified biological response profiles to serve as biological read-across for prioritizing chemicals, categorizing them according to modes of action, and guiding targeted toxicity evaluation; (iii) predicting the ecological consequences of toxic exposure from knowledge of how biological processes and phenotypic traits lead to effect propagation across the levels of biological hierarchy; and (iv) the search for concepts to assess the cumulative impact of multiple stressors. An underlying theme in these challenges is that, in addition to the question of what the chemical does to the biological receptor, we should give increasing emphasis to the question of how the biological receptor handles the chemical, i.e., through which pathways the initial chemical-biological interaction extends to the adverse effects, and how this extension is modulated by adaptive or compensatory processes as well as by the phenotypic traits of the biological receptor.