946 results for Fundamentals in linear algebra
Abstract:
A rigorous asymptotic theory for Wald residuals in generalized linear models is not yet available. The authors provide matrix formulae of order O(n^{-1}), where n is the sample size, for the first two moments of these residuals. The formulae can be applied to many regression models widely used in practice. The authors suggest adjusted Wald residuals for these models with approximately zero mean and unit variance. The expressions were used to analyze a real dataset. Some simulation results indicate that the adjusted Wald residuals are better approximated by the standard normal distribution than the Wald residuals.
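Schematically, the adjustment amounts to studentizing each Wald residual with its approximate moments (our notation; the explicit O(n^{-1}) matrix formulae are in the paper):

```latex
% Sketch of the adjustment, in our notation: E(w_i) and Var(w_i) denote the
% first two moments of the Wald residual w_i, correct to order O(n^{-1}).
\tilde{w}_i = \frac{w_i - \mathrm{E}(w_i)}{\sqrt{\mathrm{Var}(w_i)}},
\qquad \mathrm{E}(\tilde{w}_i) \approx 0, \quad \mathrm{Var}(\tilde{w}_i) \approx 1 .
```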
Abstract:
Further advances in magnetic hyperthermia might be limited by biological constraints, such as using sufficiently low frequencies and low field amplitudes to inhibit harmful eddy currents inside the patient's body. This motivates the need to optimize the heating efficiency of the nanoparticles, referred to as the specific absorption rate (SAR). Among the several properties currently under research, one of particular importance is the transition from the linear to the non-linear regime that takes place as the field amplitude is increased, an aspect where the magnetic anisotropy is expected to play a fundamental role. In this paper we investigate the heating properties of cobalt ferrite and maghemite nanoparticles under the influence of a 500 kHz sinusoidal magnetic field with varying amplitude, up to 134 Oe. The particles were characterized by TEM, XRD, FMR and VSM, from which the most relevant morphological, structural and magnetic properties were inferred. Both materials have similar size distributions and saturation magnetizations, but strikingly different magnetic anisotropies. From magnetic hyperthermia experiments we found that, while at low fields maghemite is the best nanomaterial for hyperthermia applications, above a critical field, close to the transition from the linear to the non-linear regime, cobalt ferrite becomes more efficient. The results were also analyzed with respect to the energy conversion efficiency and compared with dynamic hysteresis simulations. Additional analysis with nickel-, zinc- and copper-ferrite nanoparticles of similar sizes confirmed the importance of the magnetic anisotropy and the damping factor. Further, the analysis of the characterization parameters suggested core-shell nanostructures, probably due to a surface passivation process during the nanoparticle synthesis. Finally, we discussed the effect of particle-particle interactions and their consequences, in particular regarding discrepancies between estimated parameters and theoretical expectations. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. [http://dx.doi.org/10.1063/1.4739533]
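For context, the SAR discussed above is commonly estimated from the area of the dynamic hysteresis loop. A minimal sketch under that standard relation (the loop data and density below are illustrative placeholders, not the paper's measurements):

```python
# Minimal sketch: SAR from the dynamic hysteresis loop, SAR = f * |A| / rho,
# where A = mu0 * (closed-loop integral of M dH) is the dissipated energy per
# cycle and unit volume. Loop data here are synthetic placeholders.
import numpy as np

mu0 = 4e-7 * np.pi            # vacuum permeability (T m / A)
f = 500e3                     # field frequency (Hz), as in the experiment above
rho = 5e3                     # illustrative particle density (kg / m^3)

t = np.linspace(0.0, 2.0 * np.pi, 1000)
H = 1e4 * np.cos(t)           # applied field (A/m)
M = 3e5 * np.cos(t - 0.3)     # magnetization lagging the field -> open loop

A = mu0 * np.trapz(M, H)      # signed loop area (J / m^3)
SAR = f * abs(A) / rho        # specific absorption rate (W / kg)
print(f"SAR ~ {SAR:.1f} W/kg")
```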
Abstract:
The existence and stability of three-dimensional (3D) solitons in cross-combined linear and nonlinear optical lattices are investigated. In particular, starting from an optical lattice (OL) configuration that is linear in the x-direction and nonlinear in the y-direction, we consider the z-direction either unconstrained (quasi-2D OL case) or carrying another linear OL (full 3D case). We perform this study both analytically and numerically: analytically by a variational approach based on a Gaussian ansatz for the soliton wavefunction, and numerically by relaxation methods and direct integrations of the corresponding Gross-Pitaevskii equation. We conclude that, while 3D solitons in the quasi-2D OL case are always unstable, the addition of another linear OL in the z-direction allows us to stabilize 3D solitons for both attractive and repulsive mean interactions. From our results, we suggest the possible use of spatial modulations of the nonlinearity in one of the directions as a tool for the management of stable 3D solitons.
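A schematic form of the model described above (our transcription; the amplitudes V_x, V_z, nonlinearity coefficients g_0, g_1 and wavenumber k are illustrative symbols, not the paper's exact parametrization):

```latex
% Gross-Pitaevskii equation with a cross-combined lattice: linear OL along x
% (and optionally z), nonlinearity modulated along y; V_z = 0 recovers the
% quasi-2D OL case.
i\,\partial_t \psi = \Big[ -\tfrac{1}{2}\nabla^2 + V_x \cos(2kx) + V_z \cos(2kz)
  + \big( g_0 + g_1 \cos(2ky) \big) |\psi|^2 \Big] \psi .
```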
Abstract:
Spatial linear models have been applied in numerous fields, such as agriculture, geoscience and environmental sciences, among many others. Modelling the spatial dependence structure, using a geostatistical approach, is an indispensable tool for estimating the parameters that define this structure. However, this estimation may be greatly affected by the presence of atypical observations in the sampled data. The purpose of this paper is to use diagnostic techniques to assess the sensitivity of the maximum-likelihood estimators, covariance functions and linear predictor to small perturbations in the data and/or the spatial linear model assumptions. The methodology is illustrated with two real data sets. The results allowed us to conclude that the presence of atypical values in the sample data has a strong influence on thematic maps, changing the spatial dependence structure.
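As a toy illustration of this kind of sensitivity assessment (a minimal sketch, not the authors' methodology: we assume a zero-mean Gaussian process with an exponential covariance and use simple case-deletion refits of the maximum-likelihood estimates):

```python
# Minimal sketch: ML fit of an exponential covariance model and a case-deletion
# check of how single observations shift the estimates (sigma^2, phi, tau^2).
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(60, 2))
D = cdist(coords, coords)
V_true = 1.0 * np.exp(-D / 2.0) + 0.1 * np.eye(60)   # sigma2=1, phi=2, tau2=0.1
y = np.linalg.cholesky(V_true) @ rng.standard_normal(60)

def nll(log_params, y, D):
    """Negative Gaussian log-likelihood; log-parametrized to stay positive."""
    s2, phi, t2 = np.exp(log_params)
    V = s2 * np.exp(-D / phi) + t2 * np.eye(len(y))
    L = np.linalg.cholesky(V)
    a = np.linalg.solve(L, y)
    return 0.5 * (a @ a) + np.log(np.diag(L)).sum()

fit = minimize(nll, np.log([1.0, 1.0, 0.1]), args=(y, D))
base = np.exp(fit.x)

for i in range(3):  # influence of the first few observations
    keep = np.delete(np.arange(len(y)), i)
    fit_i = minimize(nll, fit.x, args=(y[keep], D[np.ix_(keep, keep)]))
    print(i, np.exp(fit_i.x) - base)   # shift in (sigma2, phi, tau2)
```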
Abstract:
Kantor and Trishin (1997) [3] described the algebra of polynomial invariants of the adjoint representation of the Lie superalgebra gl(m|n) and a related algebra A_s of what they called pseudosymmetric polynomials over an algebraically closed field K of characteristic zero. The algebra A_s was investigated earlier by Stembridge (1985), who in [9] called the elements of A_s supersymmetric polynomials and determined generators of A_s. The case of positive characteristic p of the ground field K has recently been investigated by La Scala and Zubkov (in press) in [6]. We extend their work and give a complete description of generators of polynomial invariants of the adjoint action of the general linear supergroup GL(m|n) and generators of A_s.
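To make the notion concrete, here is a small check (our illustration, in one common convention) that the super power sums p_r = sum_i x_i^r - sum_j y_j^r are supersymmetric: they are symmetric in each set of variables, and the substitution x_1 = y_1 = t leaves them independent of t.

```python
# Verify the cancellation property of the super power sums with sympy.
import sympy as sp

t = sp.symbols('t')
x = sp.symbols('x1:4')   # x1, x2, x3
y = sp.symbols('y1:4')   # y1, y2, y3

for r in range(1, 5):
    p = sum(v**r for v in x) - sum(w**r for w in y)
    specialized = p.subs({x[0]: t, y[0]: t})   # set x1 = y1 = t
    assert sp.diff(specialized, t) == 0        # independent of t
print("p_1, ..., p_4 pass the cancellation test")
```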
Updating incomplete factorization preconditioners for shifted linear systems arising in a wind model
Abstract:
In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models where the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical words, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the spaces considered causes a problem of non-continuity of the solution and hence of inconsistency, from a frequentist point of view, of the posterior distribution (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this problem of ill-posedness. The first one consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, so that I end up with a new object that I call the regularized posterior distribution and that I propose as a solution of the inverse problem. The second approach consists in specifying a prior distribution on the parameter of interest of the g-prior type. I then identify a class of models for which the prior distribution is able to correct for the ill-posedness even in infinite-dimensional problems. I study asymptotic properties of these proposed solutions and prove that, under some regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a "frequentist" sense. Once the general theory is set, I apply my Bayesian nonparametric methodology to different estimation problems. First, I apply this estimator to deconvolution and to hazard rate, density and regression estimation. Then, I consider the estimation of an instrumental regression, which is useful in microeconometrics when dealing with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.
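The Tikhonov idea behind the regularized posterior can be sketched on a discretized linear inverse problem (a minimal illustration with a made-up smoothing operator, not the thesis' functional-space construction):

```python
# Minimal sketch: for y = K phi + noise with an ill-posed (smoothing) K, the
# Tikhonov-regularized estimate is phi_alpha = (alpha I + K^T K)^{-1} K^T y.
import numpy as np

n = 200
s = np.linspace(0, 1, n)
K = np.exp(-50.0 * (s[:, None] - s[None, :])**2) / n   # smoothing kernel
phi_true = np.sin(2 * np.pi * s)
rng = np.random.default_rng(0)
y = K @ phi_true + 1e-4 * rng.standard_normal(n)

alpha = 1e-6
phi_alpha = np.linalg.solve(alpha * np.eye(n) + K.T @ K, K.T @ y)
phi_naive = np.linalg.lstsq(K, y, rcond=None)[0]       # alpha -> 0: unstable

print("regularized error:", np.linalg.norm(phi_alpha - phi_true))
print("unregularized error:", np.linalg.norm(phi_naive - phi_true))
```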
Abstract:
This thesis is dedicated to the analysis of non-linear pricing in oligopoly. Non-linear pricing is a fairly predominant practice in most real markets, which are mostly characterized by some amount of competition. The sophistication of pricing practices has increased in recent decades due to technological advances that have allowed companies to gather more and more data on consumers' preferences. The first essay of the thesis highlights the main characteristics of oligopolistic non-linear pricing. Non-linear pricing is a special case of price discrimination. The theory of price discrimination has to be modified in the presence of oligopoly: in particular, a crucial role is played by the competitive externality, which implies that product differentiation is closely related to the possibility of discriminating. The essay reviews the theory of competitive non-linear pricing starting from its foundations, mechanism design under common agency. The different approaches to modelling non-linear pricing are then reviewed. In particular, the difference between price and quantity competition is highlighted. Finally, the close link between non-linear pricing and the recent developments in the theory of vertical differentiation is explored. The second essay shows how the effects of non-linear pricing are determined by the relationship between the demand and the technological structure of the market. The chapter focuses on a model in which firms supply a homogeneous product in two different sizes. Information about consumers' reservation prices is incomplete and the production technology is characterized by size economies. The model provides insights on the sizes of the products that one finds in the market. Four equilibrium regions are identified, depending on the relative intensity of size economies with respect to consumers' valuation of the good: regions in which the product is supplied in a single unit, in several different sizes, or only in a very large one. Both the private and the social desirability of non-linear pricing vary across the different equilibrium regions. The third essay considers the broadband internet market. Non-discrimination issues seem to be at the core of the recent debate on whether or not to regulate the internet. One of the main questions posed is whether the telecom companies, which own the networks constituting the internet, should be allowed to offer quality-contingent contracts to content providers. The aim of this essay is to analyze the issue through a stylized two-sided market model of the web that highlights the effects of such discrimination on quality, prices and the participation of providers and final users in the internet. An overall welfare comparison is proposed, concluding that the final effects of regulation crucially depend on both the technology and the preferences of agents.
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a likewise increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to a higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process in higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
Abstract:
This thesis aims to analyze the static stress components of a tank, made of API 5L X52 steel, subjected to bending and internal pressure loads, using the finite element program PLCd4, developed at the International Center for Numerical Methods in Engineering (CIMNE - Barcelona). This type of analysis is part of the European project ULCF, whose goal is the study of ultra-low-cycle fatigue in steel structures. Before examining the complete structure of the tank, the material behavior was studied in order to implement in the program a new type of curve that best represents the evolution of the internal stresses. During the preparatory work for the thesis, an algorithm for distributing surface pressures on 3D bodies was added to the program and subsequently used for the analysis of the internal pressure in the tank. FEM analyses of the tank were carried out in several load configurations, seeking to model as faithfully as possible the supporting structure of the real "full scale test" case. From the analytical point of view, the results obtained are satisfactory, as they reflect a correct behavior of the tank under very high pressures and confirm the soundness of the program for computational analysis.
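The basic idea of distributing a surface pressure over a 3D mesh can be sketched as follows (our simplification with flat triangular facets and equal lumping onto the nodes, not the actual PLCd4 algorithm):

```python
# Minimal sketch: equivalent nodal forces for a uniform pressure p on a
# triangulated surface; each facet carries p * area along its normal,
# lumped equally onto its three nodes.
import numpy as np

def pressure_to_nodal_forces(nodes, triangles, p):
    """nodes: (N, 3) coordinates; triangles: (M, 3) node indices; p: pressure."""
    F = np.zeros_like(nodes)
    for tri in triangles:
        a, b, c = nodes[tri]
        n = np.cross(b - a, c - a)    # |n| = 2 * facet area, direction = normal
        F[tri] += p * n / 2.0 / 3.0   # total facet force p*area*n_hat, lumped
    return F

# one square face made of two triangles, unit pressure
nodes = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
print(pressure_to_nodal_forces(nodes, tris, 1.0))   # nodal forces sum to (0, 0, 1)
```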
Abstract:
The collapse of linear polyelectrolyte chains in a poor solvent: when does a collapsing polyelectrolyte collect its counterions? The collapse of polyions in a poor solvent is a complex process and an active research subject in the theoretical polyelectrolyte community. The complexity is due to the subtle interplay between hydrophobic effects, electrostatic interactions, entropy elasticity, intrinsic excluded volume, as well as specific counterion and co-ion properties. Long-range Coulomb forces can obscure single-molecule properties. The approach presented here is to use just a small amount of screening salt in combination with a very high sample dilution, in order to screen intermolecular interactions while keeping intramolecular interactions as much as possible (polyelectrolyte concentration c_p ≤ 12 mg/L, salt concentration c_s = 10^-5 mol/L), an approach so far not described in the literature. During collapse, the polyion is subject to a drastic change in size along with a strong reduction of free counterions in solution. Therefore, light scattering was utilized to obtain the size of the polyion, whereas a conductivity setup was developed to monitor the progress of counterion collection by the polyion. Partially quaternized PVPs below and above the Manning limit were investigated and compared to the collapse of their uncharged precursor. The collapses were induced by an isorefractive solvent/non-solvent mixture consisting of 1-propanol and 2-pentanone, with nearly constant dielectric constant. The solvent quality for the uncharged polyion could be quantified, which, for the first time, allowed the experimental investigation of the effect of electrostatic interaction prior to and during polyion collapse. Given that the Manning parameter M for QPVP4.3 is as low as l_B/c = 0.6 (l_B the Bjerrum length and c the mean contour distance between two charges), no counterion binding should occur. However, the Walden product decreases upon the first addition of non-solvent, and the decrease accelerates when the structural collapse sets in. Since the dielectric constant of the solvent remains virtually constant during the chain collapse, the counterion binding is entirely caused by the reduction in the polyion chain dimension. The collapse is shifted to lower non-solvent contents w_ns with higher degrees of quaternization, as the samples QPVP20 and QPVP35 show (M = 2.8 and 4.9, respectively). The combination of light scattering and conductivity measurements revealed for the first time that polyion chains already collect their counterions well above the theta-dimension, when the dimensions start to shrink. Because only small amounts of screening salt are present, strong electrostatic interactions bias dynamic as well as static light scattering measurements. An extended Zimm formula was derived to account for this interaction and to obtain the real chain dimensions. The effective degree of dissociation g could be obtained semi-quantitatively by combining these extrapolated static light scattering results with the conductivity measurements. One can conclude that the expansion factor α and the effective degree of ionization of the polyion are mutually dependent. In the good-solvent regime, g of QPVP4.3, QPVP20 and QPVP35 decreased in the order 1 > g_4.3 > g_20 > g_35. The low values of g for QPVP20 and QPVP35 are assumed to be responsible for the earlier collapse of the more highly quaternized samples. Collapse theory predicts dipole-dipole attraction to increase accordingly and even predicts a collapse in the good-solvent regime; exactly this was observed for the QPVP35 sample.
The experimental results were compared to a theory of uniform spherical collapse induced by concomitant counterion binding, newly developed by M. Muthukumar and A. Kundagrami. The theory agrees qualitatively with the location of the phase boundary as well as with the trend of an increasing expansion with an increasing degree of quaternization. However, the experimentally determined g for the samples QPVP4.3, QPVP20 and QPVP35 decreases linearly with the degree of quaternization, whereas the theory predicts an almost constant value.
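For orientation, the Manning parameter quoted above, M = l_B/c, follows from the Bjerrum length. A quick numerical illustration (our own; the dielectric constant eps_r below is an assumed placeholder for the 1-propanol/2-pentanone mixture, not a measured value):

```python
# Bjerrum length l_B = e^2 / (4 pi eps0 eps_r kB T) and the mean contour
# distance per charge c = l_B / M implied by the Manning parameters above.
from scipy.constants import e, epsilon_0, k, pi

T = 298.15      # K, assumed room temperature
eps_r = 20.0    # placeholder dielectric constant (assumption)
l_B = e**2 / (4 * pi * epsilon_0 * eps_r * k * T)
print(f"l_B = {l_B * 1e9:.2f} nm")

for label, M in [("QPVP4.3", 0.6), ("QPVP20", 2.8), ("QPVP35", 4.9)]:
    print(f"{label}: mean charge spacing c = {l_B / M * 1e9:.2f} nm")
```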
Abstract:
The Thermodynamic Bethe Ansatz (TBA) analysis is carried out for the extended CP^N class of integrable two-dimensional non-linear sigma models related to the low-energy limit of the AdS_4 x CP^3 type IIA superstring theory. The principal aim of this program is to obtain further non-perturbative consistency checks of the S-matrix proposed to describe the scattering processes between the fundamental excitations of the theory, by analyzing the structure of the Renormalization Group flow. As a noteworthy byproduct, we eventually obtain a novel class of TBA models which fits into the known classification but with several important differences. The TBA framework allows the evaluation of some exact quantities related to the conformal UV limit of the model: the effective central charge, the conformal dimension of the perturbing operator and the field content of the underlying CFT. The knowledge of these physical quantities made it possible to conjecture a perturbed CFT realization of the integrable models in terms of coset Kac-Moody CFTs. The set of numerical tools and programs developed ad hoc to solve the problem at hand is also discussed in some detail, with references to the code.
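For reference, the standard TBA equations behind such an analysis (textbook form, not the specific extended CP^N system derived in the thesis) determine the pseudo-energies epsilon_a and, from them, the effective central charge of the UV limit:

```latex
% Standard TBA system for particles of mass m_a on a circle of circumference R,
% with kernels phi_ab fixed by the S-matrix.
\epsilon_a(\theta) = m_a R \cosh\theta
  - \sum_b \int \frac{d\theta'}{2\pi}\, \phi_{ab}(\theta - \theta')\,
    \ln\!\big(1 + e^{-\epsilon_b(\theta')}\big),
\qquad
c_{\mathrm{eff}}(R) = \frac{3}{\pi^2} \sum_a \int d\theta\,
    m_a R \cosh\theta\, \ln\!\big(1 + e^{-\epsilon_a(\theta)}\big).
```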