928 results for linear matrix inequality (LMI)
Abstract:
In recent papers, the authors obtained formulas for directional derivatives, of all orders, of the immanant and of the m-th ξ-symmetric tensor power of an operator and a matrix, when ξ is a character of the full symmetric group. The operator norm of these derivatives was also calculated. In this paper, similar results are established for generalized matrix functions and for every symmetric tensor power.
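For context, generalized matrix functions have the following standard definition (notation ours, not taken from the abstract): for an n × n matrix A = (a_ij), a subgroup H of the symmetric group S_n, and a character χ of H,

```latex
d^{H}_{\chi}(A) \;=\; \sum_{\sigma \in H} \chi(\sigma) \prod_{i=1}^{n} a_{i\,\sigma(i)}.
```

Taking H = S_n with χ irreducible recovers the immanant; χ the alternating character gives the determinant, and χ ≡ 1 gives the permanent.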
Abstract:
Dissertation submitted for the degree of Master in Chemical and Biochemical Engineering
Abstract:
Gold nanoparticles were dispersed in two different dielectric matrices, TiO2 and Al2O3, using magnetron sputtering and a post-deposition annealing treatment. The main goal of the present work was to study how the two different host dielectric matrices, and the resulting microstructure evolution (of both the nanoparticles and the host matrix itself) promoted by thermal annealing, influenced the physical properties of the films. In particular, the structure and morphology of the nanocomposites were correlated with the optical response of the thin films, namely their localized surface plasmon resonance (LSPR) characteristics. Furthermore, in order to assess the potential application of the two thin-film systems in different types of sensors (namely biological ones), their functional behaviour (hardness and Young's modulus change) was also evaluated. Despite the similar Au concentrations in both matrices (~11 at.%), very different microstructural features were observed, which were found to depend strongly on the annealing temperature. The main structural differences included: (i) the early crystallization of the TiO2 host matrix, while the Al2O3 one remained amorphous up to 800 °C; (ii) different grain size evolution behaviours with the annealing temperature, namely an almost linear increase for the Au:TiO2 system (from 3 to 11 nm) and approximately constant values for the Au:Al2O3 system (4-5 nm). The nanoparticle size distributions were also found to be quite sensitive to the surrounding matrix, suggesting different mechanisms for nanoparticle growth (particle migration and coalescence dominating in TiO2, and Ostwald ripening in Al2O3). These different clustering behaviours induced different transmittance-LSPR responses and good mechanical stability, which opens the possibility of using these nanocomposite thin-film systems in some envisaged applications (e.g. LSPR biosensors).
Abstract:
Inspired by the relational algebra of data processing, this paper addresses the foundations of data analytical processing from a linear algebra perspective. The paper investigates, in particular, how aggregation operations such as cross tabulations and data cubes, essential to quantitative analysis of data, can be expressed solely in terms of matrix multiplication, transposition and the Khatri–Rao variant of the Kronecker product. The approach offers a basis for deriving an algebraic theory of data consolidation, handling the quantitative as well as qualitative sides of data science in a natural, elegant and typed way. It also shows potential for parallel analytical processing, as the parallelization theory of such matrix operations is well established.
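A minimal NumPy sketch of this idea (an illustrative encoding, not the paper's actual formalization): each categorical attribute becomes a 0/1 "projection" matrix, the cross tabulation is literally a matrix product, and the Khatri-Rao (column-wise Kronecker) product fuses two attributes into one.

```python
import numpy as np

def one_hot(col, domain):
    """Encode a data column as a 0/1 'projection' matrix M,
    where M[v, i] == 1 iff record i holds domain value v."""
    M = np.zeros((len(domain), len(col)), dtype=int)
    for i, x in enumerate(col):
        M[domain.index(x), i] = 1
    return M

# Toy dataset: five records with two categorical attributes
city = ["A", "B", "A", "A", "B"]
prod = ["x", "x", "y", "x", "y"]
MC = one_hot(city, ["A", "B"])
MP = one_hot(prod, ["x", "y"])

# Cross tabulation (counts per city/product pair) is a matrix product
ctab = MC @ MP.T

# The Khatri-Rao product fuses both attributes into a single projection
# matrix over the product domain (A,x), (A,y), (B,x), (B,y)
khatri_rao = np.vstack([MC[i] * MP for i in range(MC.shape[0])])
```

Here `ctab[c, p]` counts records with city c and product p, and the row sums of `khatri_rao` reproduce the flattened cross tabulation.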
Abstract:
Although polychlorinated biphenyls (PCBs) have been banned in many countries for more than three decades, exposures to PCBs continue to be of concern due to their long half-lives and carcinogenic effects. In National Institute for Occupational Safety and Health studies, we are using semiquantitative plant-specific job exposure matrices (JEMs) to estimate historical PCB exposures for workers (n = 24,865) exposed to PCBs from 1938 to 1978 at three capacitor manufacturing plants. A subcohort of these workers (n = 410) employed in two of these plants had serum PCB concentrations measured up to four times between 1976 and 1989. Our objectives were to evaluate the strength of association between an individual worker's measured serum PCB levels and the same worker's cumulative exposure estimated through 1977 with (1) the JEM and (2) duration of employment, and (3) to calculate the variance in serum PCB levels explained by the JEM using simple linear regression. Consistently strong and statistically significant associations were observed between the cumulative exposures estimated with the JEM and serum PCB concentrations for all years. The association between duration of employment and serum PCBs was good for highly chlorinated (Aroclor 1254/HPCB) but not for less chlorinated (Aroclor 1242/LPCB) PCBs. In the simple regression models, cumulative occupational exposure estimated using the JEMs explained 14-24% of the variance in Aroclor 1242/LPCB and 22-39% in Aroclor 1254/HPCB serum concentrations. We regard the cumulative exposure estimated with the JEM as a better estimate of PCB body burden than serum concentrations quantified as Aroclor 1242/LPCB and Aroclor 1254/HPCB.
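The explained-variance figures above come from simple linear regression; the computation can be sketched as follows (the exposure and serum numbers are made up for illustration, not the study's data).

```python
import numpy as np

def r_squared(x, y):
    """Share of variance in y explained by a simple linear regression
    of y on x (the coefficient of determination, R^2)."""
    slope, intercept = np.polyfit(x, y, 1)   # least-squares line
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

# Hypothetical numbers: cumulative exposure score vs. serum level
exposure = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
serum = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8])
r2 = r_squared(exposure, serum)  # close to 1: x explains most of y's variance
```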
Abstract:
We extend the linear reforms introduced by Pfähler (1984) to the case of dual taxes. We study the relative effect that linear dual tax cuts have on the inequality of the income distribution (a symmetrical study can be made for dual linear tax hikes). We also introduce measures of the degree of progressivity for dual taxes and show that they can be connected to the Lorenz dominance criterion. Additionally, we study the tax liability elasticity of each of the reforms proposed. Finally, by means of a microsimulation model and a considerably large data set of taxpayers drawn from the 2004 Spanish Income Tax Return population, 1) we compare different yield-equivalent tax cuts applied to the Spanish dual income tax and 2) we investigate how much income redistribution the dual tax reform (Act 35/2006) introduced with respect to the previous tax.
Abstract:
In this paper we study a behavioral model of conflict that provides a basis for choosing certain indices of dispersion as indicators of conflict. We show that the (equilibrium) level of conflict can be expressed as an (approximate) linear function of the Gini coefficient, the Herfindahl-Hirschman fractionalization index, and a specific measure of polarization due to Esteban and Ray.
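Two of the dispersion indices named above can be computed directly; a small sketch (the Esteban-Ray polarization measure, which depends on the paper's parameterization, is omitted):

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative income vector
    (0 = perfect equality, (n-1)/n = maximal concentration)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    lorenz = np.cumsum(x) / x.sum()          # cumulative income shares
    return (n + 1 - 2 * lorenz.sum()) / n

def hhi_fractionalization(counts):
    """Herfindahl-Hirschman fractionalization index:
    1 minus the sum of squared group shares."""
    s = np.asarray(counts, dtype=float)
    s = s / s.sum()
    return 1.0 - (s ** 2).sum()
```

For example, an equal income vector gives a Gini of 0, a single earner gives (n-1)/n, and two equal-sized groups give a fractionalization of 0.5.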
Abstract:
Epipolar geometry is a key concept in computer vision, and estimation of the fundamental matrix is the only way to compute it. This article surveys several methods of fundamental matrix estimation, which have been classified into linear methods, iterative methods and robust methods. All of these methods have been programmed and their accuracy analysed using real images. A summary, accompanied by experimental results, is given.
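Of the linear methods surveyed, the best known is the normalized eight-point algorithm; a self-contained NumPy sketch (not the article's own code):

```python
import numpy as np

def fundamental_8point(x1, x2):
    """Linear (normalized eight-point) estimate of the fundamental
    matrix F from N >= 8 correspondences, x1 and x2 of shape (N, 2).
    Solves x2' F x1 = 0 in least squares, then enforces rank 2."""
    def normalize(pts):
        # Translate to the centroid, scale the mean distance to sqrt(2)
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))])
        return (T @ ph.T).T, T

    p1, T1 = normalize(np.asarray(x1, dtype=float))
    p2, T2 = normalize(np.asarray(x2, dtype=float))
    # One row of the design matrix per correspondence: A f = 0,
    # where f stacks the nine entries of F
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)        # singular vector of smallest singular value
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce the rank-2 constraint
    return T2.T @ F @ T1            # undo the normalizations
```

With noise-free correspondences from a genuine two-view geometry, the recovered F satisfies the epipolar constraint to machine precision; the iterative and robust methods in the survey refine this linear estimate under noise and outliers.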
Abstract:
Doxorubicin is an antineoplastic agent active against sarcoma pulmonary metastasis, but its clinical use is hampered by its myelotoxicity and its cumulative cardiotoxicity when administered systemically. This limitation may be circumvented using the isolated lung perfusion (ILP) approach, wherein a therapeutic agent is infused locoregionally after vascular isolation of the lung. The influence of the mode of infusion (anterograde (AG): through the pulmonary artery (PA); retrograde (RG): through the pulmonary vein (PV)) on doxorubicin pharmacokinetics and lung distribution was unknown. Therefore, a simple, rapid and sensitive high-performance liquid chromatography method has been developed to quantify doxorubicin in four different biological matrices (infusion effluent, serum, and tissues with low or high levels of doxorubicin). The related compound daunorubicin was used as internal standard (I.S.). Following single-step protein precipitation of 500 µL samples with 250 µL acetone and 50 µL of 70% aqueous zinc sulfate solution, the supernatant obtained was evaporated to dryness at 60 °C for exactly 45 min under a stream of nitrogen, and the solid residue was dissolved in 200 µL of purified water. A 100 µL volume was subjected to HPLC analysis on a Nucleosil 100-5 µm C18 AB column equipped with a guard column (Nucleosil 100-5 µm C6H5 (phenyl), end-capped), using gradient elution of acetonitrile and 0.2% 1-heptanesulfonic acid, pH 4: 15/85 at 0 min → 50/50 at 20 min → 100/0 at 22 min → 15/85 at 24 min → 15/85 at 26 min, delivered at 1 mL/min. The analytes were detected by fluorescence with excitation and emission wavelengths set at 480 and 550 nm, respectively. The calibration curves were linear over the range of 2-1000 ng/mL for effluent and plasma matrices, and 0.1-750 µg/g for tissue matrices. The method is precise, with inter-day and intra-day relative standard deviations within 0.5 and 6.7%, and accurate, with inter-day and intra-day deviations between -5.4 and +7.7%. The in vitro stability in all matrices and in processed samples has been studied at -80 °C for 1 month and at 4 °C for 48 h, respectively. During initial studies, heparin used as an anticoagulant was found to profoundly influence the measurement of doxorubicin in effluents collected from animals under ILP. Moreover, the strong matrix effect observed with tissue samples indicates that it is mandatory to prepare doxorubicin calibration standards in biological matrices that best reflect the composition of the samples to be analyzed. This method was successfully applied in animal studies for the analysis of effluent, serum and tissue samples collected from pigs and rats undergoing ILP.
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviates from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require computing sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.
Abstract:
In this paper we examine the effect of tax policy on the relationship between inequality and growth in a two-sector non-scale model. In non-scale models, the long-run equilibrium growth rate is determined by technological parameters and is independent of macroeconomic policy instruments. However, this does not imply that fiscal policy is unimportant for long-run economic performance: it has important effects on the levels of key economic variables such as the per capita stock of capital and output. Hence, although the economy grows at the same rate across steady states, the bases for economic growth may differ.

The model has three essential features. First, we explicitly model skill accumulation; second, we introduce government finance into the production function; and third, we introduce an income tax to mirror the fiscal events of the 1980s and 1990s in the US. The fact that the non-scale model is associated with higher-order dynamics enables it to replicate the distinctly non-linear nature of inequality in the US with relative ease. The results derived in this paper draw attention to the fact that the non-scale growth model not only fits the US data well in the long run (Jones, 1995b) but also possesses unique abilities in explaining short-term fluctuations of the economy. It is shown that during transition the response of the simulated relative wage to changes in the tax code is rather non-monotonic, quite in accordance with the US inequality pattern of the 1980s and early 1990s.

More specifically, we have analyzed in detail the dynamics following the simulation of an isolated tax decrease and an isolated tax increase. After a tax decrease, the skill premium follows a lower trajectory than the one it would follow without the decrease; hence inequality is reduced for several periods after the fiscal shock. Conversely, following a tax increase, the skill premium remains above the trajectory it would follow with no tax increase. Consequently, a tax increase implies a higher level of inequality in the economy.
Abstract:
In this paper we extend the linear reforms introduced by Pfähler (1984) to the case of dual taxes. We study the relative effect that dual linear tax cuts have on the inequality of the income distribution (a symmetrical study can be made for the case of tax increases). We also introduce measures of the degree of progressivity of dual taxes and show that they are connected to the Lorenz dominance criterion. Additionally, we study the tax liability elasticity of each of the proposed reforms. Finally, by means of a microsimulation model and a large database containing information on the 2004 Spanish personal income tax (IRPF), 1) we compare the effect that different reforms would have on the Spanish dual tax and 2) we study how much redistribution of wealth the dual IRPF reform (Act 35/2006) brought about relative to the previous tax.
Abstract:
This paper presents a new method to analyze time-invariant linear networks that allows for inconsistent initial conditions. The method is based on the use of distributions and state equations. Any time-invariant linear network can be analyzed, and the network can involve any kind of independent or controlled sources. In addition, the energy transfers that occur at t=0 are determined, and the concept of connection energy is introduced. The algorithms are easily implemented in a computer program.