991 results for variable parameters


Relevance:

30.00%

Abstract:

The aim of this Master's thesis was to optimize the secondary pre-flotation at the Stora Enso Sachsen GmbH mill. The amount of froth was used as the optimization variable, with ISO brightness, yields and ash content as the optimization parameters. In addition, the effect of flotation consistency on the mill's other flotation processes was studied. The literature part examined the flotation event, the contact between the particles to be removed and the air bubbles, froth formation, and the most important equipment solutions for deinking flotation cells in use. The experimental part studied the effects of reducing the flotation consistency on the mill's flotation processes in terms of ash content, ISO brightness, and the light scattering and light absorption coefficients. The optimization of the secondary pre-flotation was carried out by varying the amount of froth with three different injector sizes (8 mm, 10 mm and 13 mm), of which the middle one increases the volumetric flow of the stock by 30% in the form of air content. The purpose of the optimization was to increase the ISO brightness of the accepted stock fraction and to increase the fiber and total yields of the secondary pre-flotation. Reducing the flotation consistency had beneficial effects on ISO brightness and the light scattering coefficient in each flotation stage. Ash content decreased more in the secondary flotations at lower consistency, whereas in the primary flotations the effect was the opposite. The light absorption coefficient improved in the post-flotations at lower consistency, whereas in the pre-flotations the effect was the opposite. The optimization of the secondary pre-flotation resulted in almost 5% higher ISO brightness in the accepted stock fraction. Total yield improved by 5% and fiber yield by 2% as a result of the optimization. The increase in yields produces annual savings as the production capacity of the deinking plant rises by 0.5%. In addition, the reduction of the rejected stock flow from the secondary pre-flotation produces further savings at the mill's power plant.

Relevance:

30.00%

Abstract:

Parkinson disease (PD) is associated with a clinical course of variable duration, severity, and a combination of motor and non-motor features. Recent PD research has focused primarily on etiology rather than clinical progression and long-term outcomes. For the PD patient, caregivers, and clinicians, information on expected clinical progression and long-term outcomes is of great importance. Today, it remains largely unknown what factors influence long-term clinical progression and outcomes in PD; recent data indicate that the factors that increase the risk of developing PD differ, at least partly, from those that accelerate clinical progression and lead to worse outcomes. Prospective studies will be required to identify factors that influence progression and outcome. We suggest that data for such studies be collected during routine office visits in order to guarantee high external validity of such research. We report here the results of a consensus meeting of international movement disorder experts from the Genetic Epidemiology of Parkinson's Disease (GEO-PD) consortium, who convened to define which long-term outcomes are of interest to patients, caregivers and clinicians, and what is presently known about environmental or genetic factors influencing clinical progression or long-term outcomes in PD. We propose a panel of rating scales that collects a significant amount of phenotypic information, can be administered during a routine office visit, and allows international standardization. Research into the progression and long-term outcomes of PD aims at providing individual prognostic information early, adapting treatment choices, and taking specific measures to provide care optimized to the individual patient's needs.

Relevance:

30.00%

Abstract:

Automated Fiber Placement is being extensively used in the production of major composite components for the aircraft industry. This technology enables the production of tow-steered panels, which have been proven to greatly improve the structural efficiency of composites by means of in-plane stiffness variation and load redistribution. However, traditional straight-fiber architectures are still preferred, partly because of the uncertainties, caused by process-induced defects, in the mechanical performance of the laminates. This experimental work investigates the effect of the fiber angle discontinuities between different tow courses in a ply on the un-notched and open-hole tensile strength of the laminate. The influence of several manufacturing parameters is studied in detail. The results reveal that 'ply staggering' and '0% gap coverage' is an effective combination for reducing the influence of defects in these laminates.

Relevance:

30.00%

Abstract:

Current technology trends in the medical device industry call for the fabrication of massive arrays of microfeatures, such as microchannels, onto non-silicon material substrates with high accuracy, superior precision, and high throughput. Microchannels are typical features used in medical devices for medication dosing into the human body and for analyzing DNA arrays or cell cultures. In this study, the capabilities of machining systems for micro-end milling have been evaluated by conducting experiments, regression modeling, and response surface methodology. In the machining experiments, arrays of microchannels are fabricated on aluminium and titanium plates by micromilling, and the feature size and accuracy (width and depth) and surface roughness are measured. Multicriteria decision making for material and process parameter selection for desired accuracy is investigated by using the particle swarm optimization (PSO) method, an evolutionary computation method related to genetic algorithms (GA). Appropriate regression models are utilized within the PSO, and optimum selection of micromilling parameters for microchannel feature accuracy and surface roughness is performed. An analysis for optimal micromachining parameters in the decision variable space is also conducted. This study demonstrates the advantages of evolutionary computing algorithms in micromilling decision making and process optimization investigations and can be expanded to other applications.
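As an illustration of the optimization step described above, here is a minimal particle swarm optimization sketch in Python. The response-surface objective, its coefficients, and the parameter bounds are invented placeholders standing in for the paper's fitted regression models, not the study's actual values:

```python
import numpy as np

# Hypothetical response-surface model (illustrative coefficients, not the
# paper's fitted values): predicts channel-width error (um) from feed rate
# (um/flute) and spindle speed (krpm).
def width_error(x):
    feed, speed = x[..., 0], x[..., 1]
    return 2.1 - 0.8 * feed - 0.05 * speed + 0.3 * feed**2 + 0.001 * speed**2

def pso(objective, bounds, n_particles=30, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pbest_f = x.copy(), objective(x)                # personal bests
    g = pbest[np.argmin(pbest_f)]                          # global best
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                         # respect bounds
        f = objective(x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, objective(g[None, :])[0]

bounds = np.array([[0.5, 3.0],     # feed rate, um/flute (assumed range)
                   [10.0, 50.0]])  # spindle speed, krpm (assumed range)
best_x, best_f = pso(width_error, bounds)
print(f"best parameters: feed={best_x[0]:.2f}, speed={best_x[1]:.1f}, "
      f"predicted error={best_f:.3f}")
```

In the study's multicriteria setting, the scalar objective would be replaced by a weighted combination of the regression models for channel width and depth accuracy and surface roughness.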

Relevance:

30.00%

Abstract:

In the forced-air cooling of fruits, mass transfer by evaporation occurs in addition to convective heat transfer. The energy needed for evaporation is taken from the fruit, which lowers its temperature. This study proposes the use of empirical correlations for calculating the convective heat transfer coefficient as a function of the surface temperature of the strawberry during the cooling process. The aim of this variation of the convective coefficient is to compensate for the effect of evaporation in the heat transfer process. Linear and exponential correlations are tested, both with two adjustable parameters. The simulations are performed using experimental conditions reported in the literature for the cooling of strawberries. The results confirm the suitability of the proposed methodology.
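A small sketch of the two correlation forms mentioned above, each with two adjustable parameters, fitted by least squares. The functional forms, parameter names and calibration data below are illustrative assumptions; the paper specifies only "linear and exponential correlations, both with two adjustable parameters":

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-parameter correlations for the convective coefficient as a function of
# the strawberry surface temperature T (degC). Both forms are assumptions.
def h_linear(T, a, b):
    return a + b * T

def h_exponential(T, a, b):
    return a * np.exp(b * T)

# Hypothetical calibration data: surface temperatures and effective h values
# (W m^-2 K^-1) back-calculated from a cooling experiment (invented numbers).
T_obs = np.array([20.0, 15.0, 10.0, 6.0, 3.0, 1.5])
h_obs = np.array([38.0, 33.5, 29.0, 26.0, 24.5, 23.8])

p_lin, _ = curve_fit(h_linear, T_obs, h_obs)
p_exp, _ = curve_fit(h_exponential, T_obs, h_obs, p0=(20.0, 0.03))
print("linear:      h(T) = %.2f + %.3f T" % tuple(p_lin))
print("exponential: h(T) = %.2f exp(%.4f T)" % tuple(p_exp))
```

In the cooling simulation, h(T) would be re-evaluated at each time step from the current surface temperature of the fruit.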

Relevance:

30.00%

Abstract:

In the present investigation we studied the fusogenic process developed by influenza A, B and C viruses on cell surfaces and different factors associated with virus and cell membrane structures. The biological activity of purified virus strains was evaluated in hemagglutination, sialidase and fusion assays. Hemolysis by influenza A, B and C viruses ranging from 77.4 to 97.2%, from 20.0 to 65.0%, from 0.2 to 93.7% and from 9.0 to 76.1% was observed when human, chicken, rabbit and monkey erythrocytes, respectively, were tested at pH 5.5. At this pH, low hemolysis indexes for influenza A, B and C viruses were observed when horse erythrocytes were used as target cells for the fusion process, which could be explained by inefficient receptor binding of influenza viruses to N-glycolyl sialic acids. Differences in hemagglutinin receptor binding activity due to its specificity for N-acetyl or N-glycolyl cell surface oligosaccharides, the density of these cellular receptors and the level of negative charge on the cell surface may explain these results, influencing the sialidase activity and the fusogenic process. Comparative analysis showed a lack of dependence between the sialidase and fusion activities developed by influenza B viruses. Influenza A viruses at low sialidase titers (<2) also exhibited clearly low hemolysis at pH 5.5 (15.8%), while influenza B viruses with similarly low sialidase titers showed highly variable hemolysis indexes (0.2 to 78.0%). These results support the idea that different virus- and cell-associated factors such as those presented above have a significant effect on the multifactorial fusion process.

Relevance:

30.00%

Abstract:

The health-promoting effects of exercise training (ET) are related to nitric oxide (NO) production and/or its bioavailability. The objective of this study was to determine whether single nucleotide polymorphism of the endothelial NO synthase (eNOS) gene at positions -786T>C, G894T (Glu298Asp) and at the variable number of tandem repeat (VNTR) Intron 4b/a would interfere with the cardiometabolic responses of postmenopausal women submitted to physical training. Forty-nine postmenopausal women were trained in sessions of 30-40 min, 3 days a week for 8 weeks. Genotypes, oxidative stress status and cardiometabolic parameters were then evaluated in a double-blind design. Both systolic and diastolic blood pressure values were significantly reduced after ET, which was genotype-independent. However, women without eNOS gene polymorphism at position -786T>C (TT genotype) and Intron 4b/a (bb genotype) presented a better reduction of total cholesterol levels (-786T>C: before = 213 ± 12.1, after = 159.8 ± 14.4, Δ = -24.9% and Intron 4b/a: before = 211.8 ± 7.4, after = 180.12 ± 6.4 mg/dL, Δ = -15%), and LDL cholesterol (-786T>C: before = 146.1 ± 13.3, after = 82.8 ± 9.2, Δ = -43.3% and Intron 4b/a: before = 143.2 ± 8, after = 102.7 ± 5.8 mg/dL, Δ = -28.3%) in response to ET compared to those who carried the mutant allele. Superoxide dismutase activity was significantly increased in trained women whereas no changes were observed in malondialdehyde levels. Women without eNOS gene polymorphism at position -786T>C and Intron 4b/a showed a greater reduction of plasma cholesterol levels in response to ET. Furthermore, no genotype influence was observed on arterial blood pressure or oxidative stress status in this population.

Relevance:

30.00%

Abstract:

The study of variable stars is an important topic of modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data demands many automated methods as well as human experts. This thesis is devoted to data analysis on variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics.

For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases, the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases, it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature.

Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star. One way to identify the type of a variable star and to classify it is for an expert to visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers.

Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. The modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, period is the most important, since wrong periods lead to sparse light curves and misleading information.

Time series analysis is a method of applying mathematical and statistical tests to data to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to the daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.

Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis.

There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (not assuming any statistical model, such as Gaussian) methods. Many of the parametric methods are based on variations of discrete Fourier transforms, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can occur for several reasons, such as power leakage to other frequencies, which results from the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence, obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".

It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry in the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
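As a concrete illustration of the period search and phase folding steps described above, here is a minimal sketch using astropy's implementation of the Lomb-Scargle periodogram on synthetic data; the light curve, sampling pattern and period are invented for illustration:

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic, unevenly sampled light curve: a sinusoidal pulsator with
# period 0.57 d, observed at irregular epochs with noise (illustrative only).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 30.0, 200))        # observation times (days)
true_period = 0.57
mag = (12.0 + 0.3 * np.sin(2 * np.pi * t / true_period)
       + rng.normal(0, 0.02, t.size))

# Lomb-Scargle periodogram; astropy's default floating-mean form corresponds
# to the generalised periodogram the thesis cites (Zechmeister 2009).
frequency, power = LombScargle(t, mag).autopower(minimum_frequency=0.1,
                                                 maximum_frequency=10.0)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"recovered period: {best_period:.4f} d (true: {true_period} d)")

# Phase-fold on the recovered period: the phased light curve whose shape
# characterizes the variable-star type.
phase = (t / best_period) % 1.0
order = np.argsort(phase)
phased_light_curve = np.column_stack([phase[order], mag[order]])
```

With a wrong trial period, the phased light curve scatters into noise rather than tracing a coherent shape, which is exactly why accurate period determination dominates the classification problem.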

Relevance:

30.00%

Abstract:

A random variable is a mathematical function that assigns numerical values to each of the possible outcomes of an event of a random nature. If the number of these outcomes can be counted, the set is discrete; conversely, when the number of outcomes is infinite and cannot be counted, the set is continuous. The purpose of the random variable is to enable probabilistic and statistical studies by establishing a numerical assignment through which each of the outcomes that can be obtained in a given event is identified. The expected value and the variance are the parameters by means of which it is possible to characterize the behavior of the data gathered in an experimental situation; the expected value establishes the value around which the probability distribution is centered, while the variance provides information about how the data obtained are spread. Additionally, probability distributions are numerical functions associated with the random variable that describe the assignment of probability to each element of the sample space; they are characterized by a set of parameters that establish their functional behavior, that is, each of the distribution's parameters supplies information about the random experiment to which it is associated. The document closes with an application of the random variable to decision-making processes involving conditions of risk and uncertainty.
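For reference, the two characterizing parameters named above have the standard textbook definitions (a math sketch; these formulas are standard results, not taken from the document itself):

```latex
% Expected value for discrete and continuous random variables
E[X] = \sum_i x_i \, p(x_i) \quad \text{(discrete)}, \qquad
E[X] = \int_{-\infty}^{\infty} x \, f(x) \, dx \quad \text{(continuous)}

% Variance: spread of the outcomes around the expected value
\operatorname{Var}(X) = E\!\left[(X - E[X])^2\right] = E[X^2] - (E[X])^2
```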

Relevance:

30.00%

Abstract:

Most failures in structural elements are due to fatigue loading. Consequently, mechanical fatigue is a key factor in the design of mechanical components. In the case of laminated composite materials, the fatigue failure process includes different damage mechanisms that result in the degradation of the material. One of the most important damage mechanisms is delamination between plies of the laminate. In the case of aeronautical components, composite panels are exposed to impacts, and delaminations appear easily in a laminate after an impact. Many composite components have curved shapes, ply overlaps and plies with different orientations, which cause a delamination to propagate under a mixed mode that depends on the size of the delamination. That is, delaminations generally propagate under variable mixed mode. It is therefore important to develop new methods to characterize the subcritical mixed-mode fatigue growth of delaminations. The main objective of this work is the characterization of the variable mixed-mode growth of delaminations in laminated composites under fatigue loading. To this end, a new model for mixed-mode fatigue delamination growth is proposed. In contrast to existing models, the proposed model is formulated according to the non-monotonic variation of the propagation parameters with mode mixity observed in different experimental results. In addition, an analysis of the mixed-mode end load split (MMELS) test is carried out, whose most important characteristic is the variation of the mode mixity as the delamination grows. For this analysis, two theoretical methods from the literature are considered. However, the resulting expressions for the MMELS test are not equivalent, and the differences between the two methods can be large, up to a factor of 50. For this reason, an alternative, more accurate analysis of the MMELS test is carried out in this work in order to establish a comparison. This alternative analysis is based on the finite element method and the virtual crack closure technique (VCCT). Important aspects to consider for proper material characterization using the MMELS test emerge from this analysis. During the study, a test rig for the MMELS test was designed and built. Essentially unidirectional carbon/epoxy laminate specimens are used for the experimental characterization of variable mixed-mode fatigue delamination growth. A fractographic analysis of some of the delamination fracture surfaces is also carried out. The experimental results are compared with the predictions of the proposed model for the fatigue propagation of interlaminar cracks.
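The abstract does not give the model's closed form. As a hedged sketch of the kind of law involved, here is a Paris-type delamination growth rate whose coefficients depend on the mode mixity B = G_II/G_T; the non-monotonic forms of C(B) and m(B) below are invented for illustration and are not the thesis model or its fitted parameters:

```python
import numpy as np

# Paris-type fatigue delamination law: da/dN = C(B) * (Delta sqrt(G))^m(B),
# with mode mixity B = G_II / G_T in [0, 1]. The forms of C(B) and m(B) are
# placeholders chosen to be non-monotonic in B, mimicking the qualitative
# behaviour the thesis reports; they are NOT the actual fitted model.
def C_of_B(B):
    return 1e-7 * (1.0 + 4.0 * B * (1.0 - B))   # peaks at B = 0.5

def m_of_B(B):
    return 6.0 - 3.0 * B * (1.0 - B)            # dips at B = 0.5

def growth_rate(G_max, G_min, B):
    dG = np.sqrt(G_max) - np.sqrt(G_min)        # Delta sqrt(G), a common choice
    return C_of_B(B) * dG ** m_of_B(B)          # da/dN, length per cycle

# In an MMELS-like test the mode mixity changes as the crack grows, so the
# rate must be re-evaluated at each increment of delamination length.
for B in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"B = {B:.2f}: da/dN = {growth_rate(0.30, 0.03, B):.3e}")
```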

Relevance:

30.00%

Abstract:

Quantitative mathematical models are simplifications of reality, and therefore the behavior obtained by simulating these models differs from the real one. The use of complex quantitative models is not a solution, because in most cases there is some uncertainty in the real system that cannot be represented by such models. One way to represent this uncertainty is by means of qualitative or semi-qualitative models. A model of this kind in fact represents a set of models. Simulating the behavior of quantitative models generates a trajectory over time for each output variable. This cannot be the result of simulating a set of models. One way to represent the behavior in this case is by means of envelopes. The exact envelope is complete, that is, it includes all possible behaviors of the model, and correct, that is, every point inside the envelope belongs to the output of at least one instance of the model. Generating such an envelope is usually a very hard task that can be tackled, for example, by global optimization algorithms or consistency checking. For this reason, in many cases approximations to the exact envelope are obtained. A complete but not correct approximation to the exact envelope is an over-bounded envelope, whereas a correct but not complete one is an under-bounded envelope. These properties have been studied for different simulators for uncertain systems.
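A minimal sketch of one way to obtain an under-bounded (correct but not complete) envelope, by sampling the uncertain parameter set; the model and interval below are invented for illustration:

```python
import numpy as np

# Uncertain first-order model dx/dt = -k*x, with k known only as an interval.
# Sampling k and taking the pointwise min/max of the trajectories yields an
# under-bounded envelope: every point drawn comes from a real model instance
# (correct), but extremes between samples may be missed (not complete).
k_interval = (0.5, 1.5)          # illustrative uncertain parameter
t = np.linspace(0.0, 5.0, 101)
x0 = 1.0

rng = np.random.default_rng(0)
ks = rng.uniform(*k_interval, size=200)
trajectories = x0 * np.exp(-np.outer(ks, t))   # closed-form solutions

lower = trajectories.min(axis=0)   # pointwise envelope bounds
upper = trajectories.max(axis=0)

# For this monotone system the exact envelope is attained at the interval
# endpoints, so we can check how close the sampled envelope is:
exact_lower = x0 * np.exp(-k_interval[1] * t)
exact_upper = x0 * np.exp(-k_interval[0] * t)
print("max gap to exact envelope:",
      max((lower - exact_lower).max(), (exact_upper - upper).max()))
```

An over-bounded envelope would instead be produced by methods such as interval arithmetic, which guarantee completeness at the cost of including spurious behaviors.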

Relevance:

30.00%

Abstract:

Physiological and yield traits such as stomatal conductance (mmol m⁻² s⁻¹), leaf relative water content (RWC, %) and grain yield per plant were studied in a separate experiment. Results revealed that five out of sixteen cultivars, viz. Anmol, Moomal, Sarsabz, Bhitai and Pavan, appeared to be relatively more drought tolerant. Based on these morpho-physiological results, studies were continued to examine these cultivars for drought tolerance at the molecular level. Initially, four well-recognized primers for dehydrin genes (DHNs), responsible for drought induction in T. durum L., T. aestivum L. and O. sativa L., were used for profiling the gene sequences of the sixteen wheat cultivars. The primers amplified the DHN genes variably: primer WDHN13 (T. aestivum L.) amplified the DHN gene in only seven cultivars, whereas primer TdDHN15 (T. durum L.) amplified it in all sixteen cultivars, with different DNA banding patterns, some showing a second, weaker DNA band. The third primer, TdDHN16 (T. durum L.), showed an entirely different PCR amplification pattern, notably two strong DNA bands, while the fourth primer, RAB16C (O. sativa L.), failed to amplify the DHN gene in any of the cultivars. Examination of the DNA sequences revealed several interesting features. First, it identified the two-exon/one-intron structure of this gene (complete sequences not shown), a feature not previously described in the two database cDNA sequences available from T. aestivum L. (gi|21850). Secondly, the analysis identified several single nucleotide polymorphism (SNP) positions in the gene sequence. Although the complete gene sequence was not obtained for all the cultivars, there were a total of 38 variable positions in the exonic (coding region) sequence, from a total gene length of 453 nucleotides. The SNP matrix shows these 37 positions, with the individual sequence at each position given for each of the 14 cultivars (the sequence of two cultivars was not obtained) included in this analysis. It demonstrated considerable diversity for this gene, with only three cultivars, i.e. TJ-83, Marvi and TD-1, being similar to the consensus sequence. All other cultivars showed a unique combination of SNPs. In order to prove a functional link between these polymorphisms and drought tolerance in wheat, it would be necessary to conduct a more detailed study involving directed mutation of this gene and DHN gene expression.
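As an illustration of how such variable positions can be located, here is a minimal sketch that scans aligned coding sequences for polymorphic sites. The cultivar names are reused from the abstract, but the toy sequences are invented; the real analysis compared 14 cultivars over 453 nucleotides:

```python
# SNP-calling sketch: given aligned, equal-length coding sequences (toy
# data), list the variable positions and each cultivar's allele there.
seqs = {
    "TJ-83":   "ATGGCAGTTA",
    "Marvi":   "ATGGCAGTTA",
    "Anmol":   "ATGACAGTCA",
    "Sarsabz": "ATGACGGTCA",
}
length = len(next(iter(seqs.values())))
variable_positions = [
    i for i in range(length)
    if len({s[i] for s in seqs.values()}) > 1   # more than one allele seen
]
print("variable (SNP) positions:", variable_positions)
for name, s in seqs.items():
    print(name, "".join(s[i] for i in variable_positions))
```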

Relevance:

30.00%

Abstract:

A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, when water level observations (WLOs) at the flood boundary can be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
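A minimal sketch of the flood-extent/DTM intersection step described above, using toy arrays in place of real rasters; no specific library, data format or algorithm from the paper is assumed:

```python
import numpy as np

# Derive water level observations (WLOs) by sampling the DEM height along
# the boundary of a SAR-derived flood mask. Toy 4x4 arrays stand in for the
# real DEM raster and flood extent.
dem = np.array([[12.0, 11.0, 10.5, 10.2],
                [11.5, 10.4, 10.1, 10.0],
                [11.0, 10.2, 10.0,  9.9],
                [10.8, 10.1,  9.9,  9.8]])
flood = np.array([[0, 0, 0, 1],
                  [0, 0, 1, 1],
                  [0, 1, 1, 1],
                  [0, 1, 1, 1]], dtype=bool)

# Boundary cells: flooded cells with at least one dry 4-neighbour.
padded = np.pad(flood, 1, constant_values=False)
all_neighbours_wet = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                      padded[1:-1, :-2] & padded[1:-1, 2:])
boundary = flood & ~all_neighbours_wet
wlos = dem[boundary]   # candidate water level observations (m)
print("WLOs along flood edge:", wlos, "-> level ~", wlos.mean().round(2), "m")
```

In the assimilation application, such WLOs (with their uncertainties) are what updates the flood model state; in the DEM-improvement application, they constrain the height error within the flooded area.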

Relevance:

30.00%

Abstract:

We show that the significantly different effective temperatures (T_eff) achieved by the luminous blue variable AG Carinae during the consecutive visual minima of 1985-1990 (T_eff ≃ 22,800 K) and 2000-2001 (T_eff ≃ 17,000 K) place the star on different sides of the bistability limit, which occurs in line-driven stellar winds around T_eff ~ 21,000 K. Decisive evidence is provided by huge changes in the optical depth of the Lyman continuum in the inner wind as T_eff changes during the S Dor cycle. These changes cause different Fe ionization structures in the inner wind. The bistability mechanism is also related to the different wind parameters during visual minima: the wind terminal velocity was 2-3 times higher and the mass-loss rate roughly two times smaller in 1985-1990 than in 2000-2003. We obtain a projected rotational velocity of 220 ± 50 km s⁻¹ during 1985-1990 which, combined with the high luminosity (L★ = 1.5 × 10⁶ L⊙), puts AG Car extremely close to the Eddington limit modified by rotation (the ΩΓ limit): for an inclination angle of 90°, Γ_Ω ≳ 1.0 for M ≲ 60 M⊙. Based on evolutionary models and mass budget, we obtain an initial mass of ~100 M⊙ and a current mass of ~60-70 M⊙ for AG Car. Therefore, AG Car is close to, if not at, the ΩΓ limit during visual minimum. Assuming M = 70 M⊙, we find that Γ_Ω decreases from 0.93 to 0.72 as AG Car expands toward visual maximum, suggesting that the star is not above the Eddington limit during maximum phases.
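For context, the classical electron-scattering Eddington factor underlying the Γ_Ω values quoted above has the standard form below (a reference definition, not reproduced from the paper; Γ_Ω additionally accounts for the centrifugal reduction of effective gravity in the ΩΓ-limit formalism):

```latex
% Standard Eddington factor; \kappa_e is the electron-scattering opacity
\Gamma = \frac{\kappa_e L_\star}{4 \pi c \, G M_\star}
```

Since Γ grows with the luminosity-to-mass ratio, the quoted L★ and mass estimates are what place AG Car so close to the limit.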

Relevance:

30.00%

Abstract:

This is a note about proxy variables and instruments for the identification of structural parameters in regression models. In our experience, econometric textbooks treat these two issues separately, although in practice the two concepts are very often combined. Usually, proxy variables are inserted in instrumental variable regressions with the motivation that they are exogenous, implicitly meaning that they are exogenous in a reduced-form model and not in a structural model. Actually, if these variables are exogenous they should be redundant in the structural model, e.g. IQ as a proxy for ability. Valid proxies reduce unexplained variation and increase the efficiency of the estimator of the structural parameter of interest. This is especially important in situations where the instrument is weak. With a simple example we demonstrate what is required of a proxy and an instrument when they are combined. It turns out that when a researcher has a valid instrument, the requirements on the proxy variable are weaker than if no such instrument exists.
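A minimal simulation sketch of the point being made; the variable names, data-generating process and coefficients are invented, and 2SLS is implemented directly with numpy least squares:

```python
import numpy as np

# Wage depends on schooling (endogenous) and unobserved ability; IQ proxies
# ability and is added as an exogenous control in a 2SLS regression that
# instruments schooling with z. All names and coefficients are illustrative.
rng = np.random.default_rng(0)
n = 5_000
ability = rng.normal(size=n)
z = rng.normal(size=n)                         # instrument
school = 0.8 * z + 0.5 * ability + rng.normal(size=n)
iq = ability + rng.normal(scale=0.5, size=n)   # noisy proxy for ability
wage = 1.0 * school + 1.0 * ability + rng.normal(size=n)

def two_sls(y, x_endog, X_exog, z):
    """2SLS: project the endogenous regressor on all exogenous variables
    and instruments, then run OLS with the fitted values."""
    W = np.column_stack([np.ones(len(y)), X_exog, z])           # first stage
    fitted = W @ np.linalg.lstsq(W, x_endog, rcond=None)[0]
    X2 = np.column_stack([np.ones(len(y)), fitted, X_exog])     # second stage
    return np.linalg.lstsq(X2, y, rcond=None)[0]

b_iv_only = two_sls(wage, school, np.empty((n, 0)), z)
b_iv_proxy = two_sls(wage, school, iq[:, None], z)
print("schooling coefficient, IV only:    %.3f" % b_iv_only[1])
print("schooling coefficient, IV + proxy: %.3f" % b_iv_proxy[1])
```

With the proxy included, part of the ability variation leaves the error term, so the IV estimate of the schooling coefficient has smaller sampling variance; this is the efficiency gain the note emphasizes, which matters most when the instrument is weak.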