72 results for Estimated parameters

in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

60.00%

Publisher:

Abstract:

The most suitable method for estimating size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches to estimating the pdf are compared: parametric methods, which assume that the data come from a particular family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using parameters estimated from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives accurate estimates of size diversity, whereas parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which has additional advantages: the same size diversity value is obtained when using original sizes or log-transformed data, and size measurements of different dimensionality (lengths, areas, volumes or biomasses) may be compared immediately after the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus kernel estimation, after data standardization by division by the sample geometric mean, emerges as the most reliable and generalizable method for evaluating size diversity.
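
A minimal sketch of the approach the abstract favours, assuming a NumPy/SciPy environment and a hypothetical array of individual body sizes: standardize by the sample geometric mean, estimate the pdf with a Gaussian kernel, and evaluate the continuous Shannon diversity integral numerically.

```python
import numpy as np
from scipy.stats import gaussian_kde

def size_diversity(sizes, grid_points=2048):
    """Shannon size diversity via kernel density estimation.

    Sizes are first standardized by the sample geometric mean, so the same
    diversity value is obtained from raw or log-transformed data (as argued
    in the abstract). `sizes` is a hypothetical 1-D array of positive
    size measurements.
    """
    sizes = np.asarray(sizes, dtype=float)
    gm = np.exp(np.mean(np.log(sizes)))          # sample geometric mean
    x = sizes / gm                               # standardized sizes

    kde = gaussian_kde(x)                        # kernel estimate of the pdf
    grid = np.linspace(x.min() / 2, x.max() * 2, grid_points)
    pdf = np.clip(kde(grid), 1e-300, None)       # avoid log(0)

    # Shannon diversity for a continuous variable: -integral of p(x) ln p(x) dx
    return -np.trapz(pdf * np.log(pdf), grid)

# Example: diversity of log-normally distributed sizes
rng = np.random.default_rng(0)
print(size_diversity(rng.lognormal(mean=1.0, sigma=0.8, size=500)))
```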

Relevance:

30.00%

Publisher:

Abstract:

Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator, with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is, at face value, more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: (a) those that use weights involving area-specific estimates of bias and variance; and (b) those that use weights involving a common variance and a common squared-bias estimate for all the areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
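
As an illustration of the composite idea (a hedged sketch, not the paper's exact estimators): the weight on the direct estimator is chosen to minimise an estimated mean squared error, and can be computed either area by area or from common (averaged) variance and MSE estimates.

```python
import numpy as np

def composite_estimates(direct, indirect, var_direct, mse_indirect,
                        common_weights=False):
    """Composite small-area estimates: w * direct + (1 - w) * indirect.

    Hedged sketch: the direct estimator is treated as unbiased with variance
    `var_direct`, the indirect one as having estimated MSE `mse_indirect`
    (squared bias plus variance). Minimising the MSE of the combination
    under independence gives w = mse_indirect / (mse_indirect + var_direct).

    If `common_weights` is True, a single weight built from averaged variance
    and MSE estimates is used for all areas (variant (b) in the abstract);
    otherwise each area gets its own weight (variant (a)).
    """
    direct = np.asarray(direct, float)
    indirect = np.asarray(indirect, float)
    var_d = np.asarray(var_direct, float)
    mse_i = np.asarray(mse_indirect, float)

    if common_weights:
        var_d = np.full_like(var_d, var_d.mean())
        mse_i = np.full_like(mse_i, mse_i.mean())

    w = mse_i / (mse_i + var_d)          # weight on the direct estimator
    return w * direct + (1.0 - w) * indirect, w
```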

Relevance:

20.00%

Publisher:

Abstract:

The objective of this paper is to analyse to what extent the use of cross-section data distorts the estimated elasticities of car ownership demand when the observed variables do not correspond to an equilibrium state for some individuals in the sample. Our proposal consists of approximating the equilibrium values of the observed variables by constructing a pseudo-panel data set, which entails averaging individuals observed at different points in time into cohorts. The results show that individual and aggregate data lead to almost the same value for the income elasticity, whereas for the working-adult elasticity the similarity is less pronounced.
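
A hedged sketch of the cohort-averaging step, with hypothetical variable names (`birth_year`, `survey_year`, `cars`, `income`, `working_adults`); the paper's actual cohort definition may differ.

```python
import pandas as pd

def build_pseudo_panel(df, cohort_width=5):
    """Collapse repeated cross-sections into a pseudo-panel.

    Hedged illustration of the cohort-averaging idea described in the
    abstract: individuals are grouped into birth-year cohorts and averaged
    within each cohort x survey-year cell, approximating the equilibrium
    values of the observed variables.
    """
    df = df.copy()
    df["cohort"] = (df["birth_year"] // cohort_width) * cohort_width
    cells = (df.groupby(["cohort", "survey_year"])
               [["cars", "income", "working_adults"]]
               .mean()
               .reset_index())
    return cells   # one observation per cohort-year cell

# A log-log regression on `cells` would then yield income and
# working-adult elasticities of car ownership.
```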

Relevance:

20.00%

Publisher:

Abstract:

The aeronautical environment is, today, one of the most challenging scenarios in which to establish reliable communication links. This is mainly due to the high speeds at which aircraft travel, which cause severe degradation of system performance if the channel is not estimated continuously. In addition, the aeronautical environment is subject to many other effects that degrade the signal, such as diffraction, reflection, etc. For this reason, this project studies two typical flight scenarios: arrival (landing) and en-route flight. In the en-route scenario, aircraft travel at more than twice the speed of the arrival scenario, which makes it possible to observe the effect of a larger Doppler shift. The study uses a multicarrier system with overlapping subchannels, OFDM, initially taking typical parameters of the WiMAX technology, which are then varied with the aim of improving system performance.
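
For intuition on why the en-route scenario is harder, a minimal sketch with assumed values (the speeds, the 5.1 GHz carrier and the WiMAX-like 10.94 kHz subcarrier spacing are illustrative assumptions, not the project's parameters) comparing the maximum Doppler shift in the two scenarios:

```python
# Hedged sketch: maximum Doppler shift for two illustrative flight scenarios.
C = 3e8            # speed of light, m/s
FC = 5.1e9         # assumed carrier frequency, Hz

def max_doppler(speed_ms, fc=FC):
    """Maximum Doppler shift f_d = v * fc / c."""
    return speed_ms * fc / C

arrival = max_doppler(150 * 0.514)    # ~150 knots on approach (assumed)
en_route = max_doppler(450 * 0.514)   # ~450 knots en route (assumed)

subcarrier_spacing = 10.94e3          # WiMAX-like spacing, Hz (assumed)
for name, fd in [("arrival", arrival), ("en-route", en_route)]:
    print(f"{name}: f_d = {fd:.0f} Hz "
          f"({100 * fd / subcarrier_spacing:.1f}% of subcarrier spacing)")
```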

Relevance:

20.00%

Publisher:

Abstract:

Pensions, together with savings and investments during active life, are key elements of retirement planning. Personal choices about the desired standard of living, bequests and the replacement ratio of the pension with respect to the last salary must be considered. This research contributes to financial planning by helping to quantify the economic needs of long-term care. We estimate life expectancy from retirement age onwards. The economic cost of care per unit of service is linked to the expected time of needed care and the intensity of required services. The expected individual cost of long-term care from the onset of dependence is estimated separately for men and women. Assumptions about the mortality of dependent people relative to the general population are introduced. Parameters defining eligibility for various forms of coverage by the universal public social care of the welfare system are addressed. The impact of the intensity of social services on individual predictions is assessed, and partial coverage by standard private insurance products is also explored. Data were collected by the Spanish Institute of Statistics in two surveys conducted on the general Spanish population in 1999 and in 2008. Official mortality records and life-table trends were used to create realistic scenarios for longevity. We find empirical evidence that the public long-term care system in Spain effectively mitigates the risk of incurring huge lifetime costs. We also find that the most vulnerable categories are citizens with moderate disabilities who do not qualify for public social care support. In the Spanish case, the trends between 1999 and 2008 need to be explored further.
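
A hedged actuarial sketch (not the paper's model) of how an expected individual long-term care cost from the onset of dependence can be built from survival probabilities, an annual cost of care and a discount rate; all numbers below are illustrative assumptions.

```python
import numpy as np

def expected_ltc_cost(survival_probs, annual_cost, discount_rate=0.02):
    """Expected present value of long-term care costs from onset of dependence.

    Hedged sketch: `survival_probs[t]` is the (assumed) probability that a
    dependent person is still alive t years after onset, and `annual_cost`
    the yearly cost of the required care intensity.
    """
    t = np.arange(len(survival_probs))
    discount = (1.0 + discount_rate) ** -t
    return float(np.sum(np.asarray(survival_probs) * annual_cost * discount))

# Illustrative numbers only: 15-year horizon, geometric survival decay,
# 12,000 EUR of care per year.
probs = 0.85 ** np.arange(15)
print(f"Expected cost: {expected_ltc_cost(probs, 12_000):,.0f} EUR")
```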

Relevance:

20.00%

Publisher:

Abstract:

Drawing on 2006 PISA data, this study examines the impact of socio-economic school composition on science test scores for Spanish students in compulsory secondary schools. We define school composition in terms of the average parental human capital of students in the same school. These contextual peer effects are estimated using a semi-parametric methodology, which enables the spillovers to affect all the parameters of the educational production function. We also deal with the potential problem of self-selection of students into schools, using an artificial sorting that we argue is independent of unobserved student abilities. The results indicate that the association between socio-economic school composition and test scores is clearly positive and significantly higher when computed with the semi-parametric approach. However, we find that the endogenous sorting of students into schools plays a fundamental role, given that the spillovers are significantly reduced when this selection process is ruled out from our measure of school composition effects. Specifically, the estimations suggest that the contextual peer effects are moderately positive only in those schools where the socio-economic composition is considerably elevated. In addition, we find some evidence of asymmetry in how the external effects and the sorting process actually operate, which seem to affect males and females, as well as high- and low-performing students, differently.
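
A hedged sketch of one way to let spillovers shift every coefficient of the production function (not necessarily the paper's estimator): kernel-weighted least squares evaluated on a grid of school-composition values, i.e., a varying-coefficient regression.

```python
import numpy as np

def varying_coefficient_fit(scores, X, composition, grid, bandwidth):
    """Kernel-weighted least squares at each grid value of school composition.

    Hedged sketch of a semi-parametric educational production function:
    all coefficients in score_i = X_i @ beta(composition_i) + e_i are
    allowed to change with the school's average parental human capital.
    """
    X = np.column_stack([np.ones(len(scores)), X])   # add intercept
    betas = []
    for c in grid:
        w = np.exp(-0.5 * ((composition - c) / bandwidth) ** 2)  # Gaussian kernel
        sw = np.sqrt(w)[:, None]
        beta, *_ = np.linalg.lstsq(X * sw, scores * sw.ravel(), rcond=None)
        betas.append(beta)
    return np.array(betas)   # one coefficient vector per grid point
```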

Relevance:

20.00%

Publisher:

Abstract:

In this work we discuss the use of the standard model for the calculation of the solvency capital requirement (SCR) when the company aims to use parameters specific to the experience of its own portfolio. In particular, this analysis focuses on the formula presented in the latest quantitative impact study (CEIOPS, 2010) for non-life underwriting premium and reserve risk. One of the keys of the standard model for premium and reserve risk is the correlation matrix between lines of business. We present how this correlation matrix could be estimated from a quantitative perspective, as well as the possibility of using a credibility model, merging the qualitative and quantitative perspectives, for its estimation.
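
A simplified sketch in the spirit of the standard-formula aggregation of premium and reserve risk across lines of business (not the full QIS5 specification): per-line volatilities are combined through the correlation matrix, weighted by volume measures.

```python
import numpy as np

def aggregate_volatility(sigmas, volumes, corr):
    """Aggregate per-line volatilities (simplified sketch):

        sigma^2 = (1 / V^2) * sum_ij corr_ij * sigma_i * sigma_j * V_i * V_j
    """
    sigmas = np.asarray(sigmas, float)
    volumes = np.asarray(volumes, float)
    weighted = sigmas * volumes
    total_var = weighted @ np.asarray(corr, float) @ weighted
    return np.sqrt(total_var) / volumes.sum()

# Illustrative (assumed) inputs: three lines of business.
sigma = [0.10, 0.08, 0.15]               # per-line standard deviations
vol = [500.0, 300.0, 200.0]              # volume measures
corr = np.array([[1.0, 0.5, 0.25],
                 [0.5, 1.0, 0.25],
                 [0.25, 0.25, 1.0]])
print(f"Aggregate sigma: {aggregate_volatility(sigma, vol, corr):.4f}")
```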

Relevance:

20.00%

Publisher:

Abstract:

Water and energy form an inseparable pair. In relation to the water cycle, different ways to recover part of the energy associated with water have been developed over the last few decades, for example through hydroelectric plants. However, the use of this water also entails a large energy consumption, related above all to transport, distribution, treatment, etc. Wastewater treatment in particular carries a high energy demand (Obis et al., 2009). In energy terms, although the electricity consumption of a wastewater treatment plant (WWTP) varies with parameters such as the configuration and capacity of the plant, the load to be treated, etc., the average ratio can be taken as approximately 0.5 kWh·m⁻³. The main operating costs are related both to sludge management (28%) and to electricity consumption (25%; 50% of this corresponds to biological treatment). Although much research on wastewater treatment aims at reducing operating costs, in recent decades the feasibility of wastewater itself being a source of energy has been investigated, changing the perspective and beginning to view wastewater not as a problem but as a resource. Specifically, it is estimated that domestic wastewater contains 9.3 times more energy than is needed for its treatment by aerobic processes (Shizas et al., 2004). One of the most developed processes linking wastewater treatment and energy production is anaerobic digestion. However, this technology treats high loads of organic matter while generating a nitrogen-rich effluent that must be treated with other technologies. More recently, a new technology relating wastewater treatment to energy production has been investigated: microbial fuel cells (MFCs). This technology makes it possible to obtain electrical energy directly from the degradation of biodegradable substrates (Rabaey et al., 2005). Microbial fuel cells are an emerging technology attracting considerable research attention, based on the production of electrical energy from biodegradable substrates present in wastewater (Logan, 2008). Their operating principle is very similar to that of a Daniell cell, in which the oxidation reaction (anodic compartment) and the reduction reaction (cathodic compartment) are separated into two compartments in order to generate an electric current. This study essentially presents the start-up of a microbial fuel cell for the removal of organic matter and nitrogen from wastewater.

Relevance:

20.00%

Publisher:

Abstract:

Factor analysis, a frequent technique for multivariate data inspection, is also widely used for compositional data analysis. The usual way is to use a centered logratio (clr) transformation to obtain the random vector y of dimension D. The factor model is then

y = Λf + e   (1)

with the factors f of dimension k < D, the error term e, and the loadings matrix Λ. Using the usual model assumptions (see, e.g., Basilevsky, 1994), the factor analysis model (1) can be written as

Cov(y) = ΛΛ^T + ψ   (2)

where ψ = Cov(e) has a diagonal form. The diagonal elements of ψ, as well as the loadings matrix Λ, are estimated from an estimate of Cov(y). Given observed clr-transformed data Y as realizations of the random vector y, outliers or deviations from the idealized model assumptions of factor analysis can severely affect the parameter estimation. As a way out, robust estimation of the covariance matrix of Y leads to robust estimates of Λ and ψ in (2), see Pison et al. (2003). Well-known robust covariance estimators with good statistical properties, like the MCD or the S-estimators (see, e.g., Maronna et al., 2006), rely on a full-rank data matrix Y, which is not the case for clr-transformed data (see, e.g., Aitchison, 1986). The isometric logratio (ilr) transformation (Egozcue et al., 2003) solves this singularity problem. The data matrix Y is transformed to a matrix Z by using an orthonormal basis of lower dimension. Using the ilr-transformed data, a robust covariance matrix C(Z) can be estimated. The result can be back-transformed to the clr space by

C(Y) = V C(Z) V^T

where the matrix V with orthonormal columns comes from the relation between the clr and the ilr transformation. Now the parameters in model (2) can be estimated (Basilevsky, 1994) and the results have a direct interpretation, since the links to the original variables are still preserved. The above procedure will be applied to data from geochemistry. Our special interest is in comparing the results with those of Reimann et al. (2002) for the Kola project data.
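
A hedged sketch of the procedure described above (not the authors' code): clr-transform the compositions, move to full-rank ilr coordinates with an orthonormal basis V, estimate a robust covariance with the MCD, back-transform to clr space, and extract loadings from that covariance. The basis construction and the principal-factor extraction are common choices, assumed here for illustration.

```python
import numpy as np
from scipy.stats import gmean
from sklearn.covariance import MinCovDet

def ilr_basis(D):
    """Orthonormal basis V (D x D-1) linking clr and ilr coordinates
    (one common Helmert-type choice; any orthonormal contrast basis works)."""
    V = np.zeros((D, D - 1))
    for j in range(D - 1):
        V[: j + 1, j] = 1.0 / (j + 1)
        V[j + 1, j] = -1.0
        V[:, j] *= np.sqrt((j + 1) / (j + 2))
    return V

def robust_clr_covariance(X):
    """X: n x D matrix of positive compositional data."""
    D = X.shape[1]
    clr = np.log(X) - np.log(gmean(X, axis=1))[:, None]
    V = ilr_basis(D)
    Z = clr @ V                        # ilr coordinates (full rank)
    C_Z = MinCovDet().fit(Z).covariance_
    return V @ C_Z @ V.T               # robust covariance in clr space

def principal_factor_loadings(C, k):
    """Approximate loadings from a covariance matrix via its leading
    eigenvectors (principal-factor style extraction)."""
    vals, vecs = np.linalg.eigh(C)
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
```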

Relevance:

20.00%

Publisher:

Abstract:

When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain motion estimates. A complete system to create visual mosaics of the seabed is described in this paper. Unfortunately, the accuracy of the constructed mosaic is difficult to evaluate. The use of a laboratory setup to obtain an accurate error measurement is proposed. The system consists of a robot arm carrying a downward-looking camera. A pattern formed by a white background and a matrix of black dots uniformly distributed over the surveyed scene is used to find the exact image registration parameters. When the robot executes a trajectory (simulating the motion of a submersible), an image sequence is acquired by the camera. The motion estimate computed from the encoders of the robot is refined by detecting, to subpixel accuracy, the black dots of the image sequence and computing the 2D projective transform that relates two consecutive images. The pattern is then substituted by a poster of the sea floor and the trajectory is executed again, acquiring the image sequence used to test the accuracy of the mosaicking system.
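
A hedged sketch (assuming OpenCV, not the authors' code) of the refinement step: detect the black dots of the calibration pattern and fit the 2D projective transform (homography) relating two consecutive frames; point matching is assumed to come from the encoder-based motion prediction.

```python
import cv2
import numpy as np

def detect_dots(gray):
    """Centroids of dark blobs on a white background (one per pattern dot)."""
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 0                       # dark dots
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(gray)
    return np.array([kp.pt for kp in keypoints], dtype=np.float32)

def frame_to_frame_homography(pts_prev, pts_curr):
    """2D projective transform relating two consecutive images; the point
    sets are assumed to be already matched (e.g., by nearest neighbour
    after applying the encoder-based motion prediction)."""
    H, inliers = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 1.0)
    return H, inliers
```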

Relevance:

20.00%

Publisher:

Abstract:

Low concentrations of elements in geochemical analyses have the peculiarity of being compositional data and, for a given level of significance, are likely to be beyond the capabilities of laboratories to distinguish between minute concentrations and complete absence, thus preventing laboratories from reporting extremely low concentrations of the analyte. Instead, what is reported is the detection limit, which is the minimum concentration that conclusively differentiates between presence and absence of the element. A spatially distributed exhaustive sample is employed in this study to generate unbiased sub-samples, which are further censored to observe the effect that different detection limits and sample sizes have on the inference of population distributions starting from geochemical analyses having specimens below the detection limit (non-detects). The isometric logratio transformation is used to convert the compositional data in the simplex to samples in real space, thus allowing the practitioner to properly borrow from the large source of statistical techniques valid only in real space. The bootstrap method is used to numerically investigate the reliability of inferring several distributional parameters employing different forms of imputation for the censored data. The case study illustrates that, in general, the best results are obtained when imputations are made using the distribution best fitting the readings above the detection limit, and exposes the problems of other more widely used practices. When the sample is spatially correlated, it is necessary to combine the bootstrap with stochastic simulation.
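
A hedged sketch of the imputation-plus-bootstrap idea (not the paper's exact procedure, which works on ilr-transformed data and, for spatially correlated samples, combines the bootstrap with stochastic simulation): here a lognormal is fitted to the readings above the detection limit, non-detects are imputed from its truncated lower tail, and a parameter of interest is bootstrapped.

```python
import numpy as np
from scipy import stats

def impute_nondetects(values, detection_limit, rng):
    """Replace values below `detection_limit` with draws from a lognormal
    fitted to the uncensored readings, truncated to [0, detection_limit)."""
    observed = values[values >= detection_limit]
    shape, loc, scale = stats.lognorm.fit(observed, floc=0)
    dist = stats.lognorm(shape, loc=loc, scale=scale)
    p_dl = dist.cdf(detection_limit)
    n_cens = int(np.sum(values < detection_limit))
    u = rng.uniform(0, p_dl, size=n_cens)       # inverse-CDF sampling below DL
    imputed = values.copy()
    imputed[values < detection_limit] = dist.ppf(u)
    return imputed

def bootstrap_mean(values, detection_limit, n_boot=1000, seed=0):
    """Bootstrap percentiles of the mean under imputation of non-detects."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        resample = rng.choice(values, size=len(values), replace=True)
        estimates.append(impute_nondetects(resample, detection_limit, rng).mean())
    return np.percentile(estimates, [2.5, 50, 97.5])
```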

Relevance:

20.00%

Publisher:

Abstract:

Hydrogeological research usually includes some statistical studies devised to elucidate the mean background state, characterise relationships among different hydrochemical parameters, and show the influence of human activities. These goals are achieved either by means of a statistical approach or by mixing models between end-members. Compositional data analysis has proved to be effective with the first approach, but there is no commonly accepted solution to the end-member problem in a compositional framework. We present here a possible solution based on factor analysis of compositions, illustrated with a case study. We find two factors on the compositional biplot by fitting two non-centred orthogonal axes to the most representative variables. Each of these axes defines a subcomposition, grouping those variables that lie nearest to it. With each subcomposition a log-contrast is computed and rewritten as an equilibrium equation. These two factors can be interpreted as the isometric log-ratio (ilr) coordinates of three hidden components, which can be plotted in a ternary diagram. These hidden components might be interpreted as end-members. We have analysed 14 molarities at 31 sampling stations along the Llobregat River and its tributaries, with a monthly measurement over two years. We obtained a biplot with 57% of the total variance explained, from which we extracted two factors: factor G, reflecting the geological background enhanced by potash mining, and factor A, essentially controlled by urban and/or farming wastewater. Graphical representation of these two factors allows us to identify three extreme samples, corresponding to pristine waters, potash mining influence and urban sewage influence. To confirm this, we have available analyses of diffuse and widespread point sources identified in the area: springs, potash mining lixiviates, sewage, and fertilisers. Each of these sources shows a clear link with one of the extreme samples, except fertilisers, due to the heterogeneity of their composition. This approach is a useful tool to distinguish end-members and characterise them, an issue that is generally difficult to solve. It is worth noting that the end-member composition cannot be fully estimated, but only characterised through log-ratio relationships among components. Moreover, the influence of each end-member in a given sample must be evaluated relative to the other samples. These limitations are intrinsic to the relative nature of compositional data.
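
A minimal sketch of the log-contrast (balance) computed from each subcomposition; the grouping of variables shown in the comment is purely illustrative, not the paper's factor G or factor A.

```python
import numpy as np

def log_contrast(samples, numerator, denominator):
    """Balance-type log-contrast between two groups of parts.

    For each sample, the normalised log-ratio of the geometric means of the
    `numerator` parts against the `denominator` parts, i.e. an ilr-style
    coordinate that could play the role of one of the factors above.
    """
    samples = np.asarray(samples, float)
    r, s = len(numerator), len(denominator)
    g_num = np.exp(np.log(samples[:, numerator]).mean(axis=1))
    g_den = np.exp(np.log(samples[:, denominator]).mean(axis=1))
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)

# e.g. a factor-G-like balance of {Na, K, Cl} against {Ca, Mg, HCO3}
# (purely illustrative column indices):
# fG = log_contrast(X, numerator=[0, 1, 2], denominator=[3, 4, 5])
```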

Relevance:

20.00%

Publisher:

Abstract:

The electron hole transfer (HT) properties of DNA are substantially affected by thermal fluctuations of the π-stack structure. Depending on the mutual position of neighboring nucleobases, the electronic coupling V may change by several orders of magnitude. In the present paper, we report the results of systematic QM/molecular dynamics (MD) calculations of the electronic couplings and on-site energies for hole transfer. Based on 15 ns MD trajectories for several DNA oligomers, we calculate the average coupling squares ⟨V²⟩ and the energies of base-pair triplets XG⁺Y and XA⁺Y, where X, Y = G, A, T, and C. For each of the 32 systems, 15,000 conformations separated by 1 ps are considered. The three-state generalized Mulliken-Hush method is used to derive electronic couplings for HT between neighboring base pairs. The adiabatic energies and dipole moment matrix elements are computed within the INDO/S method. We compare the rms values of V with the couplings estimated for the idealized B-DNA structure and show that in several important cases the couplings calculated for the idealized B-DNA structure are considerably underestimated. The rms values for the intrastrand couplings G-G, A-A, G-A, and A-G are found to be similar, ∼0.07 eV, while the interstrand couplings are quite different. The energies of the hole states G⁺ and A⁺ in the stack depend on the nature of the neighboring pairs. The XG⁺Y triplets are more stable than XA⁺Y by 0.5 eV. The thermal fluctuations of the DNA structure facilitate the HT process from guanine to adenine. The tabulated couplings and on-site energies can be used as reference parameters in theoretical and computational studies of HT processes in DNA.
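
A small arithmetic sketch of the quantity being tabulated: given couplings V computed for MD snapshots (a hypothetical input array below), the charge-transfer-relevant average is ⟨V²⟩ and its square root is the rms coupling compared with the idealized B-DNA value.

```python
import numpy as np

def rms_coupling(couplings_eV):
    """Root-mean-square electronic coupling sqrt(<V^2>) over MD snapshots.

    `couplings_eV` is a hypothetical array of couplings, one per
    conformation sampled every 1 ps along the trajectory.
    """
    v = np.asarray(couplings_eV, float)
    return np.sqrt(np.mean(v ** 2))

# Example with synthetic snapshot couplings (illustrative only):
rng = np.random.default_rng(1)
print(f"rms V = {rms_coupling(rng.normal(0.0, 0.07, 15000)):.3f} eV")
```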

Relevance:

20.00%

Publisher:

Abstract:

The electronic coupling Vda is one of the key parameters that determine the rate of charge transfer through DNA. While there have been several computational studies of Vda for hole transfer, estimates of electronic couplings for excess electron transfer (ET) in DNA remain unavailable. In this paper, an efficient strategy is established for calculating the ET matrix elements between base pairs in a π stack. Two approaches are considered. First, we employ the diabatic-state (DS) method, in which donor and acceptor are represented by radical anions of the canonical base pairs adenine-thymine (AT) and guanine-cytosine (GC). In this approach, similar values of Vda are obtained with the standard 6-31G* and extended 6-31++G* basis sets. Second, the electronic couplings are derived from the lowest unoccupied molecular orbitals (LUMOs) of neutral systems by using the generalized Mulliken-Hush or fragment-charge methods. Because the radical-anion states of AT and GC are well reproduced by LUMOs of the neutral base pairs calculated without diffuse functions, the estimated values of Vda are in good agreement with the couplings obtained for radical-anion states using the DS method. However, when the calculation of a neutral stack is carried out with diffuse functions, the LUMOs of the system exhibit a dipole-bound character and cannot be used for estimating electronic couplings. Our calculations suggest that the ET matrix elements Vda for models containing intrastrand thymine and cytosine bases are substantially larger than the couplings in complexes with interstrand pyrimidine bases. The matrix elements for excess electron transfer are found to be considerably smaller than the corresponding values for hole transfer and to be very responsive to structural changes in a DNA stack.

Relevance:

20.00%

Publisher:

Abstract:

Background: Few studies have used longitudinal ultrasound measurements to assess the effect of traffic-related air pollution on fetal growth. Objective: We examined the relationship between exposure to nitrogen dioxide (NO2) and aromatic hydrocarbons [benzene, toluene, ethylbenzene, m/p-xylene, and o-xylene (BTEX)] and fetal growth assessed by 1,692 ultrasound measurements among 562 pregnant women from the Sabadell cohort of the Spanish INMA (Environment and Childhood) study. Methods: We used temporally adjusted land-use regression models to estimate exposures to NO2 and BTEX. We fitted mixed-effects models to estimate longitudinal growth curves for femur length (FL), head circumference (HC), abdominal circumference (AC), biparietal diameter (BPD), and estimated fetal weight (EFW). Unconditional and conditional SD scores were calculated at 12, 20, and 32 weeks of gestation. Sensitivity analyses were performed considering time–activity patterns during pregnancy. Results: Exposure to BTEX from early pregnancy was negatively associated with growth in BPD during weeks 20–32. None of the other fetal growth parameters were associated with exposure to air pollution during pregnancy. When considering only women who spent 2 hr/day in nonresidential outdoor locations, effect estimates were stronger and statistically significant for the association between NO2 and growth in HC during weeks 12–20 and growth in AC, BPD, and EFW during weeks 20–32. Conclusions: Our results lend some support to an effect of exposure to traffic-related air pollutants from early pregnancy on fetal growth during mid-pregnancy.