974 results for Estimate model


Relevance: 30.00%

Abstract:

We analyzed more than 200 OSIRIS NAC images of comet 67P/Churyumov-Gerasimenko (67P), with pixel scales of 0.9-2.4 m/pixel, acquired from the Rosetta spacecraft in August and September 2014, using stereo-photogrammetric (SPG) methods. We derived improved spacecraft position and pointing data for the OSIRIS images and a high-resolution shape model that consists of about 16 million facets (2 m horizontal sampling) with a typical vertical accuracy at the decimeter scale. From this model, we derive a volume for the northern hemisphere of 9.35 ± 0.1 km³. Assuming a homogeneous density distribution and taking into account the current uncertainty in the position of the comet's center of mass, we extrapolated this value to an overall volume of 18.7 ± 1.2 km³ and, with a current best estimate of 1.0 × 10¹³ kg for the mass, we derive a bulk density of 535 ± 35 kg/m³. Furthermore, we used SPG methods to analyze the rotational elements of 67P. The rotational period for August and September 2014 was determined to be 12.4041 ± 0.0004 h. For the orientation of the rotational axis (z-axis of the body-fixed reference frame), we derived a precession model with a half-cone angle of 0.14°, a cone center position at 69.54°/64.11° (RA/Dec, J2000 equatorial coordinates), and a precession period of 10.7 days. For the definition of zero longitude (x-axis orientation), we finally selected the boulder-like Cheops feature on the big lobe of 67P and fixed its spherical coordinates to 142.35° right-hand-rule eastern longitude and -0.28° latitude. This completes the definition of the new Cheops reference frame for 67P. Finally, we defined cartographic mapping standards for common use and combined analyses of scientific results obtained not only within the OSIRIS team but also within other groups of the Rosetta mission.
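As a quick consistency check, the bulk density quoted above follows directly from the reported mass and extrapolated volume. A minimal sketch (values copied from the abstract; simple relative-error propagation, assuming the volume uncertainty dominates):

```python
# Reproducing the bulk-density estimate for 67P from the reported volume and
# mass. Error propagation here considers only the volume uncertainty.
KM3_TO_M3 = 1e9

volume_km3 = 18.7          # total extrapolated volume, km^3
volume_err_km3 = 1.2       # volume uncertainty, km^3
mass_kg = 1.0e13           # current best-estimate mass, kg

density = mass_kg / (volume_km3 * KM3_TO_M3)           # kg/m^3
density_err = density * (volume_err_km3 / volume_km3)  # relative-error propagation

print(f"bulk density: {density:.0f} +/- {density_err:.0f} kg/m^3")
# -> bulk density: 535 +/- 34 kg/m^3
```

The ±34 from volume alone is close to the quoted ±35 kg/m³, which presumably also folds in the mass uncertainty.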

Relevance: 30.00%

Abstract:

The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (2000-present), online water-stage and water-depth information for the entire freshwater portion of the Greater Everglades. Continuous daily spatial interpolations of the EDEN network stage data are presented on a grid with 400-m spacing. EDEN offers a consistent and documented dataset that can be used by scientists and managers to: (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan (CERP) (U.S. Army Corps of Engineers, 1999). The target users are biologists and ecologists examining trophic-level responses to hydrodynamic changes in the Everglades. The first objective of this report is to validate the spatially continuous EDEN water-surface model for the Everglades, Florida, developed by Pearlstine et al. (2007), using an independent field-measured dataset. The second objective is to demonstrate two applications of the EDEN water-surface model: to estimate site-specific ground elevation by using the validated EDEN water-surface model and observed water-depth data, and to create water-depth hydrographs for tree islands. We found no statistically significant differences between model-predicted and field-observed water-stage data in either southern Water Conservation Area (WCA) 3A or WCA 3B. Tree island elevations were derived by subtracting field water-depth measurements from the predicted EDEN water surface. Water-depth hydrographs were then computed by subtracting tree island elevations from the EDEN water stage. Overall, the model is reliable, with a root mean square error (RMSE) of 3.31 cm. 
By region, the RMSE is 2.49 cm in WCA 3A and 7.77 cm in WCA 3B. This new landscape-scale hydrological model has wide applications for ongoing research and management efforts that are vital to restoration of the Florida Everglades. The accurate, high-resolution hydrological data generated over broad spatial and temporal scales by the EDEN model provide a previously missing key to understanding the habitat requirements of and linkages among native and invasive populations, including fish, wildlife, wading birds, and plants. The EDEN model is a powerful tool that could be adapted for other ecosystem-scale restoration and management programs worldwide.
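The two applications amount to simple subtractions of modeled stage, observed depth, and derived elevation; a hedged sketch (function names and all numbers are illustrative, not from the report):

```python
# Sketch of the two EDEN applications: deriving site-specific ground elevation
# from the modeled water surface plus an observed depth, then building a
# water-depth hydrograph from that elevation.
import math

def ground_elevation(stage_cm, observed_depth_cm):
    # Site elevation = modeled water-surface stage minus field-measured depth.
    return stage_cm - observed_depth_cm

def depth_hydrograph(stages_cm, elevation_cm):
    # Daily water depth = modeled stage minus (fixed) ground elevation.
    return [s - elevation_cm for s in stages_cm]

def rmse(predicted, observed):
    # Root mean square error, as used to validate the water-surface model.
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted))

elev = ground_elevation(stage_cm=250.0, observed_depth_cm=40.0)
depths = depth_hydrograph([250.0, 248.5, 252.0], elev)
err = rmse(predicted=[40.0, 38.5, 42.0], observed=[39.0, 39.5, 41.0])
print(elev, depths, round(err, 2))  # -> 210.0 [40.0, 38.5, 42.0] 1.0
```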

Relevance: 30.00%

Abstract:

In this paper we introduce, in a panel-data stochastic frontier (SF) framework, technical efficiency via an intercept that evolves over time as an AR(1) process. The distinguishing features of the model are the following. First, the model is dynamic in nature. Second, it can separate technical inefficiency from fixed firm-specific effects that are not part of inefficiency. Third, the model allows one to estimate technical change separately from change in technical efficiency. We propose the ML method to estimate the parameters of the model. Finally, we derive expressions to calculate/predict technical inefficiency (efficiency).
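The model's key ingredient, inefficiency that evolves as an AR(1) process over the panel's time dimension, can be sketched with a toy simulation (the parameter values and the zero-truncation that keeps inefficiency nonnegative are illustrative assumptions, not the paper's specification):

```python
# Toy simulation of time-varying technical inefficiency following
# u_t = rho * u_{t-1} + e_t, truncated at zero so inefficiency stays
# nonnegative. Parameters are illustrative.
import random

random.seed(0)

def simulate_inefficiency(rho, sigma, periods, u0=0.5):
    path, u = [], u0
    for _ in range(periods):
        u = max(0.0, rho * u + random.gauss(0.0, sigma))  # AR(1) step
        path.append(u)
    return path

path = simulate_inefficiency(rho=0.8, sigma=0.1, periods=10)
print([round(u, 3) for u in path])
```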

Relevance: 30.00%

Abstract:

This study investigates a theoretical model in which a longitudinal process, a stationary Markov chain, and a Weibull survival process share a bivariate random effect. Furthermore, a quality-of-life-adjusted survival is calculated as the weighted sum of survival time. Theoretical values of the population mean adjusted survival of the described model are computed numerically. The parameters of the bivariate random effect significantly affect the theoretical values of the population mean. Maximum-likelihood and Bayesian methods are applied to simulated data to estimate the model parameters. Based on the parameter estimates, the predicted population mean adjusted survival can then be calculated numerically and compared with the theoretical values. The Bayesian and maximum-likelihood methods provide parameter estimates and population mean predictions with comparable accuracy; however, the Bayesian method suffers from poor convergence due to autocorrelation and inter-variable correlation.
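The quality-of-life adjustment itself is a weighted sum of the time spent in each health state of the longitudinal process. A minimal sketch (states, weights, and durations are illustrative assumptions):

```python
# Quality-of-life-adjusted survival as the weighted sum of survival time:
# each health state of the Markov-chain process gets a quality weight
# (1.0 = perfect health) applied to the time spent in that state.
weights = {"good": 1.0, "impaired": 0.6, "poor": 0.3}

# One subject's trajectory: (state, years spent in that state) until death.
trajectory = [("good", 2.0), ("impaired", 1.5), ("poor", 0.5)]

raw_survival = sum(t for _, t in trajectory)
adjusted_survival = sum(weights[state] * t for state, t in trajectory)
print(raw_survival, round(adjusted_survival, 2))  # -> 4.0 3.05
```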

Relevance: 30.00%

Abstract:

Ordinal outcomes are frequently employed in diagnosis and clinical trials. Clinical trials of Alzheimer's disease (AD) treatments are a case in point, using the status of mild, moderate, or severe disease as the outcome measure. As in many other outcome-oriented studies, the disease status may be misclassified. This study estimates the extent of misclassification in an ordinal outcome such as disease status. It also estimates the extent of misclassification in a predictor variable such as genotype status. An ordinal logistic regression model is commonly used to model the relationship between disease status, the effect of treatment, and other predictive factors. A simulation study was done. First, data were created based on a set of hypothetical parameters and hypothetical rates of misclassification. Next, the maximum likelihood method was employed to generate likelihood equations accounting for misclassification. The Nelder-Mead simplex method was used to solve for the misclassification and model parameters. Finally, this method was applied to an AD dataset to detect the amount of misclassification present. The estimates of the ordinal regression model parameters were close to the hypothetical parameters: β1 was hypothesized at 0.50 and the mean estimate was 0.488; β2 was hypothesized at 0.04 and the mean of the estimates was 0.04. Although the estimates for the rates of misclassification of X1 were not as close as those for β1 and β2, they validate this method. The X1 0-1 misclassification was hypothesized as 2.98% and the mean of the simulated estimates was 1.54%; in the best case, the misclassification of k from high to medium was hypothesized at 4.87% and had a sample mean of 3.62%. In the AD dataset, the estimated odds ratio for X1 (having both copies of the APOE 4 allele) changed from 1.377 to 1.418, demonstrating that the estimates of the odds ratio change when the analysis includes adjustment for misclassification.
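The core of a misclassification-adjusted likelihood is that the probability of *observing* a category mixes the true-category probabilities through a misclassification matrix. A hedged sketch for a 3-level ordinal logit (cutpoints, coefficient, and error rates are illustrative assumptions, not the study's values):

```python
# Misclassification-adjusted category probabilities for a 3-level ordinal
# logistic model: P(observe j) = sum_k P(true k) * P(observe j | true k).
import math

def ordinal_probs(x, cuts, beta):
    # True-category probabilities from a cumulative (proportional-odds) logit.
    cdf = [1.0 / (1.0 + math.exp(-(c - beta * x))) for c in cuts]
    return [cdf[0], cdf[1] - cdf[0], 1.0 - cdf[1]]

# Rows: true category; columns: observed category (each row sums to 1).
M = [[0.96, 0.03, 0.01],
     [0.02, 0.95, 0.03],
     [0.01, 0.04, 0.95]]

def observed_probs(x, cuts, beta):
    p_true = ordinal_probs(x, cuts, beta)
    return [sum(p_true[k] * M[k][j] for k in range(3)) for j in range(3)]

p = observed_probs(x=1.0, cuts=[-0.5, 1.5], beta=0.5)
print([round(v, 3) for v in p])
```

In the study, likelihood equations built from these mixed probabilities were maximized over both the regression and misclassification parameters with Nelder-Mead.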

Relevance: 30.00%

Abstract:

A multivariate frailty hazard model is developed for the joint modeling of three correlated time-to-event outcomes: (1) local recurrence, (2) distant recurrence, and (3) overall survival. The term frailty is introduced to model population heterogeneity. The dependence is modeled by conditioning on a shared frailty that is included in the three hazard functions. Independent variables can be included in the model as covariates. Markov chain Monte Carlo methods are used to estimate the posterior distributions of the model parameters. The algorithm used in the present application is the hybrid Metropolis-Hastings algorithm, which simultaneously updates all parameters using evaluations of the gradient of the log posterior density. The performance of this approach is examined in simulation studies using exponential and Weibull distributions. We apply the proposed methods to a study of patients with soft tissue sarcoma, which motivated this research. Our results indicate that patients receiving chemotherapy had better overall survival, with a hazard ratio of 0.242 (95% CI: 0.094-0.564), and a lower risk of distant recurrence, with a hazard ratio of 0.636 (95% CI: 0.487-0.860), but not significantly better local recurrence, with a hazard ratio of 0.799 (95% CI: 0.575-1.054). The advantages and limitations of the proposed models, and future research directions, are discussed.

Relevance: 30.00%

Abstract:

The type 2 diabetes (diabetes) pandemic is recognized as a threat to tuberculosis (TB) control worldwide. This secondary data analysis project estimated the contribution of diabetes to TB in a binational community on the Texas-Mexico border where both diseases occur. Newly diagnosed TB patients >20 years of age were prospectively enrolled at Texas-Mexico border clinics between January 2006 and November 2008. Upon enrollment, information regarding social, demographic, and medical risks for TB, including self-reported diabetes, was collected at interview. In addition, self-reported diabetes was supported by blood confirmation according to guidelines published by the American Diabetes Association (ADA). For this project, the data were compared with existing statistics for TB incidence and diabetes prevalence from the corresponding general populations of each study site to estimate the relative and attributable risks of diabetes for TB. In concordance with historical sociodemographic data reported for TB patients with self-reported diabetes, our TB patients with diabetes also lacked the risk factors traditionally associated with TB (alcohol abuse, drug abuse, history of incarceration, and HIV infection); instead, the majority of our TB patients with diabetes were characterized by overweight/obesity, chronic hyperglycemia, and older median age. In addition, diabetes prevalence among our TB patients was significantly higher than in the corresponding general populations. The findings of this study will help accurately characterize TB patients with diabetes, thus aiding in the timely recognition and diagnosis of TB in a population not traditionally viewed as at risk. We provide epidemiological and biological evidence that diabetes continues to be an increasingly important risk factor for TB.
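The two quantities the project estimated, relative risk and attributable risk, reduce to simple ratios of incidence rates and exposure prevalence. A hedged sketch (the rates and prevalence below are illustrative, not the study's data):

```python
# Relative risk (RR) of TB given diabetes, and Levin's population
# attributable fraction (PAF) of TB cases due to diabetes.
def relative_risk(rate_exposed, rate_unexposed):
    return rate_exposed / rate_unexposed

def attributable_fraction(prevalence_exposure, rr):
    # PAF = p(RR - 1) / (1 + p(RR - 1)), with p the exposure prevalence.
    return prevalence_exposure * (rr - 1) / (1 + prevalence_exposure * (rr - 1))

rr = relative_risk(rate_exposed=120e-5, rate_unexposed=40e-5)  # RR = 3.0
paf = attributable_fraction(prevalence_exposure=0.25, rr=rr)   # ~1/3 of cases
print(round(rr, 2), round(paf, 3))  # -> 3.0 0.333
```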

Relevance: 30.00%

Abstract:

Conventional designs of animal bioassays allocate the same number of animals to control and dose groups to explore the spontaneous and induced tumor incidence rates, respectively. The purposes of such bioassays are (a) to determine whether or not the substance exhibits carcinogenic properties, and (b) if so, to estimate the human response at relatively low doses. In this study, it was found that the optimal allocation to the experimental groups which, in some sense, minimizes the error of the estimated response for low-dose extrapolation is associated with the dose level and tumor risk. The number of dose levels was investigated at an affordable experimental cost. The administered-dose pattern of 1 MTD, 1/2 MTD, 1/4 MTD, etc., plus a control gives the most reasonable arrangement for low-dose extrapolation. An arrangement of five dose groups may make the highest dose trivial; a four-dose design can circumvent this problem and also leaves one degree of freedom for testing the goodness of fit of the response model. An example using the data on liver tumors induced in mice in a lifetime study of feeding dieldrin (Walker et al., 1973) is implemented with the methodology, and the results are compared with conclusions drawn from other studies.

Relevance: 30.00%

Abstract:

A Bayesian approach to estimation of the regression coefficients of a multinomial logit model with ordinal-scale response categories is presented. A Monte Carlo method is used to construct the posterior distribution of the link function, which is treated as an arbitrary scalar function. The Gauss-Markov theorem is then used to determine a function of the link that produces a random vector of coefficients, and the posterior distribution of this random vector is used to estimate the regression coefficients. The method is referred to as Bayesian generalized least squares (BGLS) analysis. Two cases involving multinomial logit models are described: Case I involves a cumulative logit model and Case II a proportional-odds model. All inferences about the coefficients in both cases are described in terms of the posterior distribution of the regression coefficients. The results from the BGLS method are compared with maximum likelihood estimates of the regression coefficients. The BGLS method avoids the nonlinear problems encountered when estimating the regression coefficients of a generalized linear model; it is not complex or computationally intensive, and it offers several advantages over other Bayesian approaches.

Relevance: 30.00%

Abstract:

The hierarchical linear growth model (HLGM), as a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Mostly, researchers who conduct HLGM analyses are interested in the treatment effect on individual trajectories, which can be indicated by cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction effect in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration, or higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at a specific time point. Thus, reporting and interpreting effect sizes in HLGM have received increasing emphasis in recent years, owing to the limitations of, and increasing criticism of, statistical hypothesis testing. Nevertheless, most researchers fail to report these model-implied effect sizes for group-trajectory comparisons, and their corresponding confidence intervals, in HLGM analyses, since there is a lack both of appropriate, standard functions to estimate effect sizes associated with the model-implied difference between group trajectories in HLGM and of computing packages in popular statistical software to calculate them automatically. 
The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We proposed two functions to estimate effect sizes for the model-based group-trajectory difference at a specific time, and we also suggested robust effect sizes to reduce the bias of the estimated effect sizes. Then, we applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, and we compared three methods of constructing confidence intervals around d and du, recommending the best one for application. Finally, we constructed 95% confidence intervals with the suitable method for the effect sizes obtained with the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that, even when the statistical hypothesis test shows no significant difference between group trajectories, effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analyses provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, we compared the three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of effect sizes as estimates of the population parameter. We suggest the method based on the noncentral t-distribution when its assumptions hold, and the bootstrap bias-corrected and accelerated method when they do not.
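The kind of quantity involved can be sketched as a standardized difference between two groups at one time point, with a simple percentile-bootstrap confidence interval standing in for the BCa interval discussed above (data are simulated and all values illustrative):

```python
# Standardized mean difference (Cohen's-d-style effect size) between two
# groups at a fixed time point, with a percentile-bootstrap 95% CI.
import math
import random
import statistics

random.seed(1)
group_a = [random.gauss(10.0, 2.0) for _ in range(50)]  # trajectory values, group A
group_b = [random.gauss(11.0, 2.0) for _ in range(50)]  # trajectory values, group B

def cohens_d(a, b):
    # Mean difference divided by the pooled standard deviation.
    na, nb = len(a), len(b)
    pooled = math.sqrt(((na - 1) * statistics.variance(a)
                        + (nb - 1) * statistics.variance(b)) / (na + nb - 2))
    return (statistics.mean(b) - statistics.mean(a)) / pooled

def bootstrap_ci(a, b, reps=2000, alpha=0.05):
    # Percentile bootstrap: resample within each group, recompute d each time.
    ds = sorted(cohens_d([random.choice(a) for _ in a],
                         [random.choice(b) for _ in b]) for _ in range(reps))
    return ds[int(reps * alpha / 2)], ds[int(reps * (1 - alpha / 2)) - 1]

d = cohens_d(group_a, group_b)
lo, hi = bootstrap_ci(group_a, group_b)
print(round(d, 2), round(lo, 2), round(hi, 2))
```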

Relevance: 30.00%

Abstract:

Bromoform (CHBr3) is one important precursor of atmospheric reactive bromine species that are involved in ozone depletion in the troposphere and stratosphere. In the open ocean bromoform production is linked to phytoplankton that contains the enzyme bromoperoxidase. Coastal sources of bromoform are higher than open ocean sources. However, open ocean emissions are important because the transfer of tracers into higher altitude in the air, i.e. into the ozone layer, strongly depends on the location of emissions. For example, emissions in the tropics are more rapidly transported into the upper atmosphere than emissions from higher latitudes. Global spatio-temporal features of bromoform emissions are poorly constrained. Here, a global three-dimensional ocean biogeochemistry model (MPIOM-HAMOCC) is used to simulate bromoform cycling in the ocean and emissions into the atmosphere using recently published data of global atmospheric concentrations (Ziska et al., 2013) as upper boundary conditions. Our simulated surface concentrations of CHBr3 match the observations well. Simulated global annual emissions based on monthly mean model output are lower than previous estimates, including the estimate by Ziska et al. (2013), because the gas exchange reverses when less bromoform is produced in non-blooming seasons. This is the case for higher latitudes, i.e. the polar regions and northern North Atlantic. Further model experiments show that future model studies may need to distinguish different bromoform-producing phytoplankton species and reveal that the transport of CHBr3 from the coast considerably alters open ocean bromoform concentrations, in particular in the northern sub-polar and polar regions.

Relevance: 30.00%

Abstract:

Greenland ice core records indicate that the last deglaciation (~7-21 ka) was punctuated by numerous abrupt climate reversals involving temperature changes of up to 5°C-10°C within decades. However, the cause behind many of these events is uncertain. A likely candidate may have been the input of deglacial meltwater, from the Laurentide ice sheet (LIS), to the high-latitude North Atlantic, which disrupted ocean circulation and triggered cooling. Yet the direct evidence of meltwater input for many of these events has so far remained undetected. In this study, we use the geochemistry (paired Mg/Ca-d18O) of planktonic foraminifera from a sediment core south of Iceland to reconstruct the input of freshwater to the northern North Atlantic during abrupt deglacial climate change. Our record can be placed on the same timescale as ice cores and therefore provides a direct comparison between the timing of freshwater input and climate variability. Meltwater events coincide with the onset of numerous cold intervals, including the Older Dryas (14.0 ka), two events during the Allerød (at ~13.1 and 13.6 ka), the Younger Dryas (12.9 ka), and the 8.2 ka event, supporting a causal link between these abrupt climate changes and meltwater input. During the Bølling-Allerød warm interval, we find that periods of warming are associated with an increased meltwater flux to the northern North Atlantic, which in turn induces abrupt cooling, a cessation in meltwater input, and eventual climate recovery. This implies that feedback between climate and meltwater input produced a highly variable climate. A comparison to published data sets suggests that this feedback likely included fluctuations in the southern margin of the LIS causing rerouting of LIS meltwater between southern and eastern drainage outlets, as proposed by Clark et al. (2001, doi:10.1126/science.1062517).

Relevance: 30.00%

Abstract:

“Import content of exports”, based on Leontief's demand-driven input-output model, has been widely used as an indicator of a country's degree of participation in vertical-specialisation trade. At a sectoral level, this indicator represents the share of intermediates imported by all sectors embodied in a given sector's exported output. However, this indicator reflects only one aspect of vertical specialisation, the demand side. This paper discusses the possibility of using the input-output model developed by Ghosh to measure vertical specialisation from the perspective of the supply side. At a sectoral level, the Ghosh-type indicator measures the share of imported intermediates used in a sector's production that are subsequently embodied in exports by all sectors. We estimate these two indicators of vertical specialisation for 47 selected economies for 1995, 2000, and 2005 using the OECD's harmonized input-output database. The potential biases of both indicators due to the treatment of net withdrawals from inventories are also discussed.
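The demand-side indicator can be sketched on a toy two-sector economy: the import content of exports is the imported-input requirement applied to the output needed, via the Leontief inverse, to produce the exports. All matrices below are made-up illustrations, not OECD data:

```python
# Toy two-sector import content of exports: m' (I - A_d)^{-1} e as a share
# of total exports, where A_d holds domestic input coefficients and m the
# imported-input coefficients per unit of output.
def mat_inv_2x2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A_d = [[0.20, 0.10],     # domestic intermediate-input coefficients
       [0.05, 0.25]]
m = [0.10, 0.15]         # imported-input coefficients per unit of output
e = [100.0, 50.0]        # sectoral exports

I_minus_Ad = [[1 - A_d[0][0], -A_d[0][1]],
              [-A_d[1][0], 1 - A_d[1][1]]]
L = mat_inv_2x2(I_minus_Ad)  # Leontief inverse (I - A_d)^{-1}

# Output required, directly and indirectly, to produce the exports:
x = [sum(L[i][j] * e[j] for j in range(2)) for i in range(2)]
imported_content = sum(m[i] * x[i] for i in range(2))
vs_share = imported_content / sum(e)
print(round(vs_share, 3))  # -> 0.165
```

The Ghosh-type supply-side indicator replaces this demand-driven chain with allocation coefficients, tracing imported inputs forward into exports rather than backward from them.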

Relevance: 30.00%

Abstract:

We use an automatic weather station and a surface mass balance dataset spanning four melt seasons, collected on Hurd Peninsula Glaciers, South Shetland Islands, to investigate the point surface energy balance, to determine the absolute and relative contributions of the various energy fluxes acting on the glacier surface, and to estimate the sensitivity of melt to ambient temperature changes. Long-wave incoming radiation is the main energy source for melt, while short-wave radiation is the most important flux controlling the variation of both seasonal and daily mean surface energy balance. Short-wave and long-wave radiation fluxes in general balance each other, resulting in a high correspondence between daily mean net radiation flux and available melt energy flux. We calibrate a distributed melt model driven by air temperature and an expression for the incoming short-wave radiation. The model is calibrated with the data from one of the melt seasons and validated with the data from the three remaining seasons. The model results deviate by at most 140 mm w.e. from the corresponding observations obtained using the glaciological method. The model is very sensitive to changes in ambient temperature: a 0.5°C increase results in 56% higher melt rates.
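A melt model of the general kind described, driven by air temperature plus a shortwave-radiation term, can be sketched as an enhanced temperature-index scheme; the coefficients, albedo, and season data below are illustrative assumptions, not the calibrated values:

```python
# Enhanced temperature-index melt sketch: melt is driven by positive air
# temperature plus absorbed shortwave radiation. Raising temperature by
# 0.5 C shows the kind of sensitivity experiment described in the study.
def daily_melt_mm_we(temp_c, swin_wm2, albedo, tf=2.0, srf=0.01):
    # Melt (mm w.e./day) = TF * max(T, 0) + SRF * (1 - albedo) * SWin
    return tf * max(temp_c, 0.0) + srf * (1.0 - albedo) * swin_wm2

# (daily mean temperature C, incoming shortwave W/m^2) for a toy "season":
season = [(1.2, 180.0), (0.4, 150.0), (2.0, 210.0), (-0.5, 120.0)]

melt = sum(daily_melt_mm_we(t, sw, albedo=0.45) for t, sw in season)
warmer = sum(daily_melt_mm_we(t + 0.5, sw, albedo=0.45) for t, sw in season)
print(round(melt, 2), round(warmer / melt - 1.0, 3))
```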

Relevance: 30.00%

Abstract:

The final aim of the research in this doctoral thesis is the estimation of the total ice volume of the more than 1600 glaciers of Svalbard, in the Arctic, and thus of their potential contribution to sea-level rise under a global-warming scenario. The most accurate calculations of glacier volumes are those based on ice thicknesses measured by ground-penetrating radar (GPR). However, such measurements are not viable for very large sets of glaciers, because of their cost, logistic difficulties, and time requirements, especially in polar or mountain regions. By contrast, the calculation of glacier areas from satellite images is perfectly viable at global and regional scales, so volume-area scaling relationships are the most useful tool for determining glacier volumes at those scales, as done for Svalbard in this thesis. As part of the PhD work, we compiled an inventory of the radio-echo-sounded glaciers in Svalbard and calculated the ice volume of more than 80 glacier basins in Svalbard from GPR data. These volumes were used to calibrate the volume-area relationships derived in this dissertation. The GPR data were obtained during fieldwork campaigns carried out by international teams, often led by the Group of Numerical Simulation in Science and Engineering of the Technical University of Madrid, to which the PhD candidate and her supervisors belong. Furthermore, we developed a methodology to estimate the error in the volume calculation, including a novel technique to calculate the interpolation error for data sets of the type produced by GPR profiling, which show very characteristic spatial distribution patterns but very irregular data density. 
We derived scaling relationships specific to Svalbard glaciers, exploring the sensitivity of the scaling parameters to different glacier morphologies and adding new variables. In particular, we performed experiments to verify whether scaling relationships obtained by characterizing individual glaciers by size, slope, or shape imply significant differences in the estimated total volume of Svalbard glaciers, and whether such partitioning implies any noticeable pattern in the scaling-relationship parameters. Our results indicate that, for a fixed value of the multiplicative factor in the scaling relationship, the exponent of the area in the volume-area relationship decreases as slope and shape factor increase, whereas size-based classifications do not reveal any clear trend. This means that steeper and cirque-type glaciers are less sensitive to changes in glacier area. Moreover, the volumes of the total population of Svalbard glaciers calculated with partitioning into subgroups by size and slope are 1-4% smaller than the volume obtained considering all glaciers without partitioning into subgroups, whereas the volumes calculated with partitioning by shape are 3-5% larger. We also performed multivariate experiments to optimally predict the volume of Svalbard glaciers from a combination of different predictors. 
Our results show that a simple power-type volume-area (V-A) model explains 98.6% of the variance. Only the predictor glacier length provides statistical significance when used in addition to glacier area, though the coefficient of determination decreases compared with the simpler V-A model. The predictor elevation range provides no additional information when used in addition to glacier area. Our estimates of the volume of the entire population of Svalbard glaciers, using the different scaling relationships derived in this thesis, range between 6890 and 8106 km³, with estimated relative errors of the order of 6.6-8.1%. The average of our estimates, which can be taken as our best estimate of the volume, is 7504 km³. In terms of sea-level equivalent (SLE), our volume estimates correspond to a potential sea-level rise of 17-20 mm SLE, averaging 19 ± 2 mm SLE, where the quoted error corresponds to our estimated relative error in volume. For comparison, the estimates using the V-A scaling relationships found in the literature range between 13 and 26 mm SLE, averaging 20 ± 2 mm SLE, where the quoted error represents the standard deviation of the different estimates.
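The thesis's core tool, a power-law V-A scaling calibrated against GPR-derived volumes and then converted to sea-level equivalent, can be sketched as follows. The calibration glaciers below are made up; the ocean area and ice/water densities are standard reference values:

```python
# Calibrate V = c * A**gamma by least squares in log-log space, then convert
# a total ice volume to sea-level equivalent (SLE) in mm.
import math

# (area km^2, GPR-derived volume km^3) -- illustrative calibration glaciers
data = [(2.0, 0.10), (10.0, 1.1), (50.0, 12.0), (120.0, 45.0), (300.0, 170.0)]

xs = [math.log(a) for a, _ in data]
ys = [math.log(v) for _, v in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
gamma = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))   # scaling exponent
c = math.exp(my - gamma * mx)                # multiplicative factor

def volume_km3(area_km2):
    return c * area_km2 ** gamma

def sle_mm(total_volume_km3, ocean_area_km2=3.62e8,
           rho_ice=900.0, rho_water=1000.0):
    # Convert ice volume to water volume, spread over the ocean, in mm.
    water_km3 = total_volume_km3 * rho_ice / rho_water
    return water_km3 / ocean_area_km2 * 1e6  # km -> mm

sle = sle_mm(7504.0)  # the thesis's best-estimate total volume
print(round(gamma, 2), round(sle, 1))
```

With standard densities and ocean area, 7504 km³ of ice lands at about 18.7 mm SLE, inside the 17-20 mm range quoted above.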