808 results for Empirical Predictions
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure of the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomials-based basis. To get the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. We therefore propose a weighted linear regression approach, in which all polynomials up to order k are used as predictor variables and weights are proportional to the reference density. Finally, for the case of 2-order Hermite polynomials (normal reference) and 1-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
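A minimal Python sketch of the weighted-regression idea, under assumptions not stated in the abstract: the coordinates are taken to be the weighted least-squares coefficients of log(f/f0) on probabilists' Hermite polynomials, with a standard normal reference f0 and weights proportional to f0. The grid, degree, and log-ratio response are illustrative choices, not the authors' exact formulation.

```python
import numpy as np
from numpy.polynomial import hermite_e as H
from scipy.stats import norm

def hermite_coordinates(x, log_density, degree=4):
    """Weighted LS fit of log(f/f0) on probabilists' Hermite polynomials.

    x           : evaluation grid (1-D array)
    log_density : log f(x) of the density under study, on the same grid
    degree      : highest polynomial order used as a predictor
    """
    f0 = norm.pdf(x)                      # normal reference density
    y = log_density - norm.logpdf(x)      # log-ratio against the reference
    X = H.hermevander(x, degree)          # He_0 .. He_degree evaluated at x
    sw = np.sqrt(f0)                      # sqrt of weights; weights ∝ f0
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef                           # coordinates in the Hermite basis

# Example: a N(0.5, 1.2^2) density; the first- and second-order coordinates
# loosely reflect its shift in mean and variance relative to N(0, 1).
x = np.linspace(-5, 5, 400)
print(hermite_coordinates(x, norm.logpdf(x, loc=0.5, scale=1.2)))
```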
Abstract:
The integration of geophysical data into the subsurface characterization problem has been shown in many cases to significantly improve hydrological knowledge by providing information at spatial scales and locations that is unattainable using conventional hydrological measurement techniques. The investigation of exactly how much benefit can be brought by geophysical data in terms of its effect on hydrological predictions, however, has received considerably less attention in the literature. Here, we examine the potential hydrological benefits brought by a recently introduced simulated annealing (SA) conditional stochastic simulation method designed for the assimilation of diverse hydrogeophysical data sets. We consider the specific case of integrating crosshole ground-penetrating radar (GPR) and borehole porosity log data to characterize the porosity distribution in saturated heterogeneous aquifers. In many cases, porosity is linked to hydraulic conductivity and thus to flow and transport behavior. To perform our evaluation, we first generate a number of synthetic porosity fields exhibiting varying degrees of spatial continuity and structural complexity. Next, we simulate the collection of crosshole GPR data between several boreholes in these fields, and the collection of porosity log data at the borehole locations. The inverted GPR data, together with the porosity logs, are then used to reconstruct the porosity field using the SA-based method, along with a number of other more elementary approaches. Assuming that the grid-cell-scale relationship between porosity and hydraulic conductivity is unique and known, the porosity realizations are then used in groundwater flow and contaminant transport simulations to assess the benefits and limitations of the different approaches.
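The paper's SA method and its hydrogeophysical objective function are not reproduced here; the following is only a generic Python sketch of swap-based simulated annealing on a gridded field, with a toy misfit standing in for the GPR and porosity-log constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal(field, misfit, n_iter=5000, t0=1.0, cooling=0.999):
    """Swap-based SA: propose swapping two cell values; accept if the
    misfit decreases, or with a Boltzmann probability otherwise."""
    current = misfit(field)
    temp = t0
    for _ in range(n_iter):
        i, j = rng.integers(field.size, size=2)
        flat = field.ravel()
        flat[i], flat[j] = flat[j], flat[i]          # propose a swap
        proposed = misfit(field)
        if proposed < current or rng.random() < np.exp((current - proposed) / temp):
            current = proposed                        # accept the swap
        else:
            flat[i], flat[j] = flat[j], flat[i]       # reject: undo the swap
        temp *= cooling
    return field

# Toy misfit: match a target horizontal correlation (a stand-in for the
# geophysical and borehole constraints used in the paper).
target = 0.8
def misfit(f):
    return (np.corrcoef(f[:, :-1].ravel(), f[:, 1:].ravel())[0, 1] - target) ** 2

porosity = anneal(rng.normal(0.3, 0.05, (40, 40)), misfit)
```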
Abstract:
Teaching and research are organised differently between subject domains: attempts to construct typologies of higher education institutions, however, often do not include quantitative indicators of subject mix that would allow systematic comparisons of large numbers of higher education institutions across different countries, as the availability of data for such indicators is limited. In this paper, we present an exploratory approach to the construction of such indicators. The database constructed in the AQUAMETH project, which also includes data disaggregated at the disciplinary level, is explored with the aim of understanding patterns of subject mix. For six European countries, an exploratory and descriptive analysis of staff composition divided into four large domains (medical sciences; engineering and technology; natural sciences; social sciences and humanities) is performed, which leads to a classification distinguishing between specialist and generalist institutions. Among the latter, a further distinction is made based on the presence or absence of a medical department. Preliminary exploration of this classification and its comparison with other indicators show the influence of long-term dynamics on the subject mix of individual higher education institutions, but also underline disciplinary differences, for example regarding student-to-staff ratios, as well as national patterns, for example regarding the number of PhD degrees per 100 undergraduate students. Despite its many limitations, this exploratory approach allows us to define a classification of higher education institutions that accounts for a large share of the differences between the analysed institutions.
Abstract:
Owing to increasing resistance and the limited arsenal of new antibiotics, especially against Gram-negative pathogens, carefully designed antibiotic regimens are obligatory for febrile neutropenic patients, along with effective infection control. The Expert Group of the 4th European Conference on Infections in Leukemia has developed guidelines for initial empirical therapy in febrile neutropenic patients, based on: i) the local resistance epidemiology; and ii) the patient's risk factors for resistant bacteria and for a complicated clinical course. An 'escalation' approach, avoiding empirical carbapenems and combinations, should be employed in patients without particular risk factors. A 'de-escalation' approach, with initial broad-spectrum antibiotics or combinations, should be used only in those patients with: i) known prior colonization or infection with resistant pathogens; or ii) complicated presentation; or iii) in centers where resistant pathogens are prevalent at the onset of febrile neutropenia. In the latter case, infection control and antibiotic stewardship also need urgent review. Modification of the initial regimen at 72-96 h should be based on the patient's clinical course and the microbiological results. Discontinuation of antibiotics after 72 h or later should be considered in neutropenic patients with fever of unknown origin who have been hemodynamically stable since presentation and afebrile for at least 48 h, irrespective of neutrophil count and expected duration of neutropenia. This strategy aims to minimize the collateral damage associated with antibiotic overuse, and the further selection of resistance.
Abstract:
Empirical studies have recently pointed towards a socio-structural category largely overlooked in social inequality research: the dynamic positions of households adjacent to those of the poor and yet not representing those of the established, more prosperous positions in society. These results suggest that the population in this category fluctuates into and out of poverty more often than it moves into and out of secure prosperity. This category, which still lacks theoretical conceptualization, is characterized by both precariousness and a certain degree of prosperity; despite a restricted and uncertain living standard, it holds a range of opportunities for action. We seek analytical elements to conceptualize 'precarious prosperity' for comparative empirical research by subjecting various concepts of social inequality research to critical scrutiny. We then operationally define 'precarious prosperity' to screen for this population in three countries. Based on qualitative interviews with households in precarious prosperity, we present initial analyses of perceptions and household strategies that underline the relevance of the concept in different countries.
Abstract:
Background: Estimated cancer mortality statistics were published for the years 2011 and 2012 for the European Union (EU) and its six most populous countries. Patients and methods: Using logarithmic Poisson count data joinpoint models and the World Health Organization mortality and population database, we estimated numbers of deaths and age-standardized (world) mortality rates (ASRs) in 2013 from all cancers and selected cancers. Results: The 2013 predicted number of cancer deaths in the EU is 1 314 296 (737 747 men and 576 489 women). Between 2009 and 2013, all-cancer ASRs are predicted to fall by 6% to 140.1/100 000 in men, and by 4% to 85.3/100 000 in women. The ASRs per 100 000 are 6.6 in men and 2.9 in women for stomach cancer, 16.7 and 9.5 for intestinal cancer, 8.0 and 5.5 for pancreatic cancer, 37.1 and 13.9 for lung cancer, 10.5 in men for prostate cancer, 14.6 in women for breast cancer, 4.7 for uterine cancer, and 4.2 and 2.6 for leukaemia. Recent trends are favourable except for pancreatic cancer and lung cancer in women. Conclusions: Favourable trends will continue in 2013. Pancreatic cancer has become the fourth cause of cancer death in both sexes, while in a few years lung cancer will likely become the first cause of cancer mortality in women as well, overtaking breast cancer.
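As a worked illustration of the age-standardization step (with made-up counts and illustrative standard-population weights, not the paper's data), an ASR can be computed in Python as:

```python
import numpy as np

# Deaths and person-years in broad age bands (hypothetical values).
deaths       = np.array([  120,   900,  4500, 12000])
person_years = np.array([4.0e6, 3.5e6, 2.0e6, 0.8e6])

# Illustrative world standard population weights for the same bands
# (must sum to 1; the paper uses the standard world population).
world_std = np.array([0.52, 0.27, 0.14, 0.07])

age_specific = deaths / person_years              # rates per person-year
asr = np.sum(age_specific * world_std) * 1e5      # per 100 000, standardized
print(f"ASR = {asr:.1f} per 100 000")
```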
Abstract:
BACKGROUND: From the most recent available data, we projected cancer mortality statistics for 2014 for the European Union (EU) and its six most populous countries. Specific attention was given to pancreatic cancer, the only major neoplasm showing unfavorable trends in both sexes. PATIENTS AND METHODS: Population and death certification data for stomach, colorectum, pancreas, lung, breast, uterus, prostate, leukemias and total cancers were obtained from the World Health Organisation database and Eurostat. Figures were derived for the EU, France, Germany, Italy, Poland, Spain and the UK. Projected 2014 numbers of deaths by age group were obtained by linear regression on estimated numbers of deaths over the most recent time period identified by a joinpoint regression model. RESULTS: In the EU in 2014, 1,323,600 deaths from cancer are predicted (742,500 men and 581,100 women), corresponding to standardized death rates of 138.1/100,000 men and 84.7/100,000 women, falling by 7% and 5%, respectively, since 2009. In men, predicted rates for the three major cancers (lung, colorectum and prostate cancer) are lower than in 2009, falling by 8%, 4% and 10%, respectively. In women, breast and colorectal cancers show favorable trends (-9% and -7%), but female lung cancer rates are predicted to rise by 8%. Pancreatic cancer is the only neoplasm with a negative outlook in both sexes. Only in the young (25-49 years) do EU trends become more favorable in men, while women keep registering slight predicted rises. CONCLUSIONS: Cancer mortality predictions for 2014 confirm the overall favorable cancer mortality trend in the EU, translating into an overall 26% fall in men since its peak in 1988, a 20% fall in women, and the avoidance of over 250,000 deaths in 2014 compared with the peak rate. Notable exceptions are female lung cancer and pancreatic cancer in both sexes.
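A minimal Python sketch of the projection step (toy numbers; the joinpoint segment is assumed to be already identified):

```python
import numpy as np

years  = np.array([2007, 2008, 2009, 2010, 2011])    # last joinpoint segment
deaths = np.array([5210, 5150, 5080, 5025, 4960])    # hypothetical age-group counts

slope, intercept = np.polyfit(years, deaths, 1)      # ordinary least squares
predicted_2014 = slope * 2014 + intercept            # extrapolate the trend
print(f"Projected 2014 deaths: {predicted_2014:.0f}")
```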
Abstract:
A study of how the machine-learning technique known as GentleBoost can improve digital watermarking methods such as LSB, DWT, DCT2 and histogram shifting.
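For concreteness, a minimal Python sketch of LSB embedding, the simplest of the watermarking methods named above; the GentleBoost stage of the study is not reproduced here:

```python
import numpy as np

def embed_lsb(image, bits):
    """Overwrite the least significant bit of the first len(bits) pixels."""
    flat = image.ravel().copy()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n):
    """Read the watermark back from the first n pixels."""
    return image.ravel()[:n] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # toy cover image
mark = rng.integers(0, 2, 128, dtype=np.uint8)           # 128-bit watermark
stego = embed_lsb(cover, mark)
assert np.array_equal(extract_lsb(stego, 128), mark)     # round-trips exactly
```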
Abstract:
This article studies how product introduction decisions relate to profitability and uncertainty in the context of multi-product firms and product differentiation. These two features, common to many modern industries, have received much less attention in the literature than the classical problem of firm entry, even though the determinants of firm and product entry are quite different. The theoretical predictions about the sign of the impact of uncertainty on product entry are not conclusive. Therefore, an econometric model relating firms' product introduction decisions to profitability and profit uncertainty is proposed. Firms' estimated profits are obtained from a structural model of product demand and supply, and uncertainty is proxied by the variance of profits. The empirical analysis is carried out using data on the Spanish car industry for the period 1990-2000. The results show a positive relationship between product introduction and profitability, and a negative one with respect to profit variability. Interestingly, the degree of uncertainty appears to be a stronger driving force of entry than profitability, suggesting that the product proliferation process in the Spanish car market may have been mainly a consequence of lower uncertainty rather than the result of a more profitable market.
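An illustrative reduced-form version of the estimated relationship (synthetic data; not the paper's structural demand-and-supply model) could look like this in Python, with a probit of product introduction on profitability and on profit variance as the uncertainty proxy:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
profit = rng.normal(1.0, 0.3, n)          # estimated firm profits
profit_var = rng.gamma(2.0, 0.2, n)       # proxy for profit uncertainty

# Synthetic data-generating process mimicking the paper's findings:
# profitability raises entry, uncertainty lowers it.
latent = 0.8 * profit - 1.5 * profit_var + rng.normal(size=n)
entry = (latent > 0).astype(int)          # 1 = product introduced

X = sm.add_constant(np.column_stack([profit, profit_var]))
print(sm.Probit(entry, X).fit(disp=0).summary())
```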
Abstract:
This paper presents an application of the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) approach to the estimation of quantities of Gross Value Added (GVA) referring to economic entities defined at different scales of study. The method first estimates benchmark values of the pace of GVA generation per hour of labour across economic sectors. These values are estimated as intensive variables (e.g. €/hour) by dividing the sectorial GVA of the country (expressed in € per year) by the hours of paid work in that same sector per year. This assessment uses data from national statistics (top-down information referring to the national level). Then, the approach uses bottom-up information (the number of hours of paid work in the various economic sectors of an economic entity, e.g. a city or a province, operating within the country) to estimate the amount of GVA produced by that entity. This estimate is obtained by multiplying the number of hours of work in each sector of the economic entity by the benchmark value of GVA generation per hour of work in that particular sector (national average). This method is applied and tested on two different socio-economic systems: (i) Catalonia (considered level n) and Barcelona (considered level n-1); and (ii) the region of Lima (considered level n) and Lima Metropolitan Area (considered level n-1). In both cases, the GVA per year of the local economic entity (Barcelona and Lima Metropolitan Area) is estimated and the resulting value is compared with GVA data provided by statistical offices. The empirical analysis seems to validate the approach, even though the case of Lima Metropolitan Area indicates a need for additional care when dealing with the estimate of GVA in primary sectors (agriculture and mining).
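The arithmetic of the method can be summarized in a short Python sketch with made-up sector figures (the real application uses national statistics and local labour data):

```python
# National totals per sector: (GVA in € per year, hours of paid work per year).
national = {
    "agriculture": (40e9,  2.0e9),
    "industry":    (300e9, 6.0e9),
    "services":    (700e9, 14.0e9),
}

# Top-down benchmark: pace of GVA generation, €/hour, per sector.
benchmark = {s: gva / hours for s, (gva, hours) in national.items()}

# Bottom-up: hours of paid work per sector in the local entity (e.g. a city).
local_hours = {"agriculture": 0.01e9, "industry": 0.5e9, "services": 2.0e9}

# Local GVA estimate: local hours times the national benchmark, summed.
local_gva = sum(benchmark[s] * h for s, h in local_hours.items())
print(f"Estimated local GVA: {local_gva / 1e9:.1f} billion € per year")
```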
Abstract:
Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using a) real absences, b) pseudo-absences selected randomly from the background, and c) two-step approaches: pseudo-absences selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results: Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion: If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences, and to perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have limited fit.
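A minimal Python sketch of strategy b) above: random background points used as pseudo-absences in a logistic regression, evaluated by AUC. The virtual species and its three environmental predictors are synthetic stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
env = rng.normal(size=(5000, 3))                              # background environment
true_prob = 1 / (1 + np.exp(-(1.5 * env[:, 0] - env[:, 1])))  # "true" species response

# Presences sampled in proportion to the true probability;
# pseudo-absences drawn uniformly at random from the background.
presence_idx = rng.choice(5000, 300, p=true_prob / true_prob.sum())
pseudo_abs_idx = rng.choice(5000, 300)

X = np.vstack([env[presence_idx], env[pseudo_abs_idx]])
y = np.r_[np.ones(300), np.zeros(300)]

model = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```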
Abstract:
Question: Does a land-use variable improve spatial predictions of plant species presence-absence and abundance models at the regional scale in a mountain landscape? Location: Western Swiss Alps. Methods: Presence-absence generalized linear models (GLM) and abundance ordinal logistic regression models (LRM) were fitted to data on 78 mountain plant species, with topo-climatic and/or land-use variables available at a 25-m resolution. The additional contribution of land use when added to topo-climatic models was evaluated by: (1) assessing the changes in model fit and (2) in predictive power, (3) partitioning the deviance respectively explained by the topo-climatic variables and the land-use variable through variation partitioning, and (4) comparing spatial projections. Results: Land use significantly improved the fit of presence-absence models but not their predictive power. In contrast, land use significantly improved both the fit and predictive power of abundance models. Variation partitioning also showed that the individual contribution of land use to the deviance explained by presence-absence models was, on average, weak for both GLM and LRM (3.7% and 4.5%, respectively), but changes in spatial projections could nevertheless be important for some species. Conclusions: In this mountain area and at our regional scale, land use is important for predicting abundance, but not presence-absence. The importance of adding land-use information depends on the species considered. Even without a marked effect on model fit and predictive performance, adding land use can affect spatial projections of both presence-absence and abundance models.
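A minimal Python sketch of the deviance-partitioning step, on synthetic data (a binomial GLM as a stand-in for the presence-absence models; explained deviance computed as 1 minus the ratio of residual to null deviance):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
topo = rng.normal(size=(n, 2))                       # e.g. elevation, slope
land_use = rng.integers(0, 2, n)                     # simple binary land use
eta = 1.2 * topo[:, 0] - 0.8 * topo[:, 1] + 0.4 * land_use
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))          # synthetic presence-absence

def explained_deviance(X):
    model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()
    return 1 - model.deviance / model.null_deviance

d_both = explained_deviance(np.column_stack([topo, land_use]))
d_topo = explained_deviance(topo)
print(f"Individual land-use contribution: {d_both - d_topo:.3f}")
```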
Abstract:
Many animals that live in groups maintain competitive relationships, yet avoid continual fighting, by forming dominance hierarchies. We compare predictions of stochastic, individual-based models with empirical experimental evidence using shore crabs to test competing hypotheses regarding hierarchy development. The models test (1) what information individuals use when deciding to fight or retreat, (2) how past experience affects current resource-holding potential, and (3) how individuals deal with changes to the social environment. First, we conclude that crabs assess only their own state and not their opponent's when deciding to fight or retreat. Second, willingness to enter, and performance in, aggressive contests are influenced by previous contest outcomes. Winning increases the likelihood of both fighting and winning future interactions, while losing has the opposite effect. Third, when groups with established dominance hierarchies dissolve and new groups form, individuals reassess their ranks, showing no memory of previous rank or group affiliation. With every change in group composition, individuals fight for their new ranks. This iterative process carries over as groups dissolve and form, which has important implications for the relationship between ability and hierarchy rank. We conclude that dominance hierarchies emerge through an interaction of individual and social factors, and discuss these findings in terms of an underlying mechanism. Overall, our results are consistent with crabs using a cumulative assessment strategy iterated across changes in group composition, in which aggression is constrained by an absolute threshold in energy spent and damage received while fighting.
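A minimal individual-based sketch of the winner and loser effects described above (illustrative parameters in Python, not the paper's fitted model): each win raises an individual's internal state, making future wins more likely, while each loss lowers it, so rank order emerges from repeated random pairwise contests.

```python
import numpy as np

rng = np.random.default_rng(3)
n_crabs, n_contests = 8, 500
state = np.ones(n_crabs)                    # internal state of each individual

wins = np.zeros(n_crabs, dtype=int)
for _ in range(n_contests):
    a, b = rng.choice(n_crabs, 2, replace=False)
    p_a = state[a] / (state[a] + state[b])  # higher state -> more likely to win
    winner, loser = (a, b) if rng.random() < p_a else (b, a)
    state[winner] *= 1.1                    # winner effect
    state[loser] *= 0.9                     # loser effect
    wins[winner] += 1

print("Wins per individual (rank order emerges):", np.sort(wins)[::-1])
```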