950 results for Decomposition of Ranked Models
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. (C) 2012 Elsevier B.V. All rights reserved.
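For illustration only, a minimal sketch (not the paper's code; the grid, slope and elevations are hypothetical) of the fixed-lid planar water surface approximation described above: flow depth in each grid cell is simply a planar, gently sloping water surface minus the surveyed bed elevation.

```python
import numpy as np

# Hypothetical bed elevations (m) on a regular grid; in practice these
# would come from a bathymetric survey of the reach.
bed = np.random.uniform(-8.0, -2.0, size=(200, 400))

# Fixed-lid planar water surface: elevation falls linearly downstream.
# Large sand-bed rivers have very low slopes (order 1e-5), which is what
# makes this approximation workable.
slope = 5e-5                         # assumed water surface slope (m/m)
ws_at_inlet = 0.0                    # assumed inlet water surface elevation (m)
x = np.arange(bed.shape[1]) * 50.0   # downstream distance for 50 m cells
water_surface = ws_at_inlet - slope * x

# Depth = planar water surface minus bed, clipped at zero for dry cells.
depth = np.clip(water_surface[np.newaxis, :] - bed, 0.0, None)
print(depth.mean(), depth.max())
```

In steep, shallow settings this planar lid would fail badly, which is exactly the limitation of earlier RC applications that the low slopes of large sand-bed rivers avoid.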
Abstract:
Detailed large-scale information on mammal distribution has often been lacking, hindering conservation efforts. We used the information from the 2009 IUCN Red List of Threatened Species as a baseline for developing habitat suitability models for 5027 out of 5330 known terrestrial mammal species, based on their habitat relationships. We focused on the following environmental variables: land cover, elevation and hydrological features. Models were developed at 300 m resolution and limited to within species' known geographical ranges. A subset of the models was validated using points of known species occurrence. We conducted a global, fine-scale analysis of patterns of species richness. The richness of mammal species estimated by the overlap of their suitable habitat is on average one-third less than that estimated by the overlap of their geographical ranges. The highest absolute difference is found in tropical and subtropical regions in South America, Africa and Southeast Asia that are not covered by dense forest. The proportion of suitable habitat within mammal geographical ranges correlates with the IUCN Red List category to which they have been assigned, decreasing monotonically from Least Concern to Endangered. These results demonstrate the importance of fine-resolution distribution data for the development of global conservation strategies for mammals.
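As a hedged illustration of the richness comparison described above (not the authors' pipeline; all arrays and proportions are invented), species richness can be computed by summing per-species presence masks, once from geographical ranges and once from suitable habitat within those ranges:

```python
import numpy as np

n_species, ny, nx = 100, 50, 50
rng = np.random.default_rng(0)

# Hypothetical per-species boolean grids: a geographical range, and
# suitable habitat constructed as a subset of that range.
ranges = rng.random((n_species, ny, nx)) < 0.3
habitat = ranges & (rng.random((n_species, ny, nx)) < 0.7)

# Richness = number of species present in each grid cell.
richness_range = ranges.sum(axis=0)
richness_habitat = habitat.sum(axis=0)

# Overall proportional reduction when habitat, not range, defines presence.
reduction = 1.0 - richness_habitat.sum() / richness_range.sum()
print(f"mean richness reduction: {reduction:.2%}")
```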
Abstract:
Dendritic cells (DCs) are the most potent antigen-presenting cells in the human lung and are now recognized as crucial initiators of immune responses in general. They are arranged as sentinels in a dense surveillance network inside and below the epithelium of the airways and alveoli, where they are ideally situated to sample inhaled antigen. DCs are known to play a pivotal role in maintaining the balance between tolerance and active immune response in the respiratory system. It is no surprise that the lungs became a main focus of DC-related investigations, as this organ provides a large interface for interactions of inhaled antigens with the human body. During recent years there has been a constantly growing body of lung DC-related publications that draw their data from in vitro models, animal models and human studies. This review focuses on the biology and functions of different DC populations in the lung and highlights the advantages and drawbacks of different models with which to study the role of lung DCs. Furthermore, we present a number of up-to-date visualization techniques to characterize DC-related cell interactions in vitro and/or in vivo.
Abstract:
The suitable timing of capacity investments is an important issue, especially in capital-intensive industries. Despite its importance, relatively few studies have been published on the topic. In the present study, models for the timing of capacity change in capital-intensive industry are developed. The study considers mainly the optimal timing of single capacity changes. The review of earlier research describes connections between the cost, capacity and timing literature, and empirical examples are used to describe the starting point of the study and to test the developed models. The study includes four models, which describe the timing question from different perspectives. The first model, which minimizes unit costs, has been built for capacity expansion and replacement situations. It is shown that the optimal timing of an investment can be presented with the capacity and cost advantage ratios. After the unit cost minimization model, the view is extended in the direction of profit maximization. The second model states that early investments are preferable if the change in fixed costs is small compared to the change in the contribution margin. The third model is a numerical discounted cash flow model, which emphasizes the roles of start-up time, capacity utilization rate and the value of waiting as drivers of the profitable timing of a project. The last model expands the view from the project level to the company level and connects the flexibility of assets and cost structures to the timing problem. The main results of the research are the solutions of the models and the analyses and simulations done with the models. The relevance and applicability of the results are verified by evaluating the logic of the models and by numerical cases.
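As a purely illustrative sketch (hypothetical figures, not the thesis models), the discounted cash flow perspective on timing can be reduced to comparing the NPV of investing now with the NPV of deferring the same project, which makes the value of waiting explicit:

```python
# Hypothetical discounted cash flow comparison of investment timing.
def npv(cash_flows, rate, start_year=0):
    """Present value of a cash-flow list whose first item occurs at start_year."""
    return sum(cf / (1 + rate) ** (start_year + t)
               for t, cf in enumerate(cash_flows))

rate = 0.10
project = [-1000.0] + [220.0] * 8    # outlay followed by annual margins

invest_now = npv(project, rate, start_year=0)
invest_in_2y = npv(project, rate, start_year=2)   # defer two years

print(f"NPV now:      {invest_now:8.1f}")
print(f"NPV deferred: {invest_in_2y:8.1f}")
# Deferral shrinks NPV here; waiting pays only if it improves the cash
# flows themselves, e.g. through higher utilisation or a lower outlay.
```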
Abstract:
Differential X-ray phase-contrast tomography (DPCT) refers to a class of promising methods for reconstructing the X-ray refractive index distribution of materials that present weak X-ray absorption contrast. The tomographic projection data in DPCT, from which an estimate of the refractive index distribution is reconstructed, correspond to one-dimensional (1D) derivatives of the two-dimensional (2D) Radon transform of the refractive index distribution. There is an important need for the development of iterative image reconstruction methods for DPCT that can yield useful images from few-view projection data, thereby mitigating the long data-acquisition times and large radiation doses associated with the use of analytic reconstruction methods. In this work, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction in DPCT. We also investigate the use of one of the models with a modern image reconstruction algorithm for performing few-view image reconstruction of a tissue specimen.
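In symbols (a standard formulation consistent with the description above; the notation is chosen here, not taken from the paper): if $\mathcal{R}\delta(\theta, s)$ denotes the 2D Radon transform of the refractive index decrement $\delta(\mathbf{r})$, the DPCT projection data are its 1D derivative along the detector coordinate $s$:

```latex
g(\theta, s) \;=\; \frac{\partial}{\partial s}\, \mathcal{R}\delta(\theta, s)
\;=\; \frac{\partial}{\partial s} \int_{\mathbb{R}^2} \delta(\mathbf{r})\,
\delta_D\!\big(s - \mathbf{r}\cdot\hat{\boldsymbol{\theta}}\big)\, d\mathbf{r},
\qquad \hat{\boldsymbol{\theta}} = (\cos\theta, \sin\theta),
```

where $\delta_D$ is the Dirac delta. Discrete imaging models then correspond to different discretizations of this operator.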
Abstract:
The aim of this thesis is to determine which risk factors affect stock returns. The securities used are six portfolios sorted by market value, and the sample period runs from the beginning of 1987 to the end of 2004. The models employed are the Capital Asset Pricing Model, the Arbitrage Pricing Theory and the consumption-based Capital Asset Pricing Model. Market risk and macroeconomic risk factors are used as the risk factors in the first two models. In the consumption-based Capital Asset Pricing Model, the focus is on estimating consumers' risk preferences and the discount factor with which consumers value future consumption. The thesis introduces the method of moments, with which both linear and nonlinear equations can be estimated, and this method is used to test all the models. In summary, the results show that the market beta is still the most important risk factor, but support is also found for macroeconomic risk factors. The consumption-based model performs fairly well, yielding theoretically acceptable values.
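For concreteness (standard textbook form, not reproduced from the thesis): the consumption-based model estimates the discount factor $\beta$ and the risk-aversion coefficient $\gamma$ from the Euler-equation moment condition, which the method of moments can handle even though it is nonlinear:

```latex
\mathbb{E}\!\left[\,\beta \left(\frac{c_{t+1}}{c_t}\right)^{-\gamma} R_{i,t+1} - 1 \;\middle|\; \mathcal{I}_t \right] = 0,
```

where $c_t$ is consumption, $R_{i,t+1}$ the gross return on asset $i$, and $\mathcal{I}_t$ the information set at time $t$.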
Abstract:
Falls are common in the elderly and potentially result in injury and disability. Thus, preventing falls as early as possible in older adults is a public health priority, yet there is no specific marker that is predictive of the first fall onset. We hypothesized that gait features should be the most relevant variables for predicting the first fall. Clinical baseline characteristics (e.g., gender, cognitive function) were assessed in 259 home-dwelling people aged 66 to 75 who had never fallen. Likewise, global kinetic behavior of gait was recorded from 22 variables in 1036 walking tests with an accelerometric gait analysis system. Afterward, monthly telephone monitoring reported the date of the first fall over 24 months. A principal components analysis was used to assess the relationship between gait variables and fall status in four groups: non-fallers, fallers from 0 to 6 months, fallers from 6 to 12 months and fallers from 12 to 24 months. The association of significant principal components (PC) with an increased risk of first fall was then evaluated using the area under the Receiver Operating Characteristic curve (ROC). No effect of clinical confounding variables was shown as a function of groups. An eigenvalue decomposition of the correlation matrix identified a large statistical PC1 (termed "Global kinetics of gait pattern"), which accounted for 36.7% of total variance. Principal component loadings also revealed a PC2 (12.6% of total variance) related to the "Global gait regularity." Subsequent ANOVAs showed that only PC1 discriminated the fall status during the first 6 months, while PC2 discriminated the first fall onset between 6 and 12 months. After one year, no PC was associated with falls. These results were bolstered by the ROC analyses, showing good predictive models of the first fall during the first six months or from 6 to 12 months. Overall, these findings suggest that the performance of a standardized walking test at least once a year is essential for fall prevention.
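A hedged sketch of the pipeline described above, using simulated stand-in data (sklearn-based; the variable names, simulated values and group labels are illustrative, not the study's):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Simulated stand-ins: 259 subjects x 22 gait variables, plus a binary
# label marking a first fall within the first six months.
X = rng.normal(size=(259, 22))
fell_0_6 = rng.integers(0, 2, size=259)

# PCA of the correlation matrix = PCA on standardized variables.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA(n_components=2).fit(Xz)
scores = pca.transform(Xz)   # column 0 ~ "global kinetics of gait pattern"

print("explained variance ratios:", pca.explained_variance_ratio_)
# Discriminative power of PC1 for early falls, as area under the ROC curve.
print("AUC(PC1):", roc_auc_score(fell_0_6, scores[:, 0]))
```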
Abstract:
A rigorous unit operation model is developed for vapor membrane separation. The new model is able to describe temperature-, pressure-, and concentration-dependent permeation as well as real fluid effects in vapor and gas separation with hydrocarbon-selective rubbery polymeric membranes. The permeation through the membrane is described by a separate treatment of sorption and diffusion within the membrane. Chemical engineering thermodynamics is used to describe the equilibrium sorption of vapors and gases in rubbery membranes with equation-of-state models for polymeric systems. A new modification of the UNIFAC model is also proposed for this purpose. Various thermodynamic models are extensively compared in order to verify the models' ability to predict and correlate experimental vapor-liquid equilibrium data. The penetrant transport through the selective layer of the membrane is described with the generalized Maxwell-Stefan equations, which are able to account for the bulk flux contribution as well as the diffusive coupling effect. A method is described to compute and correlate binary penetrant-membrane diffusion coefficients from the experimental permeability coefficients at different temperatures and pressures. A fluid flow model for spiral-wound modules is derived from the conservation equations of mass, momentum, and energy. The conservation equations are presented in a discretized form by using the control volume approach. A combination of the permeation model and the fluid flow model yields the desired rigorous model for vapor membrane separation. The model is implemented into an in-house process simulator, and so vapor membrane separation may be evaluated as an integral part of a process flowsheet.
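In one common form (notation chosen here, consistent with but not copied from the thesis), the generalized Maxwell-Stefan equations balance the driving force on penetrant $i$ against friction with the other penetrants and with the membrane, which is what captures the bulk-flux and diffusive-coupling effects mentioned above:

```latex
-\frac{x_i}{RT}\,\nabla_{T,p}\,\mu_i \;=\;
\sum_{\substack{j=1 \\ j \neq i}}^{n}
\frac{x_j\,\mathbf{N}_i - x_i\,\mathbf{N}_j}{c_t\,D_{ij}}
\;+\; \frac{x_m\,\mathbf{N}_i}{c_t\,D_{im}},
```

where $x_i$ are mole fractions, $\mu_i$ chemical potentials, $\mathbf{N}_i$ molar fluxes, $c_t$ the total molar concentration, the subscript $m$ denotes the (stationary) membrane, and $D_{ij}$, $D_{im}$ are Maxwell-Stefan diffusivities.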
Abstract:
It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning the optimisation of a welded structure have used the mass of the product as the basis for the cost comparison. However, it can easily be shown using a simple example that the use of product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation, and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1,589 welded parts, 4,257 separate welds, and a total welded length of 3,188 metres. The data were modelled for statistical calculations, and models of welding time were derived by using linear regression analysis. The models were tested by using appropriate statistical methods, and were found to be accurate. General welding time models have been developed, valid for welding in Finland, as well as specific, more accurate models for particular companies. The models are presented in such a form that they can be used easily by a designer, enabling the cost calculation to be automated.
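A minimal, hypothetical sketch of fitting such welding time models by linear regression (the predictors and data are invented, not the study's 71-assembly data set):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Invented predictors per assembly: total weld length (m), number of
# welds, and number of parts; the response is total welding time (h).
n = 71
X = np.column_stack([
    rng.uniform(1, 100, n),    # weld length (m)
    rng.integers(2, 120, n),   # weld count
    rng.integers(2, 40, n),    # part count
])
time_h = 0.5 * X[:, 0] + 0.05 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 2, n)

model = LinearRegression().fit(X, time_h)
print("coefficients:", model.coef_, "R^2:", model.score(X, time_h))

# A designer-facing estimate for a new assembly:
print("predicted welding time (h):", model.predict([[25.0, 30, 10]])[0])
```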
A priori parameterisation of the CERES soil-crop models and tests against several European data sets
Abstract:
Mechanistic soil-crop models have become indispensable tools to investigate the effect of management practices on the productivity or environmental impacts of arable crops. Ideally, these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite to ensure their predictions are realistic. This statement implies that some experimental knowledge of the system to be simulated should be available prior to any modelling attempt, and raises a tremendous limitation to practical applications of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, in particular the issue of models' a priori parameterisation. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as crop biomass, nitrogen content and leaf area index, were compared to observations from trials conducted in five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model's performance was judged acceptable when compared to experimental errors on the measurements, based on a test of the model's root mean squared error (RMSE). Significant deviations between observations and model outputs were however noted in all sites, and could be ascribed to various model routines. In decreasing order of importance, these were: the water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting related parameters, such as the field-capacity water content or the size of the soil microbial biomass. As a result, model predictions fell within the measurement errors in all sites for most variables, and the model's RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
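The statistic referred to above is the usual root mean squared error; with $n$ paired observations $O_i$ and simulations $S_i$ (notation chosen here), the model was judged acceptable when the RMSE did not exceed the experimental error on the measurements:

```latex
\mathrm{RMSE} \;=\; \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left(S_i - O_i\right)^2}.
```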
Abstract:
Occupational exposure modeling is widely used in the context of the E.U. regulation on the registration, evaluation, authorization, and restriction of chemicals (REACH). First tier tools, such as the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) targeted risk assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC TRA, the process category (PROC) is the most important factor, and a failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainties in one modifying factor are less severe in Stoffenmanager. ART requires a careful evaluation of the decisions in the source compartment, since it constitutes ∼75% of the total exposure range, which corresponds to an exposure estimate spanning 20-22 orders of magnitude. Our results indicate that there is a trade-off between the accuracy and the precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain concerning two or more decisions in the entry parameters, Stoffenmanager may be more robust than ART.
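As a hedged illustration of the local, one-at-a-time sensitivity analysis used above (a generic sketch, not the paper's procedure; the model function and factor values are placeholders):

```python
def one_at_a_time_sensitivity(model, baseline, deltas):
    """Local SA: perturb each factor alone and record the response shift."""
    base_out = model(**baseline)
    effects = {}
    for name, delta in deltas.items():
        perturbed = dict(baseline)
        perturbed[name] = baseline[name] + delta
        effects[name] = model(**perturbed) - base_out
    return effects

# Placeholder multiplicative exposure model, echoing the structure of
# screening tools (a base emission scaled by modifying factors).
def toy_exposure(emission, lev_efficiency, duration_factor):
    return emission * (1 - lev_efficiency) * duration_factor

baseline = dict(emission=10.0, lev_efficiency=0.8, duration_factor=1.0)
deltas = dict(emission=1.0, lev_efficiency=0.1, duration_factor=0.25)
print(one_at_a_time_sensitivity(toy_exposure, baseline, deltas))
```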
Abstract:
Purpose: Several well-known managerial accounting performance measurement models rely on causal assumptions. Whilst users of the models express satisfaction and link them with improved organizational performance, academic research on real-world applications shows few reliable statistical associations. This paper provides a discussion of the "problematic" of causality in a performance measurement setting. Design/methodology/approach: This is a conceptual study based on an analysis and synthesis of the literature from managerial accounting, organizational theory, strategic management and social scientific causal modelling. Findings: The analysis indicates that dynamic, complex and uncertain environments may challenge any reliance upon valid causal models. Due to cognitive limitations and judgmental biases, managers may fail to trace a correct cause-and-effect understanding of the value creation in their organizations. However, even lacking this validity, causal models can support strategic learning and serve as organizational guides if they are able to mobilize managerial action. Research limitations/implications: Future research should highlight the characteristics necessary for the elaboration of convincing and appealing causal models and the social process of their construction. Practical implications: Managers of organizations using causal models should be clear on the purposes of their particular models and their limitations. In particular, difficulties are observed in specifying detailed cause-and-effect relations and their potential for communicating and directing attention. They should therefore construct their models to suit the particular purpose envisaged. Originality/value: This paper provides an interdisciplinary and holistic view of the issue of causality in managerial accounting models.
Abstract:
Geophysical data may provide crucial information about hydrological properties, states, and processes that are difficult to obtain by other means. Large data sets can be acquired over widely different scales in a minimally invasive manner and at comparatively low cost, but their effective use in hydrology makes it necessary to understand the fidelity of geophysical models, the assumptions made in their construction, and the links between geophysical and hydrological properties. Geophysics has been applied for groundwater prospecting for almost a century, but it is only in the last 20 years that it has been regularly used together with classical hydrological data to build predictive hydrological models. A largely unexplored avenue for future work is to use geophysical data to falsify or rank competing conceptual hydrological models. A promising cornerstone for such a model selection strategy is the Bayes factor, but it can only be calculated reliably when the main sources of uncertainty are considered throughout the hydrogeophysical parameter estimation process. Most classical geophysical imaging tools tend to favor models with smoothly varying property fields that are at odds with most conceptual hydrological models of interest. It is thus necessary to account for this bias or use alternative approaches in which proposed conceptual models are honored at all steps in the model-building process.
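For reference (standard definition; the notation is ours): the Bayes factor compares two conceptual models $M_1$ and $M_2$ through the ratio of their marginal likelihoods given the data $\mathbf{d}$, each integrating over that model's parameters:

```latex
BF_{12} \;=\; \frac{p(\mathbf{d} \mid M_1)}{p(\mathbf{d} \mid M_2)}
\;=\; \frac{\int p(\mathbf{d} \mid \boldsymbol{\theta}_1, M_1)\, p(\boldsymbol{\theta}_1 \mid M_1)\, d\boldsymbol{\theta}_1}
{\int p(\mathbf{d} \mid \boldsymbol{\theta}_2, M_2)\, p(\boldsymbol{\theta}_2 \mid M_2)\, d\boldsymbol{\theta}_2}.
```

Values well above 1 favour $M_1$; because the marginal likelihoods integrate over each model's parameters, the factor is only as reliable as the uncertainty budget of the estimation chain that produces it.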
Abstract:
The purpose of this Master's thesis was to create and develop two customer satisfaction models for initiating and implementing customer satisfaction measurement in the target company. The work is based on an analysis of the company's current satisfaction processes and on the theoretical part of the thesis, which discusses in detail the issues that should be taken into account in customer satisfaction measurement and in the measurement process. The purpose of the models created in this work is to help the target company make better use of the results of customer satisfaction measurement in its business and among its customers. One objective of the work was also to find and recommend a suitable measurement tool for the target company. Both customer satisfaction models were created on the basis of the theory and the analysis to meet the needs of the target units. Once the external aspects, such as measurement methods, measurement instruments, questionnaires and respondent groups, had been defined, the focus shifted to analysing and exploiting the results, which is emphasised in a customer-oriented organisation. The work also considered the significance and benefits of a unified customer satisfaction process in the target company.
Abstract:
Maximum entropy modeling (Maxent) is a widely used algorithm for predicting species distributions across space and time. Properly assessing the uncertainty in such predictions is non-trivial and requires validation with independent datasets. Notably, model complexity (the number of model parameters) remains a major concern in relation to overfitting and, hence, the transferability of Maxent models. An emerging approach is to validate the cross-temporal transferability of model predictions using paleoecological data. In this study, we assess the effect of model complexity on the performance of Maxent projections across time using two European plant species (Alnus glutinosa (L.) Gaertn. and Corylus avellana L.) with an extensive late Quaternary fossil record in Spain as a study case. We fit 110 models with different levels of complexity under present time and tested model performance using AUC (area under the receiver operating characteristic curve) and AICc (corrected Akaike Information Criterion) through the standard procedure of randomly partitioning current occurrence data. We then compared these results to an independent validation by projecting the models to mid-Holocene (6000 years before present) climatic conditions in Spain to assess their ability to predict fossil pollen presence-absence and abundance. We find that calibrating Maxent models with default settings results in the generation of overly complex models. While model performance increased with model complexity when predicting current distributions, it was highest at intermediate complexity when predicting mid-Holocene distributions. Hence, models of intermediate complexity resulted in the best trade-off for predicting species distributions across time. Reliable temporal model transferability is especially relevant for forecasting species distributions under future climate change. Consequently, species-specific model tuning should be used to find the best modeling settings to control for complexity, notably with paleoecological data to independently validate model projections. For cross-temporal projections of species distributions for which paleoecological data are not available, models of intermediate complexity should be selected.
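The complexity-penalising criterion referred to above is the corrected Akaike Information Criterion; in its standard form, with $k$ parameters, maximised likelihood $\hat{L}$ and sample size $n$:

```latex
\mathrm{AICc} \;=\; -2\ln\hat{L} \;+\; 2k \;+\; \frac{2k(k+1)}{n-k-1}.
```

The last term grows rapidly as $k$ approaches $n$, which is why the criterion penalises the overly complex models that Maxent's default settings tend to produce.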