950 results for Decomposition of Ranked Models


Relevance:

100.00%

Publisher:

Abstract:

Although ancestral polymorphisms and incomplete lineage sorting are most commonly invoked at the population level, these models are increasingly being invoked and tested to explain deep radiations. Hypotheses are put forward for ancestral polymorphisms as the likely cause of paraphyletic taxa at the class level in the diatoms, based on an ancient rapid radiation of the entire group. Models of ancestral deep coalescence are invoked to explain paraphyly and molecular evolution at the class level in the diatoms, and other examples at more recent divergences are also documented. Whether the paraphyletic groups seen in the diatoms at all taxonomic levels should be formally recognized is discussed. The continued use of the terms centric and pennate diatoms is substantiated: additional evidence supports their use both as descriptive terms for the two groups and, for the pennates, as a taxonomic group, because new morphological evidence from the auxospores justifies the formal classification of the basal and core araphids as new subclasses of pennate diatoms in the class Bacillariophyceae. Keys to the higher levels of the diatoms, showing how the terms centric and araphid diatoms can be defined, are provided.

Relevance:

100.00%

Publisher:

Abstract:

Models of the air-sea transfer velocity of gases may be either empirical or mechanistic. Extrapolating an empirical model to an unmeasured gas or to another water temperature can be erroneous if the basis of that extrapolation is flawed. This issue is readily demonstrated for the best-known empirical gas transfer velocity models, in which the influence of bubble-mediated transfer, which can vary between gases, is not explicitly accounted for. Mechanistic models are hindered by an incomplete knowledge of the mechanisms of air-sea gas transfer. We describe a hybrid model that incorporates a simple mechanistic view, strictly enforcing a distinction between direct and bubble-mediated transfer, but also uses parameterizations based on data from eddy flux measurements of dimethyl sulphide (DMS) to calibrate the model, together with dual-tracer results to evaluate it. This model underpins simple algorithms that can be easily applied within schemes to calculate local, regional, or global air-sea fluxes of gases.
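
As a rough illustration of the hybrid structure described above, the sketch below computes a total transfer velocity as the sum of a direct (interfacial) term and a bubble-mediated term. The functional forms and coefficients are hypothetical placeholders, not the calibrated DMS-based parameterization of the study; only the two-term structure follows the text.

```python
import numpy as np

def transfer_velocity(u10, Sc, ostwald_solubility):
    """Hybrid gas transfer velocity (cm/h): direct + bubble-mediated.

    u10: wind speed at 10 m (m/s)
    Sc:  Schmidt number of the gas at the water temperature
    ostwald_solubility: dimensionless solubility of the gas

    Illustrative coefficients only -- the real model is calibrated
    against DMS eddy-flux data and evaluated with dual-tracer results.
    """
    # Direct (interfacial) transfer: classic Schmidt-number scaling.
    k_direct = 0.24 * u10 * (Sc / 660.0) ** -0.5
    # Bubble-mediated transfer: grows steeply with wind (whitecap
    # coverage) and is larger for less soluble gases.
    k_bubble = 0.05 * u10**2 / ostwald_solubility * (Sc / 660.0) ** -0.5
    return k_direct + k_bubble

# Example: a CO2-like gas (Sc ~ 660, solubility ~ 0.8) at 10 m/s wind.
print(transfer_velocity(u10=10.0, Sc=660.0, ostwald_solubility=0.8))
```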

Relevance:

100.00%

Publisher:

Abstract:

In establishing the reliability of performance-related design methods for concrete, which are relevant for resistance against chloride-induced corrosion, long-term experience of local materials and practices and detailed knowledge of the ambient and local micro-climate are critical. Furthermore, the development of analytical models for performance-based design requires calibration against test data representative of actual conditions in practice. To this end, the current study presents results from full-scale concrete pier-stems under long-term exposure to a marine environment, with the work focussing on exposure classes XS2 (below mid-tide level), in which the concrete is regarded as fully saturated, and XS3 (tidal, splash and spray), in which the concrete is in an unsaturated condition. These exposures represent the zones where concrete structures are most susceptible to ionic ingress and deterioration. Chloride profiles and chloride transport behaviour are studied using both an empirical model (the erfc function) and a physical model (ClinConc). The time dependency of surface chloride concentration (Cs) and apparent diffusivity (Da) was established for the empirical model, whereas in the ClinConc model (originally based on saturated concrete) two new environmental factors were introduced for the XS3 exposure zone. Although XS3 is considered a single environmental exposure zone according to BS EN 206-1:2013, the work has highlighted that significant changes in chloride ingress are evident even within this zone. This study aims to update the parameters of both models for predicting the long-term transport behaviour of concrete subjected to environmental exposure classes XS2 and XS3.
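
For the empirical model, the usual form is the error-function solution of Fick's second law, C(x, t) = Cs * erfc(x / (2*sqrt(Da*t))). Below is a minimal sketch of fitting Cs and Da to a measured chloride profile; the profile data and exposure time are hypothetical.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

t = 10 * 365.25 * 24 * 3600.0   # exposure time: 10 years, in seconds

def chloride_profile(x, Cs, Da):
    """Fick's 2nd law solution: C(x,t) = Cs * erfc(x / (2*sqrt(Da*t)))."""
    return Cs * erfc(x / (2.0 * np.sqrt(Da * t)))

# Hypothetical measured profile: depth (m) vs chloride (% binder mass).
depth = np.array([0.005, 0.015, 0.025, 0.035, 0.045, 0.055])
chloride = np.array([0.62, 0.45, 0.28, 0.16, 0.08, 0.04])

# Fit surface concentration Cs and apparent diffusivity Da (m^2/s).
(Cs, Da), _ = curve_fit(chloride_profile, depth, chloride, p0=[0.7, 1e-12])
print(f"Cs = {Cs:.3f} % binder, Da = {Da:.2e} m^2/s")
```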

Relevance:

100.00%

Publisher:

Abstract:

We propose a mechanism for testing the theory of collapse models such as continuous spontaneous localization (CSL) by examining the parametric heating rate of a trapped nanosphere. The random localizations of the center of mass of a given particle predicted by the CSL model can be understood as a stochastic force embodying a source of heating for the nanosphere. We show that by utilising a Paul trap to levitate the particle, together with optical cooling, it is possible to reduce environmental decoherence to such a level that CSL dominates the dynamics and contributes the main source of heating. We show that this approach allows measurements to be made on a timescale of seconds, and that the free parameter λ_CSL which characterises the model ought to be testable down to values as low as 10^-12 Hz.
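
As an order-of-magnitude illustration, the sketch below evaluates a CSL-induced heating rate, assuming the commonly quoted point-particle expression dE/dt ~ hbar^2 * lambda_CSL * m / (m0^2 * r_C^2), with m0 the nucleon mass and r_C ~ 100 nm the conventional CSL correlation length. Exact prefactors vary between references, and the particle parameters here are hypothetical.

```python
import numpy as np

hbar = 1.054571817e-34      # J s
amu  = 1.66053906660e-27    # kg, nucleon (reference) mass m0
r_c  = 1.0e-7               # m, conventional CSL correlation length

def csl_heating_rate(mass, lam_csl):
    """Approximate CSL heating rate: dE/dt ~ hbar^2*lam*m / (m0^2*r_c^2).

    Point-particle expression; O(1) geometric prefactors are omitted.
    """
    return hbar**2 * lam_csl * mass / (amu**2 * r_c**2)

# Hypothetical 100 nm diameter silica sphere (density ~2200 kg/m^3).
radius = 50e-9
mass = 2200.0 * (4.0 / 3.0) * np.pi * radius**3

rate = csl_heating_rate(mass, lam_csl=1e-12)   # lambda_CSL = 10^-12 Hz
print(f"CSL heating rate: {rate:.2e} W")
# For the test to work, environmental heating (gas collisions, photon
# recoil) must be suppressed below this level, e.g. by ultra-high vacuum
# and by levitating in a Paul trap rather than a strong optical trap.
```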

Relevance:

100.00%

Publisher:

Abstract:

Background: Implementing effective antenatal care models is a key global policy goal. However, the mechanisms of action of these multi-faceted models that would allow widespread implementation are seldom examined and poorly understood. In existing care model analyses there is little distinction between what is done, how it is done, and who does it. A new evidence-informed quality maternal and newborn care (QMNC) framework identifies key characteristics of quality care. This offers the opportunity to identify systematically the characteristics of care delivery that may be generalizable across contexts, thereby enhancing implementation. Our objective was to map the characteristics of antenatal care models tested in Randomised Controlled Trials (RCTs) to a new evidence-based framework for quality maternal and newborn care; thus facilitating the identification of characteristics of effective care.

Methods: A systematic review of RCTs of midwifery-led antenatal care models. Mapping and evaluation of these models’ characteristics to the QMNC framework using data extraction and scoring forms derived from the five framework components. Paired team members independently extracted data and conducted quality assessment using the QMNC framework and standard RCT criteria.

Results: From 13,050 citations initially retrieved we identified 17 RCTs of midwifery-led antenatal care models from Australia (7), the UK (4), China (2), and Sweden, Ireland, Mexico and Canada (1 each). QMNC framework scores ranged from 9 to 25 (possible range 0–32), with most models reporting fewer than half the characteristics associated with quality maternity care. Description of care model characteristics was lacking in many studies, but was better reported for the intervention arms. Organisation of care was the best-described component. Underlying values and philosophy of care were poorly reported.

Conclusions: The QMNC framework facilitates assessment of the characteristics of antenatal care models. It is vital to understand all the characteristics of multi-faceted interventions such as care models: not only what is done, but why it is done, by whom, and how it differs from the standard care package. By applying the QMNC framework we have established a foundation for future reports of intervention studies, so that the characteristics of individual models can be evaluated and the impact of any differences appraised.

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

100.00%

Publisher:

Abstract:

Enzyme-mediated decomposition of soil organic matter (SOM) is controlled, amongst other factors, by organic matter properties and by the microbial decomposer community present. Since microbial community composition and SOM properties are often interrelated, and both change with soil depth, the drivers of enzymatic decomposition are hard to dissect. We investigated soils from three regions in the Siberian Arctic where carbon-rich topsoil material has been incorporated into the subsoil (cryoturbation). We took advantage of this subduction to test whether SOM properties shape microbial community composition, and to identify controls of both on enzyme activities. We found that microbial community composition (estimated by phospholipid fatty acid analysis) was similar in cryoturbated material and in the surrounding subsoil, although carbon and nitrogen contents were similar in cryoturbated material and topsoils. This suggests that the microbial community in cryoturbated material was not well adapted to SOM properties. We also measured three potential enzyme activities (cellobiohydrolase, leucine aminopeptidase and phenoloxidase) and used structural equation models (SEMs) to identify direct and indirect drivers of the three enzyme activities. The models included microbial community composition, carbon and nitrogen contents, clay content, water content, and pH. Models for regular horizons, excluding cryoturbated material, showed that all enzyme activities were mainly controlled by carbon or nitrogen, with no effect of microbial community composition. In contrast, models for cryoturbated material showed that enzyme activities were also related to microbial community composition. This additional control by microbial community composition could have restrained enzyme activities, and thus decomposition in general. The functional decoupling of SOM properties and microbial community composition might thus be one of the reasons for the low decomposition rates and the persistence of the 400 Gt of carbon stored in cryoturbated material.
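
A minimal sketch of the kind of structural equation model described, using the Python package semopy; the variable names and data file are hypothetical, and the actual model specification in the study may differ.

```python
import pandas as pd
from semopy import Model

# Hypothetical column names: one enzyme activity, SOM chemistry, soil
# properties, and a microbial community axis (e.g. a PLFA ordination score).
spec = """
community ~ carbon + nitrogen + pH
cellobiohydrolase ~ carbon + nitrogen + clay + water + pH + community
"""

data = pd.read_csv("cryoturbation_soils.csv")  # hypothetical data file

model = Model(spec)
model.fit(data)
print(model.inspect())  # path coefficients: direct and indirect drivers
```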

Relevance:

100.00%

Publisher:

Abstract:

Titanium dioxide (TiO2) nanoparticles with different sizes and crystalline structures, produced by the thermal method and doped with silver iodide (AgI), nitrogen (N), sulphur (S) and carbon (C), were applied as adsorbents. The adsorption of Methyl Violet (MV), Methylene Blue (MB), Methyl Orange (MO) and Orange II on the surface of these particles was studied. The photocatalytic activity of some of the particles for the destruction of MV and Orange II was evaluated under sunlight and visible light. The equilibrium adsorption data were fitted to the Langmuir, Freundlich, Langmuir-Freundlich and Temkin isotherms. The equilibrium data show that larger TiO2 particles doped with AgI, N, S and C have the highest adsorption capacity for the dyes. The kinetic data followed the pseudo-first-order and pseudo-second-order models, while the desorption data fitted the zero-, first- and second-order models. The highest adsorption rate constant was observed for the TiO2 with the highest anatase phase percentage. Factors such as the anatase crystalline structure, particle size and doping with AgI affect the photocatalytic activity significantly. Increasing the rutile phase percentage also decreases the tendency to desorption for N-TiO2 and S-TiO2. In an investigation with differently sized AgI-TiO2 nanoparticles, adsorption was not found to be important in the photocatalytic decomposition of MV. Nevertheless, C-TiO2 showed higher adsorption of Orange II, as the adsorptive role of the carbon acted in concert with its oxidative role.
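
A minimal sketch of fitting two of the named models with scipy; the equilibrium and kinetic data points are hypothetical. The Langmuir isotherm is qe = qmax*K*Ce / (1 + K*Ce), and the integrated pseudo-second-order kinetic model is qt = k*qe^2*t / (1 + k*qe*t).

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, K):
    """Langmuir isotherm: qe = qmax*K*Ce / (1 + K*Ce)."""
    return qmax * K * Ce / (1.0 + K * Ce)

def pseudo_second_order(t, qe, k):
    """Integrated pseudo-second-order model: qt = k*qe^2*t / (1 + k*qe*t)."""
    return k * qe**2 * t / (1.0 + k * qe * t)

# Hypothetical equilibrium data: dye concentration (mg/L) vs uptake (mg/g).
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([12.0, 19.0, 27.0, 33.0, 37.0])
(qmax, K), _ = curve_fit(langmuir, Ce, qe, p0=[40.0, 0.05])
print(f"Langmuir: qmax = {qmax:.1f} mg/g, K = {K:.3f} L/mg")

# Hypothetical kinetic data: time (min) vs uptake (mg/g).
t  = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qt = np.array([8.0, 13.0, 19.0, 24.0, 27.0])
(qe_fit, k2), _ = curve_fit(pseudo_second_order, t, qt, p0=[30.0, 0.005])
print(f"Pseudo-2nd-order: qe = {qe_fit:.1f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```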

Relevance:

100.00%

Publisher:

Abstract:

The importance of e-government models lies in their offering a basis to measure and guide e-government. There is still no agreement on how to assess a government's online presence. Most e-government models are not based on research, nor have they been validated. In most countries, e-government has not reached the higher stages of growth. Several scholars have painted a confusing picture of e-government. What is lacking is an in-depth analysis of e-government models. Responding to the need for such an analysis, this study identifies the strengths and weaknesses of the major national and local e-government evaluation models. The common limitations of most models are that they focus on the government rather than the citizen, miss qualitative measures, construct the e-equivalent of a bureaucratic administration, and define general criteria without sufficient validation. In addition, this study finds that the metrics defined for national e-government are not suitable for municipalities, and that most existing studies have focused on national e-government even though local e-government is closer to citizens. There is a need to develop a sound theoretical model for both national and local municipal e-government.

Relevance:

100.00%

Publisher:

Abstract:

Recent developments in the physical parameterizations available in spectral wave models have already been validated, but there is little information on their relative performance, especially for the higher-order spectral moments and wave partitions. This study documents their strengths and limitations using satellite measurements, buoy spectra, and comparisons between the different models. It is confirmed that all models perform well in terms of significant wave height; the higher-order moments, however, have larger errors. The partitioned wave quantities perform well in terms of direction and frequency, but their magnitude and directional spread typically show larger discrepancies. The high-frequency tail is examined through the mean square slope using satellites and buoys. From this analysis it is clear that some models behave better than others, suggesting that their parameterizations match the physical processes reasonably well. However, none of the models is entirely satisfactory, pointing to poorly constrained parameterizations or missing physical processes. The major space-time differences between the models are related to the swell field, stressing the importance of describing its evolution. An example swell field confirms that wave heights can be notably different between model configurations while the directional distributions remain similar. It is clear that all models have difficulty in describing the directional spread. Knowledge of the directional distribution of the source terms is therefore paramount for improving wave model physics in the future.
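
As a reminder of the diagnostics involved, the sketch below computes significant wave height and mean square slope as moments of a frequency spectrum E(f): Hs = 4*sqrt(m0) with m0 the integral of E over f, and, using the deep-water dispersion relation k = (2*pi*f)^2 / g, mss as the integral of k^2 * E, which is dominated by the high-frequency tail. The spectrum here is a hypothetical example.

```python
import numpy as np

g = 9.81
f = np.linspace(0.05, 1.0, 500)        # frequency grid (Hz)
df = f[1] - f[0]

# Hypothetical spectrum: a swell-like peak plus an f^-5 wind-sea tail.
E = 2.0 * np.exp(-0.5 * ((f - 0.08) / 0.01) ** 2) \
    + 5e-4 * f ** -5 * (f > 0.15)

m0 = np.sum(E) * df                    # zeroth moment (m^2)
Hs = 4.0 * np.sqrt(m0)                 # significant wave height (m)

k = (2.0 * np.pi * f) ** 2 / g         # deep-water dispersion relation
mss = np.sum(k**2 * E) * df            # mean square slope

print(f"Hs  = {Hs:.2f} m")
print(f"mss = {mss:.4f}")
# The mss integral weights the spectrum by k^2, so it is controlled by
# the high-frequency tail -- exactly the part that differs most between
# the parameterizations compared in the study.
```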

Relevance:

100.00%

Publisher:

Abstract:

The U.S. railroad companies spend billions of dollars every year on railroad track maintenance in order to ensure safety and operational efficiency of their railroad networks. Besides maintenance costs, other costs such as train accident costs, train and shipment delay costs and rolling stock maintenance costs are also closely related to track maintenance activities. Optimizing the track maintenance process on these extensive railroad networks is a very complex problem with major cost implications. Currently, the decision making process for track maintenance planning is largely manual and primarily relies on the knowledge and judgment of experts. There is considerable potential to improve the process by using operations research techniques to develop solutions to the optimization problems on track maintenance. In this dissertation study, we propose a range of mathematical models and solution algorithms for three network-level scheduling problems on track maintenance: the track inspection scheduling problem (TISP), the production team scheduling problem (PTSP) and the job-to-project clustering problem (JTPCP).

TISP involves a set of inspection teams which travel over the railroad network to identify track defects. It is a large-scale routing and scheduling problem where thousands of tasks are to be scheduled subject to many difficult side constraints such as periodicity constraints and discrete working time constraints. A vehicle routing problem formulation was proposed for TISP, and a customized heuristic algorithm was developed to solve the model. The algorithm iteratively applies a constructive heuristic and a local search algorithm in an incremental scheduling horizon framework. The proposed model and algorithm have been adopted by a Class I railroad in its decision making process. Real-world case studies show the proposed approach outperforms the manual approach in short-term scheduling and can be used to conduct long-term what-if analyses to yield managerial insights.

PTSP schedules capital track maintenance projects, which are the largest track maintenance activities and account for the majority of railroad capital spending. A time-space network model was proposed to formulate PTSP. More than ten types of side constraints were considered in the model, including very complex constraints such as mutual exclusion constraints and consecution constraints. A multiple neighborhood search algorithm, including a decomposition and restriction search and a block-interchange search, was developed to solve the model. Various performance enhancement techniques, such as data reduction, augmented cost function and subproblem prioritization, were developed to improve the algorithm. The proposed approach has been adopted by a Class I railroad for two years. Our numerical results show the model solutions are able to satisfy all hard constraints and most soft constraints. Compared with the existing manual procedure, the proposed approach is able to bring significant cost savings and operational efficiency improvement.

JTPCP is an intermediate problem between TISP and PTSP. It focuses on clustering thousands of capital track maintenance jobs (based on the defects identified in track inspection) into projects so that the projects can be scheduled in PTSP. A vehicle routing problem based model and a multiple-step heuristic algorithm were developed to solve this problem. Various side constraints such as mutual exclusion constraints and rounding constraints were considered. The proposed approach has been applied in practice and has shown good performance in both solution quality and efficiency.
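
As a toy illustration of the "constructive heuristic plus local search within an incremental scheduling horizon" pattern described for TISP, the sketch below schedules inspection tasks week by week, greedily assigning each task to the cheapest team and then relocating tasks between teams when that reduces total travel cost. All task, team and cost definitions are hypothetical simplifications; the actual model is a vehicle routing formulation with periodicity and working-time constraints.

```python
# Toy incremental-horizon scheduler: greedy construction + relocation-based
# local search. Locations are points on a 1-d line for simplicity.

def route_cost(route, start, loc):
    """Travel cost of visiting the tasks of a route in order from start."""
    cost, pos = 0, start
    for t in route:
        cost += abs(pos - loc[t])
        pos = loc[t]
    return cost

def total_cost(plan, starts, loc):
    return sum(route_cost(r, starts[m], loc) for m, r in plan.items())

def schedule(loc, due, starts, horizon):
    plan = {m: [] for m in starts}
    for week in range(1, horizon + 1):          # incremental horizon
        # Constructive step: append each due task to the cheapest team.
        for t in [t for t in due if due[t] == week]:
            best = min(plan,
                       key=lambda m: route_cost(plan[m] + [t], starts[m], loc))
            plan[best].append(t)
        # Local search step: move a task to another team if it cuts cost.
        improved = True
        while improved:
            improved = False
            for a in plan:
                for b in plan:
                    if a == b:
                        continue
                    for t in list(plan[a]):
                        old = total_cost(plan, starts, loc)
                        plan[a].remove(t)
                        plan[b].append(t)
                        if total_cost(plan, starts, loc) >= old:
                            plan[b].remove(t)   # undo the move
                            plan[a].append(t)
                        else:
                            improved = True
    return plan

loc = {f"t{i}": (7 * i) % 40 for i in range(10)}   # task locations
due = {f"t{i}": 1 + i % 3 for i in range(10)}      # due week per task
print(schedule(loc, due, starts={"A": 0, "B": 20}, horizon=3))
```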

Relevance:

100.00%

Publisher:

Abstract:

This thesis develops and tests various transient and steady-state computational models, such as direct numerical simulation (DNS), large eddy simulation (LES), filtered unsteady Reynolds-averaged Navier-Stokes (URANS) and steady Reynolds-averaged Navier-Stokes (RANS), with and without magnetic field, to investigate turbulent flows in canonical geometries as well as in the nozzle and mold geometries of the continuous casting process.

The direct numerical simulations are first performed in channel, square-duct and 2:1 aspect-ratio rectangular-duct geometries to investigate the effect of a magnetic field on turbulent flows. The rectangular duct is a more practical geometry for the continuous casting nozzle and mold, and offers the option of applying the magnetic field perpendicular to either the broader or the shorter side. This work forms part of the development of a graphics processing unit (GPU) based CFD code (CU-FLOW) for magnetohydrodynamic (MHD) turbulent flows. The DNS results revealed interesting effects of the magnetic field and its orientation on the primary and secondary flows (instantaneous and mean), Reynolds stresses, turbulent kinetic energy (TKE) budgets, momentum budgets and frictional losses, besides providing a DNS database for two-wall-bounded square and rectangular duct MHD turbulent flows.

Further, low- and high-Reynolds-number RANS models (k-ε and Reynolds stress models) are developed and tested against the DNS databases for channel and square duct flows with and without magnetic field. The MHD sink terms in the k- and ε-equations are implemented as proposed by Kenjereš and Hanjalić, using a user-defined function (UDF) in FLUENT. This work revealed the varying accuracies of the different RANS models and helps industry understand the accuracy of these models for applications including continuous casting.

After assessing the accuracy and computational cost of the RANS models, the steady-state k-ε model is combined with particle image velocimetry (PIV) and impeller-probe velocity measurements in a 1/3rd-scale water model to study the quality of the flow leaving well-bottom and mountain-bottom nozzles, and the effect of stopper-rod misalignment on the fluid flow. The mountain-bottom nozzle was found to be more prone to long-time asymmetries and higher surface velocities. A left misalignment of the stopper gave a higher surface velocity on the right, leading to a significantly larger number of vortices forming behind the nozzle on the left.

Later, the transient and steady-state models (LES, filtered URANS and steady RANS) are combined with ultrasonic Doppler velocimetry (UDV) measurements in a GaInSn model of a typical continuous casting process. The LES model in CU-FLOW proved the fastest and most accurate, owing to its much finer mesh and smaller timestep, and this work provided a good understanding of the performance of these models. The behavior of the instantaneous flows and Reynolds stresses, together with a proper orthogonal decomposition (POD) analysis, quantified the nozzle-bottom swirl and its importance for the turbulent flow in the mold.

Afterwards, the GaInSn work is extended with electromagnetic braking (EMBr) to help optimize a ruler-type brake and its location for the continuous casting process. The magnetic field suppressed turbulence and promoted vortical structures with their axes aligned with the magnetic field, suggesting a tendency towards two-dimensional turbulence. A stronger magnetic field at the nozzle well and around the jet region created larger-scale, lower-frequency flow behavior by suppressing the nozzle-bottom swirl and its front-back alternation. Based on this work, it is advised to avoid a stronger magnetic field around the jet and nozzle bottom in order to obtain a more stable and less defect-prone flow.
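
The proper orthogonal decomposition mentioned above is typically computed via an SVD of mean-subtracted snapshots; a minimal sketch is below, with a hypothetical random snapshot matrix standing in for the measured or simulated velocity fields.

```python
import numpy as np

# Snapshot POD via SVD. Rows: spatial points, columns: time snapshots.
# Hypothetical data standing in for mold velocity fields.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((2000, 300))

mean_flow = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_flow            # fluctuations about the mean flow

# Thin SVD: columns of U are POD modes; s^2 ranks their energy content.
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)

print("Energy fraction of first 5 modes:", energy[:5])
# Temporal coefficients of mode k: a_k(t) = s[k] * Vt[k, :].
# Dominant low-frequency modes would reveal features such as the
# nozzle-bottom swirl and its front-back alternation discussed above.
```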

Relevance:

100.00%

Publisher:

Abstract:

Mass Customization (MC) is not a mature business strategy, and hence it is not clear that a single model or small group of operational models is dominating. Companies tend to approach MC from either a mass production or a customization origin, and this in itself gives reason to believe that several operational models will be observable. This paper reviews actual and theoretical fulfilment systems that enterprises could apply when offering a pre-engineered catalogue of customizable products and options. The issues considered are:

- how product flows are structured in relation to processes, inventories and decoupling point(s);
- characteristics of the order fulfilment (OF) process that inhibit or facilitate fulfilment;
- the logic of how products are allocated to customers;
- customer factors that influence OF process design and operation.

Diversity in the order fulfilment structures is expected, and is found in the literature. The review has identified four structural forms that have been used in a Catalogue MC context:

- fulfilment from stock;
- fulfilment from a single fixed decoupling point;
- fulfilment from one of several fixed decoupling points;
- fulfilment from several locations, with floating decoupling points.

From the review it is apparent that producers are being imaginative in coping with the demands of high variety, high volume, customization and short lead times. These demands have encouraged the relationship between product, process and customer to be re-examined. Not only has this strengthened interest in commonality and postponement, but, as reported in the paper, it has also led to the re-engineering of the order fulfilment process to create models with multiple fixed decoupling points and the floating decoupling point system.

Relevance:

100.00%

Publisher:

Abstract:

People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications.

We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming, so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms.

Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models.

The second theme is the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of their correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm.

Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into the usual optimization algorithms (line search and trust region) to accelerate the estimation process.

The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics), or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
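
For intuition on why dynamic programming is central here, the sketch below solves the value function of a recursive-logit-style route choice model on a tiny network by fixed-point iteration: V(k) = log sum_a exp(v(k,a) + V(a)), where v(k,a) is the (negative) utility of traversing the link from k to a, and V is zero at the destination. The network and utilities are hypothetical, and the sketch omits the correlation and mixing corrections developed in the thesis.

```python
import numpy as np

# Tiny directed network: node -> list of (next_node, link_utility).
# Link utilities are negative travel costs (hypothetical values).
net = {
    0: [(1, -1.0), (2, -1.5)],
    1: [(2, -0.5), (3, -2.0)],
    2: [(3, -1.0)],
    3: [],                      # destination: absorbing, V = 0
}

def value_function(net, dest, tol=1e-10, max_iter=1000):
    """Fixed point of V(k) = log sum_a exp(v(k,a) + V(a)), V(dest) = 0."""
    V = {k: 0.0 for k in net}
    for _ in range(max_iter):
        V_new = {}
        for k, links in net.items():
            if k == dest:
                V_new[k] = 0.0
            else:
                V_new[k] = np.log(sum(np.exp(v + V[a]) for a, v in links))
        if max(abs(V_new[k] - V[k]) for k in net) < tol:
            return V_new
        V = V_new
    return V

V = value_function(net, dest=3)
print(f"V = {V}")
# Link choice probability for (k -> a): exp(v + V[a] - V[k]).
k, (a, v) = 0, net[0][0]
print(f"P(0 -> 1) = {np.exp(v + V[a] - V[k]):.3f}")
```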