958 results for Extended Hubbard-model


Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: Extended-spectrum beta-lactamases (ESBL) and AmpC beta-lactamases (AmpC) are of concern for veterinary and public health because of their ability to cause treatment failure due to antimicrobial resistance in Enterobacteriaceae. The main objective was to assess the relative contribution (RC) of different types of meat to consumer exposure to ESBL/AmpC and their potential importance for human infections in Denmark.

MATERIAL AND METHODS: The prevalence of each genotype of ESBL/AmpC-producing E. coli in imported and nationally produced broiler meat, pork, and beef was weighted by meat consumption patterns. Data originated from the Danish surveillance program for antibiotic use and antibiotic resistance (DANMAP) from 2009 to 2011. DANMAP also provided data on human ESBL/AmpC cases in 2011, which were used to assess a possible genotype overlap. Uncertainty about the occurrence of ESBL/AmpC-producing E. coli in meat was assessed by inspecting beta distributions given the available data on the genotypes in each type of meat.

RESULTS AND DISCUSSION: Broiler meat represented the largest part (83.8%) of the estimated ESBL/AmpC-contaminated pool of meat, compared to pork (12.5%) and beef (3.7%). CMY-2 was the genotype with the highest RC to human exposure (58.3%). However, this genotype is rarely found in human infections in Denmark.

CONCLUSION: The overlap between ESBL/AmpC genotypes in meat and human E. coli infections was limited. This suggests that meat may be a less important source of ESBL/AmpC exposure for humans in Denmark than previously thought, perhaps because the use of cephalosporins is restricted in cattle and banned in poultry and pigs. Nonetheless, more detailed surveillance data are required to determine the contribution of meat relative to other sources, such as travel, pets, water resources, the community, and hospitals, in the pursuit of a full source attribution model.
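The beta-distribution uncertainty assessment described above can be sketched in a few lines. All counts and consumption shares below are illustrative placeholders, not the DANMAP figures: with a uniform prior, observing s positives in n samples gives a Beta(s+1, n-s+1) posterior for prevalence, and Monte Carlo draws propagate that uncertainty into the relative contributions.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(42)

# Hypothetical positives s and sample sizes n per meat type, plus
# consumption shares -- placeholders, NOT the DANMAP data.
samples = {"broiler": (42, 100), "pork": (8, 120), "beef": (3, 150)}
consumption_share = {"broiler": 0.25, "pork": 0.45, "beef": 0.30}

n_draws = 10_000
# Beta(s + 1, n - s + 1): posterior for prevalence under a uniform prior.
prev = {m: beta(s + 1, n - s + 1).rvs(size=n_draws, random_state=rng)
        for m, (s, n) in samples.items()}

# Relative contribution: consumption-weighted prevalence, normalized per draw.
weighted = np.array([prev[m] * consumption_share[m] for m in samples])
rc = weighted / weighted.sum(axis=0)

for i, m in enumerate(samples):
    lo, hi = np.percentile(rc[i], [2.5, 97.5])
    print(f"{m}: median RC = {np.median(rc[i]):.3f} (95% interval {lo:.3f}-{hi:.3f})")
```

By construction the relative contributions sum to one in every draw, so the interval widths directly reflect how sample size limits the precision of each meat type's estimated share.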

Relevance:

30.00%

Publisher:

Abstract:

Every x-ray attenuation curve inherently contains all the information necessary to extract the complete energy spectrum of a beam. To date, attempts to obtain accurate spectral information from attenuation data have been inadequate. This investigation presents a mathematical pair model, grounded in physical reality by the Laplace transformation, to describe the attenuation of a photon beam and the corresponding bremsstrahlung spectral distribution. In addition, the Laplace model has been mathematically extended to include characteristic radiation in a physically meaningful way. A method to determine the fraction of characteristic radiation in any diagnostic x-ray beam was introduced for use with the extended model. This work examined the reconstructive capability of the Laplace pair model for photon beams ranging from 50 kVp to 25 MV, using both theoretical and experimental methods. In the diagnostic region, excellent agreement between a wide variety of experimental spectra and those reconstructed with the Laplace model was obtained when the atomic composition of the attenuators was accurately known. The model successfully reproduced a 2 MV spectrum but had difficulty accurately reconstructing orthovoltage and 6 MV spectra. The 25 MV spectrum was successfully reconstructed, although poor agreement with the spectrum obtained by Levy was found. The analysis of errors, performed with diagnostic-energy data, demonstrated the relative insensitivity of the model to typical experimental errors and confirmed that the model can be used to derive accurate spectral information from experimental attenuation data.
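The core of the pair model is that the attenuation curve is the Laplace transform of the spectral density expressed in the attenuation coefficient. A minimal numerical sketch, using an illustrative Gamma-shaped density (not a real bremsstrahlung spectrum) chosen only because its Laplace transform is known in closed form:

```python
import numpy as np
from scipy.stats import gamma

# The transmission T(x) = integral of S(mu) * exp(-mu * x) d(mu) is the
# Laplace transform of the spectral density S expressed in the attenuation
# coefficient mu. S here is an illustrative Gamma density (NOT a real
# bremsstrahlung spectrum): its Laplace transform is (1 + theta*x)^(-k).
k, theta = 3.0, 0.2                       # hypothetical shape/scale of S
mu = np.linspace(1e-4, 5.0, 20_000)
dmu = mu[1] - mu[0]
S = gamma(a=k, scale=theta).pdf(mu)
S /= S.sum() * dmu                        # normalize the discretized spectrum

def transmission(x):
    """Numerical Laplace transform of S evaluated at attenuator thickness x."""
    return float((S * np.exp(-mu * x)).sum() * dmu)

for x in (0.5, 1.0, 2.0):
    analytic = (1.0 + theta * x) ** (-k)
    print(f"x={x}: numeric={transmission(x):.5f}, analytic={analytic:.5f}")
```

Spectral reconstruction runs this transform in reverse: inverting the measured attenuation curve recovers the density S, which is the ill-posed step the pair model addresses analytically.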

Relevance:

30.00%

Publisher:

Abstract:

As the requirements for health care hospitalization have become more demanding, the discharge planning process has become a more important part of the health services system. A thorough understanding of hospital discharge planning can therefore contribute to our understanding of the health services system. This study involved the development of a process model of discharge planning from hospitals. Model building involved identifying the factors used by discharge planners to develop aftercare plans and specifying the roles of these factors in the development of the discharge plan. The factors in the model were concatenated into 16 discrete decision sequences, each of which produced an aftercare plan. The sample for this study comprised 407 inpatients admitted to the M. D. Anderson Hospital and Tumor Institute in Houston, Texas, who were discharged to any site within Texas during a 15-day period. Allogeneic bone marrow donors were excluded from the sample. The factors considered in the development of discharge plans were recorded by discharge planners and were used to develop the model. Data analysis consisted of sorting the discharge plans by the plan-development factors until, for some combination and sequence of factors, all patients were discharged to a single site. The arrangement of factors that led to that aftercare plan became a decision sequence in the model. The model constructs the same discharge plans as those developed by hospital staff for every patient in the study. Tests of the validity of the model should be extended to other patients at the MDAH, to other cancer hospitals, and to other inpatient services. Revisions of the model based on these tests should be of value in the management of discharge planning services and in the design and development of comprehensive community health services.
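The plan-sorting step can be illustrated with a toy example. The factor names, values, and sites below are hypothetical, not the study's variables; the idea is that a combination of factor values whose patient group maps to exactly one discharge site constitutes one decision sequence of the model.

```python
from collections import defaultdict

# Hypothetical patient records: factor values plus the observed discharge
# site. Factor names and sites are illustrative, not the study's data.
patients = [
    {"needs_nursing": True,  "has_caregiver": False, "site": "skilled nursing facility"},
    {"needs_nursing": True,  "has_caregiver": True,  "site": "home with home health"},
    {"needs_nursing": False, "has_caregiver": True,  "site": "home"},
    {"needs_nursing": False, "has_caregiver": False, "site": "home"},
]
factors = ["needs_nursing", "has_caregiver"]

# Group patients by their combination of factor values; a combination whose
# group maps to exactly one site becomes a decision sequence.
groups = defaultdict(set)
for p in patients:
    key = tuple(p[f] for f in factors)
    groups[key].add(p["site"])

sequences = {k: sites.pop() for k, sites in groups.items() if len(sites) == 1}
for key, site in sorted(sequences.items()):
    print(dict(zip(factors, key)), "->", site)
```

In the study the sorting also considered the order in which factors were applied; this sketch shows only the grouping criterion that terminates a sequence.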

Relevance:

30.00%

Publisher:

Abstract:

In regression analysis, covariate measurement error occurs in many applications. The error-prone covariates are often referred to as latent variables. In this study, we extended the work of Chan et al. (2008) on recovering the latent slope in a simple regression model to the multiple regression setting. We presented an approach that applies the Monte Carlo method within the Bayesian framework to a parametric regression model with measurement error in an explanatory variable. The proposed estimator uses the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study was presented showing that the method produces an efficient estimator in the multiple regression model, especially when the measurement-error variance of the surrogate variable is large.
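The attenuation effect and the conditional-expectation idea can be illustrated with a simplified regression-calibration sketch (known error variances, normal covariates). This is an illustration of the general principle, not the authors' Bayesian Monte Carlo estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
b0, b1, b2 = 1.0, 2.0, -0.5          # true coefficients (illustrative)
sigma_x, sigma_u = 1.0, 0.8          # latent and measurement-error SDs

X = rng.normal(0, sigma_x, n)        # latent (unobserved) covariate
Z = rng.normal(0, 1, n)              # error-free second covariate
W = X + rng.normal(0, sigma_u, n)    # observed surrogate: W = X + U
Y = b0 + b1 * X + b2 * Z + rng.normal(0, 1, n)

def ols(design, y):
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

naive = ols(np.column_stack([np.ones(n), W, Z]), Y)[1]   # attenuated slope

# Conditional expectation of the latent X given the surrogate:
# E[X | W] = lam * W, with reliability lam = sx^2 / (sx^2 + su^2).
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)
corrected = ols(np.column_stack([np.ones(n), lam * W, Z]), Y)[1]

print(f"true b1={b1}, naive={naive:.3f}, corrected={corrected:.3f}")
```

The naive slope shrinks toward zero by roughly the reliability factor (here about 0.61), while regressing on the conditional expectation recovers the latent slope; a large error variance on the surrogate makes the correction matter more, matching the abstract's observation.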

Relevance:

30.00%

Publisher:

Abstract:

High-resolution records of glacial-interglacial variations in biogenic carbonate, opal, and detritus (derived from non-destructive core-log measurements of density, P-wave velocity, and color; r >= 0.9) from 15 sediment sites in the eastern equatorial Pacific (sampling resolution ~1 kyr) show a clear response to eccentricity and precession forcing. For the Peru Basin, we generate a high-resolution (21 kyr increment) orbitally based chronology for the last 1.3 Ma. Spectral analysis indicates that the 100 kyr cycle became dominant at roughly 1.2 Ma, 200-300 kyr earlier than reported for other paleoclimatic records. The response to orbital forcing has been weaker since the Mid-Brunhes Dissolution Event (at 400 ka). A west-east reconstruction of biogenic sedimentation in the Peru Basin (four cores; 91-85°W) distinguishes equatorial and coastal upwelling systems in the western and eastern sites, respectively. A north-south reconstruction perpendicular to the equatorial upwelling system (11 cores, 11°N-3°S) shows high carbonate contents (>= 50%) between 6°N and 4°S and highly variable opal contents between 2°N and 4°S. Carbonate cycles B-6, B-8, B-10, B-12, B-14, M-2, and M-6 are well developed, with B-10 (430 ka) as the most prominent cycle. Carbonate highs during glacials and glacial-interglacial transitions extended up to 400 km farther north and south compared to interglacial or interglacial-glacial carbonate lows. Our reconstruction thus favors glacial-interglacial expansion and contraction of the equatorial upwelling system rather than a northward or southward shift. Elevated accumulation rates are documented near the equator from 6°N to 4°S for carbonate and from 2°N to 4°S for opal. Accumulation rates are higher during glacials and glacial-interglacial transitions in all cores, whereas increased dissolution is concentrated in Peru Basin sediments close to the carbonate compensation depth and occurred during interglacials or interglacial-glacial transitions.

Relevance:

30.00%

Publisher:

Abstract:

Global value chains are supported not only directly by domestic regions that export goods and services to the world market, but also indirectly by other domestic regions that provide parts, components, and intermediate services to final exporting regions. In order to better understand the nature of a country's position and degree of participation in global value chains, we need to more fully examine the role of individual domestic regions. Understanding the domestic components of global supply chains is especially important for large developing countries like China and India, where there may be large variations in economic scale and development between domestic regions. This paper proposes a new framework for measuring domestic linkages to global value chains. The framework measures domestic linkages by endogenously embedding a country's domestic interregional input-output (IO) table in an international IO model. Using this framework, we can more clearly describe how global production is fragmented and extended through linkages across a country's domestic regions. The framework also enables us to estimate how value added is created and distributed in both the domestic and international segments of global value chains. To examine the validity and usefulness of this new approach, numerical results are presented and discussed based on the 2007 Chinese interregional IO table, China customs statistics at the provincial level, and the World Input-Output Tables (WIOTs).
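The value-added accounting underlying such a framework can be sketched in a few lines. The coefficient matrix below is an invented 2-region x 2-sector example, not the Chinese interregional table; the identity checked at the end (each unit of final demand generates exactly one unit of value added somewhere along the chain) holds for any consistent IO system.

```python
import numpy as np

# Illustrative 2-region x 2-sector input coefficient matrix A; rows/columns
# are (region1-sector1, region1-sector2, region2-sector1, region2-sector2).
# The off-diagonal blocks carry the interregional (domestic-linkage) flows.
A = np.array([
    [0.10, 0.05, 0.08, 0.02],
    [0.04, 0.12, 0.03, 0.06],
    [0.07, 0.02, 0.15, 0.05],
    [0.01, 0.06, 0.04, 0.10],
])
v = 1.0 - A.sum(axis=0)            # value-added coefficients per unit output

L = np.linalg.inv(np.eye(4) - A)   # Leontief inverse: total requirements
va_multipliers = v @ L             # value added per unit of final demand

print(va_multipliers)              # -> [1. 1. 1. 1.]
```

Decomposing `v @ L` term by term over the regional blocks is what attributes the value added embodied in a region's exports back to the upstream domestic regions that supplied it.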

Relevance:

30.00%

Publisher:

Abstract:

Studies on the rise of global value chains (GVCs) have attracted a great deal of interest in the recent economics literature. However, due to statistical and methodological challenges, most existing research ignores domestic regional heterogeneity in assessing the impact of joining GVCs. GVCs are supported not only directly by domestic regions that export goods and services to the world market, but also indirectly by other domestic regions that provide parts, components, and intermediate services to final exporting regions. To better understand the nature of a country's position and degree of participation in GVCs, we need to fully examine the role of individual domestic regions. Understanding the domestic components of GVCs is especially important for larger economies such as China, the US, India, and Japan, where there may be large variations in economic scale, geography of manufacturing, and development stages at the domestic regional level. This paper proposes a new framework for measuring domestic linkages to global value chains. The framework measures domestic linkages by endogenously embedding a target country's (e.g., China's or Japan's) domestic interregional input-output tables into the OECD inter-country input-output model. Using this framework, we can more clearly understand how global production is fragmented and extended internationally and domestically.

Relevance:

30.00%

Publisher:

Abstract:

Growing scarcity, increasing demand, and poor management of water resources are causing intense competition for water, and consequently managers face mounting pressure in attempting to satisfy users' requirements. In many regions, agriculture is one of the most important users at the river basin scale, since it concentrates high volumes of water consumption in relatively short periods (the irrigation season), with significant economic, social, and environmental impact. The interdisciplinary character of water resources problems requires, as established in the Water Framework Directive 2000/60/EC, an integrated and participative approach to water management, and assigns an essential role to economic analysis as a decision-support tool. For this reason, a methodology is developed to analyse the economic and environmental implications of water resource management under different scenarios, with a focus on the agricultural sector. This research integrates economic and hydrologic components in modelling, defining scenarios of water resource management with the goal of preventing critical situations such as droughts. The model follows the Positive Mathematical Programming (PMP) approach, an innovative methodology successfully used for agricultural policy analysis in the last decade and also applied in several analyses of water use in agriculture. This approach has, among others, the important capability of perfectly calibrating the baseline scenario using a very limited database. One important disadvantage, however, is its limited capacity to simulate activities not observed during the reference period but which could be adopted if the scenario changed. To overcome this problem, the classical methodology is extended in order to simulate a more realistic farmers' response to new agricultural policies or modified water availability. In this way, an economic model has been developed to reproduce farmers' behaviour within two irrigation districts in the Tiber High Valley. This economic model is then integrated with SIMBAT, a hydrologic model developed for the Tiber basin, which makes it possible to simulate the balance between the water volumes available at the Montedoglio dam and the water volumes required by the various irrigation users.
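A minimal sketch of the PMP calibration steps mentioned above, with two hypothetical crops and invented margins (not the Tiber Valley data): a linear program with calibration constraints recovers shadow values, which parameterize a quadratic cost term so that the model without calibration constraints reproduces the observed crop areas.

```python
import numpy as np
from scipy.optimize import linprog, minimize

# Illustrative two-crop PMP calibration; all numbers are hypothetical.
margin = np.array([500.0, 300.0])    # gross margin per hectare
x_obs = np.array([30.0, 70.0])       # observed crop areas (ha)
land = 100.0                         # available land (ha)
eps = 0.01

# Step 1: LP with calibration constraints x_i <= x_obs_i + eps.
# linprog minimizes, so the margins are negated.
A_ub = np.vstack([np.ones(2), np.eye(2)])
b_ub = np.concatenate([[land], x_obs + eps])
res = linprog(-margin, A_ub=A_ub, b_ub=b_ub, method="highs")
lam = -res.ineqlin.marginals[1:]     # duals of the calibration constraints

# Step 2: quadratic cost slope gamma_i = lam_i / x_obs_i, so marginal
# profit at x_obs matches the LP shadow values.
gamma = lam / x_obs

# Step 3: the calibrated model (no calibration constraints) reproduces
# the observed areas.
def neg_profit(x):
    return -(margin @ x - 0.5 * gamma @ (x ** 2))

qp = minimize(neg_profit, x0=np.array([50.0, 50.0]),
              constraints=[{"type": "ineq", "fun": lambda x: land - x.sum()}],
              bounds=[(0, None), (0, None)])
print(qp.x)                          # ~ [30, 70]: calibrated to observations
```

This is the "perfect calibration from a limited database" property cited in the abstract; the extension the authors describe modifies this classical recipe so that activities unobserved in the reference period can still enter the simulated response.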

Relevance:

30.00%

Publisher:

Abstract:

Initially, the service sector was defined as complementary to the manufacturing sector. This situation has changed in recent times: services growth has come to dominate employment and economic activity in most developed nations and is becoming a key process for the competitiveness of their industrial sectors. New services related to commodities have become a strategy for differentiating a value proposition (Robinson et al., 2002). The service sector's importance is evident in its share of gross domestic product: according to the World Bank (2011), in 2009, 74.8% of GDP in the euro area and 77.5% in the United States were attributed to services. Globalization and the use of information and communication technology have accelerated the dissemination of knowledge and raised customer expectations about services available worldwide. Innovation becomes essential to ensure that service organizations respond with appropriate products and services for each market segment. Customized new services delivered on time to market require a more developed innovation process. Service innovation and the new service development process are cited as priorities for academic research in the coming years (Karniouchina et al., 2005). This paper has the following objectives:
- to present a model for the analysis of the innovation process through the service value network;
- to verify its applicability through empirical research; and
- to identify the path and mode of innovation for a group of studied organizations and to compare it with previous studies.

Relevance:

30.00%

Publisher:

Abstract:

In this study, we present the optical properties of nonpolar GaN/(Al,Ga)N single quantum wells (QWs) grown on either a- or m-plane GaN templates for Al contents below 15%. In order to reduce the density of extended defects, the templates were processed using the epitaxial lateral overgrowth technique. As expected for polarization-free heterostructures, the larger the QW width for a given Al content, the narrower the QW emission line. In structures with an Al content of 5 or 10%, we also observe emission from excitons bound to the intersection of I1-type basal plane stacking faults (BSFs) with the QW. As in bulk material, the temperature dependence of BSF-bound QW exciton luminescence reveals intra-BSF localization. A qualitative model evidences the large spatial extension of the wavefunction of these BSF-bound QW excitons, making them extremely sensitive to potential fluctuations located in and away from the BSF. Finally, polarization-dependent measurements show a strong emission anisotropy for BSF-bound QW excitons, which is related to their one-dimensional character and confirms that the intersection between a BSF and a GaN/(Al,Ga)N QW can be described as a quantum wire.

Relevance:

30.00%

Publisher:

Abstract:

When we try to analyze and control a system whose model was obtained only from input/output data, model accuracy is essential. On the other hand, to make the procedure practical, the modeling stage must be computationally efficient. In this regard, this paper presents the application of the extended Kalman filter for the parametric adaptation of a fuzzy model.
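A minimal sketch of EKF-based parametric adaptation, using a hypothetical nonlinear input/output model in place of the paper's fuzzy model (for a Takagi-Sugeno model the same recursion would adapt the rule parameters): the parameters are treated as a random-walk state and updated from each new input/output sample.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nonlinear plant y = a * (1 - exp(-b * u)); the EKF treats
# the parameter vector theta = (a, b) as the state to be estimated.
a_true, b_true = 2.0, 0.5

def h(theta, u):
    a, b = theta
    return a * (1.0 - np.exp(-b * u))

def jacobian(theta, u):
    a, b = theta
    return np.array([1.0 - np.exp(-b * u), a * u * np.exp(-b * u)])

theta = np.array([1.0, 1.0])         # initial parameter guess
P = np.eye(2)                        # parameter covariance
Q = 1e-8 * np.eye(2)                 # random-walk process noise
R = 0.01 ** 2                        # measurement-noise variance

for _ in range(500):                 # one update per input/output sample
    u = rng.uniform(0.1, 10.0)
    y = h((a_true, b_true), u) + rng.normal(0, 0.01)
    H = jacobian(theta, u)           # measurement Jacobian (1 x 2)
    S = H @ P @ H + R                # innovation variance (scalar)
    K = P @ H / S                    # Kalman gain
    theta = theta + K * (y - h(theta, u))
    P = P - np.outer(K, H) @ P + Q   # covariance update (I - K H) P + Q

print(theta)                         # ~ [2.0, 0.5]
```

Each update costs only a few small matrix products, which is what makes this recursion attractive when the modeling stage must stay computationally efficient.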

Relevance:

30.00%

Publisher:

Abstract:

Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), determining their behavior and performance in optimization. Regularization is a well-known statistical technique for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. L1-regularization is a variant of this technique with the appealing variable-selection property, which results in sparse model estimates. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA scales logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs more robust optimization and achieves significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships.
An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that ranking methods based on this dominance relation can yield competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on L1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets of small to medium dimensionality, using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
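One iteration scheme for a regularized Gaussian EDA of the kind studied in the thesis can be sketched as follows. The objective, population sizes, and penalty value below are illustrative choices, not the thesis's experimental setup; the point is that the graphical lasso (an L1-regularized Gaussian model estimator) keeps model estimation stable when the population is small relative to the dimension.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(7)

def sphere(x):
    """Illustrative minimization objective."""
    return (x ** 2).sum(axis=1)

d, pop_size = 20, 60                  # dimension vs. small population
pop = rng.normal(0.0, 5.0, size=(pop_size, d))

best = []
for _ in range(10):                   # EDA main loop
    fitness = sphere(pop)
    best.append(fitness.min())
    elite = pop[np.argsort(fitness)[: pop_size // 2]]   # truncation selection

    # L1-regularized Gaussian model: the graphical lasso yields a sparse
    # precision matrix, stabilizing estimation when pop_size is small
    # relative to d. alpha is an illustrative penalty value.
    model = GraphicalLasso(alpha=0.1).fit(elite)
    pop = rng.multivariate_normal(elite.mean(axis=0),
                                  model.covariance_, size=pop_size)

print(f"best fitness: {best[0]:.1f} -> {best[-1]:.1f}")
```

Replacing the maximum-likelihood covariance with the regularized estimate is the only change relative to a plain Gaussian EDA, which is what allows the logarithmic population-size regime described in the abstract.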