951 results for one-meson-exchange: independent-particle shell model


Relevance:

100.00%

Publisher:

Abstract:

We study the BPE (Brownian particle equation) model of the Burgers equation presented in the preceding article [6]. More precisely, we are interested in establishing the existence and uniqueness properties of solutions using probabilistic techniques.

Relevance:

100.00%

Publisher:

Abstract:

Delta isobar components in the nuclear many-body wave function are investigated for the deuteron, light nuclei (16O), and infinite nuclear matter within the framework of the coupled-cluster theory. The predictions derived for various realistic models of the baryon-baryon interaction are compared to each other. These include local (V28) and nonlocal meson exchange potentials (Bonn2000) but also a model recently derived by the Salamanca group accounting for quark degrees of freedom. The characteristic differences which are obtained for the NDelta and Delta Delta correlation functions are related to the approximation made in deriving the matrix elements for the baryon-baryon interaction.

Relevance:

100.00%

Publisher:

Abstract:

The influence of Delta isobar components on the ground-state properties of nuclear systems is investigated for nuclear matter as well as finite nuclei. Many-body wave functions, including isobar configurations and binding energies, are evaluated employing the framework of the coupled-cluster theory. It is demonstrated that the effect of isobar configurations depends in a rather sensitive way on the model used for the baryon-baryon interaction. As examples for realistic baryon-baryon interactions with explicit inclusion of isobar channels we use the local (V28) and nonlocal meson-exchange potentials (Bonn2000) but also a model recently developed by the Salamanca group, which is based on a quark picture. The differences obtained for the nuclear observables are related to the treatment of the interaction, the pi-exchange contributions in particular, at high momentum transfers.

Relevance:

100.00%

Publisher:

Abstract:

To determine self-consistently the time evolution of particle size and number density, in situ multi-angle polarization-sensitive laser light scattering was used. Cross-polarization intensities (incident and scattered light intensities with opposite polarization) measured at 135° and ex situ transmission electron microscopy analysis demonstrate the existence of nonspherical agglomerates during the early phase of agglomeration. Later in the particle time development, both techniques reveal spherical particles again. The presence of strong cross-polarization intensities is accompanied by low-frequency instabilities detected in the scattered light intensities and plasma emission. It is found that the particle radius and particle number density during the agglomeration phase can be well described by the Brownian free-molecule coagulation model. Application of this neutral-particle coagulation model is justified by calculation of the particle charge, whereby it is shown that particles of a few tens of nanometers can be considered neutral under our experimental conditions. The measured particle dispersion can be well described by a Brownian free-molecule coagulation model including a log-normal particle size distribution.
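The agglomeration kinetics described above follow the Brownian free-molecule coagulation model. As a minimal sketch of its monodisperse form (not the paper's actual fit), the number density decays as N(t) = N0 / (1 + K N0 t / 2) while mass conservation makes the radius grow as N^(-1/3); the kernel K, initial density N0, and initial radius r0 below are assumed placeholder values.

```python
# Minimal sketch: monodisperse Brownian free-molecule coagulation.
# K, N0 and r0 are illustrative placeholders, not values from the study.

def coagulate(n0, k, t):
    """Analytic number density N(t) = N0 / (1 + 0.5*K*N0*t)."""
    return n0 / (1.0 + 0.5 * k * n0 * t)

def radius(r0, n0, n):
    """Mass conservation: radius grows as N**(-1/3) as particles merge."""
    return r0 * (n0 / n) ** (1.0 / 3.0)

n0 = 1e16    # initial number density, m^-3 (assumed)
k = 1e-16    # coagulation rate coefficient, m^3/s (assumed)
n1 = coagulate(n0, k, 2.0)    # number density after 2 s of agglomeration
r1 = radius(10e-9, n0, n1)    # radius, starting from 10 nm particles
```

With these placeholder values the number density halves after 2 s while the radius grows by a factor of 2^(1/3); the study additionally folds a log-normal size distribution into the same model.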

Relevance:

100.00%

Publisher:

Abstract:

In this study, a model for the unsteady dynamic behaviour of a once-through counter-flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace, and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions, since they are all parts of a single tube. The present research is part of a study on the unsteady dynamics of an organic Rankine cycle power plant and will become part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means a transition from the steady initial state towards another steady state that corresponds to the changed process conditions. The solution method chosen was to find, using the finite difference method, such a pressure of the process fluid that the mass of the process fluid in the boiler equals the mass calculated from the mass flows into and out of the boiler during a time step. A special method of fast calculation of the thermal properties was used, because most of the calculation time is spent in calculating the fluid properties. The boiler was divided into elements. The values of the thermodynamic properties and the mass flows were calculated in the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and the tube wall, and the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that accounts for a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables in the nodes, and the calculation of these nodal values in a dynamic state.
The initial state of the boiler was obtained from a steady process model that is not part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperatures and mass flow rates of both the heat source and the process fluid. A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was carried out. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not exist in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the results was that there was no possibility to compare them with measurements. The only way was therefore to determine whether the obtained results were intuitively reasonable and changed logically when the boundary conditions were changed. The numerical stability was checked in a test run in which there was no change in the input values. The differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat-source-side tests showed that the model gives results that are logical in the directions of the changes, and the order of magnitude of the timescale of the changes is also as expected. The results of the tests on the process fluid side showed that the model gives reasonable results both for temperature changes that cause small alterations in the process state and for mass flow rate changes causing very great alterations. The test runs showed that the dynamic model has no problems in calculating cases in which the temperature of the entering heat source suddenly drops below that of the tube wall or the process fluid.
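The per-time-step solution idea described above, finding the process-fluid pressure at which the mass held in the boiler equals the mass given by the flow balance, can be sketched as a bisection search. The toy linear equation of state, the volume, and all numbers below are hypothetical stand-ins, not the thesis's property model.

```python
# Sketch of the solution idea: per time step, find the pressure at which
# the fluid mass held in the boiler matches the inlet/outlet mass balance.
# The linear density model and all numbers are hypothetical placeholders.

def mass_in_boiler(pressure, volume=0.5):
    density = 2.0 + 0.008 * pressure   # toy equation of state, kg/m^3 (assumed)
    return density * volume            # kg of process fluid held in the tube

def solve_pressure(target_mass, p_lo=100.0, p_hi=5000.0, tol=1e-9):
    """Bisection: find p such that mass_in_boiler(p) == target_mass."""
    for _ in range(200):
        p_mid = 0.5 * (p_lo + p_hi)
        if mass_in_boiler(p_mid) < target_mass:
            p_lo = p_mid
        else:
            p_hi = p_mid
        if p_hi - p_lo < tol:
            break
    return 0.5 * (p_lo + p_hi)

# Mass balance over one time step: m_new = m_old + (m_dot_in - m_dot_out) * dt
m_new = 5.0 + (1.2 - 1.0) * 1.0
p = solve_pressure(m_new)   # pressure consistent with the new fluid mass
```

In the thesis the fluid properties come from a fast property-calculation routine rather than a closed-form density; the search structure stays the same.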

Relevance:

100.00%

Publisher:

Abstract:

Maximum entropy modeling (Maxent) is a widely used algorithm for predicting species distributions across space and time. Properly assessing the uncertainty in such predictions is non-trivial and requires validation with independent datasets. Notably, model complexity (the number of model parameters) remains a major concern in relation to overfitting and, hence, the transferability of Maxent models. An emerging approach is to validate the cross-temporal transferability of model predictions using paleoecological data. In this study, we assess the effect of model complexity on the performance of Maxent projections across time using two European plant species (Alnus glutinosa (L.) Gaertn. and Corylus avellana L.) with an extensive late Quaternary fossil record in Spain as a study case. We fit 110 models with different levels of complexity under present-time conditions and tested model performance using AUC (area under the receiver operating characteristic curve) and AICc (corrected Akaike Information Criterion) through the standard procedure of randomly partitioning current occurrence data. We then compared these results to an independent validation by projecting the models to mid-Holocene (6000 years before present) climatic conditions in Spain to assess their ability to predict fossil pollen presence-absence and abundance. We find that calibrating Maxent models with default settings results in the generation of overly complex models. While model performance increased with model complexity when predicting current distributions, it was highest at intermediate complexity when predicting mid-Holocene distributions. Hence, models of intermediate complexity provided the best trade-off for predicting species distributions across time. Reliable temporal model transferability is especially relevant for forecasting species distributions under future climate change.
Consequently, species-specific model tuning should be used to find the best modeling settings to control for complexity, notably with paleoecological data to independently validate model projections. For cross-temporal projections of species distributions for which paleoecological data are not available, models of intermediate complexity should be selected.
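The AICc criterion used above penalizes complexity more strongly for small samples. A minimal sketch, where k is the number of model parameters, n the number of occurrence records, and the log-likelihood values are illustrative rather than taken from the study:

```python
# Sketch of the corrected Akaike Information Criterion; log-likelihoods
# and parameter counts below are illustrative, not from the study.

def aicc(log_l, k, n):
    """AICc = AIC + small-sample correction, with AIC = 2k - 2*ln(L)."""
    aic = 2 * k - 2 * log_l
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# A more complex model must gain enough likelihood to offset its penalty:
simple = aicc(log_l=-520.0, k=5, n=200)
complex_ = aicc(log_l=-515.0, k=40, n=200)  # better fit, many more parameters
# Here the simpler model wins on AICc despite its lower likelihood.
```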

Relevance:

100.00%

Publisher:

Abstract:

AIMS: The aims of the study were to compare the outcomes with and without major bleeding and to identify the independent correlates of major bleeding complications and mortality in patients described in the ATOLL study. METHODS: The ATOLL study included 910 patients randomly assigned to either 0.5 mg/kg intravenous enoxaparin or unfractionated heparin before primary percutaneous coronary intervention. The incidence of major bleeding and ischemic end points was assessed at 1 month, and mortality at 1 and 6 months. Patients with and without major bleeding complications were compared. A multivariate model of bleeding complications at 1 month and of mortality at 6 months was constructed. Intention-to-treat and per-protocol analyses were performed. RESULTS: The most frequent bleeding site appeared to be the gastrointestinal tract. Age >75 years, cardiac arrest, and the use of insulin or of more than one heparin emerged as independent correlates of major bleeding at 1 month. Patients presenting with major bleeding had significantly higher rates of adverse ischemic complications. Mortality at 6 months was higher in bleeders. Major bleeding was found to be one of the independent correlates of 6-month mortality. The addition or mixing of several anticoagulant drugs was an independent factor of major bleeding despite the predominant use of radial access. CONCLUSIONS: This study shows that major bleeding is independently associated with poor outcome, increased ischemic events, and mortality in primary percutaneous coronary intervention performed mostly with radial access.

Relevance:

100.00%

Publisher:

Abstract:

Traditionally, limestone has been used for flue gas desulfurization in fluidized bed combustion. Recently, several studies have been carried out to examine the use of limestone in applications which enable the removal of carbon dioxide from the combustion gases, such as calcium looping technology and oxy-fuel combustion. In these processes interlinked limestone reactions occur, but the reaction mechanisms and kinetics are not yet fully understood. To examine these phenomena, analytical and numerical models have been created. In this work, the limestone reactions were studied with the aid of a one-dimensional numerical particle model. The model describes a single limestone particle in the process as a function of time, the progress of the reactions, and the mass and energy transfer in the particle. The model-based results were compared with experimental laboratory-scale BFB results. It was observed that increasing the temperature from 850 °C to 950 °C enhanced the calcination, but the sulfate conversion did not improve further. A higher sulfur dioxide concentration accelerated the sulfation reaction, and based on the modeling, the sulfation is first order with respect to SO2. The reaction order of O2 appears to approach zero at high oxygen concentrations.
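The reaction orders reported above (first order in SO2, approaching zero order in O2 at high oxygen levels) can be illustrated with a Langmuir-type rate expression; the rate constant, the saturation constant, and the concentrations below are assumptions for illustration only, not the model's fitted kinetics.

```python
# Illustrative rate law consistent with the reported orders: first order in
# SO2, saturating (order -> 0) in O2. k and k_o2 are assumed placeholders.

def sulfation_rate(c_so2, c_o2, k=0.8, k_o2=0.05):
    return k * c_so2 * c_o2 / (k_o2 + c_o2)

r1 = sulfation_rate(c_so2=1.0, c_o2=10.0)
r2 = sulfation_rate(c_so2=2.0, c_o2=10.0)   # doubling SO2 doubles the rate
r3 = sulfation_rate(c_so2=1.0, c_o2=20.0)   # doubling O2 barely changes it
```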

Relevance:

100.00%

Publisher:

Abstract:

The overall goal of the study was to describe nurses' acceptance of an Internet-based support system in the care of adolescents with depression. The data were collected in four phases during the period 2006–2010 from nurses working in adolescent psychiatric outpatient clinics and from professionals working with adolescents in basic public services. In the first phase, the nurses' anticipated perceptions of the usefulness of the Internet-based support system before its implementation were explored. In the second phase, the nurses' perceived ease of computer and Internet use and their attitudes toward it were explored. In the third phase, the features of the support system and its implementation process were described. In the fourth phase, the nurses' experiences of behavioural intention and actual use of the Internet-based support system in psychiatric outpatient care after one year of use were described. The Technology Acceptance Model (TAM) was used to structure the various research phases. Several benefits of using the Internet-based support system in the care of adolescents with depression were identified from the nurses' perspective. The nurses' technology skills were good and their attitudes towards computer use were positive. The support system was developed in various phases to meet the adolescents' needs. Before the implementation of an information technology (IT)-based support system, it is important to pay attention to the nurses' IT training, technology support, resources, and safety, as well as to ethical issues related to the support system. After one year of using the system, the nurses perceived the Internet-based support system to be useful in the care of adolescents with depression. The adolescents' independent work with the support system at home and the program's systematic character were experienced as conducive to the treatment.
However, the Internet-based support system was integrated only partly into the nurse-adolescent interaction, even though the nurses' perceptions of it were positive. The use of the IT-based system as part of the adolescents' depression care was seen positively and its benefits were recognized. This serves as a good basis for future IT-based techniques. Successful implementation of IT-based support systems needs a systematic implementation plan and commitment on the part of the organization and its managers. When supporting and evaluating the implementation of an IT-based system, attention should be paid to changes in the nurses' work styles. Health care organizations should be offered more flexible opportunities to utilize IT-based systems in direct patient care in the future.

Relevance:

100.00%

Publisher:

Abstract:

The importance of industrial maintenance has been emphasized during the last decades; it is no longer a mere cost item, but one of the mainstays of business. Market conditions have worsened lately, investments in production assets have decreased, and at the same time competition has shifted from competition between companies to competition between networks. Companies have focused on their core functions and outsourced support services, such as maintenance, above all to decrease costs. This phenomenon has led to the increasing formation of business networks, and as a result a growing need has arisen for new kinds of tools for managing these networks effectively. Maintenance costs are usually a notable part of the life-cycle costs of an item, and it is important to be able to plan the future maintenance operations for the strategic period of the company or for the whole life-cycle period of the item. This thesis introduces an item-level life-cycle model (LCM) for industrial maintenance networks. The term item is used as a common definition for a part, a component, a piece of equipment, etc. The constructed LCM is a working tool for a maintenance network consisting of customer companies that buy maintenance services and various supplier companies. Each network member is able to input their own cost and profit data related to the maintenance services of one item. As a result, the model calculates the net present values of maintenance costs and profits and presents them from the points of view of all the network members. The thesis indicates that previous LCMs for calculating maintenance costs have often been very case-specific, suitable only for the item in question, and they have also been constructed for the needs of a single company, without the network perspective.
The developed LCM is a proper tool for decision making on maintenance services in the network environment; it enables analysing the past and making scenarios for the future, and offers choices between alternative maintenance operations. The LCM is also suitable for small companies building active networks to offer outsourcing services to large companies. The research also introduces a five-step process for constructing a life-cycle costing model in the network environment. This five-step design process defines the model components and structure through iteration and the exploitation of user feedback. The same method can be followed to develop other models. The thesis contributes to the literature on the value and value elements of maintenance services. It examines the value of maintenance services from the perspective of different maintenance network members and presents established value element lists for the customer and the service provider. These value element lists make value visible in the maintenance operations of a networked business. The LCM, combined with value thinking, promotes the notion of maintenance shifting from a “cost maker” towards a “value creator”.
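The net-present-value calculation at the core of the model can be sketched from one network member's point of view; the cash flows and the discount rate below are illustrative assumptions, not data from the thesis.

```python
# Sketch: discount each year's net cash flow for one item to present value.
# All figures and the 5% discount rate are illustrative assumptions.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 1, 2, ...)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Customer view: maintenance costs of one item over a 5-year strategic period.
costs = [-1000.0, -1200.0, -900.0, -1500.0, -1100.0]
# Service-provider view: profits earned on the same item.
profits = [300.0, 350.0, 280.0, 420.0, 310.0]

customer_npv = npv(costs, rate=0.05)    # negative: discounted cost burden
provider_npv = npv(profits, rate=0.05)  # positive: discounted profit stream
```

In the model each network member enters its own cost and profit series for the item, and the same discounting presents the result from every member's point of view.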

Relevance:

100.00%

Publisher:

Abstract:

Affiliation: Département de Biochimie, Université de Montréal

Relevance:

100.00%

Publisher:

Abstract:

The principal interest of this research is the validation of a statistical method in pharmacoepidemiology. More precisely, we compare the results of a previous study carried out with a nested case-control design within the cohort, used to account for the average exposure to treatment: – to the results obtained with a cohort design, using the time-varying exposure variable, without adjusting for the time elapsed since exposure; – to the results obtained using the cumulative exposure weighted by the recent past; – to the results obtained by the Bayesian method. The covariates will be estimated by the classical approach as well as by the nonparametric Bayesian approach. For the latter, Bayesian model averaging will be used to model the uncertainty in the choice of models. The technique used in the Bayesian approach was proposed in 1997, but to our knowledge it has not been used with a time-dependent variable. To model the cumulative effect of the time-varying exposure, in the classical approach the function assigning the weights according to the recent past will be estimated using regression splines. In order to compare the results with a previously conducted study, a cohort of persons with a diagnosis of hypertension will be constructed using the RAMQ and Med-Echo databases. The Cox model including two time-varying variables will be used. The time-varying variables considered in this thesis are the dependent variable (first cerebrovascular event) and one of the independent variables, namely the exposure.
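The recency-weighted cumulative exposure described above can be sketched with a simple decaying weight function; here an exponential decay stands in for the spline-estimated weights, purely for illustration, and the half-life value is an arbitrary assumption.

```python
# Sketch: cumulative exposure with recent doses weighted more heavily.
# The exponential weight stands in for the spline-estimated weight function.
import math

def weighted_cumulative_exposure(doses, half_life=3.0):
    """doses[0] is the most recent dose; older doses decay with half_life."""
    lam = math.log(2.0) / half_life
    return sum(d * math.exp(-lam * i) for i, d in enumerate(doses))

recent = weighted_cumulative_exposure([1.0, 1.0, 0.0, 0.0])  # recent doses
old = weighted_cumulative_exposure([0.0, 0.0, 1.0, 1.0])     # older doses
# The same total dose contributes more when taken recently.
```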

Relevance:

100.00%

Publisher:

Abstract:

We have studied the transport properties of disordered one-dimensional two-band systems. The model includes a narrow d band hybridised with an s band. The Landauer formula was used in the case of a very narrow d band or in the case of short chains. The results were compared with the localisation length of the wavefunctions calculated by the transfer matrix method, which allows the use of very long chains, and excellent agreement was obtained.
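The transfer-matrix estimate of the localisation length can be sketched for a plain one-band Anderson chain (not the paper's two-band s-d model); the chain length, disorder strengths, and energy below are illustrative choices.

```python
# Sketch: localisation length of a disordered 1D chain from the Lyapunov
# exponent of the transfer-matrix product. A one-band Anderson model is
# used here for illustration, not the paper's two-band s-d Hamiltonian.
import math
import random

def localisation_length(n_sites, disorder_w, energy=0.0, seed=1):
    """Inverse Lyapunov exponent, in units of the lattice spacing."""
    random.seed(seed)
    a, b = 1.0, 0.0       # (psi_{n+1}, psi_n): accumulated product column
    log_norm = 0.0
    for _ in range(n_sites):
        eps = random.uniform(-disorder_w / 2.0, disorder_w / 2.0)
        a, b = (energy - eps) * a - b, a   # apply T_n = [[E-eps, -1], [1, 0]]
        norm = math.hypot(a, b)
        log_norm += math.log(norm)         # renormalise to avoid overflow
        a, b = a / norm, b / norm
    return n_sites / log_norm              # 1 / Lyapunov exponent

xi_weak = localisation_length(200000, disorder_w=1.0)    # weak disorder
xi_strong = localisation_length(200000, disorder_w=4.0)  # strong disorder
# Stronger disorder localises the wavefunctions on a shorter length scale.
```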