927 results for Multi-Choice mixed integer goal programming


Relevance:

30.00%

Publisher:

Abstract:

This work describes the development of a simulation tool for the Internal Combustion Engine (ICE), the transmission and the vehicle dynamics. It is a control-oriented simulation tool, designed to perform both off-line (Software In the Loop) and on-line (Hardware In the Loop) simulation. In the first case the simulation tool can be used to optimize Engine Control Unit strategies (with respect, for example, to the fuel consumption or the performance of the engine), while in the second case it can be used to test the control system. In recent years the use of HIL simulation has proved very useful in the development and testing of control systems. Hardware In the Loop simulation is a technology where the actual vehicles, engines or other components are replaced by a real-time simulation, based on a mathematical model and running on a real-time processor. The processor reads the ECU (Engine Control Unit) output signals which would normally feed the actuators and, by using mathematical models, provides the signals which would be produced by the actual sensors. The simulation tool, fully designed within Simulink, can simulate the engine alone, the transmission and vehicle dynamics alone, or the engine together with the vehicle and transmission dynamics; in the latter case it allows the performance and the operating conditions of the Internal Combustion Engine to be evaluated once it is installed on a given vehicle. Furthermore, the simulation tool offers different levels of complexity, since it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (the latter only for off-line application, because of the higher computational effort). Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can be easily adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architectures without reprogramming. Also, the same simulation tool can be rapidly configured for both off-line and real-time application. The Matlab-Simulink environment has been adopted to achieve these objectives, since its graphical programming interface allows flexible and reconfigurable models to be built, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
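As an illustration of the SIL/HIL data flow described above, the sketch below runs a zero-dimensional (mean-value) engine model in a fixed-step loop: the ECU side would supply the throttle command and read back the synthetic speed-sensor value. All names, maps and constants are invented placeholders, not the thesis' Simulink model.

```python
# Illustrative sketch only: toy mean-value engine model in a fixed-step loop.
import math

def engine_step(rpm: float, throttle: float, load_torque: float,
                dt: float = 1e-3) -> float:
    """Advance engine speed by one step of a toy mean-value model."""
    indicated = 200.0 * throttle        # Nm, assumed full-load torque map
    friction = 0.01 * rpm               # Nm, assumed linear friction
    inertia = 0.2                       # kg*m^2, assumed crank inertia
    omega = rpm * 2.0 * math.pi / 60.0  # rad/s
    omega += (indicated - friction - load_torque) / inertia * dt
    return omega * 60.0 / (2.0 * math.pi)

# Fixed-step "real-time" loop; both the ECU and sensor ends are stubbed here.
rpm = 800.0                             # assumed idle speed
for _ in range(1000):
    throttle = 0.3                      # placeholder for the ECU actuator signal
    rpm = engine_step(rpm, throttle, load_torque=20.0)
print(f"simulated engine speed: {rpm:.0f} rpm")
```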

Relevance:

30.00%

Publisher:

Abstract:

Piezoelectrics exhibit an interactive electromechanical behaviour that, especially in recent years, has generated much interest, since it makes these materials suitable for use in a variety of electronic and industrial applications such as sensors, actuators, transducers and smart structures. Both mechanical and electric loads are generally applied to these devices and can cause high concentrations of stress, particularly in the proximity of defects or inhomogeneities such as flaws, cavities or included particles. A thorough understanding of their fracture behaviour is crucial in order to improve their performance and avoid unexpected failures; therefore, a considerable number of research works have addressed this topic over the last decades. Most theoretical studies on the subject find their analytical background in the complex variable formulation of plane anisotropic elasticity. This theoretical approach has its main origins in the pioneering works of Muskhelishvili and Lekhnitskii, who obtained the solution of the elastic problem in terms of independent analytic functions of complex variables. In the present work, the expressions of the stresses and of the elastic and electric displacements are obtained as functions of complex potentials, through an analytical formulation which applies to the piezoelectric static case an approach introduced for orthotropic materials to solve elastodynamic problems. This method can be considered an alternative to other formalisms currently in use, such as Stroh's formalism. The equilibrium equations are reduced to a first-order system involving a six-dimensional vector field. A similarity transformation is then applied to reach three independent Cauchy-Riemann systems, thus justifying the introduction of the complex variable notation. Closed-form expressions of the near-tip stress and displacement fields are thereby obtained. In the theoretical study of cracked piezoelectric bodies, the issue of assigning consistent electric boundary conditions on the crack faces is of central importance and has been addressed by many researchers. Three different boundary conditions are commonly accepted in the literature: the permeable, the impermeable and the semipermeable (“exact”) crack model. This thesis takes all three models into consideration, comparing the results obtained and analysing the effects of the choice of boundary condition on the solution. The influence of load biaxiality and of the application of a remote electric field has been studied, pointing out that both can affect, to varying extents, the stress fields and the angle of initial crack extension, especially when non-singular terms are retained in the expressions of the electro-elastic solution. Furthermore, two different fracture criteria are applied to the piezoelectric case, and their outcomes are compared and discussed. The work is organized as follows: Chapter 1 briefly introduces the fundamental concepts of Fracture Mechanics. Chapter 2 describes plane elasticity formalisms for an anisotropic continuum (Eshelby-Read-Shockley and Stroh) and introduces, for the simplified orthotropic case, the alternative formalism we want to propose. Chapter 3 outlines the Linear Theory of Piezoelectricity, its basic relations and electro-elastic equations. Chapter 4 introduces the proposed method for obtaining the expressions of the stresses and of the elastic and electric displacements as functions of complex potentials. The solution is obtained in closed form, and non-singular terms are retained as well.
Chapter 5 presents several numerical applications aimed at estimating the effects of load biaxiality, of the applied electric field and of the assumed crack permittivity. Through the application of fracture criteria, the influence of the above conditions on the response of the system, and in particular on the direction of crack branching, is thoroughly discussed.
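To make the complex-potential idea concrete, the following is a generic Lekhnitskii-type representation for the purely elastic anisotropic case; the notation is illustrative and not necessarily that of the thesis, and the piezoelectric extension involves additional roots and potentials.

```latex
For plane anisotropic elasticity the stresses derive from analytic functions
$\Phi_k$ of the complex variables
\[
z_k = x + \mu_k y , \qquad k = 1, 2,
\]
where the $\mu_k$ are the roots with positive imaginary part of the material's
characteristic equation:
\[
\sigma_{xx} = 2\,\mathrm{Re}\sum_k \mu_k^{2}\,\Phi_k'(z_k), \quad
\sigma_{yy} = 2\,\mathrm{Re}\sum_k \Phi_k'(z_k), \quad
\sigma_{xy} = -2\,\mathrm{Re}\sum_k \mu_k\,\Phi_k'(z_k).
\]
Near a crack tip $\Phi_k'(z_k) \sim K/\sqrt{2\pi z_k}$, reproducing the
classical inverse-square-root stress singularity.
```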

Relevance:

30.00%

Publisher:

Abstract:

The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergence of the peculiar structures of the individual phenotype. Being able to reproduce the system dynamics at the different levels of such a hierarchy can be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises, the thesis reviews the different approaches already developed for modelling developmental biology problems, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced chemical reaction model addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie’s direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. The task is defined as an optimisation problem over the parameter space, in which the objective function to be minimised is the distance between the output of the simulator and a target one; the problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The goal of the model is to generate the early spatial pattern of gap gene expression. The correctness of the models is demonstrated by comparing the simulation results with real gene expression data, with spatial and temporal resolution, acquired from freely available on-line sources.
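Since the simulation engine is based on Gillespie's direct method, a minimal sketch of the basic (non-optimised) algorithm may help fix ideas. The function names and the toy reaction are invented, and MS-BioNET's many-species/many-channels version is considerably more elaborate.

```python
import math
import random

def gillespie_direct(x, stoich, rate_fns, t_end):
    """One trajectory of Gillespie's direct method (illustrative sketch).
    x        : dict species -> count
    stoich   : list of dicts species -> change, one per reaction channel
    rate_fns : list of functions x -> propensity, one per channel
    """
    t = 0.0
    while t < t_end:
        props = [f(x) for f in rate_fns]
        a0 = sum(props)
        if a0 == 0.0:
            break                                  # no reaction can fire any more
        t += -math.log(random.random()) / a0       # exponential waiting time
        r, acc = random.random() * a0, 0.0
        for change, a in zip(stoich, props):       # pick a channel with prob. a/a0
            acc += a
            if r < acc:
                for s, d in change.items():
                    x[s] += d
                break
    return t, x

# toy usage: A + B -> C with mass-action rate constant 0.01
state = {"A": 100, "B": 80, "C": 0}
t, final = gillespie_direct(
    state,
    stoich=[{"A": -1, "B": -1, "C": +1}],
    rate_fns=[lambda x: 0.01 * x["A"] * x["B"]],
    t_end=10.0,
)
print(t, final)
```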

Relevance:

30.00%

Publisher:

Abstract:

The future hydrogen demand is expected to increase, both in existing industries (including the upgrading of fossil fuels and ammonia production) and in new technologies such as fuel cells. Nowadays, hydrogen is obtained predominantly by steam reforming of methane, but it is well known that hydrocarbon-based routes cause environmental problems and, besides, the market depends on the availability of a finite resource that is being rapidly depleted. Therefore, alternative processes using renewable sources such as wind, solar energy and biomass are now being considered for the production of hydrogen. One of these alternative methods is the so-called “steam-iron process”, which consists of the reduction of a metal oxide by a hydrogen-containing feedstock, such as ethanol, after which the reduced material is reoxidized with water to produce “clean” hydrogen (water splitting). Thermochemical cycles of this kind have been studied before, but some important recent developments, such as more active catalysts, the flexibility of the feedstock (including renewable bio-alcohols) and the fact that hydrogen purification can be avoided, have significantly increased interest in this research topic. To increase our understanding of the reactions that govern the steam-iron route to hydrogen, it is necessary to go down to the molecular level. Spectroscopic methods are an important tool for extracting information that can help in the development of more efficient materials and processes. In this research, ethanol was chosen as the reducing fuel, and the main goal was to study its interaction with different catalysts of similar structure (spinels), in order to correlate the composition with the mechanism of the anaerobic oxidation of ethanol, which is the first step of the steam-iron cycle. To accomplish this, diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) was used to study the surface composition of the catalysts during the adsorption of ethanol and its transformation during the temperature program, and mass spectrometry was used to monitor the desorbed products. The set of studied materials includes Cu, Co and Ni ferrites, which were also characterized by means of X-ray diffraction, surface area measurements, Raman spectroscopy, and temperature-programmed reduction.
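Schematically, the two steps of the cycle can be written as below. This is a hedged sketch: the first reaction is left unbalanced, since the exact products depend on the ferrite and on conditions; the classic iron-based water-splitting step is shown only for reference.

```latex
Reduction of the (spinel) oxide by the fuel:
\[
\mathrm{MFe_2O_4} + \delta\,\mathrm{C_2H_5OH} \longrightarrow
\mathrm{MFe_2O_{4-\delta'}} + \text{oxidation products } (\mathrm{CO}_x,\ \mathrm{H_2O},\ \ldots)
\]
Reoxidation by water, releasing hydrogen:
\[
\mathrm{MFe_2O_{4-\delta'}} + \delta'\,\mathrm{H_2O} \longrightarrow
\mathrm{MFe_2O_4} + \delta'\,\mathrm{H_2},
\qquad \text{cf. the classic} \quad
3\,\mathrm{Fe} + 4\,\mathrm{H_2O} \longrightarrow \mathrm{Fe_3O_4} + 4\,\mathrm{H_2}.
\]
```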

Relevance:

30.00%

Publisher:

Abstract:

Many physiological and pathological processes are mediated by the activity of proteins assembled in homo- and/or hetero-oligomers. The correct recognition and association of these proteins into a functional complex is a key step determining the fate of the whole pathway. This has led to an increasing interest in selecting molecules able to modulate/inhibit these protein-protein interactions. In particular, our research focused on Heat Shock Protein 90 (Hsp90), responsible for the activation, maturation and disposition of many client proteins [1], [2], [3]. Circular Dichroism (CD) spectroscopy, Surface Plasmon Resonance (SPR) and Affinity Capillary Electrophoresis (ACE) were used to characterize the Hsp90 target and, furthermore, its inhibition via the C-terminal domain, driven by the small molecule Coumermycin A1. Circular Dichroism was used as a powerful technique to characterize Hsp90 and its co-chaperone Hop in solution, in terms of secondary structure content and stability at different pH values, temperatures and solvents. CD was also used to characterize ATP but, unfortunately, we were not able to monitor an interaction between ATP and Hsp90. The utility of SPR technology, on the other hand, arises from the possibility of immobilizing the protein on a chip through its N-terminal domain in order to later study the interaction with small molecules able to disrupt Hsp90 dimerization at the C-terminal domain. The protein was attached to the SPR chip using “amine coupling” chemistry, so that the C-terminal domain was free to interact with Coumermycin A1. The goal of the experiment was achieved by testing a range of concentrations of Coumermycin A1. Despite the large difference in molecular weight between the protein (90 kDa) and the drug (1110.08 Da), we were able to calculate the affinity constant of the interaction, which was found to be 11.2 µM. In order to confirm the binding constant calculated for Hsp90 on the chip, we decided to use Capillary Electrophoresis to test Coumermycin binding to Hsp90. First, this technique was conveniently used to characterize the Hsp90 sample in terms of composition and purity. The experimental conditions were established for two different systems, the bare fused-silica and the PVA-coated capillary, and we were able to characterize the Hsp90 sample in both. Furthermore, we employed an application of capillary electrophoresis, Affinity Capillary Electrophoresis (ACE), to measure and confirm the binding constant calculated for Coumermycin on the optical biosensor. We found a KD = 19.45 µM. This result compares favorably with the KD previously obtained on the biosensor, and is promising for the use of our novel approach to screen new potential inhibitors of the Hsp90 C-terminal domain.
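For readers unfamiliar with how a KD is extracted from SPR titrations, the sketch below fits the standard steady-state 1:1 binding model to equilibrium responses. The concentrations and response values are invented, and the thesis' actual fitting procedure may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Steady-state 1:1 binding model commonly used in SPR affinity analysis:
# R_eq = Rmax * C / (KD + C), with C the analyte concentration.
def steady_state(C, Rmax, KD):
    return Rmax * C / (KD + C)

# hypothetical Coumermycin A1 concentrations (µM) and responses (RU)
C = np.array([1.0, 2.5, 5.0, 10.0, 25.0, 50.0, 100.0])
R = np.array([5.1, 11.0, 18.9, 28.7, 41.2, 48.0, 53.5])

(Rmax, KD), _ = curve_fit(steady_state, C, R, p0=(60.0, 10.0))
print(f"Rmax = {Rmax:.1f} RU, KD = {KD:.1f} µM")
```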

Relevance:

30.00%

Publisher:

Abstract:

This thesis is devoted to reuse in software. Apart from an organic introduction to the topic, the analysis is carried out at the level of the mechanisms offered by programming languages and of development techniques, with special attention to the theme of concurrency. The first chapter provides a general picture in which software reuse is described, together with the reasons for its importance and the crucial points concerning its realization. Several levels of reuse are identified on the basis of the abstraction and of the artifacts involved, and it is emphasized how languages contribute to reusability and to the achievement of reuse. Next, the support for reuse offered by the object-oriented paradigm is explored, with code examples, in terms of encapsulation, inheritance, polymorphism and composition. The treatment proceeds by analysing different features (typing, interfaces, mixins, generics) offered by various programming languages, showing how they affect the reusability of software components. The chapter closes with a few contextual remarks on inversion of control, aspect-oriented programming and the delegation mechanism. The second chapter embraces the theme of concurrency. After an introduction to the subject, several significant concurrency models are examined in depth: multi-threaded programming, tasks in the Ada language, SCOOP, and the Actor model. Their fundamental elements are described and their crucial aspects in terms of contribution to reuse are highlighted, with code examples. Regarding the Actor model, its implementation in Scala/Akka is presented as a case study. Finally, the inheritance anomaly problem is examined, on the basis of examples and of the three main classes of anomaly, and the susceptibility of the Scala/Akka concurrency support to such problems is analysed. Moreover, this chapter notes that some aspects of the reuse/concurrency pairing, including its deeper meaning, have not yet been adequately addressed by the computer science community. The third and last chapter opens with an overview of agent-oriented programming, taking the simpAL language as a reference; it then attempts to extend to the case of agents the notion of reuse explored in the previous chapters.
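As a taste of two of the reuse mechanisms surveyed in the first chapter, the sketch below contrasts reuse by mixin inheritance with reuse by composition/delegation. Python is used here merely for brevity; the class names are invented and the thesis presents its own examples.

```python
import json

class JsonSerializableMixin:
    """Reusable behaviour added to any class via (multiple) inheritance."""
    def to_json(self) -> str:
        # non-serializable parts (e.g. composed objects) are stringified
        return json.dumps(self.__dict__, default=str)

class Engine:
    def start(self) -> str:
        return "engine started"

class Car(JsonSerializableMixin):
    def __init__(self, model: str):
        self.model = model
        self._engine = Engine()          # composition: reuse an existing part

    def start(self) -> str:
        return self._engine.start()      # delegation to the composed part

car = Car("Panda")
print(car.start())    # reuse of Engine through composition/delegation
print(car.to_json())  # reuse of serialization through the mixin
```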

Relevance:

30.00%

Publisher:

Abstract:

In this thesis a mathematical model is derived that describes the charge and energy transport in semiconductor devices such as transistors, and numerical simulations of these physical processes are performed. To accomplish this, methods of theoretical physics, functional analysis, numerical mathematics and computer programming are applied. After an introduction to the status quo of semiconductor device simulation methods and a brief historical review, attention is shifted to the construction of the model, which serves as the basis of the subsequent derivations in the thesis. The starting point is a central equation of the theory of dilute gases, from which the model equations are derived and specified by means of a series expansion method. This is done in a multi-stage derivation process, which is mainly taken from a scientific paper and does not constitute the focus of this thesis. In the following phase we specify the mathematical setting and make the model assumptions precise, making use of methods of functional analysis. Since the equations we deal with are coupled, we are concerned with a nonstandard problem; the theory of scalar elliptic equations, by contrast, is by now well established. Subsequently, we address the numerical discretization of the equations. A special finite-element method is used for the discretization; this particular approach is required to make the numerical results appropriate for practical application. By a series of transformations of the discrete model we derive a system of algebraic equations suitable for numerical evaluation. Using computer programs developed in-house, we solve the equations to obtain approximate solutions. These programs are based on new and specialized iteration procedures that were developed and thoroughly tested within the framework of this research work; due to their importance and novelty, they are explained and demonstrated in detail. We compare these new iterations with a standard method, complemented by a feature adapted to the current context. A further innovation is the computation of solutions in three-dimensional domains, which is still rare. Special attention is paid to the applicability of the 3D simulation tools, and the programs are designed to have justifiable computational complexity. The simulation results of some models of contemporary semiconductor devices are shown and commented on in detail. Finally, we give an outlook on future developments and enhancements of the models and of the algorithms used.
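The abstract does not restate the final equations, but the kind of coupled system involved can be illustrated by the standard stationary drift-diffusion model of device simulation, shown here as an assumed example (the thesis' transport model, derived by series expansion, is more elaborate).

```latex
\[
-\nabla\cdot(\varepsilon\,\nabla\psi) = q\,(p - n + C),
\]
\[
\nabla\cdot J_n = q\,R, \qquad
J_n = -q\,\mu_n\,n\,\nabla\psi + q\,D_n\,\nabla n,
\]
\[
\nabla\cdot J_p = -q\,R, \qquad
J_p = -q\,\mu_p\,p\,\nabla\psi - q\,D_p\,\nabla p,
\]
where $\psi$ is the electrostatic potential, $n$ and $p$ the carrier
densities, $C$ the doping profile and $R$ the net recombination rate; it is
the coupling of the three equations that makes the analysis nonstandard.
```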

Relevance:

30.00%

Publisher:

Abstract:

The optical properties and the surface-enhancement effect of rough metal surfaces and of nanoparticles have been intensively discussed in the literature for the infrared region of the spectrum. For the preparation of such surfaces there are in principle two different strategies: the nanoparticles can first be synthesized ex situ, while the second approach relies on producing and growing the nanoparticles in situ. Both approaches were tested here, and it turned out that only the in-situ synthesis of the gold nanoparticles yields nanostructured surfaces that are electronically conductive, are not too rough to permit membrane formation, and at the same time show an optimal surface-enhancement effect. Although no ideal nanoparticle shape can be obtained by in-situ synthesis, the particles nevertheless behave according to the theory of the surface-enhancement effect. In this work, optimization of the shape and size of the nanoparticles led to an optimization of the enhancement effect. Such optimized surfaces could easily be reproduced and are distinguished by a high stability. The surface-enhancement effect obtained in this way amounts to a factor of 128 in absolute terms, compared with the coated ATR crystal without nanoparticles, or to about a factor of 6 compared with the surface used in our group until now. Spectra can therefore now be obtained with a considerably better signal-to-noise ratio (SNR), which considerably simplifies and shortens the evaluation and processing of the spectra. After the optimization of the metal surface and of the measurement parameters, using cytochrome c as an example, work turned to the surface coverage with the much larger cytochrome c oxidase (CcO). For this purpose the DTNTA linker was synthesized ex situ, and mixed self-assembled monolayers of DTNTA and DTP were then prepared. The NTA functionality is responsible for binding the CcO via the his-tag technology. The criteria for an optimal linker concentration were the electrical parameters of the layer before and after reconstitution into a lipid membrane, as well as the electron transfer rates determined by electrochemical measurements. Only with this optimized system, which works reliably and reproducibly, could further measurements on the CcO be started. From electrochemical measurements it was known that the CcO can be driven into an activated state by direct electron transfer under oxygen saturation. This activated state is characterized by a shift of the redox potentials by about 400 mV with respect to the redox potential known from equilibrium titrations. By SEIRAS it could be established that the reduction and oxidation of all redox centers indeed take place at the potentials measured in cyclic voltammetry. In addition, the SEIRA spectra showed that direct electron transfer causes substantial conformational changes within the protein. Until now it had been assumed, on the basis of electron transfer via mediators, that only minimal conformational changes were involved. Above all, the activated and non-activated states of cytochrome c oxidase could be demonstrated spectroscopically for the first time.

Relevance:

30.00%

Publisher:

Abstract:

Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmers' responsibility to explicitly manage the memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages have been proposed that work at a high level of abstraction but rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm on modern many-core systems, focusing on ease of programming. It concentrates on OpenMP, the de-facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders of magnitude of speedup and energy efficiency compared to the “pure software” version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with it.
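As a flavour of the data-partitioning algorithms analysed in the first part, the sketch below reproduces the even, static division of loop iterations among workers that OpenMP-style runtimes perform for static scheduling. Python is used only for illustration; the runtime in the thesis is implemented natively on the embedded target.

```python
def static_chunks(n_iters: int, n_workers: int):
    """Return [start, end) iteration ranges, one per worker, as even as possible."""
    base, extra = divmod(n_iters, n_workers)
    ranges, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)  # first `extra` workers get one more
        ranges.append((start, start + size))
        start += size
    return ranges

print(static_chunks(10, 4))  # [(0, 3), (3, 6), (6, 8), (8, 10)]
```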

Relevance:

30.00%

Publisher:

Abstract:

Hybrid vehicles (HV), comprising a conventional ICE-based powertrain and a secondary energy source that is also converted into mechanical power, represent a well-established alternative for substantially reducing both the fuel consumption and the tailpipe emissions of passenger cars. Several HV architectures are either being studied or already available on the market, e.g. Mechanical, Electric, Hydraulic and Pneumatic Hybrid Vehicles. Among these, the Electric (HEV) and Mechanical (HSF-HV) parallel hybrid configurations are examined throughout this Thesis. To fully exploit the potential of HVs, an optimal choice of the hybrid components to be installed must be made, while an effective Supervisory Control must be adopted to coordinate the way the different power sources are managed and how they interact. Real-time controllers can be derived starting from the optimal benchmark results obtained. However, the application of these powerful instruments requires a simplified and yet reliable and accurate model of the hybrid vehicle system. This can be a complex task, especially when the complexity of the system grows, as for the HSF-HV system assessed in this Thesis. The first task of the following dissertation is to establish the optimal modeling approach for an innovative and promising mechanical hybrid vehicle architecture. It will be shown how the chosen modeling paradigm can affect the quality of the solution and the amount of computational effort, using an optimization technique based on Dynamic Programming. The second goal concerns the control of pollutant emissions in a parallel Diesel-HEV. The emissions level obtained under real-world driving conditions is substantially higher than the usual result obtained in a homologation cycle. For this reason, an on-line control strategy capable of guaranteeing that the desired emissions level is respected, while minimizing fuel consumption and avoiding excessive battery depletion, is the target of the corresponding section of the Thesis.
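A minimal sketch of the Dynamic Programming benchmark idea follows: a backward sweep over a discretised state-of-charge (SOC) grid selects, at each step of a toy drive cycle, the power split minimising fuel use subject to SOC limits. Every model and number here is an invented placeholder, not the thesis' vehicle data.

```python
import numpy as np

P_dem = [20.0, 35.0, 15.0, 40.0]          # kW demanded at each step (toy cycle)
soc_grid = np.linspace(0.4, 0.8, 41)      # discretised battery SOC
splits = np.linspace(0.0, 1.0, 11)        # fraction of demand met by the engine
dt, batt_kwh = 1.0, 1.4                   # step length (s), battery size (kWh)

def fuel_rate(p_engine):                  # assumed convex fuel map, g/s
    return 0.05 * p_engine + 0.002 * p_engine ** 2

J = np.zeros_like(soc_grid)               # terminal cost-to-go: zero everywhere
for p in reversed(P_dem):                  # backward sweep over the cycle
    J_new = np.full_like(J, np.inf)
    for i, soc in enumerate(soc_grid):
        for u in splits:
            p_batt = (1 - u) * p            # kW drawn from the battery
            soc_next = soc - p_batt * dt / (3600.0 * batt_kwh)
            if not (soc_grid[0] <= soc_next <= soc_grid[-1]):
                continue                     # SOC constraint violated
            cost = fuel_rate(u * p) * dt + np.interp(soc_next, soc_grid, J)
            J_new[i] = min(J_new[i], cost)
    J = J_new
# J[i] now holds the minimal fuel over the cycle starting from soc_grid[i]
print(J[len(J) // 2])
```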

Relevance:

30.00%

Publisher:

Abstract:

Chlorinated solvents have been the most ubiquitous organic contaminants found in groundwater over the last five decades. They generally reach groundwater as a Dense Non-Aqueous Phase Liquid (DNAPL). This phase can migrate through aquifers, and also through aquitards, in ways that aqueous contaminants cannot. The complex phase partitioning that chlorinated solvent DNAPLs can undergo (i.e. into the dissolved, vapor or sorbed phases), as well as their transformations (e.g. degradation), depend on the physico-chemical properties of the contaminants themselves and on the features of the hydrogeological system. The main goal of the thesis is to provide new knowledge for future investigations of sites contaminated by DNAPLs in alluvial settings, proposing innovative investigative approaches and emphasizing some of the key issues and main criticalities of this kind of contaminant in such settings. To achieve this goal, the hydrogeologic setting below the city of Ferrara (Po plain, northern Italy), which is affected by scattered contamination by chlorinated solvents, has been investigated at different scales (regional and site-specific), both from an intrinsic (i.e. groundwater flow systems) and a specific (i.e. chlorinated solvent DNAPL behavior) point of view. Detailed investigations were carried out in particular at one selected test site, known as the “Caretti site”, where high-resolution vertical profiles of different kinds of data were collected by means of multilevel monitoring systems and other innovative sampling and analytical techniques. This made it possible to achieve a deep geological and hydrogeological knowledge of the system and to reconstruct in detail the architecture of the contaminants in relation to the features of the hosting porous medium. The results achieved in this thesis are useful not only at the local scale, e.g. to interpret the origin of contamination at other sites in the Ferrara area, but also more generally, to guide future remediation and protection actions in similar hydrogeologic settings.

Relevance:

30.00%

Publisher:

Abstract:

Classic group recommender systems focus on providing suggestions for a fixed group of people. Our work takes an inside look at designing a new recommender system that is capable of making suggestions for a sequence of activities, dividing people into subgroups, in order to boost overall group satisfaction. However, this idea increases the problem complexity along several dimensions and poses a great challenge to the algorithm's performance. To understand its effectiveness, given the enhanced complexity and the need for precise problem solving, we implemented an experimental system using data collected from a variety of web services concerning the city of Paris. The system recommends activities to a group of users through two different approaches: Local Search and Constraint Programming. The general results show that the number of subgroups can significantly influence the computational time and efficacy of the Constraint Programming approach. Generally, Local Search can find results much more quickly than Constraint Programming, and over a lengthy period of time it performs better, with similar final results.
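The Local Search approach can be sketched as a simple hill climber over subgroup assignments, as below. The satisfaction function and user data are invented stand-ins for the system's activity-based scoring.

```python
import random

def local_search(users, k, satisfaction, iters=10_000):
    """Hill-climb over single-user moves to maximise a group-satisfaction score."""
    assign = {u: random.randrange(k) for u in users}   # random initial assignment
    best = satisfaction(assign)
    for _ in range(iters):
        u = random.choice(users)
        old = assign[u]
        assign[u] = random.randrange(k)                # try moving one user
        new = satisfaction(assign)
        if new >= best:
            best = new                                  # keep improving moves
        else:
            assign[u] = old                             # undo worsening moves
    return assign, best

# toy objective: users with close "tastes" should share a subgroup
tastes = {f"u{i}": random.random() for i in range(12)}
def satisfaction(assign):
    score = 0.0
    for a in assign:
        for b in assign:
            if a < b and assign[a] == assign[b]:
                score -= abs(tastes[a] - tastes[b])    # penalise mismatched pairs
    return score

print(local_search(list(tastes), k=3, satisfaction=satisfaction)[1])
```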

Relevance:

30.00%

Publisher:

Abstract:

In surgical animal studies anesthesia is used regularly, and several reports in the literature demonstrate respiratory and cardiovascular side effects of anesthesiologic agents. The aim of this study was to compare two frequently used anesthesia cocktails (ketamine/xylazine [KX] versus medetomidine/climazolam/fentanyl [MCF]) in skin flap mouse models, analyzing systemic blood values, local metabolic parameters, and surgical outcome in critical ischemic skin flap models. Systemic hypoxia was found in the animals undergoing KX anesthesia, compared with normoxia in the MCF group (sO2: 89.2% +/- 2.4% versus 98.5% +/- 1.2%, P < 0.01). Analysis of tissue metabolism revealed impaired aerobic oxygen metabolism and increased cellular damage in critical ischemic flap tissue under KX anesthesia (lactate/pyruvate ratio: KX 349.86 +/- 282.38 versus MCF 64.53 +/- 18.63, P < 0.01; glycerol: KX 333.50 +/- 83.91 micromol/L versus MCF 195.83 +/- 29.49 micromol/L, P < 0.01). After 6 d, different rates of flap tissue necrosis could be detected (MCF 57% +/- 6% versus KX 68% +/- 6%, P < 0.01). In summary, we want to point out that the type of anesthesia, the animal model and the goal of the study have to be well matched. Comparing the effects of KX and MCF anesthesia in mice on surgical outcome was a novel aspect of our study.

Relevance:

30.00%

Publisher:

Abstract:

Changes in marine net primary productivity (PP) and export of particulate organic carbon (EP) are projected over the 21st century with four global coupled carbon cycle-climate models, which include representations of marine ecosystems and of the carbon cycle of differing structure and complexity. All four models show a decrease in global mean PP and EP of between 2 and 20% by 2100 relative to preindustrial conditions, for the SRES A2 emission scenario. Two different regimes of productivity change are consistently identified in all models. The first chain of mechanisms is dominant in the low- and mid-latitude ocean and in the North Atlantic: reduced input of macro-nutrients into the euphotic zone, related to enhanced stratification, a reduced mixed-layer depth, and slowed circulation, causes a decrease in macro-nutrient concentrations and hence in PP and EP. The second regime is projected for parts of the Southern Ocean: an alleviation of light and/or temperature limitation leads to an increase in PP and EP, as productivity is fueled by a sustained nutrient input. A region of disagreement among the models is the Arctic, where three models project an increase in PP while one projects a decrease. Projected changes in seasonal and interannual variability are modest in most regions. Regional model skill metrics are proposed to generate multi-model mean fields that show an improved skill in representing observation-based estimates compared to a simple multi-model average. Model results are compared to recent productivity projections obtained with three different algorithms usually applied to infer net primary production from satellite observations.
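One simple realisation of a skill-weighted multi-model mean is sketched below, weighting each model by the inverse of its RMSE against observations. The study's regional skill metrics are not specified in the abstract, so this is only an assumed form.

```python
import numpy as np

def skill_weighted_mean(model_fields, obs):
    """model_fields: (n_models, ...) array; obs: broadcastable observations."""
    fields = np.asarray(model_fields)
    # per-model RMSE against the observation-based estimate
    rmse = np.sqrt(((fields - obs) ** 2).reshape(len(fields), -1).mean(axis=1))
    w = 1.0 / rmse                       # higher skill (lower RMSE) -> more weight
    w /= w.sum()
    return np.tensordot(w, fields, axes=1)

# toy example: three "models" of a 2-point field, observations close to two of them
models = np.array([[1.0, 2.0], [1.2, 2.2], [3.0, 4.0]])
obs = np.array([1.1, 2.1])
print(skill_weighted_mean(models, obs))  # pulled toward the two skilful models
```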

Relevance:

30.00%

Publisher:

Abstract:

Background: Abstractor training is a key element in creating valid and reliable data collection procedures. The choice between in-person vs. remote, or simultaneous vs. sequential, abstractor training has considerable consequences for time and resource utilization. We conducted a web-based (webinar) abstractor training session to standardize training across six individual Cancer Research Network (CRN) sites for a study of breast cancer treatment effects in older women (BOWII). The goals of this manuscript are to describe the training session, its participants, and the participants' evaluation of webinar technology for abstraction training. Findings: A webinar was held for all six sites with the primary purpose of simultaneously training staff and ensuring consistent abstraction across sites. The training session involved a sequential review of over 600 data elements outlined in the coding manual, in conjunction with the display of the data entry fields in the study's electronic data collection system. A post-training evaluation was conducted via Survey Monkey©. Inter-rater reliability measures for abstractors within each site were conducted three months after the commencement of data collection. Ten of the 16 people who participated in the training completed the online survey. Almost all (90%) of the 10 trainees had previous medical record abstraction experience, and nearly two-thirds reported over 10 years of experience. Half of the respondents had previously participated in a webinar, among whom three had participated in a webinar for training purposes. All rated the knowledge and information delivered through the webinar as useful and reported that it adequately prepared them for data collection. Moreover, all participants would recommend this platform for multi-site abstraction training. Consistent with the participant-reported training effectiveness, within-site inter-rater agreement ranged from 89 to 98%, with a weighted average of 95% agreement across sites. Conclusions: Conducting training via web-based technology was an acceptable and effective approach to standardizing medical record review across multiple sites for this group of experienced abstractors. Given the substantial time and cost savings achieved with the webinar, coupled with the participants' positive evaluation of the training session, researchers should consider this instructional method as part of training efforts to ensure high-quality data collection in multi-site studies.
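For concreteness, the weighted average reported above can be computed as below. The per-site agreement values are invented within the stated 89-98% range, and the per-site chart counts used as weights are placeholders.

```python
# Weighted average of within-site inter-rater agreement (illustrative values).
site_agreement = {"A": 0.89, "B": 0.94, "C": 0.96, "D": 0.95, "E": 0.98, "F": 0.97}
site_n_charts = {"A": 40, "B": 55, "C": 70, "D": 30, "E": 65, "F": 50}

total = sum(site_n_charts.values())
weighted = sum(site_agreement[s] * site_n_charts[s] for s in site_agreement) / total
print(f"weighted average agreement: {weighted:.0%}")
```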