961 results for pre-slaughter management


Relevance:

30.00%

Publisher:

Abstract:

In view of the drastic growth of the Canadian Inuit population, the rising cost of living, the lack of job and income alternatives, and the high unemployment rate in the Arctic, efforts are being made to use the muskox populations to provide additional sources of food and/or revenue. The present paper attempts to review the course of muskox utilization in the Canadian Arctic and to tentatively assess its present as well as its future economic importance. Starting with the pre-European status of muskoxen in Canada, the drastic reduction in numbers resulting from the combined efforts of hide traders, whalers and expedition parties in the 19th and early 20th centuries, the impact of legal protection and the recovery since 1917 are described. Attempts to establish muskox farms with semi-domesticated herds failed in Canada in the 1970s. Since 1969, though, increasing numbers of animals have been allotted to many Inuit communities, and although most of the animals were primarily used for subsistence purposes, some communities could reserve part of their quotas for trophy (sport) hunters. While controlled sustainable subsistence and trophy hunts may eventually be carried out over the whole muskox range, including recently colonized northern Quebec, commercial harvesting for meat, hides and wool, introduced in 1981, will for some time at least be restricted to Banks and Victoria islands, which at present hold 78% of the Canadian muskox population and 94% of the overall quota.

Relevance:

30.00%

Publisher:

Abstract:

The manipulation and handling of an ever-increasing volume of data by current data-intensive applications require novel techniques for efficient data management. Despite recent advances in every aspect of data management (storage, access, querying, analysis, mining), future applications are expected to scale to even higher degrees, not only in terms of the volumes of data handled but also in terms of users and resources, often making use of multiple pre-existing autonomous, distributed or heterogeneous resources.

Relevance:

30.00%

Publisher:

Abstract:

The growth of wind power as an electric energy source is beneficial from an environmental point of view and improves the energy independence of countries with few fossil fuel resources. However, the randomness of the wind resource poses a great challenge for the management of electric grids. This study raises the possibility of using hydrogen as a means to damp the variability of the wind resource. Thus, it proposes using all the energy produced by a typical wind farm to generate hydrogen, which will in turn be used for suitable generation of electric energy according to the operating rules of a liberalized electricity market.
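The wind-to-hydrogen-to-grid chain described above can be sketched with a back-of-the-envelope energy balance. All figures below (electrolyser and fuel-cell efficiencies, wind-farm size and capacity factor) are illustrative assumptions, not values taken from the study; only the lower heating value of hydrogen is a standard physical constant.

```python
# Minimal sketch of buffering wind energy as hydrogen. Efficiencies and the
# example farm size are assumed for illustration, not taken from the study.

H2_LHV_KWH_PER_KG = 33.33     # lower heating value of hydrogen, kWh/kg (approx.)

def wind_to_hydrogen(wind_energy_kwh, electrolyser_eff=0.70):
    """Hydrogen mass (kg) produced from wind energy via electrolysis."""
    return wind_energy_kwh * electrolyser_eff / H2_LHV_KWH_PER_KG

def hydrogen_to_electricity(h2_kg, fuel_cell_eff=0.50):
    """Dispatchable electric energy (kWh) recovered from stored hydrogen."""
    return h2_kg * H2_LHV_KWH_PER_KG * fuel_cell_eff

# Example: one day of a hypothetical 20 MW wind farm at 30% capacity factor.
daily_wind_kwh = 20_000 * 24 * 0.30
h2_kg = wind_to_hydrogen(daily_wind_kwh)
dispatchable_kwh = hydrogen_to_electricity(h2_kg)
round_trip = dispatchable_kwh / daily_wind_kwh
print(f"{h2_kg:.0f} kg H2, {dispatchable_kwh:.0f} kWh dispatchable, "
      f"round-trip efficiency {round_trip:.0%}")
```

The low round-trip efficiency (the product of the two conversion efficiencies) is the price paid for turning a random resource into dispatchable energy, which is exactly the trade-off the study evaluates under market rules.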

Relevance:

30.00%

Publisher:

Abstract:

The effects of fire (Control: burned soil) and two emergency stabilisation techniques (grass Seeding and straw Mulching) on 20 chemical characteristics were evaluated on 0–5 cm top-soils sampled 1, 90, 180 and 365 days after an experimental fire in a steep shrubland of a temperate-humid region (NW Spain). Most of the variance in pH (in H2O and KCl) was explained by the sampling date. No clear temporal trends were identifiable for total soil C and N content, likely due to the large SOM pool in these soils; however, changes in soil δ13C were explained by the deposition of 13C-depleted ashes, followed by their progressive erosion, while those in soil δ15N were a consequence of fire-induced N outputs. After the fire, NH4+-N, P, Na, K, Mg, Ca, Mn, Cu, Zn and B concentrations increased, while those of NO3−-N, Al, Fe and Co did not vary significantly. Despite a significant decline with time, concentrations of Mg, Ca and Mn at the end of the study were still higher than in unburned soil, while those of K, Cu, Zn and B were similar to pre-fire levels and those of NH4+-N, P and Na were below pre-fire values. Mulching and Seeding treatments for burned-soil emergency stabilisation had significant effects on soil δ15N and extractable K, Mg and Ca, while data were inconclusive regarding their possible effects on extractable Al, Fe and Co.

Relevance:

30.00%

Publisher:

Abstract:

Over the past few years, the common practice in air traffic management has been for commercial aircraft to reach their destination by following a set of predefined routes. Currently, aircraft operators are requesting more flexibility to fly according to their preferences, in order to achieve their business objectives. For this reason, much research effort is being invested in developing techniques to evaluate optimal aircraft trajectories and traffic synchronisation. A further problem is the inefficient use of airspace caused by relying on barometric altitude, above all in the landing and take-off phases and in Continuous Descent Approach (CDA) trajectories, where it is currently necessary to introduce the appropriate reference setting (QNH or QFE). The interest of this research arises from the need to solve this problem and to permit better airspace management. Its main goals are to evaluate the impact, weaknesses and strengths of using geometric altitude instead of barometric altitude. Moreover, this dissertation proposes the design of a simplified trajectory simulator able to predict aircraft trajectories. The model is based on a three-degrees-of-freedom point-mass aircraft model that can adapt aircraft performance data from the Base of Aircraft Data and meteorological information. A feature of this trajectory simulator is its support for improving strategic and pre-tactical trajectory planning in the future Air Traffic Management system. To this end, the error of the tool (the aircraft trajectory simulator) is measured by comparing its performance variables with actual flown trajectories obtained from Flight Data Recorder information. The trajectory simulator is validated by analysing the performance of different types of aircraft on different routes. A fuel consumption estimation error was identified, and a correction is proposed for each aircraft model.
In the future Air Traffic Management (ATM) system, the trajectory becomes the fundamental element of a new set of operating procedures collectively referred to as Trajectory-Based Operations (TBO). Thus, governmental institutions, academia and industry have shown renewed interest in the application of trajectory optimisation techniques in commercial aviation. The trajectory optimisation problem can be solved using optimal control methods. In this research we present and discuss the existing methods for solving optimal control problems, focusing on direct collocation, which has received recent attention from the scientific community. In particular, two families of collocation methods are analysed: Hermite-Legendre-Gauss-Lobatto collocation and pseudospectral collocation. They are first compared on a benchmark case study: the minimum-fuel trajectory problem with fixed arrival time. For the sake of scalability to more realistic problems, the different methods are also tested on a real Airbus A319 Cairo-Madrid flight. Results show that pseudospectral collocation, which proved numerically more accurate and computationally much faster, is suitable for the type of problems arising in trajectory optimisation with application to ATM. Fast and accurate optimal trajectories can properly contribute to meeting the new challenges of the future ATM system. As atmospheric uncertainty is one of the most important issues in trajectory planning, the final objective of this dissertation is to obtain an order of magnitude of how much fuel consumption differs under different atmospheric conditions. It is important to note that in the strategic planning phase the optimal trajectories are determined from meteorological predictions, which differ from the conditions at the moment of the flight.
The optimal trajectories have shown savings of at least 500 kg in most of the atmospheric conditions considered (different pressure and temperature at Mean Sea Level, and different temperature lapse rates) with respect to the conventional procedure simulated under the same atmospheric conditions. These results show that the implementation of optimal profiles is beneficial under the current Air Traffic Management (ATM) system.
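The reference-setting (QNH) problem described in this abstract can be illustrated with the International Standard Atmosphere barometric formula that an altimeter implements. The constants below are standard ISA troposphere values; the snippet is a sketch of the general principle, not code from the dissertation's trajectory simulator.

```python
# Sketch of how an altimeter converts static pressure into indicated altitude
# using a reference setting (QNH), per the ISA troposphere model.
T0 = 288.15        # sea-level standard temperature, K
L = 0.0065         # temperature lapse rate, K/m
R = 287.053        # specific gas constant for dry air, J/(kg*K)
G = 9.80665        # gravitational acceleration, m/s^2

def indicated_altitude_m(static_hpa, qnh_hpa=1013.25):
    """Indicated altitude (m) for a given static pressure and QNH setting."""
    return (T0 / L) * (1.0 - (static_hpa / qnh_hpa) ** (R * L / G))

# The same static pressure reads differently under two QNH settings, which is
# exactly the barometric-reference ambiguity that geometric altitude avoids:
p = 850.0  # hPa
print(indicated_altitude_m(p, 1013.25) - indicated_altitude_m(p, 1003.0))
```

A change of about 10 hPa in the reference setting shifts the indicated altitude by roughly 80 m at this pressure level, while the geometric altitude of the aircraft is of course unchanged.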

Relevance:

30.00%

Publisher:

Abstract:

Rabbit does in modern rabbitries are under intensive reproductive rhythms. Females are high milk producers with high energy expenditure due to the extensive overlap between lactation and gestation. This situation leads to a negative energy balance, with mobilization of body fat, especially in primiparous does. Poor body condition and poor health status severely affect reproductive features (fertility rate and lifespan of the doe, as well as ovarian physiology). This paper reviews some reproductive and nutritional approaches used in recent years to improve the reproductive performance of rabbit females, focusing mainly on the influence on ovarian response and embryo quality, with emphasis on epigenetic modifications in pre-implantation embryos and the consequences for offspring.

Relevance:

30.00%

Publisher:

Abstract:

SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on a Static RAM (SRAM) configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connection). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows partial reconfiguration of the FPGA without disrupting the application. Finally, they feature a high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments and with high dependability requirements. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can imply a system failure, which may be more or less tolerable depending on the dependability requirements. In the general case, such faults must be managed in some way.
This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications; the main differences between them are the radiation level and the possibility of maintenance. The different techniques for A-Layer and C-Layer fault management are classified and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed: it manages both layers concurrently in a coordinated way, and allows balancing redundancy level and dependability. For the purpose of validating dynamic fault management techniques, two different solutions are developed. The first is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow exploring the Fault Manager design space, decoupling its design from the development of the target FPGA. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration-frame level and at the JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed.
The second is a validation platform for Xilinx Virtex FPGA Fault Managers. The hardware platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. As in the simulation framework, fault injection capabilities are implemented. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process from describing the problem of radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
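A common C-Layer fault-management pattern in this setting is scrubbing: periodic readback of configuration frames, comparison against golden checksums, and rewrite of any corrupted frame. The toy model below illustrates that detect-and-repair loop only; it is an invented simplification and does not model any real Xilinx readback interface (JTAG/SelectMAP) or the thesis's SystemC framework.

```python
# Toy C-Layer scrubber: readback frames, compare CRCs against golden values,
# rewrite corrupted frames. Illustrative only; frame sizes and the fault model
# (a single bit flip) are invented for the example.
import zlib

class ConfigMemory:
    """Simulated C-Layer: a list of configuration frames (byte strings)."""
    def __init__(self, frames):
        self.frames = list(frames)

    def inject_fault(self, idx, bit):
        """Flip one bit in frame `idx`, mimicking a radiation-induced upset."""
        f = bytearray(self.frames[idx])
        f[bit // 8] ^= 1 << (bit % 8)
        self.frames[idx] = bytes(f)

class Scrubber:
    """C-Layer Fault Manager: detect via golden CRCs, repair by rewrite."""
    def __init__(self, golden_frames):
        self.golden = list(golden_frames)
        self.golden_crcs = [zlib.crc32(f) for f in golden_frames]

    def scrub(self, mem):
        """One scrub cycle; returns the indices of repaired frames."""
        repaired = []
        for i, frame in enumerate(mem.frames):
            if zlib.crc32(frame) != self.golden_crcs[i]:
                mem.frames[i] = self.golden[i]   # rewrite from golden copy
                repaired.append(i)
        return repaired

frames = [bytes([i]) * 16 for i in range(4)]
mem = ConfigMemory(frames)
scrubber = Scrubber(frames)
mem.inject_fault(2, bit=37)
print(scrubber.scrub(mem))   # frame 2 detected and repaired
```

A dual-layer Fault Manager of the kind the thesis proposes would coordinate such a C-Layer loop with A-Layer mechanisms (e.g. redundancy voting), rather than running it in isolation.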

Relevance:

30.00%

Publisher:

Abstract:

Operating theatres are the engine of a hospital; proper management of the operating rooms and their staff represents a great challenge for managers, and the results impact the hospital's budget directly. This work presents a MILP model for the efficient scheduling of multiple surgeries in Operating Rooms (ORs) during a working day. The model considers multiple surgeons, multiple ORs and different types of surgeries. Stochastic strategies are also implemented to take into account the uncertainty in surgery durations (pre-incision, incision and post-incision times). In addition, heuristic-based methods and a MILP decomposition approach are proposed for solving large-scale OR scheduling problems in a computationally efficient way. All these computer-aided strategies have been implemented in AIMMS, an advanced modeling and optimization environment, resulting in a user-friendly solution tool for operating room management under uncertainty.
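The structure of the deterministic core of this problem can be sketched with a simple greedy list-scheduling heuristic: assign each surgery to the operating room that frees up earliest. This is an illustration of the problem only, not the MILP, the stochastic model, or AIMMS code; the surgery names and durations are made up.

```python
# Toy OR-scheduling sketch: greedy assignment of surgeries to the room that
# becomes free earliest. Illustrative only; not the paper's MILP model.
import heapq

def schedule(surgeries, n_rooms):
    """surgeries: dict name -> total duration in minutes (pre-incision +
    incision + post-incision). Returns (assignment, makespan)."""
    rooms = [(0, r) for r in range(n_rooms)]       # (next free time, room id)
    heapq.heapify(rooms)
    assignment = {}
    # Longest-processing-time-first ordering tightens the greedy bound.
    for name, dur in sorted(surgeries.items(), key=lambda kv: -kv[1]):
        free_at, room = heapq.heappop(rooms)
        assignment[name] = (room, free_at, free_at + dur)  # (room, start, end)
        heapq.heappush(rooms, (free_at + dur, room))
    makespan = max(end for _, _, end in assignment.values())
    return assignment, makespan

surgeries = {"hernia": 90, "bypass": 240, "appendectomy": 60, "hip": 180}
plan, makespan = schedule(surgeries, n_rooms=2)
print(plan, makespan)
```

The MILP and decomposition approaches in the paper replace this myopic rule with exact optimization, and the stochastic strategies replace the fixed durations with uncertain ones.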

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE The decision-making process plays a key role in organizations. Every decision-making process produces a final choice that may or may not prompt action. Decision makers recurrently face the dichotomous question of following a traditional sequential decision-making process, where the output of one decision is used as the input of the next stage, or following a joint decision-making approach, where several decisions are taken simultaneously. The decision-making process chosen will affect different players in the organization, and the choice of approach remains difficult even with the current literature and practitioners' knowledge. The pursuit of better ways of making decisions has been a common goal for academics and practitioners. Management scientists use different techniques and approaches to improve different types of decisions; the purpose is to use the available resources (data and techniques) as well as possible to achieve the objectives of the organization. Developing and applying models and concepts may help solve the managerial problems faced every day in different companies. As a result of this research, different decision models are presented to contribute to the body of knowledge of management science. The first models focus on the manufacturing industry and the second set on the health care industry. Although these models are case-specific, they serve to exemplify that different approaches to the same problems can provide interesting results. Unfortunately, there is no universal recipe that can be applied to all problems; furthermore, the same model may deliver good results with certain data and bad results with other data. A framework to analyse the data before selecting the model to be used is therefore presented and tested on the models developed to exemplify these ideas.
METHODOLOGY As the first step of the research, a systematic literature review on joint decision making is presented, together with the opinions and suggestions of different scholars. In the next stage of the thesis, the decision-making processes of more than 50 companies from different sectors were analysed in the production planning area at the job-shop level; the data were obtained through surveys and face-to-face interviews. The following part of the research into the decision-making process was carried out in two application fields that are highly relevant for our society: manufacturing and health care. The first step was to study the interactions and develop a mathematical model for the replenishment of car assembly, combining the vehicle routing problem with inventory management. The next step was to add the scheduling of car production (car sequencing) and to use metaheuristics such as ant colony optimization and genetic algorithms to measure whether the behaviour holds for problems of different sizes. A similar approach is presented for the production of semiconductors and aviation parts, where a hoist has to move from one station to another to perform the work and a job schedule has to be produced; for this problem, however, simulation was used for experimentation. In parallel, the scheduling of operating rooms was studied: surgeries were allocated to surgeons and the scheduling of operating rooms was analysed. The first part of this research was done in a teaching hospital, and in the second part the interaction with uncertainty was added. Once the previous problems had been analysed, a general framework to characterize problem instances was built. A general conclusion is presented in the final chapter.
FINDINGS AND PRACTICAL IMPLICATIONS The first contribution is an update of the decision-making literature review, together with an analysis of the possible savings resulting from a change in the decision process. Then the survey results are presented, which reveal a lack of consistency between what managers believe and the actual degree of integration of their decisions. The next stage of the thesis contributes to the body of knowledge of operations research with the joint solution of the replenishment, sequencing and inventory problem on the assembly line, together with parallel work on operating room scheduling, where different solution approaches are presented. Beyond the contribution of the solution methods themselves, the main contribution is the framework proposed to pre-evaluate a problem before choosing the techniques to solve it. However, there is no straightforward answer as to whether joint or sequential solutions are better. Following the proposed framework, and evaluating factors such as the flexibility of the answer, the number of actors and the tightness of the data, gives important hints as to the most suitable direction to take in tackling the problem.
RESEARCH LIMITATIONS AND AVENUES FOR FUTURE RESEARCH In the first part of the work it was very difficult to calculate the possible savings of different projects, since many papers do not report these quantities or base the impact on non-quantifiable benefits. Another issue is the confidentiality of many projects, whose data cannot be presented. For the car assembly line problem, more computational power would allow bigger instances to be solved. For the operating room problem, there was a lack of historical data to perform a parallel analysis in the teaching hospital. In order to keep testing the decision framework, more case studies need to be carried out so that the results can be generalized and made more evident and less ambiguous.
The health care field offers great opportunities: despite the recent awareness of the need to improve the decision-making process, many opportunities for improvement remain. Another big difference from the automotive industry is that the latest improvements are not spread among all the actors. Therefore, in the future this research will focus more on collaboration between academia and the health care sector.
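The thesis's central dichotomy between sequential and joint decision making can be shown on a tiny invented example: two coupled decisions (a production batch size and a delivery vehicle capacity) with a shared cost function. Everything here (decision sets, cost coefficients) is made up for illustration; the point is only that a joint search can never do worse than a sequential one on the same data, while the sequential approach may miss the interaction.

```python
# Toy comparison of sequential vs. joint optimization of two coupled
# decisions. All numbers are invented for illustration.
import itertools

batches = [10, 20, 30, 40]       # decision 1: production batch size
capacities = [10, 20, 30, 40]    # decision 2: delivery vehicle capacity

def cost(batch, cap):
    holding = 0.5 * batch                 # bigger batches -> more inventory
    setups = 400 / batch                  # smaller batches -> more setups
    trips = -(-batch // cap)              # ceil(batch / cap) delivery trips
    routing = 30 * trips + 0.8 * cap      # trip cost plus vehicle cost
    return holding + setups + routing

# Sequential: pick the batch minimizing production cost alone, then the
# capacity given that batch (output of one decision feeds the next).
seq_batch = min(batches, key=lambda b: 0.5 * b + 400 / b)
seq_cap = min(capacities, key=lambda c: cost(seq_batch, c))
seq_cost = cost(seq_batch, seq_cap)

# Joint: search both decisions together.
joint_cost = min(cost(b, c) for b, c in itertools.product(batches, capacities))

print(seq_cost, joint_cost)
assert joint_cost <= seq_cost   # joint never does worse on the same data
```

As the thesis notes, the joint approach is not always worth its extra cost in practice, which is exactly what the proposed pre-evaluation framework (flexibility, number of actors, tightness of the data) is meant to assess before committing to a solution technique.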

Relevance:

30.00%

Publisher:

Abstract:

This Guideline is an official statement of the European Society of Gastrointestinal Endoscopy (ESGE). It addresses the diagnosis and management of nonvariceal upper gastrointestinal hemorrhage (NVUGIH). Main Recommendations MR1. ESGE recommends immediate assessment of hemodynamic status in patients who present with acute upper gastrointestinal hemorrhage (UGIH), with prompt intravascular volume replacement initially using crystalloid fluids if hemodynamic instability exists (strong recommendation, moderate quality evidence). MR2. ESGE recommends a restrictive red blood cell transfusion strategy that aims for a target hemoglobin between 7 g/dL and 9 g/dL. A higher target hemoglobin should be considered in patients with significant co-morbidity (e.g., ischemic cardiovascular disease) (strong recommendation, moderate quality evidence). MR3. ESGE recommends the use of the Glasgow-Blatchford Score (GBS) for pre-endoscopy risk stratification. Outpatients determined to be at very low risk, based upon a GBS score of 0–1, do not require early endoscopy or hospital admission. Discharged patients should be informed of the risk of recurrent bleeding and be advised to maintain contact with the discharging hospital (strong recommendation, moderate quality evidence). MR4. ESGE recommends initiating high dose intravenous proton pump inhibitors (PPI), intravenous bolus followed by continuous infusion (80 mg then 8 mg/hour), in patients presenting with acute UGIH awaiting upper endoscopy. However, PPI infusion should not delay the performance of early endoscopy (strong recommendation, high quality evidence). MR5. ESGE does not recommend the routine use of nasogastric or orogastric aspiration/lavage in patients presenting with acute UGIH (strong recommendation, moderate quality evidence). MR6.
ESGE recommends intravenous erythromycin (single dose, 250 mg given 30–120 minutes prior to upper gastrointestinal [GI] endoscopy) in patients with clinically severe or ongoing active UGIH. In selected patients, pre-endoscopic infusion of erythromycin significantly improves endoscopic visualization, reduces the need for second-look endoscopy, decreases the number of units of blood transfused, and reduces duration of hospital stay (strong recommendation, high quality evidence). MR7. Following hemodynamic resuscitation, ESGE recommends early (≤ 24 hours) upper GI endoscopy. Very early (< 12 hours) upper GI endoscopy may be considered in patients with high risk clinical features, namely: hemodynamic instability (tachycardia, hypotension) that persists despite ongoing attempts at volume resuscitation; in-hospital bloody emesis/nasogastric aspirate; or contraindication to the interruption of anticoagulation (strong recommendation, moderate quality evidence). MR8. ESGE recommends that peptic ulcers with spurting or oozing bleeding (Forrest classification Ia and Ib, respectively) or with a nonbleeding visible vessel (Forrest classification IIa) receive endoscopic hemostasis because these lesions are at high risk for persistent bleeding or rebleeding (strong recommendation, high quality evidence). MR9. ESGE recommends that peptic ulcers with an adherent clot (Forrest classification IIb) be considered for endoscopic clot removal. Once the clot is removed, any identified underlying active bleeding (Forrest classification Ia or Ib) or nonbleeding visible vessel (Forrest classification IIa) should receive endoscopic hemostasis (weak recommendation, moderate quality evidence). MR10. In patients with peptic ulcers having a flat pigmented spot (Forrest classification IIc) or clean base (Forrest classification III), ESGE does not recommend endoscopic hemostasis as these stigmata present a low risk of recurrent bleeding.
In selected clinical settings, these patients may be discharged to home on standard PPI therapy, e.g., oral PPI once daily (strong recommendation, moderate quality evidence). MR11. ESGE recommends that epinephrine injection therapy not be used as endoscopic monotherapy. If used, it should be combined with a second endoscopic hemostasis modality (strong recommendation, high quality evidence). MR12. ESGE recommends PPI therapy for patients who receive endoscopic hemostasis and for patients with adherent clot not receiving endoscopic hemostasis. PPI therapy should be high dose and administered as an intravenous bolus followed by continuous infusion (80 mg then 8 mg/hour) for 72 hours post-endoscopy (strong recommendation, high quality evidence). MR13. ESGE does not recommend routine second-look endoscopy as part of the management of nonvariceal upper gastrointestinal hemorrhage (NVUGIH). However, in patients with clinical evidence of rebleeding following successful initial endoscopic hemostasis, ESGE recommends repeat upper endoscopy with hemostasis if indicated. In the case of failure of this second attempt at hemostasis, transcatheter angiographic embolization (TAE) or surgery should be considered (strong recommendation, high quality evidence). MR14. In patients with NVUGIH secondary to peptic ulcer, ESGE recommends investigating for the presence of Helicobacter pylori in the acute setting with initiation of appropriate antibiotic therapy when H. pylori is detected. Re-testing for H. pylori should be performed in those patients with a negative test in the acute setting. Documentation of successful H. pylori eradication is recommended (strong recommendation, high quality evidence). MR15. In patients receiving low dose aspirin for secondary cardiovascular prophylaxis who develop peptic ulcer bleeding, ESGE recommends aspirin be resumed immediately following index endoscopy if the risk of rebleeding is low (e.g., FIIc, FIII).
In patients with high risk peptic ulcer (FIa, FIb, FIIa, FIIb), early reintroduction of aspirin by day 3 after index endoscopy is recommended, provided that adequate hemostasis has been established (strong recommendation, moderate quality evidence).
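MR3 relies on the Glasgow-Blatchford Score (GBS). The sketch below encodes the scoring logic of the commonly published version of the score; the threshold values are transcribed from that version, not from this Guideline, and should be verified against the original publication before any use beyond illustration.

```python
# Sketch of the Glasgow-Blatchford Score used in MR3 for pre-endoscopy risk
# stratification. Thresholds follow the commonly published version of the
# score; verify against the original publication before any clinical use --
# this is illustrative code, not a medical device.

def glasgow_blatchford(urea_mmol_l, hb_g_dl, sbp_mmhg, male,
                       pulse_ge_100=False, melena=False, syncope=False,
                       hepatic_disease=False, cardiac_failure=False):
    score = 0
    # Blood urea (mmol/L)
    if urea_mmol_l >= 25: score += 6
    elif urea_mmol_l >= 10: score += 4
    elif urea_mmol_l >= 8: score += 3
    elif urea_mmol_l >= 6.5: score += 2
    # Hemoglobin (g/dL), sex-specific
    if male:
        if hb_g_dl < 10: score += 6
        elif hb_g_dl < 12: score += 3
        elif hb_g_dl < 13: score += 1
    else:
        if hb_g_dl < 10: score += 6
        elif hb_g_dl < 12: score += 1
    # Systolic blood pressure (mmHg)
    if sbp_mmhg < 90: score += 3
    elif sbp_mmhg < 100: score += 2
    elif sbp_mmhg < 110: score += 1
    # Other markers
    score += (1 if pulse_ge_100 else 0) + (1 if melena else 0)
    score += 2 * sum([syncope, hepatic_disease, cardiac_failure])
    return score

# A patient scoring 0-1 would, per MR3, not require early endoscopy or admission.
print(glasgow_blatchford(urea_mmol_l=5.0, hb_g_dl=14.5, sbp_mmhg=125, male=True))  # prints 0
```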

Relevance:

30.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

30.00%

Publisher:

Abstract:

Publisher's catalog, [4] p. following t.p.