958 results for Multi-ion Counting System
Abstract:
BACKGROUND Stroke is a major cause of morbidity and mortality during open-heart surgery. Up to 60% of intraoperative cerebral events are embolus-induced. This randomized, controlled, multicenter trial is the first human study evaluating the safety and efficacy of a novel aortic cannula producing simultaneous forward flow and backward suction for extracting solid and gaseous emboli from the ascending aorta and aortic arch upon their intraoperative release. METHODS Sixty-six patients (25 females; 68±10 years) undergoing elective aortic valve replacement surgery, with or without coronary artery bypass graft surgery, were randomized to the use of the CardioGard (CardioGard Medical, Or-Yehuda, Israel) Emboli Protection cannula ("treatment") or a standard ("control") aortic cannula. The primary endpoint was the volume of new brain lesions measured by diffusion-weighted magnetic resonance imaging (DW-MRI), performed preoperatively and postoperatively. Device safety was investigated by comparing complication rates, namely neurologic events, stroke, renal insufficiency, and death. RESULTS Of 66 patients (34 in the treatment group), 51 completed the presurgery and postsurgery MRI (27 in the treatment group). The volume of new brain lesions in the treatment group was (mean±standard error of the mean) 44.00±64.00 mm³ versus 126.56±28.74 mm³ in the control group (p=0.004). Of the treatment group, 41% demonstrated new postoperative lesions versus 66% in the control group (p=0.03). The complication rate was comparable in both groups. CONCLUSIONS The CardioGard cannula is safe and effective for use during open-heart surgery. Efficacy was demonstrated by the removal of a substantial amount of emboli and by significant reductions in both the volume of new brain lesions and the percentage of patients experiencing new brain lesions.
Abstract:
Accurate predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed which combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction, cARX, and a recurrent neural network, RNN). Data fusion techniques based on i) Dempster-Shafer Evidential Theory (DST), ii) Genetic Algorithms (GA), and iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance, with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, a median daily false alarm (DFA) rate of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before the occurrence of events were 13.0 and 12.1 min for hypo- and hyperglycemic events, respectively. Compared to the cARX and RNN models, and to a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
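As a point of reference for the fusion step, the sketch below implements the simple linear-fusion baseline that the abstract compares the proposed schemes against, followed by a threshold alarm rule. The weights, thresholds, and function names are illustrative assumptions, not the paper's DST, GA, or GP methods.

```python
import numpy as np

# Illustrative linear-fusion baseline (not the paper's DST/GA/GP fusion):
# a convex combination of two glucose predictors plus a simple threshold
# alarm. All weights and thresholds below are hypothetical.

HYPO_MGDL = 70.0    # assumed hypoglycemia alarm threshold (mg/dL)
HYPER_MGDL = 180.0  # assumed hyperglycemia alarm threshold (mg/dL)

def fuse(pred_carx: np.ndarray, pred_rnn: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Convex combination of the cARX and RNN model outputs."""
    return w * pred_carx + (1.0 - w) * pred_rnn

def alarms(fused: np.ndarray):
    """Yield (sample index, event kind) for every predicted excursion."""
    for i, g in enumerate(fused):
        if g < HYPO_MGDL:
            yield i, "hypo"
        elif g > HYPER_MGDL:
            yield i, "hyper"

# Example: 30-min-ahead predictions from the two models (mg/dL)
carx = np.array([110.0, 95.0, 78.0, 66.0])
rnn = np.array([105.0, 90.0, 74.0, 62.0])
print(list(alarms(fuse(carx, rnn))))  # -> [(3, 'hypo')]
```

The DST, GA, and GP schemes replace the fixed weight w with evidence- or evolution-derived combination rules, which is where the reported accuracy gains come from.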
Abstract:
NA61/SHINE (SPS Heavy Ion and Neutrino Experiment) is a multi-purpose experimental facility to study hadron production in hadron-proton, hadron-nucleus and nucleus-nucleus collisions at the CERN Super Proton Synchrotron. It recorded the first physics data with hadron beams in 2009 and with ion beams (secondary ⁷Be beams) in 2011. NA61/SHINE has greatly profited from the long development of the CERN proton and ion sources and the accelerator chain, as well as the H2 beamline of the CERN North Area. The latter has recently been modified to also serve as a fragment separator, as needed to produce the ⁷Be beams for NA61/SHINE. Numerous components of the NA61/SHINE set-up were inherited from its predecessors, in particular the last one, the NA49 experiment. Important new detectors and upgrades of the legacy equipment were introduced by the NA61/SHINE Collaboration. This paper describes the state of the NA61/SHINE facility, the beams and the detector system, before the CERN Long Shutdown I, which started in March 2013.
Abstract:
Although the climate development over the Holocene in the Northern Hemisphere is well known, palaeolimnological climate reconstructions reveal spatiotemporal variability in northern Eurasia. Here we present a multi-proxy study from north-eastern Siberia combining sediment geochemistry with diatom and pollen data from lake-sediment cores covering the last 38,000 cal. years. Our results show major changes in pyrite content and fragilarioid diatom species distributions, indicating prolonged seasonal lake-ice cover between ~13,500 and ~8,900 cal. years BP and possibly during the 8,200 cal. years BP cold event. A pollen-based climate reconstruction yielded a mean July temperature of 17.8°C during the Holocene Thermal Maximum (HTM) between ~8,900 and ~4,500 cal. years BP. Naviculoid diatoms appear in the late Holocene, indicating a shortening of the seasonal ice cover that continues today. Our results reveal a strong correlation between the applied terrestrial and aquatic indicators and natural seasonal climate dynamics in the Holocene. Planktonic diatoms show a strong response to changes in the lake ecosystem due to recent climate warming in the Anthropocene. We assess other palaeolimnological studies to infer the spatiotemporal pattern of the HTM and affirm that the timing of its onset, which differs by up to 3,000 years from north to south, can be well explained by climatic teleconnections: the westerlies brought cold air to this part of Siberia until the Laurentide ice sheet vanished 7,000 years ago. The apparently delayed ending of the HTM in the central Siberian record can be ascribed to ecological thresholds that were crossed only after winter temperatures rose and the seasonal insolation contrast declined during the mid- to late Holocene, as well as to the lack of differentiation between summer and winter trends in palaeolimnological reconstructions.
Abstract:
The glacial climate system transitioned rapidly between cold (stadial) and warm (interstadial) conditions in the Northern Hemisphere. This variability, referred to as Dansgaard-Oeschger variability, is widely believed to arise from perturbations of the Atlantic Meridional Overturning Circulation. Evidence for such changes during the longer Heinrich stadials has been identified, but direct evidence for overturning circulation changes during Dansgaard-Oeschger events has proven elusive. Here we reconstruct bottom-water [CO₃²⁻] variability from B/Ca ratios of benthic foraminifera and indicators of sedimentary dissolution, and use these reconstructions to infer the flow of northern-sourced deep water to the deep central sub-Antarctic Atlantic Ocean. We find that nearly every Dansgaard-Oeschger interstadial is accompanied by a rapid incursion of North Atlantic Deep Water into the deep South Atlantic. Based on these results and transient climate model simulations, we conclude that North Atlantic stadial-interstadial climate variability was associated with significant Atlantic overturning circulation changes that were rapidly transmitted across the Atlantic. However, while demonstrating the persistent role of Atlantic overturning circulation changes in past abrupt climate variability, our reconstructions of carbonate chemistry also indicate that the carbon cycle response to abrupt climate change was not a simple function of North Atlantic overturning.
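The reconstruction step described above conventionally rests on a linear calibration between the measured B/Ca ratio of the benthic foraminifera and bottom-water carbonate-ion saturation. A minimal sketch of that conversion follows; the slope and intercept are placeholders, not the study's calibration.

```python
# Hypothetical linear calibration from benthic foraminiferal B/Ca
# (umol/mol) to carbonate-ion saturation Delta[CO3^2-] (umol/kg).
# Both coefficients are placeholders chosen for illustration only.

SLOPE_UMOLMOL_PER_UMOLKG = 1.1  # assumed sensitivity of B/Ca to saturation
INTERCEPT_UMOLMOL = 180.0       # assumed B/Ca at zero saturation anomaly

def delta_co3(b_ca_umolmol: float) -> float:
    """Invert the linear calibration for one B/Ca measurement."""
    return (b_ca_umolmol - INTERCEPT_UMOLMOL) / SLOPE_UMOLMOL_PER_UMOLKG

print(delta_co3(200.0))  # ~18 umol/kg above saturation for this example
```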
Abstract:
In this paper we present an adaptive multi-camera system for real-time object detection that efficiently adjusts the computational requirements of the video-processing blocks to the available processing power and to the activity of the scene. The system is based on a two-level adaptation strategy that operates at both the local and the global level. Object detection is based on a Gaussian mixture model background subtraction algorithm. Results show that the system can efficiently adapt the algorithm parameters without a significant loss in detection accuracy.
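A minimal sketch of the local adaptation idea follows, assuming OpenCV's Gaussian-mixture background subtractor and a made-up frame-skip rule driven by scene activity; the thresholds and the adaptation policy are illustrative, not the paper's two-level strategy.

```python
import cv2

# Per-camera (local) adaptation sketch: a Gaussian-mixture background
# subtractor whose frame-skip factor drops when scene activity is high.
# The activity threshold and skip values are hypothetical.

subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=False)

def process_stream(capture: "cv2.VideoCapture"):
    skip = 1  # process every frame while the scene is busy
    frame_idx = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % skip:
            continue  # shed load by dropping frames in quiet scenes
        mask = subtractor.apply(frame)  # foreground mask from the GMM
        activity = cv2.countNonZero(mask) / mask.size
        skip = 1 if activity > 0.01 else 2  # local adaptation rule
        yield mask

# Usage: for mask in process_stream(cv2.VideoCapture("camera1.avi")): ...
```

A global controller, as described in the paper, would additionally rebalance such parameters across cameras according to the total available processing power.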
Abstract:
Multichannel audio has advanced by leaps and bounds in recent years, not only in playback techniques but also in recording techniques. This project therefore brings both together: a microphone array, the EigenMike32 from MH Acoustics, and a playback system using Wave Field Synthesis technology, installed by Iosono at Jade Hochschule Oldenburg. To link these two ends of the audio chain, two different encodings are proposed: direct reproduction of the EigenMike32's horizontal ring of capsules, and third-order Ambisonics (Higher Order Ambisonics, HOA), an encoding technique based on spherical harmonics that simulates the acoustic field rather than the individual sources. Both were developed in the Matlab environment, supported by the Isophonics script collection called the Spatial Audio Matlab Toolbox. To evaluate them, a series of listening tests was carried out comparing them with simultaneous recordings made with a dummy head, assumed to be the method closest to human hearing. These tests also included recordings and encodings made with a Schoeps double M/S (DMS) rig, which are explained in the project "3D audio rendering through Ambisonics techniques: from multi-microphone recordings (DMS Schoeps) to a WFS system, through Matlab". The test procedure consisted of a battery of four audio excerpts, repeated four times for each recorded situation (a conversation, a lecture, a street, and a university canteen). The results were unexpected: the third-order HOA encoding scored below the "Good" rating, possibly because material intended for a three-dimensional array was rendered on a two-dimensional one. The encoding that simply extracted the horizontal-plane microphones, on the other hand, maintained the "Good" rating in all situations. It is concluded that HOA should be tested further with deeper knowledge of spherical harmonics, while the much simpler encoder can be used for situations that are not spatially complex.
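For intuition about the spherical-harmonic encoding compared above, the sketch below implements a first-order Ambisonics (B-format) encoder as a simpler stand-in for the project's third-order HOA; higher orders follow the same pattern with additional harmonic channels. The function name and test signal are assumptions for illustration.

```python
import numpy as np

# First-order Ambisonics (traditional B-format) encoding of a mono
# source: the signal is weighted by the real spherical-harmonic gains
# for the source direction. Third order adds more channels per order.

def encode_foa(s: np.ndarray, az_rad: float, el_rad: float) -> np.ndarray:
    """Return the four B-format channels (W, X, Y, Z) for a mono source."""
    w = s / np.sqrt(2.0)                     # omnidirectional component
    x = s * np.cos(az_rad) * np.cos(el_rad)  # front-back figure-of-eight
    y = s * np.sin(az_rad) * np.cos(el_rad)  # left-right figure-of-eight
    z = s * np.sin(el_rad)                   # up-down figure-of-eight
    return np.stack([w, x, y, z])

# Example: a one-second noise burst placed 45 degrees to the left
sig = np.random.randn(48000)
bformat = encode_foa(sig, np.deg2rad(45.0), 0.0)  # shape (4, 48000)
```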
Abstract:
This paper describes the multi-agent organization of a computer system designed to assist operators in decision making during emergencies. The application was developed for emergencies caused by river floods. It operates in real time on data recorded by sensors (rainfall, water levels, flows, etc.) and applies multi-agent techniques to interpret the data, predict future behavior, and recommend control actions. The system includes an advanced knowledge-based architecture combining multiple symbolic representations with uncertainty models (Bayesian networks). The system has been applied and validated at two sites in Spain (the Jucar basin and the South basin).
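To illustrate the kind of uncertainty reasoning the abstract attributes to Bayesian networks, the toy example below updates the belief in heavy rainfall from a river-flow sensor reading with Bayes' rule over a two-node chain. All probabilities are invented for illustration and are not part of the deployed system.

```python
# Toy two-node Bayesian update: Rain -> HighFlow. A real network, as in
# the system described above, chains many such nodes (rainfall, levels,
# flows) and propagates evidence through all of them.

P_RAIN = 0.2                  # assumed prior P(heavy rain)
P_FLOW_GIVEN_RAIN = 0.9       # assumed P(high flow | heavy rain)
P_FLOW_GIVEN_NO_RAIN = 0.05   # assumed P(high flow | no heavy rain)

def posterior_rain(high_flow_observed: bool) -> float:
    """P(heavy rain | flow sensor reading), by Bayes' rule."""
    like_rain = P_FLOW_GIVEN_RAIN if high_flow_observed else 1 - P_FLOW_GIVEN_RAIN
    like_dry = P_FLOW_GIVEN_NO_RAIN if high_flow_observed else 1 - P_FLOW_GIVEN_NO_RAIN
    evidence = like_rain * P_RAIN + like_dry * (1 - P_RAIN)
    return like_rain * P_RAIN / evidence

print(posterior_rain(True))   # ~0.82: a high-flow reading raises the belief
print(posterior_rain(False))  # ~0.03: a normal reading lowers it
```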
Abstract:
CIAO is an advanced programming environment supporting logic and constraint programming. It offers a simple concurrent kernel on top of which declarative and non-declarative extensions are added via libraries. Libraries are available for supporting the ISO-Prolog standard, several constraint domains, functional and higher-order programming, concurrent and distributed programming, internet programming, and others. The source language allows declaring properties of predicates via assertions, including types and modes. Such properties are checked at compile time or at run time. The compiler and system architecture are designed to natively support modular global analysis, with the two objectives of proving the properties stated in assertions and performing program optimizations, including transparently exploiting parallelism in programs. The purpose of this paper is to report on recent progress made in the context of the CIAO system, with special emphasis on the capabilities of the compiler, the techniques used for supporting such capabilities, and the results already obtained with the system in the areas of program analysis and transformation.
Abstract:
Energy efficiency is one of the goals of Smart Building initiatives. This paper presents an Open Energy Management System consisting of an ontology-based multi-technology platform and a wireless transducer network using 6LoWPAN communication technology. The system allows the integration of several building automation protocols and eases the development of different kinds of services on top of them. The system has been implemented and tested at the Energy Efficiency Research Facility at CeDInt-UPM.