936 results for Scheduler simulator


Relevance:

10.00%

Publisher:

Abstract:

Introduction: Laparoscopic surgery holds a privileged place within minimally invasive surgery, offering patients and hospital institutions important benefits compared with conventional surgery. Surgeons in training should receive adequate simulator-based laparoscopic training before practising on patients, reducing the morbidity and mortality associated with the learning curve. This study aims to describe and identify changes in surgical skills and operating times before and after training with a low-cost simulator and a virtual simulator. Methods: A quasi-experiment (before-and-after design) was carried out with 20 residents, of whom 18 completed the study; all received directed training in performing laparoscopic procedures on simulators. The statistical analysis comprised univariate and bivariate analyses, with statistical significance assessed using the chi-square test and Fisher's exact test, as well as Student's t-test for paired samples and the Wilcoxon test for the numerical variables. Results: The low-cost simulator showed statistical dependence for the tissue-handling variable in exercises 3 and 10, with p=0.035 and p=0.028 respectively. 60% of the exercises showed a statistically significant difference in the time taken. For the virtual simulator, every exercise showed significant differences in at least one of the evaluated variables. Conclusions: Training with either the low-cost simulator or the virtual simulator improves the surgical skills required to perform a laparoscopic procedure.
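
The paired before-and-after comparison described in the methods (Student's t-test for paired samples alongside the Wilcoxon test for the numerical variables) can be illustrated with a short sketch. The completion times below are invented purely for illustration; only the choice of tests comes from the abstract.

    # A minimal sketch of the paired analysis named above, using invented
    # before/after task completion times (seconds), not the study's data.
    import numpy as np
    from scipy import stats

    before = np.array([310, 295, 340, 280, 305, 330, 290, 315, 300, 325])
    after  = np.array([250, 240, 300, 230, 260, 270, 245, 255, 240, 265])

    # Paired Student's t-test for normally distributed differences.
    t_stat, p_t = stats.ttest_rel(before, after)

    # Wilcoxon signed-rank test as the non-parametric alternative.
    w_stat, p_w = stats.wilcoxon(before, after)

    print(f"paired t-test: t={t_stat:.2f}, p={p_t:.4f}")
    print(f"Wilcoxon:      W={w_stat:.1f}, p={p_w:.4f}")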

Relevance:

10.00%

Publisher:

Abstract:

The development of the teaching archive of diagnostic images makes it possible to share and disseminate knowledge of the collection of radiological cases and images quickly and easily with the staff of the Clínica Fundación Cardio-Infantil – Instituto de Cardiología, through the "e-cardio" web portal, contributing to the academic training of medical, technical and administrative personnel.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we employ techniques from artificial intelligence, such as reinforcement learning and agent-based modeling, as building blocks of a computational model for an economy based on conventions. First we model the interaction among firms in the private sector. These firms behave in an information environment based on conventions, meaning that a firm is likely to behave as its neighbors do if it observes that their actions lead to a good payoff. We then propose reinforcement learning as a computational model for the role of the government in the economy, as the agent that determines fiscal policy with the objective of maximizing the growth of the economy. We present an implementation of a simulator of the proposed model based on SWARM, which employs the SARSA(λ) algorithm combined with a multilayer perceptron as the function approximator for the action-value function.
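
A minimal tabular sketch of the SARSA(λ) update named above, with a lookup table standing in for the multilayer-perceptron approximator the paper combines it with; the toy environment, sizes and hyperparameters are invented for illustration.

    # Tabular SARSA(lambda) with accumulating eligibility traces.
    import numpy as np

    n_states, n_actions = 10, 4
    alpha, gamma, lam, epsilon = 0.1, 0.95, 0.9, 0.1
    Q = np.zeros((n_states, n_actions))   # action-value estimates
    E = np.zeros_like(Q)                  # eligibility traces

    def policy(s, rng):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[s]))

    rng = np.random.default_rng(0)
    for episode in range(200):
        E[:] = 0.0
        s = int(rng.integers(n_states))
        a = policy(s, rng)
        for step in range(50):
            # stand-in environment: random transition, reward at state 0
            s2 = int(rng.integers(n_states))
            r = 1.0 if s2 == 0 else 0.0
            a2 = policy(s2, rng)
            delta = r + gamma * Q[s2, a2] - Q[s, a]   # TD error
            E[s, a] += 1.0                            # accumulate trace
            Q += alpha * delta * E                    # update traced pairs
            E *= gamma * lam                          # decay traces
            s, a = s2, a2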

Relevance:

10.00%

Publisher:

Abstract:

Introduction: The quality of chest compressions matters during pediatric resuscitation and is affected by several factors, among them rescuer fatigue. Fatigue may in turn depend on how the compressions are delivered, since the presence or absence of an advanced airway device determines whether the compressions are interrupted or continuous. In this study a clinical simulation was carried out to evaluate rescuer fatigue with patients with and without an advanced airway device. Methods: Twelve participants performed chest compressions on a clinical simulator, both for maneuver 1, consisting of cycles interrupted in order to deliver ventilations, and for maneuver 2, in which the compressions were continuous. Compression quality, VO2 max and fatigue (on the Borg RPE 6-20 scale) were measured. Results: Compression quality declined in both groups after minute 2, and more rapidly when the compressions were uninterrupted. Fatigue increased when the compressions were continuous. Discussion: Fatigue increases in direct proportion to resuscitation time, while compression quality falls as the sensation of fatigue rises, especially after minute 2. Two minutes may therefore be the ideal interval both for achieving quality compressions and for replacing the person performing them.

Relevance:

10.00%

Publisher:

Abstract:

Network management is a very broad field covering many different aspects. This doctoral thesis focuses on resource management in broadband networks that provide mechanisms for resource reservation, such as Asynchronous Transfer Mode (ATM) or Multi-Protocol Label Switching (MPLS). Logical networks can be established using ATM's Virtual Paths (VP) or MPLS's Label Switched Paths (LSP), which we generically call logical paths. Network users then use these logical paths, which may have resources assigned to them, to establish their communications. Moreover, logical paths are very flexible and their characteristics can be changed dynamically. This work focuses in particular on the dynamic management of this logical network so as to maximize its performance and adapt it to the offered connections.

In this scenario, several mechanisms can affect and modify the characteristics of the logical paths (bandwidth, route, etc.). These include load-balancing mechanisms (bandwidth reassignment and rerouting) and fault-restoration mechanisms (the use of backup logical paths). Both kinds of mechanism can modify the logical network and manage the resources (bandwidth) of the physical links, so they need to be coordinated to avoid interference.

Conventional resource management based on a logical network recalculates the entire logical network periodically (for example, every hour or every day) in a centralized way. This means that adjustments to the logical network are not made at the moment problems actually arise, and it also requires maintaining a centralized view of the whole network. This thesis proposes a distributed architecture based on a multi-agent system. Its main objective is to perform resource management at the logical-network level jointly and in a coordinated way, integrating the bandwidth-readjustment mechanisms with the preplanned restoration mechanisms, including management of the bandwidth reserved for restoration. This management is carried out continuously rather than periodically, acting when a problem is detected (when a logical path is congested, that is, when it is rejecting user connection requests because it is saturated), and in a completely distributed way, without maintaining a global view of the network. The proposed architecture thus makes small adjustments to the logical network, adapting it continuously to user demand. It also takes other objectives into account, such as scalability, modularity, robustness, flexibility and simplicity.

The proposed multi-agent system is structured in two layers of agents: monitoring (M) agents and performance (P) agents. These agents are located at the different nodes of the network: there is one P agent and several M agents at each node, the latter subordinate to the former, so the proposed architecture can be seen as a hierarchy of agents. Each agent is responsible for monitoring and controlling the resources to which it is assigned. Several experiments were carried out using a connection-level distributed simulator of our own design.
The results show that the proposed architecture is able to perform its assigned tasks of congestion detection, dynamic bandwidth reassignment and rerouting in coordination with the preplanned restoration mechanisms and the management of the bandwidth reserved for restoration. The distributed architecture offers acceptable scalability and robustness thanks to its flexibility and modularity.
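
A minimal sketch of the congestion trigger described above: a node-local performance (P) agent watching the rejection rate of a logical path and growing its bandwidth when saturation is detected. The class names, threshold and step size are invented for illustration and are not taken from the thesis.

    # Invented sketch of a node-local P agent reacting to congestion.
    from dataclasses import dataclass

    @dataclass
    class LogicalPath:
        capacity: float          # bandwidth assigned to the logical path
        in_use: float = 0.0      # bandwidth consumed by accepted connections
        accepted: int = 0
        rejected: int = 0        # rejections since the last check

        def offer(self, bw: float) -> bool:
            # admit the connection only if the assigned bandwidth suffices
            if self.in_use + bw <= self.capacity:
                self.in_use += bw
                self.accepted += 1
                return True
            self.rejected += 1
            return False

    class PAgent:
        REJECT_THRESHOLD = 0.2   # fraction of rejected offers meaning "congested"

        def __init__(self, path: LogicalPath, step: float = 1.0):
            self.path, self.step = path, step

        def check(self) -> None:
            total = self.path.accepted + self.path.rejected
            if total and self.path.rejected / total > self.REJECT_THRESHOLD:
                # congested: adjust this path now, rather than waiting for a
                # periodic, centralized recalculation of the whole network
                self.path.capacity += self.step
            self.path.accepted = self.path.rejected = 0   # reset the window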


Relevance:

10.00%

Publisher:

Abstract:

Urban flood inundation models require considerable data for their parameterisation, calibration and validation. TerraSAR-X should be suitable for urban flood detection because of its high resolution in stripmap/spotlight modes. The paper describes ongoing work on a project to assess how well TerraSAR-X can detect flooded regions in urban areas, and how well these can constrain the parameters of an urban flood model. The study uses a TerraSAR-X image of a 1-in-150 year flood near Tewkesbury, UK, in 2007, for which contemporaneous aerial photography exists for validation. The DLR SETES SAR simulator was used in conjunction with LiDAR data to estimate regions of the image in which water would not be visible due to shadow or layover caused by buildings and vegetation. An algorithm for the delineation of flood water in urban areas is described, together with its validation using the aerial photographs.
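
A simplified geometric stand-in for the shadow estimation that the SETES simulator performs with the LiDAR heights: a toy scanline sketch that marks cells hidden from the radar behind tall objects. It is illustrative only and is not the SETES implementation.

    # Toy radar shadow mask from a height model and an incidence angle.
    import numpy as np

    def shadow_mask(dsm: np.ndarray, incidence_deg: float,
                    cell_size: float) -> np.ndarray:
        """dsm: surface heights (m), rows aligned with the radar range
        direction, sensor side at row 0. Returns True where shadowed."""
        n_rows, n_cols = dsm.shape
        mask = np.zeros_like(dsm, dtype=bool)
        # height drop of the grazing ray per ground-range cell
        ray_drop = cell_size / np.tan(np.radians(incidence_deg))
        for c in range(n_cols):
            horizon = -np.inf
            for r in range(n_rows):
                horizon -= ray_drop
                if dsm[r, c] < horizon:
                    mask[r, c] = True      # hidden behind a building/tree
                else:
                    horizon = dsm[r, c]    # new blocking height
        return mask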

Relevance:

10.00%

Publisher:

Abstract:

Flooding is a major hazard in both rural and urban areas worldwide, but it is in urban areas that the impacts are most severe. An investigation of the ability of high resolution TerraSAR-X Synthetic Aperture Radar (SAR) data to detect flooded regions in urban areas is described. The study uses a TerraSAR-X image of a 1-in-150 year flood near Tewkesbury, UK, in 2007, for which contemporaneous aerial photography exists for validation. The DLR SAR End-To-End simulator (SETES) was used in conjunction with airborne scanning laser altimetry (LiDAR) data to estimate regions of the image in which water would not be visible due to shadow or layover caused by buildings and taller vegetation. A semi-automatic algorithm for the detection of floodwater in urban areas is described, together with its validation using the aerial photographs. 76% of the urban water pixels visible to TerraSAR-X were correctly detected, with an associated false positive rate of 25%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 58% and 19% respectively. The algorithm is aimed at producing urban flood extents with which to calibrate and validate urban flood inundation models, and these findings indicate that TerraSAR-X is capable of providing useful data for this purpose.
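
The 76% and 25% figures are a correct-detection rate and a false-positive rate. A small sketch of how such rates are commonly computed from a detected-water mask and a validation mask; the arrays are invented, and the paper's exact definitions may differ.

    # Detection and false-positive rates from binary masks.
    import numpy as np

    detected = np.array([[1, 1, 0], [0, 1, 0], [1, 0, 0]], dtype=bool)  # algorithm
    truth    = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]], dtype=bool)  # aerial photo

    tp = np.sum(detected & truth)    # water pixels correctly flagged
    fp = np.sum(detected & ~truth)   # dry pixels wrongly flagged
    fn = np.sum(~detected & truth)   # water pixels missed

    detection_rate = tp / (tp + fn)        # fraction of true water found
    false_positive_rate = fp / (fp + tp)   # fraction of flagged pixels that
                                           # are dry (one common convention)
    print(detection_rate, false_positive_rate)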

Relevance:

10.00%

Publisher:

Abstract:

Flooding is a major hazard in both rural and urban areas worldwide, but it is in urban areas that the impacts are most severe. An investigation of the ability of high resolution TerraSAR-X data to detect flooded regions in urban areas is described. An important application for this would be the calibration and validation of the flood extent predicted by an urban flood inundation model. To date, research on such models has been hampered by a lack of suitable distributed validation data. The study uses a 3 m resolution TerraSAR-X image of a 1-in-150 year flood near Tewkesbury, UK, in 2007, for which contemporaneous aerial photography exists for validation. The DLR SETES SAR simulator was used in conjunction with airborne LiDAR data to estimate regions of the TerraSAR-X image in which water would not be visible due to radar shadow or layover caused by buildings and taller vegetation, and these regions were masked out in the flood detection process. A semi-automatic algorithm for the detection of floodwater was developed, based on a hybrid approach. Flooding in rural areas adjacent to the urban areas was detected using an active contour model (snake) region-growing algorithm seeded using the un-flooded river channel network, which was applied to the TerraSAR-X image fused with the LiDAR DTM to ensure the smooth variation of heights along the reach. A simpler region-growing approach was used in the urban areas, initialized using knowledge of the flood waterline in the rural areas. Seed pixels having low backscatter were identified in the urban areas using supervised classification based on training areas for water taken from the rural flood, and non-water taken from the higher urban areas. Seed pixels were required to have heights less than a spatially-varying height threshold determined from nearby rural waterline heights. Seed pixels were clustered into urban flood regions based on their close proximity, rather than requiring that all pixels in the region should have low backscatter. This approach was taken because urban water backscatter values appeared to be corrupted in some pixels, perhaps due to contributions from side-lobes of strong reflectors nearby. The TerraSAR-X urban flood extent was validated using the flood extent visible in the aerial photos. 76% of the urban water pixels visible to TerraSAR-X were correctly detected, with an associated false positive rate of 25%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 58% and 19% respectively. These findings indicate that TerraSAR-X is capable of providing useful data for the calibration and validation of urban flood inundation models.
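
A rough sketch of the proximity-based clustering step described above, in which seed pixels are grouped by closeness rather than by requiring uniformly low backscatter across the region. The data, thresholds and morphological radius are invented; this is not the paper's implementation.

    # Cluster dark, low-lying seed pixels into candidate flood regions.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    backscatter = rng.normal(-12.0, 3.0, (100, 100))   # dB, stand-in SAR image
    heights = rng.normal(10.0, 0.5, (100, 100))        # m, stand-in LiDAR DTM
    waterline = 10.2    # height threshold from nearby rural waterline

    # seeds: dark pixels lying below the (here constant) waterline height
    seeds = (backscatter < -15.0) & (heights < waterline)

    # cluster seeds by proximity: dilate, then label connected components,
    # so nearby seeds merge even across bright (corrupted) pixels between them
    grown = ndimage.binary_dilation(seeds, iterations=3)
    labels, n_regions = ndimage.label(grown)
    flood = grown & (heights < waterline)   # keep only plausibly low cells
    print(n_regions, "candidate urban flood regions")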

Relevance:

10.00%

Publisher:

Abstract:

The Joint UK Land Environment Simulator (JULES) was run offline to investigate the sensitivity of the land surface to changes in land surface type over South Africa. Sensitivity tests were made in idealised experiments in which the actual land surface cover was replaced by a single homogeneous surface type; the vegetation surface types used in some of the experiments are static. The experimental runs were evaluated against the control. The model results show, among other things, that changing the surface cover changes other variables such as soil moisture, albedo and net radiation. These changes are also visible during spin-up, with different surfaces spinning up over different numbers of cycles. Because JULES is the land surface model of the Unified Model, the results could be more physically meaningful if it were coupled to the Unified Model.
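
The idealised-experiment design, one homogeneous surface type per run compared against a control, can be sketched schematically as below. Here run_jules is a placeholder returning invented diagnostics, not the JULES interface, and the surface-type list is illustrative.

    # Schematic driver loop for homogeneous-surface sensitivity experiments.
    import numpy as np

    SURFACE_TYPES = ["broadleaf", "needleleaf", "C3 grass", "C4 grass",
                     "shrub", "urban", "bare soil"]

    def run_jules(cover_fractions):
        """Placeholder for an offline model run; returns invented diagnostics."""
        rng = np.random.default_rng(int(np.argmax(cover_fractions)))
        return {"soil_moisture": rng.random(), "albedo": rng.random(),
                "net_radiation": rng.random()}

    control_cover = np.full(len(SURFACE_TYPES), 1.0 / len(SURFACE_TYPES))
    control = run_jules(control_cover)

    for i, surface in enumerate(SURFACE_TYPES):
        cover = np.zeros(len(SURFACE_TYPES))
        cover[i] = 1.0                      # single homogeneous surface type
        result = run_jules(cover)
        diffs = {k: result[k] - control[k] for k in control}
        print(surface, diffs)               # change relative to the control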

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software to annotate the latest version of the human proteome against the latest sequence and structure databases in as short a time as possible. RESULTS: We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using JYDE we have been able to annotate 99.9% of the protein sequences within the human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. CONCLUSION: This study clearly demonstrates the feasibility of carrying out on-demand, high-quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.
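
A toy sketch of the meta-scheduling idea: a layer above independent cluster schedulers that places each job on the Grid domain currently reporting the most free slots. The class names and the placement rule are invented and do not reflect the JYDE implementation.

    # Greedy job placement across independent cluster schedulers.
    from collections import deque

    class ClusterScheduler:
        """Stand-in for an SGE/Condor queue on one Grid domain."""
        def __init__(self, name, slots):
            self.name, self.free_slots, self.queue = name, slots, deque()

        def submit(self, job):
            self.queue.append(job)

    class MetaScheduler:
        def __init__(self, clusters):
            self.clusters = clusters

        def dispatch(self, jobs):
            # send each job to the domain with the most free slots
            for job in jobs:
                target = max(self.clusters, key=lambda c: c.free_slots)
                target.submit(job)
                target.free_slots = max(0, target.free_slots - 1)

    clusters = [ClusterScheduler("domainA", 200),
                ClusterScheduler("domainB", 180),
                ClusterScheduler("domainC", 120)]
    MetaScheduler(clusters).dispatch([f"fold_job_{i}" for i in range(500)])
    print({c.name: len(c.queue) for c in clusters})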

Relevance:

10.00%

Publisher:

Abstract:

Motorcyclists and a matched group of non-motorcycling car drivers were assessed on behavioral measures known to relate to accident involvement. Using a range of laboratory measures, we found that motorcyclists chose faster speeds than the car drivers, overtook more, and pulled into smaller gaps in traffic, though they did not travel any closer to the vehicle in front. The speed and following distance findings were replicated by two further studies involving unobtrusive roadside observation. We suggest that the increased risk-taking behavior of motorcyclists was only likely to account for a small proportion of the difference in accident risk between motorcyclists and car drivers. A second group of motorcyclists was asked to complete the simulator tests as if driving a car. They did not differ from the non-motorcycling car drivers on the risk-taking measures but were better at hazard perception. There were also no differences for sensation seeking, mild social deviance, and attitudes to riding/driving, indicating that the risk-taking tendencies of motorcyclists did not transfer beyond motorcycling, while their hazard perception skill did.

Relevance:

10.00%

Publisher:

Abstract:

How can a bridge be built between autonomic computing approaches and parallel computing systems? The work reported in this paper is motivated by bridging this gap through swarm-array computing, a novel technique for achieving autonomy in distributed parallel computing systems. Of the three proposed approaches, the second, 'Intelligent Agents', is the focus of this paper. The task to be executed on parallel computing cores is considered as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and can be seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-* objectives of autonomic computing. The feasibility of the proposed approach is validated on a multi-agent simulator.
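
A minimal sketch of the carrier-agent behaviour described above: a task rides on an agent that migrates to a healthy core when failure of its current core is predicted. The failure predictor and all names are invented for illustration.

    # Carrier agents migrating tasks away from cores predicted to fail.
    import random

    class Core:
        def __init__(self, core_id):
            self.core_id = core_id
            self.health = 1.0          # 1.0 = healthy, lower = failing

        def failure_predicted(self):
            return self.health < 0.5   # toy failure-prediction rule

    class CarrierAgent:
        def __init__(self, task, core):
            self.task, self.core = task, core

        def step(self, cores):
            # self-healing behaviour: migrate before the predicted failure
            if self.core.failure_predicted():
                healthy = max(cores, key=lambda c: c.health)
                print(f"{self.task}: core {self.core.core_id} -> {healthy.core_id}")
                self.core = healthy

    cores = [Core(i) for i in range(4)]
    agents = [CarrierAgent(f"task{i}", random.choice(cores)) for i in range(6)]
    for tick in range(5):
        for c in cores:
            c.health = max(0.0, c.health - random.random() * 0.3)  # degrade
        for a in agents:
            a.step(cores)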

Relevance:

10.00%

Publisher:

Abstract:

Space applications are challenged by the reliability of parallel computing systems (FPGAs) employed in spacecraft due to Single-Event Upsets. The work reported in this paper aims to achieve self-managing systems that are reliable for space applications by applying autonomic computing constructs to parallel computing systems. A novel technique, 'Swarm-Array Computing', inspired by swarm robotics and built on the foundations of autonomic and parallel computing, is proposed as a path to achieve autonomy. The constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm and the landscape, is considered. Three approaches that bind these constituents together are proposed. The feasibility of one of the three proposed approaches is validated on the SeSAm multi-agent simulator, with landscapes representing the computing space and problem generated using MATLAB.
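
The paper generates its landscapes in MATLAB; the sketch below is a rough NumPy analogue of what a landscape representing the computing space might look like, with invented dimensions and fields.

    # Toy landscape: a 2D grid of cores with seeded health values, onto
    # which a swarm of tasks is scattered. All values are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    GRID = (16, 16)                            # toy FPGA core array

    core_health = rng.uniform(0.7, 1.0, GRID)  # initial per-core reliability
    task_map = np.full(GRID, -1, dtype=int)    # -1 = no task on this core

    # place a swarm of tasks onto the healthiest cells
    n_tasks = 32
    flat_order = np.argsort(core_health, axis=None)[::-1][:n_tasks]
    for task_id, flat_idx in enumerate(flat_order):
        task_map[np.unravel_index(flat_idx, GRID)] = task_id

    print(f"placed {n_tasks} tasks; mean health of occupied cores:",
          core_health[task_map >= 0].mean().round(3))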

Relevance:

10.00%

Publisher:

Abstract:

The work reported in this paper proposes 'Intelligent Agents', a swarm-array computing approach focused on applying autonomic computing concepts to parallel computing systems and building reliable systems for space applications. Swarm-array computing is a novel computing approach inspired by swarm robotics, considered as a path to achieve autonomy in parallel computing systems. In the intelligent agent approach, a task to be executed on parallel computing cores is considered as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and can be seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-* objectives of autonomic computing. The approach is validated on a multi-agent simulator.