922 results for Application specific algorithm
Abstract:
This paper proposes a pose-based algorithm to solve the full SLAM problem for an autonomous underwater vehicle (AUV) navigating in an unknown and possibly unstructured environment. The technique incorporates probabilistic scan matching with range scans gathered from a mechanical scanning imaging sonar (MSIS) and the robot's dead-reckoning displacements estimated from a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method uses two extended Kalman filters (EKF). The first estimates the local path travelled by the robot while acquiring the scan, together with its uncertainty, and provides position estimates for correcting the distortions that the vehicle's motion produces in the acoustic images. The second is an augmented-state EKF that estimates and maintains the poses of the registered scans. The raw sensor data are processed and fused online. No prior structural information or initial pose is assumed. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.
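A hedged illustration of the estimator family the abstract relies on: the Python sketch below runs a generic planar-pose EKF predict/update cycle, where dead-reckoning increments drive the prediction and a scan-matching result acts as a direct pose observation. The motion model, noise matrices, and every name here are illustrative assumptions, not the paper's actual formulation (which works with full scan poses and an augmented state).

```python
import numpy as np

def predict(x, P, u, Q):
    """Propagate pose x = [px, py, yaw] by a body-frame odometry increment u."""
    c, s = np.cos(x[2]), np.sin(x[2])
    x_pred = x + np.array([c * u[0] - s * u[1],
                           s * u[0] + c * u[1],
                           u[2]])
    F = np.array([[1.0, 0.0, -s * u[0] - c * u[1]],   # Jacobian of the motion model
                  [0.0, 1.0,  c * u[0] - s * u[1]],
                  [0.0, 0.0, 1.0]])
    return x_pred, F @ P @ F.T + Q

def update(x, P, z, R):
    """Correct the pose with a direct pose observation z (e.g. a matched scan pose).
    Angle wrapping of the yaw innovation is ignored for brevity."""
    H = np.eye(3)                                   # the measurement is the full pose
    y = z - H @ x                                   # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

# Toy run: one dead-reckoning step, one scan-matching correction.
x, P = np.zeros(3), np.eye(3) * 0.01
x, P = predict(x, P, u=np.array([1.0, 0.0, 0.05]), Q=np.eye(3) * 1e-3)
x, P = update(x, P, z=np.array([0.98, 0.02, 0.04]), R=np.eye(3) * 1e-2)
print(x)
```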
Abstract:
The authors focus on one of the methods for connection acceptance control (CAC) in an ATM network: the convolution approach. With the aim of reducing the cost in terms of calculation and storage requirements, they propose the use of the multinomial distribution function. This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements, which in turn makes a simple deconvolution process possible. Moreover, under certain conditions, additional improvements may be achieved.
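As a hedged sketch of the computational idea: for a class of statistically identical sources, the multinomial coefficient gives the probability of each occupancy vector directly, so the distribution of the instantaneous aggregate bandwidth can be obtained without convolving per-connection distributions one by one. The function name, its interface, and the on-off example below are assumptions for illustration.

```python
from math import factorial
from itertools import product
from collections import defaultdict

def aggregate_bandwidth_pmf(n, levels, probs):
    """PMF {total_bandwidth: probability} for n i.i.d. sources, each occupying
    one of the given bandwidth levels with the given probabilities."""
    pmf = defaultdict(float)
    m = len(levels)
    for ks in product(range(n + 1), repeat=m):   # occupancy vectors (k_1..k_m)
        if sum(ks) != n:
            continue
        coef = factorial(n)
        for k in ks:
            coef //= factorial(k)                # multinomial coefficient
        p = float(coef)
        for k, pr in zip(ks, probs):
            p *= pr ** k
        pmf[sum(k * b for k, b in zip(ks, levels))] += p
    return dict(pmf)

# Example: 10 on-off sources, each active (2 Mb/s) with probability 0.3.
print(aggregate_bandwidth_pmf(10, levels=[0, 2], probs=[0.7, 0.3]))
```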
Abstract:
The aim of traffic engineering is to optimise network resource utilization. Although several works on minimizing network resource utilization have been published, few have focused on LSR label space. This paper proposes an algorithm that uses MPLS label stack features in order to reduce the number of labels used in LSP forwarding. Some tunnelling methods and their MPLS implementation drawbacks are also discussed. The algorithm described sets up the NHLFE tables in each LSR, creating asymmetric tunnels when possible. Experimental results show that the algorithm achieves a large reduction factor in the label space. The work presented here applies to both types of connections: P2MP and P2P.
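A hedged toy model of the label-saving effect the abstract reports: when several LSPs share a segment, an outer tunnel label covers the segment's interior LSRs, which then hold a single entry instead of one per LSP. The counting below ignores penultimate-hop popping and real NHLFE semantics; all data structures and names are invented for illustration.

```python
from collections import defaultdict

def labels_per_lsr(lsps, tunnel=None):
    """Count incoming labels at each LSR; `tunnel` is an optional shared
    segment [ingress, ..., egress] whose interior hops are collapsed."""
    counts = defaultdict(int)
    interior = set(tunnel[1:-1]) if tunnel else set()
    for path in lsps:
        in_tunnel = tunnel is not None and all(n in path for n in tunnel)
        for node in path[1:]:
            if in_tunnel and node in interior:
                continue              # hop covered by the shared tunnel label
            counts[node] += 1         # per-LSP label at this hop
    if tunnel:
        for node in tunnel[1:]:
            counts[node] += 1         # the tunnel's own label at each hop
    return dict(counts)

lsps = [["A", "W", "X", "Y", "Z", "B"],
        ["C", "W", "X", "Y", "Z", "D"],
        ["E", "W", "X", "Y", "Z", "F"]]
print(sum(labels_per_lsr(lsps).values()))                               # 15 labels
print(sum(labels_per_lsr(lsps, tunnel=["W", "X", "Y", "Z"]).values()))  # 12 labels
```

The saving grows with the length of the shared segment and the number of LSPs nested inside the tunnel.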
Abstract:
In computer graphics, global illumination algorithms take into account not only the light that comes directly from the sources but also the light interreflections. Algorithms of this kind produce very realistic images, but at a high computational cost, especially when dealing with complex environments. Parallel computation has been successfully applied to such algorithms in order to make it possible to compute highly realistic images in a reasonable time. We introduce here a speculation-based parallel solution for a global illumination algorithm in the context of radiosity, in which we have taken advantage of the hierarchical nature of such an algorithm.
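For context, a minimal sketch of the system such solvers iterate, B = E + ρ(F·B): the gather step is repeated to a fixed point. This only illustrates the underlying radiosity equation that the speculation-based parallel hierarchical solver accelerates; the patch values below are made up.

```python
import numpy as np

def solve_radiosity(E, rho, F, iters=100):
    """Jacobi-style iteration of B = E + rho * (F @ B)."""
    B = E.copy()
    for _ in range(iters):
        B = E + rho * (F @ B)
    return B

E = np.array([1.0, 0.0, 0.0])       # emitted radiosity per patch
rho = np.array([0.5, 0.7, 0.3])     # diffuse reflectance per patch
F = np.array([[0.0, 0.4, 0.3],      # form factors F[i, j]
              [0.4, 0.0, 0.2],
              [0.3, 0.2, 0.0]])
print(solve_radiosity(E, rho, F))
```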
Abstract:
In this paper, different recovery methods applied at different network layers and time scales are used in order to enhance network reliability. Each layer deploys its own fault management methods; however, current recovery methods are applied to only one specific layer. New protection schemes, based on the proposed partial disjoint path algorithm, are defined in order to avoid protection duplication in a multi-layer scenario. The new protection schemes also encompass shared segment backup computation and shared risk link group identification. A complete set of experiments demonstrates the efficiency of the proposed methods relative to previous ones in terms of the resources used to protect the network, the failure recovery time, and the request rejection ratio.
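As a hedged baseline for the protection schemes discussed: a primary shortest path is computed, then a backup that avoids the primary's (directed) links. The paper's partial disjoint path algorithm relaxes full disjointness; this sketch shows only the fully link-disjoint building block, on an invented toy graph.

```python
from heapq import heappush, heappop

def dijkstra(adj, src, dst, banned=frozenset()):
    """Shortest path avoiding the directed links in `banned`; returns (cost, path)."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in adj.get(node, []):
            if (node, nxt) not in banned and nxt not in seen:
                heappush(heap, (cost + w, nxt, path + [nxt]))
    return None

adj = {"A": [("B", 1), ("C", 2)], "B": [("D", 1)], "C": [("D", 2)]}
cost, primary = dijkstra(adj, "A", "D")
banned = set(zip(primary, primary[1:]))          # links used by the primary
print(primary, dijkstra(adj, "A", "D", banned))  # backup avoids those links
```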
Abstract:
Custom-coded module for TikiWiki v4.2 that allows keyword search by category. Categories are auto-populated as new ones are created.
Abstract:
A complete set of working mu-specific plugins for creating a structured, secure, LDAP-authenticated WP multisite environment in WP 3.1.3.
Abstract:
Experimental and epidemiological studies demonstrate that fetal growth restriction and low birth weight enhance the risk of chronic diseases in adulthood. Derangements in tissue-specific epigenetic programming of fetal and placental tissues are a suggested mechanism, of which DNA methylation is the best understood. DNA methylation profiling in human tissue is mostly performed on DNA from white blood cells. The objective of this study was to assess DNA methylation profiles of IGF2 DMR and H19 in DNA derived from four tissues of the newborn. From 6 newborns we obtained DNA from fetal placental tissue (n = 5), umbilical cord CD34+ hematopoietic stem cells (HSC) and CD34- mononuclear cells (MNC) (n = 6), and umbilical cord Wharton's jelly (n = 5). HSC were isolated using magnetic-activated cell separation. DNA methylation of the imprinted fetal growth genes IGF2 DMR and H19 was measured in all tissues using quantitative mass spectrometry. ANOVA testing showed tissue-specific differences in DNA methylation of IGF2 DMR (p value 0.002) and H19 (p value 0.001), mainly due to a higher methylation of IGF2 DMR in Wharton's jelly (mean 0.65, sd 0.14) and a lower methylation of H19 in placental tissue (mean 0.25, sd 0.02) compared to the other tissues. This study demonstrates the feasibility of assessing differential tissue-specific DNA methylation. Although the results have to be confirmed in larger sample sizes, our approach provides opportunities to investigate epigenetic profiles as an underlying mechanism of the associations between pregnancy exposures, pregnancy outcome, and disease risk in later life.
Abstract:
Wednesday 23rd April 2014
Speaker(s): Willi Hasselbring
Organiser: Leslie Carr
Time: 23/04/2014 14:00-15:00
Location: B32/3077
File size: 802 MB
Abstract: The internal behavior of large-scale software systems cannot be determined on the basis of static (e.g., source code) analysis alone. Kieker provides complementary dynamic analysis capabilities, i.e., monitoring/profiling and analyzing a software system's runtime behavior. Application Performance Monitoring is concerned with continuously observing a software system's performance-specific runtime behavior, including analyses such as assessing service level compliance or detecting and diagnosing performance problems. Architecture Discovery is concerned with extracting architectural information from an existing software system, including both structural and behavioral aspects such as identifying architectural entities (e.g., components and classes) and their interactions (e.g., local or remote procedure calls). In addition to the Architecture Discovery of Java systems, Kieker supports Architecture Discovery for other platforms, including legacy systems implemented, for instance, in C#, C++, Visual Basic 6, COBOL, or Perl. Thanks to Kieker's extensible architecture, it is easy to implement and use custom extensions and plugins. Kieker was designed for continuous monitoring in production systems, inducing only a very low overhead, which has been evaluated in extensive benchmark experiments. Please refer to http://kieker-monitoring.net/ for more information.
Abstract:
Objectives: To describe whether the intraoperative use of fresh whole blood (FWB) in patients undergoing RACHS 3 and 4 procedures at the Fundación Cardioinfantil reduces postoperative bleeding and the transfusion volume of blood components, compared with patients in whom FWB is not used. Materials and methods: A historical cohort study was conducted, comparing a population under 1 year of age exposed to fresh whole blood with a population of similar characteristics undergoing procedures of similar risk but not exposed. Analyses were performed using standard tests for continuous and discrete variables. A p value below 0.05 was considered significant. Results: 46 patients exposed to FWB were compared with 50 non-exposed patients. The main difference between the groups was age, which was higher in the non-exposed group (3.8 years vs 0.9; p<0.001). The volume of postoperative bleeding was similar; however, patients exposed to FWB received a larger transfusion volume, although the difference was not statistically significant (155 cc vs 203 cc, p=0.9). There were no significant differences in complications or mortality. Conclusions: Our study found no reduction in the volume of postoperative bleeding in patients under 1 year of age undergoing surgeries classified as RACHS 3 and 4 who were exposed to FWB; however, controlled clinical trials are needed to answer the question definitively.
Abstract:
Non-specific Occupational Low Back Pain (NOLBP) is a health condition that generates high absenteeism and disability. Because its causes are multifactorial, it is difficult to establish an accurate diagnosis and prognosis. The clinical prediction of NOLBP is understood as a series of models that integrate multivariate analysis to determine the early diagnosis, course, and occupational impact of this health condition. Objective: to identify predictor factors of NOLBP and the type of material reported in the scientific evidence, and to establish the scope of the prediction. Materials and method: the title search was conducted in the databases PubMed, Science Direct, EBSCO, and Springer, between 1985 and 2012. The selected articles were classified through a bibliometric analysis, allowing the most relevant ones to be identified. Results: 101 titles met the established criteria, but only 43 met the purpose of the review. As for NOLBP prediction, the studies varied with respect to the factors considered, for example: diagnosis, the transition of lumbar pain from acute to chronic, absenteeism from work, disability, and return to work. Conclusion: clinical prediction is considered a strategy for determining the course and prognosis of NOLBP, and for identifying the characteristics that increase the risk of chronicity in workers with this health condition. Likewise, clinical prediction rules are tools that aim to facilitate decision making about the evaluation, diagnosis, prognosis, and intervention for low back pain, and they should incorporate physical, psychological, and social risk factors.
Abstract:
Introduction: Celiac disease (CD) is an intestinal autoimmune disease (AD) triggered by the ingestion of gluten. Given the lack of information on the presence of CD in Latin America (LA), we investigated the prevalence of the disease in this region through a systematic literature review and a meta-analysis. Methods and results: This work was carried out in two phases. The first was a cross-sectional study of 300 Colombian individuals. The second was a systematic review and meta-regression following the PRISMA guidelines. Our results reveal an absence of anti-tissue transglutaminase (tTG) and anti-endomysial IgA (EMA) antibodies in the Colombian population. In the systematic review, 72 articles met the selection criteria; the estimated prevalence of CD in LA was 0.46% to 0.64%, while the prevalence in first-degree relatives was 5.5% to 5.6%, and in patients with type 1 diabetes mellitus it was 4.6% to 8.7%. Conclusion: Our study shows that the prevalence of CD in healthy individuals in LA is similar to that reported in the European population.
Abstract:
Evolutionary computation, and genetic algorithms in particular, are increasingly used by organizations to solve their management and decision-making problems (Apoteker & Barthelemy, 2000). The literature on the subject is growing, and several state-of-the-art reviews have been published. Despite this, no explicit work systematically evaluates the use of genetic algorithms in problems specific to international business (examples include international logistics, international trade, international marketing, international finance, and international strategy). The purpose of this thesis is therefore to produce a situational review of the applications of genetic algorithms in international business.
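For readers unfamiliar with the technique the review surveys, a minimal genetic-algorithm loop (selection, one-point crossover, bit-flip mutation) is sketched below against a toy objective; the encoding and all parameters are illustrative assumptions, not drawn from any surveyed business application.

```python
import random

def ga(fitness, n_bits=20, pop_size=30, generations=100, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)          # one-point crossover
            children.append([bit ^ (random.random() < p_mut)   # bit-flip mutation
                             for bit in a[:cut] + b[cut:]])
        pop = children
    return max(pop, key=fitness)

best = ga(fitness=sum)    # toy objective: maximize the number of ones
print(sum(best), best)
```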
Abstract:
This paper presents a case study that explores the advantages that can be derived from the use of a design support system during the design of wastewater treatment plants (WWTP). With this objective in mind, a simplified but plausible WWTP design case study has been generated with KBDS, a computer-based support system that maintains a historical record of the design process. The study shows how, by employing such a historical record, it is possible to: (1) rank different design proposals responding to a design problem; (2) study the influence of changing the weight of the arguments used in the selection of the most adequate proposal; (3) take advantage of keywords to assist the designer in the search for specific items within the historical records; (4) automatically evaluate the compliance of alternative design proposals with respect to the design objectives; (5) verify the validity of previous decisions after the modification of the current constraints or specifications; (6) re-use the design records when upgrading an existing WWTP or when designing similar facilities; (7) generate documentation of the decision-making process; and (8) associate a variety of documents as annotations to any component in the design history. The paper also shows one possible future role of design support systems as they outgrow their current reactive role as repositories of historical information and start to proactively support the generation of new knowledge during the design process.
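A hedged sketch of capabilities (1) and (2): proposals scored against weighted arguments, where changing the weights changes the ranking. The proposals, arguments, scores, and weights below are invented for illustration and do not reproduce KBDS's actual model.

```python
def rank(proposals, weights):
    """proposals: {name: {argument: score}}; weights: {argument: weight}."""
    total = lambda scores: sum(weights[a] * s for a, s in scores.items())
    return sorted(proposals, key=lambda p: total(proposals[p]), reverse=True)

proposals = {
    "activated_sludge": {"cost": 0.6, "effluent_quality": 0.9, "footprint": 0.4},
    "trickling_filter": {"cost": 0.8, "effluent_quality": 0.6, "footprint": 0.7},
}
# Emphasizing effluent quality favours one design...
print(rank(proposals, {"cost": 1.0, "effluent_quality": 2.0, "footprint": 0.5}))
# ...while emphasizing cost re-ranks the proposals.
print(rank(proposals, {"cost": 3.0, "effluent_quality": 1.0, "footprint": 0.5}))
```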