991 results for SON enhanced algorithm
Abstract:
Sclera segmentation is of significant importance for eye and iris biometrics. However, it has not been extensively researched as a separate topic, but has mainly been treated as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images that operates at the pixel level. By exploring various colour spaces, the proposed approach is robust to image noise and different gaze directions. The algorithm's robustness is enhanced by a two-stage classifier: at the first stage, a set of simple classifiers is employed, while at the second stage, a neural network classifier operates on the probability space generated by the stage-1 classifiers. The proposed method was ranked first in the Sclera Segmentation Benchmarking Competition 2015, held as part of BTAS 2015, with a precision of 95.05% at a recall of 94.56%.
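As a rough illustration of the two-stage design, the hedged Python sketch below trains a set of simple pixel-level classifiers and a small neural network that operates on their stacked probabilities. The specific models, features, and parameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a two-stage pixel classifier: stage-1 models emit
# per-pixel sclera probabilities, and a stage-2 neural network classifies
# the resulting probability vector. All model choices are assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

def train_two_stage(pixel_features, labels):
    """pixel_features: (n_pixels, n_colour_channels); labels: 0/1 sclera mask."""
    stage1 = [GaussianNB(), DecisionTreeClassifier(max_depth=3)]
    for clf in stage1:
        clf.fit(pixel_features, labels)
    # Stage 2 operates on the space of stage-1 probabilities.
    probs = np.column_stack([c.predict_proba(pixel_features)[:, 1] for c in stage1])
    stage2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
    stage2.fit(probs, labels)
    return stage1, stage2

def predict_two_stage(stage1, stage2, pixel_features):
    probs = np.column_stack([c.predict_proba(pixel_features)[:, 1] for c in stage1])
    return stage2.predict(probs)  # per-pixel sclera / non-sclera decision
```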
Abstract:
BACKGROUND: A major problem in Chagas disease donor screening is the high frequency of samples with inconclusive results. The objective of this study was to describe patterns of serologic results among donors at the three Brazilian REDS-II blood centers and to correlate them with epidemiologic characteristics. STUDY DESIGN AND METHODS: The centers screened donor samples with one Trypanosoma cruzi lysate enzyme immunoassay (EIA). EIA-reactive samples were tested with a second lysate EIA, a recombinant-antigen-based EIA, and an immunofluorescence assay. Based on the serologic results, samples were classified as confirmed positive (CP), probable positive (PP), possible other parasitic infection (POPI), and false positive (FP). RESULTS: In 2007 to 2008, a total of 877 of 615,433 donations were discarded due to Chagas assay reactivity. The prevalences (95% confidence intervals [CIs]) among first-time donors for the CP, PP, POPI, and FP patterns were 114 (99-129), 26 (19-34), 10 (5-14), and 96 (82-110) per 100,000 donations, respectively. CP and PP had similar patterns of prevalence when analyzed by age, sex, education, and location, suggesting that PP cases represent true T. cruzi infections; in contrast, the demographics of donors with POPI were distinct and likely unrelated to Chagas disease. No CP cases were detected among 218,514 repeat donors followed for a total of 718,187 person-years. CONCLUSION: We have proposed a classification algorithm that may have practical importance for donor counseling and epidemiologic analyses of T. cruzi-seroreactive donors. The absence of incident T. cruzi infections is reassuring with respect to the risk of window-phase infections within Brazil and travel-related infections in nonendemic countries such as the United States.
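The abstract names the four categories but not the exact decision rules, so the Python sketch below is a purely hypothetical illustration of how a serologic classification algorithm might combine the three confirmatory assays; the rules shown are assumptions, not the study's algorithm.

```python
# Hypothetical rule set (NOT the study's) combining the three confirmatory
# assays applied to an EIA-screen-reactive sample.
def classify_donor(lysate_eia2: bool, recombinant_eia: bool, ifa: bool) -> str:
    positives = sum([lysate_eia2, recombinant_eia, ifa])
    if positives == 3:
        return "CP"    # confirmed positive: all confirmatory assays reactive
    if positives == 2:
        return "PP"    # probable positive: partial agreement
    if lysate_eia2 and not recombinant_eia and not ifa:
        return "POPI"  # lysate-only reactivity: possible other parasite
    return "FP"        # false positive: screening reactivity not reproduced
```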
Abstract:
The behavior of plasma and sheath characteristics under an applied magnetic field is important in many applications, including plasma probes and materials processing. Plasma immersion ion implantation (PIII) has been developed as a fast and efficient surface modification technique for complex-shaped three-dimensional objects. The PIII process relies on the acceleration of ions across a high-voltage plasma sheath that develops around the target. Recent studies have shown that the sheath dynamics is significantly affected by an external magnetic field. In this work we describe a two-dimensional computer simulation of a magnetic-field-enhanced plasma immersion ion implantation system. A negative bias voltage is applied to a cylindrical target located on the axis of a grounded cylindrical vacuum chamber filled with uniform nitrogen plasma. An axial magnetic field is created by a solenoid installed inside the cylindrical target. The computer code employs the Monte Carlo method for collisions of electrons and neutrals in the plasma and a particle-in-cell (PIC) algorithm for simulating the movement of charged particles in the electromagnetic field. Secondary electron emission from the target under ion bombardment is also included. It is found that a high-density plasma region is formed around the cylindrical target due to intense background gas ionization by the magnetized electrons drifting in the crossed E×B fields. An increase of the implantation current density in front of the high-density plasma region is observed.
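For readers unfamiliar with PIC codes, the following Python sketch shows the standard Boris push commonly used to advance charged particles in combined E and B fields. It is a generic textbook scheme, not necessarily the exact integrator of the code described above; grid interpolation, collisions, and secondary emission are omitted.

```python
# Minimal Boris push: advance one charged particle in local E and B fields.
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """Advance position x and velocity v (3-vectors) by one step dt."""
    qm = q / m
    v_minus = v + 0.5 * qm * E * dt          # first electric half-kick
    t = 0.5 * qm * B * dt                    # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # rotation about B
    v_new = v_plus + 0.5 * qm * E * dt       # second electric half-kick
    return x + v_new * dt, v_new
```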
Abstract:
Recent studies have demonstrated that the sheath dynamics in plasma immersion ion implantation (PIII) is significantly affected by an external magnetic field. In this paper, a two-dimensional computer simulation of a magnetic-field-enhanced PIII system is described. A negative bias voltage is applied to a cylindrical target located on the axis of a grounded vacuum chamber filled with uniform molecular nitrogen plasma. A static magnetic field is created by a small coil installed inside the target holder. The vacuum chamber is filled with background nitrogen gas to form a plasma in which collisions of electrons and neutrals are simulated by a Monte Carlo algorithm. It is found that a high-density plasma is formed around the target due to intense background gas ionization by the magnetized electrons drifting in the crossed E×B fields. The effect of the magnetic field intensity, the target bias, and the gas pressure on the sheath dynamics and implantation current of the PIII system is investigated.
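As a minimal sketch of the Monte Carlo electron-neutral collision step, the snippet below uses a common per-step collision test with placeholder cross sections; an actual PIII code would use energy-dependent nitrogen cross sections, so treat every constant here as an assumption.

```python
# Per-step Monte Carlo collision test for one electron; cross sections are
# placeholder constants, not real nitrogen data.
import numpy as np

rng = np.random.default_rng(0)

def collide(speed, n_gas, dt, sigma_elastic=1e-19, sigma_ionize=2e-20):
    """Return 'elastic', 'ionization', or None for one electron this step."""
    sigma_total = sigma_elastic + sigma_ionize
    p_collision = 1.0 - np.exp(-n_gas * sigma_total * speed * dt)
    if rng.random() >= p_collision:
        return None
    # A collision happened: pick its type in proportion to cross section.
    if rng.random() < sigma_elastic / sigma_total:
        return "elastic"
    return "ionization"  # spawns an electron-ion pair in the full code
```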
Abstract:
A growing body of evidence indicates that carbon monoxide (CO) acts as a gaseous neurotransmitter within the central nervous system. Although CO has been shown to affect neurohypophyseal hormone release in response to osmotic stimuli, the precise sources, targets and mechanisms underlying the actions of CO within the magnocellular neurosecretory system remain largely unknown. In the present study, we combined immunohistochemistry and patch-clamp electrophysiology to study the cellular distribution of the CO-synthesizing enzyme heme oxygenase type 1 (HO-1), as well as the actions of CO on oxytocin (OT) and vasopressin (VP) magnocellular neurosecretory cells (MNCs), in euhydrated (EU) and 48-h water-deprived (48WD) rats. Our results show the expression of HO-1 immunoreactivity in both OT and VP neurones, as well as in a small proportion of astrocytes, in both the supraoptic (SON) and paraventricular (PVN) nuclei. HO-1 expression, and its colocalisation with OT and VP neurones within the SON and PVN, was significantly enhanced in 48WD rats. Inhibition of HO activity with chromium mesoporphyrin IX chloride (CrMP; 20 μM) resulted in a slight membrane hyperpolarisation in SON neurones from EU rats, without significantly affecting their firing activity. In 48WD rats, on the other hand, CrMP produced a more robust membrane hyperpolarisation, significantly decreasing neuronal firing discharge. Taken together, our results indicate that magnocellular SON and PVN neurones express HO-1, and that CO acts as an excitatory gaseous neurotransmitter in this system. Moreover, we found that the expression and actions of CO were enhanced in water-deprived rats, suggesting that a state-dependent up-regulation of the HO-1/CO signalling pathway contributes to enhanced MNC firing activity during an osmotic challenge.
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling) can be easily and effectively formulated as Mixed Integer Linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains very active in trying to answer some of them. As a consequence, a huge number of papers are continuously produced and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first occurs when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general-purpose techniques. The second occurs when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the above situations. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature in the context of disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress; it presents a possible way of generating two-row cuts from the simplex tableau based on lattice-free triangles, together with some preliminary computational results.
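The discussion of normalization conditions refers to the cut-generating LP (CGLP) behind disjunctive cuts. As a hedged recollection of the standard construction from the literature (not the thesis' exact notation), for P = {x ≥ 0 : Ax ≥ b} and a split disjunction (πᵀx ≤ π₀) ∨ (πᵀx ≥ π₀ + 1), a cut αᵀx ≥ β violated by the fractional point x* can be sought via:

```latex
% Standard CGLP for a split disjunction; the last equality is one common
% normalization condition -- exactly the kind of truncation of the
% disjunctive cone whose effect on cut rank/density/strength is analyzed.
\begin{align*}
\min\ \alpha^{\top} x^{*} - \beta \quad \text{s.t.}\quad
& \alpha \ge A^{\top}u - u_0\,\pi, \qquad \beta \le b^{\top}u - u_0\,\pi_0, \\
& \alpha \ge A^{\top}v + v_0\,\pi, \qquad \beta \le b^{\top}v + v_0\,(\pi_0 + 1), \\
& u,\ v \ge 0,\quad u_0,\ v_0 \ge 0,\quad
  \mathbf{1}^{\top}u + \mathbf{1}^{\top}v + u_0 + v_0 = 1 .
\end{align*}
```

A negative optimum yields a violated disjunctive cut; without the normalization, the feasible multipliers form an unbounded cone.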
The second part of the thesis focuses instead on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in the attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (in particular, the usage of general-purpose cutting planes) can be useful to improve on the branch-and-cut methods proposed in the literature.
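As an illustration of the destroy-and-repair paradigm of Chapters 5 and 6, the Python skeleton below captures the iteration; the destroy, repair, and cost callables stand in for the thesis' actual VRP neighborhood MIP and solver invocation, which are not reproduced here.

```python
# Generic destroy-and-repair loop: destroy part of the incumbent at random,
# then repair it, e.g. via a time-limited general-purpose MIP solve over the
# induced neighborhood. The three callables are assumed, illustrative hooks.
def destroy_and_repair(incumbent, destroy, repair, cost, iterations=100):
    """destroy(sol) -> partial solution; repair(partial) -> full solution;
    cost(sol) -> float. Returns the best solution found."""
    best = incumbent
    for _ in range(iterations):
        candidate = repair(destroy(best))
        if cost(candidate) < cost(best):
            best = candidate
    return best
```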
Abstract:
The purpose of this study was to assess the performance of a new motion correction algorithm. Twenty-five dynamic MR mammography (MRM) data sets and 25 contrast-enhanced three-dimensional peripheral MR angiographic (MRA) data sets affected by patient motion of varying severity were selected retrospectively from routine examinations. Anonymized data were registered by a new experimental elastic motion correction algorithm. The algorithm works by computing a similarity measure for the two volumes that takes into account expected signal changes due to the presence of a contrast agent while penalizing other signal changes caused by patient motion. A conjugate gradient method is used to find the set of motion parameters that maximizes the similarity measure across the entire volume. Images before and after correction were visually evaluated and scored by experienced radiologists with respect to reduction of motion, improvement of image quality, disappearance of existing lesions, and creation of artifactual lesions. The correction was found to improve image quality in 76% of MRM and 96% of MRA cases, and diagnosability in 60% of MRM and 96% of MRA cases.
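As a toy illustration of the correction principle, the Python sketch below estimates motion parameters by optimizing a similarity measure with a conjugate-gradient routine; the translation-only model and the plain squared-difference measure are simplifying assumptions, not the contrast-tolerant measure of the actual algorithm.

```python
# Registration as optimization: find shift parameters minimizing a
# dissimilarity between the fixed and moving volumes, via conjugate gradient.
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def dissimilarity(params, fixed, moving):
    shifted = nd_shift(moving, shift=params, order=1)
    return float(np.mean((fixed - shifted) ** 2))  # stand-in similarity term

def register(fixed, moving):
    res = minimize(dissimilarity, x0=np.zeros(fixed.ndim),
                   args=(fixed, moving), method="CG")
    return res.x  # estimated motion (shift) parameters
```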
Abstract:
The aim of this paper is to evaluate the diagnostic contribution of various types of texture features to the discrimination of hepatic tissue in abdominal non-enhanced Computed Tomography (CT) images. Regions of Interest (ROIs) corresponding to the classes normal liver, cyst, hemangioma, and hepatocellular carcinoma were drawn by an experienced radiologist. For each ROI, five distinct sets of texture features were extracted using First Order Statistics (FOS), the Spatial Gray Level Dependence Matrix (SGLDM), the Gray Level Difference Method (GLDM), Laws' Texture Energy Measures (TEM), and Fractal Dimension Measurements (FDM). In order to evaluate the ability of the texture features to discriminate the various types of hepatic tissue, each set of texture features, or its reduced version after genetic-algorithm-based feature selection, was fed to a feed-forward Neural Network (NN) classifier. For each NN, the area under the Receiver Operating Characteristic (ROC) curve (Az) was calculated for all one-vs-all discriminations of hepatic tissue. Additionally, the total Az for the multi-class discrimination task was estimated. The results show that features derived from FOS perform better than the other texture features (total Az: 0.802 ± 0.083) in the discrimination of hepatic tissue.
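As a minimal sketch of one branch of this pipeline, the Python snippet below computes first-order-statistics (FOS) features for each ROI, trains a small feed-forward network, and reports the ROC AUC (Az) for one one-vs-all task; the feature list and network size are illustrative assumptions, not the paper's exact configuration.

```python
# FOS texture features -> feed-forward NN -> ROC AUC for one binary task.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

def fos_features(roi: np.ndarray) -> np.ndarray:
    """First-order statistics of the grey-level distribution of one ROI."""
    v = roi.ravel().astype(float)
    return np.array([v.mean(), v.std(), skew(v), kurtosis(v)])

def evaluate(rois_train, y_train, rois_test, y_test):
    X_train = np.array([fos_features(r) for r in rois_train])
    X_test = np.array([fos_features(r) for r in rois_test])
    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000).fit(X_train, y_train)
    scores = net.predict_proba(X_test)[:, 1]
    return roc_auc_score(y_test, scores)  # Az for one one-vs-all task
```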
Abstract:
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a noninvasive technique for quantitative assessment of the integrity of the blood-brain barrier and the blood-spinal cord barrier (BSCB) in the presence of central nervous system pathologies. However, DCE-MRI results show substantial variability, which can be caused by a number of factors including inaccurate T1 estimation, insufficient temporal resolution and poor contrast-to-noise ratio. My thesis work develops improved methods to reduce the variability of DCE-MRI results. To obtain a fast and accurate T1 map, the Look-Locker acquisition technique was implemented with a novel, truly centric k-space segmentation scheme. In addition, an original multi-step curve fitting procedure was developed to increase the accuracy of T1 estimation. A view-sharing acquisition method was implemented to increase temporal resolution, and a novel normalization method was introduced to reduce image artifacts. Finally, a new clustering algorithm was developed to reduce apparent noise in the DCE-MRI data. The performance of the proposed methods was verified by simulations and phantom studies. As part of this work, the proposed techniques were applied to an in vivo DCE-MRI study of experimental spinal cord injury (SCI). These methods have shown robust results and allow quantitative assessment of regions with very low vascular permeability. In conclusion, application of the improved DCE-MRI acquisition and analysis methods developed in this thesis can improve the accuracy of DCE-MRI results.
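As a hedged sketch of the T1-mapping step, the snippet below fits the standard Look-Locker recovery model S(t) = A − B·exp(−t/T1*) per voxel and applies the usual apparent-T1 correction; the thesis' multi-step fitting refinements are not reproduced here.

```python
# Per-voxel Look-Locker fit with the classic apparent-T1 correction
# T1 = T1* (B/A - 1); initial guesses are rough, illustrative choices.
import numpy as np
from scipy.optimize import curve_fit

def ll_model(t, A, B, T1star):
    return A - B * np.exp(-t / T1star)

def fit_t1(times, signal):
    """times, signal: 1-D arrays for one voxel; returns corrected T1."""
    p0 = (signal.max(), signal.max() - signal.min(), times.mean())
    (A, B, T1star), _ = curve_fit(ll_model, times, signal, p0=p0, maxfev=5000)
    return T1star * (B / A - 1.0)  # classic Look-Locker correction
```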
Abstract:
The electron pencil-beam redefinition algorithm (PBRA) of Shiu and Hogstrom has been developed for use in radiotherapy treatment planning (RTP). Earlier studies by Boyd and Hogstrom showed that the PBRA lacked an adequate incident-beam model, that the PBRA might require improved electron physics, and that no data existed which allowed adequate assessment of PBRA-calculated dose accuracy in a heterogeneous medium such as that presented by patient anatomy. The hypothesis of this research was that, by addressing the above issues, the PBRA-calculated dose would be accurate to within 4% or 2 mm in regions of high dose gradients. A secondary electron source was added to the PBRA to account for collimation-scattered electrons in the incident beam. Parameters of the dual-source model were determined from a minimal data set to ease beam commissioning. Comparisons with measured data showed 3% or better dose accuracy in water within the field for cases where 4% accuracy was not previously achievable. A measured data set was developed that allowed an evaluation of the PBRA in regions distal to localized heterogeneities. Geometries in the data set included irregular surfaces and high- and low-density internal heterogeneities. The data were estimated to have 1% precision and 2% agreement with an accurate, benchmarked Monte Carlo (MC) code. PBRA electron transport was enhanced by modeling local pencil-beam divergence. This required fundamental changes to the mathematics of electron transport (divPBRA). Evaluation of divPBRA with the measured data set showed marginal improvement in dose accuracy compared to the PBRA; however, 4% or 2 mm accuracy was not achieved by either PBRA version for all data points. Finally, the PBRA was evaluated clinically by comparing PBRA- and MC-calculated dose distributions using site-specific patient RTP data. Results show that the PBRA did not agree with MC to within 4% or 2 mm in a small fraction (<3%) of the irradiated volume. Although the hypothesis of the research was shown to be false, the minor dose inaccuracies should have little or no impact on RTP decisions or patient outcome. Therefore, given its ease of beam commissioning, documented accuracy, and calculational speed, the PBRA should be considered a practical tool for clinical use.
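The "4% or 2 mm" criterion is a composite dose-difference / distance-to-agreement test; the one-dimensional Python sketch below illustrates the idea under simplifying assumptions (global dose normalization, grid-point DTA without interpolation) and is not the authors' evaluation code.

```python
# Composite 4% / 2 mm check along one profile: a point passes if its dose
# difference is within 4% of the reference maximum, or a reference point
# within 2 mm matches the calculated dose to the same tolerance.
import numpy as np

def passes_4pct_2mm(x_mm, dose_calc, dose_ref, dd=0.04, dta_mm=2.0):
    """Boolean pass mask over positions x_mm (both doses on the same grid)."""
    ok = np.zeros_like(dose_calc, dtype=bool)
    dmax = dose_ref.max()
    for i, (xi, di) in enumerate(zip(x_mm, dose_calc)):
        if abs(di - dose_ref[i]) <= dd * dmax:          # dose-difference test
            ok[i] = True
            continue
        near = np.abs(x_mm - xi) <= dta_mm              # 2 mm neighbourhood
        ok[i] = bool(np.any(np.abs(dose_ref[near] - di) <= dd * dmax))  # DTA
    return ok
```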
Abstract:
In general, a moderate drying trend has been observed in mid-latitude arid Central Asia since the Mid-Holocene, attributed to the progressively weakening influence of the mid-latitude Westerlies on regional climate. However, as the spatio-temporal pattern of this development and the underlying climatic mechanisms are not yet fully understood, new high-resolution paleoclimate records from this region are needed. Within this study, a sediment core from Lake Son Kol (Central Kyrgyzstan) was investigated using sedimentological, (bio)geochemical, isotopic, and palynological analyses, aiming at reconstructing regional climate development during the last 6000 years. Biogeochemical data, mainly reflecting summer moisture conditions, indicate predominantly wet conditions until 4950 cal. yr BP, succeeded by a pronounced dry interval between 4950 and 3900 cal. yr BP. Thereafter, a return to wet conditions and a subsequent moderate drying trend toward the present are observed. This is consistent with other regional paleoclimate records and likely reflects the gradual Late Holocene diminishment of the summer moisture provided by the mid-latitude Westerlies. However, the climatic impact of the Westerlies was apparently not restricted to the summer season but was also significant during winter, as indicated by recurrent episodes of enhanced allochthonous input through snowmelt occurring before 6000 cal. yr BP and at 5100-4350, 3450-2850, and 1900-1500 cal. yr BP. The distinct ~1500-year periodicity of these episodes of increased winter precipitation in Central Kyrgyzstan resembles similar cyclicities observed in paleoclimate records around the North Atlantic, likely indicating a hemispheric-scale climatic teleconnection and an impact of North Atlantic Oscillation (NAO) variability in Central Asia.
Abstract:
Animal tracking has been addressed by different initiatives over the last two decades. Most of them rely on satellite connectivity on every single node and lack energy-saving strategies. This paper presents several new contributions to the tracking of dynamic heterogeneous asynchronous networks (primary nodes with GPS and secondary nodes with a kinetic generator), motivated by the animal tracking paradigm with random transmissions. A simple approach based on connectivity and coverage intersection is compared with more sophisticated algorithms based on ad hoc implementations of distributed Kalman-based filters that integrate measurement information using consensus principles in order to provide enhanced accuracy. Several simulations varying the coverage range, the random behavior of the kinetic generator (modeled as a Poisson process) and the periodic activation of GPS are included. In addition, the study is complemented by hardware developments and implementations on commercial off-the-shelf equipment, which show the feasibility of these proposals on real hardware.
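As a simplified sketch of the consensus Kalman idea behind the more sophisticated trackers, the snippet below shows a standard local Kalman measurement update plus a plain consensus averaging step over neighbors; the gain, models, and network handling are placeholder assumptions, not the paper's implementations.

```python
# Local Kalman update at one node, plus one consensus averaging iteration
# that pulls each node's state estimate toward its neighbours'.
import numpy as np

def local_kalman_update(x, P, z, H, R):
    """Standard measurement update (x: state, P: covariance, z: measurement)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def consensus_step(estimates, adjacency, epsilon=0.2):
    """One consensus iteration over node estimates (n_nodes, state_dim)."""
    new = estimates.copy()
    for i, row in enumerate(adjacency):
        for j, connected in enumerate(row):
            if connected:
                new[i] += epsilon * (estimates[j] - estimates[i])
    return new
```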
Abstract:
Energy management has always been recognized as a challenge in mobile systems, especially in modern OS-based mobile systems where multitasking is widely supported. Nowadays, it is common for a mobile system user to run multiple applications simultaneously while having a target battery lifetime in mind for a specific application. Traditional OS-level power management (PM) policies make their best effort to save energy under performance constraints, but fail to guarantee a target lifetime, leaving the painful trade-off between total application performance and the target lifetime to the user. This thesis provides a new way to deal with the problem. It advocates that a strong energy-aware PM scheme should first guarantee a user-specified battery lifetime to a target application by restricting the average power of less important applications and, in addition, maximize the total performance of applications without harming the lifetime guarantee. To support this, energy, instead of CPU time or transmission bandwidth, should be globally managed by the OS as a first-class resource. As the first stage of a complete PM scheme, this thesis presents energy-based fair queuing scheduling, a novel class of energy-aware scheduling algorithms which, in combination with a mechanism for restricting the battery discharge rate, systematically manage energy as a first-class resource with the objective of guaranteeing a user-specified battery lifetime for a target application in OS-based mobile systems. Energy-based fair queuing carries traditional fair queuing over to the energy management domain. It assigns a power share to each task and manages energy by serving energy to tasks in proportion to their assigned power shares. The proportional energy use establishes a proportional share of the system power among tasks, which guarantees a minimum power for each task and thus avoids energy starvation of any task. Energy-based fair queuing treats all tasks equally as one type and supports periodic time-sensitive tasks by allocating each of them a share of system power adequate to meet the highest energy demand across all periods. However, an overly conservative power share is usually required to guarantee that all time constraints are met. To provide more effective and flexible support for various types of time-sensitive tasks in general-purpose operating systems, an extra real-time-friendly mechanism is introduced that combines priority-based scheduling with energy-based fair queuing. Since a method is available to control the maximum time a time-sensitive task can run with priority, power control and time-constraint satisfaction can be flexibly traded off. A SystemC-based test bench was designed to assess the algorithms. Simulation results show the success of energy-based fair queuing in achieving proportional energy use, time-constraint satisfaction, and a proper trade-off between them.
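As a rough illustration of the proportional-energy-service idea, the Python sketch below keeps a per-task virtual energy counter normalized by its power share and always dispatches the task with the smallest counter. Task names, the measure_energy callback, and all parameters are illustrative assumptions; the battery-discharge-rate mechanism and the real-time priority extension are not modeled.

```python
# Energy-based fair queuing sketch: serve energy to tasks in proportion to
# their power shares by always running the task with the least
# share-normalized energy usage.
import heapq

class EnergyFairQueue:
    def __init__(self, shares):
        """shares: dict mapping task name (str) to its power share (float)."""
        self.heap = [(0.0, task) for task in shares]  # (virtual energy, task)
        heapq.heapify(self.heap)
        self.shares = shares

    def run_next(self, measure_energy):
        """Pick the least-served task, run it, and recharge its counter.
        measure_energy(task) -> Joules consumed by this time slice."""
        v_energy, task = heapq.heappop(self.heap)
        used = measure_energy(task)
        heapq.heappush(self.heap, (v_energy + used / self.shares[task], task))
        return task
```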