866 results for the Fuzzy Colour Segmentation Algorithm
Abstract:
This project, carried out at the company Davantis, aims to identify possible improvements to its current video-surveillance system, Daview. The project is devoted to the study of the Mean Shift tracking algorithm for building a tracking system. To this end, three different implementations were developed and evaluated, through which improvements that complement the Daview tracking module were found. The usefulness of a manual evaluation system compared with an automatic one was also studied.
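The abstract gives no implementation details of the three Mean Shift variants, so the following is only a minimal sketch of a generic colour-histogram Mean Shift tracker using OpenCV; the video path, the initial bounding box and the use of the hue channel are assumptions, not details taken from the project.

```python
import cv2
import numpy as np

# Generic colour-histogram Mean Shift tracker (OpenCV recipe, not the Daview
# implementation). Video path and initial window are placeholders.
cap = cv2.VideoCapture("surveillance.avi")
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 120              # assumed initial target window
track_window = (x, y, w, h)

# Hue histogram of the target region serves as the appearance model
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # Mean Shift moves the window towards the mode of the back-projection
    _, track_window = cv2.meanShift(back_proj, track_window, term_crit)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:        # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```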
Quantitative comparison of reconstruction methods for intra-voxel fiber recovery from diffusion MRI.
Abstract:
Validation is arguably the bottleneck in the diffusion magnetic resonance imaging (MRI) community. This paper evaluates and compares 20 algorithms for recovering the local intra-voxel fiber structure from diffusion MRI data and is based on the results of the "HARDI reconstruction challenge" organized in the context of the "ISBI 2012" conference. Evaluated methods encompass a mixture of classical techniques well known in the literature, such as diffusion tensor, Q-Ball and diffusion spectrum imaging; algorithms inspired by the recent theory of compressed sensing; and brand-new approaches proposed for the first time at this contest. To quantitatively compare the methods under controlled conditions, two datasets with known ground truth were synthetically generated and two main criteria were used to evaluate the quality of the reconstructions in every voxel: correct assessment of the number of fiber populations and angular accuracy in their orientation. This comparative study investigates the behavior of every algorithm with varying experimental conditions and highlights strengths and weaknesses of each approach. This information can be useful not only for enhancing current algorithms and developing the next generation of reconstruction methods, but also for assisting physicians in choosing the most suitable technique for their studies.
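The two per-voxel criteria (number of fiber populations and angular accuracy) can be made concrete with a small sketch; the greedy pairing of estimated and true directions below is an illustrative choice and not the challenge's official scoring code.

```python
import numpy as np

def angular_error_deg(u, v):
    """Angle between two fiber directions, ignoring sign (antipodal symmetry)."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(abs(np.dot(u, v)), 0.0, 1.0)))

def score_voxel(est_dirs, true_dirs):
    """Return (correct_count, mean_angular_error) for one voxel.

    est_dirs, true_dirs: lists of 3-vectors. Greedy matching for illustration.
    """
    correct_count = len(est_dirs) == len(true_dirs)
    errors, remaining = [], list(est_dirs)
    for t in true_dirs:
        if not remaining:
            break
        errs = [angular_error_deg(e, t) for e in remaining]
        best = int(np.argmin(errs))
        errors.append(errs[best])
        remaining.pop(best)
    return correct_count, float(np.mean(errors)) if errors else None

# Example: a two-fiber crossing recovered with a small angular deviation
truth = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
estimate = [np.array([0.99, 0.05, 0.0]), np.array([0.03, 1.0, 0.0])]
print(score_voxel(estimate, truth))
```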
Abstract:
Triatoma brasiliensis is considered one of the most important Chagas disease vectors in northeastern Brazil. This species presents chromatic variations which led to descriptions of subspecies, synonymized by Lent and Wygodzinsky (1979). In order to broaden bionomic knowledge of these distinct colour patterns of T. brasiliensis, captures were performed at different sites where the chromatic patterns were described: Caicó, Rio Grande do Norte (T. brasiliensis brasiliensis Neiva, 1911), hereafter called the "brasiliensis population"; Espinosa, Minas Gerais (T. brasiliensis melanica Neiva & Lent 1941), the "melanica population"; and Petrolina, Pernambuco (T. brasiliensis macromelasoma Galvão 1956), the "macromelasoma population". A fourth chromatic pattern, the darkest in overall cuticle coloration, was collected in Juazeiro, Bahia: the "Juazeiro population". At the Caicó, Petrolina and Juazeiro sites, specimens were captured in peridomiciliary ecotopes and in sylvatic environments. In Espinosa the specimens were collected only in sylvatic environments, even though several exhaustive captures were performed in the peridomicile at different sites of this municipality. A total of 298 specimens were captured. The average recorded infection rate was 15% for the "brasiliensis population" and 6.6% for the "melanica population". Specimens of the "macromelasoma" and "Juazeiro" populations did not present natural infection. Concerning trophic resources, evaluated by the precipitin test, feeding eclecticism was observed for the different colour patterns studied, with dominance of goat blood both in household surroundings and in sylvatic environments.
Abstract:
Methods like Event History Analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion mainly inspired by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion, such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology, as well as its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed by these interdependencies, is therefore a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies the development of an algorithm and its programming. Once the algorithm has been programmed, the different agents are left to interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on that made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence (global divergence and local convergence) that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning that not only is time needed for a policy to deploy its effects, but it also takes time for a country to find the best-suited policy. To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with both the theoretical expectations and the empirical evidence.
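The learning mechanism described above, in which an agent's adoption depends on its neighbours' choices and the policy's perceived effectiveness, can be illustrated with a toy agent-based sketch; the grid size, adoption rule and parameter values are assumptions for illustration and do not reproduce the Braun and Gilardi (2006) model.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 30                                   # 30 x 30 grid of countries (toy assumption)
adopted = np.zeros((N, N), dtype=bool)
adopted[N // 2, N // 2] = True           # one early adopter
effectiveness = 0.6                      # perceived effectiveness of the policy

def neighbour_share(a, i, j):
    """Share of the four von Neumann neighbours that have adopted."""
    neigh = [a[(i - 1) % N, j], a[(i + 1) % N, j],
             a[i, (j - 1) % N], a[i, (j + 1) % N]]
    return sum(neigh) / 4.0

adoption_curve = []
for step in range(60):
    new = adopted.copy()
    for i in range(N):
        for j in range(N):
            if not adopted[i, j]:
                # Learning: adoption probability grows with neighbours' adoption
                p = effectiveness * neighbour_share(adopted, i, j)
                if rng.random() < p:
                    new[i, j] = True
    adopted = new
    adoption_curve.append(adopted.mean())

# The adoption share typically traces an S-shaped path, with spatial clusters
# of adopters emerging around the initial seed.
print([round(v, 2) for v in adoption_curve[::10]])
```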
Abstract:
This report describes the theoretical foundations and the functionality of an application for encrypting files and directories using the PKCS#5 standard from RSA Laboratories, together with a modification of the standard (a TripleDES algorithm) to obtain stronger ciphers.
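PKCS#5 v2.0 specifies PBKDF2 for deriving encryption keys from passwords; the sketch below shows only that key-derivation step with Python's standard library. The salt size, iteration count and the 24-byte key length (matching a Triple DES keying option) are illustrative assumptions, and the application described in the report is not reproduced here.

```python
import hashlib
import os

def derive_key(password, salt=None, iterations=200_000, key_len=24):
    """Derive a key from a password with PBKDF2-HMAC-SHA256 (PKCS#5 v2.0).

    key_len=24 matches a Triple DES keying option; all parameters are illustrative.
    Returns (key, salt) so the salt can be stored alongside the ciphertext.
    """
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                              salt, iterations, dklen=key_len)
    return key, salt

key, salt = derive_key("correct horse battery staple")
print(len(key), key.hex()[:16], salt.hex()[:16])
```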
Abstract:
Introduction: DTI has proven to be an exquisite biomarker of tissue microstructure integrity. This technique has been successfully applied to schizophrenia, showing that fractional anisotropy (FA, a marker of white matter integrity) is diminished in several areas of the brain (Kyriakopoulos M et al (2008)). New ways of representing diffusion data have emerged recently and made it possible to create structural connectivity maps in healthy brains (Hagmann P et al. (2008)). These maps allow alterations to be studied over the entire brain at the connection and network level. This is of high interest in complex disconnection diseases like schizophrenia. We report on the specific network alterations of schizophrenic patients. Methods: 13 patients with chronic schizophrenia were recruited from in-patient, day-treatment and out-patient clinics. Comparison subjects were recruited and group-matched to patients on age, sex, handedness, and parental socioeconomic status. This study was approved by the local IRB and all subjects gave written informed consent. They were scanned with a 3T clinical MRI scanner. DTI and high-resolution anatomical T1w imaging were performed during the same session. The path from diffusion MRI to multi-resolution structural connection matrices of the entire brain is a five-step process that was performed in a similar way as described in Hagmann P et al. (2008): (1) DTI and T1w MRI of the brain, (2) segmentation of white and gray matter, (3) white matter tractography, (4) segmentation of the cortex into 242 ROIs of equal surface area covering the entire cortex (Fig 1), (5) construction of the connection network by measuring, for each ROI-to-ROI connection, the average FA along the corresponding tract. Results: For every connection between 2 ROIs of the network we tested the hypothesis H0: "average FA along the fiber pathway is greater than or equal in patients compared with controls". H0 was rejected for connections where average FA was significantly lower in patients than in controls. The threshold p-value was 0.01, corrected for multiple comparisons with the false discovery rate. We consistently identified that temporal, occipito-temporal, precuneo-temporal as well as frontal inferior and precuneo-cingulate connections were altered (Fig 2: significant connections in yellow). This is in agreement with the known literature, which showed across several studies that FA is diminished in several areas of the brain. More precisely, abnormalities were reported in the prefrontal and temporal white matter and to some extent also in the parietal and occipital regions. The alterations reported in the literature specifically included the corpus callosum, the arcuate fasciculus and the cingulum bundle, which was the case here as well. In addition, small-world indices were significantly reduced in patients (p < 0.01) (Fig. 3). Conclusions: Using connectome mapping to characterize differences in structural connectivity between healthy and diseased subjects, we were able to show widespread connectional alterations in schizophrenia patients and a systematic decrease in small-worldness, a marker of network disorganization. More generally, we described a method that has the capacity to sensitively identify structural alterations in complex disconnection syndromes where lesions are widespread throughout the connectional network.
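The connection-wise test described in the Results (a one-sided comparison of mean FA per connection with false-discovery-rate correction at p = 0.01) might look roughly like the sketch below; the data shapes and the Benjamini-Hochberg implementation are assumptions for illustration, not the study's analysis code.

```python
import numpy as np
from scipy import stats

def lower_fa_connections(fa_patients, fa_controls, q=0.01):
    """Flag connections whose mean FA is significantly lower in patients.

    fa_patients: (n_patients, n_connections) array of per-connection mean FA
    fa_controls: (n_controls, n_connections) array
    Returns a boolean mask over connections (Benjamini-Hochberg FDR at level q).
    """
    n_conn = fa_patients.shape[1]
    pvals = np.empty(n_conn)
    for c in range(n_conn):
        # H0: FA(patients) >= FA(controls); reject when patients are lower
        _, p = stats.ttest_ind(fa_patients[:, c], fa_controls[:, c],
                               alternative="less")
        pvals[c] = p

    # Benjamini-Hochberg step-up procedure
    order = np.argsort(pvals)
    ranked = pvals[order]
    thresh = q * np.arange(1, n_conn + 1) / n_conn
    below = ranked <= thresh
    significant = np.zeros(n_conn, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()       # largest rank meeting the threshold
        significant[order[:k + 1]] = True
    return significant

# Toy example: 13 patients, 13 controls, 500 connections, 20 truly altered
rng = np.random.default_rng(1)
controls = rng.normal(0.45, 0.05, size=(13, 500))
patients = rng.normal(0.45, 0.05, size=(13, 500))
patients[:, :20] -= 0.06
print(lower_fa_connections(patients, controls).sum())
```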
Abstract:
Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e. the number of labels that can be used) as a means of simplifying the management of underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label spaces in Multiprotocol Label Switching (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of their origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. For this reason, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS, the Merging Problem, cannot be solved optimally with a polynomial algorithm (NP-complete), since it involves a hard decision problem. However, in this letter the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By dropping the tree-shape assumption, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-solvable instead of NP-complete. In addition, simulation experiments confirm that without the tree-branch selection problem, the label space can be reduced further.
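As a rough illustration of why merging shrinks the label space (and not of the Full Label Merging algorithm itself, whose details are in the letter), the sketch below counts outgoing labels at a single LSR with and without MP2P merging; the record format and field names are assumptions.

```python
from collections import defaultdict

# Each LSP is described by the egress (FEC) it leads to and the next hop
# chosen at this LSR. Field names and values are purely illustrative.
lsps = [
    {"fec": "PE-A", "next_hop": "R1"},
    {"fec": "PE-A", "next_hop": "R1"},
    {"fec": "PE-A", "next_hop": "R1"},
    {"fec": "PE-B", "next_hop": "R2"},
    {"fec": "PE-B", "next_hop": "R2"},
]

# Point-to-point labelling: every LSP keeps its own outgoing label.
p2p_labels = len(lsps)

# MP2P label merging: LSPs heading to the same FEC over the same next hop
# share a single outgoing label, so only distinct (fec, next_hop) pairs
# consume a label at this LSR.
merged = defaultdict(int)
for lsp in lsps:
    merged[(lsp["fec"], lsp["next_hop"])] += 1
mp2p_labels = len(merged)

print(f"labels without merging: {p2p_labels}, with merging: {mp2p_labels}")
```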
Abstract:
BACKGROUND: The yeast Schizosaccharomyces pombe is frequently used as a model for studying the cell cycle. The cells are rod-shaped and divide by medial fission. The process of cell division, or cytokinesis, is controlled by a network of signaling proteins called the Septation Initiation Network (SIN); SIN proteins associate with the spindle pole bodies (SPBs) during nuclear division (mitosis). Some SIN proteins associate with both SPBs early in mitosis and then display strongly asymmetric signal intensity at the SPBs in late mitosis, just before cytokinesis. This asymmetry is thought to be important for correct regulation of SIN signaling and for coordination of cytokinesis and mitosis. In order to study the dynamics of organelles or large protein complexes such as the SPB, which have been labeled with a fluorescent protein tag in living cells, a number of image analysis problems must be solved: the cell outline must be detected automatically, and the position and signal intensity associated with the structures of interest within the cell must be determined. RESULTS: We present a new 2D and 3D image analysis system that permits versatile and robust analysis of motile, fluorescently labeled structures in rod-shaped cells. We have designed an image analysis system, implemented as a user-friendly software package, that allows fast and robust image analysis of large numbers of rod-shaped cells. We have developed new robust algorithms, which we combined with existing methodologies to facilitate fast and accurate analysis. Our software permits the detection and segmentation of rod-shaped cells in either static or dynamic (i.e. time-lapse) multi-channel images. It enables tracking of two structures (for example SPBs) in two different image channels. For 2D or 3D static images, the locations of the structures are identified and intensity values are extracted together with several quantitative parameters, such as length, width, cell orientation, background fluorescence and the distance between the structures of interest. Furthermore, two kinds of kymographs of the tracked structures can be established, one representing the migration with respect to their relative position, the other representing their individual trajectories inside the cell. This software package, called "RodCellJ", allowed us to analyze a large number of S. pombe cells to understand the rules that govern SIN protein asymmetry. CONCLUSIONS: "RodCellJ" is freely available to the community as a package of several ImageJ plugins for simultaneously analyzing the behavior of a large number of rod-shaped cells in an extensive manner. The integration of different image-processing techniques in a single package, together with the development of novel algorithms, not only speeds up the analysis with respect to existing tools but also provides higher accuracy. Its utility was demonstrated on both 2D and 3D static and dynamic images to study the septation initiation network of the yeast Schizosaccharomyces pombe. More generally, it can be used in any biological context where fluorescent-protein-labeled structures need to be analyzed in rod-shaped cells. AVAILABILITY: RodCellJ is freely available at http://bigwww.epfl.ch/algorithms.html (after acceptance of the publication).
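RodCellJ itself is an ImageJ plugin package; as a language-agnostic illustration of the kind of processing it describes (segmenting rod-shaped cells and measuring a labelled structure inside each cell), here is a minimal scikit-image sketch. The image shapes, thresholding strategy and elongation cut-off are assumptions and do not reproduce the published algorithms.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def analyse_rod_cells(cell_channel, spb_channel, min_area=200):
    """Segment rod-shaped cells and measure the brightest spot per cell.

    cell_channel: 2-D array with the cell-outline signal
    spb_channel:  2-D array with the fluorescent structure (e.g. an SPB marker)
    """
    mask = cell_channel > threshold_otsu(cell_channel)
    labels = label(mask)
    results = []
    for region in regionprops(labels):
        if region.area < min_area or region.minor_axis_length == 0:
            continue                                    # discard debris
        # Elongation check: rod-shaped cells are much longer than wide
        aspect = region.major_axis_length / region.minor_axis_length
        if aspect < 2.0:
            continue
        # Brightest pixel of the structure channel inside this cell
        rr, cc = np.nonzero(labels == region.label)
        intensities = spb_channel[rr, cc]
        peak = int(np.argmax(intensities))
        results.append({
            "length": region.major_axis_length,
            "width": region.minor_axis_length,
            "orientation": region.orientation,
            "spb_intensity": float(intensities[peak]),
            "spb_position": (int(rr[peak]), int(cc[peak])),
        })
    return results
```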
Abstract:
To date, state-of-the-art seismic material parameter estimates from multi-component sea-bed seismic data are based on the assumption that the sea-bed consists of a fully elastic half-space. In reality, however, the shallow sea-bed generally consists of soft, unconsolidated sediments that are characterized by strong to very strong seismic attenuation. To explore the potential implications, we apply a state-of-the-art elastic decomposition algorithm to synthetic data for a range of canonical sea-bed models consisting of a viscoelastic half-space of varying attenuation. We find that in the presence of strong seismic attenuation, as quantified by Q-values of 10 or less, significant errors arise in the conventional elastic estimation of seismic properties. Tests on synthetic data indicate that these errors can be largely avoided by accounting for the inherent attenuation of the seafloor when estimating the seismic parameters. This can be achieved by replacing the real-valued expressions for the elastic moduli in the governing equations of the parameter estimation with their complex-valued viscoelastic equivalents. The practical application of our parameter estimation procedure yields realistic estimates of the elastic seismic material properties of the shallow sea-bed, while the corresponding Q-estimates seem to be biased towards values that are too low, particularly for S-waves. Given that the estimation of inelastic material parameters is notoriously difficult, particularly in the immediate vicinity of the sea-bed, this is expected to be of interest and importance for civil and ocean engineering purposes.
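One common way to carry out the substitution described above (replacing real elastic moduli by complex-valued viscoelastic equivalents) is the low-loss constant-Q approximation sketched below; the abstract does not state the exact form or sign convention used in the paper, and the symbols Q_P and Q_S are the usual P- and S-wave quality factors, so this is only indicative.

```latex
% Low-loss constant-Q approximation (sign depends on the Fourier convention):
\mu \;\longrightarrow\; \mu \left( 1 + \frac{i}{Q_S} \right),
\qquad
\lambda + 2\mu \;\longrightarrow\; \left( \lambda + 2\mu \right)\left( 1 + \frac{i}{Q_P} \right)
```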
Abstract:
In a previous paper a novel Generalized Multiobjective Multitree model (GMM-model) was proposed. This model considers, for the first time, multitree-multicast load balancing with splitting in a multiobjective context, whose mathematical solution is a whole Pareto-optimal set that can include more solutions than have been reported in the publications surveyed. To solve the GMM-model, this paper proposes a multi-objective evolutionary algorithm (MOEA) inspired by the Strength Pareto Evolutionary Algorithm (SPEA). Experimental results considering up to 11 different objectives are presented for the well-known NSF network, with two simultaneous data flows.
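The core bookkeeping of any SPEA-style MOEA is Pareto dominance and the maintenance of a non-dominated archive; the sketch below shows only that generic ingredient, assuming minimisation of all objectives, and is not the GMM-model or the algorithm proposed in the paper.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def non_dominated(archive):
    """Return the non-dominated subset of a list of objective vectors."""
    front = []
    for i, a in enumerate(archive):
        if not any(dominates(b, a) for j, b in enumerate(archive) if j != i):
            front.append(a)
    return front

# Toy example with three objectives (e.g. cost, delay, load imbalance)
solutions = [(3, 5, 2), (2, 6, 2), (4, 4, 4), (2, 6, 3), (5, 1, 1)]
print(non_dominated(solutions))
```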
Abstract:
Pulse-wave velocity (PWV) is considered the gold-standard method to assess arterial stiffness, an independent predictor of cardiovascular morbidity and mortality. Currently available devices that measure PWV need to be operated by skilled medical staff, thus reducing the potential use of PWV in the ambulatory setting. In this paper, we present a new technique allowing continuous, unsupervised measurements of pulse transit times (PTT) in central arteries by means of a chest sensor. This technique relies on measuring the propagation time of pressure pulses from their genesis in the left ventricle to their later arrival at the cutaneous vasculature on the sternum. Combined thoracic impedance cardiography and phonocardiography are used to detect the opening of the aortic valve, from which a pre-ejection period (PEP) value is estimated. Multichannel reflective photoplethysmography at the sternum is used to detect the distal pulse-arrival time (PAT). A PTT value is then calculated as PTT = PAT - PEP. After optimizing the parameters of the chest PTT calculation algorithm on a nine-subject cohort, a prospective validation study involving 31 normo- and hypertensive subjects was performed. 1/chest PTT correlated very well with the COMPLIOR carotid-to-femoral PWV (r = 0.88, p < 10^-9). Finally, an empirical method to map chest PTT values onto chest PWV values is explored.
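The per-beat arithmetic described above (PTT = PAT - PEP, with a PWV then obtained from an assumed path length) is shown in the short sketch below; the event timestamps and the path length are made-up numbers, and the ICG/PCG/PPG detection steps themselves are not reproduced.

```python
import numpy as np

# Per-beat timings in milliseconds, relative to the R-wave (made-up values).
aortic_valve_opening_ms = np.array([62.0, 60.5, 63.1, 61.8])       # from ICG + PCG
pulse_arrival_sternum_ms = np.array([118.0, 116.2, 120.4, 117.9])  # from sternal PPG

pep = aortic_valve_opening_ms                  # pre-ejection period
pat = pulse_arrival_sternum_ms                 # pulse-arrival time
ptt = pat - pep                                # pulse transit time, ms

# Mapping PTT onto a pulse-wave velocity requires an assumed path length.
path_length_m = 0.18                           # heart-to-sternum distance (assumed)
pwv = path_length_m / (ptt / 1000.0)           # m/s

print(ptt, np.round(pwv, 2))
```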
Abstract:
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
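As a minimal illustration of the quantity underlying the method (not the paper's generalised, state-conditioned estimator), the sketch below computes a binned transfer entropy between two fluorescence traces; the number of bins and the quantile-based discretisation are arbitrary choices made for the example.

```python
import numpy as np

def transfer_entropy(x, y, n_bins=3):
    """Estimate TE(X -> Y) for two 1-D time series using simple binning.

    TE(X->Y) = sum over (y_{t+1}, y_t, x_t) of
               p(y_{t+1}, y_t, x_t) * log2[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]
    """
    def discretize(s):
        # Quantile-based bin edges so every level is roughly equally populated
        edges = np.quantile(s, np.linspace(0, 1, n_bins + 1)[1:-1])
        return np.digitize(s, edges)

    xd, yd = discretize(np.asarray(x, float)), discretize(np.asarray(y, float))
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]

    # Joint histogram over (y_{t+1}, y_t, x_t)
    joint = np.zeros((n_bins, n_bins, n_bins))
    for a, b, c in zip(y_next, y_now, x_now):
        joint[a, b, c] += 1
    joint /= joint.sum()

    p_ab = joint.sum(axis=2)        # p(y_{t+1}, y_t)
    p_bc = joint.sum(axis=0)        # p(y_t, x_t)
    p_b = joint.sum(axis=(0, 2))    # p(y_t)

    te = 0.0
    for a in range(n_bins):
        for b in range(n_bins):
            for c in range(n_bins):
                p_abc = joint[a, b, c]
                if p_abc > 0 and p_ab[a, b] > 0 and p_bc[b, c] > 0 and p_b[b] > 0:
                    te += p_abc * np.log2(p_abc * p_b[b] / (p_ab[a, b] * p_bc[b, c]))
    return te

# Toy usage: y lags x by one step, so TE(x -> y) should exceed TE(y -> x)
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```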
Abstract:
Underbody plows can be very useful tools in winter maintenance, especially when compacted snow or hard ice must be removed from the roadway. By the application of significant down-force and the use of an appropriate cutting-edge angle, compacted snow and ice can be removed very effectively by such plows, with much greater efficiency than any other tool under those circumstances. However, the successful operation of an underbody plow requires considerable skill. If too little down pressure is applied to the plow, it will not cut the ice or compacted snow. If too much force is applied, then either the cutting edge may gouge the road surface, often causing significant damage to both the road surface and the plow, or the plow may ride up on the cutting edge so that it is no longer controllable by the operator. Spinning of the truck in such situations is easily accomplished. Further, excessive down force will result in rapid wear of the cutting edge. Given this need for a high level of operator skill, the operation of an underbody plow is a candidate for automation. In order to successfully automate the operation of an underbody plow, a control system must be developed that follows a set of rules representing appropriate operation of such a plow. These rules have been developed, based upon earlier work in which operational underbody plows were instrumented to determine the loading upon them (both vertical and horizontal) and the angle at which the blade was operating. These rules have been successfully coded into two different computer programs, both using the MatLab® software. In the first program, various load and angle inputs are analyzed to determine when, whether, and how they violate the rules of operation. This program is essentially deterministic in nature. In the second program, the Simulink® package in the MatLab® software system was used to implement these rules using fuzzy logic. Fuzzy logic essentially replaces a fixed and constant rule with one that varies in such a way as to improve operational control. The development of the fuzzy logic in this simulation was achieved simply by using appropriate routines in the computer software, rather than being coded directly. The results of the computer testing and simulation indicate that a fully automated, computer-controlled underbody plow is indeed possible. The issue of whether the next steps toward full automation should be taken (and by whom) has also been considered, and the possibility of some sort of joint venture between a Department of Transportation and a vendor has been suggested.
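As a toy illustration of replacing a crisp rule with a fuzzy one for down-pressure control (not the project's MatLab/Simulink rule base), the sketch below uses triangular membership functions over the measured vertical load; the breakpoints, units and output scaling are invented for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def down_pressure_adjustment(vertical_load_kn):
    """Fuzzy rule base: low load -> add pressure, high load -> release pressure.

    Breakpoints (in kN) and output values are illustrative only.
    Returns a value in [-1, 1]: +1 = increase down-pressure, -1 = decrease it.
    """
    too_low = tri(vertical_load_kn, -5, 5, 20)
    adequate = tri(vertical_load_kn, 10, 25, 40)
    too_high = tri(vertical_load_kn, 30, 50, 70)

    # Weighted average of the rule outputs (centre-of-gravity style defuzzification)
    actions = {"increase": +1.0, "hold": 0.0, "decrease": -1.0}
    weights = {"increase": too_low, "hold": adequate, "decrease": too_high}
    total = sum(weights.values()) or 1.0
    return sum(actions[k] * w for k, w in weights.items()) / total

for load in (8, 25, 55):
    print(load, round(down_pressure_adjustment(load), 2))
```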
Abstract:
The supraclavicular flap (SCF) is a fasciocutaneous flap used to cover head, oral, and neck region defects after tumor resection. Its main vascular supply is the supraclavicular artery and accompanying veins, and it can be harvested as a vascularised pedicled flap. The SCF serves as an excellent outer skin cover as well as a good inner mucosal lining after oral cavity and head-neck tumor resections. The flap has a wide arc of rotation and matches the skin colour and texture of the face and neck. Between March 2006 and March 2011, the pedicled supraclavicular flap was used for reconstruction in 50 consecutive patients after head and neck tumor resections and certain benign conditions in a tertiary university hospital setting. The flaps were tunnelled under the neck skin to cover the external cervicofacial defects or passed medial to the mandible to give an inner epithelial lining after oral cavity and oropharyngeal tumor excision. Forty-four of the 50 patients had 100% flap survival with excellent wound healing. All the flaps were harvested in less than 1 h. There were four cases of distal tip desquamation and two patients had complete flap necrosis. Distal flap desquamation was observed in SCFs used for resurfacing external skin defects after oral cavity tumor ablation and needed only conservative treatment measures. Total flap failure was encountered in two patients in whom previous chemoradiotherapy for squamous cell cancer of the floor of the mouth and the tonsil, respectively, had failed, and in whom the SCF was used for mucosal defect closure after tumor ablation. The benefits of a pedicled fasciocutaneous supraclavicular flap are clear: it is thin, reliable, and easy and quick to harvest. In head, face and neck reconstructions, it is a good alternative to free fasciocutaneous flaps, regional pedicled myocutaneous flaps, and the deltopectoral flap.
Abstract:
Background: Conventional magnetic resonance imaging (MRI) techniques are highly sensitive in detecting multiple sclerosis (MS) plaques, enabling a quantitative assessment of inflammatory activity and lesion load. In quantitative analyses of focal lesions, manual or semi-automated segmentations have been widely used to compute the total number of lesions and the total lesion volume. These techniques, however, are both challenging and time-consuming, and are also prone to intra-observer and inter-observer variability. Aim: To develop an automated approach to segment brain tissues and MS lesions from brain MRI images. The goal is to reduce the user interaction and to provide an objective tool that eliminates the inter- and intra-observer variability. Methods: Based on the recent methods developed by Souplet et al. and de Boer et al., we propose a novel pipeline which includes the following steps: bias correction, skull stripping, atlas registration, tissue classification, and lesion segmentation. After the initial pre-processing steps, an MRI scan is automatically segmented into 4 classes: white matter (WM), grey matter (GM), cerebrospinal fluid (CSF) and partial volume. An expectation-maximisation method which fits a multivariate Gaussian mixture model to T1-w, T2-w and PD-w images is used for this purpose. Based on the obtained tissue masks and using the estimated GM mean and variance, we apply an intensity threshold to the FLAIR image, which provides the lesion segmentation. With the aim of improving this initial result, spatial information coming from the neighbouring tissue labels is used to refine the final lesion segmentation. Results: The experimental evaluation was performed using real 1.5T data sets and the corresponding ground-truth annotations provided by expert radiologists. The following values were obtained: 64% true positive (TP) fraction, 80% false positive (FP) fraction, and an average surface distance of 7.89 mm. The results of our approach were quantitatively compared to our implementations of the works of Souplet et al. and de Boer et al., obtaining higher TP and lower FP values. Conclusion: Promising MS lesion segmentation results have been obtained in terms of TP. However, the high number of FP, which is still a well-known problem of all automated MS lesion segmentation approaches, has to be reduced before such approaches can be used in standard clinical practice. Our future work will focus on tackling this issue.
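The two central steps described above, multivariate Gaussian-mixture tissue classification on T1-w/T2-w/PD-w intensities followed by a GM-based intensity threshold on FLAIR, might be sketched as follows; the choice of the GM component, the threshold factor kappa and the omission of pre-processing, atlas registration and the neighbourhood refinement are all simplifying assumptions, not the pipeline itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_lesions(t1, t2, pd, flair, brain_mask, kappa=2.5):
    """Rough MS-lesion candidate mask: GMM tissue classification + FLAIR threshold.

    All inputs are 3-D arrays of identical shape; brain_mask is boolean.
    kappa scales the GM standard deviation used as the FLAIR threshold (assumed).
    """
    voxels = brain_mask
    features = np.stack([t1[voxels], t2[voxels], pd[voxels]], axis=1)

    # Four tissue classes: WM, GM, CSF and partial volume (as in the pipeline)
    gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(features)

    # Pick a grey-matter component: here simply the class with the second-highest
    # mean T1 intensity (an assumption; atlas priors would be more robust).
    order = np.argsort(gmm.means_[:, 0])
    gm_class = order[-2]
    gm_flair = flair[voxels][labels == gm_class]
    threshold = gm_flair.mean() + kappa * gm_flair.std()

    # Candidate lesions: brain voxels whose FLAIR intensity exceeds the threshold
    lesion_mask = np.zeros_like(brain_mask, dtype=bool)
    lesion_mask[voxels] = flair[voxels] > threshold
    return lesion_mask
```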