978 results for Scale space
Abstract:
Although it has been proposed that the retinal vasculature has a fractal structure, no standardization of the segmentation method or of the method for calculating fractal dimensions has been established. This study aimed to determine whether the estimation of the fractal dimensions of the retinal vasculature depends on the vascular segmentation methods and on the dimension calculation methods. Methods: Ten retinal photographs were segmented to extract their vascular trees by four computational methods ("multithreshold", "scale-space", "pixel classification" and "ridge based detection"). Their "information", "mass-radius" and "box-counting" fractal dimensions were then calculated and compared with the dimensions of the same vascular trees obtained by manual segmentation (gold standard). Results: The mean fractal dimensions varied across the groups of segmentation methods: from 1.39 to 1.47 for the box-counting dimension, from 1.47 to 1.52 for the information dimension, and from 1.48 to 1.57 for the mass-radius dimension. The use of different computational methods of vascular segmentation, as well as of different dimension calculation methods, introduced statistically significant differences in the fractal dimension values of the vascular trees. Conclusion: The estimation of the fractal dimensions of the retinal vasculature depended both on the vascular segmentation methods and on the dimension calculation methods used.
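To make the box-counting estimate mentioned above concrete, here is a minimal sketch assuming a binary (already segmented) vessel image; it illustrates the general technique, not the study's own implementation or parameter choices.

```python
import numpy as np

def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting fractal dimension of a binary image.

    For each box size s, count the number N(s) of s x s boxes containing
    at least one foreground pixel, then fit log N(s) against log(1/s);
    the slope of the fit is the dimension estimate.
    """
    counts = []
    for s in box_sizes:
        h, w = binary_img.shape
        trimmed = binary_img[: h - h % s, : w - w % s]  # trim so the image tiles evenly
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())    # occupied boxes at this scale
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```

The information and mass-radius dimensions are estimated analogously, from box occupancy probabilities and from pixel counts inside circles of growing radius, respectively.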
Abstract:
The present work aims to provide a deeper understanding of thermally driven turbulence and to address some modelling aspects related to the physics of the flow. For this purpose, two idealized systems are investigated by Direct Numerical Simulation: the rotating and non-rotating Rayleigh-Bénard convection. The preliminary study of the flow topologies shows how the coherent structures organise into different patterns depending on the rotation rate. From a statistical perspective, the analysis of the turbulent kinetic energy and temperature variance budgets makes it possible to identify the flow regions where the production, the transport, and the dissipation of turbulent fluctuations occur. To provide a multi-scale description of the flows, a theoretical framework based on the Kolmogorov and Yaglom equations is applied for the first time to the Rayleigh-Bénard convection. The analysis shows how the spatial inhomogeneity modulates the dynamics at different scales and wall-distances. Inside the core of the flow, the space of scales can be divided into an inhomogeneity-dominated range at large scales, an inertial-like range at intermediate scales and a dissipative range at small scales. This classic scenario breaks down close to the walls, where the inhomogeneous mechanisms and the viscous/diffusive processes are important at every scale and entail more complex dynamics. The same theoretical framework is extended to the filtered velocity and temperature fields of non-rotating Rayleigh-Bénard convection. The analysis of the filtered Kolmogorov and Yaglom equations reveals the influence of the residual scales on the filtered dynamics both in physical and scale space, highlighting the effect of the relative position between the filter length and the crossover that separates the inhomogeneity-dominated range from the quasi-homogeneous range. The assessment of the filtered and residual physics proves instrumental for the correct use of the existing Large-Eddy Simulation models and for the development of new ones.
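As a reference point for these scale-space budgets, the classical homogeneous and isotropic relations to which the generalized Kolmogorov and Yaglom equations reduce are recalled below (notation assumed here, not necessarily that of the thesis):

```latex
% Classical homogeneous-isotropic limits of the generalized budgets.
% Assumed notation: \delta(\cdot) is an increment over the separation r,
% \varepsilon the mean kinetic-energy dissipation rate, \varepsilon_\theta
% the mean dissipation rate of the temperature variance (factor-of-two
% conventions for \varepsilon_\theta vary between references).
\begin{align}
  \langle (\delta u_\parallel)^3 \rangle
    &= -\tfrac{4}{5}\,\varepsilon\, r
    && \text{(Kolmogorov 4/5 law)} \\
  \langle \delta u_\parallel\,(\delta\theta)^2 \rangle
    &= -\tfrac{4}{3}\,\varepsilon_\theta\, r
    && \text{(Yaglom 4/3 law)}
\end{align}
```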
Abstract:
Coordinating activities in a distributed system is an open research topic. Several models have been proposed to achieve this purpose, such as message passing, publish/subscribe, workflows, or tuple spaces. We have focused on the last of these, trying to overcome some of its disadvantages. In particular, we have applied spatial database techniques to tuple spaces in order to increase their performance when handling a large number of tuples. Moreover, we have studied how structured peer-to-peer approaches can be applied to better distribute tuples on large networks. Using some of these results, we have developed a tuple space implementation for the Globus Toolkit that can be used by Grid applications as a coordination service. The development of such a service has been quite challenging due to the limitations imposed by XML serialization, which have heavily influenced its design. Nevertheless, we were able to complete the implementation and use the service for two different types of test applications: one that is completely parallelizable and a plasma simulation that is not. Using the latter application, we have compared the performance of our service against MPI. Finally, we have developed and tested a simple workflow in order to show the versatility of our service.
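To make the coordination model concrete, below is a toy Linda-style tuple space with write/read/take operations; the names and semantics are illustrative only and do not reflect the actual interface of the Globus Toolkit service described above.

```python
import threading

class TupleSpace:
    """Toy Linda-style tuple space (illustrative sketch only)."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        # Write a tuple into the space and wake up any waiting readers.
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()

    def _match(self, pattern, tup):
        # A None field in the pattern acts as a wildcard.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def rd(self, pattern):
        # Blocking, non-destructive read of a matching tuple.
        return self._take(pattern, remove=False)

    def in_(self, pattern):
        # Blocking, destructive take of a matching tuple.
        return self._take(pattern, remove=True)

    def _take(self, pattern, remove):
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        if remove:
                            self._tuples.remove(tup)
                        return tup
                self._cond.wait()
```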
Abstract:
Using series solutions and time-domain evolutions, we probe the eikonal limit of the gravitational and scalar-field quasinormal modes of large black holes and black branes in anti-de Sitter backgrounds. These results are particularly relevant for the AdS/CFT correspondence, since the eikonal regime is characterized by the existence of long-lived modes which (presumably) dominate the decay time scale of the perturbations. We confirm all the main qualitative features of these slowly damped modes as predicted by Festuccia and Liu [G. Festuccia and H. Liu, arXiv:0811.1033.] for the scalar-field (tensor-type gravitational) fluctuations. However, quantitatively we find dimension-dependent correction factors. We also investigate the dependence of the quasinormal mode frequencies on the horizon radius of the black hole (brane) and the angular momentum (wave number) of vector- and scalar-type gravitational perturbations.
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is even harder computationally, since it additionally requires a solution in real time. Both DS problems are computationally complex. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the solution of the problem simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) results in the proposed approach for solving DS problems for large-scale networks. Simulation results have shown that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN has shown a sublinear running time as a function of the system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively little running time.
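As an illustration of the representation, the sketch below builds a node-depth list for a small radial feeder by depth-first traversal; it shows only the encoding idea, not the MEAN operators or the subpopulation-table EA (the bus numbers are hypothetical).

```python
def node_depth_encoding(adj, root):
    """Minimal node-depth list for a radial feeder (illustrative sketch).

    A depth-first traversal pairs each bus with its depth in the tree, so
    any such list describes a connected, loop-free subtree by construction,
    which is the property exploited to avoid explicit radiality constraints.
    """
    encoding, stack, seen = [], [(root, 0)], set()
    while stack:
        node, depth = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        encoding.append((node, depth))
        for nb in adj.get(node, ()):
            if nb not in seen:
                stack.append((nb, depth + 1))
    return encoding

# Example: a small feeder rooted at substation bus 0
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(node_depth_encoding(adj, 0))  # e.g. [(0, 0), (2, 1), (1, 1), (3, 2)]
```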
Abstract:
Using data from the H I Parkes All Sky Survey (HIPASS), we have searched for neutral hydrogen in galaxies in a region of ~25 × 25 deg² centred on NGC 1399, the nominal centre of the Fornax cluster. Within a velocity search range of 300-3700 km s⁻¹ and to a 3σ lower flux limit of ~40 mJy, 110 galaxies with H I emission were detected, one of which is previously uncatalogued. None of the detections has early-type morphology. Previously unknown velocities for 14 galaxies have been determined, with a further four velocity measurements being significantly dissimilar to published values. Identification of an optical counterpart is relatively unambiguous for more than ~90 per cent of our H I galaxies. The galaxies appear to be embedded in a sheet at the cluster velocity which extends for more than 30° across the search area. At the nominal cluster distance of ~20 Mpc, this corresponds to an elongated structure more than 10 Mpc in extent. A velocity gradient across the structure is detected, with radial velocities increasing by ~500 km s⁻¹ from south-east to north-west. The clustering of galaxies evident in optical surveys is only weakly suggested in the spatial distribution of our H I detections. Of 62 H I detections within a 10° projected radius of the cluster centre, only two are within the core region (projected radius
Abstract:
Despite their limitations, linear filter models continue to be used to simulate the receptive field properties of cortical simple cells. For theoreticians interested in large scale models of visual cortex, a family of self-similar filters represents a convenient way in which to characterise simple cells in one basic model. This paper reviews research on the suitability of such models, and goes on to advance biologically motivated reasons for adopting a particular group of models in preference to all others. In particular, the paper describes why the Gabor model, so often used in network simulations, should be dropped in favour of a Cauchy model, both on the grounds of frequency response and mutual filter orthogonality.
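For concreteness, a one-dimensional Gabor receptive-field profile of the kind discussed above can be sketched as follows; the Cauchy alternative is not reproduced here, since its exact parametrization follows the paper.

```python
import numpy as np

def gabor_1d(x, sigma, freq, phase=0.0):
    """1-D Gabor receptive-field profile: a Gaussian-windowed sinusoid.

    Commonly used as a linear filter model of simple-cell responses; the
    paper argues for replacing it with a Cauchy-type filter (not sketched).
    """
    return np.exp(-x**2 / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * freq * x + phase)

# A self-similar family is obtained by scaling the envelope and the carrier
# frequency together, e.g. gabor_1d(x, sigma / s, freq * s) for scale factors s.
x = np.linspace(-2.0, 2.0, 401)
rf = gabor_1d(x, sigma=0.5, freq=2.0)
```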
Abstract:
Given the current complexity of communication protocols, implementing their layers entirely in the operating system kernel is too cumbersome, and it does not allow use of capabilities that are only available to user-space processes. However, building protocols as user-space processes must not impair the responsiveness of the communication. Therefore, in this paper we present a layer of a communication protocol which, due to its complexity, was implemented in a user-space process. Lower layers of the protocol are, for responsiveness reasons, implemented in the kernel. This protocol was developed to support large-scale power-line communication (PLC) with timing requirements.
Abstract:
We examine the constraints on the two Higgs doublet model (2HDM) due to the stability of the scalar potential and absence of Landau poles at energy scales below the Planck scale. We employ the most general 2HDM that incorporates an approximately Standard Model (SM) Higgs boson with a flavor aligned Yukawa sector to eliminate potential tree-level Higgs-mediated flavor changing neutral currents. Using basis independent techniques, we exhibit robust regimes of the 2HDM parameter space with a 125 GeV SM-like Higgs boson that is stable and perturbative up to the Planck scale. Implications for the heavy scalar spectrum are exhibited.
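For orientation, the commonly quoted tree-level bounded-from-below conditions of the 2HDM scalar potential in the special case λ₆ = λ₇ = 0 are recalled below in standard notation; the paper's basis-independent analysis is more general than this sketch.

```latex
% Tree-level stability (bounded-from-below) conditions of the 2HDM potential
% in the special case \lambda_6 = \lambda_7 = 0 (standard quartic-coupling
% notation assumed; the paper works with basis-independent quantities).
\begin{align}
  \lambda_1 &> 0, \qquad \lambda_2 > 0, \\
  \lambda_3 &> -\sqrt{\lambda_1 \lambda_2}, \\
  \lambda_3 + \lambda_4 - |\lambda_5| &> -\sqrt{\lambda_1 \lambda_2}.
\end{align}
```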
Abstract:
A thesis submitted in fulfilment of the requirements for the Degree of Doctor of Philosophy in Sanitary Engineering in the Faculty of Sciences and Technology of the New University of Lisbon
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
BACKGROUND: Wireless capsule endoscopy has been introduced as an innovative, non-invasive diagnostic technique for evaluation of the gastrointestinal tract, reaching places that conventional endoscopy cannot. However, the output of this technique is an 8-hour video, whose analysis by the expert physician is very time-consuming. Thus, a computer-assisted diagnosis tool to help physicians evaluate CE exams faster and more accurately is an important technical challenge and an excellent economic opportunity. METHOD: The set of features proposed in this paper to code textural information is based on statistical modeling of second-order textural measures extracted from co-occurrence matrices. To cope with both joint and marginal non-Gaussianity of the second-order textural measures, higher-order moments are used. These statistical moments are taken from the two-dimensional color-scale feature space, where two different scales are considered. Second- and higher-order moments of the textural measures are computed from co-occurrence matrices of images synthesized by applying the inverse wavelet transform to a wavelet decomposition that retains only the selected scales, for each of the three color channels. The dimensionality of the data is reduced by using Principal Component Analysis. RESULTS: The proposed textural features are then used as the input of a classifier based on artificial neural networks. Classification performances of 93.1% specificity and 93.9% sensitivity are achieved on real data. These promising results open the path towards a deeper study regarding the applicability of this algorithm in computer-aided diagnosis systems to assist physicians in their clinical practice.
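As a minimal illustration of the second-order textural measures mentioned above, the sketch below computes a gray-level co-occurrence matrix and two classical statistics for a single channel and displacement; it omits the paper's wavelet scale selection, color channels, higher-order moments, and PCA.

```python
import numpy as np

def cooccurrence_matrix(img, levels=16, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one displacement (dx, dy).

    img is a 2-D array already quantized to integer values in [0, levels).
    """
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    return glcm / glcm.sum()

def contrast(glcm):
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))

def homogeneity(glcm):
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm / (1.0 + np.abs(i - j))))

# Usage: quantize an image channel to 16 gray levels, then compute the measures.
img = (np.random.rand(64, 64) * 16).astype(int)   # placeholder image
p = cooccurrence_matrix(img, levels=16)
print(contrast(p), homogeneity(p))
```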