913 results for segmentation and reverberation


Relevance:

90.00%

Publisher:

Abstract:

Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consumes significant amounts of energy. Even though servers become more energy efficient with various energy-saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, as the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves switch throughput by avoiding the padding bits of SAR. Second, since certain resource demands of a VM are bursty and demonstrate a stochastic nature, to satisfy both deterministic and stochastic demands in VM placement, we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm. M3SBP calculates an equivalent deterministic value for the stochastic demands, and maximizes the minimum resource utilization ratio of each server. Third, to provide the necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow.
Finally, although DCNs are typically provisioned with full bisection bandwidth, DCN traffic demonstrates fluctuating patterns, so we propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme utilizes a unified representation method that converts the VM placement problem to a routing problem and employs depth-first and best-fit search to find efficient paths for flows.
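The M3SBP idea named above - reduce each stochastic demand to an equivalent deterministic value, then place VMs so that server loads stay balanced - can be sketched as follows. The Gaussian demand model, the overflow-probability lookup, and the greedy balancing loop are illustrative assumptions, not the dissertation's actual algorithm.

```python
def equivalent_demand(mean, std, overflow_prob=0.05):
    # Equivalent deterministic value under an assumed Gaussian demand:
    # mean + z * std, where z is the standard-normal quantile for the
    # tolerated overflow probability (small lookup for illustration).
    z = {0.05: 1.645, 0.01: 2.326}[overflow_prob]
    return mean + z * std

def place_max_min(vms, servers):
    # Greedy balancing sketch: put each VM (vm -> deterministic demand)
    # on the least-loaded server (server -> capacity) that still has
    # room, keeping per-server loads as even as possible - a stand-in
    # for M3SBP's max-min utilization objective.
    loads = {s: 0.0 for s in servers}
    placement = {}
    for vm, demand in vms.items():
        feasible = [s for s in servers if loads[s] + demand <= servers[s]]
        if not feasible:
            raise ValueError("no feasible server for " + vm)
        best = min(feasible, key=lambda s: loads[s])
        loads[best] += demand
        placement[vm] = best
    return placement, loads
```

A bursty demand with mean 10 and standard deviation 2, for example, is treated as a deterministic demand of about 13.3 at a 5% overflow bound.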

Relevance:

80.00%

Publisher:

Abstract:

Recently, Adams and Bischof (1994) proposed a novel region growing algorithm for segmenting intensity images. The inputs to the algorithm are the intensity image and a set of seeds - individual points or connected components - that identify the individual regions to be segmented. The algorithm grows these seed regions until all of the image pixels have been assimilated. Unfortunately, the algorithm is inherently dependent on the order of pixel processing. This means, for example, that raster-order processing and anti-raster-order processing do not, in general, lead to the same tessellation. In this paper we propose an improved seeded region growing algorithm that retains the advantages of the Adams and Bischof algorithm - fast execution, robust segmentation, and no tuning parameters - but is pixel-order independent. (C) 1997 Elsevier Science B.V.
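The seeded-region-growing mechanism described above can be sketched with a priority queue: unlabelled pixels are assimilated in order of similarity to the adjacent region's current mean rather than in raster order. This is only an illustrative sketch of the general mechanism; the paper's exact rules for ties and boundary pixels, which make it fully order-independent, are not reproduced here.

```python
import heapq

def seeded_region_growing(image, seeds):
    # image: 2D grid (list of lists of intensities).
    # seeds: nonzero label -> list of (row, col) seed pixels.
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    sums, counts = {}, {}
    heap, tick = [], 0
    for lab, pts in seeds.items():
        sums[lab], counts[lab] = 0.0, 0
        for (r, c) in pts:
            labels[r][c] = lab
            sums[lab] += image[r][c]
            counts[lab] += 1
    def push_neighbours(r, c, lab):
        # Enqueue unlabelled 4-neighbours, keyed by distance to the
        # region's current mean; tick keeps heap entries comparable.
        nonlocal tick
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and labels[rr][cc] == 0:
                delta = abs(image[rr][cc] - sums[lab] / counts[lab])
                tick += 1
                heapq.heappush(heap, (delta, tick, rr, cc, lab))
    for lab, pts in seeds.items():
        for (r, c) in pts:
            push_neighbours(r, c, lab)
    while heap:
        _, _, r, c, lab = heapq.heappop(heap)
        if labels[r][c]:          # already assimilated via another path
            continue
        labels[r][c] = lab
        sums[lab] += image[r][c]
        counts[lab] += 1
        push_neighbours(r, c, lab)
    return labels
```

On a small two-intensity image with one seed per region, the low-intensity pixels end up in one region and the high-intensity pixels in the other, regardless of raster order.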

Relevance:

80.00%

Publisher:

Abstract:

We performed MRI examinations to determine the water diffusion tensor in the brain of six patients who were admitted to the hospital within 12 h after the onset of cerebral ischemic symptoms. The examinations were carried out immediately after admission, and thereafter at varying intervals up to 90 days post-admission. Maps of the trace of the diffusion tensor, the fractional anisotropy and the lattice index, as well as maps of cerebral blood perfusion parameters, were generated to quantitatively assess the character of the water diffusion tensor in the infarcted area. In patients with significant perfusion deficits and substantial lesion volume changes (four of six cases), our measurements show a monotonic and significant decrease in the diffusion anisotropy within the ischemic lesion as a function of time. We propose that retrospective analysis of this quantity, in combination with brain tissue segmentation and cerebral perfusion maps, may be used in future studies to assess the severity of the ischemic event. (C) 1999 Elsevier Science Inc.
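Two of the scalar maps named above have standard per-voxel definitions in terms of the diffusion tensor's eigenvalues; a minimal sketch (textbook formulas for trace and fractional anisotropy, not code from this study):

```python
import math

def trace_and_fa(l1, l2, l3):
    # Trace and fractional anisotropy (FA) of a diffusion tensor from
    # its three eigenvalues. FA ranges from 0 (isotropic diffusion) to
    # 1 (diffusion confined to a single direction).
    tr = l1 + l2 + l3
    mean = tr / 3.0
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    den = l1 * l1 + l2 * l2 + l3 * l3
    fa = math.sqrt(1.5 * num / den) if den > 0 else 0.0
    return tr, fa
```

A fully isotropic tensor (equal eigenvalues) gives FA = 0; a tensor with a single nonzero eigenvalue gives FA = 1, which is why decreasing anisotropy in a lesion shows up as FA falling over time.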

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE To examine cortical thickness and volumetric changes in the cortex of patients with polymicrogyria, using an automated image analysis algorithm. METHODS Cortical thickness of patients with polymicrogyria was measured using magnetic resonance imaging (MRI) cortical surface-based analysis and compared with that of age- and sex-matched healthy subjects. We studied 3 patients with disorder of cortical development (DCD), classified as polymicrogyria, and 15 controls. Two experienced neuroradiologists performed a conventional visual assessment of the MRIs. The same data were analyzed using an automated algorithm for tissue segmentation and classification. Group and individual average maps of cortical thickness differences were produced by cortical surface-based statistical analysis. RESULTS Patients with polymicrogyria showed increased cortical thickness in the same areas identified as abnormal by the radiologists. We also identified a reduction in the volume and thickness of cortex within additional areas of apparently normal cortex relative to controls. CONCLUSIONS Our findings indicate that there may be regions of reduced cortical thickness, which appear normal on radiological analysis, in the cortex of patients with polymicrogyria. This finding suggests that alterations in neuronal migration may have an impact on the formation of cortical areas that are visually normal. These areas are associated with, or occur concurrently with, polymicrogyria.

Relevance:

80.00%

Publisher:

Abstract:

Spondylocostal dysostosis (SCD) is a genetic disorder characterized by vertebral segmentation and formation defects associated with changes of the ribs. Autosomal dominant and recessive modes of inheritance have been reported. Methylmalonic aciduria (MMA) is an inborn error of propionate or cobalamin metabolism; it is an autosomal recessive disorder and one of the most frequent forms of branched-chain organic aciduria. Here we report on the case of a Brazilian boy with both diseases. To our knowledge, it is the first case in the literature with the occurrence of both SCD and MMA - the former a skeletal disease and the latter an inborn error of metabolism.

Relevance:

80.00%

Publisher:

Abstract:

The comparative method, the inference of biological processes from phylogenetic patterns, is founded on the reliability of the phylogenetic tree. In attempting to apply the comparative method to the understanding of the evolution of parasitism in the phylum Platyhelminthes, we have highlighted several points we consider to be of value, along with many problems. We discuss four of these topics. Firstly, we view the group at a phylum level, in particular discussing the importance of establishing the sister taxon to the obligate parasite group, the Neodermata, for addressing such questions as the monophyly, parasitism, or the endo- or ectoparasitic nature of the early parasites. The variety of non-congruent phylogenetic trees presented so far, utilising either or both morphological and molecular data, gives rise to the suggestion that any evolutionary scenarios presented at this stage be treated as interesting hypotheses rather than well-supported theories. Our second point of discussion is the conflict between morphological and molecular estimates of monogenean evolution. The Monogenea presents several well-established morphological autapomorphies, such that morphology consistently estimates the group as monophyletic, whereas molecular sequence analyses indicate paraphyly, with different genes giving different topologies. We discuss the problem of reconciling gene and species trees. Thirdly, we use recent phylogenetic results on the tapeworms to interpret the evolution of strobilation, proglottization, segmentation and scolex structure. In relation to the latter, the results presented indicate that the higher cestodes are diphyletic, with one branch difossate and the other tetrafossate. Finally, we use a SSU rDNA phylogenetic tree of the Trematoda as a basis for the discussion of an aspect of the digenean life-cycle, namely the nature of the first intermediate host.
Frequent episodes of host-switching, between gastropod and bivalve hosts or even into annelids, are indicated.

Relevance:

80.00%

Publisher:

Abstract:

Background: An accurate percutaneous puncture is essential for the disintegration and removal of renal stones. Although this procedure has proven to be safe, organs surrounding the renal target can be accidentally perforated. This work describes a new intraoperative framework in which tracked surgical tools are superimposed within 4D ultrasound imaging for safety assessment of the percutaneous puncture trajectory (PPT). Methods: A PPT is first generated from the skin puncture site towards an anatomical target, using the information retrieved by electromagnetic motion tracking sensors coupled to the surgical tools. Then, 2D ultrasound images acquired with a tracked probe are used to reconstruct a 4D ultrasound volume around the PPT under GPU processing. Volume hole-filling was performed at different processing time intervals by a tri-linear interpolation method. At spaced time intervals, the anatomical structures in the volume were segmented to ascertain whether any vital structure lies within the PPT and might compromise surgical success. To enhance the visualization of the reconstructed structures, different render transfer functions were used. Results: Real-time US volume reconstruction and rendering at more than 25 frames/s was only possible when rendering three orthogonal slice views. Using the whole reconstructed volume, 8-15 frames/s were achieved, and 3 frames/s were reached when segmentation and detection of structures intersecting the PPT were introduced. Conclusions: The proposed framework creates a virtual and intuitive platform that can be used to identify and validate a PPT so that the puncture in percutaneous nephrolithotomy can be performed safely and accurately.
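Tri-linear interpolation, as used above for hole-filling between reconstructed US slices, blends the eight voxels surrounding a continuous coordinate. A minimal CPU sketch on a dense nested-list volume (the actual GPU implementation is not described in the abstract; voxel layout `vol[x][y][z]` is an assumption for illustration):

```python
def trilinear(vol, x, y, z):
    # Tri-linear interpolation at continuous voxel coordinates (x, y, z)
    # in a dense 3D volume stored as nested lists, vol[x][y][z].
    x0, y0, z0 = int(x), int(y), int(z)
    x1 = min(x0 + 1, len(vol) - 1)
    y1 = min(y0 + 1, len(vol[0]) - 1)
    z1 = min(z0 + 1, len(vol[0][0]) - 1)
    fx, fy, fz = x - x0, y - y0, z - z0
    v = lambda i, j, k: vol[i][j][k]
    # Interpolate along x, then y, then z.
    c00 = v(x0, y0, z0) * (1 - fx) + v(x1, y0, z0) * fx
    c01 = v(x0, y0, z1) * (1 - fx) + v(x1, y0, z1) * fx
    c10 = v(x0, y1, z0) * (1 - fx) + v(x1, y1, z0) * fx
    c11 = v(x0, y1, z1) * (1 - fx) + v(x1, y1, z1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

A hole voxel halfway between a slice of zeros and a slice of ones, for instance, is filled with 0.5.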

Relevance:

80.00%

Publisher:

Abstract:

ABSTRACT: The customer value construct is the one that best explains consumer behavior, since its purpose is to understand how consumers translate the attributes and consequences of the use of a product into relevant personal values. The laddering methodology, which is based on the theory of means-end chains, is a theoretical element considered consistent for establishing the relationship between attributes and consumer values. This thesis attempts to demonstrate the feasibility of the laddering methodology in studies of customer value, identifying the advantages and limitations of its use. It is conclusive that this method, by building A-C-V chains, provides elements of study that allow the visualization of the value hierarchy produced by consumers, according to their choice criteria during and after a purchase process. The applicability of this methodology from the perspective of customer value allows the use of its results in a number of specific areas of marketing, among which we highlight market segmentation and analysis, evaluation of product and brand positioning, evaluation of advertising, and the development of communication strategies.

Relevance:

80.00%

Publisher:

Abstract:

The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of the ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers through a minimally intrusive 1-lead ECG setup, using Ag/AgCl electrodes without gel as the interface with the skin. The collected signal is significantly noisier than ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time-domain ECG signal processing is performed, comprising the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum-distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications.
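The normalization and matching steps described above can be sketched as follows. The fixed beat length, linear resampling scheme, and Euclidean metric are illustrative assumptions; the paper specifies only amplitude/time normalization and a minimum-distance criterion.

```python
def normalize(beat, target_len=8):
    # Amplitude-normalize a segmented heartbeat to [0, 1], then
    # time-normalize it to a fixed number of samples by linear
    # resampling (a sketch of the two normalization steps).
    lo, hi = min(beat), max(beat)
    amp = [(s - lo) / (hi - lo) for s in beat] if hi > lo else [0.0] * len(beat)
    out = []
    for i in range(target_len):
        t = i * (len(amp) - 1) / (target_len - 1)
        j = int(t)
        f = t - j
        nxt = amp[min(j + 1, len(amp) - 1)]
        out.append(amp[j] * (1 - f) + f * nxt)
    return out

def identify(test_beat, enrolled):
    # Minimum-distance identification: return the enrolled subject
    # whose normalized template is closest (Euclidean) to the test
    # pattern. `enrolled` maps subject name -> raw heartbeat.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    pattern = normalize(test_beat)
    return min(enrolled, key=lambda s: dist(pattern, normalize(enrolled[s])))
```

Because of the amplitude normalization, a beat that differs from an enrolled template only by a gain factor still matches that subject.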

Relevance:

80.00%

Publisher:

Abstract:

This study reports the embryogenesis of T. infestans (Hemiptera, Reduviidae). Morphological parameters of the growth sequence from oviposition until hatching (12-14 days at 28ºC) were established. Five periods, expressed as percentages of the time of development (TD), were characterized from oviposition until hatching. The most important morphological features were: 1) formation of the blastoderm within 7% of TD; 2) germ band and gastrulation within 30% of TD; 3) nerve cord, limb budding, thoracic and abdominal segmentation, and formation of the body cavity within 50% of TD; 4) end of nervous system development and blastokinesis, and development of the embryonic cuticle within 65% of TD; 5) differentiation of the mouth parts, fat body, and Malpighian tubules during the final stage, with completion of the embryo at day 12 to day 14, around hatching. These signals were chosen as appropriate morphological parameters which should enable the evaluation of embryologic modifications due to the action of different insecticides.

Relevance:

80.00%

Publisher:

Abstract:

In recent years, easy access, in terms of cost, to tools for the production, editing, and distribution of audiovisual content has contributed to an exponential increase in the daily production of this type of content. In this paradigm of multimedia content overabundance, a large percentage of video sequences contain explicit material, and stricter control is needed so that it is not easily accessible to minors. The concept of explicit content can be characterized in different ways; the work described in this document focuses on the automatic detection of female nudity in video sequences. This process of automatic detection and classification of adult material can be an important tool in the management of a television channel: hundreds of hours of material may be received daily, making a manual quality-control process impracticable. The solution created in the context of this dissertation was studied and developed around a specific product in the broadcasting area, the mxfSPEEDRAIL F1000, a solution from the company MOG Technologies. The main objective of the project is the development of a C++ library, accessible during the ingest process, that allows, through an analysis based on computer vision techniques, detecting and flagging in the signal's metadata which frames potentially contain explicit content. The developed solution uses a set of state-of-the-art techniques adapted to the problem at hand, including algorithms for skin segmentation and object detection in images. Finally, a critical analysis of the solution developed in the scope of this dissertation is made, so that future developments can improve its resource consumption during analysis and its success rate.
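As a rough illustration of the skin-segmentation step, one widely used explicit RGB skin rule (Kovac et al.) can flag candidate frames. Both the rule and the frame-flagging ratio are stand-ins: the dissertation's actual skin model, thresholds, and C++ implementation are not described in the abstract.

```python
def is_skin(r, g, b):
    # Explicit RGB skin rule (Kovac et al.), used here only as an
    # illustrative stand-in for the dissertation's skin model.
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_ratio(frame):
    # Fraction of skin-coloured pixels in a frame, given as a list of
    # (r, g, b) triples; frames above a tuned threshold would be
    # flagged in the metadata for review.
    hits = sum(1 for (r, g, b) in frame if is_skin(r, g, b))
    return hits / len(frame)
```

In practice such a per-pixel rule would only be the first stage, feeding the object-detection algorithms mentioned above.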

Relevance:

80.00%

Publisher:

Abstract:

Dissertation to obtain the degree of Doctor of Philosophy in Biomedical Engineering

Relevance:

80.00%

Publisher:

Abstract:

Human-Computer Interaction has been one of the main focuses of the technological community, especially the Natural User Interfaces (NUI) field of research, as, since the launch of the Kinect sensor, the goal of achieving fully natural interfaces has come a lot closer to reality. Taking advantage of these conditions, the following research work proposes to compute the hand skeleton in order to recognize Sign Language shapes. The proposed solution uses the Kinect sensor to achieve a good segmentation, and image analysis algorithms to extend the skeleton through the extraction of high-level features. In order to recognize complex hand shapes, the current research work proposes a redefinition of the hand contour that makes it invariant to translation, rotation and scaling operations, together with a set of tools to achieve good recognition. To validate the proposed solution, the Kinect Software Development Kit was extended to give the developer access to the new set of inferred points, and a template-matching-based platform was created that uses the contour to define the hand shape. This prototype was tested under a set of predefined conditions, showed a good success ratio, and has proven to be eligible for real-time scenarios.
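The idea of a contour representation invariant to translation, rotation and scaling can be sketched with a centroid-distance signature: translation and scale invariance come from centring on the centroid and normalizing by the mean distance, while rotation (a circular shift of the signature) is absorbed by matching over all shifts. This is an illustrative stand-in, not the thesis's actual contour redefinition.

```python
def signature(contour):
    # Centroid-distance signature of a closed contour (list of (x, y)
    # points): distance from the centroid to each point, divided by the
    # mean distance. Invariant to translation and scale; rotation only
    # circularly shifts the sequence.
    n = len(contour)
    cx = sum(p[0] for p in contour) / n
    cy = sum(p[1] for p in contour) / n
    d = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for (x, y) in contour]
    m = sum(d) / n
    return [v / m for v in d]

def match(sig_a, sig_b):
    # Rotation-invariant distance between two equal-length signatures:
    # smallest sum of squared differences over all circular shifts.
    n = len(sig_a)
    return min(sum((sig_a[i] - sig_b[(i + k) % n]) ** 2 for i in range(n))
               for k in range(n))
```

A template-matching platform in this spirit would compare the signature of an observed hand contour against stored shape templates and pick the smallest `match` score.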