875 results for Image acquisition and representation


Relevance: 100.00%

Abstract:

The purpose of the present study was to assess body dissatisfaction and eating symptoms in mothers of female eating disorder (ED) patients and to compare the results with those of a control group. The case group consisted of 35 mothers of female adolescents (aged between 10 and 17 years) diagnosed with an ED who attended the Interdisciplinary Project for Care, Teaching and Research on Eating Disorders in Childhood and Adolescence (PROTAD) at the Institute of Psychiatry of the Clinicas Hospital, Universidade de Sao Paulo Medical School. Demographic and socioeconomic data were collected. Eating symptoms were assessed using the Eating Attitudes Test (EAT-26), and body image was assessed with the Body Shape Questionnaire (BSQ) and the Stunkard Figure Rating Scale (FRS). The case group was compared with a control group of 35 mothers of female adolescents (aged between 10 and 17 years) who attended a private school in the city of Sao Paulo, southeastern Brazil. With regard to EAT, BSQ and FRS scores, we found no statistically significant differences between the two groups. However, we found a positive correlation between BMI and BSQ scores in the control group (but not in the case group) and a positive correlation between EAT and FRS scores in the case group (but not in the control group). It appears advantageous to assess body image by combining more than one scale so as to evaluate additional components of the construct. (Eating Weight Disord. 15: e219-e225, 2010). (C) 2010, Editrice Kurtis

Relevance: 100.00%

Abstract:

Successful classification, information retrieval and image analysis tools are intimately related to the quality of the features employed in the process. Pixel intensities, color, texture and shape are, generally, the basis from which most features are computed and used in such fields. This paper presents a novel shape-based feature extraction approach in which an image is decomposed into multiple contours and further characterized by Fourier descriptors. Unlike traditional approaches, we make use of topological knowledge to generate well-defined closed contours, which are efficient signatures for image retrieval. The method has been evaluated in the context of CBIR and image analysis. The results have shown that the multi-contour decomposition, as opposed to a single shape signature, introduced a significant improvement in discrimination power. (c) 2008 Elsevier B.V. All rights reserved.
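By way of illustration, the minimal Python sketch below computes per-contour Fourier descriptors on a binary image with OpenCV and NumPy; the paper's topological contour-extraction step is not reproduced, and the selection of contours by area and the number of coefficients are assumptions of the sketch.

# Illustrative sketch (not the paper's exact method): per-contour Fourier
# descriptors on a binary image, concatenated into one multi-contour signature.
import cv2
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """Scale/rotation/start-point-invariant Fourier descriptors of one contour."""
    pts = contour.reshape(-1, 2).astype(float)
    z = pts[:, 0] + 1j * pts[:, 1]            # complex boundary signature
    F = np.fft.fft(z)
    F = F[1:n_coeffs + 1]                      # drop DC term (translation invariance)
    mags = np.abs(F)                           # magnitudes (rotation/start-point invariance)
    return mags / (mags[0] + 1e-12)            # normalize by the fundamental (scale invariance)

def multi_contour_signature(binary_img, n_coeffs=16, max_contours=8):
    """Describe an image by the descriptors of its several largest closed contours."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:max_contours]
    descs = [fourier_descriptors(c, n_coeffs) for c in contours if len(c) > n_coeffs]
    return np.concatenate(descs) if descs else np.zeros(n_coeffs)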

Relevance: 100.00%

Abstract:

Traditional content-based image retrieval (CBIR) systems use low-level features such as the colors, shapes, and textures of images. However, users make queries based on semantics, which are not easily related to such low-level characteristics. Recent work on CBIR confirms that researchers have been trying to map visual low-level characteristics to high-level semantics. The relation between low-level characteristics and image textual information motivated this article, which proposes a model for the automatic classification and categorization of words associated with images. The proposal considers a self-organizing neural network architecture, which classifies textual information without previous learning. Experimental results compare the performance of the text-based approach with that of an image retrieval system based on low-level features. (c) 2008 Wiley Periodicals, Inc.
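As a rough illustration of the idea, the sketch below trains a generic self-organizing map (SOM) in NumPy and maps word vectors to grid cells; the grid size, learning schedule and feature representation are assumptions of the sketch, not the architecture used in the article.

# Minimal SOM sketch: unsupervised clustering of word vectors onto a 2-D grid.
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                            # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5                # shrinking neighbourhood
        for x in rng.permutation(data):
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), (h, w))   # best-matching unit
            d2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
            nbh = np.exp(-d2 / (2 * sigma ** 2))               # neighbourhood function
            weights += lr * nbh[..., None] * (x - weights)
    return weights

def map_word(weights, vec):
    """Return the grid cell (category) the word vector falls into."""
    return np.unravel_index(np.argmin(np.linalg.norm(weights - vec, axis=2)),
                            weights.shape[:2])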

Relevance: 100.00%

Abstract:

The Mario Schenberg gravitational wave detector has started its commissioning phase at the Physics Institute of the University of Sao Paulo. We have collected almost 200 h of data from the instrument in order to check out its behavior and performance. We have also been developing a data acquisition system for it under a VXI System. Such a system is composed of an analog-to-digital converter and a GPS receiver for time synchronization. We have been building the software that controls and sets up the data acquisition. Here we present an overview of the Mario Schenberg detector and its data acquisition system, some results from the first commissioning run and solutions for some problems we have identified.
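A minimal sketch of the timing logic such an acquisition loop could use is shown below; read_adc_block and gps_pps_time are hypothetical stand-ins for the VXI digitizer and GPS receiver drivers, and the sample rate and block size are assumed values, not figures from the paper.

SAMPLE_RATE_HZ = 16000            # assumed digitizer rate (illustrative only)
BLOCK_SIZE = 4096                 # assumed block length (illustrative only)

def acquire(n_blocks, read_adc_block, gps_pps_time):
    """Tag each block of samples with a GPS-disciplined start time."""
    records = []
    for _ in range(n_blocks):
        t0 = gps_pps_time()                     # GPS time of the latest PPS edge
        samples = read_adc_block(BLOCK_SIZE)    # blocking read from the ADC
        # Sample k in this block corresponds to time t0 + k / SAMPLE_RATE_HZ
        records.append({"t0": t0, "dt": 1.0 / SAMPLE_RATE_HZ, "samples": samples})
    return records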

Relevance: 100.00%

Abstract:

My dissertation focuses on dynamic aspects of coordination processes such as the reversibility of early actions, the option to delay decisions, and learning about the environment from observing other people's actions. The study proposes tractable dynamic global games in which players privately and passively learn about their actions' true payoffs and can adjust early investment decisions as new information arrives. These games are used to investigate how liquidity shocks affect the performance of a Tobin tax as a policy intended to foster coordination success (chapter 1) and the adequacy of a Tobin tax for reducing an economy's vulnerability to sudden stops (chapter 2). The dissertation then analyzes players' incentives to acquire costly information in a sequential decision setting (chapter 3).

In chapter 1, a continuum of foreign agents decide whether or not to enter an investment project. A fraction λ of them are hit by liquidity restrictions in a second period and are forced to withdraw their early investment, or are precluded from investing in the interim period, depending on the actions they chose in the first period. Players not affected by the liquidity shock are able to revise early decisions. Coordination success is increasing in aggregate investment and decreasing in the aggregate volume of capital exit. Without liquidity shocks, aggregate investment is (in a pivotal contingency) invariant to frictions such as a tax on short-term capital. In this case, a Tobin tax always increases the incidence of success. In the presence of liquidity shocks, this invariance result no longer holds in equilibrium. A Tobin tax becomes harmful to aggregate investment, which may reduce the incidence of success if the economy does not benefit enough from avoiding capital reversals. It is shown that the Tobin tax that maximizes the ex-ante probability of successfully coordinated investment is decreasing in the liquidity shock.

Chapter 2 studies the effects of a Tobin tax in the same setting as the global game model of chapter 1, except that the liquidity shock is stochastic, i.e., there is also aggregate uncertainty about the extent of the liquidity restrictions. It identifies conditions under which, in the unique equilibrium of the model with a low probability of liquidity shocks but large dry-ups, a Tobin tax is welfare improving, helping agents to coordinate on the good outcome. The model provides a rationale for a Tobin tax in economies that are prone to sudden stops. The optimal Tobin tax tends to be larger when capital reversals are more harmful and when the fraction of agents hit by liquidity shocks is smaller.

Chapter 3 focuses on information acquisition in a sequential decision game with payoff complementarity and information externality. When information is cheap relative to players' incentive to coordinate actions, only the first player chooses to process information; the second player learns about the true payoff distribution from observing the first player's decision and follows her action. Miscoordination requires that both players privately process information, which tends to happen when information is expensive and the prior knowledge about the distribution of payoffs has a large variance.
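The sketch below is a highly stylized Monte Carlo illustration of the entry/exit trade-off a Tobin tax creates in this kind of setting: a higher tax deters voluntary capital exit but also discourages initial entry. Every threshold and parameter in it is an illustrative assumption, not an equilibrium object of the dissertation's model.

# Stylized illustration only: threshold strategies, shock fraction and success
# condition are made-up placeholders, not the dissertation's derived equilibrium.
import numpy as np

def success_prob(tau, lam=0.2, n_agents=2000, n_draws=2000, seed=1):
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_draws):
        theta = rng.normal(0.5, 0.3)                          # fundamental
        x = theta + rng.normal(0, 0.1, n_agents)              # private signals
        entered = x > 0.4 + 0.5 * tau                         # tax discourages entry
        shocked = rng.random(n_agents) < lam                  # forced liquidation
        panics = entered & ~shocked & (x < 0.5 - 0.8 * tau)   # tax deters voluntary exit
        net = entered.mean() - (entered & shocked).mean() - panics.mean()
        wins += net > 1.0 - theta                             # coordination succeeds
    return wins / n_draws

# Example: sweep the tax to see the trade-off
# print([round(success_prob(t), 3) for t in (0.0, 0.05, 0.1, 0.2)])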

Relevance: 100.00%

Abstract:

The paper proposes a framework for the analysis and representation of external systems in online optimisation studies. The basis for this framework is the equivalent OPF (EOPF), an optimisation model obtained by partitioning the OPF model. The EOPF is mathematically redefined in the paper to accommodate the concept of a buffer zone. The resulting model is more useful for online optimisation, since external information obtained through inter-control-centre exchange contracts can be used to improve internal control calculation. Numerical results obtained with original studies involving the boundary-matching procedure have provided a conceptual basis for the definition of a buffer zone for optimisation studies with the EOPF. In the proposed framework, the accuracy of the external representation in optimisation studies is evaluated by comparing the controls obtained by an EOPF procedure with those obtained by the reference optimisation procedure defined in this paper. The framework is then used to evaluate the accuracy of equivalent optimisation studies involving the IEEE 118-bus test system and the Brazilian South-Southeast 810-bus system. The results show that the incorporation of a buffer zone improves the external system representation for all optimisation studies performed.
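As a minimal illustration of the evaluation step, the sketch below compares the control settings returned by an equivalent-OPF run with those of a reference optimisation; the function name and the summary statistics are assumptions of the sketch, not quantities defined in the paper.

# Compare control vectors (e.g. generator voltages/taps) from two optimisation runs.
import numpy as np

def control_deviation(u_equiv, u_ref):
    """Per-control absolute deviation plus summary statistics (same units as the controls)."""
    u_equiv, u_ref = np.asarray(u_equiv, float), np.asarray(u_ref, float)
    dev = np.abs(u_equiv - u_ref)
    return {"max": dev.max(), "mean": dev.mean(), "per_control": dev}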

Relevance: 100.00%

Abstract:

Petroleum well drilling monitoring has become an important tool for detecting and preventing problems during the drilling process. In this paper, we propose to assist the drilling process by analyzing images of cuttings at the vibrating shale shaker, where different concentrations of cuttings can indicate possible problems, such as the collapse of the borehole walls. To this end, we present an innovative computer vision system composed of a real-time cutting volume estimator based on support vector regression. As far as we know, we are the first to propose petroleum well drilling monitoring by cutting image analysis. We also applied a collection of supervised classifiers for cutting volume classification. (C) 2010 Elsevier Ltd. All rights reserved.
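A minimal sketch of the regression stage is given below, using scikit-learn's SVR on simple gray-level histogram features; the features and hyperparameters are placeholders, not the descriptors used in the paper.

# Sketch: map simple image features to a cutting-volume estimate with SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def histogram_features(gray_img, bins=32):
    """Normalized gray-level histogram of one shaker image (placeholder feature)."""
    hist, _ = np.histogram(gray_img, bins=bins, range=(0, 255), density=True)
    return hist

def train_volume_estimator(images, volumes):
    """Fit an RBF-kernel SVR from images to measured cutting volumes."""
    X = np.array([histogram_features(img) for img in images])
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X, volumes)
    return model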

Relevance: 100.00%

Abstract:

The objective of the present study, developed in a mountainous region in Brazil where many landslides occur, is to present a method for detecting landslide scars that couples image processing techniques with spatial analysis tools. An IKONOS image was initially segmented and then classified with a Bhattacharyya classifier, with an acceptance limit of 99%, resulting in 216 polygons identified as having a spectral response similar to landslide scars. Spatial analysis tools that took into account a susceptibility map, a map of local drainage channels and highways, and the maximum expected scar size in the study area were then used to exclude features misinterpreted as scars. The 43 resulting features were then compared with visually interpreted landslide scars and field observations. The proposed method can be reproduced and enhanced by adding filtering criteria, and it was able to find new scars on the image, with a final error rate of 2.3%.
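The spatial-filtering step could look roughly like the sketch below, which discards candidate polygons (shapely-style geometries) that exceed a maximum expected scar area, intersect highways, or fall in low-susceptibility terrain; the thresholds and the susceptibility interface are illustrative assumptions, not the study's calibrated values.

# Sketch of the candidate-filtering step over shapely-style polygon geometries.
def filter_scar_candidates(candidates, roads, susceptibility, max_area=20000.0):
    """candidates: polygons with .area/.intersects/.centroid; roads: geometries;
    susceptibility: callable mapping a centroid to a susceptibility class."""
    kept = []
    for poly in candidates:
        if poly.area > max_area:
            continue                                   # too large to be a scar
        if any(poly.intersects(r) for r in roads):
            continue                                   # highway surface, not a scar
        if susceptibility(poly.centroid) == "low":
            continue                                   # outside susceptible terrain
        kept.append(poly)
    return kept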

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

We have studied the effects of niobium beam filtration on absorbed doses, on image density and contrast, and on photon spectra with conventional and high-frequency dental x-ray generators. Added niobium reduced entry and superficial absorbed doses in periapical radiography by 9% to 40% with film and digital image receptors, decreased the radiation necessary to produce a given image density on E-speed film, and reduced image contrast on D- and E-speed films. As shown by increased half-value layers for aluminum, titanium, and copper and by pulse-height analyses of beam spectra, niobium increased average beam energy by 6% to 19%. Despite the benefits of added niobium for patient dose reduction and for narrowing the beam's energy spectrum, the beam can be over-hardened. Adding niobium, therefore, strikes the best balance between radiation dose reduction and beam attenuation, with its risks of increased exposure times, motion blur, and diminished image contrast, when it is used at modest thicknesses (30 μm) and at lower kVp settings (70 kVp). © 1995 Mosby-Year Book, Inc.
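For orientation, the back-of-the-envelope Beer-Lambert sketch below relates added filter thickness to transmitted intensity and to the half-value layer; the attenuation coefficient is a placeholder to be supplied by the reader, not a value reported in the study, and the monoenergetic assumption ignores the beam-hardening effects discussed above.

# Beer-Lambert transmission and half-value layer for a monoenergetic beam.
import math

def transmission(thickness_um, mu_per_um):
    """Fraction of beam intensity transmitted through a filter of given thickness."""
    return math.exp(-mu_per_um * thickness_um)

def half_value_layer(mu_per_um):
    """Thickness that halves the beam intensity for the same coefficient."""
    return math.log(2) / mu_per_um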

Relevance: 100.00%

Abstract:

The aim of this paper is to present a photogrammetric method for determining the dimensions of flat surfaces, such as billboards, based on a single digital image. A mathematical model was adapted to generate linear equations for vertical and horizontal lines in the object space. These lines are identified and measured in the image, and the rotation matrix is computed using an indirect method. The distance between the camera and the surface is measured using a lasermeter, providing the coordinates of the camera perspective center. The eccentricity of the lasermeter center relative to the camera perspective center is modeled by three translations, which are computed using a calibration procedure. Some experiments were performed to test the proposed method, and the results are within a relative error of about 1 percent for areas and distances in the object space. This accuracy fulfills the requirements of the intended applications. © 2005 American Society for Photogrammetry and Remote Sensing.
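As a simplified illustration of the underlying pinhole relation, the sketch below handles only the special case of a surface parallel to the image plane; the paper's recovery of the rotation matrix from vertical and horizontal object-space lines is not reproduced, and the numbers in the usage comment are made up.

# Fronto-parallel pinhole relation: object length from image length, focal length
# and lasermeter distance (simplified case only).
def object_length(pixel_length, pixel_size_mm, focal_length_mm, distance_m):
    """Object-space length (m) of a segment measured on the image."""
    image_length_mm = pixel_length * pixel_size_mm
    return distance_m * image_length_mm / focal_length_mm

# Example (illustrative values): an 800-px segment, 0.005 mm pixels, 35 mm lens,
# 12 m lasermeter distance gives 12 * (800 * 0.005) / 35 ≈ 1.37 m.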

Relevance: 100.00%

Abstract:

In order to simplify computer management, many system administrators are adopting advanced techniques to manage the software configuration of enterprise computer networks, but the tight coupling between hardware and software makes every PC an individually managed entity, lowering scalability and increasing the cost of managing hundreds or thousands of PCs. Virtualization is an established technology; however, its use has been focused more on server consolidation and virtual desktop infrastructure than on managing distributed computers over a network. This paper discusses the feasibility of the Distributed Virtual Machine Environment, a new approach for enterprise computer management that combines virtualization and a distributed system architecture as the basis of the management architecture. © 2008 IEEE.

Relevance: 100.00%

Abstract:

The methodology for fracture analysis of polymeric composites with scanning electron microscopes (SEM) is still under discussion. Many authors prefer to use sputter coating with a conductive material instead of applying low-voltage (LV) or variable-pressure (VP) methods, which preserve the original surfaces. The present work examines the effects of sputter coating with 25 nm of gold on the topography of carbon-epoxy composite fracture surfaces, using an atomic force microscope. The influence of SEM imaging parameters on fractal measurements is also evaluated for the VP-SEM and LV-SEM methods. It was observed that topographic measurements were not significantly affected by the gold coating at the tested scale. Moreover, changes in the SEM setup lead to nonlinear effects on texture parameters, such as fractal dimension and entropy values. For VP-SEM or LV-SEM, fractal dimension and entropy values did not present any evident relation with image quality parameters, but resolution must be optimized through the imaging setup, accompanied by charge neutralization. © Wiley Periodicals, Inc.
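A generic box-counting estimate of the fractal dimension of a binarised SEM image could be computed as in the sketch below; this is a textbook estimator, not the exact texture pipeline used in the study, and the box sizes are assumptions.

# Box-counting fractal dimension of a 2-D binary image (foreground assumed non-empty).
import numpy as np

def box_count(binary_img, size):
    """Number of size x size boxes that contain at least one foreground pixel."""
    h, w = binary_img.shape
    H, W = h - h % size, w - w % size                   # crop to a multiple of the box size
    blocks = binary_img[:H, :W].reshape(H // size, size, W // size, size)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

def fractal_dimension(binary_img, sizes=(2, 4, 8, 16, 32, 64)):
    """Slope of log N(s) versus log(1/s) over the chosen box sizes."""
    counts = np.array([box_count(binary_img, s) for s in sizes], float)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope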

Relevance: 100.00%

Abstract:

In this letter, a semiautomatic method for road extraction in object space is proposed that combines a stereoscopic pair of low-resolution aerial images with a digital terrain model (DTM) structured as a triangulated irregular network (TIN). First, we formulate an objective function in the object space to allow the modeling of roads in 3-D. In this model, the TIN-based DTM allows the search for the optimal polyline to be restricted to a narrow band overlaid on the DTM. Finally, the optimal polyline for each road is obtained by optimizing the objective function with a dynamic programming algorithm. A few seed points need to be supplied by an operator. To evaluate the performance of the proposed method, a set of experiments was designed using two stereoscopic pairs of low-resolution aerial images and a TIN-based DTM with an average resolution of 1 m. The experimental results showed that the proposed method worked properly, even when faced with anomalies along roads, such as obstructions caused by shadows and trees.
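The dynamic-programming step can be sketched as a Viterbi-style recursion over candidate positions per polyline vertex, as below; node_cost and edge_cost are placeholders standing in for the paper's objective-function terms, and the candidate generation along the narrow band is assumed to happen beforehand.

# Choose one candidate 3-D point per polyline vertex minimizing the summed cost.
import numpy as np

def optimal_polyline(candidates, node_cost, edge_cost):
    """candidates[i] is the list of candidate points for vertex i of the road polyline."""
    n = len(candidates)
    best = [np.array([node_cost(p) for p in candidates[0]])]
    back = []
    for i in range(1, n):
        costs = np.array([[best[-1][j] + edge_cost(q, p) + node_cost(p)
                           for j, q in enumerate(candidates[i - 1])]
                          for p in candidates[i]])
        back.append(costs.argmin(axis=1))          # best predecessor per current candidate
        best.append(costs.min(axis=1))
    # Backtrack the cheapest path through the candidate graph
    idx = [int(best[-1].argmin())]
    for bp in reversed(back):
        idx.append(int(bp[idx[-1]]))
    idx.reverse()
    return [candidates[i][k] for i, k in enumerate(idx)]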