913 results for contour enhancement
Abstract:
The present work derives motivation from the so-called surface/interfacial magnetism in core-shell structures. Commercial samples of Fe3O4 and γ-Fe2O3 with sizes ranging from 20 to 30 nm were coated with polyaniline using plasma polymerization and studied. High-resolution transmission electron microscopy images indicate a core-shell structure after polyaniline coating, and the coated particles exhibited an increase in saturation magnetization of 2 emu/g. For confirmation, plasma polymerization was performed on maghemite nanoparticles, which also exhibited an increase in saturation magnetization. This enhanced magnetization is rather surprising, and its origin is found to be an interfacial phenomenon resulting from a contact potential.
Abstract:
The paper summarizes the design and implementation of a quadratic edge-detection filter, based on the Volterra series, for enhancing calcifications in mammograms. The proposed filter can account for much of the polynomial nonlinearity inherent in the input mammogram image and can replace conventional edge detectors such as the Laplacian, Gaussian, etc. The filter gives rise to improved visualization and earlier detection of microcalcifications, which, if left undetected, can lead to breast cancer. The performance of the filter is analyzed and found superior to that of conventional spatial edge detectors.
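A second-order (quadratic) Volterra filter of the kind the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual filter: the kernels `h1` (linear) and `h2` (quadratic) are placeholders that a designer would fit to the mammogram data, and the output at each pixel is the linear term plus a quadratic form over a 3x3 window.

```python
import numpy as np

def quadratic_volterra_edge(img, h1, h2):
    """Second-order Volterra filter over a 3x3 neighbourhood.

    y(m, n) = h1 . x  +  x^T h2 x,
    where x is the 9-element vector of pixels in the 3x3 window
    centred at (m, n); h1 is length-9, h2 is 9x9.
    """
    H, W = img.shape
    out = np.zeros((H, W))
    pad = np.pad(img.astype(float), 1, mode="edge")
    for m in range(H):
        for n in range(W):
            x = pad[m:m + 3, n:n + 3].ravel()  # 3x3 window as a vector
            lin = h1 @ x                       # linear (1st-order) term
            quad = x @ h2 @ x                  # quadratic (2nd-order) term
            out[m, n] = lin + quad
    return out
```

With `h2` set to zero the filter reduces to an ordinary linear edge detector (e.g. a Laplacian if `h1` is the flattened Laplacian kernel); a nonzero `h2`, for instance a small multiple of the identity, adds the polynomial nonlinearity the abstract refers to.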
Abstract:
The Towed Array electronics is a multi-channel, simultaneous, real-time, high-speed data acquisition system. Since its assembly is highly manpower-intensive, the costs of arrays are prohibitive, and any attempt to reduce manufacturing, assembly, testing and maintenance costs is therefore a welcome proposition. The Network Based Towed Array is an innovative concept, and its implementation has remarkably simplified fabrication, assembly and testing and revolutionised the towed-array scenario. The focus of this paper is to give a good insight into the reliability aspects of the Network Based Towed Array. A case study comparing the conventional array with the network based towed array is also presented.
Abstract:
In a leading service economy like India, services lie at the very center of economic activity. Competitive organizations now look not only at skills and knowledge but also at the behavior an employee needs to be successful on the job. Emotionally competent employees can effectively deal with occupational stress and maintain psychological well-being. This study explores the scope of the first two formants and jitter for assessing seven common emotional states present in natural English speech. The k-means method was used to classify emotional speech as neutral, happy, surprised, angry, disgusted or sad. The classification accuracy obtained using raw jitter was more than 65 percent for happy and sad but lower for the others. The overall classification accuracy was 72% in the case of preprocessed jitter. The experimental study was conducted on 1664 English utterances from 6 female speakers. This is a simple, interesting and proactive method for employees from varied backgrounds to become aware of their own communication styles as well as those of their colleagues and customers, and is therefore socially beneficial. It is also inexpensive, as it requires only a computer. Since knowledge of sophisticated software or signal processing is not necessary, the analysis is easy to perform.
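A toy version of the k-means grouping over per-utterance feature vectors might look like the sketch below. The feature layout (first formant F1 in Hz, second formant F2 in Hz, jitter in percent) and all the numeric values are invented for illustration; they are not data from the study.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means; deterministic init from evenly spaced samples."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float).copy()
    for _ in range(iters):
        # assign each feature vector to its nearest centre
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):          # leave empty clusters unchanged
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# hypothetical (F1 Hz, F2 Hz, jitter %) vectors for two emotional states
feats = np.array([[300., 2300., 0.40], [310., 2290., 0.50], [305., 2310., 0.45],
                  [600., 1700., 1.20], [590., 1710., 1.30], [610., 1690., 1.10]])
labels, _ = kmeans(feats, k=2)
```

In practice the features would be standardized first (z-scored), since otherwise the large numeric range of F2 dominates the Euclidean distance and the jitter dimension contributes almost nothing.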
Abstract:
We enhance photographs shot in dark environments by combining a picture taken with the available light and one taken with the flash. We preserve the ambiance of the original lighting and insert the sharpness from the flash image. We use the bilateral filter to decompose the images into detail and large scale. We reconstruct the image using the large scale of the available lighting and the detail of the flash. We detect and correct flash shadows. This combines the advantages of available illumination and flash photography.
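A minimal grayscale sketch of this decompose-and-recombine step is given below: a brute-force bilateral filter extracts the large scale of the ambient image and the detail layer of the flash image in the log domain, and the two are recombined. The shadow detection and correction described above are omitted, the σ values are illustrative, and edges are handled by wrap-around for brevity.

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.4, radius=4):
    """Brute-force bilateral filter on a grayscale float image.

    np.roll wraps around at the borders -- acceptable for a sketch,
    not for production code.
    """
    out = np.zeros_like(img)
    wsum = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(img, (dy, dx), axis=(0, 1))
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += w * shifted
            wsum += w
    return out / wsum

def fuse(ambient, flash, eps=1e-4):
    """Large scale of the ambient image + detail layer of the flash image."""
    la, lf = np.log(ambient + eps), np.log(flash + eps)
    large_ambient = bilateral(la)       # keeps the original lighting ambiance
    detail_flash = lf - bilateral(lf)   # sharp detail from the flash shot
    return np.exp(large_ambient + detail_flash) - eps
```

Working in the log domain makes the detail layer a multiplicative correction, which is the usual choice for intensity images because illumination is multiplicative.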
Abstract:
This paper describes a method for extracting the most relevant contours of an image. The proposed method integrates local contour information from the chromatic components H, S and I, taking into account a criterion of coherence among the local contour orientation values obtained from each of these components. The process is based on parametrizing, pixel by pixel, the local contours (magnitude and orientation values) of the H, S and I images. This is carried out individually for each chromatic component. If the dispersion of the orientation values obtained for a chromatic component is high, that component loses relevance. A final processing step integrates the extracted contours of the three chromatic components, generating the so-called integrated contours image.
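One way to read the weighting scheme is sketched below, under our own assumptions rather than the authors' implementation: each chromatic component gets a weight equal to the coherence of its edge orientations (the resultant length of the doubled angles), so a component with highly dispersed orientations loses relevance in the integrated contour image.

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x
KY = KX.T                                                   # Sobel y

def conv2(img, k):
    """3x3 correlation with edge padding."""
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros((H, W))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * pad[dy:dy + H, dx:dx + W]
    return out

def integrated_contours(channels, mag_thresh=1e-3):
    """Combine per-channel gradient magnitudes, each weighted by the
    coherence of that channel's edge orientations."""
    mags, weights = [], []
    for c in channels:
        gx, gy = conv2(c, KX), conv2(c, KY)
        mag = np.hypot(gx, gy)
        theta = np.arctan2(gy, gx)[mag > mag_thresh]
        # resultant length of doubled angles: 1 = coherent, 0 = fully dispersed
        coh = float(np.abs(np.exp(2j * theta).mean())) if theta.size else 0.0
        mags.append(mag)
        weights.append(coh)
    total = sum(weights) + 1e-9
    return sum(w * m for w, m in zip(weights, mags)) / total
```

Doubling the angles before averaging is the standard trick for axial data, so that orientations θ and θ + π (the same edge direction) reinforce rather than cancel.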
Abstract:
In image segmentation, clustering algorithms are very popular because they are intuitive and, some of them, easy to implement. For instance, the k-means is one of the most used in the literature, and many authors successfully compare their new proposals with the results achieved by the k-means. However, it is well known that clustering-based image segmentation has many problems. For instance, the number of regions of the image has to be known a priori, and different initial seed placements (initial clusters) can produce different segmentation results. Most of these algorithms could be slightly improved by considering the coordinates of the image as features in the clustering process (to take spatial region information into account). In this paper we propose a significant improvement of clustering algorithms for image segmentation. The method is qualitatively and quantitatively evaluated over a set of synthetic and real images and compared with classical clustering approaches. Results demonstrate the validity of the new approach.
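The coordinate-as-feature idea from the abstract can be sketched like this. It is our own minimal illustration, not the paper's algorithm; in particular, the `spatial_weight` scaling that balances intensity against position is an assumption.

```python
import numpy as np

def segment(img, k=2, spatial_weight=0.5, iters=30):
    """k-means over (intensity, x, y) features; returns an HxW label image."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    feats = np.stack([img.ravel().astype(float),
                      spatial_weight * xs.ravel() / W,   # normalised column
                      spatial_weight * ys.ravel() / H],  # normalised row
                     axis=1)
    # deterministic init: evenly spaced pixels (simple, not k-means++)
    centers = feats[np.linspace(0, len(feats) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels.reshape(H, W)
```

A larger `spatial_weight` favours compact, connected regions; setting it to zero recovers plain intensity clustering with no spatial information.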
Systematic literature review: the effect of injectable fillers in the periorbital region
Abstract:
Introduction: Current knowledge of the pathophysiology of periorbital aging justifies the use of injectable filler materials, since they focus on restoring the volume lost in this area, making them an excellent alternative to surgical procedures that remove excess tissue. However, the effects and safety of this emerging therapeutic trend are not yet supported by a solid scientific base. The objective of this review is to identify the most suitable injectable filler material for the management of aesthetic volumetric defects of the periorbital region. Methods: An exhaustive search was conducted of indexed articles published from January 1, 2000 to September 30, 2013 in several electronic databases; fourteen publications were selected, information on demographics, intervention, follow-up and outcomes was extracted, and the 14 studies that met the criteria were analyzed. Results: All the included articles had a low level of evidence and grade of recommendation. All filler materials were associated with high levels of patient satisfaction, adequate improvement in aesthetic appearance and similar side effects; hyaluronic acid was the injectable filler material most frequently used in the periorbital region. Discussion: Injectable filler materials improve aesthetic volumetric defects of the periorbital region, but further evidence is needed to determine the most appropriate type of filler for this condition.
Abstract:
The oxidation of amorphous silicon (a-Si) nanoparticles grown by plasma-enhanced chemical vapor deposition was investigated. Their hydrogen content has a great influence on the oxidation rate at low temperature. When the mass gain is recorded during a heating ramp in dry air, a low-temperature oxidation process is identified with an onset around 250°C. This onset temperature is similar to that of hydrogen desorption. It is shown that the oxygen uptake during this process is almost equal to the number of hydrogen atoms present in the nanoparticles. To explain this correlation, we propose that oxidation at low temperature is triggered by the process of hydrogen desorption.
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through living. Nowadays, modelling the behaviour of our brain is a fiction, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling the stereopsis of humans by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done in the future. This fact allows us to affirm that this topic is one of the most interesting in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projective points in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologue points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem entirely, as many considerations have to be taken into account. As an example, we have to consider points without correspondence due to a surface occlusion or simply due to a projection out of the camera's field of view.
The interest of the thesis is focused on structured light, which has been considered one of the most frequently used techniques for reducing the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and its projection as captured by an image sensor. The deformations between the pattern projected onto the scene and the one captured by the camera permit obtaining three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces the use of computationally hard algorithms to search for the correct matches. In recent years, another structured light technique has increased in importance. This technique is based on the codification of the light projected onto the scene so that it can be used as a tool to obtain a unique match. As each token of light imaged by the camera is labelled, we have to read the label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey of coded structured light, are presented and discussed. The work carried out in the frame of this thesis has permitted the presentation of a new coded structured light pattern which solves the correspondence problem uniquely and robustly: uniquely, because each token of light is coded by a different word, which removes the problem of multiple matching; robustly, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of 3D measurement of static objects, as well as the more complicated measurement of moving objects.
The technique can be used in both cases, as the pattern is coded in a single projection shot, so it can be used in several applications of robot vision. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we start from the assumption that the correspondence points can be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from a) image acquisition; b) image enhancement, filtering and processing; and c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
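The reconstruction step described above, recovering a 3D point from its two corresponding image points once both camera models are known, can be sketched with linear (DLT) triangulation. The projection matrices below are toy examples, not calibrated values from the thesis.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point from two views.

    P1, P2 : 3x4 camera projection matrices obtained by calibration
    x1, x2 : (u, v) image coordinates of the pair of homologue points
    """
    # each view contributes two linear constraints on the homogeneous point
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)     # null vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]             # back to inhomogeneous coordinates

# toy setup: identical intrinsics, second camera shifted one unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0))  # projections of (0, 0, 5)
```

With noisy correspondences the SVD gives the least-squares solution of the overdetermined system, which is why this linear method is the usual starting point before any nonlinear refinement.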
Abstract:
This paper discusses a study to investigate the effectiveness of collagen splints for the enhancement of regeneration of the peripheral portion of the eighth nerve.
Abstract:
This article describes a novel algorithmic development extending the contour-advective semi-Lagrangian model to include nonconservative effects. The Lagrangian contour representation of finescale tracer fields, such as potential vorticity, allows a conservative, nondiffusive treatment of sharp gradients, permitting very high numerical Reynolds numbers. It has been widely employed in accurate geostrophic turbulence and tracer advection simulations. In the present, diabatic version of the model, the constraint of conservative dynamics is overcome by including a parallel Eulerian field that absorbs the nonconservative (diabatic) tendencies. The diabatic buildup in this Eulerian field is limited through regular, controlled transfers of this field to the contour representation. This transfer is done with a fast, newly developed contouring algorithm. The model has been implemented for several idealized geometries. In this paper a single-layer doubly periodic geometry is used to demonstrate the validity of the model. The present model converges faster than analogous semi-Lagrangian models as resolution increases. At the same nominal spatial resolution the new model is 40 times faster than the analogous semi-Lagrangian model. Results of an orographically forced idealized storm track show a nontrivial dependency of storm-track statistics on resolution and on the numerical model employed. If this result is more generally applicable, it may have important consequences for future high-resolution climate modeling.
Abstract:
A coupled ocean–atmosphere general circulation model is used to investigate the modulation of El Niño–Southern Oscillation (ENSO) variability due to a weakened Atlantic thermohaline circulation (THC). The THC weakening is induced by freshwater perturbations in the North Atlantic, and leads to a well-known sea surface temperature dipole and a southward shift of the intertropical convergence zone (ITCZ) in the tropical Atlantic. Through atmospheric teleconnections and local coupled air–sea feedbacks, a meridionally asymmetric mean state change is generated in the eastern equatorial Pacific, corresponding to a weakened annual cycle, and westerly anomalies develop over the central Pacific. The westerly anomalies are associated with anomalous warming of SST, causing an eastward extension of the west Pacific warm pool particularly in August–February, and enhanced precipitation. These and other changes in the mean state lead in turn to an eastward shift of the zonal wind anomalies associated with El Niño events, and a significant increase in ENSO variability. In response to a 1-Sv (1 Sv ≡ 10⁶ m³ s⁻¹) freshwater input in the North Atlantic, the THC slows down rapidly and weakens by 86% over years 50–100. The Niño-3 index standard deviation increases by 36% during the first 100-yr simulation relative to the control simulation. Further analysis indicates that the weakened THC leads not only to stronger ENSO variability but also to a stronger asymmetry between El Niño and La Niña events. This study suggests a role for an atmospheric bridge that rapidly conveys the influence of the Atlantic Ocean to the tropical Pacific, and indicates that fluctuations of the THC can modulate not only the global mean climate but also interannual variability. The results may contribute to understanding both the multidecadal variability of ENSO activity during the twentieth century and longer time-scale variability of ENSO, as suggested by some paleoclimate records.