986 results for Image resolution


Relevance: 60.00%

Abstract:

Comparative analysis of meteorological satellite and radar data makes it possible to correlate the precipitation structures observed in both types of images. Such analysis could extend the effective range of ground-based meteorological radars. Besides the different spatial and temporal resolutions of the two image types, this comparison is complicated by rotation and distortion effects, as well as by differing formats, projections, and coordinate systems. This work employed an approach based on a Gaussian adaptive filter to compare such images. The statistical results obtained from the image comparison are compared with those produced by other methods.
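As a rough illustration of the filtering step, the sketch below — a minimal assumption of mine, not the authors' implementation, and omitting in particular the *adaptive* part (a spatially varying kernel width) — smooths two co-registered fields with a Gaussian kernel before correlating them, which suppresses the small-scale mismatch caused by differing resolutions:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to unit sum."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_smooth(img, sigma):
    """Separable Gaussian smoothing of a 2-D field with edge padding."""
    radius = int(3 * sigma)
    k = gaussian_kernel(sigma, radius)
    pad = np.pad(img, radius, mode="edge")
    # filter along rows, then along columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def smoothed_correlation(a, b, sigma=2.0):
    """Pearson correlation of the two fields after Gaussian smoothing."""
    sa, sb = gaussian_smooth(a, sigma), gaussian_smooth(b, sigma)
    return np.corrcoef(sa.ravel(), sb.ravel())[0, 1]
```

Correlating the raw fields directly would penalize resolution differences; bringing both to a common effective resolution first is the essence of the approach.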

Relevance: 60.00%

Abstract:

Graduate Program in Cartographic Sciences - FCT

Relevance: 60.00%

Abstract:

In recent years there has been growing interest in the development of seismic imaging algorithms aimed at obtaining an image of the Earth's subsurface. Kirchhoff migration, for example, is a very efficient imaging method used to locate reflectors in the subsurface, provided the traveltimes required for the stacking step are available; in this work they are obtained by solving the eikonal equation. First, the theory of Kirchhoff depth migration based on ray theory is presented, followed by the eikonal equation, from which the traveltimes used in stacking the diffraction curves are obtained. A depth migration algorithm is then developed using the traveltimes computed from the eikonal equation. Finally, this algorithm is applied to synthetic data containing additive noise and multiples, yielding a depth-migrated seismic section. The experiments carried out in this work show that the migration algorithm developed is both efficient and effective in reconstructing the image of the reflectors.
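In a constant-velocity medium the eikonal equation has the closed-form solution t = d/v, which allows a compact, self-contained sketch of diffraction-stack (Kirchhoff) migration. The toy example below — my own minimal sketch with made-up geometry, not the thesis code; real applications solve the eikonal equation numerically in heterogeneous media — builds a zero-offset section for a single point diffractor and migrates it by summing the data along diffraction hyperbolas:

```python
import numpy as np

v = 2000.0                    # constant velocity (m/s): eikonal traveltimes are analytic
dx, dt = 10.0, 0.001
xs = np.arange(0, 1000, dx)   # surface trace positions
nt = 600
x0, z0 = 500.0, 300.0         # point diffractor location

# synthetic zero-offset section: one spike per trace on the diffraction hyperbola
data = np.zeros((len(xs), nt))
for i, x in enumerate(xs):
    t = 2.0 * np.hypot(x - x0, z0) / v          # two-way traveltime
    it = int(round(t / dt))
    if it < nt:
        data[i, it] = 1.0

# Kirchhoff migration: for each image point, stack along its diffraction curve
zgrid = np.arange(50, 500, dx)
image = np.zeros((len(xs), len(zgrid)))
for ix, x in enumerate(xs):
    for iz, z in enumerate(zgrid):
        t = 2.0 * np.hypot(xs - x, z) / v       # hyperbola for image point (x, z)
        its = np.round(t / dt).astype(int)
        ok = its < nt
        image[ix, iz] = data[np.arange(len(xs))[ok], its[ok]].sum()

ix_max, iz_max = np.unravel_index(image.argmax(), image.shape)
# the stacked energy should focus at the true diffractor position
```

Only at the true diffractor position do all spikes line up along the stacking hyperbola, so the migrated image peaks there.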

Relevance: 60.00%

Abstract:

The efficient generation of digital surface models (DSMs) from optical images has been explored for many years, and the results depend on the project characteristics (image resolution, image overlap, among others), on the image matching techniques, and on the computing capabilities available for image processing. The points generated by image matching have a direct impact on the quality of the DSM and, consequently, influence the need for the costly editing step. This work aims at experimentally assessing a technique for DSM generation by matching multiple images (two or more) simultaneously using the vertical line locus (VLL) method. The experiments were performed with six images of the urban area of Presidente Prudente/SP, with a ground sample distance (GSD) of approximately 7 cm. DSMs of a small area containing homogeneous texture, repetitive patterns, moving objects, shadows, and trees were generated to assess the quality of the developed procedure. The obtained DSM was compared to a point cloud acquired by LASER (Light Amplification by Stimulated Emission of Radiation) scanning as well as to a DSM generated by the Leica Photogrammetric Suite (LPS) software. The results show that the DSM generated by the implemented technique has a geometric quality compatible with the reference models.
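The VLL idea — test candidate heights Z for each planimetric position and keep the one at which the projections into all images agree best — can be illustrated with a deliberately simplified 1-D analogue. Everything below (the texture, the per-image parallax factors, the window size) is invented for illustration; a real implementation projects through the collinearity equations and matches 2-D patches:

```python
import numpy as np

rng = np.random.default_rng(0)
tex = rng.standard_normal(400)                  # hypothetical ground texture

baselines = np.array([-2.0, -1.0, 1.0, 2.0])    # per-image parallax factors (assumed)
Z_true = 7
# each "image" is the texture shifted by baseline * height (integer shifts)
images = [np.roll(tex, int(b * Z_true)) for b in baselines]

def consistency(z, x=200, half=15):
    """Mean pairwise correlation of the image windows back-projected at height z."""
    win = np.arange(x - half, x + half + 1)
    samples = np.array([img[(win + int(b * z)) % len(tex)]
                        for img, b in zip(images, baselines)])
    c = np.corrcoef(samples)
    return c[np.triu_indices_from(c, 1)].mean()

# search along the "vertical line": the height maximizing multi-image agreement
candidates = range(0, 15)
z_best = max(candidates, key=consistency)
```

At the correct height all back-projected windows sample the same piece of texture, so the pairwise correlations peak there; with more than two images the maximum is sharper and more robust, which is the motivation for multi-image matching.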

Relevance: 60.00%

Abstract:

Graduate Program in Computer Science - IBILCE

Relevance: 60.00%

Abstract:

BACKGROUND: Biophotogrammetry is a widespread technique in the health field, but despite methodological precautions there are distortions in the angular readings taken from photographic images. OBJECTIVE: To measure the error of angular measurements in photographic images of different digital resolutions using an object with pre-marked angles. METHODS: A rubber sphere with a circumference of 52 cm was used. The object was marked in advance with angles of 10°, 30°, 60°, and 90°, and the photographs were taken with the camera's focal axis three meters from and perpendicular to the object, without optical zoom, at resolutions of 3, 5, and 10 megapixels (Mp). All photographs were stored, and the angular values were analyzed by a previously trained examiner using the ImageJ program. The measurements were taken twice, 15 days apart. Accuracy, relative error, error in degrees, precision, and the intraclass correlation coefficient (ICC) were then calculated. RESULTS: For the 10° angle, the mean measurement accuracy was higher for the 3 Mp photographs than for the 5 and 10 Mp resolutions. The ICC was considered excellent for all three image resolutions and, among the angles analyzed, the 90° angle showed the highest accuracy, the lowest relative error and error in degrees, and the highest precision, regardless of image resolution. CONCLUSION: Photographs taken at 3 Mp resolution provided measurements with higher accuracy and precision and lower error, suggesting that this is the most suitable resolution for imaging angles of 10° and 30°.
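The agreement statistics reported above can be reproduced in a few lines. The sketch below uses generic formulas — the paper does not state which ICC form was used, so ICC(2,1) (two-way random effects, absolute agreement, single measurement) is an assumption on my part, as are the example numbers:

```python
import numpy as np

def angle_errors(measured, true_value):
    """Mean absolute error in degrees and relative error (%) of repeated readings."""
    m = np.asarray(measured, float)
    err_deg = np.abs(m - true_value).mean()
    return err_deg, 100.0 * err_deg / true_value

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    ratings: (n subjects) x (k sessions/raters) array."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    MSR = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects (rows)
    MSC = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # sessions (columns)
    SSE = ((Y - Y.mean(axis=1, keepdims=True)
              - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    MSE = SSE / ((n - 1) * (k - 1))
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
```

With two sessions 15 days apart, each pre-marked angle is a "subject" and each session a "rater"; perfect test-retest agreement yields an ICC of 1.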

Relevance: 60.00%

Abstract:

With the increasing advances in hip joint preservation surgery, accurate diagnosis and assessment of femoral head and acetabular cartilage status is becoming increasingly important. Magnetic resonance imaging (MRI) of the hip does present technical difficulties. The fairly thin cartilage lining necessitates high image resolution and a high contrast-to-noise ratio (CNR). With MR arthrography (MRA) using intra-articularly injected gadolinium, labral tears and cartilage clefts may be better identified through the contrast medium filling into the clefts. However, the ability of MRA to detect varying grades of cartilage damage is fairly limited, and the early histological and biochemical changes at the beginning of osteoarthritis (OA) cannot be accurately delineated. Traditional MRI thus lacks the ability to analyze the biological status of cartilage degeneration. The technique of delayed gadolinium-enhanced MRI of cartilage (dGEMRIC) is sensitive to the charge density of cartilage contributed by glycosaminoglycans (GAGs), which are lost early in the process of OA. Therefore, the dGEMRIC technique has the potential to detect early cartilage damage, which is critical for decision-making regarding the time and extent of intervention for joint preservation. In the last decade, cartilage imaging with dGEMRIC has been established as an accurate and reliable tool for assessment of cartilage status in the knee and hip joint. This review outlines the current status of dGEMRIC for assessment of hip joint cartilage. Practical modifications of the standard technique, including three-dimensional (3D) dGEMRIC and dGEMRIC after intra-articular gadolinium instead of iv-dGEMRIC, will also be addressed.

Relevance: 60.00%

Abstract:

Finite element analysis for prediction of bone strength. Philippe K Zysset, Enrico Dall'Ara, Peter Varga & Dieter H Pahr. BoneKEy Reports (2013) 2, Article number: 386. doi:10.1038/bonekey.2013.120. Received 03 January 2013; accepted 25 June 2013; published online 07 August 2013.

Finite element (FE) analysis has been applied for the past 40 years to simulate the mechanical behavior of bone. Although several validation studies have been performed on specific anatomical sites and load cases, this study aims to review the predictability of human bone strength at the three major osteoporotic fracture sites quantified in recently completed in vitro studies at our former institute. Specifically, the performance of FE analysis based on clinical quantitative computed tomography (QCT) is compared with that of the current densitometric standards: bone mineral content, bone mineral density (BMD), and areal BMD (aBMD). Clinical fractures were produced in monotonic axial compression of the distal radii and vertebral sections and in side loading of the proximal femora. QCT-based FE models of the three bones were developed to simulate as closely as possible the boundary conditions of each experiment. For all sites, the FE methodology exhibited the lowest errors and the highest correlations in predicting the experimental bone strength. Likely due to the improved CT image resolution, the quality of the FE prediction in the peripheral skeleton using high-resolution peripheral CT was superior to that in the axial skeleton with whole-body QCT. Because of its projective and scalar nature, the performance of aBMD in predicting bone strength depended on loading mode and was significantly inferior to FE in axial compression of radial or vertebral sections but not significantly inferior to FE in side loading of the femur. Considering the cumulated evidence from the published validation studies, it is concluded that FE models provide the most reliable surrogates of bone strength at any of the three fracture sites.
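The head-to-head comparison between FE-predicted strength and the densitometric surrogates reduces to fitting each predictor against the measured strengths and comparing the coefficient of determination (r²) and the standard error of the estimate (SEE). A generic sketch of that comparison metric — my own illustration, not the study's data or code:

```python
import numpy as np

def fit_r2_see(predictor, strength):
    """Least-squares fit strength ~ a*predictor + b; return r^2 and the
    standard error of the estimate (SEE) of the fit."""
    x, y = np.asarray(predictor, float), np.asarray(strength, float)
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    ss_res = (resid ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    see = np.sqrt(ss_res / (len(y) - 2))      # 2 fitted parameters
    return r2, see
```

Running this once with FE-predicted strengths and once with aBMD as the predictor, against the same experimental strengths, yields the error/correlation comparison the abstract describes.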

Relevance: 60.00%

Abstract:

Finite element analysis is an accepted method to predict vertebral body compressive strength. This study compares measurements obtained from in vitro tests with those from two different simulation models: clinical quantitative computed tomography (QCT)-based homogenized finite element (hFE) models and pre-clinical high-resolution peripheral QCT (HR-pQCT)-based hFE models. Thirty-seven vertebral body sections were prepared by removing end-plates and posterior elements, scanned with QCT (390/450 μm voxel size) as well as HR-pQCT (82 μm voxel size), and tested in compression up to failure. Non-linear viscous damage hFE models were created from the QCT/HR-pQCT images and compared to the experimental results in terms of stiffness and ultimate load. As expected, the predictability of the QCT/HR-pQCT-based hFE models for both apparent stiffness (r² = 0.685/0.801) and strength (r² = 0.774/0.924) increased when a better image resolution was used. An analysis of the damage distribution showed similar damage locations in all cases. In conclusion, HR-pQCT-based hFE models increased the predictability considerably and do not need any tuning of input parameters. In contrast, QCT-based hFE models usually need some tuning but are clinically the only possible choice at the moment.

Relevance: 60.00%

Abstract:

A 2D computer simulation method for random packings is applied to sets of particles generated by a self-similar uniparametric model for particle size distributions (PSDs) in granular media. The parameter p that controls the model is the proportion of particle mass corresponding to the left half of the normalized size interval [0, 1]. First, the influence of the parameter p on the total porosity is analyzed and interpreted. It is shown that this parameter, and the fractal exponent of the associated power scaling, are efficient packing parameters, although the latter does not behave in the way predicted by a previously published work addressing analogous research in artificial granular materials. The total porosity reaches its minimum value for p = 0.6. Limited information on the pore size distribution is obtained from the packing simulations by means of morphological analysis methods. The results show that the range of pore sizes increases for decreasing values of p, with the volume pore size distribution also changing shape. Further research, including simulations with a greater number of particles and higher image resolution, is required to obtain finer results on the hierarchical structure of the pore space.
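The link between the control parameter p and the fractal exponent of the power scaling can be made explicit. Under the self-similar model, each bisection of the size interval assigns mass fraction p to the left half, so the cumulative mass below size 2^(-k) is p^k, i.e. M(x) ∝ x^D with D = -log₂(p). A short numerical check of that relation — my sketch of the model as described, not the authors' packing code:

```python
import numpy as np

p = 0.6                   # mass fraction assigned to the left half-interval
k = np.arange(1, 11)
sizes = 0.5 ** k          # dyadic size thresholds
mass = p ** k             # M(size < 0.5^k) = p^k by self-similarity

# power-law fit M(x) ~ x^D  =>  log M = D log x + const
D = np.polyfit(np.log(sizes), np.log(mass), 1)[0]
# analytically D = -log2(p); for p = 0.6 this gives D ≈ 0.737
```

This is why p and the fractal exponent are interchangeable as packing parameters: they are deterministically linked through D = -log₂(p).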

Relevance: 60.00%

Abstract:

Ultrasound imaging systems are today an indispensable tool in medical diagnostics and are increasingly used in industrial non-destructive testing applications. The array is the primary element of these systems, and its design determines the characteristics of the beams that can be formed (shape and size of the main lobe, of the secondary and grating lobes, etc.), conditioning the quality of the achievable images. In regular arrays, the maximum distance between elements is set at half a wavelength to avoid the formation of artifacts. At the same time, the image resolution of the objects in the scene increases with the total aperture size, so a small improvement in image quality translates into a significant increase in the number of transducer elements. This has, among others, the following consequences: manufacturing problems due to the high density of connections (note that in typical medical imaging applications the wavelength is a few tenths of a millimeter); a low signal-to-noise ratio and, consequently, a low dynamic range of the signals, because of the small element size; and complex equipment that must handle a large number of independent channels. For example, 10,000 elements spaced λ/2 apart would be needed for a square aperture of 50λ. There are alternatives that reduce the number of active elements of a full array, sacrificing to some extent the image quality, the emitted energy, the dynamic range, the contrast, etc. We propose a different strategy: to develop an optimization methodology capable of systematically finding ultrasound array configurations adapted to specific applications.

To carry out this task, we propose using evolutionary algorithms to search the space of array configurations and select those that best meet the requirements set by each application. This work addresses the encoding of array configurations so that they can serve as individuals of the population on which the evolutionary algorithms operate, as well as the definition of fitness functions that allow such configurations to be compared according to the requirements and constraints of each design problem. It proposes using the multi-objective algorithm NSGA-II as the primary optimization tool and then applying single-objective algorithms of the Simulated Annealing type to select and refine the solutions provided by NSGA-II. Many of the fitness functions that define the desired characteristics of the array to be designed are computed from one or more radiation patterns generated by each candidate solution. Obtaining these patterns with the usual wide-band acoustic field simulation methods requires very long computation times, which can make optimization with evolutionary algorithms impractical. As a solution, a narrow-band calculation method is proposed that reduces the required computation time by at least an order of magnitude. Finally, a series of examples with linear and two-dimensional arrays is presented to validate the proposed design methodology, experimentally comparing the actual characteristics of the built designs with the predictions of the optimization method.
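For the far field of a linear array, a narrow-band (single-frequency) field calculation reduces to a phase sum over the elements. The sketch below — an idealized omnidirectional-element model with assumed numbers, not the thesis simulator — computes such a beam pattern and shows why naive sparsification is penalized by the fitness functions: doubling the pitch from λ/2 to λ creates grating lobes at ±90°:

```python
import numpy as np

def beampattern(element_x, theta, wavelength):
    """Narrow-band far-field pattern of a linear array of point elements:
    |sum_n exp(j*k*x_n*sin(theta))|, normalized to its peak."""
    k = 2.0 * np.pi / wavelength
    phase = np.exp(1j * k * np.outer(np.sin(theta), element_x))
    p = np.abs(phase.sum(axis=1))
    return p / p.max()

wavelength = 0.5e-3                       # ~3 MHz in tissue (assumed for illustration)
full = np.arange(32) * wavelength / 2     # full lambda/2-pitch array
sparse = full[::2]                        # naive sparse selection: every other element
theta = np.radians(np.linspace(-90, 90, 721))

p_full = beampattern(full, theta, wavelength)
p_sparse = beampattern(sparse, theta, wavelength)   # exhibits grating lobes at +/-90 deg
```

Quantities such as the main-lobe width and the peak sidelobe/grating-lobe level of these patterns are exactly the kind of fitness-function inputs the optimization loop evaluates for every candidate configuration, which is why a fast narrow-band calculation matters.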

Relevance: 60.00%

Abstract:

The previously established cortical representation of rat whiskers in layer IV of the cortex contains distinct cylindrical columns of cellular aggregates, which are termed barrels and correlate in a one-to-one relation to whiskers on the contralateral rat face. In the present study, functional magnetic resonance imaging (fMRI) of the rat brain was used to map whisker barrel activation during mechanical up-down movement (±2.5 mm amplitude at 8 Hz) of single/multiple whisker(s). Multislice gradient echo fMRI experiments were performed at 7 T with an in-plane image resolution of 220 × 220 μm, a slice thickness of 1 mm, and an echo time of 16 ms. Highly significant (P < 0.001) and localized contralateral regions of activation were observed upon stimulation of single/multiple whisker(s). In all experiments (n = 10), the locations of activation relative to bregma and midline were highly correlated with the neuroanatomical position of the corresponding whisker barrels, and the results were reproducible intra- and interanimal. Our results indicate that fMRI based on blood oxygenation level-dependent image contrast has the sensitivity to depict activation of a single whisker barrel in the rat brain. This noninvasive technique will supplement existing methods in the study of rat barrel cortex and should be particularly useful for long-term investigations of the central nervous system in the same animal.

Relevance: 60.00%

Abstract:

The volume size of a converging wave, which plays a relevant role in image resolution, is governed by the wavelength of the radiation and the numerical aperture (NA) of the wavefront. We designed an ultrathin (λ/8 width) curved metasurface that is able to transform a focused field into a high-NA optical architecture, thus boosting the transverse and (mainly) on-axis resolution. The elements of the metasurface are metal-insulator subwavelength gratings exhibiting extreme anisotropy with ultrahigh index of refraction for TM polarization. Our results can be applied to nanolithography and optical microscopy.
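To put rough numbers on the resolution gain, one can use the standard scalar estimates for a diffraction-limited focus — lateral width ≈ λ/(2·NA) and axial extent ≈ 2λ/NA². These are textbook paraxial approximations, not formulas taken from this paper, and the wavelength is assumed for illustration. Because the axial extent shrinks quadratically with NA, raising the NA mainly boosts on-axis resolution, as the abstract notes:

```python
# Scalar, paraxial focal-spot estimates (textbook approximations, assumed values)
wavelength = 633e-9          # HeNe line, chosen only for illustration
for NA in (0.5, 0.95):
    transverse = wavelength / (2 * NA)    # lateral spot width ~ lambda/(2 NA)
    axial = 2 * wavelength / NA ** 2      # axial extent ~ 2 lambda/NA^2
    print(f"NA={NA}: transverse ~ {transverse:.3g} m, axial ~ {axial:.3g} m")
```

Going from NA 0.5 to 0.95 shrinks the transverse width by ~1.9× but the axial extent by ~3.6×, illustrating why a high-NA transformation compresses the focal volume predominantly along the axis.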

Relevance: 60.00%

Abstract:

Retrieval, treatment, and disposal of high-level radioactive waste (HLW) is expected to cost between 100 and 300 billion dollars. The risk to workers, public health, and the environment is also a major area of concern for HLW. Visualization of the interface between the settled solids and the optically opaque liquid is needed to retrieve the waste from underground storage tanks. The profiling sonar selected for this research generates a 2-D image of the interface. Multiple experiments were performed to demonstrate the effectiveness of sonar for real-time monitoring of the interface inside HLW tanks. The first set of experiments demonstrated that object shapes could be identified, and the interface thereby mapped, even with 30% of solids entrained in the liquid. A simulation of the sonar system validated these results. The second set of experiments confirmed the sonar's ability to detect solids whose density is similar to that of the surrounding liquid. The third set of experiments determined the effects of nearby objects on image resolution. The final set of experiments demonstrated that the sonar remains functional and chemically resistant in caustic solution.

Relevance: 60.00%

Abstract:

Medical imaging technologies are experiencing growth in both usage and image resolution, namely in diagnostic systems that require large sets of images, such as CT or MRI. Furthermore, legal restrictions impose that these scans be archived for several years. These facts have increased storage costs for medical image databases and institutions, creating a demand for more efficient compression tools for archiving and communication. Currently, the DICOM standard, which makes recommendations for medical communications and image compression, recommends lossless encoders such as JPEG, RLE, JPEG-LS, and JPEG2000. However, none of these encoders includes inter-slice prediction in its algorithm. This dissertation presents research on medical image compression using the MRP encoder, one of the most efficient lossless image compression algorithms. Several processing techniques are proposed to adapt the input medical images to the encoder's characteristics. Two of these techniques, namely changing the alignment of slices for compression and a pixel-wise difference predictor, increased the compression efficiency of MRP by up to 27.9%. Inter-slice prediction support was also added to MRP, using uni- and bi-directional techniques, along with the pixel-wise difference predictor. Overall, the compression efficiency of MRP was improved by 46.1%. These techniques allow compression ratio savings of 57.1% compared to the DICOM encoders and 33.2% compared to HEVC RExt Random Access, making MRP the most efficient of the encoders under study.
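Why a pixel-wise inter-slice difference predictor helps can be seen from a toy entropy measurement: when adjacent slices are strongly correlated, the slice-to-slice differences concentrate on a few values and carry fewer bits per pixel than the raw samples. The volume below is synthetic and illustrates only the principle, not MRP's actual prediction and context modeling:

```python
import numpy as np

def entropy_bits(a):
    """Shannon entropy (bits/symbol) of the array's value histogram."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# hypothetical integer volume with strong inter-slice correlation:
# each slice is the same smooth pattern plus a slowly drifting offset
z, y, x = np.meshgrid(np.arange(16), np.arange(64), np.arange(64), indexing="ij")
volume = ((np.sin(y / 9.0) + np.cos(x / 7.0)) * 60 + z).astype(np.int16)

raw = entropy_bits(volume)                       # bits/pixel of the raw samples
diff = entropy_bits(np.diff(volume, axis=0))     # bits/pixel after inter-slice differencing
```

A lossless entropy coder cannot beat the source entropy, so shrinking it by differencing across well-aligned slices directly translates into smaller files; this also illustrates why the slice-alignment technique matters, since differencing only pays off when consecutive slices actually overlap.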