917 results for post-processing method
Abstract:
The main objective of this work was to evaluate the linear regression between spectral response and soybean yield at a regional scale. In this study, 36 municipalities in the western region of the state of Paraná were monitored using five Landsat 5/TM images from the 2004/05 season. The spectral response was converted into physical values, apparent and surface reflectance, by radiometric transformation and atmospheric correction, and both were used to calculate the NDVI and GVI vegetation indices. These indices were compared by simple and multiple regression with official government yield values (IBGE). A diagnostic processing method to identify influential values and collinearity was also applied to the data. The results showed that the mean surface reflectance value from all images was more highly correlated with yield than individual dates. Further, multiple regressions using all dates and both vegetation indices gave better results than simple regression.
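The comparison of simple and multiple regression described above can be sketched minimally with NumPy least squares. The NDVI/GVI values and yields below are hypothetical stand-ins for the 36 municipalities, not the study's Landsat-derived data:

```python
import numpy as np

# Hypothetical per-municipality season-mean vegetation indices and yields
# (illustrative only; not the study's actual values).
rng = np.random.default_rng(0)
ndvi = rng.uniform(0.4, 0.8, size=36)
gvi = rng.uniform(0.2, 0.6, size=36)
yield_t_ha = 1.5 + 2.0 * ndvi + 1.0 * gvi + rng.normal(0, 0.1, size=36)

# Simple regression: yield on NDVI alone.
X_simple = np.column_stack([np.ones(36), ndvi])
beta_s, res_s, *_ = np.linalg.lstsq(X_simple, yield_t_ha, rcond=None)

# Multiple regression: yield on NDVI and GVI together.
X_mult = np.column_stack([np.ones(36), ndvi, gvi])
beta_m, res_m, *_ = np.linalg.lstsq(X_mult, yield_t_ha, rcond=None)

# R^2 for each model; the multiple model always fits at least as well,
# which is the pattern the abstract reports.
sst = np.sum((yield_t_ha - yield_t_ha.mean()) ** 2)
r2_simple = 1 - res_s[0] / sst
r2_mult = 1 - res_m[0] / sst
print(r2_simple, r2_mult)
```

The diagnostic step the abstract mentions (influential values, collinearity) would in practice be layered on top of this fit, e.g. via leverage statistics and variance inflation factors.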
Abstract:
The aim of this study was to evaluate the stress distribution in the cervical region of a sound upper central incisor under two clinical situations, standard and maximum masticatory forces, by means of a 3D model with the highest possible fidelity to the anatomic dimensions. Two models with 331,887 linear tetrahedral elements, representing a sound upper central incisor with periodontal ligament and cortical and trabecular bone, were loaded at 45° to the tooth's long axis. All structures were considered homogeneous and isotropic, with the exception of the enamel (anisotropic). A standard masticatory force (100 N) was simulated on one model, while a maximum masticatory force (235.9 N) was simulated on the other. The software used was PATRAN for pre- and post-processing and Nastran for processing. In the cementoenamel junction area, tensile stresses reached 14.7 MPa in the 100 N model and 40.2 MPa in the 235.9 N model, exceeding the enamel's tensile strength (16.7 MPa). The fact that the stress concentration in the amelodentinal junction exceeded the enamel's tensile strength under simulated conditions of maximum masticatory force suggests the possibility of the occurrence of non-carious cervical lesions such as abfractions.
Abstract:
Poly(3-hydroxybutyrate) (PHB) is a very promising biopolymer. To improve its processability and decrease its brittleness, PHB/elastomer blends can be prepared. In the work reported, the effect of the addition of a rubbery phase, i.e. ethylene-propylene-diene terpolymer (EPDM) or poly(vinyl butyral) (PVB), on the properties of PHB was studied. The effects of the rubber type and of changing the PHB/elastomer blend processing method on the crystallinity and physical properties of the blends were also investigated. For blends based on PHB, the main role of EPDM is its nucleating effect, evidenced by a decrease in crystallization temperature and an increase in crystallinity with increasing EPDM content, regardless of the processing route. While EPDM has a weak effect on the PHB glass transition temperature, PVB induces a marked decrease of this temperature thanks to its plasticizer, which swells the PHB amorphous phase. A promising route to improve the mechanical properties of PHB seems to be melt-processing PHB with both plasticizer and EPDM: the plasticizer is more efficient than the elastomer in decreasing the PHB glass transition temperature and, because of the nucleating effect of EPDM, the decrease in PHB modulus caused by the plasticizer can be counterbalanced. (C) 2010 Society of Chemical Industry
Abstract:
In natural estuaries, contaminant transport is driven by turbulent momentum mixing, yet scalar dispersion can rarely be predicted accurately because of a lack of fundamental understanding of the turbulence structure in estuaries. Herein, detailed turbulence field measurements were conducted at high frequency, continuously for up to 50 hours per investigation, in a small subtropical estuary with semi-diurnal tides. Acoustic Doppler velocimetry was deemed the most appropriate measurement technique for such small estuarine systems with shallow water depths (less than 0.5 m at low tide), and a thorough post-processing technique was applied. The estuarine flow is always a fluctuating process: the bulk flow parameters fluctuated with periods comparable to the tidal cycles and other large-scale processes, while the turbulence properties depended upon the instantaneous local flow properties. The turbulence properties were little affected by the flow history, but their structure and temporal variability were influenced by a variety of mechanisms, resulting in behaviour that deviated from that of an equilibrium turbulent boundary layer induced by velocity shear alone. A striking feature of the data sets is the large fluctuations in all turbulence characteristics during the tidal cycle. This feature has rarely been documented; an important difference between the data sets used in this study and earlier reported measurements is that the present data were collected continuously at high frequency over relatively long periods. The findings shed new light on the fluctuating nature of the momentum exchange coefficients and the integral time and length scales; these turbulent properties should not be assumed constant.
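The abstract does not detail the "thorough post-processing technique" applied to the acoustic Doppler velocimeter records; published ADV workflows, however, typically include a despiking step. The following is a purely illustrative sketch of one simple despiking scheme (a robust threshold on the deviation from a local median, with linear interpolation over flagged samples), not the authors' actual method:

```python
import numpy as np

def despike(u, k=6.0, win=33):
    """Flag samples deviating from the local median by more than k robust
    standard deviations, then patch them by linear interpolation."""
    u = np.asarray(u, dtype=float)
    pad = win // 2
    padded = np.pad(u, pad, mode="edge")
    local_med = np.array([np.median(padded[i:i + win]) for i in range(u.size)])
    resid = u - local_med
    robust_std = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    spikes = np.abs(resid) > k * robust_std
    good = np.flatnonzero(~spikes)
    u_clean = u.copy()
    u_clean[spikes] = np.interp(np.flatnonzero(spikes), good, u[good])
    return u_clean, spikes

# Synthetic record: a tidal-scale oscillation plus weak turbulent noise,
# with three artificial spikes injected.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 2000)
u = 0.3 * np.sin(2 * np.pi * t / 100.0) + 0.02 * rng.normal(size=t.size)
u[[200, 900, 1500]] += 1.0
u_clean, spikes = despike(u)
print(spikes.sum())
```

The local-median residual separates turbulent fluctuations from instrument spikes because the median is insensitive to isolated outliers within the window.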
Abstract:
In natural estuaries, scalar dispersion can rarely be predicted accurately because of a lack of fundamental understanding of the turbulence structure in estuaries. Herein, detailed turbulence field measurements were conducted continuously at high frequency for 50 hours in the upper zone of a small subtropical estuary with semi-diurnal tides. Acoustic Doppler velocimetry was deemed the most appropriate measurement technique for such shallow water depths (less than 0.4 m at low tide), and a thorough post-processing technique was applied. In addition, some experiments were conducted in the laboratory under controlled conditions, using water and soil samples collected in the estuary, to test the relationship between acoustic backscatter strength and suspended sediment load. A striking feature of the field data set was the large fluctuations in all turbulence characteristics during the tidal cycle, including the suspended sediment flux. This feature has rarely been documented.
Abstract:
There is no specific test to diagnose Alzheimer's disease (AD). Its diagnosis should be based upon clinical history, neuropsychological and laboratory tests, neuroimaging and electroencephalography (EEG). Therefore, new approaches are necessary to enable earlier and more accurate diagnosis and to follow treatment results. In this study we used a Machine Learning (ML) technique, the Support Vector Machine (SVM), to search for patterns in EEG epochs that differentiate AD patients from controls. As a result, we developed a quantitative EEG (qEEG) processing method for the automatic differentiation of patients with AD from normal individuals, as a complement to the diagnosis of probable dementia. We studied EEGs from 19 normal subjects (14 females/5 males, mean age 71.6 years) and 16 patients with probable mild to moderate AD (14 females/2 males, mean age 73.4 years). The analysis of EEG epochs yielded 79.9% accuracy and 83.2% sensitivity; the analysis considering the diagnosis of each individual patient reached 87.0% accuracy and 91.7% sensitivity.
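The classification step can be illustrated with a minimal linear SVM trained by sub-gradient descent on the hinge loss. The 2-D features below are hypothetical qEEG descriptors (e.g. band-power values per epoch); the study's real feature set, kernel and SVM implementation are not reproduced here:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal primal linear SVM: sub-gradient descent on the regularized
    hinge loss. Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1.0                    # points violating the margin
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical two-cluster feature set: "AD" epochs vs "control" epochs.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal([1.0, 1.0], 0.3, size=(50, 2)),
               rng.normal([-1.0, -1.0], 0.3, size=(50, 2))])
y = np.array([1] * 50 + [-1] * 50)

w, b = train_linear_svm(X, y)
pred = np.where(X @ w + b >= 0, 1, -1)
accuracy = (pred == y).mean()
sensitivity = (pred[y == 1] == 1).mean()   # fraction of "AD" epochs caught
print(accuracy, sensitivity)
```

Accuracy and sensitivity computed per epoch, as here, correspond to the first pair of figures in the abstract; the per-patient figures would follow from aggregating epoch decisions for each subject.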
Abstract:
The objective was to compare the fracture toughness (K(Ic)), stress corrosion susceptibility coefficient (n), and stress intensity factor threshold for crack propagation (K(I0)) of two porcelains [VM7/Vita (V) and d.Sign/Ivoclar (D)], two glass-ceramics [Empress/Ivoclar (E1) and Empress2/Ivoclar (E2)] and a glass-infiltrated alumina composite [In-Ceram Alumina/Vita (IC)]. Disks were constructed according to each manufacturer's processing method and polished before cracks were induced with a Vickers indenter. Crack lengths were measured under optical microscopy at times between 0.1 and 100 h. Specimens were stored in artificial saliva at 37 °C throughout the experiment. K(Ic) and n were determined using the indentation fracture method. K(I0) was determined by plotting log crack velocity versus log K(I). Microstructure characterization was carried out by SEM, EDS, X-ray diffraction and X-ray fluorescence. IC and E2 presented higher K(Ic) and K(I0) than E1, V, and D. IC presented the highest n value, followed by E2, D, E1, and V in decreasing order. V and D presented similar K(Ic), but porcelain V showed higher K(I0) and lower n than D. Microstructural features (volume fraction, size and aspect ratio of the crystalline phases, and chemical composition of the glassy matrix) determined K(Ic), and increasing K(Ic) favored increases in n and K(I0).
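The determination of n described above rests on the empirical subcritical crack growth law v = A·K^n, so n is simply the slope of log crack velocity versus log K(I). A sketch of that fit with synthetic values (not the paper's measurements):

```python
import numpy as np

# Synthetic crack-velocity data following v = A * K^n exactly,
# to show how n is recovered from the log-log slope.
K = np.array([0.6, 0.7, 0.8, 0.9, 1.0])   # stress intensity, MPa*m^0.5
A, n_true = 1e-6, 20.0
v = A * K ** n_true                        # crack velocity, m/s

# log10(v) = log10(A) + n * log10(K): a straight line whose slope is n.
slope, intercept = np.polyfit(np.log10(K), np.log10(v), 1)
n_fit = slope
print(n_fit)   # recovers n = 20.0
```

With real indentation data the (K, v) pairs come from the measured crack lengths over time, and scatter about the line reflects measurement error, but the fitting step is the same.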
Abstract:
Land-related information about the Earth's surface is commonly found in two forms: (1) map information and (2) satellite image data. Satellite imagery provides a good visual picture of what is on the ground, but complex image processing is required to interpret features in an image scene. Increasingly, methods are being sought to integrate the knowledge embodied in map information into the interpretation task, or, alternatively, to bypass interpretation and perform biophysical modeling directly on derived data sources. A cartographic modeling language, as a generic map analysis package, is suggested as a means to integrate geographical knowledge and imagery in a process-oriented view of the Earth. Specialized cartographic models may be developed by users, which incorporate mapping information in performing land classification. In addition, a cartographic modeling language may be enhanced with operators suited to processing remotely sensed imagery. We demonstrate the usefulness of a cartographic modeling language for pre-processing satellite imagery, and define two new cartographic operators that evaluate image neighborhoods as post-processing operations to interpret thematic map values. The language and operators are demonstrated with an example image classification task.
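The abstract does not specify the two neighborhood operators; a common operator of this kind for thematic maps is a modal (majority) filter, which reassigns each cell to the most frequent class in its window and so smooths isolated misclassified pixels. A small illustrative sketch with a hypothetical class map:

```python
import numpy as np
from collections import Counter

def majority_filter(classed, size=3):
    """Reassign each cell to the most common class value in its
    size x size neighborhood (edge cells use replicated borders)."""
    pad = size // 2
    padded = np.pad(classed, pad, mode="edge")
    out = np.empty_like(classed)
    rows, cols = classed.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + size, c:c + size].ravel()
            out[r, c] = Counter(window.tolist()).most_common(1)[0][0]
    return out

# Hypothetical thematic map with class codes 1 and 2; the lone "9"
# stands for an isolated misclassified pixel.
themap = np.array([[1, 1, 1, 2],
                   [1, 9, 1, 2],
                   [1, 1, 2, 2],
                   [1, 1, 2, 2]])
print(majority_filter(themap))
```

In a cartographic modeling language this would be expressed as a focal operation over the thematic layer rather than explicit loops.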
Abstract:
Recently, the regulating mechanisms of branching morphogenesis in fetal rat lung explants have become an essential tool for molecular research. The development of accurate and reliable segmentation techniques may be essential to improve research outcomes. This work presents an image processing method to measure the perimeter and area of lung branches in fetal rat explants. The algorithm starts by reducing the noise corrupting the image in a pre-processing stage. The outcome is input to a watershed operation that automatically segments the image into primitive regions. Then, an image pixel is selected within the lung explant epithelium, allowing region growing between neighbouring watershed regions. This growing process is controlled by the statistical distribution of each region. When compared with manual segmentation, the results show the same tendency for lung development. High similarities were harder to obtain in the last two days of culture, due to the increased number of peripheral airway buds and the complexity of the lung architecture. However, using semiautomatic measurements, the standard deviation was lower and the results between independent researchers were more coherent.
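The seeded-growth step can be illustrated with a reduced sketch: region growing controlled by the running intensity statistics of the region, standing in for the statistically controlled merging of watershed regions described above. Growth is pixel-wise here for brevity, and the image and tolerance values are hypothetical:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, k=2.0, tol=10.0):
    """Grow a region from `seed`, accepting a neighbouring pixel while its
    intensity stays within k standard deviations of the region's running
    mean (tol is a floor for the first steps, when the std is near zero)."""
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    vals = [float(img[seed])]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1] and not mask[nr, nc]:
                if abs(img[nr, nc] - np.mean(vals)) <= max(k * np.std(vals), tol):
                    mask[nr, nc] = True
                    vals.append(float(img[nr, nc]))
                    queue.append((nr, nc))
    return mask

# Toy image: a bright 4x4 "epithelial" patch on a dark background.
img = np.full((8, 8), 10.0)
img[2:6, 2:6] = 100.0
grown = region_grow(img, (3, 3))
area = int(grown.sum())     # area measurement, in pixels
print(area)
```

The area follows directly from the grown mask; a perimeter estimate, as measured in the paper, could be obtained by counting region pixels that have a background neighbour.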
Abstract:
The surface morphology, structure and composition of human dentin treated with a femtosecond infrared laser (pulse duration 500 fs, wavelength 1030 nm, fluences ranging from 1 to 3 J cm(-2)) were studied by scanning electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy. The average dentin ablation threshold under these conditions was 0.6 +/- 0.2 J cm(-2), and the ablation rate was in the range of 1 to 2 μm/pulse for an average fluence of 3 J cm(-2). The ablated surfaces present an irregular and rugged appearance, with no significant traces of melting, deformation, cracking or carbonization. The smear layer was entirely removed by the laser treatment. For fluences only slightly higher than the ablation threshold, the morphology of the laser-treated surfaces was very similar to that of dentin fracture surfaces, and the dentinal tubules remained open. For higher fluences, the surface was more porous and the dentin structure was partially concealed by ablation debris and a few resolidified droplets. Independently of the laser processing parameters and processing method used, no sub-superficial cracking was observed. The dentin constitution and chemical composition were not significantly modified by the laser treatment in the processing parameter range used. In particular, the organic matter is not preferentially removed from the surface, and no traces of high-temperature phosphates, such as beta-tricalcium phosphate, were observed. These results are compatible with an electrostatic ablation mechanism. In conclusion, the high beam quality and short pulse duration of the ultrafast laser used should allow the accurate preparation of cavities with negligible damage to the underlying material.
Abstract:
Introduction – Mammography is the main imaging method used in breast cancer screening and diagnosis, and is the imaging modality recommended for screening programmes in several European countries and in the United States. The implementation of digital technology changed mammography practice, in particular creating the need to adapt quality control programmes. Aims – To characterize the technology installed for mammography in Portugal and the practices adopted in its use by the health professionals involved; to assess the level of harmonization of mammography practice in Portugal and its compliance with international recommendations; and to identify optimization opportunities to ensure the effective and safe use of the technology. Methodology – Data on the installed technology were collected from governmental sources, mammography service providers and the medical imaging industry. Three questionnaires were designed, targeted at the profiles of the radiologist, the radiographer working in digital mammography, and the chief radiographer. The questionnaires were applied at 65 mammography service providers selected on the basis of geographical location, type of technology installed and institution profile. Results – 441 mammography systems were identified in Portugal. The most frequent technology (62%), commonly known as computed radiography (CR), consists of a detector (image plate) of photostimulable material inserted in a support cassette and an optical processing system. Most of these systems (78%) are installed at private providers. Approximately 12% of the installed equipment are direct digital radiography (DDR) systems. The criteria for selecting the technical exposure parameters vary, with 65% of the institutions adopting the equipment manufacturers' recommendations. The post-processing tools most used by radiologists are contrast and brightness adjustment and full and/or localized image magnification. Fifteen institutions (out of 19) have implemented a quality control programme. Conclusions – Portugal has a heterogeneous installed base that includes both obsolete and state-of-the-art equipment. International guidelines (European or American) are not formally adopted in most institutions as the basis for mammography practice; the equipment manufacturers' recommendations predominate. Radiographers and radiologists identified needs for specialized training, namely in breast intervention, dose optimization and quality control. Most respondents agree with the need for certification of mammography practice in Portugal and would participate in a voluntary programme.
Abstract:
Master's dissertation in Radiation Applied to Health Technologies. Specialization: Magnetic Resonance.
Abstract:
In the context of electricity markets, transmission pricing is an important tool for the efficient development of the electricity system. The electricity market is influenced by several factors; however, transmission network management is one of the most important, because the network is a natural monopoly. Transmission tariffs can help to regulate the market, and for that reason tariff evaluation must follow strict criteria. This paper explains several methodologies for charging transmission network users for their use of the network: the Postage Stamp Method, the MW-Mile Method, Distribution Factors Methods, the Tracing Methodology, Bialek's Tracing Method and the Locational Marginal Price.
Abstract:
In the context of electricity markets, transmission pricing is an important tool to achieve efficient operation of the electricity system. The electricity market is influenced by several factors; however, transmission network management is one of the most important, because the network is a natural monopoly. Transmission tariffs can help to regulate the market, and for this reason they must follow strict criteria. This paper presents the following methods to charge electricity market players for the use of transmission networks: the Postage Stamp Method, the MW-Mile Method, Distribution Factors Methods, the Tracing Methodology, Bialek's Tracing Method and the Locational Marginal Price. A nine-bus transmission network is used to illustrate the application of the tariff methods.
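Of the listed methods, the MW-Mile method lends itself to a compact illustration: each player is charged a share of every line's cost in proportion to the MW flow it imposes on the line times the line's length. The network, flows and costs below are hypothetical, not the paper's nine-bus case study:

```python
# Hypothetical two-line network: line -> (length in miles, annual cost).
lines = {
    "L1": (100, 50000.0),
    "L2": (60, 30000.0),
}
# MW flow each player causes on each line (in practice derived from
# power-flow or distribution-factor calculations).
use = {
    "playerA": {"L1": 80.0, "L2": 20.0},
    "playerB": {"L1": 20.0, "L2": 40.0},
}

def mw_mile_charges(lines, use):
    """Allocate each line's cost in proportion to MW * mile usage."""
    charges = {p: 0.0 for p in use}
    for line, (length, cost) in lines.items():
        total_mw_mile = sum(use[p].get(line, 0.0) * length for p in use)
        for p in use:
            share = use[p].get(line, 0.0) * length / total_mw_mile
            charges[p] += share * cost
    return charges

print(mw_mile_charges(lines, use))
```

By construction the shares on each line sum to one, so the method recovers exactly the total network cost, one of the properties usually cited in its favour.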