967 results for High-dynamic range images
Abstract:
A thermodynamic approach based on the Bender equation of state is suggested for the analysis of supercritical gas adsorption on activated carbons at high pressure. The approach accounts for the equality of the chemical potential in the adsorbed phase and that in the corresponding bulk phase, and for the distribution of elements of the adsorption volume (EAV) over the potential energy of gas-solid interaction. This scheme is extended to subcritical fluid adsorption and takes into account the phase transition in the EAV. The method is adapted to gravimetric measurements of mass excess adsorption and has been applied to the adsorption of argon, nitrogen, methane, ethane, carbon dioxide, and helium on activated carbon Norit R I in the temperature range from 25 to 70 °C. The distribution function of adsorption volume elements over potentials exhibits overlapping peaks and is consistently reproduced for different gases. It was found that the distribution function changes weakly with temperature, which was confirmed by comparison with the distribution function obtained by the same method from the nitrogen adsorption isotherm at 77 K. It was shown that parameters such as pore volume and skeleton density can be determined directly from adsorption measurements, while the conventional approach of helium expansion at room temperature can lead to erroneous results due to the adsorption of helium in the small pores of activated carbon. The approach is a convenient tool for the analysis and correlation of excess adsorption isotherms over a wide range of pressure and temperature, and can be readily extended to the analysis of multicomponent adsorption systems. (C) 2002 Elsevier Science (USA).
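As a compact illustration of the balance described above, one common way to write the excess adsorption over a distribution of adsorption-volume elements is the following sketch (symbols are ours, not taken from the paper; ε ≥ 0 denotes the depth of the gas-solid potential well):

$$ n_{ex}(P,T) = \int_0^{\infty} f(\varepsilon)\,\bigl[\rho_a(\varepsilon,P,T) - \rho_b(P,T)\bigr]\,\mathrm{d}\varepsilon, \qquad \mu\bigl(\rho_a,T\bigr) - \varepsilon = \mu\bigl(\rho_b,T\bigr), $$

where f(ε) is the distribution of adsorption-volume elements over the potential, ρ_a the local adsorbed-phase density, ρ_b the bulk density given by the Bender equation of state, and the second relation expresses the chemical-potential equality for each element.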
Abstract:
Most external assessments of cervical range of motion assess the upper and lower cervical regions simultaneously. This study investigated the within-day and between-day reliability of the clinical method used to bias this movement to the upper cervical region, namely measuring rotation of the head and neck in a position of full cervical flexion. Measurements were made using the Fastrak measurement system and were conducted by one operator. Results indicated high levels of within-day and between-day repeatability (range of ICC2,1 values: 0.85-0.95). The ranges of axial rotation to the right and left, measured with the neck positioned in full flexion, were approximately 56% and 50%, respectively, of total cervical rotation, which relates well to the proportional division of rotation between the upper and lower cervical regions. These results suggest that this method of measuring rotation would be appropriate for use in studies of subjects with movement dysfunction in the upper cervical region, such as those with cervicogenic headache. (C) 2003 Elsevier Science Ltd. All rights reserved.
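For reference, the reliability index quoted is usually computed from the ANOVA mean squares; the standard Shrout-Fleiss form of ICC(2,1) (stated here for context, not quoted from the paper) is:

$$ \mathrm{ICC}(2,1) = \frac{MS_R - MS_E}{MS_R + (k-1)\,MS_E + \tfrac{k}{n}\,(MS_C - MS_E)}, $$

where MS_R, MS_C and MS_E are the between-subjects, between-sessions and error mean squares, k is the number of sessions (or raters) and n the number of subjects.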
Abstract:
A high definition, finite difference time domain (HD-FDTD) method is presented in this paper. This new method allows the FDTD method to be efficiently applied over a very large frequency range including low frequencies, which are problematic for conventional FDTD methods. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and has been verified against analytical solutions within the frequency range 50 Hz-1 GHz. As an example of the lower frequency range, the method has been applied to the problem of induced eddy currents in the human body resulting from the pulsed magnetic field gradients of an MRI system. The new method only requires approximately 0.3% of the source period to obtain an accurate solution. (C) 2003 Elsevier Science Inc. All rights reserved.
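For context, conventional FDTD advances Maxwell's curl equations with a leapfrog scheme on a staggered grid; in one dimension the standard update (textbook form, not the HD-FDTD scheme itself) is:

$$ H_y^{\,n+1/2}(i+\tfrac12) = H_y^{\,n-1/2}(i+\tfrac12) + \frac{\Delta t}{\mu\,\Delta x}\bigl[E_z^{\,n}(i+1) - E_z^{\,n}(i)\bigr], \qquad E_z^{\,n+1}(i) = E_z^{\,n}(i) + \frac{\Delta t}{\varepsilon\,\Delta x}\bigl[H_y^{\,n+1/2}(i+\tfrac12) - H_y^{\,n+1/2}(i-\tfrac12)\bigr]. $$

The time step is bounded by the Courant condition, which is why very low frequencies (millions of steps per source period) are so costly for the conventional scheme.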
Abstract:
Skin-friction measurements are reported for high-enthalpy and high-Mach-number laminar, transitional and turbulent boundary layers. The measurements were performed in a free-piston shock tunnel with air-flow Mach number, stagnation enthalpy and Reynolds numbers in the ranges of 4.4-6.7, 3-13 MJ kg(-1) and 0.16 x 10(6)-21 x 10(6), respectively. Wall temperatures were near 300 K and this resulted in ratios of wall enthalpy to flow-stagnation enthalpy in the range of 0.1-0.02. The experiments were performed using rectangular ducts. The measurements were accomplished using a new skin-friction gauge that was developed for impulse facility testing. The gauge was an acceleration compensated piezoelectric transducer and had a lowest natural frequency near 40 kHz. Turbulent skin-friction levels were measured to within a typical uncertainty of +/-7%. The systematic uncertainty in measured skin-friction coefficient was high for the tested laminar conditions; however, to within experimental uncertainty, the skin-friction and heat-transfer measurements were in agreement with the laminar theory of van Driest (1952). For predicting turbulent skin-friction coefficient, it was established that, for the range of Mach numbers and Reynolds numbers of the experiments, with cold walls and boundary layers approaching the turbulent equilibrium state, the Spalding & Chi (1964) method was the most suitable of the theories tested. It was also established that if the heat transfer rate to the wall is to be predicted, then the Spalding & Chi (1964) method should be used in conjunction with a Reynolds analogy factor near unity. If more accurate results are required, then an experimentally observed relationship between the Reynolds analogy factor and the skin-friction coefficient may be applied.
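The Reynolds analogy factor mentioned in the closing sentences relates heat transfer to skin friction; in the usual notation (ours, for illustration):

$$ s = \frac{2\,C_H}{C_f}, \qquad C_H = \frac{q_w}{\rho_e u_e\,(h_{aw} - h_w)}, $$

so a turbulent skin-friction prediction from the Spalding & Chi method can be converted into a wall heat-flux estimate by choosing s, with s near unity recommended above.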
Abstract:
Background: Alcohol consumption has beneficial effects on mortality which are mainly due to reduction in cardiovascular disease. These are believed to be due, at least in part, to the increase in plasma high-density lipoprotein (HDL) which is associated with alcohol consumption. It has been proposed that ADH3 genotype modifies the relationships between alcohol intake and cardiovascular disease by altering the HDL response to alcohol. The aim of this paper was to test for effects of ADH2 and ADH3 genotypes on the response of HDL components to habitual alcohol consumption. Methods: Adult male and female subjects were genotyped for ADH2 and ADH3; and plasma HDL cholesterol, apolipoprotein A-I, and apolipoprotein A-II were measured. Nine hundred one subjects had both ADH2 and ADH3 genotypes and HDL cholesterol results, while 753 had both genotypes and all three lipid results. The effect of alcohol intake on the three measured HDL components, and a factor score derived from them, was estimated for each of the ADH2 and ADH3 genotype groups. Results: All the measured components of HDL increased with increasing alcohol consumption over the range of intakes studied, 0-4 drinks per day. There were no significant interactions between alcohol consumption and ADH2 or ADH3 genotypes. Conclusions: The concept that alcohol dehydrogenase genotype and alcohol metabolic rate modify the effects of alcohol on plasma HDL concentration is not supported by our results.
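The interaction test described can be sketched as an ordinary least-squares model with an alcohol-by-genotype term; a minimal illustration in Python (statsmodels) with hypothetical column and file names, not the authors' actual analysis:

```python
# Hypothetical sketch: does ADH3 genotype modify the HDL response to alcohol intake?
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: hdl_chol (plasma HDL cholesterol), drinks (drinks/day, 0-4),
# adh3 (genotype, categorical). File name is hypothetical.
df = pd.read_csv("hdl_alcohol.csv")

# Main effects plus interaction; the drinks:C(adh3) coefficients test whether
# the slope of HDL on alcohol differs between genotype groups.
model = smf.ols("hdl_chol ~ drinks * C(adh3)", data=df).fit()
print(model.summary())
```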
Abstract:
In modern magnetic resonance imaging (MRI), patients are exposed to strong, nonuniform static magnetic fields outside the central imaging region, in which movement of the body may induce electric currents in tissues that could potentially be harmful. This paper presents theoretical investigations into the spatial distribution of induced electric fields and currents in the patient when moving into the MRI scanner, and also for head motion at various positions in the magnet. The numerical calculations are based on an efficient, quasi-static, finite-difference scheme and an anatomically realistic, full-body, male model. 3D field profiles from an actively shielded 4 T magnet system are used and the body model is projected through the field profile with a range of velocities. The simulation shows that it is possible to induce electric fields/currents near the level of physiological significance under some circumstances and provides insight into the spatial characteristics of the induced fields. The results are extrapolated to very high field strengths, and tabulated data show how the expected induced currents and fields vary with both movement velocity and field strength. (C) 2003 Elsevier Science (USA). All rights reserved.
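A common way to pose this kind of quasi-static problem (our notation; the paper's exact discretisation is not reproduced here) treats the motional term v × B as the source of a scalar-potential equation solved on the voxelised body model:

$$ \nabla\cdot\bigl[\sigma\,(\nabla\phi - \mathbf{v}\times\mathbf{B})\bigr] = 0, \qquad \mathbf{E} = -\nabla\phi + \mathbf{v}\times\mathbf{B}, \qquad \mathbf{J} = \sigma\,\mathbf{E}, $$

with σ the tissue conductivity, v the local velocity of the body relative to the static field B, and J the induced current density.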
Abstract:
Protein aggregation has become a widely accepted marker of many polyQ disorders, including Machado-Joseph disease (MJD), and is often used as a readout for disease progression and the development of therapeutic strategies. The lack of good platforms to rapidly quantify protein aggregates in a wide range of disease animal models prompted us to generate a novel image processing application that automatically identifies and quantifies the aggregates in a standardized and operator-independent manner. We propose here a novel image processing tool to quantify the protein aggregates in a Caenorhabditis elegans (C. elegans) model of MJD. Confocal microscopy images were obtained from animals of different genetic conditions. The image processing application was developed using MeVisLab as a platform to process, analyse and visualize the images obtained from those animals. All segmentation algorithms were based on intensity pixel levels. The quantification of the area or number of aggregates per total body area, as well as the number of aggregates per animal, was shown to be a reliable and reproducible measure of protein aggregation in C. elegans. The results obtained were consistent with the levels of aggregation observed in the images. In conclusion, this novel image processing application allows the non-biased, reliable and high-throughput quantification of protein aggregates in a C. elegans model of MJD, which may contribute to a significant improvement in the prognosis of treatment effectiveness for this group of disorders.
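The MeVisLab pipeline itself is not reproduced here; as a rough illustration of the intensity-threshold-and-label idea it describes, a minimal Python sketch (scikit-image, hypothetical file name) could look like this:

```python
# Minimal sketch of intensity-based aggregate quantification (illustrative, not the authors' MeVisLab pipeline).
import numpy as np
from skimage import io, filters, measure

img = io.imread("worm_confocal.tif")  # hypothetical single-channel confocal image

# Rough body segmentation, then a second threshold for the brighter aggregates inside the body.
body_mask = img > filters.threshold_otsu(img)
agg_mask = np.logical_and(body_mask, img > filters.threshold_otsu(img[body_mask]))

props = measure.regionprops(measure.label(agg_mask))
n_aggregates = len(props)
aggregate_area = sum(p.area for p in props)

print(f"aggregates per animal       : {n_aggregates}")
print(f"aggregate area / body area  : {aggregate_area / body_mask.sum():.4f}")
```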
Abstract:
Dental implant recognition in patients without available records is a time-consuming and not straightforward task. The traditional method is a completely user-dependent process, where the expert compares a 2D X-ray image of the dental implant with a generic database. Due to the high number of implants available and the similarity between them, automatic/semi-automatic frameworks to aid implant model detection are essential. In this study, a novel computer-aided framework for dental implant recognition is suggested. The proposed method relies on image processing concepts, namely: (i) a segmentation strategy for semi-automatic implant delineation; and (ii) a machine learning approach for implant model recognition. Although the segmentation technique is the main focus of the current study, preliminary details of the machine learning approach are also reported. Two different scenarios are used to validate the framework: (1) comparison of the semi-automatic contours against manual implant contours in 125 X-ray images; and (2) classification of 11 known implants using a large reference database of 601 implants. In experiment 1, Dice metric, mean absolute distance and Hausdorff distance values of 0.97±0.01, 2.24±0.85 pixels and 11.12±6 pixels were obtained, respectively. In experiment 2, 91% of the implants were successfully recognized while reducing the reference database to 5% of its original size. Overall, the segmentation technique achieved accurate implant contours. Although the preliminary classification results prove the concept of the current work, more features and an extended database should be used in future work.
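The agreement metrics quoted for experiment 1 can be computed directly from binary masks; a small self-contained sketch (ours, not the authors' code) is given below:

```python
# Sketch of the contour-agreement metrics (Dice, mean absolute distance, Hausdorff) used for validation.
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance (in pixels) between the foreground pixels of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

def mean_absolute_distance(a, b):
    """Average nearest-neighbour distance between the two foreground point sets, symmetrised."""
    d = cdist(np.argwhere(a), np.argwhere(b))
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```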
Abstract:
The rapid growth in genetics and molecular biology, combined with the development of techniques for genetically engineering small animals, has led to increased interest in in vivo small animal imaging. Small animal imaging has been applied frequently to mice and rats, which are ubiquitous in modeling human diseases and testing treatments. The use of PET in small animals allows subjects to be used as their own controls, reducing inter-animal variability. This makes it possible to perform longitudinal studies on the same animal and improves the accuracy of biological models. However, small animal PET still suffers from several limitations. The amounts of radiotracer needed, limited scanner sensitivity, image resolution and image quantification issues could all clearly benefit from additional research. Because nuclear medicine imaging deals with radioactive decay, the emission of radiation energy through photons and particles, and the detection of these quanta and particles in different materials, the Monte Carlo method is an important simulation tool in both nuclear medicine research and clinical practice. In order to optimize the quantitative use of PET in clinical practice, data- and image-processing methods are also a field of intense interest and development. The evaluation of such methods often relies on the use of simulated data and images, since these offer control of the ground truth. Monte Carlo simulations are widely used for PET simulation since they take into account all the random processes involved in PET imaging, from the emission of the positron to the detection of the photons by the detectors. Simulation techniques have become an important and indispensable complement for a wide range of problems that could not be addressed by experimental or analytical approaches.
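To make the Monte Carlo idea concrete, a toy example (purely illustrative, far simpler than dedicated PET simulation codes) samples photon interaction depths from the exponential attenuation law and compares the result with the analytical value:

```python
# Toy Monte Carlo: fraction of 511 keV photons crossing a uniform slab without interacting.
import numpy as np

rng = np.random.default_rng(0)

mu = 0.096        # assumed linear attenuation coefficient (water at 511 keV), cm^-1
thickness = 10.0  # slab thickness, cm
n = 1_000_000     # number of simulated photons

path = rng.exponential(scale=1.0 / mu, size=n)  # free path lengths, p(x) = mu * exp(-mu * x)
escape_mc = np.count_nonzero(path > thickness) / n

print(f"Monte Carlo escape fraction : {escape_mc:.4f}")
print(f"Analytical exp(-mu*d)       : {np.exp(-mu * thickness):.4f}")
```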
Abstract:
Fluorescence confocal microscopy (FCM) is now one of the most important tools in biomedical research. In fact, it makes it possible to accurately study the dynamic processes occurring inside the cell and its nucleus by following the motion of fluorescent molecules over time. Due to the small amount of acquired radiation and the huge optical and electronic amplification, FCM images are usually corrupted by a severe type of Poisson noise. This noise may be even more damaging when very low-intensity incident radiation is used to avoid phototoxicity. In this paper, a Bayesian algorithm is proposed to remove the intensity-dependent Poisson noise corrupting FCM image sequences. The observations are organized in a 3-D tensor where each plane is one of the images of a cell nucleus acquired over time using the fluorescence loss in photobleaching (FLIP) technique. The method removes the noise by simultaneously taking into account spatial and temporal correlations. This is accomplished by using an anisotropic 3-D filter that may be separately tuned in the space and time dimensions. Tests using synthetic and real data are described and presented to illustrate the application of the algorithm. A comparison with several state-of-the-art algorithms is also presented.
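The Bayesian estimator itself is not reproduced here; as a rough stand-in that conveys the space/time-separable filtering idea, one can apply an Anscombe variance-stabilising transform followed by a 3-D Gaussian filter with independently tuned temporal and spatial widths (explicitly not the algorithm proposed in the paper):

```python
# Illustrative alternative, NOT the Bayesian algorithm of the paper:
# Anscombe transform + anisotropic 3-D Gaussian smoothing of a (time, y, x) FLIP stack.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_flip_stack(stack, sigma_t=1.0, sigma_xy=1.5):
    """stack: (time, y, x) array of Poisson-distributed photon counts."""
    anscombe = 2.0 * np.sqrt(stack + 3.0 / 8.0)  # approximately unit-variance Gaussian noise
    smoothed = gaussian_filter(anscombe, sigma=(sigma_t, sigma_xy, sigma_xy))  # separate time/space tuning
    return (smoothed / 2.0) ** 2 - 3.0 / 8.0     # simple (biased) inverse transform
```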
Abstract:
Master's degree in Electrical and Computer Engineering
Abstract:
The top velocity of high-speed trains is generally limited by the ability to supply the proper amount of energy through the pantograph-catenary interface. The deterioration of this interaction can lead to loss of contact, which interrupts the energy supply and originates arcing between the pantograph and the catenary, or to excessive contact forces that promote wear of the contacting elements. Another important issue is assessing how the front pantograph influences the dynamic performance of the rear one in trainsets with two pantographs. In this work, the influence of track and environmental conditions on the pantograph-catenary interaction is addressed, with particular emphasis on multiple-pantograph operations. These studies are performed for high-speed trains running at 300 km/h, as a function of the separation between the pantographs. Such studies help to identify the service conditions and the external factors influencing the contact quality on the overhead system. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
Purpose: The most recent Varian® micro multileaf collimator (MLC), the High Definition (HD120) MLC, was modeled using the BEAMNRC Monte Carlo code. This model was incorporated into a Varian medical linear accelerator, for a 6 MV beam, in static and dynamic mode. The model was validated by comparing simulated profiles with measurements. Methods: The Varian® Trilogy® (2300C/D) accelerator model was accurately implemented using the state-of-the-art Monte Carlo simulation program BEAMNRC and validated against off-axis and depth-dose profiles measured using ionization chambers, by adjusting the energy and the full width at half maximum (FWHM) of the initial electron beam. The HD120 MLC was modeled by developing a new BEAMNRC component module (CM), designated HDMLC, adapting the available DYNVMLC CM and incorporating the specific characteristics of this new micro MLC. The leaf dimensions were provided by the manufacturer. The geometry was visualized by tracing particles through the CM and recording their positions when a leaf boundary is crossed. The leaf material density and the abutting air gap between leaves were adjusted in order to obtain a good agreement between the simulated leakage profiles and EBT2 film measurements performed in a solid water phantom. To validate the HDMLC implementation, additional MLC static patterns were also simulated and compared to additional measurements. Furthermore, the ability to simulate dynamic MLC fields was implemented in the HDMLC CM. The simulation results for these fields were compared with EBT2 film measurements performed in a solid water phantom. Results: Overall, the discrepancies, with and without the MLC, between the open-field simulations and the measurements using ionization chambers in a water phantom are below 2% for the off-axis profiles, and for the depth-dose profiles are below 2% beyond the depth of maximum dose and below 4% in the build-up region. Under the conditions of these simulations, this tungsten-based MLC has a density of 18.7 g cm−3 and an overall leakage of about 1.1 ± 0.03%. The discrepancies between the film-measured and simulated closed and blocked fields are below 2% and 8%, respectively. Other measurements were performed for alternated leaf patterns and the agreement is satisfactory (to within 4%). The dynamic mode for this MLC was implemented and the discrepancies between film measurements and simulations are within 4%. Conclusions: The Varian® Trilogy® (2300 C/D) linear accelerator including the HD120 MLC was successfully modeled and simulated using the Monte Carlo BEAMNRC code by developing an independent CM, the HDMLC CM, in both static and dynamic modes.
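The profile validation described can be sketched as a point-by-point comparison on a common grid; the snippet below is illustrative only, with hypothetical file names, and is not the authors' analysis script:

```python
# Illustrative simulated-vs-measured profile comparison for Monte Carlo validation.
import numpy as np

# Hypothetical two-column text files: position (cm), dose (normalised).
x_sim, d_sim = np.loadtxt("simulated_profile.txt", unpack=True)
x_meas, d_meas = np.loadtxt("measured_profile.txt", unpack=True)

d_sim_resampled = np.interp(x_meas, x_sim, d_sim)             # resample onto measurement positions
pct_diff = 100.0 * (d_sim_resampled - d_meas) / d_meas.max()  # difference as % of maximum dose

print(f"max |difference|  : {np.abs(pct_diff).max():.2f}% of Dmax")
print(f"points within 2%  : {(np.abs(pct_diff) <= 2.0).mean() * 100:.1f}%")
```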
Abstract:
This work arose within the scope of the dissertation of the Master's programme in Sustainable Energies at the Instituto Superior de Engenharia do Porto, and was followed by supervisors from the Ecotermolab Laboratory of the Instituto de Soldadura e Qualidade and from the Instituto Superior de Engenharia do Porto, so as to ensure that the work stayed in line with the proposed objectives. The thesis studied the impact of fresh (outdoor) air on the air conditioning of buildings, with the analysis supported by dynamic simulation of the building under real conditions in a suitable program accredited under the ASHRAE 140-2004 standard. The work aimed to show the impact of fresh air on the air conditioning of a building in combination with several factors, such as occupancy, activities and usage patterns (schedules), lighting and equipment, while also studying the possibility of the system operating in free-cooling mode. The underlying principle was to determine to what extent a space can be conditioned solely and exclusively by introducing fresh air in free-cooling mode, through an all-air Variable Air Volume (VAV) system, without the support of any other auxiliary air-conditioning system located in the space, while respecting the minimum flow rates imposed by the RSECE (Decree-Law 79/2006). In a first phase, all the data needed to determine the building's thermal loads were identified, taking into account all the factors contributing to the thermal load, such as heat transmission and its components, lighting, ventilation, equipment use and occupancy levels. Subsequently, several dynamic simulations were carried out using the EnergyPlus program integrated in DesignBuilder, combining variables ranging from the envelopes and the architecture itself to occupancy profiles, equipment and air-renewal rates in the different spaces of the building under study. Several models were produced in order to allow an in-depth comparative study to determine the impact of fresh air on the air conditioning of the building and to assess the system's ability to operate in free-cooling mode. The analysis and comparison of the data obtained led to the following conclusions. For very high cooling demands, daytime free-cooling proved to be of little or almost no effect for the type of climate found in Portugal, since the temperature difference between outdoors and indoors is not large enough to remove the loads and bring the indoor temperature down to the comfort range. Free-cooling during night-time or after-hours periods, on the other hand, proved to be much more efficient, with very interesting performance above all during the heating and mid-season periods, bearing in mind that cooling needs exist even during the heating season. Regarding night ventilation, that is, during the early-morning hours when the building is closed, it was concluded that it helps to lower the heat accumulated during the day in the building's construction materials, which is otherwise released back into the spaces later on.
Between the two variables considered, an increase in the supplied fresh-air flow rate and the temperature difference between outdoor and indoor air, it was shown that the latter makes the larger contribution to heat removal. Finally, it is clear that, in general, an air-conditioning system will always be indispensable because of high internal loads and indoor temperature and humidity requirements; free-cooling is nevertheless recommended as a viable option to incorporate into the air-conditioning solution, so as to promote natural cooling, reduce energy consumption and actively introduce fresh air.
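The free-cooling balance discussed above reduces to a sensible-heat calculation on the supply air; a minimal sketch with assumed values (illustrative only, not figures from the thesis):

```python
# Sensible free-cooling check: can the outdoor-air flow alone remove the internal load?
# All numbers are assumed for illustration, not values from the thesis.
rho_air = 1.2             # air density, kg/m^3
cp_air = 1006.0           # specific heat of air, J/(kg.K)

flow = 2.5                # supplied outdoor-air flow rate, m^3/s (all-air VAV system)
t_indoor = 25.0           # indoor temperature, degC
t_outdoor = 19.0          # outdoor air temperature, degC
internal_load = 15_000.0  # internal gains to be removed, W

q_free_cooling = rho_air * flow * cp_air * (t_indoor - t_outdoor)  # W of sensible cooling
print(f"free-cooling capacity: {q_free_cooling / 1000:.1f} kW vs load {internal_load / 1000:.1f} kW")
print("sufficient" if q_free_cooling >= internal_load else "auxiliary cooling still required")
```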
Abstract:
The tongue is the most important and dynamic articulator for speech formation, because of its anatomic aspects (particularly, the large volume of this muscular organ compared to the surrounding organs of the vocal tract) and also due to the wide range of movements and the flexibility that are involved. In speech communication research, a variety of techniques have been used for measuring three-dimensional vocal tract shapes. More recently, magnetic resonance imaging (MRI) has become common, mainly because this technique allows the collection of a set of static and dynamic images that can represent the entire vocal tract along any orientation. Over the years, different anatomical organs of the vocal tract have been modelled, namely with 2D and 3D tongue models, using parametric or statistical modelling procedures. Our aim is to present and describe some 3D models reconstructed from MRI data, for one subject uttering sustained articulations of some typical Portuguese sounds. Thus, we present a 3D database of the tongue obtained by stack combinations, with the subject articulating Portuguese vowels. This 3D knowledge of the speech organs could be very important, especially for clinical purposes (for example, for the assessment of articulatory impairments followed by tongue surgery in speech rehabilitation), and also for a better understanding of the acoustic theory of speech formation.
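As a generic illustration of the stack-combination step (not the authors' reconstruction pipeline; file names and spacings are hypothetical), segmented slices can be stacked into a volume and a tongue surface extracted with marching cubes:

```python
# Generic illustration: stack segmented MRI slices and extract a tongue surface mesh.
import numpy as np
from skimage import io, measure

# Each file is assumed to be a single-channel binary tongue segmentation of one MRI slice.
slices = [io.imread(f"tongue_slice_{i:02d}.png") > 0 for i in range(24)]
volume = np.stack(slices, axis=0).astype(np.uint8)  # shape: (slices, rows, cols)

# spacing converts voxel indices to millimetres (slice gap, in-plane resolution) - assumed values.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5, spacing=(4.0, 1.0, 1.0))
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```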