999 results for radiation mechanisms
Abstract:
We report on a simple method to obtain surface gratings using a Michelson interferometer and femtosecond laser radiation. In the optical setup used, two parallel laser beams are generated using a beam splitter and then focused by the same lens. An interference pattern is created in the focal plane of the focusing lens, which can be used to pattern the surface of materials. The main advantage of this method is that the optical path difference of the interfering beams is independent of the distance between the beams. As a result, the fringe period can be varied without the need for major realignment of the optical system, and the time coincidence between the interfering beams can be easily monitored. The potential of the method was demonstrated by patterning surface gratings with different periods on titanium surfaces in air.
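As a rough illustration of the geometry described above, the fringe period produced by two parallel beams focused through a common lens can be estimated from the beam separation and the focal length. This is the textbook two-beam interference relation, not the authors' exact setup; all numerical values below are illustrative only.

```python
import math

def fringe_period(wavelength_m, beam_separation_m, focal_length_m):
    """Two parallel beams separated by d and focused by a lens of focal
    length f cross at the focal plane with half-angle
    theta = atan(d / (2 f)); the resulting interference fringe period
    is lambda / (2 sin(theta))."""
    theta = math.atan(beam_separation_m / (2.0 * focal_length_m))
    return wavelength_m / (2.0 * math.sin(theta))

# Illustrative example: 800 nm beams, 10 mm apart, 100 mm focal length,
# giving a fringe period of roughly 8 micrometres.
period = fringe_period(800e-9, 10e-3, 100e-3)
```

Increasing the beam separation steepens the crossing angle and shrinks the fringe period, which is how the grating pitch is tuned without realigning the interferometer arms.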
Abstract:
To increase the amount of logic available in SRAM-based FPGAs, manufacturers are using nanometric technologies to boost logic density and reduce prices. However, nanometric scales are highly vulnerable to radiation-induced faults that affect values stored in memory cells. Since the functional definition of FPGAs relies on memory cells, they are highly prone to this type of fault. Fault-tolerant implementations, based on triple modular redundancy (TMR) infrastructures, help to maintain the correct operation of the circuit. However, TMR alone is not sufficient to guarantee the safe operation of a circuit: other issues, such as the effects of multi-bit upsets (MBUs) and fault accumulation, must also be addressed. Furthermore, when a fault occurs, the correct operation of the affected module must be restored and the current state of the circuit coherently re-established. This paper presents a solution that autonomously restores the functional definition of the affected module, avoiding fault accumulation and re-establishing the correct circuit state in real time, while keeping the circuit in normal operation.
Abstract:
To increase the amount of logic available to users in SRAM-based FPGAs, manufacturers are using nanometric technologies to boost logic density and reduce costs, making these devices more attractive. However, these technological improvements also make FPGAs particularly vulnerable to configuration memory bit-flips caused by power fluctuations, strong electromagnetic fields and radiation. This issue is particularly sensitive because of the increasing number of configuration memory cells needed to define their functionality. A short survey of the most recent publications is presented to support the options assumed in the definition of a framework for implementing circuits immune to bit-flip induction mechanisms in memory cells, based on a customized redundant infrastructure and on a detection-and-fix controller.
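The TMR principle underlying these two abstracts can be sketched as a bitwise 2-of-3 majority voter plus a mismatch detector that flags the disagreeing replica for repair. This is a generic software illustration of the voting logic, not the papers' actual FPGA infrastructure; the function names are hypothetical.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote over three redundant module outputs:
    each output bit takes the value held by at least two of the three
    replicas, masking any single-replica upset."""
    return (a & b) | (b & c) | (a & c)

def detect_mismatch(a: int, b: int, c: int) -> bool:
    """Flag any disagreement among the replicas so the faulty module can
    be repaired (e.g. by reconfiguration) before a second fault
    accumulates and defeats the voter."""
    return not (a == b == c)

# A single bit-flip in one replica is out-voted:
golden = 0b1011
flipped = golden ^ 0b0100          # one upset bit in replica b
voted = tmr_vote(golden, flipped, golden)
needs_repair = detect_mismatch(golden, flipped, golden)
```

The detector is what connects voting to the restoration problem the first abstract addresses: masking alone hides the fault, but only repairing the disagreeing module prevents fault accumulation.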
Abstract:
Master's dissertation in Informatics Engineering
Abstract:
Second International Workshop on Analog and Mixed Signal Integrated Circuits for Space Applications (AMICSA 2008), Sintra, Portugal, September 2008
Abstract:
Coal contains trace quantities of natural radionuclides such as Th-232, U-235 and U-238, as well as their radioactive decay products and K-40. These radionuclides can be released as fly ash in atmospheric emissions from coal-fired power plants, dispersed into the environment and deposited on the surrounding topsoil. The natural radiation background level is therefore enhanced, increasing the total dose to the nearby population. A radiation monitoring programme was used to assess the external dose contribution to the natural radiation background potentially resulting from the dispersion of coal ash in past atmospheric emissions. Radiation measurements were carried out by gamma spectrometry in the vicinity of a Portuguese coal-fired power plant. Monitoring was performed both on and off site, within a boundary delimited by a 20 km circle centered on the stacks of the coal plant. The measured radionuclide concentrations for the uranium and thorium series ranged from 7.7 to 41.3 Bq/kg for Ra-226 and from 4.7 to 71.6 Bq/kg for Th-232, while K-40 concentrations ranged from 62.3 to 795.1 Bq/kg. The highest values were registered near the power plant and at distances between 6 and 20 km from the stacks, mainly in the prevailing wind direction. The absorbed dose rates calculated for each sampling location ranged from 13.97 to 84.00 nGy/h, while measurements from previous studies carried out in 1993 registered values in the range of 16.6-77.6 nGy/h. The highest values were registered at locations in the prevailing wind direction (NW-SE). This study was primarily done to assess the radiation dose rates and the exposure of the nearby population in the surroundings of a coal-fired power plant. The results suggest an enhancement of, or at least an influence on, the background radiation due to the coal plant's past activities.
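For context, outdoor absorbed dose rates in air like those reported above are commonly derived from soil activity concentrations using UNSCEAR-style dose conversion coefficients. The sketch below uses the widely quoted coefficients for Ra-226, Th-232 and K-40; it is a minimal illustration of the conversion, not the authors' exact calculation.

```python
def absorbed_dose_rate_ngy_h(c_ra226, c_th232, c_k40):
    """Outdoor absorbed dose rate in air at 1 m above ground (nGy/h)
    from soil activity concentrations (Bq/kg), using UNSCEAR-style
    dose conversion coefficients for Ra-226, Th-232 and K-40."""
    return 0.462 * c_ra226 + 0.604 * c_th232 + 0.0417 * c_k40

# Applying the coefficients to the extreme concentrations quoted in the
# abstract (the extremes need not co-occur at a single location):
d_low = absorbed_dose_rate_ngy_h(7.7, 4.7, 62.3)      # ~9 nGy/h
d_high = absorbed_dose_rate_ngy_h(41.3, 71.6, 795.1)  # ~95 nGy/h
```

The resulting range brackets the 13.97-84.00 nGy/h interval reported in the abstract, as expected since the measured extremes of the three nuclides do not all occur at the same sampling point.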
Abstract:
Naturally occurring radioactive materials (NORM) can, under certain conditions, reach hazardous radiological levels, contributing an additional exposure dose to ionizing radiation. Most environmental concerns are associated with uranium mining and milling sites, but the same concerns apply to natural near-surface occurrences of uranium, as well as to man-made sources such as technologically enhanced naturally occurring radioactive materials (TENORM) resulting from the phosphate industry, the ceramic industry and energy production activities, in particular coal-fired power plants, which are among the major sources of increased human exposure to enhanced naturally occurring materials. This work describes the methodology developed to assess environmental radiation by in situ gamma spectrometry in the vicinity of a Portuguese coal-fired power plant. The current investigation is part of a research project under way in the vicinity of the Sines coal-fired power plant (south of Portugal) until the end of 2013.
Abstract:
Certain materials used and produced in a wide range of non-nuclear industries contain enhanced activity concentrations of natural radionuclides. In particular, electricity production from coal is one of the major sources of increased human exposure to enhanced naturally occurring materials. Over the past decades there has been some discussion about the elevated natural background radiation in areas near coal-fired power plants, due to the high uranium and thorium content of coal. This work describes the methodology developed to assess the radiological impact of increased natural background radiation levels potentially originating from a coal-fired power plant's operation. Gamma radiation measurements were made with two different instruments: a scintillometer (SPP2 NF, Saphymo) and a gamma-ray spectrometer with energy discrimination (Falcon 5000, Canberra). A total of 40 relevant sampling points were established at locations within 20 km of the power plant: 15 urban and 25 suburban measurement stations. The highest values were measured at the sampling points near the power plant and at those located between 6 and 20 km from the stacks. This may be explained by the presence of a huge coal pile (1.3 million tons) located near the stacks, contributing to the dispersion of unburned coal, and, on the other hand, by the height of the stacks (225 m), which may influence ash dispersion up to a distance of 20 km. In situ gamma radiation measurements with energy discrimination identified natural emitting nuclides as well as their decay products (212Pb, 214Pb, 226Ra, 232Th, 228Ac, 234Th, 234Pa, 235U, etc.). This work was primarily done to assess the impact of a coal-fired power plant's operation on the background radiation level in the surrounding area. According to the results, an increase in, or at least an influence on, the background level has been identified both qualitatively and quantitatively.
Abstract:
Coal contains trace elements and naturally occurring radionuclides such as 40K, 232Th and 238U. When coal is burned, minerals, including most of the radionuclides, do not burn and are concentrated in the ash several times over their content in coal. Usually, a small fraction of the fly ash produced (2-5%) is released into the atmosphere. The activities released depend on many factors (concentration in coal, ash content and inorganic matter of the coal, combustion temperature, ratio between bottom and fly ash, filtering system). Therefore, marked differences should be expected between the by-products produced and the amount of activity discharged (per unit of energy produced) by different coal-fired power plants. The effects of these releases on the environment due to ground deposition have received some attention, but the results of these studies are not unanimous and cannot be taken as a generic conclusion for all coal-fired power plants. In this study, dispersion modelling of natural radionuclides was carried out to assess the impact of continuous atmospheric releases from a selected coal plant. The natural radioactivity of the coal and the fly ash was measured, and the dispersion was modelled by a Gaussian plume, estimating the activity concentration at different heights up to a distance of 20 km in several wind directions. External and internal doses (inhalation and ingestion) and the resulting risk were calculated for the population living within 20 km of the coal plant. On average, the effective dose is lower than the ICRP limit and the risk is lower than the U.S. EPA limit; therefore, in this situation, the considered exposure does not pose any risk. However, when considering dispersion in the prevailing wind direction, these values become significant, owing to increases of 75% and 44% in the 232Th and 226Ra concentrations, respectively.
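The Gaussian plume model mentioned above has a standard closed form for a continuous point release, including a ground-reflection term. The sketch below is a minimal implementation of that textbook formula; the dispersion parameters sigma_y and sigma_z depend in practice on downwind distance and atmospheric stability class, and the numerical values used here are illustrative, not the study's actual source terms.

```python
import math

def gaussian_plume(q_bq_s, u_m_s, y_m, z_m, h_m, sigma_y, sigma_z):
    """Activity concentration (Bq/m^3) at crosswind offset y_m and
    height z_m for a continuous point release of q_bq_s (Bq/s) from an
    effective stack height h_m, with wind speed u_m_s (m/s) and
    dispersion parameters sigma_y, sigma_z (m) evaluated at the
    downwind distance of interest. Includes ground reflection."""
    lateral = math.exp(-y_m**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z_m - h_m)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z_m + h_m)**2 / (2.0 * sigma_z**2)))
    return q_bq_s / (2.0 * math.pi * u_m_s * sigma_y * sigma_z) \
        * lateral * vertical

# Illustrative ground-level values for a 225 m stack (the height quoted
# in the neighbouring abstract); source term and sigmas are made up.
c_axis = gaussian_plume(1e6, 5.0, 0.0, 0.0, 225.0, 100.0, 50.0)
c_off = gaussian_plume(1e6, 5.0, 200.0, 0.0, 225.0, 100.0, 50.0)
```

The tall stack pushes the ground-level maximum well downwind, which is consistent with the abstracts' observation of elevated values between 6 and 20 km from the stacks.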
Abstract:
The emergence of multi- and extensively drug-resistant tuberculosis (MDRTB and XDRTB) has increased the concern of public health authorities around the world. The World Health Organization defines MDRTB as tuberculosis (TB) caused by organisms resistant to at least isoniazid and rifampicin, the main first-line drugs used in TB therapy, whereas XDRTB refers to TB resistant not only to isoniazid and rifampicin, but also to a fluoroquinolone and to at least one of the three injectable second-line drugs: kanamycin, amikacin and capreomycin. Resistance in Mycobacterium tuberculosis is mainly due to the occurrence of spontaneous mutations, followed by selection of mutants during subsequent treatment. However, some resistant clinical isolates do not present mutations in any of the genes associated with resistance to a given antibiotic, which suggests that other mechanisms are involved in the development of drug resistance, namely efflux pump systems that extrude the drug to the exterior of the cell, preventing access to its target. Increased efflux activity can occur in response to prolonged exposure to subinhibitory concentrations of anti-TB drugs, a situation that may result from inadequate TB therapy. The inhibition of efflux activity with a non-antibiotic inhibitor may restore the activity of an antibiotic subject to efflux and thus provide a way to enhance the activity of current anti-TB drugs. The work described in this thesis focuses on the study of efflux mechanisms in the development of multidrug resistance in M. tuberculosis and on how phenotypic resistance, mediated by efflux pumps, correlates with genetic resistance. To accomplish this goal, several experimental protocols were developed using biological models such as Escherichia coli, the fast-growing mycobacterium Mycobacterium smegmatis, and Mycobacterium avium, before their application to M. tuberculosis.
This approach allowed the study of the mechanisms underlying the physiological adaptation of E. coli to subinhibitory concentrations of tetracycline (Chapter II), the development of a fluorometric method for the detection and quantification of ethidium bromide efflux (Chapter III), the characterization of ethidium bromide transport in M. smegmatis (Chapter IV), and the assessment of the contribution of efflux activity to macrolide resistance in the Mycobacterium avium complex (Chapter V). Finally, the methods developed allowed the study of the role of efflux pumps in M. tuberculosis strains induced to isoniazid resistance (Chapter VI). In this manner, in Chapter II it was possible to observe that the physiological adaptation of E. coli to tetracycline results from an interplay between events at the genetic level and protein folding that decreases the permeability of the cell envelope and increases efflux pump activity. Furthermore, Chapter III describes the development of a semi-automated fluorometric method that allowed this efflux activity to be correlated with the transport kinetics of ethidium bromide (a known efflux pump substrate) in E. coli and efflux inhibitors to be identified. Concerning M. smegmatis, we compared the wild-type M. smegmatis mc2155 with knockout mutants for LfrA and MspA with respect to their ability to transport ethidium bromide. The results presented in Chapter IV showed that MspA, the major porin in M. smegmatis, plays an important role in the entrance of ethidium bromide and antibiotics into the cell, and that efflux via the LfrA pump is involved in low-level resistance to these compounds in M. smegmatis. Chapter V describes the contribution of efflux pumps to macrolide resistance in clinical M. avium complex isolates: resistance to clarithromycin was significantly reduced in the presence of efflux inhibitors such as thioridazine, chlorpromazine and verapamil.
These same inhibitors decreased the efflux of ethidium bromide and increased the retention of [14C]-erythromycin in these isolates. Finally, the methods developed with the experimental models mentioned above allowed the study of the role of efflux pumps in M. tuberculosis strains induced to isoniazid resistance. This is described in Chapter VI of this thesis, where it is demonstrated that induced resistance to isoniazid does not involve mutations in any of the genes known to be associated with isoniazid resistance, but rather an efflux system that is sensitive to efflux inhibitors. These inhibitors decreased the efflux of ethidium bromide and also reduced the minimum inhibitory concentration of isoniazid in these strains. Moreover, expression analysis showed overexpression of genes coding for efflux pumps in the induced strains relative to the non-induced parental strains. In conclusion, the work described in this thesis demonstrates that efflux pumps play an important role in the development of drug resistance, namely in mycobacteria. A strategy to overcome efflux-mediated resistance may consist of using compounds that inhibit efflux activity, restoring the activity of antimicrobials that are efflux pump substrates, a useful approach particularly in TB, where the most effective treatment regimens are becoming ineffective due to the rise of MDRTB/XDRTB.
Abstract:
Nutrition and nutritional status are health determinants, and it is accepted that the mechanisms for the pathogenesis of several chronic non-communicable diseases can occur in early adulthood. Nutrition is influenced by a large number of factors, including the value placed on food, weight and body image, and the risk perception associated with food choice. Consequently, analysing the factors that can influence food behaviour and food choice in young adults can be useful for the control and prevention of nutrition-related disease. The objectives of this research were to analyse, in college students up to 30 years of age, nutritional status, the value placed on nutrition, weight and body image perceptions, and the perceived risk of nutrition-related disease. Two study designs were used: cross-sectional and case-control. Weight, height, and waist and hip circumference were measured, and a questionnaire was built to collect the remaining information. Prevalences of 6.5% for obesity and 24.3% for excess weight were found, along with biased weight and body image perceptions, more frequent in obese subjects. Obese subjects also placed less value on nutrition than non-obese subjects. Risk perception analysis shows that risk factors such as obesity and physical inactivity are considered less hazardous than risk factors such as climate change and mobile phone radiation. Men, compared to women, more frequently overestimated their weight and body image, placed less value on nutrition, considered themselves less susceptible to disease, and classified the risk factors studied as less hazardous. The conclusions of this study show that nutrition education and health promotion strategies should consider the gender-related differences reported, as well as the value placed on nutrition and the perceptions of weight, body image, and risk.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing: the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], while the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which does not hold for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are complex from the computational point of view: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume that the data contain at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information may be very far from the true one. Nevertheless, some abundance fractions may be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) source constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and gives some illustrative examples. Section 6.8 concludes with some remarks.
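The constrained least-squares unmixing mentioned in the introduction has a simple closed form when only the sum-to-one (full additivity) constraint is enforced. The sketch below implements that sum-constrained estimator for the linear mixing model; enforcing nonnegativity as well would require an iterative solver (an FCLS-style algorithm), and all names and data here are illustrative, not the chapter's actual algorithms.

```python
import numpy as np

def scls_unmix(M, y):
    """Sum-to-one constrained least-squares abundance estimate for the
    linear mixing model y = M a + noise, where the columns of M are the
    endmember signatures. Solves min ||y - M a||^2 subject to
    sum(a) = 1 via a Lagrange multiplier; nonnegativity is NOT
    enforced here."""
    G = np.linalg.inv(M.T @ M)
    a_ls = G @ M.T @ y                 # unconstrained LS solution
    ones = np.ones(M.shape[1])
    lam = (ones @ a_ls - 1.0) / (ones @ G @ ones)
    return a_ls - G @ ones * lam       # project onto the sum-to-one plane

# Synthetic check: a noiseless 5-band mixture of 3 endmembers.
rng = np.random.default_rng(0)
M = rng.random((5, 3))                 # columns = endmember signatures
a_true = np.array([0.5, 0.3, 0.2])     # abundances summing to one
y = M @ a_true
a_hat = scls_unmix(M, y)
```

In the noiseless full-rank case the unconstrained solution already satisfies the constraint, so the correction term vanishes; with noise, the projection pulls the estimate back onto the abundance simplex's supporting hyperplane, which is exactly the dependence among abundances that the chapter argues breaks the ICA independence assumption.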
Abstract:
Dissertation presented to obtain a Ph.D. degree in Engineering and Technology Sciences, Gene Therapy, at the Instituto de Tecnologia Química e Biológica, Universidade Nova de Lisboa